# Blockchain for Electronic Vaccine Certificates: More Cons Than Pros?

Raphaëlle Toubiana [1], Millie Macdonald [2], Sivananda Rajananda [3], Tale Lokvenec [3], Thomas C. Kingsley [4,5] and Santiago Romero-Brufau [1,6]*

1 Department of Biostatistics, Harvard T.H. Chan School of Public Health, Harvard University, Boston, MA, United States; 2 University of Queensland, Saint Lucia, QLD, Australia; 3 Institute for Applied Computational Science, Graduate School of Arts and Sciences, Harvard University, Cambridge, MA, United States; 4 Department of Medicine, Mayo Clinic, Rochester, MN, United States; 5 Department of Biomedical Informatics, Mayo Clinic, Rochester, MN, United States; 6 Department of Otolaryngology - Head and Neck Surgery, Mayo Clinic, Rochester, MN, United States

Edited by: Juan Zhao, Vanderbilt University Medical Center, United States. Reviewed by: Tsung-Ting Kuo, University of California, San Diego, United States; Jiyong Park, University of North Carolina at Greensboro, United States. *Correspondence: Santiago Romero-Brufau, [romerobrufau.santiago@mayo.edu](mailto:romerobrufau.santiago@mayo.edu). Specialty section: This article was submitted to Medicine and Public Health, a section of the journal Frontiers in Big Data. Received: 10 December 2021; Accepted: 31 May 2022; Published: 08 July 2022. Citation: Toubiana R, Macdonald M, Rajananda S, Lokvenec T, Kingsley TC and Romero-Brufau S (2022) Blockchain for Electronic Vaccine Certificates: More Cons Than Pros? Front. Big Data 5:833196. [doi: 10.3389/fdata.2022.833196](https://doi.org/10.3389/fdata.2022.833196)

Electronic vaccine certificates (EVC) for COVID-19 vaccination are likely to become widespread. Blockchain (BC) is an electronic immutable distributed ledger and is one of the more commonly proposed EVC platform options. However, the principles of blockchain are not widely understood by public health and medical professionals. We attempt to describe, in an accessible style, how BC works and the potential benefits and drawbacks of its use for EVCs. Our assessment is that BC technology is not well suited for EVCs. Overall, blockchain technology is based on two key principles: the use of cryptography, and a distributed immutable ledger in the format of blockchains. While the use of cryptography can make vaccination records easy to share while maintaining privacy, EVCs require some contribution from a centralized authority to confirm vaccine status; this is partly because these authorities are responsible for the distribution and often the administration of the vaccine. Having the data distributed makes the role of a centralized authority less effective. We conclude that there are alternative ways to use cryptography outside of a BC that allow a centralized authority to participate more fully, which seems necessary for an EVC platform to be of practical use.

Keywords: blockchain (BC), electronic vaccination record, electronic vaccine certificate, cryptography, COVID-19, clinical informatics

## INTRODUCTION

## The Rise of COVID-19 Electronic Vaccine Certificates

Requiring proof of vaccination against COVID-19 is gaining traction in government agencies and the private sector, despite vocal opposition. On December 21st, 2021, the European Commission created regulations around the use of European Union Digital COVID Certificates (EUDCC) (EU Digital COVID Certificate, 2022).
These regulations apply to all nations (non-EU included) that choose to adopt the EUDCC. Its primary use is to open travel between EU countries, but some nations are using it domestically to control entry to public places such as restaurants or sporting events. As of February 1st, 2022, 42 countries are already connected to the EUDCC, and many more are considering joining (EU Digital COVID Certificate, 2022). The EUDCC uses a technology called distributed identity. The United States (US) federal government has taken a more limited role in regulating and mandating proof of vaccination through EVC platforms, leaving the responsibility to the private sector and state governments. Employers such as airlines, hospitals, and restaurants are increasingly requiring proof of vaccination from their patrons and employees (Eldred, 2021). Other non-EU countries are also evaluating EVC technology platforms for domestic use.

## Blockchain Technology as a Solution

Blockchain has been a commonly proposed technology solution for COVID EVC platforms (Mithani et al., 2021). Although awareness of blockchain has increased with the rise of digital currencies such as Bitcoin and Ethereum, the majority of the public and decision makers have little understanding of the technology, especially in non-currency-based uses. Moreover, despite vocal opposition to proof-of-vaccination measures, it seems likely that some version of them will stay and become more widespread as COVID becomes more endemic, especially if COVID remains a deadly disease among the unvaccinated. Blockchain use in EVCs is commonly proposed, but there is a paucity of literature or real-world examples of its use for this purpose. As pressure increases for decision makers to choose amongst the various technology options, the authors of this paper thought it important to review this topic.

## Ten Important Characteristics of an EVC Technology Platform

As governments and the private sector evaluate EVC platforms for deployment, there are multiple considerations. Through discussion, our team identified 10 key considerations:

1. **Data privacy and security** (patient health information, demographic data, location, etc.)
2. **Data verifiability and fidelity** (data remains auditable and accurate over time)
3. **Data retrievability** (data can be queried and retrieved with accuracy and within a timeframe that is useful for its application)
4. **Technology accessibility** (how easy it is for the public to access it as users)
5. **Equitable** (regardless of socioeconomic, racial, or cultural differences)
6. **Interoperability** with other public health and healthcare system information technology
7. **Scalability** (to be broadly available to the public within a short time period)
8. **Cost effective** to maintain and operate
9. **Potential for public adoption** (important factors include understandability, trust, and public perception of the technology)
10. **Feasibility** of development and operationalization (e.g., prior examples of the technology platform being successfully deployed in similar contexts)

## BACKGROUND

## Databases

A data storage application like an EVC system would traditionally use a database (generally a relational database) to store patient and vaccination data.
A relational database can be compared to a Microsoft Excel or Google Sheets document: data is stored in tables with rows of entries similar to a spreadsheet, and a database may contain multiple, possibly interlinked, tables similar to the tabs in an Excel or Sheets document. Data can be retrieved from the database by writing queries in the appropriate query language, similar to the functions that can be used in Excel and Sheets. There are other types of databases as well, none of which use blockchain technology, and a main benefit of databases is that they can be optimized for specific use-cases, such as minimizing the size of the data and increasing the speed of updating or querying the database. Theoretically, any kind of data can be represented in a database in any way, with any kind of relationships between different pieces of data. For example, for an EVC, there might be one table where each row contains the full private data of a patient and a vaccination they received. Alternatively, for a vaccine that requires multiple shots, data that would be duplicated between entries, such as a patient's details, could be entered into its own table, which can then be linked to a second table that contains only the data for each shot. This way, the amount of data stored for each patient is reduced, and therefore so is the overall size of the database. This can lead to various improvements to the overall system, including reducing the hard drive space required to store the database.

Generally, the security of the data in a database depends on the security of the systems it is connected to, unless the data itself is encrypted (see glossary). For example, most applications that use a database have a user interface (UI) to make it easier for users to view and update the data in the database. Permission systems (such as usernames and passwords) can be used to control who can do what with the database data: for example, perhaps anyone with a login can read the data entries that pertain to themselves, but only some people can add or change data. The security of such an application then depends on factors like who has permission to perform which operations, and how easy it is for a malicious entity to gain access to the database (e.g., by hacking the system or by stealing login information from a user and using it to access the data via the user interface). Cryptographic techniques are commonly used at various points in an application to add layers of security.
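As a concrete illustration of the normalized two-table layout described above, the sketch below builds a minimal EVC schema with SQLite. The table and column names are our own hypothetical choices, not part of any real EVC system.

```python
import sqlite3

# Minimal sketch of the two-table EVC layout described above.
# Patient details live in one table; each shot is a row in a second
# table linked back by patient_id, so shared data is stored only once.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE patients (
    patient_id    INTEGER PRIMARY KEY,
    full_name     TEXT NOT NULL,
    date_of_birth TEXT NOT NULL
);
CREATE TABLE vaccinations (
    vaccination_id  INTEGER PRIMARY KEY,
    patient_id      INTEGER NOT NULL REFERENCES patients(patient_id),
    vaccine_name    TEXT NOT NULL,
    lot_number      TEXT NOT NULL,
    administered_on TEXT NOT NULL
);
""")
conn.execute("INSERT INTO patients VALUES (1, 'Jane Doe', '1980-05-17')")
conn.executemany(
    "INSERT INTO vaccinations VALUES (?, ?, ?, ?, ?)",
    [(1, 1, "VaccineX", "LOT-123", "2021-04-01"),
     (2, 1, "VaccineX", "LOT-456", "2021-06-01")],
)
conn.commit()
```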
## WHAT IS BLOCKCHAIN TECHNOLOGY

Blockchain is a distributed ledger technology for storing and transmitting information. Its main characteristics are transparency, security, and decentralization (operating without a centralized control body) of both data and authority (Lantz and Cawrey, 2021). A common application is money transfers that can be performed without the need for trusted third parties or banks. This is how Bitcoin and Ethereum work: thanks to blockchain, there is peer-to-peer (P2P) review that permits direct transfers between individuals. The blockchain can therefore be compared to a public, anonymous, and unforgeable accounting ledger. We can also think of this technology as a way to securely store private information such as vaccination records. In this section we describe what is known as a public blockchain, which is the original design by Nakamoto (2008). There are other variations, called "permissioned blockchains," that we describe in the next section.

The first step is to initiate the transfer. Let's say Mike wants to do a transaction with Santiago. If we consider Bitcoin, for example, Mike would like to transfer money to Santiago; in that case we would have a record that says: "Mike pays Santiago 2 Bitcoins (transaction signed by Mike)." If we consider vaccination records, we could record the vaccination similarly: "Mike vaccinates Santiago (transaction signed by Mike)," with Mike being a vaccinator. A vaccinator is anyone approved to administer the vaccine, often a licensed healthcare provider or a public health official.

In step two, the transaction is sent to the network, composed of all the people using the blockchain, for verification. The first verification concerns the identity of the individuals involved in the transaction: is it really Mike who wants to do the transaction with Santiago? How does this validation step work? Mike has to sign the transaction with an electronic signature created using his private key. Only Mike has access to this key. The rest of the network has Mike's public key, which can be used only to verify signatures made with the corresponding private key. When the transaction is sent by Mike, several people in the network will verify that the public key validates the signature (Figure 1). If the public key does not validate the signature, it means that it was not really Mike who sent the transaction, and the transaction is canceled. In the case of money transfers, the verification consists of verifying Mike's identity with his electronic signature, as explained above, and verifying that Mike has enough money in his account to send to Santiago. In the case of vaccination records, one could envision a similar verification process using the two keys to verify the identity of the parties. The transaction is approved only if more than half of the people on the network accept it. This way, since there is a vast number of users, it is very unlikely that a compromised transaction will be approved.
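This sign-and-verify flow can be sketched in a few lines. The snippet below uses Ed25519 keys via the pyca/cryptography package; the message format and names are our own illustration, not part of any particular blockchain protocol.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Mike's key pair: the private key never leaves his device,
# while the public key is shared with the whole network.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

transaction = b"Mike vaccinates Santiago"
signature = private_key.sign(transaction)

# Any node holding Mike's public key can check the signature.
try:
    public_key.verify(signature, transaction)
    print("signature valid: the transaction really came from Mike")
except InvalidSignature:
    print("signature invalid: reject the transaction")
```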
Once the transaction is verified by the network, it is grouped together with other transactions to form a block (Figure 2, step 3). In step four (Figure 2), a block is built for the group of transactions. In Bitcoin and other proof-of-work systems, the "validators" of the chain, also called "miners," must spend computational work to find the solution to a mathematical problem, and that solution links the block to the chain. In systems using proof-of-stake or proof-of-authority, the miners only need to produce a digital signature that authenticates the block to the network. Once the block is validated, a timestamp is added to it, i.e., the approximate date and time when the block was found.

Step five (Figure 2) is called hashing. Each block has an identifier, which is a unique cryptographic fingerprint resulting from the hash of the data that the block contains: the transactions, the timestamp, and the hash of the previous block. If someone attempts to modify the information stored in a block, the hash will change drastically, and the fraud will be detected (see Figure 3). The block is then broadcast to the network and is verified one last time before being added to the chain. We call this technology blockchain because each block of transactions is linked to the previous one through the hash, as shown in Figure 4.

## PERMISSIONED BLOCKCHAINS

In the previous section we described the general functioning of blockchain technology. However, there are multiple variations, which can change critical aspects of the technology. In general, there are three types of blockchains: public, consortium, and private (Zhang and Lin, 2018). Public blockchains such as Bitcoin allow anyone to participate: there are no restrictions on who can read or write to the blockchain. Consortium blockchains are permissioned blockchains where a consortium of entities is able to validate blocks; access to the blockchain may vary between public and restricted (e.g., via APIs). Private blockchains are permissioned blockchains where a single entity has complete authority over the network and fully controls both read and write permissions.

In the context of vaccination records, public blockchains will likely not suffice, since vaccination records in the chain must be trustworthy (i.e., they should be added to the chain by a trusted medical entity). This naturally leads to a private or consortium blockchain, where the ability to add to the chain and validate blocks can be restricted to trusted entities, such as vaccinators (doctors and professionals in the medical community). In this scenario, we can imagine a certain trusted entity, such as the Health Ministry of one or several countries, having control over who is allowed to add vaccination records to the blockchain. A system like the European Union Digital COVID Certificate allows any of several countries to add vaccination records.

## Proof-of-Work Validation

We have described how and when a block is validated; after this occurs, it is added to the blockchain (step 4 in Figure 2). However, there are many different consensus algorithms for validating blocks. The most popular, due to its use in Bitcoin and the way it incentivizes participation, is the proof-of-work algorithm. In proof-of-work blockchains, a block is validated by performing a task that is computationally expensive but easy to confirm. For example, in Bitcoin this task is finding a value which, when added to the block, results in a hash that begins with a certain number of zeroes. This requires miners to use trial-and-error to find a value that produces such a hash. However, once that value is found, it is very easy to confirm that the hash has the required number of zeroes (see the sketch below).

Proof-of-work systems often need to provide an incentive to the agent who solved the problem. In currency-focused blockchains, this is easily solved by rewarding that agent with a certain amount of currency. However, in an EVC system there is no clear reward that could be provided to the agent that validated the block. For these reasons, a proof-of-work validation algorithm would not be appropriate for this application, and other validation systems would need to be used. An algorithm that relies on a majority consensus between parties may be best, especially if used in a permissioned blockchain system where the various entities are trusted.
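Steps three through five, together with proof-of-work mining, can be illustrated in a few lines of Python. This is a minimal sketch using SHA-256 from the standard library; the block layout, the difficulty of four leading zeroes, and the example transactions are our own simplifications rather than any real protocol.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    # The block's fingerprint covers its transactions, timestamp,
    # previous-block hash, and nonce (step five above).
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def mine(transactions: list, prev_hash: str, difficulty: int = 4) -> tuple:
    # Proof-of-work: try nonces until the hash begins with
    # `difficulty` zeroes. Hard to find, trivial to verify.
    block = {"transactions": transactions, "timestamp": time.time(),
             "prev_hash": prev_hash, "nonce": 0}
    while not block_hash(block).startswith("0" * difficulty):
        block["nonce"] += 1
    return block, block_hash(block)

genesis, h1 = mine(["Mike vaccinates Santiago"], prev_hash="0" * 64)
block2, h2 = mine(["Mike vaccinates Sue"], prev_hash=h1)

# Tampering with an old block changes its hash, which no longer
# matches the prev_hash stored in the next block (cf. Figure 3).
genesis["transactions"][0] = "Mike vaccinates Eve"
assert block_hash(genesis) != block2["prev_hash"]
```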
TABLE 1 | Differences between public and permissioned blockchains.

| Property | Public | Permissioned |
| --- | --- | --- |
| Access restrictions | No restrictions inherent to the blockchain | Ability to read and write data to the blockchain is controlled |
| Trust | Doesn't require trust between agents in the network | Requires trust, due to agents having different read, write, and validation permissions |
| Risk of takeover by majority of authoritative nodes | Anyone can join the network and validate transactions | Only some nodes are authoritative (can validate transactions) |
| Security | Malicious entities can easily gain access, and data is public | Permissions control who can do what, including viewing the data |
| Validation | Anyone can validate blocks, but validation is computationally expensive, so an incentive is generally needed | Trusted entities can be assigned the duty of validating blocks, which removes the need for an incentive |
| Consensus algorithm | Can operate in an environment with low trust between entities, and may need to handle faults and malicious entities | Trust allows the consensus algorithm to be simplified |

In Table 1 we summarize the differences between a public blockchain and a permissioned blockchain. An EVC system would likely use a permissioned blockchain. In some ways a permissioned blockchain is more similar to a traditional database than a public blockchain is. For example, having fewer authoritative entities means that an entity or group of entities could theoretically gain authority more easily, allowing them to block new transactions and rewrite their past transactions. However, in a permissioned blockchain, as in a traditional database, those entities need to be externally permissioned, which increases security. A blockchain-based application will generally have more components than just the blockchain, such as user management and other data storage, and a permissioned blockchain may allow for security trade-offs to be made elsewhere, such as choosing a less secure but faster consensus algorithm.

## Considerations for Cooperative Applications

Decentralized authority may be an appealing solution when multiple entities are collectively using a system and each one is unwilling to let the others have more authority over the system (such as countries sharing a common vaccination record system). This could incentivize additional entities to join the blockchain. However, a major hurdle for using blockchain technology on such a large scale is agreeing on a common protocol for the chain, including the consensus mechanism, privacy standards, incentives for maintaining the chain, and management of write access to the chain. In addition, there has to be some level of trust that the other entities are managing their write access to the chain properly and that those records can be trusted. Some technical designs using consortium blockchains for EVC have been described (Haque et al., 2021). In the case where multiple countries share the same blockchain, a consortium blockchain could theoretically be employed. This would allow each country to control the permissions of their respective medical institutions to write to the chain. Since no one country would have complete authority over the blockchain, the core benefit of decentralized authority would be preserved. With regard to suitable blockchain platforms, Bitcoin and Ethereum are public, not consortium. Other platforms such as MultiChain, Hyperledger Fabric, and Hyperledger Sawtooth are likely more appropriate (Chowdhury et al., 2018; Chowdhury et al., 2019).
## DIFFERENCES BETWEEN DATA STORAGE IN BLOCKCHAIN AND DATABASES

The biggest difference between blockchain and other types of distributed ledger technologies is the use of cryptographic techniques to add a layer of security to the data. While cryptography is often used for secrecy, in the context of blockchain it is used to make it significantly harder to change the transaction history, as described above. This is how cryptocurrency got its name: it is currency traded on a blockchain, many of the advantages of which come from the cryptographic techniques it utilizes.

As mentioned above, databases are built around storing data in tables, with various methods for optimizing a database. This flexibility, especially combined with the innovations in database technology and other fields over the last few decades, means there are very few use cases for which databases are not suitable, given the right configuration. Blockchain technology, in comparison, is designed to store individual data entries in a chronological manner. Innovations such as Ethereum have greatly expanded what kinds of data can be stored on a blockchain, but the chronological nature of the technology and the fact that each data entry is independent of any other entry are core to blockchain.

With cryptocurrencies such as Bitcoin, people who use the currency do not directly access the blockchain to make transactions. Each user has a "wallet," which contains a list of their private keys, usually combined with a software interface with which users can manage keys and make transactions (Frankenfield, 2022). The data within a wallet is not stored on a blockchain. Instead, various data storage methods are used, and one common option is a traditional database.
TABLE 2 | Properties of blockchain and how they relate to the EVC use case.

| Property | Advantage | Disadvantage | Mitigation | Counterfactual |
| --- | --- | --- | --- | --- |
| Decentralized authority | Safe operations of applications; incentivizes co-operation of shared authority | Requires agreement on protocols, etc.; can't control who has access (public blockchain) | Use a private or consortium (permissioned) blockchain | Standard databases can be permissioned |
| Decentralized data storage | Less risk of data loss with redundancy of data | The dataset for each authority can become extremely large | Limit which entities require the full blockchain; limit on-chain data storage | Minimization of data loss risk in traditional databases through backups or other redundancy methods |
| Immutability, data handling | Improved data security thanks to limited data operations (create, read) | No updates or deletion of data; overhead introduced to create and read operations | Data can not be erroneous, or policies must be created for changing chain history | All operations allowed in databases and can be controlled through permissions; possible performance optimization |
| Timeline verification | Reliable verification of timeline | N/A | N/A | Similar timeline verification functionality with database encryption methods |
| Resource usage (energy and computation) | Usage controlled by blockchain implementation choices, e.g., consensus algorithm | Significant energy consumption, particularly of popular blockchains | Architect blockchain to reduce resource usage, e.g., choice of less energy-intensive consensus algorithm | Databases can be optimized to minimize resource usage |
| Pseudonymous identities | Tracking of transactions by entities | IDs (usernames) can not be linked to real-world identities without integration with external systems | Integrate with external identity verification system | Standard databases can use any identity systems and completely control the creation of identities |
| Performance | Validity of data and ordering thereof ensured | Block validation speed affects performance | Carefully select properties such as block size limit | Standard databases are faster and more optimized |

## ANALYSIS OF BLOCKCHAIN TECHNOLOGY FOR EVC USE

## Pros and Cons of BC Compared to Traditional Databases

Many blockchain platforms now exist, but most are designed for specific use cases or are too early in development or adoption for a use case as important as EVCs. The following therefore generalizes blockchain systems, based mainly on the popular platforms Bitcoin and Ethereum. In Table 2 we provide a summary of the characteristics of blockchain and how they relate to the EVC use case.

## Decentralized Data Storage

Decentralized data storage means that, theoretically, every node would have a complete copy of the blockchain. However, blockchain data can grow quickly to gigabytes or even terabytes. For example, as of January 20th, 2022, the blockchain size of Bitcoin was 386 GB for its 704 million transactions (Blockchain Charts, 2022), and the full Ethereum chain was 1,178.68 GB (Ethereum Chain Full Sync Data Size, 2022). The full blockchain is required by authorities who validate blocks, but it is usually not required just to create transactions. It is also unrealistic that every entity would be willing to store the full chain. Therefore, these blockchains can create light nodes, which only store the data necessary to create transactions and rely on full nodes for other data as well as validation (Wackerow, 2022). The blockchain size scales with the number of transactions and the data size of each transaction. Databases scale in a similar way, but as a more mature technology they are optimized to reduce the impact. Data redundancy is another benefit of decentralized data, but it can also be achieved with databases using backups.

For context for the EVC use case, the population of the USA is 329.5 million, with 551 million doses given, and the population of the European Union is 447 million, with roughly 848 million doses given (Daily COVID-19 vaccine doses administered, 2021; Ritchie et al., 2022). These vaccinations have been given in the last year, compared to Bitcoin's transaction history, which goes back to 2009. This means that not only would vaccination records quickly exceed the size of Bitcoin's transaction history, they would also present problems with record entry speeds. Blockchain systems tend to limit how fast entries can be added by controlling how long or how big blocks can get; for example, Bitcoin is designed so that a new block is mined roughly every 10 min. This restriction may be a significant problem for EVCs, whether they are set up at the beginning of a vaccination campaign or, like now, potentially having to catch up with a significant number of past vaccinations.
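A rough back-of-envelope calculation makes the scale concrete. The per-record size and per-block transaction count below are our own assumptions, chosen only to show the order of magnitude, not figures from the paper.

```python
# Hypothetical sizing exercise: 500 bytes per on-chain record and
# ~2,000 transactions per 10-minute block are assumptions.
eu_doses = 848_000_000
bytes_per_record = 500
chain_gb = eu_doses * bytes_per_record / 1e9
print(f"~{chain_gb:.0f} GB of records")  # ~424 GB, larger than Bitcoin's 386 GB chain

# Throughput at Bitcoin-like limits (6 blocks/hour, ~2,000 transactions each):
tx_per_day = 2_000 * 6 * 24
print(f"~{eu_doses / tx_per_day / 365:.1f} years to record all EU doses")  # ~8 years
```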
## Immutability, Data Handling and Performance

Databases support operations to create, read, update, and delete (CRUD) data, and who performs each of these operations can be managed with permissions. Blockchain only supports create and read operations. As past transactions cannot be easily changed, this theoretically creates an immutable record. Rewriting the chain is technically possible, but extremely difficult: it would require changing past transactions, propagating the changes through the chain, and then getting majority acceptance from the authoritative nodes. This would require recomputing blocks, which may be costly and slow, and the majority agreement may also be difficult to obtain. Other options for changing the chain may be viable but depend on the specific blockchain implementation.

Databases can be optimized for the most-used operations. Blockchain's "create" and "read" operations are slower due to the overhead of the validation and consensus mechanisms, and bottlenecks can also occur, such as block validation delays slowing transaction processing. Databases are also designed to allow any data to be queried based on any relationship between data points. For example, an EVC database could likely be easily queried for "one patient's records" or "everyone vaccinated with a specific vaccine lot" (see the sketch below). Blockchain data is not designed to be queried in this way, as it is structured around individual transactions and metadata about the entities doing transactions. It is possible to replicate such queries with blockchain technology, but because it was not designed for such purposes, this requires additional effort to implement and compute.

Whether immutability is beneficial for an application can depend on the risk of human error: is the data generated by a trusted program, or is it entered by humans who may make mistakes? If reading data very soon after it is created is important, databases may be preferable to blockchain. Some existing blockchain applications try to get around some of the limitations of blockchain by using a combination of blockchain and databases. This requires careful implementation. A recent incident involved OpenSea, a blockchain application that allows users to trade in images and other media, which used a hybrid blockchain-and-database approach to avoid Ethereum's high transaction fees. A bug was found where the blockchain and database got out of sync, which allowed an attacker to buy several items at an older, lower price, then sell them at the more recent price for a substantial profit (Cimpanu, 2022).
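For illustration, both example queries are one-liners against the hypothetical SQLite schema from the sketch in the Databases section above (reusing that sketch's `conn` handle).

```python
# One patient's records:
rows = conn.execute(
    """SELECT p.full_name, v.vaccine_name, v.lot_number, v.administered_on
       FROM vaccinations AS v
       JOIN patients AS p USING (patient_id)
       WHERE p.patient_id = ?""",
    (1,),
).fetchall()

# Everyone vaccinated with a specific vaccine lot (e.g., for a recall):
recalled = conn.execute(
    "SELECT DISTINCT patient_id FROM vaccinations WHERE lot_number = ?",
    ("LOT-123",),
).fetchall()
```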
## Timeline Verification

A major advantage of blockchain is that transaction validity and order can be easily verified, because it is an immutable and chronological ledger. Databases can store timestamps for entries, security techniques can be applied to achieve immutability, and there are methods of encrypting database information to provide similar functionality. In the case of EVCs, the specific order of the records is not critical: for example, it does not really matter whether Sue was vaccinated before or after Mary.

## Pseudonymous Identities

An EVC system will require integration with real-world identification systems; a common example is using Social Security Numbers in the US to link the blockchain records with real-world people. This would apply to vaccinators, patients, and anyone else involved. There must also be checks to ensure individuals are not duplicated in the system. Implementing these required checks in the blockchain system may be difficult for the same reasons that querying data is difficult. Additionally, the existing identity systems are traditional databases, and integration with a blockchain-based system would add complexity and challenges.

## Resource Usage

Blockchains can require a significant amount of computation and energy, with different implementations requiring different amounts due to factors like the choice of consensus algorithm. In proof-of-work verification, nodes race to complete the computation of each block for a reward, but as a winner-takes-all contest, the energy used by the losing nodes is wasted. Other consensus algorithms tend to use less energy (Chowdhury et al., 2018), thereby lowering the energy cost of the entire system.

Another consideration is the resource usage of everyone using the blockchain application. Because of its distributed nature, the cost falls on all full nodes that are capable of validating transactions: each such entity must have a computer storing the full blockchain and capable of validating blocks, which most likely must run continuously. This requirement may affect adoption in the case of EVCs, as it is an added cost and burden on those entities who would have authority to validate blocks. Light nodes, at least, only store part of the blockchain and do not need the computational ability to validate blocks, so careful organization of who requires a full node and who can use a light node can minimize this distributed cost. Databases, in comparison, due to their centralized nature, only use the energy required to run their servers (including those used for backups) and external systems such as air conditioning (Sedlmeir et al., 2020). Users of the application would connect to it via the Internet, so no special machines or systems are needed. This also allows for low-cost backups that can be performed routinely and do not require machines to be constantly connected and computing.

## Hype and Public Opinion

Blockchain, with regard to its use in cryptocurrencies, NFTs, and games, has been appearing in the news more often in recent months and years. It is a technology that is drawing a lot of attention and is often described as being "hyped" (Litan, 2021), meaning that the amount of attention and public expectation may surpass its actual delivery of progress. There have been reports of publicly traded companies adding the term "blockchain" to their name and having their shares surge (What's in a name? UK stock surges 394% on blockchain rebrand, 2017). This points toward significant expectations associated with the term, regardless of its actual feasibility. However, as with any novel term, its valence in public opinion can quickly turn.
For example, several companies in the software and gaming industries announced blockchain-related projects near the end of 2021, usually receiving mixed feedback from the general public. When the CEO of Discord, a popular chat program, hinted at blockchain integration, there were supporters, but also many users who were publicly against the move on Twitter, Reddit, and Discord's own forum, and an unknown number canceled their paid subscriptions in protest (Orland, 2021). Molly White's timeline of problems with "web3" (a catch-all term for blockchain-based innovations), while focused on negative news, is a good indicator of what is happening in the space, especially in terms of its effects on the general public (White, 2022). It highlights that scams and hacks are abundant in the web3 sphere, and that many people are suffering losses, usually monetary, because of blockchain-based applications. A question then, regarding adopting blockchain for EVCs, is: "Will the public trust that their data is safe on a blockchain-based solution?" Blockchain is known for being difficult to understand, which is not helped by the complexities of all its variations and the different use cases it can be applied to. If public opinion of the technology - informed or otherwise - becomes negative, will people be willing to have their private medical data stored using such a technology?

## Assessment of Blockchain for EVC

TABLE 3 | Comparison of blockchain and alternative technologies regarding EVC requirements.

| EVC platform technology feature | Optimal blockchain configuration compared to alternative technology solutions | Comments |
| --- | --- | --- |
| Data privacy and security | Equivalent or uncertain based on current information | Both blockchain and standard databases can use similar cryptographic techniques (Transparent data encryption (TDE), 2022) |
| Data verifiability and fidelity | Superior | Harder to forge records without leaving a trace of it in blockchains |
| Data retrievability | Inferior | Blockchain's data structure is not designed for flexible data queries; databases are |
| Technology accessibility | Equivalent or uncertain based on current information | Depends on the front-end design and not much affected by the underlying data storage technology |
| Equitable | Equivalent or uncertain based on current information | Same as above; mainly depends on accessibility |
| Interoperability | Inferior | Blockchain is a less mature technology and by design harder to modify; combining data registries or changing data standards is much harder |
| Scalability | Inferior | Traditional databases can be more easily scaled in transaction rate and storage |
| Cost effectiveness | Inferior | Blockchain's distributed nature makes it more costly to maintain; traditional databases have been optimized for efficiency |
| Potential for public adoption | Equivalent or uncertain based on current information | As a novel technology, public perception of blockchain can change quickly |
| Feasibility | Inferior | Blockchain is a less mature technology compared to time-tested database solutions |

In Table 3 we summarize our assessment of the comparison between blockchain technology and traditional database solutions regarding the 10 key considerations presented in the introduction. As can be seen, blockchain only seems superior in the data verifiability and fidelity domain, with all other aspects being either clearly inferior, equivalent, or uncertain.

## CURRENT BLOCKCHAIN-BASED EVC SOLUTIONS

Some existing EVC solutions do claim to use blockchain as part of their technology.
A recent review by Mithani et al. (2021) listed eight such applications, including IBM's Digital Health Pass. However, most of these solutions have not made public the technical details of how blockchain is used. In fact, the solutions proposed in that article, published in March 2021, are not operational today, and some of their webpages are no longer functional, raising the question of whether the projects are still active. For these solutions, the question remains whether blockchain is really a key part of the technology, or whether the name is being used for the "hype factor." Given the lack of transparency, it is hard to estimate the number of truly functional blockchain platforms in use for EVCs, but by our team's estimate it appears to be none.

## DISCUSSION

In this paper we have described the conceptual framework of blockchain technology as it could apply to storing electronic vaccine certificates (EVC), and we have discussed some of its advantages and drawbacks. Overall, blockchain technology seems to have more cons than pros for this use case. In line with our assessment, some widely respected cyber-security companies have also assessed that blockchain is not necessary for EVCs, taking the example of the European COVID certificates system (Schubert, 2021). A recent review of blockchain applications for COVID-19 (Ng et al., 2021) found that "vaccine passport monitoring" was one of the most common applications described in blockchain papers; however, most papers were limited to the technical description or reports of technical performance. Several blockchain system designs for vaccine supply management have also been described (Peng et al., 2020; Yong et al., 2020; Antal et al., 2021).

There have been other attempts to use blockchain technology for the storage of, and access to, vaccination records using what are known as "smart contracts" (Zhao and Ma, 2022). In these approaches, the common idea is that the vaccination data (including vaccine certificates) is stored publicly but in encrypted form. The blockchain "smart contract" is then used to manage access to the key that would allow decryption of the public data or a portion of it (Abubakar et al., 2021). This has been shown to significantly increase the speed and convenience of data retrieval compared to scanning the blocks in the blockchain to find the vaccination information (Abuhashim et al., 2021).
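The encrypt-publicly, gate-the-key pattern behind these smart-contract designs can be sketched without any blockchain machinery. Below, a plain Python function stands in for the smart contract's access policy; the Fernet cipher, the record format, and the requester names are our own illustrative choices.

```python
from typing import Optional

from cryptography.fernet import Fernet

# The record is stored publicly, but only in encrypted form.
key = Fernet.generate_key()
public_ciphertext = Fernet(key).encrypt(b"Jane Doe | VaccineX | LOT-123 | 2021-04-01")

# Stand-in for the smart contract: release the decryption key only
# to parties the patient has authorized.
authorized_requesters = {"border_control"}

def request_key(requester: str) -> Optional[bytes]:
    return key if requester in authorized_requesters else None

granted = request_key("border_control")
if granted is not None:
    print(Fernet(granted).decrypt(public_ciphertext))  # b'Jane Doe | ...'
```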
As mentioned earlier, some of the main principles that inspired the creation of blockchain technology run counter to the EVC use case. For example, one of its key principles is decentralized authority; however, with vaccination records it makes sense to have one, or a few, central authorities who certify that an approved vaccine was administered. In a blockchain that stores information about money, the agreement in the network that a certain person has X amount can be enough to make that judgment meaningful. Vaccine records, by contrast, must correlate with an external event in the real world (the person's immunity status against a virus). That requires a central authority to determine, at least, that what was administered was a vaccine. This centralized assessment could be delegated to each "physician" agent in the network.

The aspect of blockchain technology that makes the most sense for the vaccination record use case is the use of cryptography, which is closely linked to privacy. However, as we have discussed, a centralized or federated system to record and store vaccinations using cryptography can be designed without the use of blockchain, possibly using another distributed ledger technology. For example, a very simple system could store hashed records and make them publicly accessible. In the simplest form, there would be one hash per vaccination record. The patient would get their vaccine at a point of care from a provider with privileged access to the public record. After confirming the patient's identity, the provider would take information about the patient (e.g., full name and date of birth), the vaccine administered (e.g., vaccine name, provider, and lot number), and the date of administration, and create a hash of that information. Because a cryptographic hash is a one-way function that cannot be reversed, the hash can be posted publicly without loss of patient privacy. The provider would then upload this information to a public repository maintained by the authorized central agency (either the CDC or a similar organization). Then, to verify the patient's vaccination status, the patient would only need to present the information that was used to create the hash (which includes their identification), and the verifier could run it through the hashing function and compare the result to the public list of hashes posted in the trusted public repository. This hashing-and-comparison step could be easily automated in a phone app that would read the patient's information either from a printed vaccination card or from a QR code that the patient would carry. A similar idea has been described in recent papers (Haque et al., 2021).
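A minimal sketch of this publish-and-verify flow follows, assuming an invented field layout and separator; a real system would need an agreed-upon canonical encoding of the record.

```python
import hashlib

def record_hash(full_name: str, date_of_birth: str, vaccine: str,
                provider: str, lot_number: str, administered_on: str) -> str:
    # Canonical string for the record; the field order and separator
    # here are illustrative, not a proposed standard.
    canonical = "|".join([full_name, date_of_birth, vaccine,
                          provider, lot_number, administered_on])
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

# The central agency's public repository stores only the hashes.
public_repository = {
    record_hash("Jane Doe", "1980-05-17", "VaccineX",
                "ClinicY", "LOT-123", "2021-04-01"),
}

# Verification: recompute the hash from the information the patient
# presents (e.g., scanned from a card or QR code) and look it up.
claim = record_hash("Jane Doe", "1980-05-17", "VaccineX",
                    "ClinicY", "LOT-123", "2021-04-01")
print("vaccination verified" if claim in public_repository else "not found")
```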
There are other questions that would need to be resolved almost independently of the technology used to store the vaccination records. There are several COVID vaccines available, with varying degrees of effectiveness. Ideally, the technology would store the most primary information; in the case of an EVC, that is probably the record of which vaccine was administered, and when. This way, the rules for what constitutes a "fully vaccinated" patient can be flexible for different uses and can even be adjusted as more information becomes available. For example, if evidence becomes clear that vaccine efficacy wanes significantly with time, some countries may choose to include the time since the last dose in the definition of "fully vaccinated." However, even in this scenario, a central body still needs to decide whether some vaccines are not considered effective enough to include in the record at all.

## CONCLUSION

While blockchain has some useful applications, it does not seem to have clear advantages for electronic vaccine certificates (EVC) compared to more traditional database technologies. There is significant hype associated with blockchain that could be motivating its utilization for use cases in which it is not necessary. The existing EVC solutions that claim to use blockchain do not provide enough detail to assess whether blockchain is a core component of the system.

## AUTHOR CONTRIBUTIONS

SR-B, RT, SR, and TL conceptualized the paper. SR, RT, TL, TK, and SR-B drafted the initial manuscript. MM critically revised the manuscript. SR, RT, TL, and MM performed the review. RT composed the figures. SR-B and TK provided general guidance. All authors contributed to the article and approved the submitted version.

## REFERENCES

Abubakar, M., McCarron, P., Jaroucheh, Z., Al Dubai, A., and Buchanan, W. J. (2021). Blockchain-based platform for secure sharing and validation of vaccination certificates. arXiv preprint arXiv:2112.10124.

Abuhashim, A. A., Shafei, H. A., and Tan, C. C. (2021). "Block-VC: a blockchain-based global vaccination certification," in 2021 IEEE International Conference on Blockchain (Melbourne, VIC: IEEE), 347–352. Available online at: https://ieeexplore.ieee.org/abstract/document/9680556

Antal, C., Cioara, T., Antal, M., and Anghel, I. (2021). Blockchain platform for COVID-19 vaccine supply management. IEEE Open J. Comput. Soc. 2, 164–178. doi: 10.1109/OJCS.2021.3067450

Blockchain Charts (2022). Available online at: https://www.blockchain.com/charts (accessed April, 2022).

Chowdhury, M. J. M., Colman, A., Kabir, M. A., Han, J., and Sarda, P. (2018). "Blockchain versus database: a critical analysis," in 2018 17th IEEE International Conference on Trust, Security and Privacy in Computing and Communications/12th IEEE International Conference on Big Data Science and Engineering (IEEE), 1348–1353.

Chowdhury, M. J. M., Ferdous, M. S., Biswas, K., Chowdhury, N., Kayes, A. S. M., Alazab, M., et al. (2019). A comparative analysis of distributed ledger technology platforms. IEEE Access 7, 167930–167943. doi: 10.1109/ACCESS.2019.2953729

Cimpanu, C. (2022). Hacker abuses OpenSea to buy NFTs at older, cheaper prices. The Record. Available online at: https://therecord.media/hacker-abuses-opensea-to-buy-nfts-at-older-cheaper-prices/ (accessed April, 2022).

Daily COVID-19 vaccine doses administered (2021). Available online at: https://ourworldindata.org/grapher/daily-covid-19-vaccination-doses (accessed April, 2022).

Eldred, S. M. (2021). Coronavirus FAQ: is there an app that'll prove I'm vaccinated, or is paper the best? NPR. Online.

Ethereum Chain Full Sync Data Size (2022). YCharts. Available online at: https://ycharts.com/indicators/ethereum_chain_full_sync_data_size (accessed April, 2022).

EU Digital COVID Certificate (2022). Available online at: https://ec.europa.eu/info/live-work-travel-eu/coronavirus-response/safe-covid-19-vaccines-europeans/eu-digital-covid-certificate_en (accessed April, 2022).

Frankenfield, J. (2022). Bitcoin Wallet. Investopedia. Available online at: https://www.investopedia.com/terms/b/bitcoin-wallet.asp (accessed April, 2022).

Haque, A. B., Naqvi, B., Islam, A. K. M., and Hyrynsalmi, S. (2021). Towards a GDPR-compliant blockchain-based COVID vaccination passport. Appl. Sci. 11, 6132. doi: 10.3390/app11136132

Lantz, L., and Cawrey, D. (2021). Mastering Blockchain, Vol. 1. Sebastopol, CA: O'Reilly Media, Inc.

Litan, A. (2021). Hype Cycle for Blockchain 2021; More Action Than Hype. Available online at: https://blogs.gartner.com/avivah-litan/2021/07/14/hype-cycle-for-blockchain-2021-more-action-than-hype/ (accessed April, 2022).

Mithani, S. S., Bota, A. B., Zhu, D. T., and Wilson, K. (2021). A scoping review of global vaccine certificate solutions for COVID-19. Hum. Vaccin. Immunother. 18, 1–12. doi: 10.1080/21645515.2021.1969849

Nakamoto, S. (2008). Bitcoin: A Peer-to-Peer Electronic Cash System. Available online at: https://www.debr.io/article/21260-bitcoin-a-peer-to-peer-electronic-cash-system (accessed April, 2022).

Ng, W. Y., Tan, T. E., Movva, P. V., Fang, A. H. S., Yeo, K. K., Ho, D., et al. (2021). Blockchain applications in health care for COVID-19 and beyond: a systematic review. Lancet Digit. Health 3, e819–e829. doi: 10.1016/S2589-7500(21)00210-7

Orland, K. (2021). Discord CEO backs away from hinted NFT integration after backlash. Ars Technica. Available online at: https://arstechnica.com/gaming/2021/11/discord-ceo-backs-away-from-hinted-nft-integration-after-backlash/ (accessed April, 2022).

Peng, S., Hu, X., Zhang, J., Xie, X., Long, C., Tian, Z., et al. (2020). An efficient double-layer blockchain method for vaccine production supervision. IEEE Trans. NanoBiosci. 19, 579–587. doi: 10.1109/TNB.2020.2999637

Ritchie, H., Mathieu, E., Rodés-Guirao, L., Appel, C., Giattino, C., Ortiz-Ospina, E., et al. (2022). Coronavirus (COVID-19) Vaccinations. Available online at: https://ourworldindata.org/coronavirus (accessed April, 2022).

Schubert, I. (2021). The New Technology Powering Europe's COVID Certificates. Available online at: https://www.securid.com/en-us/blog/the-new-technology-powering-european-covid-certificates/ (accessed April, 2022).

Sedlmeir, J., Buhl, H. U., Fridgen, G., and Keller, R. (2020). The energy consumption of blockchain technology: beyond myth. Bus. Inf. Syst. Eng. 62, 599–608. doi: 10.1007/s12599-020-00656-x

Transparent data encryption (TDE) (2022). SQL Docs. Available online at: https://docs.microsoft.com/en-us/sql/relational-databases/security/encryption/transparent-data-encryption?view=sql-server-ver15 (accessed April, 2022).

Wackerow, P. (2022). Nodes and Clients. Available online at: https://ethereum.org/en/developers/docs/nodes-and-clients/#node-types (accessed April, 2022).

What's in a name? UK stock surges 394% on blockchain rebrand (2017). Bloomberg. Available online at: https://www.bloomberg.com/news/articles/2017-10-27/what-s-in-a-name-u-k-stock-surges-394-on-blockchain-rebrand (accessed April, 2022).

White, M. (2022). Web3 is going great. Available online at: https://web3isgoinggreat.com/ (accessed April, 2022).

Yong, B., Shen, J., Liu, X., Li, F., Chen, H., and Zhou, Q. (2020). An intelligent blockchain-based system for safe vaccine supply and supervision. Int. J. Inf. Manag. 52, 102024. doi: 10.1016/j.ijinfomgt.2019.10.009

Zhang, A., and Lin, X. (2018). Towards secure and privacy-preserving data sharing in e-health systems via consortium blockchain. J. Med. Syst. 42, 140. doi: 10.1007/s10916-018-0995-5

Zhao, Z., and Ma, J. (2022). Application of blockchain in trusted digital vaccination certificates. China CDC Weekly 4, 106–110. doi: 10.46234/ccdcw2022.021

**Conflict of Interest:** TK has a role in PathCheck Foundation, a non-profit involved in the development of public-health-related technology that partially uses cryptography. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

**Publisher's Note:** All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Copyright © 2022 Toubiana, Macdonald, Rajananda, Lokvenec, Kingsley and Romero-Brufau. This is an open-access article distributed under the terms of the [Creative Commons Attribution License (CC BY)](http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
## GLOSSARY

**Cryptography** is the study of techniques with which communications can be secured such that only the sender and the intended recipient can understand the message. Encryption is a technique that is part of cryptography, in which data is scrambled so that it is unintelligible, then sent to a recipient who knows how to unscramble it.

**Encryption** is the process of encoding data so that it cannot be read without a "decryption key." The data is scrambled and can only be unscrambled into an understandable form by using the decryption key. Encrypted data is more secure because, even if a malicious agent manages to access the data storage, they will not be able to read the data itself unless they also have access to the decryption key.

**Hashing** is a method of scrambling data that is often used in cryptography, as it creates a fixed-length series of characters that is usually shorter than the original data. It is possible for different input data to produce the same hash, but choosing an appropriate hashing algorithm means the chances of that happening are considered too unlikely to be a risk. In this way, a hash can be compared to a fingerprint. Hashing is also a one-way function: given a hash, it is computationally infeasible (i.e., near impossible given current computing technology) to calculate the original data, which gives us a secure way to represent a piece of data without using the data directly.

**Public and private keys** also come from cryptography; public-private key pairs are used, as described above, to sign transactions and verify signatures, or to scramble and unscramble data.
{ "disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC9304987, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.frontiersin.org/articles/10.3389/fdata.2022.833196/pdf" }
2,022
[ "Review", "JournalArticle" ]
true
2022-07-08T00:00:00
[ { "paperId": "ae221eeab23e6fc167483f90952788b59b4c3e28", "title": "Application of Blockchain in Trusted Digital Vaccination Certificates" }, { "paperId": "0f69a7a21d0764eee5a421b920e7a50a7b31ade6", "title": "Blockchain-based Platform for Secure Sharing and Validation of Vaccination Certificates" }, { "paperId": "e47a94ca91ab7664fcdcad121053f6b501beacbc", "title": "Block-VC: A Blockchain-Based Global Vaccination Certification" }, { "paperId": "b071917060a55ef90f7553fb4ff6f98a05f75c0c", "title": "Blockchain applications in health care for COVID-19 and beyond: a systematic review" }, { "paperId": "942be051ae08c762fd9c195184d66d32a839f2f3", "title": "Towards a GDPR-Compliant Blockchain-Based COVID Vaccination Passport" }, { "paperId": "128c569c20cedaac3fbce6ea569fff81fa9b9949", "title": "A scoping review of global vaccine certificate solutions for COVID-19" }, { "paperId": "3a641e513ea6b385b390a140a21d4f8a7548242f", "title": "Blockchain Platform For COVID-19 Vaccine Supply Management" }, { "paperId": "1f296f47f8f81270d994b749f25cddf8a8ac48c2", "title": "The Energy Consumption of Blockchain Technology: Beyond Myth" }, { "paperId": "973549d954d642b931fdc426463c4c4a4f5ff881", "title": "An Efficient Double-Layer Blockchain Method for Vaccine Production Supervision" }, { "paperId": "b6075a93400ab49535289d8f16b7bf8e9340f152", "title": "An intelligent blockchain-based system for safe vaccine supply and supervision" }, { "paperId": "0bc9d6fc09fb1229f5a22929b016ab01f2d78b53", "title": "A Comparative Analysis of Distributed Ledger Technology Platforms" }, { "paperId": "90b3dd7066a1f3293caa13cafa8425acdc32794e", "title": "Blockchain Versus Database: A Critical Analysis" }, { "paperId": "56cd71e02772d6bd7adead9aa876a862ee0537c2", "title": "Towards Secure and Privacy-Preserving Data Sharing in e-Health Systems via Consortium Blockchain" }, { "paperId": "01974d9f865235169a2ab038e94f11615fd95df9", "title": "Alloxan Induced Oxidative Stress and Impairment of Oxidative Defense System in rats" }, { "paperId": null, "title": "YCHARTS" }, { "paperId": null, "title": "Ethereum Chain Full Sync Data Size (2022)" }, { "paperId": null, "title": "BlockchainCharts" }, { "paperId": null, "title": "Bitcoin Wallet" }, { "paperId": null, "title": "Web3 is going great" }, { "paperId": null, "title": "2022).Coronavirus (COVID-19) Vaccinations" }, { "paperId": null, "title": "Hacker abuses OpenSea to buy NFT at older, cheaper prices" }, { "paperId": null, "title": "Discord CEO backs away from hinted NFT integration after backlash" }, { "paperId": null, "title": "Mastering Blockchain, Vol" }, { "paperId": null, "title": "Coronavirus FAQ: is there an app that’ll prove i’m vaccinated, or is paper the best?,” in NPR" }, { "paperId": null, "title": "Hype Cycle for Blockchain 2021; More Action than Hype" }, { "paperId": null, "title": "What is in a name UK stock surgers 394% on blockchain rebrand (2017)" }, { "paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": "6571865dca553cb8d79dc72e137012dee6331cce", "title": "New technology" }, { "paperId": null, "title": "Transparent data encryption (TDE) (2022)" }, { "paperId": null, "title": "Blockchain for Electronic Vaccine Certificates" }, { "paperId": null, "title": "Conflict of Interest" }, { "paperId": null, "title": "This is an open-access article distributed under the terms of Commons Attribution License (CC BY). 
The use, distribution or in other forums is permitted, provided" }, { "paperId": null, "title": "secure way to represent a piece of data without using the data directly" }, { "paperId": null, "title": "Daily COVID-19 vaccine doses administered" }, { "paperId": null, "title": "declare that the research was conducted in the absence of commercial or financial relationships that could be construed as a potential of interest" }, { "paperId": null, "title": "NODES AND CLIENTS" }, { "paperId": null, "title": "Any product that may be evaluated in article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher" } ]
13,439
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0165f6875cbe02342d634bae23170392eca9e05a
[ "Computer Science" ]
0.884428
Multisurface Interaction in the WILD Room
0165f6875cbe02342d634bae23170392eca9e05a
Computer
[ { "authorId": "1401678555", "name": "M. Beaudouin-Lafon" }, { "authorId": "47278074", "name": "Stéphane Huot" }, { "authorId": "1793712", "name": "Mathieu Nancel" }, { "authorId": "1732917", "name": "W. Mackay" }, { "authorId": "1728256", "name": "Emmanuel Pietriga" }, { "authorId": "2182932", "name": "Romain Primet" }, { "authorId": "144906824", "name": "Julie Wagner" }, { "authorId": "3342979", "name": "O. Chapuis" }, { "authorId": "1753624", "name": "Clément Pillias" }, { "authorId": "2152856", "name": "James R. Eagan" }, { "authorId": "2449366", "name": "Tony Gjerlufsen" }, { "authorId": "3027683", "name": "C. Klokmose" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE Computer", "IEEE Comput" ], "alternate_urls": [ "https://ieeexplore.ieee.org/servlet/opac?punumber=2", "http://www.computer.org/portal/site/ieeecs/index.jsp", "https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=2" ], "id": "f6572f66-2623-4a5e-b0d9-4a5028dea98f", "issn": "0018-9162", "name": "Computer", "type": "journal", "url": "http://www.computer.org/computer" }
null
## Multisurface Interaction in the WILD Room ### Michel Beaudouin-Lafon, Olivier Chapuis, James Eagan, Tony Gjerlufsen, Stéphane Huot, Clemens Klokmose, Wendy E. Mackay, Mathieu Nancel, Emmanuel Pietriga, Clément Pillias, et al. To cite this version: Michel Beaudouin-Lafon, Olivier Chapuis, James Eagan, Tony Gjerlufsen, Stéphane Huot, et al. Multisurface Interaction in the WILD Room. Computer, 2012, Special Issue on Interaction Beyond the Keyboard, 45 (4), pp. 48-56. DOI: 10.1109/MC.2012.110. ### HAL Id: hal-00687825 https://inria.hal.science/hal-00687825 Submitted on 15 Apr 2012 HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. ----- # Multisurface Interaction in the WILD Room #### Michel Beaudouin-Lafon, Stéphane Huot, Mathieu Nancel, Université Paris-Sud Wendy Mackay, Emmanuel Pietriga, Romain Primet, Julie Wagner, INRIA Olivier Chapuis, Clément Pillias, CNRS James R. Eagan, Télécom ParisTech Tony Gjerlufsen, Clemens Klokmose, Aarhus University **Abstract** The WILD room (wall-sized interaction with large datasets) serves as a testbed for exploring the next generation of interactive systems by distributing interaction across diverse computing devices, enabling multiple users to easily and seamlessly create, share, and manipulate digital content. © Copyright 2012, IEEE. Author version of the article published in the April 2012 special issue of IEEE Computer on Interaction Beyond the Keyboard: Beaudouin-Lafon, M., Huot, S., Nancel, M., Mackay, W., Pietriga, E., Primet, R., Wagner, J., Chapuis, O., Pillias, C., Eagan, J.R., Gjerlufsen, T. and Klokmose, C. (2012), “Multisurface Interaction in the WILD Room”, IEEE Computer, vol 45, nº 4, pp. 48-56. DOI bookmark: http://doi.ieeecomputersociety.org/10.1109/MC.2012.110 1 ----- Ubiquitous computing offers a vision in which each person owns multiple computers that work together seamlessly, embedded into the fabric of everyday life [1]. Part of this vision has arrived: interactive surfaces are everywhere, from smartphones, tablets, and laptops to large-screen televisions and smart boards; from car navigation systems to fitness monitoring devices. Their integration, however, is hardly seamless: data is often trapped in individual applications or services, and interaction is usually limited to a single device at a time. As the “The WILD Platform” sidebar describes, the WILD room (wall-sized interaction with large datasets) is a multisurface environment featuring a wall-sized display, a multitouch table, and various mobile devices that we designed to help scientists collaborate on the analysis of large and complex datasets. We combine empirical studies, participatory design, and fundamental research on basic interaction tasks to explore the design and engineering of the next generation of interactive systems. The key to this approach is to distribute interaction, not just content, across a variety of interactive surfaces. 
### Designing with extreme users Our research strategy involves designing an extreme environment that pushes the limits of technology—both hardware and software. To ground the design process, we needed extreme users—people whose daily work both inspires and stress-tests the environment. We chose scientists who use a variety of techniques to understand exceptionally large and complex datasets. We invited researchers from the Paris-Saclay campus in astrophysics, particle physics, chemistry, molecular biology, neuroscience, mechanical engineering, and applied mathematics to an initial “show-and-tell” workshop. Scientists from each lab presented specific examples of the challenges they faced at that time, along with their data analysis processes and tools. We discussed the similarities and differences among their approaches, seeking to identify both universal needs and unique opportunities. For example, a group of microbiologists might arrive in the WILD room with their laptops and analysis tools to study how one molecule docks with another. One might bring up a large molecular model downloaded from the research lab’s server, another might add interactive 3D models of related molecules, and others might access online databases, websites, and research articles. They could shift smoothly among different representations of each molecule and transfer them from one interactive display to another, working together in the same room or collaborating with remote colleagues. We identified four common strategies for managing complex scientific data where the WILD multisurface environment could significantly improve and even completely change work practices: 1. navigation through a single, very large object, such as a simulation of a molecule with tens or hundreds of thousands of atoms or a gigapixel image of deep space containing thousands of galaxies; 2. comparison of a large number of related images, such as pathological brain scans or observations of regions of the sky at different wavelengths; 3. juxtaposition of a variety of heterogeneous forms of data from different sources, such as a mix of research articles, raw data tables, formulas, graphs, photographs, and video clips; 4. communication with remote colleagues about all of the above to facilitate collaborative exploration. We then used the WILD room as a working laboratory for exploring advanced multisurface interaction techniques. 2 ----- #### Sidebar: The WILD platform The WILD room (Wall-size Interaction with Large Datasets) features a large wall display (top, left) powered by a 16-computer cluster (top, right) and two front-end computers, a motion tracking system (bottom, left), and an interactive table (bottom, right). The wall display consists of 32 off-the-shelf 30-inch monitors organized in an 8 x 4 grid, for a total resolution of 131 million pixels (20480 x 6400). The high pixel density (about 100 dpi), a defining characteristic of WILD, is rare on wall displays. The monitors are mounted on four movable carts, letting users test different configurations such as the triptych shown in Figure A. Each computer has two graphics cards driving one screen each. Displaying wall-sized images requires distributed software that runs across the cluster. The motion tracking system uses 10 infrared cameras to detect the position of passive markers attached to different devices, such as the T-shaped tool shown at the lower left in Figure A. The system has very low latency and a precision of less than one millimeter across the room. 
We typically use it to precisely track each device’s position and to support advanced interaction techniques. The interactive table uses FTIR (frustrated total internal reflection) technology to track up to 32 simultaneous contact points with a 1920 x 1080 resolution. Because it has only half the pixel density of the wall display, we are adding a second table with higher pixel density and a flat screen. Smartphones, PDAs, tablets, and laptops provide additional, personal interactive surfaces. We also use input devices such as gyroscopic and wireless mice and custom devices. 3 ----- **Figure 1: Using the Wizard of Oz technique to prototype how a tablet can serve as a mobile, physical filter atop a wall-sized image.** ### Exploring multi-surface interaction We employ two complementary strategies for generating and testing ideas: participatory design, which focuses on qualitative understanding and external validity, and controlled experiments, which focus on quantitative evaluations and internal validity. Participatory design actively involves users throughout the design process. We visited several labs to observe their current research procedures and conducted participatory design workshops in the WILD room with the astrophysicists and neuroanatomists, who face interestingly different analysis challenges. One of the most effective techniques was the Wizard of Oz, in which scientists acted out ideas for manipulating their data, using paper images, laptops, and other props. A member of the group, identified as the wizard, would operate the WILD wall so that it reacted to the users’ actions, creating a compelling shared experience of a possible future. This often sparked additional ideas and provided insights as to which techniques were most worth pursuing. For example, the scientists spontaneously experimented with midair hand gestures and using external props to manage their data. One neuroscientist brought along a 3D physical model of his own brain from an MRI scan. He had the idea of using it to control the orientation of all 64 normal and pathological brains displayed on the wall. He had dreamed of doing this in his lab, where he was limited to using a mouse to compare at most four brain scans on a single screen. Scientists also explored relationships among mobile and stationary devices. For example, one astrophysicist was examining a large image of the Milky Way galaxy, accompanied by a series of smaller images at different wavelengths. He suddenly grabbed an iPad tablet, held it up to the primary image, and simulated how he would like to treat it as a physical, interactive filter. Figure 1 shows how he envisioned moving the tablet around, maintaining an overview of the whole image while flipping through different filters to focus on specific wavelengths. Participatory design helped us to delve deeply into the problem space and generate specific innovative ideas. However, we also needed a more systematic approach for characterizing the design space of interaction techniques and making informed choices. For example, the astrophysicists showed us a 400,000-pixel-wide image of the center of the galaxy. While they could see it on WILD much better than in their lab, the image was still 20 times larger than the display capabilities of our wall. 4 ----- These and other gigapixel images highlighted the need for powerful panning and zooming techniques that could be operated from any location in front of the wall. 
This suggested midair interaction, using the hands to point to the locus of the zoom within an image and to control its expansion and contraction from there. Based on the participatory design results and our own explorations, we identified three important dimensions, illustrated in Figure 2, that characterize the design space for pan-and-zoom on a wall display. We ran a controlled experiment to evaluate our hypotheses about which factors increase performance, accuracy, and comfort [2]. Our goal was not necessarily to determine the single “best” technique, but rather to understand the tradeoffs and help users and designers decide which to use under what circumstances. We found that, in general, two hands are better than one; linear gestures are faster than circular ones, despite the need to “clutch;” and greater guidance (or fewer degrees of freedom) significantly increases performance. Most midair freehand gestures are tiring and inefficient. The only exception is the two-handed linear gestures in free space shown in Figure 2f—an appealing technique that requires no additional device. These and other experiments, together with the results of the participatory design sessions, have led to an effective set of techniques that we now use routinely in WILD. **Figure 2: A design space for midair pan and zoom techniques with three dimensions:** _interaction with one hand (top row) or both hands (bottom row); gestures that are_ _constrained to one dimension (left column), to a 2D surface (center column), or free in_ _3D space (right column); linear or circular gestures (insets in each cell). For example,_ _(d) corresponds to using the dominant hand as a laser pointer to indicate the focus point_ _and the nondominant hand to control zooming with linear or circular gestures on a_ _handheld device. In (c), both tasks are carried out with the dominant hand._ 5 ----- **Figure 3: Interaction instruments. (left) An interaction instrument sorts the 64 displayed** _brain scans, (center) a brain prop controls the scan orientation, and (right) a digital_ _pen annotates content on the wall. (Source: Photothèque CNRS, Cyril Fresillon.)._ ### Developing multi-surface applications Developing software for multisurface environments raises several challenges. First, applications are inherently distributed and the environment is dynamic: 20 to 30 computers are involved in a typical session, including the cluster running the wall, the computers running the table and motion tracking system, the handheld devices, and the users’ laptops. Second, input devices can be combined in various ways to interact with the various surfaces, and multiple users must be able to interact in parallel. Finally, content comes from a variety of sources, including static documents brought by users and live windows from legacy applications. Our goal was to simplify the development of applications in this context without sacrificing the flexibility and openness required by our users. This led to a modular approach that separates user interaction, graphical rendering, and content sources. #### Distributed interaction Our concept of ubiquitous instrumental interaction separates interaction from the rest of the application [3]. An interaction instrument mediates interaction between a user and the objects of interest. For example, users can designate objects with a pointing instrument, move them with a drag-and-drop instrument, and change their color with a color selection instrument. 
Instruments are independent of the objects they operate on: they need only know that the object implements a given protocol, such as selecting, changing position, or setting a color. Multiple instruments can be used in parallel. Instruments can also be embodied in portable devices—for example, a smartphone used as a laser pointer. In this case, the instrument runs on the device and interacts with objects located on other surfaces. We have created generic instruments for selecting, moving, organizing, and annotating objects, as well as more specific ones, such as the brain prop shown in Figure 3, which is used to control the orientation of brain scans on the wall. These interaction instruments have proven very flexible since they can be customized to the users’ needs without modifying the application: instruments discover which objects they can interact with based on the protocols that the objects implement. 6 ----- **Figure 4: jBricks and the WILD Input Server. (left) A jBricks application manages a scene of 2D objects laid out on an infinite canvas. On the cluster, render servers replicate the scene and display only the objects that lie in their viewing frustum. (right) A configuration of the WILD Input Server for a virtual device combining a VICON position-tracking component and an iPod handheld device. The configuration can be tested outside the WILD room by replacing the VICON component with those in gray and using a mouse for position input. The pan-zoom component on the right sends high-level events to the application.** At a lower level, input in a multisurface environment can come from a variety of sources, including standard devices such as mice and keyboards, multitouch devices such as interactive tables and tablets, and systems such as motion trackers. Rather than sending this raw input directly to applications or instruments, we have created an intermediate layer called the WILD Input Server [4]. The WILD Input Server uses the ICon visual editor [5] to create and edit input configurations. Figure 4 shows how a configuration transforms low-level input from physical devices into higher-level events sent to client applications. The WILD Input Server supports standard protocols such as USB-HID, OSC, TUIO, and VRPN as well as devices such as LiveScribe interactive pens or the VICON motion tracker. The server sends events to applications through various protocols (primarily OSC; http://opensoundcontrol.org) or plug-ins. Applications can also remotely control the server to start, stop, or change a configuration or to load a plug-in. Developers can easily create and modify configurations by assembling components such as filters, adapters, and flow controllers, even during a prototyping session. Configurations typically define virtual devices that aggregate input from multiple sources. For example, the application sees a multitouch handheld device whose 3D position is provided by the motion tracking system as a single device. Our implementation of the pan-and-zoom techniques from Figure 2 illustrates the flexibility of this approach. We developed the techniques outside the WILD room, substituting a mouse or a Wiimote for the motion tracking system, and created a set of virtual devices that we could modify and fine-tune in the WILD room, without relaunching the application. 7 -----
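The virtual-device idea just described can be sketched in a few lines of Java. The types below are hypothetical stand-ins, not the actual WILD Input Server or ICon components, which are assembled graphically and exchange events over protocols such as OSC; the sketch only shows how two low-level streams can be fused into one higher-level event.

```java
// Hypothetical classes illustrating a virtual device that fuses two input
// sources: the motion tracker supplies where the user points, and a handheld
// scroll wheel supplies zoom ticks; together they yield pan-zoom events.
record TrackerSample(double x, double y, double z) {}    // from the motion tracking system
record ScrollSample(double delta) {}                      // from a handheld device
record PanZoomEvent(double focusX, double focusY, double zoomFactor) {}

class PanZoomDevice {
    private TrackerSample pose = new TrackerSample(0, 0, 0);

    // The tracker stream continuously updates where the user is pointing.
    void onTracker(TrackerSample s) { pose = s; }

    // Each scroll tick from the handheld becomes one higher-level event,
    // anchored at the currently pointed-at focus on the wall.
    PanZoomEvent onScroll(ScrollSample s) {
        double zoom = Math.exp(0.1 * s.delta());          // smooth multiplicative zoom
        return new PanZoomEvent(pose.x(), pose.y(), zoom);
    }
}
```

In this spirit, swapping the tracker component for a mouse-driven one changes only the producer of TrackerSample values, which is what allows the same application code to run outside the instrumented room.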
Depending on the configuration and the task at hand, different surfaces display either the same part or different parts of the canvas. Tiled displays require particularly high performance to create the illusion of a single, continuous surface with no tearing. Existing cluster-based systems for distributed rendering do not fit our requirements. For example, Equalizer and CGLX require adapting or rewriting applications using OpenGL, while SAGE [6] uses pixel streaming and therefore cannot take full advantage of ultra-high resolution wall displays. Our approach uses replication: each machine driving a display runs a replica of the complete application or a rendering client that holds a copy of the scene. Each replica knows which part of the scene to display; a master application synchronizes changes to the scene and the viewing camera. We created two frameworks to develop multisurface applications based on this model. The first, jBricks [4], is based on a 2D scene graph that describes the canvas’s content and a set of reactions that describe how to respond to user actions, similar to traditional user-interface toolkits. Scene graph objects include geometric shapes, text, images, and Java Swing widgets laid out on an infinite canvas and observed through one or more cameras. jBricks uses a replicated approach to render the scene graph on a cluster-driven tiled display. The toolkit supports smooth real-time panning and zooming of very large information spaces, including gigapixel images, as well as interactive visual effects such as magnifying lenses. By making distribution transparent to the application, jBricks greatly lowers the barrier to developing multisurface applications. Our second framework, Shared Substance, takes a different approach by making distribution explicit [7]. A Shared Substance application is a collection of processes called environments that run on different machines. The application discovers environments dynamically, and they can appear and disappear at any time. Each environment contains a hierarchical data structure that it can share, in whole or in part, with other environments. An environment accesses a shared subtree either by replicating it and accessing the local copy or by mounting it and accessing the original through remote procedure calls. Environments can use facets to dynamically add functionality to a shared subtree. For example, Figure 5 shows how our Substance Canvas application uses facets to display the canvas, modify its content, and support interaction. Shared Substance provides great flexibility and makes it possible to create applications that dynamically adapt to their use context and are reconfigurable at runtime. #### Distributed content sources In a multisurface environment, users need to juxtapose content from multiple sources, as if the various surfaces were extensions of their laptops. Sources include passive documents such as PDF files and images, active documents such as webpages, and live applications such as data analysis and visualization programs. The challenge lies in integrating such heterogeneous sources into a unified environment. We began with simple but effective solutions based on conventional tools: a user can e-mail a document to WILD to display it on the wall or “print to the wall” by sending a document to a printer queue that WILD monitors. Users can also fill out a simple Web form or use a bookmarklet to display webpages on the wall. 8 ----- **Figure 5:** _Substance Canvas application. 
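To make the replication model under “Distributed rendering” concrete, here is a minimal, hedged Java sketch; the class and its fields are illustrative assumptions, not the actual jBricks or Shared Substance APIs. Each replica holds the full scene and draws only what intersects its own tile of the shared canvas:

```java
import java.awt.Rectangle;
import java.util.List;

// Illustrative sketch of replicated rendering: every machine holds a replica
// of the scene (reduced to bounding boxes here) and culls against its tile.
class RenderReplica {
    private final Rectangle tile;            // this machine's portion of the shared canvas
    private final List<Rectangle> scene;     // replicated copy of the scene

    RenderReplica(Rectangle tile, List<Rectangle> scene) {
        this.tile = tile;
        this.scene = scene;
    }

    // A master application broadcasts scene and camera changes; on each change,
    // every replica redraws only the objects that fall inside its own viewport.
    void render() {
        for (Rectangle bounds : scene) {
            if (bounds.intersects(tile)) {
                draw(bounds);                // local drawing call on this machine's screens
            }
        }
    }

    private void draw(Rectangle bounds) {
        System.out.println("drawing object at " + bounds);  // placeholder for real rendering
    }
}
```

Because no pixels cross the network, only compact scene and camera updates, this is what lets a gigapixel canvas span all 32 wall tiles without any machine touching content outside its own slice.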
**Figure 5: Substance Canvas application. (left) Two users share content between the wall, the table and a laptop. (right) A master environment shares a scene graph representing a canvas. Rendering environments replicate the scene graph to add local rendering capabilities, while interaction instruments mount the scene graph to add editing functions. Content providers then mount the scene graph to modify its content, for example, through a webservice. (Source: INRIA.)** Even so, scientists must be able to use existing applications. Since porting them to our frameworks is not practical, at least in the short term, both jBricks and Shared Substance support the display of live applications running on a different computer, typically a user’s laptop. For Linux, we use Metisse [8] to send pixel-based representations of the windows. For Mac OS, we use Scotty [9] to send vector-based representations of the windows, resulting in smooth scaling when displayed on the wall. In both cases, the scientists can use an instrument that simulates a mouse to interact with the teleported applications. An alternative with better performance is to run the legacy application on the WILD cluster itself. Using Shared Substance, we wrapped the BrainVISA 3D visualization application (http://brainvisa.info) into an environment that shares the address of the scan being displayed and the position of the virtual camera controlling its orientation. Figure 3 shows the cluster running 64 such environments, each displaying a different brain scan. The table runs an instrument for organizing the brain scans, while the brain prop controls the orientation of a master camera, which is shared by the 64 environments that display the individual brain scans. The resulting application was created in a few days, providing neuroanatomists with a unique tool to study the brain. We used a similar approach with the PyMol molecule viewer. We can display a single molecule on the full wall by having each replica display its part. Rotating it in real time shows no visible tearing. By distributing content, rendering, and interaction, we have created a modular architecture that simplifies the development of multisurface applications while supporting flexible interaction as well as legacy content and applications. Even without optimization, performance is good: users can interact with full-wall images in real-time with little perceivable lag. The ability to change configurations and components on the fly during a design session makes these tools an excellent platform for rapid prototyping. 9 ----- #### Sidebar: Recommended Reading Researchers have long been interested in room-scale interaction. An early project was the Stanford iRoom [10], an infrastructure that enabled the devices in a room to communicate with each other. Lucia Terrenghi and colleagues provided a comprehensive taxonomy of different scales of multisurface environments [11], from wristwatches and phones to the side of a building. These environments support users interacting in isolation or simultaneously, in parallel or collaboratively. At the room-sized scale of this spectrum, much work has focused on creating large high-resolution displays such as wall-sized tiled displays and CAVEs. These projects often focus on high-performance distributed rendering and data-sharing rather than on interaction. Tao Ni and colleagues surveyed the technologies and applications for such environments and emphasized the need for better interaction techniques [12]. 
Our work addresses these issues by introducing concepts and techniques for distributed, multisurface interaction [3, 4, 7]. ### Conclusion Realizing the vision of ubiquitous computing requires creating interaction architectures and paradigms that harness the power of combining devices and services into integrated environments. Today’s smartphones, tablets, multitouch tables, and wall displays bring little more than the sum of their parts. In contrast, the WILD room’s multisurface interaction paradigm illustrates how interaction, not just content, can be distributed across multiple devices. The scientists we have worked with are eager to use WILD for their daily work. By involving them in the design process, we have been able to focus on their real needs and identify the real technological challenges. We have learned the following lessons in the process: - decouple tools from one another and use simple protocols to facilitate their integration; - focus on interaction rather than rendering, and assume that hardware will provide sufficient performance; - leverage existing tools when possible, but also develop from scratch when needed; and - explore alternative designs to gain deeper understanding of their respective advantages and disadvantages. However, this is just the beginning. We must work with additional user groups to gain new insights and expand the scope of multisurface interaction, extend our interaction vocabulary to match the richness of desktop interfaces, and scale our software architectures to test them with other applications. One important requirement not currently addressed by WILD and unanimously requested by our users is support for collaboration among remote colleagues. While the multisurface interaction paradigm naturally scales to remote groups, additional technology is needed to support face-to-face communication. The WILD room is now part of Digiscope (http://digiscope.fr), a larger project that will create a network of interactive visualization rooms specifically designed to address these issues. In the long run, platforms such as WILD will become increasingly affordable. Wall-sized displays will combine high-definition and multitouch surfaces without borders, and motion tracking will become more reliable, without the need for markers. These advances will reduce the constraints on users and support a wider range of multisurface interactions. 10 ----- We anticipate that this technology will become prevalent in the workplace, first in meeting rooms and design studios, then in offices, and later in the home, offering families new ways to play, study, communicate, and enjoy entertainment. Only then will multisurface interaction become truly integrated into the fabric of our everyday lives. ### Acknowledgments We thank our partner laboratories, in particular IAS (astrophysics), LAL (particle physics), IGM (biology) and Neurospin (neuroscience) for their participation. WILD is supported by a Région Île-de-France/Digiteo grant and by Université Paris-Sud, INRIA, CNRS, ANR and the INRIA-Microsoft joint laboratory. ### References 1. Mark Weiser. The computer for the 21st century. Scientific American, 265(3):94–104, 1991. 2. Mathieu Nancel, Julie Wagner, Emmanuel Pietriga, Olivier Chapuis, and Wendy Mackay. Mid-air pan-and-zoom on wall-sized displays. In Proc. Human Factors in Computing Systems, CHI ’11, 177–186. ACM, 2011. 3. Clemens Klokmose and Michel Beaudouin-Lafon. VIGO: Instrumental interaction in multi-surface environments. In Proc. 
Human Factors in Computing Systems, CHI ’09, 869–878. ACM, 2009. 4. Emmanuel Pietriga, Stéphane Huot, Mathieu Nancel, and Romain Primet. Rapid development of user interfaces on cluster-driven wall displays with jBricks. In Proc. Engineering Interactive Computing Systems, EICS ’11, 185–190. ACM, 2011. 5. Pierre Dragicevic and Jean-Daniel Fekete. Support for input adaptability in the ICon toolkit. In Proc. Multimodal Interfaces, ICMI ’04, 212–219. ACM, 2004. 6. Byungil Jeong, Jason Leigh, Andrew Johnson, Luc Renambot, Maxine Brown, Ratko Jagodic, Sungwon Nam, and Hyejung Hur. Ultrascale collaborative visualization using a display-rich global cyberinfrastructure. IEEE Computer Graphics and Applications, 30(3):71–83, 2010. 7. Tony Gjerlufsen, Clemens Nylandsted Klokmose, James Eagan, Clément Pillias, and Michel Beaudouin-Lafon. Shared Substance: developing flexible multi-surface applications. In Proc. Human Factors in Computing Systems, CHI ’11, 3383–3392. ACM, 2011. 8. Olivier Chapuis and Nicolas Roussel. Metisse is not a 3D desktop! In Proc. User Interface Software and Technology, UIST ’05, 13–22. ACM, 2005. 9. James R. Eagan, Michel Beaudouin-Lafon and Wendy E. Mackay. Cracking the cocoa nut: user interface programming at runtime. In Proc. User Interface Software and Technology, UIST ’11, 225–234. ACM, 2011. 10. Jan Borchers, Meredith Ringel, Joshua Tyler, and Armando Fox. Stanford interactive workspaces: a framework for physical and graphical user interface prototyping. IEEE Wireless Communications, 9(6):64–69, December 2002. 11. Lucia Terrenghi, Aaron Quigley, and Alan Dix. A taxonomy for and analysis of multi-person-display ecosystems. Personal and Ubiquitous Computing, 13:583–598, November 2009. 12. Tao Ni, Greg S. Schmidt, Oliver G. Staadt, Mark A. Livingston, Robert Ball, and Richard May. A survey of large high-resolution display technologies, techniques, and applications. In Proc. Virtual Reality Conference, VR ’06, 223–236. IEEE, March 2006. 11 ----- ### About the authors **Michel Beaudouin-Lafon** is a professor of computer science at Université Paris-Sud and a senior member of Institut Universitaire de France. His research interests include interaction techniques and paradigms, collaborative systems, and engineering of interactive systems. He received a PhD in computer science from Université Paris-Sud. Contact him at mbl@lri.fr. **Olivier Chapuis** is a research scientist at CNRS. His research interests include windowing systems, pointing, multiscale interfaces, and interaction techniques. He received a PhD in mathematics from Université Paris VII Diderot. Contact him at olivier.chapuis@lri.fr. **James R. Eagan** is an assistant professor at Télécom ParisTech. His research interests include information visualization and making software more malleable for end-users and programmers. He received a PhD in computer science from the Georgia Institute of Technology. Contact him at james.eagan@telecom-paristech.fr. **Tony Gjerlufsen** received a PhD in computer science from Aarhus University. His research interests include software architecture, human-computer interaction, philosophy of computer science, and ubiquitous computing. Contact him at tony@cs.au.dk. **Stéphane Huot** is an associate professor at Université Paris-Sud, on leave at INRIA. His research interests include interaction techniques, input devices and methods, and engineering of interactive systems. 
He received a PhD in computer science from Université de Nantes. Contact him at stephane.huot@lri.fr. **Clemens Klokmose** is a postdoctoral fellow at Aarhus University. His research interests include human-computer interaction and multisurface environments. He received a PhD in computer science from Aarhus University. Contact him at clemens@cs.au.dk. **Wendy Mackay** is a principal research scientist at INRIA and heads the INSITU lab. Her research interests include coadaptive systems, interactive paper, mediated communication, and participatory design. She received a PhD from the Massachusetts Institute of Technology. Contact her at wendy.mackay@lri.fr. **Mathieu Nancel** is pursuing a PhD at Université Paris-Sud. His research interests include interaction techniques, visualization platforms, and user performance modeling. He received an MSc and an engineering degree in computer science from Université Paris-Sud. Contact him at mathieu.nancel@lri.fr. **Emmanuel Pietriga** is a research scientist at INRIA. His research interests include interaction techniques, information visualization, the Semantic Web, and the engineering of interactive systems. He received a PhD in computer science from Institut National Polytechnique de Grenoble. Contact him at emmanuel.pietriga@inria.fr. **Clément Pillias** is an engineer at CNRS. He received an MSc in computer science from Université Paris 6. His research interests include interaction techniques, gestural interfaces, collaborative interaction, and engineering of interactive systems. Contact him at clement.pillias@lri.fr. **Romain Primet** is a research engineer at INRIA. He received an MSc in computer science from Université de Nice. Contact him at romain.primet@inria.fr. **Julie Wagner** is pursuing a PhD at INRIA. Her research interests include embodied and tangible interaction with large surfaces. She received an MSc in computer science from RWTH Aachen University. Contact her at julie.wagner@lri.fr. 12 -----
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/MC.2012.110?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/MC.2012.110, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "other-oa", "status": "GREEN", "url": "https://hal.inria.fr/hal-00687825/file/WILD-IEEEComputer-authorversion.pdf" }
2,012
[ "JournalArticle" ]
true
2012-04-01T00:00:00
[]
7,988
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0166c8b5c6445043b94fc7b62d145d0c3c8b6483
[ "Computer Science" ]
0.84913
More efficient oblivious transfer and extensions for faster secure computation
0166c8b5c6445043b94fc7b62d145d0c3c8b6483
Conference on Computer and Communications Security
[ { "authorId": "2297426", "name": "Gilad Asharov" }, { "authorId": "1682750", "name": "Yehuda Lindell" }, { "authorId": "145139628", "name": "T. Schneider" }, { "authorId": "1744880", "name": "Michael Zohner" } ]
{ "alternate_issns": null, "alternate_names": [ "Int Workshop Cogn Cell Syst", "CCS", "Comput Commun Secur", "CcS", "International Symposium on Community-centric Systems", "International Workshop on Cognitive Cellular Systems", "Conf Comput Commun Secur", "Comb Comput Sci", "Int Symp Community-centric Syst", "Combinatorics and Computer Science", "Circuits, Signals, and Systems", "Computer and Communications Security", "Circuit Signal Syst" ], "alternate_urls": null, "id": "73f7fe95-b68b-468f-b7ba-3013ca879e50", "issn": null, "name": "Conference on Computer and Communications Security", "type": "conference", "url": "https://dl.acm.org/conference/ccs" }
Protocols for secure computation enable parties to compute a joint function on their private inputs without revealing anything but the result. A foundation for secure computation is oblivious transfer (OT), which traditionally requires expensive public key cryptography. A more efficient way to perform many OTs is to extend a small number of base OTs using OT extensions based on symmetric cryptography. In this work we present optimizations and efficient implementations of OT and OT extensions in the semi-honest model. We propose a novel OT protocol with security in the standard model and improve OT extensions with respect to communication complexity, computation complexity, and scalability. We also provide specific optimizations of OT extensions that are tailored to the secure computation protocols of Yao and Goldreich-Micali-Wigderson and reduce the communication complexity even further. We experimentally verify the efficiency gains of our protocols and optimizations. By applying our implementation to current secure computation frameworks, we can securely compute a Levenshtein distance circuit with 1.29 billion AND gates at a rate of 1.2 million AND gates per second. Moreover, we demonstrate the importance of correctly implementing OT within secure computation protocols by presenting an attack on the FastGC framework.
# More Efficient Oblivious Transfer and Extensions for Faster Secure Computation ## Gilad Asharov, Yehuda Lindell #### Cryptography Research Group Bar-Ilan University, Israel ## asharog@cs.biu.ac.il, lindell@biu.ac.il ABSTRACT Protocols for secure computation enable parties to compute a joint function on their private inputs without revealing anything but the result. A foundation for secure computation is oblivious transfer (OT), which traditionally requires expensive public key cryptography. A more efficient way to perform many OTs is to extend a small number of base OTs using OT extensions based on symmetric cryptography. In this work we present optimizations and efficient implementations of OT and OT extensions in the semi-honest model. We propose a novel OT protocol with security in the standard model and improve OT extensions with respect to communication complexity, computation complexity, and scalability. We also provide specific optimizations of OT extensions that are tailored to the secure computation protocols of Yao and Goldreich-Micali-Wigderson and reduce the communication complexity even further. We experimentally verify the efficiency gains of our protocols and optimizations. By applying our implementation to current secure computation frameworks, we can securely compute a Levenshtein distance circuit with 1.29 billion AND gates at a rate of 1.2 million AND gates per second. Moreover, we demonstrate the importance of correctly implementing OT within secure computation protocols by presenting an attack on the FastGC framework. ## Categories and Subject Descriptors F.1.2 [Modes of computation]: Interactive and reactive computation—cryptographic protocols ## Keywords Secure computation; oblivious transfer extensions; semi-honest adversaries Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org. CCS’13, November 4–8, 2013, Berlin, Germany. Copyright 2013 ACM 978-1-4503-2477-9/13/11 ...$15.00. http://dx.doi.org/10.1145/2508859.2516738. ## Thomas Schneider, Michael Zohner #### Engineering Cryptographic Protocols Group TU Darmstadt, Germany ## thomas.schneider@ec-spride.de, michael.zohner@ec-spride.de 1. INTRODUCTION 1.1 Background In the setting of secure two-party computation, two parties P0 and P1 with respective inputs x and y wish to compute a joint function f on their inputs without revealing anything but the output f (x, y). This captures a large variety of tasks, including privacy-preserving data mining, anonymous transactions, private database search, and many more. In this paper, we consider semi-honest adversaries who follow the protocol, but may attempt to learn more than allowed via the protocol communication. We focus on semi-honest security as this allows construction of highly efficient protocols for many application scenarios. 
This model is justified, e.g., for computations between hospitals or companies that trust each other but need to run a secure protocol because of legal restrictions and/or in order to prevent inadvertent leakage (since only the output is revealed from the communication). Semi-honest security also protects against potential misuse by some insiders and future break-ins, and can be enforced with software attestation. Moreover, understanding the cost of semi-honest security is an important stepping stone to efficient malicious security. We remark that also in a large IARPA-funded project on secure computation on big data, IARPA stated that the semi-honest adversary model is suitable for their applications [27]. **Practical secure computation.** Secure computation has been studied since the mid 1980s, when powerful feasibility results demonstrated that any efficient function can be computed securely [15, 51]. However, until recently, the bulk of research on secure computation was theoretical in nature. Indeed, many held the opinion that secure computation will never be practical since carrying out cryptographic operations for every gate in a circuit computing the function (which is the way many protocols work) will never be fast enough to be of use. Due to many works that pushed secure computation further towards practical applications, e.g., [4, 5, 8, 11, 13, 21, 24, 30, 35–37, 44, 50], this conjecture has proven to be wrong and it is possible to carry out secure computation of complex functions at speeds that five years ago would have been inconceivable. For example, in FastGC [24] it was shown that AES can be securely computed with 0.2 seconds of preprocessing time and just 0.008 seconds of online computation. This has applications to private database search and also to mitigating server breaches in the cloud by sharing the decryption key for sensitive data between two servers and never revealing it (thereby forcing an attacker to compromise the security of two servers instead of one). ----- In addition, [24] carried out a secure computation of a circuit of size 1.29 billion AND gates, which until recently would have been thought impossible. Their computation took 223 minutes, which is arguably too long for most applications. However, it demonstrated that large-scale secure computation can be achieved. The FastGC framework was a breakthrough result regarding the practicality of secure computation and has been used in many subsequent works, e.g., [22, 23, 25, 26, 44]. However, it is possible to still do much better. The secure computation framework of [49] improved the results of FastGC [24] by a factor of 6-80, depending on the network latency. Jumping ahead, we obtain additional speedups for both secure computation frameworks [24] and [49]. Most notably, when applying our improved OT implementation to the framework of [49], we are able to evaluate the 1.29 billion AND gate circuit in just 18 minutes. We conclude that significant efficiency improvements can still be made, considerably broadening the tasks that can be solved using secure computation in practice. **Oblivious transfer and extensions.** In an oblivious transfer (OT) [48], a sender with a pair of input strings (x_0, x_1) interacts with a receiver who inputs a choice bit σ. The result is that the receiver learns x_σ without learning anything about x_{1−σ}, while the sender learns nothing about σ. Oblivious transfer is an extremely powerful tool and the foundation for almost all efficient protocols for secure computation. 
Notably, Yao’s garbled-circuit protocol [51] (e.g., implemented in FastGC [24]) requires OT for every input bit of one party, and the GMW protocol [15] (e.g., implemented in [8, 49]) requires OT for every AND gate of the circuit. Accordingly, the efficient instantiation of OT is of crucial importance as is evident in many recent works that focus on efficiency, e.g., [8,16,19,22–24,26,34,37,43,49]. In the semi-honest case, the best known OT protocol is that of [40], which has a cost of approximately 3 exponentiations per 1-out-of-2 OT. However, if thousands, millions or even billions of oblivious transfers need to be carried out, this will become prohibitively expensive. In order to solve this problem, OT extensions [2, 28] can be used. An OT extension protocol works by running a small number of OTs (say, 80 or 128) that are used as a base for obtaining many OTs via the use of cheap symmetric cryptographic operations only. This is conceptually similar to public-key encryption where instead of encrypting a large message using RSA, which would be too expensive, a hybrid encryption scheme is used such that only a single RSA computation is carried out to encrypt a symmetric key and then the long message is encrypted using symmetric operations only. Such an OT extension can actually be achieved with extraordinary efficiency; specifically, the protocol of [28] requires only three hash function computations on a single block per oblivious transfer (beyond the initial base OTs). **Related Work.** There is independent work on the efficiency of OT extension with security against stronger malicious adversaries [17,42,43]. In the semi-honest model, [20] improved the implementation of the OT extension protocol of [28] in FastGC [24]. They reduce the memory footprint by splitting the OT extension protocol sequentially into multiple rounds and obtain speedups by instantiating the pseudorandom generator with AES instead of SHA-1. Their implementation evaluates 400,000 OTs (of 80-bit strings without precomputations) per second over WiFi; we propose additional optimizations and our fastest implementation evaluates more than 700,000 OTs per second over WiFi, cf. Tab. 4. ## 1.2 Our Contributions and Outline In this paper, we present more efficient protocols for OT extensions. This is somewhat surprising since the protocol of [28] sounds optimal given that only three hash function computations are needed per transfer. Interestingly, our protocols do not lower the number of hash function operations. However, we observe that significant cost is incurred due to other factors than the hash function operations. We propose several algorithmic (§4) and protocol (§5) optimizations and obtain an OT extension protocol (General OT, G-OT, §5.3) that has lower communication, faster computation, and can be parallelized. Additionally, we propose two OT extension protocols that are specifically designed to be used in secure computation protocols and which reduce the communication and computation even further. The first of these protocols (Correlated OT, C-OT, §5.4) is suitable for secure computation protocols that require correlated inputs, such as Yao’s garbled circuits protocol with the free-XOR technique [32, 51]. The second protocol (Random OT, R-OT, §5.4) can be used in secure computation protocols where the inputs can be random, such as GMW with multiplication triples [1, 15] (cf. §5.1). 
We apply our optimizations to the OT extension implementation of [49] (which is based on [8]) and demonstrate the improvements by extensive experiments (§6).[1] A summary of the time complexity for 1-out-of-2 OTs on 80-bit strings is given in Fig. 1. While the original protocol of [28] as implemented in [49] evaluates 2^23 OTs in 18.0 s with one thread and in 14.5 s with two threads, our improved R-OT protocol requires only 8.4 s with one thread and 4.2 s with two threads, which demonstrates the scalability of our approach. **Figure 1: Runtime for 1-out-of-2 OT extension optimizations on 80-bit strings. The reference and number of threads is given in (); the time for 2^23 OTs is given in {}.** **Secure random number generation.** In §3 we emphasize that when OT protocols are used as a building block in a secure computation protocol, it is very important that random values are generated with a cryptographically strong random number generator. ([1] Our implementation is available online at http://encrypto.de/code/OTExtension.) ----- In fact, we show an attack on the latest version of the FastGC [24] implementation (version v0.1.1) of Yao’s protocol which uses a weak random number generator. Our attack allows the full recovery of the inputs of both parties. To protect against our attack, a cryptographically strong random number generator needs to be used (which results in an increased runtime). **Faster semi-honest base OT without random oracle.** In the semi-honest model, the OT of [40] is the fastest known with 2 + n exponentiations for the sender and 2n fixed-base exponentiations for the receiver, for n OTs. However, it is proven secure only in the random oracle model, which is why the authors of [40] also provide a slower semi-honest OT that relies on the DDH assumption, which has complexity 4n fixed-base + 2n double exponentiations for the sender and 1 + 3n fixed-base + n exponentiations for the receiver. In §5.2 we construct a protocol secure under the Decisional Diffie-Hellman (DDH) assumption that is much faster when many transfers are run (as in the case of OT extensions where 80 base OTs are needed) and is only slightly slower than the fastest OT in the random oracle model (§6.1). **Faster OT extensions.** In §5.3 we present an improved version of the original OT extension protocol of [28] with reduced communication and computation complexity. Furthermore, we demonstrate how the OT extension protocol can be processed in independent blocks, allowing OT extension to be parallelized and yielding a much faster runtime (§4.1). In addition, we show how to implement the matrix transpose operation using a cache-efficient algorithm that operates on multiple entries at once (§4.2); this has a significant effect on the runtime of the protocol. Finally, we show how to reduce the communication by approximately one quarter (depending on the bit-length of the inputs). This is of great importance since local computations of the OT extension protocol are so fast that the communication is often the bottleneck, especially when running the protocol over the Internet or even wireless networks. **Extended OT functionality.** Our improved protocol can be used in any setting in which regular OT can be used. However, with a mind on the application of secure computation, we further optimize the protocol by taking into account its use in the protocols of Yao [51] and GMW [15] in §5.4. 
For Yao’s garbled circuits protocol, we observe that the OT extension protocol can choose the first value randomly and output it to the sender while the second value is computed as a function of the first value. For the GMW protocol, we observe that the OT extension protocol can choose both values randomly and output them to the sender. In both cases, the communication is reduced to a half (or even less) of the original protocol of [28]. **Experimental evaluation and applications.** In §6 we experimentally verify the performance improvements of our proposed optimizations for OT and OT extension. In §7 we demonstrate their efficiency gains for faster secure computation, by giving performance benchmarks for various application scenarios. For the Yao’s garbled circuits framework FastGC [24], we achieve an improvement up to factor 9 for circuits with many inputs for the receiver, whereas we improve the runtime of the GMW implementation of [49] by factor 2, e.g., a Levenshtein distance circuit with 1.29 billion AND gates can now be evaluated at a rate of 1.2 million AND gates per second. ## 2. PRELIMINARIES In the following, we summarize the security parameters used in our paper (§2.1) and describe the OT extension protocol of [28] (§2.2), Yao’s garbled circuits protocol (§2.3), and the GMW protocol (§2.4) in more detail. Standard definitions of security are given in Appendix A. ## 2.1 Security Parameters Throughout the paper, we denote the symmetric security parameter by κ. Tab. 1 lists usage times (time frames) for different values of the symmetric security parameter κ (SYM) and corresponding field sizes for finite field cryptography (FFC) and elliptic curve cryptography (ECC) as recommended by NIST [45]. For FFC we use a subgroup of order q with |q| = 2κ. For ECC we use Koblitz curves which had the best performance in our experiments.

| Security (Time Frames) | SYM | FFC | ECC |
|---|---|---|---|
| Short (legacy) | 80 | 1024 | K-163 |
| Medium (< 2030) | 112 | 2048 | K-243 |
| Long (> 2030) | 128 | 3072 | K-283 |

**Table 1: Security parameters and recommended key sizes.** ## 2.2 Oblivious Transfer and OT Extension The m-times 1-out-of-2 OT functionality for ℓ-bit strings, denoted m×OT_ℓ, is defined as follows: The sender S inputs m pairs of strings x_i^0, x_i^1 ∈ {0,1}^ℓ (1 ≤ i ≤ m), the receiver R inputs a string r = (r_1, ..., r_m) of length m, and R obtains x_j^{r_j} (1 ≤ j ≤ m) as output. OT ensures that S learns nothing about r and R learns nothing about x_j^{1−r_j}. An OT extension protocol implements the m×OT_ℓ functionality using a small number of actual OTs, referred to as base OTs, and cheap symmetric cryptographic operations. In [28] it is shown how to implement the m×OT_ℓ functionality using a single call to κ×OT_m, and 3m hash function computations. Note that κ×OT_m can be implemented via a single call to κ×OT_κ in order to obliviously transfer symmetric keys, and then using a pseudo-random generator G to obliviously transfer the actual inputs of length m (cf. [26,28]). In the first step of [28], S chooses a random string s ∈_R {0,1}^κ, and R chooses a random m×κ bit matrix T = [t^1 | ... | t^κ], where t^i ∈ {0,1}^m denotes the i-th column of T. The parties then invoke the κ×OT_m functionality, where R plays the sender with inputs (t^i, t^i ⊕ r) and S plays the receiver with input s. Let Q = [q^1 | ... | q^κ] denote the m×κ matrix received by S. 
Note that q^i = (s_i · r) ⊕ t^i and q_j = (r_j · s) ⊕ t_j (where t_j, q_j are the j-th rows of T and Q, respectively). S sends (y^0_j, y^1_j), where y^0_j = x^0_j ⊕ H(q_j) and y^1_j = x^1_j ⊕ H(q_j ⊕ s), for 1 ≤ j ≤ m. R finally outputs z_j = y^{r_j}_j ⊕ H(t_j) for every j. The protocol is secure assuming that H : {0,1}^κ → {0,1}^ℓ is a random oracle, or a correlation robust function as in Definition A.2; see [28] for more details.

## 2.3 Yao's Garbled Circuits Protocol

Yao's garbled circuits protocol [51] allows two parties to securely compute an arbitrary function that is represented as a Boolean circuit. The sender S encrypts the Boolean gates of the circuit using symmetric keys and sends the encrypted function together with the keys that correspond to his input bits to the receiver R. R then uses a 1-out-of-2 OT to obliviously obtain the keys that correspond to his inputs and evaluates the encrypted function by decrypting it gate by gate. To obtain the output, R sends the resulting keys to S, or S provides a mapping from keys to output bits. We emphasize that Yao's garbled circuits protocol requires a 1-out-of-2 OT on κ-bit strings for each input bit of R. For our experiments we use the Yao's garbled circuits framework FastGC [24].

## 2.4 The GMW Protocol

The protocol of Goldreich, Micali, and Wigderson (GMW) [15] also represents the function to be computed as a Boolean circuit. Both parties secret-share their inputs using the XOR operation and evaluate the Boolean circuit as follows. An XOR gate is computed by locally XORing the shares, while an AND gate is evaluated interactively with the help of a multiplication triple [1,49] which can be precomputed by two random 1-out-of-2 OTs on bits (cf. §5.1). To reconstruct the outputs, the parties exchange their output shares. The performance of GMW depends on the number of OTs and on the depth of the evaluated circuit, since the evaluation of AND gates requires interaction. For our experiments we use the GMW framework of [49], which is an optimization of the framework of [8] for the two-party case.

## 3. RANDOM NUMBER GENERATION

The correct instantiation of primitives in implementations of cryptographic protocols is a challenging task, since various security properties have to be met. For instance, an important security property of a pseudo-random generator (PRG) is its unpredictability, i.e., given a sequence of pseudo-random bits x_1 ... x_n, the next bit x_{n+1} should not be predictable. If the security property of the primitive is not met, the security of the overall scheme can be compromised. We found that this was the case for the FastGC framework in version 0.1.1 [24], which uses the standard Java Random class in order to generate random values used in the base OTs, the random choices of vector s and matrix T in the OT extension, and the input keys of the garbled circuit. Overall, this enables an attack that allows each party to recover the inputs of the respective other party, as we will describe now.

## 3.1 The Java Random Class

The Java Random class implements a so-called truncated linear congruential generator (T-LCG) with secret seed ψ ∈ {0,1}^48.
Random numbers can be generated by invoking the next method of an object of the Java Random class, which takes as input the requested number of random bits b (for 1 ≤ b ≤ 32), updates the seed as ψ′ = (αψ + β) mod m, and returns the topmost b bits of the updated seed ψ′, where α = 0x5DEECE66D, β = 0xB, and m = 2^48 are public constants. If more than 32 random bits are needed, next is called repeatedly until a sufficient number of bits has been generated.

The security of T-LCGs was widely studied and they were shown to be predictable [18], even if the generated sequence is not directly output [3]. In the case of the Java Random class, each iteration reveals b bits of the seed, leaving a remaining entropy of 48 − b bits. Furthermore, consecutive values can be used to build linear equations. For our analysis, we assume that the generated random value has at least length 64 bits, i.e., it was generated by two consecutive calls to the next method with b = 32. This holds for the FastGC framework [24], which uses a Java Random object to generate symmetric keys and the columns of the T matrix (we use the first 64 bits only).

To predict the output of the Java Random object, we recover its secret seed ψ = ψ_1 ... ψ_48 using the 64-bit output d = d_1 ... d_64. Since the topmost 32 bits are directly used as output, we have ψ_17 ... ψ_48 = d_1 ... d_32. In addition, we have ψ′_17 ... ψ′_48 = d_33 ... d_64. Now, the remaining lower 16 bits ψ_1 ... ψ_16 can be recovered using the linear equation ψ′ = (αψ + β) mod m. Specifically, for each of the 2^16 possible values of ψ we compute (αψ + β) mod m − (ψ′_17 ... ψ′_48) · 2^16. For the correct value of ψ the result will be zero in the 32 most-significant bits and so will be smaller than 2^16, whereas for all other values it will be larger (with high probability). In practice, this suffices for finding the entire seed ψ in 2^16 steps, which takes under a second. The recovered secret seed ψ can then be used to predict the output of the Java Random object.
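To make the seed recovery concrete, the following minimal Python sketch re-implements it for one 64-bit observation. The T-LCG constants are java.util.Random's public ones; function names such as recover_seed are ours, and the brute-force loop simply tests each of the 2^16 candidates against the second output, which is equivalent to (if slightly less elegant than) the linear check described above:

```python
import secrets

# Public T-LCG constants of java.util.Random.
A, B, M = 0x5DEECE66D, 0xB, 1 << 48

def next32(seed):
    """One T-LCG step: update the 48-bit seed, output its top 32 bits."""
    seed = (A * seed + B) % M
    return seed, seed >> 16

def recover_seed(d):
    """Given d = two consecutive 32-bit outputs, recover the full state
    (unique with overwhelming probability) by brute-forcing the 16
    hidden low bits of the intermediate seed."""
    out1, out2 = d >> 32, d & 0xFFFFFFFF
    for low in range(1 << 16):
        cand = (out1 << 16) | low        # candidate state after output 1
        nxt = (A * cand + B) % M
        if nxt >> 16 == out2:            # consistent with output 2?
            return nxt                   # state after producing d
    return None

# Demo: observe 64 bits, recover the state, predict the next output.
state = secrets.randbits(48)
state, o1 = next32(state)
state, o2 = next32(state)
recovered = recover_seed((o1 << 32) | o2)
assert recovered == state
_, nxt_true = next32(state)
_, nxt_pred = next32(recovered)
assert nxt_true == nxt_pred              # future outputs are predictable
```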
## 3.2 Exploiting the Weak PRG in FastGC [24]

We demonstrate how the usage of the Java Random class in version v0.1.1 of the FastGC [24] framework can be exploited such that the sender can recover the input bits of the receiver using the T matrix generated in the OT extension protocol (cf. §2.2), and the receiver can recover the input bits of the sender using the sender's input keys to the garbled circuit. We implemented and verified both attacks on FastGC; both run in less than a second. Note that both attacks are carried out on the honestly generated transcript, as required for the setting of semi-honest adversaries.

**Recovering the Receiver's Inputs.** The sender can recover the receiver's input bits using the T matrix, which is chosen randomly by the receiver in the OT extension (cf. §2.2). Upon receiving the matrix Q = [q^1 | ... | q^κ], the sender knows that q^i = t^i if s_i = 0, and q^i = t^i ⊕ r if s_i = 1. Hence, whenever the receiver has s_i = 0, the sender obtains q^i = t^i and can recover an intermediate seed ψ of the Java Random object that was used to generate this column of T. Afterwards, the sender computes for j > i consecutive random outputs t^j until he obtains a column q^j ≠ t^j where s_j = 1, which occurs with overwhelming probability 1 − (κ+1)/2^κ. Now, the sender can recover the receiver's input bits r by computing q^j ⊕ t^j = t^j ⊕ r ⊕ t^j = r.

**Recovering the Sender's Inputs.** The receiver can recover the sender's input bits using the sender's input keys to the garbled circuit. In FastGC, the sender generates random symmetric keys k_i ∈ {0,1}^κ for each of his ℓ input bits b_i ∈ {0,1} using the same Random object. If b_i = 0, he sends K_i = k_i to the receiver, else he sends K_i = k_i ⊕ (∆||0), where ∆ ∈ {0,1}^{κ−1} is a constant global value [32]. In order to recover the sender's input bits, the receiver iteratively computes a candidate for the seed with which K_i was generated, computes the next ℓ − i keys k′_j (i < j ≤ ℓ), and checks whether the candidate seed generates a consistent view for the observed values K_j. If b_i = 0, then K_i = k_i and the receiver knows that he has recovered the correct seed by finding either k′_{i+1} ⊕ k′_{i+2} = K_{i+1} ⊕ K_{i+2}, if there are at least two more input bits b_{i+1} = b_{i+2} = 1, or k′_j = K_j, if another input bit is b_j = 0. Once the receiver has found such a b_i = 0, he can recover all subsequent input bits by checking whether k′_j = K_j (⇒ b_j = 0) or not (⇒ b_j = 1). If b_i = 1, then K_i = k_i ⊕ (∆||0) and the receiver recovers a wrong seed, such that neither k′_j = K_j nor k′_{i+1} ⊕ k′_{i+2} = K_{i+1} ⊕ K_{i+2} hold with very high probability. Thus, the receiver knows that b_i = 1 and repeats the attack for i + 1. Note that this attack fails if the sender has less than three input bits, or all except the last two input bits of the sender are set to 1. In this case, however, the receiver can recover the input bits with high probability by using the remaining κ − 64 bits of the key to check if the candidate seed is correct.

**Securing FastGC [24].** Securing the FastGC framework is relatively easy, since Java also provides a cryptographically strong random number generator, called SecureRandom, which by default is implemented based on SHA-1. (In response to our findings, the usage of Random has been replaced with SecureRandom in version 0.1.2 of FastGC.) Replacing all usage of the Random class by SecureRandom increased the runtime of our experiments in §7 by around 0.5 − 4%, depending on the application. A complementary method to reduce the overhead in runtime is to use our correlated input OT extension of §5.4, which eliminates the need of generating a random T matrix, s.t. our attack for reconstructing the receiver's inputs no longer works. Nevertheless, all randomness that is needed (even for our method) must be generated using SecureRandom.

## 4. ALGORITHMIC OPTIMIZATIONS

In the following we describe algorithmic optimizations that improve the scalability and computational complexity of OT extension protocols. We identified computational bottlenecks in OT extension by micro-benchmarking the 1-out-of-2 OT extension implementation of [49]. (Note that the implementation in [49] performs 1-out-of-4 OT, but we adapted their implementation since our protocol optimizations in §5 target 1-out-of-2 OT extension.) We found that the combined computation time of S and R was mostly spent on three operations: the matrix transposition (43%), the evaluation of H, implemented with SHA-1 (33%), and the evaluation of G, implemented with AES (14%). To speed up OT extension, we propose to use parallelization (§4.1) and an efficient algorithm for bit-matrix transposition (§4.2). Note that these implementation optimizations are of general nature and can be applied to our, but also to other OT extension protocols with security against stronger active/malicious adversaries, e.g., [28,43].
As we will show later in our experiments in §6.2, both algorithmic improvements result in substantially faster runtimes, but only in settings where the computation is the bottleneck, i.e., over a fast network such as a LAN.

## 4.1 Blockwise Parallelized OT Extension

Previous OT extension implementations [8,49] improved the performance of OT extension by using a vertical pipelining approach, i.e., one thread is associated to each step of the protocol: the first thread evaluates the pseudo-random generator G and the second thread evaluates the correlation robust function H (cf. §2.2). However, as evaluation of G is faster than evaluation of H, the workload between the two threads is distributed unequally, causing idle time for the first thread. Additionally, this method for pipelining is designed to run exactly two threads and thus cannot easily be scaled to a larger number of threads.

As observed in [20], a large number of OT extensions can be performed by sequentially running the OT extension protocol on blocks of fixed size. This reduces the total memory consumption at the expense of more communication rounds.

We propose to use a horizontal pipelining approach that splits the matrices processed in the OT extension protocol into independent blocks that can be processed in parallel using multiple threads with equal workload, i.e., each of the N threads evaluates the OT extension protocol for m/N inputs in parallel. Each thread uses a separate socket to communicate with its counterpart on the other party, s.t. network scheduling is done by the operating system.

## 4.2 Efficient Bit-Matrix Transposition

The computational complexity of cryptographic protocols is often measured by counting the number of invocations of cryptographic primitives, since their evaluation often dominates the runtime. However, non-cryptographic operations can also have a high impact on the overall runtime, although they might seem insignificant in the protocol description. Matrix transposition is an example of such an operation. It is required during the OT extension protocol to transpose the m×κ bit-matrix T (cf. §2.2), which is created column-wise but hashed row-wise. Although transposition is a trivial operation, it has to be performed individually for each entry in T, making it a very costly operation.

We propose to efficiently implement the matrix transposition using Eklundh's algorithm [10], which uses a divide-and-conquer approach to recursively swap elements of adjacent rows (cf. Fig. 2). This decreases the number of swap operations for transposing an n × n matrix from O(n^2) to O(n log_2 n). Additionally, since we process a bit-matrix, we can perform multiple swap operations in parallel by loading multiple bits into one register. Thereby, we again reduce the number of swap operations from O(n log_2 n) to O(⌈n/r⌉ log_2 n), where r is the register size of the CPU (r = 64 for the machines used in our experiments). Jumping ahead to the evaluation in §6, this reduced the total time for the matrix transposition by approximately a factor of 9, from 7.1 s to 0.76 s per party.
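As an illustration of the block-swap idea, here is a short Python sketch of an Eklundh-style transpose of a w × w bit matrix packed into w integers. The function name and the MSB-first packing convention are our choices, and Python integers stand in for the CPU registers used in the real implementation:

```python
def transpose_bit_matrix(rows):
    """Eklundh-style transpose of a w x w bit matrix, w a power of two.
    rows[r] is an integer whose bit (w-1-c) holds entry (r, c), i.e.
    each row is packed MSB-first into one word. Instead of swapping
    single bits, j x j blocks are swapped for j = w/2, w/4, ..., 1."""
    a = list(rows)
    w = len(a)
    j, m = w >> 1, (1 << (w >> 1)) - 1    # block size, low-half mask
    while j:
        k = 0
        while k < w:                       # every row whose j-bit is 0
            # XOR-swap the off-diagonal j x j blocks of rows k and k+j.
            t = (a[k] ^ (a[k + j] >> j)) & m
            a[k] ^= t
            a[k + j] ^= t << j
            k = (k + j + 1) & ~j
        j >>= 1
        m ^= m << j                        # refine mask for smaller blocks
    return a

# Small self-check on a non-symmetric 4 x 4 matrix.
M = [0b1000, 0b1100, 0b1110, 0b1111]
T = transpose_bit_matrix(M)
for r in range(4):
    for c in range(4):
        assert (M[r] >> (3 - c)) & 1 == (T[c] >> (3 - r)) & 1
```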
```
 1  2  3  4         1  5  3  7         1  5  9 13
 5  6  7  8         2  6  4  8         2  6 10 14
 9 10 11 12   →     9 13 11 15   →     3  7 11 15
13 14 15 16        10 14 12 16         4  8 12 16
```

**Figure 2: Efficient matrix transposition of a 4 × 4 matrix using Eklundh's algorithm.**

## 5. PROTOCOL OPTIMIZATIONS

In this section, we show how to efficiently base the GMW protocol on random 1-out-of-2 OTs (§5.1), introduce a new OT protocol (§5.2), outline an optimized OT extension protocol (§5.3), and optimize OT extension for usage in secure computation protocols (§5.4).

## 5.1 GMW with Random 1-out-of-2 OTs

An AND gate in the GMW protocol can be computed efficiently using the multiplication triple functionality [1]: the parties hold no input, and the functionality chooses random bits a_0, a_1, b_0, b_1, c_0, c_1 under the constraint that c_0 ⊕ c_1 = (a_0 ⊕ a_1)(b_0 ⊕ b_1). Each P_i receives the shares labeled with i. To precompute the multiplication triples, previous works suggest to use 1-out-of-4 bit OT [8,49].

In the following, we present a different approach for generating multiplication triples using two random 1-out-of-2 OTs on bits (R-OT). The R-OT functionality is exactly the same as OT, except that the sender gets two random messages as outputs. Later in §5.4, we will show that R-OT can be instantiated more efficiently than OT. In comparison to 1-out-of-4 bit OTs, using two R-OTs only slightly increases the computation complexity (one additional evaluation of G and H and two additional matrix transpositions), but improves the communication complexity by a factor of 2.

In order to generate a multiplication triple, we first introduce the f^ab functionality, which is implemented in Algorithm 1 using R-OT. In the f^ab functionality, the parties hold no input and receive random bits ((a, u), (b, v)) under the constraint that ab = u ⊕ v. Now, note that for a multiplication triple c_0 ⊕ c_1 = (a_0 ⊕ a_1)(b_0 ⊕ b_1) = a_0 b_0 ⊕ a_0 b_1 ⊕ a_1 b_0 ⊕ a_1 b_1. The parties can generate a multiplication triple by invoking the f^ab functionality twice: in the first invocation P_0 acts as R to obtain (a_0, u_0) and P_1 acts as S to obtain (b_1, v_1) with a_0 b_1 = u_0 ⊕ v_1; in the second invocation P_1 acts as R to obtain (a_1, u_1) and P_0 acts as S to obtain (b_0, v_0) with a_1 b_0 = u_1 ⊕ v_0. Finally, each P_i sets c_i = a_i b_i ⊕ u_i ⊕ v_i. For correctness, observe that c_0 ⊕ c_1 = (a_0 b_0 ⊕ u_0 ⊕ v_0) ⊕ (a_1 b_1 ⊕ u_1 ⊕ v_1) = a_0 b_0 ⊕ (u_0 ⊕ v_1) ⊕ (u_1 ⊕ v_0) ⊕ a_1 b_1 = a_0 b_0 ⊕ a_0 b_1 ⊕ a_1 b_0 ⊕ a_1 b_1 = (a_0 ⊕ a_1)(b_0 ⊕ b_1), as required. A proof sketch for security is given in Appendix B.

**Algorithm 1** Random (a, u), (b, v) with ab = u ⊕ v

1: R chooses a ∈_R {0,1}.
2: S and R perform a R-OT with a as input of R. S obtains bits x_0, x_1 and R obtains bit x_a as output.
3: R sets u = x_a; S sets b = x_0 ⊕ x_1 and v = x_0. [Note that ab = u ⊕ v, as ab = a(x_0 ⊕ x_1) = (a(x_0 ⊕ x_1) ⊕ x_0) ⊕ x_0 = x_a ⊕ x_0 = u ⊕ v.]
4: R outputs (a, u) and S outputs (b, v).
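As a sanity check of Algorithm 1 and the combination step above, here is a minimal Python sketch in which the R-OT is modeled as an ideal functionality returning random bits. All names (r_ot, f_ab) are ours; a real instantiation would use the R-OT extension of §5.4 rather than local coin flips:

```python
import secrets

def r_ot(a):
    """Ideal R-OT on bits: the sender obtains two random bits (x0, x1);
    the receiver inputs choice bit a and obtains x_a."""
    x0, x1 = secrets.randbelow(2), secrets.randbelow(2)
    return (x0, x1), (x1 if a else x0)

def f_ab():
    """Algorithm 1: random (a, u) for R and (b, v) for S with ab = u ^ v."""
    a = secrets.randbelow(2)       # step 1: R's random choice bit
    (x0, x1), xa = r_ot(a)         # step 2: one R-OT
    u = xa                         # step 3: R's share ...
    b, v = x0 ^ x1, x0             # ... and S's correlated outputs
    return (a, u), (b, v)

# Two invocations with swapped roles yield one multiplication triple.
(a0, u0), (b1, v1) = f_ab()        # P0 as receiver, P1 as sender
(a1, u1), (b0, v0) = f_ab()        # P1 as receiver, P0 as sender
c0 = (a0 & b0) ^ u0 ^ v0
c1 = (a1 & b1) ^ u1 ^ v1
assert c0 ^ c1 == (a0 ^ a1) & (b0 ^ b1)
```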
## 5.2 Optimized Oblivious Transfer

The best known protocols for oblivious transfer with security in the presence of semi-honest adversaries are those of Naor-Pinkas [40]. They present two protocols: a more efficient protocol that is secure in the random oracle model, and a less efficient protocol that is secure in the standard model and under standard assumptions.

In this section, we describe a new semi-honest OT protocol that is secure in the standard model and is essentially an optimized instantiation of the OT protocol of [12]. When implemented over elliptic curves, our protocol is about three times faster than the standard model OT of [40] and only two times slower than the random oracle OT of [40] (see §6.1 for a comparison of the protocol runtimes). Hence, our protocol is a good alternative for those preferring not to rely on random oracles.

Our n×OT_ℓ protocol is based on the DDH assumption and uses a key derivation function (KDF); see Definition A.1. We also assume that it is possible to sample a random element of the group such that the DDH assumption remains hard even when the coins used to sample the element are given to the distinguisher (i.e., (g, h, g^a, h^a) is indistinguishable from (g, h, g^a, g^b) for random a, b, even given the coins used to sample h). This holds for all known groups in which the DDH problem is assumed to be hard, and can be implemented as described next. For finite fields, one can sample a random element h ∈ Z_p of order q by choosing a random x ∈_R Z_p and computing h = x^{(p−1)/q} until h ≠ 1. For elliptic curves, one chooses a random x-coordinate, obtains a quadratic equation for the y-coordinate, and randomly chooses one of the solutions as h (if no solution exists, start from the beginning).

The computational complexity of our protocol for n×OT_ℓ is 2n exponentiations for the sender S and 2n fixed-base exponentiations for the receiver R (in fixed-base exponentiations, the same base g is raised to the power of many different exponents; more efficient exponentiation algorithms exist for this case [38, Sec. 14.6.3]). In addition, S computes the KDF function 2n times, and R computes it n times. R samples n random group elements according to the above definition. See Protocol 5.1 for a detailed description of the protocol.

PROTOCOL 5.1 (Optimized n×OT_ℓ Protocol).

**Inputs:** S holds n pairs (x^0_i, x^1_i) of ℓ-bit strings, for every 1 ≤ i ≤ n. R holds the selection bits σ = (σ_1, ..., σ_n). The parties agree on a group ⟨G, q, g⟩ for which the DDH is hard, and a key derivation function KDF.

**First Round (Receiver):** Choose random exponents α_i ∈_R Z_q and random group elements h_i ∈_R G for every 1 ≤ i ≤ n. Then, for every i, set (h^0_i, h^1_i) := (g^{α_i}, h_i) if σ_i = 0, and (h^0_i, h^1_i) := (h_i, g^{α_i}) if σ_i = 1. Send the pairs (h^0_i, h^1_i) to S.

**Second Round (Sender):** Choose a random element r ∈_R Z_q and compute u = g^r. Then, for each pair (h^0_i, h^1_i) compute the keys (k^0_i, k^1_i) = ((h^0_i)^r, (h^1_i)^r) and compute the pair of ciphertexts v^0_i = x^0_i ⊕ KDF(k^0_i) and v^1_i = x^1_i ⊕ KDF(k^1_i). Send u together with the n pairs (v^0_i, v^1_i) to R.

**Output Computation (Receiver):** For every 1 ≤ i ≤ n, set k^{σ_i}_i = u^{α_i} and x^{σ_i}_i = v^{σ_i}_i ⊕ KDF(k^{σ_i}_i). R outputs (x^{σ_1}_1, ..., x^{σ_n}_n); S has no output.

The protocol is secure in the presence of a semi-honest adversary (see Definition A.3). The view of a corrupted sender consists of the pairs {(h^0_i, h^1_i)}_{i=1}^n, which are completely independent of the receiver's inputs, and therefore can be simulated perfectly.
For the corrupted receiver, we need to show the existence of a simulator S_1 that produces a computationally indistinguishable view, given the inputs and outputs of the receiver, i.e., σ and (x^{σ_1}_1, ..., x^{σ_n}_n), without knowing the other sender values (x^{1−σ_1}_1, ..., x^{1−σ_n}_n). S_1 works by running an execution of the protocol, playing an honest S using inputs x^{σ_1}_1, ..., x^{σ_n}_n and using x^{1−σ_i}_i = 0 for all 1 ≤ i ≤ n. The only difference between the view of the receiver generated by the simulator and in a real execution is regarding the values {v^{1−σ_i}_i}_{i=1}^n, which equal x^{1−σ_i}_i ⊕ KDF(k^{1−σ_i}_i) in a real execution and just KDF(k^{1−σ_i}_i) in the simulation. From the security of the KDF with respect to DDH (see Definition A.1), and using a standard hybrid argument, the values (KDF(k^{1−σ_1}_1), ..., KDF(k^{1−σ_n}_n)) = (KDF(h^r_1), ..., KDF(h^r_n)) are indistinguishable from n uniform strings z_1, ..., z_n, each of size ℓ (even when the distinguisher sees ⟨G, q, g, u = g^r⟩). This implies that the values {v^{1−σ_i}_i}_{i=1}^n in the real execution are computationally indistinguishable from those in the simulation.

**An additional optimization for random OT.** When constructing OT extensions (see §2.2) the parties first run κ×OT_κ on random inputs (this holds for our optimized OT extension protocol, and also for the original protocol of [28] if κ×OT_m is implemented via κ×OT_κ as described in §2.2). Observe that in this case, the sender only needs to send u = g^r to the receiver R; the parties can then derive the values locally (S by computing x^0_i = KDF((h^0_i)^r) and x^1_i = KDF((h^1_i)^r), and R by computing x^{σ_i}_i = KDF(u^{α_i})). This reduces the communication, since the elements v^0_i and v^1_i do not have to be sent. In addition, this means that the messages sent by S and R are actually independent of each other, and so the protocol consists of a single round of communication. (As pointed out in [43], this optimization can also be carried out on the protocols of Naor-Pinkas [40]. However, those protocols still require two rounds of communication, which can be a drawback in high-latency networks.) The timings that appear in §7 are for an implementation that uses this additional optimization. (We remark that, in order to prove the security of this optimization in the standard model, i.e., without a random oracle, we need to change the ideal functionality for the random OT such that for every i, the output of the sender is (β^0_i, x^0_i = KDF(g^{β^0_i})) and (β^1_i, x^1_i = KDF(g^{β^1_i})), and the output of the receiver is (σ_i, β^{σ_i}_i, KDF(g^{β^{σ_i}_i})). That is, in addition to receiving their input and output from the random OT functionality, the parties receive the "discrete log" of the pertinent values. This additional information is of no consequence in our applications of random OT.)
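The following toy Python sketch walks through a single transfer of Protocol 5.1 over the subgroup of quadratic residues modulo a tiny safe prime. The parameters are far too small to be secure and the KDF is an ad-hoc SHA-256 construction, so this illustrates only the message flow, not a usable implementation:

```python
import hashlib, secrets

# Toy group: p = 2q + 1 with p, q prime; g = 2^2 generates the order-q
# subgroup of squares mod p. Illustration only -- nowhere near secure.
p, q, g = 1019, 509, 4

def kdf(k, ell):
    """Ad-hoc KDF: hash a group element down to an ell-bit pad."""
    d = hashlib.sha256(str(k).encode()).digest()
    return int.from_bytes(d, "big") >> (256 - ell)

def sample_group_element():
    """Random subgroup element: a random square mod p."""
    return pow(secrets.randbelow(p - 1) + 1, 2, p)

ell = 16                               # bit length of transferred strings
x0, x1 = 0xBEEF, 0xCAFE               # sender's input pair
sigma = secrets.randbelow(2)          # receiver's selection bit

# First round (receiver): h_sigma = g^alpha, the other h is random.
alpha = secrets.randbelow(q)
h = sample_group_element()
h0, h1 = (pow(g, alpha, p), h) if sigma == 0 else (h, pow(g, alpha, p))

# Second round (sender): u = g^r; encrypt each input under (h_b)^r.
r = secrets.randbelow(q)
u = pow(g, r, p)
v0 = x0 ^ kdf(pow(h0, r, p), ell)
v1 = x1 ^ kdf(pow(h1, r, p), ell)

# Output computation (receiver): k_sigma = u^alpha = (g^alpha)^r.
out = (v0, v1)[sigma] ^ kdf(pow(u, alpha, p), ell)
assert out == (x0, x1)[sigma]
```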
## 5.3 Optimized General OT Extension

In the following, we optimize the m×OT_ℓ extension protocol of [28], described in §2.2. Recall that in the first step of the protocol in [28], R chooses a huge m×κ matrix T = [t^1 | ... | t^κ] while S waits idly. The parties then engage in a κ×OT_m protocol, where the inputs of the receiver are (t^i, t^i ⊕ r), where r is its input in the outer m×OT_ℓ protocol (m selection bits). After the OT, S holds t^i ⊕ (s_i · r) for every 1 ≤ i ≤ κ. As described in the appendices of [26,28], the protocol can be modified such that R only needs to choose two small κ×κ matrices K_0 = [k^0_1 | ... | k^0_κ] and K_1 = [k^1_1 | ... | k^1_κ] of seeds. These seeds are used as input to κ×OT_κ; specifically, R's input as sender in the i-th OT is (k^0_i, k^1_i) and, as in [28], the input of S is s_i. To transfer the m-bit tuple (t^i, t^i ⊕ r) in the i-th OT, R expands k^0_i and k^1_i using a pseudo-random generator G, sends (v^0_i, v^1_i) = (G(k^0_i) ⊕ t^i, G(k^1_i) ⊕ t^i ⊕ r), and S recovers G(k^{s_i}_i) ⊕ v^{s_i}_i.

Our main observation is that, instead of choosing t^i randomly, we can set t^i = G(k^0_i). Now, R needs to send only one m-bit element u^i = G(k^0_i) ⊕ G(k^1_i) ⊕ r to S (whereas in previous protocols of [26,28] two m-bit elements were sent). Observe that if S had input s_i = 0 in the i-th OT, then it can just define its output q^i to be G(k^0_i) = G(k^{s_i}_i). In contrast, if S had input s_i = 1 in the i-th OT, then it can define its output q^i to be G(k^1_i) ⊕ u^i = G(k^{s_i}_i) ⊕ u^i. Since u^i = G(k^0_i) ⊕ G(k^1_i) ⊕ r, we have that G(k^1_i) ⊕ u^i = G(k^0_i) ⊕ r = t^i ⊕ r, as required. The full description of our protocol is given in Protocol 5.2.

This optimization is significant in applications of m×OT_ℓ extension where m is very large and ℓ is short, such as in GMW. In typical use-cases for GMW (cf. §7), m is in the size of several millions to a billion, while ℓ is one. Thereby, the communication complexity of GMW is almost reduced by half. In addition, as in [26], observe that unlike [28] the initial OT phase in Protocol 5.2 is completely independent of the actual inputs of the parties. Thus, the parties can perform the initial OT phase before their inputs are determined. Finally, another problem that arises in the original protocol of [28] is that the entire m×κ matrix is transmitted together and processed. This means that the number of OTs to be obtained must be predetermined and, if m is very large, this results in considerable latency as well as memory management issues. As in [20], our optimization enables us to process small blocks of the matrix at a time, reducing latency, computation time, and memory management problems. In addition, it is possible to continually extend OTs, with no a priori bound on m. This is very useful in a secure computation setting, where parties may interact many times together with no a priori bound.

PROTOCOL 5.2 (General OT extension protocol).

**Inputs:** S holds m pairs (x^0_j, x^1_j) of ℓ-bit strings, for every 1 ≤ j ≤ m. R holds m selection bits r = (r_1, ..., r_m).

**Initial OT Phase (base OTs):**

1. S chooses a random string s = (s_1, ..., s_κ) and R chooses κ pairs of κ-bit seeds {(k^0_i, k^1_i)}_{i=1}^κ.
2. The parties invoke the κ×OT_κ functionality, where S plays the receiver with input s and R plays the sender with inputs (k^0_i, k^1_i) for every 1 ≤ i ≤ κ.
3. For every 1 ≤ i ≤ κ, let t^i = G(k^0_i).
Let T = [t^1 | ... | t^κ] denote the m×κ bit matrix where the i-th column is t^i, and let t_j denote the j-th row of T, for 1 ≤ j ≤ m.

**OT extension Phase** (this phase can be iterated; specifically, R can compute the next κ bits of t^i and u^i, by applying G to get the next κ bits from the PRG for each of the seeds and using the next κ bits of its input r, and send the block of κ×κ bits to S, i.e., κ bits from each of u^1, ..., u^κ):

1. R computes t^i = G(k^0_i) and u^i = t^i ⊕ G(k^1_i) ⊕ r, and sends u^i to S for every 1 ≤ i ≤ κ.
2. For every 1 ≤ i ≤ κ, S defines q^i = (s_i · u^i) ⊕ G(k^{s_i}_i). (Note that q^i = (s_i · r) ⊕ t^i.)
3. Let Q = [q^1 | ... | q^κ] denote the m×κ bit matrix where the i-th column is q^i. Let q_j denote the j-th row of Q. (Note that q_j = (r_j · s) ⊕ t_j.)
4. S sends (y^0_j, y^1_j) for every 1 ≤ j ≤ m, where y^0_j = x^0_j ⊕ H(j, q_j) and y^1_j = x^1_j ⊕ H(j, q_j ⊕ s).
5. For 1 ≤ j ≤ m, R computes x^{r_j}_j = y^{r_j}_j ⊕ H(j, t_j).

**Output:** R outputs (x^{r_1}_1, ..., x^{r_m}_m); S has no output.

Theorem 5.3. Assuming that G is a pseudorandom generator and H is a correlation-robust function (as in Definition A.2), Protocol 5.2 privately computes the m×OT_ℓ functionality in the presence of semi-honest adversaries, in the κ×OT_κ-hybrid model.

**Proof:** We first show that the protocol indeed implements the m×OT_ℓ functionality. Then, we prove that the protocol is secure when the sender is corrupted, and finally that it is secure when the receiver is corrupted.

**Correctness.** We show that the output of the receiver is (x^{r_1}_1, ..., x^{r_m}_m) in an execution of the protocol where the inputs of the sender are ((x^0_1, x^1_1), ..., (x^0_m, x^1_m)) and the input of the receiver is r = (r_1, ..., r_m). Let 1 ≤ j ≤ m; we show that z_j = x^{r_j}_j, where z_j denotes the value computed by R in the last step. We have two cases:

1. r_j = 0: Recall that q_j = (r_j · s) ⊕ t_j, and so q_j = t_j.
Thus:

z_j = y^0_j ⊕ H(t_j) = x^0_j ⊕ H(q_j) ⊕ H(t_j) = x^0_j ⊕ H(t_j) ⊕ H(t_j) = x^0_j.

2. r_j = 1: In this case q_j = s ⊕ t_j, and so:

z_j = y^1_j ⊕ H(t_j) = x^1_j ⊕ H(q_j ⊕ s) ⊕ H(t_j) = x^1_j ⊕ H(t_j) ⊕ H(t_j) = x^1_j.

**Corrupted Sender.** The view of the sender during the protocol contains the output from the κ×OT_κ invocation and the messages u^1, ..., u^κ. The simulator S_0 simply outputs a uniform string s ∈ {0,1}^κ (which is the only randomness that S chooses in the protocol, and therefore w.l.o.g. can be interpreted as the random tape of the adversary), κ random seeds k^{s_1}_1, ..., k^{s_κ}_κ, which are chosen uniformly from {0,1}^κ, and κ random strings u^1, ..., u^κ, chosen uniformly from {0,1}^m. In the real execution, (s, k^{s_1}_1, ..., k^{s_κ}_κ) are chosen in exactly the same way. Each value u^i for 1 ≤ i ≤ κ is defined as G(k^0_i) ⊕ G(k^1_i) ⊕ r. Since k^{1−s_i}_i is unknown to S (by the security of the κ×OT_κ functionality), we have that G(k^{1−s_i}_i) is indistinguishable from uniform, and so each u^i is indistinguishable from uniform. Therefore, the view of the corrupted sender in the simulation is indistinguishable from its view in a real execution.

**Corrupted Receiver.** The view of the corrupted receiver consists of its random tape and the messages ((y^0_1, y^1_1), ..., (y^0_m, y^1_m)) only. The simulator S_1 is invoked with the inputs and outputs of the receiver, i.e., r = (r_1, ..., r_m) and (x^{r_1}_1, ..., x^{r_m}_m). S_1 then chooses a random tape ρ for the adversary (which determines the k^0_i, k^1_i values and thus the matrix T), and computes y^{r_j}_j = x^{r_j}_j ⊕ H(t_j) for 1 ≤ j ≤ m. Then, it chooses each y^{1−r_j}_j uniformly and independently at random from {0,1}^ℓ. Finally, it outputs (ρ, (y^0_1, y^1_1), ..., (y^0_m, y^1_m)) as the view of the corrupted receiver.

We now show that the output of the simulator is indistinguishable from the view of the receiver in a real execution. If r_j = 0, then q_j = t_j and thus (y^0_j, y^1_j) = (x^0_j ⊕ H(t_j), x^1_j ⊕ H(t_j ⊕ s)). If r_j = 1, then q_j = t_j ⊕ s and therefore (y^0_j, y^1_j) = (x^0_j ⊕ H(t_j ⊕ s), x^1_j ⊕ H(t_j)). In the simulation, the values y^{r_j}_j are computed as x^{r_j}_j ⊕ H(t_j) and are therefore identical to the real execution. It therefore remains to show that the values (y^{1−r_1}_1, ..., y^{1−r_m}_m) as computed in the real execution are indistinguishable from the random strings output in the simulation. As we have seen, in the real execution each y^{1−r_j}_j is computed as x^{1−r_j}_j ⊕ H(t_j ⊕ s). Since H is a correlation robust function, it holds that:

{t_1, ..., t_m, H(t_1 ⊕ s), ..., H(t_m ⊕ s)} ≡_c {U_{m·κ+m·ℓ}}

for random s, t_1, ..., t_m ∈ {0,1}^κ, where U_a denotes the uniform distribution over {0,1}^a (see Definition A.2). In the protocol we derive the values t_1, ..., t_m by applying a pseudorandom generator G to the seeds k^0_1, ..., k^0_κ and transposing the resulting matrix. We need to show that the values H(t_1 ⊕ s), ..., H(t_m ⊕ s) are still indistinguishable from uniform in this case. However, this follows from a straightforward hybrid argument (namely, that replacing truly random t^i values in the input to H with pseudorandom values preserves the correlation robustness of H). We conclude that the ideal and real distributions are computationally indistinguishable.
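To illustrate the data flow of Protocol 5.2, the following Python sketch runs the extension with the κ base OTs replaced by an ideal functionality, and G and H instantiated ad hoc from SHA-256. All parameter sizes and helper names are ours and deliberately toy-sized; a real instantiation uses κ ≥ 80 and the primitives from §6:

```python
import hashlib, secrets

KAPPA, M, ELL = 16, 8, 8              # toy sizes: base OTs, transfers, bits

def G(seed):
    """PRG (ad hoc): expand a seed to an M-bit column."""
    d = hashlib.sha256(b"G|%d" % seed).digest()
    return int.from_bytes(d, "big") & ((1 << M) - 1)

def H(j, x):
    """Correlation-robust function H(j, .) (ad hoc), ELL-bit output."""
    d = hashlib.sha256(b"H|%d|%d" % (j, x)).digest()
    return int.from_bytes(d, "big") & ((1 << ELL) - 1)

def bit(x, i):
    return (x >> i) & 1

def row(cols, j):
    """j-th row of a bit matrix stored as KAPPA M-bit columns."""
    return sum(bit(cols[i], j) << i for i in range(KAPPA))

# Inputs: M pairs of ELL-bit strings for S, M selection bits for R.
X = [(secrets.randbits(ELL), secrets.randbits(ELL)) for _ in range(M)]
r = secrets.randbits(M)

# Initial OT phase (idealized kappa x OT_kappa): S inputs s and
# obtains one of R's two seeds per base OT.
s = secrets.randbits(KAPPA)
seeds = [(secrets.randbits(32), secrets.randbits(32)) for _ in range(KAPPA)]
sender_seeds = [seeds[i][bit(s, i)] for i in range(KAPPA)]

# OT extension phase: R sends a single u^i per column (the G-OT saving).
T = [G(seeds[i][0]) for i in range(KAPPA)]           # t^i = G(k^0_i)
U = [T[i] ^ G(seeds[i][1]) ^ r for i in range(KAPPA)]
Q = [(U[i] if bit(s, i) else 0) ^ G(sender_seeds[i]) # q^i = (s_i.r) xor t^i
     for i in range(KAPPA)]

Y = [(X[j][0] ^ H(j, row(Q, j)),                     # y^0_j
      X[j][1] ^ H(j, row(Q, j) ^ s))                 # y^1_j
     for j in range(M)]
out = [Y[j][bit(r, j)] ^ H(j, row(T, j)) for j in range(M)]
assert all(out[j] == X[j][bit(r, j)] for j in range(M))
```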
## 5.4 Optimized OT Extension in Yao & GMW

The protocol described in §5.3 implements the m×OT_ℓ functionality. In the following, we present further optimizations that are specifically tailored to the use of OT extensions in the secure computation protocols of Yao and GMW.

**Correlated OT (C-OT) for Yao.** Before proceeding to the optimization, let us focus for a moment on Yao's protocol [51] with the free-XOR [32] and point-and-permute [37] techniques. (Our optimization is also compatible with the garbled row reduction technique of [47].) Using these techniques, the sender does not choose all keys for all wires independently. Rather, it chooses a global random value δ ∈_R {0,1}^{κ−1}, sets ∆ = δ||1, and for every wire w it chooses a random key k^0_w ∈_R {0,1}^κ and sets k^1_w = k^0_w ⊕ ∆. Later in the protocol, the parties invoke OT extension to let the receiver obliviously obtain the keys associated with its inputs. This effectively means that, instead of having to obliviously transfer two fixed independent bit strings, the sender needs to transfer two random bit strings with a fixed correlation. We can utilize this constraint on the inputs in order to save additional bandwidth in the OT extension protocol.

Recall that in the last step of Protocol 5.2 for OT extension, S computes and sends the messages y^0_j = x^0_j ⊕ H(q_j) and y^1_j = x^1_j ⊕ H(q_j ⊕ s). In the case of Yao, we have that x^0_j = k^0_w and x^1_j = k^1_w = k^0_w ⊕ ∆. Since k^0_w is just a random value, S can set k^0_w = H(q_j) and can send the single value y_j = ∆ ⊕ H(q_j) ⊕ H(q_j ⊕ s). R defines its output as H(t_j) if r_j = 0, or as y_j ⊕ H(t_j) if r_j = 1. Observe that if r_j = 0, then t_j = q_j and R outputs H(q_j) = x^0_j = k^0_w, as required. In contrast, when r_j = 1, it holds that t_j = q_j ⊕ s and thus y_j ⊕ H(q_j ⊕ s) = ∆ ⊕ H(q_j) = ∆ ⊕ k^0_w = k^1_w, as required. Thus, in the setting of Yao's protocol when using the free-XOR technique, it is possible to save bandwidth. As the keys k^0_w, k^1_w used in Yao are also of length κ, the bandwidth is reduced from 3κ bits that are transmitted in every iteration of the extension phase to 2κ bits, effectively reducing the bandwidth by one third.

Proving the security of this optimization requires assuming that H is a random oracle, in order to "program" the output to be as derived from the OT extension. In addition, we define a different OT functionality, called correlated OT (C-OT), that receives ∆ and chooses the sender's inputs uniformly under the constraint that their XOR equals ∆. Since Yao's protocol uses random keys under the same constraint, the security of Yao's protocol remains unchanged when using this optimized OT extension. Note that by using the correlated input OT extension protocol, the server needs to garble the circuit after performing the OT extension; this order is also needed for the pipelining approach used in many implementations, e.g., [24,34,36]. We remark that this optimization can be used in the more general case where in each pair one of the inputs is chosen uniformly at random and the other input is computed as a function of the first. Specifically, the sender has a different function f_j for every 1 ≤ j ≤ m, and receives random values x^0_j as output from the extension protocol, which defines x^1_j = f_j(x^0_j). E.g., for Yao's garbled circuits protocol, we have x^1_j = f_j(x^0_j) = ∆ ⊕ x^0_j.
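Continuing the Protocol 5.2 sketch above (and reusing its H, row, bit, T, Q, s, r, M and ELL), the following lines show the C-OT variant just described: the sender derives the first key from H(q_j), so only a single value y_j per OT goes on the wire. Here DELTA is an ad-hoc stand-in for the global free-XOR offset:

```python
# C-OT sketch, continuing the Protocol 5.2 toy above.
DELTA = secrets.randbits(ELL)                 # global free-XOR offset

k0 = [H(j, row(Q, j)) for j in range(M)]      # sender: k^0_w = H(q_j)
k1 = [k0[j] ^ DELTA for j in range(M)]        # k^1_w = k^0_w xor Delta
y = [DELTA ^ H(j, row(Q, j)) ^ H(j, row(Q, j) ^ s) for j in range(M)]

# Receiver outputs H(t_j) if r_j = 0, and y_j xor H(t_j) if r_j = 1.
out = [H(j, row(T, j)) ^ (y[j] if bit(r, j) else 0) for j in range(M)]
assert all(out[j] == (k0[j], k1[j])[bit(r, j)] for j in range(M))
```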
In addition, the OT functionality is changed such that the sender receives both of its inputs from the functionality, and the receiver just inputs r (see [43, Fig. 26]). **Summary. The original OT extension protocol of [28]** and our proposed improvements for m _×_ _OTℓ_ are summarized in Tab. 2. We compare the communication complexity of R and S for m parallel 1-out-of-2 OT extensions of ℓbit strings, with security parameter κ (we omit the cost of the initial κ _×_ _OTκ). We also compare the assumption on_ the function H needed in each protocol, where CR denotes Correlation-Robustness and RO denotes Random Oracle. **Protocol** **Applicability** R → S S → R _H_ **Original [28]** All applications 2mκ 2mℓ CR **G-OT §5.3** All applications _mκ_ 2mℓ CR **C-OT §5.4** only x[0]j [random] _mκ_ _mℓ_ RO **R-OT §5.4** _x[0]j_ _[, x][1]j_ [random] _mκ_ 0 RO **Table 2: Sent bits for sender S and receiver R for m** **1-out-of-2 OT extensions of ℓ-bit strings and security** **parameter κ.** ## 6. EXPERIMENTAL EVALUATION In the following, we evaluate the performance of our proposed optimizations. In §6.1 we compare our base OT protocol (§5.2) to the protocols of [40] and in §6.2 we evalute the performance of our algorithmic (§4) and protocol optimizations (§5.3 and §5.4) for OT extension. **Benchmarking Environment. We build upon the C++** OT extension implementation of [49] which implements the OT extension protocol of [28] and is based on the implementation of [8]. We use SHA-1 to instantiate the random oracle and the correlation robust function and AES-128 in counter mode to instantiate the pseudo-random generator and the key derivation function. Our benchmarking environment consists of two 2.5 GHz Intel Core2Quad CPU (Q8300) Desktop PCs with 4 GB RAM, running Ubuntu 10.10 and OpenJDK 6, connected by a Gigabit LAN. ## 6.1 Base OTs In the following, we compare the performance of the OT protocols of Naor and Pinkas [40] in the random oracle (RO) and standard (STD) model to our STD model OT protocol of §5.2 for different libraries. We either use finite field cryptography (FFC) (based on the GNU-Multiprecision library v.5.0.5) or elliptic curve cryptography (ECC) (based on the Miracl library v.5.6.1). We measure the time for performing κ 1-out-of-2 base OTs on κ-bit strings, for symmetric security parameter κ, using the key sizes from Tab. 1. The runtimes are shown in Tab. 3. For the short term security parameter, FFC using GMP outperforms ECC using Miracl by factor 2 for all protocols. However, starting from a medium term security parameter, ECC becomes increasingly more efficient and outperforms FCC by more than factor 2 for the long term security parameter. For ECC, we can observe that [40]-RO is about 5-6 times faster than [40]-STD but only 2 times faster than our §5.2-STD protocol. For FFC, our §5.2-STD protocol becomes more inefficient with increasing security parameter, since the random sampling requires nearly full-range exponentiations as opposed to the subgroup exponentiations in [40]-RO and [40]-STD. 
| Security | [40]-RO | [40]-STD | §5.2-STD |
|---|---|---|---|
| GMP (FFC) | | | |
| Short [ms] | 18 (±0.9) | 99 (±0.6) | 41 (±3.3) |
| Medium [ms] | 107 (±3.4) | 629 (±3.3) | 352 (±18) |
| Long [ms] | 288 (±7.9) | 1,681 (±4.7) | 1,217 (±47) |
| Miracl (ECC) | | | |
| Short [ms] | 39 (±1.6) | 178 (±0.3) | 61 (±2.5) |
| Medium [ms] | 82 (±2.9) | 418 (±0.6) | 137 (±5.0) |
| Long [ms] | 138 (±5.0) | 763 (±0.8) | 239 (±7.5) |

**Table 3: Performance results and standard deviations for base OTs.**

## 6.2 OT Extension

To evaluate the performance of OT extension, we measure the time for generating the random inputs for the OT extension protocol and the overall OT extension protocol execution on 10,000,000 1-out-of-2 OTs on 80-bit strings for the short-term security setting, excluding the times for the base OTs. Tab. 4 summarizes the resulting runtimes for the original version without (Orig [49] (1 T)) and with pipelining (Orig [49] (2 T)), the efficient matrix transposition (EMT §4.2), the general protocol optimization (G-OT §5.3), the correlated OT extension protocol (C-OT §5.4), the random OT extension protocol (R-OT §5.4), as well as a two- and four-threaded version of R-OT (2 T and 4 T, cf. §4.1). The line (x T) denotes the number of threads running on each party. Since our optimizations target both the runtime and the amount of data that is transferred, we assume two different bandwidth scenarios: LAN (Gigabit Ethernet with 1 GBit bandwidth) and WiFi (simulated by limiting the available bandwidth to 54 MBit and the latency to 2 ms). As our experiments in Tab. 4 show, the LAN setting benefits from computation optimizations (as computation is the bottleneck), whereas the WiFi setting benefits from communication optimizations (as the network is the bottleneck). All timings are the average of 100 executions, with one party acting as sender and the other as receiver. Note that each version includes all prior listed optimizations.

**LAN setting.** The original OT extension implementation of [49] has a runtime of 20.61 s without pipelining, which is reduced to only 80% (16.57 s) when using pipelining. Implementing the efficient matrix transposition of §4.2 decreases the runtime to 70% of the one-threaded original version (14.43 s) and already outperforms the pipelined version even though only one thread is used.
The general improved OT extension protocol of §5.3 removes the need to generate the random matrix T, which reduces the runtime to 13.92 s. The C-OT extension of §5.4 decreases the runtime to 10.60 s, since the protocol generates the random input values for the sender. The R-OT extension of §5.4 further decreases the runtime to 10.00 s, since the last communication step is eliminated. Finally, the parallelized OT extension of §4.1 results in a nearly linear decrease in runtime, to 50% (5.03 s) for two threads and to 26% (2.62 s) for four threads. Overall, using two threads, we decreased the runtime in the LAN setting by a factor of 3 compared to the two-threaded original implementation.

**WiFi setting.** In the WiFi setting, we observe that the one- and two-threaded original implementation is already slower compared to the LAN setting. Moreover, all optimizations that purely target the runtime have little effect, since the network has become the bottleneck. We therefore focus on the optimizations for the communication complexity. The G-OT optimization of §5.3 only slightly decreases the runtime, since both parties have the same up- and download bandwidth and the channel from sender to receiver becomes the bottleneck (cf. Tab. 2); for shorter strings, or if the channel had a higher bandwidth from sender to receiver (e.g., a DSL link), the runtime would decrease already for the G-OT optimization. The C-OT extension of §5.4 reduces the runtime by a factor of 2, corresponding to the reduced communication from sender to receiver, which is now equal to the communication in the opposite direction. The R-OT extension of §5.4 only slightly decreases the runtime, since now the channel from receiver to sender has become the bottleneck. Finally, the multi-threading optimization of §4.1 does not reduce the runtime, as the network is the bottleneck.

| Network | Orig [49] (1 T) | Orig [49] (2 T) | EMT §4.2 (1 T) | G-OT §5.3 (1 T) | C-OT §5.4 (1 T) | R-OT §5.4 (1 T) | R-OT §5.4 (2 T, §4.1) | R-OT §5.4 (4 T, §4.1) |
|---|---|---|---|---|---|---|---|---|
| LAN [s] | 20.61 (±0.07) | 16.57 (±0.33) | 14.43 (±0.05) | 13.92 (±0.07) | 10.60 (±0.03) | 10.00 (±0.02) | 5.03 (±0.08) | 2.62 (±0.05) |
| WiFi [s] | 30.69 (±0.18) | 30.42 (±0.20) | 30.45 (±0.24) | 29.36 (±0.26) | 14.39 (±0.14) | 14.22 (±0.12) | 14.23 (±0.18) | 14.23 (±0.22) |

**Table 4: Performance results and standard deviations for 10,000,000 1-out-of-2 OTs on 80-bit strings using our optimizations in §4 and §5.**

## 7. APPLICATION SCENARIOS

OT extension is the foundation for efficient implementations of many secure computation protocols, including Yao's garbled circuits implemented in the FastGC framework [24] and GMW implemented in the framework of [8,49]. To demonstrate how both protocols benefit from our improved OT extensions, we apply our implementations to both frameworks and consider the following secure computation use-cases: Hamming distance (§7.1), set-intersection (§7.2), minimum (§7.3), and Levenshtein distance (§7.4). The overall performance results are summarized in Tab. 5 and discussed in §7.5.
All experiments were performed under the same conditions as in §6 (LAN setting), using the random-oracle protocol of [40] as base OT. We extended the FastGC framework [24] to call our C++ OT implementation using the Java Native Interface (JNI). We stress that the goal of our performance measurements is to highlight the efficiency gains of our improved OT protocols, but not to provide a comparison between Yao's garbled circuits and the GMW protocol.

## 7.1 Hamming Distance

The Hamming distance between two ℓ-bit strings is the number of positions in which both strings differ. Applications of secure Hamming distance computation include privacy-preserving face recognition [46] and private matching for cardinality threshold [29]. As shown in [24,49], using a circuit-based approach is a very efficient way to securely compute the face recognition algorithm of [46], which uses ℓ = 900. We use the compact Hamming distance circuit of [6] with size ℓ − HW(ℓ) AND gates and ℓ input bits for the client, where HW(ℓ) is the Hamming weight of ℓ.

## 7.2 Set-Intersection

Privacy-preserving set-intersection allows two parties, each holding a set of σ-bit elements, to learn the elements they have in common. Applications include governmental law enforcement [9], sharing location data [41], and botnet detection [39]. Several Boolean circuits for computing the set-intersection were described and evaluated in [23]. The authors of [23] state that for small σ (up to σ = 20 in their experiments), the bitwise-AND (BWA) circuit achieves the best performance. This circuit treats each element e ∈ {0,1}^σ as an index into a bit-sequence {0,1}^{2^σ} and denotes the presence of e by setting the respective bit to 1. The parties then compute the set-intersection as the bitwise AND of their bit-sequences. We build the BWA circuit for σ = 20, resulting in a circuit with 2^σ = 1,048,576 AND gates and input bits for the client. To reduce the memory footprint of the FastGC framework [24], we split the overall circuit and the OTs on the input bits into blocks of size 2^16 = 65,536.

## 7.3 Secure Minimum

Securely computing the minimum of a set of values is a common building block in privacy-preserving protocols and is used to find best matches, e.g., for face recognition [11] or online marketplaces [8]. We use the scenario considered in [36] that securely computes the minimum of N = 1,000,000 ℓ = 20-bit values, where each party holds 500,000 values. Using the minimum circuit construction of [31], our circuit has 2ℓN − 2ℓ ≈ 40,000,000 AND gates, and the client has (N/2) · ℓ = 10,000,000 input bits. We note that the performance of the garbled circuit implementation of [36] is about the same as that of FastGC [24]: their circuit has twice the size and takes about twice as long to evaluate. For the FastGC framework we again evaluate the overall circuit by iteratively computing the minimum of at most 2,048 values.

## 7.4 Levenshtein Distance

The Levenshtein distance denotes the number of operations that are needed to transform a string a into another string b using an alphabet of bit-size σ. It can be
used for privacy-preserving matching of DNA and protein-sequences [24]. We use the same circuit and setting as [24] with σ = 2 to compare strings a and b of size |a| = 2,000 and |b| = 10,000. The resulting circuit has 1.29 billion AND gates and σ|a| = 4,000 input bits for the client. The GMW framework of [49] was not able to evaluate the Levenshtein circuit, since their OT extension implementation tries to process all OTs at once and their framework tries to store the whole circuit in memory, thereby exceeding the available memory of our benchmarking environment. Hence, we changed their underlying circuit structure to support large-scale circuits by deleting gates that were used and building the circuit iteratively.

| Implementation | Base-OTs | Hamming §7.1 | Set-Intersect. §7.2 | Minimum §7.3 | Levenshtein §7.4 |
|---|---|---|---|---|---|
| FastGC [24] | 470 ms | 149 ms (86.8 ms) | 249 s (227 s) | 1094 s (552 s) | 265 min (148 ms) |
| FastGC [24] fixed with CPRG | 482 ms | 155 ms (87.6 ms) | 253 s (227 s) | 1106 s (554 s) | 266 min (157 ms) |
| FastGC [24] with C-OT (4 T) | 69 ms | 85 ms (4.4 ms) | 27 s (0.96 s) | 593 s (15 s) | 266 min (15 ms) |
| GMW [49] | 142 ms | 79 ms (46.5 ms) | 1.91 s (1.34 s) | 44 s (41 s) | — |
| GMW [49] with R-OT (4 T) | 28 ms | 30 ms (11.3 ms) | 0.93 s (0.51 s) | 21 s (19 s) | 18 min (11 min) |
| AND gates | - | 896 | 1,048,576 | 39,999,960 | 1,290,653,042 |
| Client input bits | - | 900 | 1,048,576 | 10,000,000 | 2,000 |

**Table 5: Performance results for the frameworks of [24] and [49] with and without our optimized OT implementation. The time spent in the OT extensions is given in ().**

## 7.5 Discussion

We discuss the results of our experiments in Tab. 5 next. For the FastGC framework [24], our improved OT extension implementation, written in C++ and using 4 threads, is more than one order of magnitude faster than the corresponding single-threaded Java routine of the original implementation. The improvements in total time depend on the ratio between the number of client inputs and the circuit size: for circuits with many client inputs (§7.1, §7.2, §7.3), we obtain a speedup by factor 2 to 9, whereas for large circuits with few inputs (§7.4) the improvement for OTs has a negligible effect on the total runtime. To further improve the runtime of large circuits, a faster engine for circuit garbling, e.g., [4], could be combined with our improved OT implementation. For the GMW framework [49], the total runtime is dominated by the time for performing OT extension, which we reduce by factor 2.

### Acknowledgements.

We thank David Evans and the anonymous reviewers of ACM CCS for their helpful comments on our paper. The first two authors were funded by the European Research Council under the European Union's Seventh Framework Programme (FP/2007-2013) / ERC Grant Agreement n. 239868. The third and fourth authors were supported by the German Federal Ministry of Education and Research (BMBF) within EC SPRIDE and by the Hessian LOEWE excellence initiative within CASED.

## 8. REFERENCES

[1] D. Beaver. Efficient multiparty protocols using circuit randomization. In Advances in Cryptology – CRYPTO'91, volume 576 of LNCS, pages 420–432. Springer, 1991.

[2] D. Beaver. Correlated pseudorandomness and the complexity of private computations. In Symposium on Theory of Computing (STOC'96), pages 479–488. ACM, 1996.

[3] M. Bellare, S. Goldwasser, and D. Micciancio. "Pseudo-random" number generation within cryptographic algorithms: The DSS case. In Advances in Cryptology – CRYPTO'97, volume 1294 of LNCS, pages 277–291. Springer, 1997.

[4] M. Bellare, V. Hoang, S. Keelveedhi, and P. Rogaway.
Efficient garbling from a fixed-key blockcipher. In Symposium on Security and Privacy, pages 478–492. IEEE, 2013.

[5] A. Ben-David, N. Nisan, and B. Pinkas. FairplayMP: a system for secure multi-party computation. In Computer and Communications Security (CCS'08), pages 257–266. ACM, 2008.

[6] J. Boyar and R. Peralta. The exact multiplicative complexity of the Hamming weight function. Electronic Colloquium on Computational Complexity (ECCC'05), (049), 2005.

[7] R. Canetti. Security and composition of multiparty cryptographic protocols. J. Cryptology, 13(1):143–202, 2000.

[8] S. G. Choi, K.-W. Hwang, J. Katz, T. Malkin, and D. Rubenstein. Secure multi-party computation of Boolean circuits with applications to privacy in on-line marketplaces. In Cryptographers' Track at the RSA Conference (CT-RSA'12), volume 7178 of LNCS, pages 416–432. Springer, 2012.

[9] E. De Cristofaro and G. Tsudik. Practical private set intersection protocols with linear complexity. In Financial Cryptography and Data Security (FC'10), volume 6052 of LNCS, pages 143–159. Springer, 2010.

[10] J. O. Eklundh. A fast computer method for matrix transposing. IEEE Transactions on Computers, C-21(7):801–803, 1972.

[11] Z. Erkin, M. Franz, J. Guajardo, S. Katzenbeisser, I. Lagendijk, and T. Toft. Privacy-preserving face recognition. In Privacy Enhancing Technologies Symposium (PETS'09), volume 5672 of LNCS, pages 235–253. Springer, 2009.

[12] S. Even, O. Goldreich, and A. Lempel. A randomized protocol for signing contracts. Communications of the ACM, 28(6):637–647, 1985.

[13] K. Frikken, M. Atallah, and C. Zhang. Privacy-preserving credit checking. In Electronic Commerce (EC'05), pages 147–154. ACM, 2005.

[14] O. Goldreich. Foundations of Cryptography, volume 2: Basic Applications. Cambridge University Press, 2004.

[15] O. Goldreich, S. Micali, and A. Wigderson. How to play any mental game or a completeness theorem for protocols with honest majority. In Symposium on Theory of Computing (STOC'87), pages 218–229. ACM, 1987.

[16] S. D. Gordon, J. Katz, V. Kolesnikov, F. Krell, T. Malkin, M. Raykova, and Y. Vahlis. Secure two-party computation in sublinear (amortized) time. In Computer and Communications Security (CCS'12), pages 513–524. ACM, 2012.

[17] D. Harnik, Y. Ishai, E. Kushilevitz, and J. B. Nielsen. OT-combiners via secure computation. In Theory of Cryptography (TCC'08), volume 4948 of LNCS, pages 393–411. Springer, 2008.

[18] J. Håstad and A. Shamir. The cryptographic security of truncated linearly related variables. In Symposium on Theory of Computing (STOC'85), pages 356–362. ACM, 1985.

[19] W. Henecka, S. Kögl, A.-R. Sadeghi, T. Schneider, and I. Wehrenberg. TASTY: Tool for Automating Secure Two-partY computations. In Computer and Communications Security (CCS'10), pages 451–462. ACM, 2010.

[20] W. Henecka and T. Schneider. Faster secure two-party computation with less memory. In ACM Symposium on Information, Computer and Communications Security (ASIACCS'13), pages 437–446. ACM, 2013.

[21] A. Holzer, M. Franz, S. Katzenbeisser, and H. Veith. Secure two-party computations in ANSI C. In Computer and Communications Security (CCS'12), pages 772–783. ACM, 2012.

[22] Y. Huang, P. Chapman, and D. Evans. Privacy-preserving applications on smartphones. In Hot Topics in Security (HotSec'11). USENIX, 2011.

[23] Y. Huang, D. Evans, and J. Katz. Private set intersection: Are garbled circuits better than custom protocols? In Network and Distributed Security Symposium (NDSS'12).
The Internet Society, 2012.
[24] Y. Huang, D. Evans, J. Katz, and L. Malka. Faster secure two-party computation using garbled circuits. In _Security Symposium_. USENIX, 2011.
[25] Y. Huang, J. Katz, and D. Evans. Quid-pro-quo-tocols: Strengthening semi-honest protocols with dual execution. In _Symposium on Security and Privacy_, pages 272–284. IEEE, 2012.
[26] Y. Huang, L. Malka, D. Evans, and J. Katz. Efficient privacy-preserving biometric identification. In _Network and Distributed Security Symposium (NDSS'11)_. The Internet Society, 2011.
[27] Intelligence Advanced Research Projects Activity (IARPA). Security and Privacy Assurance Research (SPAR) Program, 2010.
[28] Y. Ishai, J. Kilian, K. Nissim, and E. Petrank. Extending oblivious transfers efficiently. In _Advances in Cryptology – CRYPTO'03_, volume 2729 of LNCS, pages 145–161. Springer, 2003.
[29] A. Jarrous and B. Pinkas. Secure hamming distance based computation and its applications. In _Applied Cryptography and Network Security (ACNS'09)_, volume 5536 of LNCS, pages 107–124. Springer, 2009.
[30] F. Kerschbaum. Automatically optimizing secure computation. In _Computer and Communications Security (CCS'11)_, pages 703–714. ACM, 2011.
[31] V. Kolesnikov, A.-R. Sadeghi, and T. Schneider. Improved garbled circuit building blocks and applications to auctions and computing minima. In _Cryptology And Network Security (CANS'09)_, volume 5888 of LNCS, pages 1–20. Springer, 2009.
[32] V. Kolesnikov and T. Schneider. Improved garbled circuit: Free XOR gates and applications. In _International Colloquium on Automata, Languages and Programming (ICALP'08)_, volume 5126 of LNCS, pages 486–498. Springer, 2008.
[33] H. Krawczyk. Cryptographic extraction and key derivation: The HKDF scheme. In _Advances in Cryptology – CRYPTO'10_, volume 6223 of LNCS, pages 631–648. Springer, 2010.
[34] B. Kreuter, A. Shelat, and C.-H. Shen. Billion-gate secure computation with malicious adversaries. In _Security Symposium_. USENIX, 2012.
[35] P. MacKenzie, A. Oprea, and M. K. Reiter. Automatic generation of two-party computations. In _Computer and Communications Security (CCS'03)_, pages 210–219. ACM, 2003.
[36] L. Malka. VMCrypt - modular software architecture for scalable secure computation. In _Computer and Communications Security (CCS'11)_, pages 715–724. ACM, 2011.
[37] D. Malkhi, N. Nisan, B. Pinkas, and Y. Sella. Fairplay — a secure two-party computation system. In _Security Symposium_, pages 287–302. USENIX, 2004.
[38] A. Menezes, P. C. van Oorschot, and S. A. Vanstone. _Handbook of Applied Cryptography_. CRC Press, 1996.
[39] S. Nagaraja, P. Mittal, C.-Y. Hong, M. Caesar, and N. Borisov. BotGrep: Finding P2P bots with structured graph analysis. In _Security Symposium_, pages 95–110. USENIX, 2010.
[40] M. Naor and B. Pinkas. Efficient oblivious transfer protocols. In _ACM-SIAM Symposium On Discrete Algorithms (SODA'01)_, pages 448–457. Society for Industrial and Applied Mathematics, 2001.
[41] A. Narayanan, N. Thiagarajan, M. Lakhani, M. Hamburg, and D. Boneh. Location privacy via private proximity testing. In _Network and Distributed Security Symposium (NDSS'11)_. The Internet Society, 2011.
[42] J. B. Nielsen. Extending oblivious transfers efficiently - how to get robustness almost for free. Cryptology ePrint Archive, Report 2007/215, 2007.
[43] J. B. Nielsen, P. S. Nordholt, C. Orlandi, and S. S. Burra. A new approach to practical active-secure two-party computation.
In _Advances in Cryptology – CRYPTO'12_, volume 7417 of LNCS, pages 681–700. Springer, 2012.
[44] V. Nikolaenko, U. Weinsberg, S. Ioannidis, M. Joye, D. Boneh, and N. Taft. Privacy-preserving ridge regression on hundreds of millions of records. In _Symposium on Security and Privacy_, pages 334–348. IEEE, 2013.
[45] NIST. NIST Special Publication 800-57, Recommendation for Key Management Part 1: General (Rev. 3). Technical report, 2012.
[46] M. Osadchy, B. Pinkas, A. Jarrous, and B. Moskovich. SCiFI - a system for secure face identification. In _Symposium on Security and Privacy_, pages 239–254. IEEE, 2010.
[47] B. Pinkas, T. Schneider, N. P. Smart, and S. C. Williams. Secure two-party computation is practical. In _Advances in Cryptology – ASIACRYPT'09_, volume 5912 of LNCS, pages 250–267. Springer, 2009.
[48] M. O. Rabin. _How to exchange secrets with oblivious transfer_, TR-81 edition, 1981. Aiken Computation Lab, Harvard University.
[49] T. Schneider and M. Zohner. GMW vs. Yao? Efficient secure two-party computation with low depth circuits. In _Financial Cryptography and Data Security (FC'13)_, LNCS. Springer, 2013.
[50] A. Schröpfer and F. Kerschbaum. Demo: secure computation in JavaScript. In _Computer and Communications Security (CCS'11)_, pages 849–852. ACM, 2011.
[51] A. C. Yao. How to generate and exchange secrets. In _Foundations of Computer Science (FOCS'86)_, pages 162–167. IEEE, 1986.

## APPENDIX A. DEFINITIONS

We let $\kappa$ denote the security parameter. A function $\mu(\cdot)$ is negligible if for every positive polynomial $p(\cdot)$ and all sufficiently large $n$ it holds that $\mu(n) < 1/p(n)$. A distribution ensemble $X = \{X(a,n)\}_{a \in \mathcal{D}_n, n \in \mathbb{N}}$ is an infinite sequence of random variables indexed by $a \in \mathcal{D}_n$ and $n \in \mathbb{N}$. Two distribution ensembles $X, Y$ are computationally indistinguishable, denoted $X \stackrel{c}{\equiv} Y$, if for every non-uniform polynomial-time algorithm $D$ there exists a negligible function $\mu(\cdot)$ such that for every $n$ and every $a \in \mathcal{D}_n$:

$$\left| \Pr[D(X(a,n), a, n) = 1] - \Pr[D(Y(a,n), a, n) = 1] \right| \leq \mu(n).$$

**Key Derivation Function.** The following definition is an adaptation of the general definition of [33] for the case of the DDH problem. Intuitively, the adversary should not be able to distinguish between an output of the KDF function and a uniform string. Let $\mathsf{Gen}(1^\kappa)$ be a function that produces a group $(\mathbb{G}, q, g)$ for which the DDH problem is believed to be hard. We define:

Definition A.1 (Key-Derivation Function). _A key derivation function KDF with $\ell$-bit output is said to be secure with respect to DDH if for any ppt attacker $\mathcal{A}$ there exists a negligible function $\mu(\cdot)$ such that:_

$$\left| \Pr[\mathcal{A}(\mathbb{G}, q, g, g^r, h, \mathrm{KDF}(h^r)) = 1] - \Pr[\mathcal{A}(\mathbb{G}, q, g, g^r, h, z) = 1] \right| \leq \mu(\kappa),$$

_where $(\mathbb{G}, q, g) = \mathsf{Gen}(1^\kappa)$, $r$ is distributed uniformly in $\mathbb{Z}_q$ and $z$ is distributed uniformly in $\{0,1\}^\ell$._

**Correlation Robust Function.** We present a definition for a correlation robust function. The definition is based on the definition in [28].

Definition A.2 (Correlation Robustness). _An efficiently computable function $H : \{0,1\}^\kappa \to \{0,1\}^\ell$ is said to be correlation robust if it holds that:_

$$\{t_1, \ldots, t_m, H(t_1 \oplus s), \ldots, H(t_m \oplus s)\} \stackrel{c}{\equiv} \{U_{m \cdot \kappa + m \cdot \ell}\},$$

_where $t_1, \ldots, t_m, s$ are chosen uniformly and independently at random from $\{0,1\}^\kappa$, and $U_{m \cdot \kappa + m \cdot \ell}$ is the uniform distribution over $\{0,1\}^{m \cdot \kappa + m \cdot \ell}$._
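In practice, a correlation robust $H$ is typically instantiated heuristically with a cryptographic hash function. The sketch below (our illustration; neither the parameter choices nor the code come from [28] or the benchmarked frameworks) shows the masking pattern of Definition A.2 with SHA-256 standing in for $H$: many values correlated through a single offset $s$ are hashed into masks that should be indistinguishable from uniform to anyone who does not know $s$.

```python
import hashlib
import secrets

KAPPA = 16  # security parameter in bytes (128 bits)

def H(x: bytes) -> bytes:
    """Heuristic stand-in for a correlation robust function."""
    return hashlib.sha256(x).digest()

def xor(x: bytes, y: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(x, y))

s = secrets.token_bytes(KAPPA)                       # one global correlation offset
ts = [secrets.token_bytes(KAPPA) for _ in range(4)]  # uniformly random t_1..t_m

# (t_1..t_m, H(t_1 xor s)..H(t_m xor s)) should look uniform without s,
# even though all hash inputs differ only by the fixed offset s.
masks = [H(xor(t, s)) for t in ts]
print([m.hex()[:16] for m in masks])
```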
**Secure Two-Party Computation.** We give a formal definition for security of a two-party protocol in the presence of a semi-honest adversary. The definition is the standard definition; see [7, 14]. The view of the party $P_0$ during an execution of a protocol $\pi$ on inputs $(x, y)$, denoted $\mathrm{view}^\pi_0(x, y)$, is defined to be $(x, r, \vec{m})$, where $x$ is $P_0$'s private input, $r$ its internal coin tosses, and $\vec{m}$ are the messages it has received in the execution. The view of $P_1$ is defined analogously. Let $\mathrm{output}^\pi(x, y)$ denote the output pair of both parties in a real execution of the protocol. We are now ready to state the security definition:

Definition A.3. _Let $f : (\{0,1\}^*)^2 \to (\{0,1\}^*)^2$ be a (possibly randomized) two-party functionality, and let $f_i(x, y)$ denote the $i$-th element of $f(x, y)$. Let $\pi$ be a protocol. We say that $\pi$ privately computes $f$ if for every $(x, y) \in (\{0,1\}^*)^2$: $\mathrm{output}^\pi(x, y) = f(x, y)$, and there exists a pair of probabilistic polynomial-time (ppt) algorithms $S_0, S_1$ such that:_

$$\{S_0(x, f_0(x, y)), f(x, y)\}_z \stackrel{c}{\equiv} \{\mathrm{view}^\pi_0(x, y), \mathrm{output}^\pi(x, y)\}_z,$$
$$\{S_1(y, f_1(x, y)), f(x, y)\}_z \stackrel{c}{\equiv} \{\mathrm{view}^\pi_1(x, y), \mathrm{output}^\pi(x, y)\}_z,$$

_where $z = (x, y) \in (\{0,1\}^*)^2$._

In case the function $f$ is deterministic (like the OT functionality), there is no need to consider the joint distribution of the outputs and the view, and it is enough to show that the output of the simulator $S_i$ is indistinguishable from the view of the party $P_i$.

## B. MULTIPLICATION TRIPLE PROTOCOL

In this section, we show that the protocol presented in §5.1 privately computes the multiplication triple functionality. First, we consider the $f^{ab}$ functionality. The protocol implements the functionality since any random $(b, v), (a, u)$ for which $ab = u \oplus v$ can be written as $(b, v) = (x_0 \oplus x_1, x_0)$ and $(a, u) = (a, ab \oplus v) = (a, x_a)$, since it holds that $ab \oplus v = ab \oplus x_0 = a(x_0 \oplus x_1) \oplus x_0 = x_a$. The inputs and outputs of each party fully determine its view, and therefore the simulators are trivial and just re-arrange their inputs. Consistency of the generated view with the output of the parties holds trivially.

We turn to the multiplication triple functionality. It is easy to verify that the protocol implements the functionality. Regarding simulation, a simulator $S_0$ is given $(a_0, b_0, c_0)$, chooses a random $u_0$ and defines $v_0 = c_0 \oplus a_0 b_0 \oplus u_0$. Since $u_0, v_0$ are random and hidden from the distinguisher, the view is consistent with $(a_1, b_1, c_1)$. A simulator $S_1$ works the same, and security holds by the same reasoning (i.e., $v_1 = a_0 b_1 \oplus u_0$ is random since $u_0$ is hidden from the distinguisher, and $v_1$ is fully determined from $c_1, a_1, b_1, u_1$).
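The algebra behind the $f^{ab}$ reduction is easy to spot-check on bits. The following sketch (our illustration) exhaustively verifies that setting $(b, v) = (x_0 \oplus x_1, x_0)$ and $u = x_a$ always yields $ab = u \oplus v$, which is exactly the correctness condition used in the proof above.

```python
from itertools import product

# Exhaustive check of the f^ab identity over GF(2): with (b, v) = (x0 ^ x1, x0)
# and u = x_a (the value the receiver would learn from a 1-out-of-2 OT),
# we must always get a*b = u ^ v.
for a, x0, x1 in product((0, 1), repeat=3):
    b, v = x0 ^ x1, x0
    u = (x0, x1)[a]
    assert (a & b) == (u ^ v), (a, x0, x1)

print("a*b = u XOR v holds for all bit choices")
```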
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1145/2508859.2516738?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1145/2508859.2516738, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GREEN", "url": "https://encrypto.de/papers/ALSZ13.pdf" }
2013
[ "JournalArticle", "Book", "Conference" ]
true
2013-11-04T00:00:00
[ { "paperId": "83721103a6fd5535e943b1b575cf70862c2322a8", "title": "Handbook of Applied Cryptography" }, { "paperId": "c320a52959ed7bdbde14775338ce867b97697601", "title": "When private set intersection meets big data: an efficient and scalable protocol" }, { "paperId": "547a94f8b16f521ee2eac299572a5c767d628289", "title": "Improved OT Extension for Transferring Short Secrets" }, { "paperId": "0eefa33a1ad9118ba91a2e4a88e555b453a952f1", "title": "Privacy-Preserving Ridge Regression on Hundreds of Millions of Records" }, { "paperId": "27e9745fc94ccf6039dd1804cbb99760544fc59b", "title": "Efficient Garbling from a Fixed-Key Blockcipher" }, { "paperId": "a6518d716b43194e751569d0748896cacbfbf409", "title": "Faster secure two-party computation with less memory" }, { "paperId": "9fa0ee74353fd008f2fbb1f6d724437678cbf9dd", "title": "GMW vs. Yao? Efficient Secure Two-Party Computation with Low Depth Circuits" }, { "paperId": "b273f47f97fc3f1ed922c3effda9ab88c52a1680", "title": "Secure two-party computations in ANSI C" }, { "paperId": "dd9876d5f2a54651d0d82c13fa15880f53e538d0", "title": "Secure two-party computation in sublinear (amortized) time" }, { "paperId": "216b47f6875f952523a9c082e968df8618e77163", "title": "Billion-Gate Secure Computation with Malicious Adversaries" }, { "paperId": "15964bef0c5a10420ccf44f4e02f4905aa9d85d0", "title": "Quid-Pro-Quo-tocols: Strengthening Semi-honest Protocols with Dual Execution" }, { "paperId": "7eb831ce0d0f4fc58bae945639a0d0c808d0aca3", "title": "Secure Multi-Party Computation of Boolean Circuits with Applications to Privacy in On-Line Marketplaces" }, { "paperId": "64586645de8c7c9793f568b976d733c312069abb", "title": "A New Approach to Practical Active-Secure Two-Party Computation" }, { "paperId": "fcde640edd1caa49f8d61d0a3007ffd2b7dc680a", "title": "Automatically optimizing secure computation" }, { "paperId": "6cf4e6fab02294ca3a1a8f2d536249e6f4a3fa53", "title": "VMCrypt: modular software architecture for scalable secure computation" }, { "paperId": "8728747bdff8b1e2aedf63ada403f4e595606d82", "title": "Demo: secure computation in JavaScript" }, { "paperId": "30f1fd87a5632da8a5457b9cc76c135ef35e704a", "title": "Privacy-Preserving Applications on Smartphones" }, { "paperId": "801efba3069e08f26658a5ec49f27f442b3ef80d", "title": "Faster Secure Two-Party Computation Using Garbled Circuits" }, { "paperId": "cceb00082b595427990cd9203ba2d83190267ad9", "title": "TASTY: tool for automating secure two-party computations" }, { "paperId": "5f3f1a4d23a0b0b363719c3731fbcad381019c7c", "title": "Privacy-preserving fingercode authentication" }, { "paperId": "2977e30243c4a93462cdb466d97abff4bcd638d2", "title": "Cryptographic Extraction and Key Derivation: The HKDF Scheme" }, { "paperId": "6e39b04b21ca33790071c1458e983e21170e392d", "title": "BotGrep: Finding P2P Bots with Structured Graph Analysis" }, { "paperId": "b0b6346104cbf878a072da93d49ad6e9f65befaf", "title": "SCiFI - A System for Secure Face Identification" }, { "paperId": "d30bf3722157c71938dc94419802239ef4e4e0db", "title": "Practical Private Set Intersection Protocols with Linear Complexity" }, { "paperId": "dea4328bd965da7e97da387b1d6ecf032dbfcb0f", "title": "Secure Two-Party Computation is Practical" }, { "paperId": "ca2e11406f02a9fcd6b426ae90d10710de8087b2", "title": "Efficient Privacy-Preserving Face Recognition" }, { "paperId": "a824e211a07889fd1c1e471a0248e90468203787", "title": "Improved Garbled Circuit Building Blocks and Applications to Auctions and Computing Minima" }, { "paperId": 
"2668e789f2f8f62bcdcfcb7e9248a2238a57b94f", "title": "Privacy-Preserving Face Recognition" }, { "paperId": "f5ea5a472a0d8f33ba372e009697a81aeabe2ef6", "title": "Secure Hamming Distance Based Computation and Its Applications" }, { "paperId": "d9a80152cf0bc0f7908d3ddf8eca9d75f54a50db", "title": "FairplayMP: a system for secure multi-party computation" }, { "paperId": "264bfb17824e11db74c87d8af0e5dd25f2b376fd", "title": "Improved Garbled Circuit: Free XOR Gates and Applications" }, { "paperId": "02477609a3568d7ab4c80bc3ca64f3d5bd0d8737", "title": "OT-Combiners via Secure Computation" }, { "paperId": "7061a0d83cdd455537d797f681b1ef908fbbcdc8", "title": "The Exact Multiplicative Complexity of the Hamming Weight Function" }, { "paperId": "f192b6b321252240f027f7f9b477baae09be9e4d", "title": "Privacy-preserving credit checking" }, { "paperId": "922d9d1bc39d49a10ba267ed70ad52c75d5651d4", "title": "Fairplay - Secure Two-Party Computation System" }, { "paperId": "c7514685d9a26e5bffd6cf8e6d94f1b7f6de241f", "title": "Foundations of Cryptography: Volume 2, Basic Applications" }, { "paperId": "862267a2c5618865dc6ec1a63e1efdb64efcbd4b", "title": "Automatic generation of two-party computations" }, { "paperId": "ffb967a1754500aa3ab35295aa05c10992719f64", "title": "Extending Oblivious Transfers Efficiently" }, { "paperId": "490b2ab76335de294498bff727c0a25314317c63", "title": "Efficient oblivious transfer protocols" }, { "paperId": "3243703235e20572cbe6dcd77159d82f6997ba97", "title": "\"Pseudo-Random\" Number Generation Within Cryptographic Algorithms: The DDS Case" }, { "paperId": "71f582193c434a57f0dd7e8d8da9bbb6cc86777e", "title": "Correlated pseudorandomness and the complexity of private computations" }, { "paperId": "8dded7fef81405f48b717e4cbca2922cb9ec35aa", "title": "Efficient Multiparty Protocols Using Circuit Randomization" }, { "paperId": "df2473061df11b76cebb7400c50246d0b354390c", "title": "How to play ANY mental game" }, { "paperId": "29b0f06d18949fc7f3a38bb0022571aa15725dc7", "title": "How to generate and exchange secrets" }, { "paperId": "18ec5610ddaced90662f407c3a10f52d96fbe92a", "title": "The cryptographic security of truncated linearly related variables" }, { "paperId": "f2c4398e489bed6cd2ac00492c762f6b112aa7bc", "title": "A randomized protocol for signing contracts" }, { "paperId": "bb2b1a16fc5cfa26dc5ae4ef9f41bc2bc610ce90", "title": "A Fast Computer Method for Matrix Transposing" }, { "paperId": "14720266a35ced804438cdf06bc8d151e7e9903c", "title": "Private Set Intersection: Are Garbled Circuits Better than Custom Protocols?" }, { "paperId": "09e73a08ee516df2d69ae6a6126bb05ff58e2042", "title": "SCAPI: The Secure Computation Application Programming Interface" }, { "paperId": null, "title": "Special Publication 800-57, Recommendation for Key Management Part 1: General (Rev. 3)" }, { "paperId": "571458c41a4ab85f231499afde7b1cb84b410b33", "title": "Efficient Privacy-Preserving Biometric Identification" }, { "paperId": "1e0b693c1c9c69aae413729b58c552ad3cc838ca", "title": "Location Privacy via Private Proximity Testing" }, { "paperId": null, "title": "31. Intelligence Advanced Research Projects Activity (IARPA). 
Security and Privacy Assurance Research (SPAR) Program" }, { "paperId": null, "title": "volume 5536 of LNCS" }, { "paperId": "f2a37db2f2104375e6283d13b8bce6a4ee3d8bea", "title": "Extending Oblivious Transfers Efficiently - How to get Robustness Almost for Free" }, { "paperId": "772cdcc8a67cc878b39409230cbf2488a1117e62", "title": "How To Exchange Secrets with Oblivious Transfer" }, { "paperId": "d1ce47c53cc2cec75cb148d799eb396e54d8109b", "title": "The Foundations of Cryptography - Volume 2: Basic Applications" }, { "paperId": "9a639a1e48318a482fb646e5e9f9ab8d006f69c2", "title": "Security and Composition of Multiparty Cryptographic Protocols" }, { "paperId": null, "title": "R computes t i = G ( k 0 i ) and u i = t i ⊕ G ( k 1 i ) ⊕ r , and sends u i to S for every 1 ≤ i ≤ κ" }, { "paperId": "084fdcf7e27d2a563ded976f8c54ac789d5e03a1", "title": "2008 IEEE Symposium on Security and Privacy Towards Practical Privacy for Genomic Computation" }, { "paperId": null, "title": "Table 3. Performance results and standard deviations for base OTs" }, { "paperId": "4bec3c62d4f2194bad4de7d41f7ce74f88b11e04", "title": "Computation" }, { "paperId": null, "title": "Let Q = [ q 1 | ... | q κ ] denote the m × κ bit matrix where the i -th column is q i . Let q j denote the j -th row of the matrix Q ." }, { "paperId": null, "title": "Definition A.3. Let f : ( { 0 , 1 } ∗ ) 2 → ( { 0 , 1 } ∗ ) 2 be (possible randomized) two–party functionality, and let f i ( x, y denotes the i th element of f ( x, y )" }, { "paperId": null, "title": "where t 1 , . . . , t m , s are chosen uniformly and independently at random from { 0 , 1 } κ , and U m · κ + m · (cid:96) is the uniform distribution over { 0 , 1 } m · κ + m · (cid:96)" }, { "paperId": null, "title": "Secure Two-Party Computation. We give a formal definition for security of a two party protocol in the presence of a semi-honest adversary. The definition is the standard definition, see [7,14]" }, { "paperId": null, "title": "The parties invoke the κ × OT κ -functionality, where S plays the receiver with input s and R plays the sender with inputs ( k 0 i , k 1 i ) for every 1 ≤ i ≤ κ" }, { "paperId": null, "title": "For every 1 ≤ i ≤ κ , S defines q i = ( s i · u i ) ⊕ G ( k s i i )" } ]
26097
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/016969e097b466e97a4e0f221e772f9457b57c49
[ "Computer Science" ]
0.914533
Management and Control of Domestic Smart Grid Technology
016969e097b466e97a4e0f221e772f9457b57c49
IEEE Transactions on Smart Grid
[ { "authorId": "1692381", "name": "A. Molderink" }, { "authorId": "1730722", "name": "V. Bakker" }, { "authorId": "145348353", "name": "M. Bosman" }, { "authorId": "1688140", "name": "J. Hurink" }, { "authorId": "1742628", "name": "G. Smit" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE Trans Smart Grid" ], "alternate_urls": null, "id": "1c2f3998-b5ca-48ca-9991-94b71c71ecb7", "issn": "1949-3053", "name": "IEEE Transactions on Smart Grid", "type": "journal", "url": "http://ieeexplore.ieee.org/servlet/opac?punumber=5165411" }
null
# Management and control of domestic smart grid technology

## Albert Molderink, Student member, IEEE, Vincent Bakker, Student member, IEEE, Maurice G.C. Bosman, Johann L. Hurink, Gerard J.M. Smit

**_Abstract—Emerging new technologies like distributed generation, distributed storage, and demand side load management will change the way we consume and produce energy. These techniques enable the possibility to reduce the greenhouse effect and improve grid stability by optimizing energy streams. By smartly applying future energy production, consumption and storage techniques, a more energy efficient electricity supply chain can be achieved. In this paper a three-step control methodology is proposed to manage the cooperation between these technologies, focused on domestic energy streams. In this approach, (global) objectives like peak shaving or forming a Virtual Power Plant can be achieved without harming the comfort of residents. As shown in this work, using good predictions, in advance planning and realtime control of domestic appliances, a better matching of demand and supply can be achieved._**

**_Index Terms—Micro-generation, Energy efficiency, Microgrid, Virtual Power Plant, Smart grid_**

This research is conducted within the Islanded House project supported by E.ON Engineering and the SFEER project supported by Essent, Gasterra and STW. All authors are with the University of Twente, Department of Computer Science, Mathematics and Electrical Engineering, P.O. Box 217, 7500 AE, Enschede, The Netherlands (e-mail: a.molderink@utwente.nl).

I. INTRODUCTION

In the last decades, more and more stress is put on the electricity supply and infrastructure. On the one hand, electricity usage increased significantly and became very fluctuating. Demand peaks have to be generated and transmitted, and they define the minimal requirements in the chain. Thus, due to the fluctuating demand, minimal grid requirements have increased. Another effect of fluctuations in demand is a decrease in generation efficiency [1]. On the other hand, the reduction of CO2 emissions and the introduction of generation based on renewable sources have become important topics today. However, these renewable resources are mainly given by very fluctuating and uncontrollable sun-, water- and wind power. The generation patterns resulting from these renewable sources may have some similarities with the electricity demand patterns, but they are in general far from being equal. For this reason, supplemental production is required to keep demand and supply in balance, resulting in an even more fluctuating generation pattern for the conventional power plants. Finally, the introduction of new, energy efficient technologies such as electrical cars can result in an even further fluctuating electricity demand. Uncontrolled charging of electrical cars will result in high peak demands of electricity since these vehicles need to be charged fast to ensure enough capacity for the upcoming trip. Lowering the peaks in demand is desirable to prolong the usage of the available grid capacity.

A solution for these problems may be to transform domestic customers from static consumers into active participants in the production process. Consumer participation can be achieved due to the development of new (domestic) appliances with controllable load, microgeneration and domestic energy storage of both heat and electricity.
These devices have potential to shift electricity consumption in time without harming the comfort of the residents. Examples of devices with optimization potential are (smart) freezers and fridges, which can adjust their cooling cycles to shift their electricity load, or batteries that can temporarily store excess electricity. How to improve energy efficiency using this domestic potential is still not well studied and needs to be a topic of further research.

It is, in general, agreed that it is both desirable and necessary to manage Distributed Generation (DG) and to optimize its efficiency. In [2] it is stated that a fit-and-forget introduction of domestic DG will cause stability problems. Furthermore, the large scale introduction of renewables requires a new grid design and management. A study of the International Energy Agency concludes that, although DG has higher capital costs than power plants, it has potential and that it is possible with DG to supply all demand with the same reliability, but with lower capacity margins [3]. The study foresees that the supply can change to decentralized generation in three steps: 1) accommodation in the current grid, 2) introduction of a decentralized system cooperating with the central system and 3) supplying most demand by DG. However, both [2] and [3] indicate that commercial attainability and legislation are important factors for the success of the introduction of DG.

The goal of our research is to determine a methodology to use the domestic optimization potential to 1) optimize the efficiency of current power plants, 2) support the introduction of a large penetration level of renewable sources (and thereby facilitate the means that are needed for CO2 reduction) and 3) optimize the usage of the current grid capacity. In this work we give a more detailed description of the control strategy presented in [4] to exploit domestic optimization potential. This control strategy consists of (local) profile prediction, in advance global planning and realtime local control. Here, these individual steps, the choices made and the idea behind the methodology are expounded. Furthermore, results of a new realistic use case simulated using a simulator [5] are given. Finally, lessons learned from our prototype with first versions of our algorithms to study the controllability of the devices in the real world are given.

The remainder of this paper is structured as follows. The following section introduces the domestic optimization potential. Section III gives an overview of related work and ends with a general management and control concept based on the related work. Section IV describes our approach and the proposed three-step methodology. Next, sections V to VII describe the details of the three steps. In section VIII the results of two case studies are given. We conclude this paper with a discussion of the results.

II. OPTIMIZATION POTENTIAL

The goal of our control methodology is to exploit the optimization potential of domestic technologies. Although some of these technologies themselves may lead to a decreased domestic energy usage (electricity and heat), the initial goal of this method is not to decrease domestic energy usage, but to optimize the electricity import/export by reshaping the energy profiles of the houses. The energy profiles are reshaped such that they can be supplied more efficiently or by a higher share of renewable sources. Besides improving efficiency, optimization can (and has to) enhance the reliability of supply [2], [3].
The primary functionality of the system is to control the domestic generation and buffering technologies in such a way that they are used properly. Furthermore, the required heat and electricity supply and the comfort for the residents should be guaranteed. Some devices have some scheduling freedom in how to meet these requirements. This scheduling freedom of the domestic devices is limited by the comfort and technical constraints and can be used for optimizations. More scheduling freedom can be gained when residents are willing to decrease their comfort level, leading to less restrictive constraints for the scheduling. This (small) decrease in comfort should lead to benefits for the residents, e.g. a reduced electricity bill.

The optimization objective can differ, depending on the stakeholder of the control system. The objective for residents or utilities can be earning/saving money, and therefore the goal is to generate electricity when prices are high and consume electricity when prices are low. For network operators the goal can be to maintain grid stability and decrease the required capacity, while an environmental goal can be to improve the efficiency of power plants. Therefore, an optimization methodology should be able to work towards different objectives. Next to different objectives, control methodologies can have different scopes for optimization: a local scope (within the house), a scope of a group of houses, e.g. a neighborhood (microgrid), or a global scope (Virtual Power Plant). Every scope again might result in different optimization objectives.

_1) Local scope:_ On a local scope the import from and export into the grid can be optimized, without cooperation with other houses. Possible optimization objectives are shifting electricity demand to more beneficial periods (e.g. nights) and peak shaving. The ultimate goal can be to create an independent house, which implies no net import from or net export into the grid. A house that is physically isolated from the grid is called an islanded house. The advantages of a local scope are that it is relatively easy to realize: there is no communication with others (privacy) and there is no external entity deciding which appliances are switched on or off (social acceptance).

_2) Microgrid:_ In a microgrid a group of houses together optimize their combined import from and export into the grid, optionally combined with larger scale DG (e.g. wind turbines). The objectives of a microgrid can be shifting loads and shaving peaks such that demand and supply can be matched better internally. The ultimate goal is perfect matching within the microgrid, resulting in an islanded microgrid. An advantage of a group of houses is that their joint optimization potential is higher than that of individual houses, since the load profile is less dynamic (e.g. startup peaks of appliances disappear in the combined load). Furthermore, multiple microgenerators working together can match more demand than individual microgenerators, since a better distribution of the production in time is possible [6]. However, for a microgrid a more complex optimization methodology is required.

_3) Virtual Power Plant (VPP):_ The original VPP concept is to manage a large group of micro-generators with a total capacity comparable to a conventional power plant. Such a VPP can replace a power plant while having a higher efficiency and, moreover, it is much more flexible than a normal power plant. Especially this last point is interesting since it expresses the ability to react to fluctuations.
This original idea of a VPP can of course be extended to all domestic technologies. Again, for a VPP also a complex optimization methodology is required. Furthermore, communication with every individual house is required, and privacy and acceptance issues may occur.

III. RELATED WORK

Most research projects focus in the first instance on introducing and managing (domestic) DG. In [7] the impact of DG on the stability of the grid itself is studied, i.e. whether the oscillatory stability of the grid and transformers can be improved with DG. Their conclusion is that it is possible to improve the stability when the generators are managed correctly. The authors of [8] conclude, based on UK energy demand data, that it is attractive to install microCHPs to reduce CO2 emission significantly.

Next to DG, energy storage and demand side load management are also relevant research topics. One of the options is to combine wind turbines with electricity storage to level out the fluctuations, by predicting the expected production and planning the amount of electricity exported to the grid exploiting the electricity buffer [9]. In [10] and [11] Grid Friendly Appliances are described. These appliances switch (parts of) their load off when the frequency of the grid deviates too much. This frequency deviation is a measure of the stress on the grid.

A lot of control methodologies for DG, energy storage and/or demand side load management are described in the literature, mostly using an agent-based solution. Most agent based methodologies propose one agent per device, placing bids with the agent one level higher [12]. This higher-level agent aggregates the bids and sends them upwards. The top level agent determines a market clearing price based on the bids and the objective. In [13] multiple domestic technologies are combined: they conclude that demand side load management offers 50% of the potential. However, there have to be incentives for the residents to allow some discomfort (e.g. a reduced energy bill to allow a deviation on the room temperature). The PowerMatcher described in [14] and [15] uses a similar agent based approach but also takes the network capacity into account. Field tests showed a peak reduction of 30% when a temperature deviation of one degree of the thermostat is allowed [16]. In [17] the results of individual (local) and overall (global) optimizations are compared. They conclude that global optimizations lead to better results. Next, they claim that agent based methodologies outperform non-agent based methodologies since agent based methodologies take more (domestic) information into account.

Next to agent based methodologies, there are also non-agent based methodologies. The research described in [18] proposes a method that is capable of aiming at different objectives. The methodology is based on a cost function for every device, and using a Non Linear Problem definition the optimal schedule is found. The authors of [19] address the problems of both agent and non-agent based solutions: non-agent based solutions are less scalable and agent based solutions need local intelligence and are not transparent. Therefore, they propose a combination: aggregate data on multiple levels, while these levels contain some intelligence. In [20] a methodology is proposed using Stochastic Dynamic Programming (SDP). The stochastic part of the methodology considers the uncertainty in predictions and the stochastic nature of (renewable) production and demand.
Most methodologies use prediction of demand and/or production. Both can be predicted rather well with neural networks, as described in [21] and [22].

_Summary:_ Most of the researchers propose a hierarchically structured, agent-based solution. The hierarchical structure ensures the scalability of the solution. Although a lot of approaches claim to be distributed without a central algorithm, all approaches found have one decision-making element. The similarities between the described approaches and our approach are the control down to an appliance level and the hierarchical structure with aggregation on each level (local and global control). The main differences are the prediction/planning and the lack of agents. Although some agent-based approaches use prediction and planning on a device level, this is utilized for profit raising of the agent itself. The latter is also the main difference between our approach and an agent-based approach: agents are greedy and try to optimize their own profit, whereas our optimization methodology tries to reach a global objective for the whole fleet. As stated in [17], global optimization algorithms lead to better results. Furthermore, our approach can address each household individually using different steering signals instead of using the same signal (price) for everyone.

Fig. 1. Model of domestic energy streams

IV. APPROACH

Our research focuses on the development of algorithms for the control of energy streams in (a group of) houses. These algorithms are verified using a simulator. This simulator can simulate the complete methodology for a large fleet of houses on a device level, incorporating local and global controllers. A detailed description of the simulator can be found in [5]. Furthermore, the validity of assumptions made during development of our models has been verified with a prototype. This prototype consists of a microCHP appliance, a heatstore, controllable appliances (both heat and electricity) and control algorithms implemented in software. A detailed description of this prototype can be found in [23]. The remainder of this section describes the underlying model of a house on which the algorithms and also the simulator are based. Next, the basic idea and a general description of the proposed control methodology are given.

_A. Model_

The model of a single house is shown in Fig. 1. Every house consists of (several) micro-generators, heat and electricity buffers, appliances and a local controller. Multiple houses are combined into a (micro)grid, exchanging electricity and information between the houses. Electricity can be imported from and exported into the grid. Heat is produced, stored and used only within the house. All domestic heat and electricity devices are divided into three groups: 1) producers, producing heat and/or electricity, 2) buffers, temporarily storing heat or electricity, and 3) consumers, consuming heat and/or electricity. Every producer, buffer and consumer is called a device. Heat and electricity production can be coupled on device level. For example, a microCHP produces either heat and electricity or nothing at all. The same holds for some consuming devices, e.g. a hot fill washing machine. A more detailed description of the model can be found in [5].
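To make the producer/buffer/consumer grouping concrete, the following minimal Python sketch (our illustration; the class and attribute names are placeholders, not the data model of the simulator in [5]) encodes a house with a microCHP whose heat and electricity outputs are coupled:

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    name: str
    heat: float = 0.0         # kW of heat produced (+) or consumed (-) when on
    electricity: float = 0.0  # kW of electricity produced (+) or consumed (-)

@dataclass
class Buffer:
    name: str
    capacity: float  # kWh
    level: float     # current state of charge, kWh

@dataclass
class House:
    producers: list[Device] = field(default_factory=list)
    buffers: list[Buffer] = field(default_factory=list)
    consumers: list[Device] = field(default_factory=list)

# Heat and electricity production are coupled on device level: a microCHP
# either produces both (here in an 8:1 ratio) or nothing at all.
house = House(
    producers=[Device("microCHP", heat=8.0, electricity=1.0)],
    buffers=[Buffer("heat store", capacity=10.0, level=5.0)],
    consumers=[Device("fridge", electricity=-0.1)],
)
```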
We often use a 6 minute time interval since such an interval length is a good trade off between accuracy and amount of data [24]. Furthermore, 6 minute time interval calculate easy since it is 101 [of an hour.] _B. Methodology_ The goal of the energy management methodology is to introduce a generic solution for different (future) domestic technologies and house configurations. Furthermore, within the methodology multiple objectives are possible and the scope of the methodology can differ. As a consequence, the methodology needs to be very flexible and generic. Since there can be global objectives (e.g. in case of a VPP) and the actual control of devices is on domestic level, both a global and a local controller are needed. Furthermore, the methodology should be able to optimize for a single house up to a large group of houses. So, the algorithms used in the control system should be scalable and the amount of required communication limited. The goal of the methodology is to exploit as much potential as possible while respecting the comfort constraints of the residents and the technical constraints of the devices. One of the applications of the control methodology is to act actively on an electricity market. To trade on such a market, an electricity profile must be specified one day in advance. Therefore, it should be possible to determine a planning one day in advance for the next day. Another application can be to react on fluctuations in the grid. Reacting on fluctuations requires a realtime control and sufficient generation capacity must be available at every moment. To achieve this available capacity, again a planning must be determined in advance. Therefore, the proposed control strategy consists of three steps. A schematic representation of the method is given in Figure 2. In the first step, a system located at the consumers predicts the production and consumption pattern for all appliances for the upcoming day. For each appliance, based on the historical usage pattern of the residents and external factors like the weather, a predicted energy profile is generated. The local controller aggregates these profiles and sends them to the global controller. The aggregated energy profile determines the potential of all appliances located in the houses. In the second step, these optimization potentials can be used by a central planner to exploit the potential to reach a global objective. The global controller consists of multiple nodes connected in a tree structure. Each house sends its profile to its parent node, this node aggregates all received profiles and sends the aggregated profile upwards in the tree, etc. Based on the received profile and the objective, the root node determines steering signals for its children to work towards the global objective. Each node in the tree determines steering signals for its children based on the received steering signals. The house controllers can determine an adjusted profile, incorporating the steering signals. This profile is sent upwards in the tree and when necessary the root node can adjust the steering signals. So, the planning is an iterative, distributed algorithm lead by the global controller The position of the ppermost node and Fig. 2. Three step methododolgy therefore the global controller determines the scope of the optimization (within the house, a neighborhood node, etc.). The result of the second step is a planning for each household for the upcoming day. 
In the final step, a realtime control algorithm decides at which times appliances are switched on/off, when and how much energy flows from or to the buffers, and when and which generators are switched on. This realtime control algorithm uses steering signals from the global planning as input, but preserves the comfort of the residents in conflict situations. Furthermore, the local controller has to work around prediction errors. The combination of prediction, planning and real-time control exploits all potential at the most beneficial times. The hierarchical structure with intelligence on the different levels ensures scalability, reduces the amount of communication and decreases the computation time of the planning. This three-step approach is discussed in more detail in the following sections. The combination of prediction, local controllers and global controllers can be extended to a Smart Grid [2] solution, controlling non-domestic DG, non-domestic buffers and domestic imports/exports, optimizing the efficiency of central power plants. Since the use case described in the Results section is based on a microCHP, the description of the first two steps focuses on the optimization of a fleet of microCHP devices.

V. STEP 1: LOCAL PREDICTION

The optimization potential of micro-generators is based on their scheduling freedom. While PV panels or micro wind turbines are solely dependent on renewable resources and thus have no scheduling freedom, a microCHP appliance is controllable. When a heat buffer is added to the system, the production and the consumption of heat can be decoupled, within the limits of the heat buffer. This freedom can be used to schedule the microCHP to produce heat, and thus electricity, in more beneficial periods. Using a heat buffer enables the possibility to have an electricity-steered control of a microCHP appliance instead of a heat-steered control. The scheduling freedom of a microCHP appliance is limited by the heat demand of the household and the size/level of the heat buffer. By predicting the heat demand in advance, a better schedule can be determined for heat-driven generators, improving the optimization potential. Since the use case described in the Results section is based on a microCHP, the rest of this section focuses on heat demand prediction.

In our approach, the heat demand for each individual household is predicted using neural network techniques. The goal is to predict the heat profile for the next day as accurately as possible. Based on the prediction, a schedule for the microCHP can be calculated. The value of this schedule depends on the accuracy of the predictions. There are several reasons why individual heat demand prediction is used. The first and most important reason is that the schedules of the generators are made locally. A second reason arises when our approach is used for the optimization of a group of households: the group might consist of hundreds of thousands up to a million households. It is then infeasible to do a prediction per house centrally. It might be possible to do a prediction of a whole group, but eventually all individual generators must be scheduled, based on local heat demand. By moving the prediction to a local control system in the house, a scalable system is achieved.

The heat demand (of a household) is dependent on factors like weather, insulation and human behavior. The prediction model should be able to predict the heat demand one day ahead, based on recent observations.
In other words, based on recent heat demand data and information about external factors like weather and insulation, the model should learn the relation between these factors and the heat demand. The relation between external factors, behavior and the corresponding heat demand might be different for each house and household. Each house is different and has different insulation characteristics. Every household is different and has different behavioral patterns. By predicting the heat demand per house locally, local information about the specific environmental and behavioral characteristics can be used to improve the prediction. One important factor in the heat demand is the behavior of the household. However, due to human nature, this behavior is not static. People have different behavior on different days of the week, thus the model has to be flexible. Changes in behavior should be learned quickly in order to cope with changes, e.g. holidays.

_A. Prediction Model_

For our prediction model, neural network techniques are used. Neural networks are computational models based on biological neurons [25]. They are able to learn, to generalize, or to cluster data. A network has to be configured (trained) such that the application of the network to a set of given inputs produces the desired outputs (which are also given). The output of our prediction model is the heat demand per hour. We assume the most relevant factors for the heat demand are the behavior of the residents, the weather and the characteristics of the house. Information about these factors is therefore a candidate input for our prediction model.

To learn the behavior of the residents, historical heat demand is used as an input. Information about the weather can for example be represented with outdoor temperatures, wind speeds and solar radiation. Since houses do not change that often, we consider the characteristics of the house static. Because of this, the neural network should be able to learn these characteristics since they are present in all other input data used. In [22] and [26] multiple possible combinations of input sets and their influence on the predictions are presented. Furthermore, in [26] a different way of constructing the training set is presented. Common practice, when generating a training set for neural network applications, is to select a large, randomly selected set used for training. In our case, this translates to giving the network many samples to find as much general behavior as possible. However, since behavior changes during the year, [26] shows that this is not the best way. Using only information of the last weeks as training information gives better predictions.

_B. Results_

Fig. 3. Heat demand prediction for a household on Nov. 22, 2007

An example of a good prediction is depicted in Figure 3. Here, a prediction is done for a household on November 22, 2007, using historical heat demand data and outdoor temperatures as input. As can be seen in the figure, the trend is followed quite well. As expected, due to human nature and unmeasurable influences, there is some deviation from the real heat demand.
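As an illustration of this prediction step (our sketch; it assumes the third-party scikit-learn package, and the feature set, window length and network size are placeholders, not the configuration used in [22], [26]), a small feed-forward network can be trained on recent heat demand and outdoor temperature, using only the last weeks of data as suggested by [26]:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor  # assumes scikit-learn

def train_heat_predictor(heat, temperature, train_days=28):
    """Map (yesterday's 24 hourly heat demands, tomorrow's 24 hourly
    temperature forecast) to tomorrow's 24 hourly heat demands, training
    only on the most recent weeks of data."""
    days = len(heat) // 24
    X, y = [], []
    for d in range(max(0, days - train_days - 1), days - 1):
        features = np.concatenate([heat[d * 24:(d + 1) * 24],
                                   temperature[(d + 1) * 24:(d + 2) * 24]])
        X.append(features)
        y.append(heat[(d + 1) * 24:(d + 2) * 24])
    model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)
    model.fit(np.array(X), np.array(y))
    return model

# next-day prediction, given today's demand and tomorrow's forecast:
# model.predict(np.concatenate([today_heat, tomorrow_temp]).reshape(1, -1))
```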
VI. STEP 2: GLOBAL PLANNING

The planning described in this section focuses on a large fleet of houses combined into a VPP, all equipped with a microCHP and a heat buffer. Based on the heat demand prediction for a single house we plan the runs of the corresponding microCHP. This means that the exact periods in time are specified during which the microCHP should be switched on. This planning takes into account that the complete heat demand of the house has to be guaranteed, while using a heat buffer. Furthermore, the planning is restricted by technical constraints of the microCHP like a minimal runtime. A complete explanation of these constraints can be found in [27].

Based on the heat demand prediction, each house of a group of houses (of size N) makes a production plan satisfying the domestic, or local, constraints (i.e. the heat demand constraints plus the technical, microCHP-related constraints). Considering the generators in these houses as a Virtual Power Plant (VPP) introduces a new dimension in the planning problem, since we now have to focus on the total electricity production of this group of houses. As a consequence, the planning does not only need to satisfy local constraints; also a global constraint on the total electricity production is added. More precisely, the group of houses should satisfy a predefined production plan P, that is based on the role the VPP wants to play.

The problem of realizing the production planning for the group of houses is based on a discretisation of time, as noticed in Section IV-A. The planning horizon of a single day is divided into $N_T$ intervals for which a decision must be made for each microCHP in each house. Since a simplified version of the problem is known to be NP-complete in the strong sense [27], we develop heuristics which find in reasonable time a planning for the group of houses that is 'good enough'. In this context, we mean by 'good enough' that we approximate the predefined (discrete) production plan $P = (P_1, \ldots, P_{N_T})$. As objective, we use the squared mismatch $m_s$ to this plan $P$, which should be minimized:
$$m_s = \sum_{j=1}^{N_T} \sum_{n=1}^{N} (e_{n,j} - P_j)^2, \qquad (1)$$

where $e_{n,j}$ is the produced electricity in house $n$ during time period $j$. Since we deal with an NP-complete problem, in the next subsection we propose a heuristic method that works in reasonable time. This method makes use of fast locally optimizing methods, which, in the presence of a hierarchical structure, results in a scalable planning method from a global perspective.

_A. Iterative Distributed Dynamic Programming_

The problem is to find production plans for local households which are subject to local constraints, whereas we want to minimize the global deviation of the total electricity production, measured by the squared mismatch $m_s$. In this subsection we describe a heuristic that solves this problem by separating the two elements that make the problem difficult: 1) finding a local plan satisfying local constraints; 2) minimizing the squared mismatch from the global production plan. Next, these two elements are combined in an Iterative Distributed Dynamic Programming approach. This approach is explained in more detail by tackling the two single elements.

_1) Finding a local plan satisfying local constraints:_ A local production plan that satisfies both technical (microCHP related) and domestic (heat demand) constraints can be found by using a Dynamic Programming approach. This approach uses a state $s$ to describe the household situation in each interval. For more detail we refer to [28]. Over time, the state $s$ changes based on the decision $x_j$ to have the microCHP running or not. From the state, the run history and the total production until the current time period are deduced. So, technical constraints of the microCHP and heat buffer can be met by only allowing feasible states and state changes in the corresponding time periods.

Since the global production plan $P$ often is based on the electricity market (e.g. the Dutch APX market [29]), the costs in the Dynamic Programming formulation are chosen to also be electricity price related. More formally, if $p_j$ denotes the price on the electricity market in period $j$, we define the market related costs $c_j$ for state changes in time period $j$ by

$$c_j = (\max_i p_i) - p_j, \qquad (2)$$

since the steering signal for production should be low when the price is high (steering signals are costs, the objective is cost reduction). The costs of a state change from period $j$ to period $j+1$ depend on the related decision $x_j$ and are given by $x_j c_j$. Now, for each interval $j$ and state $s$ we define the cost function $F_j(s)$, which expresses the minimal costs needed from interval $j$ until the end of the planning horizon $N_T$, assuming that the current situation is characterized by the state $s$. In practice the number of states is not too large if the time periods are chosen larger than or equal to five minutes. Via a backtracking algorithm the value of $F_0(s_0)$ can be calculated, which minimizes the total costs from the start of the planning period (indicated by state $s_0$ in period 0) until the end of the planning period. The path(s) corresponding to this value give the state changes and, thus, the corresponding decision values $x_j$ to switch the microCHP on or off, i.e. it gives a production plan for the house.

_2) Minimizing the squared mismatch from the global production plan:_ By sending all local production plans to a global planner, the sum of all production plans of the group of houses can be calculated; this sum gives the global electricity output of the VPP, leading to a squared mismatch $m_s$ from the production plan $P$. In an iterative approach we aim to minimize this mismatch by iteratively steering the local production plans in a mismatch-reducing direction. As a consequence, most of the computation is still done locally at the houses. On a central level the steering of the plans in a certain direction is calculated. To allow for scalability, the group of houses is divided into a hierarchical structure. In this way a limited number of houses can be regarded as a sub group, which is steered into the right direction independently from other sub groups. For simplicity we refer in the following to the plan $P$ as the production plan for a sub group of houses.

In combination with the use of the local Dynamic Programming approach, we adapt the steering signals in the following way. Artificial additional costs $a^i_j$ are added to the state change costs $c_j$ for time period $j$ in iteration $i$, if:

- the electricity output of the VPP is larger than the plan $P_j$, and
- in the local house plan the microCHP is running at time period $j$.

The values of $a^i_j$ are sent to the local planner and a new planning is determined by the local planner. In this way, microCHPs that are running in periods where the sub group plan is exceeded are stimulated to produce at other time periods. In the steering method the additional cost $a^i_j$ that is used in the steering process decreases with each iteration $i$, to minimize negative overshooting effects and guarantee convergence.
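A minimal sketch of the local Dynamic Programming recursion is given below (our simplification: the state is reduced to the heat buffer level, heat demand is taken as given per interval, and technical constraints such as minimal runtimes from [27], [28] are omitted):

```python
import math
from functools import lru_cache

def plan_microchp(heat_demand, costs, chp_heat=8, buf_max=10, start_level=5):
    """Backward DP over (interval, buffer level): F(j, s) is the minimal
    cost-to-go from interval j when the heat buffer is at level s.

    heat_demand[j] is the heat needed in interval j; costs[j] is the cost
    c_j of running the microCHP in j (plus any steering costs a^i_j).
    Quantities are in integer buffer units so the state space stays finite.
    """
    n = len(heat_demand)

    @lru_cache(maxsize=None)
    def F(j, level):
        if j == n:
            return 0.0, ()
        best = (math.inf, ())
        for x in (0, 1):  # microCHP off / on during interval j
            new_level = level + x * chp_heat - heat_demand[j]
            if 0 <= new_level <= buf_max:  # only feasible buffer states
                cost, plan = F(j + 1, new_level)
                total = cost + x * costs[j]
                if total < best[0]:
                    best = (total, (x,) + plan)
        return best

    return F(0, start_level)

# e.g.: cost, on_off = plan_microchp([2, 2, 6, 6], costs=[3.0, 1.0, 0.0, 2.0])
```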
VII. STEP 3: LOCAL SCHEDULING

This section presents the scheduling algorithm that controls the devices in a single house. The decisions of the algorithm are based on the current situation in the house and, optionally, on the steering signals from the global controller. The most important requirement of the algorithm is to guarantee the comfort for the residents and the proper usage of devices. Within this requirement, the goal is to optimize the electricity import/export.

The basic idea is that there is a certain demand and this demand should be matched. The demand is defined as the sum of the heat and electricity demand of all consumers. This demand is given as an input parameter and can be matched with 1) import from the grid, 2) production by generators, 3) the buffers and 4) switching off consumers (not providing them). When the sum of the four possibilities gives more heat and/or electricity than the demand, the corresponding energy flows to a buffer and/or into the grid. However, some matching is more desirable than others: e.g. it might be allowed to switch off a fridge temporarily but a TV set should stay on. Therefore, for every matching, costs are defined.

As stated above, every device (in the house) and the grid can match a certain amount of energy demand (optionally zero). Furthermore, energy flowing to a buffer or to the grid is seen as negative matching. Via this generic model, matching costs of all devices, independent of technology, can be expressed with linear cost functions. The cost function can express 1) the costs of the matching, 2) the costs of state transitions (e.g. startup costs) and 3) costs to steer the behavior and reach global objectives. Following this setup, the algorithm has to find an optimal combination of matching sources using, for all devices, cost functions of the same structure.

The algorithm is executed for each time interval. The matching cost for each device is determined at the beginning of the time interval, based on the status of the device. The status of the devices cannot be determined beforehand, since the status may depend on decisions in former time intervals. In the current implementation, the costs only depend on the current status without taking future states into account.

The optimization problem considers a given set of devices $Dev$. Decision variables $x_i$ are introduced which express the amount of matching of device $i \in Dev$. Since these variables are used for both heat and electricity, two multiplication factors are introduced, one for heat ($H_i$) and one for electricity ($E_i$); e.g. the heat/electricity ratio of a microCHP is 8:1, thus possible choices are $H_i = 8$ and $E_i = 1$. The possible values for the variables $x_i$ may be restricted. For example, a consuming device can be switched off ($x_i$ = demand or $x_i = 0$) and a certain amount of electricity can be imported/exported ($-2000 \leq x_i \leq 5000$). Furthermore, the cost function parameters may rely on the concrete value of $x_i$, i.e. the cost function is a non-continuous stepwise
The value A_ij expresses the matching costs and B_ij the startup costs if x_i is chosen from the interval I_ij. An example of intervals and associated costs is shown in Figure 4.

The problem of finding a best solution is modeled as an Integer Linear Program (ILP). The objective of the ILP is to minimize the costs while all given heat demand D^h and electricity demand D^e is matched. This is ensured with the constraints in (5) and (6) given below. Furthermore, all values of x_i must be valid, i.e. chosen from one of the intervals I_ij. To ensure this, extra binary decision variables c_ij are introduced and every x_i is split up into variables x_ij for every interval j ∈ S_i. Constraint (7) forces that for every device only one of the c_ij is one, i.e. the variable c_ij specifies the interval from which x_i is chosen. Constraint (8) ensures that only the x_ij corresponding to the nonzero c_ij is nonzero and lies within the specified interval. The value x_i of a device is defined as the sum of all x_ij for that device (see (4)).

$$\min \sum_{i,j} \left(A_{ij} \times x_{ij} + c_{ij} \times B_{ij}\right) \qquad (3)$$
$$\text{s.t.} \quad x_i = \sum_j x_{ij} \quad \forall i \in Dev \qquad (4)$$
$$D^h = \sum_i H_i \times x_i \qquad (5)$$
$$D^e = \sum_i E_i \times x_i \qquad (6)$$
$$\sum_j c_{ij} = 1 \quad \forall i \in Dev \qquad (7)$$
$$c_{ij} \times F_{ij} \le x_{ij} \le c_{ij} \times T_{ij} \quad \forall i \in Dev,\ j \in S_i \qquad (8)$$

VIII. CASE STUDIES

To verify the methodology, two case studies are used. The first case study is a simulation of a group of houses using real heat demand data and real predictions, to verify whether it is possible to make a planning based on prediction. Furthermore, it is verified how well the actual scheduler follows the planning. The second case study is a test with a single house prototype, to verify whether the methodology is also applicable in a real-world situation.

[Fig. 5. Planning and simulation using the three-step methodology for 39 houses. (a) Planning. (b) Simulation.]

_A. Simulation_

A neighborhood consisting of 39 houses has been simulated with our simulator using the three-step approach. From our database with real heat demand data of Dutch households, 39 heat profiles between Nov. 19, 2007 and Nov. 31, 2007 have been extracted and used as input for the simulations.

_1) Planning:_ For all houses, a prediction is made using the above-described method. Using the heat demand predictions, the global planner schedules the runtimes of the generators in these houses. The objective of the planning is a combination of flattening the electricity production and producing during periods when electricity is expensive. Since it is the winter season, there is quite some heat demand. The high heat demand results in less scheduling freedom, making the scheduling more difficult.

The results of the scheduler are depicted in Figure 5(a). The solid line gives the production plan P, the preferred production pattern. However, this objective cannot be reached due to limited scheduling freedom. Two different plannings are made: one using the predicted heat demand (dashed line) and one using the actual heat demand (dotted line). As can be seen, both plannings cannot reach the objective and there is quite a difference between both plannings. The total electricity production of both plannings is almost equal: 475 kWh using the prediction and 477 kWh using the actual demand. However, the periods in which the electricity is produced differ; the sum of the absolute differences per time interval (SAD) between both plannings is 82 kWh, 17% of the total production.
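A compact way to see objective (3) and constraints (4)–(8) in action is the following sketch using the open-source PuLP modeler; the two-device instance (a microCHP with H : E = 8 : 1 and a grid connection) and all interval numbers are invented assumptions, not data from the paper.

```python
import pulp

# Illustrative instance: each device has intervals (F, T, A, B) for costs
# A * x + B, a heat factor H and an electricity factor E.
devices = {
    "chp":  {"H": 8, "E": 1, "ivals": [(0, 0, 0.0, 0.0), (1, 1, 0.3, 2.0)]},
    "grid": {"H": 0, "E": 1, "ivals": [(-2000, 5000, 0.1, 0.0)]},
}
D_h, D_e = 8, 3  # heat and electricity demand in this time interval

prob = pulp.LpProblem("matching", pulp.LpMinimize)
x, c = {}, {}
for d, spec in devices.items():
    for j, (F, T, A, B) in enumerate(spec["ivals"]):
        x[d, j] = pulp.LpVariable(f"x_{d}_{j}")
        c[d, j] = pulp.LpVariable(f"c_{d}_{j}", cat="Binary")
        prob += x[d, j] >= c[d, j] * F   # constraint (8), lower part
        prob += x[d, j] <= c[d, j] * T   # constraint (8), upper part
    prob += pulp.lpSum(c[d, j] for j in range(len(spec["ivals"]))) == 1  # (7)

# Objective (3); x_i is folded in as the sum of the x_ij of device i, cf. (4).
prob += pulp.lpSum(A * x[d, j] + B * c[d, j]
                   for d, spec in devices.items()
                   for j, (F, T, A, B) in enumerate(spec["ivals"]))
prob += pulp.lpSum(spec["H"] * x[d, j] for d, spec in devices.items()
                   for j in range(len(spec["ivals"]))) == D_h          # (5)
prob += pulp.lpSum(spec["E"] * x[d, j] for d, spec in devices.items()
                   for j in range(len(spec["ivals"]))) == D_e          # (6)
prob.solve()
print({k: v.value() for k, v in x.items()})
```

In this toy instance the solver must run the microCHP (to cover the heat demand of 8) and import the remaining electricity from the grid.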
So, the total heat demand is predicted quite accurately (2 kWh difference), but the prediction of the heat demand pattern during the day is less accurate. Since the actual heat demand is not known one day in advance, the planning based on the predicted heat demand is used.

_2) Realtime control:_ Within the simulation, the houses are controlled using the local controller, which receives steering signals from the global controller. For the simulation the real heat demand is used, so the determined planning can probably not be followed exactly due to prediction errors. The results of the simulations are depicted in Figure 5(b). The solid line depicts the planning made by the global planner. The dotted line depicts the actual number of microCHPs running (i.e. the production pattern). The dashed line depicts the production pattern when no optimization is used, i.e. if the microCHPs are only heat-led. The production pattern using optimization deviates 96 kWh from the schedule without optimization (SAD); the optimization methodology shifted 17% of the production, while there was limited optimization potential due to the high heat demand. The total electricity production in the optimized pattern was 540 kWh, more than planned; all free capacity of the heat buffers is used to enable more production capacity to follow the planning as well as possible. The optimized pattern deviates 77 kWh (14%) from the planning (SAD), roughly equal to the prediction error of 82 kWh. Of this 77 kWh, only 10 kWh was underproduction; the rest was overproduction. So, in the actual schedule almost all electricity we promised to produce based on the planning is produced. However, the deviation caused imbalance due to overproduction. So, the scheduler did not efficiently work around prediction errors but tried to reach the promised production by producing more electricity. This drawback might be overcome by taking not only the current state into account in the scheduler but also some future states.

Determining the global planning by the iterative approach using our simulator took a couple of minutes on a single PC (using local TCP/IP connections between the nodes). In a real situation the computational time will decrease, since the computations are distributed, while the communication time will slightly increase. The expectation is that the total time will be in the order of minutes due to the hierarchical structure, which is acceptable for a one-day planning of 24 hours. The computation of the local controller can be done within a second for a five-minute time frame.

_B. Field test_

In [30] we showed that peak shaving and shifting of demand in time using only a realtime scheduler is possible using a single house prototype. In this case study, also the possibility to actually switch the appliances on/off at the preferred times by the local scheduler is verified. The house prototype consists of a Whispergen microCHP, a Gledhill heat buffer, a computer-controllable hot water tap and a controllable thermostat in combination with a heat exchanger.

[Fig. 6. Results of lab tests of local planning and scheduling of a microCHP. (a) Planned and actual free buffer capacity based on a good heat prediction. (b) Planned and actual free buffer capacity based on a less good heat prediction: 1) wrongly predicted peak in demand; 2) effect of the wrong prediction.]
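The field test below exercises exactly this kind of switch-on logic (starts on the scheduler's initiative, with the buffer level as a fallback). A minimal sketch might look as follows; the threshold and tolerance values are illustrative assumptions, not the prototype's settings.

```python
def switch_on(t, planned_starts, free_capacity, capacity_threshold=4.0, tol=0.1):
    """Give a switch-on signal if the local plan scheduled a start at
    (approximately) time t, or, as a fallback, if the heat buffer's free
    capacity exceeds a threshold (the 'natural reaction' to the buffer level)."""
    planned = any(abs(t - ts) <= tol for ts in planned_starts)
    natural = free_capacity >= capacity_threshold
    return planned or natural

print(switch_on(t=9.3, planned_starts=[9.3], free_capacity=2.1))  # True: scheduled start
```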
The objective is to shift production as much as possible to daylight hours (to prevent noise at night). Furthermore, short runs are avoided (wear of the machine). The generator runs until the buffer is filled, so only switch-on signals are given. The planned and actual level in the Gledhill buffer for two different days is given in Figure 6. The heat demand prediction for the day in Figure 6(a) was accurate. Therefore, the planned and actual level in Figure 6(a) are similar and, more importantly, the planned and actual runtimes of the microCHP are also equal. Furthermore, the microCHP is started on the initiative of the scheduler and not as a natural reaction to the buffer level at t = 9.3. The planning for the second day was to switch on the microCHP at t = 7.5 and stay on until t = 10, supplying the peak demand at t = 8.5. However, the peak demand came a few minutes later, the buffer was full before the peak and the microCHP had to be switched off. Therefore, the peak was supplied by heat from the heat buffer, and the actual and scheduled buffer level deviate for multiple hours. This shows the long-term effect of small differences between predicted and actual heat demand. However, re-planning some moment later in time in Figure 6(b) (e.g. at t = 8.5) might have prevented a non-scheduled start at t = 10.2, and the planning might have been followed better.

IX. CONCLUSION AND FUTURE WORK

The three-step methodology proposed in this paper using a hierarchical planning is a scalable solution with limited communication requirements. The local prediction and scheduler result in a generic solution supporting different technologies and houses with different optimization potential. The first case study shows that it is possible to make a planning for a group of houses based on predicted heat demand using an objective. Furthermore, the local scheduler is capable of following this planning up to a certain level. The schedule deviates from the planning due to prediction errors. The local controller is not capable of coping with prediction errors well enough: the promised production is reached by producing more heat than necessary (by filling the heat buffers), resulting in an overproduction at other times. Therefore, improved methods for the local scheduler to work around prediction errors are needed.

The second case study shows that it is possible to determine a planning based on a prediction one day ahead. The models are accurate enough to determine a planning and it is possible to control the microCHP. However, when the heat demand deviates from the prediction, the planned and actual runtimes of the microCHP deviate as well. A wrongly predicted peak (of only a few minutes!) can have a severe impact on the runtime. However, if a new planning is determined, the buffer levels and therefore the runtimes of the microCHP converge earlier.

Current and future work focuses on working around prediction errors. On one hand, the local controller should take future states into account to prevent decisions that influence future states very negatively. On the other hand, when the local controller cannot deal with the prediction errors anymore, re-planning on a higher level is required. Due to the hierarchical structure of the planning, re-planning can be done on different levels.

REFERENCES

[1] A. de Jong, E.-J. Bakker, J. Dam, and H. van Wolferen, “Technisch energie- en CO2-besparingspotentieel in Nederland (2010-2030),” Platform Nieuw Gas, p. 45, July 2006.
[2] J. Scott, P. Vaessen, and F. Verheij, “Reflections on smart grids for the future,” Dutch Ministry of Economic Affairs, Apr. 2008.
[3] “Distributed generation in liberalised electricity markets,” 2002.
[4] A. Molderink, V. Bakker, M. Bosman, J. Hurink, and G. Smit, “A three-step methodology to improve domestic energy efficiency,” in IEEE PES Conference on Innovative Smart Grid Technologies, 2010.
[5] A. Molderink, M. G. C. Bosman, V. Bakker, J. L. Hurink, and G. J. M. Smit, “Simulating the effect on the energy efficiency of smart grid technologies,” in Proceedings of the 2009 Winter Simulation Conference, Austin, Texas, USA. Los Alamitos: IEEE Computer Society Press, December 2009, pp. 1530–1541.
[6] S. Abu-Sharkh, R. Arnold, J. Kohler, R. Li, T. Markvart, J. Ross, K. Steemers, P. Wilson, and R. Yao, “Can microgrids make a major contribution to UK energy supply?” Renewable and Sustainable Energy Reviews, vol. 10, no. 2, pp. 78–127, Sept. 2004.
[7] A. Azmy and I. Erlich, “Impact of distributed generation on the stability of electrical power system,” in Power Engineering Society General Meeting, 2005. IEEE, June 2005, pp. 1056–1063, vol. 2.
[8] R. Morgan, J. Devriendt, and B. Flint, “Micro-CHP: a mass market opportunity?” 2006.
[9] L. Costa, F. Bourry, J. Juban, and G. Kariniotakis, “Management of energy storage coordinated with wind power under electricity market conditions,” in Probabilistic Methods Applied to Power Systems, 2008. PMAPS ’08. Proceedings of the 10th International Conference on, May 2008, pp. 1–8.
[10] N. Lu and D. Hammerstrom, “Design considerations for frequency responsive Grid Friendly™ appliances,” in Transmission and Distribution Conference and Exhibition, 2005/2006 IEEE PES, May 2006, pp. 647–652.
[11] N. Lu and T. Nguyen, “Grid Friendly™ appliances - load-side solution for congestion management,” in Transmission and Distribution Conference and Exhibition, 2005/2006 IEEE PES, May 2006, pp. 1269–1273.
[12] J. Oyarzabal, J. Jimeno, J. Ruela, A. Englar, and C. Hardt, “Agent based micro grid management systems,” in International Conference on Future Power Systems 2005. IEEE, Nov. 2005, pp. 6–11.
[13] C. Block, D. Neumann, and C. Weinhardt, “A market mechanism for energy allocation in micro-CHP grids,” in 41st Hawaii International Conference on System Sciences, Jan. 2008, pp. 172–180.
[14] J. Kok, C. Warmer, and I. Kamphuis, “PowerMatcher: multiagent control in the electricity infrastructure,” in 4th International Joint Conference on Autonomous Agents and Multiagent Systems. ACM, Jul. 2005, pp. 75–82.
[15] M. Hommelberg, B. van der Velde, C. Warmer, I. Kamphuis, and J. Kok, “A novel architecture for real-time operation of multi-agent based coordination of demand and supply,” in Power and Energy Society General Meeting - Conversion and Delivery of Electrical Energy in the 21st Century, 2008 IEEE, July 2008, pp. 1–5.
[16] C. Warmer, M. Hommelberg, B. Roossien, J. Kok, and J. Turkstra, “A field test using agents for coordination of residential micro-CHP,” in Intelligent Systems Applications to Power Systems, 2007. ISAP 2007. International Conference on, Nov. 2007, pp. 1–4.
[17] A. Dimeas and N. Hatziargyriou, “Agent based control of virtual power plants,” in Intelligent Systems Applications to Power Systems, 2007. ISAP 2007. International Conference on, Nov. 2007, pp. 1–6.
[18] R. Caldon, A. Patria, and R. Turri, “Optimisation algorithm for a virtual power plant operation,” in Universities Power Engineering Conference, 2004. UPEC 2004. 39th International, vol. 3, Sept. 2004, pp. 1058–1062.
[19] E. Handschin and F. Uphaus, “Simulation system for the coordination of decentralized energy conversion plants on basis of a distributed data base system,” in Power Tech, 2005 IEEE Russia, June 2005, pp. 1–6.
[20] L. Costa and G. Kariniotakis, “A stochastic dynamic programming model for optimal use of local energy resources in a market environment,” in Power Tech, 2007 IEEE Lausanne, July 2007, pp. 449–454.
[21] J. V. Ringwood, D. Bofelli, and F. T. Murray, “Forecasting electricity demand on short, medium and long time scales using neural networks,” Journal of Intelligent and Robotic Systems, vol. 31, no. 1-3, pp. 129–147, December 2004.
[22] V. Bakker, A. Molderink, J. Hurink, and G. Smit, “Domestic heat demand prediction using neural networks,” in 19th International Conference on System Engineering. IEEE, 2008, pp. 389–403.
[23] A. Molderink, M. Bosman, V. Bakker, J. Hurink, and G. Smit, “Hard- and software implementation and verification of an islanded house prototype,” in International Conference on System Engineering. IEEE, 2009.
[24] A. Wright and S. Firth, “The nature of domestic electricity-loads and effects of time averaging on statistics and on-site generation calculations,” Applied Energy, vol. 84, no. 4, pp. 389–403, April 2007.
[25] B. Krose and P. van der Smagt, “An introduction to neural networks,” 1993. [Online]. Available: citeseer.ist.psu.edu/article/krose93introduction.html
[26] V. Bakker, M. Bosman, A. Molderink, J. Hurink, and G. Smit, “Improve heat demand prediction of individual households,” in Conference on Control Methodologies and Technology for Energy Efficiency, March 2010.
[27] M. G. C. Bosman, V. Bakker, A. Molderink, J. L. Hurink, and G. J. M. Smit, “On the microCHP scheduling problem,” in Proceedings of the 3rd Global Conference on Power Control and Optimization, PCO 2010, Gold Coast, Australia. Australia: PCO, February 2010.
[28] ——, “The microCHP scheduling problem,” in Proceedings of the Second Global Conference on Power Control and Optimization, PCO 2009, Bali, Indonesia. London: Springer Verlag, June 2009, p. 8.
[29] http://www.apxgroup.com/.
[30] A. Molderink, V. Bakker, M. Bosman, J. Hurink, and G. Smit, “Domestic energy management methodology for optimizing efficiency in smart grids,” in IEEE Conference on Power Technology. IEEE, 2009.

X. BIOGRAPHIES

**Albert Molderink** was born in Heerenveen (The Netherlands) in 1983. He received his B.Sc. and M.Sc. degrees in Computer Science from the University of Twente, Enschede, The Netherlands, in 2004 and 2007, respectively. In addition, he received an Electrical Engineering minor certificate. When he completed his studies he started working towards a Ph.D. degree at the University of Twente under the supervision of Prof. dr. ir. G.J.M. Smit. He is working in a research group that investigates the possibilities of increasing energy efficiency using embedded control, mainly via optimization and control algorithms. His research focus is on algorithms to optimize energy streams within a house.

**Vincent Bakker** received his M.Sc. degree in Computer Science from the University of Twente in 2007, with a minor certificate in Entrepreneurship. Currently he is working on his Ph.D. thesis, researching domestic demand prediction for in-home optimization. His current interests are machine learning, optimization modeling and large-scale distributed (intelligent) systems.
**Maurice G.C. Bosman** received his M.Sc. degree in Applied Mathematics from the University of Twente in February 2008. Currently he is a Ph.D. student in the CAES and DMMP groups at the Faculty of Electrical Engineering, Mathematics and Computer Science at the University of Twente. His research interests include energy efficiency, scheduling and online algorithms.

**Johann L. Hurink** received a Ph.D. degree at the University of Osnabrueck (Germany) in 1992 for a thesis on a scheduling problem occurring in the area of public transport. From 1992 until 1998 he was an assistant professor at the same university, working on local search methods and complex scheduling problems. From 1998 until 2005 he was an assistant professor, and from 2005 until 2009 an associate professor, in the group Discrete Mathematics and Mathematical Programming at the Department of Applied Mathematics at the University of Twente. Since 2009 he has been a full professor of the same group. Current work includes the application of optimization techniques and scheduling models to problems from logistics, health care, and telecommunication.

**Gerard J.M. Smit** received his M.Sc. degree in electrical engineering from the University of Twente. He then worked for four years in the research and development laboratory of Océ in Venlo. He finished his Ph.D. thesis, entitled “The design of Central Switch communication systems for Multimedia Applications,” in 1994. He was a visiting researcher at the Computer Lab of Cambridge University in 1994, and a visiting researcher at Lucent Technologies Bell Labs Innovations, New Jersey, in 1998. Since 1999 he has worked on the Chameleon project, which investigates new hardware and software architectures for battery-powered hand-held computers. His current interests are low-power communication, wireless multimedia communication, and reconfigurable architectures for energy reduction. Since 2006 he has been a full professor in the CAES chair (Computer Architectures for Embedded Systems) at the Faculty EEMCS of the University of Twente. Prof. Smit has been and still is responsible for a number of research projects sponsored by the EC, industry and the Dutch government in the field of multimedia and reconfigurable systems.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/TSG.2010.2055904?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/TSG.2010.2055904, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "other-oa", "status": "GREEN", "url": "https://research.utwente.nl/files/6523162/SmartGrid.pdf" }
2,010
[ "JournalArticle" ]
true
2010-08-05T00:00:00
[]
13,740
en
[ { "category": "Economics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01698d3506755a2c12c940de4bb450c1a0a1eb2f
[]
0.879386
The evolution of fixed-supply and variable-supply currencies
01698d3506755a2c12c940de4bb450c1a0a1eb2f
Humanities and Social Sciences Communications
[ { "authorId": "32806034", "name": "Guizhou Wang" }, { "authorId": "2994071", "name": "K. Hausken" } ]
{ "alternate_issns": null, "alternate_names": [ "Humanit Soc Sci Commun" ], "alternate_urls": null, "id": "91194503-cbea-4904-81c9-15005d57575b", "issn": "2662-9992", "name": "Humanities and Social Sciences Communications", "type": "journal", "url": "https://www.nature.com/palcomms/" }
Competition is analyzed between a fixed-supply currency (e.g. Bitcoin) and a variable-supply currency (e.g. a fiat currency). Two kinds of players support the currencies differently and choose their volume fractions of transactions in each currency. The variable-supply currency enables money printing/withdrawal and inflation/deflation, which counteract each other in each player’s utility. The exponentially increasing 1959–2021 US M2 money supply and the positive inflation cause this utility to increase over time with high weight assigned to money printing/withdrawal, and decrease otherwise. Three replicator equations determine each player’s volume fraction of transactions in each currency, and which kind of player each player prefers to be. High weight assigned to money supply relative to inflation induces players to prefer the variable-supply currency. A player’s utility of transacting in each currency is proportional to the player’s support of that currency, the volume fraction of all players’ transactions in that currency, and the fraction of players of the same kind as the given player. A player’s utility of transacting in the variable-supply currency is additionally proportional to two ratios. The first is the initial money supply plus the accumulative money printing/withdrawal divided by the initial money supply. The second is the inverse of the accumulative inflation/deflation. The players’ fractions of transactions in each currency may be inverse U shaped or U shaped before typically converging towards preferring one or the other currency. If each player can choose which kind of player to be, it may choose to be the kind with the highest support of a given currency. If a player’s utility of transacting in a given currency depends more on the fraction of players being of one kind than the other kind, the player prefers to be of the first kind, thus assigning less weight to its support of that currency and the volume fractions of transactions in that currency.
## ARTICLE

https://doi.org/10.1057/s41599-022-01150-3 **OPEN**

# The evolution of fixed-supply and variable-supply currencies

### Guizhou Wang [1,2] & Kjell Hausken [1,2] ✉

Competition is analyzed between a fixed-supply currency (e.g. Bitcoin) and a variable-supply currency (e.g. a fiat currency). Two kinds of players support the currencies differently and choose their volume fractions of transactions in each currency. The variable-supply currency enables money printing/withdrawal and inflation/deflation, which counteract each other in each player's utility. The exponentially increasing 1959–2021 US M2 money supply and the positive inflation cause this utility to increase over time with high weight assigned to money printing/withdrawal, and decrease otherwise. Three replicator equations determine each player's volume fraction of transactions in each currency, and which kind of player each player prefers to be. High weight assigned to money supply relative to inflation induces players to prefer the variable-supply currency. A player's utility of transacting in each currency is proportional to the player's support of that currency, the volume fraction of all players' transactions in that currency, and the fraction of players of the same kind as the given player. A player's utility of transacting in the variable-supply currency is additionally proportional to two ratios. The first is the initial money supply plus the accumulative money printing/withdrawal divided by the initial money supply. The second is the inverse of the accumulative inflation/deflation. The players' fractions of transactions in each currency may be inverse U shaped or U shaped before typically converging towards preferring one or the other currency. If each player can choose which kind of player to be, it may choose to be the kind with the highest support of a given currency. If a player's utility of transacting in a given currency depends more on the fraction of players being of one kind than the other kind, the player prefers to be of the first kind, thus assigning less weight to its support of that currency and the volume fractions of transactions in that currency.

1 Faculty of Science and Technology, University of Stavanger, 4036 Stavanger, Norway. 2 These authors contributed equally: Guizhou Wang, Kjell Hausken. ✉ email: kjell.hausken@uis.no

Introduction

Background. Humans have used cash currencies for 40,000 years, which evolved from natural objects to coins to paper to digital versions (Kusimba, 2017). The Mesopotamian shekel emerged nearly 5000 years ago, and silver and gold mints emerged in Asia Minor 650–600 B.C., expanding to lead and copper coins in the first millennium A.D. Currencies commonly have a central authority and usually emerged for certain geographic areas and nations. Sometimes the expansion is global, e.g. as a world reserve currency. Fiat currencies have more recently expanded to also become digital. Digital currencies such as Bitcoin have no central authority and easily expand globally. Nakamoto (2008) shows how a decentralized currency such as Bitcoin can be built on a blockchain. He applies the proof of work technology to secure the ledger and avoid the double spending problem.
Today 17,834 cryptocurrencies exist with a market cap of $1.8 trillion (https://coinmarketcap.com/, retrieved February 26, 2022). These vary substantially regarding fixed versus variable supply, consensus mechanisms (e.g. proof of stake), degree of decentralization, ownership, regulation, confirmation of transactions, etc. New digital currencies suggest competition between these and conventional currencies. Understanding this competition can be expected to be essential in the coming years.

Contribution. This article's purpose, motivation, objectives, research hypotheses, and research questions are as follows: First, competition between one fixed-supply and one variable-supply currency is analyzed to determine the evolutionary dynamics of each currency and which currency survives. Second, each player maximizes its utility by choosing which volume fraction of transactions to conduct in each currency, and which of two kinds of player to be, depending on various preferences. Third, the variable-supply currency enables money printing/withdrawal, which impacts inflation/deflation, which impacts each player's utility and strategic choices and thus how each currency evolves.

Being a certain kind of player means supporting one or the other currency to a certain extent. Such support is expressed by a currency's backing, convenience, confidentiality, transaction efficiency, financial stability, and security. A player's utility of transacting in the fixed-supply currency depends on the player's support of that currency, the volume fraction of all players' transactions in that currency, and the fraction of players of the same kind as the given player. A player's utility of transacting in the variable-supply currency depends on the same kinds of factors, and additionally depends on the variable money supply and inflation/deflation. That latter dependence is expressed on the Cobb Douglas form multiplying two ratios, i.e. the initial supply plus the accumulative money printing/withdrawal divided by the initial supply, and the inverse of accumulative inflation/deflation. If both ratios are valued equally and multiply to 1, money printing/withdrawal and inflation/deflation counteract each other. A product higher (lower) than 1 suggests higher (lower) weight to money printing/withdrawal.

Fixed-supply currencies have been historically uncommon. Gold viewed as a currency (Mitchell, 2021) is the best example, with 1.5% additional gold mined in 2020: 197,576 metric tons have been mined in total (gold.org, 2022), and 3,030 metric tons were produced in 2020 (Basov, 2022). As a comparison, as of January 2022, 18.9 million Bitcoin out of 21 million coins have been mined, i.e. 90% (Hayes, 2022). The process will continue at a decreasing speed until approximately 2140. Both gold and Bitcoin are durable and fungible (Learn, 2021). Gold has a more established history, with more entrenchment in cultures, central banks, and institutions, but falls short of Bitcoin on portability, divisibility, censorship resistance, verifiability, and scarcity (Ikkurty, 2019).

Whereas fixed-supply currencies eliminate inflation/deflation caused by money printing/withdrawal, variable-supply currencies do not. Variable-supply currencies offer added flexibility and possibilities not possible for fixed-supply currencies, e.g. funding wars and critical events, and Roosevelt's 1933–1939 New Deal for economic recovery.
Money printing during such events suggests subsequent contraction to avoid inflation. Many economies have not exhibited sufficient fiscal discipline. Even a traditionally fiscally responsible economy like the US has experienced that $1 in 2022 buys 1.22% of what it would buy in 1695. Using the 1959–2021 US M2 money supply and inflation data, we show how a player's utility of exchanging in the fixed-supply currency is constant over time. The player's utility of exchanging in the variable-supply currency increases over time if more weight is assigned to money printing/withdrawal, and otherwise decreases over time.

For each kind of player, one replicator equation expresses that player's transaction volume in each currency. A third replicator equation expresses how each player prefers to be of one or the other kind. Each player's fractions of transactions in each currency may be inverse U shaped or U shaped before converging towards preferring one or the other currency, depending on the player's support of each currency. If a player can choose which kind of player to be, thus changing its support for a certain currency, it may choose to be of the kind which supports a certain currency highly. If a player is additionally impacted by how many players exist of each kind, it may choose to be of the kind that is most common. Understanding how players choose between competing currencies is useful for consumers, traders, policy makers, regulators, institutions designing and issuing currencies, and institutions adjusting and impacting money supply and inflation/deflation.

Literature. Four groups of literature have been identified, i.e. competition between fiat currencies and cryptocurrencies, central bank digital currencies and cryptocurrencies, the cryptocurrency market, and game theoretic analyses.

Competition between fiat currencies and cryptocurrencies. Schilling and Uhlig (2019) evaluate how agents choose between a cryptocurrency and a fiat currency. Cryptocurrencies may enable tax evasion, anonymity, and censorship resistance, impacted by transaction fees to miners. Fiat currencies are currently useful for most purchases, impacted by value-added taxes. They argue that substitution decreases as the asymmetry in exchange fees and transaction costs increases. This finding relates to how players in the current article choose volume fractions of transactions in two currencies, depending on their support for each currency, which in turn depends on each currency's transaction efficiency, among other factors. Fernández-Villaverde and Sanches (2019) specify a price-stable equilibrium, and some less desirable equilibria, for multiple competing privately issued fiat currencies in a Lagos-Wright environment. Their approach has a linkage to the analysis of two coexisting currencies in the current article. Almosova (2018) evaluates costly circulation of private currencies, impacted by verification of transactions, mining costs, etc. She finds that sufficiently low costs of private currency circulation (mining costs) are needed to put downward pressure on the inflation of the public currency. Cryptocurrency competition may not cause price stability. These insights relate to the current article, where players may choose a fixed-supply currency to avoid the inflation in the variable-supply currency.
Benigno et al. (2019) evaluate a global cryptocurrency and two national currencies. They find that different interest rates may cause the national currency to be abandoned or the zero lower bound may be approached. They argue that ensuring an independent monetary policy, free capital flows, and a fixed exchange rate may become even less possible. As a comparison, the current article evaluates various other conditions that may cause a currency to be abandoned. Rahman (2018) considers how monetary policy is impacted by fiat and digital currency competition. He argues that a purely private arrangement of digital currencies cannot cause socially efficient allocation, and that optimal monetary policy at the Friedman rule will be socially inefficient. These insights suggest the need to understand the nature of currency competition. Verdier (2021) analyzes how competition in the deposit and lending markets is impacted by a digital currency. She finds that the digital currency crowds out bank deposits, causing increasing bank lending rates. That insight furthermore illustrates how currency competition can cause substantial disruption, which suggests a need to understand the evolutionary dynamics.

Central bank digital currencies and cryptocurrencies. Caginalp and Caginalp (2019) analyze how the wealthy divide their assets between a cryptocurrency and a home currency, similarly to how the current article analyzes players choosing how to transact in two currencies. Additionally, they evaluate how a government can confiscate some of the players' assets. Blakstad and Allen (2018) evaluate various conditions for issuing central bank digital currencies, and risks and possibilities associated with cryptocurrencies. Their analysis relates to the current article, where two currencies may be supported differently, and the variable-supply currency may be designed with different characteristics related to facilitating money printing/withdrawal and inflation/deflation. Masciandaro (2018) analyzes the evolution of different media of payments depending on individual preferences, similarly to this article modeling this evolution. They assess the implications for monetary policy, addressing the zero lower bound constraint for interest rates, and banking policy, e.g. risks of bank disintermediation when the opportunity-cost discrepancies between currencies decrease. That latter focus is partly or indirectly present in the current article in the sense that the abandonment of a variable-supply currency may cause banks to change how they operate. Benigno (2021) argues that competing currencies may cause central banks to lose control of the nominal interest rate and inflation, which depend on structural factors. Cryptocurrencies may set lower bounds on interest rates and inflation. The implication of that insight may be the kind of coexistence of two currencies, or one currency going extinct, as analyzed in the current article. Asimakopoulos et al. (2019) evaluate substitution between a government currency and a cryptocurrency, depending on preferences, technology and monetary policy shocks, akin to how the current article considers players' substitution between currencies.

The cryptocurrency market. ElBahrawy et al. (2017) analyze the 2013–2017 evolutionary dynamics of market shares of cryptocurrencies.
They find several stable statistical properties, e.g. the market share distribution, turnover, and number of active cryptocurrencies. The current article confines attention to the evolutionary dynamics of two currencies. Caporale et al. (2018) find that cryptocurrencies' past and future values are positively correlated, with changing degree over time. They argue that this constitutes market inefficiency, enabling the generation of abnormal profits. Partly related, the current article shows how players' utilities change over time depending on how they transact in two currencies. ElBahrawy et al. (2019) evaluate the interplay between online Wikipedia attention and market performance of cryptocurrencies. They find that tightly knit editors impact Wikipedia and that trading based on Wikipedia views mostly performs better than baseline strategies, apart from buying and holding during explosive market expansion. This also illustrates how players' utilities change over time depending on various strategies, as analyzed in this article. White (2014) evaluates the market shares of Bitcoin and altcoins, similarly to this article evaluating players' volume fractions of transactions in two currencies. Sapkota and Grobys (2021) identify market inefficiency where privacy coins exhibit market equilibrium unrelated to non-privacy coins. They suggest that the result may be due to criminals preferring non-privacy coins with high liquidity and anonymity. Their approach shows how players consciously choose between currencies with different properties, as in the current article. Milunovich (2018) determines weak connectedness between six major asset classes and five cryptocurrencies, and mostly strong connectedness within each of these two groups. If such weak connectedness proves to be common for multiple currencies, that suggests the need to understand how players choose between multiple currencies with different characteristics, as in the current article. Gandal and Halaburda (2016) characterize recent cryptocurrency competition as winner-take-all, and early competition as not winner-take-all. That more recent insight may reflect the finding in this article of players gradually moving towards favoring one or the other currency.

Game theoretic analyses. Imhof and Nowak (2006) consider a stochastic frequency-dependent Wright–Fisher process to determine the survival of two strategies. They specify two absorbing states for the Markov process, where homogeneous populations choose either strategy A or strategy B. Players typically abandon a strategy occurring less frequently than 1/3 in an unstable equilibrium. That corresponds partly to this article's finding of players often preferring one or the other currency. Lewenberg et al. (2015) apply cooperative game theory to determine that Bitcoin mining pools may find it challenging to distribute rewards in a stable way, causing players to switch pools frequently. That, in turn, may cause fluctuations, which suggests the importance of applying evolutionary dynamics to assess players' preferences over time.

Article organization. Section "The model" presents the model. Section "Analyzing the model" analyzes the model. Section "Discussion and future research" discusses the results. Section "Conclusion" concludes.

The model

Nomenclature
Parameters

- g: Fixed-supply currency
- n: Variable-supply fiat currency
- t_0: Initial time, t_0 ≥ 0
- T: Final time, T ≥ t_0
- j: Time counting variable, t_0 ≤ j ≤ T
- i: Player of kind i, i = 1, 2
- s_it: Player i's support of currency g relative to currency n at time t, 0 ≤ s_it ≤ 1
- μ_i: Scaling proportionality parameter in player i's utilities u_igt and u_int, μ_i ≥ 0
- m_i: Scaling exponent in player i's utilities u_igt and u_int, m_i ≥ 0
- S_j: Supply at discrete time j of the variable-supply fiat currency n, S_j ∈ ℝ
- π_j: Inflation at time j, π_j ∈ ℝ
- α_i: Player i's Cobb Douglas elasticity for money supply S_j, 0 ≤ α_i ≤ 1
- k_i: Player i's process sensitivity for the fraction p_it in the replicator equation, k_i ≥ 0
- h: Process sensitivity for the fraction q_1t in the replicator equation, h ≥ 0

Independent variables

- t: Time, t ≥ t_0
- p_it: Volume fraction of player i's transactions in currency g at time t, 0 ≤ p_it ≤ 1
- q_it: Fraction of players of kind i at time t, 0 ≤ q_it ≤ 1, q_1t = 1 − q_2t

Dependent variables

- p_t: Volume fraction of all players' transactions in currency g at time t, 0 ≤ p_t ≤ 1
- u_igt: Player i's utility of transacting in the fixed-supply currency g at time t, u_igt ≥ 0
- u_int: Player i's utility of transacting in the variable-supply currency n, u_int ≥ 0
- u_it: Player i's weighted utility of transacting in both currencies, u_it ≥ 0
- u_t: Society's utility weighing the utilities of all players of both kinds, u_t ≥ 0

Overview of the model. Section "Simplified player utilities" presents the simplified player utilities, where two kinds of players receive a fixed utility depending on their support of a fixed-supply currency to two different extents. They also receive a variable utility of transacting in the variable-supply currency, depending on money printing/withdrawal of that currency and inflation/deflation. Section "More realistic player utilities" generalizes so that the two kinds of players' utilities also depend on their support of a given currency, the volume fraction of all players' (of both kinds) transactions in the given currency, and the fraction of players of the same kind as the player being analyzed. Section "Replicator dynamics" introduces three replicator equations specifying each player's volume fraction of transactions in each currency, and which kind of player each player prefers to be.

Simplified player utilities. Consider two kinds of players, referred to as kind i, i = 1, 2. Assume that player i (i.e. a player of kind i) earns a simplified utility u_igst of transacting in the fixed-supply currency g proportional to player i's support s_it, 0 ≤ s_it ≤ 1, of currency g relative to currency n at time t, i.e.

$$u_{igst} = 0.5\, s_{it} \qquad (1)$$

where the scaling 0.5 is chosen to ensure comparison with the generalization in the next section. Assume further that player i's utility u_inst of transacting in the variable-supply currency n is proportional to its support 1 − s_it of currency n. Player i's utility u_inst also depends on the variable money supply S_j and inflation/deflation π_j, expressed on the Cobb Douglas form with elasticities α_i and 1 − α_i, respectively, 0 ≤ α_i ≤ 1. We assume money supply S_j, S_j ∈ ℝ, at the discrete times j = t_0, t_0 + 1, …, T, where t_0 ≥ 0 is the initial time and T is the final time. Any time interval of length 1 applies, e.g. year, month, week, day, etc.
Thus S_{j+1} − S_j is the changed supply from time j to time j + 1, Σ_{j=t_0}^{t−1}(S_{j+1} − S_j) is the changed supply from j = t_0 to j = t − 1, and (S_{t_0} + Σ_{j=t_0}^{t−1}(S_{j+1} − S_j))/S_{t_0} is the supply at time t divided by the supply at time t_0, which expresses player i's purchasing power at time t relative to its purchasing power at time t_0 without inflation. With inflation π_j, π_j ∈ ℝ, at time j = t_0, …, T, an asset valued at 1 at time j = t_0 is valued at 1/∏_{j=t_0+1}^{t}(1 + π_j) at time j = t, thus degrading the asset value due to accumulative inflation if ∏_{j=t_0+1}^{t}(1 + π_j) > 1, and increasing the asset value otherwise. Thus player i's simplified utility of transacting in the variable-supply currency n is

$$u_{inst} = 0.5\left(1-s_{it}\right)\left(\frac{S_{t_0}+\sum_{j=t_0}^{t-1}\left(S_{j+1}-S_j\right)}{S_{t_0}}\right)^{\alpha_i}\left(\frac{1}{\prod_{j=t_0+1}^{t}\left(1+\pi_j\right)}\right)^{1-\alpha_i} \qquad (2)$$

If α_i ≥ 0.5, player i assigns more weight to purchasing power than to inflation/deflation, and conversely if α_i < 0.5. Equal weights α_i = 0.5 can theoretically be conceptualized as equating the two last Cobb Douglas terms in Eq. (2) with 1, where player i's adjusted purchasing power from adjusted money supply S_{j+1} − S_j is exactly offset by inflation/deflation π_j through time.

More realistic player utilities. A fraction q_it of the players are of kind i at time t, where q_1t = 1 − q_2t, 0 ≤ q_1t ≤ 1. Player i chooses a volume fraction p_it of its transactions in currency g, and the remaining volume fraction 1 − p_it of its transactions in currency n; see Fig. 1, which exemplifies with p_1t > p_2t and q_1t < q_2t, but generally 0 ≤ p_it ≤ 1, 0 ≤ q_it ≤ 1, i = 1, 2. Hence the volume fraction p_t at time t of all players' transactions in currency g is the weighted sum of each player i's volume fraction p_it in currency g, weighted by the fraction q_it of each kind of player i, i = 1, 2, i.e.

$$p_t = p_{1t}q_{1t} + p_{2t}q_{2t} \qquad (3)$$

More realistically than in the previous section "Simplified player utilities", assume that player i earns a utility u_igt of transacting in the fixed-supply currency g proportional to three factors, i.e. its support s_it of currency g relative to currency n, the volume fraction p_t of all players' (of both kinds) transactions in currency g, and the fraction q_it of players of kind i. We operationalize the latter as 1 + μ_i q_it^{m_i}, where μ_i ≥ 0 is a scaling proportionality parameter and m_i ≥ 0 is a scaling exponent. Thus a negligible fraction q_it ≈ 0 causes the proportionality parameter to be ≈ 1, and a dominant fraction q_it = 1 causes the proportionality parameter 1 + μ_i. Generalizing Eq. (1), player i's utility of transacting in the fixed-supply currency g is

$$u_{igt} = s_{it}\left(p_{1t}q_{1t}+p_{2t}q_{2t}\right)\left(1+\mu_i q_{it}^{m_i}\right) \qquad (4)$$

Analogously, player i's utility of transacting in the variable-supply currency n is proportional to the same three factors, i.e. its support 1 − s_it of currency n, the volume fraction 1 − p_t of all players' transactions in currency n, and 1 + μ_i q_it^{m_i}.

[Fig. 1. Volume fractions p_1t and p_2t of transactions in currencies g and n for two kinds of players of different fractions q_1t and q_2t. Player i, i = 1, 2, chooses a volume fraction p_it of its transactions in currency g, and 1 − p_it in currency n, 0 ≤ p_it ≤ 1, 0 ≤ q_it ≤ 1, q_1t + q_2t = 1, i = 1, 2.]
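To make these utility building blocks operational, the following is a minimal sketch of the Cobb Douglas factor in Eq. (2) and of Eq. (4), assuming illustrative money-supply and inflation series rather than the Federal Reserve M2 and CPI data used below.

```python
import numpy as np

def supply_inflation_factor(S, pi, alpha_i):
    """The two Cobb Douglas ratios in Eq. (2): purchasing-power growth
    (S_t / S_t0)**alpha_i times inverse accumulated inflation
    (1 / prod(1 + pi_j))**(1 - alpha_i)."""
    S = np.asarray(S, dtype=float)    # S[0] = S_t0, ..., S[-1] = S_t
    pi = np.asarray(pi, dtype=float)  # inflation over (t0, t], length len(S) - 1
    growth = S[-1] / S[0]             # equals (S_t0 + sum of changes) / S_t0
    deflator = 1.0 / np.prod(1.0 + pi)
    return growth ** alpha_i * deflator ** (1.0 - alpha_i)

def u_igt(s_it, p_1t, q_1t, p_2t, q_2t, mu_i, m_i, q_it):
    """Eq. (4): utility of transacting in the fixed-supply currency g."""
    return s_it * (p_1t * q_1t + p_2t * q_2t) * (1.0 + mu_i * q_it ** m_i)

# Illustrative numbers only, not the US M2 / CPI series of the paper:
print(supply_inflation_factor(S=[100.0, 110.0, 125.0], pi=[0.03, 0.04], alpha_i=0.6))
print(u_igt(0.5, 0.5, 0.5, 0.5, 0.5, 0.0, 1.0, 0.5))  # baseline value 0.25
```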
4 HUMANITIES AND SOCIAL SCIENCES COMMUNICATIONS | (2022) 9:137 | https://doi.org/10.1057/s41599-022-01150-3 ----- ## HUMANITIES AND SOCIAL SCIENCES COMMUNICATIONS | https://doi.org/10.1057/s41599-022-01150-3 ARTICLE ’ ’ players transactions in currency n, and 1 þ μ i q [m] it [i] [. Generalizing] Douglas elasticities α i ¼ 0:6; 0:5; 0:35; 0:2. Player i s utility is ’ Eq. (2), player i s utility of transacting in the variable-supply constant at u igt ¼ 0:25 since currency g has no changes in supply currency n is and no inflation. High and intermediate weights α i ¼ 0:6 and ’ u int ¼ 1� � s it ��1 � p 1t q 1t � p 2t q 2t ��1 þ μ i q [m] it [i] � α i ¼ 0:5 for changes in money supply S j causes player i s utility ´ � S t0 þ∑ j [t] ¼ [�] t [1] S 0t0 ð [S] jþ1 [�][S] j Þ� α i Q tj¼t0 þ 1 1 ð [1][þ][π] [j] Þ! 1�α i ð5Þ ucausesplot playerslightly above and below int to increase. Low weight u int to decrease overall. Figure i’s weighted utility u int ¼ α u i 0 ¼ iAt :25. Very low weight 0 of transacting in both cur-: 235 causesc uses Eqs. ( u int 6 to oscillate) and ( α i ¼7 0) to:2 Equations (4), (5) simplify to Eqs. (1), (2) when p it ¼ q it ¼ 0:5 rencies and society’s utility u At weighing the utilities of all players and μ i ¼ 0. Player i’s utility at time t is the weighted combination of both kinds. These two utilities u iAt ¼ u At are equal since of its volume fraction p it of transactions in the fixed-supply p it ¼ q it ¼ 0:5. Since u igt ¼ 0:25, the weighted utilities u iAt ¼ u At currency g, and its remaining volume fraction 1 � p it in the increase less for α i ¼ 0:6 and α i ¼ 0:5 and decrease less for variable-supply currency n, i.e. α i ¼ 0:2. u it ¼ p it u igt þ 1� � p it �u int ð6Þ Society’s utility, comprising all players of both kinds, is Replicator dynamics with simpliEqs. (1) and (2). Figure 3 applies the simplified utilitiesfied utilities u igst and u u igstinst and in u t ¼ q 1 u 1t þ 1� � q 1 �u 2t ð7Þ uplayer inst in Eqs. ( i’s fraction1), ( p2) and the replicator equation in Eq. ( it 1959–2021 with the same assumptions as in8) to plot Fig. 2, i.e. q it ¼ 0:5, μ i ¼ 0, and 0:01 ≤ s it ≤ 0:99. Player i’s process Replicator dynamicsPlayer i’s volume of transactions in the fixed-supply currency g’ . To sensitivity and initial condition areassumes the high weight α i ¼ 0:6 for money supply k i ¼ p it 0 ¼ 0: S5. Figure j . With low 3a analyze the evolution of the fractiontransactions in the fixed-supply currency p it of player g, causing 1 i s volume of � p it to be supportvariable-supply currency s it ≤ 0:5 for the fi nxed-supply currency, the fraction p it of transactions in g relative to the in currencyWeibull, 1997 n, the replicator equation (Taylor and Jonker,) 1978; the fraction increases to a maximumcurrency g decreases towards zero. With higher support p it ¼ 0:59 in 1972, and s it ¼ 0:6, ∂p it thereafter decreases towards lim ∂t [¼][ k] [i] [p] [it] [ u] � [igt] [ �] [u] [it] � ¼ k i p it 1� � p it ��u igt � u int � ð8Þ occurs because of the high weight t!T [p] [it] α [ �] i ¼ [0. That eventual decrease] 0:6 assigned to money is applied, inserting Eq. (6), where k i - 0 is the process sensitivity, supply S j, which for the US 1959–2021 has meant preferable i.e. how rapidly the fraction p it changes. Intermediate k i causes a money printing, which is impossible for the fixed-supply currency stable process, while high and low p it give quick and slow g. With higher support s it ¼ 0:7, the fraction increases to a changes, respectively. The right-hand side of Eq. 
(tional to the difference u igt � u it between player i8) is propor-’s utility of highmaximumsupport p it ¼ 0s: it 84 in 1990, and thereafter decreases. With very ¼ 0:99, the fraction increases towards transacting in thecombination of both utilities in Eq. ( fixed-supply currency6), and also proportional to’ g and the weighted cause player t lim !T [p] [it] [ �] [1. Hence suf] i to prefer it even with high weight assigned to [fi][ciently high support][ s] [it] [ for currency][ g][ can] the difference u igt � u int between player i s utility of transacting in money supply S j . Figure 3b assumes the low weight α i ¼ 0:2 for theWhen fixed-supply currency u igt exceeds u it or g u and the variable-supply currency int, the fraction p it increases, and n. money supplyp it to quickly increase towards lim S j . High support s it ≥ 0:6 then causes the fraction decreases otherwise. The right-hand side of Eq. (8) is furthermore t!T [p] [it] [ �] [1. Intermediate support] proportional to p nt 1� � p nt � which is inverse U shaped with a s it ¼ 0:5 causes the fraction p it to decrease marginally to p it ¼ maximum at p it ¼ 0:5 and minima at p it ¼ 0 and p it ¼ 1. The 0:498 in 1968, and thereafter increase towards lim t!T [p] [it] [ �] [1. Sup-] fractions p it and 1 � p it change most quickly when equally large, port s it ¼ 0:4 causes p it to decrease to p it ¼ 0:32 in 1979, and and most slowly when one fraction dominates the other. thereafter to increase. Support s it ¼ 0:3 causes p it to decrease to p it ¼ 0:115 in 2000, and thereafter to increase marginally to p it ¼ The fraction q 1t of players of kind 1. If we allow each player of 0:126 in 2021. Negligible support s it ¼ 0:01 causes p it to decrease kind 1 to change its preferences so as to be of kind 2, and each quickly to lim player of kind 2 to be of kind 1, we can analyze the analogous t!T [p] [it] [ �] [0.] evolution of the fraction1 � q 1t to be of kind 2, i.e. q 1t of players of kind 1, causing q 2t ¼ that the process sensitivity is 10 times higher, i.e.Figure 3c, d makes the same assumptions as Fig. k 3 i ¼a, b except 5. That causes p it to approach lim ∂q 1t t!T [p] [it] [ �] [0 more quickly when][ s] [it] [ ≤] [0][:][3 and] ∂t [¼][ hq] [1][t] [ u] � [1][t] [ �] [u] [t] � ¼ hq 1t 1� � q 1t ��u 1t � u 2t � ð9Þ approach lim t!T [p] [it] [ �] [1 more quickly when][ s] [it] [ ≥] [0][:][99. In Fig.][ 3][c] where Eq. (7) is inserted and the process sensitivity h > 0 is where α i ¼ 0:6, p it when s it ¼ 0:6 reaches a higher maximum interpreted analogously to k i >0 in Eq. (8). p it ¼ 0:59 than in Fig. 3a, but in the same year 1972. Also in Fig. 3c, p it when s it ¼ 0:7 reaches a maximum extremely close to 1 (determined numerically as p it ¼ 0:9999999314), which is higher Analyzing the modelThe US 1659–2021. Figure 2a, b plots the US M2 money supply than in Fig.decreases towards lim 3a, and in the same year 1990, and thereafter S j (Federal Reserve, 2022) and the US inflation π i (CPI Inflation t!T [p] [it] [ �] [0. Similarly in Fig.][ 3][d where][ α] [i] [ ¼][ 0][:][2,] Calculator, 2022) from time t 0 ¼ 1959 to time T ¼ 2021. Figure p it when s it ¼ 0:5 reaches a lower minimum p it ¼ 0:476 than in 2c uses Eqs. (4), (5) and the empirics in Fig. 2a, b to plot player i’s Fig. 3b, and in the same year 1968, and thereafter increases utilities u igt and u int of transacting in both currencies, assuming towards lim t!T [p] [it] [ �] [1. 
Also in Fig.][ 3][d,][ p] [it] [ when][ s] [it] [ ¼][ 0][:][4 reaches a] support s it ¼ 0:5, equal volume fractions p it ¼ 0:5 of transactions minimum extremely close to 0 (determined numerically as in both currencies, equal fractions q it ¼ 0:5 of both kinds of p it ¼ 0:000429), which is lower than in Fig. 3b, and in the same players, scaling proportionality parameter μ i ¼ 0, and Cobb year 1979, and thereafter increases towards lim t!T [p] [it] [ �] [1.] HUMANITIES AND SOCIAL SCIENCES COMMUNICATIONS | (2022) 9:137 | https://doi.org/10.1057/s41599-022-01150-3 5 ----- ## ARTICLE HUMANITIES AND SOCIAL SCIENCES COMMUNICATIONS | https://doi.org/10.1057/s41599-022-01150-3 Fig. 2 US M2 money supply, US inflation, player utilities and society’s utility. a US M2 money supply S j 1959–2021 in $billion. b US inflation π i 1959–2021. c and d Player i’s utilities u igt, u int, u it, u t as functions of time t when s it ¼ p it ¼ q it ¼ 0:5, μ i ¼ 0 and α i ¼ 0:6; 0:5; 0:35; 0:2. Fig. 3 The volume fraction p it of player i’s transactions in currency g at time t 1959–2021 with simplified utilities u igst and u inst in Eqs. (1) and (2) when p it 0 ¼ 0:5, μ i = 0, and 0:01 � s it � 0:99. a α i ¼ 0:6, k ¼ 0:5, b α i ¼ 0:2, k ¼ 0:5, c α i ¼ 0:6, k ¼ 5 and d α i ¼ 0:2, k ¼ 5. Replicator dynamics with the utilities u igt and u int in Eqs. (4) α i ¼ 0:6 assigned to money supply S j, two curves that approach and (5). Figure 4 applies the utilities u igt and u int in Eqs. (4), (5), lim and Eq. (q it ¼ p it 0 ¼8) to plot k i ¼ 0: p5, it μ with the same assumptions as in Fig. i ¼ 0, and 0:01 ≤ s it ≤ 0:99. Accounting for 3, i.e. approach lim t!T [p] [it] [ �] [0 or eventually decrease favoring currency] t!T [p] [it] [ �] [1 in Fig.][ 4][a so that player][ i][ prefers currency g][ n][ in Fig.][ 3][a,] or limp it in the utilities u igt and u int causes p it to approach lim t!T [p] [it] [ �] [0] until 2019 in Fig.instead. First, with high support 3a which positively impacts player s it ¼ 0:7 for currency i’s utility g, p it - u0 igt :5 t!T [p] [it] [ �] [1 more quickly than in Fig.][ 3][. With high weight] 6 HUMANITIES AND SOCIAL SCIENCES COMMUNICATIONS | (2022) 9:137 | https://doi.org/10.1057/s41599-022-01150-3 ----- ## HUMANITIES AND SOCIAL SCIENCES COMMUNICATIONS | https://doi.org/10.1057/s41599-022-01150-3 ARTICLE Fig. 4 The volume fraction p it of player i’s transactions in currency g at time t 1959–2021 with the utilities u igt and u int in Eqs. (4) and (5) when q it ¼ p it 0 ¼ k i ¼ 0:5, μ i ¼ 0, and 0:01 � s it � 0:99. a α i ¼ 0:6 and b α i ¼ 0:2. causing player i to favor currency g in Fig. 4a. Second, with slightly lower support s it ¼ 0:6 for currency g, p it >0:5 until 1985 in Fig. 3a which is sufficient for player i to quickly favor currency g in Fig. 4a, contrary to Fig. 3a. With low weight α i ¼ 0:2 assigned to money supply S j, only one curve that eventually increases in Fig. 3b, with support s it ¼ 0:4, quickly decreases in Fig. 4b. That curve eventually increases in Fig. 3b since player i’s utility u igt does not depend on p it . That enables player i to favor currency g since low weight α i ¼ 0:2 assigned to money supply S j causes player i to prefer to avoid the inflation associated with currency n. The opposite result follow in Fig. 4b since p it <0:5 until 2008, causing p it to quickly decrease towards lim t!T [p] [it] [ �] [0] where currency n is preferred. Replicator dynamics when players support currency g differently with s 1t ≠s 2t . 
Replicator dynamics when players support currency g differently with s_1t ≠ s_2t. This section assumes that the two kinds of players support currency g differently with s_1t ≠ s_2t. Figure 5 applies Eq. (8) to plot the volume fractions p_1t and p_2t of player i's transactions, i = 1, 2, in currency g with the same assumptions as in Fig. 4, i.e. q_it = p_it0 = k_i = 0.5, μ_i = 0, and 0.01 ≤ s_it ≤ 0.99. Additionally, s_1t ≠ s_2t.

Fig. 5 The volume fractions p_1t and p_2t of the two kinds of players' transactions in currency g at time t 1959–2021 with different support s_1t ≠ s_2t when q_it = p_it0 = k_i = 0.5, μ_i = 0, and 0.01 ≤ s_it ≤ 0.99. a and b α_i = 0.6. c and d α_i = 0.2.

With high weight α_i = 0.6 assigned to money supply S_j, negligible support s_1t = 0.01 by player 1 and more support s_2t ≤ 0.7 by player 2 cause both volume fractions to eventually approach lim_{t→T} p_it ≈ 0 favoring currency n, though p_2t initially experiences an inverse U shape. Although the high support s_1t = s_2t = 0.7 comfortably enables both players to eventually transact exclusively in currency g in Fig. 4a, lim_{t→T} p_2t ≈ 1, the opposite result follows in Fig. 5a since player 1 supports currency g much less at s_1t = 0.01. Negligible support s_1t = 0.01 by player 1 and overwhelming support s_2t = 0.99 by player 2 cause opposite results for the two players, i.e. lim_{t→T} p_1t ≈ 0 for player 1 and lim_{t→T} p_2t ≈ 1 for player 2. Support s_1t = 0.3 by player 1 and more support s_2t = 0.7 by player 2 cause both volume fractions to eventually approach lim_{t→T} p_it ≈ 0 favoring currency n, though p_2t initially experiences a higher inverse U shape than when s_1t = 0.01. Support s_1t = 0.3 by player 1 and overwhelming support s_2t = 0.99 by player 2 also cause opposite results for the two players, although player 1's volume fraction p_1t approaches lim_{t→T} p_1t ≈ 0 more slowly than when s_1t = 0.01, lim_{t→T} p_2t ≈ 1. Support s_1t = 0.4 by player 1 and more support s_2t = 0.7 by player 2 cause both volume fractions to eventually approach lim_{t→T} p_it ≈ 0 favoring currency n, though p_2t initially experiences a higher inverse U shape than when s_1t = 0.3. Support s_1t = 0.4 by player 1 and overwhelming support s_2t = 0.99 by player 2 interestingly cause both volume fractions to eventually approach lim_{t→T} p_it ≈ 1 favoring currency g. Although support s_1t = s_2t = 0.4 causes both players to eventually transact exclusively in currency n in Fig. 4a, lim_{t→T} p_2t ≈ 0, the opposite result follows in Fig. 5b since player 2 supports currency g much more at s_2t = 0.99, which enables player 1 to also eventually support currency g. Support s_1t = 0.5 by player 1 and more support s_2t = 0.6 by player 2 cause both volume fractions to eventually approach lim_{t→T} p_it ≈ 0 favoring currency n. Both fractions approach lim_{t→T} p_it ≈ 0 slowly, and p_2t initially experiences an inverse U shape. Support s_1t = 0.5 by player 1 and more support s_2t ≥ 0.7 by player 2 cause both volume fractions to eventually approach lim_{t→T} p_it ≈ 1 favoring currency g. This interesting result shows that when s_1t = 0.5 for player 1, merely increasing player 2's support from s_2t = 0.6 to s_2t = 0.7 causes both players to eventually change their preferences from currency n to currency g.

With low weight α_i = 0.2 assigned to money supply S_j, both players generally prefer currency g more easily. Negligible support s_1t = 0.01 by player 1 and more support s_2t = 0.6 by player 2 cause both volume fractions to eventually approach lim_{t→T} p_it ≈ 0 favoring currency n, though p_2t in Fig. 5c initially experiences a lower inverse U shape than in Fig. 5a. Negligible support s_1t = 0.01 by player 1 and more support s_2t ≥ 0.7 by player 2 cause opposite results for the two players, i.e. lim_{t→T} p_1t ≈ 0 for player 1 and lim_{t→T} p_2t ≈ 1 for player 2, so that player 2 eventually prefers currency g. This result in Fig. 5c differs from Fig. 5a, where s_2t = 0.7 causes both players to eventually prefer currency n. Support s_1t = 0.3 by player 1 and more support s_2t = 0.5 by player 2 cause both volume fractions to eventually approach lim_{t→T} p_it ≈ 0 favoring currency n. Support s_1t = 0.3 by player 1 and even more support s_2t = 0.6 by player 2 cause the fraction p_1t for player 1 to decrease towards lim_{t→T} p_1t ≈ 0, while the fraction p_2t for player 2 increases overall extremely slowly towards p_2t ≈ 0.89 in 2021, in major support of currency g. Support s_1t = 0.3 by player 1 and yet more support s_2t = 0.7 by player 2 cause player 2's fraction p_2t to increase quickly towards lim_{t→T} p_2t ≈ 1. Player 1's fraction p_1t is U shaped towards a minimum, and thereafter increases slowly towards p_1t ≈ 0.30 in 2021. Although player 1 supports currency g modestly at s_1t = 0.3, player 2's higher support s_2t = 0.7 causes player 1 to choose currency g to some modest extent. Support s_1t = 0.3 by player 1 and overwhelming support s_2t = 0.99 by player 2 cause player 2's fraction p_2t to increase quickly towards lim_{t→T} p_2t ≈ 1. Player 1's fraction p_1t is first U shaped towards a minimum that is higher than when s_2t = 0.7, and thereafter increases logistically towards lim_{t→T} p_1t ≈ 0.98. Despite low support s_1t = 0.3, player 1 eventually supports currency g substantially. Support s_1t = 0.4 by player 1 and more support s_2t = 0.5 by player 2 cause both volume fractions to slowly and eventually approach lim_{t→T} p_it ≈ 0 favoring currency n. Support s_1t = 0.4 by player 1 and more support s_2t ≥ 0.6 by player 2 cause player 2's fraction p_2t to increase towards lim_{t→T} p_2t ≈ 1, while player 1's fraction p_1t is U shaped towards a minimum (when s_2t = 0.6) and thereafter increases towards lim_{t→T} p_1t ≈ 1.

Replicator dynamics when the fraction q_it of players of kind i changes. This section assumes that the fraction q_it of players of kind i changes through time. Figure 6 applies Eqs. (8), (9) to plot the volume fractions p_1t and p_2t of player i's transactions, i = 1, 2, in currency g and the fraction q_1t of players of kind 1 with the same assumptions as in Fig. 5 except that q_1t varies instead of q_it = 0.5, i.e. p_it0 = k_i = 0.5, μ_i = 0, 0.01 ≤ s_it ≤ 0.99, s_1t ≠ s_2t.
Additionally, we assume the process sensitivity h = 0.5 for the fraction q_1t and initial condition q_1t0 = 0.5.

Fig. 6 The fractions p_1t, p_2t, q_1t at time t 1959–2021 with different support s_1t ≠ s_2t when q_it0 = p_it0 = k_i = h = 0.5, μ_i = 0, and 0.01 ≤ s_it ≤ 0.99. a1, a2, b1, b2 α_i = 0.6. c1, c2, d1, d2 α_i = 0.2.

With high weight α_i = 0.6 assigned to money supply S_j, the first three combinations of curves in Fig. 5 with support (s_1t, s_2t) equal to (0.01, 0.7), (0.01, 0.99), (0.3, 0.7), eventually implying lim_{t→T} p_1t ≈ 0, cause the fraction q_1t of players of kind 1 to increase towards 1. According to Eq. (9), the players prefer to be of kind 1 when u_1t ≥ u_2t, i.e. when p_1t u_1gt + (1 − p_1t) u_1nt ≥ p_2t u_2gt + (1 − p_2t) u_2nt according to Eq. (6), which approaches u_1nt ≥ u_2nt when lim_{t→T} p_1t ≈ 0. The three support combinations (0.01, 0.7), (0.01, 0.99), (0.3, 0.7) satisfy s_1t ≤ s_2t, 1 − s_1t ≥ 1 − s_2t, which is inserted into Eq. (5) to give u_1nt ≥ u_2nt when lim_{t→T} p_1t ≈ 0. Non-mathematically, players prefer to be of kind 1 since they prefer currency n, which gives higher utility u_1nt ≥ u_2nt when s_1t ≤ s_2t. That is, the players converge towards transacting in currency n compatibly with kind 1 supporting currency n much more than currency g. With support (s_1t, s_2t) = (0.3, 0.99), player 2's volume fraction p_2t of transactions in currency g approaches lim_{t→T} p_2t ≈ 1 in Fig. 5, and in Fig. 6 lim_{t→T} p_it ≈ 1, which causes the opposite result where players prefer to be of kind 2. That is, u_1t ≤ u_2t implies p_1t u_1gt + (1 − p_1t) u_1nt ≤ p_2t u_2gt + (1 − p_2t) u_2nt, which approaches u_1gt ≤ u_2gt when lim_{t→T} p_it ≈ 1. Support (s_1t, s_2t) = (0.3, 0.99) means that s_1t ≤ s_2t, which is inserted into Eq. (4) to give u_1gt ≤ u_2gt when lim_{t→T} p_it ≈ 1. Non-mathematically, players prefer to be of kind 2 since they prefer currency g, which gives higher utility u_2gt ≥ u_1gt when s_2t ≥ s_1t. That is, the players converge towards transacting in currency g compatibly with kind 2 supporting currency g much more than currency n.

With this insight the interpretations of the subsequent panels in Fig. 6 are straightforward. That is, lim_{t→T} p_it ≈ 0, so that players eventually prefer to transact in currency n, implies that players prefer to be of kind 1, which gives higher utility u_1nt ≥ u_2nt when s_1t ≤ s_2t. In contrast, lim_{t→T} p_it ≈ 1, so that players eventually prefer to transact in currency g, implies that players prefer to be of kind 2, which gives higher utility u_2gt ≥ u_1gt when s_2t ≥ s_1t.

Replicator dynamics with positive scaling proportionality parameter μ_i. This section assumes that the scaling proportionality parameter μ_i in player i's utilities u_igt and u_int is positive. When μ_i increases, player i's utilities u_igt and u_int in Eqs. (4) and (5) of transacting in both currencies g and n increase equally much. The increase is proportional to the fraction q_it of players of kind i at time t raised to the parameter μ_i. If both μ_1 and μ_2 increase equally much, both u_igt and u_int increase, which in the replicator Eq. (8) can be interpreted as increasing the process sensitivity k_i, which means quicker changes that are otherwise qualitatively similar to Fig. 6. Figure 7 makes the same assumptions as in Fig. 6 except that μ_2 = 1 and μ_1 = 0, i.e. q_1t0 = p_it0 = k_i = h = 0.5, 0.01 ≤ s_it ≤ 0.99, s_1t ≠ s_2t.

Fig. 7 The fractions p_1t, p_2t, q_1t at time t 1959–2021 with different support s_1t ≠ s_2t when q_it0 = p_it0 = k_i = h = 0.5, μ_2 = 1, μ_1 = 0, and 0.01 ≤ s_it ≤ 0.99. a1, a2, b1, b2 α_i = 0.6. c1, c2, d1, d2 α_i = 0.2.

The higher μ_2 = 1 > μ_1 = 0 means that players to a higher extent than in Fig. 6 tend to prefer to be of kind 2, which gives higher utilities u_2gt and u_2nt. Hence Fig. 7 shows three, four, two, four curves (summing to 13 curves out of 16 possible curves) for the fraction q_1t of players of kind 1 at time t approaching lim_{t→T} q_1t ≈ 0, as compared with one, two, zero, three curves (summing to only six curves), respectively, approaching lim_{t→T} q_1t ≈ 0 in Fig. 6. In Fig. 7a1 the low support s_1t = 0.01 of player 1 for currency g causes both players to eventually not transact in currency g when s_2t = 0.7, as explained for Fig. 6, which implies that players prefer to be of kind 1 since they prefer currency n, which gives higher utility u_1nt ≥ u_2nt when s_1t ≤ s_2t. The corresponding curve q_1t in Fig. 7a2 gives lim_{t→T} q_1t ≈ 1, while the other three curves with higher support s_1t + s_2t give lim_{t→T} q_1t ≈ 0, so that the players prefer to be of kind 2. Fig. 7b1, b2 with higher support s_1t + s_2t shows a clearer trend where lim_{t→T} p_it ≈ 0 and lim_{t→T} q_1t ≈ 0, so that players prefer to be of kind 2. Figure 7c2 shows two curves, with support (s_1t, s_2t) equal to (0.3, 0.5), (0.3, 0.6), eventually approaching lim_{t→T} q_1t ≈ 0, so that players prefer to be of kind 2, in contrast to Fig. 6c2 which has no such curves. Figure 7d2 shows how all the four curves eventually approach lim_{t→T} q_1t ≈ 0, so that players prefer to be of kind 2. Figure 7d2 also shows how it is possible for both players to eventually prefer no transactions in currency g, lim_{t→T} p_it ≈ 0, while at the same time the fraction q_1t of players of kind 1 slowly decreases.

Discussion and future research
New currencies, especially those in digital format, may induce more currency competition. The competition may become especially fierce between fixed-supply and variable-supply currencies. Fixed-supply currencies rigidly avoid the inflation/deflation which would otherwise be induced by altering the money supply. Variable-supply currencies allow more flexibility by allowing money printing during critical events (e.g. wars and recessions), but require fiscal discipline thereafter to avoid inflation.

To understand the competition, a player is assumed to earn a utility depending on its support of and volume of transactions in a given currency, and the fraction of players of the same kind as itself.
A player may be any individual or collective unit. Essential in the article is how a player values money printing/withdrawal on the one hand versus inflation/deflation on the other hand. A time delay usually exists from the former to the latter. Batini (2006), Batini and Nelson (2001) and Friedman and Schwartz (1982) suggest that it takes over one year from money printing until inflation. Hence a temptation may exist to increase the money supply in the short run and postpone worrying about the subsequent inflation. The 1959–2021 US money supply and inflation data suggest that money printing and inflation indeed occur.

With high weight assigned to money supply relative to inflation, this article finds that players are more inclined to prefer the variable-supply currency. They thereby benefit from the temporarily increased purchasing power enabled by the increased money supply. Such players may not have excessively large time horizons, since then they might value the future negative consequences of inflation. This assumes that the player itself can indeed access the increased money supply. In contrast, low weight assigned to money supply relative to inflation induces players to be more inclined to prefer the fixed-supply currency, to avoid the negative impact of inflation.

When two kinds of players support two currencies differently, the players' fractions of transactions in the two currencies may exhibit substantial variation, e.g. be inverse U shaped or U shaped before converging towards preferring one or the other currency. This relates to earlier studies of how players choose between multiple currencies, see e.g. Schilling and Uhlig (2019), Fernández-Villaverde and Sanches (2019), Almosova (2018), Benigno et al. (2019). For example, assume high weight assigned to money supply, and that one player supports the fixed-supply currency much less than the other player. The first player may quickly abandon the fixed-supply currency, which fails to offer additional money supply. The second player may initially support the fixed-supply currency increasingly, but may thereafter be influenced by the first player and also abandon the fixed-supply currency, thus potentially being negatively impacted by inflation. In contrast, assume low weight assigned to money supply, and that one player supports the fixed-supply currency much more than the other player. The first player may prefer the fixed-supply currency, which provides a hedge against inflation. The second player may initially support the variable-supply currency increasingly, but may thereafter be influenced by the first player and also prefer the fixed-supply currency, thus potentially not benefitting from the increased money supply. The two currencies may obtain different market shares, as also analyzed by ElBahrawy et al. (2017) and Imhof and Nowak (2006). These results indicate how countries or societies through various evolutionary dynamics may transform themselves into using one or another currency, or a combination of several currencies, potentially for different purposes. This in turn may impact a country's financial markets, monetary policy, and interaction with other countries.

We next allow players to choose which kind of player they can be. That can be realistic when a player prefers to transact in currencies that many other players transact in, thus being less influenced by how the player individually supports each currency independently of the other players.
The analysis shows that players may choose to be of a kind supporting a given currency if that support is much higher than the other kind's support of the same currency. The first kind of player may thus become more common, while the second kind of player becomes less common.

We finally enable a player's utility of transacting in a given currency to be proportional to the fraction of players of the same kind as the given player. Thus players not only choose what kind of player they want to be, but they may receive higher utility for being of one kind rather than of another kind, regardless of the players' support for each currency and their volume fractions of transactions in each currency. When the proportional impact of being a certain kind of player increases equally for both kinds of players, the players' fractions of transactions in each currency change more quickly, as if the process sensitivity in the replicator equation increases. When the proportional impact increases more for one kind of player, players increasingly prefer to be of that kind.

Future research, which implicitly indicates limitations of the current article, may extend the analysis to more features than money supply and inflation. More than two currencies and more than two kinds of players may be analyzed. Each kind of player's utility may depend on further features related to each currency's backing, convenience, confidentiality, transaction efficiency, financial stability, and security. Players may be assumed to apply different currencies for different purposes. Different kinds of players gaining different access to increased money supply, or suffering differently from money contraction, may be analyzed. Alternative player risk attitudes and time preferences may be evaluated. Empirics from other world regions may be incorporated. Additional players may be analyzed, e.g. players in different countries accessing different currencies, private versus public players, governmental agencies imposing regulation and taxation, and currency competition between countries.

Conclusion
This article builds a model of two kinds of players who can choose between two currencies, i.e. a fixed-supply currency (e.g. Bitcoin) and a variable-supply currency (e.g. a fiat currency or a central bank digital currency). A player may be any individual or collective unit. A variable-supply currency enables money printing or money withdrawal, and may be associated with inflation or deflation. Comparing fixed-supply and variable-supply currencies has become relevant due to new currencies emerging which incorporate supply, ownership, decentralization, regulation, confirmation of transactions, geographical extension, etc. differently.

A player's utility of transacting in a given currency is proportional to three factors, i.e. the player's support of that currency, the volume fraction of all players' (of both kinds) transactions in that currency, and the fraction of players of the same kind as the given player. A currency's support depends on its financial stability, transaction efficiency, backing, convenience, confidentiality, and security. Additionally, a player's utility of transacting in the variable-supply currency is proportional to a Cobb Douglas utility of two factors. The first factor is the initial money supply plus the accumulative money printing (positive) and money withdrawal (negative) in the numerator, divided by the initial money supply in the denominator. The second factor is the inverse of the accumulative inflation (positive) and deflation (negative) when measured as a percentage.
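Written out, and hedged as a reconstruction since the paper's numbered equations are not reproduced in this excerpt, the described Cobb Douglas factor takes a form like

$$u_{int} \propto \left(\frac{S_{jt_0} + \sum_{\tau = t_0}^{t} \Delta S_{j\tau}}{S_{jt_0}}\right)^{\alpha_i}\left(\frac{1}{\prod_{\tau = t_0}^{t}\left(1 + \pi_{i\tau}\right)}\right)^{1 - \alpha_i}$$

where S_jt0 is the initial money supply, ΔS_jτ is money printing (positive) or withdrawal (negative) in period τ, π_iτ is inflation (positive) or deflation (negative), and α_i and 1 − α_i are the output elasticities for the two factors.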
If the output elasticity for the first ratio is high, money printing/withdrawal is highly valued relative to inflation/deflation, and conversely if the output elasticity for the second ratio is high. The players' utility of transacting in the variable-supply currency is illustrated for various output elasticities for 1959–2021. The exponentially increasing US M2 money supply and the positive inflation cause this utility to increase over time with high output elasticity, and decrease with low output elasticity. Such changing utilities over time constitute policy tools for how to adjust money supply/withdrawal and inflation/deflation.

Three replicator equations are developed based on the players' utilities. Two of these model each kind of player's volume fractions of transactions in each currency over time. The third models the evolution of the fraction of each kind of player over time, i.e. how players choose to be of one or the other kind. High weight assigned to money supply relative to inflation causes players to more likely prefer the variable-supply currency, to gain from the increased money supply, and conversely prefer the fixed-supply currency given low weight assigned to money supply. When the two kinds of players support the two currencies differently, the players' fractions of transactions in the two currencies may be inverse U shaped or U shaped before converging towards preferring one or the other currency. When players can choose which kind of player to be, players may choose to be of a kind supporting a given currency if that support is especially high. When a player's utility of transacting in a given currency is proportional to the fraction of players of the same kind as the given player, and the proportional impact is higher for one kind of player, players tend to prefer to be of that kind.

Data availability
The article contains no associated data. All data generated or analyzed during this study are included in this published article.

Received: 1 October 2021; Accepted: 28 March 2022;

References
Almosova A (2018) A note on cryptocurrencies and currency competition. International Research Training Group 1792 Discussion Paper No. 2018-006, Technical University Berlin, Berlin.
Asimakopoulos S, Lorusso M, Ravazzolo F (2019) A new economic framework: a DSGE model with cryptocurrency. Centre for Applied Macro- and Petroleum Economics Working Paper No. 07/2019, BI Norwegian Business School, Oslo.
Basov V (2022) Top 10 largest gold producing countries in 2021—report. https://www.kitco.com/news/2022-01-31/Top-10-largest-gold-producing-countries-in-2021-report.html
Batini N (2006) Euro area inflation persistence. Empir Econ 31(4):977–1002. https://doi.org/10.1007/s00181-006-0064-7
Batini N, Nelson E (2001) The lag from monetary policy actions to inflation: Friedman revisited. Int Finance 4(3):381–400. https://doi.org/10.1111/1468-2362.00079
Benigno P (2021) Monetary policy in a world of cryptocurrencies. Centre for Economic Policy Research Discussion Paper No. DP13517, Luiss Guido Carli University, Roma.
Benigno P, Schilling LM, Uhlig H (2019) Cryptocurrencies, currency competition, and the impossible trinity. National Bureau of Economic Research Working Paper No. w26214, National Bureau of Economic Research, Cambridge.
Blakstad S, Allen R (2018) Central bank digital currencies and cryptocurrencies. FinTech Revolution, pp 87–112. https://doi.org/10.1007/978-3-319-76014-8_5
Caginalp C, Caginalp G (2019) Establishing cryptocurrency equilibria through game theory. AIMS Math 4(3):420–436. https://doi.org/10.3934/math.2019.3.420
Caporale GM, Gil-Alana L, Plastun A (2018) Persistence in the cryptocurrency market. Res Int Bus Finance 46:141–148. https://doi.org/10.1016/j.ribaf.2018.01.002
CPI Inflation Calculator (2022) CPI inflation calculator. https://www.officialdata.org/us/inflation/1850
ElBahrawy A, Alessandretti L, Baronchelli A (2019) Wikipedia and cryptocurrencies: interplay between collective attention and market performance. Front Blockchain 2:12. https://doi.org/10.3389/fbloc.2019.00012
ElBahrawy A, Alessandretti L, Kandler A, Pastor-Satorras R, Baronchelli A (2017) Evolutionary dynamics of the cryptocurrency market. R Soc Open Sci 4(11):170623. https://doi.org/10.1098/rsos.170623
Federal Reserve (2022) Money stock measures—H.6 release. https://www.federalreserve.gov/releases/h6/current/default.htm
Fernández-Villaverde J, Sanches D (2019) Can currency competition work? J Monet Econ 106:1–15. https://doi.org/10.1016/j.jmoneco.2019.07.003
Friedman M, Schwartz AJ (1982) Interrelations between the United States and the United Kingdom 1873–1975. J Int Money Finance 1:3–19. https://doi.org/10.1016/0261-5606(82)90002-X
gold.org (2022) How much gold has been mined? https://www.gold.org/about-gold/gold-supply/gold-mining/how-much-gold
Gandal N, Halaburda H (2016) Can we predict the winner in a market with network effects? Competition in cryptocurrency market. Games 7(3):16. https://doi.org/10.3390/g7030016
Hayes A (2022) What happens to Bitcoin after all 21 million are mined? https://www.investopedia.com/tech/what-happens-bitcoin-after-21-million-mined/
Ikkurty S (2019) Fiat, gold, and bitcoin comparison. https://medium.com/@samikkurty/fiat-gold-and-bitcoin-comparison-e878fa2292bc
Imhof LA, Nowak MA (2006) Evolutionary game dynamics in a Wright–Fisher process. J Math Biol 52(5):667–681
Kusimba C (2017) When—and why—did people first start using money? The Conversation. https://theconversation.com/when-and-why-did-people-first-start-using-money-78887
Learn B (2021) Bitcoin vs. gold: which is a better store of value? https://learn.bybit.com/investing/bitcoin-vs-gold-store-of-value/
Lewenberg Y, Bachrach Y, Sompolinsky Y, Zohar A, Rosenschein JS (2015) Bitcoin mining pools: a cooperative game theoretic analysis. In: Proceedings of the 2015 International Conference on Autonomous Agents and Multiagent Systems, Istanbul, Turkey.
Masciandaro D (2018) Central bank digital cash and cryptocurrencies: insights from a new Baumol–Friedman demand for money. Aust Econ Rev 51(4):540–550. https://doi.org/10.1111/1467-8462.12304
Milunovich G (2018) Cryptocurrencies, mainstream asset classes and risk factors: a study of connectedness. Aust Econ Rev 51(4):551–563. https://doi.org/10.1111/1467-8462.12303
Mitchell C (2021) Gold: the other currency. https://www.investopedia.com/articles/forex/10/gold-the-other-currency.asp
Nakamoto S (2008) Bitcoin: a peer-to-peer electronic cash system. https://bitcoin.org/bitcoin.pdf
Rahman AJ (2018) Deflationary policy under digital and fiat currency competition. Res Econ 72(2):171–180. https://doi.org/10.1016/j.rie.2018.04.004
Sapkota N, Grobys K (2021) Asset market equilibria in cryptocurrency markets: evidence from a study of privacy and non-privacy coins. J Int Financial Mark Inst Money 74:101402. https://doi.org/10.2139/ssrn.3407300
Schilling LM, Uhlig H (2019) Currency substitution under transaction costs. AEA Pap Proc 109:83–87. https://doi.org/10.1257/pandp.20191017
Taylor PD, Jonker LB (1978) Evolutionary stable strategies and game dynamics. Math Biosci 40(1):145–156. https://doi.org/10.1016/0025-5564(78)90077-9
Verdier M (2021) Digital currencies and bank competition. Manuscript, Université Panthéon-Assas Paris 2, Paris. https://doi.org/10.2139/ssrn.3673958
Weibull JW (1997) Evolutionary game theory. MIT Press, Cambridge, MA.
White LH (2014) The market for cryptocurrencies. Cato J 35(2):383–402

Competing interests
The authors declare no competing interests.

Ethical approval
Does not apply.

Informed consent
Does not apply.
Additional information
Correspondence and requests for materials should be addressed to Kjell Hausken.

Reprints and permission information is available at http://www.nature.com/reprints

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

© The Author(s) 2022
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1057/s41599-022-01150-3?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1057/s41599-022-01150-3, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.nature.com/articles/s41599-022-01150-3.pdf" }
2,022
[]
true
2022-04-20T00:00:00
[ { "paperId": "6911abe4ac9886159df01b7c480e8336264b7607", "title": "Digital Currencies and Bank Competition" }, { "paperId": "23cca4451cf4cfe0c0c540129429bf58a3efc122", "title": "A New Economic Framework: A DSGE Model with Cryptocurrency" }, { "paperId": "1c59dc3f5fb32c9b7da20a94e6dc7b054e5e612b", "title": "Cryptocurrencies, Currency Competition and the Impossible Trinity" }, { "paperId": "b1b0019540269122e64bdff050ce7feb0a3b75a9", "title": "Asset Market Equilibria in Cryptocurrency Markets: Evidence from a Study of Privacy and Non-Privacy Coins" }, { "paperId": "5414bfd709c026910c2ec75368fbf788fc48948e", "title": "Establishing Cryptocurrency Equilibria Through Game Theory" }, { "paperId": "36ad9b3711ef8d435595db428cc23de8bc2a7980", "title": "Monetary Policy in a World of Cryptocurrencies" }, { "paperId": "6ede7caa5dd1f01ef19f5ddd0523773691df196f", "title": "Currency Substitution under Transaction Costs" }, { "paperId": "22a95840f96d13e5c622ba326e559fd3bfe0c439", "title": "Central Bank Digital Cash and Cryptocurrencies: Insights from a New Baumol–Friedman Demand for Money" }, { "paperId": "9f3ff268b86a634aaed12d73b0969c1477c96ed0", "title": "Cryptocurrencies, Mainstream Asset Classes and Risk Factors: A Study of Connectedness" }, { "paperId": "3b337279071c95d2d58e10a981f481ecc3d16dd1", "title": "Deflationary policy under digital and fiat currency competition" }, { "paperId": "3dae2f3c7ca5e9c60aa41df4e6dfba35f9586d83", "title": "Persistence in the Cryptocurrency Market" }, { "paperId": "69eaab0858e3c9865490dd4e6d636e260e9a61d6", "title": "Evolutionary dynamics of the cryptocurrency market" }, { "paperId": "97fd49eb7ca1773b7e696d543edf92076019de04", "title": "Can We Predict the Winner in a Market with Network Effects? Competition in Cryptocurrency Market" }, { "paperId": "8a591a7441852aecebc806073d82e34cb202f6fe", "title": "Can Currency Competition Work?" }, { "paperId": "ee87bf0b37acd76313ce469aed1bc05540f2d184", "title": "Bitcoin Mining Pools: A Cooperative Game Theoretic Analysis" }, { "paperId": "79b442dadc03fbd8d2517b8e05111b8c2d8d2a20", "title": "Evolutionarily Stable Strategies and Game Dynamics" }, { "paperId": null, "title": "What happens to Bitcoin after all 21 million are mined?" }, { "paperId": null, "title": "Top 10 largest gold producing countries in 2021—report" }, { "paperId": null, "title": "Gold: the other currency. https://www.investopedia.com/articles/ forex/10/gold-the-other-currency.asp Nakamoto S (2008) Bitcoin: a peer-to-peer electronic cash system. https://bitcoin" }, { "paperId": null, "title": "Bitcoin vs. gold: which is a better store of value?" }, { "paperId": null, "title": "A (2019) Wikipedia and cryptocurrencies: interplay between collective attention and market performance" }, { "paperId": "679160538ca974288bee86d7fe3bff29b0880393", "title": "Central Bank Digital Currencies and Cryptocurrencies" }, { "paperId": null, "title": "International Research Training Group 1792 Discussion Paper No" }, { "paperId": null, "title": "When-and why-did people first start using money? 
The conversation" }, { "paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": null, "title": "Euro area in fl ation persistence" }, { "paperId": "c8c736cc40212f4e4279b806d619fcc33b2d72ae", "title": "Digital Object Identifier (DOI) Jan Brase, DataCite, IDF Metadata" }, { "paperId": "8dd9857b16152bc20cafda944287df2a98339b56", "title": "The Lag from Monetary Policy Actions to Inflation: Friedman Revisited" }, { "paperId": null, "title": "White LH (2014) The market for cryptocurrencies" }, { "paperId": "b0b08b659ff4ff649ed5e7a7970081aeda3512b7", "title": "Interrelations between the United States and the United Kingdom, 1873–1975" }, { "paperId": null, "title": "CPI Inflation Calculator (2022) CPI inflation calculator" }, { "paperId": null, "title": "Federal Reserve (2022) Money stock measures-H.6 release" } ]
19,594
en
[ { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Medicine", "source": "s2-fos-model" }, { "category": "Business", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/016c46515711bfa84565487489ba131239f8405d
[]
0.87771
Integration of Blockchain and Edge Computing in Healthcare: Accountability and Collaboration
016c46515711bfa84565487489ba131239f8405d
Transdisciplinary Journal of Engineering &amp; Science
[ { "authorId": "3378608", "name": "Rakshit Kothari" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
A decentralized, safe, and effective ecosystem is created in the healthcare industry through the integration of blockchain and edge computing. Secure data interchange, real-time analytics, enhanced privacy, and patient-centered treatment are all made possible. Realizing the full potential of integrating blockchain and edge computing for health care will need accountability and collaboration. It will make it possible to create reliable, secure, and cooperative healthcare organizations that will increase patient care, protect the confidentiality of information, and support cutting-edge applications for healthcare. Our Solution is to share data safely and cooperatively, improve patient confidentiality, and support healthcare data's ethical and accountable use. In this paper, we propose that combining blockchain technology with edge computing in healthcare is intended to improve accountability and teamwork. The methodologies used in integrating deep learning deploy various models on edge devices such as Q-Learning and Deep Q-Networks (DQN), SVM, etc. In conclusion, the application of edge computing and blockchain in the healthcare sector offers fascinating possibilities for cooperation and accountability. Healthcare systems may improve data security, privacy, interoperability, and real-time analytics by combining the advantages of the two technologies. The delivery of healthcare might change as a result of this integration, which could also foster cooperative research and eventually enhance patient outcomes.  
# Integration of Blockchain and Edge Computing in Healthcare: Accountability and Collaboration

### Rakshit Kothari [1,*]

1 Geetanjali Institute of Technical Studies, Udaipur, Rajasthan
* rakshit007kothari@gmail.com

**Received 15 July, 2023; Revised 4 August, 2023; Accepted 5 August, 2023**
**Available online 5 August 2023 at www.atlas-journal.org, doi: 10.22545/2020/00230**

**Abstract:** A decentralized, safe, and effective ecosystem is created in the healthcare industry through the integration of blockchain and edge computing. Secure data interchange, real-time analytics, enhanced privacy, and patient-centered treatment are all made possible. Realizing the full potential of integrating blockchain and edge computing for health care will need accountability and collaboration. It will make it possible to create reliable, secure, and cooperative healthcare organizations that will increase patient care, protect the confidentiality of information, and support cutting-edge applications for healthcare. Our solution is to share data safely and cooperatively, improve patient confidentiality, and support healthcare data's ethical and accountable use. In this paper, we propose that combining blockchain technology with edge computing in healthcare is intended to improve accountability and teamwork. The methodologies used in integrating deep learning deploy various models on edge devices, such as Q-Learning and Deep Q-Networks (DQN), SVM, etc. In conclusion, the application of edge computing and blockchain in the healthcare sector offers fascinating possibilities for cooperation and accountability. Healthcare systems may improve data security, privacy, interoperability, and real-time analytics by combining the advantages of the two technologies. The delivery of healthcare might change as a result of this integration, which could also foster cooperative research and eventually enhance patient outcomes.

**Keywords:** Blockchain, edge computing, security, privacy, medical research, sharing

## 1 Introduction

Accountability in healthcare systems is ensured by the transparent and unchangeable database that blockchain technology offers. It permits the safe and decentralized storage of private information [1], including that related to patients, research, and medical study. Blockchain's distributed architecture guarantees that no single party retains authority over the data, minimizing the possibility of data manipulation or unauthorized access. An audit trail that may be readily followed and validated is produced by recording every transaction or data modification in a separate block. The promotion of trust among those involved, such as patients, healthcare workers, and investigators, is made possible by this degree of responsibility [2-6].

By enabling real-time data processing and analysis at the edge of the network, nearer the data source, edge computing enhances blockchain technology. With this strategy, data interchange and collaboration among healthcare stakeholders are more effective and have lower latency [3, 4]. Wearable sensors and Internet of Things (IoT) devices are examples of edge computing devices that may gather and analyse data locally before safely passing it to the blockchain network.
This decentralized data processing capacity improves collaboration by enabling the exchange of crucial information between various researchers and healthcare professionals. The integration of blockchain and edge computing in healthcare involves various factors [5, 7], with a preference for accountability and collaboration, as represented in Figure 1.

**Figure 1: Representation of Accountability and Collaboration.**

**Data Sharing and Interoperability:** Blockchain technology has been investigated to address the problems with data sharing and interoperability in the healthcare industry. Healthcare professionals may easily access and share patient information since it facilitates secure and consistent data transmission across many platforms. By enabling local data preparation and real-time data synchronization with the blockchain network, edge computing [8-12] improves this procedure.

**Clinical Trials and Research:** Clinical trials and medical research can be more transparent and ethical when edge computing and blockchain are used together. The technology allows auditable and tamper-proof records by securely documenting every step of the trial or research process on the blockchain, including participant recruiting, data gathering, and analysis. In order to lessen dependency on centralized systems and increase data accuracy [9-14], edge computing devices can gather and analyze data directly from trial participants.

**Internet of Medical Things (IoMT):** A significant quantity of healthcare data is produced by the IoMT, which comprises wearable technology and remote monitoring systems. Edge computing and blockchain integration make it possible to store, process, and analyse data securely and effectively. With this connectivity, real-time health monitoring can be more accurate, personalized treatment plans can be created, and remote cooperation between patients and healthcare providers is made easier.

**Data Privacy and Security:** Through safe key management, blockchain's cryptographic protocols provide patients ownership over their health data, ensuring data privacy and security. By keeping sensitive data localized and lowering the possibility of unauthorized access or data breaches, edge computing further increases security.

## 2 History

Due to its promise to address data security and interoperability issues in a variety of industries, including healthcare, blockchain technology became more well-known in 2017. It has been acknowledged that the decentralized and open nature of blockchain technology offers a way to enhance data integrity and accountability in healthcare systems. Edge computing gained popularity around this time as a means of processing and analyzing data closer to its source, lowering latency and increasing efficiency. With the emergence of wearable technology and the Internet of Things (IoT) in healthcare, the requirement for real-time data processing and analysis became clear. Since then, blockchain and edge computing have been actively integrated to improve accountability and collaboration in the healthcare industry by technology businesses, research organizations, and healthcare organizations [10]. To test and improve this integration, several pilot projects, research studies, and collaborations have been started.
These programs have concentrated on a variety of topics, including clinical trials, the Internet of Medical Things (IoMT), secure data exchange, interoperability, patient consent management, and so on. The difficulties of data privacy, security, fragmentation, and the requirement for real-time data processing and cooperation have all been addressed by efforts to merge blockchain with edge computing.

### 2.1 Scope

The healthcare sector has a great deal of potential to be revolutionized by blockchain and edge computing. By guaranteeing the safe storage of healthcare data and enabling frictionless data transmission across healthcare stakeholders, blockchain technology can improve data security, privacy, and interoperability [10]. Additionally, it may streamline the procedures for clinical trials, supply chain management, and billing, enhancing patient outcomes, lowering costs, and avoiding fraud. Edge computing, on the other hand, makes it possible to monitor patients in real-time, practice telemedicine, and provide treatment from a distance. This technology enables prompt interventions, expands access to healthcare in under-served regions, and guarantees continuity of care in emergency situations. There are several applications for edge computing and blockchain in the healthcare industry. They can strengthen consent management, provide patients with more control over their health data, and improve data security and privacy. Healthcare systems may improve data accuracy, minimize administrative hassles, and speed up operations by incorporating these technologies. Furthermore, by providing openness, traceability, and correctness of outcomes, integration can revolutionize clinical trials and medical research.

## 3 Overview of Blockchain and Edge Computing in Healthcare

This section presents an overview of blockchain and edge computing respectively.

### 3.1 Blockchain

Blockchain's distributed-ledger technology makes it easier to transfer patient medical records securely, improves healthcare data security, controls the medication supply chain, and aids genetic code study in the medical field. It is hardly surprising that the most popular blockchain healthcare use at the moment is safeguarding medical data. Security is a significant problem in the healthcare sector: from July 2021 to June 2022, 692 significant healthcare data breaches were disclosed. Health and genomic testing records, as well as banking and credit card information, were stolen by the offenders [11]. Blockchain is a technology well suited to security-related uses because it can maintain an incorruptible, distributed, and transparent log of all patient data. Additionally, blockchain is both private and transparent, obscuring any person's identity with intricate and secure protocols that can safeguard the sensitivity of medical data. The technology's distributed nature also makes it possible for patients, physicians, and other healthcare professionals to easily and securely share comparable information.
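To make the "incorruptible, distributed, and transparent log" idea concrete, here is a minimal hash-linked ledger sketch in Python. The record strings, field names, and use of SHA-256 are illustrative assumptions, not the design of any particular healthcare blockchain; a production system would add signatures, consensus, and replication across nodes.

```python
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """SHA-256 over a canonical JSON encoding of the block contents."""
    encoded = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(encoded).hexdigest()

def add_block(chain: list, record: str) -> None:
    """Append a block whose 'prev' field commits to the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "time": time.time(),
                  "record": record, "prev": prev})

def verify(chain: list) -> bool:
    """Recompute every link; any tampered block breaks the chain."""
    return all(chain[i]["prev"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

ledger: list = []
add_block(ledger, "patient 042: vaccination record added")   # hypothetical record
add_block(ledger, "patient 042: record shared with clinic B")
print(verify(ledger))            # True
ledger[0]["record"] = "forged"
print(verify(ledger))            # False: tampering is detectable
```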
Figure 2 shows the various versions of blockchain.

**Figure 2: Versions of Blockchain.**

Blockchain has advanced greatly over time. We categorize the five versions of blockchain into versions 1.0 through 5.0. The most fundamental kind is a decentralized ledger for recording transactions and storing data across several devices. It is known as Blockchain 1.0 and was first published by Nakamoto. The data in the first blockchains, to put it simply, was limited to the values of a "thing" that saw ownership changes over time [18]. Usually, the "thing" in question is a type of virtual money like Bitcoin, Ripple, and so on. Blockchain 2.0 is sometimes identified with the emergence of Ethereum, the upgraded cryptocurrency proposed by Vitalik Buterin in 2014. Due to the inability of traditional health information exchange (HIE) and personal health record (PHR)-based exchanges to deliver on their promise of a shared, coalescent record, blockchain technology has a lot of potential in the healthcare business. The trust deficit present in traditional health information exchange intermediations continues to be exposed by electronic health records (EHR), conflicting interests, and a number of other factors. As a result, blockchain technology has lately gained attention and has emerged as a top option in the healthcare industry. The components of the healthcare blockchain are the healthcare professionals and patients who provide the data, the medical cloud, and the blockchain network with its distributed ledger and smart contracts [12]. The global Google Trends interest for the term "Blockchain - Healthcare" is shown in Figure 3. This clearly demonstrates how the research community's interest has grown.

**Figure 3: Blockchain Healthcare.**

### 3.2 Edge Computing

Edge computing and AI go hand in hand. Patients' data must be gathered, but doctors must also analyse it and provide real-time responses. This is becoming increasingly viable thanks to edge computing. Currently, edge computing systems with embedded AI are in place to quickly identify abnormalities and other important results from X-rays and other scans, including potentially life-threatening disorders. By delivering information more quickly at the imaging point, this technology enables clinicians to prioritize exams in a timely and economical manner. Because of this, edge computing and AI have a lot of promise for use in the healthcare industry. Across sectors, edge computing provides a number of advantages, as summarized in Figure 4. Reduced latency is a key benefit: edge computing reduces the amount of time data must travel to centralized cloud servers by processing data closer to the source, allowing for real-time replies. This is essential for applications that demand quick responses, such as Internet of Things devices or driverless vehicles. Edge computing further improves real-time capabilities by processing data locally, facilitating quicker reaction and decision times. By sending only pertinent data to the cloud and lowering network traffic [14, 15], it also improves overall network performance and bandwidth utilization.

**Figure 4: Benefits of Edge Computing.**

Additionally, by preserving sensitive data within the local network and lowering the likelihood of data breaches, edge computing improves privacy and security. It also offers higher dependability, since edge devices may keep running even when cloud access is interrupted or lost. Overall, edge computing gives businesses more power through quicker processing, more effectiveness, improved privacy, and increased dependability.
## 4 Methodology

Blockchain and edge computing may be used in the healthcare industry to improve responsibility and cooperation while preserving the confidentiality, privacy, and transactional integrity of data. Although blockchain technology itself has built-in accountability characteristics, specialized algorithms and processes may be used to meet the particular needs of the healthcare industry. LSTM algorithms can also be used to analyse and predict market trends in blockchain-based cryptocurrencies [14]. By processing historical transaction data, an LSTM model can learn patterns and trends, allowing for the creation of predictive models to forecast price movements.

With a plug-and-play (modular) design that enables a high degree of security, privacy, and secrecy of the data, blockchain is a source that creates the distributed ledger. Endorsing peers validate the transaction, carry it out, and produce the read-and-write sets. The client is then informed of the response. The client gathers all peer responses and then sends them to the orderer. The orderer places all transactions in ascending order, which is followed by the formation of a block. Each committer verifies this block and, as a consequence, adds a new block to its own copy of the ledger.

A recurrent neural network is a form of deep learning that uses the output from one stage as the input for the next. Recurrent neural networks can learn the long-term dependencies of data thanks to a variant known as LSTM. The repeating module of the LSTM, which consists of four separate layers coupled to one another, facilitates this form of learning. The character classification step uses the dataset for training and testing. The LSTM training curve starts at 83.6% and increases to 85% after 30 epochs. The testing curve begins at 84%, reaches 86%, and then rises to 87.4%.

### 4.1 Blockchain for Accountability and Collaboration

Due to its transparency and immutability, blockchain technology by default promotes accountability. The employment of certain methods and algorithms can, however, improve accountability in blockchain systems [15]. Several essential blockchain accountability and collaboration algorithms and methods are shown in Figure 5.

**Figure 5: Methods of Accountability and Collaboration in Blockchain.**

**Digital Signatures:** In order to confirm the legitimacy and integrity of healthcare data stored on a blockchain, digital signatures are essential. Participants can sign transactions and data with their private keys using asymmetric cryptographic techniques like RSA or elliptic curve cryptography (ECC), allowing verification of the sender's identity and guaranteeing non-repudiation (illustrated in the sketch below).

**Access Controls:** Access control techniques and algorithms may be constructed on the blockchain to manage the rights and privileges of healthcare stakeholders. The blockchain can impose accountability by regulating the visibility and modification rights of sensitive healthcare data, by setting access regulations and applying techniques like attribute-based access control (ABAC) or role-based access control (RBAC).
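As an illustration of the digital-signature flow just described, the following minimal Python sketch signs and verifies a record with ECDSA using the widely used `cryptography` package. The record content and the key choice (SECP256R1 with SHA-256) are assumptions for exposition, not a prescribed healthcare standard.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

# A participant's key pair: the private key signs, the public key verifies.
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

record = b"patient 042: dose 2 administered 2023-08-01"   # hypothetical record
signature = private_key.sign(record, ec.ECDSA(hashes.SHA256()))

# Any verifier holding the public key can confirm integrity and origin.
try:
    public_key.verify(signature, record, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature invalid")

# A tampered record fails verification, which supports non-repudiation.
try:
    public_key.verify(signature, b"patient 042: dose 2 NOT given",
                      ec.ECDSA(hashes.SHA256()))
except InvalidSignature:
    print("tampering detected")
```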
**Consensus Algorithms:** The integrity of healthcare data in a blockchain network must be preserved using consensus algorithms in order to guarantee responsibility. Consensus systems, such as Proof-of-Work (PoW), Proof-of-Stake (PoS), or Practical Byzantine Fault Tolerance (PBFT), allow for agreement among network users, limiting criminal activity and the alteration of medical information [16].

**Auditing and Logging:** Healthcare systems built on blockchain technology may keep meticulous audit trails and event logs to record and manage network activity. These logs could provide details on medical transactions, data access, and modification activities. Blockchain solutions enable openness, traceability, and accountability in healthcare operations by preserving thorough audit trails.

**Privacy-Preserving Algorithms:** To preserve sensitive healthcare data while enabling analysis and accountability, privacy-preserving algorithms can be connected with the blockchain. These algorithms include differential privacy and secure multi-party computation (MPC). While providing aggregated insights and analysis for accountability reasons, these algorithms ensure that patient information is kept private.

**Consensus of Truth Algorithms:** Consensus-of-truth algorithms can be used in the healthcare industry, where numerous parties may have conflicting accounts of events. These algorithms try to establish a single source of truth by reconciling contradictory evidence. Techniques like reputation-based consensus or weighted voting can be used for this purpose.

**Encrypted Data Storage:** It is possible to encrypt healthcare data on the blockchain using symmetric or asymmetric encryption techniques. In order to ensure that only persons with the necessary decryption keys may access and read the healthcare data [17], encryption adds an extra degree of protection and secrecy (see the sketch at the end of this subsection).

**Secure Multi-Party Computation (MPC):** Collaboration on encrypted data is made possible via secure multi-party computation. Without disclosing the underlying sensitive material, it enables many parties to compute over shared data. Through the use of MPC algorithms in healthcare blockchains, aggregated patient data may be collaboratively analysed and researched while maintaining privacy and confidentiality [17, 18].

**Zero-Knowledge Proofs (ZKPs):** Participants in zero-knowledge proofs can demonstrate the accuracy of particular facts or calculations without disclosing the real data. ZKPs can be used in healthcare cooperation to verify the accuracy of certain data or calculations without disclosing private patient data. Collaboration is made possible while retaining secrecy and privacy.

**Interoperability Standards:** In order for healthcare organizations to collaborate on the blockchain, interoperability standards and protocols like HL7 and FHIR are essential. These standards make sure that various healthcare systems may communicate data without any problems, encouraging cooperation and data sharing between various organizations and stakeholders.

**Digital Identity Management:** For safe and dependable cooperation in healthcare blockchain networks, digital identity management algorithms and protocols are crucial. These algorithms monitor and verify participants' digital identities, ensuring that only persons who have been given permission may take part in collaborative activities and access healthcare data.
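The encrypted-data-storage idea above can be sketched in a few lines of Python with the `cryptography` package's Fernet recipe. The record content and key handling shown are illustrative assumptions: a real deployment would manage keys in hardware or a key-management service and usually keep large ciphertexts off-chain.

```python
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()          # held only by authorized parties
vault = Fernet(key)

# Encrypt a (hypothetical) record before storing it on or off the chain.
token = vault.encrypt(b"patient 042: HbA1c 6.1%")

print(vault.decrypt(token))          # authorized read with the correct key

# A party without the key cannot read the stored ciphertext.
try:
    Fernet(Fernet.generate_key()).decrypt(token)
except InvalidToken:
    print("decryption refused without the correct key")
```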
### 4.2 Edge Computing for Accountability and Collaboration

Edge computing refers to the discipline of processing and analysing data closer to its source, or at the edge of the network, rather than merely depending on centralized cloud servers. While the general architecture and protocols of edge computing are largely responsible for accountability, several algorithms and strategies can improve accountability in these contexts. Collaboration between distributed edge devices and entities is essential for effective data processing and decision-making in edge computing. While numerous protocols and frameworks are utilized to promote collaboration in edge computing, the specialized algorithms and approaches employed to assist collaborative operations are summarized in Figure 6.

**Figure 6: Methods of Accountability and Collaboration in Blockchain with Edge Computing.**

**Secure Communication Protocols:** For edge computing to remain accountable, secure communication protocols like Transport Layer Security (TLS) or Secure Shell (SSH) are crucial. In order to secure communication channels between edge devices, gateways, and central servers, these protocols make use of techniques for encryption, authentication, and data integrity. Secure communication protocols encourage accountability in edge computing environments by guaranteeing the confidentiality and integrity of data while it is being sent [15-18].

**Audit Trails and Logging:** In edge computing, keeping thorough audit trails and records is crucial for accountability. It is possible to track actions and identify any unauthorized or questionable behaviour by capturing activities, transactions, and events inside the edge environment. Reconstructing and analysing events using audit trails and logging algorithms enables accountability and, if necessary, forensic investigations.

**Consensus Mechanisms:** Consensus mechanisms also support accountability in edge computing by letting distributed edge nodes agree on the validity and ordering of shared data and state updates. Lightweight variants of consensus protocols such as PBFT can run among edge devices and gateways, so that no single node can unilaterally alter shared records and each participant remains answerable for the updates it proposes [16-19].

**Distributed Ledger Technologies (DLTs):** Edge computing can make use of DLTs, such as blockchain or Directed Acyclic Graph (DAG) technology, to improve accountability. These innovations offer a decentralized and tamper-resistant ledger that keeps track of and authenticates data transfers or transactions [13]. In order to ensure accountability and data integrity, edge computing systems can use DLTs to keep an immutable and transparent record of actions.

**Secure Enclaves:** To safeguard sensitive computations and data in edge computing, secure enclaves like Intel Software Guard Extensions (SGX) or Trusted Execution Environments (TEEs) offer hardware-based security capabilities. Accountability may be improved by isolating important activities within secure enclaves, ensuring that computations are carried out in a trustworthy and tamper-resistant environment [9, 12].

**Federated Learning:** A collaborative machine learning approach called federated learning enables edge devices to jointly train a single model without sharing their raw data. Each edge device trains the model locally on its own data, and a central server receives only the model updates. A global model is collectively learned through training iterations and model-update aggregation. While protecting data privacy and minimizing transmission overhead, federated learning enables collaborative model training in edge computing [14] (see the sketch after this list).
**Data Synchronization:** For collaborative data sharing and consistency in edge computing, data synchronization methods are crucial. These techniques guarantee that data is consistently synchronized and current among scattered edge devices or nodes [27]. By effectively propagating and reconciling data modifications, data synchronization techniques facilitate cooperation by offering a consistent picture of shared data among participating entities.

**Task Offloading and Load Balancing:** Algorithms for task offloading and load balancing help distribute computational workloads and jobs across edge devices cooperatively. These algorithms decide which operations should be carried out locally on edge devices, which may be delegated to other devices, and how to distribute the computing burden among the edge network's devices. Task offloading and load balancing techniques allow for effective teamwork in edge computing by optimizing job allocation and resource use.

**Replication and Caching:** In edge computing, replication and caching methods are used to improve data availability and decrease latency. These techniques facilitate cooperative data sharing and quicker access to shared resources by duplicating frequently requested data or storing computation results at edge devices. The availability of pertinent data at the edge for local processing is ensured by replication and caching methods, which facilitate collaborative operations [22, 24].

**Coordination and Synchronization Protocols:** Edge computing uses coordination and synchronization protocols, such as the Message Passing Interface (MPI) or Publish-Subscribe models, to make it easier for distant entities to work together and share information [17, 18]. The cooperation, coordination, and sharing of data and events throughout the edge network are made possible by these protocols, which specify communication patterns, message-carrying methods, and synchronization primitives.
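As a minimal illustration of the federated-learning item referenced earlier in this list, the following NumPy sketch performs federated averaging (FedAvg-style aggregation) over hypothetical edge devices; the data, local model, and sample sizes are all assumptions, and only learned weights leave each device.

```python
# Minimal federated-averaging sketch: each edge device fits a local linear
# model and only the weights are aggregated centrally (data, model, and
# device sample sizes are hypothetical).
import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

def local_update(n_samples):
    """Train locally on private data; return only the learned weights."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # local least-squares fit
    return w, n_samples

# Each edge device computes an update; the raw X, y never leave the device.
updates = [local_update(n) for n in (50, 120, 80)]

# The central server aggregates updates weighted by local sample counts.
total = sum(n for _, n in updates)
global_w = sum(w * n for w, n in updates) / total
print("global model weights:", np.round(global_w, 3))
```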
## 5 Literature Review

Recent blockchain and edge computing survey literature is compared with this survey feature by feature. Table 1 [15, 16, 19] lists the survey categories covered, in reference to blockchain and edge computing in the healthcare sector.

**Table 1: Blockchain and Edge Computing Survey.** Characteristics covered, compared across surveys from (2020), (2021), and (2022) and the current survey: overview architecture; consensus protocol; health care features; healthcare applications; privacy and security issues; standards for healthcare; security and privacy threats comparison; blockchain and edge computing security and privacy; performance of blockchain and edge computing. (The per-survey coverage marks did not survive text extraction.)

### 5.1 Performance Matrix of Blockchain and Edge Computing

**Transaction Throughput (TT):** The number of transactions that are completed in a certain amount of time is known as transaction throughput. This metric measures the time it takes to add valid data to blocks and thus influences how quickly the network processes transactions. The total number of records that have been authenticated and committed is divided by the time (in seconds) required to validate and save all of those records [5, 21]:

$$TT = \frac{\text{TotalTransactions}}{\text{TimeTaken}} \tag{1}$$

Developers employ a variety of tactics, including roll-ups, sidechains, state channels, new consensus processes, and larger blocks, to enhance throughput. The transaction throughput of a decentralized protocol is determined by the consensus algorithm of the platform. For instance, a proof-of-stake (PoS) blockchain has a higher throughput than a proof-of-work (PoW) blockchain like Bitcoin. The block size of a blockchain, website traffic, edge computing, and transaction complexity are other factors that influence throughput [19]. (A computational sketch of these metrics appears at the end of this section.)

**Transactions per Second (TS):** The number of records or transactions that have been submitted and stored each second is measured using the metric known as Transactions per Second (TS). It is used to calculate a network's processing capacity and scalability requirements [12-20]. The quantity of data kept in the ledger and the number of entries transferred across the network are often counted separately. To raise the number of transactions per second, both the block size and the block time should increase. For a single node,

$$TS(n) = \frac{\mathrm{Count}\left(\mathrm{Trans}_{(x,y)}\right)}{y - x} \;\left[\tfrac{\mathrm{Trans}}{\mathrm{s}}\right] \tag{2}$$

where $x$ and $y$ delimit the time period in seconds, Trans is the number of transactions, and $n$ designates the specific node for which the TS is computed. As a result, the average may be used to compute TS across all $N$ nodes:

$$TS = \frac{\sum_{n} TS_{n}}{N} \;\left[\tfrac{\mathrm{Trans}}{\mathrm{s}}\right] \tag{3}$$

**Transaction Latency (TL):** The time it takes for a transaction submitted to the blockchain network to be verified and written to the ledger (or denied) is measured using the Transaction Latency (TL) metric [12]. This statistic is determined by contrasting the timestamps on the submitted transactions with the timestamps on the verified and stored transactions [15, 20, 23]. This metric can also show how rapidly consensus-building strategies operate. Transaction latency is the interval between when a transaction is submitted to the network and when it is first validated; it also denotes the amount of time between pressing the submit button and seeing the transaction appear on the screen.

$$TL = \mathrm{Net} \times \left(\mathrm{Trans}_{ct} - \mathrm{Trans}_{st}\right) \tag{4}$$

where $\mathrm{Trans}_{st}$ indicates the transaction submission time, $\mathrm{Trans}_{ct}$ denotes the transaction confirmation time, and Net represents the network threshold.

**Transaction per CPU (TC):** When they are being executed, smart contracts use a lot of CPU power. How much CPU is consumed depends on the business logic that was incorporated into the contract [23]. Loops will use a significant portion of the CPU resources when encryption is used, and it requires considerable CPU time to commit a block and calculate the hash of the global state. Blockchain applications employ different encryption techniques, hashing formulas, and consensus techniques; we thus require a metric to monitor CPU use while smart contracts are in operation, where F is the frequency of a single CPU core and CPU(t) is the amount of CPU used by a blockchain program from time a to b [25, 26].
Then, the following formula can be used to determine TC for the entire blockchain network of N nodes:

$$TC = \frac{\sum_{n} TC_{n}}{N} \;\left[\tfrac{\mathrm{Trans}}{\mathrm{GHz\cdot s}}\right] \tag{5}$$

**Transactions per Second per Memory (TMS):** TMS is a measurement that illustrates how much physical and virtual memory is used by the software. The TMS of a node $n$ connected to a blockchain network between time periods a and b, executing a certain number of transactions (Trans), is calculated locally; the TMS of the whole blockchain network is then computed as

$$TMS = \frac{\sum_{n} TMS_{n}}{N} \;\left[\tfrac{\mathrm{Trans}}{\mathrm{MB\cdot s}}\right] \tag{6}$$

**Transaction per Disk Input/Output (TDIO):** Blockchain apps have a dedicated storage space to keep data and the world state. TDIO [4] is a metric that keeps track of the input/output measurements made during certain processes, including contract execution and block commits. The TDIO for a particular node $n$ is determined locally, and for the whole blockchain network it is

$$TDIO = \frac{\sum_{n} TDIO_{n}}{N} \;\left[\tfrac{\mathrm{Trans}}{\mathrm{kB\cdot s}}\right] \tag{7}$$

Edge computing performance evaluation may be theoretically approached utilizing many metrics and modeling methodologies [5, 6, 9, 27]:

(i) Queuing Theory: Edge computing system performance may be modeled and examined using queueing theory. It makes use of mathematical models that capture the rate at which jobs arrive, the rate at which edge devices provide service, and the total number of servers in the system. Performance measures like queue length, waiting time, and response time may be determined by analyzing these models.

(ii) Markov Chains: The state transitions and performance characteristics of edge computing systems may be examined using Markov chains. The probability of being in various states, and of the transitions between states, may be calculated by modeling the system as a stochastic process. This makes it possible to assess performance indicators like response time, availability, and dependability [15-20].

(iii) Network Theory: For evaluating the performance of linked edge devices and their communication network, network theory offers mathematical methods. The architecture of the network may be modeled, network latency can be examined, and the routing of data between edge devices can be optimized using methods like graph theory and optimization techniques.

(iv) Simulation Modeling: Building computational models that imitate the behaviour of edge computing systems is known as simulation modeling. These models represent the arrival of tasks, task processing by edge devices, and device-to-device communication. Performance indicators like latency, throughput, and resource utilization may be assessed by running simulations with various scenarios and settings [17, 20].

(v) Machine Learning Techniques: Large datasets gathered from edge computing devices may be analysed using machine learning methods. Performance patterns may be discovered, and predictions of future system behaviour established, by training models on historical data. This can aid in optimizing resource allocation, forecasting system problems, and enhancing overall performance.
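To make the throughput and latency metrics of Section 5.1 concrete, here is a small sketch computing Equations (1), (2), and (4) from hypothetical per-transaction timestamps; the transaction list, observation window, and network-threshold value are assumptions for illustration only.

```python
# Sketch computing TT (Eq. 1), per-node TS (Eq. 2), and TL (Eq. 4) from
# hypothetical (submit_ts, commit_ts) pairs, in seconds.
transactions = [  # (submission time, confirmation time)
    (0.0, 1.2), (0.5, 1.9), (1.0, 2.1), (1.4, 3.0), (2.0, 3.6),
]

time_taken = max(c for _, c in transactions) - min(s for s, _ in transactions)
tt = len(transactions) / time_taken            # Eq. (1): throughput [Trans/s]

x, y = 0.0, 2.0                                # observation window (x, y)
in_window = [t for t in transactions if x <= t[0] < y]
ts_node = len(in_window) / (y - x)             # Eq. (2) for a single node

net_threshold = 1.0                            # assumed value for "Net"
tl = [net_threshold * (c - s) for s, c in transactions]  # Eq. (4) per transaction
avg_tl = sum(tl) / len(tl)

print(f"TT = {tt:.2f} trans/s, TS(node) = {ts_node:.2f} trans/s, "
      f"mean TL = {avg_tl:.2f} s")
```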
## 6 Results

When blockchain and edge computing are combined, they significantly improve accountability and teamwork in the healthcare industry. Healthcare systems can achieve improved accountability by utilizing the irreversible and transparent features of blockchain and combining them with edge computing's capacity to analyse data at the edge of the network. The blockchain may be used to record patient data acquired and securely kept by edge devices, creating an auditable trail of data access and usage. This encourages accountability among healthcare professionals and supports legal compliance. Additionally, this connectivity makes it possible for healthcare stakeholders to collaborate securely and effectively. With edge devices serving as nodes in the blockchain network, real-time data access and sharing are made possible without the use of middlemen. Effective collaboration between healthcare professionals, researchers, and patients can result in better care coordination, data sharing, and decision-making.

When blockchain and edge computing are combined, real-time data exchange and analytics are made possible. Without depending on centralized cloud servers, edge devices locally process and analyse data to produce insightful results. This enables rapid decision-making, especially in urgent medical situations where quick action is essential.

**Figure 7: Blockchain and Edge Computing.**

This integration also benefits consent management and privacy protection. By utilizing the decentralized design of the blockchain, patients have more control over their data. Through smart contracts, they may immediately grant or revoke access permissions, protecting user privacy and data security. Edge devices reduce the dangers associated with centralized data storage by enforcing data privacy standards and keeping sensitive data inside the local network. Additionally, edge computing and blockchain integration enhance healthcare supply chain management. Stakeholders can trace and instantly confirm the legitimacy and provenance of medicines, medical equipment, and supplies by logging supply chain transactions on the blockchain. Figure 7 depicts the representation of blockchain and edge computing in healthcare based on various factors. Edge devices are crucial in the collection and verification of supply chain data at multiple points, ensuring transparency and lowering the dangers of fake or sub-par goods.

In conclusion, the use of edge computing and blockchain in healthcare produces measurable improvements in accountability. It improves data accountability, makes it possible to collaborate securely and effectively, makes it easier to share and analyse data in real time, improves consent management and privacy protection, and streamlines supply chain management. By promoting openness, reliability, and effectiveness in data management and decision-making processes [19, 20], these results revolutionize healthcare. Finally, experimental results show that the LSTM outperforms the other models in terms of precision, recall, and F1 score (Figure 8). This approach is practically feasible, but its maintenance cost is higher than that of the traditional model.

Enhancing accountability and collaboration within the healthcare sector is made possible by the integration of blockchain technology with edge computing. Healthcare systems may attain new levels of openness, security, and efficiency by integrating the characteristics of these technologies.
Blockchain technology creates a strong foundation for accountability due to its decentralized and unchangeable nature. Patient data may be securely gathered and stored by edge devices, and the access, use, and sharing of that data can be the subject of blockchain-based transactions. This generates an auditable record of data activity, so that healthcare providers, researchers, and other stakeholders are held responsible for their actions. Patients can more easily see how their data is utilized and shared thanks to the openness offered by the blockchain, which promotes trust and confidence in the system.

Collaboration among stakeholders in the healthcare industry is made possible by the combination of blockchain and edge computing. In the blockchain network, edge devices serve as nodes to enable seamless cooperation and real-time data exchange. Patients, healthcare professionals, and researchers may work together to develop treatment plans, discuss research findings, and exchange data in a safe and effective manner. This encourages efficient care coordination, multidisciplinary study, and the creation of novel medical treatments [17].

**Figure 8: LSTM in Healthcare.**

By allowing quick analysis and decision-making, edge computing's real-time data processing capabilities further improve cooperation. In order to reduce latency and enable quick reactions, edge devices have the ability to process and analyse data at the time of collection. This is especially helpful in challenging healthcare situations where real-time information can have a big influence on how patients are treated. Additionally, privacy preservation and consent management are ensured by the combination of blockchain and edge computing. Through blockchain-based processes, patients have ownership of their data and may grant or revoke access permissions as needed. Edge devices protect sensitive information by enforcing privacy regulations and keeping it on the local network, reducing the dangers of centralized storage and unauthorized access.

Overall, the adoption of edge computing and blockchain strengthens accountability in the healthcare sector. It creates a framework for data management that is visible and auditable, allows for direct and secure communication between stakeholders, makes it easier to analyse data and make decisions in real time, and manages privacy and consent. By encouraging trust, effectiveness, and creativity in the provision of patient care, this integration has the potential to revolutionize healthcare.

## 7 Conclusion

In conclusion, the application of edge computing and blockchain in the healthcare sector holds enormous prospects for improving accountability and teamwork. Healthcare systems may reach a new level of trust, security, and efficiency by utilizing blockchain's transparency and immutability as well as edge computing's real-time data processing capabilities. By creating an auditable trail of data activity, the combination of these technologies makes enhanced accountability possible. Patient data is securely collected and stored by edge devices, and the blockchain keeps track of all data access and usage activities. This promotes openness and confidence in the handling of patient data by guaranteeing that healthcare practitioners and other stakeholders are accountable for their actions.
Additionally, smooth communication across healthcare stakeholders is made possible by this integration. By functioning as nodes in the blockchain network, edge devices allow for safe and direct communication, doing away with the need for middlemen. To improve care coordination and research efforts, healthcare professionals, researchers, and patients may work together in real time by exchanging data and ideas. This cooperative setting encourages creativity and information exchange, which improves healthcare results. By allowing quick analysis and decision-making, edge computing's real-time data processing capabilities enhance the blockchain's transparency. By processing data at the moment of collection, edge devices may cut down on latency and enable quick replies; this is very helpful in time-sensitive healthcare circumstances, where real-time insights may significantly improve patient care. Additionally, privacy preservation and consent management are ensured by the combination of blockchain and edge computing. Through blockchain-based processes, patients have more control over their data, since they may grant or revoke access permissions as necessary. To reduce the dangers associated with centralized data storage, edge devices enforce privacy regulations and preserve sensitive data on the local network.

The potential for blockchain and edge computing to improve cooperation and accountability in the healthcare industry is exciting. The benefits and capabilities of this integration may be increased through developments in data governance, interoperability, scalability, AI integration, and regulatory compliance. The healthcare sector may increase efficiency, transparency, and collaboration by adopting these upcoming developments, which will eventually enhance patient outcomes and healthcare delivery.

**Authors' Contribution:** RK established the proposed concept, developed the theory, and carried out the computations. RK also validated the analytical techniques, encouraged the investigation of real-world issues, and oversaw the results of this work.

**Funding Statement:** This research received no external funding.

**Conflicts of Interest:** The author declares no conflict of interest.

Copyright © 2023 by the authors. This is an open access article distributed under the Creative Commons Attribution License (CC BY-NC International, https://creativecommons.org/licenses/by/4.0/), which allows others to share, adapt, tweak, and build upon the work non-commercially, provided the original work is properly cited. The authors can reuse their work commercially.

## References

1. Zheng, Z., Xie, S., Dai, H. N., Chen, X., & Wang, H. (2018). Blockchain challenges and opportunities: A survey. International Journal of Web and Grid Services, 14(4), 352-375.
2. Fernandez-Aleman, J. L., Señor, I. C., Lozoya, P. O., & Toval, A. (2013). Electronic health record security and privacy: A thorough overview of the literature. Journal of Biomedical Informatics, 46(3), 541-562.
3. Kuo, T. T., & Pham, A. (2022). Detecting model misconducts in decentralized healthcare federated learning. International Journal of Medical Informatics, 158, 104658.
4. Yang, Y., Xu, R., Zhang, J., & Qian (2018). Design and implementation of a blockchain-based access control system for medical records. 3rd International Conference on Crowd Science and Engineering Proceedings, 182-188.
5. Zhang, X., Poslad, S., & Ma, Z. (2018, December). Block-based access control for blockchain-based electronic medical records (EMRs) query in eHealth. In 2018 IEEE Global Communications Conference (GLOBECOM) (pp. 1-7). IEEE.
6. Farooq, M. S., Ahmed, M., & Emran, M. (2022). A survey on blockchain acquainted software requirements engineering: Model, opportunities, challenges, and future directions. IEEE Access, 10, 48193-48228.
7. Shi, S., He, D., Li, L., Kumar, N., Khan, M. K., & Choo, K. K. R. (2020). Applications of blockchain in ensuring the security and privacy of electronic health record systems: A survey. Computers & Security, 97, 101966.
8. Iqbal, R., Salah, & Chakraborty, S. (2019). Healthcare IoT with blockchain and edge computing: Opportunities, problems, and solutions. IEEE Access, 7, 10254-10267.
9. Zeng, Z., Sheng, Q. Z., & Qin, Y. (2019). Blockchain in healthcare: A thorough evaluation of the literature, a framework for synthesis, and a research plan for the future. International Journal of Information Management, 49, 128-144.
10. Hussien, H. M., Yasin, S. M., Udzir, S. N. I., Zaidan, A. A., & Zaidan, B. B. (2019). A systematic review for enabling of develop a blockchain technology in healthcare application: Taxonomy, substantially analysis, motivations, challenges, recommendations and future direction. Journal of Medical Systems, 43, 1-35.
11. Kothari, R., Choudhary, N., & Jain, K. (2021). CP-ABE scheme with decryption keys of constant size using ECC with expressive threshold access structure. In Emerging Trends in Data Driven Computing and Communications: Proceedings of DDCIoT 2021 (pp. 15-36). Springer Singapore.
12. Rahman, M. A., Hossain, M. S., Loukas, G., Hassanain, E., Rahman, S. S., Alhamid, M. F., & Guizani, M. (2018). Blockchain-based mobile edge computing framework for secure therapy applications. IEEE Access, 6, 72469-72478.
13. Alotaibi, E. F., AlBar, A. M., & Hoque, M. R. (2016). Mobile computing security: Issues and requirements. Journal of Advances in Information Technology, 7(1).
14. Ahad, M. A., Tripathi, G., Zafar, S., & Doja, F. (2020). IoT data management—Security aspects of information linkage in IoT systems. Principles of Internet of Things (IoT) Ecosystem: Insight Paradigm, 439-464.
15. Abu-Elezz, I., Hassan, A., Nazeemudeen, A., Househ, M., & Abd-Alrazaq, A. (2020). The benefits and threats of blockchain technology in healthcare: A scoping review. International Journal of Medical Informatics, 142, 104246.
16. Xiao, K., Shi, W., Gao, Z., Yao, C., & Qiu, X. (2020). DAER: A resource preallocation algorithm of edge computing server by using blockchain in intelligent driving. IEEE Internet of Things Journal, 7(10), 9291-9302.
17. Lin, X., Wu, J., Mumtaz, S., Garg, S., Li, J., & Guizani, M. (2020). Blockchain-based on-demand computing resource trading in IoV-assisted smart city. IEEE Transactions on Emerging Topics in Computing, 9(3), 1373-1385.
18. Wang, S., Ye, D., Huang, X., Yu, R., Wang, Y., & Zhang, Y. (2020). Consortium blockchain for secure resource sharing in vehicular edge computing: A contract-based approach. IEEE Transactions on Network Science and Engineering, 8(2), 1189-1201.
19. Hammoud, A., Sami, H., Mourad, A., Otrok, H., Mizouni, R., & Bentahar, J. (2020). AI, blockchain, and vehicular edge computing for smart and secure IoV: Challenges and directions. IEEE Internet of Things Magazine, 3(2), 68-73.
20. Aggarwal, L., Sachdeva, S., & Goswami, P. (2023). Quantum healthcare computing using precision based granular approach. Applied Soft Computing, 144, 110458.
21. Xiong, Z., Zhang, Y., Niyato, D., Wang, P., & Han, Z. (2018). When mobile blockchain meets edge computing. IEEE Communications Magazine, 56(8), 33-39.
22. Tanwar, S., Parekh, K., & Evans, R. (2020). Blockchain-based electronic healthcare record system for healthcare 4.0 applications. Journal of Information Security and Applications, 50, 102407.
23. Attaran, M. (2022). Blockchain technology in healthcare: Challenges and opportunities. International Journal of Healthcare Management, 15(1), 70-83.
24. Khan, F., Kothari, R., Patel, M., & Banoth, N. (2022, April). Enhancing non-fungible tokens for the evolution of blockchain technology. In 2022 International Conference on Sustainable Computing and Data Communication Systems (ICSCDS) (pp. 1148-1153). IEEE.
25. Hofert, A. (2023). Converging technologies and business models that will transform the healthcare sector exponentially. In Digital Identity in the New Era of Personalized Medicine (pp. 46-64). IGI Global.
26. Mantey, E. A., Zhou, C., Srividhya, S. R., Jain, S. K., & Sundaravadivazhagan, B. (2022). Integrated blockchain-deep learning approach for analyzing the electronic health records recommender system. Frontiers in Public Health, 10, 905265.
27. Gupta, R., Tanwar, S., Al-Turjman, F., Italiya, P., Nauman, A., & Kim, S. W. (2020). Smart contract privacy protection using AI in cyber-physical systems: Tools, techniques and challenges. IEEE Access, 8, 24746-24772.

## About the Author

**Mr. Rakshit Kothari** is an Assistant Professor in the Department of Computer Science and Engineering at Geetanjali Institute of Technical Studies, Dabok, Udaipur, Rajasthan. He completed his B.Tech in Computer Science and Engineering at Rajasthan Technical University, Kota, with first-division honours, and his Master of Technology in Computer Science and Engineering at the College of Technology and Engineering, Maharana Pratap University of Agriculture and Technology, Udaipur, Rajasthan, India. He has been in the teaching profession for more than 2 years, has published a variety of books, and has presented a number of papers in national and international journals, conferences, and symposiums. He is currently a member of the Soft Computing Research Society. His main areas of interest include the Internet of Things, cryptography, and blockchain.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.22545/2023/00230?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.22545/2023/00230, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBYNC", "status": "GOLD", "url": "https://www.atlas-tjes.org/index.php/tjes/article/download/745/346" }
2023
[ "JournalArticle" ]
true
2023-08-05T00:00:00
[]
11,210
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/016cffca854ccedc33ebe8545d64a0777aa618f8
[ "Computer Science" ]
0.848722
Real-Time Big Data Architecture for Processing Cryptocurrency and Social Media Data: A Clustering Approach Based on k-Means
016cffca854ccedc33ebe8545d64a0777aa618f8
Algorithms
[ { "authorId": "2129068800", "name": "Adrian Barradas" }, { "authorId": "2163340854", "name": "Acela Tejeda-Gil" }, { "authorId": "28087807", "name": "Rosa María Cantón Croda" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-150910", "http://www.mdpi.com/journal/algorithms", "http://www.mdpi.com/journal/algorithms/" ], "id": "e95c8d18-09be-464f-a3cf-5b2637f0eff6", "issn": "1999-4893", "name": "Algorithms", "type": "journal", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-150910" }
Cryptocurrencies have recently emerged as financial assets that allow their users to execute transactions in a decentralized manner. Their popularity has led to the generation of huge amounts of data, specifically on social media networks such as Twitter. In this study, we propose an iterative kappa architecture that collects, processes, and temporarily stores data regarding transactions and tweets of two of the major cryptocurrencies according to their market capitalization: Bitcoin (BTC) and Ethereum (ETH). We applied a k-means clustering approach to group data according to their principal characteristics. Data are categorized into three groups: BTC typical data, ETH typical data, BTC and ETH atypical data. Findings show that activity on Twitter correlates to activity regarding the transactions of cryptocurrencies. It was also found that around 14% of data relate to extraordinary behaviors regarding cryptocurrencies. These data contain higher transaction volumes of both cryptocurrencies, and about 9.5% more social media publications in comparison with the rest of the data. The main advantages of the proposed architecture are its flexibility and its ability to relate data from various datasets.
# ***algorithms*** *Article*

## **Real-Time Big Data Architecture for Processing Cryptocurrency and Social Media Data: A Clustering Approach Based on *k*-Means**

**Adrian Barradas** *[,†], **Acela Tejeda-Gil** [†] and **Rosa-María Cantón-Croda** [†]

Graduate School of Engineering, UPAEP-University, Puebla 72410, Mexico; acela.tejeda@upaep.edu.mx (A.T.-G.); rosamaria.canton@upaep.mx (R.-M.C.-C.)
* Correspondence: adrian.barradas@upaep.edu.mx
† These authors contributed equally to this work.

**Citation:** Barradas, A.; Tejeda-Gil, A.; Cantón-Croda, R.-M. Real-Time Big Data Architecture for Processing Cryptocurrency and Social Media Data: A Clustering Approach Based on *k*-Means. *Algorithms* **2022**, *15*, 140. https://doi.org/10.3390/a15050140

Academic Editors: Christos Makris and Andreas Kanavos. Received: 16 March 2022; Accepted: 7 April 2022; Published: 22 April 2022.

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

**Abstract:** Cryptocurrencies have recently emerged as financial assets that allow their users to execute transactions in a decentralized manner. Their popularity has led to the generation of huge amounts of data, specifically on social media networks such as Twitter. In this study, we propose an iterative kappa architecture that collects, processes, and temporarily stores data regarding transactions and tweets of two of the major cryptocurrencies according to their market capitalization: Bitcoin (BTC) and Ethereum (ETH). We applied a *k*-means clustering approach to group data according to their principal characteristics. Data are categorized into three groups: BTC typical data, ETH typical data, BTC and ETH atypical data. Findings show that activity on Twitter correlates to activity regarding the transactions of cryptocurrencies. It was also found that around 14% of data relate to extraordinary behaviors regarding cryptocurrencies. These data contain higher transaction volumes of both cryptocurrencies, and about 9.5% more social media publications in comparison with the rest of the data. The main advantages of the proposed architecture are its flexibility and its ability to relate data from various datasets.

**Keywords:** kappa architecture; iterative data processing; document-oriented No-SQL database; Bitcoin; Ethereum; Twitter

**1. Introduction**

During the past few years, the use of digital currencies has emerged as a novel manner of executing financial transactions [1]. A digital currency works the same way a real currency does, with the particularity that it is not issued by a central bank; thus, it is a decentralized currency [2]. Digital currencies are generated using a cryptographic algorithm called blockchain, which employs mathematical encryption methods to create and verify a continuously growing data structure. Therefore, blockchain protects data by transforming it into an unreadable format, which can only be decrypted employing the corresponding decryption algorithm.
Blockchain transactions flow through a computer network without the need for intermediaries, as the algorithm links users directly [1]. That kind of network is known as a cryptocurrency network, as it enables the establishment of decentralized peer-to-peer data exchange [3]. In terms of trading volume, Bitcoin is currently the most popular cryptocurrency; it allows electronic cash transactions directly from one partner to another without going through a financial institution [4]. Diverse studies serve as evidence that Bitcoin has been strangely volatile since its establishment. Its volatile nature has brought into vogue its use among speculators [5]. Although its use until now has been mostly for speculation, since at least 2010, numerous intermediaries have begun to transact with Bitcoin [6]. It has been reported that the market capitalization of the one hundred largest cryptocurrencies exceeded the equivalent of USD 2.65 trillion by November 2021; nevertheless, according to CoinMarketCap, Bitcoin accounts for the largest cryptocurrency with a market capitalization that surpasses the USD 1.1 trillion mark, while Ethereum stands as the second-largest cryptocurrency with a market capitalization equivalent to USD 543 billion [7]. Both Bitcoin and Ethereum use the same principles of blockchain technology; nevertheless, while Bitcoin's purpose is limited to functioning as a digital currency, Ethereum is designed to be a general-purpose programmable blockchain, which can manage the transactions of a digital currency, but also any kind of data expressible as a key-value tuple [8]. This gives Ethereum the advantage of being suitable for other decentralized applications; however, this study focuses only on its use as a digital currency.

Cryptocurrencies rose as a trend due to their popularity on social media. In that context, one of the main sources of information about cryptocurrencies is Twitter. It allows users to share their thoughts and mindsets regarding cryptocurrencies; therefore it is, among other social networks, a medium to boost the cryptocurrency world [9, 10]. According to data from BitInfoCharts, the number of daily tweets related to Bitcoin during 2021 fluctuated between 30,540 and 363,566; the latter corresponds to around 0.072 percent of the average daily tweets worldwide [11]. This evidences the wide use of Twitter as an information medium for cryptocurrencies [12, 13]. It is worth mentioning that Twitter is considered a leading social media platform and a rich source of real-time information [14]. On the other hand, during the same period, daily Bitcoin transactions averaged 332,355 [15]. In that context, a large amount of data is generated every day; i.e., around 136 tweets and 230 transactions per minute. Big data refers to large and complex datasets which require advanced data storage, management, and analysis technologies [3]. One of the sources of big data is social media, which has an increasing number of users [16] who integrate their background and daily activities into the networks. This fact contributes to the rapid generation of gigantic datasets [17].
As data are generated rapidly, it is meaningful to obtain information and insights in real time to react appropriately to events and trends surrounding large volumes of data. In this case, it concerns the analysis of social media posts and cryptocurrency transactions [18]. Given the popularity of cryptocurrencies, there is a vast number of recent studies and projects focused on analyzing data from social media and cryptocurrencies in real time utilizing novel data processing tools and methodologies. Mohapatra et al. [19] proposed a distributed architectural design for handling large volumes of data from Twitter and Bitcoin transactions in real time to predict price fluctuations; by means of a combined machine learning and lexicon approach, they determined the sentiments of the tweets and related them with the price of Bitcoin to predict the next minute's price. Bandi [20] utilized a lambda architecture to process and visualize real-time data regarding cryptocurrencies' prices. On the other hand, Horvat et al. [21] proposed an architecture for real-time cryptocurrency data processing and analysis based on the lambda architectural approach to obtain insights through the relation of different data sources such as social media, cryptocurrencies, and the stock market. A kappa architecture was proposed by Bandi and Hurtado [18] to process real-time data from Twitter to visualize analytics, such as trends and tweet volume. In addition, a relation between tweets and cryptocurrencies' prices was studied by Abraham et al. [22] as a way to predict the direction of the price variation, from which it was found that the volume of tweets is more significant than their sentiments. It was also found by Park and Lee [23] that the volume of tweets correlates with Bitcoin prices. Garcia et al. [24] found that an increase in Bitcoin's price led to a higher number of tweets, which again would drive the price further up [25]. Some other studies focused only on the relation between tweets and cryptocurrencies, placing less emphasis on the methods involved in the management and processing of data. Aharon et al. [26] found that there is a causal relationship between the uncertainty associated with sentiments in social media and cryptocurrency returns. In addition, we have found evidence of works that aim to identify behavioral patterns regarding cryptocurrencies by means of clustering algorithms. Baek et al. [27] applied a *k*-means clustering approach to identify suspicious transactions of Ethereum. Aspembitova et al. [28] identified four types of cryptocurrency users through
The proposed architecture focuses on the processing of data in real time while looking for insights and patterns regarding the number of tweets, their sentiment, and the number, type, and volume of cryptocurrency transactions. Data are collected through application programming interfaces (APIs) and streamed to be processed and stored in a document-oriented No-SQL database (MongoDB ™ ). Afterward, data are related with the purpose of finding meaningful patterns. The present work aims to demonstrate the use and benefits of the proposed architecture as a choice for relating data from cryptocurrencies and social media while identifying patterns in real time; for that purpose, data from a defined period of time are used. This paper is organized as follows: Section 2 describes in detail the materials and methods used for the study’s development. Section 3 presents the results obtained by processing and relating data using the proposed kappa architecture. Finally, Section 4 summarizes the main findings and future works for this study. **2. Materials and Methods** This study is developed by following an approach based on the kappa architecture for big data as shown in Figure 1. The kappa architecture was first introduced by Kreps in 2014 [ 30 ]. It derives from the lambda architecture, which is considered one of the industry’s best practices for scalable real-time big data processing [ 21 ]. Lambda architecture consists of three layers: batch layer, speed layer, and serving layer. The batch layer processes data and stores them to query precomputed data on demand instead of querying them on the fly. The speed layer processes data in real-time to compensate for the low latency updates in the batch layer. Thus, data are processed in a parallel manner in both layers. Finally, the serving layer stores the views from the previous two layers [ 31 ]. Kappa architecture is similar to the lambda architecture, with the difference that it does not include a batch layer, therefore it processes data only in real time [ 30 ]. In this context, the main characteristics of the kappa architecture are its simplicity and its flexibility in comparison with other big data architectures [32]; thus it is suitable for online processing of data flows [33]. **Figure 1.** Proposed kappa architecture. Source: compiled by authors with data from [ 30 ]. “Apache Kafka”, and “Apache Spark” are trademarks of the Apache Software Foundation. “TWITTER, TWEET, RETWEET and the Twitter Bird logo are trademarks of Twitter Inc. or its affiliates”. ----- *Algorithms* **2022**, *15*, 140 4 of 11 The proposed architecture consists of a real-time streaming layer that receives and processes new incoming data and a serving layer that stores data in MongoDB ™ to be displayed or queried on demand. At the streaming layer, the processing is executed by means of Apache Kafka ™ and Apache Spark ™ which are helpful to process data in a distributed manner and consequently faster, in comparison with non-distributed approaches [ 34 ]. At the serving layer of the kappa architecture, the processed, modeled, and evaluated data coming from the real-time streaming layer are finally loaded into a database management system (DBMS), i.e., MongoDB ™ . In this case, as we handle huge volumes of unstructured data from Twitter, a document-oriented No-SQL database is better suitable than a traditional relational database due to its advantages regarding the horizontal scalability and the storage of unstructured data. The kappa architecture that we present is iterative. 
In the first iteration, single datasets from Twitter and CryptoCompare are processed and transformed in order to be related; thereafter, a second iteration is executed to classify the related datasets and obtain insights. In that context, data are collected as they are generated and then streamed, transformed, and stored in MongoDB ™ from which datasets are queried. In this case, MongoDB ™ serves as a batch that stores data from the last 120 s with the purpose of relating it, by considering a time span of one minute and therefore obtaining one register for each minute. Finally, the queried dataset is returned to the streaming layer to be processed by means of a machine learning approach; in this case, *k* -means clustering. *K* -means clustering is one of the most popular algorithms for unsupervised machine learning. It groups data with similar characteristics under a determined number of clusters while separating them according to their dissimilarities [ 35 ]. Clustering is defined as a method for finding homogeneous groups of data points in a dataset; in that sense, it allows the recognition of patterns in data [36]. For this study, data related to the two largest cryptocurrencies, according to their market capitalization, were collected, i.e., Bitcoin (BTC) and Ethereum (ETH) [ 7 ]. Data mining for the corresponding tweets was done considering publications made in English. Parameters for the *k* -means clustering approach were calculated for data collected on 14 January 2022 corresponding to a period of 8 h from 06:59:00 (UTC-6) to 16:59:00 ( UTC-6 ). The algorithms for the proposed architecture were executed by a single computer, nevertheless, it is suitable for its execution in a computer cluster, which distributes the computational requirements between the computers that conform to it. Figure 2 shows a representation of the steps involved in the development of the study. First, data mining is executed in real time by means of public APIs [ 37, 38 ] that enable the retrieval of the latest available raw data from Twitter and CryptoCompare. Collected data are then streamed and immediately transformed. Datasets are cleaned by deleting unuseful variables, and the remaining are transformed in order to be correctly processed and related. Additionally, a standard notation for the data is defined, and derived attributes are calculated when needed. Thereafter, data pass to the serving layer, where they are stored in MongoDB ™ and then queried to relate the corresponding datasets according to their most relevant attributes. The queried and related data are then returned to the real-time streaming layer, at which a *k* -means clustering approach is executed to categorize data in groups according to their characteristics. In that sense, data flow in a second iteration in parallel through the architecture with the purpose of obtaining more information from them in real time. ----- *Algorithms* **2022**, *15*, 140 5 of 11 **Figure 2.** Process diagram for the proposed kappa architecture. Source: compiled by authors . **3. Results** Data for this study were obtained from two different sources (Twitter and CryptoCompare) in the form of a JSON real-time stream, by means of an API [ 37, 38 ]. To query data, a set of keywords were given which correspond to the name and symbol of the cryptocurrencies, i.e., Bitcoin (BTC), and Ethereum (ETH). 
**3. Results**

Data for this study were obtained from two different sources (Twitter and CryptoCompare) in the form of a JSON real-time stream, by means of an API [37, 38]. To query data, a set of keywords was given which corresponds to the name and symbol of the cryptocurrencies, i.e., Bitcoin (BTC) and Ethereum (ETH). As shown in Table 1, data collected from Twitter contain several attributes related to each tweet, such as *id*, *timestamp*, and *text*, but also attributes related to the user, such as *user mentions*, *number of followers*, and *location*, among others. On the other hand, data from CryptoCompare contain transaction-inherent attributes, i.e., *timestamp [TS]*, *market [M]*, *symbol [FSYM]*, *price [P]*, and *volume [Q]*.

**Table 1.** Attributes of each raw dataset obtained.

| Twitter | CryptoCompare |
|---|---|
| created at: 'Fri Jan 14 07:00:00 +0000 2022'; id: 61e173da2e853f6c8c8c92ff; id str: '148197437765466521'; text: 'RT @Saki5786: @WatcherGuru A big transformation is on the way! The TIME HAS COME for #CryptoIslandDAO! NOW is the best time to start thi...'; truncated: True; entities: hashtags: [], followers: [], user mentions: [], urls: [url: '', display url: 'twitter.com/i/web/status/1...'], location: []; metadata: iso language code: 'en', result type: 'recent'; source: Twitter Web App (https://mobile.twitter.com, accessed on 14 January 2022) | date: "2022-01-14 07:00:00"; TYPE: "0"; M: "Coinbase"; FSYM: "BTC"; TSYM: "USD"; F: "2"; ID: "263883436"; TS: "1642165200"; Q: "0.00059115"; P: "42070.6406"; TOTAL: "24.8704"; RTS: "1642165200"; TSNS: "7000000000"; RTSNS: "392000000" |

Data streams feed their corresponding topic (*Twitter* and *Crypto*) in Apache Kafka™. Data streaming is executed in a parallel manner so that both sources can be processed simultaneously. Data processing is then carried out in Apache Spark™, which allows the computation tasks to be divided between the various processors forming a cluster.
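For illustration, a producer feeding the *Crypto* topic could look like the following sketch, using the kafka-python client; the broker address is an assumption, and the payload fields are taken from the CryptoCompare sample in Table 1.

```python
# Illustrative producer feeding the "Crypto" topic with one trade record
# (broker address assumed; fields taken from the sample in Table 1).
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),  # binary payload
)

trade = {"TS": 1642165200, "M": "Coinbase", "FSYM": "BTC",
         "TSYM": "USD", "P": 42070.6406, "Q": 0.00059115}
producer.send("Crypto", trade)   # each source feeds its corresponding topic
producer.flush()
```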
The obtained dataset, as shown in Table 2, is then sent to a new topic ( *Query* ) in Apache Kafka™to be streamed to Apache Spark™and thus processed in a second iteration. **Table 2.** Relation between Twitter and CryptoCompare datasets on a time basis of minutes. **Sell** **Buy** **Buy** **Timestamp** **Symb.** **Tweets** **Sent.** **Sell Avg.** **Sell No.** **Buy Vol.** **Vol.** **Avg.** **No.** 14 January 2022 T07:00:00.00 BTC 693 *−* 165 1.77 42.1 * 101 3.98 42.1 * 160 14 January 2022 T07:00:00.00 ETH 878 *−* 352 63.19 3.21 * 182 23.5 3.21 * 160 14 January 2022 T07:01:00.00 BTC 618 *−* 124 4.9 42.0 * 154 5.11 42.0 * 213 14 January 2022 T07:01:00.00 ETH 809 *−* 238 24.7 3.21 * 176 24.4 3.21 * 155 14 January 2022 T07:02:00.00 BTC 620 *−* 160 0.38 42.0 * 95 2.06 42.0 * 165 14 January 2022 T07:02:00.00 ETH 815 *−* 272 76.2 3.21 * 135 66.7 3.21 * 135 - Expressed in thousands. With the purpose of demonstrating the application of the proposed algorithm, we collected data for a period of 8 h, from 06:59:00 (UTC-6) to 16:59:00 (UTC-6) of 14 January 2022 . This corresponds to 248,313 tweets, 73,506 sell transactions, and 114,493 buy transactions of both cryptocurrencies. Figures 3 and 4 show a graphical representation of the behavior of the collected data regarding the cryptocurrencies Bitcoin (BTC) and Ethereum (ETH), respectively. **Figure 3.** Graphical representation of data related to cryptocurrency Bitcoin (BTC). Source: compiled by authors. ----- *Algorithms* **2022**, *15*, 140 7 of 11 **Figure 4.** Graphical representation of data related to cryptocurrency Ethereum (ETH). Source: compiled by authors. It is notorious that in the case of Bitcoin (BTC), as the price increases, the sentiment does too. A similar behavior is seen when the price remains steady, thus having a stable sentiment range. On other hand, buy and sell transactions seem to behave according to the change in price, and this means that an increase or decrease in price is related to a larger or smaller number of buy and sell transactions, respectively; nevertheless, this behavior appears to happen only when there is an abrupt change in price. It is worth mentioning that the number of tweets and transactions tends to lower values as the day goes by. This may indicate that the vast majority of activities regarding cryptocurrencies are carried out during normal working hours. Moreover, in the case of Ethereum (ETH), its behavior is similar to that of Bitcoin (BTC). As shown in Figure 4, there is a relation between the number of tweets, the sentiment around them, and price, but only when the price change is abrupt. When the price remains steady, the rest of the variables seem to behave in the same manner. In this case, it can also be seen that during the final minutes of the graph, the sentiment does not affect the price, which tends to remain significantly unchanged. Finally, as in the previous graph, the number of tweets and transactions tends to decrease as the day passes by. To determine whether there is a correlation between variables, a Pearson correlation analysis was executed. For this purpose, data were standardized to let all the attributes be expressed in the same terms, so they can be correctly related. Table 3 presents a correlation matrix for the corresponding variables of the dataset, from which only the statistically significant values ( *p* -value *≥* 0.05) were considered. It was found that there is a positive correlation between the number of tweets and the buy and sell volumes (0.34, 0.43). 
Before returning data to the streaming layer for the execution of the *k*-means clustering approach, an optimal number of clusters is defined by means of the *silhouette method*, which measures the compactness and separation of the data [41]. Compactness refers to the similarity between each data point and its cluster, while separation refers to how dissimilar each data point is when compared to the other clusters [42]. In this case, the optimal number of clusters is determined according to the collected data; therefore, a silhouette coefficient was calculated for an arbitrary range of clusters, from *k* = 3 to *k* = 9. As our data consider two cryptocurrencies, we neglected a *k*-value of 2 with the purpose of grouping data beyond their cryptocurrency symbol. The silhouette coefficient ranges between −1 and 1: a value of 1 denotes that clusters are well apart from each other and that data points are close to their centroids, while −1 denotes that data points are grouped in the wrong clusters and that the cluster centers are not well separated [43]. In that context, the higher the value of the coefficient, the better the clustering. We selected the optimal number of clusters according to these criteria. Figure 5 shows the calculated values of the silhouette coefficient for the clusters within the defined range. The highest coefficient is obtained by grouping data in 3 clusters; therefore, this is the number that we consider for *k*.

**Figure 5.** Silhouette coefficient related to the number of clusters. Source: compiled by authors.
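The selection procedure amounts to the short loop sketched below, an illustration rather than the authors' code (the paper runs the clustering inside Spark; scikit-learn is used here only to keep the sketch compact).

```python
# Sketch: pick k in 3..9 by the silhouette coefficient, then cluster.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

def choose_k(X: np.ndarray, k_range=range(3, 10)) -> int:
    X = StandardScaler().fit_transform(X)
    scores = {}
    for k in k_range:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
        scores[k] = silhouette_score(X, labels)  # in [-1, 1]; higher is better
    return max(scores, key=scores.get)

# For the dataset reported in Figure 5, this selection yields k = 3.
```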
Now that the optimal number of clusters is selected, data are modeled at the streaming layer in a second iteration. Thereafter, it was found that data are grouped according to their symbol in the first and second clusters; nevertheless, the third cluster concentrates data from both cryptocurrencies whose numbers of buy and sell transactions are significantly higher in comparison with the rest of the data; in consequence, the volume of bought and sold cryptocurrencies is also higher. In those cases, on average, the sentiment tends to be more positive, as does the number of tweets. Table 4 shows the average values of the grouped data, which indicate that cluster 3 groups data related to an increase in the activity over cryptocurrencies. In that sense, and in relation to the findings from Figures 3 and 4, we consider that clusters 1 and 2 contain data corresponding to a steady behavior of the cryptocurrencies, while cluster 3 corresponds to data whose behavior is more volatile.

**Table 4.** Average values separated by cluster.

| Cluster | Symb. | Avg. Tweets | Avg. Sent. | Avg. Sell Vol. | Avg. Sell Price | Avg. Sell No. | Avg. Buy Vol. | Avg. Buy Price | Avg. Buy No. | % Data |
|---|---|---|---|---|---|---|---|---|---|---|
| 1 | BTC | 515 | −82 | 4.7 | 42.8 * | 145 | 5.65 | 42.8 * | 223.4 | 42% |
| 2 | ETH | 720 | −103 | 31.0 | 3.27 * | 125 | 39.24 | 3.27 * | 189.9 | 38% |
| 3 | BTC | 564 | −72 | 16.4 | 43.0 * | 332 | 33.25 | 43.0 * | 603.7 | 14% |
| 3 | ETH | 789 | −77 | 87.8 | 3.28 * | 194 | 105.24 | 3.28 * | 290.0 | |

\* Expressed in thousands. The 14% share refers to cluster 3 as a whole.

**4. Discussion**

Results show that the proposed iterative kappa architecture is useful for processing data and determining patterns in real time. From the correlation analysis, it was found that there is a relation between the activity in social networks, i.e., Twitter, and the behavior of cryptocurrency markets, evidenced by a positive correlation between the number of tweets and the buy and sell volumes of the cryptocurrencies. The findings support previous studies [19,22–24], in which it was found that the number of tweets and sentiment were positively correlated with cryptocurrencies' transaction volumes and prices. In addition, by means of the *k*-means clustering approach, it was found that some data lie outside the common trends regarding transaction volumes of the cryptocurrencies. We identified these outliers by grouping data in three clusters: two of them correspond to a steady behavior of the cryptocurrencies, while the third gathers data related to unusual transaction volumes. This latter group is thus useful for identifying anomalous behaviors in the market, which are characterized mainly by a higher volume of tweets with a more positive sentiment and higher transaction volumes. From the executed *k*-means clustering approach, we found that around 14% of the data fall in the third cluster. In that cluster, on average, Bitcoin (BTC) was sold and bought around 128% and 170% more times than in cluster 1, while for Ethereum (ETH) the percentages were 54% and 52%, respectively, in comparison with cluster 2, thus resulting in higher transaction volumes. In both cases, the number of tweets was around 9.5% higher than in the first two clusters. Additionally, the sentiment of the tweets shows higher values (12% for Bitcoin (BTC) and 25% for Ethereum (ETH)) in the third cluster. These findings demonstrate that positive sentiment in the environment regarding cryptocurrencies promotes activity in the market, which is consistent with the correlation found between the number of tweets and the buy and sell volumes.

The proposed architecture may be mistaken for a lambda architecture because both have a batch step; nevertheless, the two steps accomplish different tasks and serve different purposes. While the lambda architecture contains an extra batch layer that receives data simultaneously with the streaming layer, our proposed variant of the kappa architecture applies a batch step inside the existing serving layer to temporarily store processed data. In that sense, and in comparison with the simple kappa architecture, our proposal has the advantage of being able to relate various datasets in the second iteration by considering a different time span than the one selected for data streaming in the first iteration. It is a flexible architecture which offers an alternative solution for real-time data processing and modeling with respect to traditional techniques, i.e., relational databases [44].
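The batch step inside the serving layer can be pictured as below: the minute-level facts are queried from the database and republished to the Kafka topic *Query* for the second streaming iteration. This is an illustration under assumed connection strings and a hypothetical database/collection name, not the authors' code.

```python
# Sketch: serving-layer batch step feeding the second streaming iteration.
import json
from kafka import KafkaProducer   # kafka-python client
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
facts = client["crypto"]["minute_facts"]          # hypothetical db/collection

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda d: json.dumps(d, default=str).encode("utf-8"),
)

# Re-emit the stored minute-level facts; a different time span is selected
# simply by changing this query filter before the second iteration runs.
for doc in facts.find({}, {"_id": 0}):
    producer.send("Query", doc)
producer.flush()
```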
The application of our proposal is not limited to the execution of a *k*-means clustering approach. Other unsupervised machine learning algorithms may be explored, such as hierarchical cluster analysis (HCA) or fuzzy C-means clustering, which could help find different patterns regarding the behavior of cryptocurrencies. In addition, supervised machine learning algorithms may be supported. Some other studies proposed a similar application of the kappa architecture to process and model data in real time [33,45]; nevertheless, our proposal differs in the way data are processed: none of the studies we found combined an iterative approach with a batch step involving both a database management system and machine learning processing. The proposed iterative kappa architecture in this study thus contributes to expanding the alternatives for real-time data processing with machine learning techniques.

Even though this study considers only data from Twitter, for a limited period of time and in a specific language, future work can evaluate data from other social networks, e.g., Reddit and Telegram [14], over a longer period and in other languages. In addition, other machine learning algorithms may be explored within the architecture in order to widen the knowledge regarding the data. The integration of data from new data sources in order to analyze the architecture from a multidimensional approach also remains open for further studies. Finally, a higher volume of data and more attributes may be considered with the purpose of identifying whether other variables correlate with specific trends in the cryptocurrency market.

**Author Contributions:** Methodology, A.B.; Supervision, R.-M.C.-C.; Writing—review and editing, A.B. and A.T.-G. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding. The APC was funded by UPAEP-University.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Restrictions apply to the availability of these data. Data were obtained in real time from Twitter and CryptoCompare and are available at https://twitter.com (accessed on 14 January 2022) and https://www.cryptocompare.com (accessed on 14 January 2022) with the permission of Twitter and CryptoCompare.

**Conflicts of Interest:** The authors declare no conflict of interest.

**References**

1. Peters, G.; Panayi, E.; Chapelle, A. Trends in Cryptocurrencies and Blockchain Technologies: A Monetary Theory and Regulation Perspective. *J. Financ. Perspect.* **2017**, *3*, 1–46.
2. de Albuquerque, B.S.; de Castro Callado, M. Understanding Bitcoins: Facts and Questions. *Rev. Bras. Econ.* **2015**, *69*, 3–16. [CrossRef](http://doi.org/10.5935/0034-7140.20150001)
3. Hassani, H.; Huang, X.; Silva, E.S. Fusing Big Data, Blockchain, and Cryptocurrency.
In *Fusing Big Data, Blockchain and Cryptocurrency: Their Individual and Combined Importance in the Digital Economy*; Hassani, H., Huang, X., Silva, E.S., Eds.; Springer International Publishing: Cham, Switzerland, 2019; pp. 99–117. [CrossRef](http://dx.doi.org/10.1007/978-3-030-31391-3_5)
4. Shen, D.; Urquhart, A.; Wang, P. Does Twitter Predict Bitcoin? *Econ. Lett.* **2019**, *174*, 118–122. [CrossRef](http://dx.doi.org/10.1016/j.econlet.2018.11.007)
5. Mallikarjuna, B.; Ramana, T.; Kallam, S.; Patan, R.; Manikandan, R. Visualizing Bitcoin Using Big Data: Mempool Visualization, Peer Visualization, Attack Visual Analysis, High-Resolution Visualization of Bitcoin Systems, Effectiveness. In *Blockchain, Big Data and Machine Learning*, 1st ed.; CRC Press: Boca Raton, FL, USA, 2020; pp. 155–176. [CrossRef](http://dx.doi.org/10.1201/9780429352546-7)
6. Harwick, C. Cryptocurrency and the Problem of Intermediation. *Independ. Rev.* **2016**, *20*, 569–588.
7. CoinMarketCap. Bitcoin. Available online: https://coinmarketcap.com/currencies/bitcoin/ (accessed on 28 December 2021).
8. Antonopoulos, A.M.; Wood, G. *Mastering Ethereum: Building Smart Contracts and DApps*; O'Reilly Media, Inc.: Sevastopol, CA, USA, 2018.
9. Nizzoli, L.; Tardelli, S.; Avvenuti, M.; Cresci, S.; Tesconi, M.; Ferrara, E. Charting the Landscape of Online Cryptocurrency Manipulation. *IEEE Access* **2020**, *8*, 113230–113245. [CrossRef](http://dx.doi.org/10.1109/ACCESS.2020.3003370)
10. Tandon, C.; Revankar, S.; Palivela, H.; Parihar, S.S. How Can We Predict the Impact of the Social Media Messages on the Value of Cryptocurrency? Insights from Big Data Analytics. *Int. J. Inf. Manag. Data Insights* **2021**, *1*, 100035. [CrossRef](http://dx.doi.org/10.1016/j.jjimei.2021.100035)
11. Bitcoin Tweets Chart. Available online: https://bitinfocharts.com/comparison/bitcoin-tweets.html (accessed on 28 December 2021).
12. Internet Live Stats. Twitter Usage Statistics. Available online: https://www.internetlivestats.com/twitter-statistics/ (accessed on 28 December 2021).
13. Sayce, D. The Number of Tweets per Day in 2020. 2019. Available online: https://www.dsayce.com/social-media/tweets-day/ (accessed on 28 December 2021).
14. Rothman, T. Trading the Dream: Does Social Media Affect Investors Activity—The Story of Twitter, Telegram and Reddit. *Int. J. Financ. Res.* **2019**, *10*, 147–152. [CrossRef](http://dx.doi.org/10.5430/ijfr.v10n2p147)
15. Nasdaq Data Link. Bitcoin Number of Transactions. 2021. Available online: https://data.nasdaq.com (accessed on 28 December 2021).
16. Campbell, Stefan. Twitter Statistics 2022: How Many People Use Twitter? 2021. Available online: https://thesmallbusinessblog.net/twitter-statistics/ (accessed on 29 December 2021).
17. Ghani, N.A.; Hamid, S.; Targio Hashem, I.A.; Ahmed, E. Social Media Big Data Analytics: A Survey. *Comput. Hum. Behav.* **2019**, *101*, 417–428. [CrossRef](http://dx.doi.org/10.1016/j.chb.2018.08.039)
18. Bandi, A.; Hurtado, J.A. Big Data Streaming Architecture for Edge Computing Using Kafka and Rockset. In Proceedings of the 2021 5th International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 8–10 April 2021; pp. 323–329. [CrossRef](http://dx.doi.org/10.1109/ICCMC51019.2021.9418466)
19. Mohapatra, S.; Ahmed, N.; Alencar, P. KryptoOracle: A Real-Time Cryptocurrency Price Prediction Platform Using Twitter Sentiments. In Proceedings of the 2019 IEEE International Conference on Big Data (Big Data), Los Angeles, CA, USA, 9–12 December 2019; pp. 5544–5551. [CrossRef](http://dx.doi.org/10.1109/BigData47090.2019.9006554)
20. Bandi, A. Data Streaming Architecture for Visualizing Cryptocurrency Temporal Data. In *Computer Networks, Big Data and IoT*; Pandian, A., Fernando, X., Islam, S.M.S., Eds.; Springer: Singapore, 2021; Volume 66, pp. 651–661. [CrossRef](http://dx.doi.org/10.1007/978-981-16-0965-7_50)
21. Horvat, N.; Ivkovic, V.; Todorovic, N.; Ivančević, V.; Gajić, D.; Lukovic, I. Big Data Architecture for Cryptocurrency Real-time Data Processing. In Proceedings of the ICIST 2020 Proceedings, Information Society of Serbia—ISOS, Belgrade, Serbia, 8–11 March 2020; pp. 150–155.
22. Abraham, J.; Higdon, D.; Nelson, J.; Ibarra, J. Cryptocurrency Price Prediction Using Tweet Volumes and Sentiment Analysis. *SMU Data Sci. Rev.* **2018**, *1*, 1–21.
23. Park, H.W.; Lee, Y. How Are Twitter Activities Related to Top Cryptocurrencies' Performance? Evidence from Social Media Network and Sentiment Analysis. *Drustvena Istrazivanja* **2019**, *28*, 435–460. [CrossRef](http://dx.doi.org/10.5559/di.28.3.04)
24. Garcia, D.; Tessone, C.J.; Mavrodiev, P.; Perony, N. The Digital Traces of Bubbles: Feedback Cycles between Socio-Economic Signals in the Bitcoin Economy. *J. R. Soc. Interface* **2014**, *11*, 20140623. [CrossRef] [PubMed](http://dx.doi.org/10.1098/rsif.2014.0623)
25. Kjærland, F.; Meland, M.; Oust, A.; Øyen, V. How Can Bitcoin Price Fluctuations Be Explained? *Int. J. Econ. Financ. Issues* **2018**, *8*, 323–332.
26. Aharon, D.Y.; Demir, E.; Lau, C.K.M.; Zaremba, A. *Twitter-Based Uncertainty and Cryptocurrency Returns*; SSRN Scholarly Paper ID 3735435; Social Science Research Network: Rochester, NY, USA, 2020. [CrossRef](http://dx.doi.org/10.2139/ssrn.3735435)
27. Baek, H.; Oh, J.; Kim, C.Y.; Lee, K. A Model for Detecting Cryptocurrency Transactions with Discernible Purpose. In Proceedings of the 2019 Eleventh International Conference on Ubiquitous and Future Networks (ICUFN), Zagreb, Croatia, 2–5 July 2019; pp. 713–717. [CrossRef](http://dx.doi.org/10.1109/ICUFN.2019.8806126)
28. Aspembitova, A.T.; Feng, L.; Chew, L.Y. Behavioral Structure of Users in Cryptocurrency Market. *PLoS ONE* **2021**, *16*, e0242600. [CrossRef] [PubMed](http://dx.doi.org/10.1371/journal.pone.0242600)
29. Fang, J.; Chiu, D.K.W.; Ho, K.K.W. Exploring Cryptocurrency Sentiments with Clustering Text Mining on Social Media. In *Intelligent Analytics with Advanced Multi-Industry Applications*; Sun, Z., Ed.; IGI Global: Hershey, PA, USA, 2021; pp. 157–171. [CrossRef](http://dx.doi.org/10.4018/978-1-7998-4963-6.ch007)
30. Kreps, J. Questioning the Lambda Architecture. 2014. Available online: https://www.oreilly.com/radar/questioning-the-lambda-architecture/ (accessed on 28 December 2021).
31. Marz, N.; Warren, J. Lambda Architecture. In *Big Data: Principles and Best Practices of Scalable Real-Time Data Systems*; Manning Publications: Westhampton, NY, USA, 2015; p. 328.
32. Domínguez, J. De Lambda a Kappa: Evolución de las Arquitecturas Big Data. 2018.
[Available online: https://www.](https://www.paradigmadigital.com/techbiz/de-lambda-a-kappa-evolucion-de-las-arquitecturas-big-data/) [paradigmadigital.com/techbiz/de-lambda-a-kappa-evolucion-de-las-arquitecturas-big-data/ (accessed on 29 December 2021).](https://www.paradigmadigital.com/techbiz/de-lambda-a-kappa-evolucion-de-las-arquitecturas-big-data/) 33. Nkamla Penka, J.B.; Mahmoudi, S.; Debauche, O. A New Kappa Architecture for IoT Data Management in Smart Farming. *Procedia Comput. Sci.* **2021**, *191* [, 17–24. [CrossRef]](http://dx.doi.org/10.1016/j.procs.2021.07.006) 34. [ProjectPro. How Data Partitioning in Spark Helps Achieve More Parallelism? 2021. Available online: https://www.projectpro.](https://www.projectpro.io/article/how-data-partitioning-in-spark-helps-achieve-more-parallelism/297) [io/article/how-data-partitioning-in-spark-helps-achieve-more-parallelism/297 (accessed on 29 December 2021).](https://www.projectpro.io/article/how-data-partitioning-in-spark-helps-achieve-more-parallelism/297) 35. Sinaga, K.P.; Yang, M.S. Unsupervised K-Means Clustering Algorithm. *IEEE Access* **2020**, *8* [, 80716–80727. [CrossRef]](http://dx.doi.org/10.1109/ACCESS.2020.2988796) 36. Likas, A.; Vlassis, N.; Verbeek, J. The Global K-Means Clustering Algorithm. *Patt. Recognit.* **2003**, *36* [, 451–461. [CrossRef]](http://dx.doi.org/10.1016/S0031-3203(02)00060-2) 37. [Cryptocompare. Cryptocurrency API, Historical & Real-Time Market Data. Available online: https://min-api.cryptocompare.](https://min-api.cryptocompare.com) [com (accessed on 14 January 2022).](https://min-api.cryptocompare.com) 38. [Roesslein, J. Tweepy. Available online: https://www.tweepy.org/ (accessed on 4 January 2022).](https://www.tweepy.org/) 39. Kuilboer, J.P.; Stull, T. Text Analytics and Big Data in the Financial Domain. In Proceedings of the 2021 16th Iberian Conference on Information Systems and Technologies (CISTI), Chaves, Portugal, 23–26 June 2021; pp. 1–4. 40. [John Snow Labs. Spark NLP. Available online: https://nlp.johnsnowlabs.com/ (accessed on 4 January 2022).](https://nlp.johnsnowlabs.com/) 41. Lengyel, A.; Botta-Dukát, Z. Silhouette Width Using Generalized Mean—A Flexible Method for Assessing Clustering Efficiency. *Ecol. Evol.* **2019**, *9* [, 13231–13243. [CrossRef] [PubMed]](http://dx.doi.org/10.1002/ece3.5774) 42. Yuan, C.; Yang, H. Research on K-Value Selection Method of K-Means Clustering Algorithm. *J* **2019**, *2* [, 226–235. [CrossRef]](http://dx.doi.org/10.3390/j2020016) 43. Hmwe, T.T.; Thein, N.Y.T.; Cho, K.M. Improving Clustering Quality Using Silhouette Score. *J. Comput. Appl. Res.* **2020**, *1*, 58–62. 44. [Education, I.C. What Is Data Modeling? 2020. Available online: https://www.ibm.com/cloud/learn/data-modeling (accessed on](https://www.ibm.com/cloud/learn/data-modeling) 20 January 2022). 45. Zschörnig, T.; Wehlitz, R.; Franczyk, B. A Personal Analytics Platform for the Internet of Things—Implementing Kappa Architecture with Microservice-based Stream Processing. In Proceedings of the 19th International Conference on Enterprise Information Systems, Porto, Portugal, 26–29 April 2017; SCITEPRESS—Science and Technology Publications: Porto, Portugal, [2017; pp. 733–738. [CrossRef]](http://dx.doi.org/10.5220/0006355407330738) -----
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/a15050140?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/a15050140, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/1999-4893/15/5/140/pdf?version=1650595081" }
2022
[ "JournalArticle" ]
true
2022-04-22T00:00:00
[ { "paperId": "72f219915a6170f77eb9678886df614e921f3db4", "title": "How can we predict the impact of the social media messages on the value of cryptocurrency? Insights from big data analytics" }, { "paperId": "69fa9eee9d5fa15e6ee011feae2e9501fd7b7ed1", "title": "Behavioral structure of users in cryptocurrency market" }, { "paperId": "5edd62a9783e3d7e3dc91515fb8027179351ec7b", "title": "Twitter-Based Uncertainty and Cryptocurrency Returns" }, { "paperId": "8cc8f762a52936f0d6baddb4f1bd5c06f1062605", "title": "Social media big data analytics: A survey" }, { "paperId": "33ec8c704d7267b381fa8c684ddee27abb3d8c2d", "title": "How Are Twitter Activities Related to Top Cryptocurrencies' Performance? Evidence from Social Media Network and Sentiment Analysis" }, { "paperId": "4f76785d62eb0f6eb21ee71280f446e61a5c4aac", "title": "Research on K-Value Selection Method of K-Means Clustering Algorithm" }, { "paperId": "ada9aae9a4e8d36a32dd416891d865fc4c87ad8d", "title": "Trading the Dream: Does Social Media Affect Investors’ Activity - The Story of Twitter, Telegram and Reddit" }, { "paperId": "e8da6b1ceb20f99d61a341a2485ae2d89f1484ae", "title": "Does twitter predict Bitcoin?" }, { "paperId": "7fc407db0f05297872daad9e5b6996b2b88628f5", "title": "Silhouette width using generalized mean—A flexible method for assessing clustering efficiency" }, { "paperId": "59cb95d8e452baffaf810da8f354260250f5c63b", "title": "Trends in Crypto-Currencies and Blockchain Technologies: A Monetary Theory and Regulation Perspective" }, { "paperId": "576bf492806ffbe02432b3d94480bc9fe96484ab", "title": "Cryptocurrency and the Problem of Intermediation" }, { "paperId": "c01b99a5d26819c17154647ae94d4d83d180c861", "title": "Understanding Bitcoins: Facts and Questions" }, { "paperId": "7328d31f9b5078658718ada22e94088c5a094291", "title": "The digital traces of bubbles: feedback cycles between socio-economic signals in the Bitcoin economy" }, { "paperId": "9d5f9ed6c1afcb539029575713cc3cdfd9eb750c", "title": "A new Kappa Architecture for IoT Data Management in Smart Farming" }, { "paperId": "51d8a6a310dcced2241d0ce8a897bc109ef09115", "title": "Improving Clustering Quality Using Silhouette Score" }, { "paperId": "48dd94cb5c7c0f3134ea9af1ac8797354938fcd8", "title": "Lambda Architecture" }, { "paperId": "2a865d7bfcde586df5c142a5876c1e698726533a", "title": "Cryptocurrency Price Prediction Using Tweet Volumes and Sentiment Analysis" }, { "paperId": "5e5b9b39ceb7e184eb579e7cd7b603f0c99ed34e", "title": "How can Bitcoin Price Fluctuations be Explained" }, { "paperId": "4af179674c92e4cf57e6cf82f85d5e8e8fdba639", "title": "A Personal Analytics Platform for the Internet of Things - Implementing Kappa Architecture with Microservice-based Stream Processing" } ]
11,875
en
[ { "category": "Economics", "source": "s2-fos-model" }, { "category": "Business", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/016dbbb966db82909247b51559317d640dd302d8
[]
0.801555
The effect of gold, dollar and Composite Stock Price Index on cryptocurrency
016dbbb966db82909247b51559317d640dd302d8
International Journal of Research In Business and Social Science
[ { "authorId": "123916409", "name": "A. Rivai" } ]
{ "alternate_issns": null, "alternate_names": [ "Int J Res Bus Soc Sci" ], "alternate_urls": null, "id": "112d22ab-7307-420f-b121-15fbf6f4fcfc", "issn": "2147-4478", "name": "International Journal of Research In Business and Social Science", "type": "journal", "url": "https://www.ssbfnet.com/ojs/index.php/ijrbs/index" }
This paper aims to analyze cryptocurrency volatility by examining the effect of gold, the Dollar Index, and the Composite Stock Price Index (IHSG) as independent variables on Bitcoin and Ethereum as dependent variables. The cryptocurrency objects in this study are Bitcoin and Ethereum, which have the largest market capitalizations. The data in this study cover the period January 1, 2018, to December 31, 2021. The study uses GARCH analysis. The results indicate that Bitcoin's volatility is influenced by the past price of Bitcoin itself, gold, and the stock exchange index, while Ethereum's volatility is influenced by Ethereum's own past price and the stock exchange index. This shows that the cryptocurrency market is inefficient, as prices are affected by past prices.
INTERNATIONAL JOURNAL OF RESEARCH IN BUSINESS AND SOCIAL SCIENCE 12(3) (2023) 231-236

# **Research in Business & Social Science**

#### ***IJRBS VOL 12 NO 3 (2023) ISSN: 2147-4478***

[Available online at www.ssbfnet.com](http://www.ssbfnet.com/) Journal homepage: https://www.ssbfnet.com/ojs/index.php/ijrbs

## **The effect of gold, dollar and Composite Stock Price Index on cryptocurrency**

### *Aswin Rivai (a)*

*(a) Lecturer at Faculty of Economics, Universitas Pembangunan Nasional Veteran Jakarta, Jl. Fatmawati Raya No.1, Pondok Labu, Jakarta Selatan, Indonesia*

ARTICLE INFO

*Article history:* Received 09 January 2023; Received in revised form 16 April 2023; Accepted 24 April 2023

*Keywords:* Cryptocurrency, bitcoin, gold, stock price index. JEL Classification: E52, E31, J23

ABSTRACT

*This paper aims to analyze cryptocurrency volatility by examining the effect of gold, the Dollar Index, and the Composite Stock Price Index (IHSG) as independent variables on Bitcoin and Ethereum as dependent variables. The cryptocurrency objects in this study are Bitcoin and Ethereum, which have the largest market capitalizations. The data in this study cover the period January 1, 2018, to December 31, 2021. The study uses GARCH analysis. The results indicate that Bitcoin's volatility is influenced by the past price of Bitcoin itself, gold, and the stock exchange index, while Ethereum's volatility is influenced by Ethereum's own past price and the stock exchange index. This shows that the cryptocurrency market is inefficient, as prices are affected by past prices.*

© 2023 by the authors. Licensee SSBFNET, Istanbul, Turkey. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

### **Introduction**

Advances in technology have reached the financial sector. One of them is the emergence of digital currencies using cryptographic technology, often called cryptocurrencies. Cryptocurrency first appeared in 2008, introduced by an unknown person or group under the name Satoshi Nakamoto (2008). A cryptocurrency is a manifestation of technological development built on a series of cryptographic codes, which can be stored on computer devices (Robiyanto et al., 2019). A further advantage of cryptocurrency is that it can be transferred electronically, much like electronic mail, to make payments in a transaction (Yohandi et al., 2017). Cryptocurrency transactions are very difficult to fake or manipulate because of their strong security (Bhosale & Mavale, 2018). The crypto market has developed considerably, and its market capitalization is now very large, so it is believed that the crypto market can help investors find large returns. This research discusses the cryptocurrencies Bitcoin and Ethereum. So far, to the author's knowledge, studies analyzing the effect of gold and the stock exchange index on the Bitcoin cryptocurrency in Indonesia are still scanty.

Bitcoin is a type of cryptocurrency that is often used by people in developed countries. Even in Indonesia it has become an investment tool, although it cannot be used as a means of payment because it has not been recognized as a legal payment instrument there. According to Auso and Aulia (2018), Bitcoin has several advantages, the most important of which is its blockchain technology.
However, besides these advantages, there are several disadvantages, including that Bitcoin virtual money has no underlying asset and is not controlled by a responsible authority (in Indonesia, the Financial Services Authority/OJK), so it is not safe; moreover, it does not carry the clear name of its owner, so it is prone to being used as a means of crime. The value of Bitcoin rises and falls according to the laws of market demand and supply: when there are only a few Bitcoins in circulation to meet needs while there is a lot of demand, the price of Bitcoin will rise. Basically, Ethereum is the same as Bitcoin, but they differ in purpose and function. Bitcoin focuses on peer-to-peer electronic money transfers, whereas Ethereum can be used to run any application, including electronic money transfers in the form of Ether or other Ethereum tokens.

\* Corresponding author. ORCID ID: 0000-0001-5346-1052. © 2023 by the authors. Hosting by SSBFNET. Peer review under responsibility of Center for Strategic Studies in Business and Finance. [https://doi.org/10.20525/ijrbs.v12i3.2561](https://doi.org/10.20525/ijrbs.v12i3.2561)

This paper aims to analyze the volatility of cryptocurrencies by examining the effect of gold, the Dollar Index, and the Composite Stock Price Index (IHSG) as independent variables on Bitcoin and Ethereum as dependent variables. The paper is organized as follows: following the introduction, the second part is a literature review with theoretical and empirical studies that shed light on the linkage between theory and practice. The third part introduces the background information on the research and methodology. After the analysis and findings of the study, the author provides discussion and implications. Finally, the paper concludes with key points, recommendations, future research directions, and limitations.

### **Methodology**

Before making an investment, investors need to know the rate of return, or yield, and the level of risk. Volatility analysis helps investors recognize the level of risk; in addition, it is useful in price formation, portfolio formation, and risk management, and can therefore support investors in making decisions. When volatility is high, investors will try their best to sell their assets to minimize risk. At times of high volatility, prices experience very fast ups and downs, which provides the opportunity to obtain a high rate of return at high risk. Conversely, if volatility is low, the chance of quickly obtaining a given rate of return is small, so investment is usually carried out over a long period of time in order to obtain the desired rate of return. In financial terms this is usually called "high risk, high return".

Volatility is analyzed using the GARCH (Generalized Autoregressive Conditional Heteroskedasticity) framework, which is considered very suitable for this purpose. GARCH is commonly used in analyses of returns and volatility. The data in this study cover the period of January 1, 2018 to December 31, 2021 and are sourced from the Indonesia Stock Exchange (BEI), Antam (Indonesia's authorized gold seller), and coinmarketcap.com. The GARCH framework provides estimates that are sensitive to the assets to be measured, such as the cryptocurrencies in this study, especially Bitcoin and Ethereum.
In addition, this study uses the GARCH model because it has advantages over other models: the GARCH model does not treat heteroscedasticity as a problem but uses it to build the model, and it produces forecasts not only of Y but also of the variance. Research applying volatility analysis to assets is extensive; examples include analyses of the volatility of shares of companies going public. Other research was conducted by Hartati & Saluza (2017), examining volatility in the financial sector. Apart from the financial sector, research on volatility in agriculture, especially coffee, was carried out by Rahayu, Chang, & Anindita (2015). In the same year, Dyhrberg conducted volatility analysis research on Bitcoin, gold, and the dollar. In further research, Bhosale & Mavale (2018) examined the volatility of selected cryptocurrencies.

In this study we use two calculation models to investigate the relationship between Bitcoin and gold, the Dollar Index, and the Composite Stock Price Index (IHSG), and between Ethereum and the same explanatory variables, with mean equation (1) and variance equation (2) as follows.

**Bitcoin**

$$\Delta \ln price_{t}^{BTC} = \beta_0 + \beta_1 \ln price_{t-1} + \beta_2 Gold_{t-1} + \beta_3 DXY_{t-1} + \beta_4 IHSG_{t-1} + \varepsilon_t \tag{1}$$

$$\sigma_{t,BTC}^{2} = \exp\left(\gamma_0 + \gamma_1 Gold_{t-1} + \gamma_2 DXY_{t-1} + \gamma_3 IHSG_{t-1}\right) + \alpha\,\varepsilon_{t-1}^{2} + \beta\,\sigma_{t-1}^{2} \tag{2}$$

whereas,
BTC : Bitcoin price
$Gold_{t-1}$ : previous-day gold price
$DXY_{t-1}$ : previous-day Dollar Index
$IHSG_{t-1}$ : previous-day Composite Stock Price Index
$\varepsilon_t$, $\sigma_t^2$ : error term and conditional variance
$\Delta \ln price_{t}^{BTC}$ : log price change (return) of Bitcoin

**Ethereum**

$$\Delta \ln price_{t}^{ETH} = \beta_0 + \beta_1 \ln price_{t-1} + \beta_2 Gold_{t-1} + \beta_3 DXY_{t-1} + \beta_4 IHSG_{t-1} + \varepsilon_t$$

$$\sigma_{t,ETH}^{2} = \exp\left(\gamma_0 + \gamma_1 Gold_{t-1} + \gamma_2 DXY_{t-1} + \gamma_3 IHSG_{t-1}\right) + \alpha\,\varepsilon_{t-1}^{2} + \beta\,\sigma_{t-1}^{2}$$

whereas,
ETH : Ethereum price
$Gold_{t-1}$ : previous-day gold price
$DXY_{t-1}$ : previous-day Dollar Index
$IHSG_{t-1}$ : previous-day Composite Stock Price Index
$\varepsilon_t$, $\sigma_t^2$ : error term and conditional variance
$\Delta \ln price_{t}^{ETH}$ : log price change (return) of Ethereum
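As a rough illustration of how such a model can be estimated, the sketch below fits the mean equation with lagged exogenous regressors and a GARCH(1,1) variance using the Python `arch` package. It is an approximation under stated assumptions, not the estimation code used in this study: `arch` accepts exogenous regressors in the mean equation, but it does not support the exponential exogenous terms in the variance equation above, so the variance side here is a plain GARCH(1,1).

```python
# Sketch: GARCH(1,1) with lagged exogenous mean regressors (arch package).
import numpy as np
import pandas as pd
from arch import arch_model

def fit_garch(prices: pd.Series, exog: pd.DataFrame):
    """prices: daily BTC or ETH prices; exog: Gold, DXY and IHSG columns."""
    y = 100 * np.log(prices).diff()                      # delta-ln price, scaled
    x = pd.concat([np.log(prices).shift(1).rename("ln_price_l1"),
                   exog.shift(1)], axis=1)               # previous-day values
    data = pd.concat([y.rename("ret"), x], axis=1).dropna()
    model = arch_model(data["ret"], x=data.drop(columns="ret"),
                       mean="LS", vol="Garch", p=1, q=1)
    return model.fit(disp="off")

# Hypothetical usage:
# res = fit_garch(btc, pd.DataFrame({"gold": gold, "dxy": dxy, "ihsg": ihsg}))
# print(res.summary())
```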
### **Results and Discussion**

After carrying out the data stationarity test, the GARCH test was carried out on daily price data to investigate the volatility of Bitcoin and Ethereum with past values of the explanatory variables gold, the Dollar Index, and the Composite Stock Price Index (IHSG).

**Table 1:** GARCH with explanatory variables and mean equation in Bitcoin

| Variable | Coefficient | Std. Error | z-Statistic | Probability |
|---|---|---|---|---|
| C | 1.2E-05 | 2.35E-06 | 5.609933 | 0.0000 |
| Bitcoin (BTC) ln price t−1 | 0.13109 | 0.000120 | 945.4099 | 0.0000 |
| Gold (XAU) price t−1 | −7.04E-07 | 2.88E-07 | −2.440809 | 0.0145 |
| Dollar Index (DXY) price t−1 | −9.24E-06 | 9.27E-06 | −0.996187 | 0.192 |
| IHSG price t−1 | 8.70E-09 | 4.00E-08 | 0.217427 | 0.0327 |
| **Variance Equation** | | | | |
| C | 9.63E-11 | 5.64E-11 | 1.724935 | 0.0845 |
| RESID(−1)^2 | 0.588703 | 0.037855 | 12.90992 | 0.0000 |
| GARCH(−1) | 0.609231 | 0.016152 | 43.90923 | 0.0000 |

R-squared: 0.899289; Adjusted R-squared: 0.839213; S.E. of regression: 0.000635; Sum squared resid: 0.000287; Log likelihood: 4924.268; Mean dependent var: 0.000380; S.D. dependent var: 0.006113; Akaike info criterion: −13.61065; Schwarz criterion: −13.55322; Hannan-Quinn criterion: −13.8848; Durbin-Watson stat: 2.06542.

Based on Table 1, with an alpha of 5%, Bitcoin returns are affected by the previous price, namely ln Bitcoin price t−1, the Stock Exchange Index (IHSG) t−1, and the gold price t−1. However, the Dollar Index price t−1 does not affect Bitcoin returns. In addition, the data follow the GARCH model, as evidenced by the GARCH(−1) probability of less than 5%.

**Table 2:** GARCH with explanatory variables and mean equation in Ethereum

| Variable | Coefficient | Std. Error | z-Statistic | Probability |
|---|---|---|---|---|
| C | 7.58E-05 | 1.50E-05 | 4.805446 | 0.0000 |
| Ethereum (ETH) ln price t−1 | 0.176908 | 0.000430 | 434.5627 | 0.0000 |
| Gold (XAU) price t−1 | 1.63E-06 | 2.16E-06 | 0.793529 | 0.4875 |
| Dollar Index (DXY) price t−1 | −7.01E-05 | 5.60E-05 | −1.230231 | 0.3186 |
| IHSG price t−1 | −3.06E-07 | 2.32E-07 | −1.260582 | 0.04075 |
| **Variance Equation** | | | | |
| C | 3.41E-09 | 1.55E-09 | 2.066333 | 0.0588 |
| RESID(−1)^2 | 0.363923 | 0.025504 | 12.33471 | 0.0000 |
| GARCH(−1) | 0.745398 | 0.012867 | 63.68745 | 0.0000 |

R-squared: 0.884331; Adjusted R-squared: 0.883658; S.E. of regression: 0.003523; Sum squared resid: 0.01197; Log likelihood: 3709.119; Mean dependent var: 0.001276; S.D. dependent var: 0.019895; Akaike info criterion: −10.50005; Schwarz criterion: −10.74262; Hannan-Quinn criterion: −10.67788; Durbin-Watson stat: 1.470626.

Based on Table 2, with an alpha of 5%, Ethereum returns are affected by the previous closing price, namely ln Ethereum price t−1, and the Stock Exchange Index (IHSG) t−1. However, the Dollar Index price t−1 and the gold price t−1 do not affect Ethereum returns. In addition, the data follow the GARCH model, as evidenced by the GARCH(−1) probability of less than 5%.

**Table 3:** GARCH with variance equation in Bitcoin

| Variable | Coefficient | Std. Error | z-Statistic | Probability |
|---|---|---|---|---|
| C | 4.85E-05 | 8.54E-05 | 0.579068 | 0.4625 |
| Bitcoin (BTC) ln price t−1 | 0.217083 | 0.001680 | 69.69001 | 0.0000 |
| Gold (XAU) price t−1 | 9.17E-07 | 1.05E-05 | 0.087630 | 0.0402 |
| Dollar Index (DXY) price t−1 | 1.19E-05 | 0.000244 | 0.044875 | 0.8542 |
| IHSG price t−1 | −2.46E-07 | 1.94E-06 | −0.131885 | 0.03851 |
| **Variance Equation** | | | | |
| C | 3.54E-07 | 5.64E-11 | 1.724935 | 0.0101 |
| RESID(−1)^2 | 0.170000 | 0.037855 | 12.90992 | 0.1193 |
| GARCH(−1) | 0.700000 | 0.016152 | 43.90923 | 0.0000 |
| EXP(XAU price t−1) | 0.000000 | 1.18E-05 | 0.000000 | 1.0000 |
| EXP(DXY price t−1) | 0.000000 | 6.06E-05 | 0.000000 | 1.0000 |
| EXP(IHSG price t−1) | 0.000000 | 2.93E-06 | 0.000000 | 1.0000 |

R-squared: 0.950506; Adjusted R-squared: 0.950440; S.E. of regression: 0.000498; Sum squared resid: 0.000354; Log likelihood: 4301.481; Mean dependent var: 0.000380; S.D. dependent var: 0.004113; Akaike info criterion: −11.78614; Schwarz criterion: −11.50957; Hannan-Quinn criterion: −11.45657; Durbin-Watson stat: 2.177506.

Based on Table 3, with an alpha of 5%, Bitcoin volatility is found to be influenced by the past price of Bitcoin, namely ln price t−1, gold t−1, and the Stock Exchange Index (IHSG) t−1. Table 4 presents the GARCH model with the variance equation for Ethereum.
**Table 4:** GARCH with variance equation in Ethereum

| Variable | Coefficient | Std. Error | z-Statistic | Probability |
|---|---|---|---|---|
| C | 0.000382 | 0.000823 | 0.585905 | 0.4579 |
| Ethereum (ETH) ln price t−1 | 0.200261 | 0.015164 | 13.21949 | 0.0000 |
| Gold (XAU) price t−1 | 2.29E-05 | 8.94E-05 | 0.256650 | 0.6974 |
| Dollar Index (DXY) price t−1 | 0.000629 | 0.002135 | 0.341610 | 0.6326 |
| IHSG price t−1 | 9.28E-07 | 1.53E-05 | 0.063289 | 0.0359 |
| **Variance Equation** | | | | |
| C | 1.91E-05 | 1.02E-05 | 1.914929 | 0.055 |
| RESID(−1)^2 | 0.140000 | 0.073852 | 2.031077 | 0.0322 |
| GARCH(−1) | 0.500000 | 0.170765 | 3.513600 | 0.0004 |
| EXP(XAU price t−1) | 0.500000 | 1.13E-05 | 0.000000 | 1.0000 |
| EXP(DXY price t−1) | 0.000000 | 0.000130 | 0.000000 | 1.0000 |
| EXP(IHSG price t−1) | 0.000000 | 3.16E-06 | 0.000000 | 1.0000 |

R-squared: 0.891732; Adjusted R-squared: 0.881112; S.E. of regression: 0.003441; Sum squared resid: 0.014021; Log likelihood: 2789.579; Mean dependent var: 0.001176; S.D. dependent var: 0.012895; Akaike info criterion: −7.737780; Schwarz criterion: −7.571210; Hannan-Quinn criterion: −7.618214; Durbin-Watson stat: 1.538048.

Based on Table 4, with an alpha of 5%, Ethereum volatility is not affected by the other variables; it is influenced only by the price of Ethereum and the Stock Exchange Index (IHSG). Overall, the results of the analysis show that the price of Bitcoin is influenced by the past prices of Bitcoin, gold, and the Stock Exchange Index (IHSG). These results are proven by the probability values of the past prices of Bitcoin, gold, and the IHSG in the mean equation, which are smaller than 5%, and are likewise evidenced by the probability values of Bitcoin, gold, and the IHSG in the variance equation. Moreover, the GARCH(−1) probability shows that Bitcoin follows the GARCH pattern. With an alpha of 5%, the Ethereum price is found to be affected by the past price of Ethereum and the Stock Exchange Index only. These results are proven by the probability values of the past price of Ethereum and the Stock Exchange Index in the mean equation. The same results were found in Ethereum's volatility analysis: Ethereum's volatility is affected by the past price of Ethereum and the Stock Exchange Index alone, while the other variables have no effect, as evidenced by the probability value of Ethereum in the variance equation. Moreover, the GARCH(−1) probability shows that Ethereum follows a GARCH pattern.

The results above show that the analyses of the mean equation and the variance equation are consistent, which indicates that the model used is robust and the results can be trusted. The results also show that the cryptocurrency market is not efficient, because it is influenced by past prices and does not move randomly. This finding is in line with a study by the IMF in early 2022, which concluded that the fluctuations of cryptocurrencies, represented by Bitcoin and Ethereum, follow the trend of the New York Stock Exchange (NYSE).

### **Conclusions**

This study aims to analyze cryptocurrency volatility using gold, the Dollar Index, and the Composite Stock Price Index (IHSG) as explanatory variables, with mean equation (1) and variance equation (2). The results of the mean equation show that the price of Bitcoin is affected by the past prices of Bitcoin, gold, and the stock exchange index, whereas the other variables do not affect the price of Bitcoin. The Ethereum price is affected only by the past price of Ethereum and the stock exchange index, while the other variables have no effect.
Therefore, the cryptocurrency market is not efficient, as it can be analyzed from its past prices. In addition, the results of the variance equation show that Bitcoin volatility is affected only by the past price of Bitcoin, gold, and the stock exchange index and is not influenced by other variables. A similar result is obtained for Ethereum volatility, which is influenced by the past price of Ethereum and the stock exchange index and is not affected by any other variables. Thus, when investing in cryptocurrencies, especially Bitcoin and Ethereum, investors can base their analysis on previous prices. This research has several limitations, namely that it covers only Bitcoin and Ethereum, so future research can analyze other instruments besides these two cryptocurrencies.

##### **Acknowledgement**

The authors have read and agreed to the published version of the manuscript.

**Author Contributions**: Conceptualization, Writing, Analysis by the author.

**Funding**: This research was funded by the author himself.

**Informed Consent Statement**: Informed consent was obtained from all subjects involved in the study.

**Data Availability Statement**: The data presented in this study are available on request from the corresponding author. The data are not publicly available due to restrictions.

**Conflicts of Interest**: The author declares no conflict of interest.

### **References**

Aulia, M. R. (2018). Pros and Cons of Bitcoin: Analysis of the Influence of Bitcoin Development, Flat Money Performance and State Governance System. University of Lampung, 10(2), 1–15.

Auso, Asep Zaenal, Elsa Silvia Nur Aulia. (2018). Bitcoin Cryptocurrency Technology for Investment and Business Transactions According to Islamic Shari'a. Journal of Sociotechnology, 17(1), 74-92. Available at: http://journals.itb.ac.id/index.php/sostek/article/view/7365/3177

Bhosale, J., & Mavale, S. (2018). Volatility of select Crypto-currencies: A comparison of Bitcoin, Ethereum and Litecoin. Annual Research Journal of SCMS, Pune, 6(March), 132–141.

Connor, F. A. O., Lucey, B. M., Batten, J. A., & Baur, D. (2015). The Financial Economics of Gold – A Survey. International Review of Financial Analysis, July 2018. https://doi.org/10.1016/j.irfa.2015.07.005

Dannen, C. (2017). Introducing ethereum and solidity: Foundations of cryptocurrency and blockchain programming for beginners. In Introducing Ethereum and Solidity: Foundations of Cryptocurrency and Blockchain Programming for Beginners. Apress. https://doi.org/10.1007/978-1-4842-2535-6

Dyhrberg, A. H. (2015). Bitcoin, gold and dollars - A GARCH volatility. Finance Research Letters, 000, 1–8. https://doi.org/10.1016/j.frl.2015.10.008

Dynand, M. R., & Kartawinata, B. R. (2018). Comparative Analysis of Cryptocurrency in Forms of Bitcoin, Stock, and Gold as Alternative Investment Portfolio in 2014 – 2017 –51.

Eli, D. (2008). Cryptocurrencies. The New Palgrave: A Dictionary of Economics, March, 1–5. https://doi.org/10.1057/978-1-349-95121-5

Habashi, F. (2017). Gold - An Historical Introduction. December 2016. https://doi.org/10.1016/B978-0-444-63658-4.00001-3

Hartati, & Saluza, I. (2017). GARCH Application in Overcoming Volatility in Financial Data. Journal of Mathematics, 7(2), 107. https://doi.org/10.24843/jmat.2017.v07.i02.p87
Lydianita, H. (2011). Analysis of Factors Affecting Stock Price Volatility.

Nurmapika, Ryafini, Nurliza, Imelda. (2018). Analysis of Strategic Food Commodity Price Volatility in West Kalimantan Province (Case Study of the Pontianak Flamboyan Market). Journal of Social Economics of Agriculture, Volume 7, Number 1, pp. 41-53. Available at: http://jurnal.untan.ac.id/index.php/jsea/article/view/30751

Rahayu, M. F., Chang, W.-I., & Anindita, R. (2015). Volatility Analysis and Volatility Spillover Analysis of Indonesia's Coffee Price Using Arch/Garch, and Egarch Models. Journal of Agricultural Studies, 3(2), 37. https://doi.org/10.5296/jas.v3i2.7185

Robiyanto, & Pangestuti, I. R. D. (2018). Weak form market efficiency analysis in the cryptocurrency market. Journal of Economic and Business, 2(c), 124–128.

Robiyanto, R. (2018). Gold VS bonds: What is the safe haven for the Indonesian and Malaysian capital markets? Gadjah Mada International Journal of Business, 20(3), 277–302. https://doi.org/10.22146/gamaijb.27775

Robiyanto, R., Susanto, Y. A., & Ernayani, R. (2019). Examining the day-of-the-week-effect and the-month-of-the-year-effect in cryptocurrency market. Journal of Finance and Banking, 23(3), 361–375. https://doi.org/10.26905/jkdp.v23i3.3005

S. Dimas Okky, & Setiawan. (2012). Modeling of the Composite Stock Price Index (IHSG), Exchange Rates, and World Oil Prices with. ITS Journal of Science and Arts, 1(1), 1–6.

Satoshi Nakamoto (2008). Bitcoin: A Peer-to-Peer Electronic Cash System. www.bitcoin.org

Shen, Z. (2014). How the US Dollar Index Affects Gold Prices. September. https://webcache.googleusercontent.com/search?q=cache:kAFsdBGe1fwJ:https://www.askfinancials.com/pdf/knowledgepanel/Dollar%2520Index%2520The%2520Bull%2520market%2520no%2520one%2520wants%2520January%25202017.pdf+&cd=12&hl=en&ct=clnk&gl=id

Suharsono, A. (2012). Volatility Analysis of Go Public Companies' Shares with the ARCH-GARCH Method. ITS Journal of Science and Arts, 1(1), 259–264.

Wood, G. (2014). Ethereum: a secure decentralized generalized transaction ledger.

Yohandi, A., Trihastuti, N., & Hartono, D. (2017). Juridical implications of using bitcoin virtual currency as a means of payment in commercial transactions (comparative study between Indonesia and Singapore). Diponegoro Law Journal, 6, 1–19. https://doi.org/10.1017/S0269888907001014

***Publisher's Note:*** SSBFNET stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2023 by the authors. Licensee SSBFNET, Istanbul, Turkey. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/). [International Journal of Research in Business and Social Science (2147-4478)](http://ssbfnet.com/ojs/index.php/ijrbs) by SSBFNET is licensed under a Creative Commons Attribution 4.0 International License.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.20525/ijrbs.v12i3.2561?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.20525/ijrbs.v12i3.2561, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GOLD", "url": "https://www.ssbfnet.com/ojs/index.php/ijrbs/article/download/2561/1778" }
2023
[ "JournalArticle" ]
true
2023-05-06T00:00:00
[ { "paperId": "cc3af002b4fdcadebee0b3a3337990f97adca88f", "title": "Examining the day-of-the-week-effect and the-month-of-the-year-effect in cryptocurrency market" }, { "paperId": "cde20325c7b92ced15c61e91d0ded689c232187e", "title": "Gold VS Bond: What Is the Safe Haven for the Indonesian and Malaysian Capital Market?" }, { "paperId": "aaebea31e324077f926152083e39989f342088ae", "title": "Teknologi Cryptocurrency Bitcoin Dalam Transaksi Bisnis Menurut Syariat Islam" }, { "paperId": "268f2253c50dd1a897d2d5a767a8c1f65aab80fd", "title": "Bitcoin, gold and the dollar – A GARCH volatility analysis" }, { "paperId": "a3a8ce0b60bf0c1ecd11ccd14520ef4591a70718", "title": "The Financial Economics of Gold – A Survey" }, { "paperId": "0b07b4b356855e412bf9cfdf6276f2d49746b4e9", "title": "Volatility Analysis and Volatility Spillover Analysis of Indonesia's Coffee Price Using Arch/Garch, and Egarch Model" }, { "paperId": "a7ec2e290c7d6a91229bf978d79a45134752fd7a", "title": "Introducing Ethereum and Solidity" }, { "paperId": "cc1e0b70ae41d4938084a82498a7b9515a15d9b2", "title": "The New Palgrave Dictionary of Economics, 2nd edition" } ]
7,121
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/016fdb52d49c9b2fdea0d7203c936a6a3ed58de7
[ "Computer Science" ]
0.850366
Ontology-Based Integration of Streaming and Static Relational Data with Optique
016fdb52d49c9b2fdea0d7203c936a6a3ed58de7
SIGMOD Conference
[ { "authorId": "1697928", "name": "E. Kharlamov" }, { "authorId": "144105942", "name": "S. Brandt" }, { "authorId": "1402158435", "name": "Ernesto Jiménez-Ruiz" }, { "authorId": "1751420", "name": "Y. Kotidis" }, { "authorId": "1748185", "name": "S. Lamparter" }, { "authorId": "40020144", "name": "T. Mailis" }, { "authorId": "2028218", "name": "C. Neuenstadt" }, { "authorId": "3342377", "name": "Özgür L. Özçep" }, { "authorId": "3272670", "name": "Christoph Pinkel" }, { "authorId": "3361902", "name": "Christoforos Svingos" }, { "authorId": "1702171", "name": "D. Zheleznyakov" }, { "authorId": "145655431", "name": "Ian Horrocks" }, { "authorId": "1684197", "name": "Y. Ioannidis" }, { "authorId": "113364585", "name": "R. Möller" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
null
# City Research Online

## City, University of London Institutional Repository

##### Citation: Kharlamov, E., Brandt, S., Jimenez-Ruiz, E., Kotidis, Y., Lamparter, S., Mailis, T., Neuenstadt, C., Oezcep, O., Pinkel, C., Svingos, C., et al (2016). Ontology-Based Integration of Streaming and Static Relational Data with Optique. In: SIGMOD '16 Proceedings of the 2016 International Conference on Management of Data. (pp. 2109-2112). New York: ACM. ISBN 978-1-4503-3531-7 doi: 10.1145/2882903.2899385

##### This is the accepted version of the paper. This version of the publication may differ from the final published version.

Permanent repository link: https://openaccess.city.ac.uk/id/eprint/22947/

Link to published version: https://doi.org/10.1145/2882903.2899385

Copyright: City Research Online aims to make research outputs of City, University of London available to a wider audience. Copyright and Moral Rights remain with the author(s) and/or copyright holders. URLs from City Research Online may be freely distributed and linked to.

Reuse: Copies of full items can be used for personal research or study, educational, or not-for-profit purposes without prior permission or charge. Provided that the authors, title and full bibliographic details are credited, a hyperlink and/or URL is given for the original metadata page and the content is not changed in any way.

###### City Research Online: http://openaccess.city.ac.uk/ publications@city.ac.uk

### Ontology-Based Integration of Streaming and Static Relational Data with Optique

###### E. Kharlamov [1], S. Brandt [2], E. Jimenez-Ruiz [1], Y. Kotidis [6], S. Lamparter [2], T. Mailis [3], C. Neuenstadt [4], Ö. Özçep [4], C. Pinkel [5], C. Svingos [3], D. Zheleznyakov [1], I. Horrocks [1], Y. Ioannidis [3], R. Möller [4]

###### 1 Uni. of Oxford, 2 Siemens CT, 3 Uni. of Athens, 4 Uni. of Lübeck, 5 fluid Operations, 6 AUEB

###### ABSTRACT

Real-time processing of data coming from multiple heterogeneous data streams and static databases is a typical task in many industrial scenarios such as diagnostics of large machines. A complex diagnostic task may require a fleet of up to hundreds of queries over such data. Although many of these queries retrieve data of the same kind, like temperature measurements, they are different since they access structurally different data sources. We have investigated how Semantic Technologies can make such complex diagnostics simpler by providing a semantic abstraction layer that integrates heterogeneous data. We developed the system OPTIQUE to put our ideas in practice. In a nutshell, OPTIQUE allows users to express complex diagnostic tasks with just a few high-level semantic queries. Then, the system can automatically enrich these queries, translate them into a fleet with a large number of low-level data queries, and finally optimise and efficiently execute the fleet in a heavily distributed environment. We will demo the benefits of OPTIQUE on a real-world scenario of Siemens Energy. For this purpose we prepared anonymised streaming and static data relevant to 950 Siemens power generating turbines with more than 100,000 sensors and deployed OPTIQUE on multiple distributed environments with up to 128 nodes. By registering and monitoring continuous semantic high-level queries that combine streaming and static data, the demo attendees will be able to see how OPTIQUE makes diagnostics of turbines easy.
They will also see how OPTIQUE can handle more than a thousand concurrent complex diagnostic tasks that integrate heterogeneous data in real-time with a 10 TB/day throughput. Finally, they will see that creating a semantic layer, such as the one over the Siemens demo data, can be done in realistic time with the help of our interactive bootstrapping system.

###### 1. INTRODUCTION

Motivation. Real-time processing of streaming and static data is a typical task in many industrial scenarios such as diagnostics of large machines. This task is challenging since it often requires integration of data from multiple sources. For example, Siemens Energy runs service centres dedicated to diagnostics of thousands of power generating appliances across the globe. A typical task for such centres is to detect in real-time a failure of appliances caused by, e.g., an abnormal temperature and pressure increase. Such tasks require simultaneous processing of sequences of digitally encoded coherent signals produced and transmitted from thousands of gas and steam turbines, generators, and compressors installed in power plants, and of static data that includes the structure of equipment, the history of its exploitation and repairs, and even weather conditions. These data are scattered across multiple heterogeneous data streams with 30 GB/day throughput and static DBs with hundreds of TBs of data. Even for a single diagnostic task, e.g., that a turbine may fail, Siemens engineers have to analyse streams with temperature measurements from up to 2,000 sensors installed in different parts of the turbine, analyse historical data of the turbine's temperature, compute temperature patterns, compare them to patterns in other turbines, compare weather conditions, etc. This requires posing a fleet of hundreds of queries, the majority of which are semantically the same (they ask about temperature) but syntactically different (they are over different schemata). Formulating and executing so many queries, and then assembling the computed answers, is expensive: it takes up to 80% of overall diagnostic time [10].

###### Ontology-Based Integration Approach.

To tackle this issue in Siemens Energy we propose a data integration approach that is based on Semantic Technologies. In this paper we will refer to our approach as Ontology-Based Stream-Static Data Integration (OBSSDI). It follows the classical data integration paradigm, which requires creating a common 'global' schema that consolidates the 'local' schemata of the integrated data sources, and mappings that define how the local and global schemata are related [5]. In OBSSDI the global schema is an ontology: a formal conceptualisation of the domain of interest that consists of a vocabulary, i.e., names of classes, attributes and binary relations, and axioms over the terms from the vocabulary that, e.g., assign attributes to classes, define relationships between classes, compose classes, build class hierarchies, etc. The Siemens Energy ontology that we developed [10] contains hundreds of terms and axioms that encode generic specifications of appliances, characteristics of sensors, materials, processes, descriptions of diagnostic tasks, etc. OBSSDI mappings relate each ontological term to a set of queries over the underlying data.
For example, the generic attribute temperature-of-sensor from the Siemens Energy ontology is mapped to all the specific data and procedures that return temperatures of sensors in dozens of different turbines and in DBs storing historical data; thus, all particularities and varieties of how the temperature of a sensor can be measured, represented, and stored are hidden in these mappings. In OBSSDI the integrated data can be accessed by posing queries over the ontology, i.e., ontological queries. These queries are hybrid: they refer to both streaming and static data. Evaluation of an ontological query in OBSSDI has three stages: (i) in the enrichment stage the ontological query is automatically reformulated, with the help of axioms, into another ontological query in order to access as much relevant data as possible; (ii) in the unfolding stage the enriched ontological query is automatically translated, with the help of mappings, into possibly many queries over the data; (iii) in the execution stage the unfolded data queries are executed over the data.

The main benefit of OBSSDI is that the combination of ontologies and mappings makes it possible to 'hide' the technical details of how the data is produced, represented, and stored in the data sources, and to show only what this data is about. This allows formulating the Siemens Energy diagnostic task above using only one ontological query instead of the fleet of hundreds of data queries that Siemens IT specialists have to write today. Observe that this fleet of queries does not disappear: the enrichment and unfolding stages of the evaluation by an OBSSDI system will turn the high-level ontological query into the fleet of low-level data queries automatically. Another important benefit of OBSSDI is the modularity and compositionality of its assets: every mapping relates only one ontological term to the data; thus, the semantics of the ontology is modularised for each separate term, which allows constructing its assets independently from each other and on demand. The same ontological terms can then be used in different queries; thus, by defining mappings for only a few ontological terms one will be able to compose many queries using these mapped terms.

OBSSDI extends existing semantic data integration solutions, which assume that data is either in (static) relational DBs, e.g., [3, 4], or streaming, e.g., [2, 6], but not of both kinds. OBSSDI also extends existing solutions for unified processing of streaming and static semantic data, e.g., [13], since they assume that the data is natively in the W3C standardised RDF semantic data format, while we assume the data to be relational and mapped to the semantic format.

###### Research Challenges.

The benefits of OBSSDI come at a price. The main practical challenges for OBSSDI that are not addressed by existing Semantic Technologies include:

[C1] development of tools for semi-automatic support to construct quality ontologies and mappings over relational and streaming data;

[C2] development of a query language over ontologies that combines streaming and static data and allows for efficient enrichment and unfolding that preserves the semantics of ontological queries;

[C3] development of a backend that can optimise the large numbers of queries automatically generated via enrichment and unfolding and efficiently execute them over distributed streaming and static data.
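To get a feel for where these automatically generated query fleets come from, the following minimal Python sketch mimics the enrichment and unfolding stages over a toy ontology. All class names, schemas, and mappings below are invented for illustration; OPTIQUE's actual machinery is far richer.

```python
# Toy sketch of OBSSDI-style query evaluation (illustrative only).
# Ontology axioms: subclass relationships, e.g., GasTurbine is a Turbine.
SUBCLASS_AXIOMS = {
    "Turbine": ["GasTurbine", "SteamTurbine"],
}

# Mappings: each ontological term relates to SQL queries over the sources.
MAPPINGS = {
    "Turbine":      ["SELECT id FROM plant_a.turbines"],
    "GasTurbine":   ["SELECT tid AS id FROM plant_b.gas_units"],
    "SteamTurbine": ["SELECT unit_id AS id FROM plant_c.steam_units"],
}

def enrich(term):
    """Enrichment: extend a query over `term` to cover all its subclasses."""
    terms, frontier = [term], [term]
    while frontier:
        frontier = [s for t in frontier for s in SUBCLASS_AXIOMS.get(t, [])]
        terms += frontier
    return terms

def unfold(terms):
    """Unfolding: replace each ontological term by its mapped data queries."""
    return [sql for t in terms for sql in MAPPINGS.get(t, [])]

# One high-level query over Turbine unfolds into a union of data queries.
print(" UNION ".join(unfold(enrich("Turbine"))))
```

Even in this toy setting a single term expands into a union over three sources; with realistic ontologies and mappings, the unfolded queries quickly accumulate the redundant joins and unions that challenge C3 targets.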
Construction of ontologies and mappings in OBSSDI is done independently of and prior to query formulation and processing. Nevertheless, addressing C1 is practically important since such tools can dramatically speed up deployment and maintenance, e.g., adjustment to new query requirements, of OBSSDI systems. Addressing C2 is crucial since, to the best of our knowledge, no devoted query language for hybrid semantic queries has the required properties. Addressing C3 is vital to ensure that OBSSDI queries are executable in reasonable time. Note that C3 is not trivial: even in a setting where the data is only static and not distributed, query execution without devoted optimisation techniques performs poorly [3], since the queries that are automatically computed after enrichment and unfolding can be very inefficient, e.g., they contain many redundant joins and unions.

###### Our Contributions.

Besides proposing OBSSDI, we addressed the challenges C1-C3 and implemented our solutions in the OPTIQUE system. For C2, we introduced the STARQL [12] query language, which allows posing semantic queries over both streaming and static data. STARQL queries are expressed over OWL 2 QL ontologies and OBSSDI mappings that relate each ontological term to a set of queries over the underlying data in the global-as-view fashion [5]. STARQL queries admit polynomial-time enrichment and can be unfolded into SQL(+) queries, i.e., SQL queries enhanced with the essential operators for stream handling. For C3, we introduced EXASTREAM [11, 14], a highly optimised engine capable of handling complex hybrid queries in real time. EXASTREAM supports parallel query execution, and its Infrastructure-as-a-Service architecture enables us to elastically scale the system to support user demand in complex diagnostic scenarios. EXASTREAM incorporates several query optimisations, such as adaptive main-memory indexing of stream measurements and native User Defined Functions that permit a user to express complex operators in a concise way. Finally, for C1, we developed BOOTOX [9], a system for bootstrapping, i.e., extracting, ontologies and mappings from static and streaming relational schemata and data that has proved its efficiency in creating OBSSDI assets. See Section 2 for more details on the OPTIQUE solutions for the C1-C3 challenges.

###### Demo Overview.

During the demonstration the attendees will be able to see how OPTIQUE makes diagnostics for Siemens easy: they will set and monitor continuous diagnostic tasks as STARQL queries, see how EXASTREAM can handle more than a thousand complex diagnostic tasks, and deploy OPTIQUE over Siemens data using BOOTOX. See Section 3 for more details on the demo scenarios.

###### 2. OPTIQUE SYSTEM

OPTIQUE is an integrated system that consists of multiple components to support OBSSDI end-to-end. For IT specialists, OPTIQUE offers support for the whole lifecycle of ontologies and mappings: semi-automatic bootstrapping from relational data sources, importing of existing ontologies, semi-automatic quality verification and optimisation, cataloguing, and manual definition and editing of mappings. For end-users, OPTIQUE offers tools for query formulation support, query cataloguing, and answer monitoring, as well as integration with GIS systems. Query evaluation is done via OPTIQUE's query enrichment, unfolding, and execution backends, which allow executing up to thousands of complex ontological queries in highly distributed environments. In this section we give some details of three OPTIQUE components that address the C1-C3 challenges above.

###### Deployment Support.
Our BOOTOX component allows extracting W3C standardised OWL 2 ontologies and R2RML mappings from relational streaming and static data. Consider for example a class Turbine; a mapping for it is an expression Turbine(f(x⃗)) ← ∃y⃗ SQL(x⃗, y⃗), which can be seen as a view definition, where SQL(x⃗, y⃗) is an SQL query, x⃗ are its output variables, y⃗ are its variables that are projected out (existentially quantified), and f is a function that converts the tuples returned by SQL into identifiers of objects populating the class Turbine. Intuitively, the mapping bootstrapping of BOOTOX boils down to the discovery of 'meaningful' queries ∃y⃗ SQL(x⃗, y⃗) over the input data sources that correspond either to a given element of the ontological vocabulary, e.g., the class Turbine or the attribute temperature-of-sensor, or to a new ontological term. BOOTOX employs several novel schema- and data-driven query discovery techniques. For example, BOOTOX can map two tables like Turbine and Country into classes by projecting them on their primary keys, and the attribute locatedIn of Turbine into an object property between these two classes if there is either an explicit or an implicit foreign key between Turbine and Country. For more complex mappings, BOOTOX requires users to provide a set of examples of entities from the class, e.g., Turbine, where each example is a set of keywords, e.g., {albatros, gas, 2008}. The system then turns these keywords into SQL queries by exploiting graph-based techniques similar to [8] for keyword-based query answering over DBs. Moreover, BOOTOX also allows incorporating third-party OWL 2 ontologies into an existing OPTIQUE deployment using ontology alignment techniques. The ontological terms bootstrapped with BOOTOX are then used to formulate STARQL ontological queries, and the bootstrapped mappings to translate these queries into data queries. We shall now discuss STARQL queries and their translation.

###### Diagnostic Queries.

In order to express diagnostic tasks we developed the query language STARQL [12], which allows performing complex semantic queries blending streaming with static data. The syntax of STARQL extends the so-called basic graph patterns of the W3C standardised SPARQL query language for RDF databases. STARQL queries can express basic graph patterns and the typical mathematical, statistical, and event pattern features needed in real-time diagnostic scenarios; moreover, STARQL queries can be nested, thus allowing the result of one query to be employed as input when constructing another query. STARQL has a formal semantics that combines open- and closed-world reasoning and extends the snapshot semantics for window operators [1] with a sequencing semantics that can handle integrity constraints such as functionality assertions.

```
CONSTRUCT GRAPH NOW { ?c2 rdf:type :MonInc }
FROM STREAM S_Msmt [NOW-"PT10S"^^xsd:duration, NOW]->"PT1S"^^xsd:duration,
     STATIC DATA <http://www.optique-project.eu/siemens/ABoxstatic>,
     ONTOLOGY <http://www.optique-project.eu/siemens/TBox>
USING PULSE WITH START = "00:10:00CET", FREQUENCY = "1S"
WHERE {?c1 a sie:Assembly. ?c2 a sie:Sensor. ?c1 sie:inAssembly ?c2.}
SEQUENCE BY StdSeq AS seq
HAVING MONOTONIC.HAVING(?c2, sie:hasValue)

CREATE AGGREGATE MONOTONIC:HAVING ($var, $attr) AS
HAVING EXISTS ?k IN seq:
    GRAPH ?k { $var sie:showsFailure }
    AND FORALL ?i < ?j IN seq, ?x, ?y:
        IF ( ?i, ?j < ?k AND GRAPH ?i {$var $attr ?x} AND GRAPH ?j {$var $attr ?y})
        THEN ?x <= ?y
```

**Figure 1: An example diagnostic task in STARQL, where the prefix sie stands for the URI of the Siemens ontology**
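The window-plus-sequence condition at the heart of Figure 1 can be mimicked in a few lines of Python. The sketch below is only meant to convey the semantics of "a failure state preceded by monotonically non-decreasing values within a 10-second window"; it is not how OPTIQUE evaluates STARQL queries, and the data tuples are fabricated.

```python
# Toy rendering of the MONOTONIC.HAVING condition from Figure 1.
from collections import deque

WINDOW_SECONDS = 10

def monotonic_failure(window):
    """window: time-ordered list of (timestamp, value, shows_failure) states."""
    for k, (_, _, failure) in enumerate(window):
        if failure:
            before = [v for (_, v, _) in window[:k]]
            # FORALL i < j < k: value(i) <= value(j), i.e., non-decreasing.
            if all(x <= y for x, y in zip(before, before[1:])):
                return True
    return False

def detect(stream):
    """stream: iterable of (timestamp, value, shows_failure) sensor readings."""
    window = deque()
    for reading in stream:
        window.append(reading)
        while window[0][0] < reading[0] - WINDOW_SECONDS:  # slide the window
            window.popleft()
        if monotonic_failure(list(window)):
            yield reading[0]  # time at which the pattern matched

# Temperature rises monotonically, then the sensor reports a failure.
readings = [(0, 20.0, False), (3, 21.5, False), (6, 23.0, False), (9, 24.1, True)]
print(list(detect(readings)))  # -> [9]
```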
Due to space limits we cannot present STARQL in detail. Instead, we illustrate its main features on the following example diagnostic task: detect a real-time failure of the turbine caused by a temperature increase within 10 seconds. This task can be expressed using STARQL over the Siemens ontology [10] as in Figure 1, and it requires combining streaming and static data. An output stream S_out is defined by the following language constructs. The CONSTRUCT clause specifies the format of the output stream, here instantiated by RDF triples asserting that there was a monotonic increase. The FROM clause specifies the resources on which the query is evaluated: the ONTOLOGY, STATIC DATA, and input STREAM(s), for which a window operator is specified with a window range (here 10 seconds) and a slide (here 1 second). The PULSE declaration specifies the output frequency. In the WHERE clause, bindings for sensors (attached to some turbine's assembly) are chosen. For every binding, the relevant condition of the diagnostic task is tested on the window contents. Here this condition is abbreviated by MONOTONIC.HAVING(?c2, sie:hasValue) using a macro that is defined at the bottom of Figure 1 in an AGGREGATE declaration. In words, the condition asks whether there is some state ?k in the window such that the sensor shows a failure message at ?k and such that for all states before ?k the attribute value ?attr (in the example instantiated by sie:hasValue) is monotonically increasing.

STARQL has favourable computational properties [12]: despite its expressivity, answering STARQL queries is efficient since they can be efficiently enriched and then unfolded into efficient relational stream queries. STARQL query enrichment is polynomial-time in the size of the input ontology if the ontology is in the OWL 2 QL ontology language and the queries are essentially conjunctive with value comparisons and aggregate functions. STARQL unfolding is linear-time in the size of both the mappings and the query, and enriched STARQL queries can be unfolded into relational stream queries. We developed a devoted STARQL2SQL(+) translator that unfolds STARQL queries into SQL(+) queries, i.e., SQL queries enhanced with the essential operators for stream handling.

###### Streaming and Static Relational Data Processing.

The relational queries produced by the STARQL2SQL(+) translation are handled by EXASTREAM, OPTIQUE's high-throughput distributed Data Stream Management System (DSMS). The EXASTREAM DSMS is embedded in EXAREME, a system for elastic large-scale dataflow processing on the cloud [11, 14] that has been publicly available as an open-source project under the MIT License. In the following, we present some key aspects of EXASTREAM. EXASTREAM is built as a streaming extension of the SQLite DBMS, taking advantage of existing database management technologies and optimisations. It provides a declarative language, namely SQL(+), for querying data streams and relations that conforms to the CQL semantics [1]. In contrast to other DSMSs, the user does not need to consider low-level details of the execution of a query. Instead, the system's query planner is responsible for choosing an optimal plan depending on the query, the available stream/static data sources, and the execution environment. EXASTREAM's optimizer makes it possible to process SQL(+) queries that blend streaming with static data.
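As a toy illustration of what such stream-static blending means operationally, the following Python generator joins a live stream of measurements with a static relation. The table and field names are invented for this sketch; EXASTREAM does this declaratively in SQL(+), with the planner choosing the execution strategy.

```python
# Minimal sketch of blending a measurement stream with static metadata.
# Static relation: sensor metadata that remains invariant in time.
SENSOR_META = {
    "s1": {"turbine": "T-07", "unit": "Celsius"},
    "s2": {"turbine": "T-42", "unit": "Celsius"},
}

def enriched(stream):
    """Join streaming tuples (sensor_id, timestamp, value) with the static
    relation, yielding the blended tuples a diagnostic query would see."""
    for sensor_id, ts, value in stream:
        meta = SENSOR_META.get(sensor_id)
        if meta is not None:  # inner join: drop readings without metadata
            yield {"turbine": meta["turbine"], "sensor": sensor_id,
                   "time": ts, "value": value, "unit": meta["unit"]}

measurements = [("s1", 0, 512.3), ("s2", 1, 498.7)]
for row in enriched(measurements):
    print(row)
```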
This blending has proved most useful in the Siemens use case, since it allows combining streaming attributes (such as temperature measurements of a turbine) with metadata that remains invariant in time (such as the model or structure of a turbine), as well as with archived stream data (such as past sensor readings, temperature measurements, etc.). Static relational tables may be stored in our system, or they may be federated from external data sources. Moreover, EXASTREAM allows defining database schemata on top of streaming and static data; this gives a wide range of opportunities for applying Semantic Web technologies and optimisations, e.g., bootstrapping techniques, that rely on these features.

EXASTREAM supports parallelism by distributing processing across different nodes in a distributed environment. Its architecture is shown in Figure 2. Queries are registered through the Asynchronous Gateway Server. Each registered query passes through the EXAREME parser and is then fed to the Scheduler module. The Scheduler places stream and relational operators on worker nodes based on the nodes' load. These operators are executed by a Stream Engine instance running on each node.

**Figure 2: Distributed Stream Engine Architecture**

The EXASTREAM system natively supports User Defined Functions (UDFs) with arbitrary user code. The engine blends the execution of UDFs together with relational operators using JIT tracing compilation techniques. This greatly speeds up the execution as it reduces context switches and, most importantly, only the relevant execution traces are used, allowing the engine to perform optimizations at runtime that are not possible when the query is pre-compiled. UDFs allow expressing very complex dataflows using simple primitives. For OPTIQUE we used UDFs to implement communication with external sources, window partitioning on data streams, and data mining algorithms such as the Locality-Sensitive Hashing technique [7] for computing the correlation between values of multiple streams.

Whenever SQL abstractions are not sufficient (or efficient) for complex stream processing scenarios, we use standard SQL to combine data and process them with UDFs. Two main operators, implemented as SQLite UDFs, that incorporate the algorithmic logic for transforming SQLite into a DSMS are timeSlidingWindow and wCache:

• timeSlidingWindow groups tuples that belong to the same time window and associates them with a unique window id;

• wCache acts as an index for efficiently answering equality constraints on the time column when processing infinite streams. The time column may be the window identifier produced by the timeSlidingWindow operator; wCache will then produce results to multiple queries accessing different streams.

These UDFs are transparent to OPTIQUE's users and are intended for performing the STARQL2SQL(+) translation.
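To make the first of these operators concrete, here is a toy Python analogue of window-id assignment for a 10-second range and 1-second slide (mirroring the window in Figure 1). It is a conceptual sketch only; in EXASTREAM the operator is a native SQLite UDF, and its exact window-id scheme may differ.

```python
# Toy analogue of timeSlidingWindow: map each tuple to the ids of the
# sliding windows it falls into (range 10 s, slide 1 s).
RANGE, SLIDE = 10, 1

def window_ids(timestamp):
    """Ids of all windows [wid * SLIDE, wid * SLIDE + RANGE) covering `timestamp`."""
    last = int(timestamp // SLIDE)
    first = max(0, int((timestamp - RANGE) // SLIDE) + 1)
    return list(range(first, last + 1))

def time_sliding_window(stream):
    """(timestamp, value) tuples -> (window_id, timestamp, value) tuples."""
    for ts, value in stream:
        for wid in window_ids(ts):
            yield (wid, ts, value)

print(window_ids(9.5))  # a reading at t = 9.5 s belongs to windows 0..9
```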
In order to enable efficient processing of data streams of very high velocity we have implemented a number of optimisations in the stream processing engine. An optimisation that will be presented in the demo is adaptive indexing. With this technique EXASTREAM collects statistics during query execution and adaptively decides to build main-memory indexes on batches of cached stream tuples, in order to expedite their processing during a complex operation (as in a join).

###### 3. DEMONSTRATION SCENARIOS

The benefits of OPTIQUE will be demonstrated on the real-world scenario from Siemens Energy. In particular, we will show that:

• formulating diagnostic tasks with OPTIQUE is practical: Siemens diagnostic queries in OPTIQUE are concise and conceptually easy, while fleets of Siemens data queries are large and hard to comprehend;

• running diagnostic tasks in OPTIQUE is practical: OPTIQUE allows processing in real time up to 1,024 complex Siemens diagnostic tasks with a throughput of up to 10,000,000 tuples/sec by executing the tasks in parallel on a highly distributed environment with up to 128 nodes;

• creating OPTIQUE ontologies and mappings is practical: OPTIQUE allows creating the ontologies and mappings necessary for system deployment over Siemens streaming and static data in a reasonable time.

For demonstration purposes we selected 20 diagnostic tasks typical for Siemens Energy service centres and expressed these tasks in STARQL. An example diagnostic task is to calculate the Pearson correlation coefficient between turbine stream data. Then, we prepared a demo data set that contains streaming and static data produced by 950 gas and steam turbines during the years 2002–2011. This data is anonymised in a way that preserves the patterns needed for the demo diagnostic tasks. During the demo we will 'play' the streaming data and thus emulate real-time streams. Then, we distributed the demo data over several installations with different numbers of nodes (VMs) ranging from 1 to 128, where each node has 2 processors and 4 GB of main memory. To demonstrate diagnostics results we prepared a devoted monitoring dashboard for each diagnostic task in the catalog. Dashboards show diagnostics results in real time, as well as statistics on streaming answers, relevant turbines, and other information that is typically required by Siemens Energy service engineers. Finally, we deployed OPTIQUE over the Siemens data by bootstrapping ontologies and mappings and then manually post-processing and extending them so that they reach the required quality and contain the necessary terms and mappings to cover the 20 Siemens diagnostic tasks.

During the demo OPTIQUE will be available in three scenarios:

[S1] Diagnostics with our deployment: the attendees will be able to query our preconfigured Siemens deployment using diagnostic tasks from the Siemens catalog and using their own STARQL queries, i.e., they will be able to create diagnostic tasks as parametrised continuous queries and register concrete instances of these tasks over specific data streams.

[S2] Performance showcase of our deployment: the attendees will be able to run various tests over our deployment using one of 128 preconfigured Siemens distributed environments and one of 10 test sets of queries. While running the tests they will monitor the throughput and progress of parallel query execution.

[S3] Diagnostics with user's deployment: the attendees will be able to deploy OPTIQUE over the Siemens data by bootstrapping ontologies and mappings, saving them, and observing and possibly improving them in devoted editors. Then, the attendees will query their deployment with diagnostic tasks from the Siemens catalog or with their own STARQL queries.

In Figure 3 we present some OPTIQUE screenshots of the deployment module BOOTOX and the monitoring dashboards.

**Figure 3: OPTIQUE screenshots**

###### 4. REFERENCES

[1] A. Arasu, S. Babu, and J. Widom. The CQL continuous query language: semantic foundations and query execution. In: VLDBJ 15.2 (2006).
[2] J. Calbimonte, Ó. Corcho, and A. J. G. Gray. Enabling Ontology-Based Access to Streaming Data Sources. In: ISWC. 2010.
[3] D. Calvanese et al. Ontop: Answering SPARQL Queries over Relational Databases. In: Semantic Web Journal (2015).
[4] C. Civili et al. MASTRO STUDIO: Managing Ontology-Based Data Access applications. In: PVLDB 6.12 (2013).
[5] A. Doan, A. Y. Halevy, and Z. G. Ives. Principles of Data Integration. Morgan Kaufmann, 2012.
[6] L. Fischer, T. Scharrenbach, and A. Bernstein. Scalable Linked Data Stream Processing via Network-Aware Workload Scheduling. In: SSWS@ISWC. 2013.
[7] N. Giatrakos et al. In-network approximate computation of outliers with quality guarantees. In: Information Systems 38.8 (2013).
[8] V. Hristidis and Y. Papakonstantinou. Discover: Keyword Search in Relational Databases. In: VLDB. 2002.
[9] E. Jiménez-Ruiz et al. BootOX: Practical Mapping of RDBs to OWL 2. In: ISWC. 2015.
[10] E. Kharlamov et al. How Semantic Technologies Can Enhance Data Access at Siemens Energy. In: ISWC. 2014.
[11] H. Kllapi et al. Elastic Processing of Analytical Query Workloads on IaaS Clouds. In: arXiv preprint arXiv:1501.01070 (2015).
[12] Ö. Özçep, R. Möller, and C. Neuenstadt. A Stream-Temporal Query Language for Ontology Based Data Access. In: KI. Vol. 8736. 2014.
[13] D. L. Phuoc et al. A Native and Adaptive Approach for Unified Processing of Linked Streams and Linked Data. In: ISWC. 2011.
[14] M. M. Tsangaris et al. Dataflow Processing and Optimization on Grid and Cloud Infrastructures. In: IEEE Data Eng. Bull. 32.1 (2009).
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1145/2882903.2899385?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1145/2882903.2899385, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://openaccess.city.ac.uk/id/eprint/22947/1/main-sigmod-16-siemens-demo.pdf" }
2016
[ "JournalArticle", "Book", "Conference" ]
true
2016-06-26T00:00:00
[]
7,316
en
[ { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/017104798b0269d56a68480a8d835918d4f5a8b2
[]
0.863168
Anomaly Detection in IIoT Transactions using Machine Learning: A Lightweight Blockchain-based Approach
017104798b0269d56a68480a8d835918d4f5a8b2
Engineering, Technology & Applied Science Research
[ { "authorId": "2304475697", "name": "Mayar Ibrahim Hasan Okfie" }, { "authorId": "2304512998", "name": "Shailendra Mishra" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
The integration of secure message authentication systems within the Industrial Internet of Things (IIoT) is paramount for safeguarding sensitive transactions. This paper introduces a Lightweight Blockchain-based Message Authentication System, utilizing k-means clustering and isolation forest machine learning techniques. With a focus on the Bitcoin Transaction Network (BTN) as a reference, this study aims to identify anomalies in IIoT transactions and achieve a high level of accuracy. The feature selection coupled with isolation forest achieved a remarkable accuracy of 92.90%. However, the trade-off between precision and recall highlights the ongoing challenge of minimizing false positives while capturing a broad spectrum of potential threats. The system successfully detected 429,713 anomalies, paving the way for deeper exploration into the characteristics of IIoT security threats. The study concludes with a discussion on the limitations and future directions, emphasizing the need for continuous refinement and adaptation to the dynamic landscape of IIoT transactions. The findings contribute to advancing the understanding of securing IIoT environments and provide a foundation for future research in enhancing anomaly detection mechanisms.
# Anomaly Detection in IIoT Transactions using Machine Learning: A Lightweight Blockchain-based Approach

## **Mayar Ibrahim Hasan Okfie**

Department of Information Technology, College of Computer and Information Sciences, Majmaah University, Saudi Arabia
441103734@s.mu.edu.sa (corresponding author)

## **Shailendra Mishra**

Department of Information Technology, College of Computer and Information Sciences, Majmaah University, Saudi Arabia
s.mishra@mu.edu.sa

*Received: 29 March 2024 | Revised: 16 April 2024 | Accepted: 25 April 2024*

*Licensed under a CC-BY 4.0 license | Copyright (c) by the authors | DOI: https://doi.org/10.48084/etasr.7384*

**ABSTRACT**

**The integration of secure message authentication systems within the Industrial Internet of Things (IIoT) is paramount for safeguarding sensitive transactions. This paper introduces a Lightweight Blockchain-based Message Authentication System, utilizing k-means clustering and isolation forest machine learning techniques. With a focus on the Bitcoin Transaction Network (BTN) as a reference, this study aims to identify anomalies in IIoT transactions and achieve a high level of accuracy. The feature selection coupled with isolation forest achieved a remarkable accuracy of 92.90%. However, the trade-off between precision and recall highlights the ongoing challenge of minimizing false positives while capturing a broad spectrum of potential threats. The system successfully detected 429,713 anomalies, paving the way for deeper exploration into the characteristics of IIoT security threats. The study concludes with a discussion on the limitations and future directions, emphasizing the need for continuous refinement and adaptation to the dynamic landscape of IIoT transactions. The findings contribute to advancing the understanding of securing IIoT environments and provide a foundation for future research in enhancing anomaly detection mechanisms.**

***Keywords: cyber security; machine learning; deep learning; blockchain; lightweight deep learning***

I. INTRODUCTION

The advent of the Industrial Internet of Things (IIoT) has brought about unprecedented advancements in industrial processes, facilitating seamless communication and data exchange among interconnected devices. However, with the increasing complexity and scale of IIoT ecosystems, ensuring the security and integrity of communication channels has become a paramount concern. This study attempts to address this challenge by proposing and exploring a lightweight blockchain-based message authentication system specifically for the industrial context. Traditional security mechanisms in IIoT environments often struggle with issues related to scalability, efficiency, and vulnerability to various cyber threats [1]. Blockchain technology, known for its decentralized and tamper-resistant nature, has emerged as a promising solution to fortify the security of data exchanges. By integrating blockchain principles into the fabric of IIoT communication, the proposed lightweight message authentication system seeks to establish a robust and efficient security framework. The primary hurdle lies in developing a protocol that can operate seamlessly within the resource constraints inherent in industrial devices. These constraints often involve limitations in processing power, memory, and energy, demanding a delicate balance between security and efficiency. Developing a protocol that can address these constraints while maintaining the robust security features of blockchain is a critical challenge.

The challenges in developing lightweight blockchain-based authentication for IIoT are multifaceted and crucial to ensuring the practical viability and security of such protocols. Scalability issues arise due to the immense transaction volume inherent in IIoT environments, demanding solutions that can efficiently handle this load without compromising performance [2]. The compatibility of authentication protocols with resource-constrained devices is a pressing concern, given the prevalence of devices with limited computational capabilities in IIoT ecosystems. The absence of standardized protocols introduces interoperability challenges, highlighting the need for universally accepted standards to foster seamless communication among diverse IIoT devices. Ensuring the privacy and confidentiality of sensitive IIoT data is a substantial challenge and requires robust encryption mechanisms [3-4].

The impetus behind this research on lightweight blockchain-based message authentication for the IIoT stems from the urgent need to enhance the security infrastructure within industrial environments. With the proliferation of IoT devices, industrial systems face increasing threats related to unauthorized access, data tampering, and potential breaches. These security challenges pose immediate risks to operational continuity, safety, and confidentiality, emphasizing the necessity for innovative and resilient security solutions [5]. The main aim is to strengthen the IIoT's security base so that industries can confidently adopt its advantages without sacrificing efficiency or data integrity. This study aims to contribute to the development of a secure, efficient, and practical message authentication system designed specifically for the challenges posed by the Industrial Internet of Things. Such a protocol must:

- Be a lightweight blockchain-based authentication protocol specifically tailored for the challenges posed by IIoT environments.
- Address the critical scalability challenges in IIoT, considering the substantial transaction load and the imperative to design protocols compatible with resource-constrained devices.
- Address interoperability concerns to ascertain seamless communication among diverse IIoT devices.
- Ensure the privacy and confidentiality of sensitive IIoT data through the integration of robust encryption mechanisms.
- Use optimization techniques to enhance the energy efficiency of authentication protocols, crucial for IIoT devices powered by batteries or energy-harvesting methods.
- Be evaluated in real-world industrial settings, bridging the gap between theoretical proposals and practical implementations.
- Enhance adaptability to dynamic IIoT networks, accommodating frequent device joinings and leavings, thereby assuring the flexibility and reliability of the authentication protocols.

II. LITERATURE REVIEW

Identification is crucial in the Industrial Internet of Things (IIoT) to ensure the integrity and security of data flows between networked devices.
As IIoT usage grows, conventional authentication systems confront reliability, effectiveness, and compatibility issues.

*A.* *Traditional Authentication in IIoT*

Early IIoT authentication techniques depended on centralized systems and conventional cryptographic methods. Although these approaches work well in some situations, they are not suitable for the particularities of industrial settings. Challenges include efficiency concerns that affect real-time communication, scalability as the number of connected devices increases, and support for the numerous devices and protocols in IIoT.

*B.* *Blockchain Technology in IIoT Security*

Previous studies have highlighted challenges in optimizing blockchain for resource-constrained industrial devices, necessitating the development of lightweight solutions that balance security and efficiency. In [6], a novel approach was proposed combining blockchain-based identity management with an access control mechanism specifically tailored for edge computing environments. The proposed solution leveraged self-certified cryptography to facilitate the registration and authentication of network entities, utilizing implicit certificates bound to their identities. The identity and certificate management mechanism is constructed on a blockchain, guaranteeing a transparent and secure foundation. Furthermore, an access control mechanism that incorporated Bloom filter technology was introduced and seamlessly integrated with the identity management system. A lightweight secret key agreement protocol was devised to address the unique security considerations of resource-constrained edge devices, based on self-authenticated public key cryptography. These mechanisms synergistically contribute to providing robust data security assurances for IIoT applications, encompassing crucial aspects such as authentication, auditability, and confidentiality. This study not only acknowledged the significance of edge computing in IIoT, but also proposed a comprehensive and secure solution to mitigate the emerging security challenges introduced by the unique features of edge computing.

In [7], the deployment of a private blockchain mechanism customized for an industrial application within a cement factory was presented. This approach prioritized attributes such as low power consumption, scalability, and a lightweight security scheme, effectively controlling access to critical data from sensors and actuators. The architecture used a low-power ARM Cortex-M processor to improve the computational efficiency of cryptographic algorithms. The blockchain network adopted a Proof of Authentication (PoAh) consensus mechanism instead of Proof of Work (PoW), ensuring secure authentication, scalability, speed, and energy efficiency.

In [8], a thorough examination of security solutions for IoT was presented, encompassing both emerging and traditional mechanisms, including blockchain, machine learning, cryptography, and quantum computing. This study offered a comparative analysis of the pertinent literature, describing the distinctive features, advantages, and disadvantages of each mechanism. The study classified these solutions based on their demonstrated security capabilities. Additionally, the potential advantages and challenges inherent in each of the four mechanisms were identified, contributing valuable insights into the security landscape of IoT [9].
*C.* *Lightweight Blockchain Authentication Protocols*

Such protocols aim to overcome the limitations of traditional methods by optimizing blockchain principles to operate efficiently within resource-constrained devices. Some of the key aspects explored include design considerations for lightweight protocols, scalability in dynamic IIoT environments, and the trade-off between security and efficiency [10]. In [11], private key generators were employed for essential functions, such as offline registration and traceability, to address the intricate landscape of cross-domain communication within IIoT, specifically tailored to accommodate collaborative device deployment by multiple manufacturers. This decentralized structure is reinforced by edge gateways, which are essential in orchestrating distributed authentication and token distribution through secret-sharing technology. In [12], batch authentication was integrated to minimize latency and enhance the scheme's efficiency. In [13], a comprehensive security analysis confirmed the scheme's robust adherence to the stringent requirements of cross-domain authentication in IIoT scenarios. In [14], the experimental results supported the practical viability of the proposed framework, demonstrating superior computational efficiency and reduced communication costs compared to similar approaches. This emphasis on security, privacy, and computational efficiency addresses the pressing challenges inherent in collaborative IIoT environments [15]. In [16-17], the proposed schemes not only contributed to theoretical advances in cross-domain communication, but also provided a practical and efficient solution with potential implications for enhancing the security and efficiency of IIoT systems in collaborative manufacturing settings.

*D.* *Research Gaps and Challenges*

The existing literature on lightweight blockchain-based authentication for IIoT reveals several research gaps and challenges that present opportunities for further investigation and development [18].

- Lack of standardized lightweight blockchain authentication protocols for IIoT: The research landscape highlights the absence of standardized lightweight blockchain authentication protocols specifically tailored for the Industrial IoT. Although some protocols have been proposed, there is a lack of consensus on a standardized approach [19]. The absence of standardized protocols may hinder interoperability and the seamless integration of IIoT devices in diverse industrial settings.
- Limited exploration of optimization techniques for resource-constrained devices: Many IIoT devices operate under resource constraints, posing challenges for the adoption of blockchain technology. The literature reveals a limited exploration of optimization techniques tailored for resource-constrained devices. Addressing this gap involves developing innovative approaches to optimize blockchain processes, ensuring efficient execution on devices with limited computation and energy resources [20].
- Need for comprehensive evaluations in real-world or simulated industrial environments: While several lightweight blockchain authentication protocols have been proposed, there is a notable gap in comprehensive evaluations within real-world or simulated industrial environments.
The lack of empirical validation in authentic industrial settings hinders understanding of how these protocols perform under realistic conditions. Future research should prioritize practical implementations or simulations that mirror the complexities of industrial environments [21].

These research gaps underscore the importance of standardization, of optimization for resource-constrained devices, and of empirical validation in industrial contexts. Addressing these gaps will contribute to the development of robust, interoperable, and efficient lightweight blockchain-based authentication protocols tailored to the unique requirements of IIoT [22-24].

III. METHODOLOGY

An IIoT environment encompasses a network of devices and sensors interconnected to facilitate seamless data exchange and communication. Within this dynamic landscape, ensuring the integrity and security of data transmissions is paramount. The deployment of a lightweight blockchain-based message authentication system serves as a robust solution to fortify the trustworthiness of transactions within the IIoT framework. Figure 1 shows the design of the proposed system.

Fig. 1. System design.

At the core of the system lies the concept of a lightweight blockchain, incorporating principles similar to those of established blockchain networks such as Bitcoin. The system integrates seamlessly into the IIoT environment, providing a secure foundation for transactional data. Transactions, represented as messages between devices, are recorded in blocks, each cryptographically linked to the previous one, forming an immutable chain. This ensures the traceability and integrity of the entire transaction history.

An anomaly detection module is integrated to improve the security of the IIoT ecosystem, acting as a vigilant guardian against potentially malicious or aberrant activities. This module uses sophisticated machine-learning techniques to discern patterns within transactional data and identify anomalies that may indicate suspicious behavior. The anomaly detection module involves a two-step process: feature selection and machine learning. In the feature selection phase, the system employs feature selection methods that systematically evaluate different combinations of features, selecting the most relevant ones for anomaly detection. This ensures that the subsequent machine-learning models focus on key aspects of the data,
Through the fusion of blockchain principles and advanced machine learning techniques, the system offers a resilient shield, ensuring the reliability and security of transactions in the ever-evolving landscape of industrial connectivity. *A.* *Dataset* The dataset comprises 600,000 entries detailing Bitcoin transactional graph metadata. Each entry includes a transaction hash (txhash), indicating a unique identifier for a specific Bitcoin transaction. The "indegree" and "outdegree" columns provide a compregension of the transactional graph structure by representing the number of incoming and outgoing edges, respectively, for each address involved. The "inbtc" and "outbtc" columns capture the total Bitcoin received and sent in a given transaction, respectively. This dataset is designed to study the blockchain anomalies and detect fraud. Analyses conducted in the specific dataset can involve exploring patterns, conducting network analyses, and employing machine learning techniques to identify unusual or fraudulent transactions within the Bitcoin network. *B.* *Machine Learning Model* *1)* *Isolation Forest* Isolation forest is an anomaly detection algorithm that relies on a tree-based approach to efficiently identify anomalies within a dataset. It begins by randomly selecting a feature and a split value for each data point, creating binary partitions. Through recursive partitioning, anomalies, which are typically isolated instances, tend to have shorter paths in the constructed trees, making them stand out from normal data points. The average path length of a data point across multiple trees in the forest serves as its anomaly score. Shorter paths imply easier isolation and a higher likelihood of being an anomaly. This algorithm is computationally efficient, especially in highdimensional datasets, and can work without assuming a specific data distribution. Isolation Forest finds applications in cybersecurity for intrusion detection, fraud detection in finance, and various domains where identifying anomalies is crucial. Its simplicity and versatility make it a valuable tool for detecting outliers and unusual patterns in diverse datasets. Algorithm 1 describes the integration of a lightweight blockchain with the isolation forest algorithm for anomaly detection. ALGORITHM 1: LIGHTWEIGHT BLOCKCHAIN WITH ISOLATION FOREST ALGORITHM FOR ANOMALY DETECTION ``` 1. Initialize the number of convolution blocks denoted as N 2. for i = 1 to N do 3. Encode additional features from forward and backward path for better enhancement 4. Encode additional features 5. Get the spatial features using (1) to (5) ``` `6. Obtain local best` *θ* `local` `and global best` *θ* `global` ``` 7. // Continuously check the if condition for parameter update 8. if condition then 9. Retain the previous state value 10. else if other_condition then ``` `11. Update` *θ* `local` `and` *θ* `global` `12. Get` θ `by taking the average combination of min,` ``` max, and global values 13. end for 14. Initialize a lightweight blockchain and the isolation forest algorithm with parameters 15. for each IIoT transaction do 16. Add the transaction to the blockchain 17. Calculate the anomaly score using the isolation forest algorithm 18. if the anomaly score exceeds the threshold then 19. Perform the action for anomaly detection - Alert or take corrective action 21. end if 22. end for. ``` IV. IMPLEMENTATION The lightweight blockchain-based message authentication system for IIoT was implemented in a well-structured development environment. 
IV. IMPLEMENTATION

The lightweight blockchain-based message authentication system for IIoT was implemented in a well-structured development environment. The choice of development tools played a crucial role in achieving an efficient and effective implementation. Python was selected as the primary programming language due to its versatility, extensive libraries, and suitability for both blockchain development and machine learning. The core blockchain functionality was implemented utilizing Python libraries, such as hashlib, json, and time, to facilitate the creation of block and transaction structures and cryptographic hashing. The scikit-learn framework was deployed, as it provides easy-to-use implementations of various algorithms, such as isolation forest, k-means clustering, and support vector machines.

*A.* *Lightweight Blockchain Design*

*1)* *Block Structure*

The blocks within the blockchain were structured to include essential components, such as the index, timestamp, transactions, proof of work, and the previous block hash. This design adheres to fundamental blockchain principles, ensuring data integrity and traceability.

*2)* *Transaction Format*

The transactions within the blocks were formatted to accommodate sender, recipient, and message details. The standardized format allowed for consistent representation and interpretation of transactional data.

Figure 2 defines a blockchain class with methods for managing the creation of new blocks, adding transactions, and performing proof-of-work mining. The blockchain is initialized with a genesis block, and new blocks are created by appending them to the existing chain. Transactions, such as authentication requests and responses, are added to each block before mining. The mining process involves generating a proof of work; once it is found, a new block is added to the chain, linking it to the previous block through a cryptographic hash. The hash method utilizes SHA-256 to create a hash of a given block, and the last_block property conveniently retrieves the last block in the chain. Two transactions are added to the blockchain, simulating a simple authentication process. The mining step showcases the addition of a new block with proof-of-work, creating a secure and immutable link within the blockchain. Finally, the printed blockchain details offer a glimpse into the chronological sequence of blocks, including their indices, timestamps, transactions, proof values, and hash references. This implementation serves as a foundational example of how a blockchain can be constructed and utilized for maintaining a secure and transparent ledger of transactions.

Fig. 2. Blockchain implementation.
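Since the figure itself is not reproduced here, a condensed sketch of a blockchain class matching that description might look as follows. This is a simplified, hypothetical reconstruction from the prose, not the authors' exact code, and the toy difficulty rule in proof_of_work is an assumption.

```python
# Condensed sketch of the blockchain class described above.
import hashlib
import json
import time

class Blockchain:
    def __init__(self):
        self.chain = []
        self.pending_transactions = []
        # Genesis block: fixed proof and previous hash.
        self.new_block(proof=100, previous_hash="1")

    def new_block(self, proof, previous_hash=None):
        block = {
            "index": len(self.chain) + 1,
            "timestamp": time.time(),
            "transactions": self.pending_transactions,
            "proof": proof,
            "previous_hash": previous_hash or self.hash(self.chain[-1]),
        }
        self.pending_transactions = []
        self.chain.append(block)
        return block

    def new_transaction(self, sender, recipient, message):
        self.pending_transactions.append(
            {"sender": sender, "recipient": recipient, "message": message}
        )
        return self.last_block["index"] + 1  # index of the block that will hold it

    def proof_of_work(self, last_proof):
        # Toy difficulty: find p such that sha256(last_proof || p) ends in "00".
        proof = 0
        while not hashlib.sha256(f"{last_proof}{proof}".encode()) \
                          .hexdigest().endswith("00"):
            proof += 1
        return proof

    @staticmethod
    def hash(block):
        # Sort keys so the serialization (and hence the hash) is deterministic.
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    @property
    def last_block(self):
        return self.chain[-1]

# Usage mirroring the description: two transactions, then one mined block.
bc = Blockchain()
bc.new_transaction("device-A", "gateway-1", "auth-request")
bc.new_transaction("gateway-1", "device-A", "auth-response")
proof = bc.proof_of_work(bc.last_block["proof"])
bc.new_block(proof)
for block in bc.chain:
    print(block["index"], block["previous_hash"][:12], len(block["transactions"]))
```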
*B.* *Anomaly Detection Method*

*1)* *Exploratory Data Analysis*

The dataset was thoroughly examined to gain a foundational understanding of its structure and content. The dataset includes distinct features that represent various aspects of transactions within the IIoT environment. The data types include object identifiers for transaction hashes, integer representations for incoming and outgoing transactions, and floating-point values for the Bitcoin-related features. Additionally, the dataset entails indicators and anomalies related to malicious behavior, represented as integer values. The dataset lacks missing values, ensuring completeness and reliability in subsequent analyses. This examination sets the stage for a more detailed exploration, including statistical summaries, distribution visualizations, and correlation analyses.

Fig. 3. Exploratory data analysis.

*2)* *Data Visualization*

Figure 4 displays a visual representation of malicious transactions within the metadata. The bar plot illustrates the counts of various types of malicious transactions. The analysis revealed the prevalence of different categories of malicious activities, providing a quick and intuitive overview of potential security concerns within IIoT. This visualization helps in quickly identifying patterns and trends related to malicious behavior and lays the groundwork for more detailed analyses and targeted mitigation strategies. The analysis of malicious transactions within the metadata reveals intriguing patterns: 'in_malicious' transactions, with a count of 1222, exhibit the highest frequency, indicating that a substantial number of transactions serve as inputs to malicious activities. This suggests a notable trend, where a significant portion of transactions contributes to the initiation of malicious behavior. 'Out_malicious' transactions, with a count of 65, show a lower occurrence, suggesting that the dissemination of malicious funds to subsequent transactions is relatively less frequent.

Fig. 4. Types of malicious transactions.

Figure 5 portrays a correlation heatmap of the malicious categories, providing a comprehensive visualization of the relationships among the various indicators of malicious transactions. In this heatmap, deeper hues represent stronger correlations. The analysis reveals insights into how different malicious categories are correlated with each other, shedding light on potential dependencies and patterns. Figure 5 shows a strong positive correlation among the 'is_malicious', 'out_malicious', and 'tx_malicious' categories, implying a noteworthy association between these indicators in the dataset. When a transaction is identified as malicious, there is a notable likelihood that it is also categorized as an output of, or is directly linked to, another malicious transaction. This correlation suggests a significant connection between transactions flagged as malicious, indicating that the identification of one type of malicious activity often coincides with the presence of another. This underscores the interrelated nature of malicious transactions, emphasizing the importance of comprehensive anomaly detection strategies that consider these correlations to improve the overall effectiveness of security measures.

Fig. 5. Heat map of malicious categories.
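An EDA pass of this kind is straightforward to reproduce. The pandas sketch below runs the flag counts and correlation computations on a tiny fabricated frame; the column names follow the dataset description in the text, but the three rows of data are invented.

```python
# Toy pandas sketch of the EDA steps above: counts of malicious flags
# (cf. Figure 4) and correlation matrices (cf. Figures 5 and 7).
import pandas as pd

df = pd.DataFrame({
    "indegree":      [2, 40, 3],
    "outdegree":     [2, 1, 35],
    "in_btc":        [1.0, 90.0, 2.5],
    "out_btc":       [1.0, 0.1, 75.0],
    "is_malicious":  [0, 1, 1],
    "in_malicious":  [0, 1, 0],
    "out_malicious": [0, 0, 1],
})

flag_cols = ["is_malicious", "in_malicious", "out_malicious"]
print(df[flag_cols].sum())          # per-category counts
print(df[flag_cols].corr())         # correlations among malicious flags
print(df.corr(numeric_only=True))   # transaction features vs. flags
```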
The distribution further highlights the relative frequencies of other malicious indicators, providing a quick and accessible overview of the landscape of security concerns. This analysis places a noteworthy emphasis on understanding the origin points of potentially malicious activity transactions, particularly the examination of 'in malicious' transactions, where a transaction serves as an input to malicious activities, bringing attention to the initiation points of potential security threats. This focus on the origin points allows for a deeper exploration of the transactions that contribute to the propagation of malicious behavior. By identifying and understanding these starting points, stakeholders can tailor their security measures and anomaly detection strategies to effectively address and mitigate the potential risks emerging from these specific transactional origins. Fig. 6. Distribution of malicious transactions. The correlation heatmap between transaction features and malicious flags provides a comprehensive overview of their relationships. The selected features include transactional attributes, such as 'indegree', 'outdegree', 'in_btc', 'out_btc', Fig. 7. Correlation heatmap between transaction features and malicious flags. *C.* *Integration and Testing* *1)* *Merging Datasets* The transactional data from the blockchain were merged with the metadata dataset, creating a unified dataset for machine learning input. *2)* *Feature Selection* A subset of the relevant features and the target variable were carefully chosen from the transaction metadata dataset. The selected features include essential transaction attributes, namely 'indegree', 'outdegree', 'in btc', 'out btc', 'total btc', 'mean in btc', and 'mean out btc', which are instrumental in capturing the structural characteristics of transactions within an IIoT environment. The primary objective of this feature selection process is to distill the most informative attributes that contribute to the identification of potentially malicious transactions. The target variable denoted 'is malicious' serves as the binary outcome indicating whether a given transaction is classified as malicious. Focusing on these specific features and the target variable, the feature selection aims to streamline the dataset for subsequent machine learning modeling. This strategic processing facilitates a more focused and efficient training process, enhancing the model's ability to discern patterns and relationships that contribute to the detection of malicious activities within the IIoT transactions. Figure 8 shows the number of anomalies for each category after feature selection. ***www.etasr.com*** ***Okfie & Mishra: Anomaly Detection in IIoT Transactions using Machine Learning: A Lightweight …*** ----- ## **Engineering, Technology & Applied Science Research Vol. 14, No. 3, 2024, 14645-14653 14 65 1** notable achievement, the F1-score, which is a balance between precision and recall, was also low at 0.0056%, suggesting a trade-off between precision and recall. Achieving a balance between these metrics is crucial for ensuring that the model effectively identifies both malicious and normal transactions. The model identified a total of 429,713 anomalies within the dataset. This number represents instances where the model flagged transactions as potentially malicious. An analysis of these anomalies is essential for further investigation. 
Figure 8 shows the number of anomalies for each category after feature selection.

Fig. 8. Count anomalies in the original dataset after feature selection.

*D.* *Model Training and Testing*

The Isolation Forest model was trained on a subset of the dataset and evaluated on a test set, using an 80-20 train-test split ratio. The features selected for training the model included crucial transaction attributes, such as 'indegree', 'outdegree', 'in_btc', 'out_btc', 'total_btc', 'mean_in_btc', and 'mean_out_btc', whereas the target variable 'is_malicious' served as the binary outcome indicating the presence (1) or absence (0) of malicious behavior.

*E.* *Model Evaluation*

The Isolation Forest model exhibited an overall accuracy of 95.02%, indicating its ability to make correct predictions. However, a more nuanced examination reveals challenges in precision, as reflected by an exceedingly low value of 0.0028%, indicating a notable number of false positives, where transactions are incorrectly flagged as malicious. On the positive side, the model demonstrated a recall of 80%, implying its effectiveness in capturing four-fifths of the actual malicious transactions. The F1 score, which harmonizes precision and recall, is at a low value of 0.0056%, underscoring the difficulty in achieving a balanced performance between precision and recall. The model identified 429,713 anomalies, pointing to its ability to pinpoint potentially malicious behavior. These findings underscore the need to meticulously weigh the model's performance metrics to optimize its effectiveness in detecting anomalies within blockchain transactions.
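A minimal sketch of the training and evaluation procedure described in Sections D and E, continuing from the X and y frames above, might look as follows. Isolation Forest is unsupervised, so the labels are used only for scoring; the hyperparameters and the -1/1-to-label mapping are illustrative assumptions, not the authors' exact configuration.

```python
from sklearn.ensemble import IsolationForest
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score
from sklearn.model_selection import train_test_split

# 80-20 train-test split, as in Section D.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# Fit on features only; Isolation Forest does not see the labels.
model = IsolationForest(contamination="auto", random_state=42)
model.fit(X_train)

# predict() returns -1 for anomalies and 1 for inliers;
# map -1 to 1 (malicious) to compare against the is_malicious labels.
y_pred = (model.predict(X_test) == -1).astype(int)

print("accuracy :", accuracy_score(y_test, y_pred))
print("precision:", precision_score(y_test, y_pred, zero_division=0))
print("recall   :", recall_score(y_test, y_pred))
print("F1       :", f1_score(y_test, y_pred))
print("flagged anomalies:", int(y_pred.sum()))
```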
V. RESULTS AND DISCUSSION

Upon evaluating the model, several critical performance metrics were derived. The low precision achieved highlights a substantial challenge in correctly identifying malicious transactions: it implies a significant number of false positives, meaning that a large portion of the transactions flagged as malicious were benign. This aspect needs careful consideration, as false positives can have adverse consequences in real-world scenarios. Recall, standing at 80%, indicates the model's ability to successfully capture four-fifths of the actual malicious transactions. While this is a notable achievement, the F1-score, which balances precision and recall, was also low at 0.0056%, reflecting the trade-off between precision and recall. Achieving a balance between these metrics is crucial for ensuring that the model effectively identifies both malicious and normal transactions. The model identified a total of 429,713 anomalies within the dataset, representing instances where transactions were flagged as potentially malicious. An analysis of these anomalies is essential for further investigation; understanding the characteristics of these flagged transactions can offer insights into the model's sensitivity to potential security threats.

*A.* *Discussion*

The results of the proposed message-transaction authentication system were compared with previous studies in the field. This comparison serves as a reference point to evaluate the progress made and the distinctive features of the proposed system. Previous studies in the domain of IIoT security and anomaly detection have often focused on leveraging blockchain principles and machine learning techniques to enhance the robustness of authentication systems. Although various methods have been explored, the emphasis has consistently been on achieving a balance between accuracy, precision, and recall. The proposed system, using a combination of sequential forward feature selection and Isolation Forest, achieved a notable accuracy of 95.02%. However, the precision and recall scores reveal a trade-off between these metrics, highlighting the challenges of accurately identifying malicious transactions while minimizing false positives. When comparing these results with [4], it becomes evident that simultaneously attaining high precision and recall remains a complex task. The nature of IIoT transactions, often characterized by diverse patterns and evolving threat landscapes, contributes to the intricacies of anomaly detection. Although the proposed system excels in overall accuracy and anomaly detection, the need for further refinement to enhance precision without compromising recall becomes apparent. Future research directions could involve a more nuanced exploration of feature engineering, leveraging more advanced machine learning algorithms, and incorporating real-time feedback mechanisms to adapt to evolving threats. By building on the foundation laid by previous research and addressing the unique challenges posed by IIoT transactions, the field can continue to advance towards more effective and comprehensive security solutions.

*B.* *Research Limitations*

Despite the promising outcomes of the proposed lightweight blockchain-based message authentication system for IIoT transactions, several limitations must be acknowledged. Understanding and addressing them is essential to provide a nuanced interpretation of the research findings and to guide future work towards more comprehensive and tailored solutions for securing IIoT transactions.

*1)* *Dataset Constraints*

The research relies heavily on the characteristics and patterns present in the chosen open-source dataset. The generalization of the findings may be limited if the dataset does not fully encapsulate the diverse nature of IIoT transactions across industries.

*2)* *Feature Selection*

Although sequential forward feature selection was deployed, the efficacy of the chosen features and their relevance to all possible IIoT scenarios may vary. A more exhaustive exploration of feature engineering techniques could improve the model's performance.

*3)* *Model Sensitivity*

The system's sensitivity to hyperparameter tuning and to the choice of machine learning algorithm is a noteworthy limitation. Different IIoT environments may require tailored approaches, and the generalizability of the implemented model should be interpreted with caution.

*4)* *False Positives*

The low precision score implies a substantial number of false positives. The potential consequences of false alarms in IIoT security scenarios underscore the need for continuous refinement of the model to reduce false positives without compromising overall accuracy.

*5)* *Real-Time Adaptability*

This study focuses primarily on batch processing and may not fully capture the real-time dynamics of IIoT transactions. Future extensions should explore mechanisms for adaptive learning and continuous model refinement in response to evolving threats.

*6)* *Ethical and Regulatory Considerations*

As with any security system, ethical considerations surrounding privacy and regulatory compliance must be carefully addressed. Striking a balance between robust security measures and respecting privacy norms is an ongoing challenge in the implementation of such systems.

VI. CONCLUSIONS

In conclusion, the proposed lightweight blockchain-based message authentication system for IIoT transactions, augmented with machine learning techniques such as Isolation Forest, can significantly advance the security of IIoT environments.
With an accuracy of 95.02%, the system competently detects anomalies and potential security threats, showcasing its ability to improve transaction security. However, the trade-off between precision and recall underscores the need for continual refinement to minimize false positives while maintaining overall accuracy. This study contributes to the expanding realm of IIoT security by elucidating the complexities inherent in safeguarding industrial transactions within diverse and dynamic environments. Using principles from both blockchain and machine learning, the proposed system presents a resilient approach to ensuring message authentication security. For future endeavors, emphasis should be placed on mitigating the identified limitations and refining the system to accommodate the evolving demands of the IIoT landscape. This includes delving into advanced machine learning algorithms, such as ensemble methods or deep learning architectures, to discern intricate transaction patterns more effectively. Additionally, exploring real-time adaptive learning mechanisms can enable dynamic adjustments to evolving threats and anomalies, thereby enhancing the system's agility.

REFERENCES

[1] M. Anwer, S. M. Khan, M. U. Farooq, and Waseemullah, "Attack Detection in IoT using Machine Learning," *Engineering, Technology & Applied Science Research*, vol. 11, no. 3, pp. 7273–7278, Jun. 2021, https://doi.org/10.48084/etasr.4202.
[2] P. Singh, Z. Elmi, V. Krishna Meriga, J. Pasha, and M. A. Dulebenets, "Internet of Things for sustainable railway transportation: Past, present, and future," *Cleaner Logistics and Supply Chain*, vol. 4, Jul. 2022, Art. no. 100065, https://doi.org/10.1016/j.clscn.2022.100065.
[3] H. Liu and B. Lang, "Machine Learning and Deep Learning Methods for Intrusion Detection Systems: A Survey," *Applied Sciences*, vol. 9, no. 20, Jan. 2019, Art. no. 4396, https://doi.org/10.3390/app9204396.
[4] Y. Wu, X. Jin, H. Yang, L. Tu, Y. Ye, and S. Li, "Blockchain-Based Internet of Things: Machine Learning Tea Sensing Trusted Traceability System," *Journal of Sensors*, vol. 2022, Feb. 2022, Art. no. e8618230, https://doi.org/10.1155/2022/8618230.
[5] R. Doshi, N. Apthorpe, and N. Feamster, "Machine Learning DDoS Detection for Consumer Internet of Things Devices," in *2018 IEEE Security and Privacy Workshops (SPW)*, San Francisco, CA, USA, May 2018, pp. 29–35, https://doi.org/10.1109/SPW.2018.00013.
[6] A. Rahman et al., "On the Integration of Blockchain and SDN: Overview, Applications, and Future Perspectives," *Journal of Network and Systems Management*, vol. 30, no. 4, Oct. 2022, Art. no. 73, https://doi.org/10.1007/s10922-022-09682-4.
[7] A. Rahman et al., "Impacts of blockchain in software-defined Internet of Things ecosystem with Network Function Virtualization for smart applications: Present perspectives and future directions," *International Journal of Communication Systems*, 2023, Art. no. e5429, https://doi.org/10.1002/dac.5429.
[8] O. O. Mohammed, M. W. Mustafa, D. S. S. Mohammed, and A. O. Otuoze, "Available transfer capability calculation methods: A comprehensive review," *International Transactions on Electrical Energy Systems*, vol. 29, no. 6, 2019, Art. no. e2846, https://doi.org/10.1002/2050-7038.2846.
[9] R. Kumar, P. Kumar, R. Tripathi, G. P. Gupta, S. Garg, and M. M. Hassan, "A distributed intrusion detection system to detect DDoS attacks in blockchain-enabled IoT network," *Journal of Parallel and Distributed Computing*, vol. 164, pp. 55–68, Jun. 2022, https://doi.org/10.1016/j.jpdc.2022.01.030.
[10] I. Butun, P. Österberg, and H. Song, "Security of the Internet of Things: Vulnerabilities, Attacks, and Countermeasures," *IEEE Communications Surveys & Tutorials*, vol. 22, no. 1, pp. 616–644, 2020, https://doi.org/10.1109/COMST.2019.2953364.
[11] S. Basha, D. Rajput, and V. Vandhan, "Impact of Gradient Ascent and Boosting Algorithm in Classification," *International Journal of Intelligent Engineering and Systems*, vol. 11, no. 1, pp. 41–49, Feb. 2018, https://doi.org/10.22266/ijies2018.0228.05.
[12] S. Ismail, M. Nouman, D. W. Dawoud, and H. Reza, "Towards a lightweight security framework using blockchain and machine learning," *Blockchain: Research and Applications*, vol. 5, no. 1, Mar. 2024, Art. no. 100174, https://doi.org/10.1016/j.bcra.2023.100174.
[13] S. Bassendowski, "The Internet of Things (IoT)," *Canadian Journal of Nursing Informatics*, vol. 13, no. 1, 2018.
[14] S. M. Basha and D. S. Rajput, "Chapter 9 - Survey on Evaluating the Performance of Machine Learning Algorithms: Past Contributions and Future Roadmap," in *Deep Learning and Parallel Computing Environment for Bioengineering Systems*, A. K. Sangaiah, Ed. Academic Press, 2019, pp. 153–164.
[15] R. Kumar, P. Kumar, R. Tripathi, G. P. Gupta, S. Garg, and M. M. Hassan, "A distributed intrusion detection system to detect DDoS attacks in blockchain-enabled IoT network," *Journal of Parallel and Distributed Computing*, vol. 164, pp. 55–68, Jun. 2022, https://doi.org/10.1016/j.jpdc.2022.01.030.
[16] B. K. Mohanta, D. Jena, U. Satapathy, and S. Patnaik, "Survey on IoT security: Challenges and solution using machine learning, artificial intelligence and blockchain technology," *Internet of Things*, vol. 11, Sep. 2020, Art. no. 100227, https://doi.org/10.1016/j.iot.2020.100227.
[17] A. Derhab et al., "Blockchain and Random Subspace Learning-Based IDS for SDN-Enabled Industrial IoT Security," *Sensors*, vol. 19, no. 14, Art. no. 3119, Jan. 2019, https://doi.org/10.3390/s19143119.
[18] E. Kfoury, J. Saab, P. Younes, and R. Achkar, "A Self Organizing Map Intrusion Detection System for RPL Protocol Attacks," *International Journal of Interdisciplinary Telecommunications and Networking (IJITN)*, vol. 11, no. 1, pp. 30–43, Jan. 2019, https://doi.org/10.4018/IJITN.2019010103.
[19] N. Waheed, X. He, M. Ikram, M. Usman, S. S. Hashmi, and M. Usman, "Security and Privacy in IoT Using Machine Learning and Blockchain: Threats and Countermeasures," *ACM Computing Surveys*, vol. 53, no. 6, Sep. 2020, Art. no. 122, https://doi.org/10.1145/3417987.
[20] F. Hussain, R. Hussain, S. A. Hassan, and E. Hossain, "Machine Learning in IoT Security: Current Solutions and Future Challenges," *IEEE Communications Surveys & Tutorials*, vol. 22, no. 3, pp. 1686–1721, 2020, https://doi.org/10.1109/COMST.2020.2986444.
[21] "Python," https://www.python.org/.
[22] M. Baz, "SEHIDS: Self Evolving Host-Based Intrusion Detection System for IoT Networks," *Sensors*, vol. 22, no. 17, Jan. 2022, Art. no. 6505, https://doi.org/10.3390/s22176505.
[23] T. Su, H. Sun, J. Zhu, S. Wang, and Y. Li, "BAT: Deep Learning Methods on Network Intrusion Detection Using NSL-KDD Dataset," *IEEE Access*, vol. 8, pp. 29575–29585, 2020, https://doi.org/10.1109/ACCESS.2020.2972627.
[24] N. A. Alsharif, S. Mishra, and M. Alshehri, "IDS in IoT using Machine Learning and Blockchain," *Engineering, Technology & Applied Science Research*, vol. 13, no. 4, pp. 11197–11203, Aug. 2023, https://doi.org/10.48084/etasr.5992.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.48084/etasr.7384?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.48084/etasr.7384, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://etasr.com/index.php/ETASR/article/download/7384/3714" }
2024
[ "JournalArticle" ]
true
2024-06-01T00:00:00
[ { "paperId": "b37f1ea38aa3e33bb648e58bda5f35ee8cda4282", "title": "Towards a lightweight security framework using blockchain and machine learning" }, { "paperId": "22786bd2da2cd5eae10c39ec7856774690518feb", "title": "IDS in IoT using Machine ‎Learning and Blockchain" }, { "paperId": "85317f5f6ee510776e979b4652c00e9b6d1f5a4c", "title": "Impacts of blockchain in software‐defined Internet of Things ecosystem with Network Function Virtualization for smart applications: Present perspectives and future directions" }, { "paperId": "bdc962c10d8031c8f0e791589ceadd33c3241a8c", "title": "SEHIDS: Self Evolving Host-Based Intrusion Detection System for IoT Networks" }, { "paperId": "3ed8c9b99ac1531dbaecd322fc663d0d9f7bcc8d", "title": "On the Integration of Blockchain and SDN: Overview, Applications, and Future Perspectives" }, { "paperId": "2ead4cba0fe2a59bfad3c3958a0593ece990d16a", "title": "Internet of Things for Sustainable Railway Transportation: Past, Present, and Future" }, { "paperId": "f9be175e11da5dbc300d96c4474567137aaedd81", "title": "Blockchain-Based Internet of Things: Machine Learning Tea Sensing Trusted Traceability System" }, { "paperId": "8ca159f79cf369bd90083b24b169c8ef843630aa", "title": "A distributed intrusion detection system to detect DDoS attacks in blockchain-enabled IoT network" }, { "paperId": "f15b80c3faa75d323f678f75f2eae39397a6b887", "title": "Attack Detection in IoT using Machine Learning" }, { "paperId": "346dc80571d13880a87bbf341577d6eb83414911", "title": "Security and Privacy in IoT Using Machine Learning and Blockchain" }, { "paperId": "4939f2ab9400c2c8ebc1b0a5fbe515ebdf495bcf", "title": "Survey on IoT security: Challenges and solution using machine learning, artificial intelligence and blockchain technology" }, { "paperId": "236dfdeb4511754cf71ba220ac569b11973502cd", "title": "Machine Learning and Deep Learning Methods for Intrusion Detection Systems: A Survey" }, { "paperId": "e62b1428ba8273eccbe74168a8442e75853d9dc0", "title": "Blockchain and Random Subspace Learning-Based IDS for SDN-Enabled Industrial IoT Security" }, { "paperId": "3130e37a2e2df999eb4b231887b37c8b07967eb9", "title": "Available transfer capability calculation methods: A comprehensive review" }, { "paperId": "3142d5355f87a735809d5f4c9a66480f6f7ff868", "title": "Impact of Gradient Ascent and Boosting Algorithm in Classification" } ]
10,434
en
[ { "category": "Environmental Science", "source": "external" }, { "category": "Medicine", "source": "external" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01733eee0834a5dba9e33e63864dab1b3bb0b6a5
[ "Environmental Science", "Medicine" ]
0.857308
Hospital-Use Pharmaceuticals in Swiss Waters Modeled at High Spatial Resolution.
01733eee0834a5dba9e33e63864dab1b3bb0b6a5
Environmental Science and Technology
[ { "authorId": "2975323", "name": "K. Kuroda" }, { "authorId": "13342710", "name": "R. Itten" }, { "authorId": "4773583", "name": "L. Kovalova" }, { "authorId": "4433376", "name": "C. Ort" }, { "authorId": "4806926", "name": "D. Weissbrodt" }, { "authorId": "4662906", "name": "Christa S. McArdell" } ]
{ "alternate_issns": null, "alternate_names": [ "Environ Sci Technol", "Environ Sci Technol", "Environmental Science & Technology" ], "alternate_urls": [ "http://pubs.acs.org/journals/esthag/", "https://pubs.acs.org/page/esthag/about.html", "http://pubs.acs.org/journals/esthag/index.html" ], "id": "9efb20cf-7484-450d-8245-12dfbf639d3e", "issn": "0013-936X", "name": "Environmental Science and Technology", "type": "journal", "url": "https://pubs.acs.org/journal/esthag" }
null
Hospital-use pharmaceuticals in Swiss waters modeled at high spatial resolution

Keisuke Kuroda,†‡* René Itten,† Lubomira Kovalova,† Christoph Ort,† David Weissbrodt† and Christa S. McArdell†

† Eawag, Swiss Federal Institute of Aquatic Science and Technology, Überlandstrasse 133, Dübendorf 8600, Switzerland
‡ Graduate School of Engineering, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-8656, Japan
* NIES, National Institute for Environmental Studies, 16-2 Onogawa, Tsukuba, Ibaraki 305-8506, Japan

Corresponding author: keisukekr@gmail.com, Phone: +81 29 850 2843, Fax: +81 29 850 2920

This document is the accepted manuscript version of the following article: Kuroda, K., Itten, R., Kovalova, L., Ort, C., Weissbrodt, D. G., & McArdell, C. S. (2016). Hospital-use pharmaceuticals in Swiss waters modeled at high spatial resolution. Environmental Science and Technology, 50(9), 4742-4751. http://doi.org/10.1021/acs.est.6b00653

ABSTRACT

A model to predict the mass flows and concentrations of pharmaceuticals predominantly used in hospitals across a large number of sewage treatment plant (STP) effluents and river waters was developed at high spatial resolution. It comprised 427 geo-referenced hospitals and 742 STPs serving 98% of the general population in Switzerland. In the modeled base scenario, domestic pharmaceutical use was geographically distributed according to the population size served by the respective STPs. Distinct hospital scenarios were set up to evaluate how the predicted results were modified when pharmaceutical use in hospitals was allocated differently; for example, in proportion to the number of beds or the number of treatments in hospitals. The hospital scenarios predicted mass flows and concentrations up to 3.9 times greater than in the domestic scenario for iodinated X-ray contrast media (ICM) used in computed tomography (CT), and up to 6.7 times greater for gadolinium, a contrast medium used in magnetic resonance imaging (MRI). Field measurements showed that ICM and gadolinium were predicted best by the scenarios using the number of beds or treatments in hospitals with the specific facilities (i.e., CT and/or MRI). Pharmaceuticals used both in hospitals and by the general population (e.g., cyclophosphamide, sulfamethoxazole, carbamazepine, diclofenac) were predicted best by the scenario using the number of beds in all hospitals, but the deviation from the domestic scenario values was only small. Our study demonstrated that the bed number-based hospital scenarios were effective in predicting the geographical distribution of a diverse range of pharmaceuticals in STP effluents and rivers, while the domestic scenario was similarly effective on the scale of large river-catchments.

KEYWORDS: benzotriazole; carbamazepine; catchment; cyclophosphamide; diatrizoate; diclofenac; fluconazole; furosemide; gabapentin; gadolinium; hospitals; iobitridol; iodinated X-ray contrast media (ICM); iohexol; iomeprol; iopamidol; iopromide; ioxitalamic acid; modeling; oxazepam; ritonavir; river; sewage treatment plant (STP); sulfamethoxazole; verapamil
1. INTRODUCTION

Hospitals are often discussed as potential point sources for the discharge of numerous human-use pharmaceuticals into the environment, with major contributions to wastewater loads.[1-4] Case studies have shown that the contribution of hospital wastewater to pharmaceutical loads in sewage treatment plants (STPs) varies considerably, from less than 5% to more than 50%, depending on the specific hospital characteristics (location, type, size, and number relative to the catchment population) and the target substance.[3,5-9] The number of hospital beds per 1000 population is a general measure of the availability of inpatient services, and varies among countries; for example, 0.3 (Bangladesh), 2.5 (China), 2.9 (world average), 3.2 (U.S.), 5.5 (Switzerland), 8.4 (Germany) and 14.1 (Japan), as of 2005.[10] The values found in specific catchment studies on hospital wastewater were 0.5–4.4 (Australia),[5,7] 3.6–3.8 (Switzerland),[1,9] 4.4 (Oslo),[11] 6.5 (Italy),[12] and 12.1 (Berlin).[13] The environmental impact of pharmaceutical residues in hospital wastewater has been studied.[6,9,14,15] Pharmaceuticals of particular concern include iodinated X-ray contrast media (ICM), which are used for computed tomography (CT) in large quantities;[16] cytostatics, which are often toxic;[17] and antibiotics, which contribute to the spread of antibiotic resistance.[4,18] In conventional sewage treatment processes, these pharmaceuticals are only partially eliminated, and their residues are found in surface and groundwater.[19-21]

Thus far, two options have been proposed for reducing the environmental discharge of hospital-derived pharmaceuticals: (1) separate treatment of hospital wastewater at the source,[22,23] and (2) upgrading municipal STPs to include post-treatments such as ozonation and powdered activated carbon.[24-26] Several countries have already begun to consider the latter; in Switzerland, for example, a general 80% reduction in organic micropollutants from raw sewage, evaluated by selected compounds, is envisaged for roughly 100 STPs under a new water protection act.[27,28]

Both options naturally involve substantial costs. Therefore, in deciding whether a given STP and/or hospital should be modified for the elimination of pharmaceuticals, it is essential to identify which catchments have high loads or high concentrations of pharmaceuticals in the receiving waters. In large geographical areas with multiple substances, such assessments can be very laborious when based on field monitoring; in such cases, modeling approaches are more useful.[29,30] The discharge of domestically used compounds has been successfully modeled using catchment-scale water quality models, such as GREAT-ER,[31] LF2000-WQX,[32] or similar approaches.[29,33] In these models, however, hospitals are not included as emission sources. Recently, Al Aukidy et al.[8] proposed a framework for assessing the environmental risk posed by pharmaceuticals derived from hospital wastewater. They proposed to use pharmaceutical concentrations in hospital wastewater reported in various countries as reference concentrations, and to use total hospital bed numbers in catchments to estimate the dilution of the hospital wastewater by domestic wastewater.
In their study, the estimated risk quotient had an uncertainty of 2–3 orders of magnitude, owing to the large variation in pharmaceutical concentrations in hospital wastewater reported in the literature. This uncertainty increases for pharmaceuticals not used in every hospital but only in specific types of hospital; here, the spatial distribution of such pharmaceuticals would differ from that of hospital beds, a case not addressed by Al Aukidy et al. However, assessments based on actual consumption of pharmaceuticals in the target area, with consideration of hospital types, would greatly reduce such uncertainty. In Australia and Switzerland, audit data of pharmaceutical consumption obtained from hospitals was used to successfully estimate the hospital-based contribution to pharmaceutical loads in one or a few STPs.[5,7,9] For assessment across a large number of catchments, however, a model using more easily available data (e.g., number of beds, hospital types) is more convenient in terms of data collection and modeling. In Germany, the mass flow of ICM in the urban water cycle of Berlin was predicted by a model comprising 12 STPs and hospitals, using estimated ICM consumption data.[34] Thus far, however, there has been no study on the modeling of different classes of hospital-use pharmaceuticals across a large number of catchments with consideration of hospital type.

Here, we proposed and validated a model-based method using national consumption data to efficiently predict the geographical distribution of a diverse range of pharmaceuticals (including some specifically used in hospitals) in STP effluents and rivers, at high spatial resolution, incorporating multiple types of hospitals as geo-referenced point sources, across all of Switzerland. The model is based on a previously developed national substance flow model, which predicted the respective amounts of micropollutants discharged by the general population.[29] Our objectives were to (i) test and compare distinct scenarios with different levels of model complexity (i.e., pharmaceuticals were geographically distributed according to (a) population size served by respective STPs, (b) total number of hospital beds, (c) number of beds at specific hospitals, or (d) number of medical treatments related to specific pharmaceutical usage); (ii) test different cases for varying ratios of outpatients to total patients; (iii) validate the model through field measurements; and (iv) evaluate the applicability of the model in terms of spatial resolution, model complexity, data acquisition demands, and predictive uncertainty.

2. EXPERIMENTAL SECTION

2.1 Model setup

As a basis, we used a substance flow model for Switzerland.[29] The model incorporates a total of 742 STPs, covering more than 98% of the general population (7.31 million). The input data are (i) national pharmaceutical consumption data, (ii) excretion rates of pharmaceuticals, (iii) elimination rates in municipal STPs, (iv) location and population of the catchments, and (v) dilution in the receiving waters. No elimination was assumed in the rivers, as the environmental half-lives of many pharmaceuticals are on the same order of magnitude as, or larger than, the maximum residence time of Swiss rivers (1 d). The base flow conditions (Q347) were used to account for minimum dilution in the rivers.
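As an illustration of this substance-flow arithmetic, the short Python sketch below chains consumption, excretion, STP elimination, and Q347 dilution into an effluent load and a river concentration. It is a minimal sketch with invented numbers, not the original model code; consistent with the assumption above, no in-river elimination is applied.

```python
def stp_effluent_load(consumption_g_per_day, excretion_rate, stp_elimination):
    """Daily load (g/d) leaving one STP: consumption x fraction excreted
    x fraction surviving sewage treatment."""
    return consumption_g_per_day * excretion_rate * (1.0 - stp_elimination)

def river_concentration_ug_per_l(load_g_per_day, q347_m3_per_s):
    """Concentration (ug/L) at base flow Q347, assuming no in-river loss."""
    liters_per_day = q347_m3_per_s * 1000.0 * 86400.0
    return load_g_per_day * 1.0e6 / liters_per_day

# Illustrative values only: 10 g/d consumed in a catchment, 90% excreted,
# 40% eliminated at the STP, receiving river base flow of 0.5 m3/s.
load = stp_effluent_load(10.0, 0.9, 0.4)                           # 5.4 g/d
print(round(river_concentration_ug_per_l(load, 0.5), 3), "ug/L")   # 0.125
```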
Geographically, the total national pharmaceutical loads were allocated proportionally to the population size of the respective STP catchments, which ranged from 30–390,000 (average 9900, median 2700).

The national pharmaceutical consumption data for 2009 was purchased from IMS Health (Danbury, CT, USA), which collects data in many countries that is used for scholarly research.[35] In Switzerland, data are available on the overall national distribution of all registered pharmaceuticals by manufacturers, importers, wholesalers and suppliers, divided into four distribution channels: (i) pharmacies, (ii) drug stores, (iii) doctors' offices, and (iv) hospitals. The sum of channels (i)–(iii), plus the amount dispensed through hospitals but excreted outside by outpatients, is considered the total consumption by the general population (domestic consumption). Only the amount dispensed and excreted in hospitals (e.g., by inpatients) is assumed to be discharged from hospitals. This amount, as a fraction of total consumption, is thus referred to in this study as the effective hospital fraction (HF), which describes the allocation of pharmaceuticals between hospitals and households. HF was determined in the following manner. First, for each pharmaceutical, the ratio of the total amount dispensed through hospitals (i.e., channel (iv) above) to total consumption was termed the hospital-dispensed fraction (HFdis). Based on HFdis, HF was estimated as the fraction of the total amount of a given pharmaceutical dispensed through all hospitals, minus the portion of this amount excreted by outpatients outside the hospitals. Hence, the consumed amount that was subsequently discharged from hospitals was expressed as (total consumption × HF), and that discharged from the general population as (total consumption × (1 − HF)).

The hospital-allocated pharmaceutical consumption (i.e., total consumption × HF) was distributed to the respective hospitals according to the scenarios described in Section 2.3. The consumption by the general population (i.e., total consumption × (1 − HF)) was distributed in proportion to the respective catchment populations. The respective consumption by hospitals and by the general population was assigned for each STP, and the resulting loads and concentrations of pharmaceuticals in STP effluents and rivers were predicted, taking excretion from the human body and elimination at STPs into account, as in the base model.[29] This allocation of pharmaceutical consumption to STP catchments is illustrated in Figure S1 of the Supporting Information (hereafter, SI).
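The allocation of a pharmaceutical between hospitals and households, and its distribution over STP catchments, can be illustrated with the following minimal sketch. The catchment table, the HF value, and the bed-proportional weighting (corresponding to the all beds scenario introduced in Section 2.3 below) are assumptions for illustration, not the study's data.

```python
import pandas as pd

# Hypothetical catchments: population served and in-use hospital beds per STP.
stps = pd.DataFrame({
    "stp": ["A", "B", "C"],
    "population": [50_000, 5_000, 2_000],
    "hospital_beds": [400, 0, 150],
})

total_consumption = 1_000.0  # g/d nationally (illustrative)
HF = 0.3                     # effective hospital fraction (illustrative)

# Domestic share: proportional to catchment population.
domestic = (total_consumption * (1 - HF)
            * stps["population"] / stps["population"].sum())

# Hospital share: proportional to hospital beds ('all beds' weighting).
hospital = (total_consumption * HF
            * stps["hospital_beds"] / stps["hospital_beds"].sum())

stps["consumption_g_per_day"] = domestic + hospital
print(stps)
```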
2.2 Hospital data

Information on the hospitals' location, type and number of inpatient beds was derived from the official database of 2007 (Federal Office of Public Health), which included all the hospitals in Switzerland (427 hospitals, with 44,892 beds in total). In addition, for hospitals with radiology and/or oncology departments, information on the respective facilities, as well as the actual numbers of treatments related to CT, MRI and inpatient chemotherapies, was acquired. Further details of the data acquisition are described in S1 (SI). The number of in-use beds was determined based on the occupancy rate for each hospital (median 90%, Q1 84%, Q3 98%), and used for subsequent modeling and analysis, including the characteristic hospital bed density per 1000 population (hereafter, B1000).

In Switzerland, hospitals and hospital beds are concentrated in the large cities (Figures S2 and S3a). In comparison, B1000 provides a different picture (Figure S3b): several suburban catchments had higher B1000 values of up to 118, which is 21 times the national average B1000 for Switzerland (5.5). More details on the geographical distribution of hospitals and hospital beds, and the distribution of B1000, are described in S2.

2.3 Scenarios

Five distinct scenarios were developed for distributing pharmaceuticals (Table 1). Four of these were the hospital scenarios, in which hospital-allocated pharmaceutical consumption was distributed over the number of hospital beds or specific treatments. For contrast media and cytostatics, hospitals equipped with CT or MRI facilities, and hospitals with oncology departments, were distinguished in bed-specific and treatment-specific scenarios. Furthermore, for each pharmaceutical, the effect of HF variation was evaluated for two cases: in the average case (AC), HFac was set according to a realistic average proportion of expected inpatients; in the high case (HC), HFhc was set at the same value as HFdis or slightly lower, conservatively assuming a higher proportion of inpatients than in the average case. The domestic scenario was evaluated in comparison with the hospital scenarios. In the domestic scenario, all the pharmaceutical consumption was allocated to the general population (i.e., HF = 0), as in the base model.

2.4 Input data uncertainty

For the base model, the maximum uncertainty in the predicted pharmaceutical load discharged from each STP, through variation of the model parameters, was evaluated as 64%.[29] In rivers, the uncertainty of Q347 added further uncertainty, ranging from 30–70%, to the predicted concentrations, depending on the river size. These uncertainties naturally applied to the hospital scenarios as well. In addition, the uncertainty of the hospital data per se must be accounted for. Briefly, the uncertainty was small (< 10%) for basic hospital data, pharmaceutical consumption, and HFdis. In comparison, large uncertainty was found for treatment numbers of MRI and chemotherapy. In 49% of hospitals equipped with MRI facilities, and 74% of hospitals with an oncology department, treatment numbers were not available and thus had to be estimated. The actual treatment numbers (where available) did not show a good correlation with the bed numbers (Figures S4b and S4c). Therefore, for each hospital type (e.g., supply hospitals, primary care hospitals), the median of the actual treatment numbers (SI, Table S1) was used, in order to avoid extreme over- or underestimation. In contrast, the actual CT treatment numbers correlated highly with the bed numbers (Pearson, r = 0.92, P < 0.001; Figure S4a). Therefore, in the 54% of CT hospitals where treatment numbers were not available, linear regression using the respective bed numbers was employed, and thus the uncertainty was expected to be small. Further details on the uncertainty regarding input data for the hospital scenarios are described in S3.
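For the CT case, where treatment numbers correlate strongly with bed numbers, the imputation of missing treatment counts by linear regression can be sketched as follows; the bed and treatment numbers are invented for illustration and are not the study's data.

```python
import numpy as np

# Hospitals with reported CT treatment counts (illustrative data).
beds_known = np.array([120, 300, 650, 900])
ct_treatments_known = np.array([4_000, 9_500, 21_000, 30_000])

# Least-squares linear fit of treatments on beds (degree-1 polynomial).
slope, intercept = np.polyfit(beds_known, ct_treatments_known, 1)

# Impute the CT treatment number for a 450-bed hospital that did not
# report a count.
print(int(slope * 450 + intercept))
```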
2.5 Pharmaceuticals

We modeled and measured 19 compounds, covering 11 major pharmacological classes. Escher et al.[6] provide a list of the top 100 pharmaceuticals used and excreted in the largest amounts in a typical and regionally important general hospital in Switzerland. Based on this list, we selected pharmaceuticals which were representative and poorly eliminated during sewage treatment. We studied seven ICM used for CT (Table 2), representing the pharmacological class which showed the highest consumption and was mainly dispensed in hospitals (HFdis = 0.58). The seven ICM were modeled together as 'iodine', the sum of the iodine content of all the ICM, because the occurrence of the individual ICM measured in the STP effluents varied significantly among catchments (Figure S5), seemingly owing to varying hospital preferences. Gadolinium complexes are used for MRI as contrast media, and are dispensed only in hospitals (HFdis = 1); gadolinium (Gd) is therefore most useful for studying the discharge of hospital effluents to STPs.[5,36] Gadolinium complexes are designed to be stable and non-reactive, and are quickly excreted from the human body, with a 1.3–2 h half-life.[36,37] In addition, they are not removed during conventional sewage treatment.[38] Cyclophosphamide was selected as a model cytostatic, because it is used only for chemotherapy and has a large HFdis (0.68). Sulfamethoxazole (HFdis = 0.17) was selected because of its broad use as an antibiotic in general hospitals. In addition, we selected eight more pharmaceuticals with relatively small HFdis (0.03–0.49). Benzotriazole, which is closely related to domestic wastewater (i.e., HFdis = 0), was selected as a reference compound.[39] The modeled compounds included four of the five originally proposed indicator compounds used to evaluate the removal of micropollutants in advanced wastewater treatment, as envisaged in the new Swiss water protection act.[27,28] Excretion rates and elimination in STPs were based on averages from literature data. The parameters and scenarios for the ICM, gadolinium, cyclophosphamide and sulfamethoxazole are shown in Table 2, and those for the remaining compounds in Table S2.

2.6 Field sampling and laboratory analyses

Samples of 14 STP effluents and 7 river waters in Switzerland (Table S3; locations are indicated in Figures 1 and S3b) were collected during June and October 2010. The sampling sites were selected based on meeting at least one of the following criteria: catchments with large variation in predictions between the hospital scenarios and the domestic scenario; catchments with hospitals equipped with CT or MRI facilities, or an oncology department; and locations with high predicted mass flows or concentrations. The STP catchments contained varying combinations of general, psychiatric, and rehabilitation hospitals, with varying proportions of hospital beds by hospital type (Table S4).

Details on the sampling methods, analytical procedures and quality control are described in S4. Briefly, 24-h composite samples were taken over one week (STPs), and 1-, 2- or 4-week composite samples over 1–8 weeks (rivers). The compounds, excluding gadolinium, were analyzed by online SPE-HPLC-MS/MS.[40] Gadolinium was analyzed using ICP/HRMS.
2.7 Methods for scenario evaluation and model validation by measurement

Throughout this study, we evaluated the pharmaceutical discharge based on mass flow (load). For rivers, pharmaceutical mass flow was evaluated at each STP discharge point by aggregating the loads from the upstream STPs. To compare the model predictions among the different hospital scenarios with respect to the domestic scenario, the modeled pharmaceutical mass flow was evaluated as the change relative to the domestic scenario (i.e., mass flow predicted by a hospital scenario / mass flow predicted by the domestic scenario). This relative change is the same for the pharmaceutical concentrations, as the assumed flows are the same in all the scenarios.

The measured pharmaceutical mass flows were determined by multiplying the measured concentrations by the actual discharge over the sampling period, for each STP and river (Table S5). The agreement between the respective measured and modeled average daily mass flows was evaluated, following Ort et al.,[29] using the predictive accuracy factor (prediction/observation; hereafter, PAF), its median value (MPAF), its relative standard deviation (RSD), and the R² from the linear regression forced through 0.

The benzotriazole mass flow predicted by the domestic scenario agreed well with the measured mass flow, both in the STP effluents and the rivers (Figure S6), showing that the domestic scenario was valid for predicting compounds used domestically. Throughout the paper, evaluations are mainly based on the results for ICM, gadolinium, cyclophosphamide and sulfamethoxazole. The predicted mass flows and concentrations of those four compounds in all the STP effluents and river waters, along with their relative change, are shown in Table S6.
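These validation statistics can be illustrated with a short sketch; the mass flows below are invented, and the through-origin R² shown is one common formulation, which may differ in detail from the exact convention of Ort et al.[29]

```python
import numpy as np

def validation_stats(predicted, measured):
    """PAF per site, its median (MPAF) and relative standard deviation (%),
    and R^2 of a zero-intercept regression of predictions on measurements."""
    paf = predicted / measured
    mpaf = np.median(paf)
    rsd = np.std(paf, ddof=1) / np.mean(paf) * 100.0  # percent

    # Regression forced through the origin: slope = sum(xy) / sum(x^2).
    slope = np.sum(measured * predicted) / np.sum(measured ** 2)
    resid = predicted - slope * measured
    r2 = 1.0 - np.sum(resid ** 2) / np.sum((predicted - predicted.mean()) ** 2)
    return mpaf, rsd, r2

# Illustrative daily mass flows (g/d) at four sites.
pred = np.array([2.0, 5.5, 1.1, 9.0])
meas = np.array([1.8, 6.0, 0.9, 8.0])
print(validation_stats(pred, meas))
```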
3. RESULTS

3.1 Modeled pharmaceutical discharge

3.1.1 STP effluents

**Overall.** Our results showed that the hospital scenarios predicted higher mass flows than the domestic scenario in only a small number of catchments (Figures 1a, 2, S7–S9). In 76% of all catchments, with 30–100,000 population and no hospital beds, the change relative to the domestic scenario was simply (1 − HF) for all the hospital scenarios, because these catchments had no hospital-allocated pharmaceutical consumption (see Figure S1 for the expected relative change depending on catchment characteristics). Among all the scenarios, the relative change exceeded 1 in 8–15% of catchments (Tables S7 and S8), where the number of hospital beds or treatments per population was above the respective national average values (Table S9). Large relative changes were found in a few percent of catchments, which were mostly suburban or relatively remote, with 1000–10,000 population. These catchments differed among scenarios and compounds, depending on the services provided by the hospitals. In contrast, in catchments with more than 100,000 population, the predicted mass flows varied less among scenarios and compounds, and the relative change mostly ranged from 1–3. This indicated an abundance of hospitals of all types in these heavily populated catchments.

The all beds scenario had more catchments with large relative changes than the other scenarios. This is because, in all beds, catchments containing any type of hospital are assigned hospital-allocated pharmaceutical consumption. Therefore, very large relative changes were found in catchments where the existence of hospitals of special types caused large B1000 (e.g., suburban catchments). In such catchments, the all beds scenario can overestimate the mass flow of specific pharmaceuticals. For example, the relative change for ICM and gadolinium in all beds was largest (e.g., around 7 for ICM, AC) in the two small catchments of Rheinau and Schinznach-Bad (Figure 2), both of which have a population of 1300 and the highest B1000 values (118 and 108, respectively). However, this result is unrealistic, because the catchments contain only a psychiatric hospital (Rheinau) and a rehabilitation hospital (Schinznach-Bad), neither of which has a radiology department; the field measurements confirmed the overestimation of contrast media in these catchments (Section 3.2.1).

**Contrast media.** The contrast media included ICM (HFac = 0.3, HFhc = 0.5) and gadolinium (HFac = 0.5, HFhc = 0.8). In the CT beds scenario, the largest relative changes, of 3.9 (AC) and 5.8 (HC), were found in STP Saignelegier (2100 population) and STP Zurzach (7500). Gadolinium showed the largest relative change because it has the largest HF value. In the MRI beds scenario, the largest relative changes (3.7–6.7 in AC, 5.3–10.1 in HC) were found in six catchments, with populations ranging from 2100–52,000. In the MRI treatments scenario, the maximum relative change was even larger, at 22.4 (STP Saignelegier, HC); note, however, that the treatment number in this catchment was estimated.

**Other pharmaceuticals.** Various HF values (HFac = 0.01–0.1, HFhc = 0.03–0.49) were applied to the remaining ten pharmaceuticals. Among the latter, ritonavir had the largest relative change (e.g., 3.1 in all beds, AC, STP Rheinau). In the case of cyclophosphamide (HFac = 0.05, HFhc = 0.34), the relative change reached 1.5 in the oncology beds and chemotherapy scenarios for AC, and 3.8 in oncology beds for HC. Regarding sulfamethoxazole (HFac = 0.03, HFhc = 0.17), the relative change reached 1.7 (AC) and 4.5 (HC) in all beds, and 1.4 (AC) and 3.2 (HC) in general beds.

**Comparison between bed-specific and treatment-specific scenarios.** The respective predictions by the bed-specific and treatment-specific scenarios did not differ much for ICM (Figure S10). In comparison, the difference was large for gadolinium (up to a factor of 2) and cyclophosphamide (up to 4). This reflects the fact that bed and treatment numbers did not correlate well in the case of MRI and chemotherapy (Figures S4b and S4c). However, missing treatment numbers produced large uncertainties (Section 2.4); therefore, great care should be taken with treatment-specific scenarios in such catchments.

3.1.2 Rivers

Similarly to the STP effluents, the relative changes were largest in catchments with 1000–10,000 upstream population (Figures 1b, S11–S13). The largest relative changes in rivers (e.g., 3.9 for ICM in CT beds, AC) were similar to those in STP effluents.
However, large relative changes were found in fewer river waters than STP effluents (Tables S10 and S11); for example, relative changes of greater than 3 for gadolinium in MRI beds (AC) were noted in the effluents of six STPs but in only three river waters. All the scenarios predicted very similarly in rivers where the upstream population exceeded 10,000: as the loads of all the upstream STPs were aggregated, the respective differences from the domestic scenario were averaged out.

3.2 Model validation by measurement

3.2.1 Contrast media

As explained above, the all beds scenario was inappropriate for modeling ICM and gadolinium (e.g., MPAF 3.0, RSD 469% for ICM, AC; Figure 3, and Tables S12 and S13). Especially in the STP catchments of Rheinau and Schinznach-Bad, the all beds scenario considerably overestimated the contrast media (PAF 11–55), because the catchments did not have CT or MRI facilities. In contrast, the facility-specific scenarios showed better agreement with the measured values (e.g., PAF 1.2–3.6 in MRI beds, AC) in those catchments.

The domestic, CT beds and CT treatments scenarios somewhat overestimated ICM in the STP effluents (MPAF 1.5–2.5, AC; Figure 3a), although the magnitude of overestimation was not markedly higher than the base model uncertainty. The observed overestimation may have partly derived from the uncertainties of the consumption numbers and/or the varying elimination rates of ICM in the STPs. The reported elimination of ICM varied largely (e.g., 0–90% for iopromide), possibly due to varying sludge age and/or degree of nitrification.[20,41-43] In our model, an average elimination rate of 40% was assumed (Table 2). Among the scenarios, domestic had an MPAF closest to 1, with the largest R² value (Figure 3a), but the deviation from the measured values was found to be large (PAF 0.27–7.7). In contrast, the CT beds and CT treatments scenarios in AC showed smaller RSD values, and deviated less from the measured values (PAF 0.48–5.8). Among the sampled STPs, four catchments showed relative changes of greater than 2 in CT beds, AC. The domestic scenario underestimated the measured values in two of these catchments (PAF 0.27 and 0.48 in domestic, against 0.82 and 1.1 in CT beds), and CT beds overestimated them in the other two (PAF 2.5 and 4.6, respectively, against 1.2 in domestic).

In the case of gadolinium, the mass flows predicted by the MRI beds and MRI treatments scenarios, AC, agreed better with the measured STP effluent mass flows (MPAF 0.79, RSD 152%, R² 0.93 in MRI beds, and 1.0, 99%, 0.96 in MRI treatments; Figure 3b) than those of the domestic scenario (0.56, 323%, 0.71). The advantage of the hospital scenarios was clearly demonstrated in the case of gadolinium, probably because the predictions of gadolinium showed the largest difference among scenarios (due to the large HF), and because uncertainty in the elimination rate was very small owing to gadolinium's high persistence during conventional sewage treatment processes.[38] Five measured STP catchments exhibited relative changes of greater than 2 in MRI beds (AC). The domestic scenario underestimated the measured values in four of these catchments (PAF 0.13–0.37 in domestic against 0.55–2.4 in MRI beds).
In the remaining STP, Langnau (indicated in Figures 3, S14 and S15), the catchment's PAF was better in domestic (0.88) than in MRI beds (4.0), and the ICM values were also predicted better by the domestic scenario. In this catchment, CT and MRI facilities had only been installed two months before the sampling campaign, and therefore usage of contrast media was probably still low, which may explain the overestimation by the hospital scenarios.

In three catchments where the treatment numbers were all actual numbers, the respective PAF values for MRI treatments, AC (0.59, 0.75 and 0.81), were similar to those for MRI beds, AC (0.79, 0.55 and 0.61). In five catchments where all the treatment numbers were estimated, MRI treatments (AC) predicted well compared to the catchments with actual treatment numbers, but showed greater variation (PAF 0.99–2.8, median 1.6).

More catchments exhibited over- or underestimation in HC than in AC, for both ICM and gadolinium (Figures S14 and S15), which shows that the assumed HFhc was too large.

In rivers, the contrast media were predicted well by both bed-specific and treatment-specific scenarios in AC (e.g., MPAF 1.8, RSD 32%, R² 1.00 in CT beds; and 0.67, 38%, 1.00 in MRI beds; Figures S14 and S15).

3.2.2 Other pharmaceuticals

The predictions for the other pharmaceuticals in AC did not differ much among scenarios, and mostly agreed with the measured loads (within a factor of 2), both in the STPs and rivers (Figures S16–S19, Tables S12–S15). Interestingly, however, for cyclophosphamide, sulfamethoxazole, carbamazepine and diclofenac, the all beds scenario generally predicted better than domestic and general beds in the STP catchments where psychiatric and/or rehabilitation hospitals accounted for more than half of the total hospital beds (e.g., in STP Rheinau, for cyclophosphamide, PAF 0.93 in all beds, compared with < 0.46 in the other scenarios). This suggests that the special types of hospital discharged these pharmaceuticals, unlike the contrast media, at rates similar to general hospitals. Overall, these results suggest that incorporating hospitals as point sources is important in catchments with high B1000, even for compounds with small HF, and that great care should be taken when estimating pharmaceutical discharges from small catchments with high B1000.

4. DISCUSSION

4.1 Parameters and scenarios

Our results showed that, in a large proportion (> 95%) of both STP effluents and rivers in Switzerland, the predictions of the hospital scenarios did not differ much from those of the domestic scenario (e.g., relative change < 1.5). In a few percent of catchments and rivers, however, the difference from the domestic scenario became larger, and the hospital scenarios showed greater predictive accuracy. For example, in the all beds scenario, with HF = 0.3, the largest relative change was 7.2, and the relative change exceeded 2 in 24 of the STP catchments, serving 2.1% of the national population (158,000). The magnitude of the difference from the domestic values depended on B1000 and HF (see also S5 and Figure S20 for a theoretical explanation). HF is the most critical model parameter in determining the allocation of pharmaceuticals between hospitals and households; the other parameters, such as excretion and elimination, have no effect on this allocation.
The large impact of HF variation on the model predictions was demonstrated by the two cases, AC and HC. With a large HF of 0.8, the change relative to the domestic scenario could reach 22. Nevertheless, the agreement between the modeled and measured loads was generally better when HF was set at half of HFdis or less. This shows that a significant fraction of pharmaceuticals dispensed within hospitals was discharged outside the hospitals, and that HF is therefore meaningful only if it is reduced from HFdis by the appropriate portion of outpatients (see Section 2.1). It should also be noted that, even for general pharmaceuticals with small HF, the relative change can be large in catchments with high hospital bed density, where the hospital scenarios predict better than the domestic scenario (see Section 3.2.2). The hospital bed density can vary widely (e.g., up to 21 times the national average in Switzerland; see S2), often owing to the presence of special hospital types (e.g., psychiatric or rehabilitation hospitals). In this study, these special types of hospital were, like general hospitals, found to be sources of many general pharmaceuticals, favoring the all beds scenario. On the other hand, in the case of pharmaceuticals used for specific treatments (i.e., in this study, ICM and gadolinium), the all beds scenario is not applicable, and the relevant hospitals must be distinguished in bed-specific or treatment-specific scenarios. For such pharmaceuticals, bed-specific scenarios are typically more reliable than treatment-specific scenarios, because (as in the present study) bed numbers and occupancy are far more easily accessed and carry smaller estimation uncertainties than treatment numbers. In the cases where treatment numbers were available in this study, however, the measured values agreed as well with the predictions of the treatment-specific scenarios as they did with those of the bed-specific scenarios.

The data used for the hospital scenarios can change over time, although typically not rapidly (e.g., 2% annual change in Switzerland; see S3). Thus, these data would need to be updated regularly (e.g., every few years). In the case of pharmaceuticals used for specific treatments (e.g., ICM, gadolinium), information such as the establishment or abolishment of the corresponding facilities must be updated; otherwise, the discharge from hospitals may be significantly over- or underestimated.

4.2 Applicability

Through the comparison of different scenarios, our study revealed the relationship between spatial resolution, model complexity and predictive accuracy. At low spatial resolution (e.g., large river-catchments), the difference between scenarios was very small for all the pharmaceuticals tested, as shown in the results for rivers with large upstream populations. In contrast, at high spatial resolution (e.g., STP catchments or small river-catchments), the difference was larger, and the hospital scenarios showed good predictive accuracy (e.g., within a factor of 2). In this case, the domestic scenario could produce discrepancies of up to 1 order of magnitude.

Therefore, at low spatial resolution, the domestic scenario is a simple and efficient model for predicting the distribution of a diverse range of pharmaceuticals.
Incorporating geo-referenced STP and pharmaceutical consumption data, the domestic scenario is suitable for identifying potential river catchments of concern in a large geographical area (e.g., at the national or regional level). Our results suggest that other population-based models for predicting the discharge of domestic-use compounds (e.g., carbamazepine and diclofenac)[31-33] can also accurately predict hospital-use pharmaceuticals on the scale of large river catchments.

In contrast, the hospital scenarios can be used most effectively at high spatial resolution. These scenarios additionally incorporate geo-referenced hospital data; information on hospital type, bed number and bed occupancy; and HF. Therefore, the hospital scenarios are most suitable for the detailed evaluation of smaller regions of interest (e.g., at the county or prefectural level), or catchments of particular rivers, where the related data-collection efforts are justified. Large relative changes (vs. the domestic scenario) were found mostly in suburban STPs and small rivers with high catchment B1000. Nevertheless, large cities also tended to have a relatively high B1000, and thus significant relative change (up to 3). Therefore, the hospital scenarios are also useful for urban STPs and their adjacent receiving waters.

Through field measurement, we validated our scenarios in STP catchments of various sizes (1300–52,000 population). In both the hospital scenarios and the domestic scenario, the predictive uncertainty was less than the uncertainty in the approach of Al Aukidy et al.[8] (e.g., 2–3 orders of magnitude), who used measurement-derived concentrations from the literature instead of hospital consumption data. This demonstrates a significant advantage of our consumption-based approach. To further improve the model's predictive accuracy, the input data that were here assumed to be geographically and temporally constant (for model simplicity and wide applicability) may be refined; for example, as suggested in Coppens et al.,[33] by incorporating variability in consumption depending on geographic, climatic, seasonal and/or socio-cultural conditions; varying the elimination of pharmaceuticals according to varying sewage treatment methods; and incorporating environmental attenuation in rivers. Interestingly, a similar approach, in this case using pharmaceutical consumption and animal production, was proposed for predicting the discharge of veterinary antibiotics.[44]

4.3 Implications for pharmaceutical discharge reduction and risk assessment

In this study, the predicted contribution of hospitals to the total discharge at a single STP was up to 92% for gadolinium (MRI beds scenario, AC), 82% for ICM (CT beds, AC), and 55% for cyclophosphamide (all beds, AC). For catchments with such a large hospital contribution, on-site treatment of hospital effluents[40,45] would be efficient in reducing pharmaceutical discharge from STPs, reducing losses into the environment through sewer leakage[46] and combined sewer overflows,[47] and preventing hospital wastewater-derived pathogens and antibiotic multiresistant bacteria[4] from entering the environment.

Pharmaceutical concentrations in rivers can vary a great deal, as the predictions here reveal, and in some rivers may be higher than has previously been determined.
For example, in the CT beds scenario here, a range of 0.2 ng/L to 40 µg/L (AC) and 58 µg/L (HC) was predicted as the sum of the seven modeled ICM concentrations, whereas in German rivers the measured sum of several ICM was only a few µg/L.[16,43] Therefore, the hospital scenarios may be useful for revealing such hotspots, as well as for evaluating real and potential environmental impacts, and for devising countermeasures.

ACKNOWLEDGMENTS

We thank A. Doberer (Labor Veritas); M. Lanfranchi and A. Koch (Amt für Natur und Umwelt Graubünden); M. Langmeier (Eawag); J. Schenzel (Research Station Agroscope); C. Stamm (Eawag); A. Strawczynski (Service des eaux, sols et assainissement Laboratoire SESA); and staff from STPs Bioggio, Davos, Füllinsdorf, Herisau, Langnau I.E., Leuggern, Muri, Münsterlingen, Rancate/Mendrisio, Rheinau, Schinznach-Bad, St.Gallen-Hofen, Wil and Zurzach for their help in sample collection for measurement. We thank A. Ammann, F. Dorusch, A. Lück and the members of the Department of Environmental Chemistry (Uchem), all from Eawag, for their help in sample collection and analyses. IMS Health (Danbury, CT, USA) provided Swiss pharmaceutical consumption numbers; Bayer HealthCare Pharmaceuticals (Berlin, Germany), Bracco (Milano, Italy) and Guerbet (Villepinte, France) provided ICM consumption numbers; the Federal Office of Public Health (FOPH) provided information on Swiss hospitals; the Federal Statistical Office of Switzerland (FSO) provided information on hospital types and facilities as well as the geocoordinates of all Swiss buildings (registered constructions and lodgments); and the Swiss Armed Forces (Logistikbasis der Armee, Sanitärdienstes) provided the coordinates of the hospitals. This study was supported by the Swiss Federal Office for the Environment (FOEN; contract no. 07.0142.PJ/H163-1663), the Swiss cantons AG, BE, BL, GE, SG, SH, SO, SZ, TG, VD and ZH, the Swiss State Secretariat for Education and Research (SER)/COST within COST Action 636 (Project C05.0135), the EU project NEPTUNE (contract no. 036845, SUSTDEV-2005-3.II.3.2) within the Energy, Global Change and Ecosystems Programme of the Sixth Framework (FP6-2005-Global-4), the CREST project grant for 'Development of Well-balanced Urban Water Use System Adapted for Climate Change' from the Japan Science and Technology Agency (JST), and Research Fellowships for Young Scientists (#21-04295) from the Japan Society for the Promotion of Science (JSPS).

SUPPORTING INFORMATION AVAILABLE

Details of hospital data acquisition and uncertainty, information on the STPs and rivers for field measurements, methods of sampling and analysis, QA/QC, and all results of predictions, measurements and model validation. This information is available free of charge via the Internet at http://pubs.acs.org.

FIGURES AND TABLES

[Figure 1 graphic: map of Switzerland in two panels, (a) STP effluent and (b) River, with labeled STPs and river measurement stations (e.g., Füllinsdorf (Ergolz II), Aare-Brugg, Murg-Frauenfeld, Jonenbach-Zwillikon, Aabach-Mönchaltorf, Jona-Rüti, Rhein-Maienfeld, Leuggern, Zurzach, Rheinau, Münsterlingen, Wil, St.Gallen-Hofen, Herisau (Bachwis), Davos (Gadenstatt), Venoge-Ecublens Les Bois, Muri, Rancate/Mendrisio, Schinznach-Bad, Bioggio (Lugano), Langnau I.E.); graphic not reproducible as text.]
[Figure 1 legend: STP mass flow change relative to the domestic scenario (0.2–0.5, 0.5–1, 1–1.2, 1.2–1.5, 1.5–3, 3–10) for the MRI beds scenario; predicted concentrations in rivers at STP discharge points (≤ 0.005 to 0.2–0.8 µg/L); change in river concentration relative to the domestic scenario; catchments not connected to STPs; STP catchments and locations of river water field measurements.]

Figure 1. Predicted geographical distribution of gadolinium for (a) mass flow in STP effluents and (b) concentration in rivers. Gadolinium mass flow and concentrations are predicted by the MRI beds scenario (AC). The mass flow in STP effluents is shown as the change relative to the domestic scenario, using different colors for designated STP-catchment areas. For the rivers, the concentration change relative to the domestic scenario is shown using different colors for designated STP-catchment areas, and the predicted concentrations at the STP discharge points are indicated by different colored dots. The field measurement locations are also indicated, for both STP catchments and river waters.

[Figure 2 graphic: panels (a) ICM and (b) gadolinium (HF = 0.5, AC); annotations mark STP Rheinau (†, with a psychiatric hospital) and STP Schinznach-Bad (‡, with a rehabilitation hospital).]

Figure 2. The mass flow of (a) ICM and (b) gadolinium in STP effluents, relative to catchment size, as predicted by the different hospital scenarios (average case). The mass flow is shown as the change relative to the domestic scenario. Data of CT treatments and MRI treatments are shown separately according to the ratio of the estimated treatment number (< 50% and ≥ 50%) to the total treatment number in each catchment (see Table S6 for the data).

[Figure 3 graphic: scatter plots of modeled vs. measured load, (a) in g-iodine/day for ICM (HFac = 0.3) and (b) in g/day for gadolinium (HFac = 0.5), with series for all beds, CT/MRI beds, CT/MRI treatments (< 50% and ≥ 50% estimated) and domestic; annotated MPAF and RSD values, e.g., all beds: 1.29, 1660%; MRI beds: 0.79, 152%; MRI treatments: 1.00, 99%; domestic: 0.56, 323%. Annotations mark STP Rheinau (†, psychiatric hospital), STP Schinznach-Bad (‡, rehabilitation hospital) and STP Langnau (§, CT and MRI installed recently).]

Figure 3. Comparison between the measured mass flow and the modeled flow of (a) ICM and (b) gadolinium in STP effluents in different scenarios (average case). Data of CT treatments and MRI treatments are shown separately according to the ratio of the estimated treatment number (< 50% and ≥ 50%) to the total treatment number in each catchment (Table S4). The MPAF and its RSD in each scenario are also shown in the figures.
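The model-measurement comparison summarized in Figure 3 can be reproduced in a few lines. The sketch below assumes that MPAF denotes the mean of the modeled/measured load ratios and RSD its relative standard deviation; the paper's exact definitions may differ, and the data are illustrative, not the study's measurements.

```python
# Hedged sketch: summarizing model-measurement agreement as in Figure 3.
# Assumption: MPAF = mean of modeled/measured ratios; RSD = its relative
# standard deviation. Both the metric reading and the numbers are illustrative.
import statistics

def agreement_stats(modeled, measured):
    ratios = [m / obs for m, obs in zip(modeled, measured)]
    mpaf = statistics.mean(ratios)
    rsd_percent = 100.0 * statistics.stdev(ratios) / mpaf
    within_factor_2 = sum(1 for r in ratios if 0.5 <= r <= 2.0) / len(ratios)
    return mpaf, rsd_percent, within_factor_2

# Illustrative loads only (g/day), not values from this study.
modeled  = [1.2, 0.4, 10.5, 3.1, 0.9]
measured = [1.0, 0.5,  8.0, 4.0, 1.1]
mpaf, rsd, frac2 = agreement_stats(modeled, measured)
print(f"MPAF = {mpaf:.2f}, RSD = {rsd:.0f}%, within factor of 2: {frac2:.0%}")
```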
Table 1. Scenarios.

| Scenario | HF | Distribution of pharmaceuticals |
|---|---|---|
| Hospital scenarios: | | |
| all beds | HFac / HFhc | the consumption allocated to hospitals is distributed over all hospital beds |
| general beds | HFac / HFhc | the consumption allocated to hospitals is distributed over the beds in the general hospitals (hospitals excluding psychiatric hospitals and rehabilitation hospitals) |
| bed-specific scenarios: CT beds, MRI beds, oncology beds | HFac / HFhc | the consumption of contrast media/cytostatics allocated to hospitals is distributed only over the beds of hospitals with the relevant departments (CT, MRI and oncology department, respectively) |
| treatment-specific scenarios: CT treatments, MRI treatments, chemotherapy | HFac / HFhc | the consumption of contrast media/cytostatics allocated to hospitals is distributed over the number of respective treatments (CT, MRI and chemotherapy) |
| Domestic scenario: | | |
| domestic | 0 | all the consumption is distributed over the population |

Table 2. Parameters used for modeling ICM, gadolinium, cyclophosphamide and sulfamethoxazole.

| Compound | ICM (as iodine)^a | gadolinium | cyclophosphamide | sulfamethoxazole |
|---|---|---|---|---|
| pharmaceutical group | contrast media | contrast media | cytostatic | antibiotic |
| total consumption in Switzerland (kg/year)^b | 16,064^c | 157^d | 28 | 2427 |
| hospital-dispensed fraction (HFdis)^e | 0.58 | 1 | 0.68 | 0.17 |
| effective hospital fraction, average case (HFac)^f | 0.3 | 0.5 | 0.05 | 0.03 |
| effective hospital fraction, high case (HFhc)^g | 0.5 | 0.8 | 0.34 | 0.17 |
| excretion rate (combined for urine and feces) | 0.97 | 1 | 0.2 | 0.45 |
| elimination in STP | 0.40 | 0 | 0 | 0.65 |
| scenario: all beds | X | X | X | X |
| scenario: general beds | X | | | |
| scenario: CT beds | X | | | |
| scenario: CT treatments | X | | | |
| scenario: MRI beds | | X | | |
| scenario: MRI treatments | | X | | |
| scenario: oncology beds | | | X | |
| scenario: chemotherapy | | | X | |
| scenario: domestic | X | X | X | X |
| literature for excretion/elimination | 1 / h | 38 / 38 | 48 / 48 | 29 / 29, 49 |

a Total iodine content of 7 ICM (diatrizoate, iobitridol, iohexol, iomeprol, iopamidol, iopromide and ioxitalamic acid).
b According to 2009 consumption data from IMS Health (Danbury, CT, USA).
c Iodine content of 7 ICM was calculated using confidential 2009 consumption data (information courtesy of Bayer HealthCare Pharmaceuticals (Berlin, Germany), Bracco (Milano, Italy), and Guerbet (Villepinte, France)).
d Gadolinium consumption in Switzerland was not known, and thus was extrapolated from the consumption in the hospital in Baden (3.1 kg, total 6728 MRI treatments) to all of Switzerland (340,376 MRI treatments).
e Ratio of pharmaceuticals dispensed inside hospitals to total consumption, calculated by sales data from IMS Health (Danbury, CT, USA).
f Outpatient-adjusted ratio of pharmaceuticals dispensed inside hospitals to total consumption (average case).
g Outpatient-adjusted ratio of pharmaceuticals dispensed inside hospitals to total consumption (high case).
h References 16, 20, 40–43, 50. Excretion and elimination of iobitridol were assumed to be similar to the other ICM, as no relevant literature was available.
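Footnotes f and g describe HF as an outpatient-adjusted version of HFdis. A minimal sketch of that adjustment follows, under the assumption that only the inpatient share of hospital-dispensed doses is excreted on site; both the helper name and the inpatient share are illustrative, not values stated in this paper.

```python
# Hedged sketch of the outpatient adjustment behind HFac/HFhc (Table 2, notes f-g).
# Assumption: outpatients who receive pharmaceuticals in hospital excrete them at
# home, so only the inpatient share of HFdis ends up in hospital wastewater.

def effective_hospital_fraction(hf_dis, inpatient_share):
    """HF = HFdis reduced by the appropriate portion of outpatients (Section 2.1)."""
    return hf_dis * inpatient_share

# Example: HFdis = 0.58 (ICM) with an assumed ~50% inpatient share gives ~0.3,
# the same order as the average-case value HFac in Table 2.
print(effective_hospital_fraction(0.58, 0.5))  # -> 0.29
```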
REFERENCES

1. Weissbrodt, D.; Kovalova, L.; Ort, C.; Pazhepurackel, V.; Moser, R.; Hollender, J.; Siegrist, H.; McArdell, C. S., Mass flows of X-ray contrast media and cytostatics in hospital wastewater. Environ. Sci. Technol. 2009, 43, (13), 4810–4817.
2. Verlicchi, P.; Galletti, A.; Petrovic, M.; Barceló, D., Hospital effluents as a source of emerging pollutants: An overview of micropollutants and sustainable treatment options. J. Hydrol. 2010, 389, (3–4), 416–428.
3. Santos, L. H. M. L. M.; Gros, M.; Rodriguez-Mozaz, S.; Delerue-Matos, C.; Pena, A.; Barceló, D.; Montenegro, M. C. B. S. M., Contribution of hospital effluents to the load of pharmaceuticals in urban wastewaters: Identification of ecologically relevant pharmaceuticals. Sci. Total Environ. 2013, 461–462, (0), 302–316.
4. Carraro, E.; Bonetta, S.; Bertino, C.; Lorenzi, E.; Bonetta, S.; Gilli, G., Hospital effluents management: Chemical, physical, microbiological risks and legislation in different countries. J. Environ. Manage. 2016, 168, 185–199.
5. Ort, C.; Lawrence, M. G.; Reungoat, J.; Eaglesham, G.; Carter, S.; Keller, J., Determining the fraction of pharmaceutical residues in wastewater originating from a hospital. Water Res. 2010, 44, (2), 605–615.
6. Escher, B. I.; Baumgartner, R.; Koller, M.; Treyer, K.; Lienert, J.; McArdell, C. S., Environmental toxicology and risk assessment of pharmaceuticals from hospital wastewater. Water Res. 2011, 45, (1), 75–92.
7. Le Corre, K. S.; Ort, C.; Kateley, D.; Allen, B.; Escher, B. I.; Keller, J., Consumption-based approach for assessing the contribution of hospitals towards the load of pharmaceutical residues in municipal wastewater. Environ. Int. 2012, 45, (0), 99–111.
8. Al Aukidy, M.; Verlicchi, P.; Voulvoulis, N., A framework for the assessment of the environmental risk posed by pharmaceuticals originating from hospital effluents. Sci. Total Environ. 2014, 493, (0), 54–64.
9. Daouk, S.; Chèvre, N.; Vernaz, N.; Widmer, C.; Daali, Y.; Fleury-Souverain, S., Dynamics of active pharmaceutical ingredients loads in a Swiss university hospital wastewaters and prediction of the related environmental risk for the aquatic ecosystems. Sci. Total Environ. 2016, 547, 244–253.
10. Hospital beds (per 1,000 people). World Bank Open Data; http://data.worldbank.org/indicator/SH.MED.BEDS.ZS?page=1. Accessed: 2015-03-09. (Archived by WebCite® at http://www.webcitation.org/6WtAd0sR8)
11. Thomas, K. V.; Dye, C.; Schlabach, M.; Langford, K. H., Source to sink tracking of selected human pharmaceuticals from two Oslo city hospitals and a wastewater treatment works. J. Environ. Monitor. 2007, 9, (12), 1410–1418.
12. Verlicchi, P.; Al Aukidy, M.; Galletti, A.; Petrovic, M.; Barceló, D., Hospital effluent: Investigation of the concentrations and distribution of pharmaceuticals and environmental risk assessment. Sci. Total Environ. 2012, 430, (0), 109–118.
13. Heberer, T.; Feldmann, D., Contribution of effluents from hospitals and private households to the total loads of diclofenac and carbamazepine in municipal sewage effluents—modeling versus measurements. J. Hazard. Mater. 2005, 122, (3), 211–218.
14. Emmanuel, E.; Perrodin, Y.; Keck, G.; Blanchard, J. M.; Vermande, P., Ecotoxicological risk assessment of hospital wastewater: a proposed framework for raw effluents discharging into urban sewer network. J. Hazard. Mater. 2005, 117, (1), 1–11.
15. Orias, F.; Perrodin, Y., Characterisation of the ecotoxicity of hospital effluents: A review. Sci. Total Environ. 2013, 454–455, (0), 250–276.
16. Ternes, T. A.; Hirsch, R., Occurrence and behavior of X-ray contrast media in sewage facilities and the aquatic environment. Environ. Sci. Technol. 2000, 34, (13), 2741–2748.
17. Kümmerer, K.; Haiß, A.; Schuster, A.; Hein, A.; Ebert, I., Antineoplastic compounds in the environment—substances of special concern. Environ. Sci. Pollut. R. 2014, 1–14.
18. Watkinson, A. J.; Murby, E. J.; Kolpin, D. W.; Costanzo, S. D., The occurrence of antibiotics in an urban watershed: From wastewater to drinking water. Sci. Total Environ. 2009, 407, (8), 2711–2723.
19. Michael, I.; Rizzo, L.; McArdell, C. S.; Manaia, C. M.; Merlin, C.; Schwartz, T.; Dagot, C.; Fatta-Kassinos, D., Urban wastewater treatment plants as hotspots for the release of antibiotics in the environment: A review. Water Res. 2013, 47, (3), 957–995.
20. Joss, A.; Keller, E.; Alder, A. C.; Göbel, A.; McArdell, C. S.; Ternes, T.; Siegrist, H., Removal of pharmaceuticals and fragrances in biological wastewater treatment. Water Res. 2005, 39, (14), 3139–3152.
21. Göbel, A.; Thomsen, A.; McArdell, C. S.; Joss, A.; Giger, W., Occurrence and sorption behavior of sulfonamides, macrolides, and trimethoprim in activated sludge treatment. Environ. Sci. Technol. 2005, 39, 3981–3989.
22. Pauwels, B.; Verstraete, W., The treatment of hospital wastewater: an appraisal. J. Water Health 2006, 4, 405–416.
23. Lienert, J.; Koller, M.; Konrad, J.; McArdell, C. S.; Schuwirth, N., Multiple-criteria decision analysis reveals high stakeholder preference to remove pharmaceuticals from hospital wastewater. Environ. Sci. Technol. 2011, 45, (9), 3848–3857.
24. Hollender, J.; Zimmermann, S. G.; Koepke, S.; Krauss, M.; McArdell, C. S.; Ort, C.; Singer, H.; von Gunten, U.; Siegrist, H., Elimination of organic micropollutants in a municipal wastewater treatment plant upgraded with a full-scale post-ozonation followed by sand filtration. Environ. Sci. Technol. 2009, 43, (20), 7862–7869.
25. Reungoat, J.; Escher, B. I.; Macova, M.; Argaud, F. X.; Gernjak, W.; Keller, J., Ozonation and biological activated carbon filtration of wastewater treatment plant effluents. Water Res. 2012, 46, (3), 863–872.
26. Margot, J.; Kienle, C.; Magnet, A.; Weil, M.; Rossi, L.; de Alencastro, L. F.; Abegglen, C.; Thonney, D.; Chèvre, N.; Schärer, M.; Barry, D. A., Treatment of micropollutants in municipal wastewater: Ozone or powdered activated carbon? Sci. Total Environ. 2013, 461–462, (0), 480–498.
27. Mikroverunreinigungen: Massnahmen bei der Abwasserreinigung (Micropollutants: measures in wastewater treatment). Federal Office for the Environment (FOEN), Switzerland Website (in German); http://www.bafu.admin.ch/gewaesserschutz/03716/index.html?lang=en. Accessed: 2015-03-09. (Archived by WebCite® at http://www.webcitation.org/6WtI9NAAf)
28. Eggen, R. I. L.; Hollender, J.; Joss, A.; Schärer, M.; Stamm, C., Reducing the discharge of micropollutants in the aquatic environment: The benefits of upgrading wastewater treatment plants. Environ. Sci. Technol. 2014, 48, (14), 7683–7689.
29. Ort, C.; Hollender, J.; Schaerer, M.; Siegrist, H., Model-based evaluation of reduction strategies for micropollutants from wastewater treatment plants in complex river networks. Environ. Sci. Technol. 2009, 43, (9), 3214–3220.
30. Verlicchi, P.; Al Aukidy, M.; Jelic, A.; Petrović, M.; Barceló, D., Comparison of measured and predicted concentrations of selected pharmaceuticals in wastewater and surface water: A case study of a catchment area in the Po Valley (Italy). Sci. Total Environ. 2014, 470–471, (0), 844–854.
31. Kehrein, N.; Berlekamp, J.; Klasmeier, J., Modeling the fate of down-the-drain chemicals in whole watersheds: New version of the GREAT-ER software. Environ. Modell. Softw. 2015, 64, 1–8.
32. Price, O. R.; Williams, R. J.; Zhang, Z.; van Egmond, R., Modelling concentrations of decamethylcyclopentasiloxane in two UK rivers using LF2000-WQX. Environ. Pollut. 2010, 158, (2), 356–360.
33. Coppens, L. J. C.; van Gils, J. A. G.; ter Laak, T. L.; Raterman, B. W.; van Wezel, A. P., Towards spatially smart abatement of human pharmaceuticals in surface waters: Defining impact of sewage treatment plants on susceptible functions. Water Res. 2015, 81, 356–365.
34. Knodel, J.; Geißen, S. U.; Broll, J.; Dünnbier, U., Simulation and source identification of X-ray contrast media in the water cycle of Berlin. J. Environ. Manage. 2011, 92, (11), 2913–2923.
35. Van Boeckel, T. P.; Gandra, S.; Ashok, A.; Caudron, Q.; Grenfell, B. T.; Levin, S. A.; Laxminarayan, R., Global antibiotic consumption 2000 to 2010: an analysis of national pharmaceutical sales data. Lancet Infect. Dis. 2014, 14, (8), 742–750.
36. Ort, C.; Lawrence, M. G.; Reungoat, J.; Mueller, J. F., Sampling for PPCPs in wastewater systems: Comparison of different sampling modes and optimization strategies. Environ. Sci. Technol. 2010, 44, (16), 6289–6296.
37. Lawrence, M. G.; Ort, C.; Keller, J., Detection of anthropogenic gadolinium in treated wastewater in South East Queensland, Australia. Water Res. 2009, 43, (14), 3534–3540.
38. Verplanck, P. L.; Furlong, E. T.; Gray, J. L.; Phillips, P. J.; Wolf, R. E.; Esposito, K., Evaluating the behavior of gadolinium and other rare earth elements through large metropolitan sewage treatment plants. Environ. Sci. Technol. 2010, 44, (10), 3876–3882.
39. Janna, H.; Scrimshaw, M. D.; Williams, R. J.; Churchley, J.; Sumpter, J. P., From dishwasher to tap? Xenobiotic substances benzotriazole and tolyltriazole in the environment. Environ. Sci. Technol. 2011, 45, (9), 3858–3864.
40. Kovalova, L.; Siegrist, H.; Singer, H.; Wittmer, A.; McArdell, C. S., Hospital wastewater treatment by membrane bioreactor: Performance and efficiency for organic micropollutant elimination. Environ. Sci. Technol. 2012, 46, (3), 1536–1545.
41. Batt, A. L.; Kim, S.; Aga, D. S., Enhanced biodegradation of iopromide and trimethoprim in nitrifying activated sludge. Environ. Sci. Technol. 2006, 40, (23), 7367–7373.
42. Ternes, T. A.; Bonerz, M.; Herrmann, N.; Teiser, B.; Andersen, H. R., Irrigation of treated wastewater in Braunschweig, Germany: An option to remove pharmaceuticals and musk fragrances. Chemosphere 2007, 66, (5), 894–904.
43. Kormos, J. L.; Schulz, M.; Ternes, T. A., Occurrence of iodinated X-ray contrast media and their biotransformation products in the urban water cycle. Environ. Sci. Technol. 2011, 45, (20), 8723–8732.
44. Menz, J.; Schneider, M.; Kümmerer, K., Usage pattern-based exposure screening as a simple tool for the regional priority-setting in environmental risk assessment of veterinary antibiotics: A case study of north-western Germany. Chemosphere 2015, 127, 42–48.
45. Kovalova, L.; Siegrist, H.; Von Gunten, U.; Eugster, J.; Hagenbuch, M.; Wittmer, A.; Moser, R.; McArdell, C. S., Elimination of micropollutants during post-treatment of hospital wastewater with powdered activated carbon, ozone and UV. Environ. Sci. Technol. 2013, 47, (14), 7899–7908.
46. Kuroda, K.; Murakami, M.; Oguma, K.; Muramatsu, Y.; Takada, H.; Takizawa, S., Assessment of groundwater pollution in Tokyo using PPCPs as sewage markers. Environ. Sci. Technol. 2012, 46, (3), 1455–1464.
47. Phillips, P. J.; Chalmers, A. T.; Gray, J. L.; Kolpin, D. W.; Foreman, W. T.; Wall, G. R., Combined sewer overflows: An environmental source of hormones and wastewater micropollutants. Environ. Sci. Technol. 2012, 46, (10), 5336–5343.
48. Buerge, I. J.; Buser, H.-R.; Poiger, T.; Müller, M. D., Occurrence and fate of the cytostatic drugs cyclophosphamide and ifosfamide in wastewater and surface waters. Environ. Sci. Technol. 2006, 40, (23), 7242–7250.
49. Luo, Y.; Guo, W.; Ngo, H. H.; Nghiem, L. D.; Hai, F. I.; Zhang, J.; Liang, S.; Wang, X. C., A review on the occurrence of micropollutants in the aquatic environment and their fate and removal during wastewater treatment. Sci. Total Environ. 2014, 473–474, (0), 619–641.
50. Deblonde, T.; Cossu-Leguille, C.; Hartemann, P., Emerging pollutants in wastewater: A review of the literature. Int. J. Hyg. Envir. Heal. 2011, 214, (6), 442–448.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1021/acs.est.6b00653?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1021/acs.est.6b00653, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "other-oa", "status": "GREEN", "url": "https://www.dora.lib4ri.ch/eawag/islandora/object/eawag%3A10607/datastream/PDF2/Kuroda-2016-Hospital-use_pharmaceuticals_in_Swiss_waters-%28accepted_version%29.pdf" }
2,016
[ "JournalArticle" ]
true
2016-04-19T00:00:00
[]
17,298
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01736f11b413c0462da86411e95a6466d8f2e4d1
[ "Computer Science" ]
0.89909
Decentralized enforcement of k-anonymity for location privacy using secret sharing
01736f11b413c0462da86411e95a6466d8f2e4d1
IEEE Vehicular Networking Conference
[ { "authorId": "34567559", "name": "David Förster" }, { "authorId": "50450434", "name": "Hans Löhr" }, { "authorId": "1728575", "name": "F. Kargl" } ]
{ "alternate_issns": null, "alternate_names": [ "Vehicular Networking Conference", "Veh Netw Conf", "VNC", "IEEE Veh Netw Conf" ], "alternate_urls": null, "id": "87a9041d-3fcb-4c5a-b22c-4a99beca7888", "issn": null, "name": "IEEE Vehicular Networking Conference", "type": "conference", "url": "http://www.ieee-vnc.org/" }
null
## Decentralized Enforcement of k-Anonymity for Location Privacy Using Secret Sharing

David Förster, Robert Bosch GmbH, david.foerster@de.bosch.com
Hans Löhr, Robert Bosch GmbH, hans.loehr@de.bosch.com
Frank Kargl, Ulm University, Germany & University of Twente, NL, frank.kargl@uni-ulm.de

**Abstract—Protection of location privacy by reducing the accuracy of location data, until a desired level of privacy (e.g., measured as k-anonymity) is reached, is a well-known concept that is typically implemented using a privacy proxy. To eliminate the risks associated with a central, trusted party, we propose a generic method to enforce k-anonymity of location data in a decentralized way, using a distributed secret sharing algorithm and the concept of location and time specific keys. We describe our method in the context of a system for privacy-friendly traffic flow analysis, in which participants report origin, destination, start and end time of their trips. In order to protect their privacy, the accuracy of time and location information is reduced until it applies to at least k distinct trips. No trusted, central party is required to determine how much the accuracy of each trip report must be reduced. The participants establish location and time specific keys via vehicle-to-vehicle (V2V) communication at the beginning and end of their trips. They use these keys to encrypt trip reports with several levels of accuracy, and upload them to a central, untrusted database. The keys are published using a secret sharing algorithm that allows their reconstruction once at least k shares of the same key have been uploaded. Consequently, trip reports become available automatically after k vehicles have made "the same trip" (same origin, destination, start and end time) with respect to a certain accuracy level.**

I. INTRODUCTION

Traffic authorities require information about traffic flows for operational control as well as strategic planning of new infrastructure. Only a few years ago it was hardly feasible to measure traffic flows directly. Instead, the origin-destination (OD) matrices representing the traffic flow were often estimated based on traffic counts [1]. The advent of cellular communication allowed for large-scale collection of traffic flow data. Even without drivers' involvement, traffic flows can be derived from the data generated by the regular operation of mobile phone networks [2], [3]. More accurate results can be achieved by explicit collection of floating car data (FCD), containing GPS position and sometimes also speed and other information [4], [5]. Most GPS navigation systems and smartphone navigation apps collect floating car data from their users, in order to incorporate traffic conditions in their routing decisions [6].

Measurement of local traffic densities can be done in a fully anonymous manner, by having vehicles submit FCD records in a predefined time interval. If no identifiers are included in the submitted data and different records from the same vehicle cannot be linked, submission of the data does not affect drivers' privacy, because no information about their trips' origin or destination can be inferred. For large-scale traffic analysis and planning, however, knowledge about traffic flows (as represented by OD matrices) is required. In contrast to FCD records this information is much more privacy sensitive.
It was shown that, even with personal identifiers removed, detailed location traces (or origin/destination pairs) can be used to identify drivers' home location [7] or even their identity [8], [9]. Therefore, additional privacy protection is required when collecting information about trips' origin and destination. A common approach to protecting location privacy is to deliberately reduce the spatial or temporal accuracy of information until a certain privacy level can be guaranteed [10], e.g., expressed as k-anonymity [11]. A user is k-anonymous if he cannot be distinguished from k − 1 other users based on the information he reveals. This is well-suited for the use case of traffic flow analysis: information about routes that are taken by many drivers is most important. Those drivers can reveal origin and destination of their trip with a rather high accuracy and still remain k-anonymous. Routes that are only used by few drivers are less important; therefore it is acceptable that the accuracy of those reports must be reduced more in order to achieve the same level of privacy protection.

k-anonymity can easily be enforced when all records are stored in a central, trusted database. However, a database containing large quantities of highly accurate trip reports would be an attractive target for hackers. Recent security breaches such as the Sony hack [12] and revelations about state-run surveillance activities [13] have given rise to public concerns about privacy. It may be more attractive for drivers to participate in a system where privacy protection does not depend on the protection of a central database (and its operator's honest behavior), but is verifiably enforced by the participants themselves.

An essential building block of the system we propose is vehicle-to-vehicle (V2V) radio communication. Vehicle-to-X (V2X) communication, comprising vehicle-to-vehicle and vehicle-to-infrastructure (V2I) communication, has been developed and standardized during the last decade. Car manufacturers have announced the first V2X-equipped models for model year 2017 [14]. Based on IEEE 802.11p radio communication [15], vehicles can exchange messages in an ad-hoc manner within a range from one hundred to a few hundred meters [16]. The technology is expected to enable a wide variety of safety, comfort, and entertainment functions [17]. Due to the expected contribution to road safety, the U.S. has initiated the process for making V2X-based safety functions a requirement for newly sold cars [18], which is promising with regard to adoption and market penetration.

Our contribution

We describe a generic mechanism for enforcing k-anonymity for location data that does not require a central, trusted party and is therefore robust against malicious backend providers and compromised backend systems. As an example of its application we created a system for privacy-preserving traffic flow analysis, in which participants make available origin, destination, start and end time of their trips. Parties that query the system learn the information with the highest accuracy possible such that it still applies to at least k trips.

The remainder of this paper is structured as follows: We survey related work in Section II and present our system model and our requirements in Sections III and IV. We describe our system and its building blocks in Section V and evaluate its security properties and performance in Section VI before we conclude with Section VII.
II. RELATED WORK

Beresford and Stajano define location privacy as "the ability to prevent other parties from learning one's current or past location" [19]. Several publications highlight the requirement for advanced privacy protection beyond simple anonymization: Hoh et al. examine privacy in traffic monitoring systems and were able to identify drivers' home locations from their GPS traces with a success rate of about 85% [7]. Krumm conducted a similar experiment and was able to infer the identity of 5% of the participants using a public internet search engine to look up people living near the identified home locations [8]. Using data from the U.S. Census Bureau, Golle and Partridge demonstrated that the majority of the U.S. working population can be uniquely identified by the combination of their home and work location [9]. Jeske examines the data submitted by the Google Maps and Waze smartphone navigation apps and finds that both apps submit location data with a high accuracy and use unique identifiers to track users even across several trips [6].

An established metric to measure location privacy is k-anonymity [11], originally defined for privacy protection of records in a central database. A record is k-anonymous in a given dataset if it cannot be distinguished from at least k − 1 other records based on the attributes revealed. Gruteser and Grunwald apply k-anonymity to location privacy, suggesting that a user is k-anonymous if he cannot be distinguished from at least k − 1 other users based on the location data (position and time) he reveals [10]. They propose to use spatial and temporal cloaking of location data for privacy protection, i.e., reducing their accuracy until a predefined level of k-anonymity is met. They employ a central, trusted anonymity server that acts as a proxy and calculates the required reduction of accuracy, based on its knowledge of all users' exact position. Our approach is based on the same concept of privacy protection; however, we do not require a trusted, central party. Duckham and Kulik propose a graph-based approach to obfuscation in order to degrade the quality of location to the level required by a service provider [20]. Their approach does not require a central, trusted server. Instead, each user applies the location obfuscation individually, but protection of their users' identities is not a requirement. Krumm gives a general overview of threats to location privacy and strategies for its protection [21].

There are several approaches to privacy-friendly collection of traffic data. However, their focus is to prevent linking of trip segments, and in particular origin and destination of trips, whereas we propose to make exactly this data available in a privacy-preserving way. Hoh and Gruteser describe a path perturbation algorithm (running on a central, trusted server) that protects location privacy while maintaining a certain data quality by provoking path confusion for an attacker trying to track vehicles [22]. The PADAVAN scheme uses anonymous credentials and mix cascades for privacy-friendly collection of traffic densities [23]. As the scheme is explicitly designed to prevent linking of submitted samples, an end-to-end analysis of trips is not possible. Rass et al. describe the privacy-friendly collection of floating car data [24]. They use sample identifiers (for individual samples submitted to the server) and trip identifiers constructed in such a way that only certain entities can determine which samples belong to the same trip.
These entities, however, can reconstruct the trip with full accuracy. Hoh et al. propose a privacy-friendly traffic monitoring system using virtual trip lines, where vehicles report to a central database whenever they cross a virtual trip line, similar to a virtual inductive loop [25]. k-anonymity can be achieved by reducing the temporal accuracy of trip line crossings. Privacy protection is based on a segregation of responsibilities between several central components. Therefore, no single entity can subvert the privacy guarantees. If multiple entities are compromised (or collaborate), though, position updates can be obtained with full accuracy. In the SOKEN protocol, due to Achenbach et al. [26], mobile users exchange and forward key material in an ad-hoc manner via Bluetooth. Later, two users who wish to communicate can derive a shared secret from their common keys. While the purpose of our system is different, we use a similar mechanism of ad-hoc key exchanges and key forwarding. We also share the authors' assumption that large-scale surveillance of ad-hoc key exchanges via short-range radio is difficult to achieve for an attacker.

III. SYSTEM MODEL AND SCENARIO

We assume a traffic scenario with participating vehicles Vi that are all equipped with V2X communication devices and mobile internet access. They report information about their trips to the trip database. The traffic authority (TA) queries the trip database in order to obtain traffic flow information. We assume that the V2X system is protected by a standard privacy-friendly authentication mechanism [27]. Figure 1 shows an overview of our system model.

A. Attacker model

The attacker's goal is to learn the participants' exact location traces, i.e., who traveled where and when. We consider different types of attackers: The malicious backend provider can access all central databases deployed in our scheme, but is unable to eavesdrop on local V2X communication. We argue that this is a realistic attacker model, as backend providers have full access to the data they store. Ubiquitous surveillance of V2X communication, in contrast, is very hard to achieve, as it would require the attacker to be in transmission range whenever two vehicles exchange messages. The active insider attacker possesses valid credentials for the V2X system and actively participates in our system in order to subvert other users' privacy.
In particular, origin and destination of trips must be reported together in order to enable macroscopic traffic analysis. R.2 Drivers require protection of their privacy, quantified by the concept of k-anonymity. They will be reluctant to participate in data collection, if the information they report can be used to create individual mobility profiles. For maximum protection we put forward the requirement of verifiable privacy, i.e., technical protection that augments organizational controls, but has the added benefit that it can be verified by technical means. V. PRIVACY-FRIENDLY TRAFFIC ANALYSIS We first describe the idea behind our approach. Participants upload encrypted reports about their trips to a trip database. Multiple copies with different accuracy levels are uploaded and encrypted with different keys. The keys are chosen such that all users that made “the same trip” will use the same key (same trip means same origin, destination and time with respect to the selected accuracy level). The keys are split up using a secret sharing scheme and uploaded, too. A key can be reconstructed when at least k shares of it were uploaded, and the corresponding trip reports can be decrypted. Consequently, the accuracy of each trip report that can be obtained from the database will be such, that it applies to at least k trips. If many participants travel from A to B at the same time, their reports will be revealed with a high accuracy. If somebody travels to a far-off location, on the other hand, only the trip report with very low accuracy will be revealed. The scheme consists of three phases: 1) Participants establish location and time-specific keys, both at the start and destination of their trips. 2) Participants upload copies of their trip reports with different accuracy levels, encrypted with different keys, to the trip database. They apply a secret sharing scheme and upload their shares of the keys, too. 3) Traffic authorities query the trip database. They reconstruct the keys for which enough shares are available and decrypt the corresponding reports. If several reports exist for one trip, all but the one with the highest accuracy are discarded. Several parameters need to be set system-wide and are valid for all participants: **k – Required size of the anonymity set for trip reports to be** revealed to the traffic authority. **Accuracy levels made up by levels of spatial and temporal** accuracy, e.g., ((100 m, 1 hour), (1 km, 6 hours), (10 km, 24 hours)). In order to avoid inference attacks by partially overlapping levels of accuracy, we require that for any two accuracy levels (sa 1, ta 1) and (sa 2, ta 2): sa 1 < sa 2 _⇒_ _ta_ 1 ≤ _ta_ 2. **p – Modulus used for modular arithmetic in the decentralized** secret sharing scheme (cf. Section V-C). **Treconcile, Tupload – Timeouts for key reconciliation and** key uploads to the key database (cf. Section V-F). In the following we cover the building blocks used in our scheme, before we give a complete description of our scheme and its different phases in Section V-F. _A. Location obfuscation_ A trip is described by origin, destination, start time, and _arrival time. k-anonymity can be achieved by reducing the_ accuracy of each of these properties, until there are k 1 other _−_ indistinguishable trips. Each accurate location (or accurate _time) can be mapped to a corresponding coarse location (or_ _coarse time) according to a certain accuracy. 
[Figure 2 graphic: three stages (i)–(iii) showing vehicles V1–V4 generating keys k1 and k2, forwarding them, and synchronizing encrypted key records ENC_k2(k1) and ENC_k1(k2) through the key database.]

Figure 2: Vehicles generate keys when they meet at the beginning or before the end of their trips (i) and forward them while in the respective region and time window (ii). Afterwards, keys are synchronized in encrypted form through the key database (iii) and an authoritative key can be picked from the common set of keys.

B. Key establishment

We want all participants that were physically present at a certain location at a certain time to share a common location and time specific key. With regard to a certain accuracy level, the key should be known to anybody who was present in the region that maps to a specific coarse location during the time window that maps to a specific coarse time. Several keys (for different accuracy levels) can be established independently and at the same time. Each key record contains the attributes fingerprint, accuracy level, coarse time, coarse location and the cryptographic key itself. Let ID(key) denote a key's fingerprint and ENC_key(p) the symmetric encryption of some plaintext p using the key.

We describe how vehicles establish a key for a specific location and time at a specific accuracy level. The procedure must be run independently for each accuracy level defined in the system parameters (cf. Section V):

1) Map the current accurate time and accurate location to the coarse location (region) and coarse time (time window), according to the selected accuracy level.
2) While the vehicle is within the region and time window, indicate readiness to exchange keys, e.g., using a flag in the V2V messages sent out. When another participating vehicle that is ready to exchange keys comes into communication range, forward and receive all preliminary keys (for the current time and location window) that have been obtained before. If no keys were forwarded in either direction, establish a new preliminary key (e.g., using Diffie-Hellman). Stop key exchanges and forwarding once the vehicle leaves the region or the time window.
3) Derive an authoritative key from all preliminary keys as follows.
   a) Let S be the set of preliminary keys for the current region and time window. For each pair sk_i, sk_j ∈ (S × S), sk_i ≠ sk_j, create the encrypted key record (ID(sk_i), ID(sk_j), ENC_sk_j(sk_i)) and upload it to the central key server. (The server removes any duplicate uploads.)
   b) Download and decrypt all records of encrypted keys that are not stored locally yet, but for which the encryption key is available. Create and upload records for newly downloaded keys which are not stored on the server yet. Wait some time and repeat until Treconcile elapses.
   c) Sort all keys lexicographically. The first key is the authoritative key for the current time and location window.

The procedure is based on the assumption that all participants present at the given region within the given time window are connected through paths of common and forwarded keys. If this is the case, they will all eventually obtain the same authoritative key, provided step 3 (b) is repeated often enough. If not, the accuracy with which the trips will be revealed later on will degrade, but privacy protection remains intact. For practicality, the reconciliation phase is limited by a timeout Treconcile. Figure 2 shows a high-level sketch of the key establishment procedure.

Key exchanges are only conducted among vehicles that possess valid credentials for the V2X system. All V2V communication links are encrypted to protect against local eavesdroppers, e.g., using Diffie-Hellman keys. To prevent identification based on network addresses, all connections to the key database are made through an anonymization network, such as Tor.²

² https://www.torproject.org/
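As a sketch of step 3, the following Python fragment builds the pairwise encrypted records of step 3 (a) and picks the authoritative key of step 3 (c). SHA-256 stands in for the fingerprint ID(·), and the XOR "cipher" is a toy stand-in for a real symmetric scheme such as AES; all names and key values are illustrative, not from the paper.

```python
# Hedged sketch of step 3: pairwise encrypted key records and authoritative key
# selection. The toy cipher is for illustration only; a deployment would use AES.
import hashlib
from itertools import permutations

def key_id(key: bytes) -> str:
    return hashlib.sha256(key).hexdigest()[:16]   # fingerprint ID(key)

def toy_encrypt(k: bytes, plaintext: bytes) -> bytes:
    pad = hashlib.sha256(b"enc" + k).digest()
    return bytes(p ^ q for p, q in zip(plaintext, pad))  # XOR stand-in, not AES

preliminary_keys = [b"key-from-V1-0001", b"key-from-V3-0002"]  # set S

# Step 3 (a): one record per ordered pair (sk_i encrypted under sk_j).
records = [(key_id(ki), key_id(kj), toy_encrypt(kj, ki))
           for ki, kj in permutations(preliminary_keys, 2)]

# Step 3 (c): after reconciliation, the lexicographically first key wins.
authoritative = sorted(preliminary_keys)[0]
print(len(records), key_id(authoritative))
```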
C. Decentralized, non-interactive secret sharing

Assume a common secret s, shared by an unknown number of parties. We want each party to derive some information from that secret, called a share, such that s is revealed only when at least k parties reveal their share. We base our construction on Shamir's secret sharing [28]. In the original scheme the secret s is only known to a central trusted party, which generates the shares and distributes them among the participants. The shares are created by constructing a polynomial f(x) of degree k with random coefficients, such that f(0) = s. Each of the n parties (n > k) obtains one point of the polynomial (x_i, f(x_i)), while the polynomial itself is kept secret. Consequently, any k of the n parties can collaborate, reconstruct the full polynomial and reveal the secret. All computations are done using modular arithmetic.

Our setting is slightly different because each party knows the secret s, but must construct its share independently from the others. Using a cryptographic hash function h, each party can (by itself) obtain the coefficients a_i := h(i||s) for i ∈ [1, k] and construct

f(x) = s + Σ_{i=1}^{k} a_i x^i mod p.

Note that all parties will obtain the same polynomial. Then each party chooses x_r at random from a sufficiently large range to avoid collisions and calculates its share (x_r, f(x_r)). Like in the original construction, s will be revealed when at least k of the participants make their share available. We use share(s, k) to denote the creation of a share. For a practical implementation, the secret s and the output of h must be converted to numbers, and the prime p used for modular computation must be larger than any possible value of s.
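A self-contained Python sketch of this non-interactive variant follows. One deliberate choice on our part: we give the polynomial degree k − 1, the standard Shamir setting in which exactly k shares suffice for reconstruction; the prime, hash encoding and function names are illustrative assumptions.

```python
# Hedged sketch of decentralized, non-interactive secret sharing (Section V-C).
# Every party derives the same polynomial from the secret itself, then publishes
# one random point of it. Degree k-1 so that exactly k shares reconstruct s.
import hashlib, random

P = 2**127 - 1  # prime modulus p; must exceed any possible secret value

def h(i: int, secret: int) -> int:
    digest = hashlib.sha256(f"{i}||{secret}".encode()).digest()
    return int.from_bytes(digest, "big") % P

def share(secret: int, k: int):
    coeffs = [h(i, secret) for i in range(1, k)]        # a_1 ... a_(k-1)
    x_r = random.randrange(1, P)                        # collisions are negligible
    y = (secret + sum(a * pow(x_r, i, P) for i, a in enumerate(coeffs, 1))) % P
    return (x_r, y)

def reconstruct(shares):
    # Lagrange interpolation at x = 0 recovers f(0) = s.
    s = 0
    for j, (xj, yj) in enumerate(shares):
        num, den = 1, 1
        for m, (xm, _) in enumerate(shares):
            if m != j:
                num = num * (-xm) % P
                den = den * (xj - xm) % P
        s = (s + yj * num * pow(den, P - 2, P)) % P     # Fermat inverse mod prime
    return s

k, secret = 3, 123456789
shares = [share(secret, k) for _ in range(k)]  # k independent parties
assert reconstruct(shares) == secret
```

Because the coefficients are derived deterministically from the secret, every party obtains the same polynomial without any interaction, which is exactly what makes the scheme non-interactive.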
D. Build and upload trip reports

Assume a participant has completed a trip and the location and time specific keys origin_key_i and destination_key_i have been established for each accuracy level AL_i, at the trip's origin and destination respectively. For each accuracy level he creates and uploads a trip report as follows:

1) Create trip_key_i := h(origin_key_i || destination_key_i) using a cryptographic hash function h.
2) Create the trip report rep containing the coarse locations of origin and destination and the coarse start time and arrival time with respect to the current accuracy level.
3) Create the encrypted trip record (ID(trip_key_i), share(trip_key_i, k), ENC_trip_key_i(rep)) and upload it to the trip database.

All connections to the trip database are made through an anonymization network.

E. Reconstruction of trip reports

Query the trip database for all trip records that can be decrypted. Specifically, download records for which at least k − 1 other records are available which have been encrypted with the same key. Reconstruct the trip keys from the shares included in the records and decrypt the trip reports.

F. Phases of operations

The building blocks described in Sections V-A to V-E are executed sequentially in different, dependent phases (cf. Figure 3).

[Figure 3 graphic: timeline of the four phases — 1. travel and exchange keys; 2. reconcile keys using the key database; 3. upload encrypted trip reports and key shares; 4. traffic authority queries trip database — separated by the maximum temporal accuracy, Treconcile and Tupload.]

Figure 3: High-level overview of processing steps. The length of each phase (but the last one) is specified, and each phase must be completed by all participants before the next step can begin.

1) Participants exchange location and time specific keys at the beginning and end of their trips. For each accuracy level, keys are exchanged independently while the vehicle is in the origin or destination region and start or end time window (with respect to that accuracy level). The beginning of a trip can be identified trivially; however, some trigger is required that signals the upcoming end of the trip, e.g., from the navigation system. Alternatively, keys can be exchanged continuously during the trip, so that the keys for the end of the trip can be determined retrospectively when the vehicle is turned off. Continuous key exchanges can also improve the connectivity for other participants, if their keys are "carried and forwarded" within their validity regions and time windows.

2) Key reconciliation (which involves uploading encrypted keys to the key database) must only be started when the time window for which the keys are valid has ended. If keys were uploaded too early, it would be possible to infer the end time of the respective trip more accurately than intended. For practical reasons, and in order to execute the phase for all accuracy levels simultaneously, we propose to begin the phase only after the time window for the lowest temporal accuracy (i.e., the longest one) has ended. The length of the reconciliation phase Treconcile must be sufficiently long to allow all involved vehicles (which may not be online all the time) to perform multiple iterations of the reconciliation protocol.

3) Trip uploads must only be performed after the previous phase was completed, because the authoritative keys may not be available before. They should be completed within the time Tupload.

4) The trip database may be queried at any time. However, the trip reports will only be available after the previous phases were completed.
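The following sketch ties together Sections V-C and V-D by building one encrypted trip record. The record layout, the XOR stream cipher and all key values are our illustrative choices; a real implementation would use a proper symmetric cipher.

```python
# Hedged sketch of building one encrypted trip record (Section V-D).
import hashlib, json, random

P = 2**127 - 1  # same prime modulus as in the secret-sharing sketch

def h_int(data: bytes) -> int:
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

def share(secret: int, k: int):
    # One point of the common degree-(k-1) polynomial (cf. Section V-C sketch).
    coeffs = [h_int(f"{i}||{secret}".encode()) for i in range(1, k)]
    x_r = random.randrange(1, P)
    return x_r, (secret + sum(a * pow(x_r, i, P) for i, a in enumerate(coeffs, 1))) % P

def toy_encrypt(key: int, plaintext: bytes) -> bytes:
    # Toy XOR stream derived from the key; a deployment would use e.g. AES.
    stream = hashlib.sha256(key.to_bytes(16, "big")).digest() * (len(plaintext) // 32 + 1)
    return bytes(p ^ s for p, s in zip(plaintext, stream))

# Trip report with coarse origin/destination and times (one accuracy level).
rep = json.dumps({"origin": [3250, 1750], "destination": [8000, 4500],
                  "start": "17:00", "arrival": "18:00"}).encode()

trip_key = h_int(b"origin-key-AL1" + b"||" + b"destination-key-AL1")
record = (hashlib.sha256(trip_key.to_bytes(16, "big")).hexdigest()[:16],  # ID(trip_key)
          share(trip_key, k=3),                                           # key share
          toy_encrypt(trip_key, rep))                                     # ENC(rep)
print(record[0], len(record[2]))
```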
VI. EVALUATION

We evaluate our system with regard to the attacker model described in Section III-A and examine its performance in a specific scenario using simulations.

A. Security analysis

Our scheme is secure against the malicious backend provider, i.e., he cannot obtain more information than any honest party that queries the trip database. Even with full access to the key database and the trip database, he would have to break the secret sharing scheme (which is information-theoretically and even perfectly secure) or the encryption itself. He could delete or alter records in the key database, which would sabotage the establishment of common keys, or manipulate the trip database. These attempts would, in fact, affect the availability of trip reports, but not have any negative effect on participants' privacy. We emphasize that even though the key database and the trip database are central components in our system, they need not be trustworthy, as all the sensitive data they hold is encrypted.

By regular participation in our scheme, the active insider attacker can collect location and time specific keys and reveal them without applying the secret sharing scheme. This would, in fact, subvert the privacy of all participants that used the keys to encrypt their trip reports. However, the attack is quite limited because only those trips can be revealed where the attacker was physically present both at the origin and destination. Yet, an active insider attacker with large-scale physical presence, e.g., a malicious operator of a dense network of V2X roadside units that might be deployed in the future, poses a serious threat to our system. The passive insider attacker and the outsider attacker are equally weak and cannot interfere with our system in any meaningful way. Even though they can eavesdrop on V2X communication in general, the exchange and forwarding of keys is protected from them by the encrypted communication channel.

k-anonymity towards the traffic authority is guaranteed when only one accuracy level is used. When using different accuracy levels (which makes the scheme more practical), special cases can be constructed in which k-anonymity can be violated by combining information from different accuracy levels: Consider a set of k trips with high accuracy that is contained in a set of k + 1 trips with lower accuracy. As both sets can be decrypted, some information about the trip that is only in the coarse set can be inferred. If this information is considered sensitive in a specific scenario, the scheme must be deployed with only one accuracy level.

B. Simulation results

We evaluate our system's performance in a specific simulation scenario and focus on two aspects:

1) Is our V2X-based approach for key establishment suitable for deriving common authoritative keys among vehicles within the same region and time window?
2) How does the reduction of accuracy affect the information available to the traffic authority?

To answer the first question, we compare the results from our scheme to the theoretical optimum that could be reached using a central privacy proxy that has access to all accurate trip data and decides for each trip at which accuracy levels it can be revealed while maintaining k-anonymity. For the second question, we examine how many trips are revealed at different accuracy levels (and for different values of k), both for our scheme and for the theoretical optimum.

Traffic was generated using the SUMO traffic simulator and the LuST traffic scenario [29]. The scenario provides 24 hours of synthetically generated, yet realistic, traffic in the city of Luxembourg and covers an area of approximately 156 km². We removed the public buses from the scenario, considering only passenger vehicles, and ended up with a total of 218,938 trips. In order to cope with the large number of vehicles and the long simulation time, we generated the traffic traces offline. Then we ran our Python-based implementation on the traces, assuming radio connectivity between two vehicles whenever they are within a fixed communication range (100 or 200 m). We evaluated two variants of our scheme: In the start/end variant, vehicles exchange keys only when they are within the origin or destination regions (and within the start or end time windows). In the whole trip variant, keys are exchanged during the whole trip. Outside the origin or destination regions and start or end time windows, keys are only forwarded in order to increase the connectivity among other participants and are discarded after leaving the respective regions.

[Figure 4 graphic: bar chart of the percentage of revealed trips per accuracy level (15 min/500 m, 30 min/1000 m, 60 min/1500 m) for the parameters start/end 100 m, start/end 200 m, whole trip 100 m, whole trip 200 m, and the theoretical optimum; values range from roughly 5–10% at the highest accuracy to 63–83% at the lowest.]

Figure 4: Percentage of revealed trips for k = 3, comparing our simulation results with the theoretical optimum at different accuracy levels and for different parameters.

Figure 4 displays the number of revealed trips for k = 3 at different accuracy levels for different variants in comparison to the theoretical optimum. At the lowest accuracy level a significant number of trips are revealed (69% for the whole trip variant and a communication range of 200 m), which is a significant share of the theoretical optimum of 83%. At higher accuracy levels fewer trips are revealed. However, the results for our scheme are still relatively close to the theoretical optimum. This suggests that the key exchange mechanism performs well, but that the specific traffic pattern does not allow for trips to be revealed at those accuracy levels without violating the k-anonymity boundary. The communication range has a significant impact on the results.
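The theoretical optimum above can be computed directly from the trip set: bucket each trip by its coarse origin/destination/time tuple and reveal a bucket only when it holds at least k trips. A minimal sketch with illustrative data (not the LuST traces):

```python
# Hedged sketch of the k-anonymity 'theoretical optimum': a trip is revealed at
# an accuracy level iff at least k trips share its coarse OD/time bucket.
from collections import Counter

def revealed_share(coarse_trips, k):
    """Fraction of trips whose bucket (coarse origin, destination, time) has >= k members."""
    sizes = Counter(coarse_trips)
    return sum(n for n in sizes.values() if n >= k) / len(coarse_trips)

# Illustrative coarse trips: (origin cell, destination cell, start window).
trips = [("A", "B", 17), ("A", "B", 17), ("A", "B", 17),
         ("A", "C", 17), ("A", "C", 17), ("D", "E", 9)]
print(revealed_share(trips, k=3))  # -> 0.5: only the three A->B trips are revealed
```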
Figure 5 displays the cumulative distribution function of anonymity sets for an accuracy level of 60 min and 1500 m, i.e., what fraction of trips would be revealed for a given choice of k. The share of revealed trips drops rather quickly for higher values of k. For k = 10, in the _whole trip_ variant and a communication range of 200 m, 34% of trips are revealed, compared to the theoretical optimum of 40%, while for k = 20, only 15% are revealed for the same parameters, compared to the theoretical optimum of 16%. Again, we can see that our scheme performs reasonably well, but that the k-anonymity constraint severely limits the revelation of information. Overall, the simulations show that the V2X-based key exchange mechanism works well and that our scheme can provide information about a significant share of traffic at an accuracy level that we expect is still useful in practice.

VII. CONCLUSION

We propose a generic mechanism for enforcing k-anonymity for location privacy based on secret sharing. Using a decentralized version of Shamir's secret sharing [28], participants can make location information available in encrypted form together with a share of the key. It will only be revealed once k − 1 other parties have made available the same location information. This is particularly useful when location information is made available with different levels of accuracy, resulting in the information being revealed with the highest possible accuracy such that it still applies to at least k distinct users. Note that when using different accuracy levels, special cases can be constructed in which k-anonymity can be violated by combining information from different levels. To establish the practicality of our proposal, we describe a traffic monitoring system where participants make available the origin, destination, and start and end times of their trips to a traffic authority. For privacy protection, the accuracy of time and location information is reduced such that each report applies to at least k trips. We evaluate our scheme in a simulation scenario with 24 hours of synthetic, but highly realistic, traffic in the city of Luxembourg and compare our results with the theoretical optimum, which could be achieved by having a central, trusted party calculate the minimum reduction of accuracy required to satisfy the k-anonymity requirement. Our results show that a significant share of trips is revealed for a rather coarse accuracy level, while fewer trips are revealed for higher accuracy levels. We conclude that our scheme performs rather well and that the smaller share of trips revealed for higher accuracy levels (and larger values of k) is due to the anonymity requirement itself. It is not surprising that it is much harder to enforce k-anonymity for origin/destination pairs than for single locations. In fact, most related approaches for privacy-friendly collection of traffic data aim for unlinkability of origin/destination pairs for that very reason. With our work we show that privacy-friendly collection of origin/destination pairs is in fact possible, although a significant loss of accuracy (or a smaller share of revealed trips) must be accepted.
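For readers unfamiliar with Shamir's scheme [28], the following short Python sketch (our illustration; it shows plain (k, n) threshold sharing over a prime field, not the paper's decentralized variant or its V2X integration) demonstrates the core threshold-reveal property the mechanism builds on:

```python
# Illustrative (k, n) Shamir secret sharing over GF(P), after [28].
# `random` is used for brevity; real code would use the `secrets` module.
import random

P = 2**127 - 1  # a Mersenne prime, large enough for a 126-bit key

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    f = lambda x: sum(c * pow(x, j, P) for j, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for xi, yi in shares:
        num = den = 1
        for xj, _ in shares:
            if xj != xi:
                num = num * -xj % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

key = random.randrange(P)  # e.g. the symmetric key encrypting one trip report
shares = make_shares(key, k=3, n=10)
assert reconstruct(random.sample(shares, 3)) == key  # any 3 shares suffice
```

In the setting described above, the secret would be the key under which a coarse trip report is encrypted, and each participant publishes exactly one share, so the report only becomes decryptable once k participants have reported the same coarse origin/destination regions and time windows.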
We expect that the described traffic monitoring system could be deployed and deliver useful information at different scales: in an urban context (as done in our simulation scenario), across several cities, e.g., in order to analyze requirements and efficiency of highway systems, or even across several countries, e.g., to find out where people from certain regions spend their vacation. As the mechanism for decentralized enforcement of k-anonymity is quite generic, we envision its application for location privacy in other scenarios and beyond.

REFERENCES

[1] T. Abrahamsson, "Estimation of origin-destination matrices using traffic counts – a literature survey", International Institute for Applied Systems Analysis, Tech. Rep. IR-98-021, May 1998.
[2] J. White and I. Wells, "Extracting origin destination information from mobile phone data", in Eleventh International Conference on Road Transport Information and Control, IET, Mar. 2002, pp. 30–34.
[3] N. Caceres, J. Wideberg, and F. Benitez, "Deriving origin destination data from a mobile phone network", Intelligent Transport Systems, IET, vol. 1, no. 1, pp. 15–26, Mar. 2007.
[4] S. Turksma, "The various uses of floating car data", in Road Transport Information and Control, 2000. Tenth International Conference on (Conf. Publ. No. 472), Apr. 2000, pp. 51–55.
[5] C. Nanthawichit, T. Nakatsuji, and H. Suzuki, "Application of probe-vehicle data for real-time traffic-state estimation and short-term travel-time prediction on a freeway", Transportation Research Record: Journal of the Transportation Research Board, no. 1855, pp. 49–59, 2003.
[6] T. Jeske, "Floating car data from smartphones: What Google and Waze know about you and how hackers can control traffic", Proceedings of the BlackHat Europe, 2013.
[7] B. Hoh, M. Gruteser, H. Xiong, and A. Alrabady, "Enhancing security and privacy in traffic-monitoring systems", Pervasive Computing, IEEE, vol. 5, no. 4, pp. 38–46, 2006.
[8] J. Krumm, "Inference attacks on location tracks", in Pervasive Computing, Springer, 2007, pp. 127–143.
[9] P. Golle and K. Partridge, "On the anonymity of home/work location pairs", in Pervasive Computing, Springer, 2009, pp. 390–397.
[10] M. Gruteser and D. Grunwald, "Anonymous usage of location-based services through spatial and temporal cloaking", in Proceedings of the 1st International Conference on Mobile Systems, Applications and Services, ACM, 2003, pp. 31–42.
[11] L. Sweeney, "k-anonymity: A model for protecting privacy", International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, vol. 10, no. 05, pp. 557–570, 2002.
[12] BBC, "The Interview: A guide to the cyber attack on Hollywood", Dec. 2014. [Online]. Available: http://www.bbc.com/news/entertainment-arts-30512032 (visited on 09/03/2015).
[13] The Guardian, "Surveillance", Sep. 2015. [Online]. Available: http://www.theguardian.com/world/surveillance (visited on 09/03/2015).
[14] General Motors, Cadillac to introduce advanced 'intelligent and connected' vehicle technologies on select 2017 models, Sep. 2014. [Online]. Available: http://media.gm.com/media/us/en/gm/news.detail.html/content/Pages/news/us/en/2014/Sep/0907-its-overview.html.
[15] "IEEE standard for information technology – telecommunications and information exchange between systems – local and metropolitan area networks – specific requirements part 11: Wireless LAN medium access control (MAC) and physical layer (PHY) specifications", IEEE Std. 802.11-2012, 2012.
[16] H. Hartenstein and K. P. Laberteaux, "A tutorial survey on vehicular ad hoc networks", IEEE Communications Magazine, vol. 46, no. 6, pp. 164–171, 2008.
[17] H. Hartenstein and K. Laberteaux, VANET: Vehicular Applications and Inter-Networking Technologies. John Wiley & Sons, 2009, vol. 1.
[18] U.S. Department of Transportation – National Highway Traffic Safety Administration, "Federal motor vehicle safety standards: Vehicle-to-vehicle (V2V) communications; advance notice of proposed rulemaking (ANPRM); Docket No. NHTSA–2014–0022", Federal Register, vol. 79, no. 161, Aug. 2014.
[19] A. R. Beresford and F. Stajano, "Location privacy in pervasive computing", IEEE Pervasive Computing, vol. 2, no. 1, pp. 46–55, 2003.
[20] M. Duckham and L. Kulik, "A formal model of obfuscation and negotiation for location privacy", in Pervasive Computing, Springer, 2005, pp. 152–170.
[21] J. Krumm, "A survey of computational location privacy", Personal and Ubiquitous Computing, vol. 13, no. 6, pp. 391–399, 2009.
[22] B. Hoh and M. Gruteser, "Protecting location privacy through path confusion", in Security and Privacy for Emerging Areas in Communications Networks, 2005. SecureComm 2005. First International Conference on, IEEE, 2005, pp. 194–205.
[23] A. Tomandl, D. Herrmann, and H. Federrath, "PADAVAN: Privacy-aware data accumulation for vehicular ad-hoc networks", in Wireless and Mobile Computing, Networking and Communications (WiMob), 2014 IEEE 10th International Conference on, IEEE, 2014, pp. 487–493.
[24] S. Rass, S. Fuchs, M. Schaffer, and K. Kyamakya, "How to protect privacy in floating car data systems", in Proceedings of the Fifth ACM International Workshop on VehiculAr Inter-NETworking, ACM, 2008, pp. 17–22.
[25] B. Hoh, M. Gruteser, R. Herring, J. Ban, D. Work, J.-C. Herrera, A. M. Bayen, M. Annavaram, and Q. Jacobson, "Virtual trip lines for distributed privacy-preserving traffic monitoring", in Proceedings of the 6th International Conference on Mobile Systems, Applications, and Services, ACM, 2008, pp. 15–28.
[26] D. Achenbach, D. Förster, C. Henrich, D. Kraschewski, and J. Müller-Quade, "Social key exchange network – from ad-hoc key exchanges to a dense key network", in Tagungsband der INFORMATIK 2011, Lecture Notes in Informatics, vol. P192, Oct. 2011.
[27] P. Papadimitratos, L. Buttyan, T. Holczer, E. Schoch, J. Freudiger, M. Raya, Z. Ma, F. Kargl, A. Kung, and J.-P. Hubaux, "Secure vehicular communication systems: Design and architecture", Communications Magazine, IEEE, vol. 46, no. 11, pp. 100–109, 2008.
[28] A. Shamir, "How to share a secret", Communications of the ACM, vol. 22, no. 11, pp. 612–613, 1979.
[29] L. Codeca, R. Frank, and T. Engel, "LuST: A 24-hour scenario of Luxembourg City for SUMO traffic simulations", in SUMO User Conference 2015 – Intermodal Simulation for Intermodal Transport, 2015.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/VNC.2015.7385589?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/VNC.2015.7385589, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://ris.utwente.nl/ws/files/203065032/Forster2015decentralized.pdf" }
2015
[ "JournalArticle", "Conference" ]
true
2015-12-01T00:00:00
[]
11,020
en
[ { "category": "Medicine", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Medicine", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0173708d8e3aeff0b8787c7a2fb7c7154c1d1283
[ "Medicine" ]
0.957568
You Have Been Hacked!
0173708d8e3aeff0b8787c7a2fb7c7154c1d1283
Annals of Family Medicine
[ { "authorId": "14089367", "name": "E. Bujold" } ]
{ "alternate_issns": null, "alternate_names": [ "Ann Fam Med" ], "alternate_urls": [ "http://www.annfammed.org/" ], "id": "584186f5-efe3-43a9-aaf8-8081794afa7f", "issn": "1544-1709", "name": "Annals of Family Medicine", "type": "journal", "url": "https://www.annfammed.org/" }
On October 31, 2021, I learned the electronic health record in my independent, solo practice had been attacked by a Russian syndicate who was holding our data and our practice management system for “ransom.” An encryption key could be given to our cloud provider once $5,100,000 was delivered in bitcoin to the hacking entity. After 3 long months of negotiations, with us going back to a completely paper-based system in the interim, our cloud provider paid the Russian syndicate and access was restored. There were many lessons to be learned from our experience. We were fortunate, and through the help of many of our business associates we were able to survive and live to see another day.
#### **REFLECTION**

## You Have Been Hacked!

### *Ed Bujold, MD, FAAFP*

Family Medical Care Center, Granite Falls, North Carolina

*Conflict of interest: author reports none.*

**CORRESPONDING AUTHOR** Ed Bujold, Family Medical Care Center, 4132 Hickory Blvd, Granite Falls, North Carolina 28630, [bujold@embarqmail.com](mailto:bujold@embarqmail.com)

#### **ABSTRACT**

On October 31, 2021, I learned the electronic health record in my independent, solo practice had been attacked by a Russian syndicate that was holding our data and our practice management system for "ransom." An encryption key could be given to our cloud provider once $5,100,000 was delivered in bitcoin to the hacking entity. After 3 long months of negotiations, with us going back to a completely paper-based system in the interim, our cloud provider paid the Russian syndicate and access was restored. There were many lessons to be learned from our experience. We were fortunate, and through the help of many of our business associates we were able to survive and live to see another day.

*Ann Fam Med* [2023;21:85-87. https://doi.org/10.1370/afm.2906](https://doi.org/10.1370/afm.2906)

I have been in an independent, solo practice for 37 years. I have a staff of 9 employees which includes a nurse practitioner. My usual routine on the weekend is to log in to my electronic health record (EHR), review patient data, and make follow-up appointments for the next week. On Sunday October 31, 2021, I was unable to log in to my EHR. Our cloud-based data company has a 24/7 call center to address any issues we may have on weekends. I called their number and a recording stated, "Our phone system is currently out of order." I thought this was a bit odd, but didn't think much about it and figured this issue would be sorted out on Monday. My staff functions very much as a team and I assumed they would sort all this out and we would be up and running by the time I finished my hospital rounds on Monday morning.

I arrived at the clinic at 8:30 am. I was informed by my staff that all our computers were working but none of us had access to our EHR or our practice management (PM) software. Fifteen minutes later, I received an e-mail from our cloud provider informing us they had been attacked by ransomware. Ransomware is a type of malicious software designed to block access to a computer system until a sum of money is paid. I immediately called our cloud-based service to get more details. Our data (this included our EHR and our practice management system) was being held "ransom" and an encryption key would be given to our cloud provider once $5,100,000 was delivered in bitcoin to the hacking entity. Our cloud provider reached out to the FBI, who quickly determined the hacking entity was a Russian establishment preying on 2 to 3 companies daily. The FBI recommended hiring a cybersecurity team well versed in ransomware attacks to identify any additional threats. The cybersecurity team recommended containment procedures focused on limiting further damage, eradicating infected systems, wiping them clean, and restoring them. This restoration requires systems to be rebuilt from backups; then recovery processes can be started to get everyone back online. In addition, the cloud-based service hired a professional negotiator. The cloud-based service had $2,100,000 in yearly gross revenue, much less than the $5,100,000 the Russians were asking to release the encryption key.
By noon of November 1, 2021, we knew our cloud-based service had an action plan in place, but the CEO had no idea when we would get our system back online. Naively, we thought we would have our PM and EHR up and running in a few days. After 2 weeks, my staff and I realized this was much more serious. Negotiations were going nowhere. In addition, we had not transmitted any insurance claims in 2 weeks because of having no access to our PM system.

I have 4 interfaces to our EHR and PM; they include an accountable care organization (ACO), a major laboratory, a large hospital entity, and a data extraction company which pulls data from every patient record each night and prints a paper copy of each patient's *International Classification of Disease* (*ICD-10*) diagnostic codes, recent laboratory work, gaps in care, and the most recent updated list of patient medications. This document is known as a point-of-care (POC) report. The data extraction company had a server onsite which was not connected to the cloud-based provider and therefore was inaccessible to the Russians and their ransomware. As a result, we had accurate information on patients dating back to 1 day (October 30, 2021) before the ransomware attack. This proved invaluable, as we had a mini version of each patient's chart in paper format.

Our first item of business was to reestablish cash flow. We electronically submitted our insurance claims through Payer Path, a claims management system, which was embedded within our PM system. Payer Path has an online site, and with a bit of instruction from our EHR provider, we were able to start transmitting insurance claims through this encrypted online site. Next, we went back to a completely paper-based system just like in the "good old days." Our ACO and POC documents were printed daily and became our patient charts. Our laboratory and hospital reports were tracked and printed daily through an online access point. Prescriptions were written by hand on printed prescription pads. Finally, we needed access to cash until we could establish cash flow again. I have a longstanding relationship with my certified public accountant (CPA) and bank. After explaining our predicament, we were able to establish a much larger business line of credit based on their recommendations.

After 3 long months of negotiations, my cloud provider paid the Russian syndicate $500,000 and they produced the encryption key providing access to our EHR and PM systems. Ironically, during this time frame I spent more time with patients, less time documenting medical records, and on average, left the office 1 hour earlier.

On a parallel track, our cloud company couldn't tell us if any patient's personal information had been exposed. This flies right in the face of HIPAA compliance issues. [1] I contacted our malpractice insurance company; fortunately they have a division of cybersecurity. Our cloud-based company believed there was no exposure to any individual patient's personal information, but they couldn't prove it. Our legal counsel suggested we had to assume there was a violation even though we could not prove or disprove it occurred. If an investigation was opened with the HIPAA compliance division of the federal government (which it eventually was), we wanted to make sure we were complying with the letter of the law. The Justice Department required we set up a patient call center.
The legal team set up a guide for patients of steps to be taken if their private information was accessed by this Russian syndicate, sent letters to patients, notified our local news outlet, etc. We were lucky to have such experts at our side during this difficult time. As of March 2022, we have a fully functioning EHR and PM and 3 of our 4 interfaces are functioning. Our POC interface was online by October 2022.

Five years ago, we moved to a cloud service because it was a much cheaper alternative to maintaining servers on site. As our EHR software became more sophisticated, the hardware to support it became more expensive with each upgrade. Our EHR provider recommended a cloud-based provider specializing in small practices. This provider housed data for over 100 small medical offices on the East coast and was very reasonably priced. In the aftermath of the attack, we learned the company was underinsured for a ransomware attack and their backup protocols were not up to industry standards. Once our data was restored, we moved to a much larger cloud provider who backs up our data nightly and stores it in 3 different cities. It cost $8,000 to move to a more secure cloud service (also recommended by our EHR provider) and we recovered almost all our lost revenue by March 2022.

#### **LESSONS TO BE LEARNED FROM OUR EXPERIENCE**

First and foremost, have a trusted computer consultant to manage your hardware and have them do a cybersecurity check yearly, which should also include a very frank discussion with your staff about potential cybersecurity risks and vulnerabilities in your practice. This consultant is as important as a good CPA and banker for a small practice. Your entire team should limit the number of devices connected to the Internet. Your trusted computer consultant can show you how to do this. Each connected device provides another access point through which ransomware can gain access. The Cybersecurity and Infrastructure Security Agency (CISA) recently published "Cybersecurity Incident and Vulnerability Response Playbooks." [2] In it, they describe 6 phases of incident response: preparation, detection and analysis, containment, eradication and recovery, post-incident activity, and coordination. You may not want to take on this responsibility, but your trusted computer analyst should. [3] The FDA recently posted an alert detailing how vulnerable medical devices are to ransomware attacks. [4]

Attacking agents are known as Black Hats. Black Hats are defined as human agents seeking control over another person's devices for nefarious purposes. They come in 3 varieties: the thief stealing data—be it intellectual property, passwords, or credit cards; the vandal—wreaking havoc and destruction via something called a denial-of-service attack stopping a service from functioning; and the soldier/assassin, who goes the vandal one step better and seeks to cause death/damage via attacks on critical infrastructure (think remotely opening flood gates on a large dam). It is important to realize the same prop (a computer virus) can be used singly or in combination with other props to satisfy any of the above-mentioned motivations. According to the Trust Wave Global Security Report of 2019, a single patient record or piece of personal data is worth $250 on the "black market." These ransomware attacks are multibillion dollar businesses and very profitable for these criminal elements.
They aren't going away anytime soon. These attacks are starting to affect patient care all over the world. We were able to move back to the paper world quickly and fortunately had a scaled-down paper version of our EHR data available. We were lucky. The hundred other practices involved in this attack were not so fortunate. Many small medical practices never recover from a ransomware attack and file for bankruptcy. Someday, an adverse cyber attack may occur that affects someone's life and potentially results in a death. Based on our experience, I strongly recommend practices prepare for such attacks ahead of time.

**[Read or post commentaries in response to this article.](https://doi.org/10.1370/afm.2906)**

**Key words:** ransomware attack; independent practice; cloud-based data storage; HIPAA violations

Submitted April 29, 2022; submitted, revised, September 19, 2022; accepted September 27, 2022.

**REFERENCES**

1. Langer SG. Cyber-security issues in healthcare information technology. *J Digital Imaging*. 2017;30(1):117-125.
2. Cybersecurity and Infrastructure Security Agency. *Operational Procedures for Planning and Conducting Cybersecurity Incident and Vulnerability Response Activities in FCEB Information Systems*. Published Nov 2021. https://www.cisa.gov/sites/default/files/publications/Federal_Government_Cybersecurity_Incident_and_Vulnerability_Response_Playbooks_508C.pdf
3. Perakslis E. Responding to the escalating cybersecurity threat to health care. *NEJM*. 2022;387:767-770. https://doi.org/10.1056/NEJMp2205144
4. US Food & Drug Administration. The Role of the FDA to Advance Cybersecurity. https://asprtracie.hhs.gov/technical-resources/resource/4331/the-fdas-role-in-medical-device-cybersecurity
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1370/afm.2906?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1370/afm.2906, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GOLD", "url": "https://www.annfammed.org/content/annalsfm/21/1/85.full.pdf" }
2023
[ "JournalArticle" ]
true
2023-01-01T00:00:00
[]
3,247
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Medicine", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Biology", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/017590d7bdb6252080cdb121e0e2a4627c68aed8
[ "Computer Science", "Medicine" ]
0.84187
PVP-SVM: Sequence-Based Prediction of Phage Virion Proteins Using a Support Vector Machine
017590d7bdb6252080cdb121e0e2a4627c68aed8
Frontiers in Microbiology
[ { "authorId": "49557292", "name": "Balachandran Manavalan" }, { "authorId": "7143195", "name": "T. Shin" }, { "authorId": "145543795", "name": "Gwang Lee" } ]
{ "alternate_issns": null, "alternate_names": [ "Front Microbiol" ], "alternate_urls": [ "https://www.frontiersin.org/journals/microbiology", "http://www.frontiersin.org/microbiology", "http://journal.frontiersin.org/journal/microbiology" ], "id": "25a655e4-17da-4bfc-a246-5cd20202068d", "issn": "1664-302X", "name": "Frontiers in Microbiology", "type": "journal", "url": "http://www.frontiersin.org/cellular_and_infection_microbiology/about" }
Accurately identifying bacteriophage virion proteins from uncharacterized sequences is important to understand interactions between the phage and its host bacteria in order to develop new antibacterial drugs. However, identification of such proteins using experimental techniques is expensive and often time consuming; hence, development of an efficient computational algorithm for the prediction of phage virion proteins (PVPs) prior to in vitro experimentation is needed. Here, we describe a support vector machine (SVM)-based PVP predictor, called PVP-SVM, which was trained with 136 optimal features. A feature selection protocol was employed to identify the optimal features from a large set that included amino acid composition, dipeptide composition, atomic composition, physicochemical properties, and chain-transition-distribution. PVP-SVM achieved an accuracy of 0.870 during leave-one-out cross-validation, which was 6% higher than control SVM predictors trained with all features, indicating the efficiency of the feature selection method. Furthermore, PVP-SVM displayed superior performance compared to the currently available method, PVPred, and two other machine-learning methods developed in this study when objectively evaluated with an independent dataset. For the convenience of the scientific community, a user-friendly and publicly accessible web server has been established at www.thegleelab.org/PVP-SVM/PVP-SVM.html.
Edited by: Qi Zhao, Liaoning University, China

Reviewed by: Tsung-Ting Kuo is not listed for this record; reviewers were Yi Xiong, Shanghai Jiao Tong University, China, and Wei Chen, North China University of Science and Technology, China

*Correspondence: Gwang Lee [glee@ajou.ac.kr](mailto:glee@ajou.ac.kr)

Specialty section: This article was submitted to Systems Microbiology, a section of the journal Frontiers in Microbiology

Received: 07 December 2017; Accepted: 28 February 2018; Published: 16 March 2018

Citation: Manavalan B, Shin TH and Lee G (2018) PVP-SVM: Sequence-Based Prediction of Phage Virion Proteins Using a Support Vector Machine. Front. Microbiol. 9:476. [doi: 10.3389/fmicb.2018.00476](https://doi.org/10.3389/fmicb.2018.00476)

# PVP-SVM: Sequence-Based Prediction of Phage Virion Proteins Using a Support Vector Machine

[Balachandran Manavalan](http://loop.frontiersin.org/people/36828/overview) [1], [Tae H. Shin](http://loop.frontiersin.org/people/536242/overview) [1,2] and [Gwang Lee](http://loop.frontiersin.org/people/505106/overview) [1,2]*

1 Department of Physiology, Ajou University School of Medicine, Suwon, South Korea; 2 Institute of Molecular Science and Technology, Ajou University, Suwon, South Korea

### Accurately identifying bacteriophage virion proteins from uncharacterized sequences is important to understand interactions between the phage and its host bacteria in order to develop new antibacterial drugs. However, identification of such proteins using experimental techniques is expensive and often time consuming; hence, development of an efficient computational algorithm for the prediction of phage virion proteins (PVPs) prior to in vitro experimentation is needed. Here, we describe a support vector machine (SVM)-based PVP predictor, called PVP-SVM, which was trained with 136 optimal features. A feature selection protocol was employed to identify the optimal features from a large set that included amino acid composition, dipeptide composition, atomic composition, physicochemical properties, and chain-transition-distribution. PVP-SVM achieved an accuracy of 0.870 during leave-one-out cross-validation, which was 6% higher than control SVM predictors trained with all features, indicating the efficiency of the feature selection method. Furthermore, PVP-SVM displayed superior performance compared to the currently available method, PVPred, and two other machine-learning methods developed in this study when objectively evaluated with an independent dataset. For the convenience of the scientific community, a user-friendly and publicly accessible web server has been established at www.thegleelab.org/PVP-SVM/PVP-SVM.html.

Keywords: bacteriophage virion proteins, feature selection, hybrid features, machine learning, support vector machine

## INTRODUCTION

Bacteriophages, also known as phages, are viruses that infect and replicate in bacteria, and are found wherever bacteria survive. The phage virion is composed of proteins that encapsulate either DNA or RNA; the virion binds to the bacterial surface and injects its genetic material into the specific host bacterium. In the lytic cycle, phage genes are expressed to produce proteins that poke holes in the cell membrane, which makes the cell expand and burst. The phages released by cell bursting subsequently spread and infect other host cells.
Identification of phage virion proteins (PVPs) is important for understanding the relationship between a phage and its host bacteria, and also for the development of novel antibacterial drugs or antibiotics (Lekunberri et al., 2017). For instance, phage-encoded proteins including endolysins, exopolysaccharidases, and holins have been proven to be promising antibacterial products (Drulis-Kawa et al., 2012). Experimental methods, including mass spectrometry, sodium dodecyl sulfate polyacrylamide gel electrophoresis, and protein arrays (Lavigne et al., 2009; Yuan and Gao, 2016; Jara-Acevedo et al., 2018), have been used to identify PVPs. However, these methods are expensive and often time-consuming. Therefore, computational methods to predict PVPs prior to in vitro experimentation are needed.

It is difficult to predict the function of PVPs from sequence information because of relatively limited experimental data. However, machine-learning (ML) approaches have been successfully applied to several similar biological problems. Therefore, it may be possible to predict the functions of phage proteins using ML. To this end, Seguritan et al. developed the first method to classify viral structure proteins using an artificial neural network, with amino acid composition (AAC) and protein isoelectric points as input features (Seguritan et al., 2012). Later, Feng et al. developed a naïve Bayesian method, with an algorithm utilizing AAC and dipeptide composition (DPC) as input features (Feng et al., 2013b). Subsequently, Ding et al. developed a support vector machine (SVM)-based prediction model called PVPred; in this method, analysis of variance was applied to select important features from g-gap DPC (Ding et al., 2014). Recently, Zhang et al. developed a random forest (RF)-based ensemble method to distinguish PVPs from non-PVPs (Zhang et al., 2015). PVPred is the only existing publicly available method that was developed using the same dataset as our method. Although the existing methods have specific advantages in PVP prediction, it remains necessary to improve the accuracy and transferability of the prediction model.

It is worth mentioning that several sequence-based features, including AAC, atomic composition (ATC), chain-transition-distribution (CTD), DPC, pseudo amino acid composition, and amino acid pairs, and several feature selection techniques, including correlation-based feature selection, ANOVA feature selection, minimum-redundancy maximum-relevance, and RF-algorithm-based feature selection, have been successfully applied in other protein bioinformatics studies (Wang et al., 2012, 2016; Lin et al., 2015; Qiu et al., 2016; Tang et al., 2016; Gupta et al., 2017; Manavalan and Lee, 2017; Manavalan et al., 2017; Song et al., 2017). All these studies motivated us in developing the new model presented in this study. Hence, we developed a SVM-based PVP predictor called PVP-SVM, in which the optimal features were selected using a feature selection protocol that has been successfully applied to various biological problems (Manavalan and Lee, 2017). We selected the optimal features from a large set, including AAC, DPC, CTD, ATC, and PCP. In addition to SVM (i.e., PVP-SVM), we also developed RF and extremely randomized tree (ERT)-based methods. The performance of PVP-SVM was consistent in both the training and independent datasets, and was superior to the current method and to the RF and ERT methods developed in this study.
## MATERIALS AND METHODS

Training Dataset

In this study, we utilized the dataset constructed by Ding et al., which was specifically used for studying PVPs (Ding et al., 2014). We decided to use this dataset for the following reasons: (i) it is a reliable dataset, constructed based on several filtering schemes; (ii) it is a non-redundant dataset and none of the sequences possesses pairwise sequence identity (>40%) with any other sequence, hence this dataset stringently excludes homologous sequences; and (iii) most importantly, it facilitates fair comparison between the current method and existing methods, which were developed using the same training dataset. Thus, the training dataset can be formulated as:

$$S = S^{+} \cup S^{-} \qquad (1)$$

where the positive subset $S^{+}$ contained 99 PVPs, the negative subset $S^{-}$ contained 208 non-PVPs, and the symbol $\cup$ denotes union in set theory. Thus, S contained 307 samples.

## Independent Dataset

We obtained PVP and non-PVP sequences from the Universal Protein Resource (UniProt) as previously described (Feng et al., 2013b; Ding et al., 2014; Zhang et al., 2015). To avoid overestimation in the prediction model, we excluded sequences that shared greater than 40% sequence identity with sequences in the training dataset. The final dataset contained 30 PVPs and 64 non-PVPs. We note that our independent dataset included the Ding et al. independent dataset. The above two datasets can be downloaded from our web server.

## Input Features

(i) AAC: The fractions of the 20 naturally occurring amino acid residues in a given protein sequence were calculated as follows:

$$AAC(i) = \frac{\text{Frequency of amino acid } i}{\text{Length of the protein sequence}} \qquad (2)$$

where i can be any of the 20 natural amino acids.

(ii) ATC: The fraction of five atom types (C, H, N, O, and S) in a given protein sequence was calculated as previously reported (Kumar et al., 2015; Manavalan et al., 2017), with a fixed length of five features.

(iii) CTD: The global composition feature encoding method CTD comprises properties such as hydrophobicity, polarity, normalized van der Waals volume, polarizability, predicted secondary structure, and solvent accessibility. It was first proposed for protein folding class prediction (Dubchak et al., 1995). Composition (C) represents the composition percentage of each group in the peptide sequence. Transition (T) represents the transition probability between two neighboring amino acids belonging to two different groups. Distribution (D) represents the position of amino acids (the first 25, 50, 75, or 100%) in each group in the protein sequence. For each qualitative property of a given sequence, C, T, and D produce 3-, 3-, and 15-dimensional features, respectively. As a result, $7 \times (3 + 3 + 15) = 147$ features can be generated for the seven qualitative properties.

(iv) DPC: The fractions of the 400 possible dipeptides present in a given protein sequence were calculated as follows:

$$DPC(j) = \frac{\text{Total number of dipeptide } j}{\text{Total number of all possible dipeptides}} \qquad (3)$$

where j can be any of the 400 possible dipeptides.

(v) PCP: We employed 11 representative PCP attributes of amino acids for feature extraction (polar, hydrophobic, charged, aliphatic, aromatic, positively charged, negatively charged, small, tiny, large, and peptide mass). Note that all of the above features were in the range of [0, 1] as input for training and testing.
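As a concrete reading of Eqs. (2) and (3), the short Python sketch below (ours, not the authors' released code; the denominator of Eq. (3) is taken as the number of overlapping dipeptides in the sequence, the usual convention) computes the AAC and DPC parts of the feature vector:

```python
# Sketch of the AAC (Eq. 2) and DPC (Eq. 3) encodings: 20 amino acid
# fractions plus 400 dipeptide fractions, all naturally in [0, 1].
from itertools import product

AA = "ACDEFGHIKLMNPQRSTVWY"
DIPEPTIDES = ["".join(p) for p in product(AA, repeat=2)]

def aac(seq):
    """Eq. (2): fraction of each of the 20 amino acids."""
    return [seq.count(a) / len(seq) for a in AA]

def dpc(seq):
    """Eq. (3): fraction of each of the 400 possible dipeptides among
    the len(seq) - 1 overlapping dipeptides in the sequence."""
    total = len(seq) - 1
    counts = {}
    for i in range(total):
        counts[seq[i:i + 2]] = counts.get(seq[i:i + 2], 0) + 1
    return [counts.get(d, 0) / total for d in DIPEPTIDES]

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"   # toy sequence
features = aac(seq) + dpc(seq)               # 20 + 400 of the 583 dimensions
print(len(features))                          # 420
```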
## The Support Vector Machine

We employed a SVM as our classification algorithm, a well-known supervised ML method introduced in Boser et al. (1992) that has been applied to several biological problems (Wang et al., 2009; Eickholt et al., 2011; Deng et al., 2013; Cao et al., 2014; Manavalan et al., 2015). The objective of a SVM is to find the hyperplane with the largest margin to decrease the misclassification rate. Given a set of data points (input features) and an objective function associated with the data points (PVPs: 1 and non-PVPs: 0), a SVM learns a function of the form

$$y = \mathrm{sign}\left( \sum_{i=1}^{n} \alpha_i y_i K(x_i, x) + b \right) \qquad (4)$$

where y is the predicted class associated with an input feature vector x; $\alpha_i$ is the adjustable weight assigned to the training data point $x_i$ during training by minimizing a quadratic objective function; b is the bias term; and K is the kernel function. Therefore, y can be viewed as a weighted linear combination of similarities between the training data points $x_i$ and the target data point x. Data points with positive weights in the training dataset affect the final solution and are called support vectors. The SVM is especially effective when the input data are not linearly separable: K is required to map the input data into a higher-dimensional space to identify the optimal separating hyperplane (Scholkopf and Smola, 2001). Therefore, we experimented with several common kernels, including linear, Gaussian radial basis, and polynomial functions. The Gaussian radial basis kernel, $K(x, y) = e^{-\gamma \|x - y\|^2}$ with $\gamma = 1/\sigma^2$, performed the best. Here, two critical parameters ($\gamma$ and C) required optimization: $\gamma$ controls how peaked the Gaussians centered on the support vectors are, while C controls the trade-off between the training error and the margin size (Smola and Vapnik, 1997; Vapnik and Vapnik, 1998; Scholkopf and Smola, 2001). These two parameters were optimized using a grid search from $2^{-15}$ to $2^{10}$ for C and from $2^{-10}$ to $2^{10}$ for $\gamma$, in log2 steps. In this study, we used the SVM implemented in the scikit-learn package (Pedregosa et al., 2011).
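The parameter search just described can be sketched with scikit-learn, which the study uses; the grids below follow the text, while X and y are placeholder arrays rather than the actual 136-feature training data:

```python
# A minimal sketch of the RBF-SVM grid search described above,
# assuming synthetic placeholder data in place of the real features.
import numpy as np
from sklearn.model_selection import GridSearchCV, LeaveOneOut
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.random((60, 136))          # 136 optimal features per protein (toy data)
y = rng.integers(0, 2, 60)         # 1 = PVP, 0 = non-PVP (toy labels)

param_grid = {
    "C": 2.0 ** np.arange(-15, 11),        # 2^-15 ... 2^10, log2 steps
    "gamma": 2.0 ** np.arange(-10, 11),    # 2^-10 ... 2^10, log2 steps
}
search = GridSearchCV(SVC(kernel="rbf"), param_grid,
                      cv=LeaveOneOut(), scoring="accuracy", n_jobs=-1)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```

Using GridSearchCV with LeaveOneOut mirrors the LOOCV-based selection of (C, γ), at the cost of one model fit per sample and grid point.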
To further evaluate the performance of the classifier, we employed a receiver operating characteristic (ROC) curve. The ROC curve was plotted with the false positive rate as the x-axis and true positive rate as the y-axis by varying the thresholds. The area under the curve (AUC) was used for model evaluation, with higher AUC values corresponding to better performance of the classifier. ## RESULTS Framework of the Proposed Predictor **Figure 1 illustrates the overall framework of the PVP-SVM** method. It consisted of four steps: (i) construction of the training and independent datasets; (ii) extraction of various features from the primary sequences, including AAC, ATC, CTD, DPC, and PCP; (iii) generation of 25 different feature sets based on feature importance scores (FIS) computed using the RF algorithm. These different sets were inputted to the SVM to develop their respective prediction models; and (iv) the model producing the best performance in terms of MCC was considered the final model, and the corresponding feature set was considered the optimal feature set. ## Feature Selection Protocol Generally, high dimensional features can contain a higher degree of irrelevant and redundant information that may greatly degrade the performance of ML algorithms. Therefore, it is necessary to apply a feature selection protocol to filter the redundant features and increase prediction efficiency (Wang et al., 2012; Zheng et al., 2012; Manavalan et al., 2014; Manavalan and Lee, 2017; Song et al., 2017). Previously, Manavalan and Lee applied a systematic feature selection protocol and developed a novel quality assessment method called SVMQA (Manavalan and Lee, 2017), which was the best method in CASP12 blind prediction experiments (Elofsson et al., 2017; Kryshtafovych et al., 2017). We applied a similar protocol in our recent studies, including cell-penetrating peptide ----- and DNase I hypersensitivity predictions (Manavalan et al., 2018). Interestingly, this protocol significantly improved the performance of our method. Therefore, we extended this approach to the current problem. The current protocol differs slightly from the published protocol in terms of parameters (ntree and mtry) used in the RF algorithm, which is mainly due to the large number of features used in this study (i.e., 26-fold more features than were used in SVMQA). In our study, each protein sequence was represented as 583 dimensional vectors, which was higher than the number of samples. In the first step, we applied the RF algorithm and estimated the FIS of 583 features (AAC: 20; DPC: 400; ATC: 5; PCP: 11; and CTD: 147) to distinguish PVPs and non-PVPs. A detailed description of how we computed the FIS scores of the input features has been reported previously (Manavalan et al., 2014; Manavalan and Lee, 2017). Briefly, we used all features as inputs in the RF algorithm and performed tenfold cross-validation using the training dataset. For each round of cross-validation, we built 5,000 trees, and the number of variables at each node was chosen randomly from 1 to 100. The average FIS from all the trees are shown in Figure 2A, where most of the features had similar scores and only 5% ∼ (FIS 0.005) contributed significantly to PVP prediction. In the ≥ second step, we applied a FIS cutoff 0.001 and selected 477 ≥ features as optimal feature candidates (Figure 2B). Subsequently, we generated 25 different sets of features from the optimal feature candidates based on an FIS cut-off (0.001 FIS 0.004, ≤ ≤ with a step size of 0.0011). 
Basically, we considered each set of more important features in a step-wise manner. To identify the optimal feature set, we inputted each set into the SVM separately and performed LOOCV to evaluate their performance. The prediction model that produced the best performance (i.e., the highest MCC) was considered final, and the corresponding feature set was considered optimal.

## Performance of Various Prediction Models on the Training Dataset

**Figure 3A** shows the performances of the SVM model using different sets of input features, in which the MCC gradually increased with respect to the different feature sets, peaked with the F136-based model, and then gradually declined. Figure 3B shows the classification accuracy vs. parameter variation (C and γ) of the final F136-based model. The maximal classification accuracy was 0.870, reached when the parameters log2(C) and log2(γ) were 6.72 and −2.18, respectively, with MCC, sensitivity, and specificity values of 0.695, 0.737, and 0.933, respectively. The feature type distribution of the optimal feature set and the total features employed in this study are shown in Figure 3C. Among the 136 optimal features, there were eight AAC features, one ATC feature, 25 CTD features, 98 DPC features, and four PCP features, indicating that important properties from all five compositions contributed to PVP prediction. To demonstrate the effect of our feature selection protocol, we compared the F136-based model with the control SVM (using all features) and also with the individual composition-based prediction models. As shown in Table 1, the F136-based model's MCC, accuracy, and area under the curve (AUC) were 15–44%, 6–17%, and 6–11% higher, respectively, than those of the other models. These results demonstrate that the many redundant or uninformative features present in the original feature set were eliminated through our feature selection protocol, resulting in significant performance improvement.

## Comparison of PVP-SVM With Other ML Algorithms

In addition to PVP-SVM, we also developed RF- and ERT-based models using the same feature selection protocol and training dataset (Figures 4A,B). These two methods have been described in detail in our previous studies (Manavalan et al., 2017, 2018). The procedure for ML parameter optimization and final model selection was the same as for PVP-SVM. The performance of the final selected RF and ERT models was compared with PVP-SVM, as well as with PVPred, which was constructed using the same training dataset. Table 2 shows that the accuracy, AUC, and MCC of PVP-SVM were 2–4%, 0.1–2%, and 8–9% higher, respectively, than those achieved by the other methods, indicating the superiority of PVP-SVM.

## Method Performance Using an Independent Dataset

We evaluated the performance of our three ML methods and PVPred using an independent dataset. Table 3 shows that PVP-SVM achieved the highest MCC and AUC values (0.531 and 0.844, respectively). Indeed, the corresponding metrics were 2.2–17.4% and 4.8–10.0% higher than those achieved by the other methods, indicating the superiority of PVP-SVM. Specifically, PVP-SVM outperformed PVPred in all five metrics,
suggesting its usefulness as an improvement to existing tools for predicting PVPs.

TABLE 1 | A comparison of the proposed predictor with the individual composition-based SVM models on the training dataset.

| Method | MCC | Accuracy | Sensitivity | Specificity | AUC | P-value |
|---|---|---|---|---|---|---|
| PVP-SVM | 0.695 | 0.870 | 0.737 | 0.933 | 0.900 | – |
| SVM control | 0.554 | 0.811 | 0.636 | 0.894 | 0.837 | 0.068 |
| AAC | 0.525 | 0.792 | 0.841 | 0.687 | 0.841 | 0.086 |
| DPC | 0.395 | 0.743 | 0.837 | 0.546 | 0.760 | **0.00023** |
| CTD | 0.534 | 0.801 | 0.880 | 0.636 | 0.819 | **0.022** |
| PCP | 0.478 | 0.782 | 0.889 | 0.556 | 0.812 | **0.014** |
| ATC | 0.252 | 0.708 | 0.091 | 1.000 | 0.788 | **0.002** |

The P-value column reports a pairwise comparison of ROC areas under the curve (AUCs) between PVP-SVM and each other method using a two-tailed t-test; P ≤ 0.05 (shown in bold) indicates a statistically meaningful difference from PVP-SVM.

TABLE 2 | A comparison of the proposed predictor with other ML-based methods on the training dataset.

| Method | MCC | Accuracy | Sensitivity | Specificity | AUC | P-value |
|---|---|---|---|---|---|---|
| PVP-SVM | 0.695 | 0.870 | 0.737 | 0.933 | 0.900 | – |
| PVPred | NA | 0.850 | 0.758 | 0.894 | 0.899 | 0.974 |
| RF | 0.600 | 0.831 | 0.657 | 0.914 | 0.877 | 0.476 |
| ERT | 0.614 | 0.837 | 0.636 | 0.933 | 0.883 | 0.594 |

The P-value column reports a pairwise comparison of AUCs between PVP-SVM and each other method using a two-tailed t-test.

TABLE 3 | Performance of various methods on the independent dataset.

| Method | MCC | Accuracy | Sensitivity | Specificity | AUC | P-value |
|---|---|---|---|---|---|---|
| PVP-SVM | 0.531 | 0.798 | 0.667 | 0.859 | 0.844 | – |
| ERT | 0.509 | 0.798 | 0.533 | 0.922 | 0.778 | 0.367 |
| RF | 0.481 | 0.787 | 0.500 | 0.922 | 0.756 | 0.238 |
| SVM control | 0.414 | 0.755 | 0.533 | 0.859 | 0.796 | 0.505 |
| PVPred | 0.357 | 0.713 | 0.600 | 0.765 | 0.742 | 0.176 |

The P-value column reports a pairwise comparison of AUCs between PVP-SVM and each other method using a two-tailed t-test.

In general, ML-based methods are problem-specific (Zhang and Tsai, 2005). Instead of selecting a ML method arbitrarily, it is necessary to explore different ML methods on the same dataset to select the best one. Hence, we explored the three most commonly used ML methods (SVM, RF, and ERT), each having its own advantages and disadvantages. The PVP-SVM method performed consistently better than the other two methods on both the training and independent datasets (Figures 5A,B). Although the differences in performance between these three methods were not significant (P > 0.05), SVM was superior to the other ML methods in PVP prediction, consistent with a previous report (Ding et al., 2014). Hence, we selected PVP-SVM as the final prediction model.

## Comparison of PVP-SVM and PVPred Methodology

A detailed comparison between our method and the existing method in terms of methodology is as follows: (i) the PVPred method utilizes only g-gap dipeptides as input features, and its optimal features were determined by an analysis of variance-based feature selection protocol.
However, PVP-SVM utilizes AAC, ATC, CTD, and PCP in addition to DPC, with optimal features selected based on a RF algorithm; (ii) the number of optimal features used differs between the two methods: PVP-SVM uses 136 features, while PVPred uses 160; (iii) although the same ML method was used for the two methods, the parameter optimization procedure differed, as PVP-SVM used LOOCV, while PVPred used five-fold cross-validation.

## Web Server Implementation

Several examples of bioinformatics tools/web servers utilized for protein function predictions have been reported in previous publications (Govindaraj et al., 2010, 2011; Manavalan et al., 2010a,b, 2011; Basith et al., 2011, 2013), and are of great practical use to researchers. To this end, an online prediction server for PVP-SVM was developed, which is freely accessible at the following link: www.thegleelab.org/PVP-SVM/PVP-SVM.html. Users can paste or upload query protein sequences in FASTA format. After submitting the input protein sequences, the results can be retrieved in a separate interface. All the curated datasets used in this study can be downloaded from the web server. PVP-SVM represents the second publicly available method for PVP prediction, and delivers a higher level of accuracy than PVPred.

## DISCUSSION

PVPs play critical roles in adsorption between phages and their host bacteria, and are key in the development of new antibiotics. Phage-derived proteins are considered safe and efficient antimicrobial agents due to their versatile properties, including a bacteria-specific lytic mechanism, a broad antibacterial spectrum, enhanced tissue penetration owing to their small size, low immunogenicity, and a reduced possibility of bacterial resistance (Drulis-Kawa et al., 2012). Thus, we have developed a novel computational method for predicting PVPs, called PVP-SVM. The molecular functions and biological activities of proteins can be predicted from their primary sequence (Lee et al., 2007); hence, we utilized the available PVP sequences to develop the method. A combination of AAC, ATC, DPC, CTD, and PCP features was used to map the protein sequences onto numeric feature vectors, which were inputted into the SVM to predict PVPs. Although AAC, CTD, and DPC features have been used previously (Feng et al., 2013b; Ding et al., 2014; Zhang et al., 2015), this is the first report including ATC and PCP. In ML-based predictions, feature selection is one of the most important steps, because high-dimensional feature sets generally contain numerous non-informative and redundant features that affect prediction accuracy (Wang et al., 2012; Manavalan et al., 2014; Manavalan and Lee, 2017; Song et al., 2017). To this end, we applied a feature selection protocol that has been proven effective in various biological applications (Manavalan and Lee, 2017; Manavalan et al., 2018), and identified the optimal features. Of those, the major contribution was from DPC (∼72%), followed by CTD, AAC, PCP, and ATC, indicating that information about the fraction of amino acids as well as their local order might play a major role in predicting PVPs. A previous study demonstrated that basic amino acids (Lys and Arg) usually occur flanking potential cleavage sites in PVPs, as their side chain flexibility is required to accommodate the change observed in the cleavage site (Coia et al., 1988; Speight et al., 1988).
Interestingly, our optimal features contain these two important types of residues. In general, if a prediction model is developed using a training dataset that contains highly homologous sequences, the method will overestimate the prediction accuracy. In this regard, Feng et al. and Ding et al. used a lower-homology (<40% sequence identity) sequence dataset to develop their prediction models (Feng et al., 2013b; Ding et al., 2014). Zhang et al. developed their model using a highly homologous sequence dataset (<80% sequence identity); as a result, this method showed higher accuracy when evaluated with an independent dataset (Zhang et al., 2015). Furthermore, PVPred is the only publicly available method of the three, in the form of a web server, and was generated using the same dataset as our method. Therefore, we compared the performance of our method with PVPred only. Generally, a prediction model tends toward over-optimization in order to attain higher accuracy. Therefore, it is always necessary to evaluate the prediction model using an independent dataset, to measure the generalizability of the method (Chaudhary et al., 2016; Manavalan and Lee, 2017; Nagpal et al., 2017). Hence, we evaluated our three prediction models and PVPred on an independent dataset. Our study demonstrated that PVP-SVM consistently performed better than PVPred and the two other methods developed in this study on both datasets, indicating the greater transferability of the method. The superior performance of PVP-SVM may be attributed to two important factors: (i) integration of previously reported features and inclusion of novel features that collectively make significant contributions to the performance; and (ii) a feature selection protocol that eliminates overlapping and redundant features. Furthermore, our approach is a general one, which is applicable to many other classification problems in structural bioinformatics. Although PVP-SVM displayed superior performance over the other methods, there is room for further improvements, including increasing the size of the training dataset based on the experimental data available in the future, incorporating novel features, and exploring different ML algorithms including stochastic gradient boosting (Xu et al., 2017) and deep learning (LeCun et al., 2015). A user-friendly web interface has been made available, allowing researchers access to our prediction method. Indeed, this is the second method to be made publicly available, with higher accuracy than the existing method. Compared to experimental approaches, bioinformatics methods, such as PVP-SVM, represent a powerful and cost-effective approach for the proteome-wide prediction of PVPs. Therefore, PVP-SVM might be useful for large-scale PVP prediction, facilitating hypothesis-driven experimental design.

## AUTHOR CONTRIBUTIONS

BM and GL conceived and designed the experiments; BM performed the experiments; BM and TS analyzed the data; BM and GL wrote the paper. All authors reviewed the manuscript and agreed to this information prior to submission.

## FUNDING

This work was supported by the Basic Science Research Program through the National Research Foundation (NRF) of Korea funded by the Ministry of Education, Science, and Technology [2015R1D1A1A09060192 and 2009-0093826], and the Brain Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT & Future Planning [2016M3C7A1904392].
## ACKNOWLEDGMENTS The authors would like to thank Da Yeon Lee for assistance in the preparation of the manuscript. ----- ## REFERENCES Basith, S., Manavalan, B., Gosu, V., and Choi, S. (2013). Evolutionary, structural and functional interplay of the IkappaB family members. PLoS ONE 8:e54178. [doi: 10.1371/journal.pone.0054178](https://doi.org/10.1371/journal.pone.0054178) Basith, S., Manavalan, B., Govindaraj, R. G., and Choi, S. (2011). In silico approach to inhibition of signaling pathways of Toll-like receptors 2 and 4 by ST2L. PLoS [ONE 6:e23989. doi: 10.1371/journal.pone.0023989](https://doi.org/10.1371/journal.pone.0023989) Boser, B. E., Guyon, I. M., and Vapnik, V. N. (1992). “A training algorithm for optimal margin classifiers,” in Proceedings of the Proceedings of the Fifth Annual Workshop on Computational Learning Theory (Pittsburgh, PA: ACM). Cao, R., Wang, Z., Wang, Y., and Cheng, J. (2014). SMOQ: a tool for predicting the absolute residue-specific quality of a single protein model with support vector [machines. BMC Bioinformatics 15:120. doi: 10.1186/1471-2105-15-120](https://doi.org/10.1186/1471-2105-15-120) Chaudhary, K., Nagpal, G., Dhanda, S. K., and Raghava, G. P. (2016). Prediction of immunomodulatory potential of an RNA sequence for designing non-toxic siRNAs and RNA-based vaccine adjuvants. Sci Rep.6:20678. [doi: 10.1038/srep20678](https://doi.org/10.1038/srep20678) Chen, W., Feng, P. M., Lin, H., and Chou, K. C. (2014). iSS-PseDNC: identifying splicing sites using pseudo dinucleotide composition. Biomed. Res. Int. [2014:623149. doi: 10.1155/2014/623149](https://doi.org/10.1155/2014/623149) Chen, W., Tang, H., and Lin, H. (2017a). MethyRNA: a web server for identification of N(6)-methyladenosine sites. J. Biomol. Struct. Dyn. 35, [683–687. doi: 10.1080/07391102.2016.1157761](https://doi.org/10.1080/07391102.2016.1157761) Chen, W., Yang, H., Feng, P., Ding, H., and Lin, H. (2017b). iDNA4mC: identifying DNA N4-methylcytosine sites based on nucleotide chemical properties. [Bioinformatics 33, 3518–3523. doi: 10.1093/bioinformatics/btx479](https://doi.org/10.1093/bioinformatics/btx479) Coia, G., Parker, M. D., Speight, G., Byrne, M. E., and Westaway, E. G. (1988). Nucleotide and complete amino acid sequences of Kunjin virus: definitive gene order and characteristics of the virus-specified proteins. J. Gen. Virol. 69(Pt 1), 1–21. Deng, X., Li, J., and Cheng, J. (2013). Predicting protein model quality from sequence alignments by support vector machines. J. Proteomics Bioinform. [S9:001. doi: 10.4172/jpb.S9-001](https://doi.org/10.4172/jpb.S9-001) Ding, H., Feng, P. M., Chen, W., and Lin, H. (2014). Identification of bacteriophage virion proteins by the ANOVA feature selection and analysis. Mol. Biosyst. 10, [2229–2235. doi: 10.1039/c4mb00316k.](https://doi.org/10.1039/c4mb00316k.) Drulis-Kawa, Z., Majkowska-Skrobek, G., Maciejewska, B., Delattre, A. S., and Lavigne, R. (2012). Learning from bacteriophages - advantages and limitations of phage and phage-encoded protein applications. Curr. Protein Pept. Sci. 13, [699–722. doi: 10.2174/138920312804871193](https://doi.org/10.2174/138920312804871193) Dubchak, I., Muchnik, I., Holbrook, S. R., and Kim, S. H. (1995). Prediction of protein folding class using global description of amino acid sequence. Proc. Natl. Acad. Sci. U.S.A. 92, 8700–8704. Eickholt, J., Deng, X., and Cheng, J. (2011). DoBo: Protein domain boundary prediction by integrating evolutionary signals and machine learning. BMC [Bioinformatics 12:43. 
doi: 10.1186/1471-2105-12-43](https://doi.org/10.1186/1471-2105-12-43) Elofsson, A., Joo, K., Keasar, C., Lee, J., Maghrabi, A. H. A., Manavalan, B., et al. (2017). Methods for estimation of model accuracy in CASP12. Proteins [86(Suppl. 1), 361–373. doi: 10.1101/143925](https://doi.org/10.1101/143925) Feng, P. M., Chen, W., Lin, H., and Chou, K. C. (2013a). iHSP-PseRAAAC: identifying the heat shock protein families using pseudo reduced amino acid alphabet composition. Anal Biochem. 442, 118–125. [doi: 10.1016/j.ab.2013.05.024](https://doi.org/10.1016/j.ab.2013.05.024) Feng, P. M., Ding, H., Chen, W., and Lin, H. (2013b). Naive Bayes classifier with feature selection to identify phage virion proteins. Comput. Math. Methods [Med. 2013:530696. doi: 10.1155/2013/530696](https://doi.org/10.1155/2013/530696) Feng, P. M., Lin, H., and Chen, W. (2013c). Identification of antioxidants from sequence information using naive Bayes. Comput. Math. Methods Med. [2013:567529. doi: 10.1155/2013/567529](https://doi.org/10.1155/2013/567529) Feng, P., Yang, H., Ding, H., Lin, H., Chen, W., and Chou, K. C. (2018). iDNA6mA-PseKNC: identifying DNA N(6)-methyladenosine sites by incorporating nucleotide physicochemical properties into PseKNC. Genomics. [doi: 10.1016/j.ygeno.2018.01.005. [Epub ahead of print].](https://doi.org/10.1016/j.ygeno.2018.01.005) Govindaraj, R. G., Manavalan, B., Basith, S., and Choi, S. (2011). Comparative analysis of species-specific ligand recognition in Toll-like receptor 8 signaling: [a hypothesis. PLoS ONE 6:e25118. doi: 10.1371/journal.pone.0025118](https://doi.org/10.1371/journal.pone.0025118) Govindaraj, R. G., Manavalan, B., Lee, G., and Choi, S. (2010). Molecular modeling-based evaluation of hTLR10 and identification of potential ligands in Toll-like receptor signaling. PLoS ONE 5:e12713. [doi: 10.1371/journal.pone.0012713](https://doi.org/10.1371/journal.pone.0012713) Gupta, S., Mittal, P., Madhu, M. K., and Sharma, V. K. (2017). IL17eScan: a tool for the identification of peptides inducing IL-17 response. Front. Immunol. 8:1430. [doi: 10.3389/fimmu.2017.01430](https://doi.org/10.3389/fimmu.2017.01430) Jara-Acevedo, R., Diez, P., Gonzalez-Gonzalez, M., Degano, R. M., Ibarrola, N., Gongora, R., et al. (2018). Screening phage-display antibody libraries using protein arrays. Methods Mol. Biol. 1701, 365–380. [doi: 10.1007/978-1-4939-7447-4_20](https://doi.org/10.1007/978-1-4939-7447-4_20) Kryshtafovych, A., Monastyrskyy, B., Fidelis, K., Schwede, T., and Tramontano, A. (2017). Assessment of model accuracy estimations in CASP12. Proteins [86(Suppl. 1), 345–360. doi: 10.1002/prot.25371](https://doi.org/10.1002/prot.25371) Kumar, R., Chaudhary, K., Singh Chauhan, J., Nagpal, G., Kumar, R., Sharma, M., et al. (2015). An in silico platform for predicting, screening and designing of [antihypertensive peptides. Sci. Rep. 5:12512. doi: 10.1038/srep12512](https://doi.org/10.1038/srep12512) Lavigne, R., Ceyssens, P. J., and Robben, J. (2009). Phage proteomics: applications of mass spectrometry. Methods Mol. Biol. 502, 239–251. [doi: 10.1007/978-1-60327-565-1_14](https://doi.org/10.1007/978-1-60327-565-1_14) LeCun, Y., Bengio, Y., and Hinton, G. (2015). Deep learning. Nature 521, 436–444. [doi: 10.1038/nature14539](https://doi.org/10.1038/nature14539) Lee, D., Redfern, O., and Orengo, C. (2007). Predicting protein function from sequence and structure. Nat. Rev. Mol. Cell Biol. 8, 995–1005. [doi: 10.1038/nrm2281](https://doi.org/10.1038/nrm2281) Lekunberri, I., Subirats, J., Borrego, C. 
M., and Balcazar, J. L. (2017). Exploring the contribution of bacteriophages to antibiotic resistance. Environ. Pollut. 220(Pt [B), 981–984. doi: 10.1016/j.envpol.2016.11.059](https://doi.org/10.1016/j.envpol.2016.11.059) Li, L., Xiong, Y., Zhang, Z.-Y., Guo, Q., Xu, Q., Liow, H.-H., et al. (2015). Improved feature-based prediction of SNPs in human cytochrome P450 [enzymes. Interdiscipl. Sci. 7, 65–77. doi: 10.1007/s12539-014-0257-2](https://doi.org/10.1007/s12539-014-0257-2) Lin, H., Liu, W. X., He, J., Liu, X. H., Ding, H., and Chen, W. (2015). Predicting cancerlectins by the optimal g-gap dipeptides. Sci Rep. 5:16964. [doi: 10.1038/srep16964](https://doi.org/10.1038/srep16964) Manavalan, B., Basith, S., Choi, Y. M., Lee, G., and Choi, S. (2010a). Structurefunction relationship of cytoplasmic and nuclear IkappaB proteins: an in silico [analysis. PLoS ONE 5:e15782. doi: 10.1371/journal.pone.0015782](https://doi.org/10.1371/journal.pone.0015782) Manavalan, B., Basith, S., Shin, T. H., Choi, S., Kim, M. O., and Lee, G. (2017). MLACP: machine-learning-based prediction of anticancer peptides. [Oncotarget. 8, 77121–77136. doi: 10.18632/oncotarget.20365](https://doi.org/10.18632/oncotarget.20365) Manavalan, B., Govindaraj, R., Lee, G., and Choi, S. (2011). Molecular modelingbased evaluation of dual function of IkappaBzeta ankyrin repeat domain in [toll-like receptor signaling. J. Mol. Recognit. 24, 597–607. doi: 10.1002/jmr.1085](https://doi.org/10.1002/jmr.1085) Manavalan, B., Kuwajima, K., Joung, I., and Lee, J. (2015). “Structure-based protein folding type classification and folding rate prediction,” in Proceedings of the Bioinformatics and Biomedicine (BIBM), 2015 IEEE International Conference on (Washington, DC: IEEE). Manavalan, B., and Lee, J. (2017). SVMQA: support-vector-machine-based protein single-model quality assessment. Bioinformatics 33, 2496–2503. [doi: 10.1093/bioinformatics/btx222.](https://doi.org/10.1093/bioinformatics/btx222.) Manavalan, B., Lee, J., and Lee, J. (2014). Random forest-based protein model quality assessment (RFMQA) using structural features and potential energy [terms. PLoS ONE 9:e106542. doi: 10.1371/journal.pone.0106542](https://doi.org/10.1371/journal.pone.0106542) Manavalan, B., Murugapiran, S. K., Lee, G., and Choi, S. (2010b). Molecular modeling of the reductase domain to elucidate the reaction mechanism of reduction of peptidyl thioester into its corresponding alcohol in non-ribosomal [peptide synthetases. BMC Struct. Biol. 10:1. doi: 10.1186/1472-6807-10-1](https://doi.org/10.1186/1472-6807-10-1) Manavalan, B., Shin, T. H., and Lee, G. (2018). DHSpred: support-vectormachine-based human DNase I hypersensitive sites prediction using the optimal features selected by random forest. Oncotarget 9, 1944–1956. [doi: 10.18632/oncotarget.23099](https://doi.org/10.18632/oncotarget.23099) Nagpal, G., Chaudhary, K., Dhanda, S. K., and Raghava, G. P. S. (2017). Computational prediction of the immunomodulatory potential of RNA [sequences. Methods Mol. Biol. 1632, 75–90. doi: 10.1007/978-1-4939-7138-1_5](https://doi.org/10.1007/978-1-4939-7138-1_5) Pedregosa, F., Varoquaux, G., Gramfort, A., Michel, V., Thirion, B., Grisel, O., et al. (2011). Scikit-learn: machine learning in Python. J. Mach. Learn. Res. 12, 2825–2830. ----- Qiu, W. R., Sun, B. Q., Xiao, X., Xu, Z. C., and Chou, K. C. (2016). iHydPseCp: identify hydroxyproline and hydroxylysine in proteins by incorporating sequence-coupled effects into general PseAAC. Oncotarget 7, 44310–44321. 
[doi: 10.18632/oncotarget.10027](https://doi.org/10.18632/oncotarget.10027) Scholkopf, B., and Smola, A. J. (2001). Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. London: MIT Press. Seguritan, V., Alves, N. Jr., Arnoult, M., Raymond, A., Lorimer, D., Burgin, A. B. Jr., et al. (2012). Artificial neural networks trained to detect viral and phage structural proteins. PLoS Comput. Biol. 8:e1002657. [doi: 10.1371/journal.pcbi.1002657](https://doi.org/10.1371/journal.pcbi.1002657) Smola, A., and Vapnik, V. (1997). Support vector regression machines. Adv. Neural Inf. Process. Syst. 9, 155–161. Song, J., Wang, H., Wang, J., Leier, A., Marquez-Lago, T., Yang, B., et al. (2017). PhosphoPredict: a bioinformatics tool for prediction of human kinase-specific phosphorylation substrates and sites by integrating heterogeneous feature selection. Sci. Rep. 7:6862. [doi: 10.1038/s41598-017-07199-4](https://doi.org/10.1038/s41598-017-07199-4) Speight, G., Coia, G., Parker, M. D., and Westaway, E. G. (1988). Gene mapping and positive identification of the non-structural proteins NS2A, NS2B, NS3, NS4B and NS5 of the flavivirus Kunjin and their cleavage sites. J. Gen. Virol. 69(Pt 1), 23–34. Tang, H., Su, Z. D., Wei, H. H., Chen, W., and Lin, H. (2016). Prediction of cell-penetrating peptides with feature selection techniques. Biochem. Biophys. Res. Commun. 477, 150–154. [doi: 10.1016/j.bbrc.2016.06.035](https://doi.org/10.1016/j.bbrc.2016.06.035) Vapnik, V. N., and Vapnik, V. (1998). Statistical Learning Theory. New York, NY: Wiley. Wang, H., Feng, L., Zhang, Z., Webb, G. I., Lin, D., and Song, J. (2016). Crysalis: an integrated server for computational analysis and design of protein crystallization. Sci. Rep. 6:21383. [doi: 10.1038/srep21383](https://doi.org/10.1038/srep21383) Wang, M., Zhao, X. M., Takemoto, K., Xu, H., Li, Y., Akutsu, T., et al. (2012). FunSAV: predicting the functional effect of single amino acid variants using a two-stage random forest model. PLoS ONE 7:e43847. [doi: 10.1371/journal.pone.0043847](https://doi.org/10.1371/journal.pone.0043847) Wang, Z., Tegge, A. N., and Cheng, J. (2009). Evaluating the absolute quality of a single protein model using structural features and support vector machines. Proteins 75, 638–647. [doi: 10.1002/prot.22275](https://doi.org/10.1002/prot.22275) Xiong, Y., Liu, J., Zhang, W., and Zeng, T. (2012). Prediction of heme binding residues from protein sequences with integrative sequence profiles. Proteome Sci. 10(Suppl. 1), S20. [doi: 10.1186/1477-5956-10-S1-S20](https://doi.org/10.1186/1477-5956-10-S1-S20) Xu, Q., Xiong, Y., Dai, H., Kumari, K. M., Xu, Q., Ou, H.-Y., et al. (2017). PDC-SGB: prediction of effective drug combinations using a stochastic gradient boosting algorithm. J. Theor. Biol. 417, 1–7. [doi: 10.1016/j.jtbi.2017.01.019](https://doi.org/10.1016/j.jtbi.2017.01.019) Yuan, Y., and Gao, M. (2016). Proteomic analysis of a novel Bacillus jumbo phage revealing glycoside hydrolase as structural component. Front. Microbiol. 7:745. [doi: 10.3389/fmicb.2016.00745](https://doi.org/10.3389/fmicb.2016.00745) Zhang, D., and Tsai, J. J. P. (2005). Machine Learning Applications in Software Engineering. River Edge, NJ: World Scientific. Zhang, L., Zhang, C., Gao, R., and Yang, R. (2015). An ensemble method to distinguish bacteriophage virion from non-virion proteins based on protein sequence characteristics. Int. J. Mol. Sci. 16, 21734–21758. [doi: 10.3390/ijms160921734](https://doi.org/10.3390/ijms160921734) Zheng, C., Wang, M., Takemoto, K., Akutsu, T., Zhang, Z., and Song, J. (2012).
An integrative computational framework based on a two-step random forest algorithm improves prediction of zinc-binding sites in proteins. PLoS ONE 7:e49716. [doi: 10.1371/journal.pone.0049716](https://doi.org/10.1371/journal.pone.0049716)

**Conflict of Interest Statement:** The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Copyright © 2018 Manavalan, Shin and Lee. This is an open-access article distributed under the terms of the [Creative Commons Attribution License (CC BY)](http://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

-----
{ "disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC5864850, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.frontiersin.org/articles/10.3389/fmicb.2018.00476/pdf" }
2018
[ "JournalArticle" ]
true
2018-03-16T00:00:00
[ { "paperId": "4c75b748911ddcd888c5122f7672f69caa5d661f", "title": "Statistical Learning Theory" }, { "paperId": "893bef467bedbea194e8148c2119c79a531068c2", "title": "An ensemble method" }, { "paperId": "99fe4342b5643b1130ae7642b00f682261fe03e8", "title": "Assessment of model accuracy estimations in CASP12" }, { "paperId": "063ca200dc83bf433c95552bc746e649ab857b4d", "title": "iDNA6mA-PseKNC: Identifying DNA N6-methyladenosine sites by incorporating nucleotide physicochemical properties into PseKNC." }, { "paperId": "eac14deda50018f8d0128d5c0f0aa4caae918513", "title": "DHSpred: support-vector-machine-based human DNase I hypersensitive sites prediction using the optimal features selected by random forest" }, { "paperId": "9e12dea419e54e1f727797835e7be4a691daffc7", "title": "iDNA4mC: identifying DNA N4‐methylcytosine sites based on nucleotide chemical properties" }, { "paperId": "3166d7e48597fb2dbbcc51f449560ba448065d18", "title": "IL17eScan: A Tool for the Identification of Peptides Inducing IL-17 Response" }, { "paperId": "a8ed44cbf02da0f135335bc2986ddff9109d9b53", "title": "MLACP: machine-learning-based prediction of anticancer peptides" }, { "paperId": "b065291e599ae127d870bc6678e132ae0a0f42ce", "title": "SVMQA: support‐vector‐machine‐based protein single‐model quality assessment" }, { "paperId": "4ba993c02be1fe32a1dadc00433b448df3f103b8", "title": "PhosphoPredict: A bioinformatics tool for prediction of human kinase-specific phosphorylation substrates and sites by integrating heterogeneous feature selection" }, { "paperId": "46aa966b054a4d5d4011082ce5fe5a273f24daab", "title": "Methods for estimation of model accuracy in CASP12" }, { "paperId": "fc00b73dce03adf5cd037f48f4e4c1483dcd7b47", "title": "PDC-SGB: Prediction of effective drug combinations using a stochastic gradient boosting algorithm." }, { "paperId": "a66bb52322e09746fe3924955667974a077fd6f9", "title": "MethyRNA: a web server for identification of N6-methyladenosine sites" }, { "paperId": "9d7b4ea331d5d31fb8d089d7212697bc1e2c1dc8", "title": "Prediction of cell-penetrating peptides with feature selection techniques." 
}, { "paperId": "03176653c70dd4e9e2063bc85652a18c19cdd360", "title": "iHyd-PseCp: Identify hydroxyproline and hydroxylysine in proteins by incorporating sequence-coupled effects into general PseAAC" }, { "paperId": "7c40a264798bac1881e15709f11ec76c13f542eb", "title": "Proteomic Analysis of a Novel Bacillus Jumbo Phage Revealing Glycoside Hydrolase As Structural Component" }, { "paperId": "47b79eba1aa62268624b097da0f5a062db3bbcf6", "title": "Crysalis: an integrated server for computational analysis and design of protein crystallization" }, { "paperId": "7ed6245abe816b1d9e30e2ae6dc79a83bbb1d0b1", "title": "Prediction of Immunomodulatory potential of an RNA sequence for designing non-toxic siRNAs and RNA-based vaccine adjuvants" }, { "paperId": "c176457f59cd6ab72437cb090971a9ab0c40d188", "title": "Predicting cancerlectins by the optimal g-gap dipeptides" }, { "paperId": "6913dd308e89c51d8f6c2e407b2a6497b7468224", "title": "Structure-based protein folding type classification and folding rate prediction" }, { "paperId": "b989b7731f3d0d8c6e339ab77876cc954fd23721", "title": "An Ensemble Method to Distinguish Bacteriophage Virion from Non-Virion Proteins Based on Protein Sequence Characteristics" }, { "paperId": "6d5a8689fae2d528940a90f626807c2ddda124e6", "title": "An in silico platform for predicting, screening and designing of antihypertensive peptides" }, { "paperId": "091e4d1a3a77357304cbf92f521029ccbb37f625", "title": "Improved feature-based prediction of SNPs in human cytochrome P450 enzymes" }, { "paperId": "deaf1f4b75f3a5fac7d48ae3d92acc82422093a8", "title": "Random Forest-Based Protein Model Quality Assessment (RFMQA) Using Structural Features and Potential Energy Terms" }, { "paperId": "fcd5d71e8e513b35ea0a12417690c6e419cd6887", "title": "Identification of bacteriophage virion proteins by the ANOVA feature selection and analysis." }, { "paperId": "3bb8f7c74d5437df912c8619608081eede7513a0", "title": "iSS-PseDNC: Identifying Splicing Sites Using Pseudo Dinucleotide Composition" }, { "paperId": "7c1857d6396d444b40ed73b5c3c5678b0b4ba272", "title": "SMOQ: a tool for predicting the absolute residue-specific quality of a single protein model with support vector machines" }, { "paperId": "bc9e927b42131be8ec89ec352020ee9d7d6d016e", "title": "iHSP-PseRAAAC: Identifying the heat shock protein families using pseudo reduced amino acid alphabet composition." 
}, { "paperId": "2c5369cf25019bd09d94c889f7bf874daa458e5d", "title": "Identification of Antioxidants from Sequence Information Using Naïve Bayes" }, { "paperId": "b4cdb3054f1e8888ca3b284f79745e2f0eb1b376", "title": "Naïve Bayes Classifier with Feature Selection to Identify Phage Virion Proteins" }, { "paperId": "22d38eb9fa8d3cea820e9e20cae5991faaccceb2", "title": "Evolutionary, Structural and Functional Interplay of the IκB Family Members" }, { "paperId": "8ddb594e208202d1fb4d17c5609961d5108033b6", "title": "Learning from Bacteriophages - Advantages and Limitations of Phage and Phage-Encoded Protein Applications" }, { "paperId": "bdbef8deace7b6a39b7eea24f1661a47b7411dcb", "title": "An Integrative Computational Framework Based on a Two-Step Random Forest Algorithm Improves Prediction of Zinc-Binding Sites in Proteins" }, { "paperId": "ab29a0aa1058c8d326bf9a5a6d030de267c1e2be", "title": "FunSAV: Predicting the Functional Effect of Single Amino Acid Variants Using a Two-Stage Random Forest Model" }, { "paperId": "c6002449584aa68524df4c17c5264adb0a0a2526", "title": "Artificial Neural Networks Trained to Detect Viral and Phage Structural Proteins" }, { "paperId": "8ee05ddd680c810f7960b2a096b50ace6969161f", "title": "Prediction of heme binding residues from protein sequences with integrative sequence profiles" }, { "paperId": "8911b72064a09b7842f256c3ffd4af72dd23629a", "title": "Comparative Analysis of Species-Specific Ligand Recognition in Toll-Like Receptor 8 Signaling: A Hypothesis" }, { "paperId": "0c0a1cbff2dc2a4ded6988e1f24bdccd4bb0b541", "title": "In Silico Approach to Inhibition of Signaling Pathways of Toll-Like Receptors 2 and 4 by ST2L" }, { "paperId": "6a410d260831b64044ff573d8201ad6bf52c2691", "title": "Molecular modeling‐based evaluation of dual function of IκBζ ankyrin repeat domain in toll‐like receptor signaling" }, { "paperId": "ad4fd2c149f220a62441576af92a8a669fe81246", "title": "Scikit-learn: Machine Learning in Python" }, { "paperId": "2a148998fb1ce507a6951a01f6c944ffb23fb164", "title": "DoBo: Protein domain boundary prediction by integrating evolutionary signals and machine learning" }, { "paperId": "39ea8890503dd83c78ffb81b6c4105eb19de8f5a", "title": "Structure-Function Relationship of Cytoplasmic and Nuclear IκB Proteins: An In Silico Analysis" }, { "paperId": "6f7259fe4c951964abb02bc542aaa0c75982fcdc", "title": "Molecular Modeling-Based Evaluation of hTLR10 and Identification of Potential Ligands in Toll-Like Receptor Signaling" }, { "paperId": "423e6d79882dd045e46100829460a0fcbe36d577", "title": "Molecular modeling of the reductase domain to elucidate the reaction mechanism of reduction of peptidyl thioester into its corresponding alcohol in non-ribosomal peptide synthetases" }, { "paperId": "2f328c21aaaf7b8fb951b7ae4bcaaa1146af7bff", "title": "Evaluating the absolute quality of a single protein model using structural features and support vector machines" }, { "paperId": "55d6ab8ae9bddb45910d90009e3e482fa1b703e8", "title": "Predicting protein function from sequence and structure" }, { "paperId": "447d1fdb3283f8c7fce382955cdb081d3c10ff4f", "title": "Machine learning applications in software engineering" }, { "paperId": "fc8cda36a0972e7de1ac3a7bcb81dc32da79bee4", "title": "Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond" }, { "paperId": "459285bfe4f16f107fc1e470716100a108582a0a", "title": "Prediction of protein folding class using global description of amino acid sequence." 
}, { "paperId": "4aaa30769ca49875f45670970130c088136986d1", "title": "A training algorithm for optimal margin classifiers" }, { "paperId": "f00834f0b585cbd405bc229dcb86fc184e862ee3", "title": "Screening Phage-Display Antibody Libraries Using Protein Arrays." }, { "paperId": null, "title": "be construed as a potential conflict of interest" }, { "paperId": "afff2e72df1fccc12307a24f22a91f2b94567321", "title": "Exploring the contribution of bacteriophages to antibiotic resistance." }, { "paperId": "9b512e0dc826aa0d1cd02b36d1edc42027e63777", "title": "Computational Prediction of the Immunomodulatory Potential of RNA Sequences." }, { "paperId": "4f8d648c52edf74e41b0996128aa536e13cc7e82", "title": "Deep Learning" }, { "paperId": "29340c6ddcb6e3f0df29ccf071b89ead8dba13a6", "title": "Predicting Protein Model Quality from Sequence Alignments by Support" }, { "paperId": "102016057deedaf0643033457c1cd77f0345aa0e", "title": "Phage proteomics: applications of mass spectrometry." }, { "paperId": null, "title": "Support vector regressionmachines.Adv" }, { "paperId": "c51b4d5aea3f7d8c9fd5f036f8c5dec728073a7e", "title": "Nucleotide and complete amino acid sequences of Kunjin virus: definitive gene order and characteristics of the virus-specified proteins." }, { "paperId": "c96a0192d1c2df8fe857423d1ac7bc9ec23fa89b", "title": "Gene mapping and positive identification of the non-structural proteins NS2A, NS2B, NS3, NS4B and NS5 of the flavivirus Kunjin and their cleavage sites." }, { "paperId": null, "title": "Prediction of Phage Virion Proteins" } ]
13,124
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Medicine", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01773eeba8202b901b7b1ce04f42e50529942eb5
[ "Computer Science" ]
0.900474
Wholesome Coin: A pHealth Solution to Reduce High Obesity Rates in Gulf Cooperation Council Countries Using Cryptocurrency
01773eeba8202b901b7b1ce04f42e50529942eb5
Frontiers in Blockchain
[ { "authorId": "2264431", "name": "Hessah A. Alsalamah" }, { "authorId": "2118924527", "name": "Shorog Nasser" }, { "authorId": "8478984", "name": "Shada Alsalamah" }, { "authorId": "2118923333", "name": "Albatoul I. Almohana" }, { "authorId": "87439164", "name": "A. Alanazi" }, { "authorId": "2119525270", "name": "Fay Alrrshaid" } ]
{ "alternate_issns": null, "alternate_names": [ "Front Blockchain" ], "alternate_urls": null, "id": "17d7865f-0af7-472c-b174-60948bf06d11", "issn": "2624-7852", "name": "Frontiers in Blockchain", "type": null, "url": "https://www.frontiersin.org/journals/blockchain#" }
Obesity is considered one of the leading causes of chronic and noncommunicable diseases; these include diabetes, cardiovascular disease, and cancer. The obesity prevalence is threefold higher in the Arab Gulf Cooperation Council (GCC) population than in the rest of the world and leaves healthcare providers within the region with no alternative but to offer continuous and sustainable healthcare services. Obesity prevention would be more economical for governments than providing treatment. Preventing obesity is challenging because it requires motivating individuals to live a healthy lifestyle. Personal health (pHealth) has recently been actively involved in finding solutions to encourage healthy living. However, pHealth does not address the high percentage of people lacking the desire to maintain healthy living plans, which could have a negative effect on attempts aimed at reducing obesity prevalence. This study sheds light on the challenges faced by the GCC governments in reducing high obesity rates using pHealth; we propose a solution, Wholesome Coin, which incorporates advanced technologies to help governments reduce high obesity rates. Wholesome Coin has two components: one uses wearable IoT (WIoT) to help patients manage their behavior by tracking their physical activities and diet, and the other utilizes blockchain technology to help healthcare payers incentivize patients to maintain a healthy living plan by awarding digital coins that can be redeemed for real goods and services. GCC governments' adoption of Wholesome Coin could improve the quality of life of obese patients in a seamless, secure, and self-motivated manner, resulting in a healthier tomorrow, especially amid challenging times featuring global social distancing campaigns.
Edited by: Immaculate Dadiso Motsi-Omoijiade, University of Birmingham, United Kingdom

Reviewed by: Iztok Perus, University of Maribor, Slovenia; Taghreed Justinia, King Saud bin Abdulaziz University for Health Sciences, Saudi Arabia

*Correspondence: Hessah A. Alsalamah [halsalamah@KSU.EDU.SA](mailto:halsalamah@KSU.EDU.SA)

Specialty section: This article was submitted to Blockchain for Science, a section of the journal Frontiers in Blockchain

Received: 16 January 2021; Accepted: 24 May 2021; Published: 12 July 2021

Citation: Alsalamah HA, Nasser S, Alsalamah S, Almohana AI, Alanazi A and Alrrshaid F (2021) Wholesome Coin: A pHealth Solution to Reduce High Obesity Rates in Gulf Cooperation Council Countries Using Cryptocurrency. Front. Blockchain 4:654539. [doi: 10.3389/fbloc.2021.654539](https://doi.org/10.3389/fbloc.2021.654539)

# Wholesome Coin: A pHealth Solution to Reduce High Obesity Rates in Gulf Cooperation Council Countries Using Cryptocurrency

Hessah A. Alsalamah [1][,][2]*, Shorog Nasser [3][,][4], Shada Alsalamah [1][,][5][,][6], Albatoul I. Almohana [4][,][7], Areej Alanazi [4] and Fay Alrrshaid [4]

1Department of Information Systems, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia, 2Department of Computer Engineering, College of Engineering and Architecture, Al Yamamah University, Riyadh, Saudi Arabia, 3Saudi Technology and Security Comprehensive Control Company (Tahakom), Riyadh, Saudi Arabia, 4Safe House Lab, Center of Excellence in Information Assurance, College of Computer and Information Sciences, King Saud University, Riyadh, Saudi Arabia, 5National Health Information Center, Saudi Health Council, Riyadh, Saudi Arabia, 6Digital Health and Innovation Department, Science Division, World Health Organization, Geneva, Switzerland, 7National Cybersecurity Authority, Riyadh, Saudi Arabia

### Obesity is considered one of the leading causes of chronic and noncommunicable diseases; these include diabetes, cardiovascular disease, and cancer. The obesity prevalence is threefold higher in the Arab Gulf Cooperation Council (GCC) population than in the rest of the world and leaves healthcare providers within the region with no alternative but to offer continuous and sustainable healthcare services. Obesity prevention would be more economical for governments than providing treatment. Preventing obesity is challenging because it requires motivating individuals to live a healthy lifestyle. Personal health (pHealth) has recently been actively involved in finding solutions to encourage healthy living. However, pHealth does not address the high percentage of people lacking the desire to maintain healthy living plans, which could have a negative effect on attempts aimed at reducing obesity prevalence. This study sheds light on the challenges faced by the GCC governments in reducing high obesity rates using pHealth; we propose a solution, Wholesome Coin, which incorporates advanced technologies to help governments reduce high obesity rates. Wholesome Coin has two components: one uses wearable IoT (WIoT) to help patients manage their behavior by tracking their physical activities and diet, and the other utilizes blockchain technology to help healthcare payers incentivize patients to maintain a healthy living plan by awarding digital coins that can be redeemed for real goods and services.
GCC governments' adoption of Wholesome Coin could improve the quality of life of obese patients in a seamless, secure, and self-motivated manner, resulting in a healthier tomorrow, especially amid challenging times featuring global social distancing campaigns.

Keywords: Arab Gulf Cooperation Council, blockchain, COVID-19, eHealth, gamification, obesity, pHealth, wearable

-----

## INTRODUCTION

Obesity and being overweight are well-known health issues with significant risks that raise a major global concern (Zenag et al., 2011) (World Health Organization, 2000). According to the World Health Organization (WHO) (World Health Organization, 2020), approximately 70% of all deaths worldwide are due to noncommunicable diseases (NCDs), including heart disease, stroke, cancer, diabetes, and chronic lung disease. Obesity is a major cause of chronic diseases, including cardiovascular diseases and cancers, and of related issues that may lead to morbidity and mortality (Akil and Ahmad, 2011). Premature deaths due to type 2 diabetes mellitus (T2DM) and cardiovascular diseases (CVD) are also associated with obesity (Burns, 2016). The danger of obesity goes beyond its health risks; it is also extremely costly economically (Saudi Ministry of Health, 2018), mainly because treating obesity requires sustainable and continuous healthcare resources.

## Obesity Rates in Gulf Cooperation Council (GCC) Countries

Diabetes rates are significantly higher in the Arab GCC region than in other parts of the world. The International Diabetes Federation reported a diabetes prevalence of 23.9% in Saudi Arabia, 23.1% in Kuwait, and 19.8% in Qatar; the global average in 2015 was just 8.3% (International Diabetes Federation, 2021). The prevalence is expected to increase to 50% by 2025 in some GCC countries (International Diabetes Federation, 2021). The cost of treating diabetes is equally staggering in the Middle East and North Africa (MENA) region. The immediate cost of diabetes treatment alone (discounting stunted productivity and indirect treatment costs) is expected to increase four-fold in Abu Dhabi by 2030. MENA spent USD 16.8 billion on obesity treatment in 2014 (International Diabetes Federation, 2021). Obesity is considered a serious problem in Saudi Arabia, as the country is listed as the 15th most obese country in the world according to the World Atlas data (Alqarni, 2016). NCDs account for 73% of all deaths in Saudi Arabia (World Health Organization, 2018). In 2018, the General Authority for Statistics in Saudi Arabia (GASSA) (Saudi General Authority of Statistics, 2018) published figures indicating that only 18.99% of Saudis engage in sports activities, while the remaining 81.01% do not engage regularly in any kind of sports activity. Moreover, not being physically active is a known cause of obesity (World Health Organization, 2020), which implies that a high percentage of people with no desire to exercise or engage in sports could have a negative effect on attempts aimed at reducing obesity prevalence in Saudi Arabia. Finally, having established the seriousness of obesity prevalence, it is important to note the value of developing and implementing obesity prevention measures. However, the GCC governments exhibit no coherent regional plans to mitigate this challenge. Isolated policy responses in the form of detection campaigns and initiatives, some in line with WHO-suggested programs, remain markedly dwarfed by the size of the diabetes epidemic (World Health Organization, 2020).
## Enabling Personal Health Through Emerging Technologies

Technology has impacted almost every facet of our lives; it is self-evident that the wide application of emerging technologies can help overcome obesity. Wearable Internet of Things (WIoT) (Hiremath et al., 2014) is one of the most important technologies utilized to enable the concept of a pHealth system. pHealth is one suggested paradigm to ensure low-cost, high-quality health services related to chronic diseases that require sustainable care (Teng et al., 2008) (Poon and Zhang, 2008). This is mainly because engaging people is at the heart of the pHealth notion, which encourages people's early participation in preventing or predicting illness through personalized healthcare (Teng et al., 2008). WIoT is defined as "Technological infrastructure that interconnects wearable sensors to enable monitoring human factors including health, wellness, behaviors and other data useful in enhancing individuals' everyday quality of life" (Hiremath et al., 2014). WIoT has great influence in the fields of health and fitness, as it has features for tracking physiological functions and biofeedback (Wright and Keith, 2014). WIoT encompasses various products such as watches, glasses, bracelets, and smart shirts (Wright and Keith, 2014).

Another important technology that is gaining popularity is blockchain (Mettler, 2016) (Nakamoto, 2008). Governments, organizations, and businesses have started to search for solutions that can adopt blockchain technology (Mettler, 2016). Initially, blockchain was used for financial transactions. Bitcoin, the digital coin described by Satoshi Nakamoto (a pseudonym) in a 2008 whitepaper, was the first implementation of blockchain (Mettler, 2016; Nakamoto, 2008). Since then, the distributed platform, which allows information flow through a shared and seamlessly accessed ledger that everyone owns, seems to attract many investors (Mettler, 2016). The accessibility and flexibility of access to information are controlled through the blockchain platform. Authors (Alsalamah and Nuzzolese, 2020) classified blockchain types into four main groups based on their accessibility and visibility, as illustrated in Figure 1. In terms of blockchain applications, according to Swan, there are three generations of blockchain evolution: blockchain 1.0 for digital currency, blockchain 2.0 for contracts in relation to financial services, and finally blockchain 3.0 for general applications beyond currency and financial services (Swan, 2015). In 2015, approximately half a billion dollars were invested in blockchain startups (Mettler, 2016). Another report, released by the research group Diar, shows that almost USD 3.9 billion in investments were raised in the first three quarters of 2018 by blockchain and cryptocurrency-focused startups (Diar, 2018). Moreover, blockchain technology was adopted by some governments, such as the Saudi government, which announced the launch of the "Aber" project, a common digital currency between the Saudi Arabian Monetary Authority (SAMA) and the United Arab Emirates Central Bank (UAECB) (Saudi Arabian Monetary Authority, 2019). Along with the rise of investments in blockchain, the diversity of its applications has expanded (Mettler, 2016). Blockchain has recently begun to disrupt many important industries, such as healthcare (Mettler, 2016).
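For readers unfamiliar with blockchain internals, the following is a minimal, purely illustrative Python sketch (not any production blockchain) of the hash-linking that makes a shared ledger tamper-evident; the wearable-style readings stored in the blocks are hypothetical.

```python
# Minimal sketch of hash-linking: each block commits to its predecessor's
# hash, so altering any earlier record invalidates every later block.
import hashlib
import json
import time

def block_hash(block):
    payload = {k: block[k] for k in ("timestamp", "data", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

chain = [make_block("genesis", "0" * 64)]
chain.append(make_block({"steps": 8000, "calories": 310}, chain[-1]["hash"]))
chain.append(make_block({"steps": 12000, "calories": 450}, chain[-1]["hash"]))

# Tampering with an early block breaks the chain: the prev_hash stored in
# the following block no longer matches the recomputed hash.
chain[1]["data"] = {"steps": 50000, "calories": 9999}
print("chain still valid:", block_hash(chain[1]) == chain[2]["prev_hash"])  # False
```

Because each block's hash covers the previous block's hash, rewriting any historical record without recomputing, and re-agreeing on, every subsequent block is immediately evident to all ledger holders.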
Furthermore, Bitcoin was the first to present the idea of digital currency (Kuo et al., 2017) (Nakamoto, 2008), which was initially confined to financial disciplines; over the years, however, the concept of digital coins has been applied to other disciplines, such as health and medication.

## Gamification to Fight Obesity

To encourage people to prevent illness and apply pHealth, gamification is a known methodology that can influence their behavior (Cugelman, 2013). Gamification is defined as "the use of game design elements in non-game contexts" (Cugelman, 2013). According to Tang (1992), money has a significant impact on people's behavior; this is important because motivation is crucial to overcoming obesity and becoming physically active. Many benefits can be derived from reducing obesity prevalence. Preventing people from becoming obese helps them to avoid noncommunicable and chronic diseases associated with obesity, and positively affects their quality of life (Cameron et al., 2011). Moreover, eliminating obesity as a cause of death will have a significant impact on countries such as the United States, where approximately 300,000 people die prematurely due to obesity every year (Colman, 2000). In addition to the positive impact on people's health, reducing obesity prevalence prevents associated diseases that usually require continuous and regular healthcare expenses, thereby benefiting the economy.

This study proposes a WIoT- and blockchain-based solution to defeat obesity by encouraging people to engage in physical activities and by motivating them with incentives that shape their behavior, such as gamification. The remainder of this paper is organized as follows: Literature Review reviews the literature and existing pHealth solutions and identifies the gap in the literature; Wholesome Coin Solution proposes the Wholesome Coin solution in detail; and Wholesome Coin Design and Implementation describes the design and implementation. Finally, the paper concludes with a comprehensive discussion of challenges, impact, and further research recommendations.

## LITERATURE REVIEW

Many studies mention that a lifestyle that heavily depends on technology is one cause of physical inactivity, which is associated with obesity (Rosin, 2008). However, the expansion of WIoT has produced new technical devices that aim to help people live healthier lifestyles by encouraging them to engage in physical activities and by providing them with health measurements and feedback through mobile health applications (Ananthanarayan and Siek, 2012). Fitbit (2020) is a well-known example of a wristband wearable device used as an activity tracker. Fitbit tracks and records the measurements of different activities and health-related data, such as heart rate, walking distance, sleep patterns, and body temperature. The Fitbit wristband can be connected to a mobile application where the user can review a record of their activities and health-related data. One problem with Fitbit and similar devices is that although they are designed to encourage people to engage in physical activity by providing them with a self-monitoring tool, the effects are limited. According to a study conducted by Wang et al. (2015), simply providing Fitbit as a self-monitoring tool was insufficient to achieve an increase in target physical activity levels in a sample of overweight and obese adults. In addition, Fitbit admits that their average user is overweight, which has prompted the company to reconsider the development of its technology (Wright and Keith, 2014).
Further problems associated with such wearable devices are security and privacy issues. The Fitbit wristband collects health-related data that are considered highly sensitive and that can be used for nefarious purposes (Ching and Singh, 2016). One main concern is that insurance companies can exploit such devices to obtain users' health-related data (Ching and Singh, 2016).

-----

To overcome the issues of data protection and user privacy invasion, newer technologies, such as blockchain, are being used to provide healthcare solutions (Kuo et al., 2017). In the following sections, we present some commercial and research solutions that use blockchain technology to overcome obesity and to share, read, store, and manipulate personal health data through mobile health (mHealth) applications.

A solution to provide an electronic health record system that shares personal health data in a way that ensures privacy, security, and interoperability was proposed by Liang et al. (2017). The solution depends on wearable devices, manual inputs by the user, and medical records containing personal health data. The data are collected by a mobile health application, which is responsible for synchronizing data to a cloud-based database platform. A blockchain network is used to ensure the integrity of the data, manage access requests by different parties, and record requests for future auditing (Liang et al., 2017). Although the solution uses blockchain to improve security, one main vulnerability is the use of cloud database platforms to store health-related data. Public cloud services might generally be secure; however, their security depends on the provider's security and privacy policies, which might not be adequate for highly sensitive data such as health-related data.

Another solution specializing in defeating obesity and encouraging people to engage in physical activity is HealthCoin Plus (Healthcoin+, 2021a). It is a commercial company whose digital coin, HealthCoin Plus, aims at reinventing health and wellness payment systems (Healthcoin+, 2021b). The system has a mobile application that allows the user to gain HealthCoin Plus coins after completing health-related challenges listed in the application. The user can use the coins to buy real goods and services. In their published whitepaper (Healthcoin+, 2021b), they admit that the business model of HealthCoin Plus depends on finding a strategic partnership that supports the development of the community. In addition, the paper does not provide any details on how the user's health-related data, which are supposed to be collected by the application, are stored and accessed.

Universal HealthCoin (Jones, 2017) is another commercial blockchain-based health delivery and payment platform. It is a platform that aims to make health-related services more efficient and democratic (Jones, 2017). It focuses on allowing providers to deliver healthcare to people without worrying about payment-related issues, since the platform's main focus is to enhance healthcare payment systems (Jones, 2017). Moreover, the platform has a feature that rewards people with tokens when they complete health-related activities. Although the platform has many features, it mainly focuses on improving healthcare payment systems for the providers. Universal HealthCoin states in their published whitepaper that the tokens or coins gained after completing a health-related activity can only be used to pay providers.
Therefore, the user cannot use these tokens or coins to buy other goods or services (Jones, 2017). This may adversely affect user retention and diminish users' motivation to earn more coins, because the coins can only be used to pay the provider. It is evident that some solutions fail to motivate obese people to engage in physical activities, while others fail to provide a comprehensive system that ensures the security and privacy of users' health-related data.

## WHOLESOME COIN SOLUTION

Wholesome Coin provides a comprehensive platform that motivates obese people to engage in physical activity without invading the user's privacy. Wholesome Coin comprises two components: one uses WIoT to help patients manage their behavior by tracking their physical activities and diet; and the other utilizes blockchain-based cryptocurrency to allow healthcare payers to incentivize patients by awarding digital coins that can be redeemed for real goods and services.

## System Overview

The Wholesome Coin platform is based on a mobile system connected to a wearable device that measures the user's health-related data, including walking distance, blood pressure, sleeping hours, eating habits, and heartbeats. The data are stored in and updated to a blockchain node. Each user is the manager of their own information and can grant health providers, government institutions, and insurance companies access to view the data. The Wholesome Coin system applies the concept of patient-centric care by giving the user control over their health-related data, which contain highly sensitive information. The user's data are stored in a blockchain network, providing security superior to cloud-stored data. Through the blockchain network, government institutions and/or insurance companies can monitor and retrieve all stored information related to a user's lifestyle upon receiving permission from the user; thereafter, when the user reaches a certain predetermined level of healthy lifestyle, the user will earn a corresponding amount of Wholesome Coin, which should be a digital coin verified by the government. In the event that the coin is verified and adopted by the government, Wholesome Coin will become an ideal currency for companies to accept as payment. The success of the system depends on the user's ability to convert as much of their assessed score as possible into a corresponding amount of coins (a sketch of this assessment-to-coin conversion is given below). The user can then convert those coins to cash or even use them to buy goods and services, because the coins are valuable, legitimate, and verified by the government. Digital coins are used to apply the gamification concept and entice users to use the system by employing gaming strategies, such as collecting coins, but with real money that the user can benefit from in real life. Every piece of health data generated by the wearable devices will be uploaded to the blockchain network for record keeping. Furthermore, every access request and every permission granted will be recorded in the blockchain for future auditing. By using blockchain, users will be guaranteed to have control over their health information, and will be able to give access to whomever they choose. Moreover, once data are uploaded to the blockchain, they cannot be removed; therefore, the user cannot falsify any information. Blockchain is known for its fast transfer and identity authentication capabilities, which block attempts to commit fraud, and it can also scale to handle growing transaction volumes.
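As a concrete illustration of this flow, below is a minimal Python model, not the deployed system, of how wearable readings could be scored against healthy-lifestyle targets and converted into coins. The targets and exchange rate shown are hypothetical placeholders for values that a sponsoring government or insurer would set.

```python
# Illustrative model of the assessment-to-reward flow described above:
# wearable readings are scored against hypothetical healthy-lifestyle
# targets, and the score is converted into coins at a payer-defined rate.
from dataclasses import dataclass

@dataclass
class DailyReading:
    steps: int
    sleep_hours: float
    calories_burned: int

# Hypothetical targets and exchange rate; real values would be set by the
# sponsoring health payer.
TARGETS = {"steps": 10000, "sleep_hours": 7.0, "calories_burned": 400}
COINS_PER_POINT = 0.5

def lifestyle_score(reading: DailyReading) -> float:
    """Score each metric as a fraction of its target, capped at 100 points."""
    ratios = [
        min(reading.steps / TARGETS["steps"], 1.0),
        min(reading.sleep_hours / TARGETS["sleep_hours"], 1.0),
        min(reading.calories_burned / TARGETS["calories_burned"], 1.0),
    ]
    return 100 * sum(ratios) / len(ratios)

def coins_earned(reading: DailyReading, rate: float = COINS_PER_POINT) -> float:
    return lifestyle_score(reading) * rate

today = DailyReading(steps=8200, sleep_hours=6.5, calories_burned=380)
print(f"score={lifestyle_score(today):.1f}, coins={coins_earned(today):.1f}")
```

Keeping the exchange rate as a parameter mirrors the ecosystem described next, where different health payers can offer different rates to different user groups.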
Figure 2 demonstrates the Wholesome Coin ecosystem. Wholesome Coin is a multi-user ecosystem that collects, assesses, and manipulates data coming from different sources, and these data need to flow seamlessly. Integrating this solution with existing information resources is best achieved using a distributed infrastructure, which avoids discarding existing solutions that are not interoperable. This can be offered by blockchain technology rather than a traditional centralized infrastructure. Wholesome Coin is a permissioned, private type of blockchain that preserves users' privacy, while allowing all system users to contribute to one or all of the three components of this ecosystem, i.e., diet tracking, exercise tracking, and coin rewards. The system allows only authorized users access to a user's data ledger.

## System Entities

### System Users

System users are those users who store health-related data on the system and are authorized to grant other entities access to such data. For example, a user can grant a private-sector health payer reading and writing rights to their data, while limiting other health payers (e.g., government) to reading rights only. Furthermore, users can access their full transaction history (exercise, diet, coins, etc.) that has been recorded in the blockchain ledger.

### Wearable Devices

Wearable devices are responsible for collecting users' health-related data, such as walking distance, heartbeat, blood pressure, burned calories, and sleep patterns. The wearable device is connected to the user's account through the mobile health application, which works as a dashboard and control port for the user. The data are directly uploaded to the blockchain network.

### Health Payers

Health payers (including governments and insurance companies) are responsible for verifying transactions when users request to redeem coins. In addition, government institutions might, for example, allow obese people a better Wholesome Coin exchange rate for calories burned, to encourage them to continue exercising. However, insurance companies might use users' exercise data to their detriment, such as refusing to process a treatment for abnormal blood pressure because the user does not exercise sufficiently. Conversely, insurance companies can reward people who exercise.

### Blockchain Network

A blockchain network is an ecosystem in which relevant user data are shared with a list of trusted health payer participants. Wholesome Coin allows system users to completely control the data collected and the list of participants using private wallets (accessed through mobile-based apps). Simultaneously, health payer participants can grant users awards through a web-based app (Dapp). In addition, Wholesome Coin records all access requests and transactions for future auditing.

## WHOLESOME COIN DESIGN AND IMPLEMENTATION

The Wholesome Coin architecture comprises a 5-tier layered architecture, as illustrated in Figure 3:

### • User Interface Layer: with different interfaces for the user (mobile-based app) and health payer participants (web-based Dapp);

### • Application Services Layer: containing the three key services provided for the Wholesome Coin users and participants;

### • Authorization Layer: authorizes users before granting them access rights to the solution.
Health payers are authorized to reward users with digital coins, while users are authorized to track their exercise and diet;

### • Blockchain Layer: stores the on-chain data, which must be immutable, have a known generator, and be time-sequenced on a shared ledger. This is linked to the final physical layer, where all the data are stored. Access to data is granted based on access rights and implemented through a smart contract for each service; and

### • Physical Layer: contains the local off-chain database and the WIoT sensor data that feed the on-chain data.

Two types of data are used to serve Wholesome Coin users: first, data that are stored locally in a database (i.e., off-chain data) and are protected by health payers' internal protocols and policies; second, data that are stored in the solution's blockchain ledger (i.e., on-chain data) and are available to the user and all participants in the blockchain network. Figure 4 (Alsuwailem et al., 2019) illustrates the off-chain and on-chain data components along with the remaining system components. Users access the data through a mobile-based iOS app and a Web Dapp. First, the off-chain data store was implemented using Firebase and connected to the Xcode project using the Swift programming language, as illustrated in Figure 5 (Alsuwailem et al., 2019). All requests to access the off-chain data on Firebase were checked against an authentication list. Second, the Web Dapp for the Ethereum blockchain was developed separately from the iOS app. As shown in Figure 6 (Alsuwailem et al., 2019), the Web Dapp consists of a front-end, a back-end, and a server. Remix was used to write, compile, and deploy smart contracts written in the Solidity programming language. The Web3 provider was chosen as the execution environment and connected to the Ethereum client node at the local host. The front-end was developed using VScode to build a web page with HTML, JavaScript, and CSS. The back-end to front-end connection uses Web3 to interact with the smart contract and the HTML page. The link between them was achieved by conveying the ContractABI and contract address from Remix to the HTML page. Web3 was written on the HTML page using JavaScript. The back-end and front-end interact with the TestRPC server, an Ethereum blockchain emulator for running the transactions. When the TestRPC server is activated, it provides 10 fake Ethereum accounts with 100 Ether each, allowing calls to be made to the blockchain. The smart contracts are called via Web3 on the HTML page using an account address, and the transactions are made in the TestRPC server. Finally, the Dapp was built on a private Ethereum blockchain with three smart contracts; Truffle was used to compile and deploy it to the private Ethereum network, and the Geth server was used to run it. The Ethereum platform was chosen over Bitcoin because it is open source and supports blockchain's third generation, enabling general applications beyond currency and financial services (Swan, 2015). The Wholesome Coin application uses the platform to support the three system cases used in the healthcare sector: to collect medical data, create cryptocurrency to incentivize users, and provide financial services to redeem the coins collected as rewards (a hedged sketch of such a contract interaction is shown below).
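To illustrate what such a contract interaction looks like, the following is a hedged Python sketch using web3.py against a local Ethereum test node of the kind described above. The contract address, ABI, and the awardCoins/balanceOf functions are hypothetical stand-ins (the paper's own Dapp calls its three Solidity contracts through Web3 from JavaScript); the sketch assumes a node with unlocked test accounts and a contract already deployed at the given address.

```python
# Hedged sketch (hypothetical contract): a health payer awards coins to a
# user via a smart contract on a local Ethereum test node.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # local test node

CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"  # placeholder
CONTRACT_ABI = [  # minimal hypothetical ABI for the two assumed functions
    {"name": "awardCoins", "type": "function", "stateMutability": "nonpayable",
     "inputs": [{"name": "to", "type": "address"},
                {"name": "amount", "type": "uint256"}],
     "outputs": []},
    {"name": "balanceOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "owner", "type": "address"}],
     "outputs": [{"name": "", "type": "uint256"}]},
]

contract = w3.eth.contract(address=CONTRACT_ADDRESS, abi=CONTRACT_ABI)
payer, user = w3.eth.accounts[0], w3.eth.accounts[1]  # unlocked test accounts

# Health payer awards coins once the user's assessed score is verified.
tx_hash = contract.functions.awardCoins(user, 42).transact({"from": payer})
w3.eth.wait_for_transaction_receipt(tx_hash)

print("user balance:", contract.functions.balanceOf(user).call())
```

Because every award and redemption is a signed transaction on the shared ledger, this design gives the auditing trail described in the System Overview for free.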
## DISCUSSION

Obesity is a major problem that has a negative impact on health, societies, and economies. It is considered an epidemic that needs to be treated effectively. Overcoming obesity is challenging because, to prevent people from becoming obese, they need to be motivated and engaged in physical activities. To date, there are no cryptocurrency-based digital eHealth solutions in GCC countries that target the population and incentivize them. In this study, we proposed a solution that encourages people to become healthier by exploiting technology. The solution integrates two rising technologies: WIoT and blockchain. The digital coins employed through blockchain technology enable the concept of gamification, in which users are motivated to engage in more physical activities because the more activities the user takes part in, the more coins the user gains. Considering that the digital coins are real and can be used to buy goods and services, they can be expected to increase users' motivation. In addition, the use of blockchain, a reliable and secure platform for keeping users' data, increases the user's trust because health-related data are highly protected. Wholesome Coin can help people to easily achieve a healthier lifestyle as they move closer to becoming fit and wholesome. Inevitably, encouraging people to live healthier lives assists in preventing chronic diseases such as diabetes, high cholesterol, knee and back problems, heart diseases, and depression.

## CONCLUSION

Wholesome Coin can have a positive impact on the government's economy because the solution firmly involves government institutions as the main partners and sponsors. When people are healthy, medical costs are lower and hospital visits diminish. People are also less likely to need surgical procedures, such as sleeve gastrectomy, or medicines for diseases such as diabetes and high cholesterol. Because our solution depends on the accuracy of WIoT in measuring health-related data and on the maturity of its identity-verification techniques, the solution's success hinges mainly on the evolution of such technology. Therefore, we encourage further research into and development of WIoT in general, as well as investigation of the degree to which people are willing to use such systems in the region. In conclusion, the authors do not have concerns about the likelihood of governments using a system that depends on digital coins and blockchain, because some governments in the GCC have already started adopting projects that use similar technologies, such as Masdar in the UAE (Masdar, 2021) and NEOM in Saudi Arabia (NEOM, 2020), providing a strong indication that the Wholesome Coin system, which depends on the same technology, can be adopted and implemented in the near future. Like any digital health solution, Wholesome Coin has a few key challenges, of which adoption and misuse are paramount. Like any other commercial solution, it is prone to misuse because it involves money. Even with a tight access control model, authorized users can manipulate the system to redeem more coins; this could be managed through regular or random physical visits to verify user assessments. With regard to adoption, greenfield projects such as smart cities (Masdar, 2021; NEOM, 2020) are the ideal targets to adopt the Wholesome Coin application, as the environment attracts people with the right mindset, who are most likely to adopt new smart solutions.

## DATA AVAILABILITY STATEMENT

The original contributions presented in the study are included in the article/Supplementary Material; further inquiries can be directed to the corresponding author.
## AUTHOR CONTRIBUTIONS

HA contributed to solution development, article writing, and overall article review from the perspective of subject-matter expertise to validate the solution and design. SN conceptualized the article, designed the solution ecosystem, reviewed the literature, and wrote relevant sections of this article. SA supervised and evaluated the concepts and development of the solution and designed the architecture. AA, AA, and FA contributed to the ecosystem design and literature review.

## FUNDING

This work has received funding from the Deanship of Scientific Research at King Saud University.

## ACKNOWLEDGMENTS

The authors would like to thank Ghada Alsuwailem, Fatima Bin Rajeh, Samar Alharbi, Salmah AlQahtani, Razan Alarifi, and Shaden Alshargi for both on-chain and off-chain data implementation support, which contributed significantly to Wholesome Coin's design and implementation.

## REFERENCES

Akil, L., and Ahmad, H. A. (2011). Relationships between Obesity and Cardiovascular Diseases in Four Southern States and Colorado. J. Health Care Poor Underserved 22 (4), 61–72. doi:10.1353/hpu.2011.0166

Alqarni, S. M. (2016). A Review of Prevalence of Obesity in Saudi Arabia. J. Obes. Eat. Disord. 2, 2. doi:10.21767/2471-8203.100025

Alsalamah, S., and Nuzzolese, E. (2020). Promising Blockchain Technology Applications and Use Case Designs for the Identification of Multinational Victims of Mass Disasters. Front. Blockchain 3, 34. doi:10.3389/fbloc.2020.00034

Alsuwailem, G. N., Alrajeh, F., Aharbi, S., AlQahtani, S., AlArifi, R., and AlShargi, S. (2019). eHomeCaregiving: A Patient-Centered Blockchain for Family Caregiving [Dissertation]. Riyadh: King Saud University.

Ananthanarayan, S., and Siek, K. A. (2012). Persuasive Wearable Technology Design for Health and Wellness. In 2012 6th International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth), 236–240. doi:10.4108/icst.pervasivehealth.2012.248694

Burns, K. (2016). Estimating the Economic Cost of Obesity in Canadian Population. Manitoba, Canada: University of Winnipeg.

Cameron, A. J., Magliano, D. J., Dunstan, D. W., Zimmet, P. Z., Hesketh, K., Peeters, A., et al. (2011). A Bi-directional Relationship between Obesity and Health-Related Quality of Life: Evidence from the Longitudinal AusDiab Study. Int. J. Obes. 36 (2), 295–303. doi:10.1038/ijo.2011.103

Ching, K. W., and Singh, M. M. (2016). Wearable Technology Devices Security and Privacy Vulnerability Analysis. IJNSA 8 (3), 19–30. doi:10.5121/ijnsa.2016.8302

Colman, R. (2000). Cost of Obesity in Manitoba. Tantallon, Canada: GPI Atlantic.

Cugelman, B. (2013). Gamification: What it Is and Why it Matters to Digital Health Behavior Change Developers. JMIR Serious Games 1 (1), e3. doi:10.2196/games.3139

Diar (2018). The Digital Assets & Regulation Trade Publication. Available at: https://diar.co/ (Accessed March 15, 2020).

Fitbit (2020). Fitbit Official Site for Activity Trackers and More. Available at: https://www.fitbit.com/my/home (Accessed March 13, 2020).

Healthcoin+ (2021a). H+: A New Cryptocurrency for a New World of Care. Available at: https://www.healthcoinplus.com/ (Accessed January 16, 2021).
HealthCoin+ (2021b). HealthCoin+ Whitepaper: The Coin to Reinvent Health and Wellness Payment Systems. HealthCoin Plus. Available at: https://www.healthcoinplus.com/wp-content/uploads/2019/02/Ammended-HealthCoin-Plus-Whitepapers-with-DCRC.pdf (Accessed January 16, 2021).

Hiremath, S., Yang, G., and Mankodiya, K. (2014). "Wearable Internet of Things: Concept, Architectural Components and Promises for Person-Centered Healthcare," in 2014 4th International Conference on Wireless Mobile Communication and Healthcare - Transforming Healthcare Through Innovations in Mobile and Wireless Technologies (MOBIHEALTH), Athens, Greece, 304–307. doi:10.1109/MOBIHEALTH.2014.7015971

International Diabetes Federation (2021). The Global Impact of Diabetes. Available at: https://www.idf.org/ (Accessed January 16, 2021).

Jones, G. (2017). Universal Health Coin: The Story of a Public Benefit Corporation Creating a Cash-Based Health Cost Sharing System That Utilizes Blockchain Technology to Provide Fair Payment for Health Services. Bloomington, IN, USA: AuthorHouse.

Kuo, T.-T., Kim, H.-E., and Ohno-Machado, L. (2017). Blockchain Distributed Ledger Technologies for Biomedical and Health Care Applications. J. Am. Med. Inform. Assoc. 24 (6), 1211–1220. doi:10.1093/jamia/ocx068

Liang, X., Zhao, J., Shetty, S., Liu, J., and Li, D. (2017). Integrating Blockchain for Data Sharing and Collaboration in Mobile Healthcare Applications. In 2017 IEEE 28th Annual International Symposium on Personal, Indoor, and Mobile Radio Communications (PIMRC). doi:10.1109/pimrc.2017.8292361

Masdar (2021). Developing Clean Energy Worldwide. Available at: https://masdar.ae/ (Accessed February 25, 2021).

Mettler, M. (2016). "Blockchain Technology in Healthcare: The Revolution Starts Here," in 2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom), Munich, Germany, 1–3. doi:10.1109/HealthCom.2016.7749510

Nakamoto, S. (2008). Bitcoin: A Peer-to-Peer Electronic Cash System.

NEOM (2020). A Vision of a New Future. Available at: https://www.neom.com/ (Accessed February 25, 2021).

Poon, C. C. Y., and Zhang, Y. T. (2008). Perspectives on High Technologies for Low-Cost Healthcare. IEEE Eng. Med. Biol. Mag. 27 (5), 42–47. doi:10.1109/memb.2008.923955

Rosin, O. (2008). The Economic Causes of Obesity: A Survey. J. Econ. Surv. 22 (4), 617–647. doi:10.1111/j.1467-6419.2007.00544.x

Saudi Arabian Monetary Authority (2019). A Statement on Launching "ABER" Project, the Common Digital Currency between Saudi Arabian Monetary Authority (SAMA) and United Arab Emirates Central Bank (UAECB). SAMA 2019. Available at: http://www.sama.gov.sa/en-US/News/Pages/news29012019.aspx (Accessed March 14, 2020).
Saudi General Authority of Statistics (2018). Bulletin of Household Sport Practice Survey. Saudi Arabia: GASTAT.

Saudi Ministry of Health (2018). Health Days 2018. Available at: https://www.moh.gov.sa/en/HealthAwareness/healthDay/2018/Pages/HealthDay-2018-10-11.aspx (Accessed March 13, 2020).

Swan, M. (2015). Blockchain: Blueprint for a New Economy. 1st Edn. Sebastopol, CA, USA: O'Reilly Media Inc.

Tang, T. L.-P. (1992). The Meaning of Money Revisited. J. Organiz. Behav. 13 (2), 197–202. doi:10.1002/job.4030130209

Teng, X. F., Zhang, Y., Poon, C. C. Y., and Bonato, P. (2008). Wearable Medical Systems for P-Health. IEEE Rev. Biomed. Eng. 1, 62–74. doi:10.1109/rbme.2008.2008248

Wang, J. B., Cadmus-Bertram, L. A., Natarajan, L., White, M. M., Madanat, H., Nichols, J. F., et al. (2015). Wearable Sensor/Device (Fitbit One) and SMS Text-Messaging Prompts to Increase Physical Activity in Overweight and Obese Adults: A Randomized Controlled Trial. Telemed. e-Health 21 (10), 782–792. doi:10.1089/tmj.2014.0176

World Health Organization (2020). Noncommunicable Diseases and Their Risk Factors. Available at: https://www.who.int/ncds/introduction/en/ (Accessed March 13, 2020).

World Health Organization (2018). Noncommunicable Diseases Country Profiles 2018. Geneva: World Health Organization.

World Health Organization (2000). Obesity: Preventing and Managing the Global Epidemic: Report of a WHO Consultation (WHO Technical Report Series 894). Geneva: World Health Organization.

Wright, R., and Keith, L. (2014). Wearable Technology: If the Tech Fits, Wear It. J. Electron. Resour. Med. Libraries 11 (4), 204–216. doi:10.1080/15424065.2014.969051

Zheng, W., McLerran, D. F., Rolland, B., Zhang, X., Inoue, M., Matsuo, K., et al. (2011). Association between Body-Mass Index and Risk of Death in More Than 1 Million Asians. N. Engl. J. Med. 364 (8), 719–729. doi:10.1056/NEJMoa1010679

Conflict of Interest: SN was employed by Saudi Technology and Security Comprehensive Control Company. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Copyright © 2021 Alsalamah, Nasser, Alsalamah, Almohana, Alanazi and Alrrshaid. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice.
No use, distribution or reproduction is permitted which does not comply with these terms.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3389/fbloc.2021.654539?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3389/fbloc.2021.654539, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.frontiersin.org/articles/10.3389/fbloc.2021.654539/pdf" }
2,021
[ "JournalArticle" ]
true
2021-07-12T00:00:00
[ { "paperId": "5d9b65e48e3fdf64508a63140233e6b93272b2b1", "title": "Promising Blockchain Technology Applications and Use Case Designs for the Identification of Multinational Victims of Mass Disasters" }, { "paperId": "cf1dcc9d8ae1655b1717c531c7d74d4b2e853750", "title": "Integrating blockchain for data sharing and collaboration in mobile healthcare applications" }, { "paperId": "5bbc4181e073ec6b3ec894a35eacdc6a67e8c3a3", "title": "Blockchain distributed ledger technologies for biomedical and health care applications" }, { "paperId": "310e677ce23004fdf0a549c2cfda2ef15420d6ec", "title": "Blockchain technology in healthcare: The revolution starts here" }, { "paperId": "6736c9b913b4597f148d409e494f1c2f516db8c5", "title": "Estimating the Economic Cost of Obesity in Canadian Populations" }, { "paperId": "ed59579757a718715ef61c3346a667257464d312", "title": "WEARABLE TECHNOLOGY DEVICES SECURITY AND PRIVACY VULNERABILITY ANALYSIS" }, { "paperId": "0e9b539a268e8133c00faf875f1ba33c4444f7a9", "title": "Wearable Sensor/Device (Fitbit One) and SMS Text-Messaging Prompts to Increase Physical Activity in Overweight and Obese Adults: A Randomized Controlled Trial." }, { "paperId": "4feb9e0f8ad2921f592c964ece84197e6b557f48", "title": "Wearable Internet of Things: Concept, architectural components and promises for person-centered healthcare" }, { "paperId": "26883df8e4ffdc39b88aa80507aca09ae2f4f125", "title": "Wearable Technology: If the Tech Fits, Wear It" }, { "paperId": "a6f597c9bf763a13bed27dd108f9c3696cf97bc7", "title": "Gamification: What It Is and Why It Matters to Digital Health Behavior Change Developers" }, { "paperId": "9d0714ab9139153bcc07758cfbebb1f9645c837f", "title": "Persuasive wearable technology design for health and wellness" }, { "paperId": "7b5815200c425ccdbdf8df2401ca4020421196ff", "title": "A bi-directional relationship between obesity and health-related quality of life: evidence from the longitudinal AusDiab study" }, { "paperId": "a957e0ebf157be7fa143006be911fa1e2c519dfc", "title": "Relationships between Obesity and Cardiovascular Diseases in Four Southern States and Colorado" }, { "paperId": "440c38e1801aaa6b1a1ef7a726e00c268f6d31cb", "title": "Association between body-mass index and risk of death in more than 1 million Asians." 
}, { "paperId": "4420fca3cb722ad0478030c8209b550cd7db8095", "title": "Wearable Medical Systems for p-Health" }, { "paperId": "dabfb036d65bf3edb1e988850bad481469be526a", "title": "Perspectives on High Technologies for Low-Cost Healthcare" }, { "paperId": "5a642749219f8b8891d7a9db217147d68ce61c3a", "title": "The Economic Causes of Obesity: A Survey" }, { "paperId": "85df5cb48ec9e4de55674f3c0e7e9aae6cbee016", "title": "The meaning of money revisited" }, { "paperId": "728dc3a6ce840a675474498c75b4187218167db3", "title": "A Review of Prevalence of Obesity in Saudi Arabia" }, { "paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": null, "title": "The Global Impact of Diabetes" }, { "paperId": null, "title": "eHomeCaregiving: A Patient-Centered Blockchain for Family Caregiving [Dissertation" }, { "paperId": null, "title": "Developing Clean EnergyWorldwide" }, { "paperId": null, "title": "Universal Health Coin: The Story of a Public Benefit Corporation Creating a Cash-Based Health Cost Sharing System That Utilizes Blockchain Technology to Provide Fair Payment for Health Services" }, { "paperId": null, "title": "Cost of Obesity in Manitoba" }, { "paperId": null, "title": "Blockchain Technology inHealthcare : The Revolution StartsHere" }, { "paperId": null, "title": "The Digital Assets & Regulation Trade Publication" }, { "paperId": null, "title": "Fitbit Official Site for Activity Trackers and More" } ]
10,395
en
[ { "category": "Economics", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/017791689ef57ff1f583aff63186fdbfe925d81b
[]
0.901869
Digital Currency: Prospects And Challenges
017791689ef57ff1f583aff63186fdbfe925d81b
Journal of Economics and Sustainable Development
[]
{ "alternate_issns": null, "alternate_names": [ "J Econ Sustain Dev", "Journal of economics and sustainable development", "J econ sustain dev" ], "alternate_urls": null, "id": "1d15f08b-c12a-483a-8aad-fcf7f19f3956", "issn": "2222-1700", "name": "Journal of Economics and Sustainable Development", "type": "journal", "url": "https://www.iiste.org/Journals/index.php/JEDS" }
null
# Digital Currency: Prospects And Challenges

Dr. B. NAGARJUNA
Professor, Dept of Management Studies, Sree Vidyanikethan Institute of Management, TIRUPATI
nagarjuna1975@gmail.com

**Abstract**
The barter system is a long-established method of trading goods and services. Amid Amsterdam's rise to become Europe's largest and wealthiest city, the Amsterdamsche Wisselbank (Bank of Amsterdam) pioneered the banking idea in 1609. Online banking, often known as net banking or internet banking, is a payment system that allows bank or financial institution clients to make financial and non-financial transactions through the internet. A mobile wallet is a facility that allows you to save money on your phone in digital form. Digital money exists only in digital form and has no physical properties; computers or electronic wallets linked to the Internet or specific networks are used to perform transactions. A Central Bank Digital Currency (CBDC) is an electronic form of central bank money that may be used to make payments by both individuals and companies. The Reserve Bank of India has the option of launching its own digital currency. Because the majority of Indians do not have bank accounts, cash must continue to circulate. According to the author, CBDCs will require further clarification in the coming days, and much will depend on how the concept evolves in India. CBDCs should not be structured in such a way that they obstruct the RBI's capacity to carry out its current responsibilities.

**Keywords: Barter system, Currency, Digital Currency, CBDC**
**DOI: 10.7176/JESD/13-6-03**
**Publication date: March 31st, 2022**

1. INTRODUCTION

1.1 Concept of Bartering
Bartering has a long and illustrious history that dates back to 6000 BC. Mesopotamian tribes invented bartering, which the Phoenicians embraced. Goods were exchanged for food, tea, swords, and spices. Salt was another product that was regularly traded. During the Middle Ages, Europeans travelled all over the world, swapping crafts and furs for silks and perfumes. Colonial Americans exchanged musket balls, deer hides, and wheat (Mint, 16 Dec. 2014). A barter system is an old-fashioned way of exchanging products and services. This method has been used for millennia, long before money was invented. Bartering used to be restricted to persons who lived in the same geographical area, but it is now a global phenomenon. The opposite party may decide on the value of the bartered commodities. You may acquire things by swapping something you already have but don't want or need. Today, much of this trade is done through internet auctions and swap marketplaces. Bartering revived during the Great Depression of the 1930s due to a lack of cash. It was carried out in groups or by individuals acting as bankers: if something was sold, the owner's account was credited and the buyer's account was debited.

1.2 Merit and Demerit of Bartering
Without spending any money, two parties can receive what they desire or need from one another through bartering. A complication of bartering is determining how trustworthy the person one is negotiating with is. Because good bartering needs expertise and experience, it may be wise to limit deals to family and friends at first.

1.3 Concept of Money
The term "money" can apply to a wide range of things. On the one hand, someone who claims they have a lot of money usually means they are wealthy. On the other hand, for economists, money has a very specific meaning.
Money is defined as "something that is commonly accepted in return for goods and services or in the repayment of debts" (Mishkin, 1992). Money, whether it is made of gold, silver, or other metals; paper; beads; or diamonds, performs three functions in every economy: it is a unit of account, a medium of exchange, and a store of value (Mankiw, 1999; McLeay, Radia, and Thomas, 2014).

1.4 Classification of Money
The following are the different types of money that circulate in an economy:

_Figure 1. Classification of Money; Source: Geoffrey Lightfoot (2015)_

1.4.1 Full-bodied money: a sort of money whose value as money is the same as its value as a commodity, such as gold coins.
1.4.2 Token money/credit money/paper money: money whose value as money is far higher than its value as a commodity, e.g., printed money.
1.4.3 Representative full-bodied money: a type of token money that is backed by an equivalent amount of bullion held by the issuing authorities (gold and silver in bulk).
1.4.4 Legal tender money: issued by the central bank (Reserve Bank of India) in the form of cash, banknotes, and coins.
1.4.5 Local currencies: include quasi-banknotes, WIR[1], and other forms of paper money.
1.4.6 Virtual currencies: such as Bitcoin, Litecoin, and Ripple, covering both centralized and peer-to-peer digital currencies.

1.5 Concept of Currency
For more than 3,000 years, some type of currency has been in use. Money, often in the form of coins, proved to be critical in allowing cross-continental commerce. Currency is a unit of account that may be used to purchase and sell goods and services. In a nutshell, it is paper or metal money that is commonly issued by a government and widely recognized as a form of payment at face value. Currency has long since supplanted bartering as the principal way of exchanging goods and services in the contemporary world (Jake Frankenfield, 2020). Currency is a widely used method of payment that is usually issued by a government and distributed within its borders. The value of every currency varies continually relative to other currencies; the purpose of the currency exchange market is to benefit from these movements. Many nations accept the US dollar as a form of payment, while others have their currencies pegged to the US dollar.

1.6 Concept of Bank Money
The growth of trade and commerce necessitated the creation of convenient exchangeable forms of money. The Amsterdamsche Wisselbank (the Bank of Amsterdam) created the notion of bank money in 1609, amid Amsterdam's rise to prominence as Europe's biggest and wealthiest city. It functioned as an exchange bank, allowing people to deposit money or bullion and retrieve the money or bullion's value (George A. Selgin, 2020). The initial decree that formed the bank also stipulated that any invoices of 600 guldens or more had to be paid through the bank, that is, by transferring deposits or credits at the bank.

1.7 Concept of Online/Internet Banking
Internet banking, sometimes referred to as net banking or online banking, is a payment system that allows bank or financial institution clients to conduct financial and non-financial transactions through the internet.
Customers can use this service to do almost every banking operation that was previously only available at a local branch, such as cash transfers, deposits, and online bill payments. Any active bank account holder or member of a financial institution who has registered for online banking at a bank is entitled to use it. A client who has signed up for online banking no longer has to go to the bank every time he or she needs financial services.

[1] The WIR Bank, formerly the Swiss Economic Circle (German: Wirtschaftsring-Genossenschaft), or WIR, is an independent complementary currency system.

1.8 Concept of Mobile Wallet
A mobile wallet is also known as an m-wallet, digital wallet, or e-wallet. It works like a traditional wallet, but on a mobile device. A mobile wallet is a payment service that enables customers to send and receive money using their smartphones. It is a type of e-commerce model designed specifically for mobile devices to deliver ease of banking transactions and access to banking information. A mobile wallet, also sometimes referred to as a mobile money wallet or a mobile money transfer wallet, allows you to save money in digital form on a mobile phone.

1.9 Digital Payments in India
Table 1 shows the volume and value growth of digital payments. The data were obtained from the table below, and the trend percentage was computed using 2015-16 as the base year. Over five years, from 2015-16 to 2019-20, the volume of digital payments grew to 578.59 percent of the base-year level. Over the same period, the value of digital payments grew to 176.35 percent of the base-year level. However, the mean value per payment is decreasing: it was 1550.48 in 2015-16, falling to 1156.72 in 2016-17, 938.90 in 2017-18, 699.21 in 2018-19, and 472.57 in 2019-20. This is evidenced by the mean trend percentage.

**Table 1. Digital payments trend in India**

|Year|Volume in lakhs|Volume Trend Percentage|Value in Rs Crore|Value Trend Percentage|Mean value per payment (Value ÷ Volume, Rs crore per lakh payments)|Mean Trend Percentage|
|---|---|---|---|---|---|---|
|2015-16|59361|100|92038330|100|1550.48|100|
|2016-17|96912|163.26|112099726|121.80|1156.72|74.60|
|2017-18|145901|245.79|136986734|148.84|938.90|60.56|
|2018-19|234340|394.77|163852286|178.03|699.21|45.10|
|2019-20|343455|578.59|162305934|176.35|472.57|30.48|

_Source: RBI Handbook of Indian Statistics (2020)_

1.10 Banks on UPI – Volume – Value
Table 2 shows the number of banks live on the Unified Payments Interface (UPI), the volume of transactions in millions, and the value (in Rs crore) during the most recent 13 months, January 2021 to January 2022.

**Table 2. Banks on UPI – Volume – Value**

|Month|No. of Banks live on UPI|Banks Growth Percentage|Volume (in Mn)|Volume Growth Percentage|Value (in Rs Crore)|Value Growth Percentage|
|---|---|---|---|---|---|---|
|Jan-21|207|-|2,302.73|-|4,31,181.89|-|
|Feb-21|213|2.90|2,292.90|-0.43|4,25,062.76|-1.42|
|Mar-21|216|4.35|2,731.68|18.63|5,04,886.44|17.09|
|Apr-21|220|6.28|2,641.06|14.69|4,93,663.68|14.49|
|May-21|224|8.21|2,539.57|10.29|4,90,638.65|13.79|
|Jun-21|229|10.63|2,807.51|21.92|5,47,373.17|26.95|
|Jul-21|235|13.53|3,247.82|41.04|6,06,281.14|40.61|
|Aug-21|249|20.29|3,555.55|54.41|6,39,116.95|48.22|
|Sep-21|259|25.12|3,654.30|58.69|6,54,351.81|51.76|
|Oct-21|261|26.09|4,218.65|83.20|7,71,444.98|78.91|
|Nov-21|274|32.37|4,186.48|81.81|7,68,436.11|78.22|
|Dec-21|282|36.23|4,566.30|98.30|8,26,848.22|91.76|
|Jan-22|297|43.48|4,617.15|100.51|8,31,993.11|92.96|

_Source: https://www.npci.org.in/what-we-do/upi/product-statistics_

The growth of each of the variables was estimated independently against January 2021, including the number of banks live on UPI, the volume of transactions in millions, and the value of transactions (in Rs crore). From 207 in January 2021 to 297 in January 2022, the number of banks live on UPI increased by 43.48 percent. The total number of UPI transactions doubled (100.51% growth), and UPI transaction value increased by 92.96 percent.
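The trend and growth percentages in both tables are plain ratios against the base period (2015-16 for Table 1, January 2021 for Table 2). The following short Python check, using only the raw figures quoted above, reproduces the tabulated columns.

```python
# Quick check of the ratios behind Tables 1 and 2.
volumes = {"2015-16": 59361, "2016-17": 96912, "2017-18": 145901,
           "2018-19": 234340, "2019-20": 343455}           # volume in lakhs
values = {"2015-16": 92038330, "2016-17": 112099726, "2017-18": 136986734,
          "2018-19": 163852286, "2019-20": 162305934}      # value in Rs crore

BASE = "2015-16"
for year in volumes:
    vol_trend = 100 * volumes[year] / volumes[BASE]        # Table 1, trend of volume
    val_trend = 100 * values[year] / values[BASE]          # Table 1, trend of value
    mean_value = values[year] / volumes[year]              # Table 1, value/volume ratio
    print(f"{year}: {vol_trend:6.2f} {val_trend:6.2f} {mean_value:8.2f}")

# Table 2 growth is computed the same way against January 2021,
# e.g. banks live on UPI: 207 -> 297 gives 43.48 percent growth.
print(f"banks growth: {100 * (297 / 207 - 1):.2f}%")
```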
1.11 Reserve Bank – Digital Payments Index (RBI-DPI)
The RBI-DPI comprises five broad parameters that enable measurement of the depth and penetration of digital payments in the country over different time periods. These are: (i) Payment Enablers (weight 25%), (ii) Payment Infrastructure – Demand-side factors (10%), (iii) Payment Infrastructure – Supply-side factors (15%), (iv) Payment Performance (45%), and (v) Consumer Centricity (5%). Each parameter has sub-parameters that contain various measurable indicators. The sub-parameters under each parameter are described below (RBI, 2021):

1.11.1 Payment enablers include: internet connection, mobile connection, Aadhaar cards, bank accounts, participants, and merchants.
1.11.2 The demand side of payment infrastructure includes: debit cards, credit cards, other prepaid payment instruments, registered mobile customers, and online banking.
1.11.3 The supply side of payment infrastructure includes: bank branches, business correspondents, ATMs, POS terminals, QR codes, and intermediaries.
1.11.4 Payment performance includes: digital payment systems' volume and value, unique users, paper clearing, currency in circulation, and cash withdrawals.
1.11.5 Consumer centricity includes: awareness and education, declines, complaints, frauds, and system downtime.

The Reserve Bank of India (RBI) constructed the composite Reserve Bank – Digital Payments Index (RBI-DPI), with March 2018 as the base period, to reflect the level of digital payments in the country. The September 2021 index stands at 304.06, up from 270.59 in March 2021. According to the RBI-DPI, the adoption and deepening of digital payments continue to rise. The index values since inception are shown in Table 3 (RBI, 2022).

**Table 3. Growth of RBI-DPI Index**

|Period|RBI-DPI Index|
|---|---|
|March 2018 (Base)|100|
|March 2019|153.47|
|September 2019|173.49|
|March 2020|207.84|
|September 2020|217.74|
|March 2021|270.59|
|September 2021|304.06|

_Source: https://www.rbi.org.in/Scripts/BSPressRelease_

1.12 Concept of Digital Currency
Digital currencies are only available in digital form and have no physical qualities. Digital currency transactions are carried out using computers or electronic wallets connected to the internet or specific networks. Digital currencies also enable cross-border transactions to be completed quickly. A person in the United States, for example, can send digital money to a counterparty in any nation as long as they are both connected to the same network.

1.12.1 Digital currency: money that is available only in digital or electronic form, whether regulated or unregulated.
1.12.2 Virtual currency: an unregulated digital currency that is controlled by its developer(s), its founding organization, or its defined network protocol.
1.12.3 Cryptocurrency: a virtual currency in which cryptography is used to safeguard and verify transactions as well as to govern and control the generation of new currency units.
1.12.4 Central Bank Digital Currencies: A Central Bank Digital Currency (CBDC) would be an electronic form of central bank money that could be used by people and businesses to make payments (Tobias Adrian & Tommaso Mancini-Griffoli, 2019). CBDCs are digital currencies that are controlled and issued by a country's central bank and can be used in addition to or instead of traditional fiat currency. Unlike fiat currency, which is available in both physical and digital forms, a CBDC is available solely in digital form. The United Kingdom, Sweden, and Uruguay are among the countries contemplating the introduction of a digital form of their national currency (Bank of England, 2020).
1.12.5 Design of CBDC: The March 2020 CBDC discussion paper lays out an example of a CBDC platform for storing value and facilitating UK payments by households and enterprises, depicted in Figure 2.

_Figure 2. Proposed design of CBDC of Bank of England; Source: Bank of England (2020)_

1.12.6 Types of digital central bank money: CBDCs come in two designs. The first offers cash-like, token-based access and payment anonymity: individual users would be able to access the CBDC using a password-like credential, a digital signature employing private-public key cryptography, without having to identify themselves. The alternative option, which would be based on a digital identification scheme, relies on validating users' identities, as depicted in Figure 3.

_Figure 3. Forms of digital currency. Source: BIS Annual Economic Report (2021), BIS elaboration_

1.12.7 Features of CBDC vis-à-vis Traditional Currency: The Bank for International Settlements (BIS) produced a paper on central bank digital currencies in January 2019, noting the currency's four important characteristics: issuer (central bank or not), form (digital or virtual), accessibility (wide or limited), and technology. It distinguishes three kinds of CBDC. The features of CBDC and the options for its introduction, in comparison with traditional currency, are presented in Table 4.

**Table 4. Features of CBDC vis-à-vis Traditional Currency**

|Features|Existing central bank money: Cash|Existing central bank money: Reserves and settlement balances|General purpose CBDC: Token-based|General purpose CBDC: Account-based|CBDC: Token for wholesale|
|---|---|---|---|---|---|
|24/7 availability|Yes|No, but possible|Yes|Yes|Yes|
|Anonymity vs. central bank|Yes|No|Yes|No|Yes|
|Peer-to-peer transfer|Yes|No|Yes|No|Yes|
|Interest bearing|No|Yes, but subject to central bank policy|Yes|Yes|Yes|

_Adapted from Ashok K Nag (2021); source: BIS Markets Committee paper (No. d174), Central bank digital currencies, March 2018, p. 6._

In the first variant, the central bank acts as a bank that allows people to open accounts with it and transfer balances between account holders. This would be widely available and targeted at retail transactions (but also available for general use); it is called an account-based CBDC. The second variation would be similar to cash: a "general purpose," token-based design. A token-based system is also called a "value-based" system, as each token represents a certain amount of cash available in an existing bank. The final form of CBDC would be a "wholesale," token- or value-based variant: a restricted digital token for wholesale settlements (e.g., interbank payments or securities settlement). The BIS Committee on Payments and Market Infrastructures summarised the features of these different types of CBDC in the table above.

1.12.8 Interoperability between some key CBDC roles and functions: Interoperability means that collaboration, programming, or data transmission across several operational units requires the user to have little or no understanding of the unique features of those units, together with the technical or legal compatibility that permits a system or technique to be used in combination with other systems or methods (Bank for International Settlements, 2021); the rest is self-explanatory in Figure 4.

_Figure 4. Interoperability between some key CBDC roles and functions. Source: David MacKeith (2020)_
2. ROLE OF DIGITAL CURRENCY

CBDCs can be utilized by people and businesses (retail CBDCs) or in interbank transactions (wholesale CBDCs), with the former lauded as a smoothing element in global finance since it allows universal access to digital money. Eighty-one (81) nations are studying CBDCs, accounting for over 90% of global GDP. Pilots have been tried in 14 nations, and similar currencies are being developed in 16 countries and researched in 32 countries. China has plans to implement the digital yuan at the Winter Olympics next year, and in the next few weeks the Bank of England intends to launch a trial program for its own digital currency (Abhinav Singh, 2021). Henri Arslanian, a PwC partner and global crypto lead, remarked that "that is a big milestone in the evolution of money." Only two countries now employ CBDCs, with the Bahamas' 'Sand Dollar' starting in October 2020 and Nigeria's 'e-Naira' launching in October of this year.

The digital currency has a clear set of goals: to improve payment speed, efficiency, and security; to reduce the cost of financial services and increase investment in people of all ages and socioeconomic backgrounds; and to tighten control over money laundering, fraud, and other financial crimes. The Reserve Bank of India (RBI) is considering launching a pilot of India's own digital currency, which might be a significant step forward in the future management and spending of money. It is critical to keep in mind that the goal is not to raise money or to imitate cryptocurrencies. Such currencies are known as "central bank digital currencies" (CBDCs), and they will function similarly to the current system. CBDCs are particularly appealing to growing economies such as India. Unbanked persons continue to make up a significant portion of the population, and the CBDC can help with financial inclusion at the national level. The RBI can create the CBDC using either its centralized ledger system or the decentralized blockchain idea: a centralized system provides greater control, whilst a decentralized system is said to be more efficient. Experts recognize that digital currencies have all the internal benefits of fiat currencies, such as being durable, portable, divisible, and fungible. Being digital also makes the currency easier to secure, store, and track, thereby enhancing the existing benefits of paper money (Abhinav Singh, 2021).

In addition, the RBI must decide whether the CBDC will be wholesale, retail, or a combination of both. A wholesale CBDC is a digital currency used by financial institutions, whereas a retail CBDC is used by the general public. The objective behind a wholesale CBDC is for financial institutions to use it to settle their accounts by transacting with one another in central bank money. Much of wholesale business has already been digitized in numerous ways, with institutions settling transactions using central bank reserves. As a result, even if the RBI switches from digital reserves to a wholesale CBDC, the overall picture may not alter significantly. A major challenge is the construction of a commercial or retail CBDC, which presents a wide range of complexities related to distribution, bank stability, and technology platforms. The RBI distributes physical banknotes through its large cash-register and bank-branch network.
Although the current distribution system could be replicated for CBDC, there is another, very direct option in which the central bank distributes the CBDC to the public itself. CBDC might give the Bank a variety of options to pursue its goals of monetary and financial stability, as depicted in Figure 5. The merits of CBDC are:

a) Assists in maintaining a stable payment environment.
b) Prevents the creation of new forms of private money.
c) Promotes payment efficiency, competitiveness, and innovation.
d) Satisfies future payment expectations in a digital economy.
e) Increases the availability and usability of central bank money.
f) Deals with the consequences of a cash shortfall.
g) Assists in the improvement of cross-border payments.
h) Reduces reliance on tangible currency.
i) Saves costs on printing physical cash.
j) Makes it possible to develop a reliable and fast settlement system.
k) In (forex) currency transactions, the time zone difference is eliminated.

_Figure 5. CBDC - Opportunities; Source: Bank of England (2020)_

3. TIMELINE OF CBDC RESEARCH ANNOUNCEMENTS IN INDIA

The research on CBDC and the official announcements of the RBI are as follows:
3.1 In 2016, a Government Committee highlights the advantages of CBDC implementation.
3.2 In 2018, the RBI bans regulated firms from trading in digital currencies.
3.3 The Government Committee is undecided on whether or not the CBDC should be adopted.
3.4 In 2020, the governor of the Reserve Bank of India remarks that it is too early to discuss CBDC implementation.
3.5 In 2021, CBDC is included in the RBI's Payment Systems Booklet as part of the RBI's roadmap. Legislation requiring the Reserve Bank of India to issue an official digital currency is listed on the Lok Sabha agenda. The RBI's Deputy Governor indicates that an internal committee is due to announce its CBDC conclusions.

For the 2022-2023 financial year, which runs from April 1, 2022, India's central bank will issue a digital rupee. Nirmala Sitharaman, India's finance minister, said the implementation of the digital rupee would be based on "blockchain and other technologies." If its plans are successfully followed, India will become one of the world's leading economies in developing a so-called central bank digital currency (CBDC), following in the footsteps of China, which is exploring the digital yuan. Regarding CBDC's main motivations and objectives, several projects are underway to achieve India's payment system policy: the RBI is trying to ensure that the currency management system works very well and is very inexpensive, and, in addition, that the digital economy will prosper.

4. POTENTIAL ISSUES IN CBDC

The RBI's physical cash is currently in circulation. Everyone can hold physical currency, ensuring privacy and anonymity in transactions. People also have access to online banking, which includes RTGS (Real-Time Gross Settlement), NEFT (National Electronic Funds Transfer), and IMPS (Immediate Payment Service). Many business organizations, such as Google Pay, PhonePe, Amazon Pay, and so on, also provide mobile wallet money services. The Reserve Bank of India has the possibility of launching its own wallet, in which case the existing wallet service providers would no longer be needed. Because the majority of Indians do not have bank accounts, the usage of digital money is questionable, necessitating the continued circulation of physical cash.
Even after two decades of mobile phone use in India, a significant portion of the population is still without one, and internet penetration is yet to be improved.

_Figure 6. Digital Money – Challenges. Source: Erik Feyen, Jon Frost, Harish Natarajan (2020)_

Figure 6 depicts six major development, macroeconomic, and cross-border challenges as perceived by analysts. Anti-money laundering (AML) and counter-terrorist financing (CFT) are two development issues. CBDCs may have some drawbacks as well. Bringing a central bank digital currency to the market runs contrary to the concept of decentralization. If the central bank issues more digital currency without supporting gold reserves, it could lead to higher inflation, which harms the development of the economy. The main challenges will always be user adoption, acceptance, and security. If governments use technology and find a way to control the flow of digital payments, we can expect more competition in the years to come. Cryptocurrencies will continue to provide a variety of business application cases, from the arts, finance, and advertising to the supply chain. Some point out that user adoption could be a major setback for the smooth rollout of CBDC in India.

5. PRINCIPLES OF EFFECTIVE USE OF DIGITAL CURRENCY

The following are the principles to make CBDC effective and successful:

5.1 CBDC with support of gold, equities, bonds, and other financial assets
CBDC is a digital currency created by the Reserve Bank of India that is backed by assets like gold, equities, bonds, and other financial assets recognized by the RBI. With CBDC, risk is reduced, flexibility is increased, and worldwide adoption is facilitated by the central bank guarantee (Abhinav Singh, 2021).

5.2 Speedy money transfers for investment purposes and financial inclusion
CBDC has the potential to significantly enhance money transfers from the central bank to commercial banks, while also reaching clients considerably more quickly than the existing method. Financial inclusion can also be built into the CBDC, especially if transfers can be made for investment purposes, benefiting millions of citizens who need money but are currently unbanked or have restricted access to banking services.

5.3 Monetary policy development
The RBI's move to roll out CBDC could significantly boost India's monetary policy development. Experts point out that improved oversight and real-time monitoring of digital funds by the central bank could go a long way in promoting these processes. The central bank's efforts to be at the forefront of digital innovation can help to develop an ecosystem-friendly system, such as UPI, that will reduce end-customer inefficiency and create greater opportunities for entrepreneurs.

5.4 Design of CBDC to curb illegal money transfers
The CBDC can allow governments to deal effectively with illegal activities, such as payment fraud, and give people a greater sense of security with their money. Digital currencies create huge barriers to illegal activity, as tangible money can help hide and transfer funds outside regulated financial systems. With the traceability of CBDCs, payments and transfers will be easier to identify and trace back to their sources, significantly reducing the risk of fraud and money laundering.

5.5 Retail CBDCs will be strong and secure
The RBI is not yet clear on whether India's CBDC will be account-based or token-based.
Retail CBDCs will strengthen the digital payment system in India by making it more robust and accessible. As a fiat currency, the digital rupee should be used for service and transportation charges first. CBDCs should work with existing payment methods such as cash and digital payments. A CBDC is an effort to make central bank digital money accessible to the general public without sole reliance on the banking system, as it is a digital fiat currency.

5.6 CBDC for international payments
In the context of cross-border payments, India can gain through the digital rupee, especially in countries like Bhutan, Saudi Arabia, and Singapore, where the National Payments Corporation of India (NPCI) has plans in place for digital payments. The effectiveness of CBDCs will depend on factors such as the structure of confidentiality and governance. A general-purpose CBDC must have the same anonymity as cash and be recognized as valid tender to gain acceptance.

5.7 CBDC is a forward-thinking move toward a cashless economy
Experts say that a central bank's digital money is a direct liability of the central bank. There is less volatility in CBDCs compared to private blockchain-based funds. This helps to prevent fraudulent activities and is a further step towards a cashless economy. Besides, it will certainly make the banking system more efficient.

5.8 Rethink and revise the RBI's role
For the time being, the general public only has access to central bank money in the form of cash. With the rise of digitalization and the reduction of currency, the CBDC might assist the RBI in maintaining a direct relationship between central banks and individuals (retail CBDC), which could aid public awareness of central banks' functions and the need for independence. This is especially important if the RBI wishes to maintain its independence in a key sector, such as retail payments.

5.9 Cross-border payments should be improved
India may take the lead in developing potential CBDC use cases for improving cross-border payment efficiency. The current correspondent banking paradigm results in a time-consuming and costly procedure. The development of essential standards to ensure interoperability would necessitate international cooperation. This will also necessitate a re-evaluation of each country's legislative system, which may be difficult.

5.10 CBDC design should be able to prevent financial crimes
CBDC has the potential to increase a country's capacity to tackle financial crimes such as money laundering and tax evasion, among other things. It has been suggested that an account-based CBDC, rather than a token-based architecture, might be better suited to enabling this traceability (a minimal illustration of the two access models is sketched after this list of principles). If these elements are not present, CBDC might create a new route for financial crimes. Separately, the impact on privacy will have to be examined, depending on the degree of traceability incorporated into the CBDC architecture.

5.11 Private digital currency backed by the risk-free central bank
If privately produced digital currencies outperform conventional payment systems in terms of usefulness and efficiency, they will be widely accepted. A well-designed CBDC with improved payment facilities, backed by risk-free central bank money, should help to diminish the demand for alternative currencies. This must be supplemented by measures to guarantee that domestic payment systems can support the population's payment demands, both domestically and internationally.
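To make the token-based versus account-based distinction (Sections 1.12.6 and 5.10) concrete, here is a minimal Python sketch of the token-style access model, in which the system verifies a signature over the payment rather than the payer's identity. It uses the `cryptography` package's Ed25519 primitives; the token payload fields are illustrative inventions, not part of any CBDC specification.

```python
# Token-based access: validity hinges on a cryptographic signature,
# not on identifying the holder (account-based designs would instead
# check the payer's verified identity against a ledger).
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

holder_key = Ed25519PrivateKey.generate()        # kept in the holder's wallet
holder_pub = holder_key.public_key()             # bound to the token at issuance

payment = b"token-id:42|amount:100|payee:merchant-7"   # hypothetical payload
signature = holder_key.sign(payment)

# The payee (or settlement system) verifies the signature only:
try:
    holder_pub.verify(signature, payment)
    print("valid transfer - holder never identified")
except InvalidSignature:
    print("transfer rejected")
```

This is why an account-based design, which authenticates an identity rather than a bearer signature, is the variant usually associated with traceability.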
6. CONCLUSION

With the large-scale distribution and acceptance of digital currencies, India has a unique opportunity to lead the world. CBDCs will need more clarification of the concept in the coming days, and much will depend on how the concept evolves in India, which is primarily a paper and physical currency market. A well-considered regulatory plan for CBDC issuance in India is required, along with a consultative approach involving relevant stakeholders. Regardless of the benefits and use cases outlined by the RBI, CBDC research in India must adhere to the key principles. CBDCs should not be designed in a way that limits the Reserve Bank of India's (RBI) ability to carry out its current mandate. CBDC issuance must yield increased payment efficiency in India; its issuance should not be driven primarily by the emergence of privately created currencies like cryptocurrencies and stablecoins.

**BIBLIOGRAPHY**

Abhinav Singh (2021). RBI's digital currency plan: Challenges, risks, and benefits. RBI is working on a phased implementation strategy for its digital currency. The Week, July 26th, 2021. https://www.theweek.in/news/biz_tech/2021/07/26/rbi-digital-currency-plan-challenges-risks-and-benefits.html

Amol Agarwal (2021). Cryptocurrency | What happens when RBI issues a digital currency? https://www.moneycontrol.com/news/opinion/cryptocurrency-what-happens-when-rbi-issues-a-digital-currency-7780241.html

Ashok K Nag (2021). A Proposed Architecture for a Central Bank Digital Currency for India. ORF Occasional Paper No. 340, December, Observer Research Foundation, pp. 1-47.

Bank of England (2020). Central Bank Digital Currency: opportunities, challenges, and design. A Discussion Paper. https://www.bankofengland.co.uk/paper/2020/central-bank-digital-currency-opportunities-challenges-and-design-discussion-paper

"Barter System History: The Past and Present." Mint, 16 Dec. 2014. www.mint.com/barter-system-history-the-past-and-present

David MacKeith (2020). The future of money is digital: How the cloud can deliver solutions for central bank digital currencies. AWS Public Sector Blog. https://aws.amazon.com/blogs/public-sector/future-money-digital-how-cloud-deliver-solutions-central-bank-digital-currencies/

Erik Feyen, Jon Frost, Harish Natarajan (2020). Digital money: Implications for emerging market and developing economies. https://voxeu.org/article/digital-money-implications-emerging-market-and-developing-economies

Geoffrey Lightfoot (2015). Price Fluctuations and the Use of Bitcoin: An Empirical Inquiry. International Journal of Electronic Commerce 20 (1), 9-49.

George A. Selgin (2020). Bank Finance. https://www.britannica.com/topic/bank

Jake Frankenfield (2020). Currency. https://www.investopedia.com/terms/c/currency.asp

Mankiw, N.G. (1999). Macroeconomics. New York: Worth Publishers.

Michael McLeay, Amar Radia and Ryland Thomas (March 2014). Money in the modern economy: an introduction. Bank of England Quarterly Bulletin. https://www.bankofengland.co.uk/quarterly-bulletin/2014/q1/money-in-the-modern-economy-an-introduction

Mishkin, F.S. (1992). The Economics of Money, Banking, and Financial Markets. New York: Harper Collins Publishers.

RBI (2021). Reserve Bank of India introduces the RBI-Digital Payments Index. RBI-Digital Payments Index – Parameters and Sub-parameters. www.rbi.org.in

RBI (2022). Reserve Bank of India announces Digital Payments Index for September 2021. RBI-Digital Payments Index. www.rbi.org.in
Sneha Kulkarni (2021). India's Central Bank Digital Currency (CBDC): Advantages and Disadvantages of CBDCs. https://www.goodreturns.in/classroom/india-s-central-bank-digital-currency-cbdc-advantages-and-disadvantages-of-cbdcs-1221827.html

Susanne König (2001). The Evolution of Money: From Commodity Money to E-Money. UNICERT IV Program, MBA Dissertation Report.

Tobias Adrian & Tommaso Mancini-Griffoli (2019). The Rise of Digital Money. FinTech Notes, International Monetary Fund (IMF), pp. 1-20.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.7176/jesd/13-6-03?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.7176/jesd/13-6-03, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "HYBRID", "url": "https://www.iiste.org/Journals/index.php/JEDS/article/download/58401/60296" }
2,022
[]
true
2022-03-01T00:00:00
[]
10,124
en
[ { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0178afd777c806ffc2b65b809cafacbaf478b211
[]
0.883499
Communication Resource Allocation of Raft in Wireless Network
0178afd777c806ffc2b65b809cafacbaf478b211
IEEE Sensors Journal
[ { "authorId": "2014488397", "name": "Dachao Yu" }, { "authorId": "2116887965", "name": "Yao Sun" }, { "authorId": "2168999436", "name": "Yuetai Li" }, { "authorId": "1720539", "name": "L. Zhang" }, { "authorId": "2113544497", "name": "M. Imran" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE Sens J" ], "alternate_urls": [ "http://ieee-sensors.org/sensors-journal/", "https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?puNumber=7361", "http://www.ieee-sensors.org/journals", "https://ieee-sensors.org/sensors-journal/" ], "id": "b210fd3d-11d7-478e-a0aa-7e3d2a4f482d", "issn": "1530-437X", "name": "IEEE Sensors Journal", "type": "journal", "url": "http://ieeexplore.ieee.org/servlet/opac?punumber=7361" }
The distributed consensus intends to improve the reliability of critical decision making in wireless connected autonomous systems. The performance of distributed consensus heavily depends on the reliability of wireless links, which should be stochastic with limited communication resources. Therefore, advanced communication resource allocation schemes are needed to achieve high reliability and low latency for the distributed consensus. This article first derives optimized resource allocation schemes for the distributed consensus. The optimal number of nodes for the best reliability performance of the distributed consensus is also investigated to solve the inadequate overall communication resources issue. The revealed derivation and simulation results can provide guidelines to deploy the appropriate paradigm of communication resource allocation in autonomous wireless systems.
Yu, D., Sun, Y., Li, Y., Zhang, L. and Imran, M. (2023) Communication resource allocation of Raft in wireless network. IEEE Sensors Journal, (doi: 10.1109/JSEN.2023.3293715). There may be differences between this version and the published version. You are advised to consult the publisher's version if you wish to cite from it.

https://eprints.gla.ac.uk/302436/

Deposited on: 6 July 2023. Enlighten – Research publications by members of the University of Glasgow, https://eprints.gla.ac.uk

# Communication Resource Allocation of Raft in Wireless Network

Dachao Yu, Yao Sun, Yuetai Li, Lei Zhang, and Muhammad Imran

**Abstract—** The distributed consensus intends to improve the reliability of critical decision making in wireless connected autonomous systems. The performance of distributed consensus heavily depends on the reliability of wireless links, which should be stochastic with limited communication resources. Therefore, advanced communication resource allocation schemes are needed to achieve high reliability and low latency for the distributed consensus. This article first derives optimized resource allocation schemes for the distributed consensus. The optimal number of nodes for the best reliability performance of the distributed consensus is also investigated to solve the inadequate overall communication resources issue. The revealed derivation and simulation results can provide guidelines to deploy the appropriate paradigm of communication resource allocation in autonomous wireless systems.

**Index Terms—** Distributed consensus, Reliability, Latency, Resource allocation

I. INTRODUCTION

Industrial scenarios, such as autonomous vehicles and industrial robots, usually require high reliability and low latency in critical decision-making within the network and in essential data processing for distributed sensors and IoT devices. In these scenarios, local nodes in the network can collect data, make initial decisions, and send global consents to the joint nodes in the network. This is especially pertinent in diverse 5G-enabled networks, which include ultra-reliable communication for critical applications, vehicle-to-vehicle coordination for enhanced road safety, reliable cloud connectivity for seamless data exchange, and real-time virtualization to enable efficient network services. For example, a decentralized approach has been proposed for decision making in autonomous driving [1], which presents that the local nodes in distributed networks can collect data, make initial decisions, and send the global consents (i.e., consensus) to the joint nodes in the distributed network. Because critical decision making is reliability-intensive and latency-sensitive, a mechanism is required to enhance the reliability of decision-making in critical scenarios, and distributed consensus can work as the fault-tolerant protocol for critical decision making in this scheme [2].

Distributed consensus, which has been prevalently applied to distributed ledger technology (DLT), is defined as a protocol to ensure all normal nodes in the system can achieve agreement on unified states, even if the network suffers from a certain amount of faulty processes or attacks [3]. Therefore, the distributed consensus can work as an interior algorithm that evaluates the decisions based on the collected information by nodes.
In the protocol of a distributed consensus, every participant is capable of transmitting and receiving the commands that switch the state of replicas, provided it follows the specified fault-tolerant protocol. Crash failure and Byzantine failure are two types of errors that may occur in a distributed system. Crash failure refers to a failure in which a process abruptly stops and cannot resume. Crash fault tolerance (CFT) protocols, such as Raft [4] and Paxos [5], aim to manage reliable state duplication and prevent system breakdown caused by node crash failures. Byzantine failure represents malicious behaviors by an adversary, including contradictory commands, communication aborts, and lengthy intentional delays of critical messages, which are more disruptive to the system than crash failures. Corresponding Byzantine fault tolerance (BFT) protocols, such as PBFT [6] and HotStuff BFT [7], have been introduced to decentralized systems to guard against potential malicious attacks [8].

In both CFT and BFT protocols, communication acts as a critical enabler to ensure that every node can exchange its state information with others in the distributed consensus. Currently, most distributed consensus is deployed over stable wired communication [9]. However, the majority of the upcoming generation of IoT networks tend to be wireless systems. For example, the protocol of distributed consensus can be deployed in DLT-enabled wireless networks [10]. Unlike the reliable link transmission in a wired network, wireless channels are more stochastic and dynamic. A link transmission failure in a wireless channel can have the same influence on state synchronization as a node with a crash or Byzantine fault. This influence must be addressed when distributed consensus is implemented in a wireless network.

Resource allocation for distributed consensus in wireless networks has been a focal point of research due to its significant impact on consensus performance. [11] delved into the role of communication resources in distributed consensus within wireless networks, demonstrating the feasibility of consensus mechanisms for critical decision making in distributed wireless communication systems, particularly through the implementation of a consensus-enabled industrial IoT network based on the PBFT protocol. However, wireless networks inherently face challenges such as the risk of link transmission errors and loss of state synchronization [12]. The reliability of consensus protocols like Raft is closely tied to the reliability of wireless link transmissions [2]. In scenarios where excessive nodes intensively occupy limited wireless communication resources, degradation can be observed in both link and consensus reliability. This issue is particularly prevalent in massive IoT networks with wireless connections [13]. The above research indicates that limited communication resources can compromise the reliability of link connections, thereby affecting the reliability of distributed consensus. This problem may increase the frequency of primary node changes, which can cause a longer latency for consensus completion and state synchronization among network nodes. Therefore, reasonable and practical communication resource allocation methods should be investigated to achieve a better performance of the distributed consensus.
[14] proposes the first joint interest-, energy-, and physical-aware framework for coalition formation among wireless IoT devices and energy-efficient resource allocation in M2M communication; by considering mutual interest, energy availability, physical proximity, and communication channel quality, it not only ensures efficient and accurate coalitions but also increases overall system energy efficiency. Other researchers apply machine learning to the optimization of resource allocation in wireless networks. [15] explores machine learning algorithms for the AP selection strategy and finds that the Random Forest algorithm demonstrates superior performance in terms of accuracy and complexity in both the training and testing phases. [16] discusses the capacity maximization problem in wireless networks; the authors propose machine learning techniques, specifically support vector machines (SVMs) and deep belief networks (DBNs), for direct approximation of optimal subproblem solutions. However, few papers have systematically analyzed communication resource allocation for distributed consensus in wireless networks, which is the motivation of this paper.

In this article, we optimize communication resource allocation to improve the reliability and reduce the latency of Raft through different algorithms. Our main contributions are summarized as follows.

• We derive an optimal transmit power allocation method through Sequential Quadratic Programming (SQP) to maximize the reliability of Raft.

• The optimal bandwidth allocation method is investigated to minimize the latency of the distributed consensus. We choose Particle Swarm Optimization (PSO) as the optimization algorithm to search for the optimal bandwidth allocation scheme when the overall bandwidth is constant.

• We investigate the optimal number of nodes deployed in the wireless network to maximize the reliability of distributed consensus when constant overall communication resources are provided. Relevant analytical proof is given to support the conclusion.

The structure of this paper is as follows. The protocol of Raft is given in Section II. Section III introduces the nonlinear optimization algorithms for the performance of the distributed consensus. Section IV proposes the optimized network size for Raft with limited overall communication resources. Section V compares the numerical results of the performance given by the different resource allocation methods, and the conclusion is drawn in Section VI.

II. PROTOCOL OF RAFT

The protocol of distributed consensus has been deployed in many decentralized systems to keep the state of the nodes consistent. In a system that requires a trusted authority for access (i.e., a private blockchain [17]), the possibility that the system suffers from a Byzantine fault is negligible [18]. Node crashes and link transmission failures are the main threats to these trusted systems, so it is appropriate to deploy a CFT protocol in these scenarios. Raft, as a typical CFT consensus algorithm, is generally implemented in a private, trustworthy, distributed system to counter the breakdown of replicas [4]. The simplicity of Raft has drawn attention to research on its optimization and applications [19], [20]. Fig. 1 shows the Raft-enabled distributed network, which is composed of a leader and a group of followers in the stage of log replication.
The leader packs the commands into log entries and replicates the entries to all followers continuously through downlink transmission. Upon successful reception of the log messages, the followers reply with confirmation packets to the leader through uplink unicast and start to execute the confirmed commands. A successful Raft consensus means that more than 50% of all followers have received the log entries from the leader and sent confirmations back to the leader successfully within one term of the consensus. The voting for the leader follows the criterion of first come, first served, which means the leader candidate with the most reliable wireless connections and the lowest latency is most likely to be chosen as the leader.

Fig. 1: Communication scheme of Raft

The protocol of Raft indicates that it relies on inter-node information exchange to achieve consensus among nodes [11]. Therefore, the consensus reliability of Raft heavily depends on the reliability of the link connections between the leader and the followers.
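To make the majority rule above concrete, the following minimal Python sketch estimates the consensus reliability of one Raft term by Monte Carlo simulation: a follower counts as successful only if both its downlink and uplink transmissions succeed, and the term succeeds when more than half of the followers are successful. The function name and the example reliability values are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def consensus_reliability_mc(p_dl, p_ul, trials=200_000):
    """Monte Carlo estimate of Raft consensus reliability: a term succeeds
    when more than half of the followers complete both the downlink and
    the uplink transmission successfully."""
    p_dl, p_ul = np.asarray(p_dl), np.asarray(p_ul)
    n = len(p_dl)
    # A follower counts as successful only if both directions succeed.
    ok = (rng.random((trials, n)) < p_dl) & (rng.random((trials, n)) < p_ul)
    return np.mean(ok.sum(axis=1) > n / 2)

# Example: 5 followers with heterogeneous link reliabilities.
p_dl = [0.99, 0.95, 0.90, 0.99, 0.80]
p_ul = [0.98, 0.97, 0.85, 0.99, 0.75]
print(consensus_reliability_mc(p_dl, p_ul))
```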
III. COMMUNICATION RESOURCE ALLOCATION SCHEMES FOR RAFT

Reliability and latency are the important performance metrics for the distributed consensus in wireless networks [12]. The consensus reliability PC refers to the probability that the majority of trusted nodes complete the vote or log replication in a term, and the latency of Raft includes the time consumed by one round of downlink and uplink transmissions between the leader and all followers plus the time for message verification [21]. When the number of nodes in the network is constant, PC depends only on the link reliability of the channels, which refers to the probability of successful link transmission between the leader and the followers [2]. Different resource allocation methods and stochastic fading gains may cause variations in the link reliability and transmission time among the channels between the leader and the followers. Therefore, the varied link reliabilities and latencies of the wireless channels are first determined by a derived wireless link model in this section, and the relevant resource allocation optimization problems are then solved based on the proposed link reliability and latency models.

_A. Wireless Link Model_

The protocol of Raft is deployed on a wireless network with N + 1 static nodes, including a leader and N followers. The communication scheme in the protocol of Raft is assumed to be frequency division in this paper. The 2N channels, which include N downlink channels and N uplink channels connecting the leader and the followers, are characterized by the Rayleigh fading model [22]. Rayleigh fading is a statistical model for the effect of a propagation environment on a radio signal, such as that used by wireless devices. This model assumes that the magnitude of a signal that has passed through a communication channel varies randomly, or fades, according to a Rayleigh distribution. It is viewed as a reasonable model in situations where the communication signal may bounce off objects from many directions before reaching the receiver, resulting in a large number of signal paths that can destructively interfere with each other. The Rayleigh fading model thus simulates a worst-case scenario for signal distortion by a propagation environment, so it is used extensively in designing wireless networks even when the channels are in poor condition. Hk denotes the Rayleigh fading gain of the k-th channel, k ∈ [1, 2N], which follows the complex normal distribution, i.e., Hk ∼ CN(0, 1). The channel gains are assumed to be independent and identically distributed (i.i.d.). Therefore, |Hk|² follows the exponential distribution. When a packet is sent through the k-th channel with a given transmit power Ptk, the signal-to-noise ratio (SNR) in this channel can be written as

γk = Sk |Hk|² Ptk / Pnoise, (1)

where Pnoise refers to the white Gaussian noise power, Sk represents the large-scale effect on the k-th channel from the environment, such as path loss and shadowing, and ρ is the SNR threshold. If γk is below the threshold ρ, an SNR outage occurs in the k-th channel. Consequently, the link reliability Plk of the k-th channel can be calculated from the SNR outage probability in this channel [23]:

Plk = 1 − Pr(γk < ρ) = exp(−ρ Pnoise / (Sk Ptk)), (2)

which reveals that the transmit power Ptk is the communication resource that can affect the link reliability Plk when the other parameters of the wireless link model are kept constant. Meanwhile, the latency caused by transmission in the k-th channel can be represented as

tk = M / (Bk log(1 + γk)), (3)

where M is the average length of the packet sent by the leader or followers, and Bk is the bandwidth used in this channel. When the distributed consensus is implemented in the wireless network, the derived models of link reliability Plk in (2) and transmission time tk in (3) determine the critical performance parameters, such as the consensus reliability PC and the consensus latency tc. The derived model also shows that these performance parameters can be improved by optimizing the power and bandwidth allocation.
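As a quick numerical cross-check of (1)–(2), the sketch below evaluates the closed-form link reliability and transmission time, and verifies (2) by sampling Rayleigh fading gains. The paper leaves the logarithm base in (3) implicit; base 2 is assumed here so that the rate is in bits per second. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def link_reliability(P_t, S, rho, P_noise):
    """Closed form of eq. (2): Pl = exp(-rho * P_noise / (S * P_t))."""
    return np.exp(-rho * P_noise / (S * P_t))

def tx_time(M, B, snr):
    """Eq. (3) with an assumed base-2 logarithm: t = M / (B * log2(1 + snr))."""
    return M / (B * np.log2(1.0 + snr))

def outage_mc(P_t, S, rho, P_noise, trials=200_000):
    """Empirical check of eq. (2): |H|^2 ~ Exp(1) when H ~ CN(0, 1)."""
    h2 = rng.exponential(1.0, trials)
    snr = S * h2 * P_t / P_noise          # eq. (1)
    return np.mean(snr >= rho)            # fraction of non-outage trials

# The Monte Carlo estimate should agree with the closed form.
print(outage_mc(0.1, 2.0, 1.5, 0.01), link_reliability(0.1, 2.0, 1.5, 0.01))
```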
_B. Power Allocation Scheme for Consensus Reliability_

The wireless channel model in (2) is used as an example to demonstrate the influence of the allocated transmit power Ptk on the consensus reliability PC; transmit power is a prevalent type of communication resource that can influence link reliability in practice. Therefore, Ptk is regarded as the variable of the communication resource allocation scheme in pursuing the maximum consensus reliability PC. The procedure of analysis is similar when other wireless communication models are selected. With the link reliability given by (2), the consensus reliability PC can be represented as a function of the transmit powers Ptk. The communication scheme of Raft in Fig. 1 shows that a successful follower needs to complete both the downlink and the uplink transmission. Therefore, the consensus reliability PC can be calculated as

PC = Σ_{k=N/2+1}^{N} Σ_{Qk∈ΩS} Π_{w∈Qk} Pw · Π_{v∈Qk^C} (1 − Pv), (4)

where Qk refers to a set of k followers that successfully complete both the downlink and the uplink transmission, ΩS refers to the collection of sets in which more than N/2 followers have reached the consensus, w is a successful follower belonging to Qk, and v is a failed follower belonging to the complement of Qk. Pw represents the probability that w belongs to the set Qk,

Pw = Plw^DL · Plw^UL, (5)

which is the product of the downlink reliability Plw^DL and the uplink reliability Plw^UL. Similarly, Pv refers to the probability that node v completes both the downlink and the uplink transmission successfully,

Pv = Plv^DL · Plv^UL. (6)

The other parameters in (4) are assumed constant for all 2N channels. The power allocation scheme aims to maximize the consensus reliability PC when the overall transmit power Psum is fixed. In the protocol of Raft, the overall transmit power is allocated to all 2N channels. Therefore, the optimization problem for the power allocation scheme can be formulated as

min_{Pt} 1 − PC
s.t. Σ_{k=1}^{2N} Ptk ≤ Psum. (7)

This optimization problem has 2N transmit-power variables. The channels from 1 to N represent the downlink channels of the N followers, and the channels from N + 1 to 2N are the corresponding uplink channels of the N followers. Sequential quadratic programming (SQP) is implemented to solve the nonlinear program in this resource allocation scheme; it transforms the original optimization problem into a quadratic subproblem and finds an appropriate descent direction d. The transformed quadratic problem can be formulated as follows:

min_d f(Ptk) + ∇f(Ptk)^T d + (1/2) d^T ∇²L(Ptk, λ) d
s.t. ∇g(Ptk) d + g(Ptk) = 0, (8)

where f(Ptk) represents the objective function 1 − PC with a vector of transmit powers Ptk allocated to all 2N channels, ∇f(Ptk)^T denotes the transpose of the gradient of f(Ptk), g(Ptk) denotes the constraint, and L(Ptk, λ) denotes the Lagrangian with multiplier λ,

L(Ptk, λ) = f(Ptk) − λ g(Ptk). (9)

The objective function of the transformed quadratic problem in (8) consists of the first three terms of the Taylor series of the original optimization problem [24]. The remainder Rn of the Taylor series [25] can be calculated as

Rn = Σ_{n=3}^{+∞} (∇^n f(Ptk) / n!) d^n. (10)

If the descent direction d is small in each iteration, the remainder Rn converges to zero, which means the transformed optimization problem in (8) is equivalent to the original nonlinear optimization problem. Therefore, the solution of the optimization problem (7) is identical to the converged result of SQP. However, the consensus reliability PC in (4) shows that the overall probability is a summation of products of link reliabilities over the 2N channels, which can exponentially increase the complexity of the nonlinear programming. This high complexity can make it impractical to deploy the communication resource allocation scheme in a large-scale wireless network.

_C. Comparison of Optimal Power Allocation and Other Power Allocation Schemes_

Two power allocation methods, which are practical to implement in reality, are proposed for comparison with the performance of the optimal power allocation scheme from SQP. The first method allocates the transmit power equally to each channel,

Ptk1 = Psum / (2N). (11)

With identical communication resources, a channel with a better channel gain has a higher link reliability for completing a transmission. The second power allocation method aims to ensure that all channels receive the appropriate transmit power to reach the same link reliability Pl; each channel's share follows the inverse of its channel gain Sk relative to the inverse gains of all 2N channels. The link reliability in (2) indicates that the transmit power Ptk is inversely proportional to Sk when the link reliability Plk is held constant. Therefore, the transmit power in this allocation method should be

Ptk2 = (Psum / Sk) / Σ_{j=1}^{2N} (1 / Sj). (12)

According to this inversely proportional relationship between the transmit power and the fading gain Sk, more transmit power is given to the communication channels with lower Sk to keep the link reliability identical across channels. These two power allocation methods have lower complexity than SQP, which means they can replace the optimal power allocation method from the nonlinear optimization if the gap between their performances can be tolerated.
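A minimal sketch of this comparison is given below, with SciPy's SLSQP routine standing in for the SQP solver (the paper does not name an implementation). The consensus reliability is computed here by convolving per-follower success probabilities — a Poisson-binomial tail that is mathematically equivalent to the subset sum in (4) but cheaper to evaluate. Eq. (12) is implemented in the inverse-proportional form reconstructed above, and the gains and power budget are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

rho, P_noise = 1.0, 0.01
S = np.array([1.2, 0.8, 2.0, 1.5, 0.6, 1.0])  # illustrative gains: N DL then N UL
N = len(S) // 2
P_sum = 1.0

def p_link(P_t):
    return np.exp(-rho * P_noise / (S * P_t))   # eq. (2) per channel

def consensus_reliability(P_t):
    p = p_link(P_t)
    p_node = p[:N] * p[N:]           # follower succeeds iff DL and UL succeed
    pmf = np.array([1.0])            # Poisson-binomial pmf of #successful followers
    for q in p_node:
        pmf = np.convolve(pmf, [1 - q, q])
    return pmf[N // 2 + 1:].sum()    # strict majority, as in eq. (4)

def equal_power():
    return np.full(2 * N, P_sum / (2 * N))      # eq. (11)

def equal_link_reliability():
    w = 1.0 / S
    return P_sum * w / w.sum()                  # eq. (12)

res = minimize(lambda x: 1.0 - consensus_reliability(x),   # objective of eq. (7)
               x0=equal_power(), method="SLSQP",
               bounds=[(1e-6, P_sum)] * (2 * N),
               constraints=[{"type": "ineq", "fun": lambda x: P_sum - x.sum()}])

for name, alloc in [("SQP-like", res.x), ("equal power", equal_power()),
                    ("equal link reliability", equal_link_reliability())]:
    print(name, consensus_reliability(alloc))
```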
_D. Bandwidth Allocation Scheme for Consensus Latency_

Besides reliability, latency is also critical to the performance of distributed consensus. Consensus reliability and transmission time are the two factors that influence the overall latency of distributed consensus in a wireless network. Optimal consensus reliability means that the protocol of Raft has the maximum probability of avoiding a new leader election and the extra time spent on that stage. Therefore, for optimal consensus latency, the consensus reliability needs to reach its maximum, which means the power allocation follows the result of SQP; the only remaining factor that can change the consensus latency is the transmission time spent by the nodes. Based on the model in (3), the consensus latency can then be reduced by minimizing the transmission time through an optimal bandwidth allocation method. In this section, we investigate this optimal bandwidth allocation scheme in pursuit of the minimum consensus latency. The protocol of Raft indicates that each follower needs to receive a downlink message from the leader and respond with a confirmation through uplink transmission in one term of consensus. The time that follower n, ∀n ∈ {1, 2, ..., N}, spends to complete the consensus can be represented as

tn = tn^DL + tn^UL + tv = M^DL / (Bn^DL log(1 + SNRn^DL)) + M^UL / (Bn^UL log(1 + SNRn^UL)) + tv, (13)

which is the summation of the delays caused by the downlink transmission tn^DL, the uplink transmission tn^UL, and the verification time tv. M^DL and M^UL refer to the packet lengths of the downlink and uplink transmissions. In the same round of communications, the protocol of Raft indicates that M^DL and M^UL are identical for all downlink and uplink channels, respectively. All nodes are assumed to have the same ability to handle the verification, so the verification time tv is the same for all N followers. The derived latency model in (13) shows that the bandwidth allocated to the n-th channel is the communication resource that can influence the transmission latency tn besides the SNR of the channels. The consensus term ends when the last follower completes its transmission. Therefore, the longest latency among the followers can be considered the latency tc of the distributed consensus,

tc = max {t1, t2, ..., tN}, (14)

which leads to an optimization problem for the minimum value of tc when the overall bandwidth Bsum is constant.
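Before formulating that optimization problem, the latency rule of (13)–(14) can be written directly in code. As in the earlier sketches, a base-2 logarithm is assumed, and all bandwidths and SNRs below are illustrative.

```python
import numpy as np

def follower_latency(M_dl, M_ul, B_dl, B_ul, snr_dl, snr_ul, t_v):
    """Eq. (13): per-follower delay = DL time + UL time + verification time."""
    return (M_dl / (B_dl * np.log2(1 + snr_dl))
            + M_ul / (B_ul * np.log2(1 + snr_ul)) + t_v)

def consensus_latency(M_dl, M_ul, B_dl, B_ul, snr_dl, snr_ul, t_v):
    """Eq. (14): the term ends with the slowest follower."""
    return follower_latency(M_dl, M_ul, B_dl, B_ul, snr_dl, snr_ul, t_v).max()

# Three followers with illustrative bandwidths (Hz) and linear SNRs.
B_dl = np.array([2e6, 3e6, 1e6]); B_ul = np.array([2e6, 2e6, 2e6])
snr_dl = np.array([8.0, 5.0, 2.0]); snr_ul = np.array([6.0, 4.0, 3.0])
print(consensus_latency(1500 * 8, 100 * 8, B_dl, B_ul, snr_dl, snr_ul, 1e-4))
```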
min_B tc
s.t. Σ_{k=1}^{2N} Bk ≤ Bsum, (15)

where the SNRs in all downlink and uplink channels of the followers are based on the SQP result of Section III-B, which means the consensus reliability PC converges to its theoretical maximum value in this scheme. The overall bandwidth Bsum is the constraint of this optimization problem. Table I lists the notation of the major parameters used in the proposed resource allocation schemes.

TABLE I: Notation used in resource allocation of the Raft-enabled network

| Notation | Definition |
|---|---|
| N | Number of nodes within the network |
| Sk | Large-scale effect of the k-th channel |
| Hk | Rayleigh fading gain of the k-th channel |
| Psum (dBm) | The overall transmit power |
| Bsum (MHz) | The overall bandwidth |
| Ptk (dBm) | Transmit power allocated to the k-th channel |
| Bk (MHz) | Bandwidth allocated to the k-th channel |
| Plk | Link reliability of the k-th channel |
| PC | Consensus reliability |
| tk (s) | Transmission time of the k-th channel |
| tc (s) | Transmission time cost by consensus |
| Nmax | Number of nodes with maximized consensus reliability |

The optimization problem in (15) is nonlinear, and its objective function lacks an explicit closed-form solution, so the solution cannot be obtained through straightforward mathematical methods. Thus, we employ Particle Swarm Optimization (PSO) to iteratively solve this optimization problem and find the minimum value of tc. The PSO algorithm, renowned for its prowess in global optimization, enables us to evade suboptimal solutions [26].

**Algorithm 1** PSO algorithm for tc
Initialize population
**for** m = 1 : Iterations **do**
  **for** i = 1 : n **do**
    ti,m = f(Bi,m)
    **if** ti,m < ti,h **then** ti,h = ti,m; Bi,h = Bi,m
    **else** ti,h = ti,h; Bi,h = Bi,h
    **end if**
    ti,opt = min(ti,m); Bi,opt = B at min(ti,m)
  **end for**
  **for** i = 1 : n **do**
    vi(m + 1) = w vi(m) + c1 r1 (Bi,opt − Bi) + c2 r2 (Bi,h − Bi)
    Bi(m + 1) = Bi(m) + vi(m + 1)
    **if** vi(m + 1) > Vmax **then** vi(m + 1) = Vmax
    **else if** vi(m + 1) < Vmin **then** vi(m + 1) = Vmin
    **end if**
    **if** Bi(m + 1) > Bi,max **then** Bi(m + 1) = Bi,max
    **else if** Bi(m + 1) < Bi,min **then** Bi(m + 1) = Bi,min
    **end if**
  **end for**
**end for**
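The listing below is a runnable Python counterpart of Algorithm 1, assuming the latency model sketched earlier with equal DL/UL packet lengths for brevity. The budget constraint of (15) is handled here by re-projecting particle positions onto the bandwidth simplex — our own handling, not spelled out in Algorithm 1 — and the velocity-clamping constants Vmin/Vmax are omitted. All numbers are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def latency(B, M, snr, t_v):
    # B holds 2N bandwidths (N downlink, then N uplink); t_c is the
    # slowest follower's DL + UL + verification time, as in (13)-(14).
    n = len(B) // 2
    t = M / (B * np.log2(1 + snr))
    return (t[:n] + t[n:]).max() + t_v

def pso_bandwidth(B_sum, M, snr, t_v, particles=30, iters=500,
                  w=0.7, c1=1.5, c2=1.5):
    dim = len(snr)
    # Random feasible starting points on the simplex sum(B) = B_sum.
    pos = rng.dirichlet(np.ones(dim), particles) * B_sum
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([latency(p, M, snr, t_v) for p in pos])
    g = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, particles, 1))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, 1e-6, None)
        pos *= B_sum / pos.sum(axis=1, keepdims=True)  # re-project onto budget
        val = np.array([latency(p, M, snr, t_v) for p in pos])
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, latency(g, M, snr, t_v)

snr = np.array([8.0, 5.0, 2.0, 6.0, 4.0, 3.0])   # 2N SNRs from the power step
B_opt, t_c = pso_bandwidth(B_sum=10e6, M=1500 * 8, snr=snr, t_v=1e-4)
print(B_opt, t_c)
```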
In the context of our study, Algorithm 1 represents the application of PSO to bandwidth allocation within the Raft consensus algorithm. The position of a particle in this algorithm corresponds to the bandwidth distributed to the wireless channels. The PSO inertia weight w, along with the acceleration constants c1 and c2, guides each particle's movement and drives it towards the historically optimal and collectively optimal positions; the particle positions are updated iteratively through the combination of the inertia weight and the acceleration constants. After sufficient iterations, we are able to derive ti,opt as the minimum value of the consensus latency tc, a testament to PSO's effectiveness in exploring and converging towards an optimal solution in a complex problem space.

In the protocol of Raft, all followers share a constant overall bandwidth. A reasonable expectation for the optimization result is that most of the followers' latencies tn will be close to each other when the optimal bandwidth allocation method is applied, because a non-optimal bandwidth allocation causes some followers to spend more time completing their transmissions, which increases the overall latency of the distributed consensus in the wireless network. However, the stochastic wireless channels between the leader and some followers may be in extremely poor condition, which can absorb a large proportion of the communication resources and limit the optimal performance of the distributed consensus.

IV. LIMITED OVERALL COMMUNICATION RESOURCE AND OPTIMAL NUMBER OF NODES

The nonlinear optimization algorithms proposed in Section III can solve the communication resource allocation problems to achieve the maximum reliability PC and the minimum consensus latency tc. However, if the overall communication resources are not adequate, even the optimal consensus reliability and latency cannot reach the requirements of high reliability and low latency in specific scenarios. This section investigates the solution to the problem of inadequate overall communication resources in resource allocation. First, the criterion of adequate communication resources for the distributed consensus Raft is defined. Then, based on the fault-tolerance feature of the distributed consensus, we find a solution that improves the optimized consensus reliability and latency from the perspective of network size.

_A. Limited Overall Communication Resource for Raft_

Under the assumptions of this article, the communication resources allocated to the wireless channels and the channel gains are the parameters that influence the link reliability Pl and the consensus reliability PC. Therefore, the link reliability Pl and the consensus reliability PC are reasonable criteria for judging the condition of the overall communication resources when the wireless channel gains are determined. The required reliability of information delivery and synchronization changes across applications; these reliability requirements correspond to the consensus reliability when the distributed consensus is implemented. The dotted lines in Fig. 2 denote the target consensus reliability in multiple 5G scenarios, including URC over the long term, V2V wireless coordination, reliable cloud connectivity, and real-time virtualization [27], [28]. The optimization problem in (7) indicates that even though the power allocation method is optimized by SQP, adequate overall transmit power must still be provided if the consensus reliability is to reach the requirement of a specific scenario. Otherwise, an alternative solution should be implemented to improve the consensus reliability of Raft in the wireless network.

_B. Optimal Number of Nodes_

When the overall communication resource is constant, the number of nodes that participate in the distributed consensus influences its performance, because more nodes must share the limited communication resources, and each node is expected to obtain fewer resources for its transmissions. Specifically, the performance of the resource allocation method degrades when the overall communication resources are inadequate, because some channels cannot obtain enough resources to achieve the target performance.
A reasonable solution to this problem is to eliminate the redundant consensus nodes that are linked through terrible communication channels. On the other hand, a larger network means that the distributed consensus can tolerate more crash-fault or Byzantine-fault nodes [29]. These two competing effects produce a global maximum of the consensus reliability PC over the number of nodes when the communication resources of a local wireless network are constant. The number of nodes N corresponding to the maximum of PC can be determined by Proposition 1. It shows that when the overall communication resources are inadequate for a distributed network, the number of nodes engaged in this network should not exceed the value Nmax. The existence of a maximum of the consensus reliability PC indicates that excessive consensus nodes can damage the reliability of Raft. Therefore, a large-scale network can abandon some nodes with terrible communication channels so that the number of nodes N converges to Nmax when the overall communication resources are scarce, which improves the consensus reliability of Raft. For example, in a multi-layer consensus network [30], the network size of each consensus layer can be optimized based on the communication resources allocated to it, which helps the whole network achieve the highest performance.

_Proposition 1:_ If Nmax is the number of followers that reaches the maximum consensus reliability, then

Nmax = ⌈Ma⌉ = ⌊Mb⌋, (16)

where Ma and Mb are given by

Ma = (1 + P̃ − √(P̃² − 4P̃ + 1)) / (2 − 2P̃),
Mb = (1 − 3P̃ − √(P̃² − 4P̃ + 1)) / (2 − 2P̃), (17)

with P̃ = (1 − Pl²) Pl², where Pl denotes the average link reliability of the channels.

_Proof:_ See Appendix A.

Fig. 2: Reliability requirements in different scenarios (target consensus reliability, plotted against lg(1 − Pl), for URC over the long term, V2V wireless coordination, reliable cloud connectivity, and real-time virtualization)

The computational complexity of the model revolves around the calculation of Nmax, the optimal number of nodes that reaches the maximum consensus reliability. Calculating Nmax involves solving equation (17), which is a function of the link reliability P(N). P(N) is a function of 2N variables, which means its calculation involves iterating over all 2N variables at least once. Therefore, the computational complexity of P(N) is O(N). Subsequently, Nmax is calculated from P(N) with equation (17), which involves operations of constant computational complexity. Therefore, the overall computational complexity of the model primarily depends on the calculation of P(N) and is O(N). While the proposed model's computational complexity is linear in the size of the network, the feasibility of real-time or near-real-time implementation depends on the number of nodes N and on environmental effects.
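Before turning to the practical caveats of real-time use, the trade-off behind Proposition 1 — more followers add fault tolerance but thin out the per-channel power — can be reproduced numerically. The sketch below assumes an equal power split over 2N channels (so the Pl of eq. (2) shrinks as N grows), computes the exact majority-rule reliability for i.i.d. links, and brute-forces the best even N rather than using the closed form (16)–(17). All constants are illustrative.

```python
import numpy as np
from math import comb

rho, P_noise, S, P_sum = 1.0, 0.01, 1.0, 4.0    # illustrative constants

def link_reliability(N):
    # Equal split of the fixed budget over 2N channels: per-channel power
    # is P_sum / (2N), so eq. (2) gives Pl = exp(-rho*P_noise*2N/(S*P_sum)).
    return np.exp(-rho * P_noise * 2 * N / (S * P_sum))

def consensus_reliability(N):
    # Exact majority rule for i.i.d. followers: each succeeds with
    # probability p = Pl^2 (downlink and uplink both succeed).
    p = link_reliability(N) ** 2
    return sum(comb(N, k) * p**k * (1 - p)**(N - k)
               for k in range(N // 2 + 1, N + 1))

N_grid = range(4, 41, 2)
N_best = max(N_grid, key=consensus_reliability)
print(N_best, consensus_reliability(N_best))
```

Run over the grid, the reliability rises, peaks at an interior N, and then falls, mirroring the "increase first, then drop" behaviour reported for Fig. 9.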
If N is large, the calculation of the link reliability P(N) can be computationally intensive, which makes real-time implementation challenging. Moreover, a dynamically changing communication environment causes a varying distribution of link reliability among the nodes, and the Raft-enabled network then has to recalculate the optimal resource allocation scheme frequently, which may also hinder real-time implementation of the proposed model. Therefore, an ideal condition for real-time deployment of the proposed model is an appropriate number of nodes within the network and a stable communication environment.

V. SIMULATION RESULTS

In this section, the proposed resource allocation schemes for Raft are simulated in MATLAB R2019b. Based on the Rayleigh fading model, we assume that the channel fading gains Hk and the large-scale effects Sk of the 2N channels in (1) follow a Gaussian distribution [22]. The nodes are static, and their number N in the wireless network is set to 13. The overall power Psum ranges from 20 dBm to 36 dBm for the transmit power allocation. The coefficient of variation (CV), which is the ratio of the standard deviation to the mean of the channel fading gain H and the large-scale effect S in the wireless model, is used in the simulation to represent the dispersion in the probability distributions of the wireless channel fading gains and large-scale effects. A higher CV means that some channels have a higher probability of suffering a terrible fading gain H and large-scale effect S, which influences the performance of the proposed resource allocation schemes.

The optimal reliability of the distributed consensus PC from SQP is compared with the other two transmit power allocation methods. The numerical results of the three transmit power allocation methods are presented in Fig. 3 for channel gains Sk with a high coefficient of variation (CV = 1.303). The consensus reliability given by the three allocation methods is significantly different. The output PC of the equal power method in (11) is closer to the optimized result of SQP, which reveals that the equal power allocation method outperforms the equal link reliability method when the variation of the channel gains is large. Even though the complexity of SQP rises as the size of the network increases, the transmit power allocation method derived by SQP remains the best allocation method in this case.

Fig. 3: Performance of three power allocation methods (equal power, equal link reliability, and the SQP result) with a high coefficient of variation in channel gains (CV = 1.303), plotted against lg(Psum)

Moreover, Fig. 4 shows that when the channel fading gain is more concentrated (CV = 0.388), the curves of the equal power and equal link reliability methods converge to the optimized consensus failure rate 1 − PC, which means the three power allocation methods have similar performances when the conditions of the wireless channels are close. Therefore, the two practical transmit power allocation methods in (11) and (12) can substitute for the optimal power allocation method derived by SQP in this case.
Fig. 4: Performance of three power allocation methods (equal power, equal link reliability, and the SQP result) with a low coefficient of variation in channel gains (CV = 0.3917), plotted against lg(Psum)

Fig. 5 illustrates the influence of the varied channel gains on the consensus reliability, where PC denotes the consensus reliability derived by the two practical power allocation methods in (11) and (12), PCopt is the optimal consensus reliability from SQP, and the Reliability Gap (RG) represents the ratio of the consensus failure rates 1 − PC and 1 − PCopt. The difference among the three allocation methods gradually increases as the CV of the channel gains increases. All the methods have approximately the same results when the CV is less than 0.5, which means the other two power allocation methods can replace the optimal power allocation method derived by SQP with a small compromise in performance. In practice, the CV of the wireless channel gains can be reduced by abandoning some nodes with bad channel conditions (e.g., a low large-scale effect S) to achieve a near-optimal power allocation scheme, which is supported by the fault-tolerance feature of the distributed consensus.

Fig. 5: Performance comparison between the optimal consensus reliability and the other two methods with different CVs in the wireless channel gains

The simulation of the bandwidth allocation assumes that the overall bandwidth Bsum ranges from 8 to 14 MHz and that the number of nodes is N = 13. The model of the wireless channel is the same as in the previous transmit power allocation, and the SNRs of all channels are set based on the optimal transmit power allocation result from SQP. The number of iteration rounds is set to 500 in the PSO algorithm. The curve of the fitness function in the proposed optimization problem is presented first. Fig. 6 shows the convergence of the optimal consensus latency when different overall bandwidths are used in the same wireless network. The converged consensus latency decreases when more overall bandwidth is provided for the communication. The number of iterations for the PSO result to converge to the minimum consensus latency is between 100 and 150.

Fig. 6: The curve of the fitness function in PSO over the iterations, for overall bandwidths Bsum of 8, 10, 12, and 14 MHz

The transmission time spent by all followers is evaluated in Fig. 7 for the case where the optimal bandwidth allocation scheme is used.
Because the consensus latency is defined as the longest time spent by any follower in the whole wireless network, the simulation result matches the expectation that the transmission times of most followers are close to each other when the optimal consensus latency tc is achieved.

Fig. 7: The transmission time used by the followers with the optimized bandwidth allocation scheme, plotted against log(Bk)

The stochastic wireless channels between the leader and the followers have variable channel gains, which can have a significant influence on the consensus latency. Fig. 8 indicates the tendency of the optimized consensus latency tc as the coefficient of variation CV of the channel gains Sk increases. The results show that when the CV increases from 0.74 to 1.56, the optimal consensus latency tc rises dramatically from 1 µs to 10^5 µs. This numerical result reveals that a larger variation of the channel gains can increase the optimal latency of Raft in the wireless network.

Fig. 8: The optimal consensus latency with different CVs in the channel gains

The simulation of the optimal number of nodes is presented in Fig. 9, which illustrates the change in the consensus reliability as the number of nodes in the network increases. The number of nodes N ranges from 4 to 40, and the overall communication resource is kept constant. The consensus reliability increases first, drops once the number of followers passes the optimal network size, and keeps decreasing afterwards. The number of nodes that corresponds to the maximum consensus reliability matches the optimal number of nodes given by Proposition 1. S represents the rounds of synchronization processed during the Raft consensus protocol. When more rounds of synchronization S are implemented in the distributed consensus protocol, the maximum value of the consensus reliability PC increases, but the eventual tendencies of all curves remain the same.

Fig. 9: Optimal network size for Raft (consensus reliability versus the number of nodes N, for s = 0 to 4 rounds of synchronization)

The negative influence of varied wireless channel gains on the consensus latency indicates that if the consensus latency needs to be improved, the nodes with terrible channel gains should be removed. Fig. 10 compares the numerical results of the optimal consensus latency tc before and after the followers with the worst channel gains are eliminated from the network. The number of followers in the initial network is N = 8, and the channel gains Sk of all nodes follow the normal distribution. The converged value of the optimized tc is close to 2000 µs when no followers are removed, and it drops to the region between 300 and 400 µs when the one follower with the worst channel gain is removed.

Fig. 10: The convergence of the consensus latency over the iterations with different numbers of followers (N = 8, 7, 6)
And tc keeps dropping to 10 µs after two followers are removed from the network, which shows that this method is also efficient in reducing the consensus latency.

VI. CONCLUSION

In this article, optimal power and bandwidth allocation methods are proposed to improve the reliability and reduce the latency of the distributed consensus Raft in a wireless network. Both the power and the bandwidth allocation methods, which are derived through two different optimization algorithms, reach near-optimal performance when the overall communication resource is constant. Moreover, an optimized network size is defined to provide a solution for the scenario in which the overall resources are inadequate to reach the required performance. These results can provide a guideline for the deployment of resource allocation schemes when the consensus Raft is implemented in a distributed wireless network.

APPENDIX

The dominant term of the consensus reliability PC in (4) is a discrete function, which means the tendency of PC cannot be determined through differentiation. If the Raft consensus with N followers reaches the maximum consensus reliability PC(N), then PC(N) should be greater than the consensus reliability of the networks that contain N − 2 and N + 2 followers:

PC(N) > PC(N + 2),
PC(N) > PC(N − 2). (18)

In the problem of communication resource allocation, if the network with N followers can reach the minimum consensus failure rate, the overall communication resource can be regarded as adequate for this network, which means the dominant term of (4) can replace the whole consensus reliability PC. Therefore, the difference among the average link reliabilities Pl of the networks with N, N − 2, and N + 2 followers is negligible. Substituting the dominant term of the consensus failure rate into (18) to solve for Nmax gives

[C(N, f+1) (1 − Pl²)^(f+1) (Pl²)^(N−f−1)] / [C(N−2, f) (1 − Pl²)^f (Pl²)^(N−f−2)] < 1,
[C(N+2, f+2) (1 − Pl²)^(f+2) (Pl²)^(N−f)] / [C(N, f+1) (1 − Pl²)^(f+1) (Pl²)^(N−f−1)] < 1. (19)

Eventually, the conclusion in Proposition 1 can be derived by substituting the number of fault-tolerant nodes f = N/2 into (19) when the distributed consensus protocol is Raft.

REFERENCES

[1] C. Feng, Z. Xu, X. Zhu, P. Valente Klaine, and L. Zhang, "Wireless distributed consensus in vehicle to vehicle networks for autonomous driving," IEEE Transactions on Vehicular Technology, 2022.
[2] D. Yu, W. Li, H. Xu, and L. Zhang, "Low reliable and low latency communications for mission critical distributed industrial internet of things," IEEE Communications Letters, 2020.
[3] L. Lamport, "Generalized consensus and paxos," 2005.
[4] D. Ongaro and J. Ousterhout, "In search of an understandable consensus algorithm," in 2014 USENIX Annual Technical Conference (USENIX ATC 14), pp. 305–319, 2014.
[5] L. Lamport and M. Massa, "Cheap paxos," pp. 307–314, 2004.
[6] M. Castro, B. Liskov, et al., "Practical byzantine fault tolerance," in OSDI, vol. 99, pp. 173–186, 1999.
[7] M. Yin, D. Malkhi, M. K. Reiter, G. G. Gueta, and I. Abraham, "HotStuff: BFT consensus with linearity and responsiveness," in Proceedings of the 2019 ACM Symposium on Principles of Distributed Computing, pp. 347–356, 2019.
[8] M. Baudet, A. Ching, A. Chursin, G. Danezis, F. Garillot, Z. Li, D. Malkhi, O. Naor, D. Perelman, and A. Sonnino, "State machine replication in the Libra blockchain," The Libra Assn., Tech. Rep., 2019.
[9] M. Van Steen, Distributed Systems. Citeseer, 2017.
[10] Y. Sun, L. Zhang, G. Feng, B. Yang, B. Cao, and M. A.
Imran, "Blockchain-enabled wireless internet of things: Performance analysis and optimal communication node deployment," IEEE Internet of Things Journal, vol. 6, no. 3, pp. 5791–5802, 2019.
[11] L. Zhang, H. Xu, O. Onireti, M. A. Imran, and B. Cao, "How much communication resource is needed to run a wireless blockchain network?," IEEE Network, pp. 1–8, 2021.
[12] H. Seo, J. Park, M. Bennis, and W. Choi, "Communication and consensus co-design for distributed, low-latency, and reliable wireless systems," IEEE Internet of Things Journal, vol. 8, no. 1, pp. 129–143, 2021.
[13] Z. Sun, Z. Wei, N. Yang, and X. Zhou, "Two-tier communication for UAV-enabled massive IoT systems: Performance analysis and joint design of trajectory and resource allocation," IEEE Journal on Selected Areas in Communications, vol. 39, no. 4, pp. 1132–1146, 2021.
[14] E. E. Tsiropoulou, S. T. Paruchuri, and J. S. Baras, "Interest, energy and physical-aware coalition formation and resource allocation in smart IoT applications," in 2017 51st Annual Conference on Information Sciences and Systems (CISS), pp. 1–6, IEEE, 2017.
[15] D. Militani, S. Vieira, E. Valadão, K. Neles, R. Rosa, and D. Z. Rodríguez, "A machine learning model to resource allocation service for access point on wireless network," in 2019 International Conference on Software, Telecommunications and Computer Networks (SoftCOM), pp. 1–6, 2019.
[16] X. Cao, R. Ma, L. Liu, H. Shi, Y. Cheng, and C. Sun, "A machine learning-based algorithm for joint scheduling and power control in wireless networks," IEEE Internet of Things Journal, vol. 5, no. 6, pp. 4308–4318, 2018.
[17] E. Androulaki, A. Barger, V. Bortnikov, C. Cachin, K. Christidis, A. De Caro, D. Enyeart, C. Ferris, G. Laventman, Y. Manevich, et al., "Hyperledger fabric: a distributed operating system for permissioned blockchains," in Proceedings of the Thirteenth EuroSys Conference, pp. 1–15, 2018.
[18] L. Lamport, R. Shostak, and M. Pease, "The Byzantine generals problem," ACM Transactions on Programming Languages and Systems, vol. 4, no. 3, pp. 382–401, 1982.
[19] S. Pedersen, H. Meling, and L. Jehl, "An analysis of quorum-based abstractions: A case study using Gorums to implement Raft," in Proceedings of the 2018 Workshop on Advanced Tools, Programming Languages, and PLatforms for Implementing and Evaluating Algorithms for Distributed Systems, pp. 29–35, 2018.
[20] J. Polge, J. Robert, and Y. Le Traon, "Permissioned blockchain frameworks in the industry: A comparison," ICT Express, vol. 7, no. 2, pp. 229–233, 2021.
[21] E. Sakic and W. Kellerer, "Response time and availability study of RAFT consensus in distributed SDN control plane," IEEE Transactions on Network and Service Management, vol. 15, no. 1, pp. 304–318, 2017.
[22] A. Goldsmith, Wireless Communications. Cambridge University Press, 2005.
[23] N. C. Beaulieu and J. Hu, "A closed-form expression for the outage probability of decode-and-forward relaying in dissimilar Rayleigh fading channels," IEEE Communications Letters, vol. 10, no. 12, pp. 813–815, 2006.
[24] P. T. Boggs and J. W. Tolle, "Sequential quadratic programming," Acta Numerica, vol. 4, pp. 1–51, 1995.
[25] M. Kline, Calculus: An Intuitive and Physical Approach. Courier Corporation, 1998.
[26] R. Poli, J. Kennedy, and T. Blackwell, "Particle swarm optimization," Swarm Intelligence, vol. 1, no. 1, pp. 33–57, 2007.
[27] P. Popovski, "Ultra-reliable communication in 5G wireless systems," in 1st International Conference on 5G for Ubiquitous Connectivity, pp. 146–151, 2014.
[28] S. Zhang, X. Xu, Y. Wu, and L.
Lu, "5G: Towards energy-efficient, low-latency and high-reliable communications networks," in 2014 IEEE International Conference on Communication Systems, pp. 197–201, IEEE, 2014.
[29] D. Yu, H. Xu, L. Zhang, B. Cao, and M. A. Imran, "Security analysis of sharding in the blockchain system," in 2021 IEEE 32nd Annual International Symposium on Personal, Indoor and Mobile Radio Communications, pp. 1030–1035, IEEE, 2021.
[30] W. Li, C. Feng, L. Zhang, H. Xu, B. Cao, and M. A. Imran, "A scalable multi-layer PBFT consensus for blockchain," IEEE Transactions on Parallel and Distributed Systems, vol. 32, no. 5, pp. 1146–1160, 2021.

**Dachao Yu** received the B.S. degree in Electronic and Electrical Engineering from the University of Electronic Science and Technology of China in 2019. He is currently pursuing the Ph.D. degree in Electronics and Communication Engineering at the University of Glasgow. His research interests include performance analysis and optimization of crash fault tolerance and Byzantine fault tolerance consensus in wireless networks, and security analysis of wireless blockchain systems.

**Yao Sun** is currently a Lecturer with the James Watt School of Engineering, University of Glasgow, Glasgow, UK. Dr. Sun has extensive research experience and has published widely in wireless networking research. He won the IEEE Communication Society TAOS Best Paper Award at 2019 ICC, the IEEE IoT Journal Best Paper Award 2022, and the Best Paper Award at the 22nd ICCT. He has been a guest editor for special issues of several international journals, served as TPC Chair for UCET 2021, and been a TPC member for a number of international flagship conferences, including ICC 2022, VTC Spring 2022, GLOBECOM 2020, and WCNC 2019. His research interests include intelligent wireless networking, semantic communications, blockchain systems, and resource management in next-generation mobile networks. Dr. Sun is a senior member of IEEE.

**Yuetai Li** is currently an undergraduate majoring in communication engineering at the University of Glasgow and the University of Electronic Science and Technology of China (UESTC). His current research interests include distributed consensus, blockchain, information security, and distributed intelligent systems.

**Lei Zhang** (Senior Member, IEEE) is a Professor of Trustworthy Systems at the University of Glasgow. He has combined academic and industry research experience in wireless communications and networks, and in distributed systems for IoT, blockchain, and autonomous systems. His 20 patents are granted/filed in more than 30 countries/regions. He has published 3 books and 150+ papers in peer-reviewed journals, conferences, and edited books. Prof. Zhang is an associate editor of the IEEE Internet of Things Journal, IEEE Wireless Communications Letters, and Digital Communications and Networks, and a guest editor of IEEE JSAC. He received the IEEE Internet of Things Journal Best Paper Award 2022, the IEEE ComSoc TAOS Technical Committee Best Paper Award 2019, and the IEEE ICEICT'21 Best Paper Award. Dr. Zhang is the founding Chair of the IEEE Special Interest Group on Wireless Blockchain Networks in the IEEE Cognitive Networks Technical Committee (TCCN). He has delivered tutorials at IEEE ICC'20, IEEE PIMRC'20, IEEE Globecom'21, IEEE VTC'21 Fall, IEEE ICBC'21, and EUSIPCO'21.

**Muhammad Ali Imran** is a professor of Wireless Communication Systems with research interests in self-organised networks, wireless networked control systems, and wireless sensor systems.
He heads the Communications, Sensing and Imaging (CSI) Research Group at the University of Glasgow. He is an affiliate professor with the University of Oklahoma and a visiting professor with the 5G Innovation Centre, University of Surrey, United Kingdom. He has more than 20 years of combined academic and industry experience, with several leading roles in multi-million-pound funded projects. He has filed 15 patents; has authored/co-authored more than 400 journal and conference publications; has edited three books and authored more than 20 book chapters; and has successfully supervised more than 40 postgraduate students at the doctoral level. He has been a consultant to international projects and local companies in the area of self-organised networks. He is a Fellow of the IET and a Senior Fellow of the HEA.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/JSEN.2023.3293715?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/JSEN.2023.3293715, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://eprints.gla.ac.uk/302436/1/302436.pdf" }
2,023
[ "JournalArticle" ]
true
2023-09-01T00:00:00
[]
14,551
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Medicine", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0178f7275f581e10738c870a8cf005454fd72dcd
[ "Computer Science", "Medicine" ]
0.878031
Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition
0178f7275f581e10738c870a8cf005454fd72dcd
IEEE Transactions on Biomedical Circuits and Systems
[ { "authorId": "1781669", "name": "Runchun Wang" }, { "authorId": "1807880", "name": "C. S. Thakur" }, { "authorId": "145334576", "name": "Gregory Cohen" }, { "authorId": "4365413", "name": "T. Hamilton" }, { "authorId": "145380118", "name": "J. Tapson" }, { "authorId": "1738347", "name": "A. Schaik" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE Trans Biomed Circuit Syst" ], "alternate_urls": null, "id": "b705b89f-7498-41dc-960d-af625d263847", "issn": "1932-4545", "name": "IEEE Transactions on Biomedical Circuits and Systems", "type": "journal", "url": "http://ieeexplore.ieee.org/servlet/opac?punumber=4156126" }
null
# A neuromorphic hardware architecture using the Neural Engineering Framework for pattern recognition

#### Runchun Wang, Chetan Singh Thakur, Tara Julia Hamilton, Jonathan Tapson, André van Schaik

The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia mark.wang@uws.edu.au

**_Abstract—We present a hardware architecture that uses the Neural Engineering Framework (NEF) to implement large-scale neural networks on Field Programmable Gate Arrays (FPGAs) for performing pattern recognition in real time. NEF is a framework that is capable of synthesising large-scale cognitive systems from subnetworks. We will first present the architecture of the proposed neural network implemented using fixed-point numbers and demonstrate a routine that computes the decoding weights by using the online pseudoinverse update method (OPIUM) in a parallel and distributed manner. The proposed system is efficiently implemented on a compact digital neural core. This neural core consists of 64 neurons that are instantiated by a single physical neuron using a time-multiplexing approach. As a proof of concept, we combined 128 identical neural cores together to build a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture is not limited to handwriting recognition, but is generally applicable as an extremely fast pattern recognition processor for various kinds of patterns such as speech and images._**

**Keywords: neural engineering framework; time-multiplexing; pattern recognition; pseudo inverse; MNIST; neuromorphic engineering**

## 1. Introduction

Neural networks have proven to be powerful tools for real-world tasks, such as pattern recognition, classification, regression, and prediction. However, their high computational demands are not ideally suited to modern computer architectures. This constraint has so far often prohibited their use in applications that need real-time control, such as interactive robotic systems. On the other hand, scientists have been developing hardware platforms that are optimised for neural networks over the past two decades (Vogelstein et al., 2007; Boahen, 2006; Pfeil et al., 2013; Wang et al., 2014d). However, these systems are not capable of synthesising large-scale neural networks for these real-world tasks from subnetworks and are therefore not very suitable, as pointed out by Tapson et al. (Tapson et al., 2013). Here, we present a generic hardware architecture that uses the Neural Engineering Framework (NEF) (Eliasmith and Anderson, 2003) to implement large-scale neural networks on FPGAs, capable of processing up to millions of pattern recognitions in real time. The NEF, which was first introduced in 2003, is a framework that is capable of building large systems from subnetworks with a standard three-layer neural structure (the first layer contains the input neurons; the second layer is a hidden layer, which consists of a large number of non-linear neurons; and the third layer is the output layer, which consists of linear neurons). The NEF has been used to construct SPAUN, the first brain model implemented in software that is capable of performing cognitive tasks (Eliasmith et al., 2012). This demonstrates that the NEF is a powerful tool for synthesising large-scale cognitive systems.
We have previously presented a compact neural core architecture specifically for FPGA implementation of large NEF networks (Wang et al., 2014a). In this paper, we present an application that uses this neural core to build pattern recognition systems. The outline of this paper is as follows: Section 2.1 introduces the basic concepts of the NEF; the algorithm and theory are presented in Section 2.2; the hardware implementation is presented in Section 2.3; the performance of different design choices is thoroughly compared in Section 3; and in Section 4 we compare our work with other solutions and discuss future work.

## 2. Materials and methods

### 2.1 Background

In this section, we review the theoretical framework of a typical NEF system, which encodes an input stimulus into the spiking rates of the neurons of a heterogeneous population and decodes the desired function by linearly combining the responses of these neurons. The topology of the NEF network is illustrated in Figure 1.

**Figure 1 | A typical NEF network.** The stimulus X(t) is encoded into a large number of nonlinear hidden layer neurons N using randomly initialised connection weights. The output of the system, Y(t), is the linear sum of the weighted spike trains from the hidden neurons.

A NEF network performs three tasks to calculate a desired function f(X):

**1. Encoding:** An encoder will have a fixed random weight (RW) for each hidden layer neuron, and multiplies the input stimulus by this weight. The firing rate of individual neurons is a nonlinear function of the input stimulus weighted by the random weights. The parameters of the neurons are also randomised, so that each neuron in the hidden layer exhibits a distinct tuning curve. An example of such tuning curves is shown in Figure 2.

**2. Decoding:** The activity, H, of the hidden neurons (i.e., the spike rate of each neuron) can be measured over the desired range of input values X. The output of each neuron will be multiplied by its decoding weight such that WH = f(X) = Y. Since this is a linear system, these weights can be found by calculating W = YH⁺, where H⁺ is the Moore-Penrose pseudo-inverse (Penrose and Todd, 1955) of H.

**3. Averaging:** The output of the system, Y(t), is the linear sum of the weighted spike trains from the neurons.

**Figure 2 | Tuning curves map input stimuli to spike rates.** For clarity, this figure only shows the tuning curves of 16 neurons. Each neuron in the neural layer has a distinct tuning curve.

### 2.2 Algorithm and Theory

_2.2.1 Methodology_

Recognition or classification of handwritten digits is a standard machine learning problem, and in the form of the MNIST database (Lecun et al., 1998) it has become a benchmark problem. Hence, as a proof of concept, we have used the proposed design framework to implement a digit recognition system (Figure 3). Importantly, the same system could be used for other pattern recognition applications. In the MNIST database, the digits are represented as 28 × 28 = 784 pixels, and the training and testing datasets contain 60,000 and 10,000 digits, respectively. The system is trained using the training dataset only and is subsequently validated using the test dataset.

**Figure 3 | System Topology.** The inputs are the pixels; they are connected to a higher-dimensional hidden layer with 8k neurons, using randomly weighted connections. The output layer consists of linear neurons and the output layer weights are solved analytically using the pseudoinverse operation.

The proposed digit recognition system is a three-layer feed-forward neural network, consisting of 784 input layer neurons (pixels), 8192 (8k) hidden layer neurons and ten output layer neurons. The input layer neurons are connected to the hidden layer neurons using randomly weighted all-to-all connections. The hidden layer neurons are also connected to the output layer neurons using all-to-all connections, but with weights calculated using a pseudoinverse operation.

In the digit recognition system, a single input digit (28×28 = 784 pixels) is mapped onto a layer of input neurons, which we refer to as a vector Img with a dimension of 784×1. The Img vector is multiplied by a matrix, Random_weights, with a dimension of 8192×784. The resultant vector, referred to as Vin, with a dimension of 8192×1, is thus given by:

Vin = Random_weights × Img (1)

Each value in Vin is the sum of the randomly weighted pixels, and is the stimulus for the corresponding neuron in the hidden layer. Each neuron of the hidden layer responds to its Vin value according to a distinct tuning curve (Figure 2). The output of the hidden layer neurons for each input digit is collected in a matrix referred to as H with a dimension of 8192×1. Finally, the response of the output layer neurons is given by:

Y = W × H (2)

where W is the decoding weight matrix (with a dimension of 10×8192, ten rows for the ten digits 0–9) and Y (a Boolean vector with a dimension of 10×1) represents the corresponding value of the input digit. For example, if the input digit represents 2, then, during training, Y[2] will be set to 1 and the other values in Y will be set to 0. Since this is a linear system, the weights can be found by calculating W = YH⁺, where H⁺ is the pseudoinverse of H.

The above description is for one single digit. For training purposes, we used 60,000 sample digits and hence the dimensions of Img, Vin, H and Y become 784×60,000, 8192×60,000, 8192×60,000 and 10×60,000, respectively. When we use the digits from the test dataset with 10,000 digits, the dimensions of Img, Vin, H and Y become 784×10,000, 8192×10,000, 8192×10,000 and 10×10,000, respectively.
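To make the data flow of Equations (1) and (2) concrete, the following minimal Python sketch builds the same three-layer pipeline at toy scale: random encoding, a nonlinear hidden layer, and a pseudoinverse-solved linear readout. The sizes, the tanh nonlinearity and the synthetic data are illustrative stand-ins, not the fixed-point hardware model described later.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-ins for MNIST: the real system uses 784 pixels, 8192 hidden
# neurons and 60,000 training digits; the sizes here just run quickly.
n_pix, n_hid, n_train, n_cls = 784, 512, 2000, 10
Img = (rng.random((n_pix, n_train)) > 0.8).astype(float)   # binary pixels
labels = rng.integers(0, n_cls, n_train)

Random_weights = rng.uniform(-1.0, 1.0, (n_hid, n_pix))    # fixed encoder
Vin = Random_weights @ Img                                 # Eq. (1)
H = np.tanh(Vin + rng.uniform(-1.0, 1.0, (n_hid, 1)))      # distinct tuning curves

Y = np.zeros((n_cls, n_train))
Y[labels, np.arange(n_train)] = 1.0                        # one-hot targets
W = Y @ np.linalg.pinv(H)                                  # W = Y * pinv(H)

pred = np.argmax(W @ H, axis=0)                            # Eq. (2) plus argmax
print("training error rate:", np.mean(pred != labels))
```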
In the testing phase, the predicted output Y will be the product W × H and will be compared with the expected output to obtain the error rate (the number of unrecognised digits among the 10,000 test digits). We will address the details of testing in Section 3.

_2.2.2 Modelling_

Our aim is to develop a fast hardware pattern recognition system running in real time, rather than aiming for the lowest test error. Thus, we have adopted a hardware-driven method to implement our system, which will achieve the best trade-off between performance and hardware resources. This method will first consider the hardware constraints, and then all the building blocks will be optimised.
For FPGA implementations, there is a significant difference in hardware cost between fixed-point and floating-point implementations, as the latter requires many more digital signal processors (DSPs). More importantly, a floating-point number is represented by 64 bits, which would lead to a huge data storage requirement and become a bottleneck for the system. Thus, we have implemented our system using fixed-point numbers. Before implementing the design in hardware, we modelled our system in Python, a popular software programming language, using the fixed-point representation. This ensures that the software and the hardware results are the same, and avoids any performance drop or malfunctioning of the system in hardware due to the conversion from floating-point to fixed-point numbers. The models presented in the remaining part of this section are all software models unless otherwise specified.

_2.2.3 Input layer_

The input layer will read digits from the MNIST database and map them onto the input layer pixels (one by one). This task consists of not only converting the dimension from 28×28 to 784×1 but also converting the grey-scale value (an 8-bit number that ranges from 0 to 255) of the pixels to a binary value. The latter is a major difference between our system and existing algorithms (Tapson and van Schaik, 2013; Lecun et al., 1998). This conversion reduces the hardware cost significantly with a negligible performance loss, and will be presented in detail in Section 2.3.2. We will compare the performance differences in Section 3.1. The conversion is carried out by comparing the grey-scale value with 0: if it is larger than 0, that pixel will be set to 1; otherwise it will be set to 0.

To guarantee that the pixels of each digit from the input layer will be nonlinearly projected to the high-dimensional hidden layer, for each neuron in the hidden layer the encoder will first generate a uniformly distributed random weight for each pixel of one input digit and then sum these weighted pixels to generate the stimulus. For verification of our hardware system, the random weights used in the software and in the hardware models should be the same and produce identical results. In a software model, random weights are generated using special routines, which are difficult to implement in hardware. One option is to use a look-up table (LUT) in the FPGA to store the random weights generated by the software model. The major drawback of this solution is that it requires a significant amount of memory, which scales linearly with the number of input neurons and hidden layer neurons. For FPGA implementations, the most efficient way to generate random numbers is to use linear feedback shift registers (LFSRs), as we have previously used to implement randomly weighted all-to-all connectivity in a spiking neural network (Wang et al., 2014c). Based on that work, we have developed an encoder that uses LFSRs to perform the nonlinear projection. We have implemented the same LFSR encoder in software to ensure that the random weights are identical in both implementations. We have highly optimised the encoder for hardware implementation, and details of this will be presented in Section 2.3.
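The two input-layer ideas, thresholding the grey-scale pixels and regenerating identical "random" weights in software and hardware from a shared LFSR seed, can be sketched in a few lines of Python. The 20-bit width matches Section 2.3.2, but the tap positions (taken from the primitive polynomial x^20 + x^3 + 1) are our assumption, since the paper does not list them.

```python
import numpy as np

def binarize(grey_pixels: np.ndarray) -> np.ndarray:
    # Pixel value > 0 maps to 1, else 0 (8-bit grey scale to binary).
    return (grey_pixels > 0).astype(np.uint8)

def lfsr20_step(state: int) -> int:
    # One step of a 20-bit Fibonacci LFSR; taps 20 and 3 give a
    # maximal-length sequence (the tap choice is an assumption).
    bit = ((state >> 19) ^ (state >> 2)) & 1
    return ((state << 1) | bit) & 0xFFFFF

def lfsr_weights(seed: int, n: int) -> list[int]:
    # Reproducible pseudo-random weights: the same seed yields the same
    # sequence in the software model and in the FPGA.
    state, out = seed, []
    for _ in range(n):
        state = lfsr20_step(state)
        out.append(state)
    return out

assert lfsr_weights(0xBEEF, 4) == lfsr_weights(0xBEEF, 4)  # determinism
```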
_2.2.4 Rate neuron_

The NEF intrinsically uses spike rates to calculate the weights, and low-pass filters to sum the weighted output spikes to implement the desired function. In contrast, we have implemented our neurons as non-spiking neurons that compute their firing rate directly. If these neurons were to be implemented as leaky-integrate-and-fire neurons on the FPGA, as we have done previously (Wang et al., 2014c), their average firing rates would have to be measured for each value of the input stimulus to compute the decoding weights. This method is quite inefficient and inflexible, as we would have to repeat the measurements each time the parameters of the neurons change. Another drawback is that spiking neurons running in real time would not be able to accurately communicate their firing rate in a short time period, e.g., 1 ms. This would significantly limit their usage in real-time applications. Using non-spiking neurons, the actual firing rate can be communicated immediately after presenting the stimulus to the neurons. This feature is quite important for applications that need real-time control, such as interactive robotic systems.

In a system with non-spiking neurons, the system will not compute correctly if these neurons cannot reproduce the same firing rate as the one used to calculate the decoding weights. In other words, the computed firing rate must be repeatable for a given input value. Based on these requirements, we propose to compute the firing rate of each neuron from its index in the array together with the stimulus value, producing a 'broken-stick' nonlinearity using the following algorithm:

    FOR N_index in (0, N_A-1):
        IF N_index < N_A/2:
            T = Max_Stim - (Stim + 4×N_index)
        ELSE:
            T = Stim + 4×N_index
        F_rate = max(2 × N_index × T / N_A, 0)
    END

Here F_rate represents the firing rate of the neuron as a result of the input stimulus, N_index represents the index of the neuron in the neural core, and T is calculated as shown for the different neurons. N_A represents the size of the hidden layer, Max_Stim represents the maximum value of the stimulus, and Stim represents the current value of the input stimulus, using an integer in the range [0, Max_Stim) to code for an input range of [-1, 1).

**Figure 4 | The tuning curves of the proposed fixed-point non-spiking neuron.** This figure shows the tuning curves of 64 neurons.

Figure 4 shows the tuning curves of a set of N_A = 64 of the proposed fixed-point neurons, using Max_Stim = 255. The transfer function is thus a nonlinear function of the stimulus, since the value of F_rate cannot go negative. Our system requires the stimulus to be nonlinearly encoded into the firing rate of the neuron, and it is hardware-intensive to use digital circuits to implement conventional nonlinear functions such as tanh. Instead, this piecewise linear function can be easily implemented using a single 9-bit fixed-point multiplier. We will present its implementation in detail in Section 2.3.3.
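As a quick check of the algorithm above, here is a direct, runnable Python transcription using integer arithmetic and the clamp at zero; with N_A = 64 and Max_Stim = 255 it reproduces one tuning curve per neuron index, in the spirit of Figure 4.

```python
import numpy as np

def f_rate(stim: int, n_index: int, n_a: int = 64, max_stim: int = 255) -> int:
    # 'Broken-stick' rate neuron: a repeatable firing rate computed from the
    # neuron's index and the stimulus, with no stored per-neuron parameters.
    if n_index < n_a // 2:
        t = max_stim - (stim + 4 * n_index)
    else:
        t = stim + 4 * n_index
    return max(2 * n_index * t // n_a, 0)   # clamp: rates cannot go negative

# Tuning curves of all 64 neurons over the full stimulus range (cf. Figure 4).
curves = np.array([[f_rate(s, i) for s in range(256)] for i in range(64)])
assert (curves >= 0).all()
```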
_2.2.5 Hidden layer_

We refer to a set of 64 neurons as a neural core, which is used as the standard building block for our digit recognition system. Multiple neural cores can easily be combined to build real-time large-scale neural networks using our design framework. Furthermore, the development cycle of large-scale neural networks is significantly shortened, as there is no longer any requirement for measuring firing rates, since each neural core has the same set of known tuning curves. The hidden layer was implemented with 128 identical neural cores, for a total of 8192 (8k) neurons and 8192×(784+10) ≈ 6.5M synaptic connections. This hidden layer size achieves the best trade-off between performance and memory usage, and we will compare the performance differences in Section 3.2. Given an input image, the encoder will generate, via the random weight projection, a different Vin for each neuron in each core, even though each core contains identical neurons. In other words, even though neuron[0] in neural core[0] and neuron[0] in neural core[1] have the same tuning curve as a function of Vin, they are highly likely to receive different Vin values, so their firing rates will differ too.

_2.2.6 Regression_

The decoding weights are obtained by calculating W = YH⁺, where H⁺ is the pseudoinverse of H. However, computing the pseudoinverse of the 8192 × 60,000 matrix H requires a huge amount of memory and computational time. We have previously developed an online pseudoinverse update method (OPIUM) (Tapson and van Schaik, 2013), which is an incremental method to compute the pseudoinverse solution to the regression problem and requires significantly less memory. Hence, we use this method here to compute the decoding weights. We chose a 6-bit resolution for the decoding weights to obtain the best trade-off between performance and memory usage. We will address this in detail in Section 3.1.

The pseudoinverse method only gives the solution with the lowest least-squares error for any given H matrix, i.e., for any given set of random weights; it does not necessarily achieve the lowest test error on the MNIST dataset. Therefore, we adopted a regression method to find the best seed, which is used by the encoder to generate the random weights and in turn changes the H matrix. In this way, we can obtain the lowest possible test error in our system. Figure 5 shows the flow of this regression method.

**Figure 5 | The flow of the proposed regression method.**

It uses a simplified version of OPIUM, called OPIUM lite (Tapson and van Schaik, 2013), which is a fast online method for calculating an approximation to the pseudoinverse. It is significantly quicker than the full-scale OPIUM, but finds output weights resulting in a slightly worse test error. OPIUM lite is used with different random seeds, i.e., for different random weight vectors, until a seed is found with a target error below a desired threshold. After that, the full-scale OPIUM is used to compute the decoding weights with that seed. As there is no guarantee that OPIUM lite will be able to achieve a target error below the desired threshold, a time-out mechanism is introduced. In our system, this time-out is activated when the regression has run for 1000 seeds. If a time-out happens, we simply use the seed that has so far resulted in the lowest error and then use the full-scale OPIUM to compute the decoding weights.
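The seed search of Figure 5 is easy to express in software. In the sketch below, numpy's batch pseudoinverse stands in for both OPIUM lite and full OPIUM (whose incremental update rules are given in Tapson and van Schaik, 2013); the data, the threshold and the helper names are illustrative, and the loop is shortened from the 1000 seeds used in the real flow.

```python
import numpy as np

def pinv_readout(H, Y):
    # Batch stand-in for both OPIUM lite and full OPIUM; the real methods
    # build the pseudoinverse incrementally instead of in one batch.
    return Y @ np.linalg.pinv(H)

def test_errors(X, Y, seed, n_hid=128):
    rng = np.random.default_rng(seed)       # stand-in for the LFSR encoder
    H = np.maximum(rng.uniform(-1, 1, (n_hid, X.shape[0])) @ X, 0.0)
    W = pinv_readout(H, Y)
    return int(np.sum(np.argmax(W @ H, 0) != np.argmax(Y, 0)))

rng = np.random.default_rng(0)
X = (rng.random((64, 500)) > 0.8).astype(float)   # toy binary 'digits'
Y = np.eye(10)[rng.integers(0, 10, 500)].T        # one-hot labels

THRESHOLD, best_seed, best_err = 50, None, np.inf
for seed in range(20):                            # the hardware flow tries up to 1000
    e = test_errors(X, Y, seed)
    if e < best_err:
        best_seed, best_err = seed, e
    if e < THRESHOLD:
        break

final_err = test_errors(X, Y, best_seed)          # re-solve with the 'full' method
```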
### 2.3 Hardware implementation

_2.3.1 Topology_

To implement the system efficiently on an FPGA, we use a time-multiplexing approach (Cassidy et al., 2011; Wang et al., 2013, 2014d, 2014c, 2014b, 2015; Thakur et al., 2014), which leverages high-speed digital circuits. State-of-the-art FPGAs can easily run at a clock speed of 266 MHz (clock period 3.75 ns). Thus, we can exploit the time-multiplexing approach to simulate 2^18 neurons (256k; powers of two are preferable as they optimise memory use for storage) in ~1 millisecond by implementing only one physical neuron on the FPGA. We refer to these neurons as time-multiplexed (TM) neurons. This means that on every clock cycle, one TM neuron is processed. Each TM neuron is updated every 256k/266 MHz ≈ 943 µs, while a sub-millisecond resolution is generally acceptable for neural simulations.

The time-multiplexing approach is, however, constrained by its data storage requirement. The on-chip SRAM is limited in size (usually only tens of Mbits). Due to bandwidth constraints, it is difficult to use off-chip memory with the time-multiplexing approach, as new values need to be available from memory every clock cycle to provide real-time simulation. Furthermore, the architecture of the system becomes more complex when using off-chip memory, because it needs a dedicated memory controller. Nevertheless, off-chip memory promises the ability to implement much larger networks, and we will investigate this option for future designs. For the current work, however, we chose to use on-chip memory to keep the architecture simple.

**Figure 6 | FPGA implementation of the proposed system.** (a) The system topology; (b) the internal structure of the time-multiplexed system.

Figure 6 shows the topology of the FPGA implementation of the system, which consists of an input layer (the encoder), a hidden layer with 128 neural cores and an output layer with 10 neurons. The encoder and the hidden layer are both implemented with the time-multiplexing approach, and Figure 6b shows their internal structure. It consists of a physical encoder, a physical neuron, a global counter and a weight buffer. The global counter processes the time-multiplexed (TM) encoders and neurons sequentially. The decoding weights of the physical neuron are stored in the weight buffer.

For simplicity, let us assume that each TM encoder and TM neuron is processed in only one clock cycle. This means that in every clock cycle, a TM encoder generates the stimulus for an input digit, and the corresponding TM neuron generates a firing rate from that stimulus and then multiplies it with the decoding weights (ten numbers for the ten digits, obtained using OPIUM). The input digit will not change and will remain static until all the TM neurons finish their processing. The output of every TM neuron will be ten weighted firing rates, each of which will be accumulated by its corresponding output neuron. Using a pipelined architecture, the result of calculating one time step for a TM encoder and neuron only has to be available just before the turn of that TM encoder and TM neuron comes around again. The above description assumes that it takes only one clock cycle to process one TM encoder and TM neuron, while this timing requirement is quite difficult to meet in a practical design. We will address this issue in detail in the next section.
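The essence of time-multiplexing is that one physical circuit sweeps over per-neuron state held in memory, one slot per clock cycle. Below is a minimal software analogue of that scheduling; the update rule is a made-up placeholder, since the point here is the slot structure rather than the neuron model.

```python
import numpy as np

N_TM = 16                      # TM neurons per physical neuron (2^18 in hardware)
state = np.zeros(N_TM)         # per-neuron state lives in RAM, not in logic
stim = np.linspace(0.0, 1.0, N_TM)

def physical_neuron(old_state: float, s: float) -> float:
    # Hypothetical update; in the FPGA this is the single physical circuit.
    return 0.9 * old_state + 0.1 * s

for t in range(1000):          # one outer iteration ~ one full pass (~1 ms)
    for slot in range(N_TM):   # one TM neuron per clock cycle (time slot)
        state[slot] = physical_neuron(state[slot], stim[slot])
```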
_2.3.2 Physical encoder_

The encoder generates a uniformly distributed random weight for each pixel of the input digit, and then sums these weighted pixels to generate the stimulus for each neuron in the hidden layer. We have pre-processed the input digit by converting the grey-scale value of each pixel to a binary value. This saves significant hardware resources in the FPGA, since otherwise we would need 784 multipliers to compute the multiplication between all pixels and their corresponding random weights. Instead, each binary pixel is used to control a 2-input multiplexer: one input is connected to the pixel's corresponding random weight and the other is tied to zero. If the value of a pixel is high, the corresponding random weight is accumulated in the generation of the stimulus for a hidden layer neuron.

The major challenge in implementing the encoder in hardware using the time-multiplexing approach is meeting the timing requirement: we would need to sum all 784 weighted pixels in 3.75 ns, since each TM neuron would need to be processed in one clock cycle. Moreover, this operation would require 784 adders, which would cost a significant amount of hardware resources. The introduction of pipelining would mitigate the critical timing requirement, but would need even more adders. As a compromise, we chose to process each TM encoder and TM neuron in a time slot of four clock cycles. The encoder therefore performs the sum operation in four cycles, each of which sums 784/4 = 196 weighted pixels. This modification not only mitigates the critical timing requirement, but also reduces the number of adders needed. The price paid is that the time-multiplexing rate has to be divided by four. Hence, we can only time-multiplex 64k neurons rather than 256k neurons.

**Figure 7. The structure of the physical encoder.**

Figure 7 shows the structure of the physical encoder, which consists of an input buffer, a global counter, 49 random weight (RW) generators (each implemented with a 20-bit LFSR), 196 2-input multiplexers and a sum-up module. When an input digit arrives, it is stored in the input buffer. In each time slot, the global counter sends the stored digit to the multiplexers to generate the weighted pixels: the lowest 196 bits are sent in the first clock cycle of that time slot, the next 196 bits in the second clock cycle, and so on, with the highest 196 bits sent in the fourth clock cycle. Each RW generator generates a 20-bit random number, which is divided into four 5-bit random signed numbers. Hence, the 49 RW generators provide in total 49×4 = 196 5-bit random weights, each of which is sent to its corresponding multiplexer. All these LFSRs reload their own initial seed (obtained using the regression method of Section 2.2.6) on the arrival of an input digit. After that, each keeps generating random numbers until a new input digit arrives. In this way, we can guarantee that the encoder generates the exact same set of random weights (for each incoming digit) for any given seed. This "on-the-fly" generation scheme reduces memory usage significantly, as there is no longer any need to store the random weights; only the seeds need to be stored.

The accumulator module sums the 784 weighted pixels (in four clock cycles) to generate the stimulus for that TM neuron. A naive implementation would need a 196-input 5-bit parallel adder and create a large delay (~20 ns). To mitigate this critical timing requirement, we use a two-stage pipeline, which consists of fourteen 14-input 5-bit parallel adders and one 14-input 9-bit parallel adder. Since it is a pipelined design, the stimulus (for each TM neuron) is still generated every time slot (with a latency of two clock cycles).
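A software mock-up of this encoder datapath, slicing each 20-bit LFSR word into four 5-bit two's-complement weights and accumulating 196 multiplexed weights per cycle over four cycles, might look as follows (the LFSR tap positions are again our assumption):

```python
def lfsr20_step(state: int) -> int:
    # 20-bit LFSR; taps from x^20 + x^3 + 1 (assumed, not given in the paper).
    bit = ((state >> 19) ^ (state >> 2)) & 1
    return ((state << 1) | bit) & 0xFFFFF

def four_signed5(word: int) -> list[int]:
    # Split one 20-bit word into four 5-bit signed (two's complement) weights.
    fields = [(word >> (5 * k)) & 0x1F for k in range(4)]
    return [f - 32 if f >= 16 else f for f in fields]

def encode_stimulus(pixels: list[int], seeds: list[int]) -> int:
    # pixels: 784 binary values; seeds: 49 LFSR seeds. Four clock cycles per
    # time slot; 49 generators x 4 weights = 196 weighted pixels per cycle.
    assert len(pixels) == 784 and len(seeds) == 49
    states, acc = list(seeds), 0
    for cycle in range(4):
        weights = []
        for i in range(49):
            states[i] = lfsr20_step(states[i])
            weights += four_signed5(states[i])
        chunk = pixels[196 * cycle: 196 * (cycle + 1)]
        acc += sum(w for w, p in zip(weights, chunk) if p)   # 2-input mux
    return acc

stim = encode_stimulus([1, 0] * 392, [0x1ACE5 + i for i in range(49)])
```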
_2.3.3 Physical neuron_

The rate neuron achieves a significant reduction in memory usage, since it computes its firing rate from its index, the input stimulus and fixed parameters, none of which need memory access. Memory access is only needed to read the decoding weights. In our previous work (Wang et al., 2014a), the physical neuron was implemented with a single 9-bit multiplier, which computes F_rate and multiplies it with one and only one decoding weight. In the digit recognition system implemented here, the neuron needs to multiply F_rate with ten decoding weights (for the ten digits 0–9). A naive implementation would instantiate ten identical neurons, each with one decoding weight (one for each output neuron), and would cost ten multipliers. The whole operation requires 11 multiplications (one to compute F_rate and ten for the decoding weights). Since the time slot consists of four clock cycles, we can distribute these 11 multiplications over the four clock cycles, so that only ⌈11/4⌉ = 3 multipliers are needed. Based on this strategy, the neuron has been efficiently implemented with three identical 9-bit multipliers, as shown in Figure 8. The number of implementable multipliers is usually one of the bottlenecks of large-scale FPGA/ASIC design.

**Figure 8. The structure of the physical neuron.**

The multiplier's inputs A and B are 9 bits wide and the output result is 18 bits wide. All three multipliers need four clock cycles to process the algorithm. For multiplier [0], the first cycle computes F_rate, which is represented by a 7-bit number, by multiplying N_index and T; the second cycle latches F_rate at input A of the multiplier; the third and fourth cycles multiply F_rate with decoding weights [0] and [1], respectively. For multiplier [1], the first, second, third and fourth cycles multiply F_rate with decoding weights [2], [3], [4] and [5], respectively. For multiplier [2], the first, second, third and fourth cycles multiply F_rate with decoding weights [6], [7], [8] and [9], respectively. Again, since it is a pipelined design, the output of each TM neuron is updated only once in its time slot (with a latency of four clock cycles).
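The cycle-by-cycle assignment of those 11 multiplications to the three shared multipliers can be captured in a small table. The sketch below reproduces that schedule, but ignores the pipeline offsets: multipliers 1 and 2 here reuse the F_rate computed in cycle 0 rather than a value latched from the previous slot.

```python
# (multiplier, cycle) -> decoding-weight index; cycle 0 of multiplier 0
# computes F_rate and cycle 1 latches it, so those slots carry no weight.
SCHEDULE = {
    (0, 2): 0, (0, 3): 1,
    (1, 0): 2, (1, 1): 3, (1, 2): 4, (1, 3): 5,
    (2, 0): 6, (2, 1): 7, (2, 2): 8, (2, 3): 9,
}

def neuron_time_slot(n_index: int, t: int, w: list[int], n_a: int = 64) -> list[int]:
    """Ten weighted rates produced by one TM neuron over one 4-cycle slot."""
    assert len(w) == 10
    f_rate = max(2 * n_index * t // n_a, 0)   # multiplier 0, cycle 0
    out = [0] * 10
    for (_mult, _cycle), wi in SCHEDULE.items():
        out[wi] = f_rate * w[wi]              # one 9-bit multiply per entry
    return out

print(neuron_time_slot(40, 100, list(range(10))))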
_2.3.4 Output layer_

The output layer consists of ten neurons (Figure 6) that linearly sum the results of all 8k TM neurons. Since it is a time-multiplexed system, this sum is just an accumulation of the outputs of the TM neurons in each time slot, and the computational cost can be reduced by orders of magnitude. Hence, the implementation of each output neuron needs only a register and an adder. When all 8k neurons have been processed, the index of the output neuron with the maximum value is sent out as the result, indicating the most likely input digit. After that, the values of the ten output neurons are cleared.

_2.3.5 Utilisation_

The system was developed using the standard ASIC design flow, and can thus be easily implemented with state-of-the-art manufacturing technologies, should an integrated circuit implementation be desired. A bottom-up design flow was adopted, in which we designed and verified each module separately. Once the module-level verification was complete, all the modules were integrated for top-level verification. We have successfully implemented the 128 proposed neural cores, yielding 8k neurons, on an Altera Cyclone V FPGA (on a Terasic Cyclone GX starter kit). The design uses less than 6% of the hardware resources, with the exception of the RAMs (Table I). Note that this utilisation includes the circuits that carry out other tasks, such as the JTAG interface.

TABLE I. Device utilisation (Altera Cyclone V 5CGXFC5C6F27C7)

| Adaptive Logic Modules (ALMs) | RAMs | DSPs |
|---|---|---|
| 2162/29,080 | 480k/4.5M | 3/450 |

## 3. Results

The results presented here focus on how different design choices affect the performance of the proposed system, as our goal is to develop a hardware system running in real time, rather than exploiting an algorithm that is as accurate as possible. The performance results were obtained using the full test set of 10,000 handwritten digits after training on the full 60,000-digit training set, unless otherwise specified. The results presented in Sections 3.1–3.2 were all obtained using the software (Python) models. The results presented in Section 3.3 were obtained from the hardware implementation.

**Figure 9. (a) and (b) The histograms of the error rate for configuration 1 and configuration 2; (c) the normalised histogram of the difference between the paired errors (blue) and sample T distributions modelling the data (red); (d) the distribution of the estimated mean of the difference data.**

_3.1 Comparison across different configurations_

Compared to our previous work (Tapson and van Schaik, 2013), we have made three major modifications: the grey-scale pixels in the input images were replaced by black and white (binary) pixels; the tanh neurons in the hidden layer were replaced by rate neurons; and the 64-bit floating-point numbers for the decoding weights were replaced by 6-bit fixed-point numbers. We investigated the effects of these modifications using four configurations: configuration 1 was the configuration used in our previous work (Tapson and van Schaik, 2013); configuration 2 used black and white images; configuration 3 used black and white images and rate neurons instead of tanh neurons; and configuration 4 had all three modifications. The hidden layer consisted of 8k neurons in all four configurations. For each configuration, 100 test runs were conducted, each with a different random seed. The same set of 100 seeds was used for all four configurations, so that the encoder generated the same random weights. Since the goal of this exercise was simply to investigate the impact of the three modifications on performance, rather than to find the best possible performance, we only used the first five steps of the regression method, i.e., we only used OPIUM lite to calculate the decoding weights and the test error. This significantly reduces the simulation time needed for these tests while still providing a fair comparison between the four configurations.

We first investigated the effect of using binary values in the input layer by comparing the performance results obtained with grey-scale values and with binary values (see Figure 9). The top two panels show a histogram of the number of errors out of 10,000 test patterns. Given the skewed nature of the two error distributions, rather than simply reporting p-values to indicate the statistical significance of this difference, we have chosen to display the full distributions here. Because the same set of 100 random weight vectors was used for each configuration, we can determine a paired difference between the two configurations, shown as a histogram in Figure 9c. We then modelled the distribution of the difference of errors using a non-central T distribution, which is well suited to modelling distributions that are approximately Gaussian but contain outliers. We followed the Bayesian estimation method according to Kruschke (2012), using Markov Chain Monte Carlo simulation.
We simulated the Markov Chain for 110,000 steps and discarded the first 10,000 steps as a burn-in period. Figure 9d shows the distribution of the 100,000 mean values for the T distribution modelling the data, and the red curves in Figure 9c show 50 examples of the T distribution with parameters (mean, standard deviation, and a normality parameter; see Kruschke, 2012) taken at random from the Markov Chain. From the distribution of the mean value of the difference data (Figure 9d), we can see that configuration 2 results in 59.5 more errors on average. If we define a difference of 10 or fewer errors as a region of practical equivalence (ROPE), in other words, if we consider as insignificant a change of 10 or fewer errors out of 10,000 tests (a change of less than 0.1%), we note that the 95% highest density interval (HDI) of the distribution of the mean of the difference of errors lies outside the ROPE. We therefore conclude that changing the input images from grey-scale to binary values results in a small but significant increase in error of around 0.6%.

Next, we investigated the effect of using the rate neurons in the hidden layer. The distribution of errors for this configuration (configuration 3) is shown in Figure 10a. This should be compared with configuration 2 (Figure 9b); their paired difference is shown in Figure 10b. Figure 10c shows the distribution of the mean of the difference in errors between configuration 3 and configuration 2. It shows that changing from tanh neurons to rate neurons increases the number of errors by approximately 18.5. However, this difference is not strongly significant, as the 95% HDI is not entirely outside the ROPE, indicating that a difference within the region of practical equivalence is among the possible mean values.

**Figure 11. (a) The histogram of the error rate for configuration 4; (b) the normalised histogram of the difference between the paired errors (blue) and sample T distributions modelling the data (red); (c) the distribution of the estimated mean of the difference data.**

Finally, we investigated the effect of using limited-resolution decoding weights. Figure 11a shows the distribution of errors for this configuration, and the difference between configuration 3 and configuration 4 is close to zero (Figure 11b). In fact, the distribution of the mean of the error difference lies entirely within the ROPE, indicating that, somewhat surprisingly, there is no significant loss in performance when using 6-bit fixed-point output weights instead of floating-point weights. The overall performance drop between configurations 1 and 4 was merely 0.8%. We can therefore conclude that, in this digit recognition system, the modifications we made achieve significant reductions in hardware cost with a minimal drop in performance.
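The decision rule used above, comparing the 95% HDI of the posterior of the mean difference against a ±10-error ROPE, is straightforward to compute from MCMC samples. The sketch below uses synthetic, normally distributed stand-in samples; the real posterior comes from Kruschke's Bayesian estimation model.

```python
import numpy as np

def hdi(samples: np.ndarray, mass: float = 0.95) -> tuple[float, float]:
    # Shortest interval containing `mass` of the posterior samples.
    s = np.sort(samples)
    n = int(np.floor(mass * len(s)))
    widths = s[n:] - s[:len(s) - n]
    i = int(np.argmin(widths))
    return float(s[i]), float(s[i + n])

# Hypothetical stand-in for the 100,000 retained MCMC samples of the mean
# paired-error difference (the paper reports a mean of 59.5 for config 2).
post = np.random.default_rng(0).normal(59.5, 4.0, 100_000)

lo, hi = hdi(post)
ROPE = (-10.0, 10.0)
significant = (lo > ROPE[1]) or (hi < ROPE[0])   # HDI entirely outside ROPE
print(f"95% HDI = [{lo:.1f}, {hi:.1f}], practically significant: {significant}")
```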
_3.2 Size of the hidden layer_

In this scenario, we used configuration 4 from the previous section and varied the hidden layer size in the range from 1k to 16k neurons. For each size, ten test runs (each with a different random seed) were conducted. Again, to reduce the testing time, we used OPIUM lite to calculate the decoding weights and then calculated the test error. The median error over the 10 runs (Figure 12) for hidden layers with 1k, 2k, 4k, 8k, 12k and 16k neurons was 14.5%, 10.4%, 6.96%, 5.01%, 4.47% and 4.33%, respectively.

**Figure 12. Error rates as a function of the number of neurons in the hidden layer.**

It is clear that the error decreases with the number of hidden layer neurons, although with diminishing returns. Since the system uses the time-multiplexing approach and rate neurons, the hardware cost of a single TM neuron is almost negligible. The memory required by the decoding weights is linearly proportional to the size of the hidden layer and is thus the bottleneck of the system. To achieve a good balance between the desired accuracy and memory, we chose to implement the hidden layer with 8k rather than 16k neurons.

_3.3 System performance_

To explore the best performance that the proposed system can achieve, 1000 runs were carried out using the full regression method (Figure 5) with different random seeds. The lowest error achieved with the lite and full versions of OPIUM was 4.52% and 3.45%, respectively. After that, the decoding weights (obtained with the full version of OPIUM) were loaded onto the FPGA board for real-time digit recognition. The pixels of the input digits were converted to binary values in software, and a Python-based front-end client sent the selected test digit to the FPGA via the JTAG interface.

Since the system runs at 266 MHz and the hidden layer contains 8k neurons, each of which has a time slot of four clock cycles, the processing time for one input digit is 8k×4/266 MHz ≈ 120 µs, yielding 1 s/120 µs ≈ 8k digit recognitions per second. Because our system used only 8k of the 64k neurons in a single TM neuron layer, the maximum number of digit recognitions that can be processed by one TM neuron layer is ~64k per second. As the system used less than 6% of the hardware resources (with the exception of the RAMs), multiple TM neuron layers can be instantiated to run in parallel. It is therefore practical to scale this system to process millions of digit recognitions in one second. We will address this in detail in Section 4.2.
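The timing figures quoted above follow directly from the clock rate and the slot structure; a two-line check gives ≈123 µs per digit and ≈8.1k digits per second, matching the rounded figures in the text.

```python
CLOCK_HZ = 266e6
CYCLES_PER_SLOT = 4
HIDDEN_NEURONS = 8192

t_digit = HIDDEN_NEURONS * CYCLES_PER_SLOT / CLOCK_HZ   # seconds per digit
print(f"{t_digit * 1e6:.1f} us per digit, {1 / t_digit:,.0f} digits/s")
```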
## 4. Discussion

### 4.1 Comparison with other solutions

The work reported here constitutes the basis for building real-time, large-scale, general-purpose hardware pattern recognition systems using the NEF; hence, we are mainly interested in the trade-off between scale, performance and hardware cost. We concentrate on comparing our work with solutions that were developed with similar goals, rather than with solutions that are extremely optimised for achieving the lowest error rate on MNIST but cannot be efficiently implemented in hardware.

The IBM TrueNorth system is a general-purpose system for building large-scale neural networks running in real time (Merolla et al., 2014). When it was programmed for digit recognition, it achieved an error rate of 8.06% on the 10,000-digit MNIST test set with 13 cores, each of which consisted of 256 TM spiking neurons and needed ~96k bits of memory (Esser et al., 2013). Hence, our system achieves a much lower error rate with significantly fewer hardware resources, especially memory (Table II). Regarding processing speed, their system needs 20 time steps (each of 1 ms) to process one digit, whereas our system needs only 120 µs (approximately a 167-times speedup). Moreover, while their system includes a feature extractor that clusters and extracts features from the data, our system is feature-less and can hence be easily configured for different input data without feature extraction. The TrueNorth system, however, has many more applications besides pattern recognition tasks, compared to our system.

The Minitaur, an event-based neural network accelerator, achieved an error rate of 8% with a deep spiking network of 1785 neurons (Neil and Liu, 2014). Since the scheme it uses is a variant of the time-multiplexing approach, which needs only very few physically implemented neurons, the cost of a single neuron is also negligible and the bottleneck is again the memory. Each neuron used by the Minitaur needs 73 bits of memory and each connection weight needs 16 bits of memory. Our neuron needs 60 bits of memory for the decoding weights. The processing time of the Minitaur for one digit is 0.152 s (Table II), which is approximately 1300 times slower than our system.

TABLE II. Comparison with other solutions

| | Error | Computation time | Memory resources |
|---|---|---|---|
| Minitaur | 8% | 0.152 s | 155k bits |
| TrueNorth | 8.06% | 20 ms | 1.248M bits |
| This work | 3.45% | 120 µs | 480k bits |

### 4.2 Future work

Since the larger the scale, the more pattern recognitions can be carried out, our future work will focus on scaling up the network presented here. It is a scalable design, as it is a fully digital implementation. The number of TM hidden neurons implemented by a single physical neuron will increase linearly with the amount of available memory, as long as the multiplexing scale keeps the time resolution within the biological time scale. The number of physical neurons will increase linearly with the number of available ALMs. In the following calculation, we use the digit recognition system as a metric; different applications will require different amounts of hardware resources while still using the same topology.

We can calculate the theoretical maximum network size on a state-of-the-art FPGA board, such as the Terasic DE5 board containing an Altera Stratix V (5SGXEA7N2F45C2) FPGA with ~230k ALMs, two DDR3 SDRAMs and four QDRII+ SRAMs. A single TM hidden layer requires ~1600 ALMs, which are mainly used by the encoders. Hence, the maximum number of physical hidden neurons that can be implemented is 230k/1600 ≈ 143. The memory requirement of a single TM hidden neuron layer is 64k×60 bits = 3840k bits. The on-chip SRAM, which is 52M bits, can be used to implement up to 13 TM hidden neuron layers.

To further scale up the system, we need to use external memories. The bandwidth requirement is indeed a bottleneck for the time-multiplexing approach, as new values need to be available from memory every four clock cycles. The maximum theoretical bandwidth of one DDR3 SDRAM memory and one QDRII+ SRAM memory on the DE5 board is 512 bits and 72 bits at 266 MHz, respectively. The DDR3 memory, in general, can only achieve an efficiency of 70% of the theoretical bandwidth, as it needs flow control, which takes into account the bus turnaround time, refresh cycles, and so on. The maximum number of neuron layers supported by external memory is therefore ((512 bits × 2 × 70% + 72 bits × 4) × 4)/60 bits ≈ 67. Adding the layers using the on-chip SRAM, the theoretical maximum number of neuron layers is 80, yielding 64k×80 = 5.12M neurons. As the maximum number of digit recognitions that can be processed by one TM neuron layer is ~64k per second, the maximum number of digit recognitions that can be processed by a system with 80 parallel layers is therefore 5.12M per second. The programmability of the FPGA, especially of the decoding weights, makes the integration of the system with the desired pattern recognition applications seamless.
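The capacity estimate above is pure arithmetic and can be reproduced directly (the 64k-recognitions-per-layer figure is taken from Section 3.3):

```python
# Physical neurons limited by logic (ALMs)
phys_neurons = 230_000 // 1_600                     # ~143

# Layers limited by decoding-weight storage
bits_per_layer = 64_000 * 60                        # 3,840,000 bits
onchip_layers = 52_000_000 // bits_per_layer        # 13

# Layers limited by external-memory bandwidth (per 4-cycle slot)
ddr3 = 512 * 2 * 0.70                               # two DDR3 at 70% efficiency
qdr = 72 * 4                                        # four QDRII+ SRAMs
ext_layers = round((ddr3 + qdr) * 4 / 60)           # ~67

layers = onchip_layers + ext_layers                 # 80
print(layers, layers * 64_000)                      # 80 layers, 5,120,000 digits/s
```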
However, the advantages of running large-scale networks in real time are strongly reduced if such networks take a long time to compute the decoding weights. Hence, another major improvement is to speed up this computationally intensive task. One promising solution is to implement OPIUM on the FPGA, since this algorithm is an adaptive procedure that does not require hundreds of gigabytes of RAM and is quite amenable to hardware implementation. Running OPIUM in real time would make it possible to upgrade the system into a true turnkey solution for real-world pattern recognition. In addition, since the proposed system does not need feature extraction, it could be used for other pattern recognition tasks such as speaker recognition, natural language processing, and so on.

## 5. Acknowledgment

This work has been supported by the Australian Research Council Grant DP140103001. The support of the Altera university program is gratefully acknowledged. This work was inspired by the Capo Caccia Cognitive Neuromorphic Engineering Workshops 2013 and 2014 and the Telluride Neuromorphic Workshop 2013.

## 6. References

Boahen, K. (2006). Neurogrid: emulating a million neurons in the cortex. Conf. Proc. IEEE Eng. Med. Biol. Soc. Suppl, 6702. doi:10.1109/IEMBS.2006.260925.

Cassidy, A., Andreou, A. G., and Georgiou, J. (2011). Design of a one million neuron single FPGA neuromorphic system for real-time multimodal scene analysis. 2011 45th Annu. Conf. Inf. Sci. Syst., 1–6. doi:10.1109/CISS.2011.5766099.

Eliasmith, C., and Anderson, C. (2003). Neural Engineering: Computation, Representation, and Dynamics in Neurobiological Systems. Boston, MA: MIT Press.

Eliasmith, C., Stewart, T. C., Choo, X., Bekolay, T., DeWolf, T., Tang, Y., Tang, C., and Rasmussen, D. (2012). A large-scale model of the functioning brain. Science 338, 1202–1205. doi:10.1126/science.1225266.

Esser, S. K., Andreopoulos, A., Appuswamy, R., Datta, P., Barch, D., Amir, A., Arthur, J., Cassidy, A., Flickner, M., Merolla, P., et al. (2013). Cognitive computing systems: Algorithms and applications for networks of neurosynaptic cores. In The 2013 International Joint Conference on Neural Networks (IJCNN) (IEEE), 1–10. doi:10.1109/IJCNN.2013.6706746.

Kruschke, J. K. (2012). Bayesian estimation supersedes the t test. J. Exp. Psychol. Gen. 142, 573–603. doi:10.1037/a0029146.

Lecun, Y., Bottou, L., Bengio, Y., and Haffner, P. (1998). Gradient-based learning applied to document recognition. Proc. IEEE 86, 2278–2324. doi:10.1109/5.726791.

Merolla, P. A., Arthur, J. V., Alvarez-Icaza, R., Cassidy, A. S., Sawada, J., Akopyan, F., Jackson, B. L., Imam, N., Guo, C., Nakamura, Y., et al. (2014). A million spiking-neuron integrated circuit with a scalable communication network and interface. Science 345, 668–673. doi:10.1126/science.1254642.

Neil, D., and Liu, S. (2014). Minitaur, an event-driven FPGA-based spiking network accelerator. IEEE Trans. Very Large Scale Integr. Syst., 1–1. doi:10.1109/TVLSI.2013.2294916.

Penrose, R., and Todd, J. A. (1955). A generalized inverse for matrices. Math. Proc. Cambridge Philos. Soc. 51, 406–413. doi:10.1017/S0305004100030401.

Pfeil, T., Grübl, A., Jeltsch, S., Müller, E., Müller, P., Petrovici, M. A., Schmuker, M., Brüderle, D., Schemmel, J., and Meier, K. (2013). Six networks on a universal neuromorphic computing substrate. Front. Neurosci. 7, 11. doi:10.3389/fnins.2013.00011.

Tapson, J. C., Cohen, G. K., Afshar, S., Stiefel, K. M., Buskila, Y., Wang, R. M., Hamilton, T. J., and van Schaik, A. (2013). Synthesis of neural networks for spatio-temporal spike pattern recognition and processing. Front. Neurosci. 7, 153. doi:10.3389/fnins.2013.00153.
Tapson, J., and van Schaik, A. (2013). Learning the pseudoinverse solution to network weights. Neural Netw. 45, 94–100. doi:10.1016/j.neunet.2013.02.008.

Thakur, C. S., Hamilton, T. J., Tapson, J., van Schaik, A., and Lyon, R. F. (2014). FPGA implementation of the CAR model of the cochlea. In IEEE International Symposium on Circuits and Systems, 1853–1856. doi:10.1109/ISCAS.2014.6865170.

Vogelstein, R. J., Mallik, U., Vogelstein, J. T., and Cauwenberghs, G. (2007). Dynamically reconfigurable silicon array of spiking neurons with conductance-based synapses. IEEE Trans. Neural Netw. 18, 253–265. doi:10.1109/TNN.2006.883007.

Wang, R., Cohen, G., Stiefel, K. M., Hamilton, T. J., Tapson, J., and van Schaik, A. (2013). An FPGA implementation of a polychronous spiking neural network with delay adaptation. Front. Neurosci. 7, 14. doi:10.3389/fnins.2013.00014.

Wang, R., Hamilton, T. J., Tapson, J., and van Schaik, A. (2014a). A compact neural core for digital implementation of the Neural Engineering Framework. In BioCAS 2014. doi:10.1109/BioCAS.2014.6981784.

Wang, R., Hamilton, T. J., Tapson, J., and van Schaik, A. (2014b). A compact reconfigurable mixed-signal implementation of synaptic plasticity in spiking neurons. In 2014 IEEE International Symposium on Circuits and Systems (ISCAS) (IEEE), 862–865. doi:10.1109/ISCAS.2014.6865272.

Wang, R., Hamilton, T. J., Tapson, J., and van Schaik, A. (2014c). An FPGA design framework for large-scale spiking neural networks. In 2014 IEEE International Symposium on Circuits and Systems (ISCAS) (Melbourne: IEEE), 457–460. doi:10.1109/ISCAS.2014.6865169.

Wang, R. M., Hamilton, T. J., Tapson, J. C., and van Schaik, A. (2014d). A mixed-signal implementation of a polychronous spiking neural network with delay adaptation. Front. Neurosci. 8, 51. doi:10.3389/fnins.2014.00051.

Wang, R. M., Hamilton, T. J., Tapson, J. C., and van Schaik, A. (2015). A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks. Front. Neurosci. 9, 1–17. doi:10.3389/fnins.2015.00180.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/TBCAS.2017.2666883?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/TBCAS.2017.2666883, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "http://arxiv.org/pdf/1507.05695" }
2,017
[ "JournalArticle" ]
true
2017-05-23T00:00:00
[]
12,886
en
[ { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/017dea6fc7524ba5248867d3e8aa0e82a2d22dd4
[]
0.882317
ICO as Crypto-Assets Manufacturing within a Smart City
017dea6fc7524ba5248867d3e8aa0e82a2d22dd4
Smart Cities
[ { "authorId": "2315201418", "name": "Oļegs Černiševs" }, { "authorId": "88937988", "name": "Yelena Popova" } ]
{ "alternate_issns": [ "2731-3409" ], "alternate_names": [ "Smart City" ], "alternate_urls": [ "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-1343723", "https://www.mdpi.com/journal/smartcities" ], "id": "d0bfb97a-a20e-4896-9afe-1e63e459db20", "issn": "2624-6511", "name": "Smart Cities", "type": null, "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-1343723" }
The digitalization of the economy provokes the rethinking of manufacturing processes. Despite numerous publications related to Industry 4.0 as a manufacturing approach, the production of fully digital and crypto-asset products was poorly researched. Besides having a supplementary role, crypto-assets may form an entire smart city product. The authors assess the manufacturing of smart city products, fully or partially formed by crypto-assets. The initial issuance of the crypto assets was usually addressed as an Initial Coin Offer, or through the process of increasing the issuer’s capital. The authors assess the Initial Coin Offer, and address it, like manufacturing to produce products for sale. The authors classify all milestones related to the crypto-assets’ issuance, distribution, and revaluation, and assign incomes and expenses to each milestone. Additionally, the ICO-based production costs and revenues were classified according to crypto-asset types, as defined by European Economic Area legislative acts.
# smart cities

_Article_

## ICO as Crypto-Assets Manufacturing within a Smart City

**Olegs Cernisevs 1,* and Yelena Popova 2,***

1 SIA StarBridge, LV-1050 Riga, Latvia
2 Transport and Telecommunication Institute, LV-1019 Riga, Latvia
* Correspondence: olegs.cernisevs@star-bridge.lv (O.C.); popova.j@tsi.lv (Y.P.)

**Abstract:** The digitalization of the economy provokes the rethinking of manufacturing processes. Despite numerous publications related to Industry 4.0 as a manufacturing approach, the production of fully digital and crypto-asset products has been poorly researched. Beyond playing a supplementary role, crypto-assets may form an entire smart city product. The authors assess the manufacturing of smart city products fully or partially formed by crypto-assets. The initial issuance of crypto assets is usually addressed as an Initial Coin Offer, or through the process of increasing the issuer's capital. The authors assess the Initial Coin Offer and address it as manufacturing that produces products for sale. The authors classify all milestones related to the crypto-assets' issuance, distribution, and revaluation, and assign incomes and expenses to each milestone. Additionally, the ICO-based production costs and revenues are classified according to crypto-asset types, as defined by European Economic Area legislative acts.

**Keywords: digitalization; crypto assets; financial services; fintech**

**Citation:** Cernisevs, O.; Popova, Y. ICO as Crypto-Assets Manufacturing within a Smart City. Smart Cities 2023, 6, 40–56. https://doi.org/10.3390/smartcities6010003

Received: 15 November 2022; Revised: 17 December 2022; Accepted: 19 December 2022; Published: 23 December 2022

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

**1. Introduction**

Since the First Industrial Revolution, manufacturing has evolved through several revolutions, from water- and steam-powered machines to electrical and digital automated production, making the manufacturing process more complex, automatic, and sustainable, so that people can operate machines more efficiently, effectively, and consistently [1]. The Third Industrial Revolution, the digital revolution that has been taking place since the middle of the previous century, is now giving way to the Fourth Industrial Revolution. The distinction between the physical, digital, and biological domains is becoming increasingly muddled due to a convergence of technology [2–7].

Manufacturing physical products is no longer the only aspect of manufacturing. A fundamental change in how businesses conduct themselves has been brought about by changes in customer demand, the makeup of products, the economics of manufacturing, and the economics of the supply chain. Customers seek personalization and customization, as the distinction between consumer and producer becomes hazier. Products become "smart" with the addition of sensors and connections, progressively morphing into platforms and services [8]. With the advent of digital manufacturing, discrete technologies have given way to integrated systems.
Industry 4.0, which represents the Fourth Industrial Revolution, brings a new degree of organization and control of a product's whole value chain across its life cycle, and promotes intelligent, connected, and decentralized manufacturing. Together with the effect of global urbanization, that trend has led to the transformation of the city into a smart city. Indeed, the emergence of technologies such as computational intelligence, automation and robotics, additive manufacturing, and human–machine interaction, combined with breakthroughs in data storage and new processing capabilities, is releasing innovations that alter the character and content of production [9]. Moreover, the smart city concept fully corresponds to Industry 4.0, since it uses digital transformations of the city environment to benefit residents, businesses, and other stakeholders [10,11].

One of the technologies that appeared in digital manufacturing is distributed ledger technology, which first arose in 2008. The distributed ledger is a database of issuance and transaction records held in several nodes (computers) that make up a distributed computer network. An electronic distributed ledger is used to share a crypto asset: an intangible digital asset whose issue, sale, or transfer is encrypted and protected by cryptography [12]. The phenomenon of cryptocurrency (crypto assets) is currently turning into a standard tool for the digital transformation of products and services [13–15].

Crypto asset development and initial distribution are usually called an Initial Coin Offer (ICO). Most researchers who have written about the ICO agree on the following definition: an Initial Coin Offer (ICO) is a list of actions, based on blockchain technology and smart contracts, that entrepreneurs use to attract external funding by issuing tokens without intermediaries [16–21]. They agree that, by analogy with an Initial Public Offer (IPO), in which a private corporation first offers its shares to the public, the result of an Initial Coin Offer (ICO) is increased capital for the issuing company. Conversely, researchers who work in the field of accounting classify cryptocurrency assets as goods held for sale, or as intangible assets in case the issuer company uses the crypto assets for its own needs [22–24]. This opinion is supported by the International Accounting Standards Board (IASB), whose members are responsible for the development and implementation of the International Financial Reporting Standards (IFRS). In this view, if the digital assets are products for distribution, their development is manufacturing, and their distribution will not lead to a capital increase but to the recognition of income from distribution.

That classification has both theoretical and practical value: from the theoretical point of view, it allows for the correct assessment of the costs of issuing digital assets, and it facilitates the development of efficient financial management models for products based on digital assets. Moreover, it can be used in practice for building and implementing effective accounting principles for digital asset issuance at the city or corporate level.
The theoretical value of this approach lies in the fact that it contributes to the development of accounting principles for crypto assets; since crypto assets are a rather new form in the functioning of financial markets, the accounting system for this type of asset is still under development, and the accounting requirements coming from practice are still taking shape. Therefore, this research can significantly contribute to the development of accounting principles for digital assets. The practical value is even higher. Professionals working in the area face problems with accounting for these assets, particularly with attributing costs to particular categories and analyzing the costs associated with crypto assets. Therefore, this study can serve as a basis for creating an efficient model of management for these assets, and also for improving the system of accounting for them.

The study has a theoretical nature; however, it is exemplified by Rome as a smart city using crypto-based assets to achieve its KPIs. The authors consider Rome an example that supports their theoretical provisions. Moreover, these provisions can easily be applied to any activities of a city's municipal authorities that use crypto-based products to achieve the set goals.

Given the conflict of crypto asset definitions mentioned above, and due to their active development, crypto asset implementation creates some challenges for the market, especially within the Initial Coin Offer (ICO), which has yet to be well described [20]. Industry 4.0 not only changes manufacturing as a process, but also raises questions about the manufactured product itself, its components, and its characteristics when distributed ledger technology is used in its production. The production of products using distributed ledger technology has already been reviewed [25], but researchers have not assessed the production and distribution process itself very well.

The production of crypto-asset-based products usually starts with crypto-asset issuance. The authors of [26] defined formalization as the critical element of digital product or service development. They admit that an entirely bureaucratic approach to digital product innovation is ineffective, but some level of formalization of product development is necessary. The authors of [27] introduce a conceptual distinction between expected and disruptive change that may help to spot the disruptive potential of crypto asset implementation. They provide an analysis of the four stages of change offered by Causal Layered Analysis, revealing that cryptocurrencies have posed various challenges to conventional currencies. The rise of cryptocurrencies has begun to pose a threat of systemic change to long-standing businesses and organizations, for instance by enabling peer-to-peer transactions that are highly cost-effective in international money transfers: cryptocurrencies have the potential to lower transaction costs by removing or reducing the fees charged by the established middlemen that facilitate transactions. An approach to the manufacturing of crypto-assets and crypto-asset-based products therefore needs to take into account: (1) the disruptive potential of the implementation of crypto assets, and (2) the poorly researched crypto asset manufacturing process, which stems from the conflicting classifications of the goals of crypto asset issuance.
When developing such an approach, it is essential to consider the wide range of available crypto assets, and the potential for self-consumption when they are provided to manufacturers. It is also necessary to classify precisely all events related to issuing crypto assets that have a bearing on accounting. The goal of this research is to determine the order of accounting events related to the issuance of crypto assets. We speak here not about stages of issuance but about events, since they do not occur in a definite sequence; they can happen simultaneously or in different orders, or in some circumstances be omitted altogether. To achieve this goal, the authors set the following objectives:

- To classify the issuance of crypto assets as a manufacturing process;
- To determine the IFRS standards for each type of crypto asset issued;
- To estimate each event from the point of view of the applied IFRS;
- To evaluate whether crypto-asset-based products are fit and proper for achieving smart city goals;
- To assess the costs, revenues, and leverage related to crypto assets.

The research has a certain theoretical value, since it frames the ICO as a manufacturing process within the framework of Industry 4.0; moreover, the authors have not come across any research considering the smart city as an issuer of crypto assets. However, the greatest value of this article lies in its practical application, since it develops a procedure for accounting for the events related to crypto asset issuance within the European Union, which is very important in the contemporary situation.

**2. The Changed Concept of the Product**

The digitalization of the economy changes the understanding of the concept of the product. After digitalization, each product may potentially be composed of three components:

- Non-digital;
- Digital;
- Crypto-asset.

This concept may be explained using the example of a mug as the product. If the mug is sold only in a traditional store, it is composed of non-digital parts only. If, in addition to a traditional store, the merchant sells the mug via an e-shop, it is composed of non-digital and digital parts. If that e-shop also accepts crypto assets (such as Bitcoin) as a means of payment, then the product "Mug" comprises all three components. This approach also allows fully digital products, for which no non-digital part exists; examples of such products are financial services and insurance services [28].

If crypto-asset or digital parts exist within a product or service, the product or service may be treated as digital [28]. The task of this study is to assess the crypto asset part of the product; the effect and nature of the other parts are beyond its scope. The traditional approach for a product composed exclusively of non-digital parts is an in-line product-creation procedure, in which drawings are sent to the shop floor for the fabrication of a prototype. If the digital part of the product amends the non-digital part, then the parameters of the non-digital prototype are used as input parameters for developing the digital part. The digital part of the product is conceptually designed and innovated via computer-aided design software and digital technology. The crypto-asset part of the product is manufactured similarly to the digital part.
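To make the three-component concept concrete, the following minimal sketch (not from the original paper; all names are hypothetical) models a product with optional non-digital, digital, and crypto-asset parts, and classifies it the way the mug example does:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Product:
    """A product decomposed into the three components described above."""
    name: str
    non_digital_part: Optional[str] = None   # e.g., the physical mug
    digital_part: Optional[str] = None       # e.g., the e-shop listing
    crypto_asset_part: Optional[str] = None  # e.g., payment accepted in Bitcoin

    def is_digital(self) -> bool:
        # Per the classification above: a product containing a digital
        # or crypto-asset part may be treated as digital.
        return self.digital_part is not None or self.crypto_asset_part is not None

# The mug example: sold in a store and an e-shop, with crypto payments accepted.
mug = Product("Mug",
              non_digital_part="ceramic mug",
              digital_part="e-shop listing",
              crypto_asset_part="Bitcoin payment option")
print(mug.is_digital())  # True
```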
The designs and procedures for all parts are then simulated to determine whether it is feasible to manufacture the product. All parts of the product are tested using computer-aided quality control procedures, and the product is scrutinized at every stage of the manufacturing process. Supply chain management is also digitalized for efficient inventory and customized items [29]. Digital manufacturing is widely represented in the scientific literature under Industry 4.0 [2,4,6,29]. However, the digital manufacturing of fully digital products, and consequently of crypto-based products, has not been well developed by researchers. Crypto-based product manufacturing should be represented as a production cycle in which each workflow milestone is assessed.

The authors have already mentioned the conflict of definitions regarding the Initial Coin Offer. The object of the Initial Coin Offer is a crypto asset. Crypto assets were initially developed under the following assumptions:

- The blockchain used: depending on the task, the developers may select an existing blockchain or decide to create a new one for the crypto asset they are developing;
- The definition of all parameters: how the crypto asset will interact with the blockchain, and which events the crypto asset allows. These parameters are called "smart contracts";
- When the previous two steps are completed (they may even be carried out in parallel), the internal information technology tools of the blockchain are used.

The creation of the initial quantity of crypto assets is called issuance. The total amount of the issuance and the other parameters are part of the smart contract, and this information cannot be changed after the crypto assets are issued.

The majority of researchers [16–21] believe that the Initial Coin Offer is a set of actions, one of which is the issuance of crypto assets, and that the purpose of the issuance is to attract external funding to the entity that made it (the issuer). On the other hand, the International Accounting Standards Board (IASB), and researchers who assess crypto assets from the accounting perspective, classify crypto assets issued for distribution as inventory goods. These two purposes of crypto assets (attracting external funding, and serving as an object to be sold) form the conflict between the definitions, and mean that one of the definitions is incorrect.

ICOs are used as a tool by many financial and non-financial organizations, so the first question that arises before creating models of accounting systems for companies concerns the subject of the ICO: a token or a coin. Although there is no official division into "coin" and "token" yet in the regulatory framework, the authors agree with the definition given by the audit company PWC in its report [30]: the term "token" refers to an asset that provides the owner with additional functionality or utility, whereas the term "coin" typically refers to a cryptographic asset that has the explicit aim of operating only as a medium of exchange. As a source for classifying crypto assets, the authors utilized the European Parliament's classification in its legislative recommendations for crypto assets [31].
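As an illustration of the issuance parameters fixed in a smart contract, and of the three-category taxonomy detailed in the list that follows, here is a minimal, hypothetical sketch (the field names paraphrase the parameters described above; this is not a real protocol API):

```python
from dataclasses import dataclass
from enum import Enum

class CryptoAssetCategory(Enum):
    # The three European Parliament categories detailed below.
    UTILITY_TOKEN = "utility"
    ASSET_REFERENCED_TOKEN = "asset-referenced"
    PAYMENT_TOKEN = "payment"

@dataclass(frozen=True)  # frozen: issuance parameters cannot change after issuance
class SmartContractParameters:
    category: CryptoAssetCategory
    blockchain: str             # an existing chain selected, or a newly created one
    total_issuance: int         # fixed as part of the smart contract
    allowed_events: tuple = ()  # the events this crypto asset permits

params = SmartContractParameters(
    category=CryptoAssetCategory.UTILITY_TOKEN,
    blockchain="Ethereum",
    total_issuance=1_000_000,
    allowed_events=("transfer", "redeem"),
)
```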
This classification distinguishes three categories of crypto assets:

- Utility tokens: digital assets released to grant access to digital services or platforms;
- Asset-referenced tokens: digital assets that can be linked to a single currency or a collection of currencies, other digital assets, a single commodity or a group of commodities traded on an exchange, or a single stock or a collection of stocks. Before the publication of the proposal mentioned above, certain EEA nations passed local legislation governing Initial Coin Offerings (ICOs), in which tokens linked to assets are referred to as security tokens;
- Payment tokens: crypto assets primarily designed to be used as a form of payment (coins, electronic money tokens, e-money tokens).

The European Commission's approach divides crypto assets into three distinct groups. However, so-called hybrid tokens, which combine target uses from several subgroups of tokens, should be classified as belonging to one of the subgroups mentioned above if the proposed product incorporates those target uses.

All three groups of crypto assets actively function and form the ecosystem within the smart city. Researchers have recently attempted to comprehend industrial ecosystems in smart cities, including smart city industry ecosystems [32] and smart city governance/service/data ecosystems [33–35]. Smart city manufacturers (smart industries) lack precise definitions and classifications, and they are variously categorized based on research goals and the researchers' personal opinions [32]. Digital manufacturing, like traditional manufacturing, is based on supply chains. These supply chains are mainly digital, since the main components (raw materials and services) are also digital. The digital services forming the supply chain for smart manufacturers may be the product of other smart manufacturers or of smart consumers. For example, the final consumers of the TripAdvisor application, by rating the companies presented in it, create the principal value of the application. Considering that the border between manufacturer and consumer has become blurred, the manufacturers and consumers of the smart city are jointly called smart users.

**3. Methods and Materials**

The bibliographical research method was applied to determine the order of events in issuing coins. The bibliographical research was conducted in the expository mode to recreate the investigation's theoretical context; to achieve this, the authors used reliable sources and a careful selection and analysis of the material in question. Articles were retrieved from scientific databases such as Scopus and Web of Science. The keywords used for searching were "fintech" AND "cryptocurrency" OR "crypto" OR "blockchain" AND "accountancy" OR "accounting". Another search was related to smart city KPIs. The concept of a smart city as an environment for various digital processes was examined. In total, 59 sources, published from 2011 to 2022, were considered. The IFRS standards were applied to determine the costs [36–39].

The authors examined the information presented by the municipal authorities of Rome on the plans to develop Rome as a smart city in all possible areas [40]. Within the Rome Smart City plan, 81 projects from the 11 areas of intervention were identified and evaluated.
A total of 119 city indicators and 120 smart key performance indicators (KPIs) were put in place to monitor the plan's progress, replicate successful projects, and intervene in the most critical areas. The indicators represent the expected result in terms of quality of life within the city. The authors estimated and selected the smart KPIs that describe the city's level of digitization or innovative technologies. Another segment of KPIs was chosen for its capability to be implemented using crypto-asset-based products. The authors then assessed the smart city KPIs against the possibility of using crypto-asset-based products to implement the Rome municipal authority's strategy.

The determined costs were then used as a basis for formulae that are practical and applicable to solving the problems of financial institutions, and of any other companies issuing crypto assets in the smart city and facilitating the development of Rome within the smart city concept. A practical leverage formula, specified for these institutions, was created based on the obtained cost and income formulas. The authors used the taxonomy of crypto assets to apply the specific IFRS standards depending on the category of crypto asset; this is essential, since different types of crypto assets fall under different IFRS standards.

**4. Results**

As mentioned above, the Initial Coin Offer (ICO) is the issuance and initial distribution of crypto assets. The subject of this process is the crypto asset. As the authors have shown above, the crypto asset is a part of the product; therefore, the Initial Coin Offer is the issuance of the product. For example, if a utility token is issued within a supercar test drive voucher product, and this product is distributed electronically via email, then the product has crypto and digital parts. If, however, a cryptocurrency is issued that is used as a means of payment within the blockchain, the product will have only the crypto part.

_4.1. Capital Increase Method vs. Manufacturing for Further Sale_

4.1.1. ICO as Capital Increase

The authors of [41] defined the following capital increase methods:

- Increasing the capital through the issuance of shares;
- Increasing the capital by incorporating reserves;
- Increasing the capital by debt conversion;
- An Initial Public Offer and subsequent changes in the value of shares on the stock exchange.

All these methods focus on two approaches: increasing the number of issued shares of the company, or increasing the value of its shares. Some authors still compare the ICO and the IPO [42,43]. Within an IPO, the number of shares increases, and the shares are sold to the public; within an ICO, new crypto assets are issued and sold to the public. An investor obtains a share of the firm in an IPO, but in an ICO they receive a token that does not represent company shares; this is how the two differ. According to [22–24], the accounting approach to crypto assets shows that crypto assets should be registered in the issuer's balance sheet as inventory, that is, as a product for sale, which likewise does not support the opinion that the issuance and distribution of crypto assets is a method of increasing equity value. Given the above, the authors conclude that the goal of the Initial Coin Offer is not to increase capital.

4.1.2. ICO as Manufacturing
Manufacturing is typically used to describe an industrial production process in which raw materials are turned into completed goods sold on the market. Today, manufacturing is regarded as an integrated concept at all levels, from the equipment and production systems to the overall company activity [44]. The ongoing and consistent emphasis on product innovation has resulted in comparable large-scale investment and product development roadmaps among the important industry participants, which has led to a similar range of new product options in the market. As a result, there is less product differentiation, and no single company has dominated the market competition [45]. As the market advances to the next stage, businesses add higher levels of customer service and sophisticated approaches to solving customer problems. Customers begin to view products and services as integrated solutions that address all of their needs, rather than as discrete items [46].

Modern production is related to business process management [5], which addresses the following issues:

- Analysis of processes;
- Definition of the structure between processes;
- Choice of management method;
- Modeling and optimizing the processes;
- Performance measurement and diagnostics systems.

In the case of digital manufacturing, business process management is related to the management of the digital processes connected with the crypto-asset-based product [7]. Applying the business process management steps to crypto asset manufacturing can show whether the same approach applies to the issuance of crypto assets. Table 1 represents the stages of management of the crypto asset issuance process.

**Table 1.** Business Process Management steps for crypto asset issuance. Source: generated by the authors.

| The Process | Crypto Assets Issuance Stage |
| --- | --- |
| Analysis of the processes | Definitions of the following: general product features; distribution channels; blockchain type or exact blockchain; the limitations, if any |
| Definition of structure between processes | Definition of the legal and technical structure as the interaction between issuer–distributor–buyer |
| Choice of the management method | Definition of how the total issuance and its quality will be controlled |
| Modelling and optimizing the processes | Product testing in accordance with the product oversight and governance principles [47] |
| Performance measurement and diagnostics system | Product monitoring in accordance with the product oversight and governance principles [47] |

Product oversight and governance are principles that the European Central Bank promotes and requests to be used by asset management companies and all financial institutions [47,48]. Consequently, product oversight and governance principles are the innovations within crypto asset and financial product manufacturing. The stages of crypto asset issuance thus correspond to the manufacturing cycle of the business processes. Given the above, the authors define the Initial Coin Offer as a manufacturing method.

_4.2. Crypto Assets Manufacturing_

As the Fourth Industrial Revolution, or "Industry 4.0" [2–6,49,50], has just emerged, traditional manufacturing processes and organizational and commercial paradigms are being tested and disrupted. As a result, any crypto asset issuer should deal with the new product life cycle typical of the product they develop or manufacture.
Crypto-asset-based products have their own life cycle, which issuers should use in their development and manufacturing. Since the life cycle of a crypto-asset-based product or service is similar to that of any product or service within Industry 4.0, it is possible to apply the same business management method [5]. Although numerous publications regarding cryptocurrencies and crypto assets exist [12,17,51–56], the stages of crypto asset manufacturing still needed assessment. The authors therefore developed the lifecycle milestones of crypto-asset-based products and services, and assessed the milestones directly related to the issuance of the crypto asset part of the product. These milestones are as follows:

- Definition of a subgroup of crypto assets and development of the parameters of a smart contract;
- Determination of the issuance method;
- Issuing crypto assets using the specific parameters of a smart contract;
- The distribution model of the crypto assets (payment in fiat currency or other crypto assets);
- Circulation of the crypto assets;
- The disposal method of the crypto assets.

Following business process management [5], the crypto asset lifecycle is defined, and issuers may build the processes (technological, accounting, legal, marketing, and so on) around each milestone of the lifecycle.

_4.3. Smart City KPI Assessment_

Since the issuer of crypto-asset-based products considered in this research is a smart city, the authors examine Rome, which has a notable presence of real businesses, with 300,000 businesses operating within it [40]. The Municipality Administration plans to invest in instruments that support the regeneration, expansion, and development of the city's entrepreneurial and economic fabric, while promoting best practices in the region. It has also proposed its own model of economic growth, which aims to:

- Streamline and facilitate the interactions between the public and private sectors to create an ongoing, mutually beneficial discourse that benefits the entire community;
- Encourage firms to be more competitive in order to increase employment numbers, as well as productivity, efficiency, and human capital;
- Promote the formation and growth of synergies, and the exchange and transfer of knowledge, by identifying and implementing good practices for entrepreneurship development, which will benefit the region's overall economic and social structure.

Rome's municipal government has chosen KPIs following these goals, and is assessing the effectiveness of implementing the smart city concept. Table 2 represents the possible use of crypto-asset-based products for achieving these KPIs.

**Table 2.** Rome city smart KPIs and application of crypto-based products. Source: generated by the authors.

| KPI Name | KPI Description | Crypto-Based Products |
| --- | --- | --- |
| Places used for coworking | The number of coworking spaces. Coworking is sometimes referred to as the "new form of work" and is an example of the collaborative and sharing economy [57]. | Coworking space management has two aspects that crypto asset products may manage: (1) since spaces and objects (meeting rooms, working places) are usually limited, access may be controlled by issuing and circulating access tokens (utility tokens) or products based on them; (2) the services of the coworking spaces may be paid for with crypto-asset-based products (such as cryptocurrency). |
| Multiple online services or streamlined procedures for starting a business or engaging in commercial activities | The number of businesses registered online. | From the perspective of the processes, services related to starting a business or engaging in commercial activity may be divided into three parts: (1) conducting the service itself: smart users may use crypto-asset-based products to pay for the service; (2) identification of the applicant: smart users may use crypto-asset-based products to verify the applicant's identity; (3) submitting publicly verified extracts to the applicant: applicants may submit such documents via the blockchain. |
| Number of requests submitted online | Business models digitalization. | The same three parts apply: (1) conducting the service itself: smart users may pay with crypto-asset-based products; (2) identification of the applicant: smart users may verify the applicant's identity with crypto-asset-based products; (3) submitting publicly verified extracts: an applicant may submit such a document via the blockchain. |
| Presence of the Economic Development Plan for at least 3 years | — | This smart city KPI is not directly connected to crypto-asset-based products and services. |
| Number of Knowledge Sharing events (conferences, meetings, etc.) | The number of conferences and events organized in the city. | Tickets for such events may be sold as a crypto-asset-based product; payments for these events may be made with crypto assets, such as cryptocurrency; if access is limited, the proceedings of the conference may be made available upon presentation of the crypto-asset-based ticket. |
| Presence of the city brand on the platforms of e-commerce | The Rome city brand within payment platforms and payment products, or the development of its own payment platform for smart city users. | Development of the city's own payment platform based on blockchain technology; issuing a cryptocurrency with the city brand joins B2B (business-to-business) and B2C (business-to-customer) payments across the smart city. |
| Number of participants who support the city's brand | The presence of the city brand in the image or marketing campaign of the products or services represented by the businesses forming the city's economy. | — |
| Smart city products/service sales volumes | The number of transactions and the sales volumes generated by the businesses presented within the smart city. | The city's own blockchain-based payment platform (B2C and B2B) will increase intra-smart-city payment volumes; tax payments (such as F24, the national tax payment system) via the same smart city payment platform will increase intra-smart-city payment volumes; city utilities and services concentrated within the same platform will increase intra-smart-city payment volumes. |
| Presence of the server clusters for the economic development (at the level of the city and districts) | Server clusters for the digital economy are manufacturing, management, and distribution infrastructure. Their existence, availability, and location determine the sustainability and success of the smart city. | Server clusters are, in a way, coworking manufacturing infrastructure: since contemporary servers may be segregated into areas with access allowed to separate groups of users, one server cluster may be used by different smart users or producers of the smart city. Server cluster managing companies may use crypto-asset-based keys to control these accesses, and may accept crypto asset payments (including within the smart city's own payment platform) for the services offered by the server cluster entities. |
| Number of initiatives for the development of SMEs (Small and Medium Enterprises) | Achieving a high number of SME initiatives is not the goal in itself; the main target is an increased number of effectively working initiatives that help develop small and medium enterprises. | The smart city may widely use crypto assets and blockchain for such initiatives: a network for crowdfunding; an easy way of making inter-payments; supporting SMEs with a standard payment acceptance solution (B2C and B2B) based on blockchain. |

Table 2 shows that crypto-asset-based products may be blockchain-based or based on an existing crypto asset; both cases require the product to be based on either a new or an existing crypto asset. Due to innovation, businesses within the smart city may increase profits by offering clients unique goods and services that cater to their constantly shifting wants and preferences [46].

_4.4. Crypto-Asset-Based Product Production Accounting_

4.4.1. IFRS Approach for Accounting Lifecycle Milestones Related to Event Manufacturing

When a crypto asset issuer assesses the accounting techniques for crypto-asset-based products, the accounting approaches should be bound to the events related to the lifecycle milestones. Each event generates costs for the crypto asset issuer, which together make up the total cost of manufacturing the crypto-asset-based product. Summarizing the essence of Table 3, the authors define the formula for the calculation of the crypto asset costs:

TC = LF + SF + TF    (1)

where

- TC is the total manufacturing cost;
- LF is the license fee (fixed costs per issuer);
- SF is the salary or supplier fee (fixed costs per issuer);
- TF is the transaction cost (a variable fee, depending on the number of issued crypto assets).

Following the IFRS, the accounting of crypto assets is related to the purpose of issuing them [22]. The IFRS Interpretations Committee recognized that cryptocurrency, if it is intended for sale, must be accounted for following the IAS 2 Inventories standard [36]. The authors believe that this approach to accounting can be extended to all types of crypto assets. Examples of crypto assets held for sale include the following:

- Crypto assets held by the company for exchange;
- Crypto assets under management (for example, storing crypto assets in wallets for company clients);
- Crypto assets issued or held for sale.

The IFRS Committee also determined that if crypto assets are held by an enterprise and not for sale, then such crypto assets must be accounted for following IAS 38 Intangible Assets. Examples of such crypto assets are crypto assets issued for the company's own needs. The IFRS Committee recommended applying IAS 2 paragraph 3(b) for commodity brokers and traders when accounting for crypto assets [36]. Accordingly, commodity broker-traders are encouraged to carry their inventories at fair value less costs to sell, with the change in fair value recognized in profit or loss in the period in which it occurs.
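Returning to formula (1), the following minimal sketch (not part of the original paper; all example values are hypothetical) computes the total manufacturing cost for an issuance:

```python
def total_manufacturing_cost(license_fee: float,
                             salary_or_supplier_fee: float,
                             transaction_fee_per_unit: float,
                             units_issued: int) -> float:
    """Formula (1): TC = LF + SF + TF.

    LF and SF are fixed costs per issuer; TF is a variable cost that
    depends on the number of issued crypto assets.
    """
    tf = transaction_fee_per_unit * units_issued
    return license_fee + salary_or_supplier_fee + tf

# Hypothetical example: a payment token with an AML registration fee,
# a contractor fee for the issuance, and a per-unit blockchain fee.
tc = total_manufacturing_cost(license_fee=5_000.0,
                              salary_or_supplier_fee=12_000.0,
                              transaction_fee_per_unit=0.002,
                              units_issued=1_000_000)
print(tc)  # 19000.0
```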
The document [36] specifies that, if an entity measures crypto assets at fair value, paragraphs 91–99 of IFRS 13 Fair Value Measurement apply. The purpose of using this standard is to determine the price of goods held in inventory [38]. Following the standard, "the inventory cost must include all purchase costs, processing fees, and other expenditures incurred to maintain the inventory in its current location and condition."

**Table 3.** Manufacturing costs. Source: generated by the authors.

| Milestone | Event | Cost/Incomes |
| --- | --- | --- |
| Definition of a subgroup of crypto assets and development of parameters of a smart contract | Selection of the crypto asset type: utility tokens | No costs |
| | Selection of the crypto asset type: payment tokens | Fees for registering with the AML (anti-money laundering) control entity (license fee); fixed costs |
| | Selection of the crypto asset type: asset-referenced tokens | Fee for registering as an asset management or financial institution (license fee); fixed costs |
| | Customers' crypto assets will be held in the "accounts" of the issuer: yes | Fees for registering as a crypto wallet holder (license fee); fixed costs |
| | Customers' crypto assets will be held in the "accounts" of the issuer: no | No costs |
| Determination of the issuance method | — | No accounting-related events |
| Issuing crypto assets using certain parameters of a smart contract | The issuing method does not use the blockchain | No costs |
| | The issuing method uses the blockchain | Fees of the blockchain for the issuance (transaction fee); variable costs |
| | The physical issuance of the crypto assets | Salary or contractual fee for the issuance (salary or supplier fee); fixed costs |

As mentioned above, the product price is equal to TC (total manufacturing costs), for accounting purposes within the company's inventory or intangible assets. IAS 2 Inventories does not allow positions with zero cost to be carried in inventory [58]. Considering that the issuance of crypto assets may have no direct costs, such crypto assets are placed in inventory at the estimated initial selling value, and the entire issuance, expressed at this value, is recognized in the company's income.

4.4.2. IFRS Approach for Accounting Lifecycle Milestones Related to Event Distribution

The issuer of a crypto-asset-based product distributes only those crypto asset products issued for that purpose. The distribution has only two options:

- The buyer pays for the crypto assets in "traditional" (fiat) currencies;
- The buyer pays for the crypto assets in other crypto assets.

The purpose of the ICO, if the assets are not produced for the issuer's own consumption, is to sell the issued crypto assets. The issuer then applies IFRS 15 Revenue from Contracts with Customers to the sale of goods to customers [22]. The application of IFRS 15 is linked to the passage of ownership rights from the seller to the buyer; other cases must be assessed separately and are out of the scope of this article. Applying IFRS 15 allows crypto asset producers to account directly for the selling price, as income from product sales is recognized in profit and loss (PL).

The authors contend that, to clarify whether IFRS 15 may be used to account for revenues from such transactions, it is essential to consider the scenario in which the sale of issued assets is paid for with other crypto assets. This standard does not apply to "non-monetary transfers between businesses of the same line of business to facilitate sales to clients or potential customers," as stated in IFRS 15 paragraph 6.
This standard would not apply, for instance, to an agreement between two oil corporations to promptly swap oil to fulfill consumer demand in several designated locations. Since both exchanged items fall under the inventory category, such a transaction should be considered barter from an accounting perspective. According to the authors, the item is the same in this case, which is why revenue recognition under IFRS 15 does not apply to comparable transactions: the same corporation would act as both a supplier and a buyer of the same item simultaneously, adding expenditures and revenues while exchanging the same commodities. Crypto assets, however, cannot be treated in the same way, given that they are issued for different purposes, with distinct smart contract specifications, and for different clients. This means that both the buyer and the seller should recognize revenue from the sale of goods following IFRS 15; in the authors' opinion, their sale does not fall within the exclusions of paragraph 6 of this standard.

Issuers should calculate the amount of income following paragraph 66 of IFRS 15, which defines non-cash consideration and requires that revenue be measured at fair value. Following the above, fair value can be defined simply as the selling price in fiat currencies. Due to the high volatility of crypto assets, the current spot price of the crypto assets received should be fixed at the time of sale. The authors note that, since a market equivalent of the newly issued crypto assets will most likely not yet exist at the time of their release, the spot price in fiat currency of the crypto asset that the issuer receives in return should be taken.

_4.5. Write-Off Costs for the Sold Crypto Assets_

The weighted average cost method in accounting is one of three approaches to valuing inventory. It determines the average cost of all inventory based on the individual costs and the quantity of each item in stock. When the issuer issues many crypto assets, each lot may be valued at a different fair value. Under the weighted average cost method, the value of the goods available for sale is divided by the units available for sale, and the following formula is usually used:

PWcat = Q × (TScat / TQcat)    (2)

where:

- PWcat is the write-off of the costs of the sold crypto assets;
- Q is the quantity of sold crypto assets;
- TScat is the total value of the crypto assets of a given type (category) in inventory;
- TQcat is the total quantity of the crypto assets of a given type (category) in inventory.

Apart from the write-off costs, there are also costs related to distribution: the issuer pays a blockchain transaction fee for transferring the crypto assets to the buyer, which depends only on the miners (the persons or entities that confirm the transactions in a blockchain) [59]. The total distribution costs are therefore as follows:

TCcat = PWcat + TFcat    (3)

where:

- TCcat is the total distribution cost;
- PWcat is the write-off of the costs of the sold crypto assets;
- TFcat is the transaction fee for transferring the crypto assets via the blockchain. The transaction fee differs per crypto asset, since it is determined by the blockchain related to the crypto asset and used for the transaction.

Circulation and disposal events are presented in Table 4.
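A minimal sketch of formulas (2) and (3) follows (not from the original paper; the example values are hypothetical and continue the issuance example above):

```python
def write_off_sold(quantity_sold: float,
                   total_value_in_inventory: float,
                   total_quantity_in_inventory: float) -> float:
    """Formula (2): PWcat = Q * (TScat / TQcat), the weighted average cost
    written off for the sold crypto assets of a given category."""
    return quantity_sold * total_value_in_inventory / total_quantity_in_inventory

def total_distribution_cost(pw_cat: float, transaction_fee: float) -> float:
    """Formula (3): TCcat = PWcat + TFcat."""
    return pw_cat + transaction_fee

# Hypothetical example: 10,000 tokens sold out of an inventory of
# 1,000,000 tokens carried at a total value of 19,000.
pw = write_off_sold(10_000, 19_000.0, 1_000_000)          # 190.0
print(total_distribution_cost(pw, transaction_fee=25.0))  # 215.0
```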
**Table 4.** Circulation and disposal events. Source: generated by the authors.

| Milestone | Event | Cost/Incomes |
| --- | --- | --- |
| Circulation of crypto assets | Transfer of the crypto assets: processing of the transaction in the blockchain | Fees of the blockchain for the transaction (transaction fee) |
| | Revaluation of the crypto assets in the inventory: utility tokens | Issuers shall not revalue them; following their purpose, they should not form the market. |
| | Revaluation of the crypto assets in the inventory: payment tokens | Shall be revalued against the market price. The revaluation result is analyzed yearly within the annual report: it may form revaluation income if the value is registered on the credit side, or revaluation costs if the value is registered on the debit side. |
| | Revaluation of the crypto assets in the inventory: asset-referenced tokens | Revaluation of asset-referenced crypto assets is more complicated than for payment tokens, since the referenced assets must also be reassessed. The revaluation result is analyzed yearly within the annual report: it may form revaluation income if the value is registered on the credit side, or revaluation costs if the value is registered on the debit side. |
| Disposal of crypto assets | Lost/stolen crypto assets: own use (intangible assets), or crypto assets held for sale or exchange (inventory) | The total value of the lost or stolen crypto assets shall be written off to the lost/stolen expenses (lost/stolen product cost). Issuers shall calculate the write-off value based on the inventory/intangible assets value. |
| | Lost/stolen crypto assets: crypto assets under management (for example, storing crypto assets in wallets for company clients) | In such cases, issuers shall recover the crypto assets; if this is impossible, the customer should receive compensation at the market price. If the market price is lower than the issuer's calculated crypto asset value, the issuer writes off the value of the lost crypto assets to the lost/stolen expenses (lost/stolen product cost), calculated on the basis of the inventory/intangible assets value. If the market price is higher than the issuer's calculated crypto asset value, the issuer likewise writes off the value of the lost crypto assets to the lost/stolen expenses, calculated on the basis of the inventory/intangible assets value; however, the difference between the market value of the stolen/lost crypto assets and the balance value is written off as sunk costs. |
| | Expired crypto assets | The total value of the expired crypto assets should be written off to the corresponding expenses (expired product cost). The issuer should calculate the write-off value based on the inventory/intangible assets value. |
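The lost/stolen rule for crypto assets under management in Table 4 can be sketched as follows (not from the original paper; a minimal illustration of the write-off split under the stated assumptions):

```python
def lost_asset_write_off(book_value: float, market_value: float) -> dict:
    """Sketch of the Table 4 rule for lost/stolen crypto assets under
    management: the customer is compensated at the market price, the book
    (inventory/intangible) value goes to lost/stolen expenses, and any
    excess of market value over book value is written off as sunk costs.
    """
    return {
        "customer_compensation": market_value,
        "lost_stolen_expense": book_value,
        "sunk_cost": max(0.0, market_value - book_value),
    }

print(lost_asset_write_off(book_value=190.0, market_value=250.0))
# {'customer_compensation': 250.0, 'lost_stolen_expense': 190.0, 'sunk_cost': 60.0}
```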
Following Table 4, the revaluation process differs for the various crypto asset types. According to the IFRS, the revaluation of assets with an unlimited useful life is only conducted using market value. However, for utility tokens there is no common market, since a utility token is a crypto asset offered to end users as an access key to some IT system; the authors therefore contend that this particular class of crypto assets is not subject to revaluation. According to the IFRS, the revaluation of payment tokens, which are assets with an unlimited useful life, is also only conducted using market value. A typical market (on cryptocurrency exchanges) does exist for this class of crypto assets, where the issuer may find the current market price. The methodology for defining the payment token market price, however, still needs to be developed; the issuer should therefore develop its own methodology for determining the market price.

The resulting revaluation formulas are as follows.

For crypto assets held for sale:

Revcat = Icat − MRcat × QIcat    (4)

where

- Revcat is the revaluation result per crypto asset type;
- Icat is the value of the inventory per crypto asset type;
- MRcat is the market rate of the crypto asset;
- QIcat is the quantity of revalued crypto assets in inventory.

For crypto assets held for own use:

Revcat = IAcat − MRcat × QIAcat    (5)

where

- Revcat is the revaluation result per crypto asset type;
- IAcat is the value of the intangible assets per crypto asset type;
- MRcat is the market rate of the crypto asset;
- QIAcat is the quantity of revalued crypto assets in intangible assets.

If Revcat is positive, it represents revaluation costs; otherwise, it represents revaluation income.
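A minimal sketch of formulas (4) and (5) follows (not from the original paper; the example values are hypothetical):

```python
def revaluation_result(carrying_value: float,
                       market_rate: float,
                       quantity: float) -> float:
    """Formulas (4)/(5): Revcat = Icat (or IAcat) - MRcat * QIcat.

    A positive result is a revaluation cost (book value above market);
    a negative result is revaluation income.
    """
    return carrying_value - market_rate * quantity

# Hypothetical payment-token example: 990,000 tokens carried at 18,810
# while the exchange spot price is 0.021 per token.
rev = revaluation_result(carrying_value=18_810.0,
                         market_rate=0.021,
                         quantity=990_000)
print(rev)  # -1980.0 -> revaluation income
```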
**5. Conclusions**

The smart city concept requires the inevitable rethinking of different processes across all its subsystems; new products require new manufacturing and distribution approaches. The authors assessed the smart city of Rome and the possibility of achieving its KPIs by implementing crypto-asset-based products. The results show that all of Rome's smart economy KPIs, except one, are achievable by implementing crypto-asset-based products. This demonstrates the high value of this research for the smart city of Rome: where products based on digital assets are a possible solution for achieving a KPI, the smart city is highly likely to manufacture them. The assessment of the digital manufacturing stages and the related accounting events conducted by the authors will allow smart city management to correctly develop the business plan, and further build an effective and transparent accounting approach. This creates additional possibilities both for smart cities, which receive an additional tool for implementing their KPIs, and for the financial market dealing with digital assets, since these assets can be applied to a wider range of objects. Crypto assets are very popular among city residents; while scientists and governments discuss the viability of digital assets, the young generation actively uses them. The implementation of smart city KPIs will therefore be facilitated by actively using this tool.

ICOs have supplanted traditional sources of funding for blockchain-based start-up businesses. These businesses launch new goods based on crypto assets, market them, and then use the revenue from sales to launch related programs and products; they have collected more than $30 billion in revenue through ICOs [16]. In light of the above, transparent accounting procedures are required, including the production of comprehensible and comparable yearly reports, both for the firms themselves and for the market as a whole.

The authors rethink the Initial Coin Offer process, composed of the issuance and distribution of crypto assets, by examining the accounting procedures for each milestone associated with the issuance of crypto assets. The authors clearly show that the first issuance of crypto assets is unrelated to raising the issuer's capital. As a result, it is inaccurate to equate the Initial Coin Offer (ICO) with an Initial Public Offering (IPO) in the definition of an ICO. When a company makes an initial public offering, its ownership shifts from private to public, and investors become the firm's shareholders; however, IFRS-based evaluations of the ICO indicate no evidence of the issuer incurring any such obligation to the purchasers of the crypto assets. As a result, the authors propose categorizing the ICO process as the manufacturing of crypto assets. This approach allows the smart city to develop more flexibly and use digital-asset-based products, since users do not draw on the capital of the smart city authorities; on the contrary, they buy products manufactured by the smart city.

The authors defined the crypto asset lifecycle and assessed the incomes and expenses related to all its events; the issuer should treat the assets as products in inventory. The discussion arising from this research relates to the revaluation of crypto assets: as the authors have shown, issuers (companies) should revalue the crypto assets held for sale, and accordingly, a methodology for determining the current value of crypto assets must be developed.

This study has a set of limitations. The crypto asset products, as a possible solution for smart city KPIs, were compared only to the Rome smart city KPIs; the KPIs of other smart cities were not examined. A further limitation is that the authors do not consider energy costs separately; these costs are assumed to be part of the suppliers' expenses and are accounted for accordingly within those cost types.

_Managerial Implication._ This paper is the first devoted to the theoretical exploration and evaluation of how crypto-based assets may assist a smart city in achieving its KPIs, using the example of the smart city of Rome. The study also provides a thorough overview of the ICO accounting stages and IFRS-based accounting procedures. The authors classify the ICO process as manufacturing.

_Practical/Social Implications._ This study offered ways of calculating ICO manufacturing expenses for smart cities, with practical ramifications; the same approach is also applicable to companies working within the smart city. The study defines the expenses of the manufactured crypto assets, and of the distribution stages of the products based on them, together with their accounting under the IFRS. A clear and transparent accounting approach will lead to clear and transparent smart city financial reports, and such transparency is in the public interest.

_Future Research._ Future research can focus on the blockchain type that is most suitable for use within a smart city. On the one hand, the use of a traditional blockchain, such as Ethereum, is simple due to its developed protocols and approaches. On the other hand, considering that these networks are energy-consuming, a new approach may be considered: nodes (the blockchain points of transaction approval, each holding a copy of the entire blockchain) are assigned only to transactions approved by the smart city, which is expected to control the expenses of the confirmation process and decrease energy consumption.

**Author Contributions:** Conceptualization, O.C. and Y.P.; methodology, O.C. and Y.P.; validation, O.C.; investigation, O.C. and Y.P.; data curation, O.C. and Y.P.; writing—original draft preparation, O.C.; writing—review and editing, O.C.
and Y.P.; supervision, Y.P.; funding acquisition, Y.P. All authors have read and agreed to the published version of the manuscript.

**Funding:** This project is financially supported by project No. 1.1.1.2/16/I/001 of the Republic of Latvia, funded by the European Regional Development Fund. Research project No. 1.1.1.2/VIAA/3/19/458 "Development of Model of Smart Economy in Smart City".

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

**References**

1. Qin, J.; Liu, Y.; Grosvenor, R. A Categorical Framework of Manufacturing for Industry 4.0 and Beyond. Procedia CIRP 2016, 52, 173–178.
2. Suleiman, Z.; Shaikholla, S.; Dikhanbayeva, D.; Shehab, E.; Turkyilmaz, A. Industry 4.0: Clustering of Concepts and Characteristics. Cogent Eng. 2022, 9, 2034264.
3. Ardito, L.; Petruzzelli, A.M.; Panniello, U.; Garavelli, A.C. Towards Industry 4.0. Bus. Process Manag. J. 2019, 25, 323–346.
4. Vaidya, S.; Ambad, P.; Bhosle, S. Industry 4.0—A Glimpse. Procedia Manuf. 2018, 20, 233–238.
5. Tupa, J.; Steiner, F. Industry 4.0 and Business Process Management. Teh. Glas. 2019, 13, 349–355.
6. Alcácer, V.; Cruz-Machado, V. Scanning the Industry 4.0: A Literature Review on Technologies for Manufacturing Systems. Eng. Sci. Technol. Int. J. 2019, 22, 899–919.
7. Ribeiro da Silva, E.H.D.; Shinohara, A.C.; Pinheiro de Lima, E.; Angelis, J.; Machado, C.G. Reviewing Digital Manufacturing Concept in the Industry 4.0 Paradigm. Procedia CIRP 2019, 81, 240–245.
8. Hagel, J., III; Brown, J.S.; Kulasooriya, D.; Gif, C.; Chen, M. The Future of Manufacturing; Deloitte University Press: Westlake, TX, USA, 2015.
9. Da Silva, E.R.; Shinohara, A.C.; Nielsen, C.P.; de Lima, E.P.; Angelis, J. Operating Digital Manufacturing in Industry 4.0: The Role of Advanced Manufacturing Technologies. Procedia CIRP 2020, 93, 174–179.
10. Rejeb, A.; Rejeb, K.; Simske, S.J.; Keogh, J.G. Blockchain Technology in the Smart City: A Bibliometric Review. Qual. Quant. 2021, 56, 2875–2906.
11. Georgiou, I.; Nell, J.G.; Kokkinaki, A.I. Blockchain for Smart Cities: A Systematic Literature Review; Springer: Berlin/Heidelberg, Germany, 2020; pp. 169–187.
12. Hashimy, L.; Treiblmaier, H.; Jain, G. Distributed Ledger Technology as a Catalyst for Open Innovation Adoption among Small and Medium-Sized Enterprises. J. High Technol. Manag. Res. 2021, 32, 100405.
13. Tana, S.; Breidbach, C.F. Institutionalizing Digital Transformation through Cryptocurrency Use. ECIS 2021, 107.
14. Alahmadi, D.H.; Baothman, F.A.; Alrajhi, M.M.; Alshahrani, F.S.; Albalawi, H.Z. Comparative Analysis of Blockchain Technology to Support Digital Transformation in Ports and Shipping. J. Intell. Syst. 2021, 31, 55–69.
15. Sunmola, F.T.; Burgess, P.; Tan, A.
Building Blocks for Blockchain Adoption in Digital Transformation of Sustainable Supply Chains. Procedia Manuf. 2021, 55, 513–520.
16. Fahlenbrach, R.; Frattaroli, M. ICO Investors. Financ. Mark. Portf. Manag. 2020, 35, 1–59.
17. Hacker, P.; Thomale, C. Crypto-Securities Regulation: ICOs, Token Sales and Cryptocurrencies under EU Financial Law. Eur. Co. Financ. Law Rev. 2018, 15, 645–696.
18. Momtaz, P.P. Initial Coin Offerings. PLoS ONE 2020, 15, e0233018.
19. Hsieh, H.-C.; Oppermann, J. Initial Coin Offerings and Their Initial Returns. Asia Pac. Manag. Rev. 2021, 26, 1–10.
20. De Andrés, P.; Arroyo, D.; Correia, R.; Rezola, A. Challenges of the Market for Initial Coin Offerings. Int. Rev. Financ. Anal. 2022, 79, 101966.
21. Tao, Z.; Peng, B. Optimal Initial Coin Offering under Speculative Token Trading. Eur. J. Oper. Res. 2022, in press.
22. Procházka, D. Accounting for Bitcoin and Other Cryptocurrencies under IFRS: A Comparison and Assessment of Competing Models. Int. J. Digit. Account. Res. 2018, 18, 161–188.
23. Bartolucci, S.; Kirilenko, A. A Model of the Optimal Selection of Crypto Assets. R. Soc. Open Sci. 2020, 7, 191863.
24. Xiong, F.; Xie, M.; Zhao, L.; Li, C.; Fan, X. Recognition and Evaluation of Data as Intangible Assets. Sage Open 2022, 12, 215824402210946.
25. Mayer, J.; Niemietz, P.; Trauth, D.; Bergs, T. How Distributed Ledger Technologies Affect Business Models of Manufacturing Companies. Procedia CIRP 2021, 104, 152–157.
26. Pesch, R.; Endres, H.; Bouncken, R.B. Digital Product Innovation Management: Balancing Stability and Fluidity through Formalization. J. Prod. Innov. Manag. 2021, 38, 726–744.
27. Mäntymäki, M.; Wirén, M.; Najmul Islam, A.K.M. Exploring the Disruptiveness of Cryptocurrencies: A Causal Layered Analysis-Based Approach; Springer: Berlin/Heidelberg, Germany, 2020; pp. 27–38.
28. Pilat, D.; Hatem, L.; Ker, D.; Mitchell, J. A Roadmap toward a Common Framework for Measuring the Digital Economy. 2020. Available online: https://www.oecd.org/sti/roadmap-toward-a-common-framework-for-measuring-the-digital-economy.pdf (accessed on 4 June 2022).
29. Paritala, P.K.; Manchikatla, S.; Yarlagadda, P.K.D.V. Digital Manufacturing—Applications Past, Current, and Future Trends. Procedia Eng. 2017, 174, 982–991.
30. Tucker, G.; Sedelnikova, I.; Saslow, M.; Meurer, H.; Coughlan, A. In Depth: A Look at Current Financial Reporting Issues. 2017. Available online: https://www.pwc.com/sg/en/insurance/assets/ifrs17-current-financial-reporting.pdf (accessed on 9 August 2021).
31.
EU Proposal for a Regulation of the European Parliament and of the Council on Markets in Crypto-Assets, and Amending Directive (EU) 2019/1937; The European Parliament: Brussels, Belgium, 2020.
32. Jo, S.; Han, H.; Leem, Y.; Lee, S. Sustainable Smart Cities and Industrial Ecosystem: Structural and Relational Changes of the Smart City Industries in Korea. Sustainability 2021, 13, 9917.
33. Pellicano, M.; Calabrese, M.; Loia, F.; Maione, G. Value Co-Creation Practices in Smart City Ecosystem. J. Serv. Sci. Manag. 2019, 12, 34–57.
34. Rotuna, C.; Gheorghita, A.; Zamfiroiu, A.; Smada, D.-M. Smart City Ecosystem Using Blockchain Technology. Inform. Econ. 2019, 23, 41–50.
35. Gupta, A.; Panagiotopoulos, P.; Bowen, F. An Orchestration Approach to Smart City Data Ecosystems. Technol. Forecast. Soc. Change 2020, 153, 119929.
36. Craig Smith. IFRS Interpretations Committee Meeting. 2019. Available online: https://www.ifrs.org/content/dam/ifrs/meetings/2019/june/ifric/ap12-holdings-of-cryptocurrencies.pdf (accessed on 23 November 2021).
37. Pole, V. Revenue from Contracts with Customers—A Guide to IFRS 15. 2018. Available online: https://www2.deloitte.com/content/dam/Deloitte/lu/Documents/audit/lu-IFRS-15.pdf (accessed on 30 October 2021).
38. IFRS Committee. IAS 2 Inventories; IFRS Accounting Standards Board: London, UK, 2021.
39. Pope, P.F.; McLeay, S.J. The European IFRS Experiment: Objectives, Research Challenges and Some Early Evidence. Account. Bus. Res. 2011, 41, 233–266.
40. Municipality of Rome. Il Piano Roma Smart City. 2021. Available online: https://www.comune.roma.it/eventi-resources/cms/documents/Roma%20Smart%20City_Il%20Piano.pdf (accessed on 10 May 2022).
41. Duma, F.; Paun, D. Company Financing through Capital Increase in the Hospitality Industry. Interdiscip. Manag. Res. 2011, 7, 787.
42. Hashemi Joo, M.; Nishikawa, Y.; Dandapani, K. ICOs, the Next Generation of IPOs. Manag. Finance 2019, 46, 761–783.
43. Wis, A. Initial Coin Offering as a Funding Source for Projects. ACC J. 2019, 25, 90–98.
44. Esmaeilian, B.; Behdad, S.; Wang, B. The Evolution and Future of Manufacturing: A Review. J. Manuf. Syst. 2016, 39, 79–100.
45. Shelton, R. Integrating Product and Service Innovation. Res.-Technol. Manag. 2009, 52, 38–44.
46. Shin, J.; Kim, Y.J.; Jung, S.; Kim, C. Product and Service Innovation: Comparison between Performance and Efficiency. J. Innov. Knowl. 2022, 7, 100191.
47. Asante, K.; Owen, R.; Williamson, G. Governance of New Product Development and Perceptions of Responsible Innovation in the Financial Sector: Insights from an Ethnographic Case Study. J. Responsible Innov. 2014, 1, 9–30.
48. Marano, P. The Contribution of Product Oversight and Governance (POG) to the Single Market: A Set of Organisational Rules for Business Conduct. In Insurance Distribution Directive: A Legal Analysis; Marano, P., Noussia, K., Eds.; Springer International Publishing: Cham, Switzerland, 2021; pp. 55–74. ISBN 978-3-030-52738-9.
49. Williams, L.D. Concepts of Digital Economy and Industry 4.0 in Intelligent and Information Systems. Int. J. Intell. Netw. 2021, 2, 122–129.
50. Horváth, D.; Szabó, R.Z. Driving Forces and Barriers of Industry 4.0: Do Multinational and Small and Medium-Sized Companies Have Equal Opportunities? Technol. Forecast. Soc. Change 2019, 146, 119–132.
51. Ramos, S.; Pianese, F.; Leach, T.; Oliveras, E. A Great Disturbance in the Crypto: Understanding Cryptocurrency Returns under Attacks. Blockchain Res. Appl. 2021, 2, 100021.
52. ESMA. SMSG Advice—Own Initiative Report on Initial Coin Offerings and Crypto-Assets. 2018. Available online: https://www.esma.europa.eu/sites/default/files/library/esma22-106-1338_smsg_advice_-_report_on_icos_and_crypto-assets.pdf (accessed on 23 November 2021).
53. Bech, M.; Garratt, R. Central Bank Cryptocurrencies. 2017. Available online: https://www.bis.org/publ/qtrpdf/r_qt1709f.pdf (accessed on 28 July 2021).
54. Giudici, G.; Milne, A.; Vinogradov, D. Cryptocurrencies: Market Analysis and Perspectives. Econ. E Politica Ind. J. Ind. Bus. Econ. 2020, 47, 1–18.
55. Grobys, K.; Ahmed, S.; Sapkota, N. Technical Trading Rules in the Cryptocurrency Market. Finance Res. Lett. 2020, 32, 101396.
56. Gowda, N.; Chakravorty, C. Comparative Study on Cryptocurrency Transaction and Banking Transaction. Glob. Transit. Proc. 2021, 2, 530–534.
57. Durante, G.; Turvani, M. Coworking, the Sharing Economy, and the City: Which Role for the 'Coworking Entrepreneur'? Urban Sci. 2018, 2, 83.
58. Kotsupatriy, M.; Ksonzhyk, I.; Skrypnyk, S.; Shepel, I.; Koval, S. Use of International Accounting and Financial Reporting Standards in Enterprise Management. J. Impact Factor 2020, 11, 788–796.
59. Cernisevs, O. Analysis of the Factors Influencing the Formation of the Transaction Price in the Blockchain. Financ. Credit Syst. Prospect. Dev. 2021, 3, 36–47.
[CrossRef]](http://doi.org/10.26565/2786-4995-2021-3-04)_ **Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual** author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content. -----
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/smartcities6010003?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/smartcities6010003, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/2624-6511/6/1/3/pdf?version=1671788381" }
2,022
[]
true
2022-12-23T00:00:00
[ { "paperId": "983b498143cbbe00241771f60e6ee4717c7c2747", "title": "Optimal initial coin offering under speculative token trading" }, { "paperId": "185352afb0f288b47c9e4beec655b7e08ee56218", "title": "Product and service innovation: Comparison between performance and efficiency" }, { "paperId": "e1cbb51daa57ef0138911ecf16c7515af493e9db", "title": "Recognition and Evaluation of Data as Intangible Assets" }, { "paperId": "5ad876ff49af7541a5613687f5edc7838bfd1546", "title": "Industry 4.0: Clustering of concepts and characteristics" }, { "paperId": "6cbd71dad05bc49b8faa13acae4dfec3b82d2293", "title": "ANALYSIS OF THE FACTORS INFLUENCING THE FORMATION OF THE TRANSACTION PRICE IN THE BLOCKCHAIN" }, { "paperId": "147605832fd19a62d852597711a196deb08b01c6", "title": "Comparative analysis of blockchain technology to support digital transformation in ports and shipping" }, { "paperId": "d72d65741d0af1c189d8c832fd331257fd253ffc", "title": "Digital Product Innovation Management: Balancing Stability and Fluidity through Formalization" }, { "paperId": "17e29969c77f14f071dc54b402c810fe8b8c71b4", "title": "Challenges of the market for initial coin offerings" }, { "paperId": "975e28806752336fa57ec5a3e7c6376de8549187", "title": "Blockchain technology in the smart city: a bibliometric review" }, { "paperId": "94359cef4583150391cf77d7a9b7b546aec5bdde", "title": "Sustainable Smart Cities and Industrial Ecosystem: Structural and Relational Changes of the Smart City Industries in Korea" }, { "paperId": "1c5f26234e2c213daafe98c9d1248f5bbb79af2d", "title": "A great disturbance in the crypto: Understanding cryptocurrency returns under attacks" }, { "paperId": "4cbead0f9a188f7a3faa521e2f39ebc942e99d62", "title": "Comparative Study on Cryptocurrency Transaction and Banking Transaction" }, { "paperId": "8f085c0c252b8b43d7e284c06a17f3351f5e60da", "title": "Distributed ledger technology as a catalyst for open innovation adoption among small and medium-sized enterprises" }, { "paperId": "e4d4a4f3a371b32c2671187808f8f2c176a032f1", "title": "Initial coin offerings and their initial returns" }, { "paperId": "8895b390bd19b6d8e7a953d3c8848f9d0083426c", "title": "Use of International Accounting and Financial Reporting Standards in Enterprise Management" }, { "paperId": "df984cb055acf0130cedfe6bfdd321244718ec83", "title": "An orchestration approach to smart city data ecosystems" }, { "paperId": "af910e4395e4b7ff2f2238ecc1e4761910685f85", "title": "Smart City Ecosystem Using Blockchain Technology" }, { "paperId": "741ec4f136244b6c97f27db29b01204033a56ffc", "title": "Industry 4.0 and business process management" }, { "paperId": "6bd7f605a8801dc6dfb442c376ed97c13d3a20ae", "title": "Initial coin offering as a funding source for projects" }, { "paperId": "eb02f0a7d8000ee816470dde4aad857558441d78", "title": "Cryptocurrencies: market analysis and perspectives" }, { "paperId": "96f3fa43a837c4f919bfb12ac07f2451268819b7", "title": "Driving forces and barriers of Industry 4.0: Do multinational and small and medium-sized companies have equal opportunities?" 
}, { "paperId": "b29f54b46a9f5fb44142e145551e3c753e380ed3", "title": "ICO investors" }, { "paperId": "ceffbc156ca30da44920d50416c1e38f18c76e8b", "title": "ICOs, the next generation of IPOs" }, { "paperId": "529bf27cea17f9861db6a35b172476e5fddd506b", "title": "A model of the optimal selection of crypto assets" }, { "paperId": "45efc8ebfb855aab81c73cf90f33e9d3b4f1d5ce", "title": "Scanning the Industry 4.0: A Literature Review on Technologies for Manufacturing Systems" }, { "paperId": "47b844f6a83c6f36998bc60db813dd20a3dcce5b", "title": "Value Co-Creation Practices in Smart City Ecosystem" }, { "paperId": "b5036910bb803f6f227da357bb0d9f30924ff253", "title": "Coworking, the Sharing Economy, and the City: Which Role for the ‘Coworking Entrepreneur’?" }, { "paperId": "4a17d9a8576873393f16451145bbab1e60ecebc9", "title": "Towards Industry 4.0" }, { "paperId": "22e7679e4adb7999523811295d2b6c5153cb8fd8", "title": "Initial Coin Offerings" }, { "paperId": "73639061b1e1840429a3ae0d1ab2e225e5721fe6", "title": "Crypto-Securities Regulation: ICOs, Token Sales and Cryptocurrencies under EU Financial Law" }, { "paperId": "61d9b975555eae056e393b1380cf0a37de264fce", "title": "The evolution and future of manufacturing: A review" }, { "paperId": "5f11bb25f91676e50ca3afdb696af652fdc92848", "title": "Governance of new product development and perceptions of responsible innovation in the financial sector: insights from an ethnographic case study" }, { "paperId": "81577ebf50f3c3552db73a8baa073cd24544690d", "title": "The European IFRS experiment: objectives, research challenges and some early evidence" }, { "paperId": "b51da4d9fc5f9f461c1efcbf3b5d402d7ec62730", "title": "Integrating Product and Service Innovation" }, { "paperId": "77a7d13241a03c5f895fce021a8f44b1c375df53", "title": "How Distributed Ledger Technologies affect business models of manufacturing companies" }, { "paperId": "236e46dd8cc19aac426a2ab0ed290178ffbfdc20", "title": "Concepts of Digital Economy and Industry 4.0 in Intelligent and information systems" }, { "paperId": "742d555612b228d34f12f4ea49f8c342f9388d47", "title": "Insurance Distribution Directive: A Legal Analysis" }, { "paperId": "bf632d032b8957d3d120da912f8e6a7c7130a0a2", "title": "Building Blocks for Blockchain Adoption in Digital Transformation of Sustainable Supply Chains" }, { "paperId": "2e9d6a7afbb88400c9b608ed752dfdb68f72cb90", "title": "Technical trading rules in the cryptocurrency market" }, { "paperId": "ea1a5bef1fd8d39132f4b30e7303cec1f3be26d8", "title": "Operating Digital Manufacturing in Industry 4.0: the role of advanced manufacturing technologies" }, { "paperId": "a7de6d87bbcee5b1bdbdb2539383d91224554bdd", "title": "Reviewing Digital Manufacturing concept in the Industry 4.0 paradigm" }, { "paperId": "a8ffa7725a2b2bb54b4be3fb208b092453bc6ec3", "title": "Industry 4.0 – A Glimpse" }, { "paperId": "d34b6f9d406ad8d49f9eba4f9279eb98fcbd2650", "title": "Accounting for Bitcoin and Other Cryptocurrencies under IFRS: A Comparison and Assessment of Competing Models" }, { "paperId": "3181ae45eb697fcf450197ab3a62ac263b25d709", "title": "Digital manufacturing: Applications past, current, and future trends" }, { "paperId": "222764548f8c9f3f9ea6afab4e93cd9a4e968b10", "title": "A categorical framework of manufacturing for industry 4.0 and beyond" }, { "paperId": null, "title": "Company Financing through Capital Increase in the Hospitality Industry" } ]
17,491
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/017ec541c50159757cab3a950bcb4e53b6fd0109
[ "Computer Science" ]
0.900736
IoT–smart contracts in data trusted exchange supplied chain based on block chain
017ec541c50159757cab3a950bcb4e53b6fd0109
International Journal of Electrical and Computer Engineering (IJECE)
[ { "authorId": "120438498", "name": "S. G. Kumar" }, { "authorId": "2080568713", "name": "B. Sriman" }, { "authorId": "50601725", "name": "A. Murugan" }, { "authorId": "143604975", "name": "B. Muruganantham" }, { "authorId": "2350820524", "name": "Articles Info" } ]
{ "alternate_issns": null, "alternate_names": [ "Int J Electr Comput Eng", "Int J Electr Comput Eng (IJECE", "International Journal of Electrical and Computer Engineering" ], "alternate_urls": null, "id": "79ca3e12-1e1d-4da8-9915-54a393e8c512", "issn": "2088-8708", "name": "International Journal of Electrical and Computer Engineering (IJECE)", "type": "journal", "url": "http://iaesjournal.com/online/index.php/IJECE/issue/archive" }
The Internet of Things (IoT) plays a critical part in the advancement of many fields. In recent years, trusted exchange of IoT data has grown greatly in both demand and scale. On such a platform, service providers can search for and exchange the data sets they require. However, a centralized infrastructure with third-party mediators cannot provide enough trust for data exchange. This paper proposes a decentralized, blockchain-based solution for trusted IoT data exchange. In particular, following the fundamental principles of blockchain, individuals can communicate with each other in a verifiable manner without a trusted intermediary. Blockchain enables a distributed digital ledger. IoT sensor devices (Zigbee) use blockchain technology to assert public availability of temperature records, shipment location tracking, humidity readings, damage prevention, and data immutability. The sensor devices monitor the temperature, location, and damage of each parcel during the shipment to fully guarantee compliance with regulations. On the blockchain, all data moved from one position to another is evaluated by a smart contract against the product attributes. Using the Ethereum blockchain and smart contracts, the resulting design provides a decentralized, distributed digital ledger that is auditable, transparent, and visual.
**International Journal of Electrical and Computer Engineering (IJECE)** Vol. 10, No. 1, February 2020, pp. 438–446. ISSN: 2088-8708, DOI: 10.11591/ijece.v10i1.pp438-446

# IoT–smart contracts in data trusted exchange supplied chain based on block chain

**S. Ganesh Kumar, B. Sriman, A. Murugan, B. Muruganantham**
Department of Computer Science and Engineering, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Chennai, India

**Article Info** — _Article history:_ Received Feb 17, 2019; Revised Aug 23, 2019; Accepted Aug 30, 2019. _Keywords:_ Blockchain; Data trusted exchange; IoT (Internet of Things); Smart contracts. _Corresponding author:_ S. Ganesh Kumar, Department of Computer Science and Engineering, Faculty of Engineering and Technology, SRM Institute of Science and Technology, Chennai, India. Email: 13ganesh@mail.com

**ABSTRACT** The Internet of Things (IoT) plays a critical part in the advancement of many fields. In recent years, trusted exchange of IoT data has grown greatly in both demand and scale. On such a platform, service providers can search for and exchange the data sets they require. However, a centralized infrastructure with third-party mediators cannot provide enough trust for data exchange. This paper proposes a decentralized, blockchain-based solution for trusted IoT data exchange. In particular, following the fundamental principles of blockchain, individuals can communicate with each other in a verifiable manner without a trusted intermediary. Blockchain enables a distributed digital ledger. IoT sensor devices (Zigbee) use blockchain technology to assert public availability of temperature records, shipment location tracking, humidity readings, damage prevention, and data immutability. The sensor devices monitor the temperature, location, and damage of each parcel during the shipment to fully guarantee compliance with regulations. On the blockchain, all data moved from one position to another is evaluated by a smart contract against the product attributes. Using the Ethereum blockchain and smart contracts, the resulting design provides a decentralized, distributed digital ledger that is auditable, transparent, and visual. _Copyright © 2020 Institute of Advanced Engineering and Science. All rights reserved._

**1.** **INTRODUCTION**

Internet of Things (IoT) and blockchain are viewed as emerging concepts and technologies [1]. As they change established ideas and create new possibilities, each in its own domain, there is an opportunity to create applications that share the inherent characteristics of both, exploring how the IoT can profit from the decentralized nature of the blockchain. With the continued development of communication and networking technologies (e.g., Wi-Fi, Zigbee, Bluetooth), a growing number of things (e.g., sensors, actuators, smart devices) are being connected to the Internet these days (IoT). Blockchain [2-5], essentially a distributed digital ledger [6], has numerous applications and can be used for any kind of data exchange, agreements/contracts, tracking and, of course, payment [6, 7]. Since every transaction is recorded in a block, and copies of the ledger are distributed over many nodes (computers), it is highly transparent. It is also highly secure, since each block links to the one before and after it.
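To make this hash-linking concrete, here is a minimal Python sketch (our illustration, not part of the system described in this paper): each block stores the hash of its predecessor, so altering any earlier record breaks verification of every later block.

```python
import hashlib, json

def block_hash(block):
    # Hash the block's canonical JSON encoding (excluding its own hash field).
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append_block(chain, data):
    # Each new block records the hash of the previous block, forming the chain.
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash(block)
    chain.append(block)

def verify(chain):
    # Recompute every hash and check every back-link; tampering breaks both.
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = []
append_block(chain, {"parcel": "P-001", "temp_c": 4.2})
append_block(chain, {"parcel": "P-001", "temp_c": 4.5})
print(verify(chain))            # True
chain[0]["data"]["temp_c"] = 9  # tamper with an earlier record
print(verify(chain))            # False
```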
There is no single central authority over the blockchain [2], and it is highly efficient and easily adapted to new kinds of work. In the end, blockchain can improve the efficiency and transparency of supply chains and affect everything from warehousing to delivery to payment. Chain of custody is essential for some goods, and blockchain has chain of custody built in. Smart contracts have recently gained attention [8], particularly in connection with blockchain technology. With rules selected in advance, blockchain technology can verify [9] a smart contract's (agreement's) correctness and enforce it; hence it is self-implementing and self-executing. However, without the right base platform a smart contract is not "smart": such a platform is needed to run, execute, and check these contracts. A base platform on which smart contracts can work in a fully autonomous, decentralized manner is the blockchain. Smart contracts can be used for financial services [10] (e.g., Bitcoin) or general services (e.g., Ethereum) [11, 12]. A blockchain [4] executes, checks, gathers, and stores smart contracts in blocks. Every block has a reference to at least one predecessor; hence the term blockchain [13]. Blockchains are decentralized, distributed ledgers [7] based on cryptography. A main interest in using blockchains in the financial industry is to automate and digitalize processes, particularly when a large number of cooperating parties are involved. The primary advantage of combining smart contracts with blockchain is that these agreements can be evaluated automatically. Where current solutions produce records that need to be checked manually, with smart contracts the temperature indicators, shipment tracking, and shipping-damage prevention can be assessed automatically, with both the sender and the recipient of a package notified. In addition, the stored data is tamper-proof and can be used for audits by external parties. By using Ethereum [11, 12], the framework is fully decentralized and tamper-proof, and can be used with little effort on a per-contract and per-byte basis.
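As a toy illustration of what "self-implementing and self-executing" means (a sketch of the general idea, not the paper's Solidity contract; the party names and temperature rule are invented for the example), a contract can be reduced to a rule fixed at creation time and applied automatically to every incoming record:

```python
# Minimal sketch of a self-executing agreement: the rule is fixed when the
# contract is created and applied automatically to each submitted reading.
class ShipmentContract:
    def __init__(self, sender, receiver, min_c, max_c):
        self.sender, self.receiver = sender, receiver
        self.min_c, self.max_c = min_c, max_c
        self.violations = []

    def submit(self, temp_c):
        # Evaluation happens without any manual review by the parties.
        if not (self.min_c <= temp_c <= self.max_c):
            self.violations.append(temp_c)
            self.notify(f"temperature {temp_c} C out of agreed range")

    def notify(self, message):
        # Both parties learn of a violation as soon as it is recorded.
        for party in (self.sender, self.receiver):
            print(f"notify {party}: {message}")

contract = ShipmentContract("Alice Pharma", "Bob Logistics", min_c=2.0, max_c=8.0)
for reading in (4.1, 5.0, 9.3):   # the last reading violates the agreed range
    contract.submit(reading)
```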
The remaining sections cover the following topics. Section II describes the Internet of Things (IoT) Zigbee-based wireless sensor network (WSN) technology used to detect shipping-damage indicators, temperature indicators, and shipment tracking. Section III describes how blockchain enables the creation of smart contracts, whose terms and conditions both sides can specify in detail and which guarantee trust in the enforceability of the contract and in the identity of the counterparty. Section IV outlines the specific technical details, followed by a first-stage evaluation, while Section V provides conclusions.

**2.** **ZIGBEE BASED TECHNOLOGY (WSN)**

GPS is among the most widely used applications today. Tracking, whether of a shipment or of any portable device that is monitored regularly, is one such application. This tracking system, embedded in the hardware, provides information on the routes travelled and the exact locations; with that information the user can locate many different places. The system can track the target routes in any weather conditions. For this, Zigbee technology and GPS are used. Similarly, landwide shipment tracking analogous to GPS is represented here. Such systems allow the required equipment to be tracked simultaneously, and they are accurate, long-lasting, light in weight, and cheaper than automatic positioning tags. The sensors are built into compact prototypes with an open structure designed for tracking shipments. Working with Google Maps, GPS, and the API, the information is sent over the network to devices such as mobile phones fitted with simple Zigbee technology for tracking shipments; when tracking is restricted to a particular person, an adjustable alert message is sent to the receiver. Battery power is saved, and the tracking cost and feasibility results follow from the power efficiency of the battery and of the data transmission. Technology is now advancing to ever higher levels, and because of this, ordinary people are ready to take up these technological facilities in their daily life. In day-to-day living, people want to protect their instruments, devices, etc. using available resources; this project is built on that demand. Required components are: 1. Arduino. 2. GSM GPS Module. 3. 16×2 LCD. 4. Power Supply. 5. Connecting Wires. 6. Zigbee.

**2.1.** **Arduino UNO ATmega328**

The Arduino Uno is an open-source (hardware and software) microcontroller board based on the ATmega328 (datasheet). It has 14 digital input/output pins (6 of which can be used as PWM outputs), 6 analog inputs, a 16 MHz ceramic resonator, a power jack, an ICSP header, a USB connection, and a reset button. It contains everything needed to support the microcontroller; simply connect it to a computer with a USB cable, or power it with an AC-to-DC adapter, to get started. It differs from all preceding boards in that it does not use the FTDI USB-to-serial driver chip; instead, it features the ATmega16U2 (ATmega8U2 up to revision R2) programmed as a USB-to-serial converter. Figure 1 shows the Arduino UNO R3.

Figure 1. Arduino UNO R3

Revision 2 of the Uno board has a resistor pulling the 8U2 HWB line to ground, which makes it easier to put the board into DFU mode. Revision 3 of the board adds the following new features. 1.0 pinout: SDA [14] and SCL pins added near the RESET pin, plus two other new pins placed near the RESET pin. The IOREF pin allows shields to adapt to the voltage provided by the board; compatible shields work both with AVR-based boards that operate at 5 V and with the Arduino [14] Due, which operates at 3.3 V. The remaining pin is kept unconnected for future purposes. a. A stronger RESET circuit. b. The ATmega [14] 16U2 replaces the 8U2. "Uno" means "one" in Italian; the Uno and version 1.0 will be the reference versions of Arduino going forward.
The Uno is the latest in a series of USB Arduino boards, and the reference model for the Arduino platform.

**2.2.** **GPS-GSM module (SIM 808)**

The GPS-GSM [15] module (SIM808) is a complete quad-band GSM/GPRS module which combines GPS technology for satellite navigation. The compact design, which integrates GPRS and GPS in an SMT package, significantly saves customers both cost and time in developing GPS-enabled GPRS applications. Featuring an industry-standard interface and GPS function, it allows variable assets to be tracked seamlessly at any place and any time wherever there is signal coverage.

**2.3.** **LCD 16x2**

An LCD (Liquid Crystal Display) [16] screen is an electronic display module that finds a wide range of applications. A 16x2 LCD is a very basic module used in many circuits and devices. These modules are preferred over seven-segment and other multi-segment LEDs. The reasons are that LCDs are economical, easily programmable, and have no limitation in displaying special and even custom characters (unlike seven-segment displays), animations, and so on. Figure 2 shows the 16x2 LCD.

Figure 2. LCD 16x2

**2.4.** **Power supply**

A power supply is a device that converts electric power of one set of characteristics into power that meets the specified requirements of a particular application; power supplies may include raw input-power conditioning, current regulation, and/or fixed-voltage regulation for electronic equipment.

**2.5.** **Zigbee**

Zigbee [15] is a low-cost, low-rate, low-power wireless communication standard that can be used in wide-ranging remote monitoring and control applications, such as smart home automation, smart cities, smart parking, and smart healthcare systems, as shown in Figure 3. The Zigbee standard has been designed to offer the lowest possible cost and power for connecting devices that must run on a battery for several months to several years. Zigbee is based on an RF standard and is expected to cover 10–70 meters, depending on the required application throughput. A Zigbee network has three main component types, as shown in Figure 4: routers (ZR), a coordinator (ZC), and end-devices (ZED) [15].

Figure 3. Zigbee. Figure 4. Zigbee-based networks

a. Radio frequency band: 2.4 GHz. b. Data rate: 250 kbit/s. c. Number of channels: 16 (802.15.4 channels 11 to 26). d. 2 analogue I/O ports and 12 general-purpose I/O port inputs. e. Typical distances: 100–300 meters.
A router is an optional network component; it participates in multi-hop routing of messages and may associate with the coordinator. An end-device, finally, is made for low-power operation and connects to a coordinator or a router. Each Zigbee network needs exactly one coordinator, which starts the network formation. Zigbee is used to guide and pinpoint the detected location by signalling when the receiver module is in range of the Zigbee transmitter. The GSM modem sends the encoded coordinates to a cell phone and returns a signal to the [16] Arduino (pin D0) to mark the operation as completed. At the same time, the Arduino activates the Zigbee transmitter module (pin D12) [17] until the operation completes, and provides the 5 V supply. The receiver module is activated when it is within range of the transmitter module; the received signal is decoded, and a message is sent to the LCD screen (pins D4 to D7) [17] to display the confirmation message ("Module is Found"). If the receiver is in range of the transmitter module, the Zigbee module is activated and sends the confirmation signal to the Arduino (pin 2 DATA to D12). We can readily track a stolen vehicle, or any device, from the subsequently sensed coordinates (latitude and longitude) and pinpoint the location by detecting the distinct Zigbee signal received from the stolen device. Zigbee wireless network: this is the part that is not physically visible. It consists chiefly of the wireless communication between the Zigbee modules attached to the transmitter and receiver Arduino boards.
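As a rough illustration of the receiving side described above (this is not the authors' firmware; the serial port name and the "lat,lon" line format are assumptions made for the example), a host could read coordinates forwarded by the Zigbee receiver and turn them into a map link:

```python
import serial  # pyserial: pip install pyserial

# Assumed setup: the Zigbee receiver module is attached to the host over a
# USB-serial adapter and forwards one "latitude,longitude" line per GPS fix.
PORT, BAUD = "/dev/ttyUSB0", 9600  # adjust for your hardware

def track(port=PORT, baud=BAUD):
    with serial.Serial(port, baud, timeout=5) as link:
        while True:
            line = link.readline().decode("ascii", errors="ignore").strip()
            if not line:
                continue  # read timed out with no data; keep polling
            try:
                lat, lon = (float(f) for f in line.split(","))
            except ValueError:
                continue  # ignore malformed frames
            # Hand the fix to the user, e.g., as a Google Maps link.
            print(f"fix: https://maps.google.com/?q={lat},{lon}")

if __name__ == "__main__":
    track()
```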
**3.** **BLOCKCHAIN IN SUPPLY CHAIN MANAGEMENT AND LOGISTICS**

One of the essential fields for blockchain adoption nowadays is the logistics and supply chain industry [5, 7, 18]. Various large companies are looking into implementing blockchain [2, 3] to ease the communication process around deliveries and to make the supply chain [10] traceable and efficient. This transformative technology is very helpful for tamper-proofing and for tracking products of any kind, from tomatoes to diamonds [4]. From order tracking to dispute resolution, blockchain has an answer to every problem that has been plaguing the logistics industry for a long time. Figure 5 shows supply chain management with blockchain. The information flow around goods today is highly complicated, involves many parties, and involves heavy documentation [19] (payments, receipts, settlements, etc.). Monitoring every single exchange and document is a cumbersome job, and sometimes important documents get lost or forged, which creates confusion in the system and leads to huge losses. There is also a lack of transparency in the present supply chain system, and it is extremely difficult to investigate whether any illegal or dishonest practices are taking place.

Figure 5. Supply chain management in blockchain

Previously, supply chains were relatively easy and simple because commerce was local, but commerce is now global, which makes supply chains incredibly complex. Due to globalization, transactions between the parties (clients, vendors, and suppliers) may take extra days to be processed, and review of the contracts by brokers and lawyers adds further delay and cost. Goods passing through several geographical locations (international/national) on the way to their agreed destinations are considered at risk [20]. It is very hard to trace where goods come from and where they currently are, as the documents recording those details may be forged or lost. At present it is exceptionally difficult for clients/buyers to really gather information on the products' value and the origin of the items; the supply chain system lacks product transparency. Due to the high complexity of and lack of transparency in the current supply chain, business people are eager to explore the possibilities of blockchain [2] technology to transform the supply chain and logistics industry [21]. Records of digital data [22] or events are stored in a distributed ledger called the blockchain [2]. It is a database that contains transaction details, information, and records, grouped into blocks. These blocks hold incorruptible trust due to their highly secured nature. Blockchain offers a compelling solution by combining accessibility with security and privacy. In this globalized world, running a supply chain is a difficult process and tends to be critical compared to other operations. Today there is a significant amount of trapped value in logistics, mostly stemming from the competitive and fragmented nature of the logistics industry. This frequently results in low transparency, siloed data, unstandardized processes, and differing levels of technology adoption.

**3.1.** **Trackability and transparency** Adopting blockchain [23] in the supply chain could support trust, enhanced transparency, and predictability by enabling clients to track where a shipment/order is at any given time [14].

**3.2.** **Automation** Use of smart contracts will enable companies to automate their purchasing process, which cuts costs and saves time. Smart contracts will also improve the transaction flow and security in the supply chain.

**3.3.** **Accessibility** Utilizing blockchain [2], dealers can store their products' origin, place of storage, authenticity [24], product certificates, records, etc. on a single ledger. Having all of the important information in one place makes the data much easier to access, which not only creates more transparency in the supply chain but also helps decrease the amount of fraud and theft of goods.

**3.4.** **Security** Since a blockchain [3] is an immutable distributed ledger, changes in ownership and possession of goods at any point can be entered into the ledger permanently and instantaneously. As blockchain technology is cryptographically secured and decentralized, shipping, possession, and ownership data can be better protected from tampering or hacks.

**3.5.** **Quick payments** Applying blockchain [2] technology to the payment system could help reduce friction in trade financing, thereby disposing of exchange disputes.

**3.6.** **Saves cost and time** Transport suppliers will have access to information about the availability of storage capacity and routes, which will decrease transport costs and time.
Clients can know the origin of the products, the manufacturer, the date, the time, etc.

**4.** **SEQUENTIAL DIAGRAM**

The Ethereum blockchain network [3] is used to verify the temperature and shipment tracking data recorded in the front-end. Smart contracts written in the contract-oriented programming language Solidity [8] run in a virtual machine, the Ethereum Virtual Machine (EVM) [11, 12, 25], which powers the verification of data by smart contracts. Figure 7 shows the sequential diagram of the blockchain.

Figure 7. Sequential diagram

a. Smart contract: deployed for each new shipment, responsible for ensuring compliance of the temperature data and shipment tracking data associated with the shipment. b. Mobile devices: devices used by the end-users to register new shipments and track/send records of temperature data to the computer application [15]. c. Sensors: sensing devices compatible with Zigbee technology, configured to send data to a mobile device at a fixed polling interval.

**5.** **TECHNICAL DETAILS**

In the back-end, temperature and shipment-tracking compliance is ensured by smart contracts written in Solidity, a high-level language designed to compile code for the EVM. Every product batch or newly agreed shipment has its own particular temperature requirements. To ensure the GDP compliance requirements, the smart contracts for tracking are deployed and configured accordingly. Changes to the smart contracts and participation in the [11, 12] Ethereum network are handled by the Ethereum nodes [26], which invoke the new contracts' functions. Communication with the Ethereum nodes uses JSON (JavaScript Object Notation) over HTTP (Hypertext Transfer Protocol). The smart contracts verify the temperature ranges and store the verified outputs, together with hash values, in the smart contracts. Encoding and decoding for the Android clients' communication with the PC over a REST (Representational State Transfer) API (Application Programming Interface) [27] is likewise done using JSON [9]. Users with mobile phones register each new shipment together with all its regulatory requirements, whereupon a contract for the new shipment is created. The API should also accept the latest temperature records sent from the Zigbee device to the PC or other devices. Both the sender and the receiver should be made aware of the contract results; moreover, they should be granted permission to access the temperature measurements and the traceability data, mainly through graphical visualization. The back-end offered by the API can be used by different front-end applications in addition to a smartphone or tablet. For example, one could use a Web application in conjunction with the mobile devices to register compliance data for new shipments and check their respective states on the fly; this provides quicker ways to input compliance data than a smartphone or tablet. However, the logistics environment requires high mobility of the devices reading the sensors, and the ability to register one or more barcodes at several points of the end-to-end process.
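The paper does not list the API itself, but a client exchange of the kind described, JSON over HTTP against a REST back-end, might look like the following Python sketch (the base URL, paths, and field names are invented for illustration; running it presupposes a back-end exposing such routes):

```python
import json
from urllib import request

BASE_URL = "http://localhost:8080/api"  # hypothetical back-end address

def post_json(path, payload):
    # Encode the payload as JSON and POST it to the back-end, as the
    # Android client would when registering shipments or forwarding records.
    req = request.Request(
        BASE_URL + path,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Register a shipment with its regulatory temperature range...
post_json("/shipments", {"trace_no": "TT-0001",
                         "mac": "00:13:A2:00:41:0A:2B:3C",
                         "min_c": 2.0, "max_c": 8.0})
# ...and forward a batch of recorded temperatures to the smart contract.
post_json("/shipments/TT-0001/temperatures", {"readings": [4.1, 4.4, 4.9]})
```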
Temperature data, tracking location data, and damage-prevention data are provided by IoT sensors on a Zigbee device that can be placed at strategic points of the shipment. The sensor has both identification and sensing capability, which allows it to communicate exact and timely temperature measurements and shipment tracking at specific points. Temperature monitoring and shipment location tracking are started with the Android client. To start the process, a sensor device needs to be within range. As a first step, the track-and-trace number, which is typically found on the packet, has to be associated with the MAC address of the sensor device. Since the track-and-trace number and MAC address are barcodes (respectively, QR codes), the Android client captures both with its camera. After this, the Android client starts the temperature measurements and location tracking on the sensor device via Zigbee, and sends the track-and-trace number/MAC address association to the computer. The sensor also stores the track-and-trace number in case no computer access is available. Thus, a package that has been sent always has an association between its MAC address and the current track-and-trace number. The computer stores the association, creates and broadcasts the smart contract, and stores the smart contract ID on the sensor device. Now the sensor device can be placed inside the product's packet. The sensor device records the temperature every 10 minutes and stores it in the internal memory of the Zigbee sensor device. After the packet is received at the destination, the track-and-trace number is scanned. The Android client requests the MAC address from the computer in order to connect to the sensor device [15]. The Android client then automatically downloads all temperature data and tracking location details and sends them to the smart contract. Once the smart contract has checked the temperature and tracked locations, anyone interested in that smart contract can verify whether the temperature and route were within their specifications directly on the Ethereum blockchain [11, 12]. The sender will thus be notified of such a result immediately.
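The range check and hash storage just described can be summarized in a few lines. The following Python sketch mirrors the contract's rule with an in-memory stand-in (illustrative only; in the actual design this logic runs in the Solidity contract on Ethereum): every recorded reading is tested against the agreed range, and a SHA-256 digest of the record set is kept so the stored verdict can later be matched against the exact data that produced it.

```python
import hashlib, json

def verify_shipment(readings, min_c, max_c):
    # The same rule the deployed contract would apply: every 10-minute
    # reading must lie inside the agreed temperature range.
    ok = all(min_c <= r <= max_c for r in readings)
    # Digest of the full record set, so the verdict stays tied to its data.
    digest = hashlib.sha256(json.dumps(readings).encode()).hexdigest()
    return {"within_spec": ok, "records_sha256": digest}

# Example: readings downloaded from the sensor at the destination.
result = verify_shipment([4.1, 4.4, 9.2, 4.0], min_c=2.0, max_c=8.0)
print(result)  # within_spec is False because of the 9.2 C reading
```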
**6.** **SUMMARY, CONCLUSIONS, AND FUTURE WORK**

Many financial start-ups are looking into blockchain-based solutions as a way to move away from government-controlled organizations and to offer reduced prices [26]. However, blockchains are used in other areas as well, including IoT, and other start-ups are working in non-financial areas. Ultimately, the start-up rate and the rate of success of blockchain technology, in both public and private applications, show that clients can access it, benefit from its characteristics, and gain advantages from its practical exploitation.

**REFERENCES**

[1] Amool Sudhan and Manisha J. Nene, "Employability of blockchain technology in defence applications," International Conference on Intelligent Sustainable Systems (ICISS), IEEE, 2017.
[2] Satyabrata Aich, Sabyasachi Chakraborty, Mangal Sain, Hye-in Lee, and Hee-Cheol Kim, "A Review on Benefits of IoT Integrated Blockchain based Supply Chain Management Implementations across Different Sectors with Case Study," 21st International Conference on Advanced Communication Technology (ICACT), 2019. DOI: 10.23919/ICACT.2019.8701910.
[3] Si Chen, Rui Shi, Zhuangyu Ren, Jiaqi Yan, Yani Shi, and Jinyu Zhang, "A Blockchain-Based Supply Chain Quality Management Framework," 2017 IEEE 14th International Conference on e-Business Engineering (ICEBE), 2017. DOI: 10.1109/ICEBE.2017.34.
[4] Randhir Kumar and Rakesh Tripathi, "Traceability of counterfeit medicine supply chain through Blockchain," 11th International Conference on Communication Systems & Networks (COMSNETS), 2019. DOI: 10.1109/COMSNETS.2019.8711418.
[5] Shangping Wang, Dongyi Li, Yaling Zhang, and Juanjuan Chen, "Smart Contract-Based Product Traceability System in the Supply Chain Scenario," IEEE Access, vol. 7, 2019. DOI: 10.1109/ACCESS.2019.2935873.
[6] Benhe Gao, Qian Zhou, Shigang Li, and Xinglu Liu, "Everledger: A Real Time Stare in Market Strategy for Supply Chain Financing Pledge Risk Management," 2018 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), IEEE, 2018.
[7] Yonggui Fu and Jianming Zhu, "Big Production Enterprise Supply Chain Endogenous Risk Management Based on Blockchain," IEEE Access, vol. 7, IEEE, 2019.
[8] Michael Mylrea and Sri Nikhil Gupta Gourisetti, "Blockchain for Supply Chain Cybersecurity, Optimization and Compliance," 2018 Resilience Week (RWS), IEEE, 2018.
[9] Mitsuaki Nakasumi, "Information Sharing for Supply Chain Management Based on Block Chain Technology," 2017 IEEE 19th Conference on Business Informatics (CBI), vol. 01, IEEE, 2017.
[10] Sidra Malik, Salil S. Kanhere, and Raja Jurdak, "ProductChain: Scalable Blockchain Framework to Support Provenance in Supply Chains," 2018 IEEE 17th International Symposium on Network Computing and Applications (NCA), IEEE, 2018.
[11] Sandi Rahmadika, Bruno Joachim Kweka, Cho Nwe Zin Latt, and Kyung-Hyune Rhee, "A Preliminary Approach of Blockchain Technology in Supply Chain System," 2018 IEEE International Conference on Data Mining Workshops (ICDMW), IEEE, 2018.
[12] Miguel Pincheira Caro, Muhammad Salek Ali, Massimo Vecchio, and Raffaele Giaffreda, "Blockchain-based traceability in Agri-Food supply chain management: A practical implementation," 2018 IoT Vertical and Topical Summit on Agriculture - Tuscany (IOT Tuscany), IEEE, 2018.
[13] Thomas Bocek, Bruno B. Rodrigues, Tim Strasser, and Burkhard Stiller, "Blockchains everywhere - a use-case of blockchains in the pharma supply-chain," 2017 IFIP/IEEE Symposium on Integrated Network and Service Management (IM), IEEE, 2017.
[14] T. A. Alhmiedat and S. H. Yang, "A ZigBee-based mobile tracking system through wireless sensor networks," International Journal of Advanced Mechatronic Systems, vol. 1, pp. 63-70, 2008.
[15] R. K. Sharma et al., "Android interface based GSM home security system," Issues and Challenges in Intelligent Computing Techniques (ICICT), 2014 International Conference on, IEEE, 2014.
[16] J. Blumenthal et al., "Weighted centroid localization in ZigBee-based sensor networks," IEEE International Symposium on Intelligent Signal Processing (WISP), Madrid, Spain, 2007.
[17] Kristjan Kuhi, Kati Kaare, and Ott Koppel, "Ensuring performance measurement integrity in logistics using blockchain," 2018 IEEE International Conference on Service Operations and Logistics, and Informatics (SOLI), IEEE, 2018.
[18] Jing Hua, Xiujuan Wang, Mengzhen Kang, Haoyu Wang, and Fei-Yue Wang, "Blockchain Based Provenance for Agricultural Products: A Distributed Platform with Duplicated and Shared Bookkeeping," 2018 IEEE Intelligent Vehicles Symposium (IV), IEEE, 2018.
[19] Aparna Ramalingaiah and Thaniya Sulthana, "Study of Blockchain with Bitcoin based Fund Raise Use case using Laravel Framework," 2018 3rd International Conference on Computational Systems and Information Technology for Sustainable Solutions (CSITSS), IEEE, 2018.
[20] Adnan Imeri and Djamel Khadraoui, "The Security and Traceability of Shared Information in the Process of Transportation of Dangerous Goods," 2018 9th IFIP International Conference on New Technologies, Mobility and Security (NTMS), IEEE, 2018.
[21] Guido Perboli, Stefano Musso, and Mariangela Rosano, "Blockchain in Logistics and Supply Chain: A Lean Approach for Designing Real-World Use Cases," IEEE Access, vol. 6, IEEE, 2018.
[22] Weizhi Meng, Elmar Wolfgang Tischhauser, Qingju Wang, Yu Wang, and Jinguang Han, "When Intrusion Detection Meets Blockchain Technology: A Review," IEEE Access, vol. 6, IEEE, 2018.
[23] Shuang Su, Ke Wang, and Hyong S. Kim, "Smartsupply: Smart Contract Based Validation for Supply Chain Blockchain," 2018 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData), IEEE, 2018.
[24] Mark Kim, Brian Hilton, Zach Burks, and Jordan Reyes, "Integrating Blockchain, Smart Contract-Tokens, and IoT to Design a Food Traceability Solution," 2018 IEEE 9th Annual Information Technology, Electronics and Mobile Communication Conference (IEMCON), IEEE, 2018.
[25] Leonor Augusto, Ruben Costa, José Ferreira, and Ricardo Jardim-Gonçalves, "An Application of Ethereum smart contracts and IoT to logistics," 2019 International Young Engineers Forum (YEF-ECE), IEEE, 2019.
[26] Raja Jayaraman, Fatima AlHammadi, and Mecit Can Emre Simsekler, "Managing Product Recalls in Healthcare Supply Chain," 2018 IEEE International Conference on Industrial Engineering and Engineering Management (IEEM), IEEE, 2018.
[27] Rupam Kumar Sharma et al., "Android interface based GSM home security system," Issues and Challenges in Intelligent Computing Techniques (ICICT), International Conference on, IEEE, 2014.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.11591/IJECE.V10I1.PP438-446?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.11591/IJECE.V10I1.PP438-446, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBYSA", "status": "GOLD", "url": "http://ijece.iaescore.com/index.php/IJECE/article/download/18307/13459" }
2,020
[]
true
2020-02-01T00:00:00
[ { "paperId": "d91a912d8a959eb57ed494a0fac48441094b73a9", "title": "Smart Contract-Based Product Traceability System in the Supply Chain Scenario" }, { "paperId": "daf95d142bcf2e00e9ad3168a395f5d01152da45", "title": "An Application of Ethereum smart contracts and IoT to logistics" }, { "paperId": "f6eaacfeb2d3d35c35f3ec80f760c14f82b190cf", "title": "A Review on Benefits of IoT Integrated Blockchain based Supply Chain Management Implementations across Different Sectors with Case Study" }, { "paperId": "1ae4d2abaa14f5a87fa09b8acc5ac1233dd6b1be", "title": "Big Production Enterprise Supply Chain Endogenous Risk Management Based on Blockchain" }, { "paperId": "5a8419bdb2db83a589d71a4a796c8a682a1cd815", "title": "Traceability of counterfeit medicine supply chain through Blockchain" }, { "paperId": "0685e747ac5a389e8e67345e1e49198effbff91f", "title": "A Real Time Stare in Market Strategy for Supply Chain Financing Pledge Risk Management" }, { "paperId": "c5571260d0e503f94b0bb00ec6a7fb75a09710dc", "title": "Managing Product Recalls in Healthcare Supply Chain" }, { "paperId": "2020e7574312ae2685bc97199849a5d18c3ea83c", "title": "Study of Blockchain with Bitcoin based Fund Raise Use case using Laravel Framework" }, { "paperId": "26f28731330597f25a4a645bd53597d786193d35", "title": "2018 3rd International Conference on Computational Systems and Information Technology for Sustainable Solutions (CSITSS)" }, { "paperId": "9a51a58632178dda8d36d277885eb992cbd33057", "title": "A Preliminary Approach of Blockchain Technology in Supply Chain System" }, { "paperId": "e4a7be90e7695ba3061469cee505bd78bba6076b", "title": "Integrating Blockchain, Smart Contract-Tokens, and IoT to Design a Food Traceability Solution" }, { "paperId": "5d665705df32dd5225f01584e1902765eea4ba56", "title": "ProductChain: Scalable Blockchain Framework to Support Provenance in Supply Chains" }, { "paperId": "6271bd9f60d153564d3e5f01962249475ec3618e", "title": "Blockchain in Logistics and Supply Chain: A Lean Approach for Designing Real-World Use Cases" }, { "paperId": "c26ccc9ec23e3854896c876cc4c112217a73fcab", "title": "Blockchain for Supply Chain Cybersecurity, Optimization and Compliance" }, { "paperId": "874649dc188e83b4e1cb0d9c132d72343eb989f2", "title": "Ensuring performance measurement integrity in logistics using blockchain" }, { "paperId": "41a262b79028c81a49b7137c9f9ffeeba7c1ecaf", "title": "Smartsupply: Smart Contract Based Validation for Supply Chain Blockchain" }, { "paperId": "838b7f156f30b0542c82c76b10b6ae775bb1e5d8", "title": "2018 IEEE International Conference on Internet of Things (iThings) and IEEE Green Computing and Communications (GreenCom) and IEEE Cyber, Physical and Social Computing (CPSCom) and IEEE Smart Data (SmartData)" }, { "paperId": "a1d99c79f34f98fd937b2f186a2a889f00bf15d3", "title": "Blockchain Based Provenance for Agricultural Products: A Distributed Platform with Duplicated and Shared Bookkeeping" }, { "paperId": "2630f89f7c8fab0b171c1a7b504060f18d348b43", "title": "Blockchain-based traceability in Agri-Food supply chain management: A practical implementation" }, { "paperId": "d75f7e0860a7a2c122b5e949134ba8c43a12ccba", "title": "The Security and Traceability of Shared Information in the Process of Transportation of Dangerous Goods" }, { "paperId": "a30b4b52b1e7b0aff4a5085cdc43ace30ca66f5e", "title": "When Intrusion Detection Meets Blockchain Technology: A Review" }, { "paperId": "d339667983c7112ff56cefce276480fa1701b2d0", "title": "Employability of blockchain technology in defence applications" }, { 
"paperId": "520e4cd6fd17b9323ac7f77f8959d69b8366b7cf", "title": "A Blockchain-Based Supply Chain Quality Management Framework" }, { "paperId": "954fd7af3bc53719c0c681d5aad9706c553ed96e", "title": "Information Sharing for Supply Chain Management Based on Block Chain Technology" }, { "paperId": "0356360ce4e31a901f5cc48b090af30f56bb3f2d", "title": "Blockchains everywhere - a use-case of blockchains in the pharma supply-chain" }, { "paperId": "3e5b30e8d0167188db75357ae062171e90f05809", "title": "Information Sharing in Supply Chain Management" }, { "paperId": "8868abbe12a94c47e74a94da2ec27723873427a5", "title": "A ZigBee-based mobile tracking system through wireless sensor networks" }, { "paperId": "b71df3deca1294812279bf0d1946b2dc3177af39", "title": "Weighted Centroid Localization in Zigbee-based Sensor Networks" }, { "paperId": null, "title": "IoT–smart contracts in data trusted exchange supplied chain based on block chain" }, { "paperId": null, "title": "“Android interfacebased GSM home security system,”" }, { "paperId": null, "title": "b. Mobile Devices: Devices used by the end-users to register new shipments and track/send records of temperature data to the computer application" } ]
8,186
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0184948a8b351cd9607356f659633f05b6d41d92
[ "Computer Science" ]
0.858309
Model Checking: A Tutorial Overview
0184948a8b351cd9607356f659633f05b6d41d92
Modeling and Verification of Parallel Processes
[ { "authorId": "144488553", "name": "Stephan Merz" } ]
{ "alternate_issns": null, "alternate_names": [ "MOVEP", "Model Verification Parallel Process" ], "alternate_urls": null, "id": "d7d0258f-5ab4-4920-a2b5-f8c6ef37d3de", "issn": null, "name": "Modeling and Verification of Parallel Processes", "type": "conference", "url": null }
null
# Model Checking: A Tutorial Overview

Stephan Merz
Institut für Informatik, Universität München
``` merz@informatik.uni-muenchen.de ```

**Abstract.** We survey principles of model checking techniques for the automatic analysis of reactive systems. The use of model checking is exemplified by an analysis of the Needham-Schroeder public key protocol. We then formally define transition systems, temporal logic, ω-automata, and their relationship. Basic model checking algorithms for linear- and branching-time temporal logics are defined, followed by an introduction to symbolic model checking and partial-order reduction techniques. The paper ends with a list of references to some more advanced topics.

## 1 Introduction

Computerized systems pervade more and more our everyday lives. We rely on digital controllers to supervise critical functions of cars, airplanes, and industrial plants. Digital switching technology has replaced analog components in the telecommunication industry, and security protocols enable e-commerce applications and privacy. Where important investments or even human lives are at risk, quality assurance for the underlying hardware and software components becomes paramount, and this requires formal models that describe the relevant part of the systems at an adequate level of abstraction. The systems we are focussing on are assumed to maintain an ongoing interaction with their environment (e.g., the controlled system or other components of a communication network) and are therefore called reactive systems [60, 94]. Traditional models that describe computer programs as computing some result from given input values are inadequate for the description of reactive systems. Instead, the behavior of reactive systems is usually modelled by transition systems. The term model checking designates a collection of techniques for the automatic analysis of reactive systems. Subtle errors in the design of safety-critical systems that often elude conventional simulation and testing techniques can be (and have been) found in this way. Because it has been proven cost-effective and integrates well with conventional design methods, model checking is being adopted as a standard procedure for the quality assurance of reactive systems. The inputs to a model checker are a (usually finite-state) description of the system to be analysed and a number of properties, often expressed as formulas of temporal logic, that are expected to hold of the system. The model checker either confirms that the properties hold or reports that they are violated. In the latter case, it provides a counterexample: a run that violates the property. Such a run can provide valuable feedback and point to design errors. In practice, this view turns out to be somewhat idealized: quite frequently, available resources only permit to analyse a rather coarse model of
Model checkers can be of some help in this validation task because it is possible to perform “sanity checks”, for example to ensure that certain runs are indeed possible or that the model is free of deadlocks. This paper is intended as a tutorial overview of some of the fundamental principles of model checking, based on a necessarily subjective selection of the large body of model checking literature. We begin with a case study in section 2 where the application of model checking is considered from a user’s point of view. Section 3 reviews transition systems, temporal logics, and automata-theoretic techniques that underly some approaches to model checking. Section 4 introduces basic model checking algorithms for linear-time and branching-time logics. Finally, section 5 collects some rather sketchy references to more advanced topics. Much more material can be found in other contributions to this volume and in the textbooks and survey papers [27, 28, 69, 97, 124] on the subject. The paper contains many references to the relevant literature, in the hope that this survey can also serve as an annotated bibliography. ## 2 Analysis of a Cryptographic Protocol **2.1** **Description of the Protocol** Let us first consider, by way of example, the analysis of a public-key authentication protocol suggested by Needham and Schroeder [104] using the model checker SPIN [65]. Two agents A(lice) and B(ob) try to establish a common secret over an insecure channel in such a way that both are convinced of each other’s presence and no intruder can get hold of the secret without breaking the underlying encryption algorithm. This is one of the fundamental problems in cryptography: for example, a shared secret could be used to generate a session key for subsequent communication between the agents. The protocol is pictorially represented in Fig. 1.[1] It requires the exchange of three messages between the participating agents. Notation such as ⟨M ⟩C denotes that message M is encrypted using agent C ’s public key. Throughout, we assume the underlying encryption algorithm to be secure and the private keys of the honest agents to be uncompromised. Therefore, only agent C can decrypt ⟨M ⟩C to learn M . 1. Alice initiates the protocol by generating a random number NA and sending the message ⟨A, NA⟩B to Bob (numbers such as NA are called nonces in cryptographic jargon, indicating that they should be used only once by any honest agent). The first 1 The original protocol includes communication between the agents and a central key server to distribute the public keys of the agents. We concentrate on the core authentication protocol, assuming all public keys to be known to all agents. ----- 1. ⟨A, NA⟩B **#** **s#** ## A  2. ⟨NA, NB ⟩A B **"!** **3"!** 3. ⟨NB _⟩B_ **Fig. 1. Needham-Schroeder public-key protocol.** component of the message informs Bob of the identity of the initiator. The second component represents “one half” of the secret. 2. Bob similarly generates a nonce NB and responds with the message ⟨NA, NB _⟩A._ The presence of the nonce NA generated in the first step, which only Bob could have decrypted, convinces Alice of the authenticity of the message. She therefore accepts the pair ⟨NA, NB _⟩_ as the common secret. 3. Finally, Alice responds with the message ⟨NB _⟩B_ . By the same argument as above, Bob concludes that this message must originate with Alice, and therefore also accepts ⟨NA, NB _⟩_ as the common secret. We assume all messages to be sent over an insecure medium. 
Attackers may intercept messages, store them, and perhaps replay them later. They may also participate in ordinary runs of the protocol, initiate runs or respond to runs initiated by honest agents, who need not be aware of their partners' true identity. However, even an attacker can only decrypt messages that were encrypted with his own public key. The protocol contains a severe flaw, and the reader is invited to find it before continuing. The error was discovered some 17 years after the protocol was first published, using model checking technology [91].

**2.2 A PROMELA Model**

We represent the protocol in PROMELA ("protocol meta language"), the input language for the SPIN model checker.² In order to make the analysis feasible, we make a number of simplifying assumptions:

– We consider a network of only three agents: A, B, and I(ntruder).
– The honest agents A and B can only participate in one protocol run each. Agent A can only act as initiator, and agent B as responder. It follows that A and B need to generate at most one nonce.
– The memory of agent I is limited to a single message.

² The full code is available from the author.

Although the protocol is very small, our simplifications are quite typical of the analysis of "real-world" systems via model checking: models are usually required to be finite-state, and the complexity of analysis typically depends exponentially on the size of those models. (Esparza's contribution to this volume surveys the state of the art concerning model checking techniques for infinite-state models.) Of course, our assumptions imply that certain errors such as "confusion" that could arise when multiple runs of the protocol interfere will go undetected in our model. This explains why model checking is considered a debugging rather than a verification technique. When no errors have been found on a small model, one can consider somewhat less stringent restrictions, as far as available resources permit. In any case, it is important to clearly identify the assumptions that underlie the system model in order to assess the coverage of the analysis.

With these caveats, it is quite straightforward to write a model for the honest agents A and B from the informal description of section 2.1. PROMELA is a guarded-command language with C-like syntax; it provides primitives for message channels and operations for sending and receiving messages. We first declare an enumeration type that contains symbolic constants to make the model more readable. Because one nonce suffices for each agent, we simply assume that these have been precomputed and refer to them by symbolic names.

```
mtype = { ok, err, msg1, msg2, msg3,
          keyA, keyB, keyI,
          agentA, agentB, agentI,
          nonceA, nonceB, nonceI };
```

We represent encrypted messages as records that contain a key and two data entries. Decryption can then be modelled as pattern-matching on the key entry.

```
typedef Crypt { mtype key, info1, info2 };
```

The network is modelled as a single message channel shared by all three agents. For simplicity, we assume synchronous communication on the network, indicated by a buffer length of 0; this does not affect the possible communication patterns but helps to reduce the size of the model. A message on the network is modelled as a triple consisting of an identification tag (the message number), the intended receiver (which the intruder is free to ignore), and an "encrypted" message body.
```
chan network = [0] of { mtype,   /* msg# */
                        mtype,   /* receiver */
                        Crypt };
```

Figure 2 contains the PROMELA code³ for agent A. Initially, a partner (either B or I) is chosen nondeterministically for the subsequent run (the token :: introduces the different alternatives of a nondeterministic selection), and its public key is looked up. A message of type 1 is then sent to the chosen partner, after which agent A waits for a message of type 2 intended for her to arrive on the network. She verifies that the message body is encrypted with her key and that it contains the nonce sent in the first message. (PROMELA allows Boolean conditions to appear as statements; such a statement blocks if the condition is found to be false.) If so, she extracts the partner's nonce, responds with a message of type 3, and declares success. (The variable statusA will be used later to express correctness statements about the model.)

³ In actual PROMELA, record formation is not available as a primitive operation, but must be simulated by a series of assignments.

```
mtype partnerA;
mtype statusA = err;

active proctype Alice() {
  mtype pkey, pnonce;
  Crypt data;

  if /* choose a partner for this run */
  :: partnerA = agentB; pkey = keyB;
  :: partnerA = agentI; pkey = keyI;
  fi;
  network ! (msg1, partnerA, Crypt{pkey, agentA, nonceA});

  network ? (msg2, agentA, data);
  (data.key == keyA) && (data.info1 == nonceA);
  pnonce = data.info2;

  network ! (msg3, partnerA, Crypt{pkey, pnonce, 0});
  statusA = ok;
}
```

**Fig. 2. PROMELA code for agent A.**

The code for agent B is similar, exchanging sending and reception of messages. In contrast, the intruder cannot be modelled using a fixed protocol—the purpose of the analysis is to let SPIN find the attack if one exists at all. Instead, agent I is modelled highly nondeterministically: we describe the actions that are possible at any given state and let SPIN choose among them. The overall structure of the code shown in Fig. 3 is an infinite loop that offers a choice between receiving and sending of messages on the network. The first alternative models the reception or interception of a message (the "don't care" variable "_" reflects the fact that the intruder need not respect the intended recipient of a message). The message body may be stored in the variable intercepted, even if it cannot be decrypted. If, moreover, the message has been encrypted for agent I, it can be analyzed to extract nonces; since the model is based on a fixed set of nonces, it is enough to set Boolean flags for nonces that the intruder has learnt so far. The second alternative represents agent I sending a message. There are two subcases: either replay a previously intercepted message or construct a new message from the information learnt so far. Note that we allow arbitrary ("type-correct") entries for the unencrypted fields of a message. Of course, most of the resulting combinations can be immediately recognized as inappropriate by the honest agents. Our model therefore contains many deadlocks, which we ignore during the following analysis.
```
bool knows_nonceA, knows_nonceB;

active proctype Intruder() {
  mtype msg, recpt;
  Crypt data, intercepted;

  do
  :: network ? (msg, _, data) ->
       if /* perhaps store the message */
       :: intercepted = data;
       :: skip;
       fi;
       if /* record newly learnt nonces */
       :: (data.key == keyI) ->
            if
            :: (data.info1 == nonceA) || (data.info2 == nonceA)
                 -> knows_nonceA = true;
            :: else -> skip;
            fi;
            /* similar for knows_nonceB */
       :: else -> skip;
       fi;
  :: /* Replay or send a message */
       if /* choose message type */
       :: msg = msg1;
       :: msg = msg2;
       :: msg = msg3;
       fi;
       if /* choose recipient */
       :: recpt = agentA;
       :: recpt = agentB;
       fi;
       if /* replay intercepted message or assemble it */
       :: data = intercepted;
       :: if
          :: data.info1 = agentA;
          :: data.info1 = agentB;
          :: data.info1 = agentI;
          :: knows_nonceA -> data.info1 = nonceA;
          :: knows_nonceB -> data.info1 = nonceB;
          :: data.info1 = nonceI;
          fi;
          /* similar for data.info2 and data.key */
       fi;
       network ! (msg, recpt, data);
  od;
}
```

**Fig. 3. PROMELA code for agent I.**

**Fig. 4. Message sequence chart visualizing the attack.**

**2.3 Model Checking the Protocol**

The purpose of the protocol is to ensure mutual authentication (of honest agents) while maintaining secrecy. In other words, whenever both A and B have successfully completed a run of the protocol, then A should believe her partner to be B if and only if B believes to talk to A. Moreover, if A successfully completes a run with B then the intruder should not have learnt A's nonce, and similarly for B. These properties can be expressed in temporal logic (cf. section 3.2) as follows:

G(statusA = ok ∧ statusB = ok ⇒ (partnerA = agentB ⇔ partnerB = agentA))
G(statusA = ok ∧ partnerA = agentB ⇒ ¬knows_nonceA)
G(statusB = ok ∧ partnerB = agentA ⇒ ¬knows_nonceB)

We present SPIN with the model of the protocol and the first formula. In a fraction of a second, SPIN declares the property violated and outputs a run that contains the attack. The run is visualized as a message sequence chart, shown in Fig. 4: Alice initiates a protocol run with Intruder who in turn (but masquerading as A) starts a run with Bob, using the nonce received in the first message. Bob replies with a message of type 2 that contains both A's and B's nonces, encrypted for A. Although agent I cannot decrypt that message itself, it forwards it to A. Unsuspecting, Alice finds her nonce, returns the second nonce to her partner I, and declares success. This time, agent I can decrypt the message, extracts B's nonce and sends it to B who is also satisfied. As a result, we have reached a state where A correctly believes to have completed a run with I, but B is fooled into believing to talk to A.

The same counterexample will be produced when analysing the third formula, whereas the second formula is declared to hold of the model. The counterexample produced by SPIN makes it easy to trace the error in the protocol to a lack of explicitness in the second message: the presence of the expected nonce is not sufficient to prove the origin of the message. To avoid the attack, the second message should therefore be replaced with ⟨B, NA, NB⟩A.
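To make the flaw tangible, the attack trace can also be replayed outside of SPIN in a few lines. The following Python sketch is purely illustrative and is not the PROMELA model analysed here: encryption is modelled as a tagged tuple that only the matching key can open, and all names (enc, dec, keyA, keyB, keyI, ...) are invented for this example.

```python
def enc(key, *payload):           # "encrypt" payload under public key `key`
    return ("enc", key, payload)

def dec(key, ct):                 # only the matching key opens a ciphertext
    tag, k, payload = ct
    assert tag == "enc" and k == key, "cannot decrypt"
    return payload

NA, NB = "nonceA", "nonceB"       # the two fixed nonces, as in the model

# 1. Alice initiates a run with the intruder I.
msg1 = enc("keyI", "A", NA)

# I decrypts msg1 and opens a run with Bob, masquerading as A.
(_, na) = dec("keyI", msg1)
msg1_forged = enc("keyB", "A", na)

# 2. Bob answers with his nonce, encrypted for A.
msg2 = enc("keyA", na, NB)

# I cannot open msg2, but forwards it unchanged to Alice.
(n1, n2) = dec("keyA", msg2)
assert n1 == NA                   # Alice's only check succeeds ...
msg3 = enc("keyI", n2)            # ... so she returns NB, encrypted for I!

# 3. I now knows NB and completes Bob's run with it.
(nb,) = dec("keyI", msg3)
msg3_forged = enc("keyB", nb)
print("intruder learnt", nb, "- Bob believes he talked to A")

# With the corrected second message <B, NA, NB>_A, Alice also compares the
# claimed responder identity against her chosen partner:
msg2_fixed = enc("keyA", "B", na, NB)
claimed, _, _ = dec("keyA", msg2_fixed)
if claimed != "I":                # Alice started this run with I, not B
    print("fixed protocol: Alice aborts, the attack fails")
```

With the extra identity field, the first property above is no longer violated.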
After this modification, SPIN confirms that all three formulas hold of the model—which of course does not prove the correctness of the protocol (see, e.g., [106] for work on the formal verification of cryptographic protocols using interactive theorem proving).

## 3 Systems and Properties

Reactive systems can be broadly classified as distributed systems whose subcomponents are spatially separated and concurrent systems that share resources such as processors and memories. Distributed systems communicate by message passing, whereas concurrent systems may use shared variables. Concurrent processes may share a common clock and execute in lock-step (time-synchronous systems, typical for hardware verification problems) or operate asynchronously, sharing a common processor. In the latter case, one will typically assume fairness conditions ensuring that processes that could execute are eventually scheduled for execution. A common framework for the representation of these different kinds of systems is provided by the concept of transition systems. Properties of (runs of) transition systems are conveniently expressed in temporal logic.

**3.1 Transition Systems**

**Definition 1.** A transition system T = (S, I, A, δ) is given by a set S of states, a non-empty subset I ⊆ S of initial states, a set A of actions, and a total transition relation δ ⊆ S × A × S (that is, we require that for every state s ∈ S there exist A ∈ A and t ∈ S such that (s, A, t) ∈ δ).

An action A ∈ A is called enabled at state s ∈ S iff (s, A, t) ∈ δ holds for some t ∈ S.

A run of T is an infinite sequence ρ = s0 s1 ... of states si ∈ S such that s0 ∈ I and for all i ∈ ℕ, (si, Ai, si+1) ∈ δ holds for some Ai ∈ A.

A transition system specifies the allowed evolutions of the system: starting from some initial state, the system evolves by performing actions that take the system to a new state. Slightly different definitions of transition systems abound in the literature. For example, actions are sometimes not explicitly identified. We have assumed the transition relation to be total in order to simplify some of the definitions below. Totality can be ensured by including a stuttering action that does not change the state; only the stuttering action is enabled in deadlock or quiescent states. Definition 1 is often augmented by fairness conditions, see section 4.2. Some papers use the term Kripke structure instead of transition system, in honor of the logician Saul A. Kripke, who used transition systems to define the semantics of modal logics [78].

In practice, reactive systems are described using modelling languages, including (pseudo) programming languages such as PROMELA, but also process algebras or Petri nets. The operational semantics of these formalisms is conveniently defined in terms of transition systems. However, the transition system that corresponds to such a description is typically of size exponential in the length of the description. For example, the state space of a shared-variable program is the product of the variable domains. Modelling languages and their associated model checkers are usually optimized for particular kinds of systems such as synchronous shared-variable programs or asynchronous communication protocols.
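To make the preceding remark concrete, the following Python toy (all names invented for this illustration) expands a trivial program of n processes, each of which may flip one boolean variable, into an explicit transition system in the sense of Definition 1 and prints how the state space grows with n.

```python
from itertools import product

def flip_ts(n):
    """Explicit transition system for n bit-flipping processes:
    S = {0,1}^n, I = {(0,...,0)}, and every state enables every
    'flip i' action, so the transition relation is total."""
    states = list(product([0, 1], repeat=n))
    init = {(0,) * n}
    delta = set()
    for s in states:
        for i in range(n):
            t = list(s)
            t[i] ^= 1
            delta.add((s, "flip" + str(i), tuple(t)))
    return states, init, delta

for n in (2, 4, 8, 12):
    states, _, delta = flip_ts(n)
    print(n, "variables:", len(states), "states,", len(delta), "transitions")
```

Already twelve boolean variables yield 4096 states: a description of length linear in n induces a transition system of size 2^n, which is the state explosion that sections 4.3 and 4.4 attempt to tame.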
In particular, for systems composed of several processes it is advantageous to exploit the process structure and avoid the explicit construction of a single transition system that represents the joint behavior of the processes. This will be further explored in section 4.4.

**3.2 Properties and Temporal Logic**

Given a transition system T, we can ask questions such as the following:

– Are any "undesired" states reachable in T, such as states that represent a deadlock, a violation of mutual exclusion, etc.?
– Are there runs of T such that, from some point onwards, some "desired" state is never reached or some action never executed? Such runs may represent livelocks where, for example, some process is prevented from entering its critical section, although other components of the system may still make progress.
– Is some initial system state of T reachable from every state? In other words, can the system be reset?

Temporal logic [45, 79, 94, 95, 117] is a convenient language to formally express such properties. Let us first consider temporal logic of linear time, whose formulas express properties of runs of transition systems. Assume given a denumerable set V of atomic propositions, which represent properties of individual states.

**Definition 2.** Formulas of propositional temporal logic PTL of linear time are inductively defined as follows:

– Every atomic proposition v ∈ V is a formula.
– Boolean combinations of formulas are formulas.
– If ϕ and ψ are formulas then so are X ϕ ("next ϕ") and ϕ U ψ ("ϕ until ψ").

PTL formulas are interpreted over behaviors, that is, ω-sequences of states. We assume that atomic propositions v ∈ V can be evaluated at states s ∈ S and write s(V) to denote the set of propositions true at state s. For a behavior σ = s0 s1 ..., we let σi denote the state si and σ|i the suffix si si+1 ... of σ.

**Definition 3.** The relation σ |= ϕ ("ϕ holds of σ") is inductively defined as follows:

– σ |= v (for v ∈ V) iff v ∈ σ0(V).
– The semantics of boolean combinations is defined as usual.
– σ |= X ϕ iff σ|1 |= ϕ.
– σ |= ϕ U ψ iff for some k ≥ 0, σ|k |= ψ and σ|j |= ϕ holds for all 0 ≤ j < k.

Other useful PTL formulas can be introduced as abbreviations: F ϕ ("finally ϕ", "eventually ϕ") is defined as true U ϕ; it asserts that ϕ holds of some suffix. The dual formula G ϕ ≡ ¬F ¬ϕ ("globally ϕ", "always ϕ") requires ϕ to hold of all suffixes. The formula ϕ W ψ ("ϕ waits for ψ", "ϕ unless ψ") is defined as (ϕ U ψ) ∨ G ϕ and requires ϕ to hold for as long as ψ does not hold; unlike ϕ U ψ, it does not require ψ to become true eventually.

The following formulas are examples of typical correctness assertions about a two-process resource manager. We assume reqi and ownsi to be atomic propositions true when process i has requested the resource or when it owns the resource.

G ¬(owns1 ∧ owns2) : It is never the case that both processes own the resource. In general, properties of the form G p, for non-temporal formulas p, express system invariants.

G(req1 ⇒ F owns1) : Whenever process 1 has requested the resource, it will eventually obtain it. Formulas of this form are often called response properties [93].

G F(req1 ∧ ¬(owns1 ∨ owns2)) ⇒ G F owns1 : If it is infinitely often the case that process 1 has requested the resource when the resource is free, then process 1 infinitely often owns the resource. This formula expresses a (strong) fairness condition for process 1.
G(req1 ∧ req2 ⇒ (¬owns2 W (owns2 W (¬owns2 W owns1)))) : Whenever both processes compete for the resource, process 2 will be granted the resource at most once before it is granted to process 1. This property, known as "1-bounded overtaking", is an example of a precedence property. It is best understood as asserting the existence of four, possibly empty or right-open, intervals that satisfy the respective conditions.

PTL formulas assert properties of single behaviors, but we are interested in system validity: we say that formula ϕ holds of T (written T |= ϕ) if ϕ holds of all runs of T. In this sense, PTL formulas express correctness properties of a system. The existence of a run satisfying a certain property cannot be expressed in PTL. Such possibility properties are the domain of branching-time logics such as the logic CTL (computation tree logic [25]).

**Definition 4.** Formulas of propositional CTL are inductively defined as follows:

– Every atomic proposition v ∈ V is a formula.
– Boolean combinations of formulas are formulas.
– If ϕ and ψ are formulas then EX ϕ, EG ϕ, and ϕ EU ψ are formulas.

CTL formulas are interpreted at the states of a transition system. A path in T is an ω-sequence σ = s0 s1 ... of states related by δ; it is an s-path if s = s0.

**Fig. 5. A transition system T (transitions s0 → s0, s0 → s1, s1 → s2, s2 → s2; p holds at s0 and s2 but not at s1) such that T |= F G p but T ̸|= AF AG p.**

**Definition 5.** The relation T, s |= ϕ is inductively defined as follows:

– T, s |= v (for v ∈ V) iff v ∈ s(V).
– The semantics of boolean combinations is defined as usual.
– T, s |= EX ϕ iff there exists an s-path s0 s1 ... such that T, s1 |= ϕ.
– T, s |= EG ϕ iff there is an s-path s0 s1 ... such that T, si |= ϕ holds for all i.
– T, s |= ϕ EU ψ iff there exist an s-path s0 s1 ... and k ≥ 0 such that T, sk |= ψ and T, sj |= ϕ holds for all 0 ≤ j < k.

Derived CTL formulas include EF ϕ ≡ true EU ϕ, AX ϕ ≡ ¬EX ¬ϕ, and AG ϕ ≡ ¬EF ¬ϕ. For example, the formula AG ¬(owns1 ∧ owns2) expresses mutual exclusion for the two-process resource manager, whereas AG(req1 ⇒ EF owns1) asserts that whenever process 1 requests the resource, it can eventually obtain the resource, although there may be executions that do not honor the request. The formula AG EF init (for a suitable predicate init) asserts that the system is resettable.

System validity for CTL formulas is defined by T |= ϕ if T, s |= ϕ holds for all initial states s of T. The expressiveness of PTL and CTL can be compared by analyzing which properties of transition systems can be formulated. It turns out that neither logic subsumes the other one [84, 41, 43]: whereas PTL is clearly incapable of expressing possibility properties, fairness properties cannot be stated in CTL. More specifically, there is no CTL formula that is system valid iff the PTL formula F G ϕ is. In particular, it does not correspond to AF AG ϕ, as shown in Fig. 5: every run of the transition system T satisfies F G p (either it stays in state s0 forever or it ends in state s2), but T, s0 ̸|= AF AG p (for the run that stays in state s0 there is always the possibility to move to state s1).
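To make Definition 3 concrete, the following Python sketch (invented names and encodings, for illustration only) evaluates PTL formulas over ultimately periodic behaviors σ = prefix·loop^ω. Such "lasso" behaviors have only finitely many distinct suffixes, so ϕ U ψ can be decided by scanning at most |prefix| + |loop| positions; applied to the two shapes of runs of the system of Fig. 5, it confirms that both satisfy F G p.

```python
def lasso(prefix, loop):
    word = prefix + loop
    first = len(prefix)
    succ = lambda i: i + 1 if i + 1 < len(word) else first
    return word, succ

def holds(f, word, succ, i=0):
    if f is True:
        return True
    if isinstance(f, str):                 # atomic proposition v
        return f in word[i]                # v in sigma_i(V)
    op = f[0]
    if op == "not": return not holds(f[1], word, succ, i)
    if op == "X":   return holds(f[1], word, succ, succ(i))
    if op == "U":                          # scan the finitely many suffixes
        j = i
        for _ in range(len(word)):
            if holds(f[2], word, succ, j): return True
            if not holds(f[1], word, succ, j): return False
            j = succ(j)
        return False

def F(f): return ("U", True, f)            # F phi == true U phi
def G(f): return ("not", F(("not", f)))    # G phi == ~F~phi

stay  = lasso([], [{"p"}])                 # the run that stays in s0 forever
leave = lasso([{"p"}, set()], [{"p"}])     # the run s0 s1 s2 s2 ...
for word, succ in (stay, leave):
    print(holds(F(G("p")), word, succ))    # True for both shapes of runs
```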
_Extensions and variations._ The lack of expressiveness of CTL is due to the requirement that path quantifiers (E, A) and temporal operators (X, G, U) alternate. The logic CTL* [41, 43] removes this restriction and (strictly) subsumes both PTL and CTL. For example, the CTL* formula AFG p is system valid iff the PTL formula F G p is. The propositional µ-calculus [77], also known as µTL, allows properties to be defined as smallest or greatest fixed points, generalizing recursive characterizations of temporal operators such as

EG ϕ ≡ ϕ ∧ EX EG ϕ

It strictly subsumes the logic CTL*. For example, the formula νX . ϕ ∧ AX AX X asserts that ϕ holds at every state with even distance from the current state.

Alternating-time temporal logic [6] refines the path quantifiers of branching-time temporal logics by allowing references to different processes (or agents) of a reactive system. One can, for example, assert that the resource manager can ensure mutual exclusion between the clients, or that the manager and client 1 can cooperate to prevent client 2 from accessing the resource.
The existence of such paths _B_ can be decided in linear time using the Tarjan-Paige algorithm [119] that enumerates the strongly connected components of reachable from locations in I, and checking _B_ whether some SCC contains some accepting location. _⊓⊔_ Observe that the construction used in the proof of theorem 7 implies that an ωregular language is non-empty iff it contains some word of the form xy _[ω]_ where x ∈ _Σ[∗]_ and y _Σ[+]._ _∈_ Unlike the case of standard finite automata, deterministic B¨uchi automata are strictly weaker than non-deterministic ones. For example, there is no deterministic B¨uchi automaton that accepts the same language as the automaton of Fig. 6. Intuitively, the _B_ reason is that uses unbounded non-determinism to “guess” when it has seen the last _B_ input a (for a rigorous proof see e.g. [120]). It is therefore impossible to prove closure of the class of ω-regular languages under complement in the standard way (first construct a deterministic B¨uchi automaton equivalent to the initial one, then complement the set of accepting locations). Nevertheless, B¨uchi [19] has shown that the complement of an ω-regular language is again ω-regular. His proof relied on combinatorial arguments (Ramsey’s theorem) and was non-constructive. A succession of papers has replaced this argument with explicit constructions, culminating in the following result due to Safra [111] of essentially optimal complexity; Thomas [121, 122] explains different strategies for proving closure under complement. **Proposition 8. For a B¨uchi automaton** _with n locations over alphabet Σ there is a_ _B_ _B¨uchi automaton_ _with 2[O][(][n][ log][ n][)]_ _locations such that_ ( ) = Σ[ω] ( ). _B_ _L_ _B_ _\ L_ _B_ Other types of ω-automata have also been considered. Generalized B¨uchi automata define the acceptance condition by a (finite) set F = {F1, . . ., Fn _} of sets of loca-_ tions [126]. A run is accepting if some location from every Fi is visited infinitely often. Using a counter modulo n, it is not difficult to simulate a generalized B¨uchi automaton by a standard one. The algorithm for checking nonemptiness can be adapted by searching some strongly connected component that contains some location from every Fi . _Muller automata also specify the acceptance condition as a set_ of set of locations; a _F_ run is accepting if the set of locations that appears infinitely often is an element of . _F_ Rabin and Streett automata use pairs of sets of locations to define even more elaborate acceptance conditions, such as requiring that if locations in a set R _Q are visited in-_ _⊆_ finitely often then there are also infinitely many visits to locations in another set G _Q._ _⊆_ Streett automata can be exponentially more succinct than B¨uchi automata, and deterministic Rabin and Streett automata are at the heart of Safra’s proof. It is also possible to place acceptance conditions on the transitions rather than the locations [7, 36]. _Alternating automata [102] present a more radical departure from the format of_ B¨uchi automata and have attracted considerable interest in recent years. The basic idea is to allow the automaton to make a transition from one location to several successor ----- locations that are simultaneously active. One way to define such a relation is to let _δ(q, a) be a positive Boolean formula with the locations as atomic propositions. 
Alternating automata [102] present a more radical departure from the format of Büchi automata and have attracted considerable interest in recent years. The basic idea is to allow the automaton to make a transition from one location to several successor locations that are simultaneously active. One way to define such a relation is to let δ(q, a) be a positive Boolean formula with the locations as atomic propositions. For example,

δ(q1, a) = (q2 ∧ q3) ∨ q4

specifies that whenever location q1 is active and input symbol a ∈ Σ is read, the automaton moves to locations q2 and q3 in parallel, or to location q4. Runs of alternating automata are no longer infinite sequences, but rather infinite trees or dags of locations. Although they also define the class of ω-regular languages, alternating automata can be exponentially more succinct than Büchi automata, due to their inherent parallelism. On the other hand, checking for nonemptiness is normally of exponential complexity.

**3.4 Temporal Logic and Automata**

We can consider a behavior as an ω-word over the alphabet 2^V, identifying a system state s and the set s(V) of atomic propositions that s satisfies. From this perspective, PTL formulas and ω-automata are two different formalisms to describe ω-words, and it is interesting to compare their expressiveness. For example, the Büchi automaton of Fig. 6 can be identified with the PTL formula F G b.

We outline a construction of a generalized Büchi automaton Bϕ for a given PTL formula ϕ such that Bϕ accepts precisely those runs over which ϕ holds. In view of the high complexity of complementation (cf. Prop. 8), the construction is not defined by induction on the structure of ϕ but is based on a "global" construction that considers all subformulas of ϕ simultaneously. The Fischer-Ladner closure C(ϕ) of formula ϕ is the set of subformulas of ϕ and their complements, identifying ¬¬ψ and ψ. The locations of Bϕ are subsets of C(ϕ), with the intuition that an accepting run of Bϕ from location q satisfies the formulas in q. More precisely, the locations q of Bϕ are all subsets of C(ϕ) that satisfy the following healthiness conditions:

– For all ψ ∈ C(ϕ), either ψ ∈ q or ¬ψ ∈ q, but not both.
– If ψ1 ∨ ψ2 ∈ C(ϕ) then ψ1 ∨ ψ2 ∈ q iff ψ1 ∈ q or ψ2 ∈ q.
– Conditions for other boolean combinations are similar.
– If ψ1 U ψ2 ∈ q, then ψ2 ∈ q or ψ1 ∈ q.
– If ψ1 U ψ2 ∈ C(ϕ) \ q, then ψ2 ∉ q.

The initial locations of Bϕ are those locations containing ϕ. The transition relation δ of Bϕ is defined such that (q, s, q') ∈ δ iff all of the following conditions hold:

– s = q ∩ V is the set of atomic propositions of V that appear in q; these must obviously be satisfied immediately by any run starting in q.
– q' contains ψ (resp., does not contain ψ) if X ψ ∈ q (resp., X ψ ∈ C(ϕ) \ q).
– If ψ1 U ψ2 ∈ q and ψ2 ∉ q then ψ1 U ψ2 ∈ q'.
– If ψ1 U ψ2 ∈ C(ϕ) \ q and ψ1 ∈ q then ψ1 U ψ2 ∉ q'.

The healthiness and next-state conditions are justified by propositional consistency and by the "recursion law"

ψ1 U ψ2 ≡ ψ2 ∨ (ψ1 ∧ X(ψ1 U ψ2))

In particular, they ensure that whenever some location contains ψ1 U ψ2, subsequent locations contain ψ1 for as long as they do not contain ψ2. It remains to define the acceptance conditions of Bϕ, which must ensure that every location containing some formula ψ1 U ψ2 will be followed by some location containing ψ2. Let ψ1^1 U ψ2^1, ..., ψ1^k U ψ2^k be all subformulas of this form in C(ϕ). Then Bϕ has the acceptance condition F = {F1, ..., Fk} where Fi is the set of locations that do not contain ψ1^i U ψ2^i or that contain ψ2^i.
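The first half of this construction, computing the closure C(ϕ) and filtering out the unhealthy subsets, is easily sketched in code. The Python below is an invented, illustrative encoding that treats only ¬, ∨ and U; the conditions on X constrain transitions rather than locations and are omitted here.

```python
from itertools import combinations

def neg(f):                                   # complement, with ~~psi == psi
    return f[1] if isinstance(f, tuple) and f[0] == "not" else ("not", f)

def closure(f, acc=None):
    """Subformulas of f and their complements (Fischer-Ladner closure)."""
    acc = set() if acc is None else acc
    if f in acc:
        return acc
    acc |= {f, neg(f)}
    if isinstance(f, tuple):
        for g in f[1:]:
            closure(g, acc)
    return acc

def consistent(q):                            # the healthiness conditions
    for f in q:
        if not isinstance(f, tuple):
            continue
        if f[0] == "or" and f[1] not in q and f[2] not in q: return False
        if f[0] == "U" and f[1] not in q and f[2] not in q: return False
        if f[0] == "not" and isinstance(f[1], tuple):
            g = f[1]
            if g[0] == "or" and (g[1] in q or g[2] in q): return False
            if g[0] == "U" and g[2] in q: return False
    return True

def locations(phi):
    """All healthy subsets of C(phi): pick one of psi / ~psi for each
    representative, then discard the propositionally inconsistent sets."""
    pos = [f for f in closure(phi)
           if not (isinstance(f, tuple) and f[0] == "not")]
    locs = []
    for k in range(len(pos) + 1):
        for chosen in combinations(pos, k):
            q = set(chosen) | {neg(f) for f in pos if f not in chosen}
            if consistent(q):
                locs.append(q)
    return locs

print(len(locations(("U", "p", "q"))))        # 5 of the 8 candidates remain
```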
As an example, Fig. 7 shows the automaton BF for the formula F ≡ (p U q) ∨ (¬p U q).

**Fig. 7. Büchi automaton for F ≡ (p U q) ∨ (¬p U q), with locations q1–q6 labelled by the subsets of C(F) they represent.**

For clarity, we have omitted the edge labels, which are simply the set of atomic propositions contained in the source location. The acceptance sets corresponding to the subformulas p U q and ¬p U q are {q1, q3, q4, q5, q6} and {q1, q2, q3, q5, q6}. For example, they ensure that no accepting run remains forever in location q2.

This construction, which is very similar to a tableau construction [128], implies the existence of a Büchi automaton that accepts precisely the models of any given PTL formula. The following proposition is due to [87, 126].

**Proposition 9.** For every PTL formula ϕ of length n there exists a Büchi automaton Bϕ with 2^O(n) locations that accepts precisely the behaviors of which ϕ holds.

Combining proposition 9 and theorem 7, it follows that the satisfiability problem for PTL is solvable in exponential time by checking whether L(Bϕ) = ∅; in fact, Sistla and Clarke [114] have shown that the PTL satisfiability problem is PSPACE-complete. Note that the above construction invariably produces a Büchi automaton Bϕ whose size is exponential in the length of the formula ϕ. Constructions that try to avoid this exponential blow-up [56, 38, 36] are the basis for actual implementations.

On the other hand, it is not the case that every ω-regular language can be defined by a PTL formula: Kamp [74] has shown that PTL formulas can define exactly the same behaviors as first-order logic formulas of the monadic theory of linear orders, that is, formulas built from =, <, and unary predicates Pv(x), for v ∈ V, interpreted over the natural numbers, see also [54]. This fragment of first-order logic is known to define the set of star-free ω-regular languages, a result due to McNaughton and Papert [98, 121]. For example, the set of behaviors such that proposition p is true at the even positions (and may be true or false elsewhere) is not PTL-definable [128]. To attain the level of expressiveness of ω-regular languages (which, by Büchi's theorem, is that of the monadic second-order theory of linear orders), PTL can be augmented by so-called "automaton operators" [128], by fixed-point formulas [117], or by quantification over atomic propositions. Unfortunately, the satisfiability problem for some of these logics is of non-elementary complexity; moreover, few applications seem to require the added expressiveness. Nevertheless, such a decision procedure has been implemented in MONA [76] and performs surprisingly well on practical examples.

_Automata for other temporal logics._ Automata-theoretic characterizations of branching-time logics [80] are based on tree automata [120, 121], which again define a notion of regular tree languages. Alternating automata allow for a rather uniform presentation of decision procedures for linear-time, branching-time, and alternating-time temporal logics [103, 125, 82], based on different restrictions on the automaton format. An essentially equivalent approach that does not mention automata can be formulated in terms of logical games [118]. In particular, winning strategies replace the traditional presentation of counter-examples; this can give better feedback to the user, who can then explore different scenarios that violate a property. The model checkers Truth [85] and CWB-NC [31] are based on these concepts.
## 4 Algorithms for Model Checking

Given a transition system T and a formula ϕ, the model checking problem is to decide whether T |= ϕ holds or not. If not, the model checker should provide an explanation why, in the form of a counterexample (i.e., a run of T that violates ϕ). For this to be feasible, T is usually required to be finite-state.

In accordance with the two parameters of the model checking problem (T and ϕ), there are two basic strategies when designing a model checking algorithm: "global" algorithms recurse on the structure of ϕ and evaluate each of its subformulas over all of T. "Local" algorithms, in contrast, explore only parts of the state space of T, but check all subformulas of ϕ in the process. The choice between global and local model checking algorithms does not affect the worst-case complexity of model checking algorithms, but the average behavior on practical examples can differ greatly. Observe that local algorithms may even be able to find errors of infinite-state systems; this is also true for global algorithms that represent the state space of T in an implicit form, as considered in section 4.3. Traditionally, PTL model checking has been based on the local approach, while model checkers for CTL and other branching-time logics have used global algorithms.

```
dfs(boolean search_cycle) {
  p = top(stack);
  foreach (q in successors(p)) {
    if (search_cycle and (q == seed))
      report acceptance cycle and exit;
    if ((q, search_cycle) not in visited) {
      push q onto stack;
      enter (q, search_cycle) into visited;
      dfs(search_cycle);
      if (not search_cycle and (q is accepting)) {
        seed = q;
        dfs(true);
      }
    }
  }
  pop(stack);
}

// initialization
stack = emptystack(); visited = emptyset(); seed = nil;
foreach (initial pair p) {
  push p onto stack;
  enter (p, false) into visited;
  dfs(false);
}
```

**Fig. 8. On-the-fly PTL model checking algorithm.**

**4.1 Local PTL Model Checking**

The model checking problem for PTL can be restated as follows: given T and ϕ, does there exist a run of T that does not satisfy ϕ? This is a refinement of the satisfiability problem considered in section 3.4: instead of asking whether L(B¬ϕ) = ∅, we now ask whether the language defined by the product of T and B¬ϕ is empty or not.

Formally, assume given a finite transition system T = (S, I, A, δT) and a Büchi automaton B¬ϕ = (Q, J, δB, F) that accepts precisely those behaviors that do not satisfy ϕ. The model checking algorithm operates on pairs (s, q) of system states and automaton locations. A pair (s0, q0) is initial if s0 ∈ I and q0 ∈ J are initial for T and B¬ϕ, respectively. A pair (s', q') is a successor of (s, q) if both (s, A, s') ∈ δT (for some A ∈ A) and (q, s(V), q') ∈ δB hold: T and B¬ϕ make joint transitions, the input for B¬ϕ being determined by the values of the atomic propositions at the current system state. A pair (s, q) is accepting if q ∈ F is an accepting automaton location; recall that T does not define an acceptance condition. In particular, we assume any fairness conditions to be expressed as part of the formula ϕ. As in the proof of theorem 7, T and B¬ϕ admit a joint execution iff there is some accepting pair that is reachable from some initial pair and from itself.

The model checking algorithm shown in Fig. 8 is due to Courcoubetis et al. [34]. It is called an "on-the-fly" algorithm because the exploration of reachable pairs is interleaved with the search for acceptance cycles. The algorithm maintains a stack of pairs whose successors need to be explored (resulting in a depth-first search) and a set of pairs that have already been visited. Starting from the initial pairs, the procedure dfs generates reachable pairs until
The algorithm maintains a stack of pairs whose successors need to be explored (resulting in a depth-first search) and a set of pairs that have already been visited. Starting from the initial pairs, the procedure dfs generates reachable pairs until ----- some accepting pair is found. At this point, the search switches to cycle search mode (indicated by the boolean parameter search cycle) and tries to find a path that leads back to the accepting pair. Pairs that have already been encountered in the current search mode are not explored further. Courcoubetis et al. [34] have shown that the algorithm will find some acceptance cycle if one exists, although it is not guaranteed to find all cycles (even if the search were continued instead of exiting). When an acceptance cycle is found, the sequence of system states contained in the stack represents a run of that violates formula ϕ and can be displayed to the user as _T_ a counter-example. Observe that the algorithm of Fig. 8 needs to store only the path back from the current pair back to the initial pair that it started from, and the set of visited pairs. In particular, it does not have to construct the entire product automaton. Of course, when no acceptance cycle is found (and the system is declared error-free), all reachable pairs will have to be explored eventually. However, state exploration stops as soon as an error has been detected. This can be an important practical advantage: the state space of a correct system is constrained by its invariants, which are usually broken when errors are introduced. It is therefore quite common for buggy systems to have many more reachable states, and resources could easily be exhausted if all of them had to be explored. For large models, storing the set of visited pairs may become a problem. If one is willing to trade complete coverage for the ability to analyze systems that would otherwise be unmanageable, one can instead maintain a set of hash codes of visited pairs, possibly using several hashing functions [66]. The model checking algorithm of Fig. 8 has time complexity linear in the product of the sizes of T and of B¬ϕ; by proposition 9 the latter can be exponential in the size of ϕ. However, correctness assertions are often rather short, and as we mentioned in section 3.1, the size of can be exponential in the size of the description input _T_ to the model checker. Therefore, in practice the size of the transition system is the limiting factor. Given current technology, the analysis of systems on the order of 10[6]– 10[7] reachable states is feasible. Techniques that try to overcome this limit are described in section 4.4. **4.2** **Global CTL Model Checking** Let us now consider global model checking algorithms for the logic CTL. By [[ψ]]T (for a CTL formula ψ) we denote the set of states s of such that _, s_ = ψ. The model _T_ _T_ _|_ checking problem can then be rephrased as deciding whether I ⊆ [[ϕ]]T holds. The satisfaction sets [[ψ]]T can be computed by induction on the structure of ψ, as follows: [[v ]]T = {s : v ∈ _s(V)}_ (for v ∈V) [[¬ψ]]T = S \ [[ψ]]T [[ψ1 ∨ _ψ2]]T = [[ψ1]]T ∪_ [[ψ2]]T [[EX ψ]]T = δ[−][1]([[ψ]]T ) = {s : t ∈ [[ψ]]T for some A, t s.t. (s, A, t) ∈ _δ}_ [[EG ψ]]T = gfp(λX .[[ψ]]T ∩ _δ[−][1](X ))_ [[ψ1 EU ψ2]]T = lfp(λX .[[ψ2]]T ∪ ([[ψ1]]T ∩ _δ[−][1](X )))_ ----- where lfp(f ) and gfp(f ), for a function f : 2[S] 2[S], denote the least and greatest _→_ fixed points of f . (These fixed points exist and can be computed effectively because S is finite.) 
The clauses for the EG and EU connectives are justified from the recursive characterizations **EG ψ** _ψ_ **EX EG ψ** _≡_ _∧_ _ψ1 EU ψ2_ _ψ2_ (ψ1 **EX(ψ1 EU ψ2))** _≡_ _∨_ _∧_ The clause for EU calls for the computation of a least fixed point. Intuitively, this is because ψ2 has to become true eventually, and thus the unfolding of the fixed point must eventually terminate. On the other hand, the greatest fixed point is required in the computation of [[EG ψ]] because ψ has to hold arbitrarily far down the path. Observe that the least fixed point of the function corresponding to EG ψ is the empty set, whereas the greatest fixed point in the case of EU computes [[ψ1 EW ψ2]]. For an implementation, we need to be able to efficiently calculate the inverse image function δ[−][1]. Sets [[ψ]]T that have already been computed can be memorized in order to avoid recomputation of common subformulas. In order to assess the complexity of the algorithm, first note that computation of the fixed points is at most cubic in _S_ (if _|_ _|_ the computation has not stabilized, at least one state is added to or removed from the current approximation per iteration, and every iteration may need to search the entire set of transitions, which may be quadratic in _S_ ). Second, there are as many recursive _|_ _|_ calls as ϕ has subformulas, so the overall complexity is linear in the length of ϕ and cubic in _S_ . _|_ _|_ Clarke, Emerson, and Sistla [29] have proposed a less naive algorithm whose complexity is linear in the product of the sizes of the formula and the model. For formulas _ψ1 EU ψ2, the idea is to apply backward breadth-first search. For EG ψ, first the_ model is restricted to states satisfying ψ (which have already been computed recursively), and the strongly connected components of this restricted graph are enumerated. The set [[EG ψ]]T consists of all states of the restricted model from which some SCC can be reached; these states are again found using breadth-first search. Because fairness assumptions can not be formulated in CTL, they must be specified as part of the model, and the model checking algorithm needs to be adapted accordingly. For example, the SMV model checker [97] allows to specify fairness constraints via **CTL formulas. We define fair variants EGf and EUf of the CTL operators whose** semantics is as in definition 5, except that quantifiers are restricted to fair paths, i.e., paths that contain infinitely many states satisfying the constraints. Let us call a state _s fair iff there is some fair s-path; this is the case iff T, s |= EGf true holds. It is_ easy to see that ψ1 EUf ψ2 is equivalent to ψ1 EU (ψ2 **EGf true), hence we need** _∧_ only define an algorithm to compute [[EGf ψ]]T . The algorithm of Clarke, Emerson, and Sistla can be modified by restricting to those SCCs that for each fairness constraint _ζi contain some state satisfying ζi_ . The complexity of fair CTL model checking is thus still linear in the sizes of the formula and the model. For more information on different kinds of fairness constraints and their associated model checking algorithms see [42, 44, 81]. A global model checking algorithm for the branching-time fixed point logic µTL can be defined along the same lines. The complexity is then of the order _ϕ_ _S_ _|_ _| · |_ _|[qd][(][ϕ][)]_ ----- where qd (ϕ) denotes the nesting depth of the fixed point operators in the formula ϕ. 
A global model checking algorithm for the branching-time fixed point logic µTL can be defined along the same lines. The complexity is then of the order |ϕ| · |S|^qd(ϕ), where qd(ϕ) denotes the nesting depth of the fixed point operators in the formula ϕ. However, Emerson and Lei [44] observed that the computation of fixed points can be optimized for blocks of fixed point operators of the same type, resulting in a complexity of the order |ϕ| · |S|^ad(ϕ), where ad(ϕ) is the alternation depth of fixed point operators of different type in ϕ. In particular, the complexity of model checking alternation-free µTL is the same as for CTL [42, 32].

**4.3 Symbolic Model Checking**

The ability to analyze systems of relevant size using model checking requires efficient data structures to represent objects such as transition systems and sets of system states. Any finite-state system can be encoded using a set {b1, ..., bn} of binary variables, just as ordinary data types of programming languages are represented in binary form on a digital computer. Sets of states, for example the set of initial states, can then be represented as propositional formulas over {b1, ..., bn}, and sets of pairs of states, such as the pairs (s, t) related by δ (for some action), can be represented as propositional formulas over {b1, ..., bn, b1', ..., bn'}, where the unprimed variables represent the pre-state s and the primed variables represent the post-state t. The size of the representing formula depends on the structure of the represented set rather than on its size: for example, the empty set and the set of all states are represented by false and true, both of size 1. For this reason, such representations are often called symbolic, and model checking algorithms that work on symbolic representations are called symbolic model checking techniques [20, 97].

Binary decision diagrams [16, 18] (more precisely, reduced ordered BDDs) are a data structure for the symbolic representation of sets that has become very popular for model checking because it offers the following features:

– Every boolean function has a unique, canonical BDD representation. If sharing of BDD nodes is enforced, equality of two functions can be decided in constant time by checking for pointer equality.
– Boolean operations such as negation, conjunction, implication, etc. can be implemented with complexity proportional to the product of the inputs.
– Projection (quantification over one or several boolean variables) is easily implemented; its complexity is exponential in the worst case but tends to be well behaved in practice.

BDDs can be understood as compact representations of ordered decision trees. For example, Fig. 9 shows a decision tree for the formula

(x1 ∧ y1) ∨ ((x1 ∨ y1) ∧ (x0 ∧ y0))

which is the characteristic function for the carry bit produced by an addition of the two-bit numbers x1x0 and y1y0. To find the result for a given input, follow the path labelled with the bit values for each of the inputs. The label of the leaf indicates the value of the function. The tree is ordered because the variables appear in the same order along every branch.

**Fig. 9. Ordered decision tree for the 2-bit carry.**

**Fig. 10. BDDs for the carry from a 2-bit adder (left: the variable ordering of Fig. 9; right: the ordering x0, y0, x1, y1).**
In our example, we obtain the BDD shown on the left-hand side of Fig. 10, where the leaf labelled 0 and all edges leading into it have been deleted for clarity. In an actual implementation, all BDD nodes that have been allocated are kept in a hash table indexed by the top variable and the two sub-BDDs, in order to avoid identical BDDs to be created twice. This ensures that two BDDs are functionally equivalent if and only if they are identical. For a fixed variable ordering the BDD representing any given propositional formula is uniquely determined (and equivalent formulas are represented by the same BDD), but BDD sizes can vary greatly for different variable orderings. For example, the right-hand side of Fig. 10 shows a BDD for the same formula as before, but with the variable ordering x0, y0, x1, y1. When considering the carry for n-bit addition, the BDD sizes for the variable ordering x0, . . ., xn−1, y0, . . ., yn−1 grow exponentially with n, whereas they grow only linearly for the ordering x0, y0, . . ., xn−1, yn−1. It is usually a good heuristic to group “dependent” variables closely together [53, 47]. In general, however, the problem of finding an optimal variable ordering is NP-hard [17], and existing BDD libraries offer automatic reordering strategies based on steepest-ascent heuristics [51, 10]. There are also functions (such as multiplication) for which no variable ordering can avoid exponential growth. This is also a problem when representing queues, frequently necessary for the analysis of communication protocols, and special-purpose data structures have been suggested [13, 57]. Given two BDDs f and g (w.r.t. some fixed variable ordering) the BDD that corresponds to Boolean combinations such as f _g, f_ _g etc. can be constructed as follows:_ _∧_ _∨_ **– If f and g are both terminal BDDs (0 or 1), return the terminal BDD for the result** of applying the operation. **– Otherwise, let v be the smaller of the variables at the root of f and g. Recursively** apply the operation to the sub-BDDs that correspond to v being 0 and 1 (often called the “co-factors” of f and g for variable v ). The results l and r correspond to the left- and right-hand branches of the result BDD. If l = r, return l, otherwise return a BDD with top variable v and children l and r . When recursive calls to this “apply” function are memorized in a hash table, the number of subproblems to be solved is at most the number of pairs of nodes in f and g. Assuming perfect hashing, the complexity is therefore linear in the product of the sizes of f and g. Observing that existential quantification over propositional variables can be computed as (∃v : f ) ≡ _f |v_ =0 ∨ _f |v_ =1 the computation of a BDD corresponding to the quantified formula can be reduced to calculating co-factors and disjunction, and in fact quantification over a set of variables can be performed in a single pass over the BDD. ----- _Symbolic CTL model checking. The naive CTL model checking algorithm of sec-_ tion 4.2 is straightforward to implement based on a BDD representation of the transition system T . It computes BDDs for the sets [[ψ]]T ; in particular, the inverse image δ[−][1](X ) of a set X that is represented as a BDD is computed as the BDD _∃b1[′]_ _[, . . .,][ b]n[′]_ [:][ δ][ ∧] _[X][ ′]_ where X _[′]_ is a copy of X in which all variables have been primed, and b1[′] _[, . . .,][ b]n[′]_ [are all] the primed variables. 
Naive computation of fixed points is also very simple using BDDs because equality of BDDs can be decided in constant time. It is interesting to compare the complexity of this BDD-based algorithm with that of explicit-state CTL model checking: Because the representation of the transition relation using BDDs can be exponentially more succinct than an explicit enumeration, the symbolic algorithm has exponential worst-case complexity in terms of the BDD sizes for the transition relation. First, the number of iterations required for the calculation of the fixed points may be exponential in the number of the input variables, and secondly, the computation of the inverse image may produce BDDs exponential in the size of their inputs. In practice, however, the number of iterations required for stabilization is often quite small, and the inverse image operation is well-behaved. This holds especially for hardware verification problems of “regular” structure and with short data paths. (A precise definition of “regular” is, however, very difficult.) For this class of problems, symbolic model checking has been successfully applied to the analysis of systems with 10[100] states and more [30]. The main problem is then to find a variable ordering that yields a small representation of the transition system. _Symbolic model checking for other logics. The approach used for symbolic CTL model_ checking extends basically unchanged for propositional µTL. An extension for the richer relational µ-calculus [105] has been described by Burch et al. [20] and implemented in the model checker µcke [12]. Symbolic model checking for PTL has been considered in [24, 112]. The basic idea is to represent each formula in (ϕ) by a boolean variable and to define the transi_C_ tion relation and acceptance condition of B¬ϕ in terms of these variables rather than constructing the automaton explicitly. _Bounded model checking. Although symbolic model checking has traditionally been_ associated with BDDs, other representations of boolean functions have also attracted interest. A recent example is the bounded model checking technique described in [11]. It relies on the observation that state sequences of fixed length, say k, can be represented using k copies of the variables used to represent a single state. The set of fixed-length sequences that represent terminating or looping runs of a given finite-state transition system can therefore be encoded by formulas of (non-temporal) propositional logic, _T_ as well as the semantics of PTL formulas ϕ over such sequences. For any given length _k_, the existence of a state sequence of length k that represents a run of satisfying ϕ _T_ can thus be reduced to the satisfiability of a certain propositional formula, which can be decided using efficient algorithms such as St˚almarck’s algorithm [115] or SATO [130]. On the other hand, the small model property of PTL (which follows from the tableaubased decision procedure discussed in section 3.4) implies that there is a run of _T_ ----- _B_ _B_ _D_ _D_ _D_ � � - s0 _A_ - s1   _C_ QQQQQs _s2_ _B_  I � � � - _B_ - C - _t0_ _t1_ _t2_    **Fig. 11. Transition systems for two processes.** satisfying ϕ if and only if there is some such run that can be represented by a sequence of length at most _S_ 2[|][ϕ][|]. A model checking algorithm is therefore obtained by enu_|_ _| ·_ merating all finite executions up to this bound. 
**4.4** **Partial-order Reductions** Whereas symbolic model checking derives its power from efficient data structures for the representation and manipulation of large sets of sufficiently regular structure, algorithms based on explicit state enumeration can be improved if only a fraction of the reachable pairs need to be explored. This idea has been applied most successfully in the case of asynchronous systems that are composed of concurrent processes with relatively little interaction. The full transition system has as its runs all possible interleavings of the actions of the individual processes. For many properties, however, the relative order of concurrent actions is irrelevant, and it suffices to consider only a few sequentializations. More sophisticated models than simple interleaving-based representations have been considered in concurrency theory. In particular, Mazurkiewicz traces model runs as partial orders of events. Reduction techniques that take advantage of the commutativity of actions are therefore often called partial-order reductions, although the analogy to Mazurkiewicz traces is usually rather superficial. The main problem in the design of a practical algorithm is to detect when two actions commute, given only the “local” knowledge available at a given system state. For example, consider the transition systems for two processes represented in Fig. 11. The left-hand process has a choice between executing actions A and C, whereas the righthand process must perform action B before action C . Assuming that processes synchronize on common actions, action C is disabled at the global state (s0, t0), whereas A, _B_, and D could be performed. Moreover, all these actions commute at state (s0, t0). In particular, A and B can be executed in either order, resulting in the global state (s1, t1). However, it would be an error to conclude that only the successors of state (s0, t0) with respect to action A need be considered, because action C can then never be taken. The lesson is that actions that are currently disabled must nevertheless be taken into account when constructing a reduced state space. There is also a danger of prematurely stopping the state exploration because actions are delayed forever along a loop. For an extreme example, consider again the transition ----- systems of Fig. 11 at the global state (s0, t0). The local action D of the right-hand process is certainly independent of all other actions. The only successor with respect to that action is again state (s0, t0). A naive modification of the model checking algorithm of Fig. 8 would stop generating further states at that point, which is obviously inadequate. Partial-order reduction algorithms [123, 58, 67, 48, 108] differ in how these problems are dealt with in order to arrive at a reasonably efficient algorithm that is adequate for the given task. The general idea is to approximate the semantic notion of commutativity of actions using syntactic criteria. For example, for a language based on shared variables, two actions of different processes are certainly independent if they do not update the same variable. For message passing communication, send and receive operations over the same channel are independent at those states where the channel is neither empty nor full. Second, the formula ϕ being analysed must be taken into account: call an action A visible for ϕ if A may change the value of a variable that occurs in ϕ. 
Holzmann and Peled [67] define an action to be safe if it is not visible and if it is provably independent (with the help of syntactic criteria) of all actions of different processes, even if these actions are currently disabled. The depth-first search algorithm shown in Fig. 8 can then be modified so that only the successor states of some process that can only perform safe actions at the current state are considered. Consideration of the actions of other processes is thus delayed. However, the delayed actions must be considered before a loop is completed. This rather simple heuristic can already lead to substantial savings and carries almost no overhead because the set of safe actions can be determined statically. More elaborate reduction techniques are considered, for example, in [58, 107, 124]. There is always a tradeoff between the potential effectiveness of a reduction method and the overhead involved in computing a sufficient set of actions that must be explored at a given state. Moreover, the effectiveness of partial-order reductions in general depends on the structure of the system: while they are useless for tightly synchronized systems, they may dramatically reduce the number of states and transitions explored during model checking for loosely coupled, asynchronous systems.

## 5 Further topics

We conclude this survey with brief references to some more advanced topics in the context of model checking. Several of these issues are addressed in detail in other contributions to this volume.

_Abstraction._ Although techniques such as symbolic model checking and partial-order reduction attempt to battle the infamous state explosion problem, the size of systems that can be analysed using model checking remains relatively limited: even astronomical numbers such as 10^100 states are generated by systems with a few hundred bits, which is a far cry from realistic hardware or software systems. Model checking must therefore be performed on rather abstract models. It is often advocated that model checking be applied to high-level designs during the early stages of system development because the payoff of finding bugs at that level is high whereas the costs are low. For example, Lilius and Paltor [88] describe a tool for model checking UML state machine diagrams [14], and model checking of system specifications of similar degrees of abstraction has been considered in [5, 52].

When the analysis of big models cannot be avoided, it is rarely necessary to consider them in full detail in order to verify or falsify some given property. This idea can be formalized as an abstraction function (or relation) that induces some abstract system model such that the property holds of the original, "concrete" model if it can be proven for the abstract model. (Dually, abstractions can be set up such that failure of the property in the abstract model implies failure in the concrete model.) In general, the appropriate abstraction relation depends on the application and has to be defined by the user. Abstraction-based approaches are therefore not entirely automatic "push-button" methods in the same way that standard model checking is. Given a concrete model and an abstraction relation, one can either attempt to construct the abstract model using techniques of abstract interpretation [35] or verify the correctness of a proposed abstract model using theorem proving. There is a large body of literature on abstraction techniques, including [26, 37, 89, 90, 99].
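For the simplest, explicit-state case, the construction of an abstract model from an abstraction function can be sketched as follows in JavaScript (an illustration of existential abstraction under our own naming; real tools work on symbolic representations instead):

```js
// Illustrative sketch of existential abstraction over explicit transitions.
// alpha maps each concrete state to an abstract state; every concrete
// transition induces an abstract one, so universal properties verified on
// the abstract system also hold of the concrete one.
function abstractTransitions (transitions, alpha) {
  var abs = new Set()
  transitions.forEach(function (t) {
    abs.add(JSON.stringify([alpha(t[0]), alpha(t[1])]))
  })
  return Array.from(abs).map(function (s) { return JSON.parse(s) })
}

// Example: abstract a counter by whether its value is zero or positive.
var alpha = function (n) { return n === 0 ? 'zero' : 'pos' }
console.log(abstractTransitions([[0, 1], [1, 2], [2, 3], [3, 0]], alpha))
// [['zero','pos'], ['pos','pos'], ['pos','zero']]
```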
A particularly attractive way of presenting abstractions is in the form of _predicate abstractions_, where predicates of interest at the concrete level are mapped to Boolean variables at the abstract level. The abstract models can then be presented as _verification diagrams_, which are intuitively meaningful to system designers and can be used to (interactively) verify systems of arbitrary complexity [39, 92, 113, 75, 22]. For restricted classes of systems, it may be possible to apply fixed abstraction mappings (an example is provided by parameterized systems with simple communication patterns [9]) and thus obtain completely automatic methods. Valmari, in his contribution to this volume, also considers a fixed notion of abstraction that is amenable to full automation.

_Symmetry reductions._ Informal correctness arguments are often simplified by appealing to some form of symmetry in the system. For example, components may be replicated in a regular manner, or data may be processed such that permuting individual values does not affect the overall behavior. More formally, a transition system T is said to be invariant under a permutation π of its states and actions if (s, A, t) ∈ δ iff (π(s), π(A), π(t)) ∈ δ, and s ∈ I iff π(s) ∈ I, for all states s, t and all actions A. T is invariant under a group G of permutations if it is invariant under every permutation in the group. Such a group G induces an equivalence relation ∼ on the set of states, defined by s ∼ t iff t = π(s) for some π ∈ G. Provided the properties are also insensitive to the permutations in G, one can check the quotient of T under ∼ and obtain a system that can be much smaller [116, 23, 70, 71].

_Infinite-state systems._ The extension of model checking techniques to infinite-state systems with sufficiently regular state spaces has been an area of active research in recent years [21, 49, 50, 100]. See Esparza's contribution to this volume for more details.

_Parameterized systems._ One is often interested in the properties of a family of finite-state systems that differ in some parameter such as the number of processes. Although individual members of the family can be analyzed using standard model checking techniques, the verification of the entire family requires additional considerations. A natural idea is to perform standard model checking for fixed parameter values and then establish correctness for arbitrary parameter values by induction. In some cases, even the induction step can be justified by model checking. For example, Browne et al. [15] suggest model checking a two-process system, and establishing a bisimulation relation between two-process and n-process systems, ensuring that formulas expressed in a suitable logic cannot distinguish between them. This approach has been extended in [83, 127] by using a finite-state process I that acts as an invariant in that the composition of I with another process is again bisimilar to I. Because both I and the individual processes are finite-state, this can be accomplished using (a variation of) standard model checking. Related techniques are described in [46, 55].

_Compositional verification._ The effects of state explosion can be mitigated when the overall verification effort can be subdivided by considering the components of a complex system one at a time.
As in the case of abstraction, compositional reasoning normally requires additional input from the user, who must specify appropriate properties to be verified of the individual components. The main problem is that components cannot necessarily be expected to function correctly in arbitrary environments, because their design relies on properties of the system the components are expected to be part of. Thus, corresponding assumptions have to be introduced in the statement of the components' correctness properties. Early work on compositional verification [8, 109] required components to form a hierarchy with respect to their dependency. In general, however, every component is part of every other component's environment, and circular dependencies among components are to be expected. More recently, different formulations of assumption-commitment specifications have been studied [1, 33, 96] that can accommodate circular dependencies, based on a form of computational induction. A collection of papers on compositional methods for specification and verification is contained in [40]. Model checking algorithms for modular verification are described, among others, in [59, 73, 72].

_Real-time systems._ Whereas temporal logics such as PTL and CTL only formalize the relative ordering of states and events, many systems require assertions about quantitative aspects of time, and adequate formal models such as timed automata [2] or timed transition systems [62] and logics [4] have been proposed. Algorithms for the reachability and model checking problems for such models include [3, 63, 64]. In general, the complexity for the verification of real-time and hybrid systems is much higher than for untimed systems, and tools such as KRONOS [129], UPPAAL [86] or HYTECH [61] are restricted to relatively small systems. See the contribution by Larsen and Pettersson to this volume for a more comprehensive presentation of the state of the art in model checking techniques for real-time systems.

## References

[1] Martín Abadi and Leslie Lamport. Conjoining specifications. _ACM Transactions on Programming Languages and Systems_, 17(3):507–534, May 1995.
[2] R. Alur. Timed automata. In _Verification of Digital and Hybrid Systems_, NATO ASI Series. Springer-Verlag, 1998.
[3] R. Alur, C. Courcoubetis, and D. Dill. Model-checking for real-time systems. In _5th Ann. IEEE Symp. on Logics in Computer Science_, pages 414–425. IEEE Press, 1990.
[4] R. Alur and T. A. Henzinger. Logics and models of real time: a survey. In _Real Time: Theory in Practice_, volume 600 of _Lecture Notes in Computer Science_, pages 74–106. Springer-Verlag, 1992.
[5] R. Alur, G. J. Holzmann, and D. Peled. An analyzer for message sequence charts. In B. Steffen and T. Margaria, editors, _Tools and Constructions for the Analysis of Systems (TACAS'96)_, volume 1055 of _Lecture Notes in Computer Science_, pages 35–48, Passau, Germany, 1996. Springer-Verlag. See also http://cm.bell-labs.com/cm/cs/what/ubet/index.html.
[6] Rajeev Alur, Thomas A. Henzinger, and Orna Kupferman. Alternating-time temporal logic. In _38th IEEE Symposium on Foundations of Computer Science_, pages 100–109. IEEE Press, October 1997.
[7] A. Anuchitanukul. _Synthesis of Reactive Programs_. PhD thesis, Stanford University, 1995.
[8] H. Barringer, R. Kuiper, and A. Pnueli. Now you may compose temporal logic specifications. In _16th ACM Symp. on Theory of Computing_, pages 51–63. ACM Press, 1984.
[9] K. Baukus, S. Bensalem, Y. Lakhnech, and K. Stahl.
Abstracting WS1S systems to verify parameterized networks. In S. Graf and M. Schwartzbach, editors, _Tools and Algorithms for the Construction and Analysis of Systems (TACAS 2000)_, volume 1785 of _Lecture Notes in Computer Science_, pages 188–203. Springer-Verlag, 2000.
[10] J. Bern, C. Meinel, and A. Slobodová. Global rebuilding of BDDs – avoiding the memory requirement maxima. In P. Wolper, editor, _7th Workshop on Computer Aided Verification (CAV'95)_, volume 939 of _Lecture Notes in Computer Science_, pages 4–15. Springer-Verlag, 1995.
[11] A. Biere, A. Cimatti, M. Fujita, and Y. Zhu. Symbolic model checking using SAT procedures instead of BDDs. In _36th ACM/IEEE Design Automation Conference (DAC'99)_, 1999.
[12] Armin Biere. _Effiziente Modellprüfung des µ-Kalküls mit binären Entscheidungsdiagrammen_. PhD thesis, Univ. Karlsruhe, Germany, 1997.
[13] B. Boigelot and P. Godefroid. Symbolic verification of communication protocols with infinite state spaces using QDDs. In R. Alur and T. Henzinger, editors, _8th Workshop on Computer-Aided Verification (CAV'96)_, volume 1102 of _Lecture Notes in Computer Science_, pages 1–12. Springer-Verlag, 1996.
[14] G. Booch, J. Rumbaugh, and I. Jacobson. _Unified Modelling Language: User Guide_. Addison Wesley, 1999.
[15] M. C. Browne, E. M. Clarke, and O. Grumberg. Reasoning about networks with many identical finite-state processes. _Information and Computation_, 81:13–31, 1989.
[16] R. E. Bryant. Graph-based algorithms for boolean function manipulation. _IEEE Transactions on Computers_, C-35(8):677–691, 1986.
[17] R. E. Bryant. On the complexity of VLSI implementations and graph representations of boolean functions with application to integer multiplication. _IEEE Trans. on Computers_, 40(2):205–213, 1991.
[18] R. E. Bryant. Symbolic boolean manipulations with ordered binary decision diagrams. _ACM Computing Surveys_, 24(3):293–317, 1992.
[19] J. R. Büchi. On a decision method in restricted second-order arithmetics. In _International Congress on Logic, Method and Philosophy of Science_, pages 1–12. Stanford University Press, 1962.
[20] J. R. Burch, E. M. Clarke, K. L. McMillan, D. Dill, and L. J. Hwang. Symbolic model checking: 10^20 states and beyond. _Information and Computation_, 98(2):142–170, 1992.
[21] O. Burkart and J. Esparza. More infinite results. _Electronic Notes in Theoretical Computer Science_, 6, 1997. http://www.elsevier.nl/locate/entcs/volume6.html.
[22] Dominique Cansell, Dominique Méry, and Stephan Merz. Predicate diagrams for the verification of reactive systems. In _2nd Intl. Conf. on Integrated Formal Methods (IFM 2000)_, Lecture Notes in Computer Science, Dagstuhl, Germany, November 2000. Springer-Verlag. To appear.
[23] E. M. Clarke, T. Filkorn, and S. Jha. Exploiting symmetry in temporal logic model checking. In C. Courcoubetis, editor, _5th Workshop on Computer-Aided Verification (CAV'93)_, volume 697 of _Lecture Notes in Computer Science_, Elounda, Crete, 1993. Springer-Verlag.
[24] E. M. Clarke, O. Grumberg, and K. Hamaguchi. Another look at LTL model checking. _Formal Methods in System Design_, 10:47–71, 1997.
[25] Edmund M. Clarke and E. Allen Emerson. Synthesis of synchronization skeletons for branching time temporal logic. In _Workshop on Logic of Programs_, volume 131 of _Lecture Notes in Computer Science_, Yorktown Heights, N.Y., 1981. Springer-Verlag.
[26] Edmund M. Clarke, Orna Grumberg, and David E. Long. Model checking and abstraction.
_ACM Transactions on Programming Languages and Systems_, 16(5):1512–1542, September 1994.
[27] Edmund M. Clarke, Orna Grumberg, and Doron Peled. _Model Checking_. MIT Press, Cambridge, MA, 1999.
[28] Edmund M. Clarke and Holger Schlingloff. Model checking. In A. Voronkov, editor, _Handbook of Automated Deduction_. Elsevier, 2000. To appear.
[29] E.M. Clarke, E.A. Emerson, and A.P. Sistla. Automatic verification of finite-state concurrent systems using temporal logic specifications. _ACM Transactions on Programming Languages and Systems_, 8(2):244–263, 1986.
[30] E.M. Clarke, O. Grumberg, H. Hiraishi, S. Jha, D.E. Long, K.L. McMillan, and L.A. Ness. Verification of the Futurebus+ cache coherence protocol. In D. Agnew, L. Claesen, and R. Camposano, editors, _IFIP Conference on Computer Hardware Description Languages and their Applications_, pages 5–20, Ottawa, Canada, 1993. Elsevier Science Publishers B.V.
[31] R. Cleaveland and S. Sims. Generic tools for verifying concurrent systems. _Science of Computer Programming_, 2000. See also http://www.cs.sunysb.edu/˜cwb/.
[32] R. Cleaveland and B. Steffen. A linear-time model-checking algorithm for the alternation-free modal µ-calculus. _Formal Methods in System Design_, 2:121–147, 1993.
[33] P. Collette. An explanatory presentation of composition rules for assumption-commitment specifications. _Information Processing Letters_, 50(1):31–35, 1994.
[34] C. Courcoubetis, M. Vardi, P. Wolper, and M. Yannakakis. Memory-efficient algorithms for the verification of temporal properties. _Formal Methods in System Design_, 1:275–288, 1992.
[35] Patrick Cousot and Radhia Cousot. Abstract interpretation: A unified lattice model for static analysis of programs by construction or approximation of fixpoints. In _4th ACM Symposium on Principles of Programming Languages_, pages 238–252, Los Angeles, California, 1977. ACM Press.
[36] J.-M. Couvreur. On-the-fly verification of linear temporal logic. In J.M. Wing, J. Woodcock, and J. Davies, editors, _FM'99 – Formal Methods_, volume 1708 of _Lecture Notes in Computer Science_, pages 253–271, Toulouse, France, 1999. Springer-Verlag.
[37] Dennis Dams, Orna Grumberg, and Rob Gerth. Abstract interpretation of reactive systems: Abstractions preserving ∀CTL*, ∃CTL* and CTL*. In Ernst-Rüdiger Olderog, editor, _Programming Concepts, Methods, and Calculi (PROCOMET '94)_, pages 561–581, Amsterdam, 1994. North Holland/Elsevier.
[38] M. Daniele, F. Giunchiglia, and M. Vardi. Improved automata generation for linear temporal logic. In _Computer Aided Verification (CAV'99)_, volume 1633 of _Lecture Notes in Computer Science_, pages 249–260, Trento, Italy, 1999. Springer-Verlag.
[39] Luca de Alfaro, Zohar Manna, Henny B. Sipma, and Tomás Uribe. Visual verification of reactive systems. In Ed Brinksma, editor, _Tools and Algorithms for the Construction and Analysis of Systems (TACAS'97)_, volume 1217 of _Lecture Notes in Computer Science_, pages 334–350. Springer-Verlag, 1997.
[40] W.-P. de Roever, H. Langmaack, and A. Pnueli, editors. _Compositionality: The Significant Difference_, volume 1536 of _Lecture Notes in Computer Science_. Springer-Verlag, 1998.
[41] E. A. Emerson and J. Y. Halpern. "Sometimes" and "not never" revisited: on branching time vs. linear time. _Journal of the ACM_, 33:151–178, 1986.
[42] E. A. Emerson, C. S. Jutla, and A. P. Sistla. On model checking for fragments of µ-calculus. In C. Courcoubetis, editor, _5th Workshop on Computer-Aided Verification (CAV'93)_, volume 697 of _Lecture Notes in Computer Science_.
Springer-Verlag, 1993.
[43] E. A. Emerson and C. L. Lei. Modalities for model checking: Branching time strikes back. In _12th Symp. on Principles of Programming Languages (POPL'85)_, New Orleans, 1985. ACM Press.
[44] E. A. Emerson and C. L. Lei. Efficient model checking in fragments of the propositional µ-calculus. In _1st Symp. on Logic in Computer Science_, Boston, Mass., 1986. IEEE Press.
[45] E. Allen Emerson. _Handbook of Theoretical Computer Science_, chapter Temporal and modal logic, pages 997–1071. Elsevier Science Publishers B.V., 1990.
[46] E. Allen Emerson and Kedar S. Namjoshi. Automatic verification of parameterized synchronous systems. In R. Alur and T. Henzinger, editors, _8th International Conference on Computer Aided Verification (CAV'96)_, Lecture Notes in Computer Science. Springer-Verlag, 1996.
[47] R. Enders, T. Filkorn, and D. Taubner. Generating BDDs for symbolic model checking. _Distributed Computing_, 6:155–164, 1993.
[48] J. Esparza. Model checking using net unfoldings. _Science of Computer Programming_, 23:151–195, 1994.
[49] J. Esparza. Decidability of model-checking for infinite-state concurrent systems. _Acta Informatica_, 34:85–107, 1997.
[50] J. Esparza, A. Finkel, and R. Mayr. On the verification of broadcast protocols. In _14th IEEE Symposium on Logic in Computer Science_, pages 352–359, Trento, Italy, 1999. IEEE Press.
[51] E. Felt, G. York, R. Brayton, and A. S. Vincentelli. Dynamic variable reordering for BDD minimization. In _European Design Automation Conference_, pages 130–135, 1993.
[52] T. Firley, U. Goltz, M. Huhn, K. Diethers, and T. Gehrke. Timed sequence diagrams and tool-based analysis – a case study. In R. France and B. Rumpe, editors, _2nd Intl. Conference on the Unified Modelling Language (UML'99)_, volume 1723 of _Lecture Notes in Computer Science_, pages 645–660. Springer-Verlag, 1999.
[53] H. Fuji, G. Oomoto, and C. Hori. Interleaving based variable ordering methods for binary decision diagrams. In _Intl. Conf. on Computer Aided Design (ICCAD'93)_. IEEE Press, 1993.
[54] D. Gabbay, I. Hodkinson, and M. Reynolds. _Temporal Logic: Mathematical Foundations and Computational Aspects_, volume 1. Clarendon Press, Oxford, UK, 1994.
[55] S. M. German and A. P. Sistla. Reasoning about systems with many processes. _Journal of the ACM_, 39:675–735, 1992.
[56] R. Gerth, D. Peled, M. Vardi, and P. Wolper. Simple on-the-fly automatic verification of linear temporal logic. In _Protocol Specification, Testing, and Verification_, pages 3–18, Warsaw, Poland, 1995. Chapman & Hall.
[57] P. Godefroid and D. E. Long. Symbolic protocol verification with queue BDDs. In _11th Ann. IEEE Symp. on Logic in Computer Science (LICS'96)_, New Brunswick, NJ, 1996. IEEE Press.
[58] P. Godefroid and P. Wolper. A partial approach to model checking. _Information and Computation_, 110(2):305–326, 1994.
[59] Orna Grumberg and David E. Long. Model checking and modular verification. _ACM Transactions on Programming Languages and Systems_, 16(3):843–871, May 1994.
[60] David Harel and Amir Pnueli. On the development of reactive systems. In K. R. Apt, editor, _Logics and Models of Concurrent Systems_, volume F13 of _NATO ASI Series_, pages 477–498. Springer-Verlag, 1985.
[61] T. A. Henzinger, P.-H. Ho, and H. Wong-Toi. HyTech: A model checker for hybrid systems. _Software Tools for Technology Transfer_, 1:110–122, 1997.
[62] T. A. Henzinger, Z. Manna, and A. Pnueli. Temporal proof methodologies for timed transition systems. _Information and Computation_, 112:273–337, 1994.
[63] Thomas A.
Henzinger, Orna Kupferman, and Moshe Y. Vardi. A space-efficient on-the-fly algorithm for real-time model checking. In _7th International Conference on Concurrency Theory (CONCUR 1996)_, volume 1119 of _Lecture Notes in Computer Science_, pages 514–529. Springer-Verlag, 1996.
[64] Thomas A. Henzinger, Xavier Nicollin, Joseph Sifakis, and Sergio Yovine. Symbolic model checking for real-time systems. _Information and Computation_, 111:193–244, 1994.
[65] Gerard Holzmann. The Spin model checker. _IEEE Trans. on Software Engineering_, 23(5):279–295, May 1997.
[66] Gerard Holzmann. An analysis of bitstate hashing. _Formal Methods in System Design_, November 1998.
[67] Gerard Holzmann and Doron Peled. An improvement in formal verification. In _IFIP WG 6.1 Conference on Formal Description Techniques_, pages 197–214, Bern, Switzerland, 1994. Chapman & Hall.
[68] John E. Hopcroft and Jeffrey D. Ullman. _Introduction to Automata Theory, Languages, and Computation_. Addison-Wesley, Reading, Mass., 1979.
[69] Michael Huth and Mark D. Ryan. _Logic in Computer Science_. Cambridge University Press, Cambridge, U.K., 2000.
[70] C. N. Ip and D. Dill. Better verification through symmetry. In _11th Intl. Symp. on Computer Hardware Description Languages and their Applications_, pages 87–100. North Holland, 1993.
[71] C. N. Ip and D. Dill. Verifying systems with replicated components in Murphi. In _Intl. Conference on Computer-Aided Verification (CAV'96)_, Lecture Notes in Computer Science. Springer-Verlag, 1996.
[72] Bernhard Josko. Verifying the correctness of AADL modules using model checking. In J. W. de Bakker, W.-P. de Roever, and G. Rozenberg, editors, _Stepwise Refinement of Distributed Systems: Models, Formalisms, Correctness_, volume 430 of _Lecture Notes in Computer Science_, pages 386–400. Springer-Verlag, Berlin, 1989.
[73] Bernhard Josko. _Modular Specification and Verification of Reactive Systems_. PhD thesis, Univ. Oldenburg, Fachbereich Informatik, April 1993.
[74] H. W. Kamp. _Tense Logic and the Theory of Linear Order_. PhD thesis, Univ. of California at Los Angeles, 1968.
[75] Yonit Kesten and Amir Pnueli. Verifying liveness by augmented abstraction. In _Annual Conference of the European Association for Computer Science Logic (CSL'99)_, Lecture Notes in Computer Science, Madrid, 1999. Springer-Verlag.
[76] Nils Klarlund. Mona & Fido: The logic-automaton connection in practice. In _Computer Science Logic, CSL '97_, volume 1414 of LNCS, pages 311–326, Aarhus, Denmark, 1998.
[77] Dexter Kozen. Results on the propositional mu-calculus. _Theoretical Computer Science_, 27:333–354, 1983.
[78] Saul A. Kripke. Semantical considerations on modal logic. _Acta Philosophica Fennica_, 16:83–94, 1963.
[79] Fred Kröger. _Temporal Logic of Programs_, volume 8 of _EATCS Monographs on Theoretical Computer Science_. Springer-Verlag, Berlin, 1987.
[80] O. Kupferman, M. Vardi, and P. Wolper. An automata-theoretic approach to branching-time model checking. In _6th Intl. Conf. on Computer-Aided Verification (CAV'94)_, Lecture Notes in Computer Science. Springer-Verlag, 1994. Full version (1999) available at http://www.cs.rice.edu/˜vardi/papers/.
[81] O. Kupferman and M. Y. Vardi. Verification of fair transition systems. In R. Alur and T. Henzinger, editors, _8th Workshop on Computer-Aided Verification (CAV'96)_, volume 1102 of _Lecture Notes in Computer Science_, pages 372–382. Springer-Verlag, 1996.
[82] Orna Kupferman and Moshe Y. Vardi. Weak alternating automata are not so weak.
In _5th Israeli Symposium on Theory of Computing and Systems_, pages 147–158. IEEE Press, 1997.
[83] R. P. Kurshan and K. L. McMillan. A structural induction theorem for processes. In _8th Ann. ACM Symp. on Principles of Distributed Computing_. ACM Press, 1989.
[84] Leslie Lamport. 'Sometime' is sometimes 'not never'. In _Proc. 7th Ann. Symp. on Princ. of Prog. Lang. (POPL'80)_, pages 174–185. ACM SIGACT-SIGPLAN, January 1980.
[85] M. Lange, M. Leucker, T. Noll, and S. Tobies. Truth – a verification platform for concurrent systems. In _Tool Support for System Specification, Development, and Verification_, Advances in Computing Science. Springer-Verlag Wien New York, 1999.
[86] K. Larsen, P. Petterson, and W. Yi. Uppaal in a nutshell. _Software Tools for Technology Transfer_, 1, 1997.
[87] Orna Lichtenstein, Amir Pnueli, and Lenore Zuck. The glory of the past. In Rohit Parikh, editor, _Logics of Programs_, volume 193 of _Lecture Notes in Computer Science_, pages 196–218, Berlin, June 1985. Springer-Verlag.
[88] J. Lilius and I. P. Paltor. Formalising UML state machines for model checking. In R. France and B. Rumpe, editors, _UML'99 – Beyond the Standard_, volume 1723 of _Lecture Notes in Computer Science_. Springer-Verlag, 1999.
[89] Claire Loiseaux, Susanne Graf, Joseph Sifakis, Ahmed Bouajjani, and Saddek Bensalem. Property preserving abstractions for the verification of concurrent systems. _Formal Methods in System Design_, 6:11–44, 1995. A preliminary version appeared as Spectre technical report RTC40, Grenoble, France, 1993.
[90] D. E. Long. _Model Checking, Abstraction and Compositional Verification_. PhD thesis, CMU School of Computer Science, 1993. CMU-CS-93-178.
[91] Gavin Lowe. Breaking and fixing the Needham-Schroeder public key protocol using FDR. In _Tools and Algorithms for the Construction and Analysis of Systems (TACAS'96)_, volume 1055 of _Lecture Notes in Computer Science_, pages 147–166. Springer-Verlag, 1996.
[92] Z. Manna, A. Browne, H.B. Sipma, and T.E. Uribe. Visual abstractions for temporal verification. In A. Haeberer, editor, _AMAST'98_, volume 1548 of _Lecture Notes in Computer Science_, pages 28–41. Springer-Verlag, 1998.
[93] Zohar Manna and Amir Pnueli. A hierarchy of temporal properties. In _9th ACM Symposium on Principles of Distributed Computing_, pages 377–408. ACM, 1990.
[94] Zohar Manna and Amir Pnueli. _The Temporal Logic of Reactive and Concurrent Systems—Specification_. Springer-Verlag, New York, 1992.
[95] Zohar Manna and Amir Pnueli. _The Temporal Logic of Reactive and Concurrent Systems—Safety Properties_. Springer-Verlag, New York, 1995.
[96] Kenneth L. McMillan. A compositional rule for hardware design refinement. In O. Grumberg, editor, _9th International Conference on Computer Aided Verification (CAV'97)_, volume 1254 of _Lecture Notes in Computer Science_, pages 24–35, Haifa, Israel, 1997. Springer-Verlag.
[97] K.L. McMillan. _Symbolic Model Checking_. Kluwer Academic Publishers, 1993.
[98] R. McNaughton and S. Papert. _Counter-Free Automata_. MIT Press, Cambridge, Mass., 1971.
[99] Stephan Merz. Rules for abstraction. In R. K. Shyamasundar and K. Ueda, editors, _Advances in Computing Science—ASIAN'97_, volume 1345 of _Lecture Notes in Computer Science_, pages 32–45, Kathmandu, Nepal, December 1997. Springer-Verlag.
[100] Faron Moller. Infinite results. In U. Montanari and V. Sassone, editors, _7th International Conference on Concurrency Theory (CONCUR'96)_, volume 1119 of _Lecture Notes in Computer Science_, pages 195–216, Pisa, Italy, 1996.
Springer-Verlag.
[101] D. E. Muller. Infinite sequences and finite machines. In _Switching Circuit Theory and Logical Design: Fourth Annual Symposium_, pages 3–16, New York, 1963. IEEE Press.
[102] D. E. Muller, A. Saoudi, and P. E. Schupp. Alternating automata, the weak monadic theory of the tree and its complexity. In _13th ICALP_, volume 226 of _Lecture Notes in Computer Science_, pages 275–283. Springer-Verlag, 1986.
[103] D.E. Muller, A. Saoudi, and P.E. Schupp. Weak alternating automata give a simple explanation of why most temporal and dynamic logics are decidable in exponential time. In _3rd IEEE Symposium on Logic in Computer Science_, pages 422–427. IEEE Press, 1988.
[104] Roger Needham and Michael Schroeder. Using encryption for authentication in large networks of computers. _Communications of the ACM_, 21(12):993–999, 1978.
[105] D. M. Park. Finiteness is mu-ineffable. Theory of Computation Report 3, University of Warwick, 1974.
[106] Lawrence C. Paulson. Proving security protocols correct. In _14th IEEE Symposium on Logic in Computer Science_, pages 370–383, Trento, Italy, 1999. IEEE Press.
[107] D. Peled. Combining partial order reductions with on-the-fly model-checking. _Formal Methods in System Design_, 8(1):39–64, 1996.
[108] W. Penczek, R. Gerth, and R. Kuiper. Partial order reductions preserving simulations. Submitted for publication, 1999.
[109] Amir Pnueli. In transition from global to modular temporal reasoning about programs. In K. R. Apt, editor, _Logics and Models of Concurrent Systems_, volume F13 of _ASI_, pages 123–144. Springer-Verlag, Berlin, 1985.
[110] M. O. Rabin. Decidability of second-order theories and automata on infinite trees. _Transactions of the American Mathematical Society_, 141:1–35, 1969.
[111] Shmuel Safra. On the complexity of ω-automata. In _29th IEEE Symposium on Foundations of Computer Science_, pages 319–327. IEEE Press, 1988.
[112] Klaus Schneider. Yet another look at LTL model checking. In _IFIP Advanced Research Working Conference on Correct Hardware Design and Verification Methods (CHARME'99)_, Lecture Notes in Computer Science, Bad Herrenalb, Germany, 1999.
[113] H.B. Sipma, T.E. Uribe, and Z. Manna. Deductive model checking. In _8th International Conference on Computer-Aided Verification_, volume 1102 of _Lecture Notes in Computer Science_, pages 208–219, New Brunswick, N.J., 1996. Springer-Verlag.
[114] A.P. Sistla and E.M. Clarke. The complexity of propositional linear temporal logic. _Journal of the ACM_, 32:733–749, 1985.
[115] G. Stålmarck. A system for determining propositional logic theorems by applying values and rules to triplets that are generated from a formula. Swedish Patent No. 467076 (1992), US Patent No. 5 276 897 (1994), European Patent No. 0404 454 (1995).
[116] P. H. Starke. Reachability analysis of Petri nets using symmetries. _Syst. Anal. Model. Simul._, 8:293–303, 1991.
[117] Colin Stirling. _Handbook of Logic in Computer Science_, volume 2, chapter Modal and temporal logics, pages 477–563. Oxford Science Publications, Clarendon Press, Oxford, 1992.
[118] Colin Stirling. Bisimulation, model checking, and other games. Mathfit instructional meeting on games and computation, 1997. Available at http://www.dcs.ed.ac.uk/home/cps/.
[119] R. E. Tarjan. Depth first search and linear graph algorithms. _SIAM Journal of Computing_, 1:146–160, 1972.
[120] Wolfgang Thomas. Automata on infinite objects.
In Jan van Leeuwen, editor, _Handbook of Theoretical Computer Science_, volume B: Formal Models and Semantics, pages 133–194. Elsevier, Amsterdam, 1990.
[121] Wolfgang Thomas. Languages, automata, and logic. In G. Rozenberg and A. Salomaa, editors, _Handbook of Formal Language Theory_, volume III, pages 389–455. Springer-Verlag, New York, 1997.
[122] Wolfgang Thomas. Complementation of Büchi automata revisited. In J. Karhumäki, editor, _Jewels are Forever, Contributions on Theoretical Computer Science in Honor of Arto Salomaa_, pages 109–122. Springer-Verlag, 2000.
[123] A. Valmari. A stubborn attack on state explosion. In _2nd International Workshop on Computer Aided Verification_, volume 531 of _Lecture Notes in Computer Science_, pages 156–165, Rutgers, June 1990. Springer-Verlag.
[124] A. Valmari. The state explosion problem. In _Lectures on Petri Nets I: Basic Models_, volume 1491 of _Lecture Notes in Computer Science_, pages 429–528. Springer-Verlag, 1998.
[125] Moshe Y. Vardi. Alternating automata and program verification. In _Computer Science Today_, volume 1000 of _Lecture Notes in Computer Science_, pages 471–485. Springer-Verlag, 1995.
[126] M.Y. Vardi and P. Wolper. Reasoning about infinite computations. _Information and Computation_, 115(1):1–37, 1994.
[127] P. Wolper and V. Lovinfosse. Verifying properties of large sets of processes with network invariants. In J. Sifakis, editor, _Intl. Workshop on Automatic Verification Methods for Finite State Systems_, volume 407 of _Lecture Notes in Computer Science_. Springer-Verlag, 1989.
[128] Pierre Wolper. Temporal logic can be more expressive. _Information and Control_, 56:72–93, 1983.
[129] S. Yovine. Kronos: A verification tool for real-time systems. _Software Tools for Technology Transfer_, 1, 1997.
[130] H. Zhang. SATO: An efficient propositional prover. In _Intl. Conf. on Automated Deduction (CADE'97)_, number 1249 in _Lecture Notes in Computer Science_, pages 272–275. Springer-Verlag, 1997.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/3-540-45510-8_1?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/3-540-45510-8_1, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "http://www.loria.fr/~merz/papers/mc-tutorial.pdf" }
2000
[ "JournalArticle", "Review" ]
true
2000-06-19T00:00:00
[]
27,306
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/018993a98516494126ec1497e481458996517267
[ "Computer Science" ]
0.842925
Pando: Personal Volunteer Computing in Browsers
018993a98516494126ec1497e481458996517267
International Middleware Conference
[ { "authorId": "1948303", "name": "Erick Lavoie" }, { "authorId": "1699786", "name": "L. Hendren" }, { "authorId": "145607029", "name": "F. Desprez" }, { "authorId": "145250139", "name": "M. Correia" } ]
{ "alternate_issns": null, "alternate_names": [ "Middleware", "ACM/IFIP/USENIX int conf Middlew", "ACM/IFIP/USENIX international conference on Middleware", "Int Middlew Conf" ], "alternate_urls": null, "id": "911e7332-8ea8-4e9d-bc20-5572a2523f92", "issn": null, "name": "International Middleware Conference", "type": "conference", "url": "https://dl.acm.org/conference/middleware/proceedings" }
The large penetration and continued growth in ownership of personal electronic devices represents a freely available and largely untapped source of computing power. To leverage those, we present Pando, a new volunteer computing tool based on a declarative concurrent programming model and implemented using JavaScript, WebRTC, and WebSockets. This tool enables a dynamically varying number of failure-prone personal devices contributed by volunteers to parallelize the application of a function on a stream of values, by using the devices' browsers. We show that Pando can provide throughput improvements compared to a single personal device, on a variety of compute-bound applications including animation rendering and image processing. We also show the flexibility of our approach by deploying Pando on personal devices connected over a local network, on Grid5000, a French-wide computing grid in a virtual private network, and seven PlanetLab nodes distributed in a wide area network over Europe.
## Pando: Personal Volunteer Computing in Browsers

### Erick Lavoie, Laurie Hendren
McGill University, Montreal, Canada
erick.lavoie@mail.mcgill.ca, hendren@cs.mcgill.ca

### Frederic Desprez
INRIA Grenoble Rhône-Alpes, Grenoble, France
Frederic.Desprez@inria.fr

### Miguel Correia
INESC-ID Lisboa, Portugal
miguel.p.correia@tecnico.ulisboa.pt

### Abstract

The large penetration and continued growth in ownership of personal electronic devices represents a freely available and largely untapped source of computing power. To leverage those, we present Pando, a new volunteer computing tool based on a declarative concurrent programming model and implemented using JavaScript, WebRTC, and WebSockets. This tool enables a dynamically varying number of failure-prone personal devices contributed by volunteers to parallelize the application of a function on a stream of values, by using the devices' browsers. We show that Pando can provide throughput improvements compared to a single personal device, on a variety of compute-bound applications including animation rendering and image processing. We also show the flexibility of our approach by deploying Pando on personal devices connected over a local network, on Grid5000, a French-wide computing grid in a virtual private network, and seven PlanetLab nodes distributed in a wide area network over Europe.

**_CCS Concepts_** • Computing methodologies → Distributed computing methodologies; • Software and its engineering → Development frameworks and environments;

**_Keywords_** Volunteer Computing, Personal Volunteer Computing, Web Technologies, JavaScript, WebRTC, WebSocket

### 1 Introduction

More than 1.5 billion smartphones were sold in the world in 2018 [25], and the computing power of the highest-end devices today rivals that of desktops and laptops [52]. They collectively represent an _immense source of largely untapped computing power_.

While the latest developments in distributed computing have had tremendous impact in industry and elsewhere, the major paradigms that sustained those developments have led to designs with barriers that limit the utilization of personal devices for distributed computing [70]: access to cloud platforms requires financial instruments, such as a bank account or a credit card; access to grid platforms requires administrative permissions; and the deployment of the most popular volunteer computing platform, BOINC [29], requires a significant technical effort because it has been designed for long-running large-scale research projects with contributors that are anonymous and potentially malicious. In a sense, the underlying problem is socio-technical: _we do not have technical solutions that can leverage, in a seamless way, the abundance of computing power we collectively already possess_.

Recently, we have proposed personal volunteer computing [70] to address this problem. In contrast to volunteer computing, the approach focuses on the development of personal tools, for personal projects, that leverage the computing capabilities of personal devices owned by users and their friends, family, and colleagues. However, a comprehensive description of an example tool that could do so had yet to be published.

In this paper, we therefore present Pando, a new tool that can leverage a dynamically varying number of failure-prone personal devices contributed by volunteers, to parallelize the application of a function on a stream of values, by using the devices' browsers.
Pando is based on a declarative concurrent programming paradigm [99] which greatly simplifies reasoning about concurrent processes: it abstracts the non-determinism in the execution by making it non-observable. This paradigm has already enjoyed great practical successes with the popular MapReduce [38] and Unix pipelining [56] programming models. We show for the first time that it is also effective in personal volunteer computing tools. Pando abstracts distribution but otherwise relies on existing toolchains: programmers define the function to distribute and the modules it depends on following the current JavaScript programming idioms, and users can easily combine Pando in Unix pipelines. Deployment on volunteers' devices simply requires opening, in their browser, a URL provided by Pando on startup. Devices may join or quit at any time and Pando will transparently handle the changes.

We present both the high-level design principles that guided the design and a concrete working implementation, itself organized around the pull-stream design pattern and based on JavaScript [23], WebSockets [6], and WebRTC [18] to enable its execution inside browsers. The implementation of Pando is open source [65]. Compared to other volunteer computing tools, we conceived Pando as a personal tool for quick and easy deployment rather than as a long-running server process. We also avoided the use of a database for tracking the status of inputs and leveraged the heartbeat mechanism of WebSockets and WebRTC to simplify the implementation of fault-tolerance.

The programming model of Pando corresponds to a streaming version of the functional map operation that supports a dynamic number of devices, without an a priori limit on their number. It reads new inputs only when computing resources are available for processing and tolerates failures in which devices suddenly disconnect, either intentionally or by crashing. To maximize throughput, faster devices receive more inputs and only a single copy of an input is submitted for processing at a time. Those properties are encapsulated in a reusable abstraction, StreamLender, that is independent of the communication protocols and input-output libraries we used for the implementation. StreamLender requires only higher-order functions for its implementation, making it portable to many popular programming languages of today. We describe the key aspects of the implementation of StreamLender. We also provide the JavaScript implementation used by Pando as a reusable JavaScript library [67]. To the best of our knowledge, StreamLender is the first articulation of those properties in a reusable abstraction for distributed stream processing.

We have applied Pando to seven compute-bound applications, including crypto-currency mining, crowd computing, machine learning hyper-parameter optimization, and open data processing in combination with other peer-to-peer data distribution protocols. This effort has highlighted the suitability of Pando's programming model to common processing pipelines but also the possibility of integrating Pando as a component in applications with more complex feedback loops, e.g. when performing synchronous parallel search or handling failures in external data distribution protocols.
We have deployed Pando on personal devices in a local-area network using our personal collection of devices, on Grid5000 [31], a French-wide computing grid that groups multiple clusters of computing nodes in a virtual private network (VPN), similar to the computing resources available to a large organization, as well as on seven PlanetLab computing nodes contributed by various organizations throughout Europe, connected over a wide-area network (WAN). By batching inputs for distribution, we could hide the network latency and achieve overall throughput higher than on a single personal device, regardless of the position of the computing devices in the network. This shows that Pando can take advantage of both local and remote devices. To the best of our knowledge, it is the first time a tool for volunteer computing has been shown to be easily deployable in all three settings. Moreover, the comparison between the performance of recent personal devices and high-end servers shows that 2-5 cores on a personal device can outperform a core on a high-end server, highlighting the competitive opportunity offered by personal devices contributed by volunteers.

The rest of this paper is organized as follows. We present the overall design of Pando in Section 2. We provide the key properties and behaviour of the StreamLender abstraction in Section 3. We present the different applications in Section 4 and evaluate the benefits and limitations of parallelizing them in real-world deployments in Section 5. We compare the specificities of our design to related work in Section 6. We conclude with a brief recapitulation of the paper and future work in Section 7.

### 2 Pando

Pando is the first tool explicitly designed for the purpose of personal volunteer computing. We first explain how to use it and its concrete benefits using one of our supported applications (Section 2.1). We then articulate the design principles that enable those benefits (Section 2.2). We continue with a more detailed explanation of Pando's programming model (Section 2.3) and finally present an overview of how it is implemented in a concrete system (Section 2.4).

**2.1** **Usage Example**

Suppose a user is working on a personal project involving an animation, as shown in Figure 1, and the rendering uses raytracing [103], which is computationally expensive. To accelerate the rendering of the entire animation, they want to parallelize the rendering of individual frames, while still obtaining them in the correct order.

**Figure 1. Rotation animation around a 3D scene.**

If this were a professional project, our user could rely on professional solutions [19, 24]. However, these are often too expensive for personal projects and do not easily leverage the computing power of devices users already own. Instead, they can use Pando through a simple programming interface and a quick deployment solution.

**2.1.1** **Programming Interface**

Pando's distribution of computation is organized around a _processing function_, which is applied to a stream of input values to produce a stream of outputs. In this particular example, the processing function performs the raytracing of the scene from a particular _camera position_ and outputs an array of pixels. The animation consists of a sequence of positions of the camera rotating around the scene.

Pando's implementation parallelizes the execution of code in JavaScript by using the Web browsers of personal devices.
To leverage those capabilities, a user writes a minimal amount of glue code to make the processing function compatible with Pando's interface, as illustrated in Figure 2. In this example, the raytracing operation is provided by an external library, taken unmodified from the Web, which is first imported. Then a processing function using the required library is exposed on the module with the '/pando/1.0.0' property, which indicates it is intended for the first version of the Pando protocol. The function takes two inputs: cameraPos, the camera position for the current frame, and cb, a callback to return the result. The body of the function first converts the camera position, which was received as a string, into a float value, then renders the scene. The pixels of the rendered image are then saved in a buffer, compressed with gzip, and output as a base64 encoded string [2], which simplifies its transmission on the network.¹ The result is then returned to Pando through the callback cb. In case an error occurred in any of those steps, the error is caught and returned through the same callback.

¹ Those last three operations take a negligible amount of time compared to rendering.

    // Import existing function
    var render = require('raytracer')
    // Import compressing module
    var zlib = require('zlib')
    module.exports['/pando/1.0.0'] = function (cameraPos, cb) {
      try {
        var pixels = render(parseFloat(cameraPos))
        cb(null, zlib.gzipSync(new Buffer(pixels)).toString('base64'))
      } catch (err) {
        cb(err)
      }
    }

**Figure 2. JavaScript programming interface example for rendering with raytracing.**

The glue code should then be saved in a file, render.js in this example, and all library dependencies should be accessible using the Node Package Manager (NPM) conventions [21], typically in a node_modules sub-directory. Pando will automatically bundle all the dependencies on startup and adapt the code for the browser context by internally using browserify [13].

Pando is compatible with the Unix standard process interface, i.e. it can either receive its inputs on the standard input or as command-line arguments, and it produces outputs on the standard output. In Figure 3, we connect Pando with other tools using bash scripting. The camera positions are provided as strings on the standard input by generate-angles.js, the rendered images are produced on the standard output as strings by Pando, and the assembly of the frames into a GIF animation is done by gif-encoder.js. All tools in the sequence are connected through Unix streams using the pipe operator ('|'). Pando could also be scripted from any other programming environment that supports the creation of Unix processes; the creation of inputs and the post-processing of outputs therefore need not be in JavaScript.

    $ ./generate-angles.js | pando render.js --stdin | ./gif-encoder.js
    Serving volunteer code at http://10.10.14.119:5000

**Figure 3. Unix programming interface example for rendering inputs and processing outputs. After starting, Pando lists the URL necessary for deployment on the standard error.**

**2.1.2** **Deployment**

A user deploys Pando by starting it on the command-line², as illustrated in Figure 3. Then they should wait for URL messages to appear. When displayed, those messages indicate that Pando is ready for other devices to join. A user then opens the URL in the browser of their personal devices. Upon joining, additional devices will process individual frames in parallel.

² After installing, e.g.: npm install --global pando-computing [64].
In one possible example execution, illustrated in Figure 4, a tablet joins after the volunteer URL has been opened, then renders an image, then a faster phone joins, also renders an image, then the tablet crashes, and the phone takes over for the missing image. Communications happen over a choice of WebRTC [18], a recent peer-to-peer protocol for browsers, or WebSocket [6].

A user can invite friends to add their devices, even if they are outside the local network. To do so, the user deploys a small micro-server we built for Pando [66] on a platform that provides a public IP address, such as Heroku [20]. Being publicly accessible, the URL can then be shared with friends on existing social media. After opening the URL, a WebRTC connection will directly connect joining devices.

As illustrated in this deployment example, Pando _dynamically scaled_ to accommodate the number of participating devices and _gracefully tolerated failures_ with no particular programming effort from the user beyond specifying a function to process a single value. Moreover, the user did not need to (1) buy new devices, (2) create an account or obtain administrative permissions, (3) use financial instruments, (4) accommodate device specificities, or (5) wait for resources to be freed. The user could also (1) combine Pando with existing Unix tools, (2) use social media to request help, and (3) know their data has only been shared between trusted devices.

**2.2** **Design Principles**

The previous usage example provided significant benefits because we designed Pando around the following design principles (DPs), which we derived from the limitations of previous approaches [70].

_Specific deployment (DP1)_: the deployment of the tool that connects the different volunteers is specific to: (1) a single project, (2) a single known user with an existing social presence, either through the contacts of volunteers or an identity in a social platform, and (3) the lifetime of the corresponding tasks, after which it shuts down.

_Compatible with a wide variety of existing personal devices (DP2)_: the tool should leverage desktops, laptops, tablets, phones, embedded devices, and personal appliances that people already own.

_Easy to program (DP3)_: the implementation of tasks should be done with a minimum of programming effort for use in a distributed setting. Ideally, it should be as easy to program in a distributed setting as in a local one.

_Quick to deploy (DP4)_: the tool should require little installation effort, should start processing quickly after launch, and then should dynamically scale up to benefit from help obtained from friends' devices.

_Composable and modular (DP5)_: the tool should focus on coordinating contributing volunteers' devices but otherwise should rely on other tools and technologies for the rest of the needs of users. The core abstractions used in particular tools should be applicable to other uses. Tools should also combine with high-performance libraries, when available, to leverage the latest results of parallelism research without making the tools themselves more complicated.

**2.3** **Programming Model**

In effect, Pando's programming model corresponds to a streaming version of the functional map operation: Pando applies a function f on a series of input values xᵢ to obtain the series of results f(xᵢ). Its implementation is free to process inputs in any order but outputs results in the order of their corresponding inputs.
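To make these semantics concrete, the following minimal JavaScript sketch (our illustration, not Pando's implementation) applies an asynchronous function f to every input, lets completions arrive in any order, and releases the results in input order:

```js
// Minimal sketch of the ordered streaming map semantics: inputs may
// finish in any order, but results are released in input order.
function orderedMap (inputs, f, onResult) {
  var results = new Array(inputs.length)
  var done = new Array(inputs.length).fill(false)
  var next = 0 // index of the next result to release
  inputs.forEach(function (x, i) {
    f(x, function (err, y) {
      if (err) return console.error(err)
      results[i] = y
      done[i] = true
      while (next < inputs.length && done[next]) {
        onResult(results[next++]) // preserve input order
      }
    })
  })
}

// Example: completion order is random, output order is not.
orderedMap([1, 2, 3, 4], function (x, cb) {
  setTimeout(function () { cb(null, x * x) }, Math.random() * 100)
}, console.log) // always prints 1, 4, 9, 16
```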
We chose a streaming programming model because it is simple to program (DP3) yet powerful enough to coordinate the usage of multiple devices in parallel (DP2). The reason is that it belongs to the declarative concurrency paradigm [99], which abstracts the _non-determinism_ of executions by making it non-observable to the programmer. In other words, a declarative concurrent program outputs the same result regardless of the order in which the various threads that compose the execution complete their tasks. That makes Pando as simple to program in a sequential setting with a single participating processor as in a parallel setting with dozens. While it is implied by the definition of the map operation, it is worth noting that the ordering of outputs is important to preserve the declarative concurrency property; otherwise the relative speed of processors could influence the order of the results and make the non-determinism observable. Note also that an implementation of f may have side-effects, such as pulling data and transferring back results to a server, while maintaining the benefits of declarative concurrency. In this case, however, it is the responsibility of the programmer to ensure that the order of side-effects does not matter.

We initially chose the streaming map programming model because it fits more problems than the bag-of-tasks model of typical volunteer computing problems, which usually have independent inputs with no ordering requirement. Some applications however, such as the sequence of images that compose the animation of our previous example (Section 2.1), do require a particular order. Problems with unordered inputs can be reduced to a streaming version simply by incrementally traversing the values in an arbitrary order, making the streaming model more general. The streaming version also enables working with an infinite number of values and applications requiring feedback loops (Section 4).

We also chose a number of additional distributed properties for Pando to make it easy to program (DP3) and fast to deploy (DP4).

**Figure 4. Deployment example:** (a) initial state; (b) a tablet joined; (c) tablet rendered x1; (d) a phone joined; (e) phone rendered x3; (f) tablet crashed; (g) phone rendered x2, processing is over. (Diagrams omitted.)

First, participating devices may join dynamically, at any time during execution. Pando's computing power will grow automatically. This removes the overhead of registering computing resources in advance and simplifies scaling for quick deployment.

Second, the potential number of participating devices is _unbounded_. Pando strives to provide the illusion of infinite scalability so its actual performance grows automatically as users adopt new devices with more capabilities.

Third, Pando is also _lazy_: it reads inputs only when computing resources become available. This adjusts the flow of values to the available computing power to avoid overloading Pando's memory with pending values. It also makes the implementation compatible with infinite streams with no additional effort. Users get support for laziness with no additional programming effort.

Last, Pando also tolerates failures of participating devices, making those failures transparent to the programmer.
We chose a crash-stop failure mode³, in which participating devices will always faithfully carry out their assigned task without deviating from their prescribed behaviour until they either suddenly crash or disconnect. This model corresponds to failures in which a browser tab that executes computations is suddenly closed, or to a loss of network connectivity. In the presence of such failures, Pando guarantees liveness: once an input xᵢ has been read, if there are active participating devices, Pando will eventually provide f(xᵢ). The crash-stop failures of participating devices can be detected because we assume a partially synchronous execution⁴: most of the time, messages will be delivered within a specified time bound. This corresponds to the ability of communication channels such as TCP [1] and WebRTC [18] to suspect failures by failing to receive the acknowledgment of a heartbeat message within a time bound.

³ Failure modes can range from crash-stop, in which a process follows its instructions then may crash and stop sending messages forever, through crash-recovery, in which a process may fail then recover and try participating again, to byzantine, in which a process may deviate arbitrarily from its instructions, including intentionally sending messages to hamper progress.

⁴ Timing assumptions may range from fully synchronous, in which there is an upper time bound on message delivery, through partially synchronous [42], in which a time bound on delivery applies only eventually, after an unknown delay, to asynchronous, in which there is no time bound on delivery.

In terms of performance goals, we decided to focus on maximizing throughput with the following two additional properties.

Pando distributes values to participating devices conservatively: a value is sent to at most one device for processing. The device will either produce a result or will crash, in which case the value will be sent to another device. This ensures participating devices process a maximum number of values simultaneously.

Moreover, the rate at which values are submitted to participating devices adapts to their processing speed. Devices with a faster processing speed will receive more values to process, maximizing resource utilization.

This combination of programming model properties, summarized in Table 1, provides a powerful yet easy-to-use programming model, as shown by the breadth of applications supported (Section 4).

- **Streaming Map**: x₁, x₂, … → f(x₁), f(x₂), …
- **Ordered**: outputs provided in order.
- **Dynamic**: new devices may join at any time.
- **Unbounded**: no a priori limit on the number of participants.
- **Lazy**: inputs read when resources are available.
- **Fault-tolerant**: crash-stop failures are tolerated.
- **Conservative**: a single copy submitted at a time.
- **Adaptive**: faster devices receive more inputs.

**Table 1. Summary of the programming model properties.**

**2.4** **Implementation Overview**

Our implementation was first based on our choice between available Web technologies (Section 2.4.1). We then organized it around a declarative concurrent paradigm to simplify both its usage and implementation effort (Section 2.4.2). We finally designed a reusable architecture by decomposing it into modules and communication technologies (Section 2.4.3).

**2.4.1** **Technology Choices**

We based our implementation on Web technologies for a number of reasons. First, they are compatible with a wide range of personal devices, from smartphones and embedded devices to tablets, laptops, and desktop computers (DP2). Second, virtual machines in modern browsers execute numerical applications in JavaScript at a speed within a factor of 3 of equivalent numerical code written in C [52, 57]. A large variety of native applications, as represented by the SPEC CPU2006 and CPU2017 benchmarks and originally written in C for Unix systems, can also be executed in browsers supporting WebAssembly [50] without modification to the original source code by using Browsix-WASM [54]: the applications then run with an average slowdown of only 45% to 55% and a peak slowdown of 2.5x compared to a native execution. In either case, the level of performance is sufficiently close to C to benefit from executing tasks inside multiple parallel Web pages. Third, browsers also provide a security sandbox that prevents code executing within a web page from tampering with the host operating system. Fourth, WebRTC
First, they are compatible with a wide range of personal devices, from smartphones and embedded devices to tablets, laptops, and desktop computers (DP2). Second, virtual machines in modern browsers execute numerical applications in JavaScript at a speed within a factor of 3 of equivalent numerical code written in C [52, 57]. A large variety of native applications, as represented by the SPEC CPU2006 and CPU2017 benchmarks and originally written in C for Unix systems, can also be executed in browsers supporting WebAssembly [50] without modification to the original source code by using Browsix-WASM [54]: the applications then run with an average slowdown of only 45% to 55% and a peak slowdown of 2.5x compared to a native execution. In either case, the level of performance is sufficiently close to C to benefit from executing tasks inside multiple parallel Web pages. Third, browsers also provide a security sandbox that prevents code executing within a web page from tampering with the host operating system. Fourth, WebRTC [18] enables direct communication between browsers, in many cases even in the presence of Network Address Translation (NAT), which removes the need for a server to relay all communications between the tool and the volunteers' devices. Fifth, links shared on social media platforms enable their users to quickly mobilize their social networks. Sixth, both WebSocket [6] and WebRTC [18] provide heartbeats to detect disconnections.

**2.4.2** **Declarative Concurrency With Pull-Streams**

Pando provides a declarative concurrent abstraction [99] of the parallel execution of the different participating processors (Section 2.3). Mainstream languages, such as JavaScript, have not yet integrated features that make that style of programming widely accessible. We therefore instead based our design and implementation on the pull-stream design pattern [96], a functional code pattern that enables streaming modules to be built by following a simple callback protocol. It only requires support for higher-order functions from the base language. Implementations of abstractions built by following the pattern should therefore be straightforward to port to many programming languages of today.

The pull-stream design pattern was originally proposed by Dominic Tarr [96] as a simpler alternative to Node.js streams, which were plagued with design issues that had to be maintained for backward compatibility. A community has grown around the pattern and more than a hundred modules have been contributed [15].

Perhaps the simplest example of pull-stream modules is a source that lazily counts from 1 to n, connected to a sink that consumes all values and then stops, as illustrated in Figure 5. The callback protocol essentially consists of a request followed by an answer. The request may be used to ask for a value, abort the stream normally, or fail because of an error. Symmetrically, the answer may then produce a value, signify the end of the stream, or stop because of an error. A module may also both consume and produce values, in which case it can be used between a source and a sink. This is illustrated in Figure 6.
```javascript
function source (n) {
  var i = 1
  return function output (abort, cb) {
    if (abort) return cb(abort, undefined)
    else if (i <= n) return cb(false, i++)
    else return cb(true, undefined)
  }
}

function sink (request) {
  request(false, function answer (done, v) {
    if (done) return
    else request(false, answer)
  })
}

sink(source(10))

var pull = require('pull-stream')
pull(source(10), sink) // equivalent to sink(source(10)) above
```

**Figure 5. Pull-stream example.**

While the pattern does not simplify the task of implementing pull-stream modules, once implemented, the modules provide clear semantics and are easy to combine because they can provide declarative concurrent abstractions. Using the pull-stream design pattern therefore makes the rest of the implementation of Pando easier.

**Figure 6. Pull-stream design pattern:** the callback protocol on top (1: ask/abort/fail upstream; 2: value/done/err downstream) and a pipeline of composable modules (source, transformer(s), sink) at the bottom.

**2.4.3** **Architecture**

The core modules of Pando and the way they are connected are illustrated in Figure 7. They work together to implement a distributed map that processes a stream of values xi with a function f. Our implementation uses Node.js but could also work as a hosted Web application. Deployment consists of executing the tool on the command line, which starts the Master process. HTTP connections from volunteers' devices may then be made directly to the Master, if on the same local area network (not shown), or through a Public Server, if direct connectivity is not possible. The HTTP connection is used to obtain the Worker code, including the f function, and eventually establish either a WebSocket [6] or WebRTC [18] connection.

The bootstrap of the WebRTC connection, which requires signalling of possible connection endpoints between peers, is done through a Public Server using a separate WebSocket connection. That connection closes after the WebRTC connection is established. Since signalling requires little resources, the Public Server could be executed on a small personal server such as a Raspberry Pi board [22] or the free tier of a cloud such as Heroku [20].

The pull-stream abstractions we designed and reused are shown as modules within the different processes, respectively in white and grey. The core coordination is performed by our novel StreamLender abstraction (Section 3), which creates multiple concurrent bi-directional sub-streams, one for each worker. A sub-stream continuously borrows values from the input of StreamLender and returns results that are eventually produced on its output. The sub-streams are dynamically created as Workers join. We use existing libraries that expose WebRTC and WebSocket channels as pull-streams. Since their implementation eagerly reads all available values on the sending side, we bound the total number of values that can be borrowed using our new Limiter module: initially, a bounded number of inputs is let through until the limit is reached; then a new input is allowed for each new result that comes in. With a large enough limit, data transfers in both directions therefore happen in parallel with the computations and can hide transmission latency. The limit can be parameterized using an argument passed to Pando on startup.
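The flow-control idea behind the Limiter can be sketched as follows; this is an illustrative approximation, not the source of the actual pull-limit module, and `makeLimiter` is a name we introduce:

```javascript
// Illustrative sketch of the Limiter's flow control: at most `limit`
// values are in flight; each result that comes back lets one more
// input through.
function makeLimiter (limit, sendInput) {
  var inFlight = 0
  var pending = []
  function maybeSend () {
    while (inFlight < limit && pending.length > 0) {
      inFlight++
      sendInput(pending.shift())
    }
  }
  return {
    input: function (x) { pending.push(x); maybeSend() },   // new value offered
    onResult: function () { inFlight--; maybeSend() }       // a result came back
  }
}

// Example: at most 2 values in flight at any time.
var limiter = makeLimiter(2, function (x) { console.log('sent', x) })
;[1, 2, 3, 4].forEach(function (x) { limiter.input(x) })    // sends 1 and 2
limiter.onResult()                                          // sends 3
```

With a limit of 2, one value can be in transit while another is being processed, which mirrors the batch sizes used in the evaluation (Section 5).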
The actual processing of values is done inside Workers using the existing AsyncMap [15] module, which applies the function f on the different inputs.

Pando trivially enables parallel processing on multicore architectures on a single machine, while enabling dynamic scaling up to other devices if necessary, making the tool useful in many contexts. Our design should also work with other technology choices, which could be mandated because users require specific libraries and technologies that are not available for the Web yet. For example, users may depend on specific numerical libraries available in Python/Numpy, MATLAB, or R. In that case, it should be straightforward to adapt the design by relying on TCP for communication and porting our modules to a different language.

**Figure 7. Architecture of Pando.** A Master process (Node.js) runs the DistributedMap, StreamLender, and Limiter modules; a Public Server (Node.js) runs the Pando Server; Workers execute AsyncMap(f) in browser tabs on volunteers' devices and connect to the Master through WebSocket or WebRTC channels, possibly across Network Address Translation.

**2.5** **Applicability**

The design and architecture of Pando are tailored to its application context: the acceleration of personal workloads with personal devices. Most of these workloads do not require strong timing guarantees, as could occur in real-time processing of sensor data or financial transactions, for example. Moreover, a user has direct control over many or most of the personal devices that are used for computation: faults that may happen are the result of a user disconnecting a device accidentally or because it is not contributing significantly to the overall throughput. Fault-tolerance makes the tool more convenient to use but is not critical for efficient execution. Finally, it is easy to protect a Pando deployment against a denial-of-service attack because there is no long-running publicly accessible platform to target: an attacker needs to know when a deployment happens, in addition to where. It is also always possible to deploy Pando only behind a virtual private network for additional guarantees. The design of Pando therefore leverages the application context to simplify its implementation, and thus occupies a different part of the design space than many other distributed computing platforms.

### 3 StreamLender

StreamLender is our novel abstraction that splits an input stream into multiple concurrent sub-streams and then merges the results back into a single output stream. The actual processing of the values is done by other transformer modules, as illustrated in Figure 8. We provide a usage example in Figure 9.

**Figure 8.** StreamLender and its sub-streams: the Input is split among sub-streams (Out1/In1, Out2/In2, ...), each connected to external transformer modules (greyed), which represent modules such as the Limiter of Figure 7; the results are merged back on the Output.
```javascript
var pull = require('pull-stream')
var lender = require('pull-lend-stream') // StreamLender
var limit = require('pull-limit')        // Limiter

pull(
  pull.count(10),
  lender,
  pull.drain()
)

var duplex = ... // On webrtc connection opened
lender.lendStream(function (err, subStream) {
  if (err) return
  pull(
    subStream.source, // output
    limit(duplex),
    subStream.sink    // input
  )
})
```

**Figure 9. StreamLender usage example.**

StreamLender encapsulates the streaming, ordered, dynamic, fault-tolerant, conservative, and adaptive properties of Pando's programming model (Section 2.3), independently of a particular communication protocol or other input-output libraries. To the best of our knowledge, StreamLender is the first articulation of those properties in a reusable abstraction for distributed stream processing. The complete and tested JavaScript implementation that we built and used in Pando is available as an independent pull-stream module [67].

The synchronization of events happening through callbacks initiated by multiple concurrent streams was tricky to implement correctly and is rather cumbersome to decipher from the source code. We therefore derived a more readable pseudo-code version, which uses explicit waiting primitives and events that correspond to the invocation of callbacks, to help reimplementations; it is available in an extended version of this paper [69]. As a sample, Algorithm 1 shows how the requests made on a sub-stream output are answered: either with a value from another sub-stream that failed, with a new value requested on the StreamLender input, or with a done if no more values are left to process. The ordering and synchronization of outputs is simply solved with a blocking queue that waits for the result at the next index in the stream to arrive.

```
Algorithm 1  Sub-stream output ask request.

 1: upon Outi:ask⟨⟩
 2:   if failed ≠ ∅ then
 3:     answerWithFailedValue(Outi)
 4:   else if Input has terminated (done or err) then
 5:     waitOnOthers(Outi)
 6:   else                          ▷ Lazily read a new value
 7:     trigger Input:ask⟨⟩
 8:     wait Input answer
 9:     if answer = Input:value⟨v⟩ then
10:       remember v
11:       trigger Outi:value⟨v⟩
12:     else
13:       waitOnOthers(Outi)

15: procedure answerWithFailedValue(Outi)
16:   let v be the oldest value of failed
17:   remember v
18:   failed ← failed \ {v}
19:   trigger Outi:value⟨v⟩

20: procedure waitOnOthers(Outi)
21:   wait until last result received or failed ≠ ∅
22:   if last result received then
23:     trigger Outi:done⟨⟩
24:   else
25:     answerWithFailedValue(Outi)
```
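For readers who prefer code to pseudo-code, the re-lending idea of Algorithm 1 can be approximated with the following sketch; it is our own illustration rather than the pull-lend-stream source, and `makeLender` and its callbacks are names we introduce:

```javascript
// Illustrative sketch of re-lending: each sub-stream borrows one
// value at a time; values whose borrower crashed are handed out
// first to the next sub-stream that asks.
function makeLender (readInput) {
  var failed = []   // values whose borrower crashed, oldest first
  var borrowed = {} // value currently borrowed by each sub-stream
  return {
    ask: function (id, answer) {
      var v = failed.length > 0 ? failed.shift() : readInput()
      if (v === undefined) return answer(true) // done: no values left
      borrowed[id] = v
      answer(false, v)                         // lend the value
    },
    result: function (id) { delete borrowed[id] }, // borrower succeeded
    crashed: function (id) {                       // borrower failed
      if (id in borrowed) failed.push(borrowed[id])
      delete borrowed[id]
    }
  }
}
```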
### 4 Applications

Pando can be applied to a wide range of applications. In this section, we present some examples according to their dataflow pattern, i.e. how data flows between Pando and other tools and protocols. We implemented each application using components built as separate Unix tools, but the same components could be implemented as pull-stream modules and combined into a single application as well, either as a standalone webpage or a smartphone application. We summarize key aspects of each application.

**4.1** **Pipeline Processing**

Pipeline processing is a sequence of independent processing stages applied to a stream of inputs, as illustrated in Figure 10. Traditional bag-of-tasks problems, typically associated with volunteer computing, can also be solved with this approach, by listing each individual task in sequence.

**Figure 10. Pipeline processing dataflow (Inputs → Pando → Post-Processing) and examples:**

| App. | Inputs | Pando | Post |
|---|---|---|---|
| Collatz | Ints | Nb of steps | Max |
| Raytrace | Camera pos. | Raytracing | Anim. gif |
| Arxiv | Meta-info | Human tagging | None |
| SL test | RNG seeds | Rand. exec. | Monitor fail. |
| ML agent | Hyperparams | Simulation | None |
| Img proc. | Landsat-8 imgs | Blur filter | None (http) |

This approach is straightforward to use with Pando and the easiest to combine with other Unix tools. We implemented six applications that show diverse use cases.

_Collatz_ implements the Collatz Conjecture [17], an ongoing BOINC project, to find an integer that results in the largest number of computation steps. Our implementation was compiled from MATLAB to JavaScript using the Matjuice compiler [14, 47] and then adapted to use a BigNumber library. Other languages with a JavaScript compiler may therefore benefit from Pando without having to implement a distribution strategy.

_Raytrace_ distributes the rendering of the individual frames of a 3D animation and assembles them in an animated gif (Section 2.1). A similar strategy could be useful to integrate in open source animation tools for artists that do not have access to a rendering farm.
_Arxiv_ distributes the tagging of interesting papers to a group of collaborators, a form of crowdprocessing, by using the browser as a user interface rather than a processing environment. A similar approach could be used to quickly launch an online rescue search using satellite or aerial images in times of disasters.

_StreamLender test_ performs random executions of StreamLender to find cases where the invariants of the pull-stream protocol are violated. It helped us fix three bugs in corner cases that were not found with manually written tests, and then scale up the testing strategy to perform millions of executions quickly without finding errors, increasing confidence that our implementation is correct.

_Machine learning agent_ searches for the optimal learning rate, a hyperparameter, that helps an autonomous agent in a simulated environment quickly learn sequences of steps that result in rewards. This approach could be beneficial to train deep neural networks in browsers. In this particular example, the training phase is interactive: the user can see the behaviour of the agent as it is learning and can early-abort a particular hyperparameter case if the agent fails to learn, a form of hybrid human-machine learning collaboration.

_Image processing_ blurs the images from the open satellite dataset [88]. We have implemented multiple versions of this application: this version uses an http server to distribute the images and receive the results through http requests. In contrast to the two other versions of Section 4.3, the data transfer between a Worker and the http server is synchronous: a worker processing function will not return a correct result until the output image has been fully transmitted to the server, which guarantees that the output image will be received before the output is produced by Pando.

**4.2** **Synchronous Parallel Search**

The structure of blockchains in crypto-currencies such as Bitcoin [79] mandates a synchronous parallel search organization: all miners compete to find a random value, or nonce, such that the hash of the nonce and the block of transactions combined is inferior to a difficulty threshold, itself controlling the probability of finding a nonce. Once a valid nonce has been found, the list of blocks is extended and all miners start working on the next block. In the case of Bitcoin, there is no upper bound on the amount of computational power required to mine the next block, because the difficulty is automatically adjusted such that the time between each successful block is roughly ten minutes. The increasing difficulty, and therefore the increasing computational requirements to mine a new block, makes it increasingly costly for malicious actors to generate a fork of the chain of blocks at arbitrary places, preserving the integrity of the longest chain of blocks. This results in a global consensus on the history of transactions.

A synchronous parallel search introduces a feedback loop in the flow of data, as illustrated in Figure 11, because the next input to process is determined by the last valid result obtained. In our implementation, a monitor therefore lazily provides mining attempts to Pando, including the current block and a range of integers to test. It generates as many attempts as there are participating workers. Each worker tests all integers in the range and answers either with a valid nonce or a failure, and then requests a new mining attempt.
The monitor keeps providing new mining attempts until a valid nonce is found, and then moves on to the next block. In this example, both the list of blocks and the computational requirements are potentially infinite, making a lazy streaming approach quite natural.

**Figure 11. Synchronous parallel search dataflow (a feedback loop between the Monitor and Pando) and example:**

| App. | Inputs | Monitor | Pando |
|---|---|---|---|
| Crypto-curr. | Blocks | Block + Range | Mine nonce |

A more efficient implementation would need to relax the ordering constraint to ensure a valid nonce is reported as soon as possible; otherwise a valid nonce might be held back by other uncompleted work units in front of it. Adding this support requires only a local change in Pando, by adding an option to use a different version of StreamLender that returns unordered results. Moreover, Bitcoin miners nowadays use dedicated hardware that is several orders of magnitude faster than the performance that can be achieved with an equivalent implementation executing in JavaScript. There is therefore limited practicality in mining Bitcoins in browsers, even with the gains obtained by parallelizing the task. Nonetheless, proof-of-work algorithms have been designed to work better on regular CPUs [78]. There may therefore be potential applications in mining those emerging crypto-currencies with Pando to support charities and fund open source software.

**4.3** **Stubborn Processing with Failure-Prone External Data Distribution**

In addition to the http version of Section 4.1, we implemented two additional versions of the distributed blurring of the Landsat-8 open satellite dataset [88]: one distributing the data with the DAT protocol [8], itself accessible in the Beaker browser [12], a fork of Chromium [4], and another that uses WebTorrent [9] running in browsers that support WebRTC. In both cases, managing data outside of Pando introduces an additional failure mode due to the asynchronous transmission of results: it is possible to receive a successful result while the worker may still crash before the results' data have been fully downloaded. To address the issue, our application outputs a result only after a successful download; otherwise, the input is resubmitted for computation. The monitoring that implements that feedback loop has been factored into our new stubborn pull-stream module [68], which can be combined with sharing and downloading modules that are specific to a particular protocol, as illustrated in Figure 12.

**Figure 12. Stubborn processing with external data distribution dataflow (Pando combined with a Share/Download step) and examples:**

| App. | Inputs | Share/Down. | Pando |
|---|---|---|---|
| Img proc. | Landsat-8 imgs | DAT protocol | Blur filter |
| Img proc. | Landsat-8 imgs | WebTorrent protocol | Blur filter |

This use of Pando could be especially appropriate in cases where there is a growing availability of open datasets combined with limited funding and resources available to process them, as is the case for many citizen initiatives.

### 5 Evaluation

Our focus in developing Pando has been to easily tap into the computing power of personal devices already owned by the general public. The collective performance of personal devices has previously been shown to be significant, both when considering the collection of devices owned by individuals and the aggregate performance of the mobile devices of co-workers [52, 70]. The design of Pando has also been shown to scale up to at least a thousand browsers when combined with a fat-tree overlay [71], but had not yet been tested on wide-area network deployments.
In this section, and in complement to the previous results, we compare the performance of Pando on a local area network (LAN) with two additional deployment scenarios: a France-wide state-of-the-art computing grid, Grid5000 [31], connected over a virtual private network (VPN), which is similar to the computing infrastructure of a large organization, and a wide-area network (WAN) deployment with computing devices distributed throughout Europe on PlanetLab EU [3], which is similar to a deployment on the devices of a distributed volunteer community.

The throughput results for all three scenarios are detailed in Table 2: they show that the additional communication latency of the VPN and WAN cases could be hidden by sending multiple inputs at the same time to volunteering devices. Using Pando on compute-bound tasks therefore results in net throughput benefits when using multiple devices in parallel, whether on a LAN, a VPN, or a WAN. In the rest of this section, we detail our experiment settings for all three scenarios, the results obtained, and interesting findings that come from comparing the three scenarios together. To the best of our knowledge, this is the first time an evaluation of a volunteer computing tool has compared those three scales together.

**5.1** **Common Settings**

We used all applications of Section 4 except Arxiv, because the actual "processing" in the Arxiv case is performed by a volunteer rather than the device. All applications are compute-bound, as is typical of volunteer computing. We measured the computation duration and the number of items processed in each Worker over a five-minute period, from which we derived the throughput (see the sketch at the end of this subsection). This diminished the impact of the variability of the computing time between inputs. We also checked that the total over all devices corresponded to the throughput observed at the output of Pando.

The implementation of the applications is similar to that used in previous experiments [70]; the only major difference is that the image used for raytracing was smaller, to avoid a limitation on the size of individual WebRTC messages in the simple-peer [16] library we use for managing WebRTC connections. The consequence is that the throughput results in this evaluation are larger for the same devices, running the same browser, on the same network.

Of the three versions of photo-batch-processing we implemented, we used the http version rather than the DAT or the WebTorrent versions. The DAT version can only execute in the Beaker browser [12], because it is the only browser that supports the protocol, and its security model requires an explicit confirmation by the user to enable results to be transmitted back, making test automation cumbersome. The WebTorrent version was not always reliable and sometimes took multiple minutes to establish a connection, most probably because the connection of a new node in the underlying WebRTC-based distributed hash table was slow and not always successful. However, choosing the http version meant that the http server that serves files was not accessible from outside a LAN or VPN; we therefore do not provide throughput results for the WAN case. Nonetheless, once peer-to-peer solutions for exchanging files become mature enough, the image-processing example should be easy to adapt to take advantage of their capabilities.

We used Pando version 0.17.14 [65] with the version of the application examples in Pando's handbook [64] at commit c5247923.
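As referenced above, a minimal sketch of the throughput derivation follows; it is illustrative only, with names we chose ourselves rather than code from the actual experiments:

```javascript
// Illustrative only: derive per-worker throughput (items/second) from
// the measured duration and item counts; the sum should match the
// throughput observed at Pando's output.
function throughputs (workers) { // e.g. [{ items: 120, seconds: 300 }, ...]
  var perWorker = workers.map(function (w) { return w.items / w.seconds })
  var total = perWorker.reduce(function (sum, t) { return sum + t }, 0)
  return { perWorker: perWorker, total: total }
}

console.log(throughputs([
  { items: 120, seconds: 300 }, // hypothetical phone
  { items: 450, seconds: 300 }  // hypothetical laptop
]))
```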
**5.2** **LAN: Personal Devices**

We selected a diverse set of devices from our own personal collection, similar to previous experiments on personal devices [70], but omitting the slowest devices and using a more recent version of Pando and of the applications. We used one iPhone SE (2 cores, 1.85 GHz, ARMv8 64-bit), released in 2016, executing iOS 12.1 and Safari. For laptops, we evaluated: (1) a MacBook Air mid-2011 (2 cores, i7 1.8 GHz, x86 64-bit) executing MacOS 10.13.6 and Firefox 66.0.5 64-bit; (2) the Novena [11], a Linux laptop based on a Freescale iMX6 CPU (4 cores, 1.2 GHz, ARMv7 32-bit) produced in a small batch in 2015, executing Debian Linux 8 and Firefox 60.3.0esr 32-bit; (3) an Asus Windows laptop based on a Pentium N3540 (4 cores, 2.16 GHz, x86 64-bit) processor executing Windows 10 version 1803 and Firefox 66.0.5 64-bit; and (4) a MacBook Pro 2016 (4 cores, i5 2.9 GHz, x86 64-bit) executing MacOS 10.14.1 and Firefox 63.0.1 64-bit.

These devices represent a wide variety of CPU and OS choices, as well as a wide range of computing performance. We favoured the use of close versions of Firefox on laptops for consistency, so the experiments would focus on the variations in CPU speed, and because Firefox is generally the fastest on numerical benchmarks [52]. We also used the minimum number of cores that provided close to the maximum performance, shown in brackets in Table 2; using more cores typically did not significantly increase the total throughput. The MacBook Air was connected to the other personal devices through a Wifi network. We used a batch size of 2, effectively enabling one input to be transferred while the other is processed.

**5.3** **VPN: Grid5000 Nodes**

We selected one node from each of the 8 participating Grid5000 clusters, themselves distributed between major cities in France along the INRIA network. Each cluster has multiple models, each with a unique name that facilitates selecting a particular model. We list them by model name (ex: dahu) followed by the cluster site where they are hosted (ex: grenoble), as well as their technical characteristics. They all use different versions of Debian Linux 4.9.x 64-bit and, as a browser, Chrome version 73.0.3683.121 through the Electron 5.0.1 environment.

The nodes were acquired between 2011 and 2018: the oldest is uvb.sophia and the most recent is dahu.grenoble. Each group of nodes comprises between 15 and 72 nodes. Each node has 2 Intel Xeon CPUs of different models: uvb.sophia uses an Intel Xeon X5670 with 6 cores/CPU, while dahu.grenoble uses an Intel Xeon Gold 6130 with 16 cores/CPU. The nodes have varying amounts of RAM, from 32 GB for petitprince.luxembourg to 256 GB for chetemy.lille. All nodes are connected through 10 Gbps ethernet, except for uvb.sophia, which is connected with 1 Gbps ethernet.

We measured the performance on a single core of a single node per cluster. The results should scale linearly with additional nodes, but less than linearly when using more than one core per node, as previous experiments have shown that there is increasing contention for CPU resources when the number of cores used in parallel is increased [71]. The Master process of Pando was executing on one core of the MacBook Air 2011 mentioned in the personal devices experiment, and the connections between the Master process and the remote devices were made using the WebSocket protocol. The MacBook Air was itself connected to the Internet through the Wifi network of INRIA and to the Grid5000 nodes through a VPN access.
We used a batch size of 2, effectively enabling one input to be transferred while the other is being processed.

**5.4** **WAN: PlanetLab EU Nodes**

We selected seven nodes among the PlanetLab EU nodes that are still working and used one core per node. On each node, we used Chrome version 69.0.3497.128 through the Electron 4.1.3 environment. Each node has a single Intel CPU; the models comprise a Westmere (ple42.planet-lab.eu), a Core 2 Duo (planet2.elte.hu), and variations of Xeon (all others). All the nodes have 512 MB of RAM and run Fedora Core Linux version 25 with a 4.8, 4.11, or 4.13 Linux kernel. All nodes are connected through 10 Gbps ethernet.

We measured the performance on a single core of each node. Similar to the VPN experiment, the Master process of Pando was executing on one core of the MacBook Air 2011. However, the connections between the Master process and the remote devices were made using the WebRTC protocol. The MacBook Air was itself connected to the Internet through the Wifi network of INRIA. We used a batch size of 4, effectively enabling up to three inputs to be transferred while the last is being processed.

**5.5** **Analysis**

We highlight here interesting insights from the results of Table 2.

_Pando can take advantage of computing devices, whether available on a LAN, a VPN, or a WAN._ We could use the same tool to execute the applications in parallel on personal devices, on a state-of-the-art grid infrastructure, or on a distributed set of devices connected to the Internet. In all cases, there was a performance benefit in using all those devices in parallel that improves significantly on the performance that would have been obtained on a single personal device. To the best of our knowledge, Pando is the first tool for volunteer computing that provides such a level of flexibility. That flexibility, for example, enables leveraging the fastest computing devices available with a minimum of effort: in our experiments, these were the Grid5000 nodes.

_The throughput impact of network latency can be minimized for computation-bound applications, if large enough batches of inputs are used._ For the LAN and VPN experiments, we used input batches of size 2, and for the PlanetLab experiments, we used input batches of size 4. These were sufficiently large to compensate for the transmission delay of inputs, even in the case of image-processing, where 168 kB images were sent for processing through a different channel. Obviously, those results hold only as long as the ratio between computation time and data transfer time is sufficiently large. Nonetheless, it shows that for applications for which this holds, the option of sending inputs in batches is sufficient to hide the network latency.

_A single core of a personal device from 2016 sometimes provides higher throughput than older servers._ On Collatz, the iPhone SE outperforms the uvb.sophia node from Grid5000 and almost all PlanetLab server nodes. This is true in more cases when comparing the throughput of a single core on the MBPro 2016 with the performance of a few Grid5000 nodes and many PlanetLab nodes. It therefore means that, sometimes, it may be better to leverage many personal devices than to rely on older server nodes.

_The choice of browser can sometimes have a dramatic effect on throughput._
The iPhone SE outperforms a single core on the MacBook Pro by 3.3x because Safari performs optimizations that Firefox does not, even if in previous studies Firefox was found to be better in general on numerical computations [52]. When using the browser as an execution environment, it is therefore important to try all available browsers to find the best one for a specific application.

_2-5 cores on recent personal devices can outperform the fastest server core._ It therefore means that asking 2-5 friends with recent smartphones or laptops, such as the iPhone SE or the MacBook Pro 2016, to participate with Pando can replace renting a high-end server core in remote data centres. While this seems rather impractical if the devices are powered by their battery, the use of portable solar panels can remove the problem during sunny days.

The previous experiments therefore show that using Pando, a user can leverage spare computing capacity in both local and remote personal devices, that batching inputs is sufficient to hide network latency, and that the computing power available in personal devices is quite significant, even compared to state-of-the-art server infrastructure.

### 6 Related Work

The idea of using idle workstations for distributed computing was first published in 1982 [92] and was then explored in the 90s, 2000s, and 2010s under the umbrella of desktop grids [44, 45]. In parallel, volunteer computing developed [30, 91] to support high-profile research with the personal desktop computers and fast internet connections that were spreading into households. Individuals nowadays collectively own more computing power, through personal devices such as desktops, laptops, tablets, phones, etc., than any organization ever did. While there has been work on extending volunteer computing to leverage mobile devices [83, 95], the recent personal volunteer computing approach [70] is the first to focus on creating personal tools for the personal projects of programmers of the general public, to seamlessly tap into the computing power of the personal devices they, and their personal social network, already own. To the best of our knowledge, Pando is the first tool explicitly designed for the purpose of personal volunteer computing. In this section, we provide more detail on the declarative concurrency work it was inspired from and on other systems that share similar technology choices. While Pando shares some technology choices with previous platforms, it combines them for different aims.

**6.1** **Declarative Concurrency**

Declarative concurrency has been studied in the context of dataflow programming, with languages such as Lucid [101] and Oz [93]. In the Oz language, the declarative programming model can be used directly to implement concurrent modules [99, Chapter 4]; it is based on single-assignment variables that enable multiple threads to implicitly synchronize on the availability of data, on top of which higher-level abstractions such as streams can be built. The declarative concurrency paradigm has also been experienced by a large number of programmers and researchers through the popular MapReduce [38] framework and Unix pipeline programming [56]. In effect, Pando implements the map operation of MapReduce; the other filtering and reduction phases can be performed locally, if necessary, by chaining with other Unix tools, e.g. grep and awk.
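This chaining can also be expressed in pull-stream terms; in the following sketch, `pandoMap` is a hypothetical local stand-in for a Pando-backed distributed map stage, so only the shape of the pipeline is meaningful:

```javascript
// Illustrative sketch: only the map stage would be distributed by
// Pando; filtering and reduction run locally.
var pull = require('pull-stream')

function pandoMap (f) { // hypothetical: Pando would apply f remotely
  return pull.asyncMap(function (x, cb) {
    setTimeout(function () { cb(null, f(x)) }, 0)
  })
}

pull(
  pull.values([1, 2, 3, 4, 5]),                // local source
  pandoMap(function (x) { return x * x }),     // "distributed" map
  pull.filter(function (y) { return y > 5 }),  // local filtering
  pull.reduce(function (acc, y) { return acc + y }, 0,
    function (err, sum) { console.log(sum) })  // local reduction: 50
)
```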
JavaScript, like many other mainstream programming languages, has not yet integrated features, such as good declarative concurrency primitives, that make declarative concurrency widely accessible and easy. We therefore instead based our design and implementation on the pull-stream design pattern (Section 2.4.2). As far as we know, we are the first to develop and document systematic abstractions for volunteer computing using the declarative concurrent paradigm.

**6.2** **Stream Processing**

Stream processing has been widely adopted as a programming model for scalable distributed stream processing [35], for general-purpose programming on CPUs [49], for distributed GPU programming [105], and for Web-based peer-to-peer computing based on the WebRTC [18], WebSockets [6], and ZeroMQ [10] protocols. Those platforms are programmed using dataflow graphs of computation that combine multiple operators and complex data flows. They then ensure an efficient and reliable execution on the different targeted execution environments. This level of expressivity is not necessary for many personal projects and applications (Section 4). To support our applications with a lower level of implementation complexity, and to make our design easier to port to other programming environments, Pando therefore concentrates on distributing the computation that is applied in a single stage of the streaming pipeline, with the map operation. Everything else is performed locally by leveraging other tools.

**6.3** **Browser-Based Volunteer Computing**

Fabisiak et al. [43] have surveyed more than 45 different browser-based volunteer computing systems developed over more than two decades. They grouped the publications into three generations that followed the evolution of Web technologies: the first generation [28, 32, 37, 46, 81, 90] was based on Java applets; the second generation [33, 34, 60, 77] used JavaScript instead but was somewhat limited by its performance; and the third generation [39, 41, 63, 73, 74, 76, 85, 89] fully emerged once the performance issues were solved in multiple ways: JavaScript became competitive with C [57], WebWorkers [5], which do not interrupt the main thread, were introduced, and new technologies, such as WebCL [7], were proposed to increase the performance beyond what is possible on a single thread of execution on the CPU. We further sub-divide Fabisiak et al.'s third generation into an explicit fourth [62, 72] that incorporates the latest communication technologies, such as WebSocket [6] and WebRTC [18], because they make fault-tolerance easier. Pando could be grouped with this fourth generation of systems and, as far as we know, is the first to leverage WebRTC for the explicit goal of volunteer computing.

However, the key difference of Pando is in our focus on the personal aspects of volunteer computing [70], which led to specific design principles (the DPs of Section 2.2) with the following concrete impacts on its programming model, deployment strategy, and implementation. Of the systems that have generic programming models, many focus on batch-processing [34, 39, 60–62, 85], as typically happens in high-profile long-running applications, sometimes reusing, in the browser, the MapReduce programming model that has been successful in data centers [33, 48, 63, 76, 89]. In contrast, by using a streaming model, Pando enables different and more personal applications by supporting infinite streams and feedback loops.
This simplifies the combination of Pando with existing Unix tools and other programming environments (DP5).

While some general-purpose projects aim to deploy new global platforms [27, 28, 32, 37, 39, 61, 63, 81, 84, 90], sometimes on clouds [72, 85], we have chosen to prioritize local deployments for personal uses. Pando also supports cloud platforms, if necessary for connectivity, but our common use cases do not require them. Moreover, by having a deployment that is specific to a single user and project (DP1), the implementation is simplified. That removes the need for solutions such as: (1) access restrictions in the form of random URLs to segregate the computations of different concurrent users [84]; (2) brokers/dispatchers/bridges to organize the submitted tasks [27, 28, 37, 39, 61, 63]; (3) dynamic management of managers [32]; and (4) advocates [90] to represent clients in the server.

Many implementations are organized around a database [33, 34, 36, 39, 61, 85, 89]. Pando's implementation instead encapsulates the concurrency aspects in the StreamLender abstraction, removing the need for a database library. Other implementations are organized around a request-response API based on HTTP [33, 36, 39, 41, 60, 61, 63, 74, 77, 85, 89] to distribute inputs and collect results. Instead, and similar to newer projects [62, 72], Pando communicates through WebRTC and WebSocket. In our case, the heartbeat mechanism of both protocols enabled our design to encapsulate the fault-tolerance strategy within StreamLender. These simplifications in turn hopefully make it more likely that other programmers will adapt the design for embedding in other applications or reimplement it as standalone tools for different programming environments.

**6.4** **Peer-to-Peer Computing**

Peer-to-peer computing, in which participating devices provide resources and help coordinate the services that are used, has a rich literature [26, 51, 58, 59, 75, 80, 86, 87, 94, 97, 104]. However, the server-centric model of Web technologies has historically limited the development of peer-to-peer Web platforms and applications. The recent introduction of WebRTC [18] removed that limitation, which led to the creation of many new ones [40, 53, 55, 82, 98, 100, 102]. Of all previously mentioned systems, the closest to Pando is browserCloud.js [40] in its aim to provide a computation platform powered by the devices of participants. However, Pando's implementation approach is quite different and simpler, because a deployment is restricted to a single client, its overlay organization need not make workers communicate with one another, it does not require maintenance when not in use for specific tasks, and it removes the need for a discovery algorithm by instead relying on existing social media platforms. In our view, these differences come from a difference in application context. Using browserCloud.js's approach, and that of other peer-to-peer systems, is better suited to creating globally-shared self-sustaining platforms. Ours is better suited to quickly obtaining a working personal tool when a dependency on other tools and platforms is acceptable.

### 7 Conclusion

In this paper, we presented the design of Pando, a new tool, and the first, for personal volunteer computing, which enables a dynamically varying number of failure-prone personal devices contributed by volunteers to parallelize the application of a function on a stream of values using the devices' browsers.
In doing so, we have explained how the declarative concurrent model made its programming simple and how the pull-stream design pattern was used to decompose its implementation into reusable modules. We then provided more detail about the properties and implementation of the new StreamLender abstraction that performs the core coordination work within Pando and which, by virtue of being independent of particular communication protocols or input-output libraries, should be easy to reimplement in many other programming environments. We followed with a presentation of a wide variety of novel applications, organized along different dataflow patterns, that showed Pando to be useful on a wide range of existing and emerging use cases. We completed the paper with an evaluation of Pando's benefits in a real-world setting, showing throughput speedups on the previous applications on a local network with personal devices, on a virtual private network spanning France with state-of-the-art server nodes, and on a wide-area network spanning Europe with older server nodes.

The ease and flexibility in deploying Pando should enable a larger number of programmers to leverage the computing capabilities of personal devices, available both locally and remotely. Moreover, our results suggest that the competitive performance of personal devices makes them, in aggregate, a serious alternative for some compute-intensive tasks.

### References

[1] 1981. Transmission Control Protocol. https://www.ietf.org/rfc/rfc793.txt. [Online; accessed 16-October-2018].

[2] 2003. The Base16, Base32, and Base64 Data Encodings. https://tools.ietf.org/html/rfc3548. [Online; accessed 16-October-2018].

[3] 2008. PlanetLab Europe. https://www.planet-lab.eu. [Online; accessed 12-May-2019].

[4] 2008. The Chromium Projects. https://www.chromium.org/. [Online; accessed 12-October-2018].

[5] 2010. Web Workers. https://w3c.github.io/workers/. [Online; accessed 26-October-2018].

[6] 2011. The WebSocket Protocol. https://tools.ietf.org/html/rfc6455. [Online; accessed 13-February-2017].

[7] 2011. WebCL: Heterogeneous parallel computing in HTML5 web browsers. https://www.khronos.org/webcl/. [Online; accessed 26-October-2018].

[8] 2013. The Dat project. https://datproject.org/. [Online; accessed 12-October-2018].

[9] 2013. WebTorrent. https://webtorrent.io/. [Online; accessed 17-April-2017].

[10] 2014. ZeroMQ. http://zeromq.org/. [Online; accessed 12-October-2018].

[11] 2015. Novena Main Page. https://www.kosagi.com/w/index.php?title=Novena_Main_Page. [Online; accessed 9-November-2018].

[12] 2017. Beaker. https://beakerbrowser.com/. [Online; accessed 12-October-2018].

[13] 2017. Browserify. http://browserify.org/. [Online; accessed 15-April-2017].

[14] 2017. Matjuice Repository. https://github.com/sable/matjuice. [Online; accessed 13-February-2017].

[15] 2017. Pull-Stream Module List. https://pull-stream.github.io/. [Online; accessed 25-July-2017].
[16] 2017. Simple-Peer. https://github.com/feross/simple-peer. [Online; accessed 17-April-2017].

[17] 2017. The Collatz Conjecture. http://boinc.thesonntags.com/collatz/. [Online; accessed 14-February-2017].

[18] 2017. WebRTC 1.0: Real-time Communication Between Browsers. https://www.w3.org/TR/webrtc/. [Online; accessed 05-April-2017].

[19] 2018. Deadline Compute Management System. https://deadline.thinkboxsoftware.com/. [Online; accessed 16-October-2018].

[20] 2018. Heroku. https://www.heroku.com/. [Online; accessed 17-November-2017].

[21] 2018. Node Package Manager. https://www.npmjs.com/. [Online; accessed 17-November-2017].

[22] 2018. Raspberry Pi. https://www.raspberrypi.org/.

[23] 2018. Standard ECMA-262. https://www.ecma-international.org/publications/standards/Ecma-262.htm. [Online; accessed 16-October-2018].

[24] 2018. Zinc Render. https://www.zyncrender.com/. [Online; accessed 16-October-2018].

[25] 2019. Gartner Says Global Smartphone Sales Stalled in the Fourth Quarter of 2018. https://www.gartner.com/en/newsroom/. [Online; accessed 17-May-2019].

[26] Nabil Abdennadher and Regis Boesch. 2005. Towards a Peer-to-Peer Platform for High Performance Computing. In Proceedings of the Eighth International Conference on High-Performance Computing in Asia-Pacific Region. IEEE, 8 pp. https://doi.org/10.1109/HPCASIA.2005.98

[27] Leila Abidi, Christophe Cérin, Gilles Fedak, and Haiwu He. 2015. Towards an Environment for doing Data Science that runs in Browsers. In Proceedings of the International Conference on Smart City/SocialCom/SustainCom (SmartCity). IEEE, 662–667. https://doi.org/10.1109/SmartCity.2015.145

[28] Albert D Alexandrov, Maximilian Ibel, Klaus E Schauser, and Chris J Scheiman. 1997. SuperWeb: Towards a global web-based parallel computing infrastructure. In Parallel Processing Symposium, 1997. Proceedings., 11th International. IEEE, 100–106.

[29] David P. Anderson. 2004. BOINC: A System for Public-Resource Computing and Storage. In Proceedings of the 5th IEEE/ACM International Workshop on Grid Computing (GRID). IEEE, 4–10. https://doi.org/10.1109/GRID.2004.14

[30] David P. Anderson. 2019. BOINC: A Platform for Volunteer Computing. CoRR abs/1903.01699 (2019). arXiv:1903.01699. http://arxiv.org/abs/1903.01699

[31] Daniel Balouek, Alexandra Carpen Amarie, Ghislain Charrier, Frédéric Desprez, Emmanuel Jeannot, Emmanuel Jeanvoine, Adrien Lèbre, David Margery, Nicolas Niclausse, Lucas Nussbaum, Olivier Richard, Christian Pérez, Flavien Quesnel, Cyril Rohr, and Luc Sarzyniec. 2013. Adding Virtualization Capabilities to the Grid'5000 Testbed. In Cloud Computing and Services Science, Ivan I. Ivanov, Marten Sinderen, Frank Leymann, and Tony Shan (Eds.). Communications in Computer and Information Science, Vol. 367. Springer International Publishing, 3–20. https://doi.org/10.1007/978-3-319-04519-1_1
[32] Arash Baratloo, Mehmet Karaul, Zvi M Kedem, and Peter Wijckoff. 1999. Charlotte: Metacomputing on the Web. Future Generation Computer Systems 15, 5 (1999), 559–570. https://doi.org/10.1016/S0167-739X(99)00009-6

[33] Kevin Berry. 2009. Distributed and Grid Computing via the Browser. In Proceedings of the 3rd Villanova University Undergraduate Computer Science Research Symposium (CSRS 2009).

[34] Fabio Boldrin, Chiara Taddia, and Gianluca Mazzini. 2007. Distributed computing through web browser. In Vehicular Technology Conference, 2007. VTC-2007 Fall. 2007 IEEE 66th. IEEE, 2020–2024.

[35] Mitch Cherniack, Hari Balakrishnan, Magdalena Balazinska, Donald Carney, Ugur Cetintemel, Ying Xing, and Stanley B Zdonik. 2003. Scalable Distributed Stream Processing. In CIDR, Vol. 3. 257–268.

[36] Pawel Chorazyk, Aleksander Byrski, Kamil Pietak, Marek Kisiel-Dorohinicki, and Wojciech Turek. 2017. Volunteer computing in a scalable lightweight web-based environment. Computer Assisted Methods in Engineering and Science 24, 1 (2017), 17–40.

[37] Bernd O. Christiansen, Peter Cappello, Mihai F. Ionescu, Michael O. Neary, Klaus E. Schauser, and Daniel Wu. 1997. Javelin: Internet-based parallel computing using Java. Concurrency: Practice and Experience 9, 11 (1997), 1139–1160. https://doi.org/10.1002/(SICI)1096-9128(199711)9:11<1139::AID-CPE349>3.0.CO;2-K

[38] Jeffrey Dean and Sanjay Ghemawat. 2008. MapReduce: simplified data processing on large clusters. Commun. ACM 51, 1 (2008), 107–113.

[39] Roman Dębski, Tomasz Krupa, and Przemyslaw Majewski. 2013. ComcuteJS: A web browser based platform for large-scale computations. Computer Science 14 (2013).

[40] David Dias and Luís Veiga. 2018. BrowserCloud.js - A federated community cloud served by a P2P overlay network on top of the web platform. In Proceedings of the 33rd Annual ACM Symposium on Applied Computing (SAC '18). ACM, New York, NY, USA, 2175–2184. https://doi.org/10.1145/3167132.3167366

[41] Jerzy Duda and Wojciech Dłubacz. 2012. Distributed Evolutionary Computing System Based on Web Browsers with JavaScript. In International Workshop on Applied Parallel Computing. Springer, 183–191.

[42] Cynthia Dwork, Nancy Lynch, and Larry Stockmeyer. 1988. Consensus in the Presence of Partial Synchrony. J. ACM 35, 2 (April 1988), 288–323. https://doi.org/10.1145/42282.42283

[43] Tomasz Fabisiak and Arkadiusz Danilecki. 2017. Browser-based harnessing of voluntary computational power. Foundations of Computing and Decision Sciences 42, 1 (2017), 3–42. https://doi.org/10.1515/fcds-2017-0001

[44] Gilles Fedak. 2012. Desktop Grid Computing. Chapman & Hall/CRC Press. 362 pages. https://hal.inria.fr/hal-00757056

[45] Gilles Fedak. 2015. Contributions to Desktop Grid Computing. Habilitation à diriger des recherches. École Normale Supérieure de Lyon. https://hal.inria.fr/tel-01158462

[46] David Finkel, Craig E Wills, Brian Brennan, and Chris Brennan. 1999. Distriblets: Java-based distributed computing on the Web. Internet Research 9, 1 (1999), 35–40.
[47] Vincent Foley-Bourgon and Laurie Hendren. 2016. Efficiently Implementing the Copy Semantics of MATLAB's Arrays in JavaScript. In Proceedings of the 12th Symposium on Dynamic Languages (DLS 2016). ACM, New York, NY, USA, 72–83. https://doi.org/10.1145/2989225.2989235

[48] Ilya Grigorik. 2009. Collaborative Map-Reduce in the Browser. https://www.igvita.com/2009/03/03/collaborative-map-reduce-in-the-browser/. [Online; accessed 16-October-2018].

[49] Jayanth Gummaraju and Mendel Rosenblum. 2005. Stream Programming on General-Purpose Processors. In Proceedings of the 38th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO 38). IEEE Computer Society, Washington, DC, USA, 343–354. https://doi.org/10.1109/MICRO.2005.32

[50] Andreas Haas, Andreas Rossberg, Derek L. Schuff, Ben L. Titzer, Michael Holman, Dan Gohman, Luke Wagner, Alon Zakai, and JF Bastien. 2017. Bringing the Web Up to Speed with WebAssembly. SIGPLAN Not. 52, 6 (June 2017), 185–200. https://doi.org/10.1145/3140587.3062363

[51] Andrew B Harrison. 2008. Peer-to-grid computing: Spanning diverse service-oriented architectures. Ph.D. Dissertation. Cardiff University (United Kingdom). https://search.proquest.com/openview/69ebafc7df184c92a437b66ee04345ee/

[52] David Herrera, Hanfeng Chen, Erick Lavoie, and Laurie Hendren. 2018. Numerical Computing on the Web: Benchmarking for the Future. In Proceedings of the 14th ACM SIGPLAN International Symposium on Dynamic Languages (DLS 2018). ACM, New York, NY, USA, 88–100. https://doi.org/10.1145/3276945.3276968

[53] Yonghao Hu, Zhaohui Chen, Xiaojun Liu, Fei Huang, and Jinyuan Jia. 2017. WebTorrent Based Fine-grained P2P Transmission of Large-scale WebVR Indoor Scenes. In Proceedings of the 22nd International Conference on 3D Web Technology (Web3D '17). ACM, New York, NY, USA, Article 7, 8 pages. https://doi.org/10.1145/3055624.3075944

[54] Abhinav Jangda, Bobby Powers, Emery D. Berger, and Arjun Guha. 2019. Not So Fast: Analyzing the Performance of WebAssembly vs. Native Code. In 2019 USENIX Annual Technical Conference (USENIX ATC 19). USENIX Association, Renton, WA, 107–120. https://www.usenix.org/conference/atc19/presentation/jangda

[55] Alan B. Johnston and Daniel C. Burnett. 2012. WebRTC: APIs and RTCWEB Protocols of the HTML5 Real-Time Web. Digital Codex LLC, USA.

[56] Brian W. Kernighan and Rob Pike. 1983. The UNIX Programming Environment. Prentice Hall Professional Technical Reference.

[57] Faiz Khan, Vincent Foley-Bourgon, Sujay Kathrotia, Erick Lavoie, and Laurie Hendren. 2014. Using JavaScript and WebCL for Numerical Computations: A Comparative Study of Native and Web Technologies. In Proceedings of the 10th ACM Symposium on Dynamic Languages (DLS '14). ACM, New York, NY, USA, 91–102. https://doi.org/10.1145/2661088.2661090
https://doi.org/10.1145/2661088.2661090](https://doi.org/10.1145/2661088.2661090) 13 ----- [58] Jik-Soo Kim. 2009. Decentralized and scalable resource management for desktop _grids. Ph.D. Dissertation. University of Maryland, College Park._ [http://hdl.](http://hdl.handle.net/1903/9259) [handle.net/1903/9259](http://hdl.handle.net/1903/9259) [59] Jik-Soo Kim, Beomseok Nam, and Alan Sussman. 2014. Scalable and effective peer-to-peer desktop grid system. Cluster Computing 17, 4 (2014), 1185–1201. [https://doi.org/10.1007/s10586-014-0390-z](https://doi.org/10.1007/s10586-014-0390-z) [60] Jon Klein and Lee Spector. 2007. Unwitting distributed genetic programming via asynchronous JavaScript and XML. In Proceedings of the 9th annual conference _on Genetic and evolutionary computation. ACM, 1628–1635._ [61] Fumikazu Konishi, Manabu Ishii, Shingo Ohki, Ryo UMESTU, and Akihiko Konagaya. 2007. RABC: A conceptual design of pervasive infrastructure for browser computing based on AJAX technologies. In Cluster Computing and _the Grid, 2007. CCGRID 2007. Seventh IEEE International Symposium on. IEEE,_ 661–672. [62] Makoto Kuhara, Noriki Amano, Kan Watanabe, Yasuyuki Nogami, and Masaru Fukushi. 2014. A peer-to-peer communication function among Web browsers for Web-based Volunteer Computing. In Communications and Information Tech_nologies (ISCIT), 2014 14th International Symposium on. IEEE, 383–387._ [63] Philipp Langhans, Christoph Wieser, and François Bry. 2013. Crowdsourcing MapReduce: JSMapReduce. In Proceedings of the 22nd International Conference _on World Wide Web. ACM, 253–256._ [64] Erick Lavoie. 2017. Pando Handbook. [https://github.com/elavoie/](https://github.com/elavoie/pando-handbook) [pando-handbook [Online; accessed 17-November-2017].](https://github.com/elavoie/pando-handbook) [65] Erick Lavoie. 2017. Pando Repository. [https://github.com/elavoie/](https://github.com/elavoie/pando-computing) [pando-computing [Online; accessed 17-November-2017].](https://github.com/elavoie/pando-computing) [66] Erick Lavoie. 2017. Pando Server. [https://github.com/elavoie/pando-server](https://github.com/elavoie/pando-server) [Online; accessed 17-November-2017]. [67] Erick Lavoie. 2017. Pull-LendStream Implementation. [https:](https://github.com/elavoie/pull-lend-stream) [//github.com/elavoie/pull-lend-stream and https://www.npmjs.com/package/](https://github.com/elavoie/pull-lend-stream) [pull-lend-stream. [Online; accessed 15-April-2017].](https://www.npmjs.com/package/pull-lend-stream) [[68] Erick Lavoie. 2018. Pull-Stubborn Implementation. https://github.com/elavoie/](https://github.com/elavoie/pull-stubborn) [pull-stubborn. [Online; accessed 28-October-2018].](https://github.com/elavoie/pull-stubborn) [69] Erick Lavoie. 2019. Personal Volunteer Computing. Ph.D. Dissertation. McGill University. [70] Erick Lavoie and Laurie Hendren. 2019. Personal Volunteer Computing. In _Proceedings of the 16th ACM International Conference on Computing Frontiers_ _[(CF ’19). ACM, New York, NY, USA, 240–246. https://doi.org/10.1145/3310273.](https://doi.org/10.1145/3310273.3322819)_ [3322819](https://doi.org/10.1145/3310273.3322819) [71] Erick Lavoie, Laurie Hendren, Fréderic Desprez, and Miguel Correia. 2019. Genet: A Quickly Scalable Fat-Tree Overlay for Personal Volunteer Computing using WebRTC. arXiv e-prints, Article arXiv:1904.11402 (Apr 2019), arXiv:1904.11402 pages. [arXiv:cs.DC/1904.11402](http://arxiv.org/abs/cs.DC/1904.11402) Publication to appear at SASO’19. [72] Guillaume Leclerc, Joshua E. 
Auerbach, Giovanni Iacca, and Dario Floreano. 2016. The Seamless Peer and Cloud Evolution Framework. In Proceedings of the _Genetic and Evolutionary Computation Conference 2016 (GECCO ’16). ACM, New_ [York, NY, USA, 821–828. https://doi.org/10.1145/2908812.2908886](https://doi.org/10.1145/2908812.2908886) [73] Tommy MacWilliam and Cris Cecka. 2013. CrowdCL: Web-based volunteer computing with WebCL. In High Performance Extreme Computing Conference _(HPEC), 2013 IEEE. IEEE, 1–6._ [74] Gonzalo J Martınez and Leonardo Val. 2015. Capataz: a framework for distributing algorithms via the World Wide Web. CLEI Electronic Journal 18, 02 (2015), 2. [http://www.scielo.edu.uy/scielo.php?script=sci_arttext&pid=](http://www.scielo.edu.uy/scielo.php?script=sci_arttext&pid=S0717-50002015000200002&lng=en&nrm=iso) [S0717-50002015000200002&lng=en&nrm=iso](http://www.scielo.edu.uy/scielo.php?script=sci_arttext&pid=S0717-50002015000200002&lng=en&nrm=iso) [75] Petar Maymounkov and David Mazieres. 2002. Kademlia: A peer-to-peer information system based on the xor metric. In International Workshop on Peer-to-Peer _Systems. Springer, 53–65._ [76] Edward Meeds, Remco Hendriks, Said Al Faraby, Magiel Bruntink, and Max Welling. 2015. MLitB: machine learning in the browser. PeerJ Computer Science [1, e11 (2015). https://doi.org/10.7717/peerj-cs.11](https://doi.org/10.7717/peerj-cs.11) [77] Juan Julián Merelo-Guervós, Pedro A Castillo, Juan Luis Jiménez Laredo, A Mora Garcia, and Alberto Prieto. 2008. Asynchronous distributed genetic algorithms with Javascript and JSON. In Evolutionary Computation, 2008. CEC 2008.(IEEE _World Congress on Computational Intelligence). IEEE Congress on. IEEE, 1372–_ 1379. [78] Ujan Mukhopadhyay, Anthony Skjellum, Oluwakemi Hambolu, Jon Oakley, Lu Yu, and Richard Brooks. 2016. A brief survey of cryptocurrency systems. In _Privacy, Security and Trust (PST), 2016 14th Annual Conference on. IEEE, 745–752._ [79] Satoshi Nakamoto. 2008. Bitcoin: A peer-to-peer electronic cash system. (2008). [80] Sagnik Nandy. 2005. Large scale autonomous computing systems. Ph.D. Disserta[tion. UC San Diego. https://escholarship.org/uc/item/3s96x9qc](https://escholarship.org/uc/item/3s96x9qc) [81] Noam Nisan, Shmulik London, Oded Regev, and Noam Camiel. 1998. Globally distributed computation over the internet-the popcorn project. In Distributed _Computing Systems, 1998. Proceedings. 18th International Conference on. IEEE,_ 592–601. [82] J. K. Nurminen, A. J. R. Meyn, E. Jalonen, Y. Raivio, and R. GarcÄśa Marrero. 2013. P2P media streaming with HTML5 and WebRTC. In 2013 IEEE Conference _[on Computer Communications Workshops (INFOCOM WKSHPS). 63–64. https:](https://doi.org/10.1109/INFCOMW.2013.6970739)_ [//doi.org/10.1109/INFCOMW.2013.6970739](https://doi.org/10.1109/INFCOMW.2013.6970739) [83] Pijush Kanti Dutta Pramanik, Prasenjit Choudhury, and Anindita Saha. 2017. Economical supercomputing thru smartphone crowd computing: An assessment of opportunities, benefits, deterrents, and applications from India’s perspective. In Proceedings of the 4th International Conference on Advanced Computing and _[Communication Systems (ICACCS). IEEE, 1–7. https://doi.org/10.1109/ICACCS.](https://doi.org/10.1109/ICACCS.2017.8014613)_ [2017.8014613](https://doi.org/10.1109/ICACCS.2017.8014613) [84] Sean R. Wilkinson and Jonas S. Almeida. 2014. QMachine: commodity supercomputing in web browsers. 
_BMC Bioinformatics (2014), 1–1._ [https:](https://doi.org/10.1186/1471-2105-15-176) [//doi.org/10.1186/1471-2105-15-176](https://doi.org/10.1186/1471-2105-15-176) [85] Cushing Reginald, Ganeshwara Putra, Spiros Koulouzis, Adam Belloum, Marian Bubak, and Cees de Laat. 2013. Distributed Computing on an Ensemble of [Browsers. IEEE Internet Computing 17, 5 (Sept. 2013), 54–61. https://doi.org/10.](https://doi.org/10.1109/MIC.2013.3) [1109/MIC.2013.3](https://doi.org/10.1109/MIC.2013.3) [86] Andrew Rosen. 2016. Towards a Framework for DHT Distributed Computing. [Ph.D. Dissertation. Georgia State University. https://scholarworks.gsu.edu/cs_](https://scholarworks.gsu.edu/cs_diss/107) [diss/107](https://scholarworks.gsu.edu/cs_diss/107) [87] Antony Rowstron and Peter Druschel. 2001. Pastry: Scalable, decentralized object location, and routing for large-scale peer-to-peer systems. In IFIP/ACM _International Conference on Distributed Systems Platforms and Open Distributed_ _Processing. Springer, 329–350._ [88] David P Roy, MA Wulder, Thomas R Loveland, CE Woodcock, RG Allen, MC Anderson, D Helder, JR Irons, DM Johnson, R Kennedy, et al. 2014. Landsat-8: Science and product vision for terrestrial global change research. Remote sensing _of Environment 145 (2014), 154–172._ [89] Sandy Ryza and Tom Wall. 2010. MRJS: A JavaScript MapReduce Framework [for Web Browsers. http://www.cs.brown.edu/courses/csci2950-u/f11/papers/](http://www.cs.brown.edu/courses/csci2950-u/f11/papers/mrjs.pdf) [mrjs.pdf](http://www.cs.brown.edu/courses/csci2950-u/f11/papers/mrjs.pdf) [90] Luis FG Sarmenta and Satoshi Hirano. 1999. Bayanihan: Building and Studying Web-Based Volunteer Computing Systems using Java. Future Generation _Computer Systems 15, 5-6 (1999), 675–686._ [91] Luis Francisco Gumaru Sarmenta. 2001. Volunteer computing. Ph.D. Dissertation. [Massachusetts Institute of Technology. http://hdl.handle.net/1721.1/16773](http://hdl.handle.net/1721.1/16773) [92] John F Shoch and Jon A Hupp. 1982. The "worm" programs - early experience [with a distributed computation. Commun. ACM 25, 3 (1982), 172–180. https:](https://doi.org/10.1145/358453.358455) [//doi.org/10.1145/358453.358455](https://doi.org/10.1145/358453.358455) [93] Gert Smolka. 1995. The Oz programming model. In Computer science today. Springer, 324–343. [94] Ion Stoica, Robert Morris, David Karger, M Frans Kaashoek, and Hari Balakrishnan. 2001. Chord: A scalable peer-to-peer lookup service for internet applications. ACM SIGCOMM Computer Communication Review 31, 4 (2001), 149–160. [95] Cristiano Tapparello, Colin Funai Bora Karaoglu, He Ba, Shurouq Hijazi, Jiye Shi, Abner Aquino, and Wendi Heinzelman. 2015. Volunteer Computing on Mobile Devices: State of the Art and Future. In Enabling Real-Time Mobile _Cloud Computing through Emerging Technologies, Tolga Soyata (Ed.). IGI Global,_ 153–181. [[96] Dominic Tarr. 2016. Pull Streams. http://dominictarr.com/post/145135293917/](http://dominictarr.com/post/145135293917/history-of-streams) [history-of-streams. [Online; accessed 7-February-2017].](http://dominictarr.com/post/145135293917/history-of-streams) [97] Niklas Therning and Lars Bengtsson. 2005. Jalapeno: Decentralized Grid Computing Using Peer-to-peer Technology. In Proceedings of the 2Nd Con_ference on Computing Frontiers (CF ’05). ACM, New York, NY, USA, 59–65._ [https://doi.org/10.1145/1062261.1062274](https://doi.org/10.1145/1062261.1062274) [98] N. Tindall and A. Harwood. 2015. Peer-to-peer between browsers: cyclon protocol over WebRTC. 
In 2015 IEEE International Conference on Peer-to-Peer _[Computing (P2P). 1–5. https://doi.org/10.1109/P2P.2015.7328517](https://doi.org/10.1109/P2P.2015.7328517)_ [99] Peter Van-Roy and Seif Haridi. 2004. Concepts, Techniques, and Models of Com_puter Programming. MIT Press._ [100] C. Vogt, M. J. Werner, and T. C. Schmidt. 2013. Leveraging WebRTC for P2P content distribution in web browsers. In 2013 21st IEEE International Conference _[on Network Protocols (ICNP). 1–2. https://doi.org/10.1109/ICNP.2013.6733637](https://doi.org/10.1109/ICNP.2013.6733637)_ [101] William W Wadge and Edward A Ashcroft. 1985. LUCID, the dataflow program_ming language. Vol. 303. Academic Press London._ [102] M. J. Werner, C. Vogt, and T. C. Schmidt. 2014. Let Our Browsers Socialize: Building User-Centric Content Communities on WebRTC. In 2014 IEEE 34th _International Conference on Distributed Computing Systems Workshops (ICDCSW)._ [37–44. https://doi.org/10.1109/ICDCSW.2014.35](https://doi.org/10.1109/ICDCSW.2014.35) [103] Turner Whitted. 1980. An Improved Illumination Model for Shaded Display. _Commun. ACM 23, 6 (June 1980), 343–349._ [https://doi.org/10.1145/358876.](https://doi.org/10.1145/358876.358882) [358882](https://doi.org/10.1145/358876.358882) [104] Dany Wilson. 2015. Architecture for a Fully Decentralized Peer-to-Peer Collabo_rative Computing Platform. Ph.D. Dissertation. Université d’Ottawa/University_ [of Ottawa. https://doi.org/10.20381/ruor-4170](https://doi.org/10.20381/ruor-4170) [105] Shinichi Yamagiwa and Leonel Sousa. 2007. Design and implementation of a stream-based distributedcomputing platform using graphics processing units. In Proceedings of the 4th international conference on Computing frontiers. ACM, 197–204. 14 -----
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1803.08426, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/1803.08426" }
2018
[ "JournalArticle", "Book" ]
true
2018-03-22T00:00:00
[ { "paperId": "fbabbb368ab6329619053fd9c15ed53418d84555", "title": "Genet: A Quickly Scalable Fat-Tree Overlay for Personal Volunteer Computing using WebRTC" }, { "paperId": "ecd32884251bf2d6e03afe72a688f97cbe0afa24", "title": "BOINC: A Platform for Volunteer Computing" }, { "paperId": "16632daae3df0e657218582e0ceed2952ba197dc", "title": "Not So Fast: Analyzing the Performance of WebAssembly vs. Native Code" }, { "paperId": "a92f4d4241a3eb15c2717b8d93ccae5d3e4c52f5", "title": "Numerical computing on the web: benchmarking for the future" }, { "paperId": "89d4cc2d7224ac49f4b2588a42833568aa1f2e40", "title": "browsercloud.js: a distributed computing fabric powered by a P2P overlay network on top of the web platform" }, { "paperId": "aa703d229387d42ad5e7a5da82ff124591876888", "title": "Personal volunteer computing" }, { "paperId": "6eb15595b5a702afd3af0a177d14e880c1bff502", "title": "Volunteer computing in a scalable lightweight web-based environment" }, { "paperId": "8ca3ef3dd0f6ed54aeb0922434d9ceef41c57a92", "title": "Bringing the web up to speed with WebAssembly" }, { "paperId": "220c787e2c0002f8ac55ce8f4eac6d6fd9670f22", "title": "WebTorrent based fine-grained P2P transmission of large-scale WebVR indoor scenes" }, { "paperId": "9aa3a80d11098350479f45e3474671808634a605", "title": "Job Description Language for a Browser-Based Computing Platform - A Preliminary Report" }, { "paperId": "a06707bf0d51d5a8c3980c7395b525b419729915", "title": "Browser-based Harnessing of Voluntary Computational Power" }, { "paperId": "f1906a7118c0a0851c1ad9f2e19fce1a2b851d26", "title": "A brief survey of Cryptocurrency systems" }, { "paperId": "a7807b31b3426567e63949e53167db4235d6d12c", "title": "Efficiently implementing the copy semantics of MATLAB's arrays in JavaScript" }, { "paperId": "926fb4cce27ff83aa104b881088756f20c59811a", "title": "The Seamless Peer and Cloud Evolution Framework" }, { "paperId": "e3a442aa24e5df7e6b2a25e21e75c4c325f9eedf", "title": "Edge Computing: Vision and Challenges" }, { "paperId": "07f4b76b1feb5b8e66bf6453dbb0610f40952cd0", "title": "Towards an Environment for Doing Data Science That Runs in Browsers" }, { "paperId": "fbaab9fba7709ed2843544346f4aa626409fbc82", "title": "Peer-to-peer between browsers: cyclon protocol over WebRTC" }, { "paperId": "45ee14e839a5c7b4f5ccfa3b9c20508cd84b6b27", "title": "Capataz: a framework for distributing algorithms via the World Wide Web" }, { "paperId": "a14bb15298961545b83e7c7cefff0e7af79828f7", "title": "A Survey of CPU-GPU Heterogeneous Computing Techniques" }, { "paperId": "285ab6dcf7c912a021e6a0ac47367944aec123af", "title": "Contributions to Desktop Grid Computing" }, { "paperId": "6e5d6512b23ee8211416ff03ce6cac89ea261ee0", "title": "Gray Computing: An Analysis of Computing with Background JavaScript Tasks" }, { "paperId": "63b5bef1ccbe3d9aa6d7bfd46f3819b4b6c21b6d", "title": "Adaptive management of applications across multiple clouds: The SeaClouds Approach" }, { "paperId": "51b67e17bfe87796a6fed0a22d2d362d26a527b4", "title": "Hive.js: Browser-Based Distributed Caching for Adaptive Video Streaming" }, { "paperId": "2ffcfc7639ad17d704d6ee3a803a05d611541707", "title": "Scalable and effective peer-to-peer desktop grid system" }, { "paperId": "d58f3dc42640569a129662d275bdbbf0bc6d79fd", "title": "Implementing crossplatform distributed algorithms using standard web technologies" }, { "paperId": "f28d9bd00a957e2fe1cc7a05bf9e46fe6c06a46e", "title": "Using JavaScript and WebCL for numerical computations: a comparative study of native and web technologies" 
}, { "paperId": "dc2898284b6f7f146f834067706cae5572c42331", "title": "A peer-to-peer communication function among Web browsers for Web-based Volunteer Computing" }, { "paperId": "e36f0bd895f03ff142176d2921f83cce9a728fa7", "title": "Let Our Browsers Socialize: Building User-Centric Content Communities on WebRTC" }, { "paperId": "2cda768bff9f23eb8cb3a160386282716e599f65", "title": "QMachine: commodity supercomputing in web browsers" }, { "paperId": "55100a006330a0c1d9824b8cd54de030d71fb86b", "title": "Peer-to-Peer Communication Function among Web Browsers for Web-based Volunteer Computing" }, { "paperId": "7a2ea82ca1cebacc0770977d5297fd023eec3c32", "title": "Landsat-8: Science and Product Vision for Terrestrial Global Change Research" }, { "paperId": "cbf32fbfcbf507cf24a573a1bd52051399f7b089", "title": "Volunteer computing: requirements, challenges, and solutions" }, { "paperId": "646d84bdc6e158c26d4bf43bd26084c20ec69ed0", "title": "CrowdCL: Web-based volunteer computing with WebCL" }, { "paperId": "14e099f8644a499a189105acda20a8750f6bc246", "title": "Leveraging WebRTC for P2P content distribution in web browsers" }, { "paperId": "53491d7f67580aa4478ce54a1e87f3ef05af4665", "title": "Distributed Computing on an Ensemble of Browsers" }, { "paperId": "34be86231eda1995918d9738eccafed5ad0a729a", "title": "Crowdsourcing MapReduce: JSMapReduce" }, { "paperId": "3e286e1fbcfdd52322747e3b8e2e7522d2845563", "title": "P2P media streaming with HTML5 and WebRTC" }, { "paperId": "ebec08f81628c45ed93fbc19e62d6d1eef6a9dd5", "title": "ComcuteJS: A Web Browser Based Platform for Large-scale Computations" }, { "paperId": "7844afd0a42b9ef8df369e5c0433162799ef41f9", "title": "WebRTC: APIs and RTCWEB Protocols of the HTML5 Real-Time Web" }, { "paperId": "c81b437a278f6f278c6557a779fc215661b1b838", "title": "Desktop Grid Computing" }, { "paperId": "e868998170ac364978458717bbc985a4154e65a7", "title": "Distributed Evolutionary Computing System Based on Web Browsers with JavaScript" }, { "paperId": "ef92b4a3e2292db7c224c4a1788725baca6aa683", "title": "Adding Virtualization Capabilities to the Grid'5000 Testbed" }, { "paperId": "6a7f34970ece556019431dde42daa4982bfcc13d", "title": "The WebSocket Protocol" }, { "paperId": "23b777e80b892d40e83e59d4cb5eecdf0a096948", "title": "Volunteer computing" }, { "paperId": "bf408e669c67dcced01bda377de4c4d36e3c102e", "title": "From Dedicated Grid to Volunteer Grid: Large Scale Execution of a Bioinformatics Application" }, { "paperId": "3a513a8f62072e5d8d9ed482810bf7eb8904cf33", "title": "RUFT: Simplifying the Fat-Tree Topology" }, { "paperId": "c50a7f1850d1770fe728b8e42200e463ca669896", "title": "Cloud Computing and Grid Computing 360-Degree Compared" }, { "paperId": "453a0268cd508006639cbfa45bf97f43edff2546", "title": "Asynchronous distributed genetic algorithms with Javascript and JSON" }, { "paperId": "21aa09a28916f982b8ed0749b8a80040c3496aac", "title": "Distributed Computing Through Web Browser" }, { "paperId": "eed83266c793303ace3227e8fdf2987ff3fc6f14", "title": "Unwitting distributed genetic programming via asynchronous JavaScript and XML" }, { "paperId": "98cc673c9ce4d73226bb4c6f736fcfc73b32741a", "title": "RABC: A Conceptual Design of Pervasive Infrastructure for Browser Computing based on Ajax technologies" }, { "paperId": "22be985ce95698f1560d64928b54c800ae1b45e5", "title": "Design and implementation of a stream-based distributedcomputing platform using graphics processing units" }, { "paperId": "ddd9fcb14a7601669e10322914bf106591e0417e", "title": "Deterministic versus 
Adaptive Routing in Fat-Trees" }, { "paperId": "90c267834bc2ef1c73bef29d452bf481e5de631e", "title": "Towards a peer-to-peer platform for high performance computing" }, { "paperId": "7d01577c036d650c55027ef487301f69a6a652b1", "title": "Stream programming on general-purpose processors" }, { "paperId": "3cb746c1a332493868f32ef05710699b1da691e1", "title": "Jalapeno: secentralized grid computing using peer-to-peer technology" }, { "paperId": "0d9b90af172613d0d6af3b3352a1d351a7a09b5a", "title": "BOINC: a system for public-resource computing and storage" }, { "paperId": "908534daf024c087817f55ff4ff33781112d2b65", "title": "FatNemo: Building a Resilient Multi-source Multicast Fat-Tree" }, { "paperId": "0c4c84c0cd8080264e2e43029dca366fe33be3ea", "title": "Concepts, Techniques, and Models of Computer Programming" }, { "paperId": "36a14a0728440cdb9500a50b62e5797041807e77", "title": "Bullet: high bandwidth data dissemination using an overlay mesh" }, { "paperId": "61209d903884e8bc800e6f9ccdf61f26501dc257", "title": "The Base16, Base32, and Base64 Data Encodings" }, { "paperId": "eb51cb223fb17995085af86ac70f765077720504", "title": "Kademlia: A Peer-to-Peer Information System Based on the XOR Metric" }, { "paperId": "cf025469b2d7e4b37c7f2d2bf0d46c6776f48fd4", "title": "Pastry: Scalable, Decentralized Object Location, and Routing for Large-Scale Peer-to-Peer Systems" }, { "paperId": "f03db79dc2922af3ec712592c8a3f69182ec5d65", "title": "Chord: A scalable peer-to-peer lookup service for internet applications" }, { "paperId": "74af22834a36b9065dac00e1034dc60959a56520", "title": "XtremWeb: a generic global computing system" }, { "paperId": "1d67297c03bac2835b9d0c7d999897ae5e6303bf", "title": "The Anatomy of the Grid: Enabling Scalable Virtual Organizations" }, { "paperId": "94e64897e115734393e77f5e514f117ebd773095", "title": "Bayanihan: building and studying web-based volunteer computing systems using Java" }, { "paperId": "0d31f87e522afb5d77633552c7bbec4ca5b02be3", "title": "Distriblets: Java-based distributed computing on the Web" }, { "paperId": "bc109969f6132c585b442ae5d7e3e0b31c1d9294", "title": "Globally distributed computation over the Internet-the POPCORN project" }, { "paperId": "4b58ea806fc7782c23599cb6f652a22754efff06", "title": "About the Collatz conjecture" }, { "paperId": "55c4b605c90ab2c1a695590244097946c34c375b", "title": "Javelin: Internet‐based parallel computing using Java" }, { "paperId": "e84e80ce93bb104237f55b0a53c95da4abbe7ae6", "title": "SuperWeb: towards a global Web-based parallel computing infrastructure" }, { "paperId": "a0ae020a14de2b76598b81f8c2556cc0a3f7cc22", "title": "The Oz Programming Model" }, { "paperId": "ea6b2281bab9dd7efbc2ad6b95492a5263861cc9", "title": "Condor-a hunter of idle workstations" }, { "paperId": "4e63eed9e709b6c8e20a4a68300883898c7d8f37", "title": "Consensus in the presence of partial synchrony" }, { "paperId": "6347f8664678fcaf19ccd9422c21591a1d0a9063", "title": "Polymorphic Arrays: A Novel VLSI Layout for Systolic Computers" }, { "paperId": "15fd523748d0452834676007073f428144d50db0", "title": "The “worm” programs—early experience with a distributed computation" }, { "paperId": "752edecfa8560a39b34b6e64fba977d4d25ce890", "title": "An improved illumination model for shaded display" }, { "paperId": "88013a4075bf4ba70bdc13e95765fd9dab233b87", "title": "The UNIX™ programming environment" }, { "paperId": "e5705f2172fdf12e77b100581c576fdd35b14792", "title": "Browserify" }, { "paperId": "bd9c5ee7898024e87e02af943163bb36ff9f9c1d", "title": "Apache Hadoop" }, { 
"paperId": null, "title": "Standard ECMA-262" }, { "paperId": "2de9399be4dbdde9246d4a8f71563673ccd55ab9", "title": "What is the Open Compute Project?" }, { "paperId": "e22dbbf0c095b16676277206e126c02c98937a07", "title": "Economical supercomputing thru smartphone crowd computing: An assessment of opportunities, benefits, deterrents, and applications from India's perspective" }, { "paperId": null, "title": "Node Package Manager" }, { "paperId": null, "title": "Pull-LendStream Implementation" }, { "paperId": null, "title": "Pando Handbook. https://github.com/elavoie/ pando-handbook [Online; accessed 17-November-2017" }, { "paperId": null, "title": "Pando Server" }, { "paperId": "8c42938da9d96a4b6d7e3b130e331d29bded65eb", "title": "Towards a Framework for DHT Distributed Computing" }, { "paperId": "c0aaee33c0f3bca9380b39f26013dacd013a22c2", "title": "A Survey on Desktop Grid Systems-Research Gap" }, { "paperId": "eb7b9ccbdd0c975a36e293f95e2b05dd845c1568", "title": "Volunteer Computing on Mobile Devices: State of the Art and Future Research Directions" }, { "paperId": null, "title": "Pull Streams" }, { "paperId": "696ea5dfd4ad2027fdecf14655d9315c9927b742", "title": "Architecture for a Fully Decentralized Peer-to-Peer Collaborative Computing Platform" }, { "paperId": "b438811cee91cf2b9a0f082d3f77c47dd85f36a4", "title": "browserCloud.js A federated community cloud served by a P2P overlay network on top of the web platform" }, { "paperId": "c762fe9fca1d7e6261df4f20b291b469c8cd0251", "title": "The Collatz Conjecture" }, { "paperId": "61e04278cc6478a3d3fb800d672a0425fabfea78", "title": "MRJS : A JavaScript MapReduce Framework for Web Browsers" }, { "paperId": null, "title": "Open Stack" }, { "paperId": null, "title": "Web Workers" }, { "paperId": "3bf184c6912fee415ff07abf14f64cd6b83c7db5", "title": "Distributed and Grid Computing via the Browser" }, { "paperId": "1f7190fc294246f83f1f331cc51e3264851d0d36", "title": "Above the Clouds: A Berkeley View of Cloud Computing" }, { "paperId": "1fe8f1549b675ebdff949a87ce66d5a4da962508", "title": "Decentralized and Scalable Resource Management for Desktop Grids" }, { "paperId": "627be67feb084f1266cfc36e5aed3c3e7e6ce5f0", "title": "MapReduce: simplified data processing on large clusters" }, { "paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": null, "title": "Peer-to-grid computing: Spanning diverse serviceoriented architectures" }, { "paperId": "c3296904e1a6fe98c6d6f2cffff89636b8da8bea", "title": "Large scale autonomous computing systems" }, { "paperId": "9a7bd89ac205196e234eca8aeb69a3119eed0940", "title": "The Transmission Control Protocol" }, { "paperId": "4d0dd6664459ce43966fd47f5ce892cf9314dec9", "title": "WebCom: A Web Based Volunteer Computer" }, { "paperId": "90a27218faad130adac4e195ebde63f46df4a2e1", "title": "Scalable Distributed Stream Processing" }, { "paperId": "059ae6dc7f42bb9e3135f459d0cf2435cb71760c", "title": "Metacomputing and Resource Allocation on the World Wide Web" }, { "paperId": "fa8d3d50c1a1d4d05b231ef92624641372ffb00a", "title": "Charlotte: Metacomputing on the Web" }, { "paperId": "519df077a1a5f4bab08643d3702bfcf7bcb11d87", "title": "Lucid, the dataflow programming language" }, { "paperId": "e4fcce68f63092216ffeaf766d6715a82cbc27ba", "title": "Small is Beautiful: A Study of Economics as if People Mattered" }, { "paperId": "8fc928bb430d3f72ac876ca156042ad1860acacd", "title": "Article in Press Future Generation Computer Systems ( ) – Future Generation Computer 
Systems Cloud Computing and Emerging It Platforms: Vision, Hype, and Reality for Delivering Computing as the 5th Utility" }, { "paperId": "e139216e15a6100160e51cf2375ef2c1285057e3", "title": "Distributed under Creative Commons Cc-by 4.0 Mlitb: Machine Learning in the Browser" }, { "paperId": null, "title": "WebRTC 1.0: Real-time Communication Between Browsers" }, { "paperId": null, "title": "Deadline Compute Management System" }, { "paperId": null, "title": "Gartner Says Global Smartphone Sales Stalled in the Fourth Quarter" }, { "paperId": null, "title": "Link_removed" }, { "paperId": null, "title": "Gartner Says Worldwide Sales of Smartphones Recorded First Ever Decline During the Fourth" } ]
24,785
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Law", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/018b611c1a5d7c49fb7b95819f7b8d7484d8d564
[ "Computer Science" ]
0.878392
An Automatically Privacy Protection Solution for Implementing the Right to Be Forgotten in Embedded System
018b611c1a5d7c49fb7b95819f7b8d7484d8d564
IEEE Access
[ { "authorId": "2296939118", "name": "Yanan Zhao" }, { "authorId": "38798395", "name": "Nong Si" }, { "authorId": "2117104161", "name": "Y. Sun" }, { "authorId": "2115406741", "name": "Xin Gao" }, { "authorId": "2160338758", "name": "Haopeng Tong" }, { "authorId": "2160335428", "name": "Geng Yuan" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://ieeexplore.ieee.org/servlet/opac?punumber=6287639" ], "id": "2633f5b2-c15c-49fe-80f5-07523e770c26", "issn": "2169-3536", "name": "IEEE Access", "type": "journal", "url": "http://www.ieee.org/publications_standards/publications/ieee_access.html" }
Towards the massive amount of data generated in our daily work and life, embedded systems, with economical but powerful storage and computing resources, are inevitably becoming the most suitable platform for the Edge Computing for the Internet of Things. However, embedded system servers may also threaten individuals by storing individuals’ private data for years. This paper proposes a Resilient Tag-based Privacy Protection (RTPP) scheme for embedded systems. Specifically, to protect the privacy against the hackers and other non-users, we employ a pseudo-random number encryption technique with the chaos-based principle so that the third party cannot easily steal the private data and reduce the risk of personal privacy leakage. To protect the individuals’ interests, we propose a new approach to controlling the life cycle table of data to enable individuals themselves the flexibility to control the life cycle of private data. Unlike existing data lifetime management methods, the RTPP can support the retrieval of tags in the data life cycle table to control the corresponding privacy while automatically adding or removing tags. Our system automatically adjusted the survival period of private data in the life cycle table through the change of leaf weights, controlled the charge movement on the surface of flash memory, and finally achieved the resilient adjustment process of the life cycle of private data in the embedded system. The security proof and performance evaluation show that the proposed RTPP scheme is provable secure in the automatic privacy lifecycle tuning model for embedded systems and efficient in practice.
Received February 28, 2022, accepted March 17, 2022, date of publication March 25, 2022, date of current version April 6, 2022. _Digital Object Identifier 10.1109/ACCESS.2022.3162238_

# An Automatically Privacy Protection Solution for Implementing the Right to Be Forgotten in Embedded System

YANAN ZHAO 1, NONG SI 1, (Member, IEEE), YU SUN 1, XIN GAO 1, HAOPENG TONG 1, AND GENG YUAN 2

1Faculty of Information Technology, Beijing University of Technology, Chaoyang, Beijing 100124, China
2Faculty of Natural Science, Kristianstad University, 291 88 Kristianstad, Sweden

Corresponding author: Yu Sun (respectprivacy@yeah.net)

This work was supported in part by the Industry-University Collaborative Foundation of Ministry of Education of China, and in part by Huawei under Grant 201902146003. The associate editor coordinating the review of this manuscript and approving it for publication was Jiafeng Xie.

**ABSTRACT** Towards the massive amount of data generated in our daily work and life, embedded systems, with economical but powerful storage and computing resources, are inevitably becoming the most suitable platform for the Edge Computing for the Internet of Things. However, embedded system servers may also threaten individuals by storing individuals' private data for years. This paper proposes a Resilient Tag-based Privacy Protection (RTPP) scheme for embedded systems. Specifically, to protect the privacy against the hackers and other non-users, we employ a pseudo-random number encryption technique with the chaos-based principle so that the third party cannot easily steal the private data and reduce the risk of personal privacy leakage. To protect the individuals' interests, we propose a new approach to controlling the life cycle table of data to enable individuals themselves the flexibility to control the life cycle of private data. Unlike existing data lifetime management methods, the RTPP can support the retrieval of tags in the data life cycle table to control the corresponding privacy while automatically adding or removing tags. Our system automatically adjusted the survival period of private data in the life cycle table through the change of leaf weights, controlled the charge movement on the surface of flash memory, and finally achieved the resilient adjustment process of the life cycle of private data in the embedded system. The security proof and performance evaluation show that the proposed RTPP scheme is provable secure in the automatic privacy lifecycle tuning model for embedded systems and efficient in practice.

**INDEX TERMS** Huffman coding, information security, chaotic mapping, flash memory, data lifecycle.

**I. INTRODUCTION**

Automatically and opportunely deleting the correct personal data in an embedded system is a challenging way to protect privacy. Since the European Union's General Data Protection Regulation (GDPR) [1] went into effect on May 25, 2018, and the California Consumer Privacy Act (CCPA) [2] became effective on January 1, 2020, these laws have raised attention to how individuals' private data is used, protected, deleted, and forgotten. When website visitors choose to allow cookies or upload personal data to websites, the service provider automatically records their preferences and individual private data in its databases for years. Such activities increase the security risk of violating personal privacy under the above laws.
However, people have the right to ask the data owner to delete personal information from any database according to their requirements, fulfilling the legal "right to be forgotten" [3]. Beyond the internet, due to worldwide epidemic prevention and control, a large amount of personal information is collected by various devices, which poses a significant security risk to individuals' privacy. Traditionally, there are two ways to prevent privacy leakage: one is to enhance the security of encryption algorithms in software to protect sensitive data, and the other is to remove private data directly from the hardware. Since most encryption can be broken given adequate time, it is more thorough to remove private data directly from the hardware. Therefore, research on the automatic and complete removal of personal data from hardware has become a hot topic in recent years.

In this work, we designed and developed the Resilient Tag-based Privacy Protection (RTPP) scheme. In the RTPP, much personal private data is sensitive, so the first thing to consider is private data encryption. We propose and evaluate an encryption method based on chaos theory for pseudo-random number generators. Since chaotic systems are highly sensitive to initial states and exhibit complex dynamic behavior, their outputs do not follow conventional probability distributions. The proposed random sequence can provide a good randomness seed for the pseudo-random number generator, making the encryption system we design hard to break and therefore more secure. Secondly, we designed the Data Label Life Cycle Table (DLLCT). It allows dynamic and flexible control of the data lifecycle, enabling users to manage their private data more efficiently and conveniently.

The rest of this paper is structured as follows: Section II reviews the existing methods for implementing "auto-forgotten" for embedded systems and cryptographic algorithms based on chaos theory. Section III describes the design of the proposed pseudo-random number generator based on chaos theory. Section IV presents the RTPP scheme. Section V presents the performance and security analysis of the implemented algorithm. Section VI summarizes the entire paper and provides suggestions for future work.

**II. RELATED WORKS**

This section presents existing methods for embedded automatic forgetting and compares them intuitively. In addition, we investigate the suitability of chaos theory for improving encryption algorithms used for pseudo-random numbers.

_A. THE EXISTING METHODS FOR IMPLEMENTING "AUTO-FORGOTTEN" FOR EMBEDDED SYSTEMS_

Many automatic forgetting methods have been proposed that are suitable for protecting personal privacy data in embedded systems. The hardware implementations of these approaches are usually analyzed based on the complexity of privacy data storage, using a combination of spatial complexity and temporal complexity. Tanakamaru _et al._ proposed the PP-SSS System in 2015 [4], which automatically destroys personal private data by setting exact life spans for the different physical storage units. Data destruction is performed by consciously writing deliberate errors so that the error correction system cannot identify private data outside the expected life span. Compared to traditional data deletion methods, privacy-preserving solid-state storage systems remove personal privacy more directly from the source rather than merely hiding data from the user.
However, the effectiveness of this system is limited to compressed data. It is also not suitable for the long-term storage of private personal data, as data life cycles differ. Yamazawa et al. in 2016 used ECC and shredding techniques to precisely control the storage lifetime of private data in hardware [5]. Suzuki et al., in 2019, designed the PDLCS [6]. In comparison to PP-SSS, PDLCS adds in-3D vertical cell processing, where lateral charge migration in 3D NAND flash controls the lifetime of the data, providing a more efficient guarantee for a longer or shorter private data lifecycle. However, this system also has drawbacks. Firstly, it only performs simple encryption when processing the original private data, which can easily lead to privacy leakage. Secondly, it does not propose an exact data lifecycle management scheme: multiple private data items are processed one by one, which increases processing time and complexity in the embedded system and depletes battery life. Therefore, we focus on enabling the hardware to automatically adjust the lifecycle of private data without decreasing the security level of that data.

_B. CHAOS-BASED ENCRYPTION ALGORITHM_

Existing chaotic cryptography takes two forms. In the first, a pseudo-random key stream is generated using a chaotic system, and the plaintext is encrypted using the generated key stream; this is called stream-based chaotic encryption [7]. In the second, the ciphertext is obtained by multiple iterations (or reverse iterations) using the plaintext (or key) as the initial condition (or control parameter). This method belongs to block-based chaotic encryption, widely used in traditional block ciphers such as DES and AES [8]. In chaotic encryption algorithms, the chaotic map is often the core component of the encryption process, generating many pseudo-random sequences [9]. The general idea of designing chaos-based ciphers is to use the sequences produced by chaotic maps to perform cryptographic operations on target messages [10]. Therefore, to improve the security of cryptosystems, chaotic maps need to be continuously optimized. In 2011, Cao et al. improved the complexity of chaotic maps by changing the parameters [11]. In 2019, Peng et al. added quantum chaos and the PWLCM chaotic map to a new method of S-box design, which significantly improved the security performance of the cryptosystem [12]. In 2020, Patel et al. proposed an improved 3D chaotic logistic map encryption algorithm, which makes the encryption algorithm stronger [13]. Currently, the construction of hash functions based on chaotic maps is a research direction in chaotic cryptography, which uses the sensitivity to initial values and the pseudo-randomness inherent in chaotic systems to generate hash values. These hash values are used as seeds for pseudo-random number ciphers, which finally undergo several chaotic iterations to generate unpredictable random keys. Among such studies, in 2016, Li et al. proposed the construction of a one-way hash function based on a sequence design with double perturbations of spatiotemporal chaos [14]. In 2015, Teh et al. proposed the construction of a hash function based on chaotic logic equations [15]. Meanwhile, it has been shown that a single low-dimensional chaotic system is more vulnerable to attacks, while a high-dimensional chaotic system can improve security but reduces the speed of cryptographic operations.
Therefore, considering the above problems, a MAC pseudo-random function generator based on segmented logistic chaotic mapping is designed in this paper for the RTPP system, from the viewpoint of efficiency and security, to complete the storage encryption of private data.

**III. THE PROPOSED ALGORITHM: MODIFIED HMAC (CHMAC)**

The security of stored private data is as important as memory usage when implementing an automatic forgetting scheme for private data in an embedded system. Since most MAC-based pseudo-random number generators are constructed using the MAC algorithm [16] with an embedded hash function (HMAC) [17], in this study we aim to design an algorithm that is more resistant to attacks than HMAC. Therefore, we propose a Chaos-based HMAC (CHMAC) algorithm in this subsection; further details of the HMAC algorithm and the CHMAC algorithm are presented below.

_A. HMAC ALGORITHM_

The HMAC algorithm uses an underlying hash function together with a key to complete the encryption process, defined as follows [2]:

HMAC(K, M) = H[(K⁺ ⊕ opad) || H[(K⁺ ⊕ ipad) || M]]  (1)

where K is the key shared by both communicating parties, M is the message to be authenticated, and H is the embedded hash function; "⊕" denotes the bitwise exclusive-OR (XOR) operation and "||" denotes concatenation. When the length of the key K is less than the number of bits b contained in each block of the hash function, zeros are appended to the end of K until its length equals b, giving the padded key K⁺. opad and ipad are the outer and inner key-padding bit sequences of HMAC. The HMAC algorithm structure diagram is shown in Fig. 1.

**FIGURE 1. HMAC algorithm structure diagram.**

The encryption process for each message block is divided into five steps [18]. Firstly, make the number of bits of the key K equal to the number of bits b in each hash function block by appending zeros, obtaining K⁺. Secondly, XOR K⁺ with ipad to produce a b-bit block and append M to it, producing a message authentication code. Thirdly, input the message authentication code derived from step 2 into the embedded hash function to generate the hash code. Fourthly, XOR K⁺ with opad to produce a b-bit block and append the hash code generated in step 3, filling to b bits, producing a new message authentication code. Fifthly, apply the message authentication code generated in step 4 directly to the hash function to generate the HMAC value. The HMAC value generated after encrypting the previous message block is used as the initial value for the subsequent message block, and so on until the last message block is processed, yielding the final pseudo-random number output value.
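To make the five-step construction in (1) concrete, the following minimal Python sketch implements the generic HMAC construction, with SHA-256 assumed as the embedded hash function H (the paper does not fix a specific H here); the block size b = 512 bits (64 bytes) and the opad/ipad byte values 0x5C and 0x36 are taken from the standard HMAC definition rather than from this paper.

```python
import hashlib

BLOCK_SIZE = 64  # b = 512 bits: block size of the assumed embedded hash H = SHA-256

def hmac_sha256(key: bytes, message: bytes) -> bytes:
    # Equation (1): HMAC(K, M) = H[(K+ XOR opad) || H[(K+ XOR ipad) || M]]
    if len(key) > BLOCK_SIZE:                 # standard HMAC rule: hash overlong keys first
        key = hashlib.sha256(key).digest()
    k_plus = key.ljust(BLOCK_SIZE, b"\x00")   # step 1: zero-pad K to b bits -> K+
    ipad = bytes(b ^ 0x36 for b in k_plus)    # step 2: K+ XOR ipad, then append M
    inner = hashlib.sha256(ipad + message).digest()  # step 3: inner hash
    opad = bytes(b ^ 0x5C for b in k_plus)    # step 4: K+ XOR opad, append inner hash
    return hashlib.sha256(opad + inner).digest()     # step 5: outer hash -> HMAC value

print(hmac_sha256(b"shared-key", b"private record").hex())
```

As a sanity check, Python's built-in `hmac` module produces the same digest for the same inputs: `hmac.new(b"shared-key", b"private record", hashlib.sha256).hexdigest()`.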
_B. CHMAC ALGORITHM_

According to the working requirements of the RTPP system, the HMAC algorithm should be improved in terms of time and energy consumption. Therefore, we tried to find a way to reduce the time consumed by private data encryption in HMAC. For this purpose, we conducted a series of tests and evaluations to find the most time-consuming part of the HMAC algorithm as a candidate for improving the algorithm's running time. Each round of the HMAC algorithm contains three calls to the hash function, the core of the algorithm, which processes messages in 512-bit increments; the internal structure of each round consists of permutations, shifts, and substitutions. In contrast to the simple, low-cost implementation of bit permutations in hardware [19], their software implementation is expensive in processing time. Therefore, to further improve the security and encryption speed of the HMAC algorithm, this paper modifies the embedded hash function in the HMAC algorithm, combining segmented logistic chaotic mapping with the embedded hash function and invoking the Piecewise Logistic Map (PLM [20]) to construct the CHMAC algorithm. Our approach reduces the processing time of the software by reducing the number of substitution operations while ensuring better encryption performance. Table 1 shows the time (in milliseconds) required to encrypt 512 bits of data with different numbers of encryption rounds for the HMAC algorithm, the improved HMAC algorithm, and the CHMAC algorithm proposed in this paper.

**TABLE 1. Time required to encrypt 512 bits of data in different scenarios with different numbers of rounds, after the completion of the iteration (milliseconds).**

The results show that the time to encrypt 512 bits is reduced from 255.07 ms to 20.97 ms when the HMAC algorithm does not include the permutation operation. The CHMAC algorithm retains one permutation and introduces chaotic mapping. Comparing encryption iteration times, the CHMAC algorithm has an encryption advantage over the HMAC variant that keeps only one permutation. In the CHMAC algorithm, the introduction of chaotic mapping reduces the interaction between plaintext information blocks in the initial stage, effectively prevents external attacks, and greatly improves the algorithm's security.

The logistic map is a discrete-time dynamical system [20], mathematically expressed as

x_{n+1} = f(x_n) = µx_n(1 − x_n)  (2)

where x_0 ∈ (0, 1) is the state value and µ is the control parameter. The basic logistic map is vulnerable to attacks due to its simple structure, so this algorithm adopts the PLM [20], which enhances the resistance of logistic maps and is defined in (3), where N is the number of segments of the logistic map. It has good ergodicity and a larger Lyapunov exponent than the basic logistic map. The study shows that the map has good chaotic characteristics when the control parameter µ ∈ (2, 4).

**FIGURE 2. CHMAC algorithm structure diagram.**

Fig. 2 depicts the structure diagram of the CHMAC algorithm constructed in this paper. Compared with the HMAC structure, the encryption of each group of messages only needs to run the same hash function twice, which simplifies the circuit design while improving the security of the encryptor.
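Before turning to the full piecewise map, a short sketch of the basic logistic map in (2) illustrates the sensitivity to initial state that the whole design relies on; µ = 3.99 and the two seeds are arbitrary illustrative choices, not values prescribed by the paper.

```python
def logistic_orbit(x0: float, mu: float, n: int) -> float:
    # Iterate equation (2): x_{k+1} = mu * x_k * (1 - x_k), returning the n-th iterate.
    x = x0
    for _ in range(n):
        x = mu * x * (1.0 - x)
    return x

# Seeds differing by 1e-12 land on completely different orbits after ~100 steps,
# which is why the key stream is hard to predict without the exact seed.
print(logistic_orbit(0.400000000000, 3.99, 100))
print(logistic_orbit(0.400000000001, 3.99, 100))
```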
The specific encryption process is as follows: firstly, the key K⁺ and ipad are XORed, and the resulting message authentication code is input to the CHMAC algorithm structure to generate the hash code; then, it is input to PLM(S_o || h_o) in the CHMAC algorithm structure to complete the second encryption operation, and the CHMAC code of a single message block is obtained:

$$
x_{j+1} = \mathrm{PLM}(x_j) =
\begin{cases}
N^2\mu\,x_j\bigl(\tfrac{1}{N}-x_j\bigr), & 0 < x_j < \tfrac{1}{N}\\
1 - N^2\mu\bigl(x_j-\tfrac{1}{N}\bigr)\bigl(\tfrac{2}{N}-x_j\bigr), & \tfrac{1}{N} < x_j < \tfrac{2}{N}\\
\quad\vdots\\
N^2\mu\bigl(x_j-\tfrac{i-1}{N}\bigr)\bigl(\tfrac{i}{N}-x_j\bigr), & \tfrac{i-1}{N} < x_j < \tfrac{i}{N}\\
1 - N^2\mu\bigl(x_j-\tfrac{i}{N}\bigr)\bigl(\tfrac{i+1}{N}-x_j\bigr), & \tfrac{i}{N} < x_j < \tfrac{i+1}{N}\\
\quad\vdots\\
N^2\mu\bigl(x_j-\tfrac{N-2}{N}\bigr)\bigl(\tfrac{N-1}{N}-x_j\bigr), & \tfrac{N-2}{N} < x_j < \tfrac{N-1}{N}\\
1 - N^2\mu\bigl(x_j-\tfrac{N-1}{N}\bigr)\bigl(1-x_j\bigr), & \tfrac{N-1}{N} < x_j < 1\\
x_j + \tfrac{1}{100N}, & x_j = 0, \tfrac{1}{N}, \tfrac{2}{N}, \ldots, \tfrac{N-1}{N}\\
x_j - \tfrac{1}{100N}, & x_j = 1
\end{cases}
\tag{3}
$$

Finally, the CHMAC code is fed back as the initial value of the function, and the above steps are repeated until all message block groupings have executed this process, realizing pseudo-random number encryption of the private data. The processing of the function consists of three main steps: message key preprocessing, compression iteration of the message block, and generation of the CHMAC value. Equation (4) defines the CHMAC algorithm:

CHMAC_K(M) = PLM[P_0 || S_i]  (4)

Message key preprocessing consists of two parts: message key padding and iterative chunking of the message code. First, the key K⁺ and ipad are XORed, and the plaintext message is divided into L plaintext message blocks Y_i (0 ≤ i ≤ L − 1); after merging the two, they form the message key S_i. The length of each message key is 512 bits, and it is sent to the function for iterative compression, finally yielding the 256-bit code h_o. This code is then XORed with the expansion bits generated from the key, and the function is entered again for iterative compression. The CHMAC code value of this message block can then be generated and used as the initial value for the next group of message blocks, until all message blocks have been processed and the final CHMAC code value is obtained. This enhances the diffusion effect among message blocks and strengthens the security of the encrypted messages. The iterative compression of message blocks mainly uses the PLM iterative function. Table 2 shows the execution process of the CHMAC algorithm.

**TABLE 2. Chaos-based HMAC algorithm.**
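To close this section, a quick numerical prototype of the PLM in (3) is given below. It is a direct transcription of the reconstructed piecewise definition, including the small ±1/(100N) boundary perturbation; the values N = 4 and µ = 3.9 are illustrative assumptions, and this is a sketch for experimentation rather than the paper's implementation.

```python
def plm(x: float, mu: float, n_seg: int) -> float:
    # Piecewise Logistic Map of equation (3) on [0, 1], with N = n_seg segments.
    eps = 1.0 / (100 * n_seg)          # the boundary perturbation 1/(100N)
    if x == 1.0:                       # right endpoint: nudge inward
        return x - eps
    i = int(x * n_seg)                 # segment index: x lies in [i/N, (i+1)/N)
    lo, hi = i / n_seg, (i + 1) / n_seg
    if x == lo:                        # segment boundary (including x == 0): nudge up
        return x + eps
    val = n_seg ** 2 * mu * (x - lo) * (hi - x)
    # Branches alternate orientation: even segments use the parabola directly,
    # odd segments use its reflection 1 - (...), as in equation (3).
    return val if i % 2 == 0 else 1.0 - val

x = 0.37
for _ in range(5):
    x = plm(x, mu=3.9, n_seg=4)
    print(x)
```

With µ < 4 the iterates stay inside (0, 1), so the output of one block can be fed back as the initial value for the next, mirroring the block chaining described for CHMAC above.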
**IV. APPROACH TO "AUTO-FORGOTTEN" IMPLEMENTATION FOR EMBEDDED SYSTEM**

One way to protect private data in storage and achieve the automatic deletion that implements the "right to be forgotten" is to limit users' private information [21]. However, as data grows, the number of files that need to be deleted gradually increases the complexity of system processing. So far, the deletion operations users perform on their devices only ostensibly delete the data on those devices; the backend of the data system has saved this information in each company's database [22]. Whenever a company receives a request to erase personal data from its database, the whole process of individual-by-individual review is very tedious and time-consuming. However, the final review decision does not always ensure successful deletion, reducing the credibility of the legal system in protecting individuals' privacy. Protecting individuals' privacy through legal means alone is not a foolproof solution, so it is crucial to address the "automatic right to be forgotten."

In this paper, a Resilient Tag-based Privacy Protection (RTPP) scheme is designed to solve such problems effectively. The scheme automatically calculates the survival period of individuals' private data by controlling the charge movement in the hardware and changing the bit-error rate (BER), in combination with the data usage over a specified period. When the data is outside its survival cycle, it is automatically and permanently destroyed in the hardware, and thereby forgotten. Fig. 3 shows the basic architecture of the RTPP scheme.

**FIGURE 3. Architecture of the RTPP system.**

The main features of the RTPP system are: firstly, all private data have corresponding tags; secondly, the existence time of individuals' privacy can be flexibly adjusted by determining whether the user retrieves the relevant tag data on four out of seven days; thirdly, all private data under such tags can be operated on accurately by retrieving the tags directly, without decoding the private data. This system consists of four parts: first, a pseudo-random number generator with chaotic mapping performs cryptographic operations on the private data; next, NOR flash memory and 3D-NAND flash memory controllers process the data collaboratively; then, the length of the personal privacy lifecycle is controlled with Huffman coding; finally, automatic forgetting of individuals' privacy is achieved. These features are described in detail below.

_A. FLASH MEMORY OPERATION_

There are two typical types of flash memory, NAND and NOR [23]. NAND flash memory is further classified into four types based on the density of its electronic cells. After comparing the capacity, cost, and lifetime of these four types, 3D-TLC NAND flash memory [24] was chosen for this system. 3D-TLC NAND flash memory is not simply a stack of NAND layers: it uses 3D-NAND technology, in which memory cells are stacked in three dimensions. This dramatically improves storage capacity, performance, and security compared to two-dimensional planar TLC NAND flash and gives it an advantage in storing large amounts of private data [25]. NOR flash is a random-access storage medium. Each memory cell is connected in parallel, allowing direct random access to each bit and significantly reducing the execution time of the instruction operations that store data tags in NOR flash [26]. When the host issues a retrieval command, it first extracts the relevant tag from the NOR flash memory and sends it to the NAND flash memory. Then, it can view the data corresponding to this tag and transfer the data to the host to complete the retrieval operation. Similarly, when a host wants to delete a particular type of data, it can directly delete the corresponding tags and simultaneously delete all the data under them, achieving flexible regulation of the data lifecycle. As shown in Fig. 3, the RTPP system is designed to use these two flash memory types for individuals' private data. Although the storage performance of the two types of flash memory is very different, the read and write processes are similar [27].
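Before turning to the underlying cell physics, the tag-indexed retrieval and deletion path just described can be modeled functionally in a few lines. The two dictionaries below stand in for the NOR-resident tag index and the NAND-resident data blocks; every name and value is illustrative rather than taken from the paper.

```python
# Illustrative model only: NOR flash holds the tag index, NAND holds the data blocks.
nor_tag_index = {"health": [0, 2], "location": [1]}      # tag -> NAND block addresses
nand_blocks = {0: b"record-A", 1: b"record-B", 2: b"record-C"}

def retrieve(tag: str) -> list[bytes]:
    # Host request: resolve the tag in NOR, then fetch the matching NAND blocks.
    return [nand_blocks[addr] for addr in nor_tag_index.get(tag, [])]

def delete_tag(tag: str) -> None:
    # Deleting one tag removes every data block filed under it in a single operation,
    # without touching (or decoding) the data stored under other tags.
    for addr in nor_tag_index.pop(tag, []):
        nand_blocks.pop(addr, None)

print(retrieve("health"))   # [b'record-A', b'record-C']
delete_tag("health")
print(retrieve("health"))   # []
```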
For NAND flash, the deletion or writing of data is based on the tunneling effect, which requires current to pass through the insulation layer between the floating gate and the polysilicon pillar, discharging or charging the floating gate [28]. NOR flash memory uses tunneling for data deletion and hot-electron injection for data writing [29]. To achieve flexible control of the survival cycle of individuals' privacy, the proposed RTPP system utilizes charge movement to control the erasure and writing of flash memory. The tags with private data are stored in 3D-TLC NAND flash memory, and each layer stores one week of private data. When the data life cycle is extended, the content of the bottom tag is substituted into the tag with the same name in the upper layer. By controlling the charging and discharging of the bottom cell, the outdated tag is erased while the data is written. At this point, the error correction code receives the corresponding instruction to determine whether the BER should be increased or decreased [30], so that the flash memory automatically adjusts the private data life cycle. The operation does not require processing the private data directly; instead, the data are compressed and stored under their respective tags. Our system only needs to manipulate the corresponding tags to control the life cycle of all private data, which saves memory processing time, dramatically improves efficiency, and effectively protects the privacy and security of users.

_B. RULES OF LABELING DATA TAGS_

After individual private data is written to the server, the server first classifies each private data item by labeling it with a corresponding tag and stores it in the flash memory. In the DLLCT designed in this paper, each tag type has its corresponding timeline from creation to disappearance. When the private data life span needs to be extended, shortened, or deleted immediately, its lifecycle is automatically updated in the table. The tags in the life cycle table are divided into four groups: the first three groups of tags are fixed in position and value in the life cycle table and cannot be modified in any way, while the system automatically generates the fourth group of tags according to the sensitivity level of the private data. All tags are stored in the NOR flash memory of the embedded system acting as the server host. When users use the host, the generated private content looks for matching tags inside the host to realize the categorization and storage of private information. Each piece of private data can be labeled with multiple tags, and different types of sub-tags can be stored under each group of tags. Each tag is stored in the life cycle table with a default validity of one year. If no operation is performed on the private data during this period, the private data under the tag is automatically destroyed in the system. If the data is subject to an extension, shortening, or immediate deletion operation, the survival time in the corresponding life cycle table entry is also automatically changed. On the one hand, the tag is stored in NOR flash memory so that the flash memory can directly handle a large amount of private information. On the other hand, the tag and the private data it contains are transferred to a pseudo-random number generator based on the chaos principle, which encrypts the data to prevent private data leakage.
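The bookkeeping these labeling rules imply can be sketched as a small table structure. The toy model below captures only the default one-year validity, the extend/shorten adjustments, and the automatic destruction of expired tags; the class and method names are invented for illustration, and the paper realizes this table in flash rather than in software objects.

```python
from datetime import date, timedelta

DEFAULT_VALIDITY = timedelta(days=365)   # every tag starts with a one-year life span

class DLLCT:
    """Toy Data Label Life Cycle Table: maps a tag to its expiry date."""

    def __init__(self) -> None:
        self.expiry: dict[str, date] = {}

    def add_tag(self, tag: str, today: date) -> None:
        self.expiry[tag] = today + DEFAULT_VALIDITY

    def adjust(self, tag: str, days: int) -> None:
        # Positive days extend the life cycle; negative days shorten it.
        self.expiry[tag] += timedelta(days=days)

    def purge(self, today: date) -> list[str]:
        # Tags past their survival period are destroyed automatically.
        dead = [t for t, exp in self.expiry.items() if exp <= today]
        for t in dead:
            del self.expiry[t]
        return dead

table = DLLCT()
table.add_tag("health", date(2022, 1, 1))   # valid until 2023-01-01 by default
table.adjust("health", -300)                # life cycle shortened to 2022-03-07
print(table.purge(date(2022, 6, 1)))        # ['health'] -- expired, so destroyed
```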
The pseudo-random number generator based on the chaos principle and its encryption scheme are described in detail in Section 2. Once the private data have completed the above operations, they enter the embedded system's Flash Translation Layer (FTL) [31], which converts the logical address of the private data into a physical address for writing to flash memory. After the conversion, the private information is passed directly to the Huffman coder designed in this paper, which compresses the private data. The regulation of this Huffman encoding is the core mechanism for the automatic regulation of private data, and it is explained in detail below.

_C. MODIFICATION OF HUFFMAN CODING_

After the data are tagged, the random encryption of the pseudo-random function generator is initialized and the FTL completes the address conversion. The data then enter the core module of the RTPP system, the crucial step for flexibly regulating the lifecycle length of private data. This paper designs an algorithmic modulation of Huffman coding: Huffman coding losslessly compresses large amounts of private data, and the system flexibly changes the lifecycle of private data in flash memory by evaluating the weights of the Huffman tree, so that the error correction code produces the corresponding bit-error rate and thereby controls the directional movement of charge in the flash memory. Huffman coding algorithms appear in two forms in current research: static [32] and adaptive [33]. Throughout the encoding process, the static model bases the encoding on a pre-assumed distribution of the encoded elements and works well when the character distribution matches the nature of the file. Our adaptive algorithm, by contrast, does not lose compression gain when the presumed and actual models differ significantly, because it updates its model incrementally. When the patterns of different elements change substantially, the adaptive approach also does not need to transmit these changes to the decoder, because the encoder and decoder automatically maintain identical copies of the Huffman tree. This shows that the adaptive mode is more advantageous than the static mode. Although the RTPP system already provides a lifecycle table of data tags, some of the finer-grained tags in group 4 can only be written to the lifecycle table by the user. Therefore, to make Huffman coding more effective in regulating the lifecycle of private data in the RTPP system, this paper designs an Adaptive Model of Huffman Coding (AMHC). The construction of the Huffman tree is the most fundamental part of Huffman coding. In this paper, we design an adaptive dynamic mode of Huffman coding. The Huffman tree takes the tag information in the DLLCT as its basic structure; in actual use, a user's Huffman tree is built up step by step according to the date of each day, rather than being completed all at once. The construction proceeds roughly as follows, using the first group of tags as the root node and building the leaf nodes from top to bottom. The second group of month tags in the lifecycle table is read as the children of the root node, with the current month tag placed in the left node and the next month's tag in the right node.
The third group of week tags serves as the children of the second group, with the current week tag as the left node and the next week's tag as the right node. This construction is repeated in turn for the second and third groups until the last tag of each group in the lifecycle table has been placed, which completes the construction of the first three tag groups of the whole-year Huffman tree. The children of the third group of tags are then built from the fourth group of user tags, sorted from left to right in the order of the user's accesses. Since the fourth group of tags is classified by the sensitivity of the user's private data to the server, the initial leaf weight of the most sensitive data is set to 1, the next sensitivity level to 2, and so on. After the tags are classified, the Huffman tree of the fourth group is constructed with these weights and merged under the third-group leaf tags, completing the Huffman tree of one user's private data on the server for one day. The leaf weight of a user's level-1 sensitive tag tracks the number of days the user visits the server: if a user visits the server four days in a week, the level-1 sensitive tag's leaf weight becomes 4, and the weights of all remaining leaf nodes change accordingly. If a user visits the server frequently, the volume of recorded private data grows and the risk of privacy leakage increases, so the server applies specific protection measures to such users' private data. Since all tag groups are valid for one year by default, Huffman coding sets a checkpoint every seven days. By testing whether the leaf weight of the level-1 sensitive tag in the fourth group is greater than 3, the system decides whether the lifespan of the fourth group of tags is extended or shortened. After the test, the weights of all of that user's fourth-group tags are reset to their initial values, and the lifecycle-regulation task is then carried out. If the weight is greater than 3, the Huffman tree changes as follows: the first three levels of sensitive tags in the fourth group are set to a shortened lifecycle, with their initial leaf weights decreased by one-twelfth, while the user's remaining sensitive tags are set to an extended lifecycle, with their initial leaf weights increased by one-twelfth. If the weight is less than or equal to 3, all of the user's fourth-group tags are set to a shortened lifecycle, with their initial leaf weights reduced by one-twelfth. For a user with an extended lifecycle, if the leaf weight of the first three levels of sensitive tags drops to 0 within the one-year validity period, the contents of those tags are no longer displayed when the user revisits the server. The user's level-4 sensitive tags then become the new level-1 sensitive tags with leaf weight 1, the level-5 sensitive tags become the new level-2 sensitive tags with leaf weight 2, and so on. If such a user revisits the server again, their privacy tags no longer participate in the construction of the Huffman tree, and the system places the user directly on the critical-protection list.
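The weekly decision rule above fits in a few lines. This Python sketch encodes it directly; the greater-than-3 test and the one-twelfth step are taken from the text, while the function name and data layout are illustrative assumptions.

```python
# Weekly AMHC decision sketch: the weights and the 1/12 step follow the
# text; the dict layout and function name are assumed for illustration.
STEP = 1.0 / 12.0

def weekly_adjust(leaf_weights, visits_this_week):
    """leaf_weights: {sensitivity_level: weight}, level 1 = most sensitive."""
    # the level-1 leaf weight tracks days visited this week
    level1_weight = visits_this_week
    decisions = {}
    if level1_weight > 3:
        for level in leaf_weights:
            if level <= 3:                   # top three sensitivity levels
                leaf_weights[level] -= STEP  # shorten lifecycle
                decisions[level] = "shorten"
            else:
                leaf_weights[level] += STEP  # extend lifecycle
                decisions[level] = "extend"
    else:
        for level in leaf_weights:
            leaf_weights[level] -= STEP      # shorten everything by default
            decisions[level] = "shorten"
    return decisions

weights = {1: 1.0, 2: 2.0, 3: 3.0, 4: 4.0, 5: 5.0}
print(weekly_adjust(weights, visits_this_week=4))
# {1: 'shorten', 2: 'shorten', 3: 'shorten', 4: 'extend', 5: 'extend'}
```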
When the one-year validity period expires, the system sets all tag leaf weights to 0, and the embedded system enters the automatic-forgetting phase. The whole process then starts again in the second year. Table 3 shows the implementation of the AMHC algorithm.

**TABLE 3. AMHC algorithm.**

When the system receives an instruction to extend a tag's lifecycle, it reduces electron migration at the 3D-TLC NAND flash interface and lowers the error rate, so the error correction code does not easily reach saturation and the data lifecycle is extended. Conversely, for a shortening instruction, it increases charge migration at the 3D-TLC NAND flash interface and raises the error rate, allowing the error correction code to detect more errors and thereby shortening the data lifecycle. Leaf-node tags with a weight of 0 are removed by the system before the second cycle begins. Suppose a user sends a request to delete private data immediately while using the host system. The system first looks up which tag type the private data belongs to in the lifecycle table, retrieves its usage frequency in the Huffman tree via the fourth group of tags, and immediately reduces its leaf-node weight to 0. The corresponding leaf node is simultaneously removed from the Huffman tree, and its entry in the lifecycle table is deleted accordingly. At this point the error correction code reaches its maximum error-correction value, the parity check fails, and the user-submitted deletion instruction enters the hardware immediate-deletion phase. A large amount of charge is transferred, and after the discharge operation the private data is permanently deleted in hardware [34]. Since private data are stored under tags, the deletion operation is performed directly on all the tags owned by the private data.

_D. DLLCT WORKING PROCESS_

The DLLCT is the key step by which the RTPP scheme flexibly extends or shortens the data lifecycle. It first evaluates the Huffman tree's leaf weights to decide how a privacy tag's lifecycle changes, then sends the corresponding instructions to the 3D flash memory, changing the BER of that tag in the embedded system by controlling the direction of electron flow at the flash interface. Fig. 4 shows how the DLLCT works within the RTPP scheme.

**FIGURE 4. Proposed Data Label Life Cycle Table (DLLCT).**

As the figure shows, the DLLCT first sets every private-data tag that enters the system after encryption to a one-year validity. The AMHC algorithm then starts working, and the processing of data tags enters the extend and shorten working modes; if during this time the system receives a command to delete the data immediately, it enters the immediate-deletion working mode. Fig. 5 depicts the DLLCT workflow for flexible regulation of the private-data lifecycle: the DLLCT first estimates the BER from the private data and optimizes the leaf-tag weights over seven days; the actual BER is then calculated on the eighth day, realizing flexible control of the private-data lifecycle.

**FIGURE 5. Flowchart of proposed DLLCT.**
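A compact sketch of the three DLLCT working modes follows. The one-twelfth BER step per cycle comes from the description in Section V below; whether that step is absolute or relative is not specified in the paper, so an additive step on a normalized BER is assumed here, and the saturation threshold is likewise an assumed normalization.

```python
# DLLCT working-mode sketch. Names, the normalized BER, and the saturation
# threshold are assumptions; the 1/12 step per cycle follows the text.
BER_STEP = 1.0 / 12.0
ECC_SATURATION = 1.0          # assumed normalized BER where parity fails

def apply_mode(ber, mode):
    if mode == "extend":      # fewer electrons migrate -> lower BER
        return max(0.0, ber - BER_STEP)
    if mode == "shorten":     # more electrons migrate -> higher BER
        return min(ECC_SATURATION, ber + BER_STEP)
    if mode == "delete_now":  # massive charge transfer -> ECC saturates
        return ECC_SATURATION
    return ber

ber = 0.5                     # assumed starting normalized BER
for week, mode in enumerate(["extend", "shorten", "delete_now"], start=1):
    ber = apply_mode(ber, mode)
    print(f"week {week}: mode={mode}, BER={ber:.3f}, "
          f"destroyed={ber >= ECC_SATURATION}")
```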
**V. SYSTEM IMPLEMENTATION RESULTS ANALYSIS**

The proposed RTPP system has two core components: the encryptor and the AMHC implementation. In the following, we analyze the RTPP system from two aspects: the security of the system, and the performance of the automatic adjustment of the private-data lifecycle by AMHC.

_A. SECURITY ANALYSIS OF THE SYSTEM_

The security of the RTPP system is reflected mainly in the system's resistance to attacks and in the security of the private data stored in the system. The performance of the encryptor in the RTPP system can be tested, and the security of the cipher judged from these tests. We tested the proposed CHMAC algorithm in the RTPP system and compared it with a PRNG algorithm based on the Hash function and a PRNG algorithm based on the MAC function. We compared the three algorithms in terms of energy consumption, encryption time, and memory usage to evaluate the overall performance of the CHMAC algorithm, and we evaluated the system's resistance to attacks by studying the relationship between the plaintexts and keys generated by the CHMAC algorithm. The six experimental results below show that, compared with the Hash-based and MAC-based PRNG algorithms, the CHMAC algorithm introduces chaotic mapping and uses nonlinear elements; although its performance indices lie between the two, its resistance to attacks is the strongest, providing a stronger security defense for the RTPP system.

1) ENERGY CONSUMPTION OF ENCRYPTION

For electronic devices, the battery directly supplies the energy, so we calculate the energy consumption of the encryptor by measuring how much battery power the encryption algorithm uses. Using a multimeter to measure the voltage and current required while the algorithm runs, we first obtain the power from P (W) = U (V) × I (A), where the voltage and current values are each averaged over thirty runs of the algorithm. The average power is then substituted into Q (J) = P (W) × T (s), which gives the energy consumed by one run of each encryption algorithm; here T is the time required to execute the algorithm once, again averaged over thirty measurements. Fig. 6 shows the energy consumed by the three encryption algorithms when processing 128, 256, and 512 bytes of data. The figure shows that the energy demand of the CHMAC algorithm lies between the other two. Since energy consumption is directly related to an algorithm's complexity, one cannot judge the quality of an encryption algorithm by energy consumption alone. Among the three algorithms, CHMAC introduces chaotic mapping into the embedded Hash, which increases the algorithm's complexity and improves encryption security.

**FIGURE 6. Average energy consumption of the three encryption algorithms for the three data sizes (mJ).**
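The energy measurement procedure reduces to two products. A minimal sketch, assuming the thirty voltage, current, and runtime samples have already been collected (the sample values below are invented for illustration):

```python
# Sketch of the paper's energy measurement: average thirty multimeter
# samples, then P = U * I and Q = P * T. Sample values are made up.
def mean(xs):
    return sum(xs) / len(xs)

voltages_V = [3.31, 3.29, 3.30] * 10       # 30 assumed samples
currents_A = [0.052, 0.050, 0.051] * 10
runtimes_s = [0.0041, 0.0043, 0.0042] * 10

P = mean(voltages_V) * mean(currents_A)     # average power in watts
Q = P * mean(runtimes_s)                    # energy per run in joules
print(f"P = {P*1e3:.2f} mW, Q = {Q*1e6:.2f} uJ")
```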
2) TIME CONSUMPTION OF ENCRYPTION

In addition to energy consumption, an algorithm's execution time is also a critical performance factor: all else being equal, the faster an algorithm completes encryption, the better its performance. The time consumed by the three algorithms to process 128, 256, and 512 bytes of data is shown in Fig. 7. The Hash algorithm has the fastest completion time because it has the lowest complexity; however, the keys it generates are less secure than those of the other two algorithms, so despite its speed it is not the best algorithm. Compared with the HMAC algorithm, the CHMAC algorithm improves the message-grouping iteration process, shortening the execution time of data encryption.

**FIGURE 7. Running time of the three encryption algorithms for the three data sizes (ms).**

3) THE MEMORY OCCUPATION OF ENCRYPTION

The proposed RTPP system works in standby mode while the host generates browsing data; once new private data is added, RTPP immediately enters working mode. System memory therefore needs to be occupied only while encrypting private data or adjusting its lifecycle. The lifecycle-adjustment phase mainly uses the NOR and 3D-NAND flash memories, so it is the encryption phase that demands more system memory. Because memory is limited, the encryptor must not occupy too much of it while still ensuring encryption speed and quality; the memory required to run an encryption algorithm is generally measured as RAM usage [35]. Fig. 8 shows the RAM usage of the three encryption algorithms: the Hash algorithm uses the least RAM, followed by the CHMAC algorithm and then the HMAC algorithm. Although CHMAC's memory usage is not the smallest, its encryption process is the most complex of the three; on balance, CHMAC is still the best choice.

**FIGURE 8. Memory usage of the three encryption algorithms (Bytes).**

The three evaluations above show that the proposed CHMAC algorithm performs slightly better overall, even though it is not the smallest on every individual metric, such as algorithmic complexity or the anti-interference ability of the encryption. For encryption algorithms, the length of the initial key determines the security level of the encryptor: the longer the key, the more resistant the algorithm is to attack and the higher its security. In performance terms, a longer key means a longer key-compression and iteration process, a longer overall encryption time, and more RAM usage. The CHMAC algorithm achieves optimal encryptor security without increasing this performance cost. In the following, three aspects of the CHMAC algorithm are analyzed: the correlation between the ciphertext and plaintext it generates, randomness, and resistance to attack. To make the analysis more accurate and reliable, 200 random text samples were generated with the Lorem-Ipsum library for this experiment [36]: 100 random texts of 5000 bytes and another 100 of 10000 bytes.

4) CORRELATION OF PLAINTEXT AND CIPHERTEXT

For an encrypted ciphertext, the lower its correlation with the plaintext, the less an attacker can recover of the related plaintext content, and the more secure the private content is. The correlation between plaintext and ciphertext can be determined by counting the ASCII character values in the plaintext and in the ciphertext.
As long as the ASCII value distributions of the plaintext and the ciphertext show no common pattern, the plaintext and ciphertext are uncorrelated and the encrypted private information is secure. For the resulting ciphertext, a distribution spread across the full 0 to 255 ASCII range indicates that the encryption is secure and the ciphertext has low predictability. Figs. 9(a) and (b) show the ASCII distribution of characters in the 5000-byte and 10000-byte plaintexts, respectively; Figs. 10(a) and (b) show the ASCII distributions of the ciphertext characters after encrypting random texts of the two sizes. Comparing the four graphs, the characters in the random texts before encryption are unevenly distributed, while after encryption the characters are uniformly distributed, indicating that the CHMAC encryptor has a relatively high security index.

**FIGURE 9. ASCII distribution of plaintext characters for 200 random texts in the CHMAC algorithm.**

**FIGURE 10. ASCII distribution of ciphertext characters for 200 random texts in the CHMAC algorithm.**

5) RANDOMNESS OF THE CIPHERTEXT

Encrypting the private data produces a ciphertext consisting only of binary digits. By counting the binary 0s and 1s produced by encryption separately, one can determine whether the encryptor satisfies the randomness property expected of encryption output. Theoretically, the output is best when 0s and 1s each account for fifty percent: the ciphertext then exhibits no discernible pattern and is hard to break, meaning the generated ciphertext is secure. Table 4 shows the counts of 0s and 1s in the output after encrypting the 200 random samples with the CHMAC algorithm, along with the respective percentages; each count is the average over the texts of that size.

**TABLE 4. Average number and percentage of ''0'' and ''1'' in 200 random encrypted samples.**

From the table, the average percentages of 0s and 1s in the ciphertext generated by the CHMAC encryptor are both close to 50%, so the encryptor designed in this paper fully satisfies the randomness requirement. The randomness of the ciphertext can also be measured by information entropy, based on the discrete probability of the characters in a random text; the more chaotic the ciphertext, the greater the uncertainty of each character. The information entropy of the characters in a random text is calculated as

$$H(S) = \sum_{S_i} P(S_i)\,\log_2 \frac{1}{P(S_i)},$$

where $P(S_i)$ is the probability of each ASCII character occurring in the ciphertext [37]. When the values of the encrypted string are fully distributed over the ciphertext, the ideal information entropy is 4 for a 5000-byte random text and 8 for a 10000-byte random text. Fig. 11 shows the information-entropy values for the random texts: the entropy of the ciphertext characters generated by the CHMAC algorithm is close to the ideal entropy value, which shows that it conforms to the design principle of the encryptor.

**FIGURE 11. Entropy values of ciphertext information for 200 random texts in the CHMAC algorithm.**
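Both randomness checks (the 0/1 balance and the information entropy) are easy to reproduce. In the sketch below, `os.urandom` stands in for real CHMAC ciphertext, since the CHMAC implementation itself is not public:

```python
# Sketch of the two randomness checks: 0/1 balance of the ciphertext bits
# and Shannon entropy of its byte values; os.urandom is a placeholder.
import math, os
from collections import Counter

ciphertext = os.urandom(5000)                   # placeholder ciphertext

bits = "".join(f"{b:08b}" for b in ciphertext)
ones = bits.count("1")
print(f"ones: {ones/len(bits):.2%}, zeros: {1 - ones/len(bits):.2%}")

counts = Counter(ciphertext)                    # byte-value histogram
n = len(ciphertext)
H = sum((c / n) * math.log2(n / c) for c in counts.values())
print(f"entropy: {H:.3f} bits/byte (8.0 is the ideal for uniform bytes)")
```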
6) ATTACK RESISTANCE OF CIPHERTEXT

The ciphertext generated by a sound cryptographic design must be highly resistant to external attacks to keep private information secure. Diffusion, confusion, and the avalanche effect are three basic principles of cryptographic design [38]. Diffusion lets each bit of information in the plaintext affect many bits of the ciphertext, which hides the statistical structure of the plaintext. Confusion makes the statistical relationship between the ciphertext and the key more complex, so that the key cannot be deduced even if the attacker obtains information about the ciphertext. The avalanche effect is an unstable-equilibrium property: when the plaintext or the key changes slightly, the ciphertext changes considerably, for example with half of its binary bits flipping. To test the ciphertext's resistance to attack, the remainder of this part analyzes the diffusion and confusion properties and the avalanche effect of the plaintext, key, and ciphertext. The diffusion and confusion properties between plaintext and ciphertext characters are calculated according to the completeness formula in Equation (5) [7], where $n$ is the number of bits of the plaintext input and $m$ is the number of bits of the ciphertext output of the encryption method:

$$d_c = 1 - \frac{1}{nm}\left|\left\{(i,j) \mid a_{ij} = 0\right\}\right|, \quad (i = 1, \ldots, n;\; j = 1, \ldots, m) \quad (5)$$

As a completeness metric over all bits, its ideal value is 1 for any encryption algorithm. Fig. 12 shows the computed diffusion and confusion properties of the CHMAC algorithm: the metric converges to 1 after the fourth iteration, consistent with the diffusion and confusion properties required of a cipher.

**FIGURE 12. Integrity of the CHMAC algorithm.**

The avalanche effect is tested under the assumption that half of the binary bits of the ciphertext flip when the plaintext or key changes by one bit. The avalanche value is calculated according to Equation (6), where $\#X$, $n$, and $W_H$ denote the number of ciphertext samples, the number of bits per sample, and the Hamming distance, respectively; $F(x)$ is the ciphertext and $F(x^{(i)})$ is the ciphertext of the input differing in the $i$-th position [7]:

$$d_{a1} = \frac{1}{\#X \cdot n} \sum_{i=1}^{n} \sum_{x \in X} W_H\!\left(F(x) \oplus F(x^{(i)})\right) \quad (6)$$

Fig. 13 shows how the avalanche value changes for the random texts. Since a flip of half of the ciphertext bits is assumed, the ideal avalanche value here is 1. The figure shows that the avalanche value of the CHMAC algorithm comes closest to the ideal value after seven rounds.

**FIGURE 13. Avalanche effect of CHMAC algorithm.**
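The core quantity in Equation (6), the mean Hamming distance between ciphertexts of inputs differing in one bit, can be measured for any keyed function. In this sketch SHA-256 stands in for the CHMAC encryptor, and the sample count and input size are arbitrary choices:

```python
# Avalanche sketch per Equation (6): flip each input bit and measure the
# Hamming distance between outputs. SHA-256 is a stand-in for CHMAC.
import hashlib, os

def enc(x: bytes) -> bytes:           # placeholder for the real encryptor
    return hashlib.sha256(x).digest()

def hamming(a: bytes, b: bytes) -> int:
    return sum(bin(p ^ q).count("1") for p, q in zip(a, b))

X = [os.urandom(16) for _ in range(50)]    # sample set
n = len(X[0]) * 8                          # bits per input
m = len(enc(X[0])) * 8                     # bits per output

total = 0
for x in X:
    fx = enc(x)
    for i in range(n):                     # x^(i): flip bit i of x
        xi = bytearray(x)
        xi[i // 8] ^= 1 << (i % 8)
        total += hamming(fx, enc(bytes(xi)))

d_a1 = total / (len(X) * n)                # mean flipped bits per one-bit change
print(f"avg flipped output bits: {d_a1:.1f} of {m} (ideal is {m/2:.0f})")
```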
_B. DISCUSSION OF AMHC AUTOMATIC REGULATION RULES IN EMBEDDED SYSTEMS_

The AMHC algorithm is a crucial part of the embedded RTPP system for achieving automatic adjustment of the data lifecycle. It compresses large amounts of private data and changes the lifecycle of private data in the DLLCT by changing the weights of leaf tags. Next, we evaluate the compression performance of the AMHC algorithm in software and the lifecycle-adjustment process in the RTPP embedded system, and finally we compare and analyze the advantages of the RTPP scheme against those of the traditional schemes.

1) COMPRESSION PERFORMANCE OF AMHC ALGORITHM

The compression performance of the AMHC algorithm is analyzed by comparing it with static Huffman coding and dynamic Huffman coding in terms of compressed data size and compression time. Given the complexity and diversity of private data, we selected five text types of fixed character size for testing: English, Chinese, Internet, Picture, and Random [35]. Table 5 gives the sizes of the five text types after compression by the three algorithms; the second column is the initial size of each text (in MB) and the third column the size after compression by each algorithm (in bytes). The table shows that the data compressed by the AMHC algorithm is smaller than that of the other two algorithms, and some of the data compressed by dynamic Huffman coding is smaller than with static Huffman coding. Besides compressed size, compression time is also critical to good compression performance. Table 6 shows the compression times of the five text types under the three methods, each value being the average over 100 executions of each text type in each algorithm. As the table shows, static Huffman coding is the fastest, dynamic Huffman coding takes roughly twice as long as static, and the AMHC algorithm is slightly slower than dynamic Huffman coding. Although the AMHC algorithm's execution time is slightly longer, considering the functions it implements and the size of the compressed data, its compression performance should be regarded as the best of the three.

**TABLE 5. Compression performance of three algorithms.**

**TABLE 6. Coding execution time of three algorithms.**
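The benchmark methodology of Tables 5 and 6 (compressed size plus mean time over 100 runs) can be mimicked with any compressor. Since the paper's three Huffman variants are not public, zlib stands in below:

```python
# Benchmark sketch mirroring Tables 5-6: compressed size and mean time
# over 100 runs; zlib is a stand-in for the three Huffman coders.
import time, zlib, os

samples = {
    "English": b"the quick brown fox " * 500,   # highly compressible
    "Random":  os.urandom(10000),               # incompressible
}

for name, data in samples.items():
    t0 = time.perf_counter()
    for _ in range(100):
        out = zlib.compress(data)
    avg_ms = (time.perf_counter() - t0) / 100 * 1e3
    print(f"{name:8s} {len(data):6d} -> {len(out):6d} bytes, {avg_ms:.3f} ms")
```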
2) PERFORMANCE OF DATA LIFECYCLE ADJUSTING IN RTPP SCHEME

Each tag in the DLLCT is valid for one year, and the AMHC algorithm decides whether a tag's lifecycle is extended or shortened. The key to this algorithm is the construction of the Huffman tree, whose leaf-tag weights change every day. By testing whether the leaf weight of the level-1 sensitive tag in the fourth tag group exceeds 3, the system determines whether the lifecycle of the user's fourth group of privacy tags is lengthened or shortened. If the leaf weight of the level-1 sensitive tag is greater than 3, the first three levels of sensitive tags in the fourth group are set to a shortened lifecycle with a one-twelfth decrease in leaf weight, the remaining sensitive tags are set to an extended lifecycle with a one-twelfth increase in leaf weight, and the related tags in the DLLCT are recorded one by one as extended or shortened. If the leaf weight of a level-1 sensitive tag is less than 3 and no immediate-deletion command has been issued, the lifecycle of that data is shortened automatically by default: all of the user's fourth-group sensitive tags are recorded as shortened, and their leaf weights are reduced by one-twelfth. When the data lifecycle is extended, a small number of electrons flow in at the 3D flash interface, reducing the BER and extending the data lifecycle. Fig. 14 clearly shows that the electron influx at the flash interface dominates and the number of electrons at the oxide interface increases, so the BER of the data changes from week to week.

**FIGURE 14. Electronic migration for BER reduction.**

Fig. 15 shows the change in BER over eight days for an extended data lifecycle: the BER decreases by one-twelfth. When the data lifecycle is shortened, a small electron outflow occurs at the 3D flash interface, so the BER increases by one-twelfth accordingly. Fig. 16 shows the predominance of outflowing electrons and the decrease of electrons at the oxide-layer interface, and Fig. 17 shows the change in BER within eight days for a shortened data lifecycle.

**FIGURE 15. BER variation over an extended data lifecycle of eight days.**

**FIGURE 16. Electronic migration for increasing BER.**

**FIGURE 17. BER variation over a shortened data lifecycle of eight days.**

3) PERFORMANCE OF IMMEDIATE DATA DELETION IN RTPP SCHEME

When the system receives a delete command for some data, the AMHC algorithm immediately zeroes the weight of the corresponding leaf tag, and the related tags in the DLLCT are deleted accordingly. A large number of electrons then flow out at the 3D flash interface so that the BER reaches its maximum, and the immediate-delete command is realized. Fig. 18 is a schematic of the electron flow, showing that almost no electrons remain at the oxide-layer interface. Fig. 19 shows a hypothetical case in which the immediate-deletion command is received on the fourth day, plotting how the BER of the data changes over eight days.

**FIGURE 18. Electronic migration for maximum BER.**

**FIGURE 19. BER change over eight days for data with immediate deletion command.**

4) ADVANTAGES OF THE RTPP SCHEME AT WORK

**TABLE 7. Comparison of RTPP scheme and traditional scheme.**

As shown in Table 7, compared with the three traditional schemes, PP-SSS [4], Enhanced PP-SSS [5], and PDLCS [6], the RTPP scheme proposed in this paper has unique advantages in the following five respects. First, it sets a specific lifecycle for private data, which saves system memory and effectively improves efficiency; the traditional schemes have no specific lifecycle and only change the survival period of data through the BER, with the data permanently deleted in hardware only once the BER reaches zero. Second, the RTPP scheme encrypts private data with a dedicated encryption algorithm, whereas among the traditional schemes only PDLCS [6] encrypts the data, using a simple random encryption; in Part A of this section the encryption algorithms of the two schemes were tested, and the results show that the RTPP scheme's encryption algorithm is significantly better in both performance and security. Third, this scheme designs a data-label lifecycle table that classifies the private data in detail, so that deleting one user's privacy does not affect the remaining private data and the system keeps working normally; the traditional schemes make no such precise classification. The last two points concern whether each scheme can flexibly control the data lifecycle: the results show that this scheme can flexibly extend and shorten the data lifecycle and can immediately and permanently delete data in hardware within the specified lifecycle, which among the traditional schemes only PDLCS [6] can do.
The core technologies of the two schemes differ: the RTPP scheme uses the AMHC algorithm, while the PDLCS [6] scheme uses the Inverse Huffman-Coding VTH Modulation (IHVM) algorithm; the core ideas of both belong to dynamic Huffman coding. In Part B of this section the algorithms proposed by the two schemes were compared, and the experimental results show that the algorithm of the RTPP scheme is slightly better.

**VI. CONCLUSION**

To protect personal privacy and make private data ''automatically forgotten,'' this paper proposes RTPP, a flexible and adjustable private-data lifecycle-control scheme for embedded systems. The system encrypts private data using pseudo-random-function cryptography based on the chaos principle and completely deletes users' private data by controlling lifecycle tags. To avoid storing too much private data and occupying a large amount of system memory, RTPP links the compression of private data to its lifecycle regulation through a modified Huffman coding technique. This method can flexibly regulate the lifecycle of private data, maximizing the protection of users' privacy and security. The proposed solution could be further improved by studying the security metrics of the RTPP over various rounds, by probing whether the number of tag groups can be reduced while preserving high security, and by assessing the scheme's resistance to cryptanalytic attacks in this embedded setting.

**REFERENCES**

[1] P. Carey, ‘‘Outsourcing personal data processing,’’ in Data Protection: A Practical Guide to UK and EU Law, 5th ed. Oxford, U.K.: Oxford Univ. Press, 2018, pp. 175–176.

[2] W. Stallings, ‘‘Handling of personal information and deidentified, aggregated, and pseudonymized information under the California consumer privacy act,’’ IEEE Secur. Privacy, vol. 18, no. 1, pp. 61–64, Jan. 2020.

[3] A. Bayle, M. Koscina, D. Manset, and O. Perez-Kempner, ‘‘When blockchain meets the right to be forgotten: Technology versus law in the healthcare industry,’’ in Proc. IEEE/WIC/ACM Int. Conf. Web Intell. (WI), Dec. 2018, pp. 788–792.

[4] S. Tanakamaru, H. Yamazawa, and K. Takeuchi, ‘‘Privacy-protection solid-state storage (PP-SSS) system: Automatic lifetime management of internet-data's right to be forgotten,’’ in Proc. Symp. VLSI Circuits (VLSI Circuits), Jun. 2015, pp. C130–C131.

[5] H. Yamazawa, K. Maeda, T. Ogura Iwasaki, and K. Takeuchi, ‘‘Privacy-protection SSD with precision ECC and crush techniques for 15.5× improved data-lifetime control,’’ in Proc. IEEE 8th Int. Memory Workshop (IMW), May 2016, pp. 1–4.

[6] S. Suzuki, K. Mizoguchi, H. Watanabe, T. Nakamura, Y. Deguchi, K. Mizushina, and K. Takeuchi, ‘‘Privacy-aware data-lifetime control NAND flash system for right to be forgotten with in-3D vertical cell processing,’’ in Proc. IEEE Asian Solid-State Circuits Conf. (A-SSCC), Nov. 2019, pp. 231–234.

[7] Y. Liu, S. Tian, and W. Hu, ‘‘Design and statistical analysis of a new chaotic block cipher for wireless sensor networks,’’ Commun. Nonlinear Sci. Numer. Simul., vol. 17, no. 8, pp. 3267–3278, Aug. 2012.

[8] Y. Wang, K.-W. Wong, X. Liao, and T. Xiang, ‘‘A block cipher with dynamic S-boxes based on tent map,’’ Commun. Nonlinear Sci. Numer. Simul., vol. 14, no. 7, pp. 3089–3099, Jul. 2009.

[9] G. Zaibi, F. Peyrard, A. Kachouri, D. Fournier-Prunaret, and M. Samet, ‘‘Efficient and secure chaotic S-box for wireless sensor network,’’ Secur. Commun. Netw., vol. 7, no. 2, pp. 279–292, Feb. 2014.

[10] B. Liu and Q.
Chen, ‘‘A method of generating pseudorandom binary sequences based on 3D chaotic mapping,’’ in Proc. 3rd Int. Conf. Inf. _Manage. (ICIM), Apr. 2017, pp. 243–246._ [11] C. Jianqiu, X. Huarong, and L. Zhangli, ‘‘Image dual scrambling encryption algorithm based on parameter variable chaotic system,’’ in Proc. Int. _Conf. Electr. Inf. Control Eng., Apr. 2011, pp. 4238–4242._ [12] J. Peng, S. Pang, D. Zhang, S. Jin, L. Feng, and Z. Li, ‘‘S-boxes construction based on quantum chaos and PWLCM chaotic mapping,’’ in _Proc. IEEE 18th Int. Conf. Cognit. Informat. Cognit. Comput. (ICCI[∗]CC),_ Jul. 2019, pp. 1–6. [13] S. Patel, K. P. Bharath, and R. M. Kumar, ‘‘Symmetric keys image encryption and decryption using 3D chaotic maps with DNA encoding technique,’’ Multimedia Tools Appl., vol. 79, nos. 43–44, pp. 31739–31757, Nov. 2020. [14] Y. Li and X. Li, ‘‘Chaotic hash function based on circular shifts with variable parameters,’’ Chaos, Solitons Fractals, vol. 91, pp. 639–648, Oct. 2016. [15] J. S. Teh, A. Samsudin, and A. Akhavan, ‘‘Parallel chaotic hash function based on the shuffle-exchange network,’’ Nonlinear Dyn., vol. 81, no. 3, pp. 1067–1079, Aug. 2015. [16] S. M. S. Hussain, S. M. Farooq, and T. S. Ustun, ‘‘Analysis and implementation of message authentication code (MAC) algorithms for GOOSE message security,’’ IEEE Access, vol. 7, pp. 80980–80984, 2019. [17] Y. Yang, G. Cao, M. Qu, J. Huang, and Y. Gao, ‘‘HSATA: Improved SATA protocol with HMAC,’’ in Proc. 27th Int. Conf. Comput. Commun. Netw. _(ICCCN), Jul. 2018, pp. 1–6._ [18] S. I. Naqvi and A. Akram, ‘‘Pseudo-random key generation for secure HMAC-MD5,’’ in Proc. IEEE 3rd Int. Conf. Commun. Softw. Netw., May 2011, pp. 573–577. [19] B. J. Mohd, T. Hayajneh, and A. V. Vasilakos, ‘‘A survey on lightweight block ciphers for low-resource devices: Comparative study and open issues,’’ J. Netw. Comput. Appl., vol. 58, pp. 73–93, Dec. 2015. [20] Y. Wang, Z. Liu, J. Ma, and H. He, ‘‘A pseudorandom number generator based on piecewise logistic map,’’ Nonlinear Dyn., vol. 83, no. 4, pp. 2373–2391, Mar. 2016. [21] D. Erdos, ‘‘The ‘right to be forgotten’ beyond the EU: An analysis of wider G20 regulatory action and potential next steps,’’ J. Media Law, vol. 13, no. 1, pp. 1–35, Jan. 2021. [22] M. Nur and L. Andrawina, ‘‘Designing engineering data management system in research and development company,’’ J. Phys., Conf. Ser., vol. 1339, no. 1, Dec. 2019, Art. no. 012099. [23] D. Zhang, H. Wang, Y. Feng, X. Zhan, J. Chen, J. Liu, and M. Liu, ‘‘Implementation of image compression by using high-precision in-memory computing scheme based on NOR flash memory,’’ IEEE Electron Device Lett., vol. 42, no. 11, pp. 1603–1606, Nov. 2021. [24] Z. Lun, S. Liu, Y. He, Y. Hou, K. Zhao, G. Du, X. Liu, and Y. Wang, ‘‘Investigation of retention behavior for 3D charge trapping NAND flash memory by 2D self-consistent simulation,’’ Proc. Int. Conf. Simulation _Semiconductor Processes Devices (SISPAD), 2014, pp. 141–144._ [25] C. Gao, M. Ye, Q. Li, C. J. Xue, Y. Zhang, L. Shi, and J. Yang, ‘‘Constructing large, durable and fast SSD system via reprogramming 3D TLC flash memory,’’ in Proc. 52nd Annu. IEEE/ACM Int. Symp. Microarchitecture, Oct. 2019, pp. 493–505. [26] P. Poudel, B. Ray, and A. Milenkovic, ‘‘Microcontroller TRNGs using perturbed states of NOR flash memory cells,’’ IEEE Trans. Comput., vol. 68, no. 2, pp. 307–313, Feb. 2019. [27] Y. Yamaga, C. Matsui, Y. Sakaki, A. Kobayashi, and K. 
Takeuchi, ‘‘Real usage-based precise reliability test by extracting read/write/retentionmixed real-life access of NAND flash memory from system-level SSD emulator,’’ in Proc. IEEE Int. Rel. Phys. Symp. (IRPS), Apr. 2017, pp. PM12.1–PM12.5. [28] J.-M. Sim and Y.-H. Song, ‘‘Asymmetric read bias for alleviating cell-tocell interference in 3D NAND flash memory,’’ in Proc. IEEE Region Symp. _(TENSYMP), Aug. 2021, pp. 1–4._ [29] L. Bai, M. Wang, and J. Yi, ‘‘Design of NOR FLASH data read-write controller based on FPGA,’’ in Proc. 7th Int. Symp. Mechatronics Ind. _Informat. (ISMII), Jan. 2021, pp. 104–110._ [30] T. Nakamura, Y. Deguchi, and K. Takeuchi, ‘‘9.1x error acceptable adaptive artificial neural network coupled LDPC ECC for charge-trap and floatinggate 3D-NAND flash memories,’’ in Proc. IEEE Custom Integr. Circuits _Conf. (CICC), Apr. 2018, pp. 1–4._ [31] C. Ma, Z. Zhou, L. Han, Z. Shen, Y. Wang, R. Chen, and Z. Shao, ‘‘Rebirth-FTL: Lifetime optimization via approximate storage for NAND flash memory,’’ IEEE Trans. Comput.-Aided Design Integr. Circuits Syst., [early access, Oct. 26, 2021, doi: 10.1109/TCAD.2021.3123177.](http://dx.doi.org/10.1109/TCAD.2021.3123177) [32] S. T. Klein, S. Saadia, and D. Shapira, ‘‘Forward looking Huffman coding,’’ _Theory Comput. Syst., vol. 65, no. 3, pp. 593–612, Apr. 2021._ [33] A. Fruchtman, Y. Gross, S. T. Klein, and D. Shapira, ‘‘Weighted adaptive Huffman coding,’’ in Proc. Data Compress. Conf. (DCC), Mar. 2020, p. 368. [34] S. Tanakamaru, H. Yamazawa, T. Tokutomi, S. Ning, and K. Takeuchi, ‘‘19.6 hybrid storage of ReRAM/TLC NAND flash with RAID-5/6 for cloud data centers,’’ in IEEE Int. Solid-State Circuits Conf. (ISSCC) Dig. _Tech. Papers, Feb. 2014, pp. 336–337._ [35] J. Moon and S. Lee, ‘‘Design of H.264/AVC entropy decoder without internal ROM/RAM memories,’’ in Proc. 3rd Int. Symp. Commun., Control _Signal Process., Mar. 2008, pp. 1464–1467._ ----- [36] M. Sharafi, F. Fotouhi-Ghazvini, M. Shirali, and M. Ghassemian, ‘‘A low power cryptography solution based on chaos theory in wireless sensor nodes,’’ IEEE Access, vol. 7, pp. 8737–8753, 2019. [37] X.-J. Tong, Z. Wang, Y. Liu, M. Zhang, and L. Xu, ‘‘A novel compound chaotic block cipher for wireless sensor networks,’’ Commun. Nonlinear _Sci. Numer. Simul., vol. 22, nos. 1–3, pp. 120–133, May 2015._ [38] X.-Y. Wang and Q. Yu, ‘‘A block encryption algorithm based on dynamic sequences of multiple chaotic systems,’’ Commun. Nonlinear Sci. Numer. _Simul., vol. 14, no. 2, pp. 574–581, Feb. 2009._ YANAN ZHAO received the B.E. degree in the Internet of Things Engineering from Qufu Normal University, China, in 2020. She is currently pursuing the M.A.Eng. degree with the Faculty of Information Technology, Beijing University of Technology, China. Her research interests include security, privacy, and federated learning. NONG SI (Member, IEEE) received the M.S. degree in electrical engineering from the Blekinge Institute of Technology, Sweden, and the Ph.D. degree from the Electronic Engineering Department, Beijing University of Technology, China. His research interests include security, privacy, and communication networks. He is a member of the IET and CCF. YU SUN received the B.E. degree in telecommunication engineering from Anhui Polytechnic University, China, in 2021. She is currently pursuing the M.A.Eng degree with the Faculty of Information Technology, Beijing University of Technology, China. Her research interests include security, privacy, and federated learning. XIN GAO received the M.E. 
degree in automation engineering from the Artificial Intelligence and Automation Department, Beijing University of Technology, China, in 2003. His research interests include embedded systems and wireless communications. HAOPENG TONG is currently pursuing the B.E. degree in telecommunication engineering with the Faculty of Information Technology, Beijing University of Technology, China. His research interests include information systems and telecommunication networks. GENG YUAN is currently pursuing a degree with the Faculty of Natural Science, Kristianstad University, Sweden. He also studied and worked with the Blekinge Institute of Technology and Lund University, Sweden. His research interests include algorithms, applied machine learning, and statistical learning for data science. In 2007, he was awarded the Runner-Up Prize of the International Young Design Entrepreneur of 2007 by the British Council.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/ACCESS.2022.3162238?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/ACCESS.2022.3162238, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBYNCND", "status": "GOLD", "url": "https://ieeexplore.ieee.org/ielx7/6287639/9668973/09741841.pdf" }
2,022
[ "JournalArticle" ]
true
null
[ { "paperId": "b18ab95d9da9c2012af5f46c3022d6825f1da480", "title": "Rebirth-FTL: Lifetime Optimization via Approximate Storage for NAND Flash Memory" }, { "paperId": "7ca31fc720c0fe230a798b4291db1a540536512b", "title": "Implementation of Image Compression by Using High-Precision In-Memory Computing Scheme Based on NOR Flash Memory" }, { "paperId": "6cf8c5e005c08b6acca0a2c367e786f4d200e3c5", "title": "Asymmetric Read Bias for Alleviating Cell-to-Cell Interference in 3D NAND Flash Memory" }, { "paperId": "ddb4afdc5297839794086ea84b6e8bfd5bebde57", "title": "Design of NOR FLASH data read-write controller based on FPGA" }, { "paperId": "e2333b3a9bb883420d61f8308369eaf6c75e1df8", "title": "The ʻRight to be Forgottenʼ beyond the EU: An Analysis of Wider G20 Regulatory Action and Potential Next Steps" }, { "paperId": "3a404b682808512b0edb7c3c99ac62fe40d765b0", "title": "Symmetric keys image encryption and decryption using 3D chaotic maps with DNA encoding technique" }, { "paperId": "1cdb21c1820af1dd73322b1f12df3f2376f55bc5", "title": "Weighted Adaptive Huffman Coding" }, { "paperId": "1ee0c6368c09be2ce9543468607011f1c54f19ec", "title": "Handling of Personal Information and Deidentified, Aggregated, and Pseudonymized Information Under the California Consumer Privacy Act" }, { "paperId": "640830cdc1548f52d69f94f4057532b421eae730", "title": "Designing Engineering Data Management System in Research and Development Company" }, { "paperId": "587a0e307cbf3aff54ae55309ec1f3052c6ddb43", "title": "Privacy-Aware Data-Lifetime Control NAND Flash System for Right to be Forgotten with In-3D Vertical Cell Processing" }, { "paperId": "716cc56ef5fc5ace1b2dca013a9db23992cce61a", "title": "Constructing Large, Durable and Fast SSD System via Reprogramming 3D TLC Flash Memory" }, { "paperId": "b212f7680d2a952525e915f6193ba44585ef3807", "title": "S-boxes Construction Based on Quantum Chaos and PWLCM Chaotic Mapping" }, { "paperId": "edab158808eb363d3777f7ae718f54980a87f677", "title": "Analysis and Implementation of Message Authentication Code (MAC) Algorithms for GOOSE Message Security" }, { "paperId": "7372eb721d52a5f93d9a16b446bb9bd06df3ed33", "title": "Microcontroller TRNGs Using Perturbed States of NOR Flash Memory Cells" }, { "paperId": "1478b6f71c4ffb9588c67ebb4658645baea6d8e0", "title": "A Low Power Cryptography Solution Based on Chaos Theory in Wireless Sensor Nodes" }, { "paperId": "f2a0b8b2dd2dbef850ffe1355d2db86fc28c1371", "title": "When Blockchain Meets the Right to Be Forgotten: Technology versus Law in the Healthcare Industry" }, { "paperId": "a27ff1eb9a54df2fa0d8765f77e6818f574cedfa", "title": "HSATA: Improved SATA Protocol with HMAC" }, { "paperId": "c1275096a769115f27ff74cb36ae9b59675c116b", "title": "9.1x Error acceptable adaptive artificial neural network coupled LDPC ECC for charge-trap and floating-gate 3D-NAND flash memories" }, { "paperId": "734ea23e414877b4122e3be6de3d27e48cae2c3d", "title": "A method of generating pseudorandom binary sequences based on 3D chaotic mapping" }, { "paperId": "341ba2af6bb1d2741b6613ec220bdc3ba22cd36c", "title": "Real usage-based precise reliability test by extracting read/write/retention-mixed real-life access of NAND flash memory from system-level SSD emulator" }, { "paperId": "3e2e57eae8ac44ffc52e759ea91a6418bb930caa", "title": "Chaotic hash function based on circular shifts with variable parameters" }, { "paperId": "3f10c74d707d5d2eabaa66dfef1df2b3a5e06e80", "title": "Privacy-Protection SSD with Precision ECC and Crush Techniques for 15.5× Improved 
Data-Lifetime Control" }, { "paperId": "cfbf1312bb58d5beef1a161d29b97b5a4ce45c20", "title": "A survey on lightweight block ciphers for low-resource devices: Comparative study and open issues" }, { "paperId": "92f3072294032fe235b70da224c7e669660a2f81", "title": "A pseudorandom number generator based on piecewise logistic map" }, { "paperId": "173a13e6d2282ea7404598dd89122df4481939dc", "title": "Privacy-protection solid-state storage (PP-SSS) system: Automatic lifetime management of internet-data's right to be forgotten" }, { "paperId": "e9c727423c12880361eeb011b7c803841cdc9bf4", "title": "A novel compound chaotic block cipher for wireless sensor networks" }, { "paperId": "ddc29bfa830737026abdfee6c78da5000b5ba850", "title": "Parallel chaotic hash function based on the shuffle-exchange network" }, { "paperId": "3aa1fab511153b53dd2be24860bbe9e3f4a1275d", "title": "Investigation of retention behavior for 3D charge trapping NAND flash memory by 2D self-consistent simulation" }, { "paperId": "8eae37e3c515c5e50bb1b61a71b1614c68750b32", "title": "19.6 Hybrid storage of ReRAM/TLC NAND Flash with RAID-5/6 for cloud data centers" }, { "paperId": "4cd74da15447e086a91008681ad21efe369da85b", "title": "Efficient and secure chaotic S-Box for wireless sensor network" }, { "paperId": "2733c6aa72de6e0572645f2ea931c13a2c211c9d", "title": "Design and statistical analysis of a new chaotic block cipher for Wireless Sensor Networks" }, { "paperId": "1924b17c81fa74a2811a612911fe731a2e5e7de8", "title": "Pseudo-random key generation for secure HMAC-MD5" }, { "paperId": "00d2693766f4ec18b8dd8b11d6b444d8f39369c6", "title": "Image dual scrambling encryption algorithm based on parameter variable chaotic system" }, { "paperId": "a270b633bc027b99ba94861cfd3e9528f6e9f1ce", "title": "A block cipher with dynamic S-boxes based on tent map" }, { "paperId": "3e1d197429e66dca7759f173c07010d4c0fc0663", "title": "Design of H.264/AVC entropy decoder without internal ROM/RAM memories" }, { "paperId": null, "title": "‘‘ForwardlookingHuffmancoding,’’" }, { "paperId": null, "title": "Outsourcing personal data processing,'' in Data Protection a Practical Guide to UK and EU Law" }, { "paperId": "6385c787ab60bf21114b24a3f1d9f5c1f1aaa101", "title": "Security Analysis of a Block Encryption Algorithm Based on Dynamic Sequences of Multiple Chaotic Systems" }, { "paperId": null, "title": "of Technology, China" } ]
17,376
en
[ { "category": "Law", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/018c421e17e7fb7b93c6412ca8f3069912ea1b8d
[]
0.930798
Legal Conditions in the Field of Digital Assets and Feasibility Analysis of the Application of Blockchain Technology: the Support and Limitations of the Field in the Macro Background
018c421e17e7fb7b93c6412ca8f3069912ea1b8d
Highlights in Business, Economics and Management
[ { "authorId": "2164960636", "name": "Ziqi Zhou" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
With the development of blockchain technology and digital assets, the problem pages of digital assets at the legal level are becoming more and more prominent. This article will start with smart contracts and combine the case of Shenzhen Ethereum to analyze the legal issues based on blockchain technology and digital assets. The current status of conservation and its possible future development directions are analyzed. This article will specifically discuss the issue of contract law regulation of smart contracts from the perspective of legal system construction, as well as the compatibility between smart contracts and current contract law. Finally, the following conclusions are drawn: Firstly, consciously accepting the law needs to adapt to social changes and accepting the fact that the law needs to be adjusted. Secondly, at the operational level, the use of technology must comply with. Thirdly, at the research level, relevant legal research must be done, and legal scholars must have inter-professional knowledge and capabilities.
Highlights in Business, Economics and Management **EMFT 2022** Volume 2 (2022)

# Legal Conditions in the Field of Digital Assets and Feasibility Analysis of the Application of Blockchain Technology: the Support and Limitations of the Field in the Macro Background

## Ziqi Zhou*

School of Finance, University of International Business and Economics, Beijing, China

*Corresponding author. Email: 201741020@uibe.edu.cn

**Abstract.** With the development of blockchain technology and digital assets, the problems of digital assets at the legal level are becoming more and more prominent. Starting from smart contracts and drawing on the Shenzhen Ethereum case, this article analyzes the legal issues arising from blockchain technology and digital assets, the current status of their legal protection, and possible future development directions. The article specifically discusses the contract-law regulation of smart contracts from the perspective of legal-system construction, as well as the compatibility between smart contracts and current contract law. It draws the following conclusions. First, one must consciously accept that the law needs to adapt to social change and that the law will need to be adjusted. Second, at the operational level, the use of technology must comply with the law. Third, at the research level, relevant legal research must be carried out, and legal scholars must have interdisciplinary knowledge and capabilities.

**Keywords: Smart Contracts, Legislation, Digital Assets.**

## 1. Introduction

**1.1** **Background**

With the rapid application and popularization of blockchain technology, digital-asset NFTs on the blockchain have attracted widespread attention from academia and industry. From the user's point of view, a smart contract is usually thought of as an automatically secured account: for example, a program that releases and transfers funds when certain conditions are met [1].

**1.2** **Smart Contracts**

From a technical point of view, smart contracts can be regarded as web servers, except that these servers are set up not on the Internet via IP addresses but on the blockchain, so that a specific contract program can run on them. Unlike web servers, smart contracts are visible to everyone, because their code and state are on the blockchain (assuming the blockchain is public) [2]. Moreover, unlike web servers, smart contracts do not depend on specific hardware; in fact, the code of a smart contract is executed by all devices involved in mining (which also means that the computing power devoted to a single contract is limited, although the automatic adjustment of mining difficulty moderates this effect). Smart contracts are programmed in an assembly-like language on the blockchain. People usually do not write this bytecode themselves but compile it from a higher-level language such as Solidity, a specialized language similar to JavaScript. The bytecode drives the functionality of the blockchain, so the code can easily interact with it, for example by transferring cryptocurrency and recording events.

**1.3** **NFTs & Blockchain**

As new phenomena in the network industry, NFTs and the Metaverse have produced a series of conflicts with existing institutional systems and legal regulations.
In 1994, the cryptographer Nick Szabo proposed the concept of smart contracts, arguing that a smart contract "is a set of offers and promises expressed externally by code, and can cover the automatic behaviour of two parties in accordance with the offers and promises to perform the agreement." For nearly two decades after the concept was proposed, it remained stranded for lack of a credible execution environment suitable for smart contracts. Only with the emergence of blockchain technology could participants executing smart contracts in systems without third-party guarantees still trust the validity of each other's identities and of contract execution, making automatic transactions of digital assets possible. On this basis, although the definition of smart contracts is still controversial, it should be undeniable that blockchain technology is the basic condition for smart contracts to exist. A smart contract is therefore a computer program deployed on the blockchain, existing in the form of computer code, that can automatically execute the terms of a contract. However, given that blockchain, as a new underlying network-data technology, is at an early stage of development, and that blockchain and smart contracts are themselves technically difficult to understand, legal research on them is still at the stage of grasping basic technical principles and conducting basic theoretical discussion. The number of relevant legal research results at home and abroad is relatively small; most are directional and exploratory studies, and the content of existing research is relatively scattered. Legal research abroad focuses on exploring the legal fields in which blockchain and smart contracts will bring paradigm shifts.

**1.4** **Current Situation**

In terms of legislation and the judiciary, the legal status of smart contracts has gradually been recognized. For example, in the United States, several states and cities, such as Arizona, have enacted legislation on smart contracts, affirming their legal validity and role and clarifying the relevant laws and regulations to protect the rights of the parties [3,4]. In addition, jurisdictions with more developed technology sectors, such as the European Union, the United Kingdom, and Australia, have also legislated on or regulated smart contracts [5,6]. However, most of these enactments are frameworks that recognize the legal status of smart contracts without detailed legislation. At present, China has no special legislation on smart contracts, nor does it explicitly recognize their legality. Domestic legal research broadly agrees that smart contracts should fall within the scope of contract law, but there are no systematic results on how the contract-law system should be adjusted accordingly. It is more often pointed out that blockchain and smart contracts have the potential to change the boundaries of technology and law and to form new governance models, but technical solutions, while improving efficiency and certainty, may also threaten non-efficiency values of the law, such as equality. Therefore, when conducting legal research, the value dimension of the law should be preserved while considering the institutional innovation brought about by technology.
In view of this situation, this article starts with smart contracts. Its purpose is to study the contract-law regulation of smart contracts under blockchain technology, to explore the compatibility of smart contracts with current contract-law norms, to clarify the feasibility of smart contracts operating under the current contract law system and how that system should be reformed and innovated, to solve the problem of docking smart contracts with the current contract law system, and to provide suggestions for the contract-law design needed for smart contracts to be genuinely applied in the market.

## 2. Smart Contracts & Blockchain

A smart contract should have the following characteristics. (1) Purely electronic nature: a smart contract is read as computer code and executes instructions under trigger conditions to complete performance of the contract automatically. (2) Software execution: compared with traditional contracts, once a smart contract is established its performance no longer depends on the behaviour of the parties; the computer program completes the execution of the contract and confirms or transfers the digital assets to which the subject matter of the contract points [7,8]. (3) Special object: since a smart contract is, after all, a virtual representation deployed in a computer program, its execution is limited to changes of data and cannot directly control physical entities in reality. The object of a smart contract should therefore be assets that exist in the form of electronic data, such as digital currency, virtual property and other digital assets, or tokenized real assets (for example real estate ownership, equity, and intellectual property rights, once registration of the rights and interests is completed on the chain), where changes in the on-chain vouchers determine the reality of the changes in rights [8]. (4) Automatic performance: performance of the contract no longer depends on the creditor's demand and the debtor's payment; the smart contract program completes performance automatically [7,8].

## 3. Examples of Digital Assets in the Legal Field

In 2020, a local court in Shenzhen ruled that Ethereum is legal property. In the judgment the court clearly stated that although Ethereum cannot circulate as currency in China, as virtual property its owner can control and manage the coins held, pay with them in specific ways, transfer them, and trade them publicly for currency; it has a certain economic value and belongs to "property" in the sense of the criminal law.

In recent years the German federal government, together with the Federal Ministry of Finance and BaFin (the Federal Financial Supervisory Authority), has issued a number of laws and regulations aimed at laying a solid foundation for digital assets. One of them regulates how institutions store digital assets in their custody. In addition, the Electronic Securities Act and, more recently, the Funding Locator Act have been introduced. Although smaller countries like Switzerland are nimbler and more progressive than Germany, the German government is making progress in establishing a solid regulatory foundation for tomorrow's capital markets.
At the same time, Europe as a whole is making great strides. While the above legal and regulatory initiatives are being implemented in Germany, the introduction of the Markets in Crypto-Assets (MiCA) regulation is being pursued across Europe. MiCA represents a universal regulatory effort with a speed and determination rarely seen in European bureaucracies, and the European Commission could enact it by the end of 2022. The regulatory framework covers all possible types of blockchain-based assets and applies a unified regulation to all 450 million EU citizens. This is especially notable at a time when U.S. regulators are still determining which agencies have jurisdiction over crypto assets. Of course, some aspects of the MiCA regulation are not optimally addressed. However, given the speed with which the regulation has been advanced and its general relevance, it is well worth remembering that businesses need safety and protection before they are willing to make any investment.

Digital assets are often viewed as property by market participants. Property and property rights are vital to modern societies, economies and legal systems, and they should therefore be recognized and protected. The law of England and Wales is flexible enough to accommodate digital assets. However, certain amendments of the law need to be made to ensure that digital assets are consistently recognized and protected. For example, the law recognizes that digital assets can be property and can be "owned"; it does not, however, recognize the possibility that digital assets can be "possessed", as the concept of possession is currently limited to physical objects. This has implications for how digital assets are transferred, used as security and otherwise protected under the law. Reforming the law to provide legal certainty would create a solid foundation for the development and adoption of digital assets. It would also encourage the use of the law of England and Wales, and the jurisdiction of England and Wales, in transactions involving digital assets. Such reform calls for a legal classification of digital assets and an analysis focused on ownership interests, taking into account the specific issues that arise in various situations, such as secured transactions, the applicable law in cross-border transactions, insolvency, and the legal status of intermediaries. The approach to be followed should be neutral, seeking to accommodate different types of assets and technologies as well as different legal cultures. The principles identified should reflect best practice and international standards and enable jurisdictions to take a common approach to the legal issues arising from the transfer and use of digital assets.

## 4. Discussion

The contract-law regulation of smart contracts studied above essentially reflects the relationship between new technologies and law in the current era: in the field of contract law, how exactly do new technologies such as blockchain and smart contracts coexist with traditional law, and in which areas does the law need to be modified, or even to compromise, for the technology? Taking the dispute-resolution approach of smart contracts as an entry point allows the nature of these problems to be examined more clearly.
When the assets in a smart contract are stolen by hackers, when digital assets are damaged and cannot be transferred or paid, or when tokenized or digitized real assets are damaged and cannot be delivered, the remedies the parties can seek (not limited to legal remedies) fall roughly into three categories. (1) Platform relief: after the unstoppable execution of the contract has completed, the injured party proves the situation to the blockchain platform or community; the smart contract community issues an agreement after reaching consensus, or it may even be necessary to "fork" the blockchain in order to modify the blockchain data. (2) Public relief: a ruling issued by centralized trust institutions, represented by the courts, with the authority's decision balancing the interests of the parties; one may also consider creating a new specialized adjudication agency for the increasingly large blockchain industry, staffed by professionals better able to handle smart contract disputes. (3) Self-help: when the smart contract is created, various emergencies, including force majeure and similar situations, are written into the smart contract code, relying on oracle technology; when such an event occurs, the oracle captures the real-world data and triggers the smart contract to allocate losses and balance benefits automatically (see the sketch below).

A careful analysis of the relationship among these three remedies gives a clearer picture of the relationship between technology and law. In the first approach, the subject providing relief is the blockchain platform itself. Its advantage is that it preserves the decentralized "advantage" of a blockchain that excludes judicial intervention. For example, if a blockchain wants to modify data by way of a "hard fork", it needs the consent of more than 51% of the nodes on the chain. If smart contracts are to be popularized in social life on a large scale while disputes are resolved within the system by the platform itself, the smart contract community must develop its own dispute-resolution mechanism so as to adjust community disputes arising from smart contracts effectively and properly and to respond to user demands for amendment.

The second approach is the traditional legal solution, in which the subject providing relief is the state's public authority: the court's ruling seeks to restore the original property status of the parties or to enforce against the property value on the blockchain. There are two ways to compensate for damage. One is that the court forces the parties to conclude a new smart contract that rearranges ownership of the property. The other is that, on the premise that the state establishes a sovereign blockchain, the court's ruling can be executed directly through the sovereign blockchain, building new blocks whose new data states replace the old data by virtue of state authority. The first way consumes judicial resources; the second requires establishing a sovereign blockchain, and whether that destroys the decentralized nature of the blockchain is still controversial. The advantage of the third approach is that it best preserves the characteristics of the blockchain: the subjects of the relief are the parties themselves, and how much relief can actually be achieved depends on the technology.
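The following minimal sketch illustrates the third ("self-help") remedy just described, assuming a push-style oracle: a force-majeure clause is embedded as code, and when the oracle reports a qualifying event the contract reallocates losses automatically. Everything here is invented for illustration: `ForceMajeureClause`, `on_oracle_report`, and the 50/50 split are not a real oracle API or any standard clause.

```python
from dataclasses import dataclass

@dataclass
class ForceMajeureClause:
    """Toy self-help clause: losses are re-allocated automatically on an
    oracle-reported force-majeure event, without courts or the platform."""
    buyer_stake: int
    seller_stake: int
    settled: bool = False

    def on_oracle_report(self, event: str) -> dict:
        # A real oracle would deliver signed external data on-chain; here a
        # plain string stands in for that report.
        if self.settled:
            raise RuntimeError("contract already settled")
        if event == "force_majeure":
            # The clause agreed at creation time: split the loss 50/50.
            pool = self.buyer_stake + self.seller_stake
            self.settled = True
            return {"buyer": pool // 2, "seller": pool - pool // 2}
        # No qualifying event: normal performance continues unchanged.
        return {"buyer": self.buyer_stake, "seller": self.seller_stake}

clause = ForceMajeureClause(buyer_stake=70, seller_stake=30)
print(clause.on_oracle_report("force_majeure"))  # {'buyer': 50, 'seller': 50}
```

The sketch also makes the text's caveat visible: the remedy is only as good as the oracle's ability to capture the real-world event, which is why this route depends so heavily on the development of oracle technology.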
Enforceability here can be regarded as a remedy provided by pure technology; the disadvantage is that the remedy depends heavily on the development of the technology itself, specifically on the progress of oracle technology and the realization of Internet-of-Things functions. Moreover, a contract formulated under bounded rationality can never cover all possibilities.

It can be seen that none of the three approaches alone can be implemented across social life within a short period of time. A better approach is to apply all three at the same time, on the premise that their boundaries are clarified and their respective advantages and disadvantages understood. And not only blockchain: artificial intelligence, the Internet of Things, big data and the like can all be analyzed by analogy.

Kevin Werbach has described the complex relationship between blockchain and law in his book in three modes: blockchain supplements law, blockchain complements law, and blockchain replaces law [9]. The value of the law lies in establishing rights and obligations between the parties; with rights and obligations as the framework and the protection of public power behind them, conduct is traceable and legally binding. Contract law adjusts the legal relationship of contract and endows the parties' claims with binding force. The German legal philosopher Radbruch believed that property rights are the end while creditor's rights are only the means: debt is the dynamic factor in the legal world, carrying the gene of its own extinction, for once its purpose has been achieved it is extinguished [10]. In a smart contract, performance is executed automatically by the smart contract technology and the purpose of the transaction is achieved; there is no need to endow the parties with claims that bind the other side. To that extent the technology acts as an alternative to the law, and the law, as a normative technology, faces the challenge of the new technologies emerging with social progress.

## 5. Suggestions

That new technology challenges, and partly replaces, the law is an indisputable fact. Likewise, if technology can replace the law in defusing risks and resolving disputes, it also threatens the traditional legal professions, such as judges and lawyers. If contracts can be computed, artificial intelligence can draw up contracts through machine learning; if the oracle achieves true and accurate capture of external data, smart contracts can resolve disputes automatically; in the networked era, the fate of the traditional judicial system may be completely subverted. The question remains: can technology really replace the law completely? The answer is no; but that the legal profession must stand at the center of the winds of the times and undergo a degree of paradigm shift, the answer is yes. Under the influence of the digital migration, many of our traditional social relationships are undergoing, or will undergo, dramatic changes. Even so, Szabo still believes that classic contract law retains its reasonableness: replacing contract law would come at a high price, and it remains necessary to retain it. What matters is how to better align our hard-won laws with the digital age.
First of all, in terms of cognition, the encroachment of technology on legal territory is in fact the law breaking through its traditional mode of implementation, a tendency toward deploying and realizing legal norms in technology, and this is what "code is law" should mean. Secondly, in terms of operation, code must carry laws and regulations, and this can start from the implementation of legal principles: whatever technology is applied, it must comply with the requirements of traditional legal principles such as public order and good morals and honesty and trustworthiness, and this can be guaranteed by external supervision. A further research direction is to place a country's specific laws and regulations on the smart contract itself: then not only the automatic execution of various contracts but also the automatic execution of some laws and regulations could be realized, something current smart contract platforms have not achieved. Carrying the laws of the state on the blockchain should be the goal, and the guarantee, of the construction of a "sovereign blockchain". In fact, the legal practice of contract modularization and of regulatory sandboxes in the fintech domain has already opened the door to the era of legal coding. Finally, in terms of research, work on such new legal issues requires researchers no longer to confine themselves to the traditional field of law, but to master, and consider in a coherent way, the corresponding technical fields and the other disciplines, such as economics, needed for a complete social analysis. In the study of computational jurisprudence, for example, we can break out of the traditional legal research method of accumulated experience and text analysis when investigating legal problems.

## 6. Conclusion

In summary, there is still a gap between digital assets and the law. However, many countries have already responded and set an example. To push legislation further, people should recognize the necessity for the law to change with the development of technology; in turn, technology can be used to better understand what should be done and to make the law reasonable. The original intention of legislation is always to reduce disputes and make life easier, but however far human technology develops, conflict cannot be eradicated. The law has a natural defective gene, lag, and it may never catch up with the pace of technological development; yet even with this defect the law has solid backing, and the law, born for disputes, is always the last line of defense against social conflict. Even though the continuous emergence and vigorous development of high technology indeed shows us technology eroding the legal territory, the law, stepping forward with it, remains that defense.

## References

[1] UNIDROIT. Digital assets and private law. June 2, 2021. Accessed on July 26, 2022. Retrieved from: https://www.unidroit.org/work-in-progress/digital-assets-and-private-law/#1456405893720a55ec26a-b30a.
[2] P. Sandner. Digital assets: The future of capital markets. Forbes. August 24, 2021. Accessed on July 26, 2022. Retrieved from: https://www.forbes.com/sites/philippsandner/2021/08/24/digital-assets-the-futureof-capital-markets/?sh=6b359d1a6a57.
[3] M.H.K. Tank, M.F. Radcliffe, and E.S.M. Caires.
Blockchain and digital assets news and trends. DLA Piper. April 19, 2022. Accessed on July 26, 2022. Retrieved from: https://www.dlapiper.com/en/us/insights/publications/2022/04/blockchain-and-digital-assets-news-andtrends/.
[4] U.S. Securities and Exchange Commission. Framework for "Investment Contract" Analysis of Digital Assets. 2022.
[5] A.J. Borrelli, L. Berlajolli, R.N. Holup, H. Ricker. Digital Assets: At The Intersection Of Law, Regulation, Public Policy And Technological Innovation. January 14, 2022. Accessed on July 26, 2022. Retrieved from: https://www.mondaq.com/unitedstates/commoditiesderivativesstockexchanges/1150490/digital-assets-at-the-intersection-of-law-regulation-65279public-policy-andtechnological-innovation2020-ccaf-legal-regulatory-considerations-report.
[6] P. Athanassiou, T. Juutilainen, D. Philippe, et al. ELI Principles on the Use of Digital Assets as Security. 2022.
[7] C. Ngai, T. Ma. Understanding the legal status of cryptoassets. August 2021. Retrieved on May 15, 2022, from https://www.hk-lawyer.org/content/understanding-legal-status-cryptoassets.
[8] M.J. Zuckerman. Tennessee Passes Bill Recognizing Blockchain, Smart Contracts For Electronic Transactions. March 2018. Accessed on July 26, 2022. Retrieved from: https://cointelegraph.com/news/vtennessee-passes-bill-recognizing-blockchain-smart-Contracts-forelectronic-transactions.
[9] M. Huillet. Arizona Blockchain Bill Signed Into State Law. April 6, 2018. Accessed on July 26, 2022. Retrieved from: https://cointelegraph.com/news/arizona-blockchain-billsigned-into-state-law.
[10] H. Barringer, C.S. Pasareanu, D. Giannakopolou. Proof rules for automated compositional verification through learning. In: Proc. of the 2nd International Workshop on Specification and Verification of Component-Based Systems, 2003.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.54097/hbem.v2i.2361?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.54097/hbem.v2i.2361, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBYNC", "status": "HYBRID", "url": "https://drpress.org/ojs/index.php/HBEM/article/download/2361/2263" }
2,022
[]
true
2022-11-06T00:00:00
[ { "paperId": "6fa05560f47cbd96bf472138223309a8d88d29e3", "title": "Proof Rules for Automated Compositional Verification through Learning" }, { "paperId": null, "title": "Digital assets and private law-UNIDROIT" }, { "paperId": null, "title": "Blockchain and digital assets news and trends. DLA Piper" }, { "paperId": null, "title": "Digital assets: Digital Assets: At The Intersection Of Law, Regulation, Public Policy And Technological Innovation" }, { "paperId": null, "title": "Framework for Investment Contract" }, { "paperId": null, "title": "Tennessee Passes Bill Recognizing Blockchain, Smart Contracts For Electronic Transactions" }, { "paperId": null, "title": "Arizona Blockchain Bill Signed Into State Law" }, { "paperId": null, "title": "Understanding the legal status of cryptoassets . August 2021" }, { "paperId": null, "title": "Digital assets: The future of capital markets" } ]
5,478
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/018e933015320e427d83d8caaa3fa45980b9f21b
[ "Computer Science" ]
0.839277
Certificate-Based Signcryption Scheme for Securing Wireless Communication in Industrial Internet of Things
018e933015320e427d83d8caaa3fa45980b9f21b
IEEE Access
[ { "authorId": "72426080", "name": "Insaf Ullah" }, { "authorId": "143803205", "name": "Abdullah Alomari" }, { "authorId": "50641507", "name": "Ako Muhammad Abdullah" }, { "authorId": "2143693553", "name": "Neeraj Kumar" }, { "authorId": "27071100", "name": "Amjad Alsirhani" }, { "authorId": "143789028", "name": "F. Noor" }, { "authorId": "1734991546", "name": "Saddam Hussain" }, { "authorId": "2115772251", "name": "Muhammad Asghar Khan" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://ieeexplore.ieee.org/servlet/opac?punumber=6287639" ], "id": "2633f5b2-c15c-49fe-80f5-07523e770c26", "issn": "2169-3536", "name": "IEEE Access", "type": "journal", "url": "http://www.ieee.org/publications_standards/publications/ieee_access.html" }
The Industrial Internet of Things (IIoT) community is concerned about the security of wireless communications between interconnected industries and autonomous systems. Providing a cyber-security framework for the IIoT offers a thorough comprehension of the whole spectrum of securing interconnected industries, from the edge to the cloud. Several signcryption schemes based on either identity-based or certificateless configurations are available in the literature to address the IIoT's security concerns. Due to the identity-based/certificateless nature of the available signcryption schemes, however, issues such as key escrow and partial private key distribution occur. To address these difficulties, we propose a Certificate-Based Signcryption (CBS) solution for IIoT in this article. The Hyperelliptic Curve Cryptosystem (HECC), a lightweight version of the Elliptic Curve Cryptosystem (ECC), was employed to construct the proposed scheme, which offers security and cost-efficiency. The HECC utilizes 80-bit keys with fewer parameters than the ECC and Bilinear Pairing (BP). The comparison of performance in terms of computation and communication costs reveals that the proposed scheme provides robust security with minimal computation and communication costs. Moreover, we used Automated Validation of Internet Security Protocols and Applications (AVISPA) to assess the security toughness, and the results show that the proposed scheme is secure.
# Certificate-Based Signcryption Scheme for Securing Wireless Communication in Industrial Internet of Things

**Insaf Ullah [1], Abdullah Alomari [2], Ako Muhammad Abdullah [3], Neeraj Kumar [4,5*], Amjad Alsirhani [6], Fazal Noor [7], Saddam Hussain [8] and Muhammad Asghar Khan [1]**

1. Hamdard Institute of Engineering & Technology, Islamabad 44000, Pakistan; insaf.ullah@hamdard.edu.pk; m.asghar@hamdard.edu.pk
2. Department of Computer Science, Al-Baha University, Albaha, 65799 Saudi Arabia; alomari@bu.edu.sa
3. University of Sulaimani, College of Basic Education, Computer Science Department, Sulaimaniyah, Kurdistan Region, Iraq; ako.abdullah@univsul.edu.iq
4. Thapar Institute of Engineering and Technology, Patiala, India
5. School of Computer Science, University of Petroleum and Energy Studies, Dehradun, Uttarakhand, India
6. College of Computer and Information Sciences, Jouf University, Sakaka 72388, Saudi Arabia; amjadalsirhani@ju.edu.sa
7. Faculty of Computer and Information Systems, Islamic University of Madinah, Madinah 400411, Saudi Arabia; mfnoor@gmail.com
8. School of Digital Science, Universiti Brunei Darussalam, Jalan Tungku Link, Gadong BE1410, Brunei Darussalam; saddamicup1993@gmail.com

Corresponding author: Neeraj Kumar (e-mail: Neeraj.kumar@thapar.edu).

**ABSTRACT** The Industrial Internet of Things (IIoT) community is concerned about the security of wireless communications between interconnected industries and autonomous systems. Providing a cyber-security framework for the IIoT offers a thorough comprehension of the whole spectrum of securing interconnected industries, from the edge to the cloud. Several signcryption schemes based on either identity-based or certificateless configurations are available in the literature to address the IIoT's security concerns. Due to the identity-based/certificateless nature of the available signcryption schemes, however, issues such as key escrow and partial private key distribution occur. To address these difficulties, we propose a Certificate-Based Signcryption (CBS) solution for IIoT in this article. The Hyperelliptic Curve Cryptosystem (HECC), a lightweight version of the Elliptic Curve Cryptosystem (ECC), was employed to construct the proposed scheme, which offers security and cost-efficiency. The HECC utilizes 80-bit keys with fewer parameters than the ECC and Bilinear Pairing (BP). The comparison of performance in terms of computation and communication costs reveals that the proposed scheme provides robust security with minimal computation and communication costs. Moreover, we used Automated Validation of Internet Security Protocols and Applications (AVISPA) to assess the security toughness, and the results show that the proposed scheme is secure.

**INDEX TERMS** certificate-based signcryption; industrial internet of things; wireless communication; HECC; AVISPA

**I. INTRODUCTION**

The Industrial Internet of Things (IIoT) refers to sensors, instruments, and other devices that are networked with industrial computer applications, such as production and energy management [1]. This connectivity enables the gathering, sharing, and analysis of data, which may facilitate productivity and efficiency gains as well as other economic benefits. This, in turn, will help manufacturers develop products more efficiently and sustainably.
In addition, the resulting IoT-node-embedded devices will also be included in the IIoT, allowing more efficient resource use and hence boosting consumer satisfaction and product quality. Moreover, with the integration of Cyber-Physical Systems (CPS) and modern networking technologies, the monitoring and control capabilities of industrial systems have improved considerably [2], [3]. Industry 4.0 is a revolution in which wireless networking and CPS are coupled with sensors on products to monitor the whole product flow and make intelligent decisions [4], [5].

As the IIoT grows, new security risks emerge. Each new device or component that connects to the IIoT represents a potential vulnerability, and it can be challenging to maintain security in the face of growing connectivity. Insecure IIoT systems can have serious adverse impacts, including operational interruption and financial loss. Exposed ports, insufficient authentication procedures, and outdated software all contribute to the emergence of threats. This unsatisfactory situation ultimately undermines industrial output. A strong security mechanism is therefore essential to ensure the security of data transfer between users and sensing equipment.

Signature and encryption are fundamental cryptographic procedures for secure communication [6]. Encryption provides confidentiality, whereas a signature independently provides authenticity. If both signature and encryption are required simultaneously, signcryption [7] is used. The majority of signcryption schemes rely on public-key cryptography with certificates [8]. This motivated an alternative in the form of the identity-based cryptosystem, in which the user's public key is a string derived directly from the user's identity [9]. However, since the Private Key Generator (PKGR) possesses all the information pertaining to the private keys of the individual members, this can result in an overwhelming Key Escrow (KE) problem [10],[11]. In 2003, Al-Riyami and Paterson [12], responding to KE, introduced the concept of a certificateless cryptosystem consisting of two components: a secret value and a partial private key. The Key Generation Center (KGC) supplies the Partial Private Key (PPK), while the participants determine the secret value. Certificateless cryptosystems, however, are susceptible to the Partial Private Key Distribution Problem (PPKDP) inherent to certificateless cryptography, since key distribution requires a secure connection between the KGC and the recognized parties. In the same year, Gentry [13] introduced the concept of the certificate-based cryptosystem (CBC), in which a user creates his or her own private/public key pair while the Certificate Authority (CA) certifies a given public key. Since the CA does not know the private keys of the participating users, the CBC avoids KE; in addition, a secure connection between the user and the CA is not required.

Typically, computationally hard problems such as Bilinear Pairing (BP), Rivest-Shamir-Adleman (RSA), Diffie-Hellman (DFHMN), and ECC [14-20] underpin the security of such schemes. The RSA cryptosystem operates with 1024-bit keys. The BP is 14.31% worse than RSA [21] because of its expensive map-to-point computation and pairing operations. ECC was devised to alleviate the large key sizes of RSA and BP [22]. Compared to those cryptosystems, the security efficiency and hardness of ECC rest on short 160-bit keys [23].
Even with 160-bit keys, however, ECC is unsuitable for IIoT data collected from the public. Consequently, the HECC, a newer cryptosystem that is essentially a generalization of ECC, is considered. HECC provides a level of security equivalent to BP, RSA, DFHMN, and ECC with keys of only 80 bits [24],[25]. In light of the preceding considerations, HECC is seen as a good option for crowdsourced IIoT data.

The above explanation encourages us to propose a new CBS for IIoT with the objective of removing the KE problem of identity-based cryptography and the PPKDP problem of certificateless cryptography with minimal cost and complexity. The proposed scheme is environment-friendly since it employs the Hyperelliptic Curve Cryptosystem (HECC), which requires much smaller key sizes than bilinear pairing, RSA, and elliptic curves. The characteristics of the proposed scheme are listed below.

- We provide a Certificate-Based Signcryption (CBS) solution for IIoT using the Hyperelliptic Curve Cryptosystem (HECC), a lightweight variant of the Elliptic Curve Cryptosystem (ECC). The use of small key sizes makes the proposed scheme lightweight, which is the most desirable characteristic of HECC.
- The proposed scheme offers confidentiality, unforgeability, integrity, anti-replay, forward secrecy, and non-repudiation as security characteristics.
- We investigate the performance of the proposed scheme and compare it to relevant existing schemes in order to validate its computational and communication efficiency.
- The proposed scheme is validated using AVISPA, a well-known security verification and simulation tool. The findings demonstrate that the proposed scheme is SAFE with respect to its security claims under the two back-end protocol checkers OFMC and CL-AtSe.

The rest of the article is organized as follows. Section 2 covers related work. Section 3 presents the preliminaries for the construction and the complexity analysis. Section 4 demonstrates the construction of the proposed scheme. Section 5 gives the security analysis, followed by the cost analysis in Section 6. Section 7 concludes the study.

**2. Related Work**

Information security is vital to the security of a communication system; the fundamental security features are the confidentiality and authenticity of the data. In the literature, we have surveyed the security schemes proposed for IIoT infrastructure. A certificateless signature scheme for the IIoT infrastructure was proposed in [27]; however, Zhang et al. [28] and Zhang et al. [29] showed the scheme to be vulnerable against both Type 1 and Type 2 adversaries. In addition, the scheme relies on BP, which has the worst cost complexity. The authors of [29] therefore strengthened the security of the scheme of [27] using ECC; nonetheless, that scheme is not suited for real IIoT applications because of PPKDP and ECC's larger key sizes. In [29] the authors assert that a public-key replacement attack exists against the method described in [28]. The authors of [30] then introduced a key-insulated signature method using BP; that method, too, involves intensive computation and requires larger transmission bandwidth. Later, Qiao et al. [31] proposed a secure CBAS scheme for IIoT in order to enhance the CBAS approach and offer a real implementation of it.
In the random oracle model, based on the complexity of the discrete logarithm problem, the security of that scheme is demonstrated; compared to prior CBAS schemes, its structure provides strong security together with computation and communication efficiency. The aforementioned schemes, however, provide the security feature of authentication only, whereas the IIoT architecture needs confidentiality together with authenticity. For this purpose, in 2017 Karati et al. [32] introduced a novel identity-based signcryption technique for IIoT crowdsourcing employing bilinear pairing. The presented method suffers from over-reliance on the PKG, which is inborn in identity-based signcryption schemes, because it requires the PKG to create the complete private key; once the PKG is attacked, the security of the whole system is substantially affected. In addition, the given scheme does not meet the security criteria of confidentiality and forward secrecy, and it also suffers from high bandwidth use and significant computation cost due to the utilization of bilinear pairing. In 2019, Ullah et al. [33] introduced a lightweight CLC scheme for crowdsourced IIoT applications with the aim of increasing security and minimizing communication and computation expenses. The given scheme, however, has the PPKDP problem inborn with certificateless signcryption, since key distribution needs a secure connection between the KGC and the respective participants. Unfortunately, the authors did not offer a formal proof of the scheme in any security model, such as the random oracle or standard model. In 2020, Dharminder et al. [34] introduced an identity-based signcryption system for IIoT crowdsourcing. A performance study against comparable schemes suggests that the offered strategy is efficient in terms of both computation and communication expense; however, it suffers from high bandwidth use and a hefty computation cost due to the employment of bilinear pairing.

All of the aforementioned approaches are proposed to secure the IIoT's infrastructure. However, the offered solutions suffer from significant computational costs and communication overheads, as well as key escrow and partial private key distribution issues. In addition, the security hardness of those systems is based on ECC and bilinear pairing, which is not well suited to the resource-constrained Industrial Internet of Things. We therefore propose a new CBS strategy for IIoT crowdsourcing. The proposed scheme is effective and free of the KE and PPKDP problems, and by using the HECC it avoids high computational cost and communication overhead.

**3. Preliminaries**

This section covers the formal definitions, the threat model, and the notation used in the proposed scheme (Table 1).

**A. HYPERELLIPTIC CURVE DISCRETE LOGARITHM PROBLEM (HECDLP)**

Suppose φ ∈ {1, 2, 3, …, z−1} and Υ = φ·D. If the probability of finding φ from (Υ, D) is negligible, the problem is said to be the HECDLP.

**B. HYPERELLIPTIC CURVE COMPUTATIONAL DIFFIE-HELLMAN PROBLEM (HECDHP)**

Suppose φ, ç ∈ {1, 2, 3, …, z−1} and Υ = φ·ç·D. If the probability of computing Υ given only φ·D and ç·D (equivalently, of finding φ and ç) is negligible, the problem is said to be the HECDHP.

**C. THREAT MODEL**

The Dolev-Yao adversary model, which distinguishes between an adversary (AVR) and a forger (FR), has been taken into account when designing the proposed scheme. AVR's job is to launch attacks that attempt to break the forward security, integrity, and confidentiality of the proposed scheme.
Meanwhile, FR's job is to try to compromise the signature of the proposed scheme.

**TABLE 1. NOTATIONS**

| S. No | Symbol | Explanation |
|---|---|---|
| 1 | CA | Certification authority |
| 2 | F_γ | A finite field of order γ |
| 3 | Ψ | Public parameter set |
| 4 | ϑ | Private key of the certification authority |
| 5 | Υ | Public key of the certification authority |
| 6 | H1, H2, H3 | Hash functions |
| 7 | D | Divisor of the HEC |
| 8 | ID_cs, ID_cus | Identities of the CB-Signcrypter and CB-Un-Signcrypter |
| 9 | P_cs, P_cus | Private keys of the CB-Signcrypter and CB-Un-Signcrypter |
| 10 | B_cs, B_cus | Public keys of the CB-Signcrypter and CB-Un-Signcrypter |
| 11 | C_cs, C_cus | Certificates of the CB-Signcrypter and CB-Un-Signcrypter |
| 12 | C | Ciphertext |
| 13 | m | Plaintext |
| 14 | ⊕ | XOR, used in encryption/decryption |
| 15 | n_r, n_s | Nonces for the CB-Signcrypter and CB-Un-Signcrypter |
| 16 | K | Encryption/decryption key |
| 17 | φ | CB-signcrypted tuple |
| 18 | EXPN | Exponentiation |
| 19 | BIPG | Bilinear pairing operation |
| 20 | HYDM | Hyperelliptic curve divisor multiplication |
| 21 | \|m\| | Message size in bits |
| 22 | \|G\| | Parameter size in bilinear pairing |
| 23 | \|n\| | Parameter size on the hyperelliptic curve |

**4. Construction of the Proposed Scheme**

This section discusses the construction of the proposed scheme, including the generic syntax, the network model, and the proposed algorithm.

**A. GENERIC SYNTAX**

The working structure of each part of the CBS is defined in the following steps.

**Setup:** The Certificate Authority (CA) picks a security parameter 1^ε and outputs the secret key ϑ and the global parameter set Ψ.

**Public Number Generation:** Given the global parameter set Ψ and an entity identity ID_e, this step outputs the public number, and the entity of identity ID_e transmits the pair (ID_e, β_e) to the CA.

**Certificate Generation:** Given the entity identity ID_e, Ψ, and the pair (ID_e, β_e), this step outputs a certificate C_e, after which the pair (C_e, μ) is sent to the entity of identity ID_e over an open network.

**Key Generation:** Given Ψ and the pair (C_e, μ), the entity of identity ID_e generates its private key P_e and public key B_e.

**CB-Signcryption:** Given a plaintext m, the global parameters, the identities of the CB-Signcrypter and CB-Un-Signcrypter (ID_cs, ID_cus), the certificate and private key of the CB-Signcrypter (C_cs, P_cs), and the public keys (B_cs, B_cus), this step outputs a CB-signcrypted tuple φ.

**CB-Un-Signcryption:** Upon the arrival of φ, the CB-Un-Signcrypter takes as input the identities (ID_cs, ID_cus), its own certificate and private key, its own public key and the sender's public key, and the global parameters; it verifies the signature and outputs the plaintext m.

**B. PROPOSED NETWORK MODEL**

Fig. 1 depicts the five key entities that comprise the proposed network model: the Application Provider, the Crowdsourced Industrial Internet of Things, the Controller, the Data User, and the Cloud Server. These entities are capable of cellular network connectivity (3G/4G/5G), while the sensors are linked through Bluetooth and Wi-Fi technologies. The function of each entity is described in detail below.

**Application Provider:** This entity serves as the Certificate Authority (CA) and is responsible for generating a certificate for a requesting user.

**Crowdsourced Industrial Internet of Things:** Utilizing intelligent devices to capture sensing data from industrial IoT devices, the crowdsourced IIoT offers a paradigm for data collection and sensing.
The data from sensors/mobiles and crowd tasks are stored, processed, evaluated, and displayed graphically; on the controller's request, the collected data are then sent to the controller.

**Controller:** In the proposed network model, a mobile phone acts as the controller. This entity is responsible for computing the signcryption of the data collected from sensor nodes and transferring it to the data user.

**Data User:** This entity plays the role of the end user and delivers a signcrypted access-request query to the controller when it requires crowdsourced IIoT data.

**Cloud Server:** The cloud server is responsible only for storing massive amounts of crowdsourced data when required; otherwise, it transfers the signcrypted text to the data user.

FIGURE 1. Proposed network model

**C. PROPOSED ALGORITHM**

The proposed scheme contains the following steps.

**Setup:** The certificate authority (CA) picks a security parameter 1^ε and performs the following sub-steps:
- It chooses a hyperelliptic curve (HEC) over a finite field F_γ of order γ with genus δ ≥ 2.
- It picks a number ϑ ∈ {1, 2, …, γ−1} as the secret key and computes Υ = ϑ·D.
- It chooses three one-way hash functions H1, H2, and H3.
- Finally, it outputs the global parameter set Ψ = (HEC, F_γ, 1^ε, δ, Υ, H1, H2, H3).

**Public Number Generation:** Given Ψ and an entity identity ID_e, the entity picks a number Ω_e ∈ {1, 2, …, γ−1} and computes β_e = Ω_e·D. Further, it computes ω_e = Ω_e·Υ and E_IDe = ω_e ⊕ (ID_e, β_e). The entity of identity ID_e then sends the pair (E_IDe, β_e) to the CA.

**Certification:** The CA recovers ID_e as (ID_e, β_e) = ω_e ⊕ E_IDe, where the CA computes ω_e = ϑ·β_e (which equals Ω_e·Υ). Then, taking as input ID_e, Ψ, and the pair (ID_e, β_e), it outputs a certificate through the following computations:
- It picks a number η_e ∈ {1, 2, …, γ−1} and computes Χ_e = η_e·D.
- It calculates the certificate C_e = Χ_e + β_e and the value μ = η_e·H1(C_e, ID_e) + ϑ.
- It then sends the pair (C_e, μ) to the entity of identity ID_e over an open network.

**Key Generation:** Upon the arrival of (C_e, μ), given Ψ, the entity of identity ID_e generates its private key P_e and public key B_e as follows:
- It computes P_e = Ω_e·H1(C_e, ID_e) + μ and B_e = P_e·D.
- The private key P_e and public key B_e are accepted only if B_e = C_e·H1(C_e, ID_e) + Υ holds.

**CB-Signcryption:** Given a plaintext m, Ψ, the identities (ID_cs, ID_cus), the certificate and private key of the CB-Signcrypter (C_cs, P_cs), and the public keys (B_cs, B_cus), the CB-Signcrypter outputs the CB-signcrypted tuple φ = (Q, Z, W) through the following computations:
- It picks a number V ∈ {1, 2, …, γ−1} and computes Y = V·D, the secret key K = V·B_cus, and Z = (m, n_s) ⊕ H2(K); it then computes the hash value Q = H3(C_cs, m, Y, ID_cs, B_cs) and the signature W = V + Q·P_cs.
- It sends the CB-signcrypted tuple φ = (Q, Z, W) to the CB-Un-Signcrypter over an open network.

**CB-Un-Signcryption:** Upon the arrival of φ, the CB-Un-Signcrypter takes the following parameters as input: the identities (ID_cs, ID_cus); its own certificate and private key (C_cus, P_cus); its own public key and the sender's public key (B_cus, B_cs); and the global parameter set Ψ. It then verifies the signature and recovers the plaintext m as follows:
- It computes Y′ = W·D − Q·B_cs and then the decryption key K′ = Y′·P_cus.
- It recovers m as (m, n_s) = Z ⊕ H2(K′).
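To make the flow above concrete, here is a minimal, runnable Python sketch of the whole pipeline: setup, certificate issuance, key generation with the validity check B_e = C_e·H1(C_e, ID_e) + Υ, CB-signcryption, and CB-un-signcryption. It is a toy under stated assumptions, not the paper's implementation: a tiny Schnorr-style subgroup modulo a prime stands in for the HECC divisor group (so a·D becomes g^a mod p and group addition becomes multiplication), SHA-256 stands in for H1/H2/H3, the nonces of the anti-replay variant are omitted, and all names (`H`, `keystream`, `issue_certificate`, and so on) are invented. Nothing here uses production-scale parameters.

```python
import hashlib
import secrets

# --- Toy group parameters (stand-in for the HECC divisor group; NOT secure) ---
p = 2039          # small safe prime, p = 2q + 1
q = 1019          # prime order of the subgroup (plays the role of gamma)
g = 4             # generator of the order-q subgroup (plays the role of D)

def H(*parts, mod=q):
    """Hash arbitrary parts to a scalar mod q (stand-in for H1/H3)."""
    digest = hashlib.sha256(repr(parts).encode()).digest()
    return int.from_bytes(digest, "big") % mod

def keystream(k_elem, n):
    """Derive n keystream bytes from a shared group element (stand-in for H2)."""
    return hashlib.sha256(b"enc" + str(k_elem).encode()).digest()[:n]

# --- Setup (CA): secret key theta, public key Upsilon = theta * D ---
theta = secrets.randbelow(q - 1) + 1
Upsilon = pow(g, theta, p)

def issue_certificate(identity, beta):
    """CA issues (C_e, mu) for a user's public number beta = Omega_e * D."""
    eta = secrets.randbelow(q - 1) + 1
    X = pow(g, eta, p)
    C = (X * beta) % p                        # C_e = X_e + beta_e (group op)
    mu = (eta * H(C, identity) + theta) % q   # mu = eta*H1(C_e, ID_e) + theta
    return C, mu

def user_keygen(identity):
    Omega = secrets.randbelow(q - 1) + 1
    beta = pow(g, Omega, p)
    C, mu = issue_certificate(identity, beta)
    P = (Omega * H(C, identity) + mu) % q     # private key P_e
    B = pow(g, P, p)                          # public key B_e = P_e * D
    # Validity check: B_e == C_e * H1(C_e, ID_e) + Upsilon
    assert B == (pow(C, H(C, identity), p) * Upsilon) % p
    return identity, C, P, B

def signcrypt(sender, receiver_B, msg: bytes):
    ID, C, P, B = sender
    V = secrets.randbelow(q - 1) + 1
    Y = pow(g, V, p)
    K = pow(receiver_B, V, p)                 # K = V * B_cus
    Z = bytes(a ^ b for a, b in zip(msg, keystream(K, len(msg))))
    Q = H(C, msg, Y, ID, B)                   # Q = H3(C_cs, m, Y, ID_cs, B_cs)
    W = (V + Q * P) % q                       # W = V + Q * P_cs
    return Q, Z, W

def unsigncrypt(receiver_P, sender, phi):
    ID, C, _, B = sender
    Q, Z, W = phi
    # Y' = W*D - Q*B_cs  ->  g^W * (B_cs^Q)^(-1) in multiplicative notation
    Y = (pow(g, W, p) * pow(pow(B, Q, p), p - 2, p)) % p
    K = pow(Y, receiver_P, p)                 # K' = Y' * P_cus
    msg = bytes(a ^ b for a, b in zip(Z, keystream(K, len(Z))))
    assert Q == H(C, msg, Y, ID, B), "signature check failed"
    return msg

alice = user_keygen("controller-01")
bob = user_keygen("data-user-07")
phi = signcrypt(alice, bob[3], b"sensor reading: 42")
print(unsigncrypt(bob[2], alice, phi))        # b'sensor reading: 42'
```

Written multiplicatively, the un-signcryption step Y′ = W·D − Q·B_cs becomes g^W · (B_cs^Q)^(-1) = g^(V + Q·P_cs − Q·P_cs) = g^V = Y, which is exactly the correctness argument of the next subsection.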
**D. CORRECTNESS**

The entity of identity ID_e can confirm the validity of its private key P_e and public key B_e through the check B_e = C_e·H1(C_e, ID_e) + Υ:

B_e = P_e·D
 = (Ω_e·H1(C_e, ID_e) + μ)·D
 = (Ω_e·H1(C_e, ID_e) + η_e·H1(C_e, ID_e) + ϑ)·D
 = Ω_e·D·H1(C_e, ID_e) + η_e·D·H1(C_e, ID_e) + ϑ·D
 = β_e·H1(C_e, ID_e) + Χ_e·H1(C_e, ID_e) + Υ
 = (β_e + Χ_e)·H1(C_e, ID_e) + Υ
 = C_e·H1(C_e, ID_e) + Υ

Likewise, the CB-Un-Signcrypter can confirm the originality of φ:

Y′ = W·D − Q·B_cs = (V + Q·P_cs)·D − Q·P_cs·D = V·D + Q·P_cs·D − Q·P_cs·D = V·D = Y

**5. Security Analysis**

**Theorem 1 (Confidentiality).** Confidentiality is the security property of the proposed scheme under which the encryption key of a legitimate sender cannot be compromised by any adversary (AVR).

**Proof 1:** In the proposed certificate-based signcryption scheme, the sender first creates the encryption key K = V·B_cus and then uses K to encrypt the plaintext as Z = m ⊕ H2(K). To recover the contents of Z, AVR would need K = V·B_cus, which in turn requires V from Y = V·D; this is infeasible for AVR, being equivalent to the hyperelliptic curve discrete logarithm problem. Alternatively, AVR could try to recover the decryption key from K′ = Y′·P_cus, which requires P_cus from B_cus = P_cus·D; AVR cannot solve this either, as it is likewise equivalent to the HECDLP. The proposed scheme therefore meets the confidentiality requirement.

**Theorem 2 (Unforgeability).** A CBS scheme achieves unforgeability as long as no forger (FR) is capable of compromising the sender's private key and forging the digital signature.

**Proof 2:** Over the public network, the sender generates the signature W = V + Q·P_cs and sends the ciphertext and hash value together in the tuple φ = (Q, Z, W). To produce a forged signature, FR must be able to compute W = V + Q·P_cs, which further requires V from Y = V·D and P_cs from B_cs = P_cs·D. This is infeasible for FR, being equivalent to solving the HECDLP twice; the scheme thus meets the unforgeability benchmark.

**Theorem 3 (Integrity).** A CBS technique achieves the integrity property if no AVR can generate the same hash value for two distinct messages.

**Proof 3:** In our scenario, the sender generates the hash value of the plaintext as Q = H3(C_cs, m, Y, ID_cs, B_cs) and sends the ciphertext and signature φ = (Q, Z, W) across an open channel to the receiver. Should AVR attempt to retrieve the plaintext from Q = H3(C_cs, m, Y, ID_cs, B_cs) in order to modify it, it would fail, owing to the irreversible nature of hash functions. The scheme therefore protects integrity.

**Theorem 4 (Non-Repudiation).** A CBS technique achieves the security amenity of non-repudiation if a sender cannot later deny its signcrypted text.

**Proof 4:** In the designed CBS method, the sender cannot revoke a signature W = V + Q·P_cs that has been sent. If the sender disputes the signature, the judge performs the following computation to resolve the conflict between receiver and sender:

B_cs = C_cs·H1(C_cs, ID_cs) + Υ
 = (Χ_cs + β_cs)·H1(C_cs, ID_cs) + ϑ·D
 = Χ_cs·H1(C_cs, ID_cs) + β_cs·H1(C_cs, ID_cs) + ϑ·D
 = η_cs·D·H1(C_cs, ID_cs) + Ω_cs·D·H1(C_cs, ID_cs) + ϑ·D
 = D·(η_cs·H1(C_cs, ID_cs) + ϑ + Ω_cs·H1(C_cs, ID_cs))
 = D·(μ + Ω_cs·H1(C_cs, ID_cs))
 = D·P_cs = B_cs
These computations show that the sender cannot deny its signature: the private key P_cs used at signature-creation time in W = V + Q·P_cs is bound to the public key B_cs.

**Theorem 5 (Forward Secrecy).** A CBS system achieves forward secrecy if no AVR can compromise message confidentiality by revealing the sender's private key.

**Proof 5:** Our technique employs a secret key K in addition to the sender's private key P_cs. Even if AVR compromises the sender's private key P_cs, it still needs the receiver-side secret key K′, which it cannot obtain: recovering the decryption key from K′ = Y′·P_cus requires P_cus from B_cus = P_cus·D, which is equivalent to the HECDLP. Consequently, the design possesses forward secrecy.

**Theorem 6 (Anti-Replay).** A CBS approach resists replay attacks if no AVR can collect old messages and resend them to the intended recipient multiple times.

**Proof 6:** In the given approach, the receiver first encrypts a nonce n_r using the sender's public key and delivers it to the sender. After decrypting this nonce, the sender generates a new nonce and encrypts the two nonce values and the message as Z = (m, n_r, n_s) ⊕ H2(K) with the secret key K. The receiver then obtains the ciphertext Z, verifies the freshness of the new nonce n_s and the validity of the old nonce n_r, and accepts the ciphertext as a new message only if the check succeeds; otherwise the message is added to the revocation list. Since the two nonces (n_r, n_s) are renewed with each new session, the scheme is resistant to replay attacks.

**6. Cost Analysis**

In this section we compare the proposed scheme with those of Karati et al. [32], Insaf et al. [33], and Dharminder et al. [34] in terms of computation and communication costs. Computational efficiency is determined by the algorithm's computation cost, whereas communication efficiency is determined by the length of the ciphertext. The symbols EXPN, BIPG, HYDM, |m|, |G|, and |n| denote, respectively, exponentiation, a bilinear pairing operation, hyperelliptic curve divisor multiplication, the message size in bits, the group size in bilinear pairing, and the hyperelliptic curve parameter size in bits. We neglect the cost of other operations, such as hashing, subtraction, and addition, since these require far less time. The operations and their timings, per [35], are detailed in Table 2 below. The simulation uses the following hardware and software: an Intel Core i7-4510U CPU at 2.0 GHz with 8 GB RAM, Windows 7, and the C-based MIRACL library [37]; an HYDM takes 0.48 milliseconds (ms) [36]. Table 3 lists the major operations required by each scheme. Table 5 shows the variables and their corresponding sizes used in the comparative study of communication costs [1], and Table 6 presents the communication-cost comparison based on these variable assumptions.
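The entries in the tables that follow are simple arithmetic over the unit costs above (EXPN = 1.25 ms, BIPG = 14.90 ms, HYDM = 0.48 ms; |G| = 1024 bits, |n| = 80 bits, |m| = 512 bits), so they can be reproduced in a few lines of Python. The dictionaries below merely restate Tables 3 and 6; the code itself is ours, not part of the paper.

```python
# Unit costs (ms) and sizes (bits) as given in Tables 2 and 5.
COST = {"EXPN": 1.25, "BIPG": 14.90, "HYDM": 0.48}
SIZE = {"G": 1024, "n": 80, "m": 512}

# Operation counts from Table 3 and ciphertext structure from Table 6.
schemes = {
    "Karati et al. [32]":     {"sc": {"EXPN": 4}, "usc": {"EXPN": 2, "BIPG": 2}, "ct": ("m", ("G", 5))},
    "Insaf et al. [33]":      {"sc": {"HYDM": 4}, "usc": {"HYDM": 5},            "ct": ("m", ("n", 3))},
    "Dharminder et al. [34]": {"sc": {"EXPN": 3}, "usc": {"EXPN": 1, "BIPG": 2}, "ct": ("m", ("G", 3))},
    "Proposed":               {"sc": {"HYDM": 3}, "usc": {"HYDM": 3},            "ct": ("m", ("n", 2))},
}

def ms(ops):
    """Total milliseconds for a bag of counted operations."""
    return sum(COST[op] * k for op, k in ops.items())

for name, s in schemes.items():
    total = ms(s["sc"]) + ms(s["usc"])
    msg, (unit, k) = s["ct"]
    bits = SIZE[msg] + k * SIZE[unit]
    print(f"{name:24s} {ms(s['sc']):5.2f} + {ms(s['usc']):5.2f} = {total:5.2f} ms, {bits} bits")
```

Running this reproduces Tables 4 and 7 exactly: 37.3, 4.32, 34.8, and 2.88 ms, and 5632, 752, 3584, and 672 bits, respectively.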
Tables 4 and 6 compare our work with Karati et al. [32], Insaf et al. [33], and Dharminder et al. [34] in terms of computation and communication overheads. According to this comparison, the proposed scheme is the most efficient in both respects, as visualized in Fig. 3 and Fig. 4; Tables 5 and 7 likewise show a significant decrease in communication and computation costs.

FIGURE 3. Computation cost (in ms)

**TABLE 2. OPERATIONS AND THEIR TIMINGS**

| Operation | EXPN | BIPG | HYDM |
|---|---|---|---|
| Cost in ms | 1.25 | 14.90 | 0.48 |

**TABLE 3. MAJOR OPERATIONS PER SCHEME**

| Schemes | Signcryption | Un-Signcryption |
|---|---|---|
| Karati et al. [32] | 4 EXPN | 2 EXPN + 2 BIPG |
| Insaf et al. [33] | 4 HYDM | 5 HYDM |
| Dharminder et al. [34] | 3 EXPN | 1 EXPN + 2 BIPG |
| Proposed scheme | 3 HYDM | 3 HYDM |

**TABLE 4. COMPUTATION COST ANALYSIS (ms)**

| Schemes | Signcryption | Un-Signcryption | Total |
|---|---|---|---|
| Karati et al. [32] | 5 | 32.3 | 37.3 |
| Insaf et al. [33] | 1.92 | 2.4 | 4.32 |
| Dharminder et al. [34] | 3.75 | 31.05 | 34.8 |
| Proposed scheme | 1.44 | 1.44 | 2.88 |

**TABLE 5. VARIABLES WITH THEIR RESPECTIVE SIZES**

| Variables | Size in bits |
|---|---|
| Bilinear pairing (G) | 1024 |
| Hyperelliptic curve (n) | 80 |
| Message (m) | 512 |

**TABLE 6. COMMUNICATION COST ANALYSIS USING MAJOR OPERATIONS**

| Schemes | Signcrypted text size |
|---|---|
| Karati et al. [32] | \|m\| + 5\|G\| |
| Insaf et al. [33] | \|m\| + 3\|n\| |
| Dharminder et al. [34] | \|m\| + 3\|G\| |
| Proposed scheme | \|m\| + 2\|n\| |

**TABLE 7. COMMUNICATION COST COMPARISON IN BITS**

| Schemes | Signcrypted text size in bits |
|---|---|
| Karati et al. [32] | 5632 |
| Insaf et al. [33] | 752 |
| Dharminder et al. [34] | 3584 |
| Proposed scheme | 672 |

FIGURE 4. Communication cost analysis

**7. Conclusions**

This paper proposes the formal development of an efficient signcryption scheme in a certificate-based IIoT environment; the proposed scheme can be used in large industrial settings. It satisfies confidentiality, unforgeability, integrity, anti-replay, non-repudiation, and forward secrecy. Moreover, the scheme is tested and simulated using AVISPA, a well-known security verification tool; on the basis of the two back-end protocol checkers OFMC and CL-AtSe, the simulation results indicate that the proposed approach is SAFE with respect to its security assurances. To evaluate its cost-complexity, we assessed the performance of the proposed scheme against a variety of relevant existing schemes; the results reveal that the proposed scheme is better than its counterparts in terms of computation and communication costs.
**Appendix A. Implementation of the Proposed Scheme in AVISPA**

We simulate the proposed scheme using the popular simulation tool AVISPA [37, 38]. AVISPA is a top-down formal validation and verification tool that uses the expressive and flexible High-Level Protocol Specification Language (HLPSL) [39] to run the provided code and find security vulnerabilities in the protocol under test. To assess safety standards, the AVISPA tool incorporates four back-end checkers operating on HLPSL: the On-the-Fly Model Checker (OFMC), CL-AtSe, the SAT-based Model Checker (SATMC), and Tree Automata based on Automatic Approximations for the Analysis of Security Protocols (TA4SP). The essential framework of AVISPA is shown in Fig. 5: the HLPSL is first converted to the Intermediate Format (IF) with the assistance of the HLPSL2IF translator, and this IF is then passed to the AVISPA back-end safety-check tools. The result shows whether or not the protocol is secure and usable in a real setting. Table 8 and Figures 6 and 7 demonstrate the scheme's safety.

FIGURE 5. Top-down illustration of AVISPA [37]

TABLE 8: HLPSL CODE OF THE PROPOSED SCHEME

role role_Cbsigncryption(Cbsigncryption:agent, Cbunsigncryption:agent, Bcs:public_key, Bcus:public_key, SND,RCV:channel(dy))
played_by Cbsigncryption
def=
  local State:nat, Pluss:hash_func, Q:text, V:text, Nr:text, M:text, Ns:text, Xor:hash_func, K:symmetric_key
  init State := 0
  transition
    1. State=0 /\ RCV(start) =|> State':=1 /\ SND(Cbsigncryption.Cbunsigncryption)
    2. State=1 /\ RCV(Cbunsigncryption.{Nr'}_Bcs) =|> State':=2 /\ V':=new() /\ Q':=new() /\ K':=new() /\ Ns':=new() /\ M':=new()
       /\ secret(M',sec_2,{Cbsigncryption})
       /\ witness(Cbsigncryption,Cbunsigncryption,auth_1,M')
       /\ SND(Cbsigncryption.{Xor(M'.Ns'.Nr')}_K'.{Pluss(Q'.V')}_inv(Bcs))
end role

role role_Cbunsigncryption(Cbsigncryption:agent, Cbunsigncryption:agent, Bcs:public_key, Bcus:public_key, SND,RCV:channel(dy))
played_by Cbunsigncryption
def=
  local State:nat, Pluss:hash_func, Q:text, V:text, Nr:text, M:text, Ns:text, Xor:hash_func, K:symmetric_key
  init State := 0
  transition
    1. State=0 /\ RCV(Cbsigncryption.Cbunsigncryption) =|> State':=1 /\ Nr':=new() /\ SND(Cbunsigncryption.{Nr'}_Bcs)
    6. State=1 /\ RCV(Cbsigncryption.{Xor(M'.Ns'.Nr)}_K'.{Pluss(Q'.V')}_inv(Bcs)) =|> State':=2
       /\ request(Cbunsigncryption,Cbsigncryption,auth_1,M')
       /\ secret(M',sec_2,{Cbsigncryption})
end role

role session1(Cbsigncryption:agent, Cbunsigncryption:agent, Bcs:public_key, Bcus:public_key)
def=
  local SND2,RCV2,SND1,RCV1:channel(dy)
  composition
    role_Cbunsigncryption(Cbsigncryption,Cbunsigncryption,Bcs,Bcus,SND2,RCV2)
    /\ role_Cbsigncryption(Cbsigncryption,Cbunsigncryption,Bcs,Bcus,SND1,RCV1)
end role

role session2(Cbsigncryption:agent, Cbunsigncryption:agent, Bcs:public_key, Bcus:public_key)
def=
  local SND1,RCV1:channel(dy)
  composition
    role_Cbsigncryption(Cbsigncryption,Cbunsigncryption,Bcs,Bcus,SND1,RCV1)
end role

role environment()
def=
  const hash_0:hash_func, bcs:public_key, alice:agent, bob:agent, bcus:public_key, const_1:agent, const_a:public_key, const_z:public_key, auth_1:protocol_id, sec_2:protocol_id
  intruder_knowledge = {alice,bob}
  composition
    session2(i,const_1,const_a,const_z) /\ session1(alice,bob,bcs,bcus)
end role

goal
  authentication_on auth_1
  secrecy_of sec_2
end goal

environment()

FIGURE 6. OFMC simulation result

FIGURE 7. CL-AtSe simulation result

**REFERENCES**

[1] S. Hussain, I. Ullah, H. Khattak, M. A. Khan, C. M. Chen, and S.
**REFERENCES**

[1] S. Hussain, I. Ullah, H. Khattak, M. A. Khan, C. M. Chen, and S. Kumari, "A lightweight and provable secure identity-based generalized proxy signcryption (IBGPS) scheme for Industrial Internet of Things (IIoT)," J. Inf. Secur. Appl., vol. 58, May 2021, Art. no. 102625.
[2] M. Shafiq, Z. Tian, A. K. Bashir, X. Du, and M. Guizani, "IoT malicious traffic identification using wrapper-based feature selection mechanisms," Computers & Security, vol. 94, 2020, 101863.
[3] S. Latif, Z. Zou, Z. Idrees, and J. Ahmad, "A novel attack detection scheme for the Industrial Internet of Things using a lightweight random neural network," IEEE Access, vol. 8, pp. 89337–89350, 2020.
[4] M. Younan, E. H. Houssein, M. Elhoseny, and A. A. Ali, "Challenges and recommended technologies for the industrial internet of things: A comprehensive review," Measurement, vol. 151, 107198, 2020.
[5] R. S. Bali and N. Kumar, "Secure clustering for efficient data dissemination in vehicular cyber–physical systems," Future Generation Computer Systems, vol. 56, pp. 476–492, 2016.
[6] M. Shafiq, Z. Tian, A. K. Bashir, X. Du, and M. Guizani, "CorrAUC: A malicious Bot-IoT traffic detection method in IoT network using machine-learning techniques," IEEE Internet of Things Journal, vol. 8, no. 5, pp. 3242–3254, March 2021, doi: 10.1109/JIOT.2020.3002255.
[7] Y. Zheng, "Digital signcryption or how to achieve cost(signature & encryption) ≪ cost(signature) + cost(encryption)," in Annual International Cryptology Conference, pp. 165–179, 1997.
[8] D. He, N. Kumar, and J.-H. Lee, "Privacy-preserving data aggregation scheme against internal attackers in smart grids," Wireless Networks, vol. 22, no. 2, pp. 491–502, Feb. 2016.
[9] A. Shamir, "Identity-based cryptosystems and signature schemes," in Proceedings of the Workshop on the Theory and Application of Cryptographic Techniques, Springer, Berlin/Heidelberg, Germany, pp. 47–53, 1984.
[10] A. Braeken, P. Shabisha, A. Touhafi, and K. Steenhaut, "Pairing free and implicit certificate based signcryption scheme with proxy re-encryption for secure cloud data storage," in 2017 3rd International Conference of Cloud Computing Technologies and Applications (CloudTech), pp. 1–7, 2017.
[11] H. Yu and B. Yang, "Pairing-free and secure certificateless signcryption scheme," Comput. J., vol. 60, no. 8, pp. 1187–1196, 2017.
[12] S. S. Al-Riyami and K. G. Paterson, "Certificateless public key cryptography," in Advances in Cryptology - ASIACRYPT 2003, Springer, pp. 452–473, 2003.
[13] C. Gentry, "Certificate-based encryption and the certificate revocation problem," in International Conference on the Theory and Applications of Cryptographic Techniques, pp. 272–293, 2003.
[14] M. Suárez-Albela, P. Fraga-Lamas, and T. M. Fernández-Caramés, "A practical evaluation on RSA and ECC-based cipher suites for IoT high-security energy-efficient fog and mist computing devices," Sensors, vol. 18, 3868, 2018.
[15] N. Kumar, K. Kaur, S. C. Misra, et al., "An intelligent RFID-enabled authentication scheme for healthcare applications in vehicular mobile cloud," Peer-to-Peer Netw. Appl., vol. 9, pp. 824–840, 2016.
[16] A. Braeken, "PUF based authentication protocol for IoT," Symmetry, vol. 10, 352, 2018.
[17] S. Challa, A. K. Das, P. Gope, N. Kumar, F. Wu, and A. V. Vasilakos, "Design and analysis of authenticated key agreement scheme in cloud-assisted cyber–physical systems," Future Generation Computer Systems, vol. 108, pp. 1267–1286, 2020.
[18] S. Kumari, M. Karuppiah, A. K. Das, X. Li, F. Wu, and N. Kumar, "A secure authentication scheme based on elliptic curve cryptography for IoT and cloud servers," J. Supercomput., vol. 74, pp. 6428–6453, 2017.
[19] S. Roy, S. Chatterjee, A. K. Das, S. Chattopadhyay, N. Kumar, and A. V. Vasilakos, "On the design of provably secure lightweight remote user authentication scheme for mobile cloud computing services," IEEE Access, vol. 5, pp. 25808–25825, 2017.
[20] M. A. Khan, I. M. Qureshi, I. Ullah, S. Khan, F. Khanzada, and F. Noor, "An efficient and provably secure certificateless blind signature scheme for flying ad-hoc network based on multi-access edge computing," Electronics, vol. 9, no. 1, 30, 2020.
[21] M. A. Khan, I. Ullah, S. Nisar, F. Noor, I. M. Qureshi, F. Khanzada, H. Khattak, and M. A. Aziz, "Multiaccess edge computing empowered flying ad hoc networks with secure deployment using identity-based generalized signcryption," Mobile Information Systems, 2020.
[22] C. Tamizhselvan and V. Vijayalakshmi, "An energy efficient secure distributed naming service for IoT," Int. J. Adv. Stud. Sci. Res., vol. 3, 2019.
[23] V. Naresh, R. Sivaranjani, and V. V. E. S. Murthy, "Provable secure lightweight hyperelliptic curve-based communication system for wireless sensor networks," Int. J. Commun. Syst., vol. 31, 3763, 2018.
[24] A. Rahman, I. Ullah, M. Naeem, R. Anwar, H. Khattak, and S. Ullah, "A lightweight multi-message and multi-receiver heterogeneous hybrid signcryption scheme based on hyper elliptic curve," Int. J. Adv. Comput. Sci. Appl., vol. 9, 2018.
[25] S. S. Ullah, I. Ullah, H. Khattak, M. A. Khan, M. Adnan, S. Hussain, N. U. Amin, and M. A. Khattak, "A lightweight identity-based signature scheme for mitigation of content poisoning attack in named data networking with Internet of Things," IEEE Access, 2020.
[26] A. Karati, S. K. H. Islam, and M. Karuppiah, "Provably secure and lightweight certificateless signature scheme for IIoT environments," IEEE Transactions on Industrial Informatics, vol. 14, no. 8, pp. 3701–3711, 2018.
[27] B. Zhang, T. Zhu, C. Hu, and C. Zhao, "Cryptanalysis of a lightweight certificateless signature scheme for IIoT environments," IEEE Access, vol. 6, pp. 73885–73894, 2018.
[28] Y. Zhang, R. Deng, D. Zheng, J. Li, P. Wu, and J. Cao, "Efficient and robust certificateless signature for data crowdsensing in cloud-assisted industrial IoT," IEEE Transactions on Industrial Informatics, vol. 15, no. 9, pp. 5099–5108, 2019.
[29] W. Yang, S. Wang, X. Huang, and Y. Mu, "On the security of an efficient and robust certificateless signature scheme for IIoT environments," IEEE Access, vol. 7, 91079, 2019.
[30] H. Xiong, Q. Mei, and Y. Zhao, "Efficient and provably secure certificateless parallel key-insulated signature without pairing for IIoT environments," IEEE Systems Journal, 2019.
[31] Z. Qiao et al., "An efficient certificate-based aggregate signature scheme with provable security for Industrial Internet of Things," IEEE Systems Journal, 2022.
[32] A. Karati, S. K. H. Islam, G. P. Biswas, M. Z. A. Bhuiyan, P. Vijayakumar, and M. Karuppiah, "Provably secure identity-based signcryption scheme for crowdsourced industrial Internet of Things environments," IEEE Internet of Things Journal, vol. 5, no. 4, pp. 2904–2914, 2017.
[33] I. Ullah, N. U. Amin, M. Zareei, A. Zeb, H. Khattak, A. Khan, and S. A. Goudarzi, "A lightweight and provable secured certificateless signcryption approach for crowdsourced IIoT applications," Symmetry, vol. 11, 1386, 2019.
[34] D. Dharminder, D. Mishra, J. J. Rodrigues, R. de A. L. Rabelo, and K. Saleem, "PSSCC: Provably secure communication framework for crowdsourced industrial Internet of Things environments," Software: Practice and Experience, 2020.
[35] C. Zhou, Z. Zhao, W. Zhou, and Y. Mei, "Certificateless key-insulated generalized signcryption scheme without bilinear pairings," Secur. Commun. Networks, vol. 2017, 2017.
[36] M. A. Khan, I. M. Qureshi, I. Ullah, S. Khan, F. Khanzada, and F. Noor, "An efficient and provably secure certificateless blind signature scheme for flying ad-hoc network based on multi-access edge computing," Electronics, vol. 9, no. 1, 30, 2020.
[37] AVISPA, Automated Validation of Internet Security Protocols and Applications, http://www.avispa-project.org (accessed May 2022).
[38] A. Armando, D. Basin, Y. Boichut, Y. Chevalier, L. Compagna, J. Cuellar, P. H. Drielsma, P. C. Héam, O. Kouchnarenko, J. Mantovani, S. Mödersheim, D. von Oheimb, M. Rusinowitch, J. Santiago, M. Turuani, L. Viganò, and L. Vigneron, "The AVISPA tool for the automated validation of internet security protocols and applications," in K. Etessami and S. K. Rajamani (Eds.), Computer Aided Verification, Springer Berlin Heidelberg, pp. 281–285, 2005.
[39] D. von Oheimb, "The high-level protocol specification language HLPSL developed in the EU project AVISPA," in Proceedings of APPSEM 2005 Workshop, pp. 1–17, 2005.

**INSAF ULLAH** received the M.S. degree in computer sciences from the Department of Information Technology, Hazara University Mansehra, Pakistan, where he is currently pursuing the Ph.D. degree in computer sciences. He is currently serving as a Lecturer with the Department of Computer Sciences, Hamdard University, Islamabad. He has published more than 25 articles in different journals and conferences. His research interest includes network security.

**ABDULLAH ALOMARI** received a bachelor's degree in computers from Umm Al-Qura University, Saudi Arabia, in 2008, and the M.Sc. and Ph.D. degrees in Engineering Mathematics and Internetworking from Dalhousie University, Halifax, NS, Canada, in 2012 and 2018, respectively. He is currently an Assistant Professor with the Department of Computer Science, Al-Baha University, Saudi Arabia. His research interests include cybersecurity, IoT, and emergent technologies in communication networks. He is a member of the IEEE, the IEEE Communication Society, and the ACM.

**AKO MUHAMMAD ABDULLAH** is a lecturer with the Department of Computer Science, University of Sulaimani, Kurdistan Region, Iraq. He received the B.S. degree (First-Class Hons) in Mathematics and Computer Science from the University of Sulaimani in 2007. Following this achievement, he obtained a grant to pursue the M.S. degree in Computer Science from Glyndwr University, UK, in 2010. Later on, he won another grant to study for a Ph.D. in Computer Science from EMU University, Cyprus, in 2016. His research interests include ad hoc networks, computer networks, wireless networks, and information security.

**NEERAJ KUMAR** (SMIEEE) (2019, 2020, 2021 highly-cited researcher from WoS) is working as a Full Professor in the Department of Computer Science and Engineering, Thapar Institute of Engineering and Technology (Deemed to be University), Patiala (Pb.), India. He is also an adjunct professor at Asia University, Taiwan, King Abdul Aziz University, Jeddah, Saudi Arabia, and Newcastle University, UK. He has published more than 500 technical research papers (DBLP: https://dblp.org/pers/hd/k/Kumar_0001:Neeraj) in top-cited journals and conferences, which are cited more than 31269 times by well-known researchers across the globe, with a current h-index of 96 (Google Scholar: https://scholar.google.com/citations?hl=en&user=gL9gR-4AAAAJ). He has guided many research scholars leading to Ph.D. and M.E./M.Tech. degrees. His research is supported by funding from various competitive agencies across the globe.
His broad research areas are green computing and network management, IoT, big data analytics, deep learning, and cyber-security. He has also edited/authored 10 books with international/national publishers such as IET, Springer, Elsevier, and CRC, including: Security and Privacy of Electronic Healthcare Records: Concepts, Paradigms and Solutions (ISBN-13: 978-178561-898-7); Machine Learning for Cognitive IoT, CRC Press; Blockchain, Big Data and Machine Learning, CRC Press; Blockchain Technologies across Industrial Verticals, Elsevier; Multimedia Big Data Computing for IoT Applications: Concepts, Paradigms and Solutions (ISBN: 978-981-13-8759-3); Proceedings of First International Conference on Computing, Communications, and Cyber-Security (IC4S 2019) (ISBN 978-981-15-3369-3); and Probabilistic Data Structures for Blockchain-based IoT Applications, CRC Press. One of the edited textbooks, "Multimedia Big Data Computing for IoT Applications: Concepts, Paradigms, and Solutions," published by Springer in 2019, had 3.5 million downloads as of 06 June 2020 and has attracted the attention of researchers across the globe (https://www.springer.com/in/book/9789811387586). He serves as an editor of ACM Computing Surveys, IEEE Transactions on Sustainable Computing, IEEE TNSM, Elsevier Computer Communications, and the Wiley International Journal of Communication Systems. He has also organized various special issues of journals of repute from IEEE, Elsevier, and Springer. He has been a workshop chair at IEEE GLOBECOM 2018, IEEE INFOCOM 2020 (https://infocom2020.ieee-infocom.org/workshop-blockchain-secure-software-defined-networkingsmart-communities) and IEEE ICC 2020 (https://icc2020.ieee-icc.org/workshop/ws-06-secsdn-secure-and-dependable-software-definednetworking-sustainable-smart), and track chair of Security and Privacy at IEEE MSN 2020 (https://conference.cs.cityu.edu.hk/msn2020/cf wkpaper.php). He is also a TPC chair and member for various international conferences such as IEEE MASS 2020 and IEEE MSN 2020. He has won best paper awards from the IEEE Systems Journal in 2018 and 2020, and at IEEE ICC 2018, Kansas City, in 2018. He has also won a best paper award from Elsevier JNCA in 2021 and at IEEE ComSoc IWCMC 2021. He won the outstanding leadership award from IEEE TrustCom in 2021. Moreover, he has won the best researcher award from his parent organization every year for the last eight consecutive years.

**AMJAD ALSHIRANI** is a full assistant professor at Jouf University, Saudi Arabia. He is the Head of the Software Engineering Department at the Faculty of Computer Science and serves as the Chief Information Security Officer (CISO) at Jouf University. He received the MCS and Ph.D. degrees from Dalhousie University, Canada, in 2014 and 2019, respectively. He also holds an adjunct professor position at Dalhousie University. His research interests include, but are not limited to, cybersecurity, network security, cloud computing security, distributed computing systems, and machine and deep learning.
**FAZAL NOOR** received his B.Eng. and M.Eng. degrees in Electrical and Computer Engineering from Concordia University, Montreal, Canada, in 1984 and 1986, respectively, and his Ph.D. in Engineering from McGill University, Montreal, Canada, in 1993. Currently, he is a Full Professor with the Faculty of Computer and Information Systems (FCIS) at the Islamic University of Madinah, Saudi Arabia. He has published numerous papers in various reputable international journals and conferences. He has been a reviewer for IEEE, Elsevier, Springer, and various other journals. He held the position of Vice Dean of Graduate Studies and Scientific Research at FCIS and was a Program Coordinator for the Master of Computer Science program. He received a best faculty award in 2007. He has been a TPC member of many conferences, is a fellow member of IAER, and has been a QA evaluator for a Computer Engineering program. His research interests are in AI, FANETs, neural networks, embedded systems, signal processing, security, IoT, optimization algorithms, and parallel and distributed computing.

**SADDAM HUSSAIN** received Bachelor's and Master's degrees from Islamia College, Peshawar, Pakistan, and Hazara University, Mansehra, Pakistan, in 2017 and 2021, respectively. He is currently pursuing his Ph.D. degree at the School of Digital Science, Universiti Brunei Darussalam. He has published 60+ papers in well-reputed journals, including IEEE, JISA Elsevier, Cluster Computing, Computer Communication, IoTJ, Hindawi, CMC, Sensors, Energies, and Electronics. He is a reviewer for reputed journals, including IEEE Access, the International Journal of Wireless Information Networks, the Scientific Journal of Electrical Computer and Informatics Engineering, and CMC. His research interests include cryptography, network security, wireless sensor networking (WSN), information-centric networking (ICN), named data networking (NDN), blockchain, smart grid, Internet of Things (IoT), IIoT, quantum computing, cloud computing, and edge computing.

**MUHAMMAD ASGHAR KHAN** received a Ph.D. degree in electronic engineering from the School of Engineering and Applied Sciences (SEAS), ISRA University, Islamabad. He works as an assistant professor in the electrical engineering department at Hamdard University, Islamabad. He is a reviewer for various journals published by IEEE, Elsevier, Springer, MDPI, and EURASIP, and has served as a guest editor for a number of international journals. He has published 70+ technical and review articles in leading journals such as the IEEE Transactions on Vehicular Technology, IEEE Transactions on Industrial Informatics, and IEEE Internet of Things Journal, and has presented his work at multiple national and international conferences. His main research interests include drones/UAVs with a focus on networks, platforms, security, as well as applications and services.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/ACCESS.2022.3211257?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/ACCESS.2022.3211257, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://ieeexplore.ieee.org/ielx7/6287639/6514899/09906998.pdf" }
2,022
[ "JournalArticle" ]
true
null
[ { "paperId": "30f6f9befe8653e60a6af64d1e8860d8cecf19e1", "title": "An Efficient Certificate-Based Aggregate Signature Scheme With Provable Security for Industrial Internet of Things" }, { "paperId": "baa2344688a768bd67292f9636d0e7ad367a531b", "title": "A lightweight and provable secure identity-based generalized proxy signcryption (IBGPS) scheme for Industrial Internet of Things (IIoT)" }, { "paperId": "026cbbb91e5a0d5b606917d14b385a2b84fee64e", "title": "CorrAUC: A Malicious Bot-IoT Traffic Detection Method in IoT Network Using Machine-Learning Techniques" }, { "paperId": "1568a22a9a7d8694aa3129e2891fbd3c61a6a16b", "title": "Multiaccess Edge Computing Empowered Flying Ad Hoc Networks with Secure Deployment Using Identity-Based Generalized Signcryption" }, { "paperId": "bc46eb963738e92a18d76e5bf95e3e5dc9506cde", "title": "Design and analysis of authenticated key agreement scheme in cloud-assisted cyber-physical systems" }, { "paperId": "5e4a03f0bee22cdfb8fde34fa821fb5eeb2cb9de", "title": "A Lightweight Identity-Based Signature Scheme for Mitigation of Content Poisoning Attack in Named Data Networking With Internet of Things" }, { "paperId": "86777fd09086163b25785e90b15e0070bc0aefdf", "title": "IoT malicious traffic identification using wrapper-based feature selection mechanisms" }, { "paperId": "198178a6b87037ca9e143b1a8cb4a191759c73b8", "title": "PSSCC: Provably secure communication framework for crowdsourced industrial Internet of Things environments" }, { "paperId": "2083ba192f1589aea0e268b7a657e040cf6979d5", "title": "Efficient and Provably Secure Certificateless Parallel Key-Insulated Signature Without Pairing for IIoT Environments" }, { "paperId": "9a44c85ae267245a19993c191863c939559d188b", "title": "Challenges and recommended technologies for the industrial internet of things: A comprehensive review" }, { "paperId": "66ef06175dd5c833bce4a288619de976a3838a15", "title": "An Efficient and Provably Secure Certificateless Blind Signature Scheme for Flying Ad-Hoc Network Based on Multi-Access Edge Computing" }, { "paperId": "9c6c6028babf74b9fa1e3fb0ef644b34ea5fc2a1", "title": "A Lightweight and Provable Secured Certificateless Signcryption Approach for Crowdsourced IIoT Applications" }, { "paperId": "2f05ff19499059b16b3f50c54836fb02382dfa64", "title": "Efficient and Robust Certificateless Signature for Data Crowdsensing in Cloud-Assisted Industrial IoT" }, { "paperId": "97493807b7f7057f126a45be99df41dc768bc18d", "title": "A Practical Evaluation on RSA and ECC-Based Cipher Suites for IoT High-Security Energy-Efficient Fog and Mist Computing Devices" }, { "paperId": "28971f54716b08a8f6674ee2b6789e74fd23ad5e", "title": "PUF Based Authentication Protocol for IoT" }, { "paperId": "4aff0f14f1c32992f757ea8b23ac1f266182deb6", "title": "Provably Secure Identity-Based Signcryption Scheme for Crowdsourced Industrial Internet of Things Environments" }, { "paperId": "3049e6bf286820cf46fe89a3de55dbbbb9d4740e", "title": "Provable secure lightweight hyper elliptic curve‐based communication system for wireless sensor networks" }, { "paperId": "7795758316d9d79be4dd030fba7ce1e726ab5934", "title": "Provably Secure and Lightweight Certificateless Signature Scheme for IIoT Environments" }, { "paperId": "9629154dcb04d6fa069eeb9b358e7db3cd190e25", "title": "On the Design of Provably Secure Lightweight Remote User Authentication Scheme for Mobile Cloud Computing Services" }, { "paperId": "c86dc584a197549233f0a96addf0248d4c87c6ac", "title": "Pairing free and implicit certificate based signcryption scheme with 
proxy re-encryption for secure cloud data storage" }, { "paperId": "0f75949c5b659eab445860eab9596cbdf5914888", "title": "Certificateless Key-Insulated Generalized Signcryption Scheme without Bilinear Pairings" }, { "paperId": "50980d9a4b9b118cf7a72f018f2a18314dc1ea8f", "title": "Pairing-Free and Secure Certificateless Signcryption Scheme" }, { "paperId": "269011b49cfd356ad16f4e846db8fe775551f02e", "title": "A secure authentication scheme based on elliptic curve cryptography for IoT and cloud servers" }, { "paperId": "a7d0f01dbb6a3c0fed9231893fab9bd9ce1b8cc9", "title": "Secure clustering for efficient data dissemination in vehicular cyber-physical systems" }, { "paperId": "1f66d8aeed3e4a3bb383090918a306902ad101ee", "title": "Privacy-preserving data aggregation scheme against internal attackers in smart grids" }, { "paperId": "f3fef7d00dc987540f37d22d1090fd19f82b3a5c", "title": "An intelligent RFID-enabled authentication scheme for healthcare applications in vehicular mobile cloud" }, { "paperId": "8e2b0fb4f49370195d2b48587dd54b0e2c8010fe", "title": "無憑證公開金鑰密碼系統; Certificateless Public Key Cryptography" }, { "paperId": "fefc340833178b4eb9a34c083300024e1735fec4", "title": "The AVISPA Tool for the Automated Validation of Internet Security Protocols and Applications" }, { "paperId": "ab50fdb39dff45aae1c7c737f6f06bd95bc48df5", "title": "Certificate-Based Encryption and the Certificate Revocation Problem" }, { "paperId": "072e8123e534331625f52111cb5b7c0441bee8aa", "title": "Digital Signcryption or How to Achieve Cost(Signature & Encryption) << Cost(Signature) + Cost(Encryption)" }, { "paperId": "5281536f3d07af0074666f48884b9d8b860dd046", "title": "Identity-Based Cryptosystems and Signature Schemes" }, { "paperId": "58f63c5de3f914878ae621ab71f69c802fe3c569", "title": "A Novel Attack Detection Scheme for the Industrial Internet of Things Using a Lightweight Random Neural Network" }, { "paperId": "07fcf4494183eae93cc560467c2cf079bc21da6a", "title": "On the Security of an Efficient and Robust Certificateless Signature Scheme for IIoT Environments" }, { "paperId": null, "title": "An energy efficient secure distributed naming service for IoT" }, { "paperId": "346fe650205d6b74db21b510d69f59b7f00458ce", "title": "Cryptanalysis of a Lightweight Certificateless Signature Scheme for IIOT Environments" }, { "paperId": "50ada48e9e84895c80db2fae91d6677a4529ce00", "title": "A Lightweight Multi-Message and Multi-Receiver Heterogeneous Hybrid Signcryption Scheme based on Hyper Elliptic Curve" }, { "paperId": null, "title": "His research interests include cybersecurity, the IoT, and emergent technologies in communication networks" }, { "paperId": null, "title": "the M.S. degree in computer science from Glyndwr University, U.K., in 2010, and the Ph.D. degree in computer science from EMU University, Cyprus" }, { "paperId": "22b44c39eba5e37387448cd972e861c6175d599f", "title": "The High-Level Protocol Specification Language HLPSL developed in the EU project AVISPA" }, { "paperId": null, "title": "NEERAJ" } ]
18,894
en
[ { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01904975f3592267314e729cb328d6600d6f557d
[]
0.857744
Cloud Computing Load Balancing Techniques: Retrospect and Recommendations
01904975f3592267314e729cb328d6600d6f557d
FUOYE Journal of Engineering and Technology
[ { "authorId": "2159384925", "name": "O. A. Oduwole" }, { "authorId": "32213625", "name": "S. Akinboro" }, { "authorId": "104651352", "name": "O. G. Lala" }, { "authorId": "72911793", "name": "M. Fayemiwo" }, { "authorId": "1833134", "name": "S. Olabiyisi" } ]
{ "alternate_issns": null, "alternate_names": [ "FUOYE J Eng Technol" ], "alternate_urls": null, "id": "1ec80c25-9740-4be7-9f28-c42e8ea619ed", "issn": "2579-0617", "name": "FUOYE Journal of Engineering and Technology", "type": "journal", "url": "http://engineering.fuoye.edu.ng/journal/index.php/engineer/index" }
Load balancing is a research area that seeks to improve the quality of services provided to various clients in cloud computing environments. As cloud users increase around the world, cloud service providers are challenged to develop strategies for distributing tasks to machines for processing at cloud data centers. This work collected and undertook a thorough review of various load balancing techniques, uncovering the key limitations of existing strategies. The publications were chosen from peer-reviewed papers on Google Scholar. Cloud computing, cloud load balancing techniques, approaches to cloud load balancing, and big-data cloud computing systems were among the terms used in the search. Out of 201 studies, 39 met the criteria for inclusion. 5 of the research focused on cloud computing, 6 on cloud load balancing, 7 on resource scheduling in cloud, 16 on techniques for balancing cloud load, and 5 on big-data cloud computing environments. The study identified some research gaps and recommended a throughput-maximization based central-distributive load balancing architecture as a solution to maximize throughput, minimize response time and processing cost, and optimize load balancing architecture. Keywords— Centralized, cloud-computing, distributive, load-balancing.
## Cloud Computing Load Balancing Techniques: Retrospect and Recommendations

*[1]Oludayo A. Oduwole, [2]Solomon A. Akinboro, [1]Olusegun G. Lala, [3]Michael A. Fayemiwo and [4]Stephen O. Olabiyisi
1Department of Computer Science, Adeleke University, Ede, Nigeria
2Department of Computer Science, University of Lagos, Lagos, Nigeria
3Department of Computer Science, Redeemers University, Ede, Nigeria
4Department of Computer Science, Ladoke Akintola University of Technology, Ogbomoso, Nigeria
**{dayooduus|akinboro2002}@yahoo.com|{lalagbenga|mfayemiwo}@gmail.com|soolabiyisi@lautech.edu.ng**
*Corresponding Author

**REVIEW ARTICLE** Received: 17-DEC-2021; Reviewed: 27-JAN-2022; Accepted: 13-MAR-2022 [http://dx.doi.org/10.46792/fuoyejet.v7i1.753](http://dx.doi.org/10.46792/fuoyejet.v7i1.753)

**Abstract- Load balancing is a research area that seeks to improve the quality of services provided to various clients in cloud computing environments. As cloud users increase around the world, cloud service providers are challenged to develop strategies for distributing tasks to machines for processing at cloud data centres. This work collected and undertook a thorough review of various load balancing techniques, uncovering the key limitations of existing strategies. The publications were chosen from peer-reviewed papers on Google Scholar. Cloud computing, cloud load balancing techniques, approaches to cloud load balancing, and big-data cloud computing systems were among the terms used in the search. Out of 201 studies, 39 met the criteria for inclusion. 5 of the studies focused on cloud computing, 6 on cloud load balancing, 7 on resource scheduling in cloud, 16 on techniques for balancing cloud load, and 5 on big-data cloud computing environments. The study identified some research gaps and recommended a throughput-maximization based central-distributive load balancing architecture as a solution to maximize throughput, minimize response time and processing cost, and optimize load balancing architecture.**

**Keywords- Centralized, cloud-computing, distributive, load-balancing.**

### 1 INTRODUCTION

Due to the obvious services it provides to different users, cloud computing is a well-developed business strategy for distributed data centres. The cloud computing model provides IT tools that are shared, allocated, and accessed by users based on individual demand (Suresh & Sakthivel, 2017; Adhikari & Amgoth, 2018). Furthermore, cloud computing offers a variety of services such as Software-as-a-Service (SaaS), Platform-as-a-Service (PaaS), and Infrastructure-as-a-Service (IaaS). These facilities are helpful in different applications, including scientific, business, and industrial applications (Kumar and Sharma, 2018). In summary, the cloud computing platform has three severe challenges: virtualization, distributed framework, and load balancing. The distribution of loads to the processing elements is the load balancing problem.

In a multi-node environment, it is very likely that some nodes will be overloaded while others will be idle (Afzal & Kavitha, 2019). Load unbalancing is an unfavourable occurrence for cloud service providers (CSPs), as it reduces the reliability and efficacy of computing services while also jeopardizing the Quality of Service (QoS) promised under the service level agreement (SLA) between the customer and the provider of cloud services. The necessity for load balancing (LB) emerges in these circumstances, and this is a particular research issue of interest (Mishra, Sahoo & Parida, 2018).

Load balancing entails task redistribution in a distributed network, such as cloud computing, so that there are no overworked, under-burdened, or idle computer machines (Achar et al., 2013; Magalhaes et al., 2015). It boosts cloud performance by attempting to improve restricting parameters such as reaction time, processing time, stability of the system, and job transfer (Dam et al., 2015; Dave et al., 2016).
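As a concrete illustration of what "overworked, under-burdened, or idle" means in practice, the following sketch (our own, not drawn from the cited works) computes a simple load-imbalance indicator over per-node utilisations; the 0.15 tolerance is an arbitrary placeholder:

```python
# Quantify load imbalance: compare each node's utilisation with the cluster
# mean; flag nodes that deviate by more than a chosen tolerance.
from statistics import mean, pstdev

def imbalance_report(utilisation: dict[str, float], tol: float = 0.15):
    avg = mean(utilisation.values())
    degree = pstdev(utilisation.values())   # 0.0 means perfectly balanced
    over = [n for n, u in utilisation.items() if u > avg + tol]
    under = [n for n, u in utilisation.items() if u < avg - tol]
    return degree, over, under

degree, over, under = imbalance_report({"n1": 0.92, "n2": 0.35, "n3": 0.05})
print(f"imbalance degree={degree:.2f}, overworked={over}, idle/under={under}")
```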
Researchers have proposed different approaches to improve the quality of cloud computing services and the consumption of resources. These include pre-emptive, responsive, mixed, stable and reactive methods (Afzal & Kavitha, 2019).

This paper provides an in-depth investigation of approaches for improving cloud resource utilization through an analysis of load balancing algorithms and an assessment of their strengths and weaknesses. In an attempt to enhance the performance of the cloud in terms of throughput, response time, task rejection ratio and CPU utilization rate, the attention of researchers is drawn to the invention of strategies that are based on maximization of throughput and rearrangement of spatial node distribution. The second section of this study discusses strategies for achieving load balancing in cloud networks. Section 3 provides a critique of related research, and Section 4 provides the conclusion.

### 2 AN OVERVIEW OF TECHNIQUES FOR BALANCING LOAD IN THE CLOUD

##### 2.1 PRE-EMPTIVE APPROACH
A pre-emptive load balancing algorithm contemplates action by producing changes rather than merely reacting to changes as they happen. Its goal is to achieve a positive outcome by preventing rather than reacting to a problem. Pre-emptive actions seek to identify and capitalize on opportunities, as well as to take precautions against possible future problems and threats. The disadvantage is that only a few classical pre-emptive procedures have been implemented (Afzal & Kavitha, 2019). Polepally & Chatrapati (2017) demonstrated a cloud computing LB technique based on dragonfly optimization and constraint measures that distributes a consistent load among VMs while consuming the least amount of power. Peng et al. (2018) proposed an Ant Colony Optimization (ACO) enhancement that achieves a balanced distribution of multidimensional resources by introducing the concepts of load imbalance degree and PM selection expectation, in order to decrease predicted response time and retain fairness. Some known pre-emptive load balancers are shown in Table 1.
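A minimal sketch of the pre-emptive idea, assuming arbitrary utilisation thresholds and a naive migration policy (neither is taken from the schemes in Table 1), might look like this:

```python
# Pre-emptive rebalancing sketch: shift queued work away from VMs that are
# approaching an overload threshold, before imbalance actually bites.
from dataclasses import dataclass

OVERLOAD, UNDERLOAD = 0.80, 0.30   # arbitrary placeholder thresholds

@dataclass
class VM:
    name: str
    capacity: float        # work units the VM can process per tick
    queued: float = 0.0    # work currently queued on the VM

    @property
    def utilisation(self) -> float:
        return self.queued / self.capacity

def preemptive_rebalance(vms: list[VM]) -> None:
    """Move queued work from nearly-overloaded VMs to under-loaded ones."""
    hot = [v for v in vms if v.utilisation > OVERLOAD]
    cold = sorted((v for v in vms if v.utilisation < UNDERLOAD),
                  key=lambda v: v.utilisation)
    for v in hot:
        while cold and v.utilisation > OVERLOAD:
            target = cold[0]
            # Move just enough work to bring v back under the threshold,
            # without pushing the target past the under-load band.
            move = min(v.queued - OVERLOAD * v.capacity,
                       UNDERLOAD * target.capacity - target.queued)
            if move <= 0:
                break
            v.queued -= move
            target.queued += move
            if target.utilisation >= UNDERLOAD:
                cold.pop(0)

vms = [VM("vm1", 100, 95), VM("vm2", 100, 10), VM("vm3", 100, 40)]
preemptive_rebalance(vms)
print([(v.name, round(v.utilisation, 2)) for v in vms])
```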
##### 2.2 RESPONSIVE LOAD BALANCING IN CLOUD COMPUTING
Instead of managing a situation, a responsive method of load balancing reacts to it. Load imbalance is addressed as it arises, with noticeable repercussions. The vast majority of load balancers are of this type. The primary fault in existing work is that the issue of load imbalance is allowed to happen before researchers propose methods for solving it by improving some task scheduling parameter(s) (Afzal & Kavitha, 2019). Table 2 shows various existing load balancing techniques that use responsive methodologies. Preventive approaches are preferable to responsive approaches because the former seek to prevent a problem before it occurs, whereas the latter seek to solve a problem after it has occurred (Afzal & Kavitha, 2019).

##### 2.3 STATIC VERSUS DYNAMIC METHODOLOGIES
Load balancers are generally classified as either static or dynamic, as in Nuaimi et al. (2012) and Alakeel (2010). A static balancer assumes that the system parameters required for job allocation are known ahead of time. These include resource requirements, communication time, server processing capacity, memory capacity, and so on (Alexeev et al., 2012). The major downside of this method is that it does not take into account the system's present status when deciding, making it unsuitable for systems such as distributed systems, where the system's states change frequently (Mesbahi & Rahmani, 2016). Dynamic methods of balancing load consider the present system's status when they decide. The key benefit of this method is that tasks can be dynamically transferred from an overburdened to an under-loaded node. However, formulating and developing a dynamic load balancer is far more complex and difficult than devising a static solution, although better performance and more timely solutions can be achieved via dynamic mechanisms (Nuaimi et al., 2012; Alakeel, 2010). A compact illustration of the two styles is sketched below.

There are two types of dynamic load balancing algorithms: distributed and non-distributed. In distributed approaches, the load balancing procedure can be implemented by all nodes in the system, as proposed by Shi et al. (2011). Furthermore, in this strategy, all nodes may be connected with each other to achieve a global objective in the system, which is known as cooperative, or each node can work independently to achieve a local goal, which is known as non-cooperative. In a non-distributed scheme, however, the burden of stabilizing the system workload is not shared by all system nodes. In the centralized form of a non-distributed scheme, a single node implements the load balancing framework for all nodes. In semi-distributed mode, the system is divided into partitions or groups, in each of which a single node does load balancing (Mesbahi & Rahmani, 2016).
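In the sketch below (ours; the costs and server names are made up), the static dispatcher commits to a fixed round-robin rotation decided in advance, while the dynamic one consults the current load before every assignment:

```python
# Static vs. dynamic dispatch: round-robin ignores current state; the
# dynamic variant picks the least-loaded server at each decision point.
from itertools import cycle

servers = {"s1": 0.0, "s2": 0.0, "s3": 0.0}   # server name -> accumulated load
rr = cycle(list(servers))                     # fixed rotation, chosen up front

def assign_static(task_cost: float) -> str:
    """Static: next server in the fixed rotation, regardless of its load."""
    s = next(rr)
    servers[s] += task_cost
    return s

def assign_dynamic(task_cost: float) -> str:
    """Dynamic: whichever server is least loaded right now."""
    s = min(servers, key=servers.get)
    servers[s] += task_cost
    return s

for cost in (5, 1, 1, 9, 2):
    assign_static(cost)
print("after static round-robin:", servers)

for k in servers:                              # reset loads for the comparison
    servers[k] = 0.0
for cost in (5, 1, 1, 9, 2):
    assign_dynamic(cost)
print("after dynamic least-loaded:", servers)
```

On the same task stream, round-robin can leave one server carrying the heaviest jobs, whereas the dynamic policy spreads the totals more evenly, at the price of monitoring current state.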
##### 2.4 CENTRALIZED APPROACH
In this case, all job allocation and scheduling choices are made by a single node (server). This node contains the knowledge base for the entire cloud network. Its main strength is the reduction in the time required to investigate various cloud resources, but it places an excessive burden on the centralized server. Other drawbacks are fault intolerance and a low failure recovery rate (Katyal & Mishra, 2013).

##### 2.5 DISTRIBUTIVE APPROACH
In this arrangement, there is no single node responsible for allocating resources or scheduling jobs. Multiple nodes monitor the cloud network to make precise load balancing decisions. Every node maintains a local knowledge base to ensure efficient load distribution. This architecture relieves any single node of a significant failure burden; as a result, no single node is overburdened with task scheduling judgments, making the architecture fault tolerant (Tripathi & Singh, 2017; Katyal & Mishra, 2013).

##### 2.6 HIERARCHICAL CLOUD COMPUTING LOAD BALANCING
Load balancing decisions are made at different levels of the cloud hierarchy in the layered approach to cloud load balancing. This strategy works best in a master-slave situation. The technique can be described using a tree data structure, where the parent node obtains information from the child nodes and uses that information to apply load distribution for the child nodes under its supervision (Katyal & Mishra, 2013; Dar & Ravindran, 2017). Table 3 classifies some existing cloud load balancers based on node distribution.
Table 1. A Review of Pre-emptive Load Balancing in Cloud Computing

|Authors|Algorithm Used|Technique Used|Advantages|Limitations|
|---|---|---|---|---|
|Kumar et al. (2018)|Conventional Non-Classical|Heuristic, Classical Deterministic|Designed to accommodate large workloads within a specified time frame; improves flexibility; instant scaling of resources; task rejection ratio is minimized|Tasks that take longer than the stipulated deadline are rejected; thresholds for determining overloaded and under-loaded VMs are set arbitrarily because there is no formula for them|
|Polepally et al. (2017)|Load balancing using Dragonfly optimization and constraint measures|Swarm optimization|Task scheduling is accomplished while using less energy|Tasks that surpass the threshold limit are unable to be completed; task rejection rate is quite high|
|Xiao et al. (2017)|Fairness Aware Algorithm|Non-cooperative game-theory-based optimization|The Nash equilibrium point yields the best load balancing|Execution time is high|
|Li et al. (2011)|Ant Colony Optimization|Swarm-based optimization|Reduced makespan|Tasks are distinct from each other|
|Peng et al. (2018)|Ant Colony Optimization|Swarm-based optimization|Improved resource utilization|Cost is not considered|

Table 2. Review of Responsive Approaches to Cloud Load Balancing

|Authors|Algorithm Used|Technique Used|Advantages|Limitations|
|---|---|---|---|---|
|Vanitha et al. (2017)|Genetic Algorithm|Metaheuristic|Response time, makespan, and task rejection ratio have all been reduced|Reduced throughput, scalability, and resource utilization|
|Rajput et al. (2016)|Genetic Algorithm and Min-min|Evolutionary-based heuristic|Increased scalability; response time and execution costs were reduced|Minimal resource utilization; a lower level of load balance|
|Kapur (2015)|Non-classical|Heuristic|High data rates and scalability, with a shorter response and execution time|Low resource utilization and degree of balance; high task rejection ratio and migration time|
|Dam et al. (2015)|Genetic Algorithm|Optimization|Scalability and fault tolerance have been improved; response time, power consumption, and migration time are all low|A lack of balance, inefficient use of resources, and a high task rejection ratio|
|Vasudevan et al. (2016)|Honey Bee Algorithm|Optimization|Minimized execution time, response time and execution cost|Low throughput, low scalability, low degree of balance and resource usage|

Table 3. Categorization of some existing cloud load balancers based on node distribution

|Authors|Title of Work|Central|Distributive|Hierarchical|
|---|---|---|---|---|
|Dave and Maheta (2014)|Utilizing round robin concept for load balancing algorithm at virtual machine level in cloud environment|Yes|No|No|
|Dasgupta et al. (2013)|A Genetic Algorithm (GA) based load balancing strategy for cloud computing|Yes|No|No|
|Radojevic and Zagar (2011)|Analysis of issues with load balancing algorithms in hosted (cloud) environments|Yes|No|No|
|Dhinesh and Venkata (2013)|Honey bee behaviour inspired load balancing of tasks in cloud computing environments|No|Yes|No|
|Wang et al. (2010)|Towards a load balancing in a three-level cloud computing network|No|No|Yes|
|Miglani and Sharma (2019)|Modified Particle Swarm Optimization based upon task categorization in cloud environment|No|Yes|No|
|Kargar and Vakili (2015)|Load balancing in Map-Reduce on homogeneous and heterogeneous clusters: an in-depth review|No|Yes|Yes|
|Riakiotakis et al. (2011)|Distributed dynamic load balancing for pipelined computations on heterogeneous systems|No|Yes|No|
### 3 REVIEW OF RELATED RESEARCH

Alkayal et al. (2016) developed an effective load balancer in a cloud environment based on the Cuckoo Search and Firefly Algorithm (CS-FA). The proposed technique essentially prevents workload imbalances by estimating each virtual machine's capacity and load, and allocating tasks to the best machine as determined by the CS-FA algorithm. The CS-FA outperformed the existing Hybrid Dynamic LB (HDLB) by migrating a significantly smaller number of tasks, indicating superior load balancing. However, topology optimization via node rearrangement was not taken into account.

Various load balancing approaches in different cloud systems were investigated by Mishra et al. (2018). A system architecture was provided, along with different models for the host virtual machine and numerous performance criteria. The method used in calculating the system's makespan and energy consumption was outlined, and a taxonomy for the prevention of imbalance of cloud load was provided.

Deepa et al. (2018) explored cloud computing and its various service categories, deployment models, and architecture. Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS) are the three key service classes explained in that paper. The cloud architecture's front-end and back-end components were examined. Minimal costs, limitless storage, backup and recovery, automatic software integration, easy access to information, and speedy implementation were identified as benefits, while technical issues, cloud security, and cyber threats were highlighted as downsides.
The study has provided sufficient information to alleviate the uncertainty that is often associated with cloud computing terms. Afzal and Kavitha (2019) evaluated past work on cloud load balancing and discussed its benefits and drawbacks. The literature review followed a wide research strategy that explains how the load unbalancing problem is approached and specifies the methodology, theories, algorithms, approaches, and paradigms that are used. The load unbalancing problem was investigated using the constructive generic framework (CGF) methodology. The study also includes a taxonomy of algorithms that can help future investigators cope efficiently with load unbalancing issues, such as nature-inspired algorithms, machine learning, and mathematically derived algorithms.

Ngharamike et al. (2018) looked at different cloud simulation models for assessing cloud infrastructure before it is implemented in the real world. CloudSim, GreenCloud, NetworkCloudSim, iCanCloud, CloudAnalyst, MDCSim, EMUSIM, and CloudSched were studied in terms of their retrospect and limitations. In addition, they were compared in terms of the underlying framework, programming language, graphical user interface, availability, cost modelling, and energy modelling. It was discovered that none of the tools could completely model a true cloud environment, and that each was more efficient at describing one aspect of the cloud than the others. GreenCloud spends more time simulating than the others, but it is the most suitable for modelling data centre energy use. CloudAnalyst excels at modelling federation policy, cost, and simulation time (response and execution time), while iCanCloud excels at large data centre cost and component modelling. CloudSched outperformed the others in the analysis of computer hardware utilization by applications, while NetworkCloudSim was the best at portraying the network components of cloud centres.

Jayaraj et al. (2019) presented a process optimization of big-data cloud centres using the nature-inspired Firefly Algorithm and K-Means Clustering. The proposed optimization method was compared to state-of-the-art algorithms such as Particle Swarm Optimization (PSO), Artificial Bee Colony (ABC), and Ant Colony Optimization (ACO) using response time, throughput, and latency as metrics. The proposed balancer improves latency, response time, and throughput severalfold, but does not take into account the CPU utilization rate, which reveals the degree of load balancing reached, and does not consider the topology optimization required for the dispersed nature of big-data-characterized clouds.

##### 3.1 PECULIAR CHALLENGES IN PREVIOUS WORK ON CLOUD-BASED LOAD BALANCING
When cloud systems are designed to handle large volumes of requests from dispersed sources at high transmission rates, mechanisms for achieving load balancing must be improved further. This, according to prior studies, can be accomplished by incorporating strategies that maximize throughput while significantly reducing response time. Furthermore, improvements to established methods of balancing load are required to address the minimization of processing costs and cloud topology (spatial arrangement of nodes), as previously reported. Previous work emphasized improving response time but did little to reduce processing costs (Aswini et al., 2019).

### 4 CONCLUSION AND FUTURE WORK

In this review, various strategies for achieving effective sharing of cloud load were investigated.
Certain constraints, such as the throughput maximization problem, the cost minimization problem, and the cloud architecture optimization problem, have been identified (Castelino et al., 2014; Jayaraj & Abdul Samath, 2019). These limitations stem from the need to implement cloud task scheduling to meet the severe needs of big data settings. High throughput, low response time, low processing cost, and reorganization of the cloud architecture are all required. Previous research did not pay enough attention to optimizing processing costs and cloud architecture, resulting in a significant research gap that must be filled. To address the identified flaws, a central-distributive framework based on throughput maximization is presented in Figure 1.

Fig. 1: Central-distributive cloud load balancing architecture

The framework's operations are based on the assumptions that there is a central cloud data centre (DC) with up to five regional data centres, and that a user's request will be handled by the data centre in the region from which the request originated. Cloud load will be balanced at two levels by the suggested system: Level 1 load balancing will be done in a dispersed fashion at each DC, whereas Level 2 load balancing will be done by a DC controller in a centralized manner across all DCs. Task requests that match the throughput maximization requirements in their respective regions will be accepted by each DC. The approved tasks will then be separated into two groups: Group A and Group B. A task will be assigned to Group A if the source and destination nodes are in the same region; otherwise, it will be assigned to Group B. The Group A jobs will first be given to available nodes/servers at their respective DCs, and Level 1 load balancing will be achieved using the Particle Swarm Optimization (PSO) approach. All of the tasks in Group B must be transmitted to the network's central DC controller for server allocation using the Firefly method across all of the network's available nodes. Because of its ability to find optimal solutions quickly, especially for less complicated optimizations, the PSO algorithm is preferred for regional load balancing; it does so by obtaining its global best solution from local best solutions (Devi & Ryhmend, 2014; Miglani & Sharma, 2019). Because of its high rate of processing jobs, the Firefly technique will be used to balance load at the central level (Jayaraj & Abdul Samath, 2019; Kumar et al., 2020). This arrangement limits the tasks that must be transferred to those that cannot be handled locally. Hence, response time and costs will be decreased, while throughput will be raised as a result of prioritizing the admission of tasks that maximize throughput.
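A hedged sketch of how the two-level flow of Figure 1 could be wired together is given below. The region layout, the admission rule, and the least-loaded stub standing in for the PSO (Level 1) and Firefly (Level 2) searches are all illustrative placeholders, not the authors' implementation:

```python
# Two-level central-distributive dispatch sketch for the Fig. 1 framework.
REGIONS = {"R1": {"n1": 0.0, "n2": 0.0}, "R2": {"n3": 0.0, "n4": 0.0}}

def admit(task: dict) -> bool:
    """Placeholder admission rule: accept tasks that keep throughput high."""
    return task["size"] <= 10              # arbitrary stand-in criterion

def least_loaded(nodes: dict) -> str:
    """Stub standing in for the PSO / Firefly node searches."""
    return min(nodes, key=nodes.get)

def dispatch(task: dict):
    if not admit(task):
        return None                        # rejected at the regional DC
    if task["src"] == task["dst"]:         # Group A: served within its region
        candidates = REGIONS[task["src"]]  # Level 1 (distributed at each DC)
    else:                                  # Group B: sent to the central DC
        candidates = {n: load for region in REGIONS.values()
                      for n, load in region.items()}   # Level 2, network-wide
    node = least_loaded(candidates)
    for region in REGIONS.values():        # record the load on the real node
        if node in region:
            region[node] += task["size"]
    return node

tasks = [{"src": "R1", "dst": "R1", "size": 4},   # Group A
         {"src": "R1", "dst": "R2", "size": 6},   # Group B
         {"src": "R2", "dst": "R2", "size": 12}]  # rejected by admit()
print([dispatch(t) for t in tasks], REGIONS)
```

The design choice this illustrates is the one the paragraph argues for: only tasks that cannot be handled within their own region reach the central controller, which keeps transfer volume, response time, and cost down.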
#### REFERENCES

Achar, R., Thilagam, P. S., Soans, N., Vikyath, P. V., Rao, S. & Vijeth, A. M. (2013). Load balancing in cloud based on live migration of virtual machines. In: 2013 Annual IEEE India Conference (INDICON), pp. 1–5.
Adhikari, A. & Amgoth, T. (2018). Heuristic-based load-balancing algorithm for IaaS cloud. Future Generation Computer Systems, Vol. 81, pp. 156–165.
Afzal, S. & Kavitha, G. (2019). Load balancing in cloud computing: A hierarchical taxonomical classification. Journal of Cloud Computing: Advances, Systems & Applications, 8(22), pp. 1–24.
Alakeel, A. M. (2010). A guide to dynamic load balancing in distributed computer systems. International Journal of Computer Science and Information Security, 10(6), pp. 153–160.
Alexeev, Y., Mahajan, A., Leyffer, S., Fletcher, G. & Fedorov, D. G. (2012). Heuristic static load-balancing algorithm applied to the fragment molecular orbital method. IEEE International Conference on High Performance Computing, Networking, Storage and Analysis.
Alkayal, E. S., Jennings, N. R. & Abulkhair, M. F. (2016). Efficient task scheduling multi-objective particle swarm optimization in cloud computing. IEEE 41st Conference on Local Computer Networks Workshops.
Aswini, J., Malarvizhi, N. & Kumanan, T. (2019). A novel Firefly algorithm based load balancing approach for cloud computing. International Journal of Innovative Technology and Exploring Engineering (IJITEE), ISSN 2278-3075, 8(2), pp. 91–96.
Castelino, C., Gandhi, D., Narula, H. G. & Chokshi, N. H. (2014). Integration of Big Data and Cloud Computing. International Journal of Engineering Trends and Technology (IJETT), 16(2), 100–102.
Dam, S., Mandal, G., Dasgupta, K. & Dutta, P. (2015). Genetic algorithm and gravitational emulation based hybrid load balancing strategy in cloud computing. In: Proceedings of the Third International Conference on Computer, Communication, Control and Information Technology (C3IT), pp. 1–7.
Dar, A. & Ravindran, A. (2018). A comprehensive study on cloud computing paradigm. International Journal of Advance Research in Science and Engineering, 7(4), 235–242.
Dave, S. & Maheta, P. (2014). Utilizing round robin concept for load balancing algorithm at virtual machine level in cloud environment. Int J Comput Appl, 94(4), pp. 23–29.
Dave, A., Patel, B. & Bhatt, G. (2016). Load balancing in cloud computing using optimization techniques: a study. In: International Conference on Communication and Electronics Systems (ICCES), pp. 1–6.
Devi, M. & Ryhmend, U. (2014). Particle swarm optimization based node and link lifetime prediction algorithm for route recovery in MANET. Journal of Wireless Communication and Technology, 107(1), 1–10.
Dhinesh, B. L. D. & Venkata, K. P. (2013). Honey bee behavior inspired load balancing of tasks in cloud computing environments. Appl Soft Comput J, 13(5), 2292–2303.
Jayaraj, T. & Abdul Samath, J. (2019). Process optimization of big data cloud centre using nature inspired Firefly algorithm and
Workload modelling for resource usage analysis and simulation in cloud computing. Comp Elect Eng, 47:69–81; DOI:10.1016/j.compeleceng.2015.08.016 Mesbahi, M. & Rahmani, A. M. (2016). Load Balancing in Cloud Computing; A State of the Art Survey. I.I. Modern Education and Computer Science, vol. 3, pp. 64-78. Miglani, N. & Sharma, G. (2019). Modified Particle Swarm Optimization based upon Task categorization in Cloud Environment. International Journal of Engineering and Advanced Technology (IJEAT), 8(4C), 67 – 72. Mishra, S. K., Sahoo, B., & Parida, P. P. (2018). Load balancing in cloud computing: a big picture. J King Saud Univ Comp Infor Sci: pp. 1–32. Ngharamike, E., Ijemaru, G., Akinsannmi, O. & Folorunso, O. (2018). Cloud-based Simulation Tools for Cloud Test: A Review. FUOYE Journal of Engineering and Technology, Volume 3, Issue 1, pp.80-85. http://dx.doi.org/10.46792/fuoyejet.v3i1.100 Nuaimi, K. A., Mohamed, N., Nuaimi, M. & Al-Jaroodi, J. (2012). A Survey of Load Balancing in Cloud Computing: Challenges and Algorithms. Second Symposium on Network Cloud Computing and Applications, pp. 137-142, Doi: 10.1109/NCCA.2012.29. Peng, X. Guimin, H. Zhenhao, L. & Zhongbao, Z. (2018). An efficient load balancing algorithm for virtual machine allocation based on ant colony optimization. International Journal of Distributed Sensor Networks, vol. 14 no 12, pp. 1-9. Polepally, V. & Chatrapati, K. S. (2017). Dragonfly optimization and constraint measure-based load balancing in cloud computing. Cluster Comp, pp.1–13. Radojevic, B. & Zagar, M. (2011). Analysis of issues with load balancing algorithms in hosted cloud environments. Proc 34th Int Conv MIPRO pp. 416–420. Rajput, S. S. & Kushwah, V. S. (2016). A genetic based improved load balanced min-min task scheduling algorithm for load balancing in cloud computing. In: 8th international conference on Computational Intelligence and Communication Networks (CICN), pp. 677–681. Riakiotakis, I., Clorba, F. M., Androniko, T. & Papakonstantinou, G. (2011). Distributed dynamic load balancing for pipelined computations on heterogeneous systems. Parallel Computing, 37(10): pp. 713-729. Shi, J., Meng, C. & Ma, L. (2011). The Strategy of Distributed Load Balancing Based on Hybrid Scheduling. Fourth International Joint Conference on Computational Sciences and Optimization, pp. 268-271, Doi: 10.1109/CSO.2011.286. Suresh, S. & Sakthivel, S. (2017). A novel performance constrained power management framework for cloud computing using an adaptive node scaling approach, Computers & Electrical Engineering, Vol. 60, pp. 30-44. Tripathi, A. M. & Singh, S. (2017). A literature review on algorithms for the load balancing in cloud computing environments and their future trends. Computer Modelling & New Technologies, 21(1), 64-73. Vanitha, M. & Marikkannu, P. (2017). Effective resource utilization in cloud environment through a dynamic well-organized load balancing algorithm for virtual machines. Comp Elec Eng, vol. 57, pp. 199–208. Vasudevan, S. K., Anandaram, S., Menon, A. J & Aravinth, A. (2016). A novel improved honeybee based load balancing technique in cloud computing environment. Asian J Infor Technol, 15(9), pp. 1425–1430. Wang, S. C., Yan, K. Q., Liao, W. P., & Wang, S. S. (2010). Towards a load balancing in a three-level cloud computing network. In Proceedings: 2010 3rd IEEE International Conference on Computer Science and Information Technology, ICCSIT pp. 108– 113. Xiao, Z., Tong, Z., Li, K. & Li, K. (2017). 
Learning non-cooperative game for load balancing under self-interested distributed environment. Appl Soft Comput. vol. 52, pp. 376–386 -----
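A minimal C++ sketch of the two-level dispatch described in the framework section above. The Task type and the assignWithPSO / assignWithFirefly functions are hypothetical stand-ins (not from the paper); they abstract the PSO and Firefly placement steps, which are not specified at code level in the source.

```cpp
// Sketch of the Group A / Group B split and two-level dispatch
// (assumed names; the optimizers themselves are abstracted away).
#include <iostream>
#include <string>
#include <vector>

struct Task {
    int id;
    std::string sourceRegion;
    std::string destRegion;
};

// Level 1: regional placement (stand-in for Particle Swarm Optimization).
void assignWithPSO(const Task& t) {
    std::cout << "Task " << t.id << " -> regional DC " << t.sourceRegion
              << " via PSO\n";
}

// Level 2: central placement across all DCs (stand-in for Firefly).
void assignWithFirefly(const Task& t) {
    std::cout << "Task " << t.id << " -> central DC controller via Firefly\n";
}

int main() {
    std::vector<Task> accepted = {
        {1, "R1", "R1"},   // Group A: source and destination in one region
        {2, "R2", "R4"},   // Group B: crosses regions
        {3, "R3", "R3"},
    };
    for (const Task& t : accepted) {
        if (t.sourceRegion == t.destRegion)
            assignWithPSO(t);        // Group A stays at its regional DC
        else
            assignWithFirefly(t);    // Group B goes to the central controller
    }
    return 0;
}
```

Only tasks that cannot be handled locally reach the central controller, which is exactly the cost-limiting property the framework claims.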
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.46792/fuoyejet.v7i1.753?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.46792/fuoyejet.v7i1.753, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GOLD", "url": "https://journal.engineering.fuoye.edu.ng/index.php/engineer/article/view/753/pdf" }
2,022
[ "Review" ]
true
2022-03-18T00:00:00
[ { "paperId": "d5504c7d00e3daad085eb6309559265bd5bbf1b0", "title": "Load Balancing" }, { "paperId": "f05fea6d7cf5446a0e78ecd71f9b556fd45e23bf", "title": "An Efficient Load Balancing Technique based on Cuckoo Search and Firefly Algorithm in Cloud" }, { "paperId": "11e50be223fb811367df87c5d3c56ccda676abfb", "title": "Load balancing in cloud computing – A hierarchical taxonomical classification" }, { "paperId": "596f436ead78ad61fca2994d3fb13ac9866b5739", "title": "Process Optimization of Big-Data Cloud Centre Using Nature Inspired Firefly Algorithm and K-Means Clustering" }, { "paperId": "48e4d8c04ea40f39fc70e94a8a9b093438e8ffc9", "title": "An efficient load balancing algorithm for virtual machine allocation based on ant colony optimization" }, { "paperId": "5cfe0cdb2a321ec34c087d11b5cab8c3056c4e25", "title": "Heuristic-based load-balancing algorithm for IaaS cloud" }, { "paperId": "5b156803bb170be0a1d9b9a034a905c014629c68", "title": "Cloud-based Simulation Tools for Cloud Testing: A Review" }, { "paperId": "ff6a2b9ebb1dfc8b0c57e0d2e093e49b6608d1e0", "title": "Load balancing in cloud computing: A big picture" }, { "paperId": "72100972165b414807a27cba9a98de38a754dd93", "title": "Deadline constrained based dynamic load balancing algorithm with elasticity in cloud environment" }, { "paperId": "3ab2bb545b1ecb2de96c40ba4de8bea44fb72275", "title": "A novel performance constrained power management framework for cloud computing using an adaptive node scaling approach" }, { "paperId": "6be54d794793455106dc86faab644db7d72703ba", "title": "A Genetic Based Improved Load Balanced Min-Min Task Scheduling Algorithm for Load Balancing in Cloud Computing" }, { "paperId": "a08a67dbeeeb598263336884dffad9d36b88f8fa", "title": "Efficient Task Scheduling Multi-Objective Particle Swarm Optimization in Cloud Computing" }, { "paperId": "2e3c95e54b3542d621882a2b1102832c5501c4ba", "title": "Load balancing in cloud computing using optimization techniques: A study" }, { "paperId": "6e3a8c09838d6a6bdc1c5a30bd1362508efb96d3", "title": "LOAD BALANCING IN CLOUD COMPUTING" }, { "paperId": "047340fc3d7c4f6b148fea2f2e20f93af6c85017", "title": "Workload modeling for resource usage analysis and simulation in cloud computing" }, { "paperId": "c52db87ea7434d52c21ad6ac701fb90e3ad20a0d", "title": "A workload balanced approach for resource scheduling in cloud computing" }, { "paperId": "b2789a8dc38932c9e95b208cddba9d83b76912eb", "title": "Load balancing in MapReduce on homogeneous and heterogeneous clusters: an in-depth review" }, { "paperId": "57a7ff51bb7ec9e3b2013d0f0e9658481e3b1443", "title": "Genetic algorithm and gravitational emulation based hybrid load balancing strategy in cloud computing" }, { "paperId": "4d78795f26f708006e285c69b0dffb20fe751878", "title": "Integration of Big Data and Cloud Computing" }, { "paperId": "7945c31c83966def6e28e791715898c0699d5b65", "title": "Particle swarm optimization (PSO)-based node and link lifetime prediction algorithm for route recovery in MANET" }, { "paperId": "f30f8d1ed878637dcb2382ebfe89b224d7b5366a", "title": "Utilizing Round Robin Concept for Load Balancing Algorithm at Virtual Machine Level in Cloud Environment" }, { "paperId": "b9a952ed1b8bfae2e976b5c0106e894bd4c41d89", "title": "A Comparative Study of Load Balancing Algorithms in Cloud Computing Environment" }, { "paperId": "ff39647a293615171b5770e80ce8ef1c7afb6aa1", "title": "Load balancing in cloud based on live migration of virtual machines" }, { "paperId": "70421cff87427d36c781ed883f8e4717a6fc58de", "title": "Honey bee behavior 
inspired load balancing of tasks in cloud computing environments" }, { "paperId": "be262a136720dad893610ab53c1559613dda431d", "title": "A Survey of Load Balancing in Cloud Computing: Challenges and Algorithms" }, { "paperId": "64398f6029f95fa02273797a02d62f9f6b298b1a", "title": "Heuristic static load-balancing algorithm applied to the fragment molecular orbital method" }, { "paperId": "fefb78dae1358ddb1edf963ad111054f124fd52f", "title": "Distributed dynamic load balancing for pipelined computations on heterogeneous systems" }, { "paperId": "1737177098539e295235678d664fcdb833568b94", "title": "Cloud Task Scheduling Based on Load Balancing Ant Colony Optimization" }, { "paperId": "ed192edfdf29e3e1e405230b6115c4edf7905156", "title": "Analysis of issues with load balancing algorithms in hosted (cloud) environments" }, { "paperId": "dfe76e128ba792096ee1a9339252b2852572435f", "title": "The Strategy of Distributed Load Balancing Based on Hybrid Scheduling" }, { "paperId": "6334886332407a7f9534af89932db9c05ecdcc05", "title": "Towards a Load Balancing in a three-level cloud computing network" }, { "paperId": "dc34239d6af5eb82199eca2f3bb7d86cb3347252", "title": "A Novel Firefly algorithm based Load Balancing approach for Cloud Computing" }, { "paperId": "06828299fcffe9f26410c49ba10422dc07f4d07a", "title": "International Journal of Soft Computing and Engineering" }, { "paperId": null, "title": "creativecommons.org/licenses/by-nc/4.0/" }, { "paperId": "35aae985e61a7b3508a1a89760ad5166a81163fa", "title": "Learning non-cooperative game for load balancing under self-interested distributed environment" }, { "paperId": "33e0bbac52f2b2364e337e78fcd467a1087f6537", "title": "Effective resource utilization in cloud environment through a dynamic well-organized load balancing algorithm for virtual machines" }, { "paperId": "28f334d143251d4050858dfee1e5fb5217a6a41f", "title": "Dragonfly optimization and constraint measure-based load balancing in cloud computing" }, { "paperId": null, "title": "A literature review on algorithms for the load balancing in cloud computing environments and their future trends" }, { "paperId": "bdc70c1b4e8debc69a6cf842e80b8417404c8667", "title": "A Comprehensive Study on Cloud Computing" }, { "paperId": null, "title": "CC BY NC" }, { "paperId": "5e133babe82b63bbfc42db00ae833a441a650c7b", "title": "A Guide to Dynamic Load Balancing in Distributed Computer Systems" }, { "paperId": null, "title": "FUOYE Journal of Engineering and Technology" } ]
9,027
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0190ccd90905e8efb49d511cc958a4210ea8aa1b
[ "Computer Science" ]
0.911229
A Loop-Based Key Management Scheme for Wireless Sensor Networks
0190ccd90905e8efb49d511cc958a4210ea8aa1b
EUC Workshops
[ { "authorId": "2776384", "name": "Yingzhi Zeng" }, { "authorId": "39136591", "name": "Bao-kang Zhao" }, { "authorId": "39363506", "name": "Jinshu Su" }, { "authorId": "46580710", "name": "Xia Yan" }, { "authorId": "144668859", "name": "Z. Shao" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
null
## A Loop-Based Key Management Scheme for Wireless Sensor Networks

YingZhi Zeng[1], BaoKang Zhao[1,2], JinShu Su[1], Xia Yan[1,3], and Zili Shao[2]

1 School of Computer, National University of Defense Technology, ChangSha Hunan, China 2 Department of Computing, The Hong Kong Polytechnic University, Hong Kong 3 School of Computer and Communication, Hu'nan University, ChangSha Hunan, China zyz1234@gmail.com, sjs@nudt.edu.cn, sunofxy@hotmail.com, {csbzhao,cszlshao}@comp.polyu.edu.hk

**Abstract.** Wireless sensor networks are emerging as a promising solution for various types of futuristic applications for both the military and the public. The design of key management schemes is one of the most important aspects and basic research fields of secure wireless sensor networks. Efficient key management can guarantee the authenticity and confidentiality of the data exchanged among the nodes in the network. In this paper, we propose a new key management scheme based on a loop topology. Compared with cluster-based key management schemes, the loop-based scheme is shown to be more efficient, cost-saving and safe.

### 1 Introduction

Recent advancements in wireless communications and micro-electromechanical technologies have promoted the development and applications of wireless sensor networks (WSN). WSNs are increasingly becoming viable solutions to many challenging problems for both military and public applications, including battlefield surveillance, border control, target tracking and infrastructure protection. In a WSN, sensor nodes are typically deployed in adversarial environments, such as military applications where a large number of sensors may be dropped from airplanes. Sensor nodes need to communicate with each other for data processing and routing. Secure communication between a pair of sensor nodes requires authentication, privacy and integrity. However, the wireless connectivity, the absence of physical protection, the close interaction between sensor nodes and their physical environment, and the unattended deployment of sensor nodes make them highly vulnerable to node capture as well as a wide range of network-level attacks. Moreover, the constrained energy, memory, and computational capabilities of the employed sensor nodes limit the adoption of security solutions designed for traditional networks. As a successful security mechanism of wired networks, key management is crucial to the secure operation of sensor networks. A large number of keys need to be managed in order to encrypt and authenticate all sensitive data exchanged. The characteristics of sensor nodes and WSNs render most existing key management solutions developed for other networks infeasible.

M. Denko et al. (Eds.): EUC Workshops 2007, LNCS 4809, pp. 103–114, 2007. © IFIP International Federation for Information Processing 2007

-----

To provide security in such a distributed environment, the well-developed public-key cryptographic methods were considered at first, but these demand excessive computation and storage from the extremely resource-limited sensor nodes [1]. Symmetric key cryptography is considered the only feasible way for wireless sensor networks. Therefore, there must be a secret key shared between a pair of communicating sensor nodes. Sensor nodes can use pre-distributed keys directly, or use keying materials to dynamically generate pair-wise keys.
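As a toy illustration of "using pre-distributed keys directly": two neighboring nodes can intersect their pre-loaded key rings to find a shared key, the basic step of the random key pre-distribution schemes reviewed in Section 2 below. This sketch is illustrative only (not from the paper) and models a key ring simply as a set of key IDs.

```cpp
// Shared-key discovery between two neighbors with pre-loaded key rings
// (key IDs and ring contents are made up for illustration).
#include <algorithm>
#include <iostream>
#include <iterator>
#include <set>

int main() {
    std::set<int> ringA = {3, 17, 42, 88};   // keys loaded into node A's ROM
    std::set<int> ringB = {5, 42, 61, 88};   // keys loaded into node B's ROM

    std::set<int> shared;
    std::set_intersection(ringA.begin(), ringA.end(),
                          ringB.begin(), ringB.end(),
                          std::inserter(shared, shared.begin()));

    if (shared.empty())
        std::cout << "No common key: set up the link via a helper neighbor\n";
    else
        std::cout << "Secure link via shared key ID " << *shared.begin() << "\n";
    return 0;
}
```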
Since the network topology is unknown prior to deployment, a key pre-distribution scheme is required, where keys are stored in the ROMs of sensor nodes before deployment. The stored keys must be carefully selected so as to increase the probability that two neighboring sensor nodes, which are within each other's wireless communication range, have at least one key in common. Those nodes which have no shared keys may set up secure communication with the help of neighboring nodes. After the deployment, each sensor node should connect with its neighboring nodes and generate their security keys in a self-organized manner. After key generation, the next important step is distributing the keys to the relevant nodes.

The main contribution of this work is to shed some light on the basic framework of key management schemes for WSN. The loop-based scheme includes key material pre-distribution, key generation, key distribution and rekeying. In particular, we bring in a novel loop-based topology for key management. To the best of our knowledge, this paper is the first one to apply a loop topology to a key management scheme in distributed wireless sensor networks. Our analysis and comparison indicate that this approach has substantial advantages over the traditional cluster-topology scheme.

The remainder of the paper is organized as follows. Section 2 provides an overview of the related works. The loop-based key management scheme is introduced in Section 3. Section 4 deals with the detailed performance analysis and comparisons. We conclude in Section 5 and point out some future research directions.

### 2 Related Works

A number of key management schemes have been developed for sensor networks in recent years. In this section, we review the major existing key management schemes in wireless sensor networks. Eschenauer and Gligor [2] proposed a random key pre-distribution scheme. Each sensor node is assigned k keys out of a large pool P of keys in the pre-deployment phase. Neighboring nodes may establish a secure link only if they share at least one key, which happens with a certain probability based on the selection of k and P. A major advantage of this scheme is the exclusion of the base station from key management. However, successive node captures enable the attacker to reveal network keys and use them to attack other nodes. Based on the EG scheme, the q-composite keys scheme was proposed by Chan in [3]. The difference between this scheme and the EG scheme is that q common keys (q > 1), instead of just a single one, are needed to establish secure communication between a pair of nodes. Using the framework of pre-distributing a random set of keys to each node, Chan presented two other mechanisms for key management. The first mechanism is a multi-path key reinforcement scheme, applied in conjunction with the basic scheme to yield improved resilience against node capture attacks. The main attractive feature of this scheme is that it can enhance the security of an established link key by establishing the link key through multiple paths. The second mechanism is a random pair-wise keys scheme. The purpose of this scheme is to allow node-to-node authentication between communicating nodes. Liu and Ning [4] provided further enhancement by using t-degree bivariate key polynomials.
Since an attacker needs to capture at least t+1 nodes to obtain any t-degree polynomial, this solution was shown to significantly enhance network resilience to node capture as long as the number of captured nodes is below a certain threshold. However, if the number of captured nodes exceeds this threshold, the network is almost entirely captured by the attacker. Du et al. [5] proposed a method to improve the basic scheme by exploiting a priori deployment knowledge. They also proposed a pair-wise key pre-distribution scheme for wireless sensor networks [6], which uses Blom's key generation scheme [7] and the basic scheme as the building blocks. Choi and Youn [8] proposed a key pre-distribution scheme guaranteeing that any pair of nodes can find a common secret key between themselves by using the keys assigned by LU decomposition of a symmetric matrix of a pool of keys.

### 3 Loop-Based Key Management Scheme

Existing key management approaches mainly utilize cluster topology information, and do so inefficiently. In fact, the loop-based topology has many special benefits in WSN. We present a new key management scheme based on the loop topology. To our knowledge, this is the first paper in this area that combines the node topology with key management.

**3.1 Basic Definitions**

In graph theory, a loop is a non-directional path which begins and ends with the same node. Since there is at most one connection between every two nodes in an undirected graph G = (V, E) [9], a path from vi to vj representing a wireless sensor network link can be defined as a sequence of vertices {vi, vi+1, …, vj}, where V represents the set of nodes and E is the set of connections.

**Loop length:** The length of a loop, which can also be called the path length, is the number of hops from vi to vj. Let L be a loop. It is obvious that if length(L) < 3, either the node on L is isolated or L is a round trip between two nodes.

**Loop type:** In a large-scale WSN, there may be some isolated nodes. A loop with only two nodes is also a special loop. For example, in Fig. 1, L2 and L3 are typical loops and L1 is a two-node special loop. In the following parts, nodes on loops with length greater than 2 are called on-loop nodes. Let L be the set of the loops that node v is on. If len(l) ≤ 2 for every l in L (i.e., max_{l∈L} len(l) ≤ 2), we say v is a non-on-loop node.

**3.2 The Loop-Based Topology**

Unlike traditional wired networks, a WSN is a data-centric network. Its core function is to aggregate data and to forward data through the route nodes to the sink. In our key management scheme, we consider that the key management topology and the data processing topology should not be separated.

-----

Old key management schemes are mainly based on the cluster topology. Under the assumption that a sensor node either acts as a data producer or is just a router, every node should take part in a vote to choose some nodes acting as cluster headers (CHs). After the deployment of nodes and the CH voting, the cluster headers play an important role in the next steps, which include initializing keys, distributing group keys and rekeying. There are two kinds of working flows in cluster-based key management schemes: the key management flow is under the control of the cluster headers, while the data aggregation flows are processed between nodes doing sensing work. In this paper we take the loop as the basic unit, and the entire network is grouped into inter-connected loops in a self-organized mode.
Within a loop, nodes can exchange information with each other by forwarding messages along the loop in either of the two directions. For inter-loop communications, messages are first routed to the gateway nodes (router nodes joining multiple loops) and transferred from gateway to gateway until they reach the destination; as for inner-loop transmission, messages are finally forwarded to the destination. The loop topology has many special benefits in WSN: (1) The loop topology relates directly to the physical positions of the nodes. When a node within the loop receives an order to sense some special information, the node becomes an information aggregator immediately. Every neighboring node gets some sensor data and sends it to the aggregator. The aggregator will compare and integrate it with its own report, and the result is shortened before it is sent to the next hop. Hop by hop, the sensor data will be shortened and aggregated many times until it arrives at the sink node. (2) There are no critical header nodes defined in a loop, so the network topology never suffers from the chain of changes caused by the re-election of headers. The scenario of a group without a leader will never happen in a loop-based WSN. (3) Local loop information can be kept in every node on the loop. This topology information redundancy enhances the network's robustness. (4) The feature of a loop that there are two paths between every two nodes on the same loop provides a backup route for link failure during message transmission.

**3.3 Creation of a Loop Topology for Key Management**

1. (Key material pre-distribution phase) Before the deployment, every node should be assigned some key materials, including a unique ID, a private key (known only by the key server and the node itself), a hash function and a global key. After deployment, every node will start broadcasting its ID message encrypted by the global key. This action can prevent malicious eavesdropping during the initialization phase of key management.

2. Every node which receives a message can build up its neighbor table.

3. Condition 1 for loop formation: After checking their neighbors' information, those nodes with only one neighbor, such as node A in Figure-1, will start the second round of broadcasting. The information in their neighbor table (NT) is broadcast. Neighboring nodes that receive NT messages will add the neighbor information into their link table (LT) and broadcast the latest LT messages to their neighboring nodes.

-----

**Fig. 1. An example for loop-based wireless sensor networks**

If the sensor nodes are deployed close enough, then none of them has only one neighbor, and Condition 2 for loop formation should be taken into consideration. Timing is the first key point. At time T1 after the deployment, a one-neighbor node can start sending messages. If none of the nodes has only one neighbor, those nodes with at least M neighbors (M ≥ 3) can start broadcasting their NT at time T1 + nT (the unit time T equals the time a node broadcast would need). If n = 5 in Figure-1, then node I will start sending its NT message. Table-2 lists the messages (including message sender, receiver and contents) passed among some nodes in Figure-1. The message processing details and sequence are shown in Table-1 and Figure-2.

**Fig. 2. An example of a loop's creation**

4. Forming a loop: After several units of time nT, some nodes, such as B in Figure-1, may receive two loop messages from neighbors.
When a node finds, within the received node sequence, a multiple-hop path that connects back to itself, a loop of those nodes can be formed by the conjunction of the loop messages. Thus the whole sensor network can be divided into many loops, among which are some special loops. Two loops may share two or even more common nodes, such as L-2 and L-3 in Figure-1.

-----

**Table 1. Loop creation messages**

5. Special loop formation: A single-link node, such as node A in Figure-1, has only one link with a neighbor node. Those two nodes (A and B) form a special loop L-1. Only when a node receives a message {} coming from its neighbor node can this kind of special loop be created. Through steps 1 to 4, another loop L-3 can be formed by nodes E, D, G, H, I and J. It is obvious that two nodes (D and E) are shared by loops L-2 and L-3. This type of loop formation is determined by the loop size and the node positions.

**3.4 The Loop-Based Key Management Scheme (LBKMS)**

As described in Section 3.3, the first stage of LBKMS is to form loops through steps 1 to 5. All the nodes of a WSN are divided into different loops or shared between neighboring loops. Based on the loop topology, this paper develops a new key concept: the loop-key. Given the loop information (every node has its neighbor table, link table and loop sequence), the loop-creator node can set up a new loop-key for the nodes in the loop. The computing formula of the loop-key is:

Loop-key = Hash(timestamp || private key || loop-creator node ID || some loop members' IDs)    (1)

The timestamp is introduced into the above formula to prevent replay attacks coming from neighboring nodes. The private key is a proprietary key of the loop creator. It is also the creator's privilege to decide how many loop members' IDs are used in the hash function. For the example of Figure-1, the loop-key may be equal to Hash(Ts || KB || B || C || D || E). This formula is based on the material preloaded on each sensor node; using the timestamp and the other loop nodes' IDs can guarantee that the produced loop-key is safe (a code sketch of this computation appears below). In the third stage, the loop creator will send the loop-key, encrypted with the global key, to its loop members through the loop routing. If the loop is not a special one, the key messages will be sent to its two loop-neighbors first. Every node on the loop will send the key message to the next node in its neighbor table until some node receives the same message twice.

After the above three stages, every node in the WSN should belong to a loop group and should keep a loop-key shared with the other loop members. Sensor data aggregation and communication within the loop should be encrypted using the loop-key.

**The loop-based rekeying:** Being a well-known resource-limited network, a WSN cannot afford to change loop-keys continually. But there are still two scenarios in which rekeying is sometimes needed. In the first scenario, if a loop member is recognized as a defection node, or the sink sends a command to clean some node, the urgent affair is to kick it out of the loop member list. First of all, such an abnormal message arrives at the closest loop member. That node will send a cleaning message to its two loop-neighboring nodes (if the defection node has just one direct neighbor, then just one cleaning message is enough). As is shown in Figure-3, the cleaning message should be sent to every node on the loop except the defection node. After that, the first leader node will start sending the rekeying message to replace the old loop-key.

-----
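A minimal sketch of the loop-key computation in formula (1) above. std::hash is used only as a stand-in for a real cryptographic hash function; the field order follows the formula, and the names and types are illustrative, not the paper's implementation.

```cpp
// Loop-key = Hash(timestamp || private key || creator ID || member IDs)
// (std::hash is NOT cryptographically secure; it only stands in here.)
#include <functional>
#include <iostream>
#include <string>
#include <vector>

std::string makeLoopKey(const std::string& timestamp,
                        const std::string& creatorPrivateKey,
                        const std::string& creatorId,
                        const std::vector<std::string>& memberIds) {
    std::string input = timestamp + "||" + creatorPrivateKey + "||" + creatorId;
    for (const auto& id : memberIds)   // the creator chooses how many IDs to mix in
        input += "||" + id;
    return std::to_string(std::hash<std::string>{}(input));
}

int main() {
    // Mirrors the paper's example Hash(Ts || KB || B || C || D || E).
    std::cout << makeLoopKey("Ts", "KB", "B", {"C", "D", "E"}) << "\n";
    return 0;
}
```

Because the timestamp is part of the input, a replayed key-setup message produces a different digest, which is the replay-protection argument made above.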
**Fig. 3. Loop-based rekeying in WSN (1)** (message-flow diagram: a detection report or a command from the sink reaches the closest loop member, which sends cleaning messages to its left and right loop neighbors and onward around the loop, bypassing the defection node, followed by rekeying messages)

Compared with the first scenario, the second scenario deals with normal rekeying. If a loop member is out of battery and cannot work properly any more, it should be deleted from the loop list, and the loop-key that it shared with the other members should also be abandoned. So the working flow in Figure-4 first cleans the old loop-key stored on every loop member. The second step is to set up a new loop-key. For the sake of saving rekeying time, the new key's creator is the loop node that has received the same cleaning message twice. In one word, the rekeying process is very important in a long-lived WSN. The loop-key should be changed as quickly as possible if some defection nodes are found. At the same time, normal key updating is also a good step to keep the WSN safe.

**Security enhancement in rekeying:** Because defection nodes can overhear neighbors' messages during the rekeying process, some measures should be taken to keep the communication between the remaining loop nodes in the overhearing area safe. Here we assume that a defection node can only overhear its one-hop neighbors' messages. Obviously, we cannot prevent a defection node from hearing the first cleaning message, but we can stop it from getting the new keys and from causing other damage. For example, in Figure-1, if node I is defected, links E-J and G-H should use new keys which node I cannot compute based on the pre-shared material and the overheard contents.

-----

**Fig. 4. Loop-based rekeying in WSN (2)** (message-flow diagram: a battery problem at one loop member triggers cleaning messages to the left and right loop neighbors; the loop node that receives two identical messages sends the rekeying message)

We use the polynomial-based key pre-distribution protocol proposed by Blundo et al. [10] to establish a new key shared between the last cleaning message's sender and receiver. The new key is only created and used between the sender and receiver, so it is a pair-wise key. First, before the sensor nodes' deployment, a key server randomly generates a bivariate t-degree polynomial f(x, y) = ∑_{i,j=0}^{t} a_ij x^i y^j over a finite field F_q, where q is a prime number that is large enough to accommodate a cryptographic key, and which has the property that f(x, y) = f(y, x). For each sensor node i with a unique ID, the key server computes a polynomial share of f(x, y), that is, f(i, y). For any two sensor nodes i and j, node i can compute the common key f(i, j) by evaluating f(i, y) at point j, and node j can compute the common key with i by evaluating f(j, y) at i. So to establish a pair-wise key, both nodes just need to evaluate the polynomial at the ID of the other node, without any key negotiation, and the defection nodes know nothing of the new key. The scheme is mathematically proved to be secure and t-collusion resistant. At the same time, we can also use the timestamp to prevent fake cleaning messages made by the defection nodes.

### 4 Analysis, Simulation and Comparison

Node organization is fundamental in WSN research. WSNs of clustered organization are viewed as the most energy-efficient and most long-lived class of sensor networks [11].
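Before turning to the comparison with cluster-based schemes, here is a small numeric sketch of the Blundo et al. polynomial evaluation described in the security enhancement above: with a symmetric coefficient matrix, node i's share f(i, y) evaluated at j equals node j's share evaluated at i. The tiny prime q and the coefficients are illustrative only; a real deployment uses a large q.

```cpp
// f(x, y) = sum_{i,j=0..t} a_ij x^i y^j over F_q, with a_ij = a_ji,
// so that f(i, j) = f(j, i) is the shared pair-wise key.
#include <cstdint>
#include <iostream>

const int64_t q = 101;            // toy prime; illustrative only
const int t = 2;                  // polynomial degree
// Symmetric coefficient matrix a_ij = a_ji ensures f(x, y) = f(y, x).
const int64_t a[t + 1][t + 1] = {{7, 3, 9}, {3, 5, 2}, {9, 2, 4}};

int64_t powmod(int64_t base, int exp) {
    int64_t r = 1;
    for (int k = 0; k < exp; ++k) r = (r * base) % q;
    return r;
}

int64_t f(int64_t x, int64_t y) {         // evaluate f(x, y) mod q
    int64_t sum = 0;
    for (int i = 0; i <= t; ++i)
        for (int j = 0; j <= t; ++j)
            sum = (sum + a[i][j] * powmod(x, i) % q * powmod(y, j)) % q;
    return sum;
}

int main() {
    int64_t nodeI = 11, nodeJ = 29;       // node IDs
    // Node i holds the share f(i, y); node j holds f(j, y).
    std::cout << "key at node i: " << f(nodeI, nodeJ) << "\n";
    std::cout << "key at node j: " << f(nodeJ, nodeI) << "\n";  // identical
    return 0;
}
```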
There exist some key management schemes for WSN that are based on the cluster topology [12–14]. Creating a cluster for key management in a wireless sensor network includes at least 5 steps. Here we use the max-connection-degrees method as an example:

1. Similar to our loop-based scheme, every node broadcasts its ID to its neighbor nodes;
2. After receiving its neighbors' ID messages, every node calculates its number of neighbors and sends it, together with the neighbors' IDs, to the neighbor nodes;
3. A node whose connection degree is greater than its neighbors' can send a cluster-head request message to its neighbors;

-----

4. Every node with a lower connection degree sends a reply message to those cluster-head request messages: join or reject. Nodes that received different request messages have to choose one of the cluster-head campaigners as their cluster header; which node is chosen is determined by ID or other parameters.
5. After receiving enough join messages from neighbor nodes, the cluster-head candidate can set up a cluster key with its cluster members.

It is obvious that key management based on the cluster topology is more complicated than our scheme described in Section 3. According to the comparison in Tables 2 and 3, the results can be shown as follows:

Communication cost: As a resource-poor network, a WSN cannot afford too much communication among its nodes. The cluster-to-cluster relationship is more complex than that of loop-to-loop. It is common that some neighboring nodes are shared between two loops, but it would be redundant for more than one node to be shared between two clusters. Two close clusters will spend more energy on communication than two loops.

Storage cost: The cluster-based topology has to save neighbor clusters' information as routes in the header's and some members' storage. On the contrary, in the loop-based topology, the neighbor route information is already broadcast during the second stage of the loop's formation.

**Table 2. Cluster-based vs. loop-based in communication**

**Fig. 5. Sending message numbers contrast** (bar charts of average sending message numbers over stages 1–6 for CBKMS vs. LBKMS, at network sizes of 50 and 200 nodes)

-----

Communication is the biggest energy consumer; in particular, the cost of sending a message is much larger than that of receiving one. We used ns2 to simulate WSNs of different network sizes and applied CBKMS and LBKMS under the same conditions. After calculating the average numbers of sent messages, the contrast result is listed in Figure-5. We find that CBKMS sends more messages than LBKMS from stages 1 to 5; only in stage 6 does the loop key have to be transmitted over more hops than the cluster key. From the perspective of security, the loop-based key management scheme is safer and more stable than the cluster-based one. Firstly, the two schemes have different role assignments among sensor nodes. The difference is listed in Table-4. From the comparison table we can find that CBKMS assigns many important tasks to cluster headers. A header node will act as a header all the time until it is replaced by another node. A loop creator merely initiates a loop's formation and has the right to generate a loop key; after the loop is formed, there is no difference between normal nodes and the loop creator.
According to probability theory, every member in a loop topology has an equal probability of being caught. Once a loop member is lost, its loop-neighbors can set up a new loop quickly; what they need to do is delete the lost node's ID from the loop sequence and generate a new loop key. If a cluster header is caught, its member nodes have to take part in a new cluster header election. At the same time, the probability of a cluster header being caught is determined by how the number of clusters compares to the total number of nodes. This probability is greater than that of a loop creator being caught. The probability comparison and impact comparison are listed in Table-5 and Table-6.

**Table 3. Cluster-based vs. loop-based in node storage**

**Table 4. Node responsibility comparison between CBKMS and LBKMS**

-----

**Table 5. Comparison of probability of node being caught**

**Table 6. Comparison of impact of node being caught**

### 5 Conclusion

Key management is one of the most important technologies in the security mechanism of WSN. In this paper, we presented a new key management scheme called LBKMS, which integrates a key pre-distribution mechanism into a loop-based infrastructure. LBKMS is also a dynamic scheme that can accommodate changing scenarios. The rekeying scheme based on the loop topology and its security enhancement were also described in detail. Compared with cluster-based key management schemes, LBKMS is shown to be more efficient, cost-saving and safe. Future research should focus on further reduction of the communication cost in key establishment.

### Acknowledgments

This work was supported by the National Research Foundation for the Doctoral Program of Higher Education of China under grant No. 20049998027, and the National Science Foundation of China under grant No. 90604006 and No. 90104001.

-----

### References

[1] Carman, D.W., Kruus, P.S., Matt, B.J.: Constraints and approaches for distributed sensor network security. Technical Report #00-010, NAI Labs (2000)
[2] Eschenauer, L., Gligor, V.D.: A key-management scheme for distributed sensor networks. In: The 9th ACM Conference on Computer and Communications Security, Washington, DC, USA, November 18-22, pp. 41–47 (2002)
[3] Chan, H., Perrig, A., Song, D.: Random key pre-distribution schemes for sensor networks. In: Proc. 2003 IEEE Symposium on Security and Privacy, May 11-14, pp. 197–213 (2003)
[4] Liu, D., Ning, P.: Establishing pairwise keys in distributed sensor networks. In: ACM Conference on Computer and Communications Security, pp. 52–61 (2003)
[5] Du, W., Deng, J., Han, Y.S., Chen, S., Varshney, P.K.: A key management scheme for wireless sensor networks using deployment knowledge. In: INFOCOM 2004, vol. 1, pp. 586–597 (March 7-11, 2004)
[6] Du, W., Deng, J., Han, Y.S., Varshney, P.K., Katz, J., Khalili, A.: A Pairwise Key Pre-distribution Scheme for Wireless Sensor Networks. ACM Transactions on Information and System Security 8(2), 228–258 (2005)
[7] Blom, R.: An optimal class of symmetric key generation systems. In: Beth, T., Cot, N., Ingemarsson, I. (eds.) EUROCRYPT 1984. LNCS, vol. 209, pp. 335–338. Springer, Heidelberg (1985)
[8] Choi, S., Youn, H.: An Efficient Key Pre-distribution Scheme for Secure Distributed Sensor Networks. In: EUC 2005. LNCS, vol. 3823, pp. 1088–1097. Springer, Heidelberg (2005)
[9] Li, Y., Wang, X., Baueregger, F., Xue, X., Toh, C.K.: Loop-Based Topology Maintenance in Wireless Sensor Networks. In: Lu, X., Zhao, W. (eds.) ICCNMC 2005. LNCS, vol. 3619. Springer, Heidelberg (2005)
[10] Blundo, C., De Santis, A., Herzberg, A., Kutten, S., Vaccaro, U., Yung, M.: Perfectly secure key distribution for dynamic conferences. In: The 12th Annual International Cryptology Conference on Advances in Cryptology, pp. 471–486. Springer, Berlin (1992)
[11] Vlajic, N., Xia, D.: Wireless Sensor Networks: To Cluster or Not To Cluster? In: IEEE International Symposium on WoWMoM 2006, Niagara-Falls, Buffalo-NY, USA (June 2006)
[12] Chorzempa, M., Park, J.-M., Eltoweissy, M.: SECK: survivable and efficient clustered keying for wireless sensor networks. In: IPCCC 2005 (2005)
[13] Younis, M.F., Ghumman, K., Eltoweissy, M.: Location-Aware Combinatorial Key Management Scheme for Clustered Sensor Networks. IEEE Transactions on Parallel and Distributed Systems 17(8), 865–882 (2006)
[14] Lin, L., Ru-chuan, W., Bo, J., Hai-ping, H.: Research of Layer-Cluster Key Management Scheme on Wireless Sensor Networks. Journal of Electronics & Information Technology 28(12) (December 2006)

-----
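A hedged sketch of step 4 of the loop-creation procedure from Section 3.3: a path message accumulates node IDs hop by hop, and a loop is recognized when the message is forwarded to a node already on the path. The message format and the hop order are assumptions for illustration; only the node names of loop L-3 (E, D, G, H, I, J) come from the paper.

```cpp
// Recognizing a formed loop from an accumulated path message
// (hypothetical message format; node names from the paper's loop L-3).
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

// Returns the loop contained in 'path' once the new hop closes a cycle,
// or an empty vector if no loop has formed yet.
std::vector<std::string> tryCloseLoop(const std::vector<std::string>& path,
                                      const std::string& nextHop) {
    auto it = std::find(path.begin(), path.end(), nextHop);
    if (it == path.end()) return {};   // nextHop is not on the path yet
    return {it, path.end()};           // the nodes from nextHop onward form the loop
}

int main() {
    // Path message built hop by hop; the hop order here is illustrative.
    std::vector<std::string> path = {"I", "H", "G", "D", "E", "J"};
    auto loop = tryCloseLoop(path, "I");   // J forwards back to I, closing the loop
    for (const auto& id : loop) std::cout << id << " ";
    std::cout << "\n";                     // prints the on-loop nodes
    return 0;
}
```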
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-3-540-77090-9_10?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-3-540-77090-9_10, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://link.springer.com/content/pdf/10.1007%2F978-3-540-77090-9_10.pdf" }
2,007
[ "JournalArticle" ]
true
2007-12-17T00:00:00
[]
7,130
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Business", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01922d21fe492e0e4101447f8a09d67f6114d8ba
[ "Computer Science" ]
0.854103
Identification of Product Originality Based on Supply Chain Management Using Block Chain
01922d21fe492e0e4101447f8a09d67f6114d8ba
Intelligent Systems and Computer Technology
[ { "authorId": "2239924437", "name": "Sheela Rani P" }, { "authorId": "2240749771", "name": "Sankara Revathi S" }, { "authorId": "2239970998", "name": "Dharshini J S" }, { "authorId": "2239878796", "name": "Rekha M" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
The Internet of Things (IoT) is integrated with the supply chain management process to track products. To track a product, smart tags such as QR codes and NFC are used. With technological enhancement, blockchain has been introduced into the supply chain management process. Blockchain is the great revolution by which data in centralized form is transformed into a decentralized form. Distributed Ledger Technology (DLT) is one of the methods used in the Ethereum blockchain. The main advantage of using DLT is that it offers a decentralized, privacy-preserving and verifiable process for the smart tags. In the existing system, only a single server was used to maintain all the processes, such as supplier, manufacturer and distributor. In this application we use different servers, which is more secure than the existing system. The solution proposed in this paper checks the product evidence during the entire lifecycle of the product by using a smart contract. The data is made immutable by using a smart contract with the Ethereum blockchain. Duplication is handled by the blockchain server.
_D.J. Hemanth et al. (Eds.)_
_© 2020 The authors and IOS Press._
_This article is published online with Open Access by IOS Press and distributed under the terms of the Creative Commons Attribution Non-Commercial License 4.0 (CC BY-NC 4.0)._
_doi:10.3233/APC200141_

# Identification of Product Originality Based on Supply Chain Management Using Block Chain

## Sheela Rani P[a,1], Sankara Revathi S[b], Dharshini J S[b], and Rekha M[b]

aAssistant Professor, Dept of IT, Panimalar Institute of Technology, Chennai
bUG Scholar, Dept of IT, Panimalar Institute of Technology, Chennai, India

Abstract. The Internet of Things (IoT) is integrated with the supply chain management process to track products. To track a product, smart tags such as QR codes and NFC are used. With technological enhancement, blockchain has been introduced into the supply chain management process. Blockchain is the great revolution by which data in centralized form is transformed into a decentralized form. Distributed Ledger Technology (DLT) is one of the methods used in the Ethereum blockchain. The main advantage of using DLT is that it offers a decentralized, privacy-preserving and verifiable process for the smart tags. In the existing system, only a single server was used to maintain all the processes, such as supplier, manufacturer and distributor. In this application we use different servers, which is more secure than the existing system. The solution proposed in this paper checks the product evidence during the entire lifecycle of the product by using a smart contract. The data is made immutable by using a smart contract with the Ethereum blockchain. Duplication is handled by the blockchain server.

Keywords. Blockchain, Distributed Ledger Technology (DLT), Supply Chain Management, Smart contract.

1. Introduction

The main issue is that the consumer buys the product from the retailer without any prior knowledge of whether the product is original or duplicate. The consumer buys the product just by seeing the brand logo and the ISO hallmark, but duplicators are expert in making a product look just like the original. To overcome this problem, the Ethereum blockchain is used [1]. The details of each and every product are stored in a separate blockchain [2]. Distributed ledger technology is used to store the details in a decentralized manner, so the product details can be viewed by everyone [3]. The Ethereum blockchain is used because, once the product details have been entered into the blockchain, they cannot be modified by anyone [4]. A smart contract is also used in the supply chain management to make the process more efficient and to provide traceability, security and transparency.

1 Sheela Rani P, Department of Information Technology, Panimalar Institute of Technology, Chennai; E-mail: rpsheelarani2014@gmail.com

-----

2. Digital Supply Chain Management System

The supply chain management process is mainly used to deliver the product to the consumer as-is, from the development of the raw materials. It includes various phases like supply planning, demand planning, product planning, and supply management. The above process can succeed only if the customer is satisfied with the product [5]. In between product delivery, any unauthorized person can change the product, which creates a major impact on the business. The impact is not only on the business but also on the consumer, who buys the product simply by believing in the brand. Because of this problem, the supply chain has been revolutionized into a digital process.
The blockchain concept with smart contracts is introduced [6]. It brings a drastic change to the order management industry. The most important thing is that the data about the product becomes decentralized. This makes it easy for the consumer to know the product well.

2.1. MVC - Model View Controller

2.1.1. Model

The model describes the kind of data stored. It does not concern itself with the view or the controller [7]. Whatever changes are made to the data, it updates them automatically and displays them to the observer.

2.1.2. View

In MVC, the view is the visual representation of the data. It defines what data is to be viewed by the user. It transfers the user's data request to the controller [8]. A separate interface is created for the supply chain process.

2.1.3. Controller

The controller acts as the heart of the entire system; it acts as an intermediary between the user and the system. The appropriate input is displayed on the screen [9]. According to the user's input, the controller provides the necessary output to the consumer. There are different frameworks available on the Java platform, but in our project we use two: i. the Hibernate framework, and ii. Spring Boot.

2.1.4. Hibernate Framework

Hibernate is one of the frameworks on the Java platform and is open-source software. It is used to retrieve data from the blockchain server. It is one method of mapping Java classes to tables and Java data types to SQL data types.

2.1.5. Spring Boot

Spring Boot is used because it is simple to develop with and can be configured automatically. It is mainly used to develop software applications. It is highly user-friendly software and can be easily understood by everyone.

-----

3. Methodology

3.1. Creating Suppliers

First comes registration. The registration form contains the supplier details; after completing the supplier registration successfully, the supplier details get stored in the database. Then the supplier can log in and sell the products they produce to the manufacturers.

3.2. Manufacturer Process

The manufacturer initially creates an account. The raw materials of each and every product will be analyzed by the manufacturer, and then a request for a particular product will be made by the manufacturer. The supplier will then accept the request from the manufacturer, and the raw material will be added to the manufacturer's inventory [10]. The ownership of the raw material is thereby transferred from the supplier to the manufacturer. The manufacturer will then send the product ID to the blockchain, and the created product will be added to the manufacturer's shipment. The product can be easily retrieved from the blockchain server with the help of the product ID.

3.3. Distributors Transactions

The registration part contains the distributor's details and login. The distributor views the products in the manufacturer's cart, and the products bought by the distributor are added to the blockchain [11]. The distributor maintains the KYC form; as for adding duplicate products, they cannot be stored in the blockchain [12].

3.4. Product Verification

There are two types of consumers. The first orders the product without knowing the product details, so they cannot identify whether the product is duplicate or original. The second type views the full details of the product they are buying, so they view the blockchain content [13].

4. Architecture Diagram

Figure 1. (Overview of the process)

-----
5. Algorithm SHA-256 for Proof of Work

    Pair(int, string) hash_with_proof_of_work(string difficulty = "00")
        int nonce = 0
        while (true)
            string hash_nonce = cal_hash_with_nonce(nonce)
            if (hash_nonce.find(difficulty) == 0)    // the hash must start with the difficulty prefix
                return make_pair(nonce, hash_nonce)
            else
                ++nonce

    Block first(string data = " ")
        return Block(0, data, "0")

    Block next(Block previous, string data = "transaction data")
        return Block(previous.index + 1, data, previous.hash)

    Calculating the hash value:
        string sha = to_string(nonce) + to_string(index) + timestamp + data + previous_hash

6. Smart Contract on Block Chain

A smart contract is a piece of software: it is computer code, and the programs are stored in the Ethereum blockchain. It is similar to a physical contract, but it is digital [1]. It maintains certain rules which are predefined between two parties; the rules have an IF-AND-THEN form [2]. The main advantage of using a smart contract is that, if the rules agreed between the two parties are met, the smart contract will execute its process automatically. Even though the details are distributed on the blockchain server, by using a smart contract on the blockchain the details are immutable. It checks the conditions automatically. A smart contract is a self-executing contract between buyer and seller, written directly into lines of code [3]. The code and the agreements contained therein exist across a distributed, decentralized blockchain network. The code controls the execution, and transactions are traceable and irreversible. Smart contracts ensure that the database is up-to-date and secure, and they also prevent unauthorized access to the database. Proof of work and consensus are two algorithms used for validating and storing the data. The need for third parties is eliminated with the help of the smart contract process in blockchain technology. The smart contract plays a major role in the trading business process.

7. Conclusion

In the proposed system there are many advantages to using blockchain and smart contracts in the supply chain management system. In our proposed system, a separate blockchain server is maintained for the supplier, the manufacturer, the distributor and the others involved in the supply chain process. Through smart contracts the data is decentralized, and no single party is required to maintain the data. Using smart contracts on the Ethereum blockchain provides transparency, traceability and efficiency. Finally, in our proposed system, the product evidence is maintained as-is over the entire life cycle of the product. By using distributed ledger technology, smart tag duplication can be prevented. The data exchange process between the involved stakeholders ensures data authenticity and integrity. Each interaction between stakeholders during the product item exchange is stored (logged) on the blockchain.

References

[1] Benčić, F. M. DL-Tags: DLT and Smart Tags for Decentralized, Privacy-Preserving, and Verifiable Supply Chain Management. IEEE Access, vol. 6, pp. 32979–33001, 2018.
[2] Tian, F. A supply chain traceability system for food safety based on HACCP, blockchain & Internet of Things. In: Proc. Int. Conf. Service Syst. Service Manage., Jun. 2017, pp. 1–6.
[3] He, Q., Xu, Y., Liu, Z., He, J., Sun, Y., and Zhang, R. A privacy-preserving Internet of Things device management scheme based on blockchain. Int. J. Distrib. Sensor Netw., vol. 14, no. 11, pp. 1–12, 2018.
[4] Petersen, M., Hackius, N., and von See, B. Mapping the sea of opportunities: Blockchain in supply chain and logistics. Inf. Technology, vol. 60, nos. 5–6, pp. 263–271, 2018.
[5] Wood, G. Ethereum: A secure decentralized generalized transaction ledger. Ethereum & Ethcore, Ethereum Project Yellow Paper 151, 2014, pp. 1–32.
[6] Rakic, B., Levak, T., Drev, Z., Savic, S., and Veljkovic, A. First purpose built protocol for supply chains based on blockchain. Origin Trail, Ljubljana, Slovenia, Tech. Rep. 1, 2017.
[7] How BigchainDB is Immutable — BigchainDB Documentation, accessed: Dec. 14, 2018.
[8] Underwood, S. Blockchain beyond Bitcoin. Communications of the ACM, vol. 59, no. 11, pp. 15–17, 2016.
[9] Fernández-Caramés, T. M. and Fraga-Lamas, P. A review on the use of blockchain for the Internet of Things. IEEE Access, vol. 6, pp. 32979–33001, 2018.
[10] Svein, O. Beyond Bitcoin: Enabling Smart Government Using Blockchain Technology (Lecture Notes in Computer Science), vol. 9820. Cham, Switzerland: Springer, 2016, pp. 253–264.
[11] Benčić, F. M. and Žarko, I. P. Distributed ledger technology: Blockchain compared to directed acyclic graph. In: Proc. IEEE 38th Int. Conf. Distrib. Comput. Syst., Jul. 2018, pp. 1569–1570.
[12] De Angelis, R., Howard, M., and Miemczyk, J. Supply chain management and the circular economy: Towards the circular supply chain. Prod. Planning Control, vol. 29, no. 6, pp. 425–437, 2018.
[13] Jainulabudeen, S.A.K., Rajeshkumar, K., and Piyush Chouhan, M. Identification of Fake/Counterfeit Drugs using Blockchain and IoT Network. Panimalar Engineering College, pp. 4380, 2019.

-----
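A hedged sketch of the product-verification flow of Section 3.4: the consumer checks a product ID against the append-only chain of product records. The Record type and the in-memory vector are illustrative stand-ins for the Ethereum smart-contract storage described in the paper, not its actual schema.

```cpp
// Consumer-side check of a product ID against logged chain records
// (Record and the lookup are assumptions for illustration).
#include <iostream>
#include <string>
#include <vector>

struct Record {
    std::string productId;
    std::string owner;       // supplier -> manufacturer -> distributor
};

bool isOriginal(const std::vector<Record>& chain, const std::string& productId) {
    for (const Record& r : chain)
        if (r.productId == productId) return true;   // logged at creation time
    return false;                                    // never logged: suspect
}

int main() {
    std::vector<Record> chain = {
        {"P-1001", "manufacturer"},
        {"P-1001", "distributor"},
    };
    std::cout << (isOriginal(chain, "P-1001") ? "original" : "unverified") << "\n";
    std::cout << (isOriginal(chain, "P-9999") ? "original" : "unverified") << "\n";
    return 0;
}
```

Because the records are immutable once written, a missing or mismatched product ID cannot be patched over by a counterfeiter, which is the originality guarantee the paper argues for.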
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3233/apc200141?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3233/apc200141, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBYNC", "status": "GOLD", "url": "https://ebooks.iospress.nl/pdf/doi/10.3233/APC200141" }
2,020
[]
true
2020-11-10T00:00:00
[ { "paperId": "0c23c4bbf358809ef4cefbade5bea60b1aade0c5", "title": "A Review on the Use of Block chain for the Internet of Things" }, { "paperId": "4d87fc658c99ccebe80f780970fefd4908c8d0cd", "title": "DL-Tags: DLT and Smart Tags for Decentralized, Privacy-Preserving, and Verifiable Supply Chain Management" }, { "paperId": "c1822ecaf9aa67b0d1d2a8cca713b2e1192712a5", "title": "A privacy-preserving Internet of Things device management scheme based on blockchain" }, { "paperId": "37b544f90be7595b757c914656932149f2c71d67", "title": "Mapping the sea of opportunities: Blockchain in supply chain and logistics" }, { "paperId": "68c441507594095a7f07af8018c844194b88fa84", "title": "Supply chain management and the circular economy: towards the circular supply chain" }, { "paperId": null, "title": "Identification of Fake/Counterfeit Drugs using Blockchain and IoT Network in Panimalar Engineering College pp" }, { "paperId": null, "title": "How Big chain DB is Immutable—Bigchain DB Documentation" }, { "paperId": null, "title": "A supply chain trace-ability system for food safety based on HACCP, block chain & Internet of Things" }, { "paperId": null, "title": "Blockchainbeyond Bitcoin" }, { "paperId": null, "title": "Beyond Bit coin Enabling Smart Government Using Block chain Technology" }, { "paperId": null, "title": "Ethereum: A secure decentralized generalized transaction ledger .Ethereum & Ethcore" }, { "paperId": "829ba4752f0d119a7f19a26216cf414e47be6fda", "title": "First purpose built protocol for supply chains based on blockchain" }, { "paperId": null, "title": "Identification of Product Originality Based on Supply Chain Management" }, { "paperId": null, "title": "Distributed ledger technology: Block chain compared to directed acyclic graph" } ]
3,049
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01934afc7e32ff7c539d172031c19fefe9d4b6d8
[ "Computer Science" ]
0.902697
Blockchain Solutions for Forensic Evidence Preservation in IoT Environments
01934afc7e32ff7c539d172031c19fefe9d4b6d8
IEEE Conference on Network Softwarization
[ { "authorId": "1388625712", "name": "Sotirios Brotsis" }, { "authorId": "1703158", "name": "N. Kolokotronis" }, { "authorId": "1720724", "name": "Konstantinos Limniotis" }, { "authorId": "24889005", "name": "S. Shiaeles" }, { "authorId": "88740364", "name": "D. Kavallieros" }, { "authorId": "144992889", "name": "E. Bellini" }, { "authorId": "1388625735", "name": "Clément Pavué" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE Conf Netw Softwarization", "NetSoft" ], "alternate_urls": null, "id": "8d051ed1-e691-49d8-8a57-d7b4a76c4352", "issn": null, "name": "IEEE Conference on Network Softwarization", "type": "conference", "url": null }
The technological evolution brought by the Internet of things (IoT) comes with new forms of cyber-attacks exploiting the complexity and heterogeneity of IoT networks, as well as, the existence of many vulnerabilities in IoT devices. The detection of compromised devices, as well as the collection and preservation of evidence regarding alleged malicious behavior in IoT networks, emerge as areas of high priority. This paper presents a blockchain-based solution, which is designed for the smart home domain, dealing with the collection and preservation of digital forensic evidence. The system utilizes a private forensic evidence database, where the captured evidence is stored, along with a permissioned blockchain that allows providing security services like integrity, authentication, and non-repudiation, so that the evidence can be used in a court of law. The blockchain stores evidences' metadata, which are critical for providing the aforementioned services, and interacts via smart contracts with the different entities involved in an investigation process, including Internet service providers, law enforcement agencies and prosecutors. A high-level architecture of the blockchain-based solution is presented that allows tackling the unique challenges posed by the need for digitally handling forensic evidence collected from IoT networks.
#### This paper is a preprint; it has been accepted for publication in the 2019 IEEE Conference on Network Softwarization (IEEE NetSoft), 24–28 June 2019, Paris, France. IEEE copyright notice: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. ----- # Blockchain Solutions for Forensic Evidence Preservation in IoT Environments ##### Sotirios Brotsis∗, Nicholas Kolokotronis∗, Konstantinos Limniotis∗, Stavros Shiaeles†, Dimitris Kavallieros‡, Emanuele Bellini§, and Clément Pavué¶ ∗University of Peloponnese, Greece. Email: brotsis@uop.gr, nkolok@uop.gr, klimn@uop.gr †Plymouth University, UK. Email: stavros.shiaeles@plymouth.ac.uk ‡Center for Security Studies, Greece. Email: d.kavallieros@kemea-research.gr §Mathema s.r.l., Italy; Khalifa University, UAE. Email: emanuele.bellini@mathema.com ¶Scorechain S.A., Luxembourg. Email: clement.pavue@scorechain.com **Abstract—The technological evolution brought by the Internet of things (IoT) comes with new forms of cyber-attacks exploiting the complexity and heterogeneity of IoT networks, as well as the existence of many vulnerabilities in IoT devices. The detection of compromised devices, as well as the collection and preservation of evidence regarding alleged malicious behavior in IoT networks, emerge as areas of high priority. This paper presents a blockchain-based solution, which is designed for the smart home domain, dealing with the collection and preservation of digital forensic evidence. The system utilizes a private forensic evidence database, where the captured evidence is stored, along with a permissioned blockchain that allows providing security services like integrity, authentication, and non-repudiation, so that the evidence can be used in a court of law. The blockchain stores evidences' metadata, which are critical for providing the aforementioned services, and interacts via smart contracts with the different entities involved in an investigation process, including Internet service providers, law enforcement agencies and prosecutors. A high-level architecture of the blockchain-based solution is presented that allows tackling the unique challenges posed by the need for digitally handling forensic evidence collected from IoT networks.** **_Index Terms—Blockchain, Cyber-security, Forensic evidence, Intrusion detection, Internet of things._** I. INTRODUCTION The Internet of things (IoT) ecosystem is comprised of a vast number of interconnected devices that collect, process, generate, and share huge amounts of (possibly sensitive and critical) information [1]. To a large extent, these devices are highly resource-constrained, like sensors and legacy embedded systems, therefore devoting most of their computational power and storage/memory capacity to delivering their core functionality. Strong security controls that are typically found in today's personal computers cannot be adopted, since they are more resource-demanding, hence leading to the usage of lightweight and often insecure protection mechanisms (if any) for the data stored or transmitted.
(This project has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement no. 786698. The work reflects only the authors' view and the Agency is not responsible for any use that may be made of the information it contains.) This fact, if combined with the complexity and heterogeneity of IoT networks that make the design and provisioning of security solutions a challenging task [2], allows cyber-attackers to easily compromise them and use them as the means for launching other advanced attacks, such as the distributed denial of service (DDoS) attack against Dyn that was attributed to the Mirai malware [3]. The collection of forensic evidence from the attacked IoT devices and networks, along with their storage, preservation, and analysis, constitutes a major challenge [4], primarily due to the fact that IoT devices are designed to work autonomously and, in many cases, there is no reliable method to assemble residual evidence [5]. The utilization of intrusion detection systems (IDS) in the collection process is important towards identifying cyber-criminals and preventing future occurrences of attacks [6]. In an IoT environment, the identification of a crime scene's boundaries and its preservation are quite hard to accomplish while interactions continuously occur in real time. Since the majority of IoT devices are sensors and monitors that record users' personal information, privacy is an important issue to consider in a digital forensics investigation. This paper aims at addressing the challenges in the forensic evidence collection, preservation and investigation process for IoT environments in the smart home domain, by exploiting the advanced intrusion detection and distributed ledger technology (DLT) solutions that are being developed in the context of the Cyber-Trust project. More precisely, a number of mechanisms installed at a smart home's gateway, like profiling, monitoring, and anomaly detection, allow monitoring the state and behavior of IoT devices, significantly enhancing the detection of known threats and zero-day vulnerabilities, as well as immediately collecting forensic evidence for detected malicious interactions. The collected data are stored at the evidence database (evDB), hosted by the Internet service provider (ISP), along with the metadata needed in order to allow the correlation and further investigation of an attack's generated events. The metadata are published on a blockchain, which is maintained by the ISPs, maintaining the chronological ordering of attacks' evidence at a global scale, thus providing the means for law enforcement agencies (LEA) to effectively trace back an attack to its source. The proposed solution, referred to as the Cyber-Trust blockchain (CTB), allows the entities involved in the investigation process, such as LEAs and prosecutors, to access and handle the digital evidence, therefore realizing the chain-of-custody (CoC) by recording and preserving the chronological history of handling the digital evidence. The CTB solution relies on HyperLedger Fabric and constitutes a permissioned blockchain in order to meet privacy requirements. The remainder of the paper is structured as follows. Section II presents the current state-of-the-art and related work, while the forensic evidence collection process is described in Section III. Section IV provides the architecture of the CTB solution, whereas concluding remarks are given in Section V.
II. BACKGROUND AND RELATED WORK This section presents the current state-of-the-art in the areas of intrusion detection and forensic evidence collection for IoT environments, along with the blockchain solutions that have been proposed. A. IoT intrusion detection Intrusion detection systems typically utilize signature-based and anomaly-based techniques for identifying possible threats in a network, where the latter relies on the monitoring of a network's devices for any abnormal behavioral patterns [6]. In order to detect compromised IoT devices, the framework proposed by Nguyen et al. autonomously identifies anomalies in an IoT network [7]; this is achieved by employing a self-learning framework to classify devices according to their types and generate normal profiles that are subsequently used for the detection of deviations. A privacy-preserving architecture, called Siotome, was proposed in [8] to provide security in smart home environments against distributed network attacks by malicious IoT devices; the system is able to monitor, detect and analyze IoT-based threats, but also to provide an effective defense framework by utilizing machine learning methods to establish optimal operational configurations. Smart phones are a particular type of device within a smart home environment, since they are mostly used for personal and sensitive tasks, thus becoming extremely beneficial and easy targets for adversaries. Smart phones are vulnerable to attacks (e.g. viruses, Trojans, worms) common in personal computers, but they lack the capabilities to execute highly advanced algorithms for detecting malicious activities. Due to this fact, IDS solutions that are often proposed to regularly perform in-depth analysis and observe any misbehavior are either cloud-based [9] or are performed remotely at a central server [10], allowing optimal actions to be taken for thwarting the attack in both architectures. B. IoT forensics The wide adoption of smart devices, which can provide a wealth of forensic evidence on malicious activities during an investigation process, necessitated the advancement of tools and techniques for collecting residual evidence. A forensics edge management system for the smart home environment was introduced in [11] to gather digital evidence and deal with any security issues; it provides intelligence, flexibility, automated detection, and advanced data logging capabilities. The authors in [12] proposed a forensic investigation architecture to ensure the collection, preservation and storage of digital evidence, while they validated their approach in a real-world smart home environment. Focusing on a smart home's IoT devices, a physical analyzer called a universal forensic extraction device has been proposed for conducting forensic investigation on smart phones [13], which has been tested on Android devices. A comparative analysis of digital forensics tools for Android smart phones was carried out in [14], where it was illustrated that the choice of the tool to be used plays a crucial role in the quality of the forensic evidence that is extracted from the devices. In contrast to [14], a method for acquiring forensic evidence from Android smart phones without using specialized commercial forensics tools, i.e. by only relying on open source software, was proposed in [15]. In all the above works, it was shown that the collection of information from smart phones so that it can be used as evidence in a court of law still remains a challenging task.
C. Blockchain solutions Blockchain solutions have recently been proposed for both intrusion detection and forensic evidence applications, since in both cases blockchain can solve issues pertaining to trust, integrity, transparency, accountability, and secure data sharing. Addressing the issue of trust management, Alexopoulos et al. [16] applied blockchain in collaborative intrusion detection networks to deal with insider threats but also enhance the security of the information shared among the participating IDS nodes. More precisely, the authors proposed to store the generated (raw) alerts of the network as transactions in a permissioned blockchain. Meng et al. [17], in addition to the dimension of trust between the IDS nodes, refer to issues that pertain to privacy when collaborating nodes belong to different trust domains, as shared data may have sensitive information linked to individuals or organizations, e.g., IP addresses and packet payloads. Methods for exchanging encrypted content, or only hashed data rather than raw, are considered. In forensic investigations, it is important that the evidence is not modified while passing from one entity to another. The blockchain can be used in order to certify the authenticity and legitimacy of the procedures used to gather, store and transfer digital evidence, as well as to provide a comprehensive view of all the interactions in the CoC. In a blockchain-based CoC, it is crucial to assure that members having read/write access to the distributed ledger are authenticated and that the evidence is verified via a consensus algorithm. Towards that direction, Lone et al. propose a private blockchain that can be used in digital forensics to ensure the integrity of evidence [18]; the authors also aim at recording the actions taken by each entity when interacting with the evidence. On the other hand, ProbeIoT uses a blockchain to discover criminal events, which can be used as evidence, by collecting interactions between IoT devices and verifying their authenticity [19]. ----- III. FORENSIC EVIDENCE COLLECTION ARCHITECTURE The primary goal of Cyber-Trust in the smart home domain, or in general in a small office / home office (SOHO) network, is to accurately detect the local network's compromised and/or infected IoT devices in order to apply the appropriate countermeasures, e.g. to isolate the devices from the rest of the network and to proceed with the application of proper remediation measures. The intrusion detection mechanisms that are being employed operate both at the device- and network-level to facilitate the collection and subsequent correlation of forensic evidence from various independent sources. To combat cyber-attacks and assist the evidence collection, IoT devices' critical information is recorded on the blockchain so that it can be later queried when, e.g., a verification of proper functioning is needed, or parts of the system's software have to be updated or patched reliably. This implies that properties like a device's firmware, configuration files, etc., are registered into the Cyber-Trust blockchain at the beginning of the system's operation, and verified if needed against a history of previously valid states, in order to ensure that they have not been tampered with. This approach fits well within the practices of software distributors that publish hashes of software binaries to allow verifying their authenticity.
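The registration-and-verification flow described above reduces, at its core, to comparing fresh digests against a ledger-backed history of valid states. The following is a minimal sketch of that idea, under stated assumptions: the ledger is mocked as an in-memory dictionary, and `register_property`/`verify_property` are hypothetical helper names, not Cyber-Trust APIs.

```python
import hashlib

# Minimal sketch: register device properties (e.g., a firmware image) on a
# ledger as digests, then verify them later against the recorded history.
# `ledger` is a stand-in for the permissioned blockchain; in the paper this
# is a HyperLedger Fabric network, here it is just a dict.
ledger = {}

def register_property(device_id: str, name: str, blob: bytes) -> str:
    """Hash a device property and record the digest on the ledger."""
    digest = hashlib.sha256(blob).hexdigest()
    key = f"{device_id}:{name}"
    ledger.setdefault(key, []).append(digest)  # keep history of valid states
    return digest

def verify_property(device_id: str, name: str, blob: bytes) -> bool:
    """Check a property against the history of previously valid digests."""
    key = f"{device_id}:{name}"
    return hashlib.sha256(blob).hexdigest() in ledger.get(key, [])

firmware = b"\x7fELF...firmware image bytes..."
register_property("cam-01", "firmware", firmware)
assert verify_property("cam-01", "firmware", firmware)             # untampered
assert not verify_property("cam-01", "firmware", firmware + b"x")  # tampered
```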
A. Adversarial model The adversary is a typical IoT malware botnet that actively scans for vulnerable Linux-based IoT devices in the SOHO network, like smart watches, home surveillance systems, smart phones, etc., and infects the discovered vulnerable devices by uploading and executing malware code of an unknown bot on the compromised devices; once infected, the IoT devices may take a variety of malicious actions. Typically, the phases of a botnet, prior to performing attacks in a coordinated manner, are the following. 1) Propagation: Once infected with malware, a smart home device updates its configuration and downloads further exploits. The bot replicates itself in the SOHO network using telnet/FTP/SSH default credentials and attacks nearby devices with firmware vulnerabilities. 2) Rallying: The bot contacts a command & control (C&C) server, queries for instructions, and also downloads the main configuration files. The bot and the bot-master share a seeded pseudorandom generator that computes the domain names. 3) Interaction: Bot-masters use a pull approach, in which the bot initiates contact with the C&C server and then polls for updates regularly. Obfuscation techniques are used, hiding communications in regular web traffic, hence allowing perimeter controls to be bypassed. As seen from above, the bot is listening for commands via the HTTP and HTTPS protocols (utilizing ports 80 and 443) and is assumed to execute three types of attacks, namely man-in-the-middle (MitM), DDoS, and spamming. Fig. 1. An overview of Cyber-Trust's forensic evidence collection process; it is assumed that the red-colored devices in the smart home have been attacked and this is detected by the SGA that collects the evidence. (The figure depicts a hacker reaching the SOHO's IoT devices over the Internet, the smart gateway agent providing anomaly detection, device profiling and evidence collection, and the ISP-side evidence logger with its EvGen/TxGen functions and evidence DB.) B. Architectural elements In the sequel, we describe the high-level design of the smart home environment's security elements, as illustrated in Fig. 1. The smart gateway agent (SGA) is the core component that is responsible for the smart home's network security by utilizing advanced intrusion detection methods, monitoring its health status and profiling the IoT devices' behavior, as well as for the collection of network information including forensic evidence; the SGA is the main link with the core platform components running at the ISP layer (only those relevant to the evidence collection process are depicted in Fig. 1). When a new device is registered, the SGA performs device fingerprinting in order to extract the device's behavioral patterns based on network flows (assuming that the device is initially in a clean state). In addition, the SGA actively monitors the communication of connected devices to detect abnormal behavior by employing a lightweight IDS which transfers any suspicious traffic to the platform's back-end for deep packet inspection (DPI). Further to the above, the SGA uses manufacturer's usage description (MUD) to deliver device-focused network profiling to support accurate feature-set extraction for the anomaly detection. More capable IoT devices, e.g. smart phones, have a smart device agent (SDA) installed that allows the direct acquisition of information (including evidence) from end-user IoT devices. The SDA operates in a more restrictive manner as it is mainly responsible for monitoring the device's usage, critical files and security (firmware integrity, patching status, vulnerabilities). Information on run-time processes and the hardware resources used is regularly synchronized with the Cyber-Trust platform's back-end, and more precisely the profiling service (PS).
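The paper describes the SGA's anomaly detection only at a high level, so the sketch below shows one plausible realization under loose assumptions: a per-device profile of flow features is fit while the device is presumed clean, and a flow is flagged when any feature's z-score exceeds a threshold. The feature set and threshold are illustrative placeholders, not the Cyber-Trust detector.

```python
import numpy as np

# Illustrative profile-based anomaly detection at the SGA: learn per-device
# statistics of flow features on presumed-clean traffic, then flag flows
# whose z-score exceeds a threshold on any feature.

def fit_profile(clean_flows: np.ndarray):
    """clean_flows: (n_flows, n_features), e.g. bytes/s, pkts/s, #dst ports."""
    return clean_flows.mean(axis=0), clean_flows.std(axis=0) + 1e-9

def is_anomalous(flow: np.ndarray, profile, z_thresh: float = 4.0) -> bool:
    mu, sigma = profile
    return bool(np.any(np.abs((flow - mu) / sigma) > z_thresh))

rng = np.random.default_rng(0)
profile = fit_profile(rng.normal([50e3, 40, 3], [5e3, 5, 1], size=(1000, 3)))
print(is_anomalous(np.array([52e3, 38, 3]), profile))   # False: normal flow
print(is_anomalous(np.array([9e5, 600, 40]), profile))  # True: DDoS-like burst
```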
C. Evidence collection When suspicious network traffic (resp. device activity) is detected by the SGA (resp. SDA), the necessary evidence is collected and sent to the ISP so as to be stored in the evDB. The evidence is comprised of IP packets (amongst other data) in the case of network attacks, whereas for device-level attacks it might include the entire device's image. At a minimum, the whole process is designed to achieve the following objectives: (a) ensure the confidentiality and integrity of forensic evidence during transmission and storage; (b) ensure that the evidence is collected from and destined to secure systems, which have established a trust relationship via an attestation protocol to authenticate the hardware/software configuration of the remote device (such as the BIOS, MBR, firmware); and (c) compute a non-repudiated proof of existence (along with other properties) of the acquired forensic evidence. As shown in Fig. 1, the latter property is achieved by means of the CTB. The logger generates evidence log events, denoted by the EvGen function, at the time that new evidence material is being inserted into the evidence DB, and signs these events. To achieve this step, the logger needs to have generated a key pair for use with digital signature algorithms, something that requires a certificate authority (CA); HyperLedger Fabric's CA is used for that purpose. When a new signed evidence ev is inserted in the evDB, a new identifier id is created as id = Hash(ev || nonce), where the value nonce is chosen uniformly at random to ensure the uniqueness of the evidence's identifier. Note that id serves the purpose of the signed evidence log event's integrity proof that can be verified by means of a cryptographic hash function; the evidence identifier, and the nonce used, are also stored in the evDB along with the actual data. After computing the integrity proofs of the signed evidence log events, each proof is written to the CTB through a series of transactions, denoted by the function TxGen, for subsequent generation of the next block in the CTB blockchain. The blockchain explorer can then be used for retrieving the immutable record of integrity proofs on the blockchain and validating the properties of the forensic evidence.
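The EvGen/TxGen step lends itself to a compact illustration. The sketch below follows the construction in the text, id = Hash(ev || nonce) over a signed evidence log event; Ed25519 merely stands in for whatever signature scheme the Fabric CA provisions, and the record layout is an assumption for illustration, not the Cyber-Trust wire format.

```python
import os
import hashlib
from cryptography.hazmat.primitives.asymmetric import ed25519

# Sketch of the EvGen step: sign new evidence, derive its identifier as
# id = Hash(ev || nonce), and emit the integrity proof that TxGen would
# write to the CTB. In practice the key pair is provisioned via the CA.
signing_key = ed25519.Ed25519PrivateKey.generate()

def ev_gen(ev: bytes) -> dict:
    nonce = os.urandom(16)                          # ensures identifier uniqueness
    ev_id = hashlib.sha256(ev + nonce).hexdigest()  # id = Hash(ev || nonce)
    signature = signing_key.sign(ev)                # signed evidence log event
    # ev, nonce and ev_id are stored in the evDB; ev_id (the integrity
    # proof) is what TxGen publishes to the CTB in a transaction.
    return {"id": ev_id, "nonce": nonce, "evidence": ev, "sig": signature}

record = ev_gen(b"pcap bytes of suspicious traffic")
# Verification raises InvalidSignature if the evidence was forged:
signing_key.public_key().verify(record["sig"], record["evidence"])
print(record["id"])
```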
IV. FORENSIC EVIDENCE BLOCKCHAIN In the course of digital forensic investigations, the evidence examination needs to be carried out by authenticated entities, while ensuring privacy requirements. Due to this fact, only the forensic evidence metadata are stored in the CTB, which is a permissioned distributed ledger built on HyperLedger Fabric, to provide auditing and integrity services on evidence gathered from a smart home environment. To realize the CoC and allow the entities involved to access the digital evidence, information about the chronological history of handling the evidence has to be recorded. The authenticated entities that may obtain the ownership of a forensic evidence, issue new transactions and create blocks (that contain change-of-ownership information), are classified as follows (also referred to as participants): • Internet service provider. Collects the evidence regarding a security incident from the smart home environment as described in Section III-C. As the creator of the evidence, only the ISP is able to permanently delete it, regardless of who the current owner is. • Law enforcement agency. Can access the evidentiary data about a particular id, IoT device, or attack that are stored in the CTB when conducting an investigation. LEAs can also issue new transactions to transfer ownership. • Prosecutor. Considered to be the final owner of the digital forensic evidence in the course of an investigation. Fig. 2. High-level architecture of Cyber-Trust's blockchain. (The figure shows the front-end UI, the blockchain nodes with their EvGen/TxGen loggers, the trusted transaction logs, and the per-ISP forensic evidence DBs.) In the high-level architecture of the CTB, illustrated in Fig. 2, the transactions stored are about the actions performed by the involved entities and also record the ownership transfer of the digital evidence from the moment of its collection until it reaches the prosecutor. The CTB is comprised of the following core components: (a) the front-end user interface (UI), (b) the blockchain node, (c) the trusted transaction logs, and (d) the forensic evidence DB. More precisely: • Front-end UI. Interface allowing the participants to view, invoke, or query blocks, transactions, chaincodes, etc., in the CTB; it is based on Fabric's blockchain explorer. • Blockchain node. This component ensures that authorized participants can communicate with the CTB network. • Trusted logs. Implements the blockchain and stores the historical record of facts about when evidence was created and how its ownership was transferred from one entity to another, so as to arrive at the current system state. • Forensic evidence DBs. The off-chain databases, to which the current owner of an evidence has access, where the raw evidentiary material is stored. Note that there are several forensic evidence DBs, one for each ISP, and therefore, upon request of a particular evidence, the front-end UI delegates the request for access to the appropriate ISP. The design of the CTB provides main functions allowing the participants to create, transfer, erase or view the evidence stored in the evidence DB. Each function, if properly invoked, issues and broadcasts a new transaction to the network. CreateEvidence(id, dsc). This function submits a new block to the CTB with the identifier id and the description dsc of the new evidence as input. The function's role is not just to create a new evidence record, but also to check whether evidence with the same id has already been created. Another functionality is to set the first owner of the evidence, which by default is the evidence's creator (i.e. the ISP). GetEvidence(id). Given as input an evidence identifier, the function displays / retrieves the evidence after having first checked that the evidence indeed exists and that the requesting participant is its current owner. EraseEvidence(id). The function checks if the evidence with identifier id has already been stored in the CTB and if the invoking participant is the ISP that created the evidence. It is evident that forensic evidence metadata cannot actually be erased from the CTB, as this would imply that the entire blockchain would have to be reformed. The function just deletes the evidence from the evDB and then issues a new transaction declaring that the evidence no longer exists. TransferOwnership(id, own). Given an evidence identifier id and a participant address own, the function checks various conditions. First, the evidence must exist in the CTB and the participant invoking the function has to be the current owner of the evidence. Then, the function checks if own, to which the evidence will be transferred, is authorized to access the evidence. If all conditions are true, the function transfers ownership of the evidence to the new owner own, and the address of the new owner is added to the CTB. New evidence is defined as a transaction having the following metadata: the evidence identifier id, the address creator of the ISP having collected the evidence, the description dsc of the security incident (initialized by the creator, and later updated by other participants) and a timestamp time of its occurrence, the current owner own and previous owner own′ of the evidence, the type of the attacked IoT device, as well as the list of time records {τi}, i = 1, 2, ..., during which each owner had the evidence in their possession. The form of each transaction stored in the CTB is the following: Tx = id || creator || dsc || time || own || own′ || type || {τi}.
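To make the four functions concrete, here is a hedged Python rendering of their logic over an in-memory ledger; a real deployment would express this as Fabric chaincode, and the authorization checks are reduced to simple field comparisons for brevity.

```python
import time

# Python sketch mirroring the four CTB chaincode functions. `ledger` plays
# the role of the trusted on-chain log, `ev_db` the off-chain evidence DB.
ledger = {}   # id -> transaction metadata
ev_db = {}    # id -> raw evidentiary material

def create_evidence(ev_id, dsc, creator, dev_type, raw):
    if ev_id in ledger:
        raise ValueError("evidence id already exists")
    ev_db[ev_id] = raw
    ledger[ev_id] = {"id": ev_id, "creator": creator, "dsc": dsc,
                     "time": time.time(), "own": creator, "own_prev": None,
                     "type": dev_type, "custody": [(creator, time.time())]}

def get_evidence(ev_id, caller):
    tx = ledger[ev_id]                       # raises KeyError if nonexistent
    if tx["own"] != caller:
        raise PermissionError("caller is not the current owner")
    return ev_db[ev_id]

def erase_evidence(ev_id, caller):
    tx = ledger[ev_id]
    if tx["creator"] != caller:              # only the creating ISP may erase
        raise PermissionError("only the creator ISP can erase")
    del ev_db[ev_id]                         # raw data goes; metadata stays
    tx["dsc"] += " [evidence erased]"        # stand-in for the 'no longer exists' Tx

def transfer_ownership(ev_id, caller, new_owner, authorized):
    tx = ledger[ev_id]
    if tx["own"] != caller or new_owner not in authorized:
        raise PermissionError("transfer not allowed")
    tx["own_prev"], tx["own"] = tx["own"], new_owner
    tx["custody"].append((new_owner, time.time()))  # chain-of-custody record

create_evidence("ev-1", "MitM on cam-01", "isp-A", "camera", b"pcap...")
transfer_ownership("ev-1", "isp-A", "lea-1", authorized={"lea-1", "prosecutor"})
```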
The function just deletes the evidence from the evDB and then issues a new transaction declaring that the evidence no longer exists. TransferOwnership(id, own). Given an evidence identifier id and a participant address own, the function checks various conditions. First, the evidence must exist in the CTB and the participant invoking the function has to be the current owner of the evidence. Then, the function checks if own, where the evidence will be transferred to, is authorized to access the evidence. If all conditions are true, the function transfers ownership of the evidence to the new owner own, and the address of the new owner is added to the CTB. New evidence is defined as a transaction having the following metadata: the evidence identifier id, the address creator of the ISP having collected the evidence, the description dsc of the security incident (initialized by the creator, and later updated by other participants) and a timestamp time of its occurrence, the current own (resp. previous own[′]) owner of the evidence, the type (type) of the attacked IoT device, as well as, the list of time records {τi}i=1,2,... that each owner had the evidence at his possession. The form of each transaction stored in the CTB is the following Tx = id || creator || dsc || time || own || own[′] _|| type || τi ._ Let us note that, in the context of HyperLedger Fabric, only a transaction’s proposal field is shown above, which encodes the input parameters to the chaincode for creating the proposed ledger update; trivial fields, such as a transaction’s header and signature, are omitted for simplicity. Since the security of CTB is of utmost importance, a number of fundamental properties need to hold [20], the analysis of which is outside the scope of this work, such as persistence, liveness, chain quality property, and common prefix property. If all true, they considerably limit the ability of adversaries to alter CTB evidentiary metadata. V. CONCLUSIONS Cyber-Trust platform relies on advanced intrusion detection tools to identify malicious activities and enhance the security of IoT environments by inspecting compromised devices and collecting forensic evidence so as to determine the source of cyber-attacks. The evidentiary information is safely stored as raw data in an off-chain database, while the hashes and metadata of the evidence are stored on the blockchain. The CTB is a permissioned distributed ledger, which is build on top of HyperLedger Fabric. Cyber-Trust’s blockchain-based solution dematerializes the CoC process of recording and preserving a chronological history of digital evidences. REFERENCES [1] F.-C. Cheng, “Automatic and secure wi-fi connection mechanisms for iot end-devices and gateways,” in Emerging Technologies in Computing, M. H. Miraz, P. Excell, A. Ware, S. Soomro, and M. Ali, Eds. Springer International Publishing, 2018, pp. 98–106. [2] K. Zhao and L. Ge, “A survey on the internet of things security,” in Pro_ceedings of the 2013 Ninth International Conference on Computational_ _Intelligence and Security._ Washington, DC, USA: IEEE Computer Society, 2013, pp. 663–667. [3] C. Kolias, G. Kambourakis, A. Stavrou, and J. Voas, “Ddos in the iot: Mirai and other botnets,” Computer, vol. 50, no. 7, pp. 80–84, 2017. [4] A. MacDermott, T. Baker, and Q. Shi, “Iot forensics: Challenges for the ioa era,” in 2018 9th IFIP International Conference on New _Technologies, Mobility and Security (NTMS), Feb 2018, pp. 1–5._ [5] C. J. DOrazio, K. R. Choo, and L. T. 
Yang, "Data exfiltration from Internet of things devices: iOS devices as case studies," IEEE Internet of Things Journal, vol. 4, no. 2, pp. 524–535, April 2017. [6] C. J. Fung, O. Baysal, J. Zhang, I. Aib, and R. Boutaba, "Trust management for host-based collaborative intrusion detection," in Managing Large-Scale Service Deployment, F. De Turck, W. Kellerer, and G. Kormentzas, Eds. Berlin, Heidelberg: Springer Berlin Heidelberg, 2008, pp. 109–122. [7] T. D. Nguyen, S. Marchal, M. Miettinen, M. H. Dang, N. Asokan, and A. Sadeghi, "DÏoT: A crowdsourced self-learning approach for detecting compromised IoT devices," CoRR, vol. abs/1804.07474, 2018. [8] H. Haddadi, V. Christophides, R. Teixeira, K. Cho, S. Suzuki, and A. Perrig, "SIOTOME: An edge-ISP collaborative architecture for IoT security," in Proceedings of the International Workshop on Security and Privacy for the Internet-of-Things (IoTSec), Orlando, Florida, USA, April 17, 2018. [9] A. Houmansadr, S. A. Zonouz, and R. Berthier, "A cloud-based intrusion detection and response system for mobile phones," in 2011 IEEE/IFIP 41st International Conference on Dependable Systems and Networks Workshops (DSN-W), June 2011, pp. 31–32. [10] A.-D. Schmidt, F. Peters, F. Lamour, C. Scheel, S. A. Çamtepe, and Ş. Albayrak, "Monitoring smartphones for anomaly detection," Mobile Networks and Applications, vol. 14, no. 1, pp. 92–106, Feb 2009. [11] E. Oriwoh and P. Sant, "The forensics edge management system: A concept and design," in 2013 IEEE 10th International Conference on Ubiquitous Intelligence and Computing and 2013 IEEE 10th International Conference on Autonomic and Trusted Computing, Dec 2013, pp. 544–550. [12] A. Goudbeek, K. R. Choo, and N. Le-Khac, "A forensic investigation framework for smart home environment," in 2018 17th IEEE International Conference on Trust, Security and Privacy in Computing and Communications / 12th IEEE International Conference on Big Data Science and Engineering (TrustCom/BigDataSE), Aug 2018, pp. 1446–1451. [13] M. Faheem, N.-A. Le-Khac, and T. Kechadi, "Smartphone forensic analysis: A case study for obtaining root access of an Android Samsung S3 device and analyse the image without an expensive commercial tool," Journal of Information Security, vol. 5, pp. 83–90, 01 2014. [14] M. Raji, H. Wimmer, and R. J. Haddad, "Analyzing data from an Android smartphone while comparing between two forensic tools," SoutheastCon 2018, pp. 1–6, 2018. [15] P. Andriotis, G. Oikonomou, and T. Tryfonas, "Forensic analysis of wireless networking evidence of Android smartphones," in 2012 IEEE International Workshop on Information Forensics and Security (WIFS), Dec 2012, pp. 109–114. [16] N. Alexopoulos, E. Vasilomanolakis, N. R. Ivánkó, and M. Mühlhäuser, "Towards blockchain-based collaborative intrusion detection systems," in Critical Information Infrastructures Security, G. D'Agostino and A. Scala, Eds. Cham: Springer International Publishing, 2018, pp. 107–118. [17] W. Meng, E. W. Tischhauser, Q. Wang, Y. Wang, and J. Han, "When intrusion detection meets blockchain technology: A review," IEEE Access, vol. 6, pp. 10179–10188, 2018. [18] A. Lone and R. Mir, "Forensic-chain: Blockchain based digital forensics chain of custody with PoC in Hyperledger Composer," Digital Investigation, vol. 28, 03 2019. [19] M. Hossain, R. Hasan, and S.
Zawoad, "Probe-IoT: A public digital ledger based forensic investigation framework for IoT," in IEEE INFOCOM 2018 - IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), April 2018, pp. 1–2. [20] A. Kiayias, A. Russell, B. David, and R. Oliynykov, "Ouroboros: A provably secure proof-of-stake blockchain protocol," in Advances in Cryptology – CRYPTO 2017. Springer, 2017, pp. 357–388. -----
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1903.10770, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/1903.10770" }
2,019
[ "JournalArticle", "Conference" ]
true
2019-03-26T00:00:00
[ { "paperId": "0ec88947498ddf26260e14103571a83cf7dbd100", "title": "Forensic-chain: Blockchain based digital forensics chain of custody with PoC in Hyperledger Composer" }, { "paperId": "b0276440e2073e4394f036615adb155d796d622f", "title": "Automatic and Secure Wi-Fi Connection Mechanisms for IoT End-Devices and Gateways" }, { "paperId": "11ffe16099823c6234722877dbe0e6f661ead835", "title": "A Forensic Investigation Framework for Smart Home Environment" }, { "paperId": "e1f7ec6822741ceff8aac068cb4867440f72d824", "title": "DÏoT: A Crowdsourced Self-learning Approach for Detecting Compromised IoT Devices" }, { "paperId": "02f59d116ec547463adcebb79357b1708d42ab00", "title": "Analyzing Data from an Android Smartphone while Comparing between Two Forensic Tools" }, { "paperId": "63105f71b35d1b254129f7b0bbc441820b6b2265", "title": "Probe-IoT: A public digital ledger based forensic investigation framework for IoT" }, { "paperId": "590b9d91ff28b5eaf0159ee5941e2f083fa76fb4", "title": "Iot Forensics: Challenges for the Ioa Era" }, { "paperId": "a30b4b52b1e7b0aff4a5085cdc43ace30ca66f5e", "title": "When Intrusion Detection Meets Blockchain Technology: A Review" }, { "paperId": "efa3c8090eec349394ea253806753d82ec0b8289", "title": "Towards Blockchain-Based Collaborative Intrusion Detection Systems" }, { "paperId": "44dacdec625e31df66736a385e7001ef33756c5f", "title": "Ouroboros: A Provably Secure Proof-of-Stake Blockchain Protocol" }, { "paperId": "d9b3fa2627813fe92323a7a8d25cd12e4197ead6", "title": "DDoS in the IoT: Mirai and Other Botnets" }, { "paperId": "e39b3391ea032e55618ae38bf87d3a692163e221", "title": "Data Exfiltration From Internet of Things Devices: iOS Devices as Case Studies" }, { "paperId": "21ef8261b23ffa3bb53e02e2012650371149000e", "title": "Smartphone Forensic Analysis: A Case Study for Obtaining Root Access of an Android Samsung S3 Device and Analyse the Image without an Expensive Commercial Tool" }, { "paperId": "89b27d52ccd87122b08cd49f6fc66854711d3476", "title": "The Forensics Edge Management System: A Concept and Design" }, { "paperId": "6d81fa9ae2f96560bd5f6bf9377c0243ac4f6d55", "title": "A Survey on the Internet of Things Security" }, { "paperId": "a2f6bc0b3afec452c2484c1e87255cd9737b58df", "title": "Forensic analysis of wireless networking evidence of Android smartphones" }, { "paperId": "a57cab85555665ffb22c1bd3327a4e1249113ff1", "title": "A cloud-based intrusion detection and response system for mobile phones" }, { "paperId": "09a5bbd584d6a2d12e18103ebbb8f4d1d98251e3", "title": "Trust Management for Host-Based Collaborative Intrusion Detection" }, { "paperId": "ebcc6bfb9c7f3f453114edd57429971984a043db", "title": "Monitoring Smartphones for Anomaly Detection" }, { "paperId": "873a4032a47fc03168afc0abc0b27ed9fde0198d", "title": "SIOTOME: An Edge-ISP Collaborative Architecture for IoT Security" }, { "paperId": null, "title": "CreateEvidence ( id , dsc )" }, { "paperId": null, "title": "Given as input an evidence identifier, the function displays / retrieves the evidence after having first checked that the evidence indeed exists and the requesting participant is its current owner" } ]
7,715
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/019a37813298351f947e1db3dc2991f756777e2f
[ "Computer Science" ]
0.869221
Distributed Reinforcement Learning for Privacy-Preserving Dynamic Edge Caching
019a37813298351f947e1db3dc2991f756777e2f
IEEE Journal on Selected Areas in Communications
[ { "authorId": "3156247", "name": "Shengheng Liu" }, { "authorId": "1753709200", "name": "Chong Zheng" }, { "authorId": "48355817", "name": "Yongming Huang" }, { "authorId": "1718541", "name": "Tony Q. S. Quek" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE J Sel Area Commun" ], "alternate_urls": [ "http://ieeexplore.ieee.org/servlet/opac?punumber=49", "https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?puNumber=49" ], "id": "68f20e73-515e-4c73-9cd5-5684926b45f7", "issn": "0733-8716", "name": "IEEE Journal on Selected Areas in Communications", "type": "journal", "url": "http://www.comsoc.org/jsac/" }
Mobile edge computing (MEC) is a prominent computing paradigm which expands the application fields of wireless communication. Due to the limitation of the capacities of user equipments and MEC servers, edge caching (EC) optimization is crucial to the effective utilization of the caching resources in MEC-enabled wireless networks. However, the dynamics and complexities of content popularities over space and time as well as the privacy preservation of users pose significant challenges to EC optimization. In this paper, a privacy-preserving distributed deep deterministic policy gradient (P2D3PG) algorithm is proposed to maximize the cache hit rates of devices in the MEC networks. Specifically, we consider the fact that content popularities are dynamic, complicated and unobservable, and formulate the maximization of cache hit rates on devices as distributed problems under the constraints of privacy preservation. In particular, we convert the distributed optimizations into distributed model-free Markov decision process problems and then introduce a privacy-preserving federated learning method for popularity prediction. Subsequently, a P2D3PG algorithm is developed based on distributed reinforcement learning to solve the distributed problems. Simulation results demonstrate the superiority of the proposed approach in improving EC hit rate over the baseline methods while preserving user privacy.
## Distributed Reinforcement Learning for Privacy-Preserving Dynamic Edge Caching #### Shengheng Liu, Member, IEEE, Chong Zheng, Student Member, IEEE, Yongming Huang, Senior Member, IEEE, and Tony Q. S. Quek, Fellow, IEEE Abstract—Mobile edge computing (MEC) is a prominent computing paradigm which expands the application fields of wireless communication. Due to the limitation of the capacities of user equipments and MEC servers, edge caching (EC) optimization is crucial to the effective utilization of the caching resources in MEC-enabled wireless networks. However, the dynamics and complexities of content popularities over space and time as well as the privacy preservation of users pose significant challenges to EC optimization. In this paper, a privacy-preserving distributed deep deterministic policy gradient (P2D3PG) algorithm is proposed to maximize the cache hit rates of devices in the MEC networks. Specifically, we consider the fact that content popularities are dynamic, complicated and unobservable, and formulate the maximization of cache hit rates on devices as distributed problems under the constraints of privacy preservation. In particular, we convert the distributed optimizations into distributed model-free Markov decision process problems and then introduce a privacy-preserving federated learning method for popularity prediction. Subsequently, a P2D3PG algorithm is developed based on distributed reinforcement learning to solve the distributed problems. Simulation results demonstrate the superiority of the proposed approach in improving EC hit rate over the baseline methods while preserving user privacy. Index Terms—Edge caching, mobile edge computing, privacy preservation, distributed reinforcement learning, federated learning. Manuscript received February 21, 2021; revised November 2, 2021; accepted XXX XX, XXXX. Date of publication XXX XX, XXXX; date of current version XXX XX, XXXX. This work was supported in part by the National Natural Science Foundation of China under Grant No. 62001103 and the National Key R&D Program of China under Grant No. 2020YFB1806600. Part of this work has been accepted for presentation at the IEEE Global Communications Conference (GLOBECOM): Machine Learning for Communications Symposium, Madrid, Spain, December 2021 [1]. (Corresponding author: Y. Huang.) S. Liu, C. Zheng, and Y. Huang are with the School of Information Science and Engineering, Southeast University, Nanjing 210096, China, and also with the Purple Mountain Laboratories, Nanjing 211111, China (e-mail: {s.liu; czheng; huangym}@seu.edu.cn). T. Q. S. Quek is with the Information System Technology and Design Pillar, Singapore University of Technology and Design, Singapore 487372 (e-mail: tonyquek@sutd.edu.sg). I. INTRODUCTION WITH the rapid proliferation of advanced wireless applications such as virtual reality and Internet of vehicles (IoV), the demand for delay-sensitive and computation-intensive data services in mobile networks has been soaring at an unprecedented pace [2]–[4]. Along with the advent of beyond fifth-generation (B5G) communications, the increasing speed of this demand will achieve a further leap and pose significant challenges for the computing and caching capabilities of wireless communication systems. A promising network paradigm to tackle this challenge is mobile edge computing (MEC) [5], [6].
NOMENCLATURE For ease of reading, a nomenclature of the notation used later in the body of this paper is given below.

| Symbol | Description |
| --- | --- |
| $t$ | Index of time slot. |
| $\mathcal{I} = \{1, 2, \cdots, I\}$ | Set of UE labels. |
| $\mathcal{F} = \{F_1, F_2, \cdots, F_N\}$ | Set of all contents. |
| $M_0, M_i$ | Storage capacities of the MEC server and UE-$i$, respectively. |
| $\mathcal{C}_0(t), \mathcal{C}_i(t)$ | Content sets respectively cached in the MEC server and UE-$i$ at time $t$. |
| $F^i(t)$ | Content request generated by UE-$i$ at time $t$. |
| $\lambda_i(t)$ | Content request's arrival rate associated with UE-$i$ at time $t$. |
| $P^G(t) = [P_n^G(t)]_{n=1}^N$, $P^i(\alpha^i(t), t) = \{P_n^i(\alpha^i(t), t)\}_{n=1}^N$ | Content popularities of the MEC server and UE-$i$ at time $t$, respectively. |
| $\alpha^i(t)$ | Distribution parameter of UE-$i$'s popularity at time $t$. |
| $\mathcal{G}_i = \{\alpha_{g_i}^i \mid g_i = 1, 2, \cdots, G_i\}$ | Parameter set over which $\alpha^i(t)$ evolves over time. |
| $R^G(t) = \{F^i(t)\}_{i=1}^I$ | Request information received by the MEC server at time $t$. |
| $\mathcal{C}_0^u(t), \mathcal{C}_i^u(t)$ | Sets of requested but not cached contents at the MEC server and UE-$i$, respectively. |
| $H_0(t), H_i(t)$ | Realtime cache hit rates at the MEC server and UE-$i$ sides, respectively. |
| $H_i^{\mathrm{savg}}(t)$ | Sliding average of $H_i(t)$ over a period of time $T_h$. |
| $a_0(t) = \{a_0^+(t), a_0^-(t)\}$, $a_i(t) = \{a_i^+(t), a_i^-(t)\}$ | Dynamic caching actions of the MEC server and UE-$i$ at time $t$, respectively. |
| $\mathcal{A}_0, \mathcal{A}_i$ | Collections of $a_0(t)$ and $a_i(t)$, respectively, in each time slot $t$. |
| $\mathcal{S}_0 = \{s_0(t) \mid t = 0, 1, \cdots\}$, $\widetilde{\mathcal{S}}_0 = \{\widetilde{s}_0(t) \mid t = 0, 1, \cdots\}$ | State space and its renewed version at the MEC server side. |
| $\mathcal{S}_i = \{s_i(t) \mid t = 0, 1, \cdots\}$, $\widetilde{\mathcal{S}}_i = \{\widetilde{s}_i(t) \mid t = 0, 1, \cdots\}$ | State space and its renewed version at the local UE-$i$ side. |
| $r_0(t), r_i(t)$ | Cumulative reward starting from time $t$ at the global and UE-$i$ sides, respectively. |
| $R^i(t) = [F^i(t-H), \cdots, F^i(t)]$ | Extractor of UE-$i$'s historical request information. |
| $\Theta^G, \Theta^i$ | Parameter sets of the global and local popularity prediction models. |
| $\Theta^A, \bar{\Theta}^A$ | Trainable parameter sets of the online and target actor networks. |
| $\Theta^C, \bar{\Theta}^C$ | Trainable parameter sets of the online and target critic networks. |
| $\pi_0, \pi_i$ | Dynamic caching policies of the MEC server and UE-$i$. |
| $V^{\pi_0}(\cdot), V^{\pi_i}(\cdot)$ | Value functions under policies $\pi_0$ and $\pi_i$, respectively. |
| $\pi_{\Theta^A}(\cdot)$, $Q(\cdot \mid \bar{\Theta}^C)$ | Parameterized online actor and target critic networks. |
| $n_0(t)$ | Gaussian noise vector at time $t$. |
| $\Omega$ | Replay buffer for training at the MEC server. |
| $L(\Theta^C)$ | Training loss function of the online critic network. |
| $J_\beta(\pi)$ | Performance objective function for the current policy evaluation. |
| $\chi$ | Discount factor in cumulative reward. |
| $\Psi$ | Total episodes of training. |
| $\phi$ | Step interval between online/target networks in parameter cloning. |
| $\nu$ | Soft-update coefficient. |

----- By equipping the processing servers with the edge nodes (ENs), i.e., WiFi access points or micro base stations, the MEC framework provides cloud computing/caching capabilities within the radio access network in close proximity to terminal devices, thereby greatly reducing the service latency as well as mitigating the surging cache and computation burden of the data centers [7]–[9]. Furthermore, edge caching (EC), as one of the key techniques in MEC networks, can sufficiently exploit the caching resources in edge networks to promote the caching efficiency of the ENs and user equipments (UEs) [10] and further reduce the latency. Recently, the exploration of optimal caching placement policies for EC, from the perspective of the relationships among the contents, the ENs and the cloud center, has been carried out in many works, e.g., [11]–[13]. In [11], the authors consider the analysis and optimization of EC and multicasting in a large-scale MEC-enabled wireless network. On the basis of file combinations, an iterative numerical algorithm is proposed in [11] to maximize the successful transmission probability and obtain the locally optimal caching and multicasting design.
By leveraging social links between clients and ENs, cooperative cache placement schemes are developed to reduce client bandwidth overheads in [12]. Furthermore, the cooperation between ENs and the cloud center is also studied by Li et al. [13]. The authors in [13] proposed a capacity-aware EC framework and formulated the average-download-time (ADT) minimization problem as a multi-class processor queuing process by allowing cooperation between ENs and the cloud center. However, the mentioned works assumed that the content popularity is constant during the service and is known a priori, which is impractical. Generally, content popularity is time-variant and unavailable in advance regardless of the caching policy used [14]. When considering time-varying content popularities, the complicated, subjective and dynamic preferences of users pose significant challenges to the effective design and optimization of EC policies. To this end, dynamic caching replacement schemes, which continuously update the cache under certain replacement policies during the service, have been investigated to address these challenges [15], [16]. The authors in [15] focus on the scenario where the set of popular content is time-varying; hence, they investigate the online replenishment of the EN caches along with the delivery of the requested files. To minimize the long-term normalized delivery time, online EC and delivery schemes, as well as reactive and proactive online caching schemes, are proposed [15]. Liu et al. [16] leverage the estimation of popularity to improve the dynamic caching performance. Specifically, an online Bayesian clustering caching algorithm is introduced for the cache provider to autonomously learn the users' interactive cache hit data in a collaborative way while maintaining sustainable scalability. Nevertheless, the popularity of each cluster has to be given a priori in [16], which is still challenging in practice. -----
The MEC server just cache the corresponding cryptograph for restoring original content, which leads to the same waste of cache resources as the scheme proposed in [17]. Recently, machine learning (ML) has shown potential usefulness in privacy-preserving MEC systems [18], [19]. In [20], the authors propose a mobility-aware proactive caching scheme based on FL to dynamically update cached contents in the MEC servers according to the mobility and position information of vehicles. However, the caching scheme proposed in [20] centrally caches contents in the MEC servers and ignores the abundant cache resources of the terminal devices. In this paper, we present a privacy-preserving distributed deep deterministic policy gradient (P2D3PG) algorithm to solve the distributed cache hit rate maximization problems under the consideration of time-varying and unobservable content popularities as well as the constraints of user privacy preservation. Specifically, our contributions are summarized as follows: - We formulate a distributed optimization problem to maximize the cache hit rate of all the cache entities in the MEC-enabled system and design a dynamic caching replacement mecha nism to enhance the personalized utilization of the cache resources in the system. - With the constraints of privacy preservation and dynamic content popularities, we convert the distributed optimization problem into a distributed model-free Markov decision process (MDP) problem and further introduces a privacy-preserving FL method to predict the distributed popularities. - A P2D3PG algorithm is developed to maximize the EC hit rate of devices in the system in a distributed way without any privacy leakage. The P2D3PG algorithm addresses the challenges in extending the centralized deep deterministic policy gradient method to a distributed manner. The performance advantages in terms of the cache hit rate are also presented in the numerical results. The remainder of this paper is organized as follows. The system model is presented in Section II. Then, Section III introduces the problem formulation and analysis. In Section IV, the P2D3PG algorithm is presented with details. In Section V, simulation results are discussed. Finally, conclusions are drawn in Section VI. II. SYSTEM MODEL In the following, we investigate the optimizations of EC policy in the privacy-preserving MEC system. Fig. 1 illustrates the wireless service scenario in a privacy-preserving MEC network with I privacysensitive UEs and one privacy-preserving EN, where the MEC server and all the UEs have certain computing and caching capabilities. For UE-i at time t, once a content is requested but uncached locally, UE-i will upload request information to access this uncached content from the MEC server. Limited by the caching capability, the MEC server also occasionally access to the cloud through the backhual link for absent contents if necessary. Due to our privacy-preserving mechanism, each privacysensitive UEs will protect its database of historical requests from snooping by outsiders. Furthermore, the privacy-preserving EN has no permission to retain any historical information of any UEs, and the current requests information from UEs at time t must be immediately deleted from the MEC server once the contents have been scheduled ----- Wireless links Privacy-sensitive UEs itself, and the global popularity reflects the comprehensive interest across the service region of the MEC server. 
With regard to the local popularity, we model the dynamics of α[i](t) using a model-free Markov chain with |Gi| states recorded in the set Gi = {αg[i] i[|][g][i][ = 1][,][ 2][,][ · · ·][, G][i][}][, where the][ G][i][ as well] as the corresponding transition probabilities of Gi are completely unavailable due to the complexity and diversity of subjective interests [22]. Moreover, instead of conventional independent and identically distributed (IID) assumption, we assume less restrictive condition, i.e., the behaviors of UEs are independent but not identically distributed. Specifically in our model, the state set Gi of each UE-i as well as the potential state transition probabilities are different and independent. The global popularity at the MEC server side at time t can be denoted as P[G](t) = [Pn[G][(][t][)]]n[N]=1[, where][ P][ G]n [(][t][)][ is the probability] that content n is requested within the entire service area at time t. Local a11 Popularities a1g1 ... a21 Global Popularity IE-1 P2G P3G agi i a...1i a2i `�` agI I a...1I a2I requestsContent [P]1G P4G `�` MEC Server Side IE-i IE-I User Side Fig. 2. Local and global popularity. Remark 1: Note that if data are processed in an insufficiently random manner, independence can be easily violated due to spatiotemporal correlations. On the other hand, non-identical user behaviors alone can be categorized into many different types, including feature/label distribution skew, concept drift, quantity skew, etc. Additionally, UE and data distributions can fluctuate over time, which compounds the non-IIDness. Learning from highly skewed non-IID data requires characterizing and/or mitigating each of the above effects and even a mixture of them. Although several solutions have been proposed such as data-sharing and model traveling, dealing with real-world non-IID user behaviors still remains a open problem [30]. C. Dynamic Caching Mechanism Assume that the MEC server received the request information R[G](t) = {F [i](t)}I from UEs at time |Backhual Link|MEC Se| |---|---| Cloud Server MEC Server Privacy-preserving EN Fig. 1. Hierarchical architecture of the privacy-preserving MEC system under investigation. A. Service Process Let F = {F1, F2, · · ·, FN } denote the set of all contents and all these contents can be accessed from the cloud. We consider that the caching entities in the MEC server and each UE-i with limited storage capacities of M0 and Mi contents respectively, where ∀i ∈I = {1, 2, · · ·, I} is the set of UE labels. Without loss of generality, we assume that Mi ≪ M0 < N. At time t, each UE-i will generate a content request F [i](t) at an arrival rate λi (t) which is considered time-varying to be more closely aligned with reality and 0 ≤ λi (t) ≤ 1. Let F [i](t) ∈∅ denote that UE-i generate no content request at time t;otherwise F [i](t) ∈F when F [i](t) /∈∅. When F [i](t) ∈F, the probability of each content Fn ∈F requested by UE-i at time t is assumed to follow a Zipf distribution [21], defined as P[i] (α[i] (t), t) = {Pn[i] [(][α][i][ (][t][)][, t][)][}]Nn=1[. The] distribution parameter α[i] (t) evolves dynamically over time in this paper and is relevant to the subject interests of UE-i. If F [i](t) is uncached in UE-i, which is represented as F [i] (t) /∈Ci(t) and Ci(t) is the contents set cached in UE-i at time t, UEi will upload this request information to access the absent content from the MEC server. Subsequently, MEC server will search for the requested contents from UEs in its current cache state C0(t). 
B. Local and Global Popularity

We introduce the local popularity and the global popularity to model the time-varying content popularities depicted in Fig. 2. The local popularity of each UE depends on the subjective interests of the UE itself, and the global popularity reflects the comprehensive interest across the service region of the MEC server. With regard to the local popularity, we model the dynamics of $\alpha^i(t)$ using a model-free Markov chain with $|\mathcal{G}_i|$ states recorded in the set $\mathcal{G}_i = \{\alpha_{g_i}^i \mid g_i = 1, 2, \cdots, G_i\}$, where $G_i$ as well as the corresponding transition probabilities of $\mathcal{G}_i$ are completely unavailable due to the complexity and diversity of subjective interests [22]. Moreover, instead of the conventional independent and identically distributed (IID) assumption, we assume the less restrictive condition that the behaviors of the UEs are independent but not identically distributed. Specifically, in our model, the state set $\mathcal{G}_i$ of each UE-$i$ as well as the potential state transition probabilities are different and independent. The global popularity at the MEC server side at time $t$ can be denoted as $P^G(t) = [P_n^G(t)]_{n=1}^N$, where $P_n^G(t)$ is the probability that content $n$ is requested within the entire service area at time $t$.

[Fig. 2. Local popularities at the user side (UE-1 through UE-I) and the global popularity at the MEC server side, linked by the content requests.]

Remark 1: Note that if data are processed in an insufficiently random manner, independence can easily be violated due to spatiotemporal correlations. On the other hand, non-identical user behaviors alone can be categorized into many different types, including feature/label distribution skew, concept drift, quantity skew, etc. Additionally, UE and data distributions can fluctuate over time, which compounds the non-IIDness. Learning from highly skewed non-IID data requires characterizing and/or mitigating each of the above effects, and even a mixture of them. Although several solutions have been proposed, such as data sharing and model traveling, dealing with real-world non-IID user behaviors still remains an open problem [30].

C. Dynamic Caching Mechanism

Assume that the MEC server receives the request information $R^G(t) = \{F^i(t)\}_I$ from the UEs at time $t$, which is the stack of all the absent files at the user side. Then, the MEC server will check its current cache $\mathcal{C}_0(t)$ and access the cloud to get the absent files $\mathcal{C}_0^u(t) = \{F^i(t) \mid i = 1, \cdots, I\} - \mathcal{C}_0(t)$. $\mathcal{C}_0^u(t)$ will be forwarded to the UEs from the cloud via the server. Therefore, $\mathcal{C}_0^u(t)$ are the new input files for the MEC server at every time $t$. Additionally, $\mathcal{C}_0^u(t)$ can be an empty set when the cache hit rate of the MEC server at time $t$ reaches 100%. It is worth noting that $R^G(t)$ will be erased from the server before the next time slot by the privacy-preserving mechanism.

In addition, to improve the utilization of caching resources, we adopt the dynamic caching policy presented in [23]. Let $a_0^-(t) = \{a_{c_0}^-(t)\}_{c_0=1}^{M_0}$ decide which files in $\mathcal{C}_0(t)$ should be evicted from the MEC server at time $t$, where $a_{c_0}^-(t) = 1$ indicates that file $F_{c_0}^0(t) \in \mathcal{C}_0(t)$ should be deleted; otherwise, if $a_{c_0}^-(t) = 0$, it should continue to be retained. Moreover, let $a_0^+(t) = \{a_{c_0^u}^+(t)\}_{c_0^u=1}^{|\mathcal{C}_0^u(t)|}$ denote which files in $\mathcal{C}_0^u(t)$ should be preserved in the MEC cache at time $t$, where $a_{c_0^u}^+(t) = 1$ means that file $F_{c_0^u}^0(t) \in \mathcal{C}_0^u(t)$ should be stored; otherwise, if $a_{c_0^u}^+(t) = 0$, it should be outright discarded. To maximize the utilization of the cache resources, we assume that $|\mathcal{C}_0(t)| = M_0$. As such, limited by the cache capacity of the MEC server, we have

$$\sum_{c_0^u=1}^{|\mathcal{C}_0^u(t)|} a_{c_0^u}^+(t) = \sum_{c_0=1}^{M_0} a_{c_0}^-(t). \quad (1)$$

It should be emphasized that the dimension of $a_0^+(t)$ is equal to $|\mathcal{C}_0^u(t)|$, which is a variable with respect to time $t$. Under this dynamic caching mechanism, the cache state of the MEC server is time-varying, and the update operation only happens when new files arrive.

Similarly, this dynamic caching mechanism is executed in each UE. We define the cache deletion of UE-$i$ as $a_i^-(t) = \{a_{c_i}^-(t)\}_{c_i=1}^{M_i}$, where $a_{c_i}^-(t) = 1$ indicates that file $F_{c_i}^i(t) \in \mathcal{C}_i(t)$ should be discarded from UE-$i$ at time $t$; otherwise, if $a_{c_i}^-(t) = 0$, it should be retained in memory. Furthermore, we denote the new file entering UE-$i$ at time $t$ by $\mathcal{C}_i^u(t) = \{F^i(t)\} - \mathcal{C}_i(t)$. Obviously, $0 \le |\mathcal{C}_i^u(t)| \le 1$. We can also obtain the following cache capability constraint of UE-$i$:

$$\sum_{c_i^u=1}^{|\mathcal{C}_i^u(t)|} a_{c_i^u}^+(t) = \sum_{c_i=1}^{M_i} a_{c_i}^-(t), \quad (2)$$

where $a_{c_i^u}^+(t)$ decides whether file $F_{c_i^u}^i(t) \in \mathcal{C}_i^u(t)$ should be preserved in UE-$i$ at time $t$ or not. $F_{c_i^u}^i(t)$ should be stored when $a_{c_i^u}^+(t) = 1$; otherwise, $a_{c_i^u}^+(t) = 0$ means that file $F_{c_i^u}^i(t)$ should be discarded, or $|\mathcal{C}_i^u(t)| = 0$ happens. It is worth mentioning that the cache preservation indicator of UE-$i$ is a scalar, resulting from $0 \le |\mathcal{C}_i^u(t)| \le 1$, denoted as $a_i^+(t) = a_{c_i^u}^+(t)$.

D. Realtime Cache Hit Rate

At each time $t$, the MEC server receives a certain number of requests from the UEs within its service coverage, denoted as $N_0^R(t) = |R^G(t)|$. Considering that $\lambda_i(t)$ varies with $t$ and $0 \le \lambda_i(t) \le 1$, we have $N_0^R(t) \le I$. Then we define the global realtime cache hit rate at the MEC server side as

$$H_0(t) = 1 - \frac{|\mathcal{C}_0^u(t)|}{N_0^R(t)}. \quad (3)$$

For UE-$i$, we define the realtime cache hit rate as

$$H_i(t) = 1 - |\mathcal{C}_i^u(t)|. \quad (4)$$
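A minimal sketch of one time slot of this mechanism, with the realtime hit rate of (3). The eviction choice here is an arbitrary placeholder for the learned indicators $a_0^-(t)$ and $a_0^+(t)$, and the data structures are self-chosen for illustration:

```python
def mec_time_slot(cache: set, requests: list, M0: int) -> float:
    """Serve R^G(t), update C0(t) under constraint (1), and return H0(t)."""
    misses = {f for f in requests if f not in cache}   # C0^u(t): fetched from the cloud
    hit_rate = 1.0 if not requests else 1.0 - len(misses) / len(requests)  # eq. (3)
    for f in misses:
        if len(cache) >= M0:
            cache.discard(next(iter(cache)))           # a-_{c0}(t) = 1: placeholder eviction
        cache.add(f)                                   # a+_{c0^u}(t) = 1: keep every new file
    requests.clear()                                   # privacy: erase R^G(t) before slot t+1
    return hit_rate
```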
Considering that one UE requests at most one content in a single time slot, $H_i(t)$ can only be equal to 0 or 1. Hence, the sliding average of $H_i(t)$ over a period of time $T_h$ is given by

$$H_i^{savg}(t) = \frac{1}{T_h} \sum_{t_h=0}^{T_h-1} H_i(t - t_h). \quad (5)$$

III. PROBLEM FORMULATION AND ANALYSIS

A. Problem Formulation

To effectively leverage the limited caching resources in the MEC system, we maximize the distributed cache hit rate of all the devices by optimizing the dynamic caching mechanism within the constraint of privacy preservation. Furthermore, we maximize the long-term cache hit rate over a continuous period of time. Therefore, the underlying optimization problem at the MEC side is formulated as follows:

$$\mathcal{P}1: \ \max_{\mathcal{A}_0} \ \lim_{\Gamma \to \infty} \sum_{\tau=0}^{\Gamma} \mathbb{E}\left[\chi^\tau H_0(t+\tau)\right], \quad (6a)$$

$$\text{s.t.} \ (1), \quad (6b)$$

$$|\mathcal{C}_0(t)| \le M_0, \quad (6c)$$

$$a_{c_0}^-(t) \in \{0, 1\}, \ \forall c_0 \in \mathcal{M}_0, \quad (6d)$$

$$a_{c_0^u}^+(t) \in \{0, 1\}, \ \forall c_0^u \in \mathcal{M}_0^u(t), \quad (6e)$$

where $\mathcal{A}_0 = \{a_0(t) = \{a_0^+(t), a_0^-(t)\} \mid t = 0, 1, 2, \cdots\}$ represents the collection of dynamic caching actions at the MEC server side in each time slot $t$, and $\chi \in [0, 1]$ is the discount factor; the expectation is taken with respect to the measure induced by the decision variables as well as the system state. Besides, $\mathcal{M}_0^u(t) = \{1, 2, \cdots, |\mathcal{C}_0^u(t)|\}$ and $\mathcal{M}_0 = \{1, 2, \cdots, M_0\}$. The constraint in (6c) reflects the limitation of the caching capability of the MEC server, and (6b) ensures a balance in the size of the cached files at the MEC server after the caching replacement, keeping the cache full but not overflowing.

At the local user side, the optimization problem of an arbitrary UE-$i$ can be formulated as

$$\mathcal{P}2: \ \max_{\mathcal{A}_i} \ \lim_{\Gamma \to \infty} \sum_{\tau=0}^{\Gamma} \mathbb{E}\left[\chi^\tau H_i^{savg}(t+\tau)\right], \quad (7a)$$

$$\text{s.t.} \ (2), \quad (7b)$$

$$|\mathcal{C}_i(t)| \le M_i, \quad (7c)$$

$$a_{c_i}^-(t) \in \{0, 1\}, \ \forall c_i \in \mathcal{M}_i, \quad (7d)$$

$$a_{c_i^u}^+(t) \in \{0, 1\}, \ \forall c_i^u \in \mathcal{M}_i^u(t), \quad (7e)$$

where $\mathcal{A}_i = \{a_i(t) = \{a_i^+(t), a_i^-(t)\} \mid t = 0, 1, 2, \cdots\}$ is the collection of dynamic caching actions on UE-$i$ in each time slot $t$. Besides, $\mathcal{M}_i^u(t) = \{1, 2, \cdots, |\mathcal{C}_i^u(t)|\}$ and $\mathcal{M}_i = \{1, 2, \cdots, M_i\}$.

The following facts and technical challenges of problems (6) and (7) should be noted:

- The objective functions of the problems are both accumulated over time rather than instantaneous functions.
- The solutions of problems (6) and (7) are both dynamic strategies over time rather than transient ones. Moreover, the dimension of $a_0^+(t)$ is time-varying.
- The cache states and actions of the MEC server and the UEs conform to a contextual chain property over time.
- The distributed problems formulated above are interactional, but the privacy-preserving mechanism prevents information exchange among the problems.
B. Problem Recast

To overcome the first two technical challenges, as well as to account for the chain property mentioned in the third, we convert the underlying optimization problems into Markov decision processes (MDPs), each of which consists of four components: state space, action space, state transition probabilities, and reward. Specifically, the MDP descriptions of problems (6) and (7) are denoted as $\langle \mathcal{S}_0, \mathcal{A}_0, \mathcal{P}_0, \mathcal{R}_0 \rangle$ and $\langle \mathcal{S}_i, \mathcal{A}_i, \mathcal{P}_i, \mathcal{R}_i \rangle$, respectively.

1) States: Considering the time variable $t$, $\mathcal{S}_0$ and $\mathcal{S}_i$ can be denoted as $\mathcal{S}_0 = \{s_0(t) \mid t = 0, 1, 2, \cdots\}$ and $\mathcal{S}_i = \{s_i(t) \mid t = 0, 1, 2, \cdots\}$, respectively. According to the information required by the dynamic caching actions, we define $s_0(t) = \{\mathcal{C}_0(t), R^G(t)\}$ and $s_i(t) = \{\mathcal{C}_i(t), R^i(t)\}$, where $R^i(t) = [F^i(t-H), \cdots, F^i(t)]$ is an extractor of UE-$i$ that collects its historical requests over the $H$ continuous time slots before time $t$, and $H$ is the observation window length of the extractor.

2) Actions: From the distributed problems formulated above, we already have $\mathcal{A}_0$ and $\mathcal{A}_i$, $\forall i \in \mathcal{I}$.

3) State Transition: The state transition probability describes how the system transits from one state to the next state under the current actions. For problems (6) and (7), the state transition probabilities can be respectively denoted as $P_{s_0(t) \to s_0(t+1)}^{a_0(t)}$ and $P_{s_i(t) \to s_i(t+1)}^{a_i(t)}$. However, in our problems, $R^i(t)$ and $R^G(t)$ depend on the local and global popularity described in Section II-B, which renders the transition probabilities unavailable.

4) Reward: The reward function assigns each perceived state to a value associated with an explicit goal. For an MDP, when an action is taken under a state, the state transfers to the next state and the environment immediately returns an instantaneous reward as feedback, which is respectively given by the cache hit rates $H_0(t)$ and $H_i^{savg}(t)$ in our problems. On this basis, the cumulative rewards starting from time $t$ can be respectively given by

$$r_0(t) = \sum_{\tau=0}^{\Gamma} \chi^\tau H_0(t+\tau), \quad (8)$$

$$r_i(t) = \sum_{\tau=0}^{\Gamma} \chi^\tau H_i^{savg}(t+\tau). \quad (9)$$

Specifically, in our problems, the critical component $\mathcal{S}_0$ of an MDP is unobservable under the privacy-preserving mechanism. The reason is that $R^G(t)$, as a component of $s_0(t)$, is the privacy of the UEs and must be immediately erased from the MEC server in the current time slot. Thus, the MEC server cannot observe $s_0(t)$ at any time $t$, which makes $\mathcal{S}_0$ unavailable. Therefore, the technical bottlenecks from the fourth challenge still remain, especially for the MDP problem converted from problem (6).

C. Privacy-Preserving Distributed Popularity Prediction

To allow privacy preservation as well as to help all devices cache contents more effectively, we introduce the local and global popularity into the system states. In detail, we replace $R^i(t)$ in $s_i(t)$ and $R^G(t)$ in $s_0(t)$ with the future content popularities $P^i(\alpha^i(t+1), t+1)$ and $P^G(t+1)$, respectively, renewed as

$$\tilde{s}_i(t) = \{\mathcal{C}_i(t), P^i(\alpha^i(t+1), t+1)\}, \quad (10)$$

$$\tilde{s}_0(t) = \{\mathcal{C}_0(t), P^G(t+1)\}. \quad (11)$$
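As a toy illustration of this recast, the sketch below builds the renewed server state of (11) from the momentary request stack and then erases that stack in place, so only the cache state and a predicted popularity vector survive. The `predictor` callable stands in for the global model $f^{\Theta^G}(\cdot)$ of (13) below and is an assumed interface, not code from the paper.

```python
import numpy as np

def recast_server_state(cache: set, requests: list, predictor):
    """Build the recast state s~0(t) = (C0(t), P^G(t+1)) of eq. (11).

    `requests` plays the role of R^G(t): it is consumed once for the
    popularity prediction and then erased, mimicking the privacy-preserving
    deletion at the MEC server within the current time slot.
    """
    pop_hat = np.asarray(predictor(list(requests)))   # P^G(t+1), cf. eq. (13)
    state = (frozenset(cache), pop_hat)
    requests.clear()                                  # R^G(t) must not outlive slot t
    return state
```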
The state spaces can accordingly be rewritten as $\tilde{\mathcal{S}}_0 = \{\tilde{s}_0(t) \mid t = 0, 1, 2, \cdots\}$ and $\tilde{\mathcal{S}}_i = \{\tilde{s}_i(t) \mid t = 0, 1, 2, \cdots\}$.

As clarified earlier, the variations of $P^i(\alpha^i(t+1), t+1)$ and $P^G(t+1)$ depend on the interests of the UEs, which are subjective and complicated. Thus, $P^i(\alpha^i(t+1), t+1)$ and $P^G(t+1)$ are unobservable, especially under the constraint of privacy preservation. Here, we introduce an FL method to predict the dynamic popularities while preserving user privacy. Specifically, we deploy a prediction model with the same neural network architecture on each device in the system. At the local user side, the future popularity prediction of UE-$i$ is based on the historical requests stored on its own equipment, and the prediction can be denoted as

$$\hat{P}^i(\alpha^i(t+1), t+1) = f^{\Theta^i}(R^i(t)), \quad (12)$$

where $f^{\Theta^i}(\cdot)$ is the local predictive model in UE-$i$, $\Theta^i$ is the collection of trainable parameters, and $\hat{P}^i(\alpha^i(t+1), t+1)$ is the prediction of $P^i(\alpha^i(t+1), t+1)$. At the MEC server side at time $t$, the temporary $R^G(t)$ can be used by the URFL method for global prediction before the erase operation, which can be denoted as

$$\hat{P}^G(t+1) = f^{\Theta^G}(R^G(t)), \quad (13)$$

where $\hat{P}^G(t+1)$ denotes the prediction of the global popularity $P^G(t+1)$, $f^{\Theta^G}(\cdot)$ is the global predictive model in the MEC server, and $\Theta^G$ is its parameter set.

To train these prediction models under privacy preservation, the FL framework is adopted. At the local user side, the database formed by $R^i(t)$ is used for the local training of $\Theta^i$, and no connectivity between UEs exists. At the MEC server side, the parameter set $\Theta^G$ is obtained by parameter aggregation based on the FL framework, which can be denoted as

$$\Theta^G = \frac{1}{I} \sum_{i=1}^{I} \omega_i \Theta^i, \quad (14)$$

where $\omega_i$ is the aggregation weight and $\Theta^i$ is uploaded by UE-$i$ after every certain number of local training steps. Once a weight aggregation is complete, the new parameters $\Theta^G$ are broadcast to all UEs for a new round of local training until the models converge. Because the local training is performed solely on the local equipment, and the interaction between the local UEs and the MEC server only involves the passing of prediction model parameters, the user privacy, i.e., $R^i(t)$, is preserved during this training phase.

After the distributed popularity prediction, the challenge posed by the unobservable state space $\mathcal{S}_0$ has been addressed. Then, the optimal policies $\pi_0^*$ and $\pi_i^*$ for problems (6) and (7) can be respectively derived as equations (15) and (16) based on the Bellman equation:

$$\pi_0^* = \arg\max_{a_0(t) \in \mathcal{A}_0} \sum_{\tilde{s}_0(t+1) \in \tilde{\mathcal{S}}_0} \tilde{P}_{\tilde{s}_0(t) \to \tilde{s}_0(t+1)}^{a_0(t)} \left( H_0(t) + \chi V^{\pi_0}(\tilde{s}_0(t+1)) \right), \quad (15)$$

$$\pi_i^* = \arg\max_{a_i(t) \in \mathcal{A}_i} \sum_{\tilde{s}_i(t+1) \in \tilde{\mathcal{S}}_i} \tilde{P}_{\tilde{s}_i(t) \to \tilde{s}_i(t+1)}^{a_i(t)} \left( H_i^{savg}(t) + \chi V^{\pi_i}(\tilde{s}_i(t+1)) \right), \quad (16)$$

where $V^{\pi_0}(\tilde{s}_0(t+1)) = r_0(t+1)$ is the value function under policy $\pi_0$ at state $\tilde{s}_0(t+1)$, and $V^{\pi_i}(\tilde{s}_i(t+1)) = r_i(t+1)$ is the value function under policy $\pi_i$ at state $\tilde{s}_i(t+1)$. However, according to the local and global popularity model in our system, $\tilde{P}_{\tilde{s}_0(t) \to \tilde{s}_0(t+1)}^{a_0(t)}$ and $\tilde{P}_{\tilde{s}_i(t) \to \tilde{s}_i(t+1)}^{a_i(t)}$ still cannot be acquired even if we obtain $\hat{P}^i(\alpha^i(t+1), t+1)$ and $\hat{P}^G(t+1)$. As such, traditional optimization techniques such as dynamic programming cannot effectively solve our problems, and we instead propose a privacy-preserving distributed reinforcement learning algorithm.
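The parameter aggregation step in (14) amounts to a weighted federated average over the uploaded parameter sets. A minimal NumPy sketch, with a self-chosen data layout (one list of layer arrays per UE) used purely for illustration:

```python
import numpy as np

def aggregate(local_params, weights):
    """Eq. (14): Theta^G = (1/I) * sum_i omega_i * Theta^i.

    local_params: one list of layer arrays per UE; only the parameters are
    uploaded, never the request histories R^i(t). weights: omega_i.
    """
    I = len(local_params)
    return [sum(w * layer for w, layer in zip(weights, layers)) / I
            for layers in zip(*local_params)]

# After aggregation, Theta^G would be broadcast back to all UEs for the
# next round of local training, as described above.
```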
IV. P2D3PG FOR DYNAMIC EDGE CACHING

Once $\hat{P}^G(t+1)$ is predicted, a certain EC policy should subsequently be determined and implemented to maximize the EC hit rate of the entire MEC system. In this work, we propose a P2D3PG algorithm for this purpose; the designed algorithm framework is illustrated in Fig. 3.

[Fig. 3. Framework of the P2D3PG algorithm: the global predictive model $f^{\Theta^G}(\cdot)$ together with the actor and critic networks at the MEC server side, and the local predictive models $f^{\Theta^i}(\cdot)$ with actor networks at UE-1 through UE-I; absent files flow from the cloud to the MEC server and on to the UEs.]

A. MEC Server Side

First, the MEC server receives the request information $R^G(t)$ from the UEs at the beginning of each time slot $t$. Subsequently, $R^G(t)$ is fed into the global predictive model obtained by URFL to predict the global popularity $\hat{P}^G(t+1)$ of the next time slot $t+1$ based on equation (13). Meanwhile, the absent files of the UEs are delivered to the UEs while $R^G(t)$ is immediately erased from the MEC server in time slot $t$. Then, combining $\hat{P}^G(t+1)$ with the current cache state of the MEC server $\mathcal{C}_0(t)$, the state $\tilde{s}_0(t)$ can be obtained. Subsequently, $\tilde{s}_0(t)$ is fed into the actor network, which is a neural network with several dense layers. The actor network amounts to a parameterized actor function $a_0(t) = \pi^{\Theta^A}(\tilde{s}_0(t))$ which specifies the current policy by deterministically mapping states to a specific action, where $\pi$ represents a policy with parameters $\Theta^A$. In order for the agent to fully explore the environment, an exploration-exploitation method is adopted. Different from the $\varepsilon$-greedy exploration [24], which is effective for small or discrete action spaces, in this work we balance the exploration and the exploitation by adding a Gaussian noise vector to the policy output, i.e.,

$$a_0(t) = \pi^{\Theta^A}(\tilde{s}_0(t)) + n_0(t), \quad (17)$$

where $n_0(t)$ is the Gaussian noise vector whose components follow a Gaussian distribution with a mean of 0 and a variance of $\sigma^2$. Then the action $a_0(t)$, together with the state $\tilde{s}_0(t)$, is sent to the critic network, which is also a neural network containing several dense layers. Consequently, the critic network outputs the estimate of the target-Q value $Q(\tilde{s}_0(t+1), a_0(t+1) \mid \Theta^C)$, which is a step forward for estimating the Q-value defined as

$$Q(\tilde{s}_0(t), a_0(t) \mid \Theta^C) = \mathbb{E}_{\pi_0}\left[ \sum_{\tau=0}^{+\infty} \chi^\tau H_0(t+\tau) \,\Big|\, \tilde{s}_0(t), a_0(t) \right], \quad (18)$$

where $\Theta^C$ is the set of trainable parameters of the critic network. After a further linear transformation, the output $Q(\tilde{s}_0(t+1), a_0(t+1) \mid \Theta^C)$ is fed back to the actor network and contributes to the loss function of the actor. In addition, the cache state of the MEC server $\mathcal{C}_0(t)$ at time $t$ is updated to the next cache state $\mathcal{C}_0(t+1)$ following the guidance of the action $a_0(t)$.

In practical training, the two networks $\pi^{\Theta^A}(\cdot)$ and $Q(\cdot \mid \Theta^C)$ are called the online networks. Correspondingly, for a stabler and faster convergence, there are two counterparts, respectively called the target actor network $\pi^{\bar{\Theta}^A}(\cdot)$ and the target critic network $Q(\cdot \mid \bar{\Theta}^C)$, whose architectures and parameters are cloned from their online networks every few steps.
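The exploration step (17) is straightforward to express in code. A hedged PyTorch sketch, where `actor` is any assumed `torch.nn.Module` mapping states to actions:

```python
import torch

def explore_action(actor: torch.nn.Module, state: torch.Tensor,
                   sigma: float = 0.1) -> torch.Tensor:
    """a0(t) = pi_{Theta_A}(s~0(t)) + n0(t), eq. (17)."""
    with torch.no_grad():
        action = actor(state)
    noise = sigma * torch.randn_like(action)   # n0(t) ~ N(0, sigma^2 I)
    return action + noise
```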
Then we train the actor network and the critic network jointly. During the training phase at the MEC server, we adopt experience replay to enhance the stability of the training. The dataset in the replay buffer can be denoted as

$$\Omega = \{(\tilde{s}_0(t), a_0(t), H_0(t), \tilde{s}_0(t+1))\}. \quad (19)$$

Specifically, during the mini-batch training, $N_s$ samples $\{(\tilde{s}_0(t_{n_s}), a_0(t_{n_s}), H_0(t_{n_s}), \tilde{s}_0(t_{n_s}+1))\}$ ($n_s \in \{1, 2, \cdots, N_s\}$) are randomly taken as a mini-batch from the replay buffer $\Omega$, where $t_{n_s}$ is the random sample point at time $t$. To let the critic network $Q(\cdot \mid \Theta^C)$ approach the real Q-value function, which is further used to guide the training of the actor, the training loss function of the critic in the MSE sense can be defined as

$$L(\Theta^C) = \frac{1}{N_s} \sum_{n_s=1}^{N_s} \left[ y(t_{n_s}) - Q(\tilde{s}_0(t_{n_s}), a_0(t_{n_s}) \mid \Theta^C) \right]^2, \quad (20)$$

where

$$y(t_{n_s}) = H_0(t_{n_s}) + \chi Q\left(\tilde{s}_0(t_{n_s}+1), a_0(t_{n_s}+1) \mid \bar{\Theta}^C\right), \quad (21)$$

and $t_{n_s}$ denotes the random sample points over time. Thus, we optimize $\Theta^C$ by minimizing this MSE loss, and $\Theta^C$ can be updated via $\nabla_{\Theta^C} L(\Theta^C)$. Consequently, $Q(\tilde{s}_0(t_{n_s}), a_0(t_{n_s}) \mid \Theta^C)$ gradually approximates the real Q-value.

The actor is aimed at producing an optimal policy by maximizing the Q-value, denoted as

$$\pi^{\Theta^A}(\tilde{s}_0) = \arg\max_{a_0} Q(\tilde{s}_0, a_0 \mid \Theta^A). \quad (22)$$

Thus, a performance objective function for the current policy evaluation is designed as

$$J_\beta(\pi) = \mathbb{E}_{\tilde{s}_0 \sim \rho^\beta}\left[ Q(\tilde{s}_0, a_0 \mid \Theta^A) \right], \quad (23)$$

which estimates the expectation of $Q(\tilde{s}_0, a_0 \mid \Theta^A)$ under the state distribution $\tilde{s}_0 \sim \rho^\beta$. Then, the actor is updated by applying the chain rule to the expected return from the start state distribution with respect to the actor parameters $\Theta^A$:

$$\nabla_{\Theta^A} J_\beta(\pi) = \mathbb{E}_{\tilde{s}_0 \sim \rho^\beta}\left[ \nabla_{a_0} Q(\tilde{s}_0, a_0 \mid \Theta^A) \big|_{a_0 = \pi^{\Theta^A}(\tilde{s}_0)} \cdot \nabla_{\Theta^A} \pi^{\Theta^A}(\tilde{s}_0) \right]. \quad (24)$$

During the practical training, $y(t_{n_s})$ is sent to the actor network as the current real Q-value according to (21). Besides, a mini-batch Monte Carlo sampling with a size of $N_m$ is adopted to estimate the expectation, which yields the unbiased estimate shown in (25), where $t_{n_m}$ denotes the random sample point at time instant $t$.

Algorithm 1: P2D3PG for dynamic EC at the MEC server.
1: Initialize: Initialize $\Theta^A$, $\Theta^C$ and the memory buffer $\Omega$. Obtain the initial $\bar{\Theta}^A$ and $\bar{\Theta}^C$ by cloning $\Theta^A$ and $\Theta^C$.
2: For episode $= 1, 2, \cdots, \Psi$ do:
3: Initialize the cache state $\mathcal{C}_0(0)$. Initialize $R^G(0)$.
4: For $t = 1, 2, \cdots, \Upsilon$ do:
5: Receive $R^G(t)$ from the UEs. Then predict $\hat{P}^G(t+1)$ by (13) under the proposed URFL.
6: Observe the state $\tilde{s}_0(t)$, and observe the reward feedback $H_0(t)$ by (3).
7: Delete $R^G(t)$ for privacy preservation.
8: Update $\mathcal{C}_0(t)$ to $\mathcal{C}_0(t+1)$ under the action $a_0(t)$ by (17).
9: Store the point $(\tilde{s}_0(t-1), a_0(t), H_0(t), \tilde{s}_0(t))$ in $\Omega$.
10: Randomly sample a mini-batch of $N_s$ points from $\Omega$.
11: Calculate $y(t_{n_s})$ by (21). Then update $\Theta^C$ by (20) and $\nabla_{\Theta^C} L(\Theta^C)$. Update $\Theta^A$ by (24).
12: Soft-update the target actor/critic every $\phi$ steps: $\bar{\Theta}^C \leftarrow \nu \Theta^C + (1-\nu)\bar{\Theta}^C$, $\bar{\Theta}^A \leftarrow \nu \Theta^A + (1-\nu)\bar{\Theta}^A$.
13: End For
14: End For
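Steps 10-12 of Algorithm 1 correspond to a standard DDPG-style update. The following compact PyTorch sketch assumes self-chosen network interfaces (in particular, `critic(s, a)` returning one Q-value per sample) and is an illustration, not the authors' implementation:

```python
import torch
import torch.nn.functional as F

def train_step(batch, actor, critic, t_actor, t_critic,
               actor_opt, critic_opt, chi=0.9, nu=0.001):
    s, a, r, s_next = batch                     # mini-batch from Omega, eq. (19)
    with torch.no_grad():                       # target y(t_ns), eq. (21)
        y = r + chi * t_critic(s_next, t_actor(s_next))
    critic_loss = F.mse_loss(critic(s, a), y)   # MSE loss of eq. (20)
    critic_opt.zero_grad(); critic_loss.backward(); critic_opt.step()
    actor_loss = -critic(s, actor(s)).mean()    # ascend Q, cf. eqs. (23)-(25)
    actor_opt.zero_grad(); actor_loss.backward(); actor_opt.step()
    # Soft update of the target networks with coefficient nu (step 12).
    for tgt, src in ((t_actor, actor), (t_critic, critic)):
        for tp, p in zip(tgt.parameters(), src.parameters()):
            tp.data.mul_(1.0 - nu).add_(p.data, alpha=nu)
```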
$$\nabla_{\Theta^A} J_\beta(\pi) \approx \frac{1}{N_m} \sum_{n_m=1}^{N_m} \left[ \nabla_{a_0} Q(\tilde{s}_0(t_{n_m}), \tilde{a}_0(t_{n_m}) \mid \Theta^A) \big|_{\tilde{a}_0(t_{n_m}) = \pi^{\Theta^A}(\tilde{s}_0(t_{n_m})) + n_0(t_{n_m})} \cdot \nabla_{\Theta^A} \pi^{\Theta^A}(\tilde{s}_0(t_{n_m})) \right]. \quad (25)$$

B. Local User Side

Furthermore, as illustrated in Fig. 3, the MEC server then broadcasts the trained actor to the UEs within its service coverage. For each UE-$i$, the prediction of the future content popularity $\hat{P}^i(\alpha^i(t+1), t+1)$ should first be obtained by feeding $R^i(t)$ into the local predictive model shown in Fig. 3. Then the state $\tilde{s}_i(t)$, consisting of $\mathcal{C}_i(t)$ and $\hat{P}^i(\alpha^i(t+1), t+1)$, is fed to the actor, which outputs the action $a_i(t)$, denoted as

$$a_i(t) = \pi^{\Theta^A}(\tilde{s}_i(t)). \quad (26)$$

Following $a_i(t)$, UE-$i$ updates its cache state to $\mathcal{C}_i(t+1)$ based on the uncached files $\mathcal{C}_i^u(t)$ which are accessed from the MEC server. Finally, the request of UE-$i$ at time $t$ is satisfied, and a new request is generated subsequently.

Algorithm 2: P2D3PG for dynamic EC at the local UEs.
1: Each UE-$i \in \mathcal{I}$ in parallel do:
2: Initialize the cache state $\mathcal{C}_i(0)$ and the extractor $R^i(0)$.
3: Receive the actor $\Theta^A$ broadcast from the MEC server.
4: For $t = 0, 1, \cdots, \Upsilon$ do:
5: Get the historical requests by $R^i(t)$.
6: Predict $\hat{P}^i(\alpha^i(t+1), t+1)$ by (12).
7: Observe the state $\tilde{s}_i(t)$, and observe the reward feedback $H_i^{savg}(t)$ by (5).
8: Select the action $a_i(t)$ and update $\mathcal{C}_i(t)$ to $\mathcal{C}_i(t+1)$.
9: Make the new request $F^i(t+1) \mid P^i(\alpha^i(t), t)$.
10: End For

The overall process of the proposed P2D3PG algorithm at the MEC server and the local UEs is summarized in Algorithm 1 and Algorithm 2, respectively, where $\Psi$ is the total number of episodes, $\phi$ is the step interval of the parameter clone between the online/target networks, and $\nu$ is the coefficient of the soft update, normally set to 0.001. Based on the distributed framework of the proposed P2D3PG algorithm, the computing resources of the UEs for training their actors can be saved. Additionally, the replay buffer $\Omega$ on the MEC server does not contain any private information of the UEs.

Remark 2: Note that while there are actor and critic networks at the MEC server side, we only have an actor network at the user side. This arrangement is determined by the function of the critic network in the proposed P2D3PG algorithm. More specifically, the critic network is used for guiding the gradient descent of the actor network parameters during the training phase. Since the entire training phase of the proposed scheme is completed at the MEC server in Algorithm 1, there is no need to deploy the critic network at the user side.
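Algorithm 2 condenses to a short per-slot loop. The sketch below uses invented interfaces (the `ue` object standing in for the cache, the request extractor of window $H$, and the sliding hit-rate bookkeeping of (5)); it illustrates the control flow only:

```python
def ue_time_slot(ue, actor, local_model) -> float:
    """One iteration of Algorithm 2 on UE-i (all interfaces assumed)."""
    hist = ue.request_history()                 # R^i(t): the last H requests
    pop_hat = local_model(hist)                 # P^i(alpha^i(t+1), t+1), eq. (12)
    state = (frozenset(ue.cache), pop_hat)      # s~i(t) of eq. (10)
    action = actor(state)                       # a_i(t) = pi_{Theta_A}(s~i(t)), eq. (26)
    ue.apply_action(action)                     # Ci(t) -> Ci(t+1) under constraint (2)
    return ue.sliding_hit_rate()                # reward H_i^savg(t) of eq. (5)
```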
V. NUMERICAL SIMULATIONS AND ANALYSES

In the simulations, we set the number of total files $N = 24$ and the window length of the extractor $H = 10$. For all the local UEs, we assume their cache capacities are equal, denoted as $M_i = M_j$, $\forall i, j \in \mathcal{I}$, $i \ne j$. In our simulation, the set $\mathcal{G}_i$ of each local UE-$i$ is randomly generated. Besides, the transition probability matrix $P_i = [P_{g_l g_k}^i]_{g_l, g_k = 0}^{G_i}$ is also generated randomly, where $P_{g_l g_k}^i$ denotes the transition probability from $\alpha_{g_l}^i$ to $\alpha_{g_k}^i$. It should be emphasized that the parameter set $\mathcal{G}_i$ and the transition probability matrix $P_i$ are both unknown, to the MEC server as well as to UE-$i$ itself. The Adam optimizer [25] is used to train the parameters $\Theta^C$ and $\Theta^A$ with the same adaptive learning rate starting from $10^{-4}$.

[Fig. 4. Convergence and generalization of the proposed P2D3PG method in dynamic edge caching.]

Then, we evaluate the performance of the proposed P2D3PG algorithm at the MEC server side and at the local user side, respectively. There are five baselines for comparison. Three are popular methods in distributed EC systems: the least recently used (LRU) policy [26], which discards the least recently used contents; the least frequently used (LFU) policy [27], which discards the least referenced contents in the cache; and the first-input-first-output (FIFO) policy, which discards the initial contents in the FIFO queue. These three methods realize the cache update without privacy preservation: LRU and LFU both need to record the UEs' request information continually in order to count the contents' requested frequencies, and FIFO needs to maintain a queue of the request information, which contains the UEs' privacy. Additionally, we set a normalized advantage functions (NAF) method [28] as another baseline. The NAF algorithm is a deep reinforcement learning algorithm developed from the deep Q network algorithm [29] and is applicable to high-dimensional action control problems. For the training of the NAF algorithm, the historical request information of the UEs must be collected and stored in the MEC server without any consideration of privacy preservation. Lastly, in the random baseline, the caching policy is randomly formulated and a random action is executed regardless of the current state.
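For reference, the three conventional eviction rules above reduce to a few lines each. This is a minimal sketch with self-chosen data structures, not the cited implementations of [26], [27]:

```python
from collections import Counter, OrderedDict, deque

def lru_evict(cache: OrderedDict) -> None:
    """LRU [26]: discard the least recently used content."""
    cache.popitem(last=False)                 # front of the OrderedDict = oldest use

def lfu_evict(cache: set, counts: Counter) -> None:
    """LFU [27]: discard the content with the fewest recorded references."""
    victim = min(cache, key=lambda f: counts[f])
    cache.discard(victim)

def fifo_evict(order: deque, cache: set) -> None:
    """FIFO: discard the content that entered the cache first."""
    cache.discard(order.popleft())
```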
From the perspective of the convergence behavior of the proposed P2D3PG, the training processes under different $I$ and $M_0$ are illustrated in Fig. 4. We can observe from Fig. 4 that the MEC server achieves a stabilized mean of the cache hit rate around 4000 episodes under different numbers of UEs or different cache capacities, which indicates that the agent at the MEC server has acquired the inner knowledge within the global region and that the proposed P2D3PG algorithm gradually converges. We also observe from Fig. 4 that the P2D3PG algorithm achieves basically similar performance when the cache capacity of the MEC server is fixed but the number of UEs within the service coverage is changed. Besides, we can see that the average cache hit rate of the MEC server increases with increasing $M_0$, since a larger cache capacity can cache more effective contents for the UEs.

[Fig. 5. Performance comparison of the proposed P2D3PG method in terms of cache hit rate with I = 6. (a) MEC server side. (b) Local user side of UserID 1.]

At the MEC server side, the performance comparison among the proposed P2D3PG algorithm and all the baseline methods versus the cache capacity $M_0$ from 6 units to 24 units is presented in Fig. 5(a). It can be seen from Fig. 5(a) that the proposed P2D3PG algorithm outperforms all the other baseline methods in terms of cache hit rate at the MEC server side while ensuring privacy preservation. In the extreme case $M_0 = N = 24$, all the considered methods reach a 100% cache hit rate. The reason is that the EC optimization aims at utilizing the limited cache resources more effectively; when the MEC server can cache all the possible contents of the service, there is no point in optimizing the EC policy, and all the requests can always be satisfied. Furthermore, Fig. 5(a) also shows that the advantage of the proposed P2D3PG method becomes more significant as the cache capacity of the MEC server decreases. This implies that the proposed P2D3PG is more competitive with regard to the cache hit rate especially when the cache resource of the MEC server is limited. Specifically, the cache hit rate of the proposed P2D3PG is about 60% when the cache capacity $M_0 = 6$ is 25% of the total contents' size, which is nearly 14.4% higher than the LRU and LFU methods, 46.6% higher than the NAF and FIFO methods, and almost 73.1% higher than the random baseline method. Note that the privacy preservation of the proposed P2D3PG method is another advantage over the baselines.

The performance comparison between the proposed P2D3PG algorithm and all the baseline methods versus the UE cache capacity $M_i$ from 3 units to 11 units at the end side is presented in Fig. 5(b). The evaluation of the local UEs is represented by the user with UserID 1. We can observe from Fig. 5(b) that the proposed P2D3PG algorithm still outperforms all the other baseline methods in terms of cache hit rate at the local user side while realizing privacy preservation.

As described earlier, in the proposed scheme, we formulate an optimization problem to predict the upcoming files which are going to be requested by the users; the cache hit rate is thereby improved by averaging over time. While the goal is to maximize the average cache hit rate, it is also meaningful to examine the standard deviation (SD) of the cache hit rate at both the MEC and the local user sides.

[Fig. 6. Performance comparison in terms of the standard deviation of the cache hit rate with I = 6, H = 10, and N = 24. (a) MEC server side (M0 = 9). (b) Local user side of UserID 1 (M1 = 5).]

At the MEC server side, we test all the methods within a period of 1024 continuous time slots with $I = 6$ and $M_0 = 9$. We record all the testing results of the cache hit rate $H_0(t)$ to draw Fig. 6(a) and calculate their SDs. We observe from Fig. 6(a) that, at the MEC server side, the proposed P2D3PG algorithm achieves the lowest SD at 0.0138 while yielding the highest cache hit rate compared to the baseline methods. Regarding the local user side, we test all the methods on UE-1 during 512 continuous time slots with $M_1 = 5$ and visualize the results of $H_i^{savg}(t)$ in Fig. 6(b). Although at the local user side all the compared methods obtain very close SDs, P2D3PG is still superior, as it achieves the highest cache hit rate as well as preserving the users' privacy.

We further explore the effect of different window lengths $H$ on the cache hit rate. As $H$ is a parameter peculiar to P2D3PG, the curves of the baseline methods are presented to examine whether there exists a certain set of model parameters such that the conventional approaches work better than or similarly to the proposed P2D3PG.

[Fig. 7. Impact of the window length H of the extractor on the cache-hit-rate performance with I = 10 and N = 24. (a) MEC server side (M0 = 9). (b) Local user side of UserID 3 (M3 = 9).]

We observe from Fig. 7(a) that, with the increase of $H$, the cache hit rate at the MEC server side first rises until reaching a certain point and then gradually declines to a steady level. We believe that this is because, with an excessively long window length, the algorithm observes too much redundant information from the historical requests, while if the window is too short, the algorithm can hardly observe sufficient information from the historical requests. From Fig. 7(b) we can see that the conventional approaches achieve a better cache hit rate than the proposed P2D3PG when $H \le 3$, which also confirms the effect of an excessively short window length. To recap, an unreasonable window length can degrade the feature extraction performance of the predictive models and further reduce the prediction accuracy of the popularities, which in turn leads to a decrease of the cache hit rate.

[Fig. 8. Impact of the number of total contents N on the cache-hit-rate performance with I = 10 and H = 10. (a) MEC server side (M0 = 9). (b) Local user side of UserID 1 (M1 = 9).]

Likewise, we provide the cache-hit-rate performance comparison with respect to the number of total contents $N$ from 12 to 50. Fig. 8 indicates that the proposed P2D3PG outperforms all the baseline methods at both the MEC server and the local user sides, while the individual cache hit rates drop with an increasing $N$. We also note from Fig. 8 that the advantage of the proposed P2D3PG method becomes less pronounced as the number of total contents decreases, which reconfirms our speculation from Fig. 5.
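The statistics reported around Fig. 6 combine the sliding average of (5) with the SD over a test window; a tiny NumPy illustration on an invented toy trace:

```python
import numpy as np

# Toy realtime hit trace H_i(t); eq. (4) makes each slot binary (0 or 1).
hits = np.array([1, 1, 0, 1, 1, 1, 0, 1, 1, 1], dtype=float)

def sliding_avg(trace: np.ndarray, Th: int) -> float:
    """H_i^savg(t) of eq. (5): mean of the last Th realtime hit indicators."""
    return float(trace[-Th:].mean())

print(sliding_avg(hits, Th=5))       # sliding average over the last 5 slots
print(hits.mean(), hits.std())       # window mean and SD, as compared in Fig. 6
```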
Fig. 9(a) illustrates the performance evaluation of the proposed P2D3PG method at the end side under $I = 6$. In particular, we picked the UEs with user identity documents (UserIDs) 1 through 6 from the previous subsection. It can be found that the cache hit rate of each UE increases with the cache capacity. In addition, we observe that there are differences in the cache hit rates of different UEs, which result from the independent but not identically distributed behaviors of the UEs. For UEs whose popularity variations are more complicated, the challenges of the popularity predictions by the URFL method are heavier; thus, the prediction accuracies of the UEs differ, which results in the different cache hit rates among the UEs.

[Fig. 9. Performance evaluation of the proposed P2D3PG method at the local user side. (a) Average cache hit rate of all the UEs. (b) Realtime cache hit rate of UserID 6 over 300 time slots.]

We further test the performance of the proposed P2D3PG algorithm with respect to the realtime cache hit rate at the end side, which is presented in Fig. 9(b). In Fig. 9(b), the UE with a UserID of 6 is taken as an example, the cache capacity $M_6$ is set at 5 units, which is 20.8% of the total contents' size, and the time window length of the observation is set to 300 time slots. According to equation (4), $H_6(t) = 0$ when the content requested by UE 6 at time $t$ is absent from its current cache; otherwise, $H_6(t) = 1$ when the content requested by UE 6 at time $t$ has been cached in its local UE in advance. On this basis, we find from Fig. 9(b) that the realtime cache hit rate stays at 100% for most time slots, which implies that the requested content can at most times be directly satisfied by the local cache. Fig. 9(b) again confirms the superiority of the proposed P2D3PG algorithm for dynamic EC while preserving the UEs' privacy.

VI. CONCLUSION

In this paper, the problem of distributed EC hit rate maximization in an MEC-enabled wireless communication system is formulated under time-varying and unobservable content popularities. To address the challenges of the distributed problem under the constraints of privacy preservation, a P2D3PG algorithm is proposed to maximize the EC hit rates in the MEC system. The superior performance of the proposed method compared to the baseline methods is confirmed by numerical simulations. Our future work will concentrate on more complicated scenarios, such as heterogeneous multiple MEC nodes, as well as further addressing the challenges brought by non-IID user behaviors.

REFERENCES

[1] C. Zheng, S. Liu, Y. Huang, and T. Q. S. Quek, "Privacy-preserving federated reinforcement learning for popularity-assisted edge caching," in Proc. 40th IEEE Global Commun. Conf. (GLOBECOM'21): Mach. Learn. Commun. Symp., Madrid, Spain, Dec. 2021, pp. 1-6.
[2] F. Hu, Y. Deng, W. Saad, et al., "Cellular-connected wireless virtual reality: Requirements, challenges, and solutions," IEEE Commun. Mag., vol. 58, no. 5, pp. 105-111, May 2020.
[3] W. Duan, J. Gu, M. Wen, et al., "Emerging technologies for 5G-IoV networks: Applications, trends and opportunities," IEEE Network, vol. 34, no. 5, pp. 283-289, Oct. 2020.
[4] A. A. Abdellatif, A. Mohamed, C. F. Chiasserini, et al., "Edge computing for smart health: Context-aware approaches, opportunities, and challenges," IEEE Network, vol. 33, no. 3, pp. 196-203, Jun. 2019.
[5] G. Faraci, C. Grasso, and G. Schembra, "Design of a 5G network slice extension with MEC UAVs managed with reinforcement learning," IEEE J. Sel. Areas Commun., vol. 38, no. 10, pp. 2356-2371, Oct. 2020.
[6] J. Du, F. R. Yu, G. Lu, et al., "MEC-assisted immersive VR video streaming over terahertz wireless networks: A deep reinforcement learning approach," IEEE Internet Things J., vol. 7, no. 10, pp. 9517-9529, Oct. 2020.
[7] X. Xiong, K. Zheng, L. Lei, and L. Hou, "Resource allocation based on deep reinforcement learning in IoT edge computing," IEEE J. Sel. Areas Commun., vol. 38, no. 6, pp. 1133-1146, Jun. 2020.
[8] M. Du, K. Wang, Y. Chen, et al., "Big data privacy preserving in multi-access edge computing for heterogeneous Internet of Things," IEEE Commun. Mag., vol. 56, no. 8, pp. 62-67, Aug. 2018.
[9] Z. Zhao, R. Zhao, J. Xia, et al., "A novel framework of three-hierarchical offloading optimization for MEC in industrial IoT networks," IEEE Trans. Ind. Inf., vol. 16, no. 8, pp. 5424-5434, Aug. 2020.
[10] X. Wang, C. Wang, X. Li, et al., "Federated deep reinforcement learning for Internet of Things with decentralized cooperative edge caching," IEEE Internet Things J., vol. 7, no. 10, pp. 9441-9455, Oct. 2020.
[11] Y. Cui, D. Jiang, and Y. Wu, "Analysis and optimization of caching and multicasting in large-scale cache-enabled wireless networks," IEEE Trans. Wireless Commun., vol. 15, no. 7, pp. 5101-5112, Jul. 2016.
[12] S. Nikolaou, R. V. Renesse, and N. Schiper, "Proactive cache placement on cooperative client caches for online social networks," IEEE Trans. Parallel Distrib. Syst., vol. 27, no. 4, pp. 1174-1186, Apr. 2016.
[13] Q. Li, Y. Zhang, Y. Li, et al., "Capacity-aware edge caching in fog computing networks," IEEE Trans. Veh. Technol., vol. 69, no. 8, pp. 9244-9248, Aug. 2020.
[14] Y. Jiang, M. Ma, M. Bennis, et al., "User preference learning-based edge caching for fog radio access network," IEEE Trans. Commun., vol. 67, no. 2, pp. 1268-1283, Feb. 2019.
[15] S. M. Azimi, O. Simeone, A. Sengupta, and R. Tandon, "Online edge caching and wireless delivery in fog-aided networks with dynamic content popularity," IEEE J. Sel. Areas Commun., vol. 36, no. 6, pp. 1189-1202, Jun. 2018.
[16] J. Liu, D. Li, and Y. Xu, "Collaborative online edge caching with Bayesian clustering in wireless networks," IEEE Internet Things J., vol. 7, no. 2, pp. 1548-1560, Feb. 2020.
[17] Y. Dai, D. Xu, K. Zhang, et al., "Deep reinforcement learning and permissioned blockchain for content caching in vehicular edge computing and networks," IEEE Trans. Veh. Technol., vol. 69, no. 4, pp. 4312-4324, Apr. 2020.
[18] Q. Xu, Z. Su, Q. Zheng, et al., "Game theoretical secure caching scheme in multihoming edge computing-enabled heterogeneous networks," IEEE Internet Things J., vol. 6, no. 3, pp. 4536-4546, Jun. 2019.
[19] L. Xiao, X. Wan, C. Dai, X. Du, X. Chen, and M. Guizani, "Security in mobile edge caching with reinforcement learning," IEEE Wireless Commun., vol. 25, no. 3, pp. 116-122, Jun. 2018.
[20] Z. Yu, J. Hu, G. Min, et al., "Mobility-aware proactive edge caching for connected vehicles using federated learning," IEEE Trans. Intell. Transp. Syst., to be published, doi: 10.1109/TITS.2020.3017474.
[21] A. Sadeghi, F. Sheikholeslami, and G. B. Giannakis, "Optimal and scalable caching for 5G using reinforcement learning of space-time popularities," IEEE J. Sel. Top. Signal Process., vol. 12, no. 1, pp. 180-190, Feb. 2018.
[22] C. Zheng, S. Liu, Y. Huang, and L. Yang, "MEC-enabled wireless VR video service: A learning-based mixed strategy for energy-latency tradeoff," in Proc. 18th IEEE Wireless Commun. Netw. Conf. (WCNC'20), Seoul, South Korea, Apr. 2020, pp. 1-6.
[23] C. Zheng, S. Liu, Y. Huang, and L. Yang, "Hybrid policy learning for energy-latency tradeoff in MEC-assisted VR video service," IEEE Trans. Veh. Technol., vol. 70, no. 9, pp. 9006-9021, Sept. 2021.
[24] R. Sutton and A. Barto, Reinforcement Learning: An Introduction. Cambridge, MA, USA: MIT Press, 1998.
[25] D. Kingma and J. Ba, "Adam: A method for stochastic optimization," in Proc. 3rd Int. Conf. Learn. Represent. (ICLR'15), San Diego, CA, USA, May 2015.
[26] A. Leff, J. L. Wolf, and P. S. Yu, "Efficient LRU-based buffering in a LAN remote caching architecture," IEEE Trans. Parallel Distrib. Syst., vol. 7, no. 2, pp. 191-206, Feb. 1996.
[27] G. Ma, Z. Wang, M. Zhang, et al., "Understanding performance of edge content caching for mobile video streaming," IEEE J. Sel. Areas Commun., vol. 35, no. 5, pp. 1076-1089, May 2017.
[28] S. Gu, T. Lillicrap, I. Sutskever, and S. Levine, "Continuous deep Q-learning with model-based acceleration," in Proc. 33rd Int. Conf. Mach. Learn. (ICML'16), New York, NY, USA, Jun. 2016, pp. 2829-2838.
[29] M. Volodymyr, K. Koray, S. David, et al., "Human-level control through deep reinforcement learning," Nature, vol. 518, no. 7540, pp. 529-533, Feb. 2015.
[30] P. Kairouz, H. B. McMahan, B. Avent, et al., "Advances and open problems in federated learning," arXiv preprint [arXiv:1912.04977](http://arxiv.org/abs/1912.04977), 2019.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2110.10349, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/2110.10349" }
2,021
[ "JournalArticle" ]
true
2021-10-20T00:00:00
[ { "paperId": "0bbd2411a49c7f6e61b81cf14c517eaccab09479", "title": "Privacy-Preserving Federated Reinforcement Learning for Popularity-Assisted Edge Caching" }, { "paperId": "28dd118d51d4901c9b3c6b7dd04a152b82cc8703", "title": "Mobility-Aware Proactive Edge Caching for Connected Vehicles Using Federated Learning" }, { "paperId": "684a0ef05d4c037c4c61ee20295b69d806bab985", "title": "Hybrid Policy Learning for Energy-Latency Tradeoff in MEC-Assisted VR Video Service" }, { "paperId": "8cd4ae1b00404bd511a7f7b7f91d603545b5128c", "title": "Design of a 5G Network Slice Extension With MEC UAVs Managed With Reinforcement Learning" }, { "paperId": "4a9ad45ffcc886bb6ef8a44ced50c3dbceaa1419", "title": "A Novel Framework of Three-Hierarchical Offloading Optimization for MEC in Industrial IoT Networks" }, { "paperId": "3548398648ffe4e535665035a982a314b1f512c7", "title": "Emerging Technologies for 5G-IoV Networks: Applications, Trends and Opportunities" }, { "paperId": "e852e178d58dc8aedbd836003202d2324283a914", "title": "MEC-Assisted Immersive VR Video Streaming Over Terahertz Wireless Networks: A Deep Reinforcement Learning Approach" }, { "paperId": "ba762b767ee940c6793d72f5676a7b8d6023b0c5", "title": "MEC-Enabled Wireless VR Video Service: A Learning-Based Mixed Strategy for Energy-Latency Tradeoff" }, { "paperId": "57c6ba16f4d6142aee91aabd34764c375b089247", "title": "Federated Deep Reinforcement Learning for Internet of Things With Decentralized Cooperative Edge Caching" }, { "paperId": "7b6ceb18f7146baf796779959fd11e42d2d58b5b", "title": "Resource Allocation Based on Deep Reinforcement Learning in IoT Edge Computing" }, { "paperId": "4c77848e60d8dfcbf07213b9287b48487c065473", "title": "Deep Reinforcement Learning and Permissioned Blockchain for Content Caching in Vehicular Edge Computing and Networks" }, { "paperId": "21fdfd70d646fa6185e47b43a26e4e83a6648990", "title": "Capacity-Aware Edge Caching in Fog Computing Networks" }, { "paperId": "a900eb110c83ed0189c8e15d013f420840c50584", "title": "Collaborative Online Edge Caching With Bayesian Clustering in Wireless Networks" }, { "paperId": "6a6f3e8ec0495fa7fb32f88c1912d76cede7acfa", "title": "Cellular-Connected Wireless Virtual Reality: Requirements, Challenges, and Solutions" }, { "paperId": "07912741c6c96e6ad5b2c2d6c6c3b2de5c8a271b", "title": "Advances and Open Problems in Federated Learning" }, { "paperId": "fcbf1b743ed0592d18064d9bf8b376dc01b27e6a", "title": "Game Theoretical Secure Caching Scheme in Multihoming Edge Computing-Enabled Heterogeneous Networks" }, { "paperId": "905b51a3281479f8dd2fc33b3c6ce30fc5f95f57", "title": "Edge Computing for Smart Health: Context-Aware Approaches, Opportunities, and Challenges" }, { "paperId": "fb11bc82b66e6980d8611b7f73cc2700e3c117d5", "title": "Big Data Privacy Preserving in Multi-Access Edge Computing for Heterogeneous Internet of Things" }, { "paperId": "8184b5d01437034208f4bbaafe1291de7c691fb7", "title": "Security in Mobile Edge Caching with Reinforcement Learning" }, { "paperId": "dd4195906336caff8817521c7fffeac715ebccfa", "title": "User Preference Learning-Based Edge Caching for Fog Radio Access Network" }, { "paperId": "150c9d67647f5c31244691cff36dd52ffbe225b7", "title": "Online Edge Caching and Wireless Delivery in Fog-Aided Networks With Dynamic Content Popularity" }, { "paperId": "5361d640ac5d755fd9b6646a462b20e0b8c6a0ea", "title": "Optimal and Scalable Caching for 5G Using Reinforcement Learning of Space-Time Popularities" }, { "paperId": "0245647a170ea1ebf55b9cb4435ddaadba27caeb", "title": 
"Understanding Performance of Edge Content Caching for Mobile Video Streaming" }, { "paperId": "ce6274e1fe0e90a4cfaa59b92b6839383afee2b1", "title": "Proactive Cache Placement on Cooperative Client Caches for Online Social Networks" }, { "paperId": "d358d41c69450b171327ebd99462b6afef687269", "title": "Continuous Deep Q-Learning with Model-based Acceleration" }, { "paperId": "21a77e1e7df672660de21efbd30a31f8aa478a63", "title": "Analysis and Optimization of Caching and Multicasting in Large-Scale Cache-Enabled Wireless Networks" }, { "paperId": "340f48901f72278f6bf78a04ee5b01df208cc508", "title": "Human-level control through deep reinforcement learning" }, { "paperId": "a6cb366736791bcccc5c8639de5a8f9636bf87e8", "title": "Adam: A Method for Stochastic Optimization" }, { "paperId": "305c2a3d248aa6eb7ea1321bb4ef2e2e566960b7", "title": "Efficient LRU-Based Buffering in a LAN Remote Caching Architecture" }, { "paperId": "97efafdb4a3942ab3efba53ded7413199f79c054", "title": "Reinforcement Learning: An Introduction" } ]
17,742
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/019b5b92589614cc93c2b20751ad71f51feb8211
[ "Computer Science" ]
0.910374
The Cloud Needs Cross-Layer Data Handling Annotations
019b5b92589614cc93c2b20751ad71f51feb8211
2013 IEEE Security and Privacy Workshops
[ { "authorId": "2610427", "name": "Martin Henze" }, { "authorId": "3312737", "name": "R. Hummen" }, { "authorId": "1719689", "name": "Klaus Wehrle" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
null
# The Cloud Needs Cross-Layer Data Handling Annotations
## (Position Paper)

Martin Henze, René Hummen, Klaus Wehrle
_Communication and Distributed Systems_
_RWTH Aachen University, Germany_
_Email: {henze,hummen,wehrle}@comsys.rwth-aachen.de_

**_Abstract—Nowadays, an ever-increasing number of service providers takes advantage of the cloud computing paradigm in order to efficiently offer services to private users, businesses, and governments. However, while cloud computing allows to transparently scale back-end functionality such as computing and storage, the implied distributed sharing of resources has severe implications when sensitive or otherwise privacy-relevant data is concerned. These privacy implications primarily stem from the in-transparency of the involved back-end providers of a cloud-based service and their dedicated data handling processes. Likewise, back-end providers cannot determine the sensitivity of data that is stored or processed in the cloud. Hence, they have no means to obey the underlying privacy regulations and contracts automatically. As the cloud computing paradigm further evolves towards federated cloud environments, the envisioned integration of different cloud platforms adds yet another layer to the existing in-transparencies. In this paper, we discuss initial ideas on how to overcome these existing and dawning data handling in-transparencies and the accompanying privacy concerns. To this end, we propose to annotate data with sensitivity information as it leaves the control boundaries of the data owner and travels through to the cloud environment. This allows to signal privacy properties across the layers of the cloud computing architecture and enables the different stakeholders to react accordingly._**

**_Keywords-Cloud Computing, Data Handling, Privacy_**

I. INTRODUCTION

Cloud computing offers abstracted access to a huge pool of resources such as processing, storage, and networking. Instead of having to operate their own infrastructure, service providers simply use only the resources they need at a certain point in time, which requires elastic scaling of resources. To achieve this elasticity, the resource providers dynamically share resources between customers, which is then called multi-tenancy. Other aspects include a multitude of potentially involved stakeholders (e.g., service and infrastructure providers), the flexible combination of these stakeholders (known as inter-cloud), and location independence. Additionally, the availability of information is increased, e.g., using replication. This wide range of benefits has led to broad adoption of the cloud computing paradigm.

In order to identify challenges for data handling in the cloud, we consider one major use case: the handling and storage of all kinds of data. Especially when the cloud is integrated with highly sensitive data sources such as health care data [1] or data collected from sensor networks [2], an alarming number of privacy issues arises [3], [4]. The major concern for users and enterprises is the perception of loss of control over data once it is transferred to the cloud [3]-[7], which has several dimensions. First of all, there is no control over who may access the data, nor any transparency about who actually did. Secondly, data might be passed on to third parties or be used for other unintended purposes.
Especially for enterprises, it is nearly impossible to guarantee adherence to contracts or laws regarding customer data [5]. Finally, there is no control over, or at least assurance, that data is eventually deleted once it is no longer needed. These concerns are a key barrier to the wide adoption of cloud-based services.

One way to address these privacy issues is security, where one possible measure is encryption. However, simply restricting access to data by means of encryption is not enough to preserve privacy in a cloud environment where data is shared between entities [8]. Encryption, e.g., cannot guarantee that data is deleted after a certain period of time or only stored in certain countries. We argue that data access control (e.g., using encryption) is only one building block for data usage management. It is also necessary to establish trust that data will be handled appropriately. This requires that all entities involved in the handling of data have an awareness of how this data has to be treated. To achieve this, we propose to enrich data in a cloud environment with data handling annotations. Using semantic information for cloud resources has already been proposed to realize federated cloud environments [9]. In contrast, we suggest to extend these ideas to the data being handled in order to address privacy concerns. Our contribution is as follows: First, we present challenges when handling (potentially) sensitive data in a cloud environment. Based on these challenges, we propose an annotation-based approach to data handling in a multi-layered cloud environment. These annotations allow a cloud or service provider to interpret the privacy requirements of the data and handle it accordingly. Finally, we identify and discuss technologies which can be used to realize these annotations in a cloud setting.

II. DATA HANDLING CHALLENGES IN THE CLOUD

Although users and companies could profit a lot from outsourcing data to the cloud, they often refrain from using the cloud due to privacy concerns [3]-[5]. One major concern is the loss of control over who may access the data once it has been transferred to the cloud. In order to understand these challenges, we first give an introduction to cloud computing. Afterwards, we look at privacy requirements that lead to challenges when handling data in the cloud.

_A. Cloud Computing_

The cloud does not consist of one central entity operated by one organization, but involves a number of different stakeholders, distributed all over the world. This holds true especially in a so-called inter-cloud setting, where resources of different clouds are combined [10]. First of all, Infrastructure as a Service (IaaS) providers offer storage and processing resources, which can be rented on demand. On top of these operates the Platform as a Service (PaaS), which abstracts from physical or virtualized resources. At the very top of the cloud stack operates the Software as a Service (SaaS), which targets the end user. The typical end user only interacts with the provider of the SaaS offer she wants to use. This means that she also only has a contractual agreement with this specific provider, and not with the underlying PaaS and IaaS provider(s). However, these have a tremendous impact on fulfilling privacy requirements. In order to answer the question of how the user can instruct these providers about how her data should be handled, we first have a look at privacy requirements in a cloud environment.
_B. Examples for Privacy Requirements_

The cloud paradigm poses a number of challenges to the privacy-aware handling of data. First, the requirements of traditional outsourcing apply to cloud computing as well [3]. Additionally, new requirements arise which are inherent to the cloud paradigm, mainly due to its distributed nature and the desired redundancy. In the remainder of this section, we will discuss examples of these requirements in more detail. This is not to be thought of as a complete list of requirements, but rather as motivating examples of privacy challenges. Additionally, we give high-level ideas about which information needs to be provided in order to be able to address these requirements.

_1) Guaranteed Data Deletion:_ Guaranteed deletion of data is, from a user's perspective, a key feature of trusted cloud services [3]. From a provider's perspective, the distributed nature and desired redundancy make this a tricky task, especially if reliable deletion methods such as secure data erasure or physical destruction have to be used. If the storage provider knew in advance at which point in time data should be deleted (e.g., the user requiring deletion after 30 days), it could group data with similar deletion dates on one physical device (replication implies doing this for more than one device). At the right point in time, the whole device would then reliably be deleted using secure data erasure or physical destruction.

_2) Data Protection Law Enforcement:_ Certain jurisdictions impose strict data protection regulations when handling personal data. The EU, e.g., demands that personal data of customers must not be transferred to oversea jurisdictions with weaker privacy laws. One prominent exception is known as the safe harbor principles, which allow the transfer of personal data to jurisdictions with weaker privacy laws if the recipient has declared to voluntarily follow EU regulations. Nowadays, strictly enforcing data protection laws when using cloud services is nearly impossible: there is no practical way to figure out the actual location at which data is stored, and there is no way to mark data as relevant to data protection law. If the storage provider (at the PaaS level) knew that the data it is currently handling falls under such restrictive jurisdictions, it could evaluate which parts of the infrastructure are compliant with these regulations. The data would then only be stored in these parts of the IaaS.

_3) Legislative Boundary Awareness:_ Moving data across legislative boundaries (probably without even noticing) raises severe concerns [3], [4], [11]. This is not limited to data protection, but results from a variety of other legal requirements. One prominent example is the storage of all data relevant for taxes in Germany. This data (and all of its copies) has to be stored in Germany. Only under certain conditions might it also be stored in a different country within the EU or EEA, but never, e.g., in the US. In order to correctly handle this data, a cloud service would on the one hand need information about where this data is allowed to be stored. On the other hand, it needs a way to pass this information to the contracted storage provider(s).

_4) Right to be Forgotten:_ The right to be forgotten is a proposal for a new data protection regulation in the EU [12]. In principle, the right to be forgotten states that personal information has to be deleted automatically after a certain period of time.
This addresses the problem that nowadays information which has been released to the internet will never leave it again. Technically implementing the right to be forgotten is considered a challenging task, especially because it stands in stark contrast to US regulations [12]. If the storage provider (IaaS or PaaS) knew whether a data item falls under the EU's right to be forgotten, it could periodically ask the SaaS provider whether this specific data is still needed and thus trigger the automatic deletion.

III. CROSS-LAYER DATA HANDLING ANNOTATIONS

To fulfill the aforementioned requirements when handling data in the cloud, we propose the use of cross-layer data handling annotations. Annotations are a well-established method in the field of data usage management [13], [14]. Each entity on the data handling path can add annotations to the data. The other entities then have to treat these as obligations. This is similar to DRM, where access rights are bound to data. More formally, we consider entities in a layered system, where data is exchanged between entities on adjacent layers as well as between entities on the same layer.
_B. Expressing Annotations_

In order to express data handling annotations in a machine-readable way, we propose to use privacy policy languages [15]. This is a widely studied field which deals with the formal representation of privacy policies. The formal representation allows reasoning about the privacy policies. There are three different types of privacy policy languages: (i) languages that allow users to specify their privacy requirements, (ii) languages that allow service providers to specify their privacy policies, i.e., how they will handle and use data, and (iii) languages that combine the two previous approaches and allow matching or comparing a user's requirements against a service provider's policies.

Figure 1. A user adds an annotation to her data ("delete after 30 days") before it is passed to the cloud. Based on this annotation, the SaaS chooses a PaaS, which again chooses an IaaS. The IaaS will then store the data on a physical device together with other data that should be deleted in 30 days.

We argue that in a cloud environment, the third approach is the most promising one, as it allows formalizing the requirements of all involved parties. This would allow the sender to express the data handling obligations and the receiver to formalize the privacy measures it can offer. Thus, when receiving annotated data, compliance with the stated obligations can be checked automatically. Note that our approach is not bound to a specific privacy policy language. A number of promising privacy policy languages have been proposed [16], [17]. However, most of these languages are rather technical and require a certain level of abstraction for end users. This could be realized by letting an end user choose between a set of predefined privacy policies. Additionally, these policies could easily be parameterized, e.g., by choosing the time range after which data should be deleted. The design of some of the languages also allows delegating (parts of) the policy decision to a trusted third party [16]. Thus, policies for enforcing, e.g., EU data protection laws could be retrieved from a central, trusted location. The formalism introduced by privacy policy languages offers a lot of flexibility [16], but also requires computational effort. However, privacy policies are expected to be rather small and not to lead to heavy computations [15]. Furthermore, the same annotation could be used for more than one data item: instead of sending the full annotation, an identifier for this annotation (e.g., a hash value) would be sent. Thus, we argue that privacy policy languages are well suited for specifying data handling annotations in a cloud environment.

_C. Committing to Annotations_

In order to establish a chain of trust, we require the receiver of a data item to state its compliance with the annotated obligations. To prevent data from being available without negotiated obligations, the actual data will only be transferred after the receiver has acknowledged its consent. If data were sent without prior negotiation, an obligation violation could already happen before the obligation is checked. Consider, e.g., the requirement example regarding legislative boundaries (see Section II-B3): checking for fulfillment of this requirement after the data has already left the country is too late. In order to guarantee the receiver's acknowledgment, we propose a process similar to a three-way handshake. As this process requires identities, we assume a public key infrastructure (PKI) to be in place, where (at least) each provider in the cloud stack can be identified by a public/private key pair. The sender initiates a transmission with an annotation request: it encodes the machine-readable annotation together with a request identifier and sends it to the receiver. In order to establish a linkage between the data and its annotation, we propose to use a hash value of the data as the request identifier. Upon receiving an annotation request, the receiver will parse the machine-readable annotation and decide whether it can and wants to fulfill the specified obligations. If it cannot or does not want to fulfill the specified obligations, it will send back a negative response. Otherwise, it will reply with an annotation response: in order to confirm its consent, the receiver signs the received annotation request with its private key. The annotation response then consists of the annotation request with the added signature. Once the sender receives the annotation response, it can verify its authenticity using the digital public key certificate of the receiver. If the authenticity of the receiver's acknowledgment to fulfill the annotated obligations can be verified, it is safe to start the transmission of the data. The sender keeps a copy of the annotation response; in case of misbehavior, it can be used to prove the receiver's consent to the obligations.
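Below is a minimal sketch of the proposed commit step, assuming Ed25519 signatures from the `cryptography` Python package and SHA-256 request identifiers; the message layout and field separators are our own invention, not prescribed by the paper.

```python
# A minimal sketch of the commit protocol in Section III-C: the request
# identifier is a hash of the data, the receiver signs the annotation
# request to acknowledge the obligations, and the sender verifies the
# signature before transmitting the actual data.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

receiver_key = Ed25519PrivateKey.generate()        # from the assumed PKI
receiver_pub = receiver_key.public_key()

data = b"customer record"
annotation = b"delete-after=30d"
request_id = hashlib.sha256(data).digest()         # links annotation to data
annotation_request = request_id + b"|" + annotation

# Receiver side: parse the request, decide, and sign to confirm consent.
signature = receiver_key.sign(annotation_request)  # annotation response

# Sender side: verify the acknowledgment before sending the actual data.
try:
    receiver_pub.verify(signature, annotation_request)
    transmit = True    # safe to start the transmission; keep the response
except InvalidSignature:
    transmit = False
```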
_D. Binding Data and Annotations_

In the previous section, we already discussed how annotations can be linked to a data item: given an annotation, the corresponding data item can easily be identified. However, without a way to link data to an annotation, the annotation could be dropped unnoticed while the data travels through the cloud. Thus, measures to enforce the annotations or detect misbehavior (as discussed below) could not compare the observed conditions to the ones requested.

One approach to binding data to associated policies is the concept of sticky policies [18], which has received considerable interest in the past years. The underlying idea is to bind a policy cryptographically to the associated data and thus make the policy stick to the data. Note that the concept of sticky policies is independent of the representation of policies [19]; thus, any privacy policy language can be used. Using sticky policies requires the introduction of one or more trusted authorities. Before the sender sends the data to the receiver, it encrypts the data and a hash value of the associated data handling policy. The trusted authority's task is to release decryption keys iff it can verify that the receiver states compliance with the policy. Adapting the concept of sticky policies to the cloud has already been proposed [19]; this approach, however, focuses on which data cloud services may use and how. We see sticky policies as a promising approach to ensure privacy in a cloud environment. It is especially useful when traversing untrusted entities, as the encryption ensures confidentiality.

Another approach for linking data and policies leverages the integrity protection mechanism which is often employed for data stored in the cloud [2], [4]. The most common method for ensuring integrity protection of data is the use of digital signatures: a hash value of the data is computed and then signed using public-key cryptography. Anyone in possession of the signee's public key can then verify the signature and thus the integrity of the data. We propose to extend the integrity protection to the annotations associated with the data. This means that the hash value would be computed over the data and the annotation before it is signed. Thus, unauthorized alteration, deletion, or addition of annotations would break the integrity of the data. Verifying the integrity protection of data in the cloud (including the authenticity of the digital signature) can be efficiently automated using a trusted third party [20].
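A small sketch of the extended integrity protection just described, again using Ed25519: because the hash covers data and annotation together, silently dropping or altering the annotation breaks verification. The helper `bound_digest` and the separator byte are illustrative assumptions, not the paper's construction.

```python
# Sketch of the extended integrity protection proposed in Section III-D:
# the signature covers data *and* annotation, so a dropped or modified
# annotation is detected when the signature is checked.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def bound_digest(data: bytes, annotation: bytes) -> bytes:
    # Hash over data and annotation together before signing.
    return hashlib.sha256(data + b"\x00" + annotation).digest()

signer = Ed25519PrivateKey.generate()
data, annotation = b"tax records", b"store-only-in=DE"
signature = signer.sign(bound_digest(data, annotation))
pub = signer.public_key()

def verify(data: bytes, annotation: bytes, signature: bytes) -> bool:
    try:
        pub.verify(signature, bound_digest(data, annotation))
        return True
    except InvalidSignature:
        return False

assert verify(data, annotation, signature)
assert not verify(data, b"", signature)   # dropped annotation is detected
```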
_E. Policy Enforcement and Misbehavior Detection_

In the previous paragraphs, we discussed how to annotate data, communicate commitments to obligations, and link data and annotations to each other. Thus, we have created measures for traceability. Still, an open question is how the obligations stated by the annotations can be effectively enforced and misbehavior detected. We now present three complementary approaches that allow enforcing adherence to obligations and detecting misbehavior.

_1) Auditing and Certification:_ One established measure to enforce security and privacy in IT systems is auditing and certification. Nowadays, they are highly recommended as a building block to ensure secure data storage, data protection, and policy enforcement in cloud environments [21]. We propose to extend auditing and certification of cloud providers to the verification of the machine-readable privacy policy statements. This would, e.g., include verification of statements on infrastructure location, adherence to data protection laws, or the ability to securely delete files.

_2) Transparency:_ Transparency has been identified as a way to establish trust in a cloud provider [11], [22]. On the one hand, this refers to disclosing the security and privacy mechanisms which are used to protect customers' data. More importantly, it refers to revealing how the actual data of one customer is treated. This could, e.g., mean that a customer could at any point in time look up the exact physical location at which her data is stored. Another promising approach to establish transparency are log files [22], which could also state when and how data was securely deleted. Using transparency, users could verify that their annotated obligations are indeed fulfilled; for cloud providers, offering transparency could be an additional selling point. Transparency can partly be achieved using auditing and certification (see above). Another possibility is the use of trusted computing, which we discuss in the following.

_3) Trusted Computing:_ Trusted computing (TC) is a technology that ensures (to some degree) that a hardware or software component behaves as expected [23]. Functions enabled by TC include secure input and output, memory curtaining, sealed storage, and remote attestation. One prominent application of TC is cloud computing [24], where trusted computing is, e.g., used to remotely attest the integrity and confidentiality of virtual machines. We propose to use TC to make the policy engine at the receiver a trusted component. Thus, the sender could be sure that the matching of its annotations to the receiver's privacy policies has been performed correctly.

_F. Recommendation Systems_

Once all the aforementioned mechanisms are in place, one central question still remains unanswered: how to locate the SaaS, PaaS, and IaaS provider(s) that are able to fulfill the data handling obligations? At first glance, one might assume that this is a static decision that only has to be made once. However, we believe that this decision process is highly dynamic. The cloud market is always in motion, market players come and go, and business models change. Additionally, privacy policies are always in a state of flux: end users might change their perception of privacy, e.g., due to news coverage of data leakage, and cloud providers might shift their privacy policies based on legislative changes, lawsuits, or sales reasons. Thus, an approach that is able to identify a fitting provider on demand is essential. There are already approaches to choosing on demand between cloud providers based on the required (technical) resources [10], [25]. These recommendation systems consider Quality of Service (QoS), service-level agreements (SLAs), and pricing as metrics for their decision. We propose to extend these systems to also consider privacy requirements as they are stated in the annotations (see the sketch below).
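The sketch below illustrates one plausible shape of such an extended broker, under our own simplified provider model: candidates are first filtered by the annotated privacy obligations (region, retention) and only then ranked by conventional metrics such as price and QoS. All names and the scoring rule are hypothetical.

```python
# Illustrative sketch (our own construction) of extending a cloud broker's
# recommendation step, as suggested in Section III-F.
from dataclasses import dataclass

@dataclass
class ProviderOffer:
    name: str
    regions: frozenset          # where data may physically reside
    max_retention_days: int
    price_per_gb: float
    qos_score: float            # higher is better

def fulfills(offer, required_region, delete_after_days):
    return (required_region in offer.regions
            and delete_after_days <= offer.max_retention_days)

def recommend(offers, required_region, delete_after_days):
    eligible = [o for o in offers if fulfills(o, required_region, delete_after_days)]
    # Rank compliant providers by a simple price/QoS trade-off.
    return sorted(eligible, key=lambda o: (o.price_per_gb, -o.qos_score))

offers = [
    ProviderOffer("A", frozenset({"DE", "FR"}), 365, 0.02, 0.9),
    ProviderOffer("B", frozenset({"US"}), 365, 0.01, 0.95),
]
best = recommend(offers, required_region="DE", delete_after_days=30)
assert best[0].name == "A"   # provider B is excluded on privacy grounds
```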
IV. OUTLOOK

We identified challenges when handling sensitive data in a cloud environment. Based on these challenges, we proposed the use of cross-layer data handling annotations. With these annotations, we are able to communicate obligations regarding the handling of data across the different layers of the cloud stack. We then identified the processes and technologies necessary for such a system and studied them in more detail. All in all, applying data handling annotations to the cloud environment seems to be a promising approach. In the future, we plan to further validate the feasibility of our proposed solution. For this purpose, we want to build a prototype of a file storage service (similar to, e.g., Dropbox) able to understand and follow data handling annotations. Additionally, we plan to extend AppScale and OpenStack to support our proposed privacy policy framework.

ACKNOWLEDGMENT

This work has in parts been funded by the German Federal Ministry of Economics and Technology under project funding reference number 01MD11049. The responsibility for the content of this publication lies with the authors.

REFERENCES

[1] C. Rolim, F. Koch, C. Westphall, J. Werner, A. Fracalossi, and G. Salvador, "A Cloud Computing Solution for Patient's Data Collection in Health Care Institutions," in Proc. ETELEMED, 2010.
[2] R. Hummen, M. Henze, D. Catrein, and K. Wehrle, "A Cloud Design for User-controlled Storage and Processing of Sensor Data," in Proc. IEEE CloudCom, 2012.
[3] S. Pearson and A. Benameur, "Privacy, Security and Trust Issues Arising from Cloud Computing," in Proc. IEEE CloudCom, 2010.
[4] M. Zhou, R. Zhang, W. Xie, W. Qian, and A. Zhou, "Security and Privacy in Cloud Computing: A Survey," in Proc. SKG, 2010.
[5] H. Takabi, J. Joshi, and G. Ahn, "Security and Privacy Challenges in Cloud Computing Environments," IEEE Security & Privacy, vol. 8, no. 6, 2010.
[6] I. Ion, N. Sachdeva, P. Kumaraguru, and S. Capkun, "Home is Safer than the Cloud! Privacy Concerns for Consumer Cloud Storage," in Proc. SOUPS, 2011.
[7] D. Song, E. Shi, I. Fischer, and U. Shankar, "Cloud Data Protection for the Masses," Computer, vol. 45, no. 1, 2012.
[8] M. van Dijk and A. Juels, "On the Impossibility of Cryptography Alone for Privacy-Preserving Cloud Computing," in Proc. USENIX HotSec, 2010.
[9] G. Manno, W. Smari, and L. Spalazzi, "FCFA: A Semantic-based Federated Cloud Framework Architecture," in Proc. HPCS, 2012.
[10] N. Grozev and R. Buyya, "Inter-Cloud Architectures and Application Brokering: Taxonomy and Survey," Software: Practice and Experience, 2012.
[11] J. Heiser and M. Nicolett, "Assessing the Security Risks of Cloud Computing," Gartner, Tech. Rep. G00157782, 2008.
[12] J. Rosen, "The Right to Be Forgotten," Stanford Law Review Online, vol. 64, 2012.
[13] A. Schaad and A. Monakva, "Annotating Business Processes with Usage Controls," in WWW DUMW, 2012.
[14] A. Aghasaryan, M.-P. Dupont, S. Betgé-Brezetz, and G.-B. Kamga, "Privacy Data Envelops for Moving Privacy-sensitive Data," in W3C Workshop on Privacy and Data Usage Control, 2010.
[15] P. Kumaraguru, L. Cranor, J. Lobo, and S. Calo, "A Survey of Privacy Policy Languages," in SOUPS Workshop on Usable IT Security Management, 2007.
[16] M. Becker, A. Malkis, and L. Bussard, "A Practical Generic Privacy Language," in Proc. ICISS, 2010.
[17] L. Bussard, G. Neven, and F.-S. Preiss, "Downstream Usage Control," in Proc. IEEE POLICY, 2010.
[18] S. Pearson and M. Mont, "Sticky Policies: An Approach for Managing Privacy across Multiple Parties," Computer, vol. 44, no. 9, 2011.
[19] S. Pearson, M. Mont, L. Chen, and A. Reed, "End-to-End Policy-Based Encryption and Management of Data in the Cloud," in Proc. IEEE CloudCom, 2011.
[20] C. Wang, Q. Wang, K. Ren, and W. Lou, "Privacy-Preserving Public Auditing for Data Storage Security in Cloud Computing," in Proc. IEEE INFOCOM, 2010.
[21] W. Jansen and T. Grance, "Guidelines on Security and Privacy in Public Cloud Computing," NIST Special Publication 800-144, National Institute of Standards and Technology, 2011.
[22] K. Khan and Q. Malluhi, "Establishing Trust in Cloud Computing," IT Professional, vol. 12, no. 5, 2010.
[23] C. J. Mitchell, Ed., Trusted Computing. IEE, 2005.
[24] N. Santos, K. P. Gummadi, and R. Rodrigues, "Towards Trusted Cloud Computing," in Proc. USENIX HotCloud, 2009.
[25] P. Pawluk, B. Simmons, M. Smit, M. Litoiu, and S. Mankovski, "Introducing STRATOS: A Cloud Broker Service," in Proc. IEEE CLOUD, 2012.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/SPW.2013.31?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/SPW.2013.31, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://ieeexplore.ieee.org/ielx7/6564486/6565207/06565223.pdf" }
2,013
[ "JournalArticle" ]
true
2013-05-23T00:00:00
[]
6,913
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/019bd6c21447f1ee7f2568e030b546b60d8752c0
[ "Computer Science" ]
0.851085
Linear High-Order Distributed Average Consensus Algorithm in Wireless Sensor Networks
019bd6c21447f1ee7f2568e030b546b60d8752c0
2009 IEEE/SP 15th Workshop on Statistical Signal Processing
[ { "authorId": "1781124971", "name": "Gang Xiong" }, { "authorId": "143902560", "name": "S. Kishore" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
This paper presents a linear high-order distributed average consensus (DAC) algorithm for wireless sensor networks. The average consensus property and the convergence rate of the high-order DAC algorithm are analyzed. In particular, the convergence rate is determined by the spectral radius of a network topology-dependent matrix. Numerical results indicate that this simple linear high-order DAC algorithm can accelerate the convergence without additional communication overhead and reconfiguration of network topology.
Hindawi Publishing Corporation, EURASIP Journal on Advances in Signal Processing, Volume 2010, Article ID 373604, 6 pages, doi:10.1155/2010/373604

# Research Article: Linear High-Order Distributed Average Consensus Algorithm in Wireless Sensor Networks

## Gang Xiong and Shalinee Kishore

_Department of Electrical and Computer Engineering, Lehigh University, Bethlehem, PA 18015, USA_

Correspondence should be addressed to Shalinee Kishore, skishore@lehigh.edu

Received 23 November 2009; Revised 17 March 2010; Accepted 27 May 2010. Academic Editor: Husheng Li

Copyright © 2010 G. Xiong and S. Kishore. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

This paper presents a linear high-order distributed average consensus (DAC) algorithm for wireless sensor networks. The average consensus property and the convergence rate of the high-order DAC algorithm are analyzed. In particular, the convergence rate is determined by the spectral radius of a network topology-dependent matrix. Numerical results indicate that this simple linear high-order DAC algorithm can accelerate the convergence without additional communication overhead and reconfiguration of network topology.

## 1. Introduction

The distributed average consensus (DAC) algorithm aims to provide distributed nodes in a network agreement on a common measurement, known at any one node as the local state information. As such, it has many relevant applications in wireless sensor networks [1, 2], for example, moving-object acquisition and tracking, habitat monitoring, reconnaissance, and surveillance. In the DAC approach, average consensus can be reached within a connected network by averaging pair-wise local state information at network nodes. In [1], Olfati-Saber et al. established a theoretical framework for the analysis of consensus-based algorithms. In this paper, we study a simple approach to improve the convergence rate of DAC algorithms in wireless sensor networks.

The author of [3] demonstrates that the convergence rate of DAC can be increased by using the "small-world" phenomenon. This technique, however, needs to redesign the network topology based on "random rewiring". In [4], an extrapolation-based DAC approach is proposed; it utilizes a scalar epsilon algorithm to accelerate the convergence rate without extra communication cost. However, numerical results show that the mean square error does not decrease monotonically with iteration time, which may not be desirable in practical applications. In [5], the authors extend the concept of average consensus to a higher-dimensional one from a spatial point of view, where nodes are spatially grouped into two disjoint sets: leaders and sensors. Specifically, it is demonstrated that under appropriate conditions, the sensors' states converge to a linear combination of the leaders' states. Furthermore, multiobjective optimization (MOP) and Pareto optimality are utilized to solve the learning problem, where the goal is to minimize the error between the convergence state and the desired estimate subject to a targeted convergence rate. In [6], the authors introduce the concept of a nonlinear DAC algorithm, where standard linear addition is replaced by a sine operation during the local state update. The convergence rate of this nonlinear DAC algorithm is shown to be faster under appropriate weight designs.
In this paper, we apply the principles of high-order consensus to the distributed computation problem in wireless sensor networks. This simple linear high-order DAC requires no additional communication overhead and no reconfiguration of the network topology. Instead, it utilizes gathered data from earlier iterations to accelerate consensus. We study here the convergence property and convergence rate of the high-order DAC algorithm and show that its convergence rate is determined by the spectral radius of a network topology-dependent matrix. Moreover, numerical results indicate that the convergence rate can be greatly improved by storing and using past data.

This paper is outlined as follows. Section 2 provides background and the system model for the high-order DAC algorithm. Section 3 discusses convergence analysis for this scheme. Simulation results are presented in Section 4, and conclusions are provided in Section 5.

## 2. Background and System Model

_2.1. Linear High-Order DAC Algorithm._ We assume a synchronized, time-invariant connected network. In each iteration of the M-th order DAC algorithm, each node transmits a data packet to its neighbors which contains the local state information. Each node then processes and decodes the received messages from its neighbors. After retrieving the state information, each node updates its local state using the weighted average of the current state between itself and its neighboring nodes, as well as stored state information from the M − 1 previous iterations of the algorithm. The update rule of the M-th order DAC algorithm at each node i is given as

$$x_i(k) = x_i(k-1) + \varepsilon \sum_{m=0}^{M-1} c_m (-\gamma)^m \, \Delta x_i(k,m), \qquad \Delta x_i(k,m) = \sum_{j \in N_i} \big( x_j(k-m-1) - x_i(k-m-1) \big), \tag{1}$$

where x_i(k) is the local state at node i during iteration k; N_i is the set of neighboring nodes that can communicate reliably with node i; ε is a constant step size; c_m are predefined constants with c_0 = 1 and c_m ≠ 0 (m > 0); and γ is a forgetting factor such that |γ| < 1. We assume the initial conditions of the M-th order DAC algorithm are x_i(−M+1) = ··· = x_i(−1) = x_i(0) = θ_i, where θ_i is the initial local state information for node i. It is worth mentioning that when γ = 0, the high-order DAC algorithm reduces to the (conventional) first-order DAC algorithm.

This linear high-order DAC algorithm can be regarded as a generalized version of the DAC algorithm; it requires no additional communication cost and no reconfiguration of the network topology. Compared to the conventional first-order DAC algorithm, with a negligible increase in memory size and computation load at each sensor node, the convergence rate can be greatly improved with appropriate algorithm design. In [7], the authors propose an average consensus algorithm with improved convergence rate, obtained by considering a convex combination of the conventional operation and linear prediction; in particular, a special case of one-step prediction is presented for detailed analysis. The major difference between the DAC algorithm in [7] and our proposed scheme is that we utilize stored state differences for high-order updating and show that the optimal convergence rate can be significantly improved by this simple extension. Furthermore, we present explicitly the optimal convergence rate of the second-order DAC algorithm in Section 3.2.

_2.2. Network Model and Some Preliminaries._ In the following, we model the wireless sensor network as an undirected graph G = (V, E), consisting of a set of N nodes V = {1, 2, ..., N} and a set of edges E. (The convergence properties presented here can be easily extended to a directed graph; we omit this extension.) Each edge is denoted as e = (i, j) ∈ E, where i ∈ V and j ∈ V are two nodes connected by edge e. We assume that the presence of an edge (i, j) indicates that nodes i and j can communicate with each other reliably. We assume a connected graph, that is, there exists a path connecting any pair of distinct nodes.

Given this network model, we denote A = [a_ij] as the adjacency matrix of G, such that a_ij = 1 if (i, j) ∈ E and a_ij = 0 otherwise. Next, let L be the graph Laplacian matrix of G, defined as L = D − A, where D = diag{d_1, d_2, ..., d_N} is the degree matrix of G and d_i = |N_i|. Given this matrix L, we have L1 = 0 and 1^T L = 0^T, where 1 = [1, 1, ..., 1]^T and 0 = [0, 0, ..., 0]^T. Additionally, L is a symmetric positive semidefinite matrix, and for a connected graph, the rank of L is N − 1 and its eigenvalues can be arranged in increasing order as 0 = λ_1(L) < λ_2(L) ≤ ··· ≤ λ_N(L) [8].

Let us define x(k) = [x_1(k), x_2(k), ..., x_N(k)]^T. The M-th order DAC algorithm in (1) thus evolves as

$$\mathbf{x}(k) = (I_N - \varepsilon L)\,\mathbf{x}(k-1) - \varepsilon \sum_{m=1}^{M-1} c_m (-\gamma)^m L\, \mathbf{x}(k-m-1), \tag{2}$$

with the initial conditions x(−M+1) = ··· = x(−1) = x(0) = θ, where θ = [θ_1, θ_2, ..., θ_N]^T and I_N denotes the N × N identity matrix.

## 3. Convergence Analysis of High-Order DAC Algorithm

_3.1. Average Consensus Property of High-Order DAC Algorithm._ Before we investigate the convergence property of the high-order DAC algorithm, we define two MN × MN matrices

$$H = \begin{bmatrix} I_N - \varepsilon L & c_1 \gamma \varepsilon L & \cdots & -c_{M-1}(-\gamma)^{M-1} \varepsilon L \\ I_N & 0_{N \times N} & \cdots & 0_{N \times N} \\ \vdots & \ddots & \ddots & \vdots \\ 0_{N \times N} & \cdots & I_N & 0_{N \times N} \end{bmatrix}, \qquad J = \begin{bmatrix} K & 0_{N \times N} & \cdots & 0_{N \times N} \\ K & 0_{N \times N} & \cdots & 0_{N \times N} \\ \vdots & \vdots & & \vdots \\ K & 0_{N \times N} & \cdots & 0_{N \times N} \end{bmatrix}, \tag{3}$$

where K = (1/N)11^T and 0_{N×N} denotes the N × N all-zero matrix. Then we have the following lemma:

**Lemma 1.** The eigenvalues of H − J agree with those of H, except that λ_1(H) = 1 is replaced by λ_1(H − J) = 0.

_Proof._ Let us define two MN × 1 vectors h_l = (1/N)[1^T 0^T ··· 0^T]^T and h_r = [1^T ··· 1^T 1^T]^T. It is easy to check that h_l and h_r are left and right eigenvectors of H corresponding to λ_1(H) = 1, respectively, that is, h_l^T H = h_l^T and H h_r = h_r. Additionally, J = h_r h_l^T and h_l^T h_r = 1. In order to obtain the eigenvalues of H − J, we have [9]

$$\det(H - J - \lambda I_{MN}) = \det(H - \lambda I_{MN})\Big(1 - \mathbf{h}_l^T (H - \lambda I_{MN})^{-1} \mathbf{h}_r\Big) = \left[\prod_{i=1}^{MN} (\lambda_i(H) - \lambda)\right]\left(1 - \frac{\mathbf{h}_l^T \mathbf{h}_r}{1-\lambda}\right) = \left[\prod_{i=2}^{MN} (\lambda_i(H) - \lambda)\right](-\lambda). \tag{4}$$

The above equation is valid because

$$\mathbf{h}_r = (H - \lambda I_{MN})^{-1}(H - \lambda I_{MN})\,\mathbf{h}_r = (H - \lambda I_{MN})^{-1}(1-\lambda)\,\mathbf{h}_r. \tag{5}$$

Thus, the eigenvalues of H − J are λ_1(H − J) = 0 and λ_i(H − J) = λ_i(H), i = 2, ..., MN. This completes the proof.

The average consensus property of the M-th order DAC algorithm in wireless sensor networks is stated in the following theorem.

**Theorem 1.** Consider the M-th order DAC algorithm in (2) in a time-invariant, connected, undirected wireless sensor network, with initial conditions x(−M+1) = ··· = x(−1) = x(0) = θ. When ρ(H − J) < 1, an average consensus is achieved asymptotically, or equivalently,

$$\lim_{k \to \infty} x_i(k) = \frac{1}{N}\mathbf{1}^T \boldsymbol{\theta} = \frac{1}{N}\sum_{i=1}^{N} \theta_i, \quad \forall i \in V, \tag{6}$$

where ρ(·) denotes the spectral radius of a matrix.

_Proof._ Let us define ψ(k) = [x(k)^T x(k−1)^T ··· x(k−M+1)^T]^T. Then, the M-th order DAC algorithm in (2) can be rewritten as ψ(k) = Hψ(k−1), which implies that ψ(k) = H^k ψ(0). To calculate the eigenvalues of H, we have [9]

$$\det(H - \lambda I_{MN}) = \prod_{i=1}^{N}\left(\lambda^M - \big(1 - \varepsilon\lambda_i(L)\big)\lambda^{M-1} + \varepsilon\sum_{m=1}^{M-1} c_m(-\gamma)^m \lambda_i(L)\,\lambda^{M-1-m}\right) = 0. \tag{7}$$

Thus, the eigenvalues of H should satisfy the following equation:

$$f(\lambda) = \lambda^M - \big(1 - \varepsilon\lambda_i(L)\big)\lambda^{M-1} + \varepsilon\sum_{m=1}^{M-1} c_m(-\gamma)^m \lambda_i(L)\,\lambda^{M-1-m} = 0. \tag{8}$$

Note that there are M roots corresponding to one λ_i(L). For a time-invariant and connected network, L has only one zero eigenvalue, λ_1(L) = 0. From (8), when λ_1(L) = 0, the eigenvalues of H satisfy f(λ) = λ^M − λ^{M−1} = 0. Then, for this λ_1(L) = 0, H has only two distinct eigenvalues, λ_1(H) = 1 (with algebraic multiplicity 1) and λ_2(H) = 0 (with algebraic multiplicity M − 1). Additionally, it is easy to show that the algebraic multiplicity of the eigenvalue λ(H) = 1 is equal to 1. Based on Lemma 1, we know that the eigenvalues of H − J agree with those of H, except that λ_1(H) = 1 is replaced by λ_1(H − J) = 0. Since ρ(H − J) < 1, we see that the eigenvalues of H stay inside the unit circle except for λ_1(H) = 1. Thus, we have

$$\lim_{k \to \infty} H^k = V \lim_{k \to \infty} \begin{bmatrix} 1 & \mathbf{0}_{1 \times (MN-1)} \\ \mathbf{0}_{(MN-1) \times 1} & \Lambda^k \end{bmatrix} V^{-1} = V \begin{bmatrix} 1 & \mathbf{0}_{1 \times (MN-1)} \\ \mathbf{0}_{(MN-1) \times 1} & \mathbf{0}_{(MN-1) \times (MN-1)} \end{bmatrix} V^{-1} = \mathbf{h}_r \mathbf{h}_l^T = J, \tag{9}$$

where Λ is the Jordan form matrix corresponding to the eigenvalues λ_i(H) ≠ 1 [9]. Thus, we have lim_{k→∞} H^k = J. Then, lim_{k→∞} ψ(k) = Jψ(0), which indicates

$$\lim_{k \to \infty} x_i(k) = \frac{1}{N}\mathbf{1}^T \boldsymbol{\theta}. \tag{10}$$

This completes the proof.

According to Theorem 1, we see that when this linear high-order DAC algorithm is employed in an undirected wireless sensor network, average consensus can be achieved asymptotically. We also note that our proposed high-order DAC algorithm relies heavily on local state information exchange between two or more nodes in the network. Noisy links [10] and packet drop failures [11] will certainly affect the performance of our proposed high-order DAC algorithm; we will investigate these important issues in the future.
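As an illustration only (not part of the original paper), the following NumPy sketch simulates the second-order update (2) on a 6-node ring and checks that all states approach the average of the initial values, as Theorem 1 predicts. The chosen ε and γ are arbitrary values that keep ρ(H − J) < 1 for this particular graph, not optimized values.

```python
# A small NumPy sketch that simulates the M-th order update (2) and checks
# asymptotic average consensus on a toy topology.
import numpy as np

# Ring of N = 6 nodes: adjacency, degree, and Laplacian L = D - A.
N = 6
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1
L = np.diag(A.sum(axis=1)) - A

M, eps, gamma = 2, 0.3, 0.2            # second-order DAC, c_0 = c_1 = 1
c = [1.0, 1.0]

rng = np.random.default_rng(0)
theta = rng.uniform(-500, 500, N)      # initial local states
history = [theta.copy() for _ in range(M)]   # x(k-1), x(k-2), ...

for k in range(200):
    # Update (2): x(k) = (I - eps*L) x(k-1) - eps * sum c_m (-gamma)^m L x(k-m-1)
    x_new = (np.eye(N) - eps * L) @ history[0]
    for m in range(1, M):
        x_new -= eps * c[m] * (-gamma) ** m * (L @ history[m])
    history = [x_new] + history[:-1]

# All states converge to the average of the initial values (Theorem 1).
assert np.allclose(history[0], theta.mean(), atol=1e-8)
```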
_3.2. Convergence Rate for High-Order DAC Algorithm._ One of the most important measures of any distributed, iterative algorithm is its convergence rate. As we show next, the convergence rate of the high-order DAC algorithm is determined by the spectral radius of H − J, similar to the first-order DAC algorithm [1]. Let us define the average consensus value in each iteration as m(k) = (1/N)1^T x(k). In the high-order DAC algorithm, this value remains invariant during each iteration since

$$m(k) = \frac{1}{N}\mathbf{1}^T\left[(I_N - \varepsilon L)\,\mathbf{x}(k-1) - \varepsilon\sum_{m=1}^{M-1} c_m(-\gamma)^m L\,\mathbf{x}(k-m-1)\right] = m(k-1) = \cdots = m(0). \tag{11}$$

We now define the disagreement vector as δ(k) = x(k) − m(k)1, which indicates the difference between the updated local state and the average state of the network nodes. Then, the evolution of the disagreement vector is obtained as

$$\boldsymbol{\delta}(k) = (I_N - \varepsilon L)\,\boldsymbol{\delta}(k-1) - \varepsilon\sum_{m=1}^{M-1} c_m(-\gamma)^m L\,\boldsymbol{\delta}(k-m-1). \tag{12}$$

Given this dynamic of the disagreement vector, we note the following.

**Lemma 2.** For the M-th order DAC algorithm in (2) in a time-invariant, connected, undirected wireless sensor network, with initial conditions x(−M+1) = ··· = x(−1) = x(0) = θ and α = ρ(H − J) < 1, an average consensus is exponentially reached in the following form:

$$\frac{\sum_{m=0}^{M-1}\|\boldsymbol{\delta}(k-m)\|^2}{\|\boldsymbol{\delta}(0)\|^2} \le M\alpha^{2k}, \tag{13}$$

where ‖·‖ denotes the ℓ2 norm of a vector.

_Proof._ Let us define the error vector as e(k) = [δ^T(k) δ^T(k−1) ··· δ^T(k−M+1)]^T, which can be obtained from e(k) = ψ(k) − J_1 ψ(k), where J_1 = I_M ⊗ K and ⊗ denotes the Kronecker product. Based on this definition, we see that the error vector results in the following evolution:

$$\mathbf{e}(k) = (H - J_1 H)\,\boldsymbol{\psi}(k-1) = (H - J)\big(\boldsymbol{\psi}(k-1) - J_1\,\boldsymbol{\psi}(k-1)\big) = (H - J)\,\mathbf{e}(k-1). \tag{14}$$

The above equation is valid because (H − J)J_1 = 0_{MN×MN} and J_1 H = J. Then, we have

$$\|\mathbf{e}(k)\|^2 = \|(H-J)\,\mathbf{e}(k-1)\|^2 \le \alpha^2\|\mathbf{e}(k-1)\|^2 \le \cdots \le \alpha^{2k}\|\mathbf{e}(0)\|^2, \tag{15}$$

which is equivalent to (13). This completes the proof.

Let us define the convergence region R to satisfy ρ(H − J) < 1, that is,

$$\mathcal{R} = \big\{(\varepsilon, \gamma) \mid \rho(H - J) < 1\big\}. \tag{16}$$

Based on Lemma 2, we see that the convergence rate for the M-th order DAC algorithm in wireless sensor networks is determined by the spectral radius of H − J, which depends solely on the network topology. Furthermore, we note that there may exist choices of ε and γ that achieve the optimal convergence rate of the high-order DAC algorithm. To see this, we formulate the following spectral radius minimization problem to find the optimal ε and γ for the high-order DAC algorithm:

$$\min_{\varepsilon,\gamma}\ \rho(H - J) \quad \text{s.t.}\ (\varepsilon,\gamma) \in \mathcal{R}. \tag{17}$$

From (17), we see that the optimal convergence rate of our proposed high-order DAC algorithm depends solely on the eigenvalues of the Laplacian matrix. Let us define the minimal spectral radius of H − J as α_opt = min{ρ(H − J)} and the optimal convergence rate as ν_opt = −log(α_opt). When M = 2, the optimal convergence rate of the second-order DAC algorithm can be obtained as [12]

$$\nu_{\mathrm{opt,SO}} = \log\frac{\lambda_N(L) + 3\lambda_2(L)}{\lambda_N(L) - \lambda_2(L)}. \tag{18}$$

Recall that in the first-order DAC algorithm, we have [2]

$$\nu_{\mathrm{opt,FO}} = \log\frac{\lambda_N(L) + \lambda_2(L)}{\lambda_N(L) - \lambda_2(L)}. \tag{19}$$

Clearly, we see that ν_opt,SO ≥ ν_opt,FO. In the case when M ≥ 3, we note that, in general, the closed-form solution for this optimization problem is hard to find due to the fact that high-order polynomial equations are involved in calculating the eigenvalues of H − J. For example, when M = 3 and c_1 = 1, c_2 = 1, we need to find the roots of the following cubic equation to obtain the eigenvalues of H − J:

$$f(\lambda) = \lambda^3 - \big(1 - \varepsilon\lambda_i(L)\big)\lambda^2 - \gamma\varepsilon\lambda_i(L)\,\lambda + \gamma^2\varepsilon\lambda_i(L) = 0. \tag{20}$$

In practical applications, since the optimal ε and γ depend only on the network topology, a numerical solution can be obtained offline based on node deployment, and all design parameters can be flooded to the sensor nodes before they run the distributed algorithm. As we will show in the simulations, the optimal convergence rate can be greatly improved by this linear high-order DAC algorithm.

## 4. Simulation Results

In the following, we simulate networks in which the initial local state information of node i is equally spaced in [−β, β], where β = 500. (Trends similar to the ones noted below were observed when the initial local state information of the nodes was arbitrary, e.g., uniformly distributed over [−β, β]; we use this fixed local state assumption here for comparison purposes.) For the sake of simplicity, we only consider M = 3 and M = 4 for the higher-order DAC approach. In the simulations, we denote our proposed DAC algorithm as the best constant (BC) high-order DAC algorithm and choose two types of ad hoc weights for comparison: maximum degree (MD) and Metropolis-Hastings (MH) weights [13]. Furthermore, we assume c1 = 1, c2 = 1, c3 = 1/6 and study the following two network topologies:

Case 1. Fixed network with 6 nodes, as shown in Figure 1(a).

Case 2. Random network with 16 nodes. The 16 nodes were randomly generated with uniform distribution over a unit square; two nodes were assumed connected if the distance between them was less than η, a predefined threshold. One realization of such a network is shown in Figure 1(b).

Figure 1: Network topologies used in numerical results: (a) fixed network with 6 nodes (Case 1) and (b) random network with 16 nodes (Case 2).

Figure 2 shows the optimal convergence rates for the DAC algorithms with various weights in random networks with 16 nodes as a function of η. The results are based on 1000 realizations of the random network, where we excluded disconnected networks. From the plots, we note that the first-order BC DAC algorithm outperforms the first-order MH and MD DAC algorithms. Furthermore, we see that the optimal convergence rate increases as M increases. However, we also observe that the fourth-order DAC algorithm has negligible improvement compared to the third-order algorithm. Based on this, we restrict our examination of higher-order DAC algorithms to M = 3 in the subsequent results.

Figure 2: Convergence rate comparison of DAC algorithms with various weights in random networks versus distance threshold when N = 16.

In addition to the results shown here, we ran this simulation setup for various realizations of random networks, assuming a large number of nodes. Figure 3 shows the convergence rate comparison for DAC algorithms with various weights when N = 256. As expected, we see that the results show a similar trend, that is, the optimal convergence rate of the DAC algorithm increases as M increases.

Figure 3: Convergence rate comparison of DAC algorithms with various weights in random networks versus distance threshold when N = 256.
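The offline design step mentioned above can be sketched numerically. The toy code below (our own, with hypothetical helper names) builds H and J for M = 2 as in (3) and grid-searches (ε, γ) to minimize ρ(H − J) on a small ring topology; it is a sketch of the approach, not the paper's optimization procedure.

```python
# Sketch of the offline design step behind (17): grid-search (eps, gamma)
# minimizing the spectral radius rho(H - J) for a second-order DAC.
import numpy as np

def build_H_J(L, eps, gamma, c=(1.0, 1.0)):
    """Companion-form H and averaging matrix J for M = 2, per (3)."""
    N = L.shape[0]
    I, K = np.eye(N), np.ones((N, N)) / N
    top = np.hstack([I - eps * L, -c[1] * (-gamma) * eps * L])
    bottom = np.hstack([I, np.zeros((N, N))])
    H = np.vstack([top, bottom])
    J = np.vstack([np.hstack([K, np.zeros((N, N))])] * 2)
    return H, J

def spectral_radius(A):
    return np.abs(np.linalg.eigvals(A)).max()

# Ring of 6 nodes as a toy topology.
N = 6
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1
L = np.diag(A.sum(axis=1)) - A

best = min(
    (spectral_radius(np.subtract(*build_H_J(L, e, g))), e, g)
    for e in np.linspace(0.05, 0.5, 10)
    for g in np.linspace(0.0, 0.9, 10)
)
rho, eps_opt, gamma_opt = best
print(f"rho(H-J)={rho:.3f} at eps={eps_opt:.2f}, gamma={gamma_opt:.2f}")
```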
In Figure 4, we compare the convergence rates of the third-order DAC algorithm with the first- and second-order DAC algorithms for both the random and fixed network topologies. Specifically, we plot the mean square error (defined as (1/N)‖δ(k)‖²). In simulating random networks, we average out results over 1000 network realizations and assume η = 0.9, that is, network nodes are well connected with one another. As expected, we see that the third-order DAC algorithm converges faster than the first- and second-order DAC algorithms for both network scenarios.

Figure 4: Convergence rate comparison of first-, second-, and third-order DAC algorithms: (a) fixed network with 6 nodes (Case 1) and (b) random network with 16 nodes (Case 2).

## 5. Conclusions

In this paper, we present a linear high-order DAC algorithm to address the distributed computation problem in wireless sensor networks. Interestingly, the high-order DAC algorithm can be regarded as a spatial-temporal processing technique, where nodes in the network represent the spatial advantage, the high-order processing represents the temporal advantage, and the optimal convergence rate can be viewed as the diversity gain. In the future, we intend to investigate the effects of fading, link failure, and other practical conditions when utilizing the DAC algorithm in wireless sensor networks.

## References

[1] R. Olfati-Saber, J. A. Fax, and R. M. Murray, "Consensus and cooperation in networked multi-agent systems," Proceedings of the IEEE, vol. 95, no. 1, pp. 215–233, 2007.
[2] L. Xiao and S. Boyd, "Fast linear iterations for distributed averaging," in Proceedings of the 42nd IEEE Conference on Decision and Control, vol. 5, pp. 4997–5002, December 2003.
[3] R. Olfati-Saber, "Ultrafast consensus in small-world networks," in Proceedings of the American Control Conference (ACC '05), vol. 4, pp. 2371–2378, June 2005.
[4] E. Kokiopoulou and P. Frossard, "Accelerating distributed consensus using extrapolation," IEEE Signal Processing Letters, vol. 14, no. 10, pp. 665–668, 2007.
[5] U. A. Khan, S. Kar, and J. M. F. Moura, "Higher dimensional consensus: learning in large-scale networks," IEEE Transactions on Signal Processing, vol. 58, no. 5, pp. 2836–2849, 2010.
[6] U. A. Khan, S. Kar, and J. M. F. Moura, "Distributed average consensus: beyond the realm of linearity," in Proceedings of the 43rd IEEE Asilomar Conference on Signals, Systems and Computers, November 2009.
[7] B. N. Oreshkin, T. C. Aysal, and M. J. Coates, "Distributed average consensus with increased convergence rate," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '08), pp. 2285–2288, April 2008.
[8] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, UK, 1985.
[9] C. D. Meyer, Matrix Analysis and Applied Linear Algebra, SIAM, 2001.
[10] L. Xiao, S. Boyd, and S.-J. Kim, "Distributed average consensus with least-mean-square deviation," Journal of Parallel and Distributed Computing, vol. 67, no. 1, pp. 33–46, 2007.
[11] Y. Hatano and M. Mesbahi, "Agreement over random networks," IEEE Transactions on Automatic Control, vol. 50, no. 11, pp. 1867–1872, 2005.
[12] G. Xiong and S. Kishore, "Discrete-time second-order distributed consensus time synchronization algorithm for wireless sensor networks," EURASIP Journal on Wireless Communications and Networking, vol. 2009, Article ID 623537, 12 pages, 2009.
[13] L. Xiao, S. Boyd, and S. Lall, "A scheme for robust distributed sensor fusion based on average consensus," in Proceedings of the 4th International Symposium on Information Processing in Sensor Networks (IPSN '05), pp. 63–70, April 2005.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1155/2010/373604?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1155/2010/373604, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://asp-eurasipjournals.springeropen.com/counter/pdf/10.1155/2010/373604" }
2,009
[ "JournalArticle" ]
true
2009-10-06T00:00:00
[ { "paperId": "7c2a1f8a5ac9560d259aee8e63ffcd5bb5d94c97", "title": "Distributed average consensus: Beyond the realm of linearity" }, { "paperId": "22b5ed352fa9a2e4a606fd7bae29bbde8f311972", "title": "Higher Dimensional Consensus: Learning in Large-Scale Networks" }, { "paperId": "5b731bc6c64a24212f7f1a035b8cb8d3917e7a3b", "title": "Distributed average consensus with increased convergence rate" }, { "paperId": "992741fa38088868f6d26e7062bac068d3b71fe3", "title": "Accelerating Distributed Consensus Using Extrapolation" }, { "paperId": "aa6be519b394b44ab24c6ad964f8a2c6a9b23571", "title": "Consensus and Cooperation in Networked Multi-Agent Systems" }, { "paperId": "dc2894187aa9c0058efea2904020f59587b211a1", "title": "Ultrafast consensus in small-world networks" }, { "paperId": "59697e0aea25057adf743265888b3a4f5a607f82", "title": "A scheme for robust distributed sensor fusion based on average consensus" }, { "paperId": "48372b9fdbe64ec8d619babaf7f7ee734b00127c", "title": "Fast linear iterations for distributed averaging" }, { "paperId": "d6e6c3243e9e4e6dd8f0bc783d7612b7d3863d5f", "title": "Matrix Analysis and Applied Linear Algebra" }, { "paperId": "721f54f6fa32f5f02c5124a2b73ce5f4280b4eaf", "title": "Matrix analysis" }, { "paperId": "c44e0677856d6cbf252b6c06a92dc8c4c25b518a", "title": "Discrete-Time Second-Order Distributed Consensus Time Synchronization Algorithm for Wireless Sensor Networks" }, { "paperId": "5359fb2362ee22a18a5cc1bf9ff7f69d7ce533bf", "title": "Distributed average consensus with least-mean-square deviation" }, { "paperId": "a6187255156a25be0a68b5d20e2affb25c5dacd0", "title": "Agreement over random networks" }, { "paperId": "155b4d63cb1c7fd47f1944f75905ee98dbc4abbe", "title": "R-Matrix Analysis" }, { "paperId": null, "title": "532 2009 IEEE/SP 15th Workshop on Statistical Signal Processing" } ]
8,139
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/019d3320cca2fb1f3c5a791d3f76b1769eddf215
[ "Computer Science" ]
0.873536
A toolbox for verifiable tally-hiding e-voting systems
019d3320cca2fb1f3c5a791d3f76b1769eddf215
IACR Cryptology ePrint Archive
[ { "authorId": "2365563", "name": "V. Cortier" }, { "authorId": "145343020", "name": "P. Gaudry" }, { "authorId": "103167608", "name": "Quentin Yang" } ]
{ "alternate_issns": null, "alternate_names": [ "IACR Cryptol eprint Arch" ], "alternate_urls": null, "id": "166fd2b5-a928-4a98-a449-3b90935cc101", "issn": null, "name": "IACR Cryptology ePrint Archive", "type": "journal", "url": "http://eprint.iacr.org/" }
null
# A toolbox for verifiable tally-hiding e-voting systems

## Véronique Cortier, Pierrick Gaudry, Quentin Yang

To cite this version: Véronique Cortier, Pierrick Gaudry, Quentin Yang. A toolbox for verifiable tally-hiding e-voting systems. ESORICS 2022 - 27th European Symposium on Research in Computer Security, Sep 2022, Copenhague, Denmark. pp. 631-652, 10.1007/978-3-031-17146-8_31. hal-03367930v2

HAL Id: hal-03367930, https://inria.hal.science/hal-03367930v2, submitted on 29 Sep 2022. Distributed under a Creative Commons Attribution 4.0 International License.

Véronique Cortier, Pierrick Gaudry, and Quentin Yang (Université de Lorraine, CNRS, Inria)

**Abstract.** In most verifiable electronic voting schemes, one key step is the tally phase, where the election result is computed from the encrypted ballots. A generic technique consists in first applying (verifiable) mixnets to the ballots and then revealing all the votes in the clear. This however discloses much more information than the result of the election itself (that is, the winners, plus possibly some information required by law) and may offer the possibility to coerce voters. In this paper, we present a collection of building blocks for designing tally-hiding schemes based on multi-party computations. From these building blocks, we design a fully tally-hiding scheme for Condorcet elections. Our implementation shows that the approach is practical, at least for medium-size elections. Similarly, we provide the first tally-hiding schemes with no leakage for three important counting functions: D'Hondt, STV, and Majority Judgment. We prove that they can be used to design a private and verifiable voting scheme. We also unveil unknown flaws or leakage in some previously proposed tally-hiding schemes.

### 1 Introduction

Electronic voting is used in many countries and various contexts, from major politically binding elections to small elections among scientific councils. It allows voters to vote from any place and is often used as a replacement for postal voting. Moreover, it enables complex tally processes where voters express their preference by ranking the candidates (preferential voting). In such cases, the votes are counted using the prescribed procedure (e.g., Single Transferable Vote or Condorcet), which is tedious by hand but easy for a computer.

Numerous electronic voting protocols have been proposed, such as Helios [6], Civitas [15], or CHVote [21]. They all intend to guarantee at least two security properties: vote secrecy (no one should know how I voted) and verifiability. Vote secrecy is typically achieved through asymmetric encryption: election trustees jointly compute an election public key that is used to encrypt the votes. The trustees take part in the tally, to compute the election result. Only a coalition of dishonest trustees (set to some threshold) can decrypt a ballot and violate vote secrecy.
Verifiability typically guarantees that a voter can check that her vote has been properly recorded and that an external auditor can check that the result corresponds to the received votes. Then, depending on the protocol, additional properties can be achieved, such as coercion-resistance or cast-as-intended.

Various techniques are used to achieve such properties, but one common key step is the tally: from the set of encrypted ballots, it is necessary to compute the result of the election in a verifiable manner. There are two main approaches for tallying an election. The first one is the homomorphic tally. Thanks to the homomorphic property of the encryption scheme (typically ElGamal), the ballots are combined to compute the (encrypted) sum of the votes. Then only the resulting ciphertext is decrypted to reveal the election result, without leaking the individual votes (a toy illustration is sketched below). For verifiability, each trustee produces a zero-knowledge proof of correct (partial) decryption, so that anyone can check that the result indeed corresponds to the encrypted ballots. The second main approach is based on verifiable re-encryption mixnets. The encrypted ballots are shuffled and re-randomized such that the resulting ballots cannot be linked to the original ones [40, 21]. A zero-knowledge proof of correct mixing is produced to guarantee that no ballot has been removed nor added. Several mixers are used in succession, and then each (rerandomized) ballot is decrypted, yielding the original votes in the clear, in a random order.

Homomorphic tally can only be applied to simple vote counting functions, where voters select one or several candidates from a list and the result of the election is the sum of the votes for each candidate. We note that even in this simple case, the tally reveals more information than just the winner(s) of the election. Mixnet-based tally can be used for any vote counting function, since it reveals the (multi)set of the initial votes. On the other hand, this is much more information than the result itself, and such systems can be subject to Italian attacks. Indeed, when voters rank their candidates by order of preference, the number of possible choices can be higher than the number of voters. Hence a voter can be coerced to vote in a certain way by first selecting the first candidates as desired by the coercer and then "signing" her ballot with some very particular order of candidates, as prescribed by the coercer. The coercer will check at the end of the election that such a ballot appears.

Recent work has explored the possibility of designing tally-hiding schemes, which compute the result of the election from a set of encrypted ballots without leaking any other information. This can be seen as an instance of Multi-Party Computation (MPC), but the context of voting adds some constraints. First, a voter should only produce one encrypted ballot, which should remain of reasonable size and be computable with low resources (e.g., in JavaScript). The trustees can be assumed to have more resources. Yet, it is important to minimize the number of communications and the computational cost whenever possible; in particular, voters should not wait for weeks before obtaining the result. Moreover, all proofs produced by the authorities need to be downloaded and verified by external, independent auditors. It is important that verifying an election remains affordable.
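As a purely didactic illustration of the homomorphic tally (not the paper's elliptic-curve setting, and with deliberately tiny, insecure parameters), the following Python sketch uses exponential ElGamal over the multiplicative group mod the safe prime p = 23: ciphertexts of 0/1 votes are multiplied together, and one decryption reveals only the sum.

```python
# Toy exponential ElGamal: Enc(m) = (g^r, g^m * pk^r). Multiplying two
# ciphertexts adds the plaintexts, so one decryption of the product gives
# the vote count. p = 23 is insecure; it only illustrates the arithmetic.
import random

p, q, g = 23, 11, 4                   # p = 2q + 1; g generates the order-q subgroup

sk = random.randrange(1, q)           # trustees' joint secret key (simplified)
pk = pow(g, sk, p)

def encrypt(m):
    r = random.randrange(1, q)
    return pow(g, r, p), (pow(g, m, p) * pow(pk, r, p)) % p

def combine(c1, c2):                  # homomorphic addition of plaintexts
    return (c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p

def decrypt_small(c, bound):          # brute-force the small discrete log of g^m
    m_enc = (c[1] * pow(c[0], -sk, p)) % p
    for m in range(bound + 1):
        if pow(g, m, p) == m_enc:
            return m

votes = [1, 0, 1, 1, 0]
tally = encrypt(votes[0])
for v in votes[1:]:
    tally = combine(tally, encrypt(v))
assert decrypt_small(tally, len(votes)) == sum(votes)
```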
_Related work._ Even when the winner(s) of the election is simply the one(s) that received the most votes, leaking the scores of each candidate can be embarrassing and can even lower vote privacy. This is discussed in [25], where the authors propose a protocol called Ordinos that computes the candidate who received the most votes, without any extra information. In the case of preferential voting, where voters rank candidates, several methods can be applied to determine the winner(s). Two popular methods are Single Transferable Vote (STV) and Condorcet. STV is used in politically binding elections in several countries, including Australia, Ireland, and the UK. Condorcet has several variants, and the Schulze variant is popular among several associations like Ubuntu or GnuPG. These are the counting methods offered by the voting platform CIVS [1] and used in many elections. The literature on tally-hiding schemes includes [22], which shows how to compute the result in Condorcet, while [37] and [9] provide several methods for STV. They all leak some partial information, but much less than the complete set of votes. Ordinos has been extended [24] to cover various counting functions that include Borda, Hare-Niemeyer, Condorcet, and Instant-Runoff Voting (IRV, which is STV with only one seat). This shows the flexibility of Ordinos, yet at a cost: ballots are of size cubic in the number of candidates for Condorcet-Schulze and even super-exponential for IRV. The last system we study, Majority Judgment (MJ), is a voting system where voters give a grade to each candidate (typically between 1 and 6). The winner is, roughly, the candidate with the highest median rating. Since typically several candidates have the same median, the winner is determined by a complex algorithm that iteratively compares the highest median, then the second one, and so on (see [7] for the full details). In [12], the authors show how to compute Majority Judgment in MPC. All these approaches except [22] rely on Paillier encryption, since it is better suited than ElGamal for the arithmetic comparison of the contents of two ciphertexts.

_Our contributions._ First, we revisit the existing work, exhibiting weaknesses and even flaws in some of it. For example, we discovered that the scheme proposed in [22] for Condorcet breaks vote privacy for each voter who voted blank. Moreover, we found out that the approach developed in [12] for Majority Judgment fails in not-so-rare cases. Our second and main contribution is the design of a toolbox of MPC primitives well suited for tally-hiding schemes. We provide a precise cost analysis, with various tradeoffs in terms of message size, number of communications, and computational costs. We believe this study could be useful in other settings. As an application of our toolbox, we provide new algorithms for computing vote counting functions, decreasing both the complexity and the leakage, or proposing other trade-offs regarding the load for the voters and the trustees. One of our first findings is that even for complex counting functions, it is possible to use Exponential ElGamal encryption instead of Paillier. This offers much better tool support as well as new tradeoffs in terms of computational costs. As counting functions, we first consider Condorcet-Schulze and propose the first tally-hiding scheme that allows candidates to be ranked at equality, with a quasi-linear complexity for voters (vs. cubic in [24]).
We continue by considering three major counting functions: D'Hondt, Majority Judgment, and STV. For each of them, we propose the first tally-hiding schemes with no leakage.

_Security proof and implementation._ The Paillier setting of our toolbox builds upon the same low-level primitive as previous works. However, in the ElGamal setting, which we found to be highly relevant, the core ingredient is the CGate protocol (which conditionally sets a component to 0). An important contribution of our work is to formally prove that this primitive is UC-secure and verifiable. Concentrating on this ElGamal setting, this allows us to prove vote secrecy and verifiability of a voting scheme that embeds our tally-hiding protocol. With the same goal of validating our ElGamal approach, we have implemented our building blocks as a library in this setting. As a proof of concept, we have combined them to form the tally-hiding scheme that corresponds to Condorcet-Schulze. Our experiments show a reasonable execution time: authorities need a couple of minutes to perform the tally for 5 candidates, and about 9 hours for 20 candidates (and 1024 voters). In contrast, the code of [24], developed in the Paillier setting, needed more than 9 days for 20 candidates (and was almost insensitive to the number of voters). Finally, we emphasize that our toolbox should be suitable for implementing any realistic counting method. For example, we assumed here that the desired result of the election is exactly the set of winners, but our toolbox could be used to reveal more information if needed (for example, it could tell that candidate A received between 55% and 60% of the votes).

_Outline of the paper._ We start (Section 2) by explaining how to obtain all basic arithmetic operations in MPC on encrypted integers using ElGamal encryption, and we show that this is UC-secure. Figure 1 in the Appendix provides the cost of each basic function, which allows deriving the cost of any complex function obtained by composition. In Section 3, we apply our toolbox to the Condorcet-Schulze tally function, provide a detailed computational cost analysis, and compare it with previous approaches (one of them suffering from a privacy breach). Due to space constraints, we overview in Section 4 how our toolbox can be applied to single voting, STV, and Majority Judgment, again comparing our approach to previous techniques. The exact cost of each tally function is given in the Appendix. We show in Section 5 that, in all these cases, we can derive a privacy-preserving voting protocol. A companion report [18] provides a more detailed overview of how our toolbox can be applied to build MPC-secure tally functions for Condorcet, single voting, STV, and Majority Judgment; it also contains all the detailed algorithms and security proofs. Our source code for the implementation is available at [4].

### 2 Description of the Tally-Hiding Toolbox

We focus on the tally phase, common to most voting schemes. We assume a public ballot box that contains the list of encrypted ballots, where all the traditional issues up to this point have been handled: eligibility, validity of ballots, revoting policy if applicable, and so on. We concentrate on the counted-as-recorded property. Our goal is to compute the winners of the election while preserving the privacy of the voters, namely with no additional leakage of information about the tally.
The decryption key is assumed to be shared among a trustees, with a threshold scheme, and we wish the procedure to produce a transcript such that: 1) if at least a threshold of t + 1 trustees is honest, the result will be obtained; 2) if at most t trustees are corrupted, only the result is known (no side information is leaked); 3) even if all a trustees are dishonest, if the transcript is valid then the result is guaranteed to be correct.

**2.1** **Encryption scheme: Paillier vs ElGamal**

Paillier and Exponential ElGamal are the most popular asymmetric encryption schemes that are homomorphic, where multiplication or division of ciphertexts corresponds to addition or subtraction of the corresponding cleartexts. They therefore allow re-encryption, by multiplying by an encryption of 0. These properties are at the heart of the MPC protocols.

When Exponential ElGamal encryption can be used, it offers several advantages over Paillier. First, popular elliptic curves like NIST P-256 or Curve25519 are now ubiquitous in cryptographic libraries, while there is in general no support for Paillier. Moreover, in our context, it is important to split the decryption key among several trustees so that no single authority can break vote privacy. It is easy to set up threshold decryption in ElGamal, with an arbitrary threshold of trustees [16]. The situation is more complex in Paillier. The general threshold key distribution scheme [23] is of high complexity. A more efficient scheme exists [29], but only with an honest majority. Another reason for preferring ElGamal is that the underlying security assumption (Decisional Diffie-Hellman) can be considered more standard than the one for Paillier (Decisional n-Residuosity). On the other hand, Paillier offers more possibilities when it comes to MPC. Therefore, in general, an algorithm based on the Paillier scheme requires fewer exponentiations than one based on ElGamal; however, Paillier exponentiations are individually more costly. Later on, we will provide the complexities of our algorithms measured by the number of exponentiations. When comparing these figures, one should keep in mind the respective costs in ElGamal and in Paillier, which we estimate now.

**Table 1.** Estimation of the number of exponentiations per second in the Paillier and ElGamal settings.

|                         | Paillier | Elliptic ElGamal | Ratio |
|-------------------------|----------|------------------|-------|
| Native (server-side)    | 200      | 10,000           | 50    |
| In browser (voter-side) | 2        | 5,000            | 2,500 |

**Parameter sizes and cost of operations.** For a voting system, a 128-bit level of security seems to be a reasonable choice. While a 112-bit level is probably acceptable for the next decade, many certification bodies will ask for 128 bits or more. In the case of elliptic ElGamal, this translates readily into a curve over a base field of 256 bits, and usually prime fields are preferred. For the Paillier scheme, the security relies on a problem that is not harder than integer factorization of an RSA number n. Since the complexity of the best known factoring algorithm is hard to evaluate, there is no strict consensus about the size of n for a 128-bit security level. Generally, this is around 3072 bits.
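To make the homomorphic properties above concrete, here is a minimal Python sketch of Exponential ElGamal with toy parameters (an 11-bit safe prime, a single secret key instead of a threshold sharing among trustees, and a brute-force discrete logarithm at decryption); it illustrates the addition and re-encryption identities only and is not the paper's implementation.

```python
# Toy Exponential ElGamal over a small multiplicative group (NOT secure).
# Illustrates: Enc(x) * Enc(y) = Enc(x + y), and re-encryption via Enc(0).
import random

q = 1019            # prime order of the subgroup
p = 2 * q + 1       # 2039, a safe prime
g = 4               # generator of the order-q subgroup of Z_p^*

sk = random.randrange(1, q)
pk = pow(g, sk, p)

def enc(m, r=None):
    """Exponential ElGamal: encrypt g^m, so decryption must take a dlog."""
    r = random.randrange(1, q) if r is None else r
    return (pow(g, r, p), pow(g, m, p) * pow(pk, r, p) % p)

def add(c1, c2):
    """Homomorphic addition: component-wise multiplication of ciphertexts."""
    return (c1[0] * c2[0] % p, c1[1] * c2[1] % p)

def reenc(c):
    """Re-encryption = homomorphic addition of a fresh encryption of 0."""
    return add(c, enc(0))

def dec(c):
    """Decrypt, then recover m by brute-force dlog (fine for small tallies)."""
    gm = c[1] * pow(c[0], q - sk, p) % p
    m = 0
    while pow(g, m, p) != gm:
        m += 1
    return m

assert dec(add(enc(3), enc(4))) == 7   # Enc(3) * Enc(4) decrypts to 7
assert dec(reenc(enc(5))) == 5         # re-encryption preserves the plaintext
```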
In Table 1, we estimate the number of exponentiations per second, based on a medium level of optimization, for a native implementation on a modern processor (based on OpenSSL, using RSA for Paillier emulation), and for a JavaScript implementation in a browser (based on libsodium.js and JavaScript BigInt).

**2.2** **Key elements of ElGamal-based MPC**

Our toolbox contains subroutines for both ElGamal and Paillier, but in this description we concentrate on ElGamal, since in the end we find it more suitable for e-voting. In ElGamal-based MPC, some operations seem impossible to perform efficiently, for instance comparing two encrypted integers. In order to evaluate any counting function, we will therefore restrict ourselves to manipulating encrypted bits. By the homomorphic property, dividing an encryption of 1 by a ciphertext provides an easy and cheap Not gate. The main workhorse of our toolbox is a primitive from [32] called the conditional gate, which provides an And gate. We readily deduce that a Nand gate is available, which is complete, and therefore any function can be implemented by working on encrypted bits.

**Algorithm 1: CGate**
Require: X, Y such that X, Y are encryptions of x, y ∈ {0, 1}
Ensure: Z = Enc(xy)
1. Compute Y_0 = Enc(−1) · Y², set X_0 to X
2. for i = 1 to a, authority i does:
3.   Choose r_1, r_2 ∈_r Z_q and s ∈_r {−1, 1}
4.   Compute X_i = ReEnc(X_{i−1}^s, r_1) and Y_i = ReEnc(Y_{i−1}^s, r_2)
5.   Reveal X_i, Y_i and a ZKP that X_i and Y_i are well formed
6. Each authority verifies the proofs of the other authorities
7. They collectively rerandomize X_a and Y_a into X′ and Y′
8. They collectively compute y_a = Dec(Y′)
9. Return Z = (X · X′^{y_a})^{1/2}

**Conditional gates.** A conditional gate [32] is a protocol that computes, from two encryptions of x and y, an encryption of xy. It is named this way because y needs to lie in a known binary domain. We propose the CGate protocol (Algorithm 1), adapted from [32] so that we could prove its security in the SUC framework (see Section 2.4). This protocol is the main building block of our MPC protocols, which consist of CGate protocols and homomorphic operations. Note that each participant of a CGate protocol produces a Zero-Knowledge Proof (ZKP) that guarantees that the correct computations were performed (including at steps 7 and 8, for example). These ZKPs can later form a transcript which can be used to verify the output of the protocol. Their exact description can be found in [18]. By concatenating the transcripts of all the CGate subprotocols, a transcript for verifiability can be obtained for all our MPC protocols.

**Encrypting an integer.** When ElGamal is used for a homomorphic tally, the result is an integer that is directly encrypted thanks to a natural encoding. We can still add and subtract encrypted values, but most other operations (comparison, multiplication, ...) are more difficult, or even impossible. Therefore, in our protocol we will keep intermediate integer values encrypted in the bit encoding, where each bit of the integer is separately encrypted. We denote it X^bits = (X_0, ..., X_{m−1}), where 2^m is a bound on the integer represented by X, and X_i is the encryption of the i-th bit of the binary expansion (index 0 for the least significant bit). Converting an integer in bit encoding to the natural encoding is done using the homomorphic property and the Horner scheme.
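As an illustration of the bit encoding, the following self-contained Python sketch performs the bit-to-natural conversion via Horner's scheme; for readability, ciphertexts are modeled by their plaintexts, so `mul` and `power` stand for the corresponding homomorphic operations on ciphertexts.

```python
# Sketch: bit encoding -> natural encoding via Horner's scheme. Ciphertexts
# are modeled by their plaintexts; 'mul' models homomorphic multiplication
# (addition of plaintexts) and 'power' exponentiation by a public constant
# (scalar multiplication of the plaintext). This evaluates the identity
#   X = X_{m-1}^(2^{m-1}) * ... * X_1^2 * X_0  in Horner form.

def mul(c1, c2):    # homomorphic: Enc(x) * Enc(y) = Enc(x + y)
    return c1 + c2

def power(c, k):    # homomorphic: Enc(x)^k = Enc(k * x), k a public constant
    return c * k

def bits_to_natural(x_bits):
    """x_bits = [X_0, ..., X_{m-1}], least significant bit first."""
    acc = x_bits[-1]
    for xi in reversed(x_bits[:-1]):
        acc = mul(power(acc, 2), xi)   # acc <- 2*acc + next lower bit
    return acc

# Enc(13) in bit encoding: 13 = 0b1101, LSB first
assert bits_to_natural([1, 0, 1, 1]) == 13
```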
The other direction is impossible in the ElGamal setting. However, if the Paillier scheme is used, converting from the natural to the bit encoding is still possible [33].

**2.3** **MPC toolbox**

We now present the building blocks that constitute our toolbox, such as addition, multiplication, and comparison. These building blocks can be combined to evaluate any counting function without leaking anything but the result. For each of them, we study their cost, summarized in Figure 1 of the Appendix. The computation cost is the number of exponentiations; for the communications, we distinguish between broadcasts and rounds of communication. An important piece of information is also the size of the transcript that is created during the process and that should be checked, for example by auditors, to guarantee that the result is correct. We believe that this toolbox is of independent interest and could be used in contexts beyond tally-hiding protocols. It gathers results from various domains, first on ZKPs [11,27,30,40] and MPC [8,19,32,33,28,34], but also on hardware circuits [10]. We distinguish between the functionality (e.g., addition) and the protocol that realizes it, since different options may be considered, leading to different trade-offs in terms of communications and computations. For some building blocks, we propose our own protocols, improving on existing propositions.

**Branch-free tools.** In MPC, the algorithms must be implemented in a branch-free setting, because the result of a test cannot be revealed. We consider the following conditional operations, where B is an encrypted bit.

– CondSetZero(X, B), CondSetZero^bits(X^bits, B): conditionally sets to zero, by outputting a re-encryption of X if B is an encryption of 1, or of Enc(0) otherwise. In the bit-encoding setting, each bit of X is treated separately.
– Select(X, Y, B), Select^bits(X^bits, Y^bits, B): selects according to a bit, by outputting a re-encryption of X if B is an encryption of 0, or of Y otherwise.
– SelectInd([X_i], [B_i]): selects in an array according to bits, by outputting a re-encryption of X_i for the i such that B_i is an encryption of 1. This requires that [B_i] is such that there is only one index i for which B_i is Enc(1).

The CondSetZero functionality is essentially just the CGate protocol. The other functionalities can easily be derived using the homomorphic property, as illustrated in the sketch below. If the Paillier setting is used, a more efficient realization is possible [19,34]. More details can be found in [18].
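The following plaintext-level Python sketch shows these natural derivations (ciphertexts again modeled by their plaintexts, with `cgate` standing for the conditional gate, so that + and − on plaintexts correspond to homomorphic multiplication and division of ciphertexts); the paper's exact realizations are given in [18].

```python
# Plaintext-level sketch of the branch-free tools derived from CGate.

def cgate(x, b):            # models CGate: Enc(x), Enc(b) -> Enc(x*b), b in {0,1}
    assert b in (0, 1)
    return x * b

def cond_set_zero(x, b):    # re-encryption of X if b = 1, Enc(0) otherwise
    return cgate(x, b)

def select(x, y, b):        # X if b = 0, Y if b = 1
    # homomorphically: X * CGate(Y / X, B), i.e., x + b*(y - x)
    return x + cgate(y - x, b)

def select_ind(xs, bs):     # X_i for the unique i with b_i = 1
    # homomorphically: the product of the CGate(X_i, B_i)
    return sum(cgate(xi, bi) for xi, bi in zip(xs, bs))

assert select(10, 20, 0) == 10 and select(10, 20, 1) == 20
assert select_ind([5, 7, 9], [0, 1, 0]) == 7
```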
**Arithmetic.** Thanks to the homomorphic property, additions and subtractions are easily handled in the natural encoding. However, they are more involved in the bit encoding [32]. We denote the corresponding functionalities Add and Sub. They can be implemented as one would for binary circuits. Comparison of two integers is denoted by LT. In bit encoding, it can be seen as a subtraction where only the final borrow bit is needed. Similarly, we define the Mul functionality, which can be applied to integers in bit encoding, following the schoolbook algorithm for bit-wise encoded integers. Finally, a frequent operation is to compute the sum of many encrypted binary values, typically to get the total number of votes for a given option. We call this operation Aggreg. If this is the final result before decryption, the homomorphic property is enough, but in general the result is needed in the bit-encoding format. We therefore designed a dedicated tree-based algorithm with variable precision, which improves the complexity compared to a naive approach. The costs of many variants of all of these, with different trade-offs, are given in the Appendix. We also include algorithms in the Paillier setting, for which more operations are available in the natural encoding.

**Shuffle and mixnet.** A tool of great use in our context is the verifiable shuffle [39,40], leading to mixnets. In electronic voting, the typical use of a mixnet is during the tally phase, just before decrypting all the ballots one by one. Our tally-hiding schemes actually make thorough use of shuffles, not only on the trustees' side but also on the voter's side, as shown in Section 3.

**2.4** **Security**

We consider the well-known UC framework [13] to prove security. A composable framework is particularly suitable for analyzing the security of our MPC protocols, since we provide building blocks that we combine. We actually use the composition framework from [14], which is a Simpler version of the Universally Composable framework (SUC), shown to imply UC security. Participants of a protocol P are modeled as Probabilistic Polynomial-time Turing Machines (PPT). Each of the a participants has a single input and output communication tape, and interacts with a router, which in turn interacts with an adversary A. The adversary interacts with the router and the environment Z. It can corrupt a subset C of participants of size at most t, where t ≤ a is some threshold. Non-corrupted participants are honest and follow the protocol, while corrupted participants are fully impersonated by the adversary and give away any secret they have. The process terminates when Z writes on its output tape. We denote REAL_{P,A,Z}(κ, z) the output, where κ is a security parameter and z is an arbitrary auxiliary input.

The security of the process is guaranteed by comparison with an ideal process, in which each party hands over their inputs to a trusted party T which honestly performs the desired computation. Corrupted parties may send arbitrary outputs as instructed by the adversary, and the adversary can block or delay communications with the trusted party. Intuitively, T computes some ideal function f, such as Add, but it cannot be just a function. Indeed, T additionally takes care of failure cases (for example, when too many parties return inconsistent data). We denote IDEAL_{T,S,Z}(κ, z) the output of the environment in the ideal process, when it interacts with the adversary S. Intuitively, a protocol is SUC-secure if, for every adversary A in the real process, there exists a simulator S in the ideal process such that no PPT environment Z can tell whether it is interacting with the adversary in the real process or with the simulator in the ideal process.

**Definition 1 (Secure computation [14]).** _Let P be a protocol and T some trusted party. We say that P securely computes T if, for all PPT A, there exists a PPT S such that, for all PPT Z, there exists a negligible function µ such that for all κ and all z polynomial in κ,_

|Pr(IDEAL_{T,S,Z}(κ, z) = 1) − Pr(REAL_{P,A,Z}(κ, z) = 1)| ≤ µ(κ).

All our building blocks (except shuffle and mixnets, which are handled separately) rely on CondSetZero, in the sense that they can all be derived as a composition of this function, possibly with intermediate operations using only the homomorphic property.
To compute CondSetZero, we consider the MPC protocol CGate [32] based on ElGamal, and we adapt it in order to prove, in the SUC framework, that CGate securely computes the trusted party T_CGate, which behaves as CondSetZero except when parties do not answer, in which case it returns an error. The CGate protocol also produces a transcript which acts as a ZKP that the protocol was performed correctly. The SUC security of the other building blocks then follows by composition. Actually, as detailed in [14], SUC security is not directly composable but instead requires introducing intermediate (composable) hybrid models, where participants have oracle access to some ideal trusted parties. We could prove by composition of the hybrid models that each of our building blocks securely computes its corresponding ideal trusted party. However, this would require some extra work, since our building blocks compute a re-encryption of the desired function (e.g., addition) and hence are not deterministic functions. Instead, we use a different proof strategy: we show that any composition of CGate, followed by a final decryption, is SUC-secure, which corresponds exactly to our needs when applied to tally-hiding schemes. All the precise definitions and proofs are provided in the full version of this paper [18].

### 3 Tally-hiding schemes for Condorcet-Schulze

The Condorcet approach is a popular technique to determine a winner when voters rank candidates by order of preference, possibly with equalities. A Condorcet winner is a candidate that is preferred to every other candidate by a majority of voters. More formally, we consider the matrix of pairwise preferences d, where d_{i,j} is the number of voters who strictly prefer candidate i over j. Then a Condorcet winner is a candidate i such that d_{i,j} > d_{j,i} for all j ≠ i. Such a candidate may not exist. In that case, several variants can be applied to compute the winner. We focus here on the Schulze method, used for example for Ubuntu elections [5]. It first considers by "how much" a candidate is preferred, which is reflected in the adjacency matrix a defined as

a_{i,j} = d_{i,j} − d_{j,i} if d_{i,j} > d_{j,i}, and a_{i,j} = 0 otherwise.

Then a weighted directed graph is derived from the adjacency matrix, where each candidate i is associated to a node, and there is an edge from i to j with weight a_{i,j}. This in turn induces an order relation between the candidates, obtained by comparing the "strength" of the paths between i and j. The exact algorithm can be found in [35]. Note that there may be several winners according to Condorcet-Schulze. We denote by f_Cond the function that returns the winners.

We propose several MPC implementations of Condorcet-Schulze, depending on the accepted leakage and on the load balance between the voters and the authorities. The different approaches are summarized in Table 2; a plaintext sketch of the Schulze computation itself follows the table.

**Table 2.** Leading terms of the cost of MPC implementations for Condorcet-Schulze. n: number of voters, m = ⌈log(n + 1)⌉, k: number of candidates, a: number of authorities.

| Version | Leakage | EG/P | Voters: # exp. | Authorities: # exp. | # comm. | Size of the transcript |
|---|---|---|---|---|---|---|
| [22] | privacy breach^i | EG | 5k² | 18nak² | 2 | 13nak² |
| [24]^ii,iii | ∅ | P | 5k³ | 6nak³ + (54m + 100 log m)ak³ | 4k log m | 9nk³ + (56m + 292 log m)ak³ |
| ballots as list of integers (partial MPC) | adj. matrix | EG | 8k log k | (87/2) nak² log k | 2 log k | (93/2) nak² log k |
| ballots as list of integers (full MPC) | ∅ | EG | 8k log k | (29/2) nak²(3 log k + 5m) + 174mak³ | m(m + 4k) | (31/2) nak²(3 log k + 5m) + 186mak³ |
| ballots as matrices | adj. matrix | EG | (43/2) k² | (47/2) nk² | 0 | (85/2) nk² |

^i [22] leaks, for each ballot, the number of candidates ranked at equality. In particular, who voted blank is known to everyone.
^ii [24] does not allow voters to give the same rank to several candidates.
^iii [24] originally does not take into account the cost of verifying the ZKPs from the voters.
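For reference, the following Python sketch computes f_Cond in the clear (no cryptography): it builds the pairwise preference matrix d from rank-vector ballots, derives the adjacency matrix a, and compares path strengths with a Floyd-Warshall-style widest-path computation [20,36]. The winner condition shown (never beaten in path strength, ties allowed) is one common reading of the Schulze rule; see [35] for the exact method.

```python
# Plaintext Schulze winner computation: the counting function f_Cond that the
# MPC protocols of this section evaluate on encrypted data.

def schulze_winners(ballots, k):
    """ballots: list of rank vectors c, where c[i] < c[j] means candidate i is
    preferred to candidate j (equal ranks allowed); k: number of candidates."""
    # Pairwise preference matrix: d[i][j] = #voters strictly preferring i to j.
    d = [[0] * k for _ in range(k)]
    for c in ballots:
        for i in range(k):
            for j in range(k):
                if c[i] < c[j]:
                    d[i][j] += 1
    # Adjacency matrix: a[i][j] = d[i][j] - d[j][i] if d[i][j] > d[j][i], else 0.
    a = [[d[i][j] - d[j][i] if d[i][j] > d[j][i] else 0 for j in range(k)]
         for i in range(k)]
    # Widest-path strengths via a Floyd-Warshall-style recursion.
    p = [row[:] for row in a]
    for l in range(k):
        for i in range(k):
            for j in range(k):
                if i != j:
                    p[i][j] = max(p[i][j], min(p[i][l], p[l][j]))
    # Winners: candidates never beaten in path strength.
    return [i for i in range(k)
            if all(p[i][j] >= p[j][i] for j in range(k) if j != i)]

# Three voters, three candidates (0, 1, 2); ranks are positions (0 = best).
print(schulze_winners([[0, 1, 2], [0, 2, 1], [1, 0, 2]], 3))  # -> [0]
```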
**3.1** **Ballots as matrices**

A first approach is to encode the vote as a preference matrix m. For each candidate i, let c_i be its rank, possibly with equalities. Then m_{i,j} is set to 1 if c_i < c_j, to 0 if c_i = c_j, and to −1 otherwise. The voters then encode their ballot as an encrypted preference matrix M. They also need to prove that M is well formed, that is, corresponds to a total order (with equalities). This requires, e.g., proving that if the voter prefers i over j and j over k, then she prefers i over k: (m_{i,j} = 1) ∧ (m_{j,k} = 1) ⇒ (m_{i,k} = 1), and similar relations when m_{i,j} and m_{j,k} are equal to 0 or −1.

To discharge the voter from such a proof effort, in [22] the authorities shuffle each preference matrix in blocks and then decrypt them to check that it was indeed well formed. However, this yields a privacy breach, unnoticed in [22]: for each voter, everyone learns the number of candidates placed at equality. In particular, everyone learns who voted blank, since in that case all candidates are placed at equality. A costly way to repair [22] is to let the voters prove the relations with a ZKP, at a cost of O(k³) exponentiations to build and to check a ballot, where k is the number of candidates. This is the approach of [24], which also assumes that voters do not place candidates at equality (the case c_i = c_j is forbidden).

We propose an alternative approach in O(k²) exponentiations for both the voter and the verifier. Assume first that a voter prefers candidate 1 over candidate 2, who is preferred over candidate 3, and so on. Then the corresponding preference matrix is m^init. We consider a fixed encryption M^init of this matrix, where E_α denotes the ElGamal encryption of α with "randomness" 0. Everyone can check that M^init is formed as prescribed, at no cost, since we use a constant "randomness":

m^init =
(  0   1   1  ···   1 )
( −1   0   1  ···   1 )
(  ⋮        ⋱        ⋮ )
( −1  −1  ···  −1   0 )

with M^init_{i,j} = E_1 if i < j, E_0 if i = j, and E_{−1} otherwise.

Assume now that a voter wishes to rank the candidates in some order, which is a permutation σ of 1, 2, ..., k. Then the voter can simply shuffle M^init using σ. The associated proofs of a shuffle guarantee that the resulting matrix is indeed a permutation of M^init, hence is well formed. Interestingly, the secret vote σ is not encoded in the initial matrix but in the permutation used to shuffle it. Applying [40], this requires O(k²) exponentiations for the voter. To account for candidates that have an equal rank, the voter still shuffles M^init according to a permutation σ consistent with her preference order, that is, such that σ(i) < σ(j) implies c_i ≤ c_j. But beforehand, she sends an additional vector B of encrypted bits (b_i), where b_i = 1 if candidates σ^{−1}(i) and σ^{−1}(i + 1) have equal rank, and b_i = 0 otherwise.
The voter then modifies the matrix M^init into a transformed matrix M′, using B, so that M′ corresponds to her preference matrix. The resulting cost is still in O(k²) (since k² coefficients need to be updated), instead of O(k³) for [24] (which, moreover, does not consider equalities). Then the (encrypted) adjacency matrix can be computed by simply multiplying all ballots. This matrix is then (provably) decrypted by the authorities, and Condorcet-Schulze as well as many variants can be applied. The main cost for the authorities lies in the verification of the proofs for each ballot. We could also avoid leaking the adjacency matrix by computing the Condorcet-Schulze winner(s) in MPC. However, the cost for the authorities would then be in O(k³). If this is considered affordable, then we can further alleviate the burden on the voters, as we explain now.

**3.2** **Ballots as lists of integers**

To minimize computations on the voter's side, we simply ask voters to encrypt the list of integers (c_i) representing their preferences. In the ElGamal setting, we directly use the bit representation of each integer and encrypt each bit separately. If there are k candidates, we need log k bits to encode each candidate; hence a ballot will contain k log k ciphertexts, together with ZKPs which prove that they encrypt only 0 or 1. This is to be compared with the k² encryptions when ballots are encoded as a preference matrix. To apply the Schulze method, the authorities transform each ballot back into a preference matrix. We consider the positive preference matrix, obtained from the preference matrix by setting negative coefficients to 0. If C_i denotes the encryption of c_i, then the encrypted positive preference matrix M is computed by the authorities as M_{i,j} = LT^bits(C_i, C_j). Summing up the (encrypted) matrix M_v for each voter v, we obtain the (encrypted) pairwise positive preference matrix D. Then the authorities can apply the Schulze method in MPC from D, which can be implemented from the Floyd-Warshall algorithm [20,36]. Indeed, the latter mostly consists of computations of min/max, and translates into an MPC algorithm using the building blocks presented in Section 2. We denote by P_Cond the corresponding MPC protocol.

The advantage of this solution is that the load for voters remains minimal, with O(k log k) exponentiations in total. However, for the authorities, transforming each ballot into a preference matrix costs O(k² log k) per voter, while computing the Floyd-Warshall algorithm requires O(k³) exponentiations. To summarize, when the numbers of candidates and voters remain reasonable, it is actually possible to compute the Condorcet winners with no leakage. Interestingly, the costly operations performed by the trustees can be done on the fly, while voters submit their ballots. Note that unless the number of candidates is really large w.r.t. the number of voters, a fully tally-hiding scheme is not really more expensive than schemes leaking the adjacency matrix.

**Security.** We denote by T_Cond the trusted party that implements f_Cond in the SUC framework. We show that P_Cond securely computes T_Cond (proof in [18]).

**Theorem 1.** _P_Cond securely computes T_Cond under the DDH assumption in the random oracle model (ROM)._

**3.3** **Implementation**

In order to validate our approach, we have written a prototype implementation. In the literature, most such prototypes are based on Paillier encryption.
Here, we concentrate on the ElGamal setting, in order to evaluate its practical feasibility. The libsodium library is used for randomness and all elliptic curve and hashing operations. The rest is implemented as a standalone C++ program. It is available as a companion artifact of this paper [4] and is published as free software. Most of the primitives of our toolbox have been implemented, and as a proof of concept, we have written a fully tally-hiding protocol for Condorcet-Schulze (ballots as lists of integers, no leakage, in Table 2).

We ran our software on various sets of parameters. In order to compare with [24], we also consider 3 trustees (and no threshold). Our experimental setting is a single server hosting two 16-core AMD EPYC 7282 processors and 128 GB of RAM. Each of the 3 trustees runs 4 computing threads and a few scheduling and I/O threads. The communication between the trustees is emulated via the loopback network interface. Thus, all the network system calls are indeed performed by the program, even though this is just a simulation. The verification of the validity of the ballots is a non-MPC computation that takes negligible time compared to the tally. In Table 3, we summarize the cost in terms of wall-clock time and the size of the transcript, as measured by the program.

**Table 3.** Benchmark (wall-clock time and transcript size) of the fully tally-hiding Condorcet-Schulze MPC computation.

| voters | 5 candidates | 10 candidates | 20 candidates |
|---|---|---|---|
| 64 | 1m50s / 49 MB | 8m30s / 0.30 GB | 45m / 1.8 GB |
| 128 | 2m40s / 87 MB | 12m / 0.51 GB | 1h27m / 2.9 GB |
| 256 | 4m35s / 160 MB | 20m / 0.88 GB | 2h37m / 4.8 GB |
| 512 | 8m10s / 305 MB | 34m / 1.6 GB | 4h43m / 8.6 GB |
| 1024 | 15m / 595 MB | 1h05m / 3.1 GB | 8h50m / 16 GB |

This experiment demonstrates that the approach is sound and within the realm of practicality for moderate-sized elections. With this choice of ballot representation, which is very cheap from the voter's point of view, the aggregation of the preference matrices has to be done in MPC, and therefore the cost for the trustees grows quasi-linearly in the number of voters. Therefore, at some point, the approach of [24] using Paillier encryption becomes preferable, since the aggregation comes for free and the MPC cost is essentially independent of the number of voters. Still, their benchmark gives more than 9 days of MPC computation for tallying a 20-candidate Condorcet-Schulze election, which is more than what we report for 1024 voters.

### 4 Other Counting Methods

We also provide fully leakage-free tally protocols for D'Hondt, Majority Judgment, and Single Transferable Vote. We survey our findings and encodings for each counting function. Full details are available in [18]. In particular, we prove that our tally protocols are SUC-secure by providing analogs of Theorem 1.

**4.1** **Single vote**

A first class of counting functions applies to the case where voters simply select some candidate(s). The typical way to determine the s winners is to count the number of votes for each candidate and select the s candidates with the most votes. This is the case covered by Ordinos [25], which however suffers from a shortcoming in case of equalities: it may return more winners than the number of seats. We correct this, and we show that it is possible to rely on ElGamal, thanks to an adapted algorithm. This lowers the size of a ballot for voters at a higher cost for the authorities, which can be preferable in practice.
Things get more complex when voters select a candidate list instead of a single candidate. Indeed, the seats need to be shared among the candidates of the different lists, according to the number of votes received. One popular technique is the D'Hondt method, which is used in practice for politically binding elections. We extend the approach initiated by Ordinos to the case of D'Hondt, building on two main ideas: the use of a more advanced algorithm and a more efficient comparison primitive, inspired by circuits. In this case, ElGamal is a key ingredient for designing a practical tally-hiding scheme. The cost analysis is displayed in Figure 2 of the appendix.

**4.2** **Majority Judgment**

Majority Judgment (MJ) [7] is a method in which candidates are each given a grade, such as Excellent, Good, Poor, etc. The candidates are then compared based on the sequence formed by their median grades, i.e., the median grade, then the median obtained when the median grade is removed, and so on; a plaintext sketch of this comparison appears below. MJ was recently used by more than 400,000 voters in a French primary election [2]. In [12], an MPC protocol is proposed to realize MJ, but we discovered that it only implements a simplified version, called the majority gauge. When the majority gauge returns a winner, it is indeed an MJ winner, but in small elections there is a rather high probability that the simplified algorithm does not provide any result. For example, in an election with 100 voters, [12] can fail with probability 20% [18], which not only is inconvenient (imagine an election that must be canceled because no winner is declared!) but also leaks some information (namely, that there is no winner according to the majority gauge).

To repair the approach, one issue is that the complexity of the MJ algorithm depends (linearly) on the number of voters, which may be large. Hence, [7] devises an alternative (complex) algorithm that no longer depends on the number of voters. We propose a variant of this algorithm and use it as a basis to derive a tally-hiding procedure. Our algorithm has a complexity similar to [12], even though they implement a much simpler algorithm. We then show that it is possible to adapt our algorithm to ElGamal encryption. Interestingly, the format remains unchanged for the voter (hence the resulting ballot is even easier to compute). The resulting computational costs are displayed in Figure 3 in the appendix. This is a good example where working with bit-encoded integers allows performing all the needed operations in MPC. The load for the trustees increases, but our study shows that it remains reasonable, since the extra operations are more or less compensated by the fact that computations are faster in ElGamal.
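As a plaintext illustration of the MJ comparison, the following Python sketch implements the naive, voter-count-dependent formulation (iteratively remove the current lower median and compare the resulting grade sequences); it is not the voter-independent variant of [7] used in the tally-hiding procedure, and the exact tie-breaking rule of MJ is given in [7].

```python
# Plaintext Majority Judgment, naive formulation: iteratively remove the
# current (lower) median grade and compare the sequences lexicographically;
# higher grades are better.

def mj_key(grades):
    """Sequence of successive lower medians of a grade list."""
    remaining = sorted(grades)
    key = []
    while remaining:
        median = remaining[(len(remaining) - 1) // 2]  # lower median
        key.append(median)
        remaining.remove(median)
    return key

def mj_winner(candidates):
    """candidates: dict mapping candidate -> list of grades (e.g., 1..6)."""
    return max(candidates, key=lambda c: mj_key(candidates[c]))

votes = {"A": [5, 4, 4, 2, 1], "B": [6, 5, 4, 3, 1]}
print(mj_key(votes["A"]))   # [4, 2, 4, 1, 5]: first median 4, then tie-breaks
print(mj_key(votes["B"]))   # [4, 3, 5, 1, 6]: same first median, higher second
print(mj_winner(votes))     # B
```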
**4.3** **Single Transferable Vote**

In Single Transferable Vote (STV), each voter must give a strict ordering of a subset of candidates. The count consists of several rounds, during which each ballot grants a (weighted) number of votes to its first candidate. If a candidate has more votes than a quota, she is elected, and any exceeding votes are transferred to the next candidate in each ballot (i.e., the weight of the ballot is multiplied by a transfer coefficient and the candidate is removed from all ballots). Otherwise, the candidate with the fewest votes is eliminated. Many variants of STV exist, depending on the way in which the votes are transferred. We took advice from Australian academics to choose an ideal version of STV, which is easy to analyze.

We discovered that even without any cryptography, the ideal STV algorithm is exponential and far from being practical. The reason is that the numerators and denominators of the fractions grow exponentially with the number of seats. On real election data from the New South Wales election in Australia [3], it would take about one month on a personal computer to compute the result, and about 30 GB of main memory to store all the fractions. Given that ideal STV cannot be efficiently computed even in the clear, we considered a variant with rounding. In [37,9], three techniques are given to compute the STV winners, all with some leakage. Note that [37] computes the ideal STV (with no rounding), probably because the authors did not realize that it would quickly become impractical. [31,24] cover the particular case where only one candidate is elected (IRV). Note that [24] uses a naive encoding of the possible choices: if there are c candidates, they view the c! possible orders as c! possible "candidates" from which a voter makes a selection, yielding a ballot of super-exponential size, while the ballot size is O(c²) in [31]. We propose a fully tally-hiding algorithm for STV, with no leakage, at a cost similar to [37,9], as displayed in Figure 4 in the appendix. To keep the cost reasonable, we reused techniques from hardware circuits to implement the arithmetic functions efficiently.

### 5 Application to e-voting security

We show that our tally-hiding schemes can be used for e-voting, preserving vote secrecy and verifiability. We consider a mini-voting scheme, TH-voting, where we assume that voters have an authenticated channel with the voting server. Similarly to Ordinos [25], voters simply encrypt their vote following the expected format, and the MPC protocol is used for tallying.

**5.1** **Definitions**

A voting scheme consists of four algorithms and one MPC protocol (Setup, vote, isValid, Ptally, Verify), where:

- Setup(κ, a, t) takes as input the security parameter κ, the number of authorities a, and a threshold t. It returns sk, pk, (s_i, h_i)_{i=1..a}: respectively a key pair sk, pk and the corresponding private and public shares s_i, h_i for each authority.
- vote(pk, v) takes a public key pk and a vote v, and returns a ballot.
- isValid(BB, B) takes as input a ballot B and a ballot box BB and returns a boolean that states whether B is valid w.r.t. BB.
- Ptally(a, t) = P_1, ..., P_a is an MPC protocol to compute the tally.
- Verify(r, Π, BB) takes as input a result r, a transcript Π, and a ballot box BB, and returns a boolean that states whether r is correct w.r.t. BB and Π. This check is typically run by external auditors.

In [26], a quantitative definition of privacy is proposed, where a voting system is said to be δ-private for some δ. This definition can be turned into a qualitative one when δ is shown to be minimal, in the sense that an ideal protocol achieves δ′-privacy with a negligible |δ − δ′|. Hence, a natural definition of privacy is to compare the probability of success of the adversary in a real and in an ideal protocol, and to show that the difference is negligible. Just as in [26], we consider a definition where the adversary tries to guess the vote of a single voter.
We consider a fixed set V of valid voting options and the games defined respectively in Algorithms 2 and 3; the two games differ only in steps 6, 9, and 10 (the steps involving the adversary's view and the computation of the result).

**Definition 2 (vote privacy).** _We say that a voting protocol (Setup, vote, isValid, Ptally, Verify) guarantees vote privacy w.r.t. a result function tally if, for all parameters t, a, n, n_c with t < a and n_c ≤ n, for all C ⊂ [1, a] of size at most t, and for all adversaries A, there exists an adversary B and a negligible function µ such that for all voting options v_2, ..., v_n ∈ V,_

|Pr(Real^Priv_{A,Ptally}(κ, n, n_c, a, t, C, V, v_2, ..., v_n) = 1) − Pr(Ideal^Priv_{B,tally}(κ, n, n_c, a, t, C, V, v_2, ..., v_n) = 1)| ≤ µ(κ).

**Algorithm 2: Real^Priv_{A,Ptally}**
Require: κ, n, n_c, a, t, C, V, v_2, ..., v_n
1. sk, pk, (s_i, h_i)_{i=1..a} := Setup(κ, a, t)
2. b ∈_r {0, 1}; par = pk, h_1, ..., h_a
3. v_0, v_1 := A(κ, par, (s_i)_{i∈C})
4. BB := {vote(pk, v_b)}
5. for i = 2 to n − n_c do BB := BB ∪ {vote(pk, v_i)}
6. (X_i)_{i>n−n_c} := A(BB)
7. for i > n − n_c do
8.   if isValid(BB, X_i) then BB := BB ∪ {X_i}
9. r := A ∥_{i∈[1,a]\C} P_i(s_i, par, BB)
10. b′ := A()
11. Return (b == b′) ∧ (v_0, v_1 ∈ V)

**Algorithm 3: Ideal^Priv_{B,tally}**
Require: κ, n, n_c, a, t, C, V, v_2, ..., v_n
1. sk, pk, (s_i, h_i)_{i=1..a} := Setup(κ, a, t)
2. b ∈_r {0, 1}; par = pk, h_1, ..., h_a
3. v_0, v_1 := B(κ, par, (s_i)_{i∈C})
4. BB := {vote(pk, v_b)}
5. for i = 2 to n − n_c do BB := BB ∪ {vote(pk, v_i)}
6. (X_i)_{i>n−n_c} := B()
7. for i > n − n_c do
8.   if isValid(BB, X_i) then BB := BB ∪ {X_i}
9. r := tally((Extract_sk(B))_{B∈BB})
10. b′ := B(r)
11. Return (b == b′) ∧ (v_0, v_1 ∈ V)

**5.2** **TH-voting**

We define a voting protocol V_tally for each tally function tally covered in our work (D'Hondt, Majority Judgment, Condorcet-Schulze, and STV), with P_tally the corresponding tally-hiding protocol in the ElGamal setting. The algorithm vote_tally returns an encrypted ballot following the devised encoding, together with a ZKP that the ballot is correctly formed. The algorithm isValid_tally checks the ZKP and additionally ensures that the ballot is not already on the board. As explained in Section 2, the CGate protocol produces a transcript which acts as a ZKP that the protocol was performed correctly. By concatenating the transcripts of all CGates and the transcript of the threshold decryption, the participants produce a ZKP Π that P_tally has been performed correctly. This also defines a Verify_tally algorithm, which simply consists of verifying all the ZKPs. Finally, we consider an ideal Setup(κ, a, t) that picks a group G corresponding to the security parameter κ, picks a generator g at random, and returns sk, pk, s_1, h_1, ..., s_a, h_a, where the (s_i, h_i) are distributed following Shamir's scheme with a authorities and a threshold t; sk is the corresponding secret key and pk = (g, g^sk). The setup can be further refined with a UC-secure DKG (see, e.g., [38]).
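For intuition about this Setup, here is a toy Python sketch of Shamir's threshold sharing of sk (our own helper names, toy parameters); in a real deployment the shares would be produced by a UC-secure DKG such as [38], so that no single party ever holds sk.

```python
# Toy Shamir (t, a)-threshold sharing of sk: any t+1 shares reconstruct sk,
# matching the threshold decryption setting of Section 2.1. NOT secure as-is.
import random

q = 1019  # toy prime group order

def share(sk, a, t):
    """Split sk into a shares via a random degree-t polynomial f with f(0)=sk."""
    coeffs = [sk] + [random.randrange(q) for _ in range(t)]
    def f(x):
        return sum(c * pow(x, i, q) for i, c in enumerate(coeffs)) % q
    return [(i, f(i)) for i in range(1, a + 1)]   # share s_i = f(i)

def reconstruct(shares):
    """Lagrange interpolation at 0 over t+1 shares."""
    sk = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if j != i:
                num = num * (-xj) % q
                den = den * (xi - xj) % q
        sk = (sk + yi * num * pow(den, q - 2, q)) % q   # den^{-1} mod prime q
    return sk

sk = random.randrange(q)
shares = share(sk, a=5, t=2)
assert reconstruct(shares[:3]) == sk   # any t+1 = 3 shares suffice
```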
**Theorem 2.** _Let tally be one of the previously defined tally functions (D'Hondt, Majority Judgment, Condorcet-Schulze, and STV). Assuming DDH, V_tally is private w.r.t. tally._

The proof can be found in [18]. We also prove that V_tally is verifiable, for a notion of verifiability similar to [17]. Note that the key step is the fact that our tally-hiding schemes guarantee universal verifiability: auditors can check that the result is valid. Individual verifiability is straightforward in our setting, since we implicitly assume that all voters verify their vote. How to achieve individual verifiability in practice is beyond the scope of this work.

### References

1. Condorcet Internet Voting Service (CIVS). https://civs.cs.cornell.edu/
2. The Guardian, January 30th. https://www.theguardian.com/world/2022/jan/30/peoples-primary-backs-as-taubira-as-unity-candidate-of-french-left
3. NSWEC – Election results. NSW Electoral Commission. https://pastvtr.elections.nsw.gov.au/SG1901/LC/State/preferences
4. Source code of the prototype implementation of Section 3. https://gitlab.inria.fr/gaudry/THproto
5. Ubuntu IRC council position. https://lists.ubuntu.com/archives/ubuntu-irc/2012-May/001538.html
6. Adida, B.: Helios: Web-based Open-Audit Voting. In: USENIX (2008)
7. Balinski, M., Laraki, R.: Majority Judgment: Measuring, Ranking and Electing. MIT Press (2010)
8. Bar-Ilan, J., Beaver, D.: Non-Cryptographic Fault-Tolerant Computing in Constant Number of Rounds of Interaction. In: PODC. ACM (1989)
9. Benaloh, J., Moran, T., Naish, L., Ramchen, K., Teague, V.: Shuffle-Sum: Coercion-Resistant Verifiable Tallying for STV Voting. IEEE Trans. Inf. Foren. Sec. (2010)
10. Brent, R., Kung, H.: A Regular Layout for Parallel Adders. IEEE Trans. Comp. (1982)
11. Bünz, B., Bootle, J., Boneh, D., Poelstra, A., Wuille, P., Maxwell, G.: Bulletproofs: Short Proofs for Confidential Transactions and More. In: S&P 2018 (2018)
12. Canard, S., Pointcheval, D., Santos, Q., Traoré, J.: Practical Strategy-Resistant Privacy-Preserving Elections. In: ESORICS 2018. Springer (2018)
13. Canetti, R.: Universally Composable Security: A New Paradigm for Cryptographic Protocols. In: FOCS (2001)
14. Canetti, R., Cohen, A., Lindell, Y.: A Simpler Variant of Universally Composable Security for Standard Multiparty Computation. In: CRYPTO (2015)
15. Clarkson, M.R., Chong, S., Myers, A.C.: Civitas: Toward a Secure Voting System. In: S&P (2008)
16. Cortier, V., Galindo, D., Glondu, S., Izabachene, M.: Distributed ElGamal à la Pedersen – Application to Helios. In: WPES (2013)
17. Cortier, V., Galindo, D., Glondu, S., Izabachene, M.: Election Verifiability for Helios under Weaker Trust Assumptions. In: ESORICS 2014. Springer (2014)
18. Cortier, V., Gaudry, P., Yang, Q.: A Toolbox for Verifiable Tally-Hiding E-Voting Systems. Cryptology ePrint Archive, Report 2021/491 (2021)
19. Cramer, R., Damgård, I., Nielsen, J.B.: Multiparty Computation from Threshold Homomorphic Encryption. In: EUROCRYPT 2001. Springer (2001)
20. Floyd, R.W.: Algorithm 97: Shortest Path. Commun. ACM (1962)
21. Haenni, R., Koenig, R.E., Locher, P., Dubuis, E.: CHVote System Specification. Cryptology ePrint Archive, Report 2017/325 (2017)
22. Haines, T., Pattinson, D., Tiwari, M.: Verifiable Homomorphic Tallying for the Schulze Vote Counting Scheme. In: VSTTE. Springer (2019)
23. Hazay, C., Mikkelsen, G., Rabin, T., Toft, T.: Efficient RSA Key Generation and Threshold Paillier in the Two-Party Setting. Journal of Cryptology (2019)
24. Hertel, F., Huber, N., Kittelberger, J., Küsters, R., Liedtke, J., Rausch, D.: Extending the Tally-Hiding Ordinos System: Implementations for Borda, Hare-Niemeyer, Condorcet, and Instant-Runoff Voting. In: Proceedings E-Vote-ID 2021. University of Tartu Press (2021)
25. Küsters, R., Liedtke, J., Müller, J., Rausch, D., Vogt, A.: Ordinos: A Verifiable Tally-Hiding E-Voting System. In: EuroS&P (2020)
26. Küsters, R., Truderung, T., Vogt, A.: Verifiability, Privacy, and Coercion-Resistance: New Insights from a Case Study. In: S&P (2011)
27. Lipmaa, H.: On Diophantine Complexity and Statistical Zero-Knowledge Arguments. In: ASIACRYPT 2003. Springer (2003)
28. Lipmaa, H., Toft, T.: Secure Equality and Greater-Than Tests with Sublinear Online Complexity. In: ICALP. Springer (2013)
29. Nishide, T., Sakurai, K.: Distributed Paillier Cryptosystem without Trusted Dealer. In: WISA. Springer (2010)
30. Poupard, G., Stern, J.: Security Analysis of a Practical "On the Fly" Authentication and Signature Generation. In: EUROCRYPT 1998. Springer (1998)
31. Ramchen, K., Culnane, C., Pereira, O., Teague, V.: Universally Verifiable MPC and IRV Ballot Counting. In: FC. Springer (2019)
32. Schoenmakers, B., Tuyls, P.: Practical Two-Party Computation Based on the Conditional Gate. In: ASIACRYPT 2004. Springer (2004)
33. Schoenmakers, B., Tuyls, P.: Efficient Binary Conversion for Paillier Encrypted Values. In: EUROCRYPT 2006. Springer (2006)
34. Schoenmakers, B., Veeningen, M.: Universally Verifiable Multiparty Computation from Threshold Homomorphic Cryptosystems. In: ACNS. Springer (2015)
35. Schulze, M.: A New Monotonic, Clone-Independent, Reversal Symmetric, and Condorcet-Consistent Single-Winner Election Method. Social Choice and Welfare (2011)
36. Warshall, S.: A Theorem on Boolean Matrices. Journal of the ACM (1962)
37. Wen, R., Buckland, R.: Mix and Test Counting in Preferential Electoral Systems. Tech. rep., University of New South Wales (2008)
38. Wikström, D.: Universally Composable DKG with Linear Number of Exponentiations. In: SCN. Springer (2004)
39. Wikström, D.: A Sender Verifiable Mix-Net and a New Proof of a Shuffle. In: ASIACRYPT 2005. Springer (2005)
40. Wikström, D.: A Commitment-Consistent Proof of a Shuffle. In: ACISP (2009)

### Appendix
| Functionality | Option | Algorithm | Exp. per trustee | Comm. cost | Transcript size |
|---|---|---|---|---|---|
| Dec | P/EG | Dec | 5a | B | 4a |
| RandBit | P/EG | RandBit | 3a + 2 | R | 6a |
| CSZ | EG | CGate [32] | 29a | R + 4B | 31a |
| CSZ | P | Mul [34] | 10a | 2B | 11a |
| Select | P/EG | Select | CSZ | CSZ | CSZ |
| SelectInd | P/EG | SelectInd | nCSZ | CSZ | nCSZ |
| Neg^bits | P/EG | Neg^bits | (m − 1)CSZ | (m − 1)CSZ | (m − 1)CSZ |
| Add^bits | P/EG | Add^bits [32] | (2m − 1)CSZ | (2m − 1)CSZ | (2m − 1)CSZ |
| Add^bits | sublinear, P/EG | UFCAdd^bits | m((3/2) log m + 2)CSZ | 2(log m + 1)CSZ | m((3/2) log m + 2)CSZ |
| Sub^bits | P/EG | Sub^bits | (2m − 1)CSZ | (2m − 1)CSZ | (2m − 1)CSZ |
| Sub^bits | LT, P/EG | SubLT^bits | (2m − 1)CSZ | (2m − 1)CSZ | (2m − 1)CSZ |
| Sub^bits | LT+EQ, P/EG | SubLT^bits | (3m − 2)CSZ | (2m + log m)CSZ | (3m − 2)CSZ |
| Sub^bits | sublinear, P/EG | UFCSub^bits | m((3/2) log m + 2)CSZ | 2(log m + 1)CSZ | m((3/2) log m + 2)CSZ |
| LT^bits | LT, P/EG | SubLT^bits | (2m − 1)CSZ | (2m − 1)CSZ | (2m − 1)CSZ |
| LT^bits | LT+EQ, P/EG | SubLT^bits | (3m − 2)CSZ | (2m + log m)CSZ | (3m − 2)CSZ |
| LT^bits | sublinear, P/EG | CLT^bits | (4m − 3)CSZ | 2(log m + 1)CSZ | (4m − 3)CSZ |
| LT^bits | sublinear+EQ, P/EG | CLT^bits | (5m − 4)CSZ | 2(log m + 1)CSZ | (5m − 4)CSZ |
| EQ^bits | P/EG | EQ^bits | (2m − 1)CSZ | (log m + 1)CSZ | (2m − 1)CSZ |
| EQ | precomp, P | EQH [28] | 21ma + 75a + 4(m + 1) | R + 8B | (22m + 28)a |
| GT | precomp, P | GTH [28] | (27m + 146 log m)a + 8m + 9a + 5 log m | (2R + 13B) log m | (28m + 50 log m)a + 6a |
| BinExpand | P | BinExpand [33] | 12ma + 53a + 3m | R + 2mB | (17m + 21)a |
| Aggreg^bits | EG | Aggreg^bits | 3nCSZ | (log n + 1) log n CSZ | 3nCSZ |
| Mul^bits | P/EG | Mul^bits | 3m²CSZ | 2m²CSZ | 3m²CSZ |
| Div^bits | P/EG | Div^bits | (3m − 1)rCSZ | 2mrCSZ | (3m − 1)rCSZ |
| MinMax^bits | naive, P/EG | MinMax^bits | (8m − 2)nCSZ | 2m log n CSZ | (8m − 2)nCSZ |
| MinMax^bits | sublinear, P/EG | MinMax^bits | (12m − 6)nCSZ | 2 log n (log m + 2)CSZ | (12m − 6)nCSZ |
| Mixnet | EG | [40] | (9n + 11)a + n − 6 | R | 10(n + 1)a |
| Mixnet | P | [40] | (8n + 10)a | R | 10(n + 1)a |

**Fig. 1.** Cost of various MPC primitives: basic functionalities for logic, integer arithmetic, and a few advanced functions. The Option column includes whether this is available in Paillier (P) or ElGamal (EG). The notations are a for the number of authorities, m for the bit length of the operands, n for the number of operands, and r for the precision (in the division). All logarithms are in base 2. The communication costs are expressed in terms of broadcasts (denoted B) and full rounds (denoted R). The unit of the transcript size is the key length. This corresponds to half the size of a ciphertext in both the Paillier (typically 3072 bits) and ElGamal (typically 256 bits) settings.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-3-031-17146-8_31?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-3-031-17146-8_31, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "CLOSED", "url": "" }
2,021
[ "JournalArticle" ]
false
null
[]
16,721
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/019e1093ddc45ba016fa5028e20cb8fd7cce58d7
[ "Computer Science" ]
0.869802
Trustworthy Pre-Processing of Sensor Data in Data On-chaining Workflows for Blockchain-based IoT Applications
019e1093ddc45ba016fa5028e20cb8fd7cce58d7
International Conference on Service Oriented Computing
[ { "authorId": "4757242", "name": "Jonathan Heiss" }, { "authorId": "2900492", "name": "Anselm Busse" }, { "authorId": "48987542", "name": "S. Tai" } ]
{ "alternate_issns": null, "alternate_names": [ "ICSOC", "Int Conf Serv Oriented Comput" ], "alternate_urls": null, "id": "05173752-2c8b-4d3e-b2d5-81e27c400524", "issn": null, "name": "International Conference on Service Oriented Computing", "type": "conference", "url": "http://www.icsoc.org/" }
Prior to provisioning sensor data to smart contracts, a pre-processing of the data on intermediate off-chain nodes is often necessary. When doing so, originally constructed cryptographic signatures cannot be verified on-chain anymore. This exposes an opportunity for undetected manipulation and presents a problem for applications in the Internet of Things where trustworthy sensor data is required on-chain. In this paper, we propose trustworthy pre-processing as enabler for end-to-end sensor data integrity in data on-chaining workflows. We define requirements for trustworthy pre-processing, present a model and common workflow for data on-chaining, select off-chain computation utilizing Zero-knowledge Proofs (ZKPs) and Trusted Execution Environments (TEEs) as promising solution approaches, and discuss both our proof-of-concept implementations and initial experimental, comparative evaluation results. The importance of trustworthy pre-processing and principle solution approaches are presented, addressing the major problem of end-to-end sensor data integrity in blockchain-based IoT applications.
# Trustworthy Pre-Processing of Sensor Data in Data On-chaining Workflows for Blockchain-based IoT Applications

Jonathan Heiss, Anselm Busse, and Stefan Tai
Information Systems Engineering (ISE), TU Berlin, Germany
`{jh,ab,st}@ise.tu-berlin.de`

**Abstract.** Prior to provisioning sensor data to smart contracts, a pre-processing of the data on intermediate off-chain nodes is often necessary. When doing so, originally constructed cryptographic signatures cannot be verified on-chain anymore. This exposes an opportunity for undetected manipulation and presents a problem for applications in the Internet of Things where trustworthy sensor data is required on-chain. In this paper, we propose trustworthy pre-processing as an enabler for end-to-end sensor data integrity in data on-chaining workflows. We define requirements for trustworthy pre-processing, present a model and common workflow for data on-chaining, select off-chain computation utilizing Zero-knowledge Proofs (ZKPs) and Trusted Execution Environments (TEEs) as promising solution approaches, and discuss both our proof-of-concept implementations and initial experimental, comparative evaluation results. The importance of trustworthy pre-processing and principal solution approaches are presented, addressing the major problem of end-to-end sensor data integrity in blockchain-based IoT applications.

**Keywords:** Pre-processing · Sensor Data · IoT · Blockchain · Trustworthy · On-chaining · Off-chaining · TEE · zkSNARKs · ZoKrates · SGX

## 1 Introduction

Blockchain technology is increasingly used in the Internet of Things (IoT) to store and process critical sensor data originating from and shared between multiple, often mutually distrusting parties [20,24,15,16,7,12,23]. In local energy grids with blockchain-based energy trading, for example, energy consumers and producers depend on smart-meter-generated measurement data [19,7]. In supply chains, product-related manufacturing and shipping events are written to a blockchain to provide a single source of truth for all involved, independent parties [24,23]. In healthcare, blockchain use cases exist for doctors, hospitals, and emergency services to have access to patients' health data collected by wearables [12].

However, the variety and scale of connected IoT devices and the generated data pose new challenges regarding data processing and data on-chaining. Raw sensor measurements cannot directly be used on the blockchain because of volume limitations [18] or because sensitive information may be exposed and become accessible to unintended readers [7]. Blockchains inherently have privacy and scalability limitations [6,17] that must be taken into account.

Consequently, the on-chain processing of sensor data is preceded by pre-processing steps to reduce data volume and ensure that confidential information is veiled. Such pre-processing is typically executed on intermediate, off-chain nodes as part of multi-staged data provisioning workflows [20,24,15,16,7,12]: data originates on constrained sensor nodes, then moves to more powerful gateway nodes for pre-processing, and is finally provisioned to smart contracts as aggregated information. For example, in the healthcare use case described in [12], data is pre-processed by personal computers or smartphones; in energy grids [7], by workstations located within participating households; in supply chains [24], by board computers and mobile devices.
While pre-processing has become an integral element in such data on-chaining workflows and is necessary to mitigate scalability and privacy issues, off-chain pre-processing also represents a security risk. Sensor devices typically sign their measurements to provide data integrity. However, sensor data integrity is not end-to-end: once data is pre-processed on middleboxes, signatures constructed on the input do not apply to the output anymore. Contrary to smart contract application logic, application stakeholders cannot validate off-chain processing as part of the blockchain's consensus protocol. Consequently, naive pre-processing can be exploited for malicious data manipulation without being noticed. This attack vector threatens data integrity in data on-chaining workflows and quickly calls into question the entire blockchain-based IoT system design and data quality.

To address this problem, solutions are needed to ensure trustworthy pre-processing, i.e., to make computational correctness verifiable on the blockchain. Off-chain computations have been proposed [6] to outsource blockchain transaction processing to off-chain nodes without compromising trust guarantees. Zero-Knowledge (ZK) computations and Trusted Execution Environments (TEEs) are two important approaches here that are also increasingly being used in early-adoption projects and practice [7,10,9,1]. However, using ZK computations and TEEs for trustworthy pre-processing has not been examined so far.

In the face of the rising interest in blockchain-based sensor data management and the need for end-to-end sensor data integrity, in this paper we analyze the underlying problem of trustworthy pre-processing in data on-chaining workflows, propose a model for integrity-preserving data on-chaining, and examine its practical applicability based on ZK computations and TEEs. Thereby, we make two individual contributions:

1. First, we propose a model for end-to-end sensor data integrity through trustworthy pre-processing. We characterize sensor data pre-processing in on-chaining workflows for blockchain-based IoT applications based on relevant literature. From our findings, we refine our problem statement and introduce trustworthy pre-processing as a workflow element that enables application stakeholders, through participation in the blockchain network, to verify data integrity from source to sink.

2. Second, we examine the applicability of zkSNARKs-based and Trusted Execution Environment (TEE)-based off-chain computations for our proposed model. Based on a typical application workflow, we first conceptualize how trustworthy pre-processing can be instantiated with ZoKrates [8], a toolkit for zkSNARKs-based off-chain computation, and with Intel SGX [5], Intel's realization of TEEs. Then, we implement the proposed model with both technologies as a proof of concept and present preliminary experiments in a testbed. While our results attest to the applicability of trustworthy pre-processing with both approaches, they also confirm that, in comparison, zkSNARKs provide stronger integrity guarantees (weaker trust assumptions), whereas TEEs enable more efficient off-chain pre-processing.

## 2 Pre-Processing

To lay the foundation for trustworthy pre-processing, in this section we first describe the general characteristics of pre-processing in blockchain-based IoT applications as observed in pertinent research papers. Next, we refine our problem statement and define computational integrity, based on [2].
Finally, we present a model for trustworthy pre-processing on gateway nodes for use in data on-chaining workflows that start with sensor devices and result in smart contracts.

**2.1** **Characterization**

Pre-processing in blockchain-based applications shares common objectives, input types, and functionality.

**Objectives.** In data on-chaining workflows, off-chain pre-processing helps to mitigate blockchain-inherent scalability and privacy limitations. Thereby, it pursues the following objectives:

– **Offloading Computation:** Outsource on-chain data processing to an off-chain node that is not bound to costly consensus-based transaction processing [7].
– **Reducing Storage:** Reduce the volume of sensor data to minimize the storage footprint on the blockchain [12,18].
– **Enabling Confidentiality:** Hide sensitive information contained in raw measurements or meta-data from stakeholders that do have read permissions [7,24,20].

**Inputs.** Pre-processing can be executed on different types of data. We distinguish between the following:

– **Measurements** include all data that is generated by sensor devices. This includes time series data collected over a longer period of time [21], for example temperature or location data, and event data that represents externally triggered occurrences [24], for example the scanning or opening of a container in a logistics context.
– **Meta-data** originates from the sensor device and contains descriptive information about the measurements, such as sensor identities, target storage addresses, or timestamps.
– **Auxiliary data** is added at the gateway node. Examples are filter rules, access control lists, or storage addresses.

Measurements and meta-data are critical for pre-processing and are referred to in the following as sensory data. In contrast, auxiliary data is never processed alone but is optionally used to enrich pre-processing.

**Types.** Without claiming completeness, we identify three general types of data pre-processing which can be observed in relevant applications [24,19,12,20] and which represent typical functionality for operating on sequential data¹ (illustrated by the sketch after this list):

– **Mapping:** Data is transformed into a target format, e.g., enumeration, encryption, decryption, hashing [20,24].
– **Reducing:** Data of one or multiple sensor devices is consolidated, e.g., the arithmetic average or a total amount is calculated [19].
– **Filtering:** Data is filtered according to predefined rules, e.g., only values below a predefined threshold are returned [12].

¹ https://web.mit.edu/6.005/www/fa15/classes/25-map-filter-reduce/
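A minimal Python sketch of the three pre-processing types, applied to a made-up batch of temperature readings (hashing stands in for the mapping step, and the threshold plays the role of auxiliary data):

```python
# Sketch of mapping, reducing, and filtering on a batch of sensor readings.
import hashlib
import statistics

readings = [21.5, 22.1, 35.0, 21.9]       # raw sensor measurements

# Mapping: transform each reading into a target format (here, a hash digest).
mapped = [hashlib.sha256(str(r).encode()).hexdigest() for r in readings]

# Reducing: consolidate the batch into a single value (here, the average).
reduced = statistics.mean(readings)

# Filtering: keep only values below a predefined threshold (auxiliary data).
threshold = 25.0
filtered = [r for r in readings if r < threshold]

print(mapped[0][:16], reduced, filtered)  # digest prefix, 25.125, [21.5, 22.1, 21.9]
```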
**2.2 Problem Refinement**

Data provisioning is often controlled by one of the stakeholders, e.g., shippers in supply chains [24,15] or producers in energy markets [7]. Stakeholders may have a personal, often economically motivated interest in manipulating the data, e.g., in cooling chains to prevent contractual penalties if perishable freight has spoiled, or to improve accounting positions. Given such motives, we assume data-providing stakeholders to be potential attackers.

In data on-chaining workflows, data can take three states: it is in transit when it is transmitted from one component to another, it is at rest when it is persisted on disk, and it is in use when it is processed in memory. During the states in transit and at rest, data integrity and authenticity can be verified using cryptographic signatures. However, when data is processed, it is transformed, and signatures constructed on the input do not apply to the output anymore. Furthermore, off-chain pre-processing cannot be validated by stakeholders through the consensus mechanism. An attacker could selfishly execute different functions on the data to manipulate the output and obtain a personal benefit without being noticed. Therefore, we assume manipulation of computation as the potential attack.

**2.3 Computational Integrity**

As a first step towards trustworthy pre-processing, we characterize computational integrity. We adopt the model proposed in [2]. A pre-processing program P is executed on input data D and some auxiliary data A and returns output O such that P(D, A) → O.

A malicious executor may benefit from creating a manipulated program P′ such that P′(D, A) → O′ with O′ ≠ O. For example, in the supply chain use case, a shipper executes a threshold check P on temperature measurements D using the threshold A. If the shipper knows that the outcome O triggers a contractual penalty, but O′ does not, it may change P to P′ to obtain O′ instead of O. It then reports O′ to the blockchain and is exempt from the penalty. Additionally, the executor may leave the program P unchanged but manipulate the input data D such that P(D′, A) → O′ with D ≠ D′ and O′ ≠ O, or the auxiliary data A such that P(D, A′) → O′ with A ≠ A′ and O′ ≠ O.

To prevent both program and input manipulation, stakeholders should be able to verify computational integrity, which is only guaranteed if output O is computed with the right program P and on the right input data (D, A), i.e., P(D, A) → O with P ≠ P′, D ≠ D′, and A ≠ A′. Therefore, we assume that program P also generates an evidence E that asserts computational integrity such that P(D, A) → (O, E). To enable third-party stakeholders to verify computational integrity, additionally, an asymmetric key pair is required: the evidence signed with the proving key can be verified by any third party with the corresponding verification key. The evidence and the evidence key pair represent the major artefacts for trustworthy pre-processing.
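The following toy Python sketch (ours; modeled on the cooling-chain example above, with all names illustrative) makes the attack model concrete: without an evidence E, the manipulated program P′ is indistinguishable on-chain from the honest P.

```python
# Attack model of Section 2.3: P is the honest threshold check, P_prime a
# manipulated variant a shipper might substitute to avoid a penalty.

def P(D, A):
    """Honest program: count measurements violating threshold A."""
    return sum(1 for d in D if d > A)

def P_prime(D, A):
    """Manipulated program: always reports zero violations."""
    return 0

D = [3.1, 9.4, 10.2]     # temperature measurements (input data)
A = 8.0                  # auxiliary data: contractual threshold

O = P(D, A)              # honest output: 2 violations -> penalty
O_prime = P_prime(D, A)  # manipulated output: 0 violations -> no penalty
assert O != O_prime      # without an evidence E, the chain only ever sees O_prime
```

The evidence E closes exactly this gap: it binds the reported output to the right program and the right inputs, so that substituting P_prime, D′, or A′ becomes detectable by any verifier holding the verification key.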
**2.4 End-to-End Data Integrity**

Given that integrity of data can be verified while it is in use, we can define a data on-chaining workflow where integrity is verifiable from its source on the sensor node to its sink on the smart contract, as depicted in Figure 1. Note that instead of a simple signature, verifiable evidence is provided to the blockchain that allows data integrity verification with moderate computational overhead in the blockchain network.

Fig. 1: End-to-End Data Integrity through Trustworthy Pre-Processing

**One Time Setup.** During an initial one-time setup, central system artifacts are generated and deployed on the system components. Given that these artifacts are critical to verify computational integrity, we assume a trusted setup where each stakeholder can verify the integrity of the artifacts. It consists of three steps: As a first step (1. Integrity Assertion), an environment is established that enables the gateway node to generate verifiable evidence of computational integrity as an accompanying artefact of the pre-processing outputs. This includes the integrity of sensory and auxiliary inputs. Examples of such environments are mathematical constraint systems [8] or trusted execution environments [5], as will be described in the subsequent section. Next (2. Key Generation), two key pairs are required: an evidence key pair, consisting of a proving and verification key for signing and verifying the evidence, and a sensor key pair, represented as a cryptographic public and private key that is used to sign and verify the sensor data on the sensor node and the gateway node, respectively. As the last setup step (3. Deployment), all artefacts are deployed: The gateway node is equipped with the sensor node's public key, the integrity-preserving pre-processing program, the proving key, and optionally auxiliary data. The smart contract receives the verification key that enables evidence verification.

**Recurring Operations.** Sensory data arrives recurringly at the gateway node in regular intervals, e.g., batches of time series data, or in irregular intervals, e.g., externally triggered events. Then (4. Pre-Processing), the pre-processing program takes the signed sensory data, the sensor's public key, and optionally auxiliary data as inputs and executes the following steps: (a) The sensory inputs' signature is verified with the sensor device's public key. (b) Pre-processing functions are executed on the verified inputs. Examples are provided in Section 2.1. (c) An evidence is created and signed with the gateway's proving key. The evidence enables the smart contract to verify computational integrity. Outputs and signed evidence are transmitted to the smart contract through the blockchain node. The smart contract verifies the evidence using the verification key (5. Verification). Successful verification on the blockchain enables application stakeholders to independently verify that the integrity of sensor data has been preserved from source to sink despite intermediate pre-processing. Pre-processing outputs can be consumed through participating blockchain nodes and used for subsequent processing.

## 3 Application

For trustworthy pre-processing to become easily applicable in practice, technologies are required that enable on-chain verifiability of computational integrity and that can implement the pre-processing characteristics described in Section 2.1.

Fig. 2: Off-chain Computation Technologies according to [6]

**3.1 Technologies for Trustworthy Pre-processing**

Off-chain computation has been proposed to mitigate privacy and scalability limitations of blockchains by outsourcing computation to off-chain nodes without compromising core blockchain properties [6,17]. Thereby, it represents a matching concept for trustworthy pre-processing. However, the different approaches to off-chain computation presented in [6] and depicted in Figure 2 are not equally suitable. Both incentive-based and sMPC-based approaches require multiple nodes that execute non-trivial protocols. However, in data on-chaining applications in the IoT [12,15,24,7,20], pre-processing is typically executed on a single node with limited networking and storage capacity. Under such a constraint, the distributed computation model and interactive nature of incentive- and sMPC-based approaches may be inconsistent with use-case-specific requirements, which restricts general applicability. In contrast, zero-knowledge and enclave-based approaches can be executed non-interactively on a single node and, hence, promise broader applicability for trustworthy pre-processing.

**3.2 ZkSNARKs-based Pre-Processing with ZoKrates**

Zero-knowledge proofs enable a prover to convince a verifier that it has correctly executed a computation without revealing inputs to the verifier.
zkSNARKs can be summarized as one type of zero-knowledge protocol that is distinguished by succinctness, i.e., the resulting artefacts are small in size and can be verified fast; non-interactivity, i.e., only one message is required to convince the verifier; and argument of knowledge, i.e., the prover is able to prove that she has access to the correct data. ZoKrates [8] provides a toolbox and a higher-level language to implement a zkSNARKs proving system where an off-chain prover can convince an on-chain verifier that the computation has been executed correctly. To describe ZoKrates-based pre-processing (compare Figure 3), we leverage the model presented in Section 2.4 and build upon the ZoKrates workflow described in [8].

Fig. 3: Trustworthy Pre-Processing with ZoKrates

**One Time Setup**

1. Integrity Assertion: To guarantee integrity of auxiliary data and the sensor public key, both are typed as public arguments in the ZoKrates program and, hence, are required on-chain for evidence verification. Since the verification would fail on different public inputs, their integrity can be determined on-chain. Once specified, the high-level ZoKrates code is compiled into an executable constraint system (ECS) in the ZoKrates Intermediate Representation (ZIR) format, which can be considered an extension of a Rank-1 Constraint System and enables the assertion of computational integrity: if a variable assignment is found that satisfies the defined constraints, computational integrity can be proven.

2. Evidence Key Generation: An evidence key pair is generated from a Common Reference String (CRS) [8], which enables proof creation and verification. Since the CRS allows construction of fake proofs, it must be securely disposed of after key generation. The evidence key pair is cryptographically bound to the previously generated ECS.

3. Deployment: The ECS, the evidence proving key, auxiliary data, and the sensor public key are deployed to the gateway node, which takes the role of the off-chain prover. The verification key and the verification contract are deployed to the blockchain.

**Recurring Operations**

5. Execution: The ZIR program is executed on predefined inputs through the ZoKrates interpreter. The output is called a witness, an artefact representing variable assignments that satisfy the specified constraints for a specific execution. In a separate step, the cryptographic proof is generated based on the execution-specific witness and the program-specific proving key. Finally, outputs and evidence are forwarded to the smart contract through a blockchain node.

6. Verification: The verification contract takes the cryptographic proof, the verification key, and the public program arguments as input parameters. The verification is only successful if the proof was generated with the right program and on the right (public) inputs.

**3.3 Enclave-based Pre-Processing with Intel SGX**

Fig. 4: Trustworthy Pre-Processing with Intel SGX

Enclave-based computation enables an enclave-external party to verify that an output has been computed by a specific program inside a specific enclave that protects internal integrity. Thereby, it relies on two concepts: Trusted Execution Environments and Remote Attestation.

Trusted Execution Environments (TEE) are hardware-secured parts of a system architecture that protect data and code from external manipulation and disclosure.
Programs executed inside such TEEs run in an isolated and/or encrypted memory region that cannot be accessed even at the highest privilege level of the system. Thus, a TEE protects its content from the system owner and guarantees the integrity of computation executed inside it. Intel SGX is Intel's concrete implementation of TEEs. We use the terms TEE and enclave interchangeably.

Remote Attestation enables the external verification of the integrity of the TEE's internal state and the authenticity of messages received from inside, ensuring that a malicious attacker cannot falsely pose as a trusted enclave. TEE-enabled devices have a device identity key that is embedded into the device hardware during manufacturing and can be verified by external parties through a Public Key Infrastructure (PKI). Using this key, the device creates for each instantiated TEE an identity certificate which can externally be verified through the PKI. This enables evidence key generation. When remote attestation is requested, the enclave returns signed measurements which represent a complete snapshot of the TEE's internal state. With SGX as TEE, remote attestation and the PKI are managed by Intel.

In the following, we describe pre-processing with Intel SGX as depicted in Figure 4. To achieve comparability with ZoKrates-based pre-processing, we use the same workflow model as described in Section 2.4.

**One Time Setup**

1. Integrity Assertion: To guarantee integrity of auxiliary data and the sensor public key, both must be protected through the TEE's security guarantees. Therefore, they are specified inside the enclave during implementation. Once the enclave is instantiated and loaded in memory, as a first step, remote attestation is executed to verify the enclave's internal state. The signed measurements are verified using the enclave's public key that is previously authenticated through the externally managed PKI. If the measurements match a predefined reference value that represents the ground truth of the enclave's internal state, the enclave's integrity is verified.

2. Key Generation: To verify the enclave's integrity, a unique enclave-bound key pair is required that can be authenticated from outside the enclave. This evidence key pair is used to sign program results computed inside the enclave. Given that the enclave's integrity guarantees hold, this signature enables verification of computational integrity on the blockchain. The evidence key is generated inside the enclave and can be authenticated through an externally managed PKI.

3. Deployment: The enclave's evidence public key becomes part of the verification contract, which implements the signature verification on-chain and is deployed to the blockchain. At this point, the enclave is already instantiated on the gateway node.

**Recurring Operations**

5. Execution: Sensor data is provided through the host program, which represents the only interface to the enclave. Auxiliary data and the sensor public key are already part of the enclave and, hence, protected. The program is executed as defined in Section 2.4. The computational outputs are signed with the evidence proving key.

6. Verification: The verification contract validates the signature with the evidence verification key. A successful validation proves the outputs' authenticity, i.e., they have been signed with the right proving key that is unique to the enclave, and integrity, i.e., the received outputs were computed by the right pre-processing program inside the enclave.
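As a rough illustration of this workflow, the following Python mock (ours; it runs entirely outside an enclave and substitutes Ed25519 signatures from the `cryptography` package for the SGX-internal keys, so it mimics only the data flow, not the security guarantees) traces the recurring steps 5 and 6:

```python
# Mock of the gateway flow in Fig. 4: verify the sensor's signature,
# pre-process, and sign the output with the evidence key. A real
# deployment would perform steps (a)-(c) inside an SGX enclave.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

sensor_sk = Ed25519PrivateKey.generate()    # lives on the sensor node
evidence_sk = Ed25519PrivateKey.generate()  # would be enclave-bound under SGX

def sensor_emit(values):
    payload = ",".join(map(str, values)).encode()
    return payload, sensor_sk.sign(payload)

def gateway_process(payload, signature, threshold=8.0):
    # (a) verify sensory input with the sensor's public key (raises on failure)
    sensor_sk.public_key().verify(signature, payload)
    # (b) pre-process on the verified inputs: count threshold violations
    count = sum(1 for v in payload.decode().split(",") if float(v) > threshold)
    # (c) sign the output with the evidence proving key
    output = str(count).encode()
    return output, evidence_sk.sign(output)

payload, sig = sensor_emit([3.1, 9.4, 10.2])
output, evidence = gateway_process(payload, sig)
evidence_sk.public_key().verify(evidence, output)  # the on-chain verification step
print(output)  # b'2'
```

In the real design, the last verification line is what the Solidity verification contract performs with the evidence verification key, and the trust in `evidence_sk` derives from remote attestation rather than from local generation as sketched here.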
## 4 Evaluation

Given the two conceptual workflow descriptions, in this section, we evaluate the technical feasibility of each technology.

**4.1 Implementation**

Our proof-of-concept (PoC) implementations follow the descriptions provided in Sections 3.2 and 3.3, respectively. Thereby, we focus on the recurring operations steps, execution and verification, which we consider most relevant to demonstrate feasibility. Aspects of the setup phase are discussed in Section 5. The PoC program should respect the pre-processing characteristics presented in Section 2.1. Our program mimics a threshold violation check on sensory data where the threshold represents auxiliary data. The sensory data is filtered for violations, then reduced by counting the violations, and mapped by scaling the filtered values down. The smart contract is only provided with the violation count. Thereby, the program fulfills all three objectives: computation is outsourced to an off-chain node, the data footprint is reduced in size, and the potentially sensitive sensor measurements are not published on-chain.

**ZoKrates:** For our ZoKrates-based implementation, we simulate the sensor node with a Python script that hashes the data with SHA256 and signs it with an EdDSA-based sensor key pair, which ZoKrates supports. Plain sensory data is a private input, while the data's hash, its signature, and the sensor public key are public inputs to the ZoKrates program. To verify the integrity of sensory inputs, the signature's hash input is reconstructed from the plain sensor data and compared to the hash inputs. Only if both signature verification and hash comparison are successful is integrity guaranteed. Hashing and signature verification are implemented using the ZoKrates Standard Library. Pre-processing is executed by two commands provided by the ZoKrates CLI: compute-witness, which requires the compiled program, and generate-proof, which takes the proving key and witness as inputs (a small driver sketch follows at the end of this subsection). The outputs are written to disk.

**Intel SGX:** For the SGX evaluation, we have implemented two enclaves. The first one simulates a sensor node and signs the sensory input data with an internally generated sensor key pair using the SGX-provided operations sgx_create_keypair and sgx_ecdsa_sign. The second enclave represents the gateway node that stores auxiliary data and the sensor public key internally. It verifies the sensor data with the sensor public key using the SGX operation sgx_ecdsa_verify. Evidence key pair generation and signature construction on computational outputs are realized with the same SGX commands as in the sensor enclave. The processing result and the corresponding signature are written to disk.

**Ethereum:** As blockchain technology, we chose Ethereum [26], which is widely used and finds application both as a public blockchain and as a consortium blockchain based on Proof-of-Authority consensus and non-public deployment. For each technology, a verification contract is implemented in Solidity that runs on a locally deployed Ethereum blockchain and is accessed through a Ganache blockchain client. To validate Intel SGX evidence, we build upon an existing ECDSA implementation for the Ethereum blockchain (https://github.com/tdrerup/elliptic-curve-solidity). ZoKrates proofs rely on EdDSA (twisted Edwards curve) and are verified through a dedicated verification contract that is generated with ZoKrates CLI support (https://github.com/Zokrates/ZoKrates).
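As a sketch of how the ZoKrates CLI steps named above could be driven, consider the following Python wrapper (ours; subcommand flags vary between ZoKrates versions and the file name `preprocess.zok` is hypothetical, so treat the arguments as illustrative rather than authoritative):

```python
# Driving the ZoKrates proving pipeline from Python. Each subcommand
# corresponds to one stage of the workflow described in Section 3.2.
import subprocess

def zokrates(*args):
    subprocess.run(["zokrates", *args], check=True)

zokrates("compile", "-i", "preprocess.zok")  # high-level program -> constraint system
zokrates("setup")                            # one-time: proving/verification keys
zokrates("compute-witness", "-a", "42", "8") # execute on inputs, emit witness
zokrates("generate-proof")                   # proof from witness + proving key
zokrates("export-verifier")                  # Solidity verification contract
```

The exported Solidity verifier is what, after deployment, performs step 6 (Verification) on-chain against the proof and the public program arguments.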
**4.2 Experiments**

Given our proof-of-concept implementations, we can now conduct initial experiments to obtain first practical insights into trustworthy pre-processing with zkSNARKs and TEEs. At this point, it should be noted that the experimental results strongly depend on our non-optimized PoC implementations and, hence, cannot simply be generalized.

Fig. 5: Pre-processing with ZoKrates — (a) various batch sizes with a batch count of 1; (b) various batch counts with a batch size of 1 (overall execution times on logarithmic scales).

**Experimental Setup:** For our experimental setup, we deploy our implementations on an Intel NUC-Kit NUC7PJYH with an SGX-enabled Pentium Silver J5005 CPU, 8 GB of memory, and an Ubuntu 18.04.5 LTS operating system. To construct workloads, we use smart meter measurements collected in a testbed of an energy grid research project (https://blogpv.net/) and prepare the measurements such that (1) each measurement consists of four integer values, (2) measurements are collected into batches of different sizes line-wise in plain text, and (3) each batch is signed to represent the sensor's signature. As mentioned in Section 2.1, pre-processing is typically exposed to two types of workloads: event and batch processing. To simulate this in our experimental setup, we turn two knobs: for events of different sizes, we change the input data size per execution (batch size); for batch processing, we vary the number of subsequent executions (batch count). The latter is executed on size-one batches which contain a single measurement. The computational outputs of the size-one-batch experiments are used for on-chain verification, which is measured in Gas, an Ethereum-specific metric for capturing the computational complexity of on-chain transaction processing.

**Results:** The results, summarized for ZoKrates in Figure 5 and for Intel SGX in Figure 6, show the overall execution time for off-chain pre-processing in seconds and microseconds, respectively. As expected, the execution time of zkSNARKs-based pre-processing is orders of magnitude higher than that of enclave-based pre-processing. With larger batch sizes, the execution time increases only gradually. This holds true for each technology individually, as shown in Figure 5 (a) and Figure 6 (a). Similar behaviour can be observed when increasing the batch count, as shown in Figure 5 (b) and Figure 6 (b). However, we can observe that for both ZoKrates and SGX the increase is much steeper for a growing batch count than for a growing batch size (note the different logarithmic y-scales). For this specific implementation example, this means that it is preferable to increase the amount of processed data through larger batch sizes rather than larger batch counts when possible in the actual application scenario.
W lds|4 ch tu N of ds res c er en in t t ut cu ng io w ati lt o r ing ith tru|Size Fig p F UC- Me , we earc onsis ent s sor’s Sect and wo k ion tions le m nal o hich onal s sum veral espe is larg e fo|8 s, . 5 or Kit mo u h ts iz si ion ba no (b ( ea ut is c m l e cti ord er r e|Coun : Pre our NU ry, a se sm proje of fo es lin gnat 2.1 tch bs: f atch batc sure puts me ompl ariz xecu vely. ers batc ach|16 t o -p ex C nd a ct ur e- ur , p pro or siz h me o as ex ed ti A of h te|f ro pe 7P a rt 4 in w e. re c e e co n f ur it fo on s m siz ch| ----- 10[3][.][2] 10[3][.][1] 10[3] (a) Various Batch Sizes, Count of 1 10[5] 10[4] 10[3] (b) Various Batch Counts, Size of 1 Fig. 6: Pre-processing with Intel SGX In ZoKrates-based pre-processing, the accompanying construction of cryptographic proofs represents a memory-intensive computation that correlates with the input size. The experiment for the next larger batch size of 32 measurements in ZoKrates ran out of memory during the proof-generation on the test system. Given that sensory data can quickly grow very large, the memory capacity of constrained IoT or edge devices may present a limiting factor, but may not be an issue for larger middleboxes. In contrast, Intel SGX reduces pre-processing overhead. Even though, our implementation was also memory limited regarding a batch size larger than 1024 measurements, this is just a limitation of the current SGX design that might change in the future and can be mitigated, e.g., by splitting up the processes into multiple enclaves on the same machine. Better efficiency and smaller memory consumption distinguishes Intel SGX as a suitable technology for lower IoT layers where computational resources are typically scarce. However, contrary to ZoKrates, SGX-based pre-processing requires an increased trust in the correctness of the hardware implementation and the attestation process that requires trusting Intel regarding a correct attestation. In our proof-of-concept implementation, on-chain verification costs are cheaper for ZoKrates-generated proofs (567 614 Gas) than for Intel SGX-generated signatures (1 211 443 Gas). However, since on-chain verification costs strongly depend on the implementation of respective signature algorithm our results cannot be generalized, e.g., for other blockchain technologies. |Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10|Col11| |---|---|---|---|---|---|---|---|---|---|---| |||||||||||| |||||||||||| |ar l S|1 io G|us X|32 Ba|tch|128 C|ou|256 nts|, S|512 ize|o| ## 5 Discussion While in the previous section, initial insights about the performance behavior of each technology were provided, in this section, we discuss security and trust aspects and potential extensions for trustworthy pre-processing. **Integrity and Trust Assumptions: As described in Section 2.2, pre-** processing is assumed to be executed by non-trusted stakeholders who have an incentive for data manipulation. While off-chain technologies eliminate unnoticed attacks during pre-processing, the setup phase still reveals an attack ----- surface. In Zokrates, for example, key generation must be executed in a trusted setup to guarantee that the Common Reference String is safely disposed to prevent fake proof generation. However, establishing a trusted setup for zkSNARKs is a known problem to which various approaches exist as referenced in [8]. In Intel SGX, the integrity guarantee strongly relies on the internal state of the enclave and on the authenticity of the evidence key pair. 
To preserve this guarantee, remote attestation and key authenticity must be verified through a trusted third party or by all involved stakeholders individually. Also, auxiliary data and the sensor's public key must be verified before being added to the enclave. Beyond the setup, zkSNARKs-based pre-processing does not rely on further trust assumptions, whereas enclave-based pre-processing heavily relies on a trustworthy manufacturer that ensures that private keys are kept secret and that certificates obtained from the PKI are authentic to the devices' identities. This distinguishes ZoKrates as particularly suitable for processing critical data with substantial security demands.

**Further Attacks:** Beyond our attack model described in Section 2.2, attacks on data freshness and availability must be considered. While an attacker that controls communication channels, e.g., between gateway and blockchain node, cannot compromise data integrity without being noticed (Man-in-the-Middle Attack) due to signature and evidence verification, it can, however, intercept and replay messages in a different order to impact the overall application logic (Replay Attack). To prevent this, secure timestamps or challenge-response patterns can be applied (a sketch follows at the end of this section). Furthermore, to prevent a malicious executor from compromising availability by withholding messages (Denial of Service Attack), gateway nodes can be deployed redundantly to eliminate centralization, similar to the proposal in [25].

**Multi-Stage Pre-Processing:** In multi-stage data on-chaining workflows, multiple pre-processing tasks may be executed subsequently by different non-trusted stakeholders. To verify integrity on-chain, an evidence chain must be established that allows any subsequent computation to validate the provided evidence of the previous computation. This way, end-to-end integrity could be guaranteed along arbitrarily long on-chaining workflows.

**Confidential Pre-Processing:** While this work focuses on integrity preservation, in some use cases it might be required to keep the inputs to pre-processing hidden from the executor. This can, for example, be achieved through Intel SGX, where encrypted inputs can be decrypted inside the enclave, processed, and encrypted again before being returned. Thereby, inputs and outputs would not be accessible to the executor. However, side-channel attacks must be taken into account, as they are known to extract confidential information from enclaves [4].
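A minimal sketch (ours) of the challenge-response pattern mentioned under Further Attacks; it uses an HMAC as a dependency-free stand-in for the evidence signature, and all names are illustrative:

```python
# Challenge-response against replay attacks: the verifier binds each
# report to a fresh nonce, so a replayed message carries a stale nonce.
import os, hmac, hashlib

KEY = os.urandom(32)  # stands in for the gateway's evidence key

def issue_challenge():
    return os.urandom(16)  # fresh nonce per round

def report(output: bytes, nonce: bytes):
    tag = hmac.new(KEY, output + nonce, hashlib.sha256).digest()
    return output, nonce, tag

def verify(output, nonce, tag, expected_nonce):
    if nonce != expected_nonce:  # replayed message -> stale nonce -> reject
        return False
    expected = hmac.new(KEY, output + nonce, hashlib.sha256).digest()
    return hmac.compare_digest(tag, expected)

n = issue_challenge()
msg = report(b"2", n)
assert verify(*msg, expected_nonce=n)                       # fresh report accepted
assert not verify(*msg, expected_nonce=issue_challenge())   # replay rejected
```

In the paper's setting, the same binding would be realized inside the evidence itself (e.g., by including a nonce or a secure timestamp among the signed public inputs), so that the on-chain verification contract can reject replays.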
## 6 Related Work

In this paper, we extend trustworthy data on-chaining as presented in [14] by considering data in use as an additional attack vector. Furthermore, we leverage approaches to off-chain computation presented in [6] to realize trustworthy pre-processing. Of the off-chain computation technologies proposed in [6], zkSNARKs and Trusted Execution Environments are increasingly adopted in the scientific literature on blockchain-based IoT applications.

Recently, many proposals leverage zkSNARKs for off-chain computations through ZoKrates; however, only a few intersect with blockchain-based sensor data management. While in [7] ZoKrates is applied for off-chain processing of sensor data, i.e., smart meter measurements in local energy grids, other works mainly use ZoKrates for privacy-preserving authentication, e.g., in the context of smart vehicle authentication at charging stations [11], consumer authentication for car sharing [13], or in health care for patient authentication [22].

TEEs are leveraged in various papers to implement trustworthy oracles that bridge data provisioning from off-chain data sources to smart contracts. For example, in TownCrier [27], a TEE-based oracle system is proposed to authenticate data provided by HTTPS-enabled off-chain data sources, and in [25], a distributed TEE-enabled oracle system is proposed that improves availability. Beyond scientific usage, e.g., ChainLink (https://chain.link/) works on a solution to implement these concepts for practical usage [3]. While the main focus of these proposals lies in data provisioning, other works instead use TEEs for sensor data management. In [9], for example, a system is proposed that employs TEEs for intermediate processing of sensory data before it is forwarded to the blockchain and the cloud. The authors of [1] use TEEs for trustworthy access management of sensor data in hybrid storage systems where off-chain storage holds encrypted sensor data and the blockchain stores its hashes and access logs. While these proposals do not apply pre-processing as defined in this paper, they underline the need for a systematization of trustworthy pre-processing that we aim to provide with our contributions.

## 7 Conclusion

End-to-end sensor data integrity is critical to many blockchain-based IoT applications. Data on-chaining workflows accordingly require pre-processing on off-chain nodes to be trustworthy. In this paper, we explored the use of zkSNARKs- and TEE-based computations for trustworthy pre-processing, first, as individual candidate technologies that require non-trivial set-ups for integration into data on-chaining workflows, and second, through a preliminary, comparative experimental evaluation based on two proof-of-concept implementations. We conclude that each presents an important approach that (a) can conceptually be well integrated into the respective workflows and (b) satisfies the requirements and primary objective of end-to-end data integrity. Our proof-of-concept implementations use current, state-of-the-art software, and, since both zero-knowledge proofs and TEEs are very active areas of research, our implementations and the experimental findings must be seen as preliminary. We expect rapid advances regarding the used software stacks and current constraints regarding memory limitations, and, consequently, performance numbers to change. Still, a principal performance gap and performance advantage of TEEs over zkSNARKs is expected to remain. However, as discussed in this paper, the choice of an approach and technology will also depend on other, non-performance criteria like the integrity and trust assumptions or the existing attack vectors of the specific IoT application under consideration. Future work will address extensions of the proposed model regarding its computational scalability through parallel execution and its applicability to stream processing.

## References

1. Ayoade, G., Karande, V., Khan, L., Hamlen, K.: Decentralized IoT data management using blockchain and trusted execution environment. In: 2018 IEEE International Conference on Information Reuse and Integration (IRI), pp. 15–22 (2018)
2. Ben-Sasson, E., et al.: Computational integrity with a public random string from quasi-linear PCPs. In: Advances in Cryptology – EUROCRYPT 2017, pp. 551–579 (2017)
3. Breidenbach, L., Cachin, C., Chan, B., Coventry, A., Ellis, S., Juels, A., Koushanfar, F., Miller, A., Magauran, B., Moroz, D., et al.: Chainlink 2.0: Next steps in the evolution of decentralized oracle networks (2021)
4. Bulck, J.V., et al.: Foreshadow: Extracting the keys to the Intel SGX kingdom with transient out-of-order execution. In: 27th USENIX Security Symposium (USENIX Security 18), pp. 991–1008. USENIX Association, Baltimore, MD (Aug 2018)
5. Costan, V., Devadas, S.: Intel SGX explained. IACR Cryptol. ePrint Arch. 2016, 86 (2016), http://eprint.iacr.org/2016/086
6. Eberhardt, J., Heiss, J.: Off-chaining models and approaches to off-chain computations. In: Proceedings of the 2nd Workshop on Scalable and Resilient Infrastructures for Distributed Ledgers, SERIAL'18, ACM (2018)
7. Eberhardt, J., Peise, M., Kim, D.H., Tai, S.: Privacy-preserving netting in local energy grids. In: 2020 IEEE International Conference on Blockchain and Cryptocurrency (ICBC), pp. 1–9 (2020)
8. Eberhardt, J., Tai, S.: ZoKrates - scalable privacy-preserving off-chain computations. In: IEEE International Conference on Blockchain (2018)
9. Enkhtaivan, B., Inoue, A.: Mediating data trustworthiness by using trusted hardware between IoT devices and blockchain. In: 2020 IEEE International Conference on Smart Internet of Things (SmartIoT), pp. 314–318 (2020)
10. Gabay, D., Akkaya, K., Cebe, M.: A privacy framework for charging connected electric vehicles using blockchain and zero knowledge proofs. In: 2019 IEEE 44th LCN Symposium on Emerging Topics in Networking (LCN Symposium), pp. 66–73 (2019)
11. Gabay, D., Akkaya, K., Cebe, M.: Privacy-preserving authentication scheme for connected electric vehicles using blockchain and zero knowledge proofs. IEEE Transactions on Vehicular Technology 69(6), 5760–5772 (2020)
12. Griggs, K.N., Ossipova, O., Kohlios, C.P., Baccarini, A.N., Howson, E.A., Hayajneh, T.: Healthcare blockchain system using smart contracts for secure automated remote patient monitoring. Journal of Medical Systems 42, 1–7 (2018)
13. Gudymenko, I., et al.: Privacy-preserving blockchain-based systems for car sharing leveraging zero-knowledge protocols. In: 2020 IEEE International Conference on Decentralized Applications and Infrastructures (DAPPS), pp. 114–119 (2020)
14. Heiss, J., Eberhardt, J., Tai, S.: From oracles to trustworthy data on-chaining systems. In: IEEE International Conference on Blockchain (2019)
15. Helo, P., Shamsuzzoha, A.: Real-time supply chain — a blockchain architecture for project deliveries. Robotics and Computer-Integrated Manufacturing 63, 101909 (2020)
16. Huang, S., Wang, G., Yan, Y., Fang, X.: Blockchain-based data management for digital twin of product. Journal of Manufacturing Systems 54, 361–371 (2020)
17. Eberhardt, J., Tai, S.: On or off the blockchain? Insights on off-chaining computation and data. In: ESOCC 2017: 6th European Conference on Service-Oriented and Cloud Computing (2017)
18. Kurt Peker, Y., Rodriguez, X., Ericsson, J., Lee, S.J., Perez, A.J.: A cost analysis of Internet of Things sensor data storage on blockchain via smart contracts. Electronics 9(2) (2020)
19. Peise, M., Kuhlenkamp, J., Busse, A., Eberhardt, J., Ulbricht, M.R., Tai, S., Baus, J., Kassebaum, M., Zörner, T.: Blockchain-based local energy grids: Advanced use cases and architectural considerations. In: IEEE 18th ICSA-C, pp. 130–137 (2021)
20. Putz, B., Dietz, M., Empl, P., Pernul, G.: EtherTwin: Blockchain-based secure digital twin information management. Information Processing & Management 58(1) (2021)
21. Shafagh, H., Burkhalter, L., Hithnawi, A., Duquennoy, S.: Towards blockchain-based auditable storage and sharing of IoT data. In: Proceedings of the 2017 Cloud Computing Security Workshop, pp. 45–50 (2017)
22. Sharma, B., Halder, R., Singh, J.: Blockchain-based interoperable healthcare using zero-knowledge proofs and proxy re-encryption. In: 2020 International Conference on COMmunication Systems & NETworkS (COMSNETS), pp. 1–6 (2020)
23. Sigwart, M., Borkowski, M., Peise, M., Schulte, S., Tai, S.: Blockchain-based data provenance for the Internet of Things. In: Proceedings of the 9th International Conference on the Internet of Things (2019)
24. Sund, T., Lööf, C., Nadjm-Tehrani, S., Asplund, M.: Blockchain-based event processing in supply chains — a case study at IKEA. Robotics and Computer-Integrated Manufacturing 65, 101971 (2020)
25. Woo, S., Song, J., Park, S.: A distributed oracle using Intel SGX for blockchain-based IoT applications. Sensors 20(9) (2020)
26. Wood, G.: Ethereum: A secure decentralised generalised transaction ledger. Ethereum Project Yellow Paper (2014)
27. Zhang, F., Cecchetti, E., Croman, K., Juels, A., Shi, E.: Town Crier: An authenticated data feed for smart contracts. In: Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (2016)
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2110.15869, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/2110.15869" }
2,021
[ "JournalArticle", "Conference" ]
true
2021-10-29T00:00:00
[ { "paperId": "a42d6276c42f9c744c7e03cc434878e7486bfe6c", "title": "Blockchain-based Local Energy Grids: Advanced Use Cases and Architectural Considerations" }, { "paperId": "8d7522f9fa1b7649224d47ca9554534f27c44223", "title": "Blockchain-based event processing in supply chains - A case study at IKEA" }, { "paperId": "eeb429a558c4b8cefe4a7be7f647a1b44cdcae47", "title": "Mediating Data Trustworthiness by Using Trusted Hardware between IoT Devices and Blockchain" }, { "paperId": "e68614acf883819efe901ea22ffb525ec44814d6", "title": "Privacy-Preserving Blockchain-Based Systems for Car Sharing Leveraging Zero-Knowledge Protocols" }, { "paperId": "01728534a79df753dcb18d40bf02e35b2b3e0a95", "title": "Real-time supply chain - A blockchain architecture for project deliveries" }, { "paperId": "b1b7e159c94bfe29b8333ed9ca7bd75fc92f8270", "title": "Privacy-Preserving Netting in Local Energy Grids" }, { "paperId": "ba91f44b7b9b9d8cf7c2c4d9fc218f4542a38cb1", "title": "A Distributed Oracle Using Intel SGX for Blockchain-Based IoT Applications" }, { "paperId": "4e5a0594e5c35a37df1818cd9c0fdbfac968fdc4", "title": "Privacy-Preserving Authentication Scheme for Connected Electric Vehicles Using Blockchain and Zero Knowledge Proofs" }, { "paperId": "375125029b085e70a109491656b69aa01bc2a166", "title": "A Cost Analysis of Internet of Things Sensor Data Storage on Blockchain via Smart Contracts" }, { "paperId": "be67f5374b0452238c104a1e9b1250a52c418ecb", "title": "Blockchain-based Interoperable Healthcare using Zero-Knowledge Proofs and Proxy Re-Encryption" }, { "paperId": "655292601c44af63fed65ddb2a55174e9a5a9b92", "title": "A Privacy Framework for Charging Connected Electric Vehicles Using Blockchain and Zero Knowledge Proofs" }, { "paperId": "ea72cc627bb7131ff06e9b03c06d2ef6ca4580c7", "title": "From Oracles to Trustworthy Data On-Chaining Systems" }, { "paperId": "09f9702e9960539fb77e174e7f574caa7e2471d8", "title": "Blockchain-based Data Provenance for the Internet of Things" }, { "paperId": "094e1f0434195c6fa8f3df6e7adad1f3b67ba801", "title": "Off-chaining Models and Approaches to Off-chain Computations" }, { "paperId": "721ba8f893ea219617f4ab4607abbf7d2cee0d54", "title": "Foreshadow: Extracting the Keys to the Intel SGX Kingdom with Transient Out-of-Order Execution" }, { "paperId": "5b9a39b51df3d724609af6c888189a713aeac000", "title": "ZoKrates - Scalable Privacy-Preserving Off-Chain Computations" }, { "paperId": "9727206903eb40d4fa42606711bad3402f2ba9aa", "title": "Decentralized IoT Data Management Using BlockChain and Trusted Execution Environment" }, { "paperId": "6d661299a8207a4bff536494cec201acee3c6c1c", "title": "Healthcare Blockchain System Using Smart Contracts for Secure Automated Remote Patient Monitoring" }, { "paperId": "b0360261dd5d95c698f20212c23c4d31173a3055", "title": "On or Off the Blockchain? 
Insights on Off-Chaining Computation and Data" }, { "paperId": "24747c62e558cad008e9a65c9a7e2d463cd9f2de", "title": "Towards Blockchain-based Auditable Storage and Sharing of IoT Data" }, { "paperId": "94ab98fa1754b8e7d841ff3546b29675a8e3dbb0", "title": "Computational Integrity with a Public Random String from Quasi-Linear PCPs" }, { "paperId": "48f5c490f1d3b875894fc274d143278f07f6add4", "title": "Town Crier: An Authenticated Data Feed for Smart Contracts" }, { "paperId": "992577d0e72fa36524c2905bf11c962044cf33b9", "title": "EtherTwin: Blockchain-based Secure Digital Twin Information Management" }, { "paperId": null, "title": "Chainlink 2.0: Next steps in the evolution of decentralized oracle networks" }, { "paperId": "8d67b76222d84dcd337b8a2c78f13837070a79ce", "title": "Blockchain-based data management for digital twin of product" }, { "paperId": "2d7f3f4ca3fbb15ae04533456e5031e0d0dc845a", "title": "Intel SGX Explained" }, { "paperId": "3c50bb6cc3f5417c3325a36ee190e24f0dc87257", "title": "ETHEREUM: A SECURE DECENTRALISED GENERALISED TRANSACTION LEDGER" }, { "paperId": null, "title": "Trustworthy Pre-Processing of Sensor Data" }, { "paperId": null, "title": "Integrity Assertion : To guarantee integrity of auxiliary data and the sensor public key, both must be protected through the TEEs security guarantees. Therefore" } ]
11,201
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Mathematics", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Mathematics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01a012a1065f40e7ba2c0e3ce36d8fde114926f1
[ "Computer Science", "Mathematics" ]
0.802516
Hardness of k-LWE and Applications in Traitor Tracing
01a012a1065f40e7ba2c0e3ce36d8fde114926f1
Algorithmica
[ { "authorId": "143630342", "name": "S. Ling" }, { "authorId": "1680071", "name": "Duong Hieu Phan" }, { "authorId": "1803138", "name": "D. Stehlé" }, { "authorId": "1740453", "name": "Ron Steinfeld" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "https://link.springer.com/journal/453", "http://www.springer.com/computer/theoretical+computer+science/journal/453" ], "id": "300eb16f-ce6c-495a-8da3-2e691bf9051d", "issn": "0178-4617", "name": "Algorithmica", "type": "journal", "url": "https://www.springer.com/computer/theoretical+computer+science/journal/453" }
null
# Hardness of k-LWE and Applications in Traitor Tracing

San Ling¹, Duong Hieu Phan², Damien Stehlé³, and Ron Steinfeld⁴

1 Division of Mathematical Sciences, School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore
2 Laboratoire LAGA (CNRS, U. Paris 8, U. Paris 13), U. Paris 8, France
3 Laboratoire LIP (U. Lyon, CNRS, ENSL, INRIA, UCBL), ENS de Lyon, France
4 Faculty of Information Technology, Monash University, Clayton, Australia

**Abstract.** We introduce the k-LWE problem, a Learning With Errors variant of the k-SIS problem. The Boneh-Freeman reduction from SIS to k-SIS suffers from an exponential loss in k. We improve and extend it to an LWE to k-LWE reduction with a polynomial loss in k, by relying on a new technique involving trapdoors for random integer kernel lattices. Based on this hardness result, we present the first algebraic construction of a traitor tracing scheme whose security relies on the worst-case hardness of standard lattice problems. The proposed LWE traitor tracing is almost as efficient as the LWE encryption. Further, it achieves public traceability, i.e., allows the authority to delegate the tracing capability to "untrusted" parties. To this aim, we introduce the notion of projective sampling family in which each sampling function is keyed and, with a projection of the key on a well chosen space, one can simulate the sampling function in a computationally indistinguishable way. The construction of a projective sampling family from k-LWE allows us to achieve public traceability, by publishing the projected keys of the users. We believe that the new lattice tools and the projective sampling family are quite general and may have applications in other areas.

**Keywords:** Lattice-based cryptography, Traitor tracing, LWE.

## 1 Introduction

Since the pioneering work of Ajtai [3], there have been a number of proposals of cryptographic schemes with security provably relying on the worst-case hardness of standard lattice problems, such as the decision Gap Shortest Vector Problem with polynomial gap (see the surveys [30,40]). These schemes enjoy unmatched security guarantees: Security relies on worst-case hardness assumptions for problems expected to be exponentially hard to solve (with respect to the lattice dimension $n$), even with quantum computers. At the same time, they often enjoy great asymptotic efficiency, as the basic operations are matrix-vector multiplications in dimension $\widetilde{O}(n)$ over a ring of cardinality $\leq \mathrm{Poly}(n)$. A breakthrough result in that field was the introduction of the Learning With Errors problem (LWE) by Regev [38,39], who showed it to be at least as hard as worst-case lattice problems and exploited it to devise an elementary encryption scheme. Gentry et al. showed in [19] that Regev's scheme may be adapted so that a master can generate a large number of secret keys for the same public key. As a result, the latter encryption scheme, called dual-Regev, can be naturally extended into a multi-receiver encryption scheme. In the present work, we build traitor tracing schemes from this dual-Regev LWE-based encryption scheme.

TRAITOR TRACING.
A traitor tracing scheme is a multi-receiver encryption scheme where malicious receiver coalitions aiming at building pirate decryption devices are deterred by the existence of a tracing algorithm: Using the pirate decryption device, the tracing algorithm can recover at least one member of the malicious coalition. Such schemes are particularly well suited for fighting copyright infringement in the context of commercial content distribution (e.g., Pay-TV, subscription news websites, etc.). Since their introduction by Chor et al. [15], much work has been devoted to devising efficient and secure traitor tracing schemes. The most desirable schemes are fully collusion resistant: they can deal with arbitrarily large malicious coalitions. But, unsurprisingly, the most efficient schemes are in the bounded collusion model, where the number of malicious users is limited. The first non-trivial fully collusion resistant scheme was proposed by Boneh et al. [11]. However, its ciphertext size is still large ($\Omega(\sqrt{N})$, where $N$ is the total number of users) and it relies on pairing groups of composite order. Very recently, Boneh and Zhandry [12] proposed a fully collusion resistant scheme with poly-log size parameters. It relies on indistinguishability obfuscation [18], whose security foundation remains to be studied, and whose practicality remains to be exhibited. In this paper, we focus on the bounded collusion model. The Boneh-Franklin scheme [7] is one of the earliest algebraic constructions, but it can still be considered as the reference algebraic transformation from the standard ElGamal public key encryption into traitor tracing. This transformation induces a linear loss in efficiency, with respect to the maximum number of traitors. The known transformations from encryption to traitor tracing in the bounded collusion model present at least a linear loss in efficiency, either in the ciphertext size or in the private key size [7,31,23,41,6,10]. We refer to [21] for a detailed introduction to this rich topic.

OUR CONTRIBUTIONS. We describe the first algebraic construction of a public-key lattice-based traitor tracing scheme. It is semantically secure and enjoys public traceability. The security relies on the hardness of LWE, which is known to be at least as hard as standard worst-case lattice problems [39,33,13].

The scheme is the extension, described above, of the dual-Regev LWE-based encryption scheme from [19] to a multi-receiver encryption scheme, where each user has a different secret key. In the case of traitor tracing, several keys may be leaked to a traitor coalition. To show that we can trace the traitors, we extend the LWE problem and introduce the k-LWE problem, in which $k$ hint vectors (the leaked keys) are given out. Intuitively, k-LWE asks to distinguish between a random vector $\mathbf{t}$ close to a given lattice $\Lambda$ and a random vector $\mathbf{t}$ close to the orthogonal subspace of the span of $k$ given short vectors belonging to the dual $\Lambda^*$ of that lattice. Even if we are given $(\mathbf{b}_i^*)_{i \leq k}$ small in $\Lambda^*$, computing the inner products $\langle \mathbf{b}_i^*, \mathbf{t} \rangle$ will not help in solving this problem, since they are small and distributed identically in both cases. The k-LWE problem can be interpreted as a dual of the k-SIS problem introduced by Boneh and Freeman [8], which intuitively requests to find a short vector in $\Lambda^*$ that is linearly independent from the $k$ given short vectors of $\Lambda^*$.
Their reduction from SIS to k-SIS can be adapted to the LWE setup, but the hardness loss incurred by the reduction is gigantic. We propose a significantly sharper reduction from $\mathrm{LWE}_\alpha$ to $k\text{-LWE}_\alpha$. This improved reduction requires a new lattice technique: the equivalent for kernel lattices of Ajtai's simultaneous sampling of a random q-ary lattice with a short basis [4] (see also Lemma 2). We adapt the Micciancio-Peikert framework from [28] to sampling a Gaussian $X \in \mathbb{Z}^{m \times n}$ along with a short basis for the lattice $\ker(X) = \{\mathbf{b} \in \mathbb{Z}^m : \mathbf{b}^t X = \mathbf{0}\}$. Kernel lattices also play an important role in the re-randomization analysis of the recent lattice-based multilinear map scheme of Garg et al. [17], and we believe that our new trapdoor generation tool for such lattices is likely to find additional applications in the future. We also remark that our technique can be adapted to the SIS to k-SIS reduction. We thus solve the open question left by Boneh and Freeman of improving their reduction [8]: from an exponential loss in $k$ to a polynomial loss in $k$. Consequently, their linearly homomorphic signatures and ordinary signature schemes enjoy much better efficiency/security trade-offs.

Our construction of a traitor tracing scheme from k-LWE can be seen as an additive and noisy variant of the (black-box) Boneh-Franklin traitor tracing scheme [7]. While the Boneh-Franklin scheme is transformed from the ElGamal encryption with a linear loss (in the maximum number of traitors) in efficiency, our scheme is almost as efficient as standard LWE-based encryption, as long as the maximum number of traitors is bounded below $n/(c \log n)$, where $n$ is the LWE dimension determined by the security parameter, and $c$ is a constant. The full functionality of black-box tracing in both the Boneh-Franklin scheme and ours is of high complexity, as both rely on black-box confirmation: given a superset of the traitors, it is guaranteed to find at least one traitor and no innocent suspect is incriminated. Boneh and Franklin left the improvement of black-box tracing as an interesting open problem. We show that in the lattice setting, black-box tracing can be accelerated by running the tracing procedure in parallel on untrusted machines. This is a direct consequence of the property of public traceability, i.e., the possibility of running the tracing procedure on public information, that our scheme enjoys. We note that almost all traitor tracing systems require that the tracing key be kept secret. Some schemes [14,37,9,12] achieve public traceability and some others achieve a notion stronger than public traceability, namely non-repudiation, but the setup in these schemes requires an interactive protocol between the center and each user, such as a secure 2-party computation protocol in [35], a commitment protocol in [36], or an oblivious polynomial evaluation in [42,24,22]. To obtain public traceability, and inspired by the notion of projective hash family [16], we introduce a new notion of projective sampling family in which each sampling function is keyed and, with a projection of the key on a well chosen space, one can simulate the sampling function in a computationally indistinguishable way. The construction of a set of projective sampling families from k-LWE allows us to publicly sample the tracing signals.

Independently, our new lattice tools may have applications in other areas. The k-LWE problem has a similar flavour to the Extended-LWE problem from [32].
It would be interesting to exhibit reductions between these problems. On a closely related topic, it seems our sampling of a random Gaussian integer matrix $X$ together with a short basis of $\ker(X)$ is compatible with the hardness proof of Extended-LWE from [13]. In particular, it should be possible to use it as an alternative to [13, Def. 4.5] in the proof of [13, Le. 4.7], to show that Extended-LWE remains hard with many hints independently sampled from discrete Gaussians.

REMARK. Due to lack of space, some background and the missing proofs of Sections 3 and 5 have been removed from this proceedings version. The full version is available on the webpages of the authors.

## 2 Preliminaries

If $x$ is a real number, then $\lfloor x \rceil$ is the closest integer to $x$ (with any deterministic rule in case $x$ is half an odd integer). All vectors will be denoted in bold. By default, our vectors are column vectors. We let $\langle \cdot, \cdot \rangle$ denote the canonical inner product. For $q$ prime, we let $\mathbb{Z}_q$ denote the field of integers modulo $q$. For two matrices $A, B$ of compatible dimensions, we let $(A|B)$ and $(A\|B)$ respectively denote the horizontal and vertical concatenations of $A$ and $B$. For $A \in \mathbb{Z}_q^{m \times n}$, we define $\mathrm{Im}(A) = \{A\mathbf{s} : \mathbf{s} \in \mathbb{Z}_q^n\} \subseteq \mathbb{Z}_q^m$. For $X \subseteq \mathbb{Z}_q^m$, we let $\mathrm{Span}(X)$ denote the set of all linear combinations of elements of $X$. We let $X^\perp$ denote the linear subspace $\{\mathbf{b} \in \mathbb{Z}_q^m : \forall \mathbf{c} \in X, \langle \mathbf{b}, \mathbf{c} \rangle = 0\}$. For a matrix $S \in \mathbb{R}^{m \times n}$, we let $\|S\|$ denote the norm of its longest column. If $S$ is full column-rank, we let $\sigma_1(S) \geq \ldots \geq \sigma_n(S)$ denote its singular values. We let $\mathbb{T}$ denote the additive group $\mathbb{R}/\mathbb{Z}$.

If $D_1$ and $D_2$ are distributions over a countable set $X$, their statistical distance $\frac{1}{2}\sum_{x \in X} |D_1(x) - D_2(x)|$ will be denoted by $\Delta(D_1, D_2)$. The statistical distance is defined similarly if $X$ is measurable. If $X$ is of finite weight, we let $U(X)$ denote the uniform distribution over $X$. For any invertible $S \in \mathbb{R}^{m \times m}$ and $\mathbf{c} \in \mathbb{R}^m$, we define the function $\rho_{S,\mathbf{c}}(\mathbf{b}) = \exp(-\pi\|S^{-1}(\mathbf{b}-\mathbf{c})\|^2)$. For $S = sI_m$, we write $\rho_{s,\mathbf{c}}$, and we omit the subscripts $S$ and $\mathbf{c}$ when $S = I_m$ and $\mathbf{c} = \mathbf{0}$. We let $\nu_\alpha$ denote the one-dimensional Gaussian distribution with standard deviation $\alpha$.

**2.1 Euclidean Lattices and Discrete Gaussian Distributions**

A lattice is a set of the form $\{\sum_{i \leq n} x_i \mathbf{b}_i : x_i \in \mathbb{Z}\}$ where the $\mathbf{b}_i$'s are linearly independent vectors in $\mathbb{R}^m$. In this situation, the $\mathbf{b}_i$'s are said to form a basis of the $n$-dimensional lattice. The $n$-th minimum $\lambda_n(L)$ of an $n$-dimensional lattice $L$ is defined as the smallest $r$ such that the $n$-dimensional closed hyperball of radius $r$ centered in $\mathbf{0}$ contains $n$ linearly independent vectors of $L$. The smoothing parameter of $L$ is defined as $\eta_\varepsilon(L) = \min\{r > 0 : \rho_{1/r}(\widehat{L} \setminus \mathbf{0}) \leq \varepsilon\}$ for any $\varepsilon \in (0,1)$, where $\widehat{L} = \{\mathbf{c} \in \mathrm{Span}(L) : \mathbf{c}^t \cdot L \subseteq \mathbb{Z}\}$ is the dual lattice of $L$. It was proved in [29, Le. 3.3] that $\eta_\varepsilon(L) \leq \sqrt{\ln(2n(1+1/\varepsilon))/\pi} \cdot \lambda_n(L)$ for all $\varepsilon \in (0,1)$ and $n$-dimensional lattices $L$.

For a lattice $L \subseteq \mathbb{R}^m$, a vector $\mathbf{c} \in \mathbb{R}^m$ and an invertible $S \in \mathbb{R}^{m \times m}$, we define the Gaussian distribution of parameters $L$, $\mathbf{c}$ and $S$ by $D_{L,S,\mathbf{c}}(\mathbf{b}) \sim \rho_{S,\mathbf{c}}(\mathbf{b}) = \exp(-\pi\|S^{-1}(\mathbf{b}-\mathbf{c})\|^2)$ for all $\mathbf{b} \in L$. When $S = \sigma \cdot I_m$, we simply write $D_{L,\sigma,\mathbf{c}}$. Note that $D_{L,S,\mathbf{c}} = S^t \cdot D_{S^{-t}L,1,S^{-t}\mathbf{c}}$. Sometimes, for convenience, we use the notation $D_{L+\mathbf{c},S}$ as a shorthand for $\mathbf{c} + D_{L,S,-\mathbf{c}}$. Gentry et al.
[19] gave an algorithm, referred to as the GPV algorithm, to sample from $D_{L,S,\mathbf{c}}$ when given as input a basis $(\mathbf{b}_i)_i$ of $L$ such that $\sqrt{\ln(2n+4)/\pi} \cdot \max_i \|S^{-t}\mathbf{b}_i\| \leq 1$.

We extensively use $q$-ary lattices. The $q$-ary lattice associated to $A \in \mathbb{Z}_q^{m \times n}$ is defined as $\Lambda^\perp(A) = \{\mathbf{x} \in \mathbb{Z}^m : \mathbf{x}^t \cdot A = \mathbf{0} \bmod q\}$. It has dimension $m$, and a basis can be computed in polynomial time from $A$. For $\mathbf{u} \in \mathbb{Z}_q^n$, we define $\Lambda_{\mathbf{u}}^\perp(A)$ as the coset $\{\mathbf{x} \in \mathbb{Z}^m : \mathbf{x}^t \cdot A = \mathbf{u}^t \bmod q\}$ of $\Lambda^\perp(A)$.

**2.2 Random Lattices**

We consider the following random lattices, called $q$-ary Ajtai lattices. They are obtained by sampling $A \hookleftarrow U(\mathbb{Z}_q^{m \times n})$ and considering $\Lambda^\perp(A)$. The following lemma provides a probabilistic bound on the smoothing parameter of $\Lambda^\perp(A)$.

**Lemma 1 (Adapted from [19, Le. 5.3]).** Let $q$ be prime and $m, n$ integers with $m \geq 2n$ and $\varepsilon > 0$; then $\eta_\varepsilon(\Lambda^\perp(A)) \leq 4q^{n/m}\sqrt{\log(2m(1+1/\varepsilon))/\pi}$, for all except a fraction $2^{-\Omega(n)}$ of $A \in \mathbb{Z}_q^{m \times n}$.

It is possible to efficiently sample a close to uniform $A$ along with a short basis of $\Lambda^\perp(A)$ (see [4,5,34,28]).

**Lemma 2 (Adapted from [5, Th. 3.1]).** There exists a ppt algorithm that, given $n, m, q \geq 2$ as inputs, samples two matrices $A \in \mathbb{Z}_q^{m \times n}$ and $T \in \mathbb{Z}^{m \times m}$ such that: the distribution of $A$ is within statistical distance $2^{-\Omega(n)}$ from $U(\mathbb{Z}_q^{m \times n})$; the rows of $T$ form a basis of $\Lambda^\perp(A)$; each row of $T$ has norm $\leq 3mq^{n/m}$.

For $A \in \mathbb{Z}_q^{m \times n}$, $S \in \mathbb{R}^{m \times m}$ invertible, $\mathbf{c} \in \mathbb{R}^m$ and $\mathbf{u} \in \mathbb{Z}_q^n$, we define the distribution $D_{\Lambda_{\mathbf{u}}^\perp(A),S,\mathbf{c}}$ as $\bar{\mathbf{c}} + D_{\Lambda^\perp(A),S,-\bar{\mathbf{c}}+\mathbf{c}}$, where $\bar{\mathbf{c}}$ is any vector of $\mathbb{Z}^m$ such that $\bar{\mathbf{c}}^t \cdot A = \mathbf{u}^t \bmod q$. A sample $\mathbf{x}$ from $D_{\Lambda_{\mathbf{u}}^\perp(A),S}$ can be obtained using the GPV algorithm along with the short basis of $\Lambda^\perp(A)$ provided by Lemma 2. Boneh and Freeman [8] showed how to efficiently obtain the residual distribution of $(A, \mathbf{x})$ without relying on Lemma 2.

**Theorem 1 (Adapted from [8, Th. 4.3]).** Let $n, m, q \geq 2$, $k \geq 0$ and $S \in \mathbb{R}^{m \times m}$ be such that $m \geq 2n$, $q$ is prime with $q > \sigma_1(S) \cdot \sqrt{2\log(4m)}$, and $\sigma_m(S) = q^{n/m} \cdot \max(\Omega(\sqrt{n \log m}), 2\sigma_1(S)^{k/m})$. Let $\mathbf{u}_1, \ldots, \mathbf{u}_k \in \mathbb{Z}_q^n$ and $\mathbf{c}_1, \ldots, \mathbf{c}_k \in \mathbb{R}^m$ be arbitrary. Then the residual distributions of the tuple $(A, \mathbf{x}_1, \ldots, \mathbf{x}_k)$ obtained with the following two experiments are within statistical distance $2^{-\Omega(n)}$.

$\mathrm{Exp}_0$: $A \hookleftarrow U(\mathbb{Z}_q^{m \times n})$; $\forall i \leq k : \mathbf{x}_i \hookleftarrow D_{\Lambda_{\mathbf{u}_i}^\perp(A),S,\mathbf{c}_i}$.

$\mathrm{Exp}_1$: $\forall i \leq k : \mathbf{x}_i \hookleftarrow D_{\mathbb{Z}^m,S,\mathbf{c}_i}$; $A \hookleftarrow U\big(\mathbb{Z}_q^{m \times n} \mid \forall i \leq k : \mathbf{x}_i^t \cdot A = \mathbf{u}_i^t \bmod q\big)$.

This statement generalizes [8, Th. 4.3] in three ways. First, the latter corresponds to the special case obtained by taking all the $\mathbf{u}_i$'s and $\mathbf{c}_i$'s equal to $\mathbf{0}$. This generalization does not add any extra complication in the proof of [8, Th. 4.3], but is important for our constructions. Second, the condition on $m$ is less restrictive (the corresponding assumption in [8, Th. 4.3] is that $m \geq \max(2n\log q, 2k)$). To allow for such small values of $m$, we refine the bound on the smoothing parameter of the $\Lambda^\perp(A)$ lattice (namely, we use Lemma 1). Third, we allow for a non-spherical Gaussian distribution, which seems needed in our generalized Micciancio-Peikert trapdoor gadget used in the reduction from LWE to k-LWE in Section 3.2.

We also use the following result on the probability of the Gaussian vectors $\mathbf{x}_i$ from Theorem 1 being linearly independent over $\mathbb{Z}_q$.

**Lemma 3 (Adapted from [8, Le. 4.5]).**
We also use the following result on the probability of the Gaussian vectors $\mathbf{x}_i$ from Theorem 1 being linearly independent over $\mathbb{Z}_q$.

**Lemma 3 (Adapted from [8, Le. 4.5]).** *With the notations and assumptions of Theorem 1, the $k$ vectors $\mathbf{x}_1, \ldots, \mathbf{x}_k$ sampled in $\mathrm{Exp}_0$ and $\mathrm{Exp}_1$ are linearly independent over $\mathbb{Z}_q$, except with probability $2^{-\Omega(n)}$.*

**2.3 Rényi Divergence**

We use the Rényi Divergence (RD) in our analysis, relying on techniques developed in [27,25,26]. For any two probability distributions $P$ and $Q$ such that the support of $P$ is a subset of the support of $Q$ over a countable domain $X$, we define the RD (of order 2) by $R(P\|Q) = \sum_{x \in X} \frac{P(x)^2}{Q(x)}$, with the convention that the fraction is zero when both numerator and denominator are zero. We recall that the RD between two offset discrete Gaussians is bounded as follows.

**Lemma 4 ([25, Le. 4.2]).** *For any $n$-dimensional lattice $L \subseteq \mathbb{R}^n$ and invertible matrix $S$, set $P = D_{L,S,\mathbf{w}}$ and $Q = D_{L,S,\mathbf{z}}$ for some fixed $\mathbf{w}, \mathbf{z} \in \mathbb{R}^n$. If $\mathbf{w}, \mathbf{z} \in L$, let $\varepsilon = 0$. Otherwise, fix $\varepsilon \in (0,1)$ and assume that $\sigma_n(S) \geq \eta_\varepsilon(L)$. Then*

$$R(P\|Q) \leq \Big(\frac{1+\varepsilon}{1-\varepsilon}\Big)^2 \cdot \exp\big(2\pi\|\mathbf{w} - \mathbf{z}\|^2/\sigma_n(S)^2\big).$$

We use this bound and the fact that the RD between the parameter distributions of two distinguishing problems can be used to relate their hardness, if they satisfy a certain public samplability property.

**Lemma 5 ([26]).** *Let $\Phi, \Phi'$ denote two distributions, and $D_0(r)$ and $D_1(r)$ denote two distributions determined by some parameter $r$. Let $P, P'$ be two decision problems defined as follows:*

- *$P$: Assess whether input $x$ is sampled from distribution $X_0$ or $X_1$, where $X_0 = \{x : r \hookleftarrow \Phi, x \hookleftarrow D_0(r)\}$ and $X_1 = \{x : r \hookleftarrow \Phi, x \hookleftarrow D_1(r)\}$.*
- *$P'$: Assess whether input $x$ is sampled from distribution $X_0'$ or $X_1'$, where $X_0' = \{x : r \hookleftarrow \Phi', x \hookleftarrow D_0(r)\}$ and $X_1' = \{x : r \hookleftarrow \Phi', x \hookleftarrow D_1(r)\}$.*

*Assume that $D_0(\cdot)$ and $D_1(\cdot)$ have the following public samplability property: there exists a sampling algorithm $S$ with run-time $T_S$ such that for all $r, b$, given any sample $x$ from $D_b(r)$ we have:*

- *$S(0, x)$ outputs a sample distributed as $D_0(r)$ over the randomness of $S$;*
- *$S(1, x)$ outputs a sample distributed as $D_1(r)$ over the randomness of $S$.*

*If there exists a $T$-time distinguisher $A$ for problem $P$ with advantage $\varepsilon$, then, for every $\lambda > 0$, there exists an $O(\lambda\varepsilon^{-2} \cdot (T_S + T))$-time distinguisher $A'$ for problem $P'$ with advantage $\varepsilon' \geq \frac{\varepsilon^3}{8R(\Phi\|\Phi')} - O(2^{-\lambda})$.*
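As an aid to intuition, the following sketch (ours, not part of the paper; numpy assumed) computes the order-2 RD between two truncated, offset discrete Gaussians on $\mathbb{Z}$, which can be compared against the bound of Lemma 4 with $\|\mathbf{w} - \mathbf{z}\| = 1$ and $\sigma_n(S) = 5$.

```python
import numpy as np

def renyi2(P: np.ndarray, Q: np.ndarray) -> float:
    """Order-2 Renyi divergence R(P||Q) = sum_x P(x)^2 / Q(x), with 0/0 := 0."""
    if np.any((Q == 0) & (P > 0)):
        return float("inf")                 # supp(P) must lie inside supp(Q)
    mask = Q > 0
    return float(np.sum(P[mask] ** 2 / Q[mask]))

# Two offset discrete Gaussians on a truncated slice of Z, offset 1, width 5.
xs = np.arange(-60, 61)
weights = lambda c: np.exp(-np.pi * (xs - c) ** 2 / 5.0 ** 2)
P, Q = weights(0.0), weights(1.0)
P, Q = P / P.sum(), Q / Q.sum()
print(renyi2(P, Q))   # ~ exp(2*pi/25), in line with the bound of Lemma 4
```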
**2.4 Learning with Errors**

Let $\mathbf{s} \in \mathbb{Z}_q^n$ and $\alpha > 0$. We define the distribution $A_{\mathbf{s},\alpha}$ as follows: take $\mathbf{a} \hookleftarrow U(\mathbb{Z}_q^n)$ and $e \hookleftarrow \nu_\alpha$, and return $(\mathbf{a}, \frac{1}{q}\langle\mathbf{a},\mathbf{s}\rangle + e) \in \mathbb{Z}_q^n \times \mathbb{T}$. The Learning With Errors problem $\mathrm{LWE}_\alpha$, introduced by Regev in [38,39], consists in assessing whether an oracle produces samples from $U(\mathbb{Z}_q^n \times \mathbb{T})$ or $A_{\mathbf{s},\alpha}$ for some constant $\mathbf{s} \hookleftarrow U(\mathbb{Z}_q^n)$. Regev [39] showed that for $q \leq \mathrm{Poly}(n)$ prime and $\alpha \in (\sqrt{2n}/q, 1)$, LWE is (quantumly) not easier than standard worst-case lattice problems in dimension $n$ with approximation factors $\mathrm{Poly}(n)/\alpha$. This hardness proof was partly dequantized in [33,13], and the requirements that $q$ should be prime and $\leq \mathrm{Poly}(n)$ were waived.

In this work, we consider a variant of LWE where the number of oracle samples that the distinguisher requests is a priori bounded. If $m$ denotes that bound, then we will refer to this restriction as $\mathrm{LWE}_{\alpha,m}$. In this situation, the hardness assumption can be restated in terms of linear algebra over $\mathbb{Z}_q$: given $A \hookleftarrow U(\mathbb{Z}_q^{m \times n})$, the goal is to distinguish between the distributions (over $\mathbb{T}^m$)

$$\frac{1}{q}U(\mathrm{Im}(A)) + \nu_\alpha^m \qquad \text{and} \qquad \frac{1}{q}U(\mathbb{Z}_q^m) + \nu_\alpha^m.$$

Under the assumption that $\alpha q \geq \Omega(\sqrt{n})$, the right hand side distribution is indeed within statistical distance $2^{-\Omega(n)}$ to $U(\mathbb{T}^m)$ (see, e.g., [29, Le. 4.1]). The hardness assumption states that, by adding a small Gaussian noise to them, the linear spaces $\mathrm{Im}(A)$ and $\mathbb{Z}_q^m$ become computationally indistinguishable. This rephrasing in terms of linear algebra is helpful in the security proof of the traitor tracing scheme. Note that by a standard hybrid argument, distinguishing between the two distributions given one sample from either, and distinguishing between them given $Q$ samples (from the same distribution), are computationally equivalent problems, up to a loss of a factor $Q$ in the distinguishing advantage.

Finally, we will also use a variant of LWE where the noise distribution $\nu_\alpha$ is replaced by $D_{q^{-1}\mathbb{Z},\alpha}$, and where $U(\mathbb{T})$ is replaced by $U(\mathbb{T}_q)$, with $\mathbb{T}_q$ being $q^{-1}\mathbb{Z}$ with addition mod 1. This variant, denoted by $\mathrm{LWE}'$, was proved in [34] to be no easier than standard LWE (up to a constant factor increase in $\alpha$).

## 3 New Lattice Tools

The security of our constructions relies on the hardness of a new variant of LWE, which may be seen as the dual of the $k$-SIS problem from [8].

**Definition 1.** *Let $k \leq m$, $S \in \mathbb{R}^{m \times m}$ invertible and $C = (\mathbf{c}_1\|\cdots\|\mathbf{c}_k) \in \mathbb{R}^{k \times m}$. The $(k,S,C)$-$\mathrm{LWE}_{\alpha,m}$ problem (or $(k,S)$-LWE if $C = 0$) is as follows: given $A \hookleftarrow U(\mathbb{Z}_q^{m \times n})$, $\mathbf{u} \hookleftarrow U(\mathbb{Z}_q^n)$ and $\mathbf{x}_i \hookleftarrow D_{\Lambda_{-u}^\perp(A),S,\mathbf{c}_i}$ for $i \leq k$, the goal is to distinguish between the distributions (over $\mathbb{T}^{m+1}$)*

$$\frac{1}{q} \cdot U\Big(\mathrm{Im}\begin{pmatrix}\mathbf{u}^t\\ A\end{pmatrix}\Big) + \nu_\alpha^{m+1} \qquad \text{and} \qquad \frac{1}{q} \cdot U\Big(\mathrm{Span}_{i \leq k}\begin{pmatrix}1\\ \mathbf{x}_i\end{pmatrix}^{\perp}\Big) + \nu_\alpha^{m+1}.$$

The classical LWE problem consists in distinguishing the left distribution from uniform, without the hint vectors $\mathbf{x}_i^+ = (1\|\mathbf{x}_i)$. These hint vectors correspond to the secret keys obtained by the malicious coalition in the traitor tracing scheme. Once these hint vectors are revealed, it becomes easy to distinguish the left distribution from the uniform distribution: take one of the vectors $\mathbf{x}_i^+$, get a challenge sample $\mathbf{y}$ and compute $\langle\mathbf{x}_i^+, \mathbf{y}\rangle \in \mathbb{T}$; if $\mathbf{y}$ is a sample from the left distribution, then the centered residue is expected to be of size $\approx \alpha \cdot (\sqrt{m}\sigma_1(S) + \|\mathbf{c}_i\|)$, which is $\ll 1$ for standard parameter settings; on the other hand, if $\mathbf{y}$ is sampled from the uniform distribution, then $\langle\mathbf{x}_i^+, \mathbf{y}\rangle$ should be uniform. The definition of $(k,S)$-LWE handles this issue by replacing $U(\mathbb{Z}_q^{m+1})$ by $U(\mathrm{Span}_{i \leq k}(\mathbf{x}_i^+)^\perp)$.

Sampling $\mathbf{x}_i^+$ from $D_{\Lambda^\perp((\mathbf{u}^t\|A)),S,\mathbf{c}_i}$ may seem more natural than imposing that the first coordinate of each $\mathbf{x}_i^+$ is 1. Looking ahead, this constraint will prove convenient to ensure correctness of our cryptographic primitives. Theorem 3 below and its proof can be readily adapted to this hint distribution. They may also be adapted to improve the SIS to $k$-SIS reduction from [8]. Setting $C = 0$ is also more natural, but for technical reasons, our reduction from LWE to $(k,S,C)$-LWE works with unit vectors $\mathbf{c}_i$. However, we show that for small $\|\mathbf{c}_i\|$, there exist polynomial-time reductions between $(k,S,C)$-LWE and $(k,S)$-LWE.
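The inner-product distinguisher described above is simple enough to simulate. The toy snippet below (ours; numpy assumed, toy parameters) fabricates a single short hint by construction, rather than sampling it from the coset Gaussian, and exhibits the residue gap between the planted and uniform cases.

```python
import numpy as np

def centered(t: float) -> float:
    """Representative of t mod 1 in [-1/2, 1/2)."""
    return (t + 0.5) % 1.0 - 0.5

rng = np.random.default_rng(1)
q, m, n, alpha = 4099, 64, 8, 1e-4
A = rng.integers(0, q, size=(m, n))
x = rng.integers(-2, 3, size=m)              # short hint tail
u = (-(x @ A)) % q                           # forces (1||x)^t (u^t||A) = 0 mod q
Aplus = np.vstack([u, A])
xplus = np.concatenate([[1], x])

s = rng.integers(0, q, size=n)
planted = (((Aplus @ s) % q) / q + rng.normal(0, alpha, size=m + 1)) % 1.0
uniform = rng.random(m + 1)

print(abs(centered(float(xplus @ planted))))   # tiny: ~ alpha * ||x+||
print(abs(centered(float(xplus @ uniform))))   # typically of order 1
```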
In the proof of the hardness of the $(k,S)$-LWE problem, we rely on a gadget integral matrix $G$ that has the following properties: its first rows have Gaussian distributions, it is unimodular, and its inverse is small. Before going to this proof, we shall build such a gadget matrix by extending Ajtai's simultaneous sampling of a random $q$-ary lattice with a short basis [4] (see also Lemma 2) to kernel lattices. More precisely, we adapt the Micciancio-Peikert framework [28] to sampling a Gaussian $X \in \mathbb{Z}^{m \times n}$ along with a short basis for the lattice $\ker(X) = \{\mathbf{b} \in \mathbb{Z}^m : \mathbf{b}^t X = \mathbf{0}\}$.

**3.1 Sampling a Gaussian $X$ with a Small Basis of $\ker(X)$**

The Micciancio-Peikert construction [28] relies on a leftover hash lemma stating that, with overwhelming probability over $A \hookleftarrow U(\mathbb{Z}_q^{m \times n})$ and for a sufficiently large $\sigma$, the distribution of $A^t \cdot D_{\mathbb{Z}^m,\sigma} \bmod q$ is statistically close to $U(\mathbb{Z}_q^n)$. We use a similar result over the integers, starting from a Gaussian $X \in \mathbb{Z}^{m \times n}$ instead of a uniform $A \in \mathbb{Z}_q^{m \times n}$. The proof of the following lemma relies on [1], which improves over a similar result from [2]. The result would be neater with $\sigma_2 = \sigma_1$, but, unfortunately, we do not know how to achieve it. The impact of this drawback on our results and constructions is mostly cosmetic.

**Lemma 6.** *Let $m \geq n \geq 100$ and $\sigma_1, \sigma_2 > 0$ satisfying $\sigma_1 \geq \Omega(\sqrt{mn\log m})$, $m \geq \Omega(n\log(\sigma_1 n))$ and $\sigma_2 \geq \Omega(n^{5/2}\sqrt{m}\sigma_1^2\log^{3/2}(m\sigma_1))$. Let $X \hookleftarrow (D_{\mathbb{Z}^m,\sigma_1})^n$. There exists a ppt algorithm that takes $n, m, \sigma_1, \sigma_2, X$ and $\mathbf{c} \in \mathbb{Z}^n$ as inputs and returns $\mathbf{x} \in \mathbb{Z}^n$, $\mathbf{r} \in \mathbb{Z}^m$ such that $\mathbf{x} = \mathbf{c} + X^t\mathbf{r}$ with $\|\mathbf{r}\| \leq O(\sigma_2/\sigma_1)$, with probability $1 - 2^{-\Omega(n)}$, and*

$$\Delta\big((X, \mathbf{x}),\ (D_{\mathbb{Z}^m,\sigma_1})^n \times D_{\mathbb{Z}^n,\sigma_2,\mathbf{c}}\big) \leq 2^{-\Omega(n)}.$$

We now adapt the trapdoor construction from [28] to kernel lattices.

**Theorem 2.** *Let $n, m_1, \sigma_1, \sigma_2$ be as above, and $m_2 \geq m_1$ bounded as $n^{O(1)}$. There exists a ppt algorithm that, given $n, m_1, m_2$ (in unary), $\sigma_1$ and $\sigma_2$, returns $X_1 \in \mathbb{Z}^{m_1 \times n}$, $X_2 \in \mathbb{Z}^{m_2 \times n}$, and $U \in \mathbb{Z}^{m \times m}$ with $m = m_1 + m_2$, such that:*

- *the distribution of $(X_1, X_2)$ is within statistical distance $2^{-\Omega(n)}$ of $(D_{\mathbb{Z}^{m_1},\sigma_1})^n \times (D_{\mathbb{Z}^{m_2},\sigma_2,\boldsymbol{\delta}_1} \times \cdots \times D_{\mathbb{Z}^{m_2},\sigma_2,\boldsymbol{\delta}_n})$, where $\boldsymbol{\delta}_i$ denotes the $i$-th canonical unit vector in $\mathbb{Z}^{m_2}$, whose $i$-th coordinate is 1 and whose remaining coordinates are 0;*
- *we have $|\det U| = 1$ and $U \cdot X = (I_n \| 0)$ with $X = (X_1 \| X_2)$;*
- *every row of $U$ has norm $\leq O(\sqrt{nm_1}\sigma_2)$ with probability $\geq 1 - 2^{-\Omega(n)}$.*

The second statement implies that the last $m - n$ rows of $U$ form a basis of the random lattice $\ker(X)$.

*Proof.* We first sample $X_1$ from $(D_{\mathbb{Z}^{m_1},\sigma_1})^n$ using the GPV algorithm. We run $m_2$ times the algorithm from Lemma 6, on the input $n, m_1, \sigma_1, \sigma_2, X_1$ and $\mathbf{c}$ running through the columns of $C = [I_n | 0_{n \times (m_2-n)}]$. This gives $X_2 \in \mathbb{Z}^{m_2 \times n}$ and $R \in \mathbb{Z}^{m_1 \times m_2}$ such that $X_2^t = [I_n | 0_{n \times (m_2-n)}] + X_1^t \cdot R$. One can then see that $U \cdot X = (I_n \| 0)$, where

$$U = \begin{pmatrix} 0 & I_{m_2} \\ I_{m_1} & -(X_1|0) \end{pmatrix} \cdot \begin{pmatrix} I_{m_1} & 0 \\ -R^t & I_{m_2} \end{pmatrix} = \begin{pmatrix} -R^t & I_{m_2} \\ I_{m_1} + (X_1|0)R^t & -(X_1|0) \end{pmatrix}, \qquad X = \begin{pmatrix} X_1 \\ X_2 \end{pmatrix}.$$

The result then follows from Gaussian tail bounds (to bound the norms of the rows of $X_1$) and elementary computations. ⊓⊔
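The block identity at the heart of this proof can be checked mechanically. Below is a small numerical sanity check of ours (numpy assumed, toy dimensions, random integer matrices in place of Gaussian samples): it builds $X_2$ from $X_1$ and $R$ exactly as in the proof, forms $U$, and verifies $U X = (I_n\|0)$ and $|\det U| = 1$.

```python
import numpy as np

n, m1, m2 = 3, 5, 4
rng = np.random.default_rng(2)
X1 = rng.integers(-2, 3, size=(m1, n))
R = rng.integers(-2, 3, size=(m1, m2))
# X2^t = [I_n | 0] + X1^t R, as produced by Lemma 6 in the proof above.
X2 = (np.hstack([np.eye(n, dtype=int),
                 np.zeros((n, m2 - n), dtype=int)]) + X1.T @ R).T

X1pad = np.hstack([X1, np.zeros((m1, m2 - n), dtype=int)])   # (X1|0)
U = np.block([[-R.T, np.eye(m2, dtype=int)],
              [np.eye(m1, dtype=int) + X1pad @ R.T, -X1pad]])
X = np.vstack([X1, X2])
target = np.vstack([np.eye(n, dtype=int),
                    np.zeros((m1 + m2 - n, n), dtype=int)])
assert np.array_equal(U @ X, target)              # U * X = (I_n || 0)
assert round(abs(np.linalg.det(U))) == 1          # U is unimodular
```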
Our gadget matrix $G$ is $U^{-t}$. In the following corollary, we summarize the properties we will use.

**Corollary 1.** *Let $n, m_1, m_2, m, \sigma_1, \sigma_2$ be as in Theorem 2. There exists a ppt algorithm that, given $n, m_1, m_2$ (in unary) and $\sigma_1, \sigma_2$ as inputs, returns $G \in \mathbb{Z}^{m \times m}$ such that:*

- *the top $n \times m$ submatrix of $G$ is within statistical distance $2^{-\Omega(n)}$ of $(D_{\mathbb{Z}^{m_1},\sigma_1})^n \times (D_{\mathbb{Z}^{m_2},\sigma_2,\boldsymbol{\delta}_1} \times \cdots \times D_{\mathbb{Z}^{m_2},\sigma_2,\boldsymbol{\delta}_n})^t$;*
- *we have $|\det G| = 1$ and $\|G^{-1}\| \leq O(\sqrt{nm_2}\sigma_2)$, with probability $1 - 2^{-\Omega(n)}$.*

**3.2 Hardness of $k$-LWE**

The following result shows that this LWE variant, with $S$ a specific diagonal matrix, is no easier than LWE.

**Theorem 3.** *There exists $c > 0$ such that the following holds for $k = n/(c\log n)$. Let $m, q, \sigma, \sigma'$ be such that $\sigma \geq \Omega(n)$, $\sigma' \geq \Omega(n^3\sigma^2/\log n)$, $q \geq \Omega(\sigma'\sqrt{\log m})$ is prime, and $m \geq \Omega(n\log q)$ (e.g., $\sigma = \Theta(n)$, $\sigma' = \Theta(n^5/\log n)$, $q = \Theta(n^5)$ and $m = \Theta(n\log n)$). Then there exists a probabilistic polynomial-time reduction from $\mathrm{LWE}_{m+1,\alpha}$ in dimension $n$ to $(k,S)$-$\mathrm{LWE}_{m+2n,\alpha'}$ in dimension $4n$, with $\alpha' = \Omega(mn^{3/2}\sigma\sigma'\alpha)$ and*

$$S = \begin{pmatrix} \sigma \cdot I_{m+n} & 0 \\ 0 & \sigma' \cdot I_n \end{pmatrix}.$$

*More concretely, using a $(k,S)$-$\mathrm{LWE}_{m+2n,\alpha'}$ algorithm with run-time $T$ and advantage $\varepsilon$, the reduction gives an $\mathrm{LWE}_{m+1,\alpha}$ algorithm with advantage $\varepsilon' = \Omega((\varepsilon - 2^{-\Omega(n/\log n)})^3) - O(2^{-n})$.*

The reduction takes an LWE instance and extends it to a related $k$-LWE instance for which the additional hint vectors $(\mathbf{x}_i)_{i \leq k}$ are known. The major difficulty in this extension is to restrain the noise increase, as a function of $k$. The existing approach for this reduction (which we improve below) is the technique used in the SIS to $k$-SIS reduction from [8]. In the latter approach, the hint vectors are chosen independently from a small discrete Gaussian distribution, and then the LWE matrix $A$ is extended to a larger matrix $A'$ under the constraint that the hint vectors are in the $q$-ary lattice $\Lambda^\perp(A') = \{\mathbf{b} : \mathbf{b}^t A' = \mathbf{0} \bmod q\}$. Unfortunately, with this approach, the transformation from an LWE sample with respect to $A$ to a $k$-LWE sample with respect to $A'$ involves a multiplication by the cofactor matrix $\det(G) \cdot G^{-1}$ over $\mathbb{Z}$ of a $k \times k$ full-rank submatrix $G$ of the hint vectors matrix. Although the entries of $G$ are small, the entries of its cofactor matrix are almost as large as $\det G$, which is exponential in $k$. This leads to an "exponential noise blowup," restraining the applicability range to $k \leq \widetilde{O}(1)$ if one wants to rely on the hardness of LWE with noise rate $1/\alpha \leq \mathrm{Poly}(n)$ (otherwise, LWE is not exponentially hard to solve).

To restrain the noise increase for large $k$, we use the gadget of Corollary 1. Ignoring several technicalities, the core idea underlying our reduction is that the latter gadget allows us to sample a small matrix $\overline{X}_2$ with $\overline{X}_2^{-1}$ also small, which we can then use to transform the given LWE matrix $A^+ = (\mathbf{u}^t\|A) \in \mathbb{Z}_q^{(m+1) \times n}$ into a taller $k$-LWE matrix $A'^+ = T \cdot A^+$, using a transformation matrix $T$ of the form

$$T = \begin{pmatrix} I_{m+1} \\ -\overline{X}_2^{-1}X_1 \end{pmatrix},$$

for some small independently sampled matrix $X_1 = (\mathbf{1}|\overline{X}_1)$. We can accordingly transform the given LWE sample vector $\mathbf{b} = A^+\mathbf{s} + \mathbf{e}$ for matrix $A^+$ into an LWE sample $\mathbf{b}' = T\mathbf{b} = A'^+\mathbf{s} + T\mathbf{e}$ for matrix $A'^+$ by multiplying the given sample by $T$.
Since $(X_1|\overline{X}_2) \cdot T = 0$, it follows that $(X_1|\overline{X}_2) \cdot A'^+ = 0$, so we can use $k$ small rows of $(X_1|\overline{X}_2)$ as the $k$-LWE hints $\mathbf{x}_i^+$ for the new matrix $A'^+$, while, at the same time, the smallness of $T$ keeps the transformed noise $\mathbf{e}' = T\mathbf{e}$ small.
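This cancellation is also easy to verify numerically. In the sketch below (ours; numpy assumed), a product of unit-triangular integer matrices stands in for the actual gadget matrix $\overline{X}_2$, solely to obtain a unimodular matrix with an integral inverse; everything else follows the construction above.

```python
import numpy as np

m, k2 = 6, 4                         # k2 plays the role of 2n
rng = np.random.default_rng(3)
X1 = rng.integers(-1, 2, size=(k2, m))
L = np.tril(rng.integers(-1, 2, size=(k2, k2)), -1) + np.eye(k2, dtype=int)
Up = np.triu(rng.integers(-1, 2, size=(k2, k2)), 1) + np.eye(k2, dtype=int)
X2 = L @ Up                          # stand-in unimodular "gadget", det = 1
X2inv = np.round(np.linalg.inv(X2)).astype(int)
assert np.array_equal(X2 @ X2inv, np.eye(k2, dtype=int))

top = np.hstack([np.ones((k2, 1), dtype=int), X1])       # (1 | X1)
T = np.vstack([np.eye(m + 1, dtype=int), -X2inv @ top])  # the map T above
hints = np.hstack([top, X2])         # rows are the candidate hint vectors
assert np.array_equal(hints @ T, np.zeros((k2, m + 1), dtype=int))
```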
*Proof.* For a technical reason related to the non-zero centers $\boldsymbol{\delta}_i$ in the distribution of the hint vectors produced by our gadget from Corollary 1, we decompose our reduction from $\mathrm{LWE}_{m+1,\alpha}$ to $(k,S)$-LWE into two subreductions. The first subreduction (outlined above) reduces $\mathrm{LWE}_{m+1,\alpha}$ in dimension $n$ to $(k,S,C)$-$\mathrm{LWE}_{m+2n,\alpha'}$ in dimension $4n$, where the $i$-th row of $C$ is the unit vector $\mathbf{c}_i = (\mathbf{0}^{m+n}|\boldsymbol{\delta}_i) \in \mathbb{R}^{m+2n}$ for $i = 1, \ldots, k$. The second subreduction reduces $(k,S,C)$-$\mathrm{LWE}_{m+2n,\alpha'}$ in dimension $4n$ to $(k,S)$-$\mathrm{LWE}_{m+2n,\alpha'}$ in dimension $4n$. We first describe and analyze the first subreduction, and then explain the second subreduction.

**Description of the First Subreduction.** Let $(A^+, \mathbf{b})$ with $A^+ = (\mathbf{u}^t\|A)$ denote the given $\mathrm{LWE}_{\alpha,m+1}$ input instance, where $A^+ \hookleftarrow U(\mathbb{Z}_q^{(m+1) \times n})$, and $\mathbf{b} \in \mathbb{T}^{m+1}$ comes from either the "LWE distribution" $\frac{1}{q}U(\mathrm{Im}(A^+)) + \nu_\alpha^{m+1}$ or the "Uniform distribution" $\frac{1}{q}U(\mathbb{Z}_q^{m+1}) + \nu_\alpha^{m+1}$. The reduction maps $(A^+, \mathbf{b})$ to $(A', \mathbf{u}', X, \mathbf{b}')$ with $A' \in \mathbb{Z}_q^{(m+2n) \times 4n}$ and $\mathbf{u}' \in \mathbb{Z}_q^{4n}$ independent and uniform, $X \in \mathbb{Z}^{k \times (m+2n)}$ with its $i$-th row $\mathbf{x}_i$ independently sampled from $D_{\Lambda_{-u'}^\perp(A'),S}$ for $i \leq k$, and $\mathbf{b}' \in \mathbb{T}^{m+1+2n}$ coming from either the "$k$-LWE distribution" $\frac{1}{q}U(\mathrm{Im}(A'^+)) + \nu_{\alpha'}^{m+1+2n}$ if $\mathbf{b}$ is from the "LWE distribution," or the "$k$-Uniform distribution" $\frac{1}{q}U(\mathrm{Span}_{i \leq k}(\mathbf{x}_i^+)^\perp) + \nu_{\alpha'}^{m+1+2n}$ if $\mathbf{b}$ is from the "Uniform distribution." Here $A'^+ = (\mathbf{u}'^t\|A')$, and $\mathbf{x}_i^+$ denotes the vector $(1\|\mathbf{x}_i)$ for $i \leq k$. The reduction is as follows.

1. Sample gadget $\overline{X}_2 \in \mathbb{Z}^{2n \times 2n}$ using Corollary 1 (with parameters $n, m_1, m_2, \sigma_1, \sigma_2$ set to $k, n, n, \sigma, \sigma'$ respectively), and sample $\overline{X}_1 \hookleftarrow (D_{\mathbb{Z}^{2n},\sigma})^m$. Define
$$T = \begin{pmatrix} I_{m+1} \\ -\overline{X}_2^{-1} \cdot (\mathbf{1}|\overline{X}_1) \end{pmatrix} \in \mathbb{Z}^{(m+1+2n) \times (m+1)},$$
where $\mathbf{1}$ is the all-1 vector. Let $X \in \mathbb{Z}^{k \times (m+2n)}$ denote the matrix made of the top $k$ rows of $(\overline{X}_1|\overline{X}_2)$.
2. Sample $C^+ \in \mathbb{Z}_q^{(m+1+2n) \times 3n}$ with independent columns uniform orthogonally to $\mathrm{Im}((\mathbf{1}|X))$ modulo $q$. Let $\mathbf{u}_C^t \in \mathbb{Z}_q^{3n}$ be the top row of $C^+$, and $C \in \mathbb{Z}_q^{(m+2n) \times 3n}$ denote its remaining $m+2n$ rows.
3. Compute $\Sigma = \alpha'^2 \cdot I_{m+1+2n} - \alpha^2\, T \cdot T^t$ and $\sqrt{\Sigma}$ such that $\sqrt{\Sigma} \cdot \sqrt{\Sigma}^t = \Sigma$; if $\Sigma$ is not positive definite, abort.
4. Compute $A'^+ = (T \cdot A^+ | C^+)$ and $\mathbf{b}' = T\mathbf{b} + \frac{1}{q}C^+ \cdot \mathbf{s}' + \sqrt{\Sigma}\,\mathbf{e}'$, with $\mathbf{s}' \hookleftarrow U(\mathbb{Z}_q^{3n})$ and $\mathbf{e}' \hookleftarrow \nu_1^{m+1+2n}$. Let $(\mathbf{u}')^t = (\mathbf{u}\|\mathbf{u}_C)^t \in \mathbb{Z}_q^{4n}$ be the top row of $A'^+$.
5. Return $(A', \mathbf{u}', X, \mathbf{b}')$.

Step 1 aims at building a transformation matrix $T$ that sends $A^+$ to the left $n$ columns of $A'^+$. Two properties are required from this transformation. First, it must be a linear map with small coefficients, so that when we map the LWE right hand side to the $k$-LWE right hand side, the noise component does not blow up. Second, it must contain some vectors $(1\|\mathbf{x}_i)$ in its (left) kernel, with $\mathbf{x}_i$ normally distributed. These vectors are to be used as $k$-LWE hints. For this, we use the gadget of the previous subsection. This ensures that the $\mathbf{x}_i$'s are (almost) distributed as independent Gaussian samples from $D_{\mathbb{Z}^{m+n},\sigma} \times D_{\mathbb{Z}^n,\sigma'}$, and that the matrix $T$ is integral with small coefficients. We define $B \in \mathbb{Z}_q^{2n \times n}$ by $(A^+ \| B) = TA^+$, so that we have:

$$(\mathbf{1}|\overline{X}_1|\overline{X}_2) \cdot \begin{pmatrix} A^+ \\ B \end{pmatrix} = (\mathbf{1}|\overline{X}_1|\overline{X}_2) \cdot \begin{pmatrix} I_{m+1} \\ -\overline{X}_2^{-1} \cdot (\mathbf{1}|\overline{X}_1) \end{pmatrix} \cdot A^+ = 0 \bmod q.$$

This means each row of $(\overline{X}_1|\overline{X}_2)$ belongs to $\Lambda_{-u}^\perp(A'')$, where $A'' = (A^t|B^t)^t$.

At this stage, it is tempting to define the $k$-LWE matrix as $A''$ and give away the $k$-LWE hint vectors $\mathbf{x}_i \in \Lambda_{-u}^\perp(A'')$ making up the matrix $X$. However, this approach does not quite work: we have extended $A$ by $2n$ rows, but we give only $k$ hint vectors (we cannot output them all, as the bottom rows of $\overline{X}_2$ may not be normally distributed). This creates a difficulty for mapping "Uniform" to "$k$-Uniform" in the reduction.

Step 2 circumvents the above difficulty by sampling extra column vectors $C^+ \in \mathbb{Z}_q^{(m+1+2n) \times 3n}$ that are uniform in the subspace orthogonal to the hint vectors $\mathbf{x}_i^+$ modulo $q$. When the parameters are properly set, the columns of $(T|C^+)$ span the full subspace orthogonal to the $\mathbf{x}_i^+$'s mod $q$, with overwhelming probability. We finally set $A'^+ = \Big(\begin{pmatrix} A^+ \\ B \end{pmatrix} \Big|\ C^+\Big)$.

It remains to see how to map "LWE" to "$k$-LWE." The main problem, when multiplying $\mathbf{b}$ by $T$, is that the LWE noise gets skewed. If its covariance matrix was of the form $\alpha^2 \cdot I_{m+1}$, then it becomes $\alpha^2\, T \cdot T^t$. To compensate for that, in Step 3, we add to $T \cdot \mathbf{b}$ an independent Gaussian noise with well-chosen covariance $\Sigma = \alpha'^2 \cdot I_{m+1+2n} - \alpha^2\, T \cdot T^t$. We set $\alpha'$ large enough to ensure that this symmetric matrix is positive definite. This noise unskewing technique was adapted to discrete Gaussians and used in cryptography in [34].

**Analysis of the First Subreduction.** All steps of the reduction can be implemented in polynomial time. Its correctness follows from the following three lemmas. The proofs can be found in the full version.

**Lemma 7.** *The tuple $(A', \mathbf{u}', X)$ is within statistical distance $2^{-\Omega(n/\log n)}$ of the distribution in which $A' \in \mathbb{Z}_q^{(m+2n) \times 4n}$ and $\mathbf{u}' \in \mathbb{Z}_q^{4n}$ are independent and uniform, and the rows of $X \in \mathbb{Z}^{k \times (m+2n)}$ are from $D_{\Lambda_{-u'}^\perp(A'),S,\mathbf{c}_i}$, where $\mathbf{c}_i = (\mathbf{0}^{m+n}|\boldsymbol{\delta}_i) \in \mathbb{R}^{m+2n}$ and $\boldsymbol{\delta}_i$ denotes the $i$-th canonical unit vector in $\mathbb{Z}^n$ for $i = 1, \ldots, k$.*

Next, we assume that $(A'^+, X)$ is fixed and consider the distribution of $\mathbf{b}'$ in the two cases of the distribution of $\mathbf{b}$. First we consider the "LWE" to "$k$-LWE" distribution mapping.

**Lemma 8.** *The following holds with probability $1 - 2^{-\Omega(n/\log n)}$ over the choice of $\overline{X}_1$ and $\overline{X}_2$. If $\mathbf{b} \in \mathbb{T}^{m+1}$ is sampled from $\frac{1}{q}U(\mathrm{Im}\,A^+) + \nu_\alpha^{m+1}$, then $\mathbf{b}' \in \mathbb{T}^{m+1+2n}$ is within statistical distance $2^{-\Omega(n)}$ of $\frac{1}{q}U(\mathrm{Im}\,A'^+) + \nu_{\alpha'}^{m+1+2n}$.*

Finally, we consider the "Uniform" to "$k$-Uniform" distribution mapping.

**Lemma 9.** *The following holds with probability $1 - 2^{-\Omega(n/\log n)}$ over the choice of $\overline{X}_1$ and $\overline{X}_2$. If $\mathbf{b}$ is sampled from $\frac{1}{q}U(\mathbb{Z}_q^{m+1}) + \nu_\alpha^{m+1}$, then $\mathbf{b}'$ is within statistical distance $2^{-\Omega(n)}$ of $\frac{1}{q}U\big(\mathrm{Span}_{i \leq k}(\mathbf{x}_i^+)^\perp\big) + \nu_{\alpha'}^{m+1+2n}$.*
Overall, we have described a reduction that maps the "LWE distribution" to the "$k$-LWE distribution," and the "Uniform distribution" to the "$k$-Uniform distribution," up to statistical distance $2^{-\Omega(n/\log n)}$.

**Second Subreduction.** It remains to reduce the $(k,S,C)$-LWE with non-zero centers for the hint distribution to $(k,S)$-LWE with zero-centered hints. For this, we use Lemma 5 to obtain the following.

**Lemma 10.** *Let $m' = m + 2n$, $n' = 4n$, and assume that $\sigma_{m'}(S) \geq \omega(\sqrt{n})$. If there exists a distinguisher against $(k,S)$-$\mathrm{LWE}_{m',\alpha'}$ in dimension $n'$ with run-time $T$ and advantage $\varepsilon$, then there exists a distinguisher against $(k,S,C)$-$\mathrm{LWE}_{m',\alpha'}$ with run-time $T' = O(\mathrm{Poly}(m') \cdot (\varepsilon - 2^{-\Omega(n)})^{-2} \cdot T)$ and advantage $\varepsilon' = \Omega((\varepsilon - O(2^{-n}))^3/R - O(2^{-n}))$, where $R = \exp(O(k \cdot (2^{-n} + \|C\|^2/\sigma_{m'}(S)^2)))$.*

The main idea of the proof of Lemma 10, given in the full version, is to apply Lemma 5 with $P, P'$ being the $(k,S)$-LWE and $(k,S,C)$-LWE problems respectively, which have instances of the form $x = (r, \mathbf{y})$, where $r = (A, \mathbf{u}, \{\mathbf{x}_i\}_{i \leq k})$, the hints $\mathbf{x}_i$ for $i \leq k$ are sampled from either the zero-centered distribution $D_{\Lambda_{-u}^\perp(A),S,\mathbf{0}}$ (distribution $\Phi$ of $r$, in $(k,S)$-LWE) or the non-zero center distribution $D_{\Lambda_{-u}^\perp(A),S,\mathbf{c}_i}$ (distribution $\Phi'$ of $r$, in $(k,S,C)$-LWE), and $\mathbf{y} \in \mathbb{T}^{m+1}$ is a sample from either the distribution

$$D_0(r) = \frac{1}{q} \cdot U\Big(\mathrm{Im}\begin{pmatrix}\mathbf{u}^t\\ A\end{pmatrix}\Big) + \nu_\alpha^{m+1}$$

or the distribution

$$D_1(r) = \frac{1}{q} \cdot U\Big(\mathrm{Span}_{i \leq k}\begin{pmatrix}1\\ \mathbf{x}_i\end{pmatrix}^{\perp}\Big) + \nu_\alpha^{m+1}.$$

Given $x = (r, \mathbf{y})$, it is possible to efficiently sample $\mathbf{y}'$ from either $D_0(r)$ or $D_1(r)$, so the public-samplability property assumed by Lemma 5 is satisfied. This lemma gives the desired reduction between $(k,S)$-LWE and $(k,S,C)$-LWE, as long as the RD $R(\Phi\|\Phi')$ between the distributions of $r$ in the two problems is polynomially bounded. The latter reduces to obtaining a bound on the RD between a Gaussian distribution and a small offset thereof, which is given by Lemma 4.

In our application of Lemma 10, the $(k,S,C)$-LWE problem resulting from the first subreduction has $\|C\| = 1$ and $\sigma_{m'}(S) = \sigma$, so that $R = \exp(O(k \cdot (2^{-n} + 1/\sigma^2))) = O(1)$ using $\sigma = \Omega(n)$ and $k \leq n$. This shows that the second subreduction is probabilistic polynomial time. ⊓⊔

Our technique can be applied to improve the Boneh-Freeman reduction from SIS to $k$-SIS, from an exponential loss in $k$ to a polynomial loss in $k$. In fact, we map $A$ to $A''$ in the same way (except that we do not use and add $\mathbf{u}$ on top of the matrix $A$) and then also use the top $k$ rows of $(\overline{X}_1|\overline{X}_2)$ as the $k$-SIS hints for the new matrix $A''$. Then, whenever the adversary can output a short vector $\mathbf{x}_1\|\mathbf{x}_2$ that is orthogonal to $A''$, we can also output the short vector $\mathbf{x}_1 - \mathbf{x}_2 \cdot \overline{X}_2^{-1}\overline{X}_1$, which is orthogonal to $A$. As the rows of $\overline{X}_1$ are distributed as independent Gaussian samples and the adversary is only given its first $k$ rows, it can be shown that, if $\mathbf{x}_1\|\mathbf{x}_2$ is linearly independent from the $k$-SIS hints, then the vector $\mathbf{x}_1 - \mathbf{x}_2 \cdot \overline{X}_2^{-1}\overline{X}_1$ is null with negligible probability. RD may also be used to reduce $k$-SIS with non-zero-centered hints (with small centers) to $k$-SIS with zero-centered hints.

## 4 A Lattice-Based Public-Key Traitor Tracing Scheme

In this section, we describe and analyze our basic traitor tracing scheme. First, we give the underlying multi-user public-key encryption scheme.
We then explain how to implement black-box confirmation tracing.

**4.1 A Multi-user Encryption Scheme**

The scheme is designed for a given security parameter $n$, a number of users $N$ and a maximum malicious coalition size $t$. It then involves several parameters $q, m, \alpha, S$. These are set so that the scheme is correct (decryption works properly on honestly generated ciphertexts) and secure (semantically secure encryption and possibility to trace members of malicious coalitions). In particular, we define $S$ as $\mathrm{Diag}(\sigma, \ldots, \sigma, \sigma', \ldots, \sigma') \in \mathbb{R}^{m \times m}$, where $\sigma' > \sigma$ and their respective numbers of occurrences are set so that $(t,S)$-$\mathrm{LWE}_{m+1,\alpha}$ is hard to solve.

**Setup.** The trusted authority generates a master key pair using the algorithm from Lemma 2. Let $(A, T) \in \mathbb{Z}_q^{m \times n} \times \mathbb{Z}^{m \times m}$ be the output. We additionally sample $\mathbf{u}$ uniformly in $\mathbb{Z}_q^n$. Matrix $T$ will be part of the tracing key $tk$, whereas the public key is $pk = A^+$, with $A^+ = (\mathbf{u}^t\|A)$.

Each user $U_i$ for $i \leq N$ obtains a secret key $sk_i$ from the trusted authority, as follows. The authority executes the GPV algorithm using the basis of $\Lambda^\perp(A)$ consisting of the rows of $T$, and the standard deviation matrix $S$. The authority obtains a sample $\mathbf{x}_i$ from $D_{\Lambda_{-u}^\perp(A),S}$. The standard deviations $\sigma' > \sigma$ may be chosen as small as $3mq^{n/m}\sqrt{(2m+4)/\pi}$. The user secret key is $\mathbf{x}_i^+ = (1\|\mathbf{x}_i) \in \mathbb{Z}^{m+1}$. Using the Gaussian tail bound and the union bound, we have $\|\mathbf{x}_i\| \leq \sqrt{m}\sigma'$ for all $i \leq N$, with probability $\geq 1 - N \cdot 2^{-\Omega(m)}$.

The tracing key $tk$ consists of the matrix $T$ and all pairs $(U_i, sk_i)$.

**Encrypt.** The encryption algorithm is exactly the 1-bit encryption scheme from [19, Se. 7.1], which we recall, for readability.[1] The plaintext and ciphertext domains are $\mathcal{P} = \{0,1\}$ and $\mathcal{C} = \mathbb{Z}_q^{m+1}$ respectively, and:

$$\mathrm{Enc}: M \mapsto \begin{pmatrix} \mathbf{u}^t \\ A \end{pmatrix} \mathbf{s} + \mathbf{e} + \begin{pmatrix} M \cdot \lfloor q/2 \rfloor \\ \mathbf{0} \end{pmatrix}, \quad \text{where } \mathbf{s} \hookleftarrow U(\mathbb{Z}_q^n) \text{ and } \mathbf{e} \hookleftarrow \lfloor\nu_{\alpha q}\rceil^{m+1}.$$

[1] As usual, the encryption algorithm may be used to encapsulate session keys which are then fed into an efficient data encapsulation mechanism to encrypt the data.

As explained in [19], this scheme is semantically secure under chosen plaintext attacks (IND-CPA), under the assumption that $\mathrm{LWE}_{m+1,\alpha}$ is hard to solve.

**Decrypt.** To decrypt a ciphertext $\mathbf{c} \in \mathbb{Z}_q^{m+1}$, user $U_i$ uses its secret key $\mathbf{x}_i^+$ and evaluates the following function $\mathrm{Dec}$ from $\mathbb{Z}_q^{m+1}$ to $\{0,1\}$: map $\mathbf{c}$ to 0 if $\langle\mathbf{x}_i^+, \mathbf{c}\rangle \bmod q$ is closer to 0 than to $\pm\lfloor q/2 \rfloor$.

If $\mathbf{c}$ is an honestly generated ciphertext of a plaintext $M \in \{0,1\}$, we have $\langle\mathbf{x}_i^+, \mathbf{c}\rangle = \langle\mathbf{x}_i^+, \mathbf{e}\rangle + M \cdot \lfloor q/2 \rfloor \bmod q$, where $\mathbf{e} \hookleftarrow \lfloor\nu_{\alpha q}\rceil^{m+1}$. It can be shown that the latter inner product $\langle\mathbf{x}_i^+, \mathbf{e}\rangle$ has magnitude $\leq 2\sqrt{m}\alpha q\|\mathbf{x}_i^+\|$ with probability $1 - 2^{-\Omega(n)}$ over the randomness of $\mathbf{e}$. This is $\leq 3m\alpha q\sigma'$ for all $i$, with probability $\geq 1 - N \cdot 2^{-\Omega(n)}$. To ensure the correctness of the scheme, it suffices to set $q \geq 4m\alpha q\sigma'$ (equivalently, $\alpha \leq 1/(4m\sigma')$). Note that other constraints will be added to enable tracing.
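For intuition, here is a toy end-to-end run of ours of the 1-bit scheme (numpy assumed; toy parameters). The short key is fabricated by construction instead of being GPV-sampled from the coset Gaussian, so this illustrates only the Enc/Dec mechanics, not the real key distribution.

```python
import numpy as np

rng = np.random.default_rng(5)
q, m, n, alpha = 65537, 128, 16, 1e-5

A = rng.integers(0, q, size=(m, n))
x = rng.integers(-3, 4, size=m)              # stand-in short secret (not GPV)
u = (-(x @ A)) % q                           # forces (1||x)^t (u^t||A) = 0 mod q
Aplus = np.vstack([u, A])
xplus = np.concatenate([[1], x])

def enc(M: int) -> np.ndarray:
    s = rng.integers(0, q, size=n)
    e = np.rint(rng.normal(0, alpha * q, size=m + 1)).astype(int)
    c = (Aplus @ s + e) % q
    c[0] = (c[0] + M * (q // 2)) % q         # add M * floor(q/2) on top row
    return c

def dec(c: np.ndarray) -> int:
    r = int(xplus @ c) % q
    r = r - q if r > q // 2 else r           # centered representative
    return 0 if abs(r) < q // 4 else 1

assert all(dec(enc(M)) == M for M in (0, 1) for _ in range(20))
```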
**Theorem 4.** *Let $m, n, q, N$ be integers such that $q$ is prime and $N \leq 2^{o(n)}$. Let $\alpha, \sigma, \sigma' > 0$ be such that $\sigma' \geq \sigma \geq \Omega(mq^{n/m}\sqrt{\log m})$ and $\alpha \leq 1/(4m\sigma')$. Then the scheme described above is IND-CPA under the assumption that $\mathrm{LWE}_{m+1,\alpha}$ is hard. Further, the decryption algorithm is correct:*

$$\forall M \in \{0,1\},\ \forall i \leq N:\quad \mathrm{Dec}(\mathrm{Enc}(M, pk), sk_i) = M$$

*holds with probability $\geq 1 - 2^{-\Omega(n)}$ over the randomness used in Setup and Enc.*

**4.2 Tracing Traitors**

We now present a black-box confirmation algorithm Trace.[2] It is given access to an oracle $\mathcal{O}^{\mathcal{D}}$ that provides black-box access to a decryption device $\mathcal{D}$. It takes as inputs the tracing key $tk = (T, (U_i, \mathbf{x}_i^+)_{i \leq N})$ and a set of suspect users $\{U_{i_1}, \ldots, U_{i_k}\}$ of cardinality $k \leq t$, where $t$ is the a priori bound on any coalition size. Wlog, we may consider that $k = t$ and $i_j = j$ for all $j \leq k$.

[2] Note that in our context, minimal access is equivalent to standard access: since the plaintext domain is small, plaintext messages can be tested exhaustively.

Algorithm Trace gathers information about which keys have been used to build the decoder, by feeding different carefully designed distributions to oracle $\mathcal{O}^{\mathcal{D}}$. We consider the following $t+1$ distributions $Tr_0, \ldots, Tr_t$ over $\mathcal{C} = \mathbb{Z}_q^{m+1}$:

$$Tr_i = U\big(\mathrm{Span}(\mathbf{x}_1^+, \ldots, \mathbf{x}_i^+)^\perp\big) + \lfloor\nu_{\alpha q}\rceil^{m+1}.$$

The first distribution $Tr_0$ is the uniform distribution, whereas the last distribution $Tr_t$ is meant to be computationally indistinguishable from $\mathrm{Enc}(0)$. We define $p_\infty$ as the probability $\Pr[\mathcal{O}^{\mathcal{D}}(\mathbf{c}, M) = 1]$ that the decoder can decrypt the ciphertexts, over the randomness of $M \hookleftarrow U(\{0,1\})$ and $\mathbf{c} \hookleftarrow \mathrm{Enc}(M)$. We define $p_i$ as the probability that the decoder decrypts the signals in $Tr_i$, for $i \in [0, t]$:

$$p_i = \Pr_{\substack{\mathbf{c} \hookleftarrow Tr_i \\ M \hookleftarrow U(\{0,1\})}}\left[\mathcal{O}^{\mathcal{D}}\left(\mathbf{c} + \begin{pmatrix} M \cdot \lfloor q/2 \rfloor \\ \mathbf{0} \end{pmatrix},\ M\right) = 1\right].$$

A gap between $p_{i-1}$ and $p_i$ is meant to indicate that $U_i$ is a traitor. The confirmation and soundness properties are proved in the full version.
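The statistical side of Trace reduces to estimating the $p_i$'s and looking for a gap. The following sketch of ours abstracts the samplers for $Tr_i$ behind a hypothetical `oracle` callable and keeps only the Monte-Carlo estimation and gap test; the fake decoder at the end is purely illustrative.

```python
import random
from typing import Callable, List

def estimate_traitors(oracle: Callable[[int], bool], t: int,
                      trials: int = 5000, thresh: float = 0.1) -> List[int]:
    """oracle(i): one decryption trial of the decoder on a Tr_i signal."""
    p = [sum(oracle(i) for _ in range(trials)) / trials for i in range(t + 1)]
    return [i for i in range(1, t + 1) if p[i - 1] - p[i] > thresh]

# Hypothetical decoder holding only the key of suspect 2: it decrypts reliably
# until its key is filtered out of the signal space, i.e., once i >= 2.
fake_oracle = lambda i: random.random() < (0.99 if i < 2 else 0.5)
print(estimate_traitors(fake_oracle, t=3))   # expected output: [2]
```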
We now concentrate on a new feature of our scheme: public traceability.

## 5 Projective Sampling and Public Traceability

We now modify the scheme of Section 4 so that the tracing signals can be publicly sampled. For this purpose, we introduce the concept of a projective sampling family.

**5.1 Projective Sampling**

Inspired from the notion of projective hash family [16], we propose the notion of projective sampling family, in which each sampling function is keyed and, with a projected key, one can simulate the sampling function in a computationally indistinguishable way. Let $X$ be a finite non-empty set. Let $F = (F_k)_{k \in K}$ be a collection of sampling functions indexed by $K$, so that $F_k$ is a sampling function over $X$, for every $k \in K$. We call $\mathrm{Sam} = (F, K, X)$ a sampling family. We now introduce the concept of projective sampling.

**Definition 2 (Projective Sampling).** *Let $\mathrm{Sam} = (F, K, X)$ be a sampling family. Let $J$ be a finite, non-empty set, and let $\pi : K \to J$ be a (probabilistic) function. Let also $P = (P_j)_{j \in J}$ be a collection of sampling functions over $X$, and $D$ be a distribution over $K$. Then $\mathrm{PSam} = (F, K, X, P, J, \pi, D)$ is called a projective sampling family if, with overwhelming probability over the choice of $k, k' \hookleftarrow D$, and given the secret key $k$ and its projected key $\pi(k)$: 1) the distributions obtained using $F_k$ and $P_{\pi(k)}$ are computationally indistinguishable, and 2) the distributions obtained using $F_k$ and $P_{\pi(k')}$ can be efficiently distinguished.*

The first condition means that for $k \hookleftarrow D$, the value $\pi(k)$ "encodes" the sampling distribution of $F_k$, so that when $\pi(k)$ is made public, the sampled signal $F_k$ can be publicly simulated by $P_{\pi(k)}$. The security requirement is very strong because the adversary is not only given the projected key, as in projective hashing, but also the secret key $k$. We require that sampling signals from the secret key and from its projected key are indistinguishable for the insiders who know the secret key. This is relevant for traitor tracing, as the traitors are system insiders and they possess secret data. The second condition (which we actually do not directly use in our cryptographic application) prevents the trivial solution consisting in setting $P_{\pi(k)}$ as an efficient sampling function that is independent of $k$: the simulation signal $P_{\pi(k)}$ must be specific to $k$.[3]

[3] Another trivial situation occurs when $\pi(k) = k$: the projected key leaks the full information about the original key and one cannot safely publish the projected key.

**5.2 Projective Sampling from $k$-LWE**

We construct a set of projective sampling families $(\mathrm{PSam}_i)_{0 \leq i \leq t}$. The parameters are almost identical to the parameters in the Setup of the multi-user scheme of Section 4. A further difference, required for simulation purposes in the security proof, is that $\sigma' > \sigma$ must be set $\geq \widetilde{\Omega}(\sqrt{mn} + \pi q)$.

We let $A \hookleftarrow U(\mathbb{Z}_q^{m \times n})$ and $\mathbf{u} \hookleftarrow U(\mathbb{Z}_q^n)$ be public parameters. For each $i$, we define $K_i = (\mathbb{Z}_q^m)^i$ and $D_i$ as the distribution on $K_i$ that samples $k = (\mathbf{x}_j)_{j \leq i}$ with $\mathbf{x}_j \hookleftarrow D_{\Lambda_{-u}^\perp(A),\sigma}$ for all $j \leq i$. The sampling function $F_{i,k}$ is defined as $U(\mathrm{Span}_{j \leq i}(\mathbf{x}_j^+)^\perp) + \lfloor\nu_{\alpha q}\rceil^{m+1}$. The projected key $\pi_i(k)$ is defined as follows:

- Sample $H \in \mathbb{Z}_q^{m \times (m-n)}$ uniformly, conditioned on $\mathrm{Im}(A) \subseteq \mathrm{Im}(H)$.
- For each $j \leq i$, define $\mathbf{h}_j^t = -\mathbf{x}_j^t \cdot H$.
- Finally, set $J = \mathbb{Z}_q^{m \times (m-n)} \times (\mathbb{Z}_q^{m-n})^i$ and set $\pi_i(k) = (H, (\mathbf{h}_j)_{j \leq i})$.

We now define the sampling $P_{i,\pi_i(k)}$ with projected key $\pi_i(k) = (H, (\mathbf{h}_j)_{j \leq i})$, as follows:

- Set $H_j = (\mathbf{h}_j^t \| H) \in \mathbb{Z}_q^{(m+1) \times (m-n)}$. We have $\mathbf{x}_j^{+t} \cdot H_j = \mathbf{0}$ and $\mathrm{Im}(A^+) \subseteq \mathrm{Im}(H_j)$.
- Set $P_{i,\pi_i(k)} = U(\cap_{j \leq i}\mathrm{Im}(H_j)) + \lfloor\nu_{\alpha q}\rceil^{m+1}$, with $\cap_{j \leq 0}\mathrm{Im}(H_j) = \mathbb{Z}_q^{m+1}$ by convention. Note that $\cap_{j \leq i}\mathrm{Im}(H_j) \subseteq \mathrm{Span}_{j \leq i}(\mathbf{x}_j^+)^\perp$.
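The orthogonality built into the projected keys can be checked directly. Here is a quick numeric check of ours (numpy assumed, toy sizes): with $\mathbf{h}^t = -\mathbf{x}^t H$ and $H_j = (\mathbf{h}^t \| H)$ stacked, the extended key $(1\|\mathbf{x})$ annihilates $H_j$ modulo $q$.

```python
import numpy as np

rng = np.random.default_rng(6)
q, m, n = 101, 10, 3
H = rng.integers(0, q, size=(m, m - n))
x = rng.integers(-2, 3, size=m)             # toy secret key component
h = (-(x @ H)) % q                          # projected key h^t = -x^t H
Hj = np.vstack([h, H])                      # H_j = (h^t || H), (m+1) x (m-n)
xplus = np.concatenate([[1], x])            # x+ = (1 || x)
assert np.all((xplus @ Hj) % q == 0)        # x+^t H_j = 0 mod q
```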
**Theorem 5.** *For each $i = 0, \ldots, t$, $\mathrm{PSam}_i$ is a projective sampling family. Concretely, under the $(i,S)$-$\mathrm{LWE}_{\alpha,m}$ hardness assumption, given the uniformly sampled public parameters $(A, \mathbf{u})$, the secret key $k = (\mathbf{x}_j)_{j \leq i} \hookleftarrow D_i$ and its projected key $\pi_i(k) = (H, (\mathbf{h}_j)_{j \leq i})$, the distributions $F_{i,k}$ and $P_{i,\pi_i(k)}$ are indistinguishable. Moreover, they are both indistinguishable from $U(\mathrm{Im}(A^+)) + \lfloor\nu_{\alpha q}\rceil^{m+1}$. Finally, with overwhelming probability, the distributions $F_{i,k}$ and $P_{i,\pi_i(k')}$ can be efficiently distinguished, when $k'$ is independently sampled from $D_i$.*

*Proof.* For the last statement, observe that with overwhelming probability, the secret key $k'$ contains an $\mathbf{x}'_j \in \mathbb{Z}_q^m$ that does not belong to $\mathrm{Span}_{j \leq i}(\mathbf{x}_j)$ (by Lemma 3). In that case, taking the inner product of all $\mathbf{x}'_j$'s of $k'$ with a sample from $P_{i,\pi_i(k')}$ gives small residues modulo $q$, whereas one of the inner products of the $\mathbf{x}'_j$'s with a sample from $F_{i,k}$ will be uniform modulo $q$.

We now consider the first statement. From the hardness of $(i,S)$-$\mathrm{LWE}_{m,\alpha}$, given $k$, the distributions $F_{i,k} = U(\mathrm{Span}_{j \leq i}(\mathbf{x}_j^+)^\perp) + \lfloor\nu_{\alpha q}\rceil^{m+1}$ and $U(\mathrm{Im}(A^+)) + \lfloor\nu_{\alpha q}\rceil^{m+1}$ are indistinguishable. Further, given $k = (\mathbf{x}_j)_{j \leq i}$ sampled from $D_i$, the projected key $\pi_i(k) = (H, (\mathbf{h}_j)_{j \leq i})$ can be sampled. Therefore, given both $k$ and $\pi_i(k)$, the distributions $F_{i,k}$ and $U(\mathrm{Im}(A^+)) + \lfloor\nu_{\alpha q}\rceil^{m+1}$ remain indistinguishable. Now, we have $\mathrm{Im}(A^+) \subseteq \cap_{j \leq i}\mathrm{Im}(H_j) \subseteq (\mathrm{Span}_{j \leq i}(\mathbf{x}_j^+))^\perp$. Hence:

$$U(\mathrm{Im}(A^+)) + U(\cap_{j \leq i}\mathrm{Im}(H_j)) = U(\cap_{j \leq i}\mathrm{Im}(H_j)),$$
$$U(\mathrm{Span}_{j \leq i}(\mathbf{x}_j^+)^\perp) + U(\cap_{j \leq i}\mathrm{Im}(H_j)) = U(\mathrm{Span}_{j \leq i}(\mathbf{x}_j^+)^\perp).$$

We note that given $\mathbf{h}_1, \ldots, \mathbf{h}_i$, one can efficiently sample from $U(\cap_{j \leq i}\mathrm{Im}(H_j))$. Therefore, under the hardness of $(i,S)$-$\mathrm{LWE}_{m,\alpha}$, this shows that $F_{i,k}$, $P_{i,\pi_i(k)}$ and $U(\mathrm{Im}(A^+)) + \lfloor\nu_{\alpha q}\rceil^{m+1}$ are indistinguishable. ⊓⊔

**5.3 Public Traceability from Projective Sampling**

In the scheme of Section 4, the tracing key $tk = (T, (U_i, \mathbf{x}_i)_{i \leq N})$ must be kept secret, as it would reveal the secret keys of the users. The tracing signals are samples from $U(\mathrm{Span}_{j \leq i}(\mathbf{x}_j^+)^\perp) + \lfloor\nu_{\alpha q}\rceil^{m+1}$, which exactly matches $F_{i,k}$. By publishing the projected key $\pi_i(k)$, anyone can use the projective sampling $P_{i,\pi_i(k)}$: by Theorem 5, given $(k, \pi_i(k))$, $F_{i,k}$ and $P_{i,\pi_i(k)}$ are indistinguishable and they are both indistinguishable from the original sampling $U(\mathrm{Im}(A^+)) + \lfloor\nu_{\alpha q}\rceil^{m+1}$. We are thus almost done with public traceability.

However, a subtle point is that we have to use all the projective samplings $(P_{i,\pi_i(k)})$ for transforming the secret tracing into public tracing: all the projected keys $(\mathbf{h}_j)_{j \leq N}$ should be published. Because the keys $k$ in $F_{i,k}$ are not independent, it could occur that the adversary exploits a projected key $\pi_i(k)$ for distinguishing $P_{i',\pi_{i'}(k')}$ from the original signals. To handle this, we prove that, given $(\mathbf{x}_j)_{j \leq i}$ and all the keys $(\mathbf{h}_j)_{j \leq N}$, the adversary cannot distinguish $P_{i,\pi_i(k)}$ from the original signals. For this purpose, we exploit a technique from [20] to simulate $(\mathbf{h}_j)_{i < j \leq N}$ from the public information.

**Theorem 6.** *Set $i \leq t$. Under the $(i,S)$-$\mathrm{LWE}_{\alpha,m}$ and the $\mathrm{LWE}'_{\alpha,m}$ hardness assumptions, given the secret key $k = (\mathbf{x}_j)_{j \leq i}$ and the projected keys $(H, (\mathbf{h}_j)_{j \leq N})$, the following two distributions are indistinguishable:*

$$P_{i,\pi_i(k)} = U(\cap_{j \leq i}\mathrm{Im}(H_j)) + \lfloor\nu_{\alpha q}\rceil^{m+1} \qquad \text{and} \qquad U(\mathrm{Im}(A^+)) + \lfloor\nu_{\alpha q}\rceil^{m+1}.$$

*Proof.* Assume a ppt attacker is given $(\mathbf{x}_j)_{j \leq i}$ (with the $\mathbf{x}_j$'s independently sampled from $D_{\Lambda_{-u}^\perp(A),\sigma}$) and all the projected keys $(\mathbf{h}_j)_{j \leq N}$. We are to prove that, under the $(i,S)$-$\mathrm{LWE}_{\alpha,m}$ and $\mathrm{LWE}'_{\alpha,m}$ hardness assumptions, it cannot distinguish between the distributions (over $\mathbb{Z}_q^{m+1}$)

$$U(\mathrm{Im}(A^+)) + \lfloor\nu_{\alpha q}\rceil^{m+1} \qquad \text{and} \qquad P_{i,\pi_i(k)} = U(\cap_{j \leq i}\mathrm{Im}(H_j)) + \lfloor\nu_{\alpha q}\rceil^{m+1}.$$

We proceed by a sequence of games.

**Game₀:** This is the above distinguishing game. We let $\varepsilon_0$ denote the adversary's distinguishing advantage. The goal is to show that $\varepsilon_0$ is negligible.
**Game₁:** In this second game, we sample $\mathbf{x}_1, \ldots, \mathbf{x}_i$ from $D_{\Lambda_{-u}^\perp(A),\sigma}$ as in Game₀, but the $\mathbf{x}_j$'s for $j > i$ are sampled uniformly in $\mathbb{Z}_q^m$, conditioned on $\mathbf{x}_j^t \cdot A = -\mathbf{u}^t$. The $\mathbf{h}_j$'s for $j > i$ are modified accordingly, but the rest is as in Game₀. We let $\varepsilon_1$ denote the adversary's distinguishing advantage.

The main point is that in Game₁, no secret information is required for sampling the projected keys $\mathbf{h}_j$ for $j > i$. The proof of the following lemma may be found in the full version.

**Lemma 11.** *Under the $\mathrm{LWE}'_{\alpha,m}$ hardness assumption, the quantity $|\varepsilon_1 - \varepsilon_0|$ is negligible.*

We note that, in Game₁, the $\mathbf{h}_j$'s can be sampled publicly from the available data. Therefore, from Theorem 5, under the $(i,S)$-$\mathrm{LWE}_{\alpha,m}$ hardness assumption, the advantage $\varepsilon_1$ is negligible. ⊓⊔

*Semantic security of the updated scheme.* We modify the public information of the scheme of Section 4, so that we can use the set of projective sampling families described above. For this aim, we simply add the projected key $(H, (\mathbf{h}_i)_{i \leq N})$ to the public key. The scheme becomes publicly traceable because the tracing signals can be sampled from the projected keys, as explained above. Finally, as the public key has been modified, we should prove that the knowledge of these projected keys provides no significant advantage for an adversary towards breaking the semantic security of the encryption scheme. Fortunately, the semantic security directly follows from Theorem 6, for the particular case of $i = 0$.

**Acknowledgements.** We thank M. Abdalla, D. Augot, R. Bhattacharrya, L. Ducas, V. Guleria, G. Hanrot, F. Laguillaumie, K. T. T. Nguyen, G. Quintin, O. Regev and H. Wang for helpful discussions. The authors were partly supported by the LaBaCry MERLION grant, the Australian Research Council Discovery Grant DP110100628, the ANR-09-VERSO-016 BEST and ANR-12-JS02-0004 ROMAnTIC Projects, the INRIA invited researcher scheme, the Singapore National Research Foundation Research Grant NRF-CRP2-2007-03, the Singapore MOE Tier 2 research grant MOE2013-T2-1-041, the LIA Formath Vietnam and the ERC Starting Grant ERC-2013-StG-335086-LATTAC.

## References

1. Aggarwal, D., Regev, O.: A note on discrete Gaussian combinations of lattice vectors (2013). Draft available at http://arxiv.org/pdf/1308.2405v1.pdf
2. Agrawal, S., Gentry, C., Halevi, S., Sahai, A.: Discrete Gaussian leftover hash lemma over infinite domains. In: Sako, K., Sarkar, P. (eds.) ASIACRYPT 2013, Part I. LNCS, vol. 8269, pp. 97–116. Springer, Heidelberg (2013)
3. Ajtai, M.: Generating hard instances of lattice problems (extended abstract). In: Proc. of STOC, pp. 99–108. ACM (1996)
4. Ajtai, M.: Generating hard instances of the short basis problem. In: Wiedermann, J., Van Emde Boas, P., Nielsen, M. (eds.) ICALP 1999. LNCS, vol. 1644, pp. 1–9. Springer, Heidelberg (1999)
5. Alwen, J., Peikert, C.: Generating shorter bases for hard random lattices. Theor. Comput. Science 48(3), 535–553 (2011)
6. Billet, O., Phan, D.H.: Efficient traitor tracing from collusion secure codes. In: Safavi-Naini, R. (ed.) ICITS 2008. LNCS, vol. 5155, pp. 171–182. Springer, Heidelberg (2008)
7. Boneh, D., Franklin, M.K.: An efficient public key traitor tracing scheme (extended abstract). In: Wiener, M. (ed.) CRYPTO 1999. LNCS, vol. 1666, pp. 338–353. Springer, Heidelberg (1999)
8. Boneh, D., Freeman, D.M.: Linearly homomorphic signatures over binary fields and new tools for lattice-based signatures. In: Catalano, D., Fazio, N., Gennaro, R., Nicolosi, A. (eds.) PKC 2011. LNCS, vol. 6571, pp. 1–16. Springer, Heidelberg (2011). Full version available at http://eprint.iacr.org/2010/453
9. Boneh, D., Waters, B.: A fully collusion resistant broadcast, trace, and revoke system. In: Proc. of ACM CCS, pp. 211–220. ACM (2006)
10. Boneh, D., Naor, M.: Traitor tracing with constant size ciphertext. In: Ning, P., Syverson, P.F., Jha, S. (eds.) ACM CCS 2008, pp. 501–510. ACM Press (2008)
11. Boneh, D., Sahai, A., Waters, B.: Fully collusion resistant traitor tracing with short ciphertexts and private keys. In: Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS, vol. 4004, pp. 573–592. Springer, Heidelberg (2006)
12. Boneh, D., Zhandry, M.: Multiparty key exchange, efficient traitor tracing, and more from indistinguishability obfuscation. Cryptology ePrint Archive, Report 2013/642 (2013). http://eprint.iacr.org/
13. Brakerski, Z., Langlois, A., Peikert, C., Regev, O., Stehlé, D.: Classical hardness of learning with errors. In: Proc. of STOC, pp. 575–584. ACM (2013)
14. Chabanne, H., Phan, D.H., Pointcheval, D.: Public traceability in traitor tracing schemes. In: Cramer, R. (ed.) EUROCRYPT 2005. LNCS, vol. 3494, pp. 542–558. Springer, Heidelberg (2005)
15. Chor, B., Fiat, A., Naor, M.: Tracing traitors. In: Desmedt, Y.G. (ed.) CRYPTO 1994. LNCS, vol. 839, pp. 257–270. Springer, Heidelberg (1994)
16. Cramer, R., Shoup, V.: Universal hash proofs and a paradigm for adaptive chosen ciphertext secure public-key encryption. In: Knudsen, L.R. (ed.) EUROCRYPT 2002. LNCS, vol. 2332, pp. 45–64. Springer, Heidelberg (2002)
17. Garg, S., Gentry, C., Halevi, S.: Candidate multilinear maps from ideal lattices. In: Johansson, T., Nguyen, P.Q. (eds.) EUROCRYPT 2013. LNCS, vol. 7881, pp. 1–17. Springer, Heidelberg (2013)
18. Garg, S., Gentry, C., Halevi, S., Raykova, M., Sahai, A., Waters, B.: Candidate indistinguishability obfuscation and functional encryption for all circuits. In: Proc. of FOCS, pp. 40–49. IEEE Computer Society Press (2013)
19. Gentry, C., Peikert, C., Vaikuntanathan, V.: Trapdoors for hard lattices and new cryptographic constructions. In: Proc. of STOC, pp. 197–206. ACM (2008). Full version available at http://eprint.iacr.org/2007/432.pdf
20. Gordon, S.D., Katz, J., Vaikuntanathan, V.: A group signature scheme from lattice assumptions. In: Abe, M. (ed.) ASIACRYPT 2010. LNCS, vol. 6477, pp. 395–412. Springer, Heidelberg (2010)
21. Kiayias, A., Pehlivanoglu, S.: Encryption for Digital Content. Springer, Heidelberg (2010)
22. Kiayias, A., Yung, M.: Breaking and repairing asymmetric public-key traitor tracing. In: Digital Rights Management Workshop, pp. 32–50 (2002)
23. Kiayias, A., Yung, M.: Traitor tracing with constant transmission rate. In: Knudsen, L.R. (ed.) EUROCRYPT 2002. LNCS, vol. 2332, pp. 450–465. Springer, Heidelberg (2002)
24. Komaki, H., Watanabe, Y., Hanaoka, G., Imai, H.: Efficient asymmetric self-enforcement scheme with public traceability. In: Kim, K. (ed.) PKC 2001. LNCS, vol. 1992, pp. 225–239. Springer, Heidelberg (2001)
25. Langlois, A., Stehlé, D., Steinfeld, R.: GGHLite: more efficient multilinear maps from ideal lattices. In: Nguyen, P.Q., Oswald, E. (eds.) EUROCRYPT 2014. LNCS, vol. 8441, pp. 239–256. Springer, Heidelberg (2014)
26. Langlois, A., Stehlé, D., Steinfeld, R.: Improved and simplified security proofs in lattice-based cryptography: using the Rényi divergence rather than the statistical distance (2014). Available on the webpages of the authors
27. Lyubashevsky, V., Peikert, C., Regev, O.: On ideal lattices and learning with errors over rings. J. ACM 60(6), 43 (2013)
28. Micciancio, D., Peikert, C.: Trapdoors for lattices: simpler, tighter, faster, smaller. In: Pointcheval, D., Johansson, T. (eds.) EUROCRYPT 2012. LNCS, vol. 7237, pp. 700–718. Springer, Heidelberg (2012)
29. Micciancio, D., Regev, O.: Worst-case to average-case reductions based on Gaussian measures. SIAM J. Comput. 37(1), 267–302 (2007)
30. Micciancio, D., Regev, O.: Lattice-based cryptography. In: Bernstein, D.J., Buchmann, J., Dahmen, E. (eds.) Post-Quantum Cryptography, pp. 147–191. Springer, Heidelberg (2009)
31. Naor, M., Pinkas, B.: Efficient trace and revoke schemes. In: Frankel, Y. (ed.) FC 2000. LNCS, vol. 1962, pp. 1–20. Springer, Heidelberg (2001)
32. O'Neill, A., Peikert, C., Waters, B.: Bi-deniable public-key encryption. In: Rogaway, P. (ed.) CRYPTO 2011. LNCS, vol. 6841, pp. 525–542. Springer, Heidelberg (2011)
33. Peikert, C.: Public-key cryptosystems from the worst-case shortest vector problem. In: Proc. of STOC, pp. 333–342. ACM (2009)
34. Peikert, C.: An efficient and parallel Gaussian sampler for lattices. In: Rabin, T. (ed.) CRYPTO 2010. LNCS, vol. 6223, pp. 80–97. Springer, Heidelberg (2010)
35. Pfitzmann, B.: Trials of traced traitors. In: Anderson, R. (ed.) IH 1996. LNCS, vol. 1174, pp. 49–64. Springer, Heidelberg (1996)
36. Pfitzmann, B., Waidner, M.: Asymmetric fingerprinting for larger collusions. In: ACM CCS 1997, pp. 151–160. ACM Press (1997)
37. Phan, D.H., Safavi-Naini, R., Tonien, D.: Generic construction of hybrid public key traitor tracing with full-public-traceability. In: Bugliesi, M., Preneel, B., Sassone, V., Wegener, I. (eds.) ICALP 2006. LNCS, vol. 4052, pp. 264–275. Springer, Heidelberg (2006)
38. Regev, O.: On lattices, learning with errors, random linear codes, and cryptography. In: Proc. of STOC, pp. 84–93. ACM (2005)
39. Regev, O.: On lattices, learning with errors, random linear codes, and cryptography. J. ACM 56(6) (2009)
40. Regev, O.: The learning with errors problem. Invited survey, CCC 2010. http://www.cims.nyu.edu/~regev/
41. Sirvent, T.: Traitor tracing scheme with constant ciphertext rate against powerful pirates. In: Augot, D., Sendrier, N., Tillich, J.-P. (eds.) Workshop on Coding and Cryptography (WCC 2007), pp. 379–388 (2007)
42. Watanabe, Y., Hanaoka, G., Imai, H.: Efficient asymmetric public-key traitor tracing without trusted agents. In: Naccache, D. (ed.) CT-RSA 2001. LNCS, vol. 2020, pp. 392–407. Springer, Heidelberg (2001)
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/s00453-016-0251-7?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/s00453-016-0251-7, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://dr.ntu.edu.sg/bitstream/10356/84074/1/Hardness%20of%20k-LWE%20and%20Applications%20in%20Traitor%20Tracing.pdf" }
2014
[ "JournalArticle" ]
true
2014-08-17T00:00:00
[]
23,093
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Economics", "source": "s2-fos-model" }, { "category": "Business", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01a03a56f53ef90015c9e3c8c1c699794f905a10
[ "Computer Science" ]
0.852922
A Tale of Two Layers: The Mutual Relationship between Bitcoin and Lightning Network
01a03a56f53ef90015c9e3c8c1c699794f905a10
Risks
[ { "authorId": "66531339", "name": "S. Martinazzi" }, { "authorId": "14773251", "name": "D. Regoli" }, { "authorId": "34039293", "name": "Andrea Flori" } ]
{ "alternate_issns": [ "2261-611X" ], "alternate_names": null, "alternate_urls": [ "https://www.mdpi.com/journal/risks", "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-317985", "https://risks.hypotheses.org/" ], "id": "eedb1527-2aee-4def-bdb2-20fca9c12c3d", "issn": "2227-9091", "name": "Risks", "type": null, "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-317985" }
A major concern of the adoption and scalability of Blockchain technologies refers to their efficient use for payments. In this work, we analyze how Lightning Network (LN), which represents a relevant infrastructural novelty, is influenced by the market dynamics of its referring cryptocurrency, namely Bitcoin. In so doing, we focus on how the LN is efficient in performing transactions and we relate this feature to the market conditions of Bitcoin. By applying the Toda–Yamamoto variant of Granger-causality, we note that market conditions of Bitcoin do not significantly influence the topological configuration of the LN. Hence, although the LN represents a second layer on the Bitcoin blockchain, our findings suggest that its efficient functioning does not appear to be related to the simple market performance of its underlying cryptocurrency and, in particular, of its volatile market fluctuations. This result may therefore contribute to shed light on the practical usage of the LN as a blockchain technology to favor transactions.
## **A Tale of Two Layers: The Mutual Relationship between Bitcoin and Lightning Network**

**Stefano Martinazzi** [1,*], **Daniele Regoli** [2,†] and **Andrea Flori** [1]

1 Department of Management, Economics and Industrial Engineering, Politecnico di Milano, 20121 Milan, Italy; andrea.flori@polimi.it
2 Data Science and Artificial Intelligence, Intesa Sanpaolo, 20121 Milan, Italy; daniele.regoli@sns.it
* Correspondence: stefano.martinazzi@polimi.it
† The views expressed in this paper are those of the author and should not be attributed to Intesa Sanpaolo or to the author as representative or employee of Intesa Sanpaolo.

Received: 27 October 2020; Accepted: 26 November 2020; Published: 1 December 2020

**Abstract:** A major concern of the adoption and scalability of Blockchain technologies refers to their efficient use for payments. In this work, we analyze how Lightning Network (LN), which represents a relevant infrastructural novelty, is influenced by the market dynamics of its referring cryptocurrency, namely Bitcoin. In so doing, we focus on how the LN is efficient in performing transactions and we relate this feature to the market conditions of Bitcoin. By applying the Toda-Yamamoto variant of Granger causality, we note that market conditions of Bitcoin do not significantly influence the topological configuration of the LN. Hence, although the LN represents a second layer on the Bitcoin blockchain, our findings suggest that its efficient functioning does not appear to be related to the simple market performance of its underlying cryptocurrency and, in particular, of its volatile market fluctuations. This result may therefore contribute to shed light on the practical usage of the LN as a blockchain technology to favor transactions.

**Keywords:** bitcoin; lightning network; granger causality; market efficiency; global efficiency

**1. Introduction**

The growing attention on cryptocurrencies and blockchain solutions is generating a new literature recognizing the increasing relevance assumed by these technologies in shaping several economic domains. Undoubtedly, the research agenda has been heavily affected by the impact of cryptocurrencies' market behaviours and their extremely volatile dynamics. More generally, the use of cryptocurrencies as either means of payment or investment assets has influenced a rich stream of research about the economic fundamentals of these technologies (see, e.g., Baur et al. 2015, 2018; Böhme et al. 2015; Gomber et al. 2017; Selgin 2015; Yermack 2015). Nevertheless, the adoption of these technologies in many contexts still appears in its infancy, thus motivating the current debate and research on how to scale them and, possibly, foster their wider adoption by the business environment in general (Bech and Garratt 2017; Kumhof and Noone 2018; Polasik et al. 2015).

Against this background, the literature has tried to recognize and analyze the key aspects which may limit the diffusion and adoption of cryptocurrencies and blockchain technologies. For instance, since cryptocurrencies are usually not supported by any centralized institution and are generally not related to tangible assets, governance issues may prevent them from being attractive and functioning tools for financial applications (Dwyer 2015; Flori 2019a; Weber 2016; Yermack 2017).
In addition, ethical issues may represent a major concern for their adoption and diffusion in business contexts (Angel and McCabe 2015; Dierksmeier and Seele 2018), and in order to respond to critical issues such as money laundering activities, tax evasion and insider trading, an adequate regulatory framework is no longer considered deferable (Blundell-Wignall 2014; Brito et al. 2014; Pieters and Vivanco 2017). Finally, technical aspects may play a significant role in the diffusion of these technologies. For instance, Bitcoin cannot perform consistent amounts of transactions per unit of time: on average, only a single block can be mined and added to the blockchain every ten minutes, which caps throughput at a maximum of roughly seven transactions per second. As a comparison, well-known payment systems such as Visa can process several thousand transactions per second (Croman et al. 2016).

The identification of cryptocurrencies as investment products, commodities, or currencies is still under discussion (see, e.g., Baek and Elbeck 2015; Baur et al. 2018; Carrick 2016; Flori 2019a; Hong 2017; Yermack 2015). However, the technological constraints occurring during transactions and the inherent liquidity limitations suggest that, at least for transaction aspects, cryptocurrencies may resemble commodities, with values reflecting their intrinsic scarcity and mining costs (see, e.g., Dwyer 2015; Selgin 2015). Within such a framework, miners are those actors that can add new blocks containing transaction records to the blockchain, thus playing a pivotal role in the functioning of the underlying system. From an economic point of view, the interplay between miners and the other actors operating in the system can be gauged, for instance, by the dynamics of the fees, whose value deeply depends on the amount of transactions waiting to be added into the blockchain but weakly refers to the volume transferred per unit of time (Khan et al. 2019). As an example, during the Bitcoin boom phase at the end of 2017, when demand was very high, fees reached an astonishing level of about USD 40 per transaction, up from less than USD 1 registered at the beginning of the same year (Lee 2018). Hence, for large transferred amounts, transactions executed through a blockchain technology can represent a more convenient solution than traditional payment systems, while blockchain infrastructures could appear economically inefficient for micro-payments.

For these reasons, many different solutions have been proposed to increase throughput and lower latencies during transactions, such as the deployment in August 2017 of Bitcoin Cash to increase the size of the blocks to 8 MB, or the Segregated Witness implemented after the hardfork of November 2017 to roughly quadruple the amount of transactions that can be placed into a single block (SegWit, Bitcoin Improvement Proposal 141). Interestingly, a recent infrastructural novelty refers to a "Layer 2" solution based on smart contracts and formed by a network of channels established mainly for micro-payments. This solution is named Lightning Network (hereinafter, LN) and was deployed in January 2018. More specifically, this network is formed by user counterparts that open bilateral channels through the issue of a multi-signed transaction on the Bitcoin blockchain.
In so doing, these pairs of counterparts are then enabled to exchange back and forth a predefined amount of Bitcoin through off-chain transactions that are not uploaded to the blockchain at each operation (Poon and Dryja 2016), thus facilitating faster transactions. Eventually, if a particular channel is no longer needed, a multi-signed transaction corresponding to the final balance between the two counterparts is uploaded to the blockchain. Since the LN represents one of the most recognized solutions for scalability, in this paper we aim to evaluate how this infrastructure is evolving over time and, in particular, we investigate how its configuration reflects the dynamics of Bitcoin, i.e., the market behavior of its referring cryptocurrency. We opt for the assessment of the efficiency of the LN as a key dimension describing its functioning. In fact, this network of channels forms a multi-hop framework in which counterparts can send flows to other counterparts, even without creating a new channel, whenever a common path linking more channels is available and has enough stored capacity. For this reason, we employ the topological efficiency proposed by Latora and Marchiori (2001, 2003) to assess its ability to disseminate information through its nodes, which is a critical aspect for successfully routing transactions in such a multi-hop framework. In line with Martinazzi and Flori (2020), we therefore provide a network theory analysis of the LN, but in this case we propose an evaluation of the efficiency on a daily basis to better assess the impact of market dynamics. We consider the period from 12 February 2018, when the LN started, to 12 August 2020. Our study reveals how the size of the LN, as well as the capacity stored in its channels, increased remarkably over the period, while its efficiency has been characterized by phases of ups and downs. Interestingly, we observe a few erratic behaviours during the period under study, which may be related to the market dynamics of the underlying cryptocurrency. For this reason, we decide to study the role played by Bitcoin market dynamics primarily by assessing its market efficiency conditions. In particular, we test econometrically whether the weak form of the Efficient Market Hypothesis (EMH) (Fama 1970) holds. Several empirical works have already observed that cryptocurrency markets tend indeed to be inefficient, at least during boom and burst phases, meaning that returns appear skewed and heavy-tailed, strong volatility clustering and leverage effects are present, and that multifractality and long-range dependence phenomena for both returns and volatility are quite common (see, e.g., Bariviera et al. 2017; Begušić et al. 2018; Chu et al. 2015; Phillip et al. 2018; Takaishi 2018; Zhang et al. 2018, among others). Therefore, we apply a battery of econometric tests to verify whether Bitcoin market patterns are actually efficient over the sample period. In so doing, we also contribute to the literature by studying market efficiency for recent observations of Bitcoin through the inclusion of a comprehensive set of tests. Our findings, supported also by the application of the Detrended Fluctuation Analysis over the sample period, indicate that Bitcoin is far from being an efficient market.
More generally, the dependence of Bitcoin market efficiency on investors' behavioral distortions, variations in their risk appetite, changes in market conditions, the impact of news, or even novelties in the blockchain infrastructural environment is still under investigation (Brauneis and Mestel 2018; Caginalp and Caginalp 2018; Dyhrberg et al. 2018; Flori 2019b; Fry 2018; Garcia et al. 2014; Kristoufek 2018; Urquhart 2018). In this work, we propose to evaluate the possible mutual effects occurring between Bitcoin market conditions and the functioning of the LN. In particular, we study the nexus between these systems by means of the Toda and Yamamoto (1995) variant of the Granger causality test, thus avoiding any pretest bias from cointegration issues. Our analysis reveals that Bitcoin market conditions do not Granger-cause the topological efficiency of the LN. Hence, the functioning of this second layer of the Bitcoin blockchain does not appear to be affected by how well information is or is not spread in its referring crypto-market. From an economic perspective, this finding may question the practical usage of the LN as a system to favor the adoption and diffusion of blockchain technologies, since its ability to efficiently route transactions does not appear to be influenced by the market dynamics of its referring crypto-market. In fact, Bitcoin market dynamics may influence the LN in several ways, since strong market appreciation may discourage LN users from locking bitcoins within the LN, or may induce them to open channels only with a few selected counterparts, thus impacting the configuration of the LN. The contribution of this paper is therefore twofold. Firstly, we provide a detailed analysis of the evolution of the LN with respect to its topological configuration, to characterize its efficient functioning in routing information through the multi-hop framework. Secondly, we show how such infrastructural efficiency levels relate to the market dynamics of its underlying cryptocurrency, revealing that its dynamics appear poorly connected to the market patterns of Bitcoin. More specifically, we note that Bitcoin market performance does not influence the level of interconnectivity among the users within the LN, but it may instead affect users' decisions on how many bitcoins to store in the corresponding edges of the LN, thus possibly impacting the overall functioning of the network. Scalability issues have haunted Bitcoin since its initial stages, hindering its practical usage and preventing its mass adoption. Our findings can be used to build the case that there might be a wide difference between Bitcoin's audience and the users of the LN. In this regard, our work reveals the lack of strong relationships between Bitcoin's market dynamics and one of the most promising technological improvements built upon it. It should be noted, however, that although the referring cryptocurrency of the LN is Bitcoin, several studies (see, e.g., Aslanidis et al. 2020; Corbet et al. 2018; Dimpfl and Peter 2019; Katsiampa 2019) have highlighted the market interdependence across cryptocurrencies, possibly hiding the role of events in other currencies through their impact on Bitcoin. The paper is organized as follows. In Section 2 we present the technical aspects behind the LN and we describe the methodologies applied to compute both the topological and market efficiency measures.
This section also discusses the mechanics behind Granger causality testing, while in Section 3 we explore the main findings of our study. Section 4 contains concluding remarks.

**2. Methodology**

*2.1. The Lightning Network*

The LN is the second layer of Bitcoin created to overcome some issues related to the payment system, namely low throughput (Poon and Dryja 2016) and high confirmation latency (Barber et al. 2012). Two users in the LN can exchange a pre-established amount of Bitcoin (BTC) instantaneously through an off-chain bi-directional payment channel based on a smart contract, which also allows them to perform an arbitrary number of transactions exempt from fees. Basically, the only costs are therefore the fees paid to open the channel and to close it and broadcast the final balance between the two counterparts. An interesting aspect of the LN is that two separate users that do not share a common channel might still be able to exchange payments if they can find a shared path with enough capacity to route the transaction. This routing framework is known as "multi-hop" (Decker and Wattenhofer 2015; Nowostawski and Tøn 2019; Poon and Dryja 2016). As illustrated in the example of Figure 1, user A may seek to send 1 BTC to user B, but these two counterparts do not share any direct link. However, A and B are directly linked with user C through two different channels. If the capacity installed on those two channels is equal to or higher than 1 BTC, A can send its payment to B, provided a small fee is paid to C for its role as connector. This example can be extended to paths with more than one connector, where payments are forwarded through multiple channels as long as they have enough stored capacity. For instance, A can send 1 BTC to B through users D and E. Conversely, sharing a common path through user F is not sufficient to route a 1 BTC payment, since one channel carries only 0.3 BTC.

**Figure 1.** Representation of different options for a multi-hop transaction. Circles represent users' nodes while the bi-directional channels are represented with arrows in both directions.

To characterize the LN, we follow the approach proposed in Miller et al. (2019) and Guo et al. (2019), employing a daily view of the LN configuration to study its time evolution. Specifically, we consider a channel to be active if its opening date is the same as or earlier than the one on which the daily snapshot is taken, and its closing date is the same as or later than the date of the snapshot. We then employ a topological analysis borrowed from network theory to assess the configuration of the LN. In particular, by means of these daily snapshots, we represent the LN at a given date as an undirected weighted graph in which nodes stand for active users, connected by edges representing the corresponding channels. The weight of a certain edge stands for the amount of BTC stored in the respective channel.
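To make the snapshot construction and the multi-hop feasibility rule concrete, the following minimal Python sketch (using networkx) builds a daily graph from a list of channels and checks whether a payment can be routed. The channel tuple format, function names, and example data are our own illustrative assumptions, not the paper's actual data schema:

```python
import networkx as nx
from datetime import date

def snapshot(channels, day):
    """Build the LN snapshot for a given day as an undirected weighted graph.
    A channel is active if it was opened on or before `day` and closed on or
    after it, mirroring the paper's daily-snapshot rule."""
    g = nx.Graph()
    for a, b, capacity, opened, closed in channels:
        if opened <= day <= closed:
            g.add_edge(a, b, weight=capacity)
    return g

def can_route(g, src, dst, amount):
    """A payment of `amount` is routable iff some path exists whose every
    channel stores at least `amount`."""
    feasible = nx.Graph()
    feasible.add_edges_from((u, v, d) for u, v, d in g.edges(data=True)
                            if d["weight"] >= amount)
    return (feasible.has_node(src) and feasible.has_node(dst)
            and nx.has_path(feasible, src, dst))

# Figure 1, simplified: the A-C-B path can carry 1 BTC, while no path can
# carry 1.5 BTC because every channel stores at most 1.2 BTC.
chans = [("A", "C", 1.0, date(2018, 2, 12), date(2020, 8, 12)),
         ("C", "B", 1.2, date(2018, 2, 12), date(2020, 8, 12)),
         ("A", "F", 1.0, date(2018, 2, 12), date(2020, 8, 12)),
         ("F", "B", 0.3, date(2018, 2, 12), date(2020, 8, 12))]
g = snapshot(chans, date(2019, 1, 1))
print(can_route(g, "A", "B", 1.0))  # True, routed through C
print(can_route(g, "A", "B", 1.5))  # False, no channel-by-channel capacity
```

A channel still open at the snapshot date would simply carry an open-ended closing date (e.g., `date.max`) in this toy schema.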
Finally, following the approach proposed in Martinazzi and Flori (2020), we use the topological efficiency as the main representation of the LN's configuration. It depends on key elements of the structure of the network, such as its density and the distribution of the capacity stored in its channels; hence, it is a measure capable of aggregating a great deal of relevant information about the functioning of the network. For this reason, we refer to the topological efficiency to evaluate how the resulting configurations are able to spread information throughout the system, meaning how efficiently the network can perform multi-hop transactions. In so doing, we consider the global efficiency proposed in Latora and Marchiori (2001, 2003). This measure refers to the average value of the inverse of the shortest path among each possible couple of nodes. In formula, the global efficiency is:

$$E(G) = \frac{1}{N(N-1)} \sum_{i \neq j \in G} \frac{1}{d_{ij}},$$

with $G$ the network composed of $N$ nodes and $d_{ij}$ the shortest path between nodes $i$ and $j$. Global efficiency is usually normalized by $E(G_{\text{ideal}})$, where $G_{\text{ideal}}$ is the fully connected graph with the same $N$ nodes, which therefore propagates information in the most efficient possible way. Once normalized, $0 \leq E(G) \leq 1$, with 0 standing for the totally inefficient configuration and 1 for the fully connected case. The global efficiency therefore summarizes the level of interconnectivity between the nodes of the network and the distribution of the stored capacity among the corresponding edges. For these reasons, we apply it to characterize the effective and efficient functioning of the LN, and we map its evolution over time to evaluate how the system reacted to the dynamics of its referring cryptocurrency, namely Bitcoin.
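Under a hop-count definition of shortest paths, the measure can be computed directly; the sketch below is a plain implementation of the formula above (networkx also ships a built-in `nx.global_efficiency` for the unweighted case). The paper's weighted variant additionally folds channel capacities into the distances, a detail we do not reproduce here:

```python
import networkx as nx

def global_efficiency(g):
    """Latora-Marchiori global efficiency: the average of 1/d_ij over all
    ordered pairs i != j, where d_ij is the shortest-path length (hop count
    in this sketch). Unreachable pairs contribute 0."""
    n = g.number_of_nodes()
    if n < 2:
        return 0.0
    total = 0.0
    for src, lengths in nx.all_pairs_shortest_path_length(g):
        for dst, d in lengths.items():
            if src != dst:
                total += 1.0 / d
    return total / (n * (n - 1))

def normalized_efficiency(g):
    """Normalize by E(G_ideal), the fully connected graph on the same nodes.
    With hop-count distances E(G_ideal) = 1, so the division is trivial here;
    it becomes substantive in the weighted variant used in the paper."""
    ideal = nx.complete_graph(g.nodes)
    return global_efficiency(g) / global_efficiency(ideal)
```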
*2.2. Market Efficiency*

Several techniques have been applied to detect market efficiency in cryptomarkets. Empirical findings reported in the literature typically indicate that Bitcoin returns have not been uniformly efficient over time. For instance, inefficient market conditions have been observed by Kristoufek (2018) in the intervals from mid-2011 to mid-2012, and between March and November 2014. Similarly, Urquhart (2016) finds inefficient conditions since the inception of Bitcoin but also a tendency towards efficiency in the recent period. By contrast, other authors find opposite results: e.g., Nadarajah and Chu (2017) claim that Bitcoin is an efficient market in the interval from August 2010 to July 2016. Likewise, Tiwari et al. (2018) observe that Bitcoin is largely efficient in the period from July 2010 to June 2017. The detection of efficient market conditions in cryptomarkets therefore appears ambiguous in the literature, and findings appear strongly dependent on the selected reference period (for a review see, e.g., Flori 2019a). In addition, scholars have applied several estimation procedures borrowed from different and multidisciplinary fields. For instance, long-term dependence has been investigated by Jiang et al. (2018), who exploit the generalized Hurst exponent and a rolling-window estimation procedure to study the time-varying efficiency of Bitcoin, by Alvarez-Ramirez et al. (2018), who also point to the cyclical anti-persistence of price returns, and by Bariviera et al. (2017), who additionally find that market liquidity does not seem to affect the level of long-term dependence. Al-Yahyaee et al. (2018) show that Bitcoin presents levels of long-range persistence higher than those of common asset classes (e.g., gold, equity indices, the US dollar index). Significant price fluctuations have also stimulated the detection of the efficiency conditions of market volatility. For instance, Bariviera (2017) analyzes the daily volatility of returns and finds that volatility is persistent during the period from August 2011 to February 2017, thus supporting the emergence of volatility clustering, while several other works (see Al-Yahyaee et al. 2018; Baur et al. 2018; Bouri et al. 2019; Drożdż et al. 2018; Phillip et al. 2019, to name a few) note strong persistence and higher levels of volatility compared to traditional financial instruments. Hence, following these perspectives proposed in the literature, we employ a rich toolbox of econometric tests to analyze market efficiency. Specifically, to test whether returns are independent observations, we exploit both the Runs Test (Wald and Wolfowitz 1940) and the Bartels Test (Bartels 1982); to verify serial dependence in the returns, we apply the non-parametric BDS Test (Broock et al. 1996) and the Automatic Portmanteau Test (Escanciano and Lobato 2009); finally, to test whether returns follow a random walk, we consider the DL Test (Domínguez and Lobato 2003) and the AVR Test (Choi 1999; Kim 2009; Lo and MacKinlay 1988). In essence, we refer to these tests to recognize the presence of efficient conditions in the period when the LN was deployed. This assessment therefore provides an aggregate view of the efficient market conditions of Bitcoin over the whole reference period, namely from 12 February 2018 to 12 August 2020. In our analysis the daily returns ($R_t$) at time $t$ are computed as $R_t = \log(P_t / P_{t-1}) \times 100$, with $P_t$ the price of Bitcoin at time $t$, while we compute the corresponding volatility as the absolute value of returns (namely, $|R_t|$). In line with the current literature on cryptocurrencies investigating long-term dependency, we employ the Detrended Fluctuation Analysis (DFA) (Peng et al. 1994, 1995) to provide a daily evolution of market conditions. DFA is, in fact, a common technique to study the stability conditions of various financial systems (see, e.g., Spelta et al. 2020). Hence, Bitcoin market returns $R_t$ are shifted by their mean $\langle R \rangle$ and integrated as follows:

$$x_t = \sum_{i=1}^{t} \left( R_i - \langle R \rangle \right); \tag{1}$$

then, windows of various lengths $\Delta l$ are employed to split these transformed series, so that for each window and value of $\Delta l$ the resulting summed data can be fit. Specifically, we use a local least-squares straight-line fit and we minimize the squared errors within each time window. The root-mean-square deviation from the trend is computed as follows:

$$F(\Delta l) = \sqrt{\frac{1}{L} \sum_{t=1}^{L} \left[ x(t) - x_{\Delta l}(t) \right]^2 }, \tag{2}$$

with $L$ the total number of data points and $x_{\Delta l}(t)$ the piecewise sequence of straight-line fits. Since $F(\Delta l)$ indicates the average of the summed squares of the residuals computed in the windows, a log-log graph of $F(\Delta l)$ versus $\Delta l$ is expected to be linear if power-law scaling is present, meaning that statistical self-affinity expressed as $F(\Delta l) \propto (\Delta l)^{\alpha}$ emerges as a straight line on the log-log graph. We compute the scaling exponent $\alpha$ as the slope of the fitted line using least squares. The scaling parameter $\alpha$, which can also be interpreted as the Hurst exponent, indicates the presence of self-similarity, and therefore long-term memory, as it maps the scaling of dispersion around the regressor as the size of the windows increases.
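In code, the whole DFA pipeline is short. The sketch below is a minimal numpy implementation of Equations (1) and (2) under an illustrative grid of window sizes $\Delta l$ (the paper does not report its grid); the slope of the log-log regression is the $\alpha$ exponent discussed next:

```python
import numpy as np

def dfa_alpha(returns, window_sizes=(4, 8, 16, 32, 64)):
    """Detrended Fluctuation Analysis scaling exponent (Peng et al. 1994):
    integrate the mean-shifted series, detrend it piecewise with local
    straight-line fits, and regress log F on log window size."""
    r = np.asarray(returns, dtype=float)
    x = np.cumsum(r - r.mean())            # profile, Equation (1)
    fluctuations = []
    for dl in window_sizes:
        n_win = len(x) // dl
        resid_sq = 0.0
        for w in range(n_win):
            seg = x[w * dl:(w + 1) * dl]
            t = np.arange(dl)
            a, b = np.polyfit(t, seg, 1)   # local least-squares line
            resid_sq += np.sum((seg - (a * t + b)) ** 2)
        # root-mean-square deviation from the trend over the covered points,
        # Equation (2)
        fluctuations.append(np.sqrt(resid_sq / (n_win * dl)))
    alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return alpha

# Daily alpha over sliding windows of 250 observations with a one-datapoint
# step forward, as in the paper; `R` is assumed to be the array of daily
# log returns.
# alphas = [dfa_alpha(R[t:t + 250]) for t in range(len(R) - 250 + 1)]
```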
The value of $\alpha$ is therefore informative for signaling the following behaviours:

- $0 < \alpha < 0.5$: long-term memory and anti-correlation;
- $0.5 < \alpha < 1$: long-term memory and correlation;
- $\alpha = 0.5$: uncorrelated signal (no memory);
- $\alpha > 1$: non-stationary signal.

In our work, this entire procedure is repeated daily over sliding windows of 250 observations with a one-datapoint step forward. As a robustness check, in the Appendix A we also show the main results for sliding windows of length equal to 300 and 600 days. We anticipate here that the findings are qualitatively very similar to those reported in the main analysis of the paper. We retain the daily value of the exponent $\alpha$ to map the evolution of the market efficiency conditions of Bitcoin. Finally, to study the mutual relationships between the LN and Bitcoin market conditions, we consider the Toda–Yamamoto test (Toda and Yamamoto 1995). This is a variant of the Granger causality test that does not rely on pretesting for cointegration issues. Basically, this approach assumes that the Wald test statistic is valid for Granger causality on the first $p$ lags of a certain variable in an overfitted VAR($p + d_{max}$) model, in which $d_{max}$ refers to the highest order of integration in that system. With $d_{max} > 0$, a regression equation of the system encompassing variables $X$ and $Y$ is thus of the following form:

$$X_t = c_1 + \sum_{j=1}^{p} \alpha_j X_{t-j} + \sum_{j=1}^{p} \beta_j Y_{t-j} + \sum_{k=p+1}^{p+d_{max}} \alpha_k X_{t-k} + \sum_{k=p+1}^{p+d_{max}} \beta_k Y_{t-k} + \epsilon_t, \tag{3}$$

where the coefficients on the additional lagged variables are not considered in the Wald statistic, which asymptotically has a chi-square distribution with $p$ degrees of freedom, irrespective of the order of integration or cointegration properties of the variables in the system (Dolado and Lütkepohl 1996). Hence, this approach allows us to test linear or nonlinear restrictions on the first $p$ coefficient matrices using standard asymptotic theory, even if the processes may be integrated or cointegrated of an arbitrary order (Toda and Yamamoto 1995).

**3. Results**

Figure 2 shows two illustrative snapshots of the LN. The plot on the left is the representation of the LN on 12 February 2018, while the plot on the right stands for 12 August 2020. They refer to the first and the last observation of the LN in our sample. In both snapshots it is possible to notice the presence of a few large nodes surrounded by smaller ones indistinguishable from each other. The presence of a few massively endowed nodes highly connected with the rest of the network, composed of a vast majority of relatively poorly endowed nodes, suggests an overall hub-and-spoke structure of the system, a feature already highlighted by Martinazzi and Flori (2020).

**Figure 2.** Visual representation of the LN. The plot on the left refers to 2018/02/12, while the one on the right to 2020/08/12.

Moreover, in Table 1 we show some topological measures collected for the LN at the beginning and at the end of the sample period. The LN grows from 538 nodes, connected by 1985 channels and with a total capacity of 6.56 BTC (USD 56,861 according to the historical exchange rate), to 7916 simultaneously active nodes, interconnected by 43,654 channels with a total capacity of 1216.29 BTC (USD 13,945,976).
The number of connections per node does not change remarkably in terms of median values (from two to three connections), while the median capacity of the nodes (namely, the strength value) increases about four times. Similarly, the average degree increases from 7.37 to 11.03 connections per node, while the average node capacity increases by an order of magnitude. Overall, these topological indicators point to the presence of a vast majority of nodes with few connections and, possibly, only a small amount of stored BTC. Furthermore, as mentioned before, the multi-hop routing capability of the LN is limited by the possibility of finding paths formed by channels with enough capacity to forward a payment. Hence, it is interesting to note that the median capacity per channel increases from about USD 8 to about USD 57, and the mean value from USD 28.64 to USD 319.47, which means that routing payments is likely to have become easier along this period.

**Table 1.** A collection of topological measures for the LN. This table presents some topological measures extrapolated from the network at its first and last observation in our sample period.

| Measure | 12 February 2018 | 12 August 2020 |
|---|---|---|
| Nodes | 538 | 7916 |
| Channels | 1985 | 43,654 |
| Density | 0.014 | 0.001 |
| Median Degree | 2 | 3 |
| Average Degree | 7.37 | 11.03 |
| Median Strength (USD) | 22.80 | 91.70 |
| Average Strength (USD) | 211.34 | 3523.49 |
| Average Capacity (USD) | 28.64 | 319.47 |
| Median Capacity (USD) | 7.80 | 57.33 |
| Total Capacity (USD) | 56,861 | 13,945,976 |
| Assortativity | −0.370 | −0.231 |
| Assortativity (W) | −0.170 | −0.057 |
| Diameter | 6 | 12 |
| Radius (LCC) | 4 | 6 |
| Transitivity (W) | 0.120 | 0.063 |
| Global Efficiency Norm. | 0.140 | 0.014 |

We also consider some topological measures taking into account the whole configuration of the network. The assortativity coefficient stands for the tendency of nodes to connect with others that possess similar degrees of connections. For a weighted network, an assortative behaviour arises when nodes with similar weighted degree bond together. The LN, in its unweighted representation, shows a decisive disassortative behaviour, which is typical, for instance, of the internet infrastructure (Noldus and Van Mieghem 2015). Such a disassortative feature is present also in the weighted representation of the LN (namely, Assortativity (W)), even if in a less remarkable fashion. Finally, the radius and diameter coefficients, which measure the minimum and maximum eccentricity between any pairs of nodes respectively, indicate an increasing dimension of the network, as reflected also by the rise in the number of participants. Our main measure of interest, the normalized global efficiency, shows instead a relevant drop from 0.14 to 0.014. The topological efficiency represents a relevant aspect for the assessment of the usability of the LN as a payment infrastructure, since it indicates how flows can effectively move through the system.
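Most of the indicators in Table 1 are one-liners in networkx; a sketch is given below (the normalized global efficiency would come from the earlier sketch). Function and key names are ours; in particular, the paper's weighted transitivity is approximated here by the mean weighted clustering coefficient, a related but not identical quantity:

```python
import networkx as nx
import numpy as np

def topology_report(g):
    """Table 1 style indicators for a snapshot graph `g` whose edge
    attribute `weight` stores channel capacity."""
    degrees = np.array([d for _, d in g.degree()])
    strengths = np.array([s for _, s in g.degree(weight="weight")])
    lcc = g.subgraph(max(nx.connected_components(g), key=len))
    return {
        "nodes": g.number_of_nodes(),
        "channels": g.number_of_edges(),
        "density": nx.density(g),
        "median_degree": np.median(degrees),
        "average_degree": degrees.mean(),
        "median_strength": np.median(strengths),
        "assortativity": nx.degree_assortativity_coefficient(g),
        "assortativity_w": nx.degree_assortativity_coefficient(g, weight="weight"),
        # proxy for the paper's weighted transitivity
        "transitivity_w": np.mean(list(nx.clustering(g, weight="weight").values())),
        "diameter_lcc": nx.diameter(lcc),
        "radius_lcc": nx.radius(lcc),
    }
```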
In Figure 3 we plot the historical values of the LN's normalized efficiency against the density of the network and the median capacity of the channels expressed in USD. The latter are chosen to display two key aspects of efficiency, namely the inter-connectivity among nodes and the capacity installed on the corresponding channels. As shown in the figure, the tendency of the normalized efficiency is comparable with the network's density, while the growth of the median capacity presents an opposite pattern, especially in the last period. While it is intuitive to understand why a decrease in the inter-connectivity of the network deteriorates its efficiency, the relationship with the median capacity may be less evident. A possible explanation lies, in fact, in the definition of the ideal graph, which has its capacity evenly distributed among all its channels, as opposed to the real network, characterized by sparser stored capacities and a core of very active nodes. Hence, an increase in the total capacity will always be distributed in a more efficient way in the ideal graph than in the real network, therefore decreasing the normalized efficiency of the latter.

**Figure 3.** Evolution of the LN's normalized efficiency. The plot exhibits the evolution of the LN's global normalized efficiency (in light blue), its density (in black) multiplied by 10, and the median capacity installed on its channels (in red). Both density and efficiency can assume values between 0 and 1. The y-axis on the left refers to the density and efficiency measures, while the one on the right refers to the median capacity expressed in USD.

As presented in the Introduction, in order to assess the role played by Bitcoin market conditions on the efficiency levels of the LN, we primarily analyze how efficiently its market dynamics embed information. In Table 2 we report the results of the tests presented in Section 2.2. Although caution should be taken due to the short sample periods, our findings in Panel A indicate that Bitcoin returns seem to be characterized by inefficient market conditions. We consider several time windows, basically one for each year from 2015 to 2019, and two cases that refer to the intervals from 2015 to 12 August 2020 and from 12 February 2018 to 12 August 2020, respectively. The latter case corresponds to our reference period with respect to the deployment of the LN, while the case from 2015 to 12 August 2020 is a scenario extended in terms of the length of the observations in order to enhance the statistical significance of the results. This extended case largely confirms the findings reported for each year separately. Similarly, the market conditions for volatility appear largely inefficient during each interval and across each test (see Panel B), thus reflecting the market turbulence characterizing the persistence of Bitcoin's erratic market behavior. In addition, to depict the market conditions of Bitcoin on a daily basis, we rely on the DFA (Peng et al. 1994, 1995). Figure 4 shows the time evolution of the exponents for both the returns (in green) and volatility (in blue) of Bitcoin. Note how neither exponent lies in a range around the value 0.5 that corresponds to efficient market conditions, thus supporting the results reported in Table 2. Furthermore, although price euphoria has stimulated upwards-downwards market dynamics and relevant price fluctuations, the dynamics of the price of Bitcoin (in gray) does not seem to map strongly onto the corresponding patterns of the DFA exponents. This is true also in the period after the remarkable market boom phase starting from the beginning of 2017 and culminating in the early part of 2018, when the LN was established. During the reference period, the correlation values between the Bitcoin price and the DFA exponents of returns and volatility are low, at about 0.04 and 0.05, respectively.

**Table 2.** Bitcoin market efficiency conditions.
The table reports $p$-values for the following tests: the Runs Test (Wald and Wolfowitz 1940), the Bartels Test (Bartels 1982), the BDS Test (Broock et al. 1996), the Automatic Portmanteau Test (Escanciano and Lobato 2009), the AVR Test (Choi 1999; Kim 2009; Lo and MacKinlay 1988), and the DL Test (Domínguez and Lobato 2003). For BDS, the table reports the average $p$-values across specifications with embedding dimensions from 2 to 5; for the AVR test we compute 500 bootstrap iterations; for DL the table reports both the wild-bootstrap $p$-values of the Cramér–von Mises test statistic (cp) and of the Kolmogorov–Smirnov test statistic (kp). Panel A refers to Bitcoin returns, while Panel B reports the results for the corresponding volatility computed as the absolute value of the returns (i.e., $|$returns$|$).

**Panel A**

| Period | Runs Test | Bartels Test | BDS Test | Automatic Portmanteau Test | AVR Test | DL (cp) Test | DL (kp) Test |
|---|---|---|---|---|---|---|---|
| 2015/01/01–2015/12/31 | 0.00053 | 0.00005 | 0.00000 | 0.10644 | 0.35000 | 0.00000 | 0.00000 |
| 2016/01/01–2016/12/31 | 0.01605 | 0.00016 | 0.00000 | 0.08296 | 0.04800 | 0.00000 | 0.00000 |
| 2017/01/01–2017/12/31 | 0.00164 | 0.00000 | 0.00000 | 0.00041 | 0.00000 | 0.00000 | 0.00000 |
| 2018/01/01–2018/12/31 | 0.07434 | 0.00070 | 0.00000 | 0.02037 | 0.01400 | 0.00000 | 0.00000 |
| 2019/01/01–2019/12/31 | 0.00078 | 0.00000 | 0.00148 | 0.00033 | 0.00800 | 0.00000 | 0.00000 |
| 2018/02/12–2020/08/12 | 0.00002 | 0.00000 | 0.00000 | 0.00002 | 0.00200 | 0.00000 | 0.00000 |
| 2015/01/01–2020/08/12 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 |

**Panel B**

| Period | Runs Test | Bartels Test | BDS Test | Automatic Portmanteau Test | AVR Test | DL (cp) Test | DL (kp) Test |
|---|---|---|---|---|---|---|---|
| 2015/01/01–2015/12/31 | 0.00036 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 |
| 2016/01/01–2016/12/31 | 0.02127 | 0.00001 | 0.00000 | 0.00055 | 0.00000 | 0.00000 | 0.00000 |
| 2017/01/01–2017/12/31 | 0.00016 | 0.00000 | 0.00037 | 0.00000 | 0.00000 | 0.00000 | 0.00000 |
| 2018/01/01–2018/12/31 | 0.00000 | 0.00000 | 0.00081 | 0.00000 | 0.00000 | 0.00000 | 0.00000 |
| 2019/01/01–2019/12/31 | 0.00016 | 0.00000 | 0.03740 | 0.00092 | 0.00000 | 0.00000 | 0.00000 |
| 2018/02/12–2020/08/12 | 0.00000 | 0.00000 | 0.00009 | 0.00000 | 0.00000 | 0.00000 | 0.00000 |
| 2015/01/01–2020/08/12 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 | 0.00000 |

**Figure 4.** DFA exponent time evolution. The plot exhibits the DFA exponent (Peng et al. 1994, 1995) for returns (in green) and volatility (in blue). Estimates consider sliding windows of 250 daily observations and one datapoint step forward. The shaded area refers to the standard error of the corresponding coefficient. The log of Bitcoin prices (divided by 10³) is reported in gray. The dotted red line stands for the 0.5 level of the DFA $\alpha$ exponent.
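As a pointer to replication, two of the tests in Table 2 are readily available in Python's statsmodels; the remaining ones (Bartels, Automatic Portmanteau, AVR, DL) are, to our knowledge, implemented in dedicated R packages such as vrtest rather than in statsmodels, and are omitted from this minimal sketch (function and variable names are our own):

```python
import numpy as np
from statsmodels.sandbox.stats.runs import runstest_1samp
from statsmodels.tsa.stattools import bds

def efficiency_pvalues(series):
    """p-values for two of the Table 2 tests on a window of daily data."""
    s = np.asarray(series, dtype=float)
    _, p_runs = runstest_1samp(s, cutoff="mean")
    # BDS over embedding dimensions 2..5; Table 2 reports the average p-value.
    _, p_bds = bds(s, max_dim=5)
    return {"runs": p_runs, "bds_mean": float(np.mean(p_bds))}

# `prices` is assumed to be an array of daily BTC closing prices:
# returns = np.diff(np.log(prices)) * 100
# print(efficiency_pvalues(returns), efficiency_pvalues(np.abs(returns)))
```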
In particular, the literature has proposed several aspects that may affect the efficiency of crypto markets (see, e.g., Brauneis and Mestel 2018; Caginalp and Caginalp 2018; Dyhrberg et al. 2018; Flori 2019b; Fry 2018; Garcia et al. 2014; Kristoufek 2018; Urquhart 2018, to name a few). They refer, for instance, to investors' behavioral biases, the impact of news, and infrastructural changes. As far as the latter aspect is concerned, the LN represents one of the main infrastructural novelties within the framework of payment solutions based on blockchain technologies. For this reason, we aim to explore whether its functioning has been influenced by the market conditions of its referring cryptocurrency, namely Bitcoin, or, alternatively, whether the LN may itself have affected the market efficiency conditions of Bitcoin. The findings reported in Table 2 and Figure 4 indicate that the market conditions for Bitcoin seem to have been inefficient when the LN started to operate, basically corresponding to the period just after the remarkable boom phase that culminated at the end of 2017. Our next investigation refers, therefore, to the comparison between the market conditions of Bitcoin and the functioning of the LN, the latter in terms of its ability to perform transactions in a multi-hop system. In so doing, to map the daily market conditions of Bitcoin we refer to the exponent values from the DFA of both returns and volatility, along with the raw price and return time series, while we employ the topological efficiency to describe the functioning of the LN. Due to the nature of these indicators, which may exhibit erratic patterns, and the potential presence of cointegration issues, we opt for the Granger-like causality test based on the Toda–Yamamoto approach (Toda and Yamamoto 1995). Other methodologies to run proper causality testing when time series are non-stationary and, possibly, cointegrated can be utilized as well (see, e.g., Lütkepohl 2005). We run the Toda–Yamamoto tests over the period from 12 February 2018 to 12 August 2020, thus covering the thirty months of existence of the LN in our sample. We consider the topological efficiency of the LN, the DFA exponents of the returns and volatility of Bitcoin, as well as both its raw price and return time series. Specifically, the mechanics behind the application of the Toda–Yamamoto test is based on the following steps. First, for each series we compute the maximum order of integration ($d_{max}$) by calculating the ADF and KPSS tests. Second, we set up VAR models in levels for pairs of variables and we select the maximum lag length ($p$) using information criteria such as AIC, SIC, HQ and FPE. Third, we check whether each VAR model is well specified by verifying that residuals are not serially correlated. Fourth, we add the maximum order of integration to the number of lags, thus estimating augmented VAR($p + d_{max}$) models. Our assessment is finally based on carrying out Wald tests on the first $p$ lag coefficients. The Wald test statistics are asymptotically chi-square distributed with $p$ degrees of freedom.
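This four-step procedure maps onto a compact estimation routine. The sketch below implements the core of the test for a single direction ("y Granger-causes x") via the OLS form of the augmented equation (3); lag selection by information criteria and the ADF/KPSS determination of $d_{max}$ are left outside the function, and all names are our own assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def toda_yamamoto(x, y, p, dmax):
    """Toda-Yamamoto test of 'y Granger-causes x'. Estimates by OLS the
    x-equation of a VAR(p + dmax) in levels, then Wald-tests the first p
    lags of y; the dmax augmentation lags stay unrestricted, which is what
    keeps the chi-square asymptotics valid for integrated series.
    `x` and `y` are assumed to be pandas Series on a common index; p and
    dmax come from information criteria and ADF/KPSS tests, respectively."""
    lagged = {}
    for j in range(1, p + dmax + 1):
        lagged[f"x_l{j}"] = x.shift(j)
        lagged[f"y_l{j}"] = y.shift(j)
    df = pd.concat([x.rename("x"), pd.DataFrame(lagged)], axis=1).dropna()
    design = sm.add_constant(df[list(lagged)])
    res = sm.OLS(df["x"], design).fit()
    # Restrict only the first p lags of y.
    restriction = ", ".join(f"y_l{j} = 0" for j in range(1, p + 1))
    wald = res.wald_test(restriction, use_f=False)
    return float(np.squeeze(wald.statistic)), float(wald.pvalue)
```

For Panel A of the tests discussed next one would call, e.g., `toda_yamamoto(ln_efficiency, btc_returns, p, dmax)`, where both arguments are hypothetical pandas Series holding the daily indicators.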
Table 3 reports the estimates of the Toda–Yamamoto tests. Panel A reports the case in which column variables may Granger-cause the LN efficiency. Interestingly, although the LN is a second layer of the Bitcoin blockchain, the efficiency of the Bitcoin market does not seem to materially impact the functioning of the LN. In fact, both the tests in which the DFA exponents of the Bitcoin returns and volatility are compared against the LN topological efficiency present very high $p$-values. Similarly, Bitcoin raw prices and returns do not seem to have a statistically significant influence on the LN efficiency. Hence, it seems that the efficiency of the LN, in terms of its ability to perform transactions in the multi-hop structure, is not influenced by the market dynamics of Bitcoin. We recall that the efficiency of the LN here depends on the level of interconnectivity and on the capacity in terms of bitcoins stored in the edges of the network. Our findings indicate that Bitcoin market performance may thus not have a role in shaping the LN efficiency, while, as expected, Panel B indicates that the LN efficiency configuration is not able to Granger-cause the market dynamics of its main referring cryptomarket. In the Appendix A, we show that these findings are confirmed once we extend the time windows used to compute the DFA to 300 and 600 days instead of 250.

**Table 3.** Testing for Granger causality. The table reports the results of the Toda–Yamamoto test (Toda and Yamamoto 1995). In Panel A we test whether column variables Granger-cause the row variable, and the opposite in Panel B.

**Panel A: Column G-Causes Row**

| LN efficiency | BTC Alpha | BTC Vol Alpha | BTC Price | BTC Returns |
|---|---|---|---|---|
| statistics | 5.50 | 0.70 | 1.70 | 1.60 |
| $p$-value | 0.36 | 0.98 | 0.42 | 0.46 |

**Panel B: Row G-Causes Column**

| LN efficiency | BTC Alpha | BTC Vol Alpha | BTC Price | BTC Returns |
|---|---|---|---|---|
| statistics | 7.40 | 2.90 | 0.41 | 0.31 |
| $p$-value | 0.19 | 0.72 | 0.81 | 0.86 |

Then, in order to better understand whether some aspects of the configuration of the LN are instead prone to be influenced by changes in Bitcoin market performance, we investigate whether topological features related to the efficiency levels of the LN might be Granger-caused by the market performance of Bitcoin. Therefore, we replicate an analysis similar to the one reported in Table 3, but in this case we specifically focus on the relationships between Bitcoin returns and a battery of topological indicators. In particular, in Table 4 we report the estimates related to the Granger causality of Bitcoin returns on the following topological indicators for the LN: assortativity, density, transitivity, the median value of the nodes' strength, and the median capacity of the edges. Hence, we refer to a simple list of topological indicators that are able to map the configuration of the LN in terms of both the features of its nodes and the way edges connecting these nodes are created (see also Table 1 and the corresponding discussion). This analysis thus provides an intuitive indication of the potential elements contributing to the functioning of the LN. From Table 4, note how Bitcoin returns do not appear to Granger-cause how similar nodes in the LN tend to connect together, as shown by the relationship with assortativity. Similarly, it emerges that the relationship with respect to the overall density of the LN is not significant. Hence, Bitcoin market performance does not seem to be a significant driver for the creation of channels in the LN, at least for what concerns the aggregate level of inter-connectivity in the network. In addition, both the relationships with assortativity and with transitivity seem to signal that Bitcoin market performance is not able to significantly affect the structure of the neighborhood of each node. This is also supported by the results involving the median values of the nodes' strengths, which do not appear influenced by Bitcoin market dynamics. By contrast, it seems that the amount of bitcoins stored in the channels can be related to Bitcoin market movements. Overall, these findings support the interpretation that Bitcoin market performance hardly influences the efficiency of the LN through the creation of channels, but may impact it through the corresponding deployment of stored resources. Finally, the corresponding reverse relationships are not statistically significant.
**Table 4.** Testing for the Granger-causality relationship: BTC returns vs. LN configuration. The table reports the results of the Toda–Yamamoto test (Toda and Yamamoto 1995) in which BTC returns are tested to verify whether they Granger-cause a list of topological indicators for the LN (reported in columns). These topological indicators are, respectively: the assortativity, the density, the transitivity, the median value of the nodes' strength, and the median capacity of the edges.

**Row G-Causes Column**

| BTC returns | Assortativity | Density | Transitivity | Median Strength | Median Capacity |
|---|---|---|---|---|---|
| statistics | 0.17 | 0.96 | 3.10 | 0.69 | 4.70 |
| $p$-value | 0.68 | 0.33 | 0.21 | 0.41 | 0.03 |

The previous findings seem to rule out a relevant role for the topological features of the nodes. The LN is, however, characterized by the existence of a bundle of very active players to which a cloud of small nodes (in terms of capacity) is connected. For this reason, we also investigate the potential impact of Bitcoin market returns on the characteristics of these highly centralized nodes, whose dynamics may actually influence the overall functioning of the system. Hence, we select the top 0.5% of the nodes in terms of strength, representing those nodes in the LN which are most likely to affect the overall functioning of the system, and we test the Toda–Yamamoto Granger causality of Bitcoin market returns on their fraction of capacity with respect to the whole LN. We observe that this relationship is not significant ($p$-value 0.30). We replicate the same analysis using the top 1% and 10% of the nodes, obtaining similar results ($p$-values equal to 0.32 and 0.86, respectively). The centralization feature of the LN, already observed by Martinazzi and Flori (2020), may influence its functioning, since a huge portion of transactions is likely to occur across the edges of these central nodes. Our analysis suggests that the tendency towards a centralized configuration of the LN does not seem to be impacted by Bitcoin market performance.

**4. Conclusions**

Since its inception, Bitcoin has been criticized for its inability to efficiently perform as many transactions per second as traditional payment services. This limitation, known as the scalability issue, has been addressed with different tentative solutions, but it has never been completely solved. In this regard, the LN is a system based on off-chain payment channels and has been considered since its proposal a very promising candidate to definitively solve the scalability issue. This work investigates the functioning of the LN by adopting a graph theory perspective to detect how efficient it is in routing information through its multi-hop framework. In particular, in order to assess the efficiency of such an infrastructure, we analyze whether Bitcoin market conditions affect the functioning of the LN. This is a relevant point for practical purposes, since the very volatile nature of Bitcoin, which is the underlying cryptocurrency of the LN, may actually influence the configuration of the LN, limiting its wider adoption and, eventually, preventing its use as a solution for the scalability issue. To detect whether Bitcoin market performance plays a role in shaping the configuration of the LN, we opt for an investigation strategy in which Bitcoin market dynamics is synthesized through an intuitive set of indicators.
First, we test Bitcoin for the weak Efficient Market Hypothesis on a daily basis by means of the Detrended Fluctuation Analysis (DFA) and various statistical tests. We keep the DFA exponents for the Bitcoin returns and volatility and, alongside the daily price and return time series, we test whether they Granger-cause the efficiency of the LN. This analysis does not reveal any significant relationship between the market conditions of Bitcoin and the topological efficiency of the LN, or vice versa. Then, we focus on a simple indicator of market performance and we test whether Bitcoin daily returns, largely emphasized by market watchers and blockchain fans, actually impact specific topological properties related to the efficiency of the LN, such as assortativity, density, transitivity, median nodal capacity and median channel capacity, which we employ to describe the infrastructural features of the LN and its adoption. Once again, our findings reveal that Bitcoin market performance does not seem to influence the properties of the configuration of the LN, with the only exception represented by the capacity stored in the channels. Finally, we investigate the Granger-causality relationship between Bitcoin market returns and the growth of the most endowed nodes in the LN, which represent the most active nodes in the network through which a relevant share of transactions in the multi-hop framework is likely to occur. More precisely, we consider the proportion of the capacity installed over those channels co-owned by the top 0.5%, 1% and 10% of nodes. Our estimates indicate that Bitcoin market performance does not seem to influence the core of the network. These results suggest that the forces that drive Bitcoin market patterns are different from those that affect the evolution of the LN. In light of these results, we can suppose that the activity of the LN might be only in part influenced by the interest surrounding Bitcoin market performance, since the functioning of the LN does not appear to be strongly related to the market dynamics of its referring cryptocurrency. In fact, our analysis indicates that the very volatile market dynamics of Bitcoin, although it could influence the configuration of the LN by impacting, for instance, the amount stored in the channels, in practice does not affect its level of efficiency. This is an interesting result for the future adoption of the LN as an infrastructural solution to favor scalability, since it seems to indicate that Bitcoin market turmoil and performance play a marginal role in shaping the LN configuration, which instead seems more related to the distribution of capacities among channels. We can thus speculate that the LN is an innovation that attracts the interest of the most technologically proficient users of Bitcoin, while it has little appeal for those who consider Bitcoin nothing more than a financial asset. As noted in Martinazzi and Flori (2020), the efficiency of the network is one of its main features and is strongly affected by the structure and the capacity distributed over its channels. Hence, users and proponents of the LN might emphasize the importance and usability of the LN to increase stored capacity, thus enabling higher effectiveness and making the infrastructure capable of performing indirect payments in a more efficient way. There are some limitations in this study. First, the short length of the period under analysis may weaken our conclusions, especially when considering such volatile market patterns.
Second, the nature of Bitcoin and the LN makes it impossible to precisely impute node ownership, an aspect that would be interesting to take into account to understand how common users operate across these two networks. For instance, nodes' behavior might be relevant to disentangle those cases where the LN is mostly exploited for testing purposes, where users interested in evaluating and testing this technology may be more prone to open a channel with a node owned by a recognized institution in the LN. The underlying behavioral drivers that shape the development of the LN's structure should be investigated more carefully in future works, also with respect to the overall market dynamics of cryptocurrencies. For instance, interdependences between the co-movements of different cryptocurrencies have been empirically shown in many works (see, e.g., Dimpfl and Peter 2019; Katsiampa 2019), highlighting the presence of herding behavior in the market, which can be exacerbated by periods of market stress (Raimundo Júnior et al. 2020; Vidal-Tomás et al. 2019). Future works may thus focus on how news and main announcements may impact the LN infrastructure, its functioning, and its relationships with Bitcoin and, more generally, with the marketplace of cryptocurrencies. The detection and stability of clusters of nodes sharing similar features, in line, for instance, with other applications in finance (see, e.g., Flori et al. 2019; Puliga et al. 2016; Spelta et al. 2018), represent another interesting field that can be investigated to study users' behavior in the network.

**Author Contributions:** Conceptualization, S.M., D.R., A.F.; methodology, S.M., D.R., A.F.; formal analysis, S.M., A.F.; investigation, S.M., A.F.; data curation, S.M.; writing–original draft preparation, S.M., A.F.; writing–review and editing, D.R., A.F.; visualization, S.M. All authors have read and agreed to the published version of the manuscript.

**Funding:** This research received no external funding.

**Conflicts of Interest:** The authors declare that they have no conflict of interest.

**Appendix A**

**Table A1.** Testing for Granger causality. The table reports the results of the Toda–Yamamoto test (Toda and Yamamoto 1995). In Panel A we test whether column variables Granger-cause the row variable, and the opposite in Panel B. To compute the DFA, we consider time windows of length $n = 300$ for the first two columns and $n = 600$ for the last two.

**Panel A: Column G-Causes Row**

| LN efficiency | BTC Alpha 300 | BTC Vol Alpha 300 | BTC Alpha 600 | BTC Vol Alpha 600 |
|---|---|---|---|---|
| statistics | 1.90 | 1.90 | 2.20 | 1.10 |
| $p$-value | 0.60 | 0.87 | 0.81 | 0.98 |

**Panel B: Row G-Causes Column**

| LN efficiency | BTC Alpha 300 | BTC Vol Alpha 300 | BTC Alpha 600 | BTC Vol Alpha 600 |
|---|---|---|---|---|
| statistics | 0.90 | 3.40 | 3.10 | 5.30 |
| $p$-value | 0.83 | 0.64 | 0.68 | 0.50 |

**References**

Al-Yahyaee, Khamis Hamed, Walid Mensi, and Seong-Min Yoon. 2018. Efficiency, multifractality, and the long-memory property of the bitcoin market: A comparative analysis with stock, currency, and gold markets. *Finance Research Letters* 27: 228–34.
Alvarez-Ramirez, José, Eduardo Rodriguez, and Carlos Ibarra-Valdez. 2018. Long-range correlations and asymmetry in the bitcoin market. *Physica A: Statistical Mechanics and its Applications* 492: 948–55.
Angel, James J., and Douglas McCabe. 2015. The ethics of payments: Paper, plastic, or bitcoin? *Journal of Business Ethics* 132: 603–11.
Aslanidis, Nektarios, Aurelio F. Bariviera, and Alejandro Perez-Laborda. 2020. Are cryptocurrencies becoming more interconnected? *arXiv* arXiv:2009.14561.
Baek, Chung, and Matt Elbeck. 2015. Bitcoins as an investment or speculative vehicle? A first look. *Applied Economics Letters* 22: 30–34.
Barber, Simon, Xavier Boyen, Elaine Shi, and Ersin Uzun. 2012. Bitter to better—How to make bitcoin a better currency. In *International Conference on Financial Cryptography and Data Security*. Berlin and Heidelberg: Springer, pp. 399–414.
Bariviera, Aurelio F. 2017. The inefficiency of bitcoin revisited: A dynamic approach. *Economics Letters* 161: 1–4.
Bariviera, Aurelio F., María José Basgall, Waldo Hasperué, and Marcelo Naiouf. 2017. Some stylized facts of the bitcoin market. *Physica A: Statistical Mechanics and its Applications* 484: 82–90.
Bartels, Robert. 1982. The rank version of von Neumann's ratio test for randomness. *Journal of the American Statistical Association* 77: 40–46.
Baur, Aaron W., Julian Bühler, Markus Bick, and Charlotte S. Bonorden. 2015. Cryptocurrencies as a disruption? Empirical findings on user adoption and future potential of bitcoin and co. In *Conference on e-Business, e-Services and e-Society*. Berlin and Heidelberg: Springer, pp. 63–80.
Baur, Dirk G., Kihoon Hong, and Adrian D. Lee. 2018. Bitcoin: Medium of exchange or speculative assets? *Journal of International Financial Markets, Institutions and Money* 54: 177–89.
Bech, Morten L., and Rodney Garratt. 2017. *Central Bank Cryptocurrencies*. BIS Quarterly Review. Basel: BIS, September.
Begušić, Stjepan, Zvonko Kostanjčar, H. Eugene Stanley, and Boris Podobnik. 2018. Scaling properties of extreme price fluctuations in bitcoin markets. *Physica A: Statistical Mechanics and its Applications* 510: 400–6.
Blundell-Wignall, Adrian. 2014. The bitcoin question. In *OECD Working Papers on Finance, Insurance and Private Pensions*. Paris: OECD.
Böhme, Rainer, Nicolas Christin, Benjamin Edelman, and Tyler Moore. 2015. Bitcoin: Economics, technology, and governance. *Journal of Economic Perspectives* 29: 213–38.
Bouri, Elie, Luis A. Gil-Alana, Rangan Gupta, and David Roubaud. 2019. Modelling long memory volatility in the bitcoin market: Evidence of persistence and structural breaks. *International Journal of Finance & Economics* 24: 412–26.
Brauneis, Alexander, and Roland Mestel. 2018. Price discovery of cryptocurrencies: Bitcoin and beyond. *Economics Letters* 165: 58–61.
Brito, Jerry, Houman Shadab, and Andrea Castillo. 2014. Bitcoin financial regulation: Securities, derivatives, prediction markets, and gambling. *Columbia Science and Technology Law Review* 16: 144.
Broock, William A., José Alexandre Scheinkman, W. Davis Dechert, and Blake LeBaron. 1996. A test for independence based on the correlation dimension. *Econometric Reviews* 15: 197–235.
Caginalp, Carey, and Gunduz Caginalp. 2018. Opinion: Valuation, liquidity price, and stability of cryptocurrencies. *Proceedings of the National Academy of Sciences* 115: 1131–34.
Carrick, Jon. 2016. Bitcoin as a complement to emerging market currencies. *Emerging Markets Finance and Trade* 52: 2321–34.
Choi, In. 1999. Testing the random walk hypothesis for real exchange rates. *Journal of Applied Econometrics* 14: 293–308.
Chu, Jeffrey, Saralees Nadarajah, and Stephen Chan. 2015. Statistical analysis of the exchange rate of bitcoin. *PLoS ONE* 10: e0133678.
Corbet, Shaen, Andrew Meegan, Charles Larkin, Brian Lucey, and Larisa Yarovaya. 2018. Exploring the dynamic relationships between cryptocurrencies and other financial assets. *Economics Letters* 165: 28–34.
Croman, Kyle, Christian Decker, Ittay Eyal, Adem Efe Gencer, Ari Juels, Ahmed Kosba, Andrew Miller, Prateek Saxena, Elaine Shi, Emin Gün Sirer, and et al. 2016. On scaling decentralized blockchains. In *International Conference on Financial Cryptography and Data Security*. Berlin and Heidelberg: Springer, pp. 106–25.
Decker, Christian, and Roger Wattenhofer. 2015. A fast and scalable payment network with bitcoin duplex micropayment channels. In *Symposium on Self-Stabilizing Systems*. Berlin and Heidelberg: Springer, pp. 3–18.
Dierksmeier, Claus, and Peter Seele. 2018. Cryptocurrencies and business ethics. *Journal of Business Ethics* 152: 1–14.
Dimpfl, Thomas, and Franziska J. Peter. 2019. Group transfer entropy with an application to cryptocurrencies. *Physica A: Statistical Mechanics and its Applications* 516: 543–51.
Dolado, Juan J., and Helmut Lütkepohl. 1996. Making Wald tests work for cointegrated VAR systems. *Econometric Reviews* 15: 369–86.
Domínguez, Manuel A., and Ignacio N. Lobato. 2003. Testing the martingale difference hypothesis. *Econometric Reviews* 22: 351–77.
Drożdż, Stanisław, Robert Gębarowski, Ludovico Minati, Paweł Oświęcimka, and Marcin Wątorek. 2018. Bitcoin market route to maturity? Evidence from return fluctuations, temporal correlations and multiscaling effects. *Chaos: An Interdisciplinary Journal of Nonlinear Science* 28: 071101.
Dwyer, Gerald P. 2015. The economics of bitcoin and similar private digital currencies. *Journal of Financial Stability* 17: 81–91.
Dyhrberg, Anne H., Sean Foley, and Jiri Svec. 2018. How investible is bitcoin? Analyzing the liquidity and transaction costs of bitcoin markets. *Economics Letters* 171: 140–43.
Escanciano, J. Carlos, and Ignacio N. Lobato. 2009. An automatic portmanteau test for serial correlation. *Journal of Econometrics* 151: 140–49.
Fama, Eugene F. 1970. Efficient capital markets: A review of theory and empirical work. *The Journal of Finance* 25: 383–417.
Flori, Andrea. 2019a. Cryptocurrencies in finance: Review and applications. *International Journal of Theoretical and Applied Finance* 22: 1950020.
Flori, Andrea. 2019b. News and subjective beliefs: A Bayesian approach to bitcoin investments. *Research in International Business and Finance* 50: 336–56.
Flori, Andrea, Simone Giansante, Claudia Girardone, and Fabio Pammolli. 2019. Banks' business strategies on the edge of distress. *Annals of Operations Research*, 1–50.
Fry, John. 2018. Booms, busts and heavy-tails: The story of bitcoin and cryptocurrency markets? *Economics Letters* 171: 225–29.
Garcia, David, Claudio J. Tessone, Pavlin Mavrodiev, and Nicolas Perony. 2014. The digital traces of bubbles: Feedback cycles between socio-economic signals in the bitcoin economy. *Journal of the Royal Society Interface* 11: 20140623.
Gomber, Peter, Jascha-Alexander Koch, and Michael Siering. 2017. Digital finance and fintech: Current research and future research directions. *Journal of Business Economics* 87: 537–80.
Guo, Yuwei, Jinfeng Tong, and Chen Feng. 2019. A measurement study of bitcoin lightning network. Paper presented at the 2019 IEEE International Conference on Blockchain (Blockchain), Atlanta, GA, USA, July 14–17; pp. 202–11.
Hong, KiHoon. 2017. Bitcoin as an alternative investment vehicle. *Information Technology and Management* 18: 265–75.
Jiang, Yonghong, He Nie, and Weihua Ruan. 2018. Time-varying long-term memory in bitcoin market. *Finance Research Letters* 25: 280–84.
Katsiampa, Paraskevi. 2019. Volatility co-movement between bitcoin and ether. *Finance Research Letters* 30: 221–27.
Khan, Nida, and Radu State. 2019. Lightning network: A comparative review of transaction fees and data analysis. In *International Congress on Blockchain and Applications*. Berlin and Heidelberg: Springer, pp. 11–18.
Kim, Jae H. 2009. Automatic variance ratio test under conditional heteroskedasticity. *Finance Research Letters* 6: 179–85.
Kristoufek, Ladislav. 2018. On bitcoin markets (in)efficiency and its evolution. *Physica A: Statistical Mechanics and its Applications* 503: 257–62.
Kumhof, Michael, and Clare Noone. 2018. *Central Bank Digital Currencies-Design Principles and Balance Sheet Implications*. London: Bank of England.
Latora, Vito, and Massimo Marchiori. 2001. Efficient behavior of small-world networks. *Physical Review Letters* 87: 198701.
Latora, Vito, and Massimo Marchiori. 2003. Economic small-world behavior in weighted networks. *The European Physical Journal B-Condensed Matter and Complex Systems* 32: 249–63.
Lee, Timothy. 2018. Bitcoin's Transaction Fee Crisis is Over-For Now. Available online: https://arstechnica.com/tech-policy/2018/02/bitcoins-transaction-fee-crisis-is-over-for-now/ (accessed on 5 April 2019).
Available online: https://arstechnica.com/](https://arstechnica.com/tech-policy/2018/02/bitcoins-transaction-fee-crisis-is-over-for-now/) [tech-policy/2018/02/bitcoins-transaction-fee-crisis-is-over-for-now/ (accessed on 5 April 2019).](https://arstechnica.com/tech-policy/2018/02/bitcoins-transaction-fee-crisis-is-over-for-now/) Lo, Andrew W., and A. Craig MacKinlay. 1988. Stock market prices do not follow random walks: Evidence from a simple specification test. *The Review of Financial Studies* [1: 41–66. [CrossRef]](http://dx.doi.org/10.1093/rfs/1.1.41) Lütkepohl, Helmut. 2005. *New Introduction to Multiple Time Series Analysis* . Berlin and Heidelberg: Springer. Martinazzi, Stefano, and Andrea Flori. 2020. The evolving topology of the lightning network: Centralization, efficiency, robustness, synchronization, and anonymity. *PLoS ONE* [15: e0225966. [CrossRef] [PubMed]](http://dx.doi.org/10.1371/journal.pone.0225966) Miller, Andrew, Iddo Bentov, Surya Bakshi, Ranjit Kumaresan, and Patrick McCorry. 2019. Sprites and state channels: Payment networks that go faster than lightning. In *International Conference on Financial Cryptography* *and Data Security* . Berlin and Heidelberg: Springer, pp. 508–26. Nadarajah, Saralees, and Jeffrey Chu. 2017. On the inefficiency of bitcoin. *Economics Letters* [150: 6–9. [CrossRef]](http://dx.doi.org/10.1016/j.econlet.2016.10.033) Noldus, Rogier, and Piet Van Mieghem. 2015. Assortativity in complex networks. *Journal of Complex Networks* [3: 507–42. [CrossRef]](http://dx.doi.org/10.1093/comnet/cnv005) Nowostawski, Mariusz, and Jardar Tøn. 2019. Evaluating methods for the identification of off-chain transactions in the lightning network. *Applied Sciences* [9: 2519. [CrossRef]](http://dx.doi.org/10.3390/app9122519) Peng, C.-K., Sergey V. Buldyrev, Shlomo Havlin, Michael Simons, H. Eugene Stanley, and Ary L Goldberger. 1994. Mosaic organization of dna nucleotides. *Physical Review e* [49: 1685. [CrossRef]](http://dx.doi.org/10.1103/PhysRevE.49.1685) Peng, C.-K., Shlomo Havlin, H. Eugene Stanley, and Ary L. Goldberger. 1995. Quantification of scaling exponents and crossover phenomena in nonstationary heartbeat time series. *Chaos: An Interdisciplinary Journal of Nonlinear* *Science* [5: 82–87. [CrossRef]](http://dx.doi.org/10.1063/1.166141) Phillip, Andrew, Jennifer Chan, and Shelton Peiris. 2019. On long memory effects in the volatility measure of cryptocurrencies. *Finance Research Letters* [28: 95–100. [CrossRef]](http://dx.doi.org/10.1016/j.frl.2018.04.003) Phillip, Andrew, Jennifer SK Chan, and Shelton Peiris. 2018. A new look at cryptocurrencies. *Economics Letters* [163: 6–9. [CrossRef]](http://dx.doi.org/10.1016/j.econlet.2017.11.020) Pieters, Gina, and Sofia Vivanco. 2017. Financial regulations and price inconsistencies across bitcoin markets. *Information Economics and Policy* [39: 1–14. [CrossRef]](http://dx.doi.org/10.1016/j.infoecopol.2017.02.002) Polasik, Michal, Anna Iwona Piotrowska, Tomasz Piotr Wisniewski, Radoslaw Kotkowski, and Geoffrey Lightfoot. 2015. Price fluctuations and the use of bitcoin: An empirical inquiry. *International Journal of* *Electronic Commerce* [20: 9–49. [CrossRef]](http://dx.doi.org/10.1080/10864415.2016.1061413) Poon, Joseph, and Thaddeus Dryja. 2016. The Bitcoin Lightning Network: Scalable Off-Chain Instant Payments. 
[Available online: https://scholar.google.com/scholar?q=The+bitcoin+lightning+network:+Scalable+off-chain+](https://scholar.google.com/scholar?q=The+bitcoin+lightning+network:+Scalable+off-chain+instant+payments&hl=zh-CN&as_sdt=0&as_vis=1&oi=scholart) [instant+payments&hl=zh-CN&as_sdt=0&as_vis=1&oi=scholart (accessed on 1 October 2020).](https://scholar.google.com/scholar?q=The+bitcoin+lightning+network:+Scalable+off-chain+instant+payments&hl=zh-CN&as_sdt=0&as_vis=1&oi=scholart) Puliga, Michelangelo, Andrea Flori, Giuseppe Pappalardo, Alessandro Chessa, and Fabio Pammolli. 2016. The accounting network: How financial institutions react to systemic crisis. *PLoS ONE* [11: e0162855. [CrossRef]](http://dx.doi.org/10.1371/journal.pone.0162855) [[PubMed]](http://www.ncbi.nlm.nih.gov/pubmed/27736865) Raimundo Júnior, Gerson de Souza, Rafael Baptista Palazzi, Ricardo de Souza Tavares, and Marcelo Cabus Klotzle. 2020. Market stress and herding: A new approach to the cryptocurrency market. *Journal of Behavioral Finance* [1–15. [CrossRef]](http://dx.doi.org/10.1080/15427560) Selgin, George. 2015. Synthetic commodity money. *Journal of Financial Stability* [17: 92–99. [CrossRef]](http://dx.doi.org/10.1016/j.jfs.2014.07.002) Spelta, Alessandro, Andrea Flori, and Fabio Pammolli. 2018. Investment communities: Behavioral attitudes and economic dynamics. *Social Networks* [55: 170–88. [CrossRef]](http://dx.doi.org/10.1016/j.socnet.2018.07.004) Spelta, Alessandro, Andrea Flori, Nicolò Pecora, Sergey Buldyrev, and Fabio Pammolli. 2020. A behavioral approach to instability pathways in financial markets. *Nature Communications* [11: 1–9. [CrossRef]](http://dx.doi.org/10.1038/s41467-020-15356-z) Takaishi, Tetsuya. 2018. Statistical properties and multifractality of bitcoin. *Physica A: Statistical Mechanics and Its* *Applications* [506: 507–19. [CrossRef]](http://dx.doi.org/10.1016/j.physa.2018.04.046) Tiwari, Aviral Kumar, Rabin K. Jana, Debojyoti Das, and David Roubaud. 2018. Informational efficiency of bitcoin—An extension. *Economics Letters* [163: 106–9. [CrossRef]](http://dx.doi.org/10.1016/j.econlet.2017.12.006) Toda, Hiro Y., and Taku Yamamoto. 1995. Statistical inference in vector autoregressions with possibly integrated processes. *Journal of econometrics* [66: 225–50. [CrossRef]](http://dx.doi.org/10.1016/0304-4076(94)01616-8) ----- *Risks* **2020**, *8*, 129 18 of 18 Urquhart, Andrew. 2016. The inefficiency of bitcoin. *Economics Letters* [148: 80–82. [CrossRef]](http://dx.doi.org/10.1016/j.econlet.2016.09.019) Urquhart, Andrew. 2018. What causes the attention of bitcoin? *Economics Letters* [166: 40–44. [CrossRef]](http://dx.doi.org/10.1016/j.econlet.2018.02.017) Vidal-Tomás, David, Ana M. Ibáñez, and José E. Farinós. 2019. Herding in the cryptocurrency market: Cssd and csad approaches. *Finance Research Letters* [30: 181–86. [CrossRef]](http://dx.doi.org/10.1016/j.frl.2018.09.008) Wald, Abraham, and Jacob Wolfowitz. 1940. On a test whether two samples are from the same population. *The Annals of Mathematical Statistics* [11: 147–62. [CrossRef]](http://dx.doi.org/10.1214/aoms/1177731909) Weber, Beat. 2016. Bitcoin and the legitimacy crisis of money. *Cambridge Journal of Economics* [40: 17–41. [CrossRef]](http://dx.doi.org/10.1093/cje/beu067) Yermack, David. 2015. Is bitcoin a real currency? an economic appraisal. In *Handbook of Digital Currency* . Amsterdam: Elsevier, pp. 31–43. Yermack, David. 2017. Corporate governance and blockchains. *Review of Finance* [21: 7–31. 
[CrossRef]](http://dx.doi.org/10.1093/rof/rfw074) Zhang, Wei, Pengfei Wang, Xiao Li, and Dehua Shen. 2018. Some stylized facts of the cryptocurrency market. *Applied Economics* [50: 5950–65. [CrossRef]](http://dx.doi.org/10.1080/00036846.2018.1488076) **Publisher’s Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations. *⃝* c 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution [(CC BY) license (http://creativecommons.org/licenses/by/4.0/).](http://creativecommons.org/licenses/by/4.0/.) -----
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/risks8040129?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/risks8040129, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/2227-9091/8/4/129/pdf?version=1606819882" }
2020
[]
true
2020-12-01T00:00:00
[ { "paperId": "1040d65f3ecc9411dcb3fece96955591738541f0", "title": "Are Cryptocurrencies Becoming More Interconnected?" }, { "paperId": "4624c23d95c89a94f492f5f7857b3157fab4e1f7", "title": "Market Stress and Herding: A New Approach to the Cryptocurrency Market" }, { "paperId": "dd163c5da0fcd3e894ce6f4aee58406aa8ac84e7", "title": "A behavioral approach to instability pathways in financial markets" }, { "paperId": "b325f5b476e732e9b2891854729d0344d86663e7", "title": "The evolving topology of the Lightning Network: Centralization, efficiency, robustness, synchronization, and anonymity" }, { "paperId": "2963b3af286edc278810cb0e155594e56a242237", "title": "News and subjective beliefs: A Bayesian approach to Bitcoin investments" }, { "paperId": "378034a3b91f7a02f63ff02b895a98a3c66bc5f8", "title": "Banks’ business strategies on the edge of distress" }, { "paperId": "0b566dcb72ce48beb7671ef1c68b9fae80331086", "title": "CRYPTOCURRENCIES IN FINANCE: REVIEW AND APPLICATIONS" }, { "paperId": "9ed6d8769278533c29548bd904ed60b9db84ada8", "title": "Volatility co-movement between Bitcoin and Ether" }, { "paperId": "70927b9d6f3e4fd45409e4f75d7cc6bc04f7a73c", "title": "Herding in the cryptocurrency market: CSSD and CSAD approaches" }, { "paperId": "bbd599222a309597b76adde8c57e13bfd89e351e", "title": "A Measurement Study of Bitcoin Lightning Network" }, { "paperId": "4486db64821c02e1cc7ed42a66f2c0b327720b8b", "title": "Lightning Network: A Comparative Review of Transaction Fees and Data Analysis" }, { "paperId": "b3fd679536441bf9862618b47dab03b372c9e24f", "title": "Evaluating Methods for the Identification of Off-Chain Transactions in the Lightning Network" }, { "paperId": "59bd48405841599a7ef08adbe19a1ef8a506c620", "title": "On long memory effects in the volatility measure of Cryptocurrencies" }, { "paperId": "ba1baef2cfa82721aa0f5c77af618e0b14d4e6a4", "title": "Group transfer entropy with an application to cryptocurrencies" }, { "paperId": "217895bb50976672fd9004ee9e3b22ce6f0e1e31", "title": "Efficiency, multifractality, and the long-memory property of the Bitcoin market: A comparative analysis with stock, currency, and gold markets" }, { "paperId": "6ea1864c4299ddee3189c2bdfeb924b283603402", "title": "Modelling long memory volatility in the Bitcoin market: Evidence of persistence and structural breaks" }, { "paperId": "3495ba63e5d1d84d2dc7670744cce75d4e005959", "title": "How investible is Bitcoin? Analyzing the liquidity and transaction costs of Bitcoin markets" }, { "paperId": "1dbc8eae05a88fd93b2535c49697bbe0012d49da", "title": "Investment communities: Behavioral attitudes and economic dynamics" }, { "paperId": "e538f221a3ff45df561dcef25ebed552d8a8e39a", "title": "Booms, busts and heavy-tails: The story of Bitcoin and cryptocurrency markets?" }, { "paperId": "b61d0259f2c02c945f2946679f0347e36285fc70", "title": "On Bitcoin markets (in)efficiency and its evolution" }, { "paperId": "36b0aa4a5c7cb9372337cf23ac02f3725806a279", "title": "Some stylized facts of the cryptocurrency market" }, { "paperId": "665efab907e67c5a7ed457c1f5ad70bc0c030717", "title": "Central Bank Digital Currencies - Design Principles and Balance Sheet Implications" }, { "paperId": "d69c961cfa53407076c0b590479f8cc6fbcd5ac4", "title": "Bitcoin market route to maturity? 
Evidence from return fluctuations, temporal correlations and multiscaling effects" }, { "paperId": "1d0ff715d8d15d48b7c577fef1bb6fd496b5667a", "title": "Price discovery of cryptocurrencies: Bitcoin and beyond" }, { "paperId": "7e9db8c685e356957e58a5bef854340e3b0c41ce", "title": "Scaling properties of extreme price fluctuations in Bitcoin markets" }, { "paperId": "1046914c37810cf8784280eb000633dbbc266ee4", "title": "Informational efficiency of Bitcoin—An extension" }, { "paperId": "8c839b711a837cc36aea2e832b7f06d7a3155785", "title": "A new look at Cryptocurrencies" }, { "paperId": "747785ad0cf34f4f8abe99438b47474afa8a49f2", "title": "Long-range correlations and asymmetry in the Bitcoin market" }, { "paperId": "ff1b336d2132028bcf88ec2b290b3ee9e2468ce9", "title": "What Causes the Attention of Bitcoin?" }, { "paperId": "569fd1797fe6e29ac0876d1e4d4e08ef8bf9b0c9", "title": "Time-varying long-term memory in Bitcoin market" }, { "paperId": "101c8b6cc9f2bf69bdf7ddb4bfeba4b98540d0cc", "title": "Bitcoin as an alternative investment vehicle" }, { "paperId": "3b539f3d38dab6a25aea93f918ea1d0d8dac6c05", "title": "Exploring the Dynamic Relationships between Cryptocurrencies and Other Financial Assets" }, { "paperId": "95433513a30d7703fab53ed2a371004970d249ef", "title": "Opinion: Valuation, liquidity price, and stability of cryptocurrencies" }, { "paperId": "719b219f9b5e2de92b2014b11289fd9d388fb046", "title": "The Inefficiency of Bitcoin Revisited: A Dynamic Approach" }, { "paperId": "8dad18e3b6b03bc3f7bb39ecdcb830329f2687d1", "title": "Central Bank Cryptocurrencies" }, { "paperId": "88b94e0474e244d855b8f6be70b797a8fe90ac88", "title": "Statistical properties and multifractality of Bitcoin" }, { "paperId": "572271b7f4d35459aaa4d8d84c3b26c9f2380765", "title": "Some Stylized Facts of the Bitcoin Market" }, { "paperId": "4dce5b72e1f205e11dcdcc8db68ea1cb9a68bbc5", "title": "Sprites and State Channels: Payment Networks that Go Faster Than Lightning" }, { "paperId": "f78cc892779c605709904fd95cc0326f017340d5", "title": "Digital Finance and FinTech: current research and future research directions" }, { "paperId": "b74764ad73ab4c9f1399e007b92492ffbe94165f", "title": "Financial Regulations and Price Inconsistencies across Bitcoin Markets" }, { "paperId": "99d1e84f089518c9f74a9b4487a95caa794fde9c", "title": "The Inefficiency of Bitcoin" }, { "paperId": "946dff638c0d7b6eba37aaafce84cb1fff13ac01", "title": "Cryptocurrencies and Business Ethics" }, { "paperId": "065acc1a4d1abc808e75373101da9564afbc5744", "title": "Bitcoin as a Complement to Emerging Market Currencies" }, { "paperId": "2cba4a62e779d7d14d06a53291dfcf61779b043d", "title": "The Accounting Network: How Financial Institutions React to Systemic Crisis" }, { "paperId": "00f17247557be49e3411fdbc69f6258d3e33b8ea", "title": "Assortativity in complex networks" }, { "paperId": "7b46b14147af27db0bf3e2dd832b20ed0e27a692", "title": "Corporate Governance and Blockchains" }, { "paperId": "46ac66712dcde20dd6d0459a23b274fdea2c63d1", "title": "Cryptocurrencies as a Disruption? 
Empirical Findings on User Adoption and Future Potential of Bitcoin and Co" }, { "paperId": "685a0a1a54cb732c466cbae58ea174211297ac04", "title": "Price Fluctuations and the Use of Bitcoin: An Empirical Inquiry" }, { "paperId": "51b27a41ca1a33445a1041fcea84341fcf0b8c4c", "title": "A Fast and Scalable Payment Network with Bitcoin Duplex Micropayment Channels" }, { "paperId": "d0bd4e83eef71cd09b29594f900df94381d1beb7", "title": "Statistical Analysis of the Exchange Rate of Bitcoin" }, { "paperId": "9d17865be2471788f85d240774d8d1289387880f", "title": "Bitcoin: Medium of Exchange or Speculative Assets?" }, { "paperId": "7bbe97a6ca3ab201e0be6d6d3313a72e96e909e0", "title": "Bitcoin Financial Regulation: Securities, Derivatives, Prediction Markets, and Gambling" }, { "paperId": "c1470f65727a347257053e50afbc0f1b61f6285d", "title": "Bitcoins as an investment or speculative vehicle? A first look" }, { "paperId": "6b25206e4420bb2380f3d8afd9e61fc484355d37", "title": "The Ethics of Payments: Paper, Plastic, or Bitcoin?" }, { "paperId": "7328d31f9b5078658718ada22e94088c5a094291", "title": "The digital traces of bubbles: feedback cycles between socio-economic signals in the Bitcoin economy" }, { "paperId": "7b8de1a148d61005e92d2cc0de745674fb6faf2f", "title": "Bitcoin: Economics, Technology, and Governance" }, { "paperId": "c98935d62549be84c71e9d6c6501bb3446480d07", "title": "The Economics of Bitcoin and Similar Private Digital Currencies" }, { "paperId": "114d480c4b6ab827b67e4ab7c4bce874662b8d08", "title": "Is Bitcoin a Real Currency? An Economic Appraisal" }, { "paperId": "aa4bac5be6fd81c0487e1693f2dc0483a15b5ed4", "title": "Bitcoin and the Legitimacy Crisis of Money" }, { "paperId": "2aee438bb3c1eba0fe7f78aad022d5ffc1f0a832", "title": "Synthetic Commodity Money" }, { "paperId": "7058f8909fda4825e9ff8c45b7a0499d9d896eb2", "title": "Bitter to Better - How to Make Bitcoin a Better Currency" }, { "paperId": "e1f0b7eaddefcdce832818d469db127e13c10786", "title": "Automatic variance ratio test under conditional heteroskedasticity" }, { "paperId": "180920d8b86db0b3fd9130fae2d4fb934c9ba477", "title": "An Automatic Portmanteau Test for Serial Correlation" }, { "paperId": "df59768408a3a63ae4c62a02865641863c7510ac", "title": "New Introduction to Multiple Time Series Analysis" }, { "paperId": "978933e7f31a615d475883317791e905a967fc0b", "title": "Testing the Martingale Difference Hypothesis" }, { "paperId": "10512503084161d6c6e1fe8f325f73c6f9b0632d", "title": "Economic small-world behavior in weighted networks" }, { "paperId": "f45dce9b51785e2daa9939c1bd00febf1bd787d0", "title": "Efficient behavior of small-world networks." }, { "paperId": "43bce23bbd472c8061aa485d1a9b1e88fcc0f26f", "title": "Testing the random walk hypothesis for real exchange rates" }, { "paperId": "db2fdcd554afa060f7539ee5c1a5d299ca9e2cef", "title": "Quantification of scaling exponents and crossover phenomena in nonstationary heartbeat time series" }, { "paperId": "bd3a6b06537db6e66aaa2e345ab3af7f307640ad", "title": "Statistical inference in vector autoregressions with possibly integrated processes" }, { "paperId": "5ecd9a86ce844961ef665bb679d7234ad7997e08", "title": "Mosaic organization of DNA nucleotides." 
}, { "paperId": "ee5daed721de8400592d88c00b6e333c7cacdb9a", "title": "Stock Market Prices Do Not Follow Random Walks: Evidence from a Simple Specification Test" }, { "paperId": "5f2170e8261aeeab45a82eab2d515106bc391511", "title": "The Rank Version of von Neumann's Ratio Test for Randomness" }, { "paperId": "82d172c2cec10eeb036a7dbee147ec34b87f5038", "title": "On a Test Whether Two Samples are from the Same Population" }, { "paperId": null, "title": "Bitcoin’s Transaction Fee Crisis is Over-For Now" }, { "paperId": "c5bb74650eebdfdd8fcc52129a30afb49af4261c", "title": "On the inefficiency of Bitcoin" }, { "paperId": null, "title": "The Bitcoin Lightning Network : Scalable Off - Chain Instant Payments" }, { "paperId": null, "title": "On scaling decentralized blockchains" }, { "paperId": null, "title": "The bitcoin question" }, { "paperId": "96c55a3cc285896ba232bb418b45930d1d9c79d7", "title": "Some Stylized Facts" }, { "paperId": "8df324b8ef143204150550bc783ca1e14312dd32", "title": "A test for independence based on the correlation dimension" }, { "paperId": "61f8dbbbaf0960c9c902f27e4227b1745aca8689", "title": "Making Wald Tests Work for Cointegrated Var Systems" }, { "paperId": null, "title": "Statistical Mechanics and its Applications 516: 543–51" }, { "paperId": "5952458a5eea85791dc1933f3a26d214a86207e4", "title": "Quantification of scaling exponents and crossover phenomena in nonstationary heartbeat time series." }, { "paperId": "1f3b3ac7dc10701b972f6bba5db3e8ed5046a1d4", "title": "Efficient Capital Markets : A Review of Theory and Empirical Work" } ]
19,185
en
[ { "category": "Mathematics", "source": "external" }, { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Mathematics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01a11a004648bfa588d0c9cf37ef0cc767372893
[ "Mathematics", "Computer Science" ]
0.800276
Solving polynomial systems over finite fields: improved analysis of the hybrid approach
01a11a004648bfa588d0c9cf37ef0cc767372893
International Symposium on Symbolic and Algebraic Computation
[ { "authorId": "1776877", "name": "L. Bettale" }, { "authorId": "145238789", "name": "J. Faugère" }, { "authorId": "144148617", "name": "Ludovic Perret" } ]
{ "alternate_issns": null, "alternate_names": [ "ISSAC", "Int Symp Symb Algebraic Comput" ], "alternate_urls": null, "id": "75f4667b-00a3-4ba6-a3a5-700e8414c0dd", "issn": null, "name": "International Symposium on Symbolic and Algebraic Computation", "type": "conference", "url": "https://en.wikipedia.org/wiki/International_Symposium_on_Symbolic_and_Algebraic_Computation" }
null
# Solving Polynomial Systems over Finite Fields: Improved Analysis of the Hybrid Approach

ISSAC 2012 – 37th International Symposium on Symbolic and Algebraic Computation, Jul 2012, Grenoble, France, pp. 67–74. doi: 10.1145/2442829.2442843. HAL: hal-00776070, https://inria.hal.science/hal-00776070.

##### Luk Bettale*
Oberthur Technologies, 71-73 rue des Hautes Pâtures, 92726 Nanterre Cedex, France. l.bettale@oberthur.com

##### Jean-Charles Faugère
INRIA Paris-Rocquencourt Center, PolSys Project; UPMC, Univ Paris 06, LIP6; CNRS, UMR 7606, LIP6; UFR Ingénierie 919, LIP6 Case 169, 4 Place Jussieu, F-75252 Paris. Jean-Charles.Faugere@inria.fr

##### Ludovic Perret
INRIA Paris-Rocquencourt Center, PolSys Project; UPMC, Univ Paris 06, LIP6; CNRS, UMR 7606, LIP6; UFR Ingénierie 919, LIP6 Case 169, 4 Place Jussieu, F-75252 Paris. Ludovic.Perret@lip6.fr

##### ABSTRACT
The Polynomial System Solving (PoSSo) problem is a fundamental NP-Hard problem in computer algebra. Among others, PoSSo has applications in areas such as coding theory and cryptology. Typically, the security of multivariate public-key schemes (MPKC) such as the UOV cryptosystem of Kipnis, Shamir and Patarin is directly related to the hardness of PoSSo over finite fields. The goal of this paper is to further understand the influence of finite fields on the hardness of PoSSo. To this end, we consider the so-called hybrid approach. This is a polynomial system solving method dedicated to finite fields proposed by Bettale, Faugère and Perret (Journal of Mathematical Cryptology, 2009). The idea is to combine exhaustive search with Gröbner bases. The efficiency of the hybrid approach is related to the choice of a trade-off between the two methods. We propose here an improved complexity analysis dedicated to quadratic systems. Whilst the principle of the hybrid approach is simple, its careful analysis leads to rather surprising and somehow unexpected results. We prove that the optimal trade-off (i.e. the number of variables to be fixed) allowing to minimize the complexity is achieved by fixing a number of variables proportional to the number of variables of the system considered, denoted n. Under some natural algebraic assumption, we show that the asymptotic complexity of the hybrid approach is 2^{(3.31 − 3.62 log_2(q)^{−1}) n}, where q is the size of the field (under the condition in particular that log(q) ≪ n). This is, to date, the best complexity for solving PoSSo over finite fields (when q > 2).
We have been able to quantify the gain provided by the hybrid approach compared to a direct Gröbner basis method. For quadratic systems, we show (assuming a natural algebraic assumption) that this gain is exponential in the number of variables. Asymptotically, the gain is 2^{1.49 n} when both n and q grow to infinity and log(q) ≪ n.

_* This work was carried out when this author was a PhD student at UPMC/INRIA/LIP6._

##### 1. INTRODUCTION
The purpose of this paper is to study the complexity of solving the Polynomial System Solving (PoSSo) problem over finite fields. This problem, which will be denoted by PoSSo_q, is as follows:

**Polynomial System Solving over Finite Fields (PoSSo_q)**
Let q = p^k, where p is prime and k > 0.
**Input:** f_1(x_1,..., x_n),..., f_m(x_1,..., x_n) ∈ F_q[x_1,..., x_n].
**Goal:** find a vector (z_1,..., z_n) ∈ F_q^n such that:
f_1(z_1,..., z_n) = ··· = f_m(z_1,..., z_n) = 0.

PoSSo_q typically arises in areas such as cryptography and coding theory (but is not limited to them). In cryptology, the hardness of PoSSo_q is now a subject of major interest, e.g. [30, 23, 24, 16, 18, 14, 17, 25, 1, 29, 15, 34, 36, 21]. On the one hand, this problem is used as a trapdoor to design many cryptographic primitives, mostly in multivariate cryptography [32, 33, 37]. On the other hand, the security of many cryptosystems reduces through algebraic attacks [3, 23, 35] to PoSSo_q.

From a complexity-theoretical point of view, PoSSo_q is NP-Hard independently of the size q [28]. Thus, any algorithm for PoSSo_q should be exponential in the worst case. However, this does not exclude that large families of PoSSo_q instances can be solved in subexponential or polynomial complexity. In addition, the exact exponent occurring in algorithms of exponential complexity is often a critical question in applications. The general question we want to address here is how much the restriction to finite fields influences the hardness of PoSSo.

**Hybrid Approach.** In [9], we have described a rather simple Gröbner-basis based method taking advantage of the finite field structure: the so-called hybrid approach. The idea is to mix exhaustive search and Gröbner basis [11, 13, 12] computation. In what follows, hybrid approach will always refer to the Gröbner-basis based method described in [9]. The principle of this approach is to fix k – which is a parameter – among the n variables of the system considered, and then to compute q^k Gröbner bases of smaller systems to recover the set of solutions. The efficiency of the hybrid approach depends upon a proper choice of the trade-off k between the number of variables to be fixed and the cost of computing a Gröbner basis of the smaller sub-systems. At first glance, it is not even clear that a non-trivial trade-off exists (i.e. whether k ≠ 0).
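To make the principle concrete, the following minimal Python sketch (our illustration, not the implementation of [9]) enumerates the q^k guesses for the k fixed variables and decides, for each guess, whether the specialized system is consistent over F_q. sympy's groebner() is assumed as a stand-in for F4/F5, q is assumed prime, and the function name hybrid_solve is ours; the field equations x^q − x are added so that a reduced basis equal to {1} certifies that a branch has no F_q-rational solution.

```
# Hybrid approach sketch over GF(q), q prime; sympy's groebner() stands in
# for F4/F5 (illustration only, not the authors' implementation).
import itertools
from sympy import symbols, groebner

def hybrid_solve(polys, gens, q, k):
    """Fix the last k variables to all q**k values; keep the guesses whose
    specialized system is consistent over GF(q)."""
    kept, fixed = gens[:-k], gens[-k:]
    branches = []
    for guess in itertools.product(range(q), repeat=k):
        sub = [p.subs(dict(zip(fixed, guess))) for p in polys]
        sub += [x**q - x for x in kept]   # restrict to GF(q)-rational points
        gb = groebner(sub, *kept, modulus=q, order='grevlex')
        if 1 not in gb.exprs:             # reduced basis {1} <=> no solution
            branches.append((guess, gb))
    return branches

x1, x2, x3 = symbols('x1 x2 x3')
F = [x1**2 + x2*x3 + 1, x2**2 + x1 + x3, x1*x3 + x2 + 2]
print(hybrid_solve(F, (x1, x2, x3), q=3, k=1))   # one consistent branch: x3 = 2
```

A change of ordering (e.g. FGLM, recalled in Section 2.1) on each surviving branch would then extract the actual solutions.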
As an application, we have shown that the parameters of many multivariate schemes (which are directly based on the hardness of PoSSoq) must be refined to achieve a cryptographic security level (i.e. > 2[80] operations). For instance, the hybrid approach has been used to attack previously recommended parameters of the UOV scheme [29] (for instance, [9][Table 4, first row] in a complexity as small as 2[37][.][75][�]. Remark that experiments performed in [9] suggest that the optimal trade-off seems to be achieved for a small and constant value of k. We show in this paper that this intuition is actually false. We mention that [9] also laid the foundation for a theoretical analysis of the hybrid approach. It has been shown that the hybrid approach is beneficial (i.e. a non-trivial trade-off exists) if q is less than 2[0][.][62] _[ω][ n], where ω,_ 2 ⩽ _ω ⩽_ 3 is the linear algebra constant. **Related Works. The complexity of solving solving binary qua-** dratic equations has been more particularly investigated in [38, 39, 7]. The authors of [38] proposed an heuristic method – based on the so-called XL [31] algorithm – of complexity O �2[0][.][875] _[n][�]_ for solving PoSSo2 (with quadratic equations). They propose to combine exhaustive search with XL. This is the so-called FXL. As pointed in [2] XL can be viewed as a sub-optimal version of F4 [19] (and consequently, FXL is a sub-optimal version of the hybrid approach). In addition, the exact assumptions that have to be verified by the input systems are unclear. Also, similar results have been announced in [39][Section 2.2], but there analysis relies on algorithmic assumptions (e.g., row echelon form of sparse matrices in quadratic complexity) that are not known to hold currently. Under these assumptions, the authors show that the most favorable trade-off between exhaustive search and row echelon form computations in the FXL algorithm is obtained by specializing 0.45 _n variables (for q = 2)._ Recently, [7] used an hybrid approach – and additional techniques – to further improve the solving of quadratic binary systems. The authors of [7] proposed a deterministic algorithm for solving PoSSo2 in O �2[0][.][841] _[n][�]_ when m = n (i.e. same number of equations and variables). A probabilistic variant of their algorithm (Las Vegas type) has expected complexity O �2[0][.][792] _[n][�]. They roughly estimate_ the actual threshold between their method and exhaustive search (whose cost is 4log2 n 2[n] operations [10]), which is as low as 200. Note that the complexity analysis in [7] requires an algebraic assumption which is similar to [9]. Such assumption will be also used here. From now on, we will always assume that q > 2. The question of solving PoSSoq for a bigger q is quickly addressed in [39][Section 2.1]. More precisely, [39][Proposition 7, p. 5] describes an implicit method for finding the optimal number of variables to be fixed in FXL. For q = 2[8], the best-tradeoff in FXL is obtained by fixing 0.049 _n variables (assuming ω = 2). Us-_ ing a different technique, we present also here an implicit method for finding the best-tradeoff with the hybrid approach. For example with q = 2[8], we get the most favorable trade-off is obtained by fixing 0.07 _n variables (assuming ω = 2.4)._ The goal of this paper is to further improve the theoretical analysis initiated in [9]. 
The goal of this paper is to further improve the theoretical analysis initiated in [9]. In particular, we address the following issues:

• What is the explicit asymptotic value of the best trade-off?
• What is the asymptotic complexity of the hybrid approach?
• What is the gain of the hybrid approach over a direct Gröbner basis method?

**Organization of the Paper.** After this introduction, the paper is organized as follows. Sect. 2 recalls some results from [9] needed for our new analysis. We also define a general framework for our study. We emphasize that all our results are based on a rather natural algebraic assumption about the sub-systems considered during the hybrid approach, i.e. we assume that a semi-regular system remains semi-regular after having specialized some variables (this is similar to [9, 7]). This is formalized in Hypothesis 1 (Section 2.1). In Section 2.2, we present a first new result about the hybrid approach. Surprisingly enough, we have been able to show that fixing a number of variables k which is proportional to the initial number of variables of the system considered yields a better trade-off than the one in [9]. In Section 3, we provide an explicit form of the best trade-off. We show that it is asymptotically[1] equivalent to:

10.86 ω² n / (4.16 log_2(q) − 3.14 ω)²,

where ω, 2 ⩽ ω ⩽ 3, is the linear algebra constant. This result allows to derive an asymptotical equivalent for the cost of the hybrid approach. Precisely, the complexity is asymptotically equivalent to 2^{n ω (1.38 − 0.44 ω log(q)^{−1})} when n → ∞, q → ∞ and log(q) ≪ n.

Finally, we quantify in Section 4 the gain of the hybrid approach with respect to a direct Gröbner basis computation. Once again, we arrive at a rather unexpected result. The hybrid approach provides – under some conditions – an exponential speed-up. More precisely, when n → ∞, q → ∞ and as long as n ≫ log(q), the gain of the hybrid approach compared to the direct Gröbner basis approach is asymptotically 2^{0.62 ω n}. To the knowledge of the authors, this makes the hybrid approach the method with the best asymptotical complexity for solving PoSSo_q (for q > 2).

##### 2. PRELIMINARIES
We review in this part some useful results obtained in [9]. Throughout the paper, we always use the following notations: q is the size of the field, n is the number of variables, m is the number of equations, and k is the trade-off (the number of fixed variables in the hybrid approach). We will always assume that m ≥ n. We denote by ω, 2 ⩽ ω ⩽ 3, the linear algebra constant. We write O for the "big O" notation. We also use o for the "little-o" notation, i.e. f(n) = o(g(n)) if lim_{n→∞} f(n)/g(n) = 0. Finally, we say that f and g are asymptotically equivalent, denoted f ∼ g, if f − g = o(g) (or equivalently, lim_{n→∞} f(n)/g(n) = 1 if f and g are positive real-valued functions).

##### 2.1 Complexity of the Hybrid Approach
We recall in this part the general expression of the hybrid approach cost [9]. To do so, let C_F5(n, m, d_reg) be the complexity of computing the Gröbner basis of a system of m equations in n variables using the F5 algorithm[2] [20], where d_reg is the degree of regularity of the system. Informally, the degree of regularity is the maximum degree reached during the Gröbner basis computation. Note that this degree depends on n, m and q. The complexity of the hybrid approach [9] is as follows.

[1] A Maple code corresponding to this paper can be found at http://www-salsa.lip6.fr/~perret/Site/hybrid_issac.mpl.
[2] Note that a similar analysis could also be performed with any algorithm solving PoSSo_q that has a precise complexity estimate based on the degree of regularity, e.g. [11, 13, 12, 19, 20, 27].
PROPOSITION 2.1. Let {f_1,..., f_m} ⊂ F_q[x_1,..., x_n] be an algebraic system of equations with respective degrees d_1 ⩾ ··· ⩾ d_m. Let k be a non-negative integer and d_reg^max(k) (resp. D^max(k)) be the maximum degree of regularity (resp. maximum number of solutions in the algebraic closure of F_q, counted with multiplicities) of all the systems:

{f_1(x_1,..., x_{n−k}, v_1,..., v_k),..., f_m(x_1,..., x_{n−k}, v_1,..., v_k)}

for any (v_1,..., v_k) ∈ F_q^k. The complexity of the hybrid approach is bounded from above by:

min_{0 ⩽ k ⩽ n} q^k ( C_F5(n − k, m, d_reg^max(k)) + O( (n − k) D^max(k)^ω ) ),   (1)

where the first term corresponds to the Gröbner basis computations and the second to the change of ordering. This is the complexity of computing q^k (DRL) Gröbner bases with F5 of polynomial systems having m equations, n − k variables and respective degrees d_1 ⩾ ··· ⩾ d_m, plus the cost of performing a change of ordering with FGLM [22].

In order to study the asymptotical behavior of the hybrid approach, we assume – as in [9] – a regularity condition about the sub-systems arising during the hybrid approach.

HYPOTHESIS 1. Let {f_1,..., f_m} ⊂ F_q[x_1,..., x_n] be random algebraic equations of respective degrees d_1 ⩾ ··· ⩾ d_m. Let β_min, 0 < β_min < 1, be a value that will be specified later. Then, for any k, 0 ⩽ k ⩽ ⌈β_min n⌉, and for each vector (v_1,..., v_k) ∈ F_q^k, the system:

{f_1(x_1,..., x_{n−k}, v_1,..., v_k),..., f_m(x_1,..., x_{n−k}, v_1,..., v_k)}

is semi-regular for n large enough.

Note that systems verifying such a hypothesis are in particular semi-regular (k = 0). We refer the reader to [8, 4, 6, 5] for more information on semi-regular systems. In practice, a randomly picked system is semi-regular with high probability. Assuming Fröberg's conjecture [26], this can be proven more formally. We emphasize that Hypothesis 1 has been experimentally verified [7] for a large amount of random quadratic binary systems. In [9], such an assumption has been verified for larger q on algebraic systems coming from multivariate schemes such as UOV [29]. However, such systems are naturally under-defined. Thus, the total number of variables to be fixed (m − n variables to have a square system, plus k variables due to the hybrid approach) is sufficiently big to assume that the algebraic systems obtained after specialization behave as a random system. Note also that we performed some experiments to check this assumption for random systems of equations. We experimentally verified that Hypothesis 1 holds for random square systems with various values of n, 6 ≤ n ≤ 16, and with parameters q, β_min as in Table 2.

One interesting feature of semi-regular systems is that their degree of regularity is known in advance. Indeed, let {f_1,..., f_m} ⊂ F_q[x_1,..., x_n] be a semi-regular system. Its regularity is given by the index of the first non-positive coefficient of

∑_{k≥0} c_k z^k = ∏_{i=1}^{m} (1 − z^{d_i}) / (1 − z)^n.

In addition, asymptotical equivalents are known [8, 4, 6, 5] for the degree of regularity. These allow to perform the analysis in [9], and will be further used in this paper.
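For small parameters this index is directly computable. The sketch below (sympy assumed; the function name is ours) expands the series above and returns the first non-positive index; for a square quadratic system it recovers d_reg = n + 1, a value used again in Section 4.

```
# Degree of regularity of an (assumed semi-regular) system with equation
# degrees d_1,...,d_m in n variables: index of the first non-positive
# coefficient of prod_i (1 - z^{d_i}) / (1 - z)^n.
from sympy import symbols, series, prod

def degree_of_regularity(degrees, n, cutoff=40):
    z = symbols('z')
    hilbert = prod(1 - z**d for d in degrees) / (1 - z)**n
    expansion = series(hilbert, z, 0, cutoff).removeO()
    for k in range(cutoff):
        if expansion.coeff(z, k) <= 0:
            return k
    raise ValueError("increase cutoff")

print(degree_of_regularity([2] * 10, 10))   # square quadratic system: 11 = n + 1
```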
Note that, assuming Hypothesis 1, all the sub-systems solved during the hybrid approach have – for a fixed k – the same degree of regularity. We denote this regularity by d_reg(k) (i.e. d_reg^max(k) = d_reg(k)). Furthermore, the number of solutions of an over-determined semi-regular system of equations is always 0 or 1 (i.e. 0 ≤ D^max(k) ≤ 1 as soon as k > 0). This allows to neglect the cost of the change of ordering algorithm in the complexity.

##### 2.2 Best Trade-Off for Quadratic Systems?
Throughout this paper, we denote by k_0 the optimal value for k, that is, the parameter that minimizes the complexity of the hybrid approach. The goal of this part is to obtain the asymptotic trend of the best trade-off. To simplify the analysis, we focus our attention on quadratic systems. Such systems are widespread in many applications (especially cryptography), making their study of main interest.

To find the best trade-off, we want to minimize the complexity of the hybrid approach. To do so, we first consider the complexity C_hyb(k) of the hybrid approach as a continuous function of k ∈ R. When this function reaches its minimum, its derivative C_hyb(k)′ with respect to k vanishes. A root k_0 of C_hyb(k)′ with 0 ⩽ k_0 ⩽ n then gives the best trade-off. Finally, as C_hyb(k) is a complexity, it is always positive. It is thus equivalent to look for a root of its logarithmic derivative C_hyb(k)′ / C_hyb(k).

Let C_1(n, k) = n − k − 1, C_2(n, k) = (3 n − k)/2 − 1 − √(n k) and C_3(n, k) = (n + k)/2 − √(n k). The authors of [9] obtain that the best trade-off k_0 is a root of ∆(k), where

∆(k) = log(q) + ω ( log(C_1(n, k)) + 1/(2 C_1(n, k)) )
     − (ω/2) (1 + √(n/k)) ( log(C_2(n, k)) + 1/(2 C_2(n, k)) )
     − (ω/2) (1 − √(n/k)) ( log(C_3(n, k)) + 1/(2 C_3(n, k)) ).   (2)

To push further the asymptotical analysis, we need to assume – a priori – what the global trend of k is. At first glance, it seems (rather) natural to believe that k is going to be small, and should then be a constant. This is what was assumed in [9]. Surprisingly enough, we will see that the best trade-off is obtained asymptotically by fixing β_0 n variables, where β_0 is independent of n. To show this, we first write k = β n with 0 ⩽ β ⩽ 1, and we show that β tends to a constant when n grows to infinity. By substituting k by β n in (2), and factoring by n in each log term, we obtain that

∆(β) = log(q) + ω ( log(n) + log(1 − β − 1/n) + 1/(2 C_1(n, β n)) )
     − (ω/2) (1 + √(1/β)) ( log(n) + log((3 − β)/2 − 1/n − √β) + 1/(2 C_2(n, β n)) )
     − (ω/2) (1 − √(1/β)) ( log(n) + log((1 + β)/2 − √β) + 1/(2 C_3(n, β n)) ).   (3)

The coefficient of log(n) in this expression is:

ω − (ω/2) (1 + √(1/β)) − (ω/2) (1 − √(1/β)) = 0.

We remark that C_1(n, β n), C_2(n, β n) and C_3(n, β n) go to infinity when n tends to infinity. As a consequence:

∆(β) ∼ log(q) + ω log(1 − β) − (ω/2) (1 + √(1/β)) log((3 − β)/2 − √β) − (ω/2) (1 − √(1/β)) log((1 + β)/2 − √β).

Observe that n does not appear in this asymptotic expansion of ∆(β). Thus, a solution of ∆(β) = 0 at infinity is unrelated to n. As a consequence, the best (asymptotic) trade-off can be written k_0 = β_0 n, where β_0 is unrelated to n. This is a contradiction with our prior assumption [9]: k_0 is not a constant. To have a precise analysis, we should look for the best asymptotic trade-off assuming k = β n. This is one of the reasons motivating a new analysis.
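Numerically, the linear trend is already visible at moderate sizes. The sketch below (scipy assumed; natural logarithms; q = 2^8 and ω = 2.4 are our sample choices) finds the root of ∆(k) for several n: the root scales linearly, with k_0/n ≈ 0.07, in line with the values derived in Section 3.

```
# Root of Delta(k) from (2) for several n: it grows linearly with n.
from math import log, sqrt
from scipy.optimize import brentq

def delta(k, n, q, w=2.4):
    c1 = n - k - 1
    c2 = (3 * n - k) / 2 - 1 - sqrt(n * k)
    c3 = (n + k) / 2 - sqrt(n * k)
    return (log(q) + w * (log(c1) + 1 / (2 * c1))
            - w / 2 * (1 + sqrt(n / k)) * (log(c2) + 1 / (2 * c2))
            - w / 2 * (1 - sqrt(n / k)) * (log(c3) + 1 / (2 * c3)))

for n in (100, 200, 400):
    k0 = brentq(delta, 1, n / 2, args=(n, 2**8))
    print(n, round(k0, 1), round(k0 / n, 3))
```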
##### 3. COMPLEXITY OF HYBRID APPROACH
In this part, we investigate the complexity of the hybrid approach. The goal is to have an expression of the complexity which is as explicit as possible. To this end, we first derive an asymptotical equivalent of this complexity depending on the degree of regularity. According to Section 2.2, we have the global trend of the best trade-off: it is of the form k = β n (with β unrelated to n). Then, we derive an asymptotically equivalent formula for the regularity of the sub-systems involved in the hybrid approach. Finally, we put everything together to get an asymptotic equivalent for the hybrid approach cost.

##### 3.1 A First Asymptotic Equivalent
We recall the complexity of F5 as stated in [8]:

C_F5(n, d_reg) = O( binom(n + d_reg, d_reg)^ω ).   (4)

Remark that this complexity does not explicitly involve the number of equations (m). But remember that the regularity depends on m. This cost is slightly different from the one used in [9]. The reason is that (4) is more accurate for semi-regular systems. Using Stirling's formula, i.e.

n! ∼ √(2 π n) (n/e)^n,

we can derive a first expression for the complexity of the hybrid approach. Since C_hyb(k) = q^k C_F5(n − k, d_reg(k)), it is not difficult to see that

C_hyb(k) ∼ q^k ( (1/√(2 π)) · (n − k + d_reg(k))^{n − k + d_reg(k) + 1/2} / ( (n − k)^{n − k + 1/2} d_reg(k)^{d_reg(k) + 1/2} ) )^ω.   (5)

By abuse of language, we will always refer to (5) (an asymptotic equivalent) as the complexity of the hybrid approach.

##### 3.2 Asymptotic Equivalent of the Regularity
From now on, we set m = α n (α ≥ 1 is a constant). According to Section 2.2, the best trade-off is obtained for a k of the form β n. Thus, the hybrid approach considers sub-systems having n′ = n (1 − β) variables and a number of equations m = (α/(1 − β)) (1 − β) n = θ n′. For such systems, we have an asymptotic equivalent of the degree of regularity [8], i.e.:

d_reg(n′, m) ∼ ( θ − 1/2 − √(θ (θ − 1)) ) n′ + O(n′^{1/3}).   (6)

Note that in [9] we have used a different asymptotic expansion of the degree of regularity. Experiments performed in [9] seemed to suggest that the optimal number of variables (i.e. the trade-off) to be fixed is a constant. As discussed in Section 2.2, this intuition is incorrect. Thus, assuming a trade-off of the form β n, we get that any sub-system occurring in the hybrid approach has a degree of regularity asymptotically equivalent to γ n + O(n^{1/3}), with:

γ = α − (1 − β)/2 − √(α (α + β − 1)).   (7)

##### 3.3 Implicit Form of the Best Trade-Off
In this part, we show that the best trade-off at infinity, k_0 = ⌈β_0 n⌉, can be obtained by solving an implicit equation. The idea is to derive an equivalent of the logarithmic derivative of C_hyb using the regularity (7). Let D = 1 − β + γ. By combining (2) and (7), we get that

C_hyb(β n)′ / C_hyb(β n) ∼ n log(q) + ω n ( log(n) + log(1 − β) + 1/(2 n (1 − β)) )
     − (ω n / 2) (1 + √(α/(α + β − 1))) ( log(n) + log(D) + 1/(2 n D) )
     − (ω n / 2) (1 − √(α/(α + β − 1))) ( log(n) + log(γ) + 1/(2 n γ) ).

The terms in log(n) cancel out in this expression. Since n > 0, β_0 is then a root of A(β) = (1/n) · C_hyb(β n)′ / C_hyb(β n). By ignoring constant terms at infinity:

A(β) ∼ A_∞(β),   (8)

with

A_∞(β) = log(q) + ω log(1 − β) − (ω/2) (1 + √(α/(α + β − 1))) log(D_1(α, β)) − (ω/2) (1 − √(α/(α + β − 1))) log(D_2(α, β)),

where D_1(α, β) = α + (1 − β)/2 − √(α (α + β − 1)) and D_2(α, β) = α − (1 − β)/2 − √(α (α + β − 1)). This leads to the following result.

PROPOSITION 3.1. Let F = {f_1,..., f_m} ⊂ F_q[x_1,..., x_n] be a system of quadratic equations verifying Hypothesis 1. Let A_∞ be as defined in (8). The best trade-off for solving F with the hybrid approach is asymptotically to fix k_0 = ⌈β_0 n⌉ variables, where β_0 is a root of A_∞ such that 0 < β_0 ⩽ 1. The coefficient β_0 is independent of the number of variables n.

A root β_0 of A_∞(β) can be computed numerically (for instance using computer algebra software such as MAPLE). In Table 2 (Appendix), we present the best trade-off β_0 obtained for various values of α and q.
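Any numerical root finder works as well as MAPLE here. The sketch below (scipy assumed; natural logarithms; ω = 2.4) solves A_∞(β) = 0 and reproduces two entries of Table 2: β_0 ≈ 0.40 for (α, q) = (1.25, 2^2) and β_0 ≈ 0.071 for (α, q) = (1, 2^8).

```
# Numerical root of A_inf(beta) from (8), with omega = 2.4.
from math import log, sqrt
from scipy.optimize import brentq

def a_inf(beta, alpha, q, w=2.4):
    s = sqrt(alpha * (alpha + beta - 1))
    r = sqrt(alpha / (alpha + beta - 1))
    d1 = alpha + (1 - beta) / 2 - s    # D_1(alpha, beta)
    d2 = alpha - (1 - beta) / 2 - s    # D_2(alpha, beta)
    return (log(q) + w * log(1 - beta)
            - w / 2 * (1 + r) * log(d1) - w / 2 * (1 - r) * log(d2))

for alpha, q in ((1.25, 2**2), (1.0, 2**8)):
    print(alpha, q, round(brentq(a_inf, 1e-6, 0.99, args=(alpha, q)), 3))
```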
##### 3.2 Asymptotic Equivalent of the Regularity From now on, we set m = α n (α ≥ 1 is a constant). According to Section 2.2, the best trade-off is obtained for a k of the form β · _n._ Thus, the hybrid approach considers sub-systems having n[′] = n(1− _β_ ) variables and a number of equations m = 1−αβ [(][1] _[−]_ _[β]_ [)] _[n][ =][ θ][ n][′][.]_ For such systems, we have an asymptotic equivalent of the degree of regularity [8], i.e.: � � � � _dreg(n[′],_ _m) ∼_ _θ −_ [1] _θ (θ −_ 1) _n_ + _O_ _n[1][/][3][�]_ _._ (6) 2 _[−]_ Note that in [9], we have used a different asymptotic expansion of the degree of regularity. Experiments performed in [9] seem to suggest that the optimal number of variables (i.e. trade-off) to be fixed is a constant. As discussed in Section 2.2, this intuition is incorrect. Thus, assuming a trade-off of the form β · _n, we get that any sub-_ system occurring in the hybrid approach has a degree of regularity � asymptotically equivalent to γ n + _O_ _n[1][/][3][�], with:_ � _ν −_ 1 _B∞(ν) = log_ (q)+ _ω log_ (2 _ν +_ 2)+ _ω log_ 2 _ν_ [2] � � _ν −_ 1 _−_ _[ω]_ 2 [(][1] [+] _[ν][)][ log]_ [(][3] _[ν][ +]_ [1][)] _[−]_ _[ω]2_ [(][1] [+] _[ν][)][ log]_ 2 _ν_ [2] � � _ν −_ 1 _−_ _[ω]_ 2 [(][1] _[−]_ _[ν][)][ log]_ [(][ν][ −] [1][)] _[−]_ _[ω]2_ [(][1] _[−]_ _[ν][)][ log]_ 2 _ν_ [2] � _._ � � _γ =_ _α −_ [1] _[−]_ _[β]_ _−_ 2 � _α (α +_ _β −_ 1) _._ (7) We observe that the terms in log � _ν−1_ � cancels out. Finally: 2 _ν_ [2] _A(β_ ) ∼ _B∞(β_ ), (9) ----- with B∞(ν) = log (q)+ � � _ω_ log (2 _ν +_ 2) _−_ [1] [+] _[ν]_ log (3 _ν +_ 1) _−_ [1] _[−]_ _[ν]_ log (ν − 1) _._ 2 2 For square systems, Proposition 3.1 can be refined as follows. PROPOSITION 3.2. Let F = { _f1,..., fn} ⊂_ Fq[x1,..., _xn] be a_ _system of quadratic equations verifying Hypothesis 1. Let B∞_ _be_ _as defined in (9). The best trade-off for solving F with the hybrid_ _approach is asymptotically to fix k0 =_ � _νn0[2]_ � _variables, where ν0 is_ _a root of B∞(ν) such that ν0,_ 0 < β0 ⩽ 1. The coefficient β0 = _ν[1]0[2]_ _is independent of n._ We show in Table 1 the value of β0 = _ν[1]0[2]_ [with respect to several] usual sizes of field q. We compare these values with the exact ratio _β0 when n = 100 and n = 200 (once the parameters are fixed, we_ can compute exact value β0[exact] minimizing the complexity of the hybrid approach). The table shows that our approximation matches well with the expected value. Then, as k0 = ⌈n _β0⌉_ = � _νn0[2]_ �, we recover the result announced. Note that when q is too small, β0 becomes greater than one and the approximation is not valid. We are now in position to derive the (asymptotical) complexity of the hybrid approach. We use the value of β0 provided in Proposition 3.3 together with (7) to have an asymptotic of the regularity. It is a multiple of n, and we denote by γ0 the corresponding factor. Precisely: � 1 + _β0_ � � _γ0 =_ _−_ _β0_ _._ (11) 2 Finally, we obtain the asymptotic complexity of the hybrid approach – with the best tradeoff – using the complexity (5). Let _D0 = 1_ _−_ _β0 +_ _γ0, we have Chyb(k0) = Chyb(β0 n)_ **Table 1: Sample values for β0 for several field sizes with ω =** 2.4. 
Note that the proportion of variables which needs to be fixed tends to 0 when the size of the field increases. This is consistent with the intuition that exhaustive search becomes less interesting for too big fields.

##### 3.4 Complexity of the Hybrid Approach – An Asymptotic Equivalent
We derive in this part an explicit (asymptotic) equivalent of the hybrid approach complexity. The only element which is missing to get this equivalent is an explicit form of the β_0 discussed in Section 3.3. Table 1 suggests that when q grows, β_0 = 1/ν_0² decreases. This means that ν_0 → ∞ when q → ∞. This remark combined with Proposition 3.2 leads to the following result.

PROPOSITION 3.3. Let F = {f_1,..., f_n} ⊂ F_q[x_1,..., x_n] be a system of quadratic equations verifying Hypothesis 1. Asymptotically, the best trade-off for solving F with the hybrid approach is to fix k_0 = ⌈n β_0⌉ variables, with:

β_0 = ( 3 ω log(3) / ( 6 log(q) + 6 ω log(2) − 4 ω − 3 ω log(3) ) )²
    = 10.86 ω² / (4.16 log_2(q) − 3.14 ω)².

PROOF. Let B_∞(ν) be as defined in Proposition 3.2. We get that

B_∞(ν) ∼_{ν→∞} −(ω log(3)/2) ν + log(q) + ω ( log(2) − 2/3 − log(3)/2 ).

Let ν_0 be a root of B_∞(ν) at infinity (i.e. ν → ∞). We get:

ν_0 = ( 6 log(q) + 6 ω log(2) − 4 ω − 3 ω log(3) ) / ( 3 ω log(3) ).   (10)

The claimed value follows from β_0 = 1/ν_0².

We are now in a position to derive the (asymptotical) complexity of the hybrid approach. We use the value of β_0 provided in Proposition 3.3 together with (7) to have an asymptotic equivalent of the regularity. It is a multiple of n, and we denote by γ_0 the corresponding factor. Precisely:

γ_0 = (1 + β_0)/2 − √β_0.   (11)

Finally, we obtain the asymptotic complexity of the hybrid approach – with the best trade-off – using the complexity (5). Let D_0 = 1 − β_0 + γ_0. We have C_hyb(k_0) = C_hyb(β_0 n)

∼ q^{β_0 n} (1/√(2 π))^ω · ( (n − β_0 n + γ_0 n)^{n − β_0 n + γ_0 n + 1/2} / ( (n − β_0 n)^{n − β_0 n + 1/2} (γ_0 n)^{γ_0 n + 1/2} ) )^ω

∼ q^{β_0 n} (1/√(2 π))^ω · (1/√n)^ω · ( D_0^{n − β_0 n + γ_0 n + 1/2} / ( (1 − β_0)^{n − β_0 n + 1/2} γ_0^{γ_0 n + 1/2} ) )^ω

∼ q^{β_0 n} (1/√(2 π n))^ω · ( D_0 / ((1 − β_0) γ_0) )^{ω/2} · ( D_0^{D_0} / ( (1 − β_0)^{1 − β_0} γ_0^{γ_0} ) )^{ω n}.   (12)

This leads to:

THEOREM 3.1. The complexity of the hybrid approach – using the trade-off k_0 = ⌈β_0 n⌉ of Proposition 3.3 – is asymptotically equivalent to 2^{n ω (1.38 − 0.63 ω log_2(q)^{−1})}, when n → ∞, q → ∞ and log(q) ≪ n.

PROOF. From (12) and using the value of k_0 in Prop. 3.3:

log_2(C_hyb(k_0)) ∼ n K − ω log_2(√(2 π n)) + O(1),   (13)

with

K = log_2(q)/ν_0²
  + ω (3/2 − 1/(2 ν_0²) − 1/ν_0) log_2(3/2 − 1/(2 ν_0²) − 1/ν_0)
  − ω (1 − 1/ν_0²) log_2(1 − 1/ν_0²)
  − ω (1/2 + 1/(2 ν_0²) − 1/ν_0) log_2(1/2 + 1/(2 ν_0²) − 1/ν_0).

When q → ∞, K tends to

(3/2) ω log_2(3) − ω − (1/4) ω² log_2(3)² / log_2(q) = 1.38 ω − 0.63 ω² / log_2(q).

The first term in (13) is dominant, so the complexity of the hybrid approach is asymptotically 2^{n K}. If ω = 2.4 for instance, the complexity of the hybrid approach is 2^{n (3.31 − 3.62 log_2(q)^{−1})}.
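The convergence in q can be checked directly. The following sketch (our own; ν_0 from (10) with natural logarithms, ω = 2.4) evaluates the exact K of (13) and its limit above side by side; the two agree within a few percent already at q = 2^8.

```
# K of (13) computed from nu_0 of (10), versus its q -> infinity limit.
from math import log, log2

w = 2.4
for e in (8, 16, 32, 64):
    q = 2.0**e
    nu0 = (6 * log(q) + 6 * w * log(2) - 4 * w - 3 * w * log(3)) / (3 * w * log(3))
    b0 = 1 / nu0**2                   # beta_0 (Proposition 3.3)
    g0 = (1 + b0) / 2 - b0**0.5       # gamma_0, equation (11)
    d0 = 1 - b0 + g0                  # D_0
    K = b0 * log2(q) + w * (d0 * log2(d0)
                            - (1 - b0) * log2(1 - b0) - g0 * log2(g0))
    print(e, round(K, 3), round(1.38 * w - 0.63 * w**2 / log2(q), 3))
```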
Using Stirling’s formula in (4): �ω _._ _CF5 ∼_ � 1 (2 _n_ + 1)[2] _[n][+][ 3]2_ _√_ 2 _π_ _[·]_ _n[n][+][ 1]2 (n_ + 1)[n][+][ 3]2 To simplify this expression, we use: (2 _n_ + 1)[2] _[n][+][ 3]2_ � �2 _n+ 32_ = 1 + [1] _∼_ _e._ (2 _n)[2]_ _[n][+][ 3]2_ 2 _n_ Thus, CF5 ∼ � 1 2 �ω _√_ _∼_ 2 _π_ _[·][ e]n[n][(][+][2][ 1]2[n] en[)][2]_ _[n][n][+][+][ 3][ 3]2_ � 1 2 �ω _∼_ _√_ 1 _._ 2 _π_ _[·][ 2][2]n[n][+]2_ [ 3] Finally: � 2 _CF5 ∼_ _√π n_ � 1 2 2[2] _[n][+][ 3]2_ _√_ 2 _π_ _[·][ n]n[2]_ _[n][n][+][+][ 3][ 1]2 n[n][+][ 3]2_ �ω �ω _·_ 2[2] _[ω][ n]_ _._ (14) Let k0 be as defined in Proposition 3.3. Using (12) and (14), we get that _ChybCF5(k0)_ _[∼]_ � _√2π n_ �ω _×_ 2[2] _[ω][ n][ �][√]2_ _π n�ω_ _q[β][0][ n]_ � (1 _−_ _β0)_ _γ0_ 1 _−_ _β0 +_ _γ0_ �ω n _._ � _ω2_ � (1 _−_ _β0)[1][−][β][0]_ _γ0[γ][0]_ (1 _−_ _β0 +_ _γ0)[1][−][β][0][+][γ][0]_ This last expression can be written as follows: � 1 _q[β][0]_ � �ω �n (1 _−_ _β0)[1][−][β][0]_ _γ0[γ][0]_ 2[2] _·_ (1 _−_ _β0 +_ _γ0)[1][−][β][0][+][γ][0]_ �2 _√2�ω_ _·_ � 1(1−−ββ0 +0) _γγ00_ _ω_ � 2 _·_ _._ As a consequence: _CF5_ 1 _Chyb(k0)_ _[∼]_ _q[β][0][ n]_ � (1 _−_ _β0)[1][−][β][0]_ _γ0[γ][0]_ 2[2] _·_ (1 _−_ _β0 +_ _γ0)[1][−][β][0][+][γ][0]_ On the other hand, the actual gain can be more precisely computed with explicit values of Chyb, the best trade-off, and CF5 ). We compare the real gain with several of our asymptotic estimations for fields of size q = 2, 16, 256, 2[16], 2[32] using ω = 2.4. Each figure (Fig. 1 to 5) has four curves, except when q ⩽ 13, where the approximation of Proposition 3.3 is not relevant. – The theoretical gain (plain line) obtained from the explicit complexity of CF5 (4) and the best trade-off as the minimum of Proposition 2.1 for all _k,_ 0 ⩽ _k ⩽_ _n._ – The gain when n → ∞ (dashed line) obtained from (16) and the trade-off is computed with Proposition 3.1. – The gain when n → ∞ with k0 from Proposition 3.3 (loosely dashed line) obtained from (16) (relevant for q > 13). – The asymptotic gain when n → ∞ and q → ∞ (dotted line) of Theorem 4.1. gain 2[240] 2[200] 2[160] 2[120] 2[80] 2[40] �ω n _._ (15) This corresponds to the asymptotic gain of the hybrid approach. To simplify our notations, we denote by Q = log2 � _ChybCF5(k0)_ � the logarithm of the gain. It holds that Q ∼ _nC, with:_ � (1 _−_ _β0)[1][−][β][0]_ _γ0[γ][0]_ _C = −β0 log2 (q)+_ 2 _ω log2 (2)+_ _ω log2_ (1 _−_ _β0 +_ _γ0)[1][−][β][0][+][γ][0]_ � _._ Note that C does not depend on n. We replace β0 and γ0 by their respective values obtained from Prop. 3.3 and equation (11). To have an approximation of this gain, one can compute an asymptotic expansion of C when q → ∞. Using the logarithmic in base 2: _C ∼_ 3 _ω −_ [3] (16) 2 _[ω][ log][2][ (][3][) =][ 0][.][62]_ _[ω][ .]_ This allows to state the following: THEOREM 4.1. Let F = { _f1,..., fn} ⊂_ Fq[x1,..., _xn] be quadratic_ _equations verifying Hypothesis 1. When n →_ ∞, q → ∞ _and as long_ _as n ≫_ log2(q), the gain of the hybrid approach compared to a _direct Gröbner basis approach is asymptotically 2[0][.][62]_ _[ω][ n]._ Theorem 4.1 gives a trend of the asymptotic gain. It shows the overall efficiency of the hybrid approach compared to the simple Gröbner basis approach. For ω = 2.4, we get a speed-up of 2[1][.][49] _[n]_ as stated in the abstract. nb. vars 0 25 50 75 100 125 150 **Figure 1: Gain when solving a system over F2.** gain 2[240] 2[200] 2[160] 2[120] 2[80] 2[40] nb. 
On the other hand, the actual gain can be computed more precisely from the explicit values of C_hyb (with the best trade-off) and C_F5. We compare the real gain with several of our asymptotic estimations for fields of size q = 2, 16, 256, 2^16, 2^32, using ω = 2.4. Each figure (Fig. 1 to 5) has four curves, except when q ⩽ 13, where the approximation of Proposition 3.3 is not relevant:

– the theoretical gain (plain line), obtained from the explicit complexity C_F5 of (4) and the best trade-off taken as the minimum of Proposition 2.1 over all k, 0 ⩽ k ⩽ n;
– the gain when n → ∞ (dashed line), obtained from (16) with the trade-off computed from Proposition 3.1;
– the gain when n → ∞ with k_0 from Proposition 3.3 (loosely dashed line), obtained from (16) (relevant for q > 13);
– the asymptotic gain when n → ∞ and q → ∞ (dotted line) of Theorem 4.1.

[Figures 1–5 plot the gain (2^40 to 2^240, log scale) against the number of variables (0 to 150).]
**Figure 1: Gain when solving a system over F_2.**
**Figure 2: Gain when solving a system over F_16.**
**Figure 3: Gain when solving a system over F_{2^8}.**
**Figure 4: Gain when solving a system over F_{2^16}.**
**Figure 5: Gain when solving a system over F_{2^32}.**

As expected, the gain approximation becomes more accurate as q grows (Fig. 1 to 3). When n is not big enough compared to q, it becomes less accurate (Fig. 5). Asymptotically, the hybrid approach is then always better than a direct solving. Eventually, when q is too big (with respect to n), the cost of an exhaustive search, even in one single variable, will be too expensive compared to a Gröbner basis computation.

**Acknowledgments.** We would like to thank the referees for their meaningful comments. The work described in this paper has been supported in part by the European Commission through the ICT program under contract ICT-2007-216676 ECRYPT II. The authors were also supported in part by the French ANR under the Computer Algebra and Cryptography (CAC) project ANR-09-JCJCJ-0064-01 and the High-Performance Algebraic Computing (HPAC) project ANR-2011-BS02-013-04.

##### 5. REFERENCES
[1] M. Albrecht, J.-C. Faugère, P. Farshim, and L. Perret. Polly Cracker, revisited. In D. Lee and X. Wang, editors, Advances in Cryptology – Asiacrypt 2011, volume 7073 of Lecture Notes in Computer Science, pages 179–196. Springer Berlin/Heidelberg, 2011.
[2] G. Ars, J.-C. Faugère, H. Imai, M. Kawazoe, and M. Sugita. Comparison between XL and Gröbner basis algorithms. In P. J. Lee, editor, ASIACRYPT, volume 3329 of Lecture Notes in Computer Science, pages 338–353. Springer, 2004.
[3] D. Augot, J.-C. Faugère, and L. Perret. Foreword. J. Symb. Comput., 44(12):1605–1607, 2009.
[4] M. Bardet. Études des systèmes algébriques surdéterminés. Applications aux codes correcteurs et à la cryptographie. PhD thesis, Université Paris 6, December 2004.
[5] M. Bardet, J.-C. Faugère, and B. Salvy. Complexity study of Gröbner basis computation. Technical report, INRIA, 2002. http://www.inria.fr/rrrt/rr-5049.html.
[6] M. Bardet, J.-C. Faugère, and B. Salvy. On the complexity of Gröbner basis computation of semi-regular overdetermined algebraic equations. In International Conference on Polynomial System Solving – ICPSS, pages 71–75, 2004.
[7] M. Bardet, J.-C. Faugère, B. Salvy, and P.-J. Spaenlehauer. On the complexity of solving quadratic boolean systems. CoRR, abs/1112.6263, 2011.
[8] M. Bardet, J.-C. Faugère, B. Salvy, and B.-Y. Yang. Asymptotic behaviour of the degree of regularity of semi-regular polynomial systems. In The Effective Methods in Algebraic Geometry Conference – MEGA 2005, pages 1–14, 2005.
[9] L. Bettale, J.-C. Faugère, and L. Perret. Hybrid approach for solving multivariate systems over finite fields. Journal of Mathematical Cryptology, 3(3):177–197, 2009.
[10] C. Bouillaguet, H.-C. Chen, C.-M. Cheng, T. Chou, R. Niederhagen, A. Shamir, and B.-Y. Yang. Fast exhaustive search for polynomial systems in F_2. In S. Mangard and F.-X. Standaert, editors, CHES, volume 6225 of Lecture Notes in Computer Science, pages 203–218. Springer, 2010.
[11] B. Buchberger. Ein Algorithmus zum Auffinden der Basiselemente des Restklassenringes nach einem nulldimensionalen Polynomideal. PhD thesis, University of Innsbruck, 1965.
[12] B. Buchberger. Bruno Buchberger's PhD thesis 1965: An algorithm for finding the basis elements of the residue class ring of a zero dimensional polynomial ideal. Journal of Symbolic Computation, 41(3-4):475–511, 2006.
[13] B. Buchberger, G. E. Collins, R. G. K. Loos, and R. Albrecht. Computer algebra symbolic and algebraic computation. SIGSAM Bull., 16(4):5–5, 1982.
[14] C. Cid, S. Murphy, and M. J. B. Robshaw. Algebraic Aspects of the Advanced Encryption Standard. Springer, 2006.
##### APPENDIX

**Table 2: Sample values for β0 depending on several values of α and q with ω = 2.4. An entry is empty when there is no positive solution (i.e. best trade-off is k = 0).**

| | q = 2^2 | 2^3 | 2^4 | 2^5 | 2^6 | 2^8 | 2^16 |
|---|---|---|---|---|---|---|---|
| β0 (α = 1) | 0.52 | 0.35 | 0.24 | 0.17 | 0.12 | 0.071 | 0.017 |
| β0 (α = 1.1) | 0.47 | 0.29 | 0.17 | 0.087 | 0.036 | – | – |
| β0 (α = 1.25) | 0.40 | 0.19 | 0.052 | – | – | – | – |
| β0 (α = 1.5) | 0.28 | 0.028 | – | – | – | – | – |
| β0 (α = 1.75) | 0.16 | – | – | – | – | – | – |
| β0 (α = 2) | 0.042 | – | – | – | – | – | – |
| β0 (α = 3) | – | – | – | – | – | – | – |
| β0 (α = 4) | – | – | – | – | – | – | – |
| β0 (α = 5) | – | – | – | – | – | – | – |
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1145/2442829.2442843?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1145/2442829.2442843, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://hal.inria.fr/hal-00776070/file/MAYA2-UPMCINRIA-hybridext_1.0.pdf" }
2012
[ "JournalArticle", "Conference" ]
true
2012-07-22T00:00:00
[]
15,831
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Environmental Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Economics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01a2245e6a49508445a32b9fbc698cb600cdc0a9
[ "Computer Science" ]
0.894953
Grid Load Shifting and Performance Assessments of Residential Efficient Energy Technologies, a Case Study in Japan
01a2245e6a49508445a32b9fbc698cb600cdc0a9
Sustainability
[ { "authorId": "101113408", "name": "Yanxue Li" }, { "authorId": "2735674", "name": "Weijun Gao" }, { "authorId": "39184490", "name": "Yingjun Ruan" }, { "authorId": "98220386", "name": "Y. Ushifusa" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://mdpi.com/journal/sustainability", "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-172127" ], "id": "8775599f-4f9a-45f0-900e-7f4de68e6843", "issn": "2071-1050", "name": "Sustainability", "type": "journal", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-172127" }
The increasing penetration of renewable energy decreases grid flexibility; thus, decentralized energy management or demand response are emerging as the main approaches to resolve this limitation and to provide flexible resources. This research investigates the performance of high energy efficiency appliances and grid-integrated distributed generators based on real monitored data from a social demonstration project. The analysis not only explores the potential cost savings and environmental benefits of high energy efficiency systems in the private sector, but also evaluates public grid load leveling potential from a bottom-up approach. This research provides a better understanding of the behavior of highly decentralized, efficient energy systems and includes detailed scenarios of monitored power generation and consumption in a social demonstration project. The scheduled heat pump effectively lifts the valley load by transforming electricity into thermal energy; its daily electricity consumption varies from 4 kWh to 10 kWh and is concentrated in the early morning throughout the year. Aggregated vehicle-to-home (V2H) brings flexible resources to the grid by discharging energy to cover the residential night peak load, with fuel cost savings accounting for 90% of the profit. The potential for grid load leveling via integrating the power utility and consumers is examined using a bottom-up approach. Contributions from five hundred thousand scheduled electrical vehicles (EVs) and fuel cells provide 5.0% of reliable peak power capacity at 20:00 in winter. The outcome illustrates the energy cost saving and carbon emission reduction scenarios of each of the proposed technologies. Relevant subsidies for heat pump water heater systems and cogeneration are essential for customers due to the high initial capital investment. Optimal mixes in structure and coordinated control of high efficiency technologies enable customers to participate in grid load leveling at the lowest cost, considering their different features and roles.
## sustainability _Article_

# Grid Load Shifting and Performance Assessments of Residential Efficient Energy Technologies, a Case Study in Japan

**Yanxue Li** [1] ([ORCID](https://orcid.org/0000-0002-9794-1610)), **Weijun Gao** [1], **Yingjun Ruan** [2],* and **Yoshiaki Ushifusa** [3] ([ORCID](https://orcid.org/0000-0002-9139-1822))

1 Faculty of Environmental Engineering, The University of Kitakyushu, Kitakyushu 808-0135, Japan; 15315005563@163.com (Y.L.); gaoweijun@me.com (W.G.)
2 Institute of Mechanical Engineering, Tongji University, Siping Road 1239, Shanghai 20092, China
3 Faculty of Economics and Business Administration, The University of Kitakyushu, Kitakyushu 802-8577, Japan; ushifusa@kitakyu-u.ac.jp
* Correspondence: ruanyj@tongji.edu.cn; Tel.: +86-21-65981482

Received: 22 May 2018; Accepted: 17 June 2018; Published: 21 June 2018

**Abstract:** The increasing penetration of renewable energy decreases grid flexibility; thus, decentralized energy management or demand response are emerging as the main approaches to resolve this limitation and to provide flexible resources. This research investigates the performance of high energy efficiency appliances and grid-integrated distributed generators based on real monitored data from a social demonstration project. The analysis not only explores the potential cost savings and environmental benefits of high energy efficiency systems in the private sector, but also evaluates public grid load leveling potential from a bottom-up approach. This research provides a better understanding of the behavior of highly decentralized, efficient energy systems and includes detailed scenarios of monitored power generation and consumption in a social demonstration project. The scheduled heat pump effectively lifts the valley load by transforming electricity into thermal energy; its daily electricity consumption varies from 4 kWh to 10 kWh and is concentrated in the early morning throughout the year. Aggregated vehicle-to-home (V2H) brings flexible resources to the grid by discharging energy to cover the residential night peak load, with fuel cost savings accounting for 90% of the profit. The potential for grid load leveling via integrating the power utility and consumers is examined using a bottom-up approach. Contributions from five hundred thousand scheduled electrical vehicles (EVs) and fuel cells provide 5.0% of reliable peak power capacity at 20:00 in winter. The outcome illustrates the energy cost saving and carbon emission reduction scenarios of each of the proposed technologies. Relevant subsidies for heat pump water heater systems and cogeneration are essential for customers due to the high initial capital investment. Optimal mixes in structure and coordinated control of high efficiency technologies enable customers to participate in grid load leveling at the lowest cost, considering their different features and roles.

**Keywords:** load shifting; high-efficiency appliances; on-site generators; performance evaluations

**1. Introduction**

The impact of climate change and sustainable energy growth has heightened the urgency of investigating next-generation energy and social system models in Japan, especially after the Fukushima nuclear disaster in March 2011. Following this, Japan shut down almost all of its nuclear power plants, which had accounted for around 30% of total power generation.
Now, the tight balance between the demand and supply of power at peak hours in Japan is obvious, and the ambitious plan to reduce GHG (greenhouse gas) emissions (a 25% reduction by 2020 compared with the 1990 level) has become unfeasible under this scenario. In order to tackle the tight grid demand-supply balance, especially during peak demand periods, enormous political and technical efforts are being made to replace the loss of nuclear energy. Extensive research efforts have focused on power supply optimization considering features of both power supply and demand, such as the dispatch of pumped hydro storage, demand response management, variable renewable energy (VRE) feed-in tariff schemes and retail market liberalization (Komiyama and Fujii [1]). VRE is expected to play a significant role in enhancing Japan's energy self-sufficiency and greenhouse gas reduction. However, it is predicted that, due to the low correlation between fluctuating VRE generation and instantaneous electric power demand, increasing VRE integration will lead to a nonlinear decrease in residual load and cause curtailment of variable renewables due to limits in grid flexibility, which also influences the effective utilization and market value of regional VRE [2–4].

Currently, the building sector is trending towards decentralized, more efficient technologies to cover electrical or heating loads. Hence, with the increase in efficient power technologies being installed in the electrical distribution grid, planning their integration into the public grid is also needed. This is similar to the integration of renewable energy resources, where consumers adjust their energy consumption patterns to provide flexible resources. Researchers [5–9] have examined the performance of demand side management strategies, such as the uptake of energy-saving appliances, the integration of flexible power technologies, and relevant incentive policies to encourage customers to participate more in local or community power supply management. Relevant studies have discussed the impact on load shifting of implementing high efficiency technologies such as heat pump water heaters, distributed PV systems and EVs with a coordinated demand response scheme.

Heat pump water heaters are generally considered useful appliances for environmental protection and load shifting. The uptake of heat pumps is generally supported by specific electricity tariff schemes in the energy market and by policy implications. Klein, Herkel [7] analyzed the cumulative load shifting potential in the heating and cooling sector, and found that different flexibility and storage options can be used to alter the load trajectory. Goto, Goto [8] state that an increase in energy price will enhance the selection rate of Eco-cute, and that cost reductions will be effective under specific tariff structures. Love, Smith [10] analyzed the effects of the uptake of heat pumps on the Great Britain national electricity grid from an aggregated perspective, using a simple upscaling method to add heat pump electrical load to the national grid, which indicated peak demand and ramp rate increases. Fischer, Wolf [11] assessed the flexibility of a residential heat pump model considering maximum power, shiftable energy and regeneration time, with results showing that flexibility is highly dependent on ambient temperature.
Baeten, Rogiers [12] simulated control models for heat pumps and thermal storage, and the results indicated that customers with heat pump heating systems can effectively participate in reducing peak generation capacity. With the expansion in the use of grid-connected on-site generators, power storage can provide customers with potential cost saving benefits by allowing them to manage their local power consumption under specific electricity market conditions. Meanwhile, this also adds flexibility to the grid in an aggregated form. Komiyama and Fujii [13] pointed out that a lower rechargeable battery cost can decrease the PV output suppression rate after large-scale PV energy is integrated into the grid. Rodriguez-Calvo, Cossent [14] investigated the technical impact of the future integration of electrical vehicles and PV generation, considering residential demand and homogeneously distributed EV and PV; EV charging works effectively in off-peak valley hours, while excess PV production increases the degree of load imbalance. Management of the operation and planning of distributed energy systems is important. White and Zhang [15] examined the potential financial return of using vehicle to grid (V2G) as a grid resource for peak load reduction and regulation on a daily basis; aggregated V2G participation may create a formal storage market with higher penetration of intermittent resources. Mohammadi, Mehrtash [16] analyzed the features of power networks to find a set of suitable partitions with the aim of improving convergence performance. Amini and Islam [17] use a genetic algorithm to find the best allocation of parking lots. Bahrami and Parniani [18] propose a load management strategy for EV charging to reduce peak load, and use a stochastic approach to enable smart chargers to schedule EVs based on historical charging data, thus minimizing the cost of charging for the vehicle owner.

Recently, price-based demand response has been widely implemented in the power market, shifting part of the behavior-based responsive load between different periods to reduce energy costs [19,20]. Rahmani-andebili [21] proposed linear and nonlinear models for the incentive-based and price-based demand response programs that have been implemented in several real power markets. Rahmani-andebili [22] modelled the implementation of demand response programs considering power unit commitment, with results indicating that residential customers can decrease the cost of power using cooperative demand side management strategies, and that carbon emissions from thermal power plants are also reduced. Rahmani-Andebili and Shen [23] investigate price-controlled energy management of smart homes through a bi-level optimization framework; smart homes achieve cost savings by scheduling the daily power consumption load. Driven by the potential benefits of demand side management, HEMS (Home Energy Management System) is widely promoted to reduce household energy use in Japan. A national energy roadmap launched by METI (Ministry of Economy, Trade and Industry) in 2014 states that the Japanese Government is committed to the realization of low energy consumption households; for example, all newly constructed houses are expected to be equipped with HEMS by 2030.
Buildings offer the potential for on-site energy generation (e.g., rooftop PV and cogeneration systems) and different storage options (e.g., thermal tanks and batteries), and their integration into the public grid needs to be planned similarly to the integration of renewable energy resources, providing flexible resources to the grid. The first aim of this research is to present the performance of high efficiency technology applications in the residential sector, and to classify the variability in local power generation and load consumption. Then, we examine the grid load leveling potential of coordinated demand side management strategies from a bottom-up approach. This study also presents their economic and environmental benefits, numerically, under the current electricity market in Japan. The paper is organized as follows: Section 2 provides an overview of the public power supply system and the data resources. Section 3 develops a better understanding of the behaviors of decentralized high efficiency energy systems based on real monitored applications and investigates the performance of high efficiency technology applications with coordinated management strategies. Section 4 discusses the impacts of demand side management on the public grid from an aggregated perspective and estimates the economic and environmental benefits. Finally, conclusions and suggestions are provided.

**2. Objective and Motivation**

_2.1. Location Scenario_

Currently, PV generation is the renewable energy resource playing the main role in enhancing energy self-sufficiency at the district level, since the feed-in tariff was launched in 2012 in Japan. For example, the integrated cumulative capacity of PV reached 787 MWp in February 2018 in Kyushu, accounting for 24.5% of the total district power capacity. Increasingly, intermittent sources provide a large proportion of variable and less flexible generation; Kyushu Electric Power even declared a temporary halt to VRE integration in September 2014 because of concern about the impact of PV output on the low demand during the mid-season. Figure 1 presents the location of the area examined in this paper. Kyushu lies off the south end of Honshu; it is Japan's third largest island, whose population reached 13 million by the end of 2017 (10.2% of the national population) and whose land area covers 42,231 km², 11.2% of the national land area. Kyushu Electric Power sales amounted to 9.2% of the nationwide electricity business, according to the Kyuden annual report 2017. Figure 2 describes the yearly and hourly peak demand trends from 1981 to 2017 in the Kyushu public grid. It shows a steady growth trend before 2005, followed by a decrease mainly influenced by a nationwide electricity saving campaign during 2012 in response to the tight power supply-demand scenario. Recently, the peak demand and yearly load have reached saturation.

**Figure 1.** Location of Kyushu region in Japan.

**Figure 2.** Trend of yearly and peak electricity loads.

_2.2. Data Resources_
Historical public grid loads were collected from the Kyuden Power Company website at hourly intervals over 2017 [24]; Figure 3 describes the daily average demand curves in the public grid. During the mid-season months, the grid shows a relatively flat daily demand curve. Higher daily variations occur in the summer and winter seasons due to the increasing air conditioning loads; the average summer peak load, driven by the massive cooling demand, reaches around 14,000 MWh, as much as 1.6 times the valley load. We note that averaging the time series over a day may lead to an underestimation of the variations in the daily demand curve. Daily load in winter generally experiences two peak periods, in the morning and at night, driven by the increase in heating demand.
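A minimal sketch of how such average daily demand curves (as in Figure 3) can be derived from hourly load records follows; the file name and column names ("timestamp", "load_mw") are hypothetical, not the actual Kyuden data layout:

```python
import pandas as pd

# Load hourly grid demand for one year and tag each record with month/hour.
load = pd.read_csv("kyushu_load_2017.csv", parse_dates=["timestamp"])
load["month"] = load["timestamp"].dt.month
load["hour"] = load["timestamp"].dt.hour

# Average load for each (month, hour) pair -> one 24-point curve per month.
curves = load.pivot_table(index="hour", columns="month",
                          values="load_mw", aggfunc="mean")
print(curves.round(0))

# Caveat noted in the text: averaging flattens the curve, so the day-to-day
# spread can be tracked alongside the mean.
spread = load.groupby(["month", "hour"])["load_mw"].std()
print(spread.groupby("month").max().round(0))
```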
For demand users, the overall operational cost can be reduced due to the high price during peak As shown in Figure 4, load shifting can bring benefits to both power demand and supply sides. For demand users, the overall operational cost can be reduced due to the high price during peak For demand users, the overall operational cost can be reduced due to the high price during peak period. As illustrated in Figure 4, grid load leveling can be achieved by valley bottom-up and peak period. As illustrated in Figure 4, grid load leveling can be achieved by valley bottom-up and peak period. As illustrated in Figure 4, grid load leveling can be achieved by valley bottom-up and peak cutting, enhancing the grid flexibility on a daily basis. In the following section we will examine the cutting, enhancing the grid flexibility on a daily basis. In the following section we will examine the cutting, enhancing the grid flexibility on a daily basis. In the following section we will examine the load level potential from high efficiency technologies from a bottom-up approach as illustrated in load level potential from high efficiency technologies from a bottom-up approach as illustrated in load level potential from high efficiency technologies from a bottom-up approach as illustrated in Hainoun [25]. Hainoun [25]. Hainoun [25]. Load shift Load shift ###### Supply capacity Capacity reduction Supply capacityPeak cuttingCapacity reduction Load curve Peak cutting Load curve Morning Day Night Morning Day Night Morning Day Night Morning Day Night **Figure 4.Figure 4. Grid load shifting scheme.Grid load shifting scheme.** **Figure 4. Grid load shifting scheme.** Renewable production is hard to schedule since it is highly dependent on nature. With increasing levels of renewable penetration, grid generation becomes less flexible to balance the Renewable production is hard to schedule since it is highly dependent on nature. With Load shift Load shift ----- _Sustainability 2018, 10, 2117_ 6 of 19 Renewable production is hard to schedule since it is highly dependent on nature. With increasing levels of renewable penetration, grid generation becomes less flexible to balance the flexibility. Demand side management is seen as a promising resource to increase the flexibility of the power _Sustainability 2018, 10, x FOR PEER REVIEW_ 6 of 19 system. Coordinated demand side management can reduce the customer’s overall costs via shifting peak load during the peak price period. For suppliers, benefits can be obtained through the investmentshifting peak load during the peak price period. For suppliers, benefits can be obtained through the in additional power generation facilities. As a result, the responsibility for grid flexibility does notinvestment in additional power generation facilities. As a result, the responsibility for grid flexibility fall solely on the plant side, but also requires flexibility on the part of the demand side management.does not fall solely on the plant side, but also requires flexibility on the part of the demand side management. 
Figure 5 illustrates the schematic overview of the research, the black line refers to Figure 5 illustrates the schematic overview of the research, the black line refers to power flow, power flow, the dashed blue line represents the signal flow, and the red dotted line is thermal flow: the dashed blue line represents the signal flow, and the red dotted line is thermal flow: on the plant on the plant side, thermal plants, renewable energy and nuclear energy serve as the main power side, thermal plants, renewable energy and nuclear energy serve as the main power resources to meet resources to meet variable grid load, the central load dispatch center sends the price signal to the variable grid load, the central load dispatch center sends the price signal to the consumers and receives consumers and receives real-time power consumption from smart meters; thus, providing chances real-time power consumption from smart meters; thus, providing chances for cooperation between thefor cooperation between the utility and consumers. V2H, heat pumps and on-site generators are utility and consumers. V2H, heat pumps and on-site generators are implemented on the demand side,implemented on the demand side, and are designed to shift the owner’s load pattern and reduce and are designed to shift the owner’s load pattern and reduce energy consumption or cost.energy consumption or cost. (HEMS **Central load** **dispatching center** ) **Smart V2H** **PV Arrays** **Heat pump** **Figure 5. Schematic overview of the research.** **Figure 5. Schematic overview of the research.** **3. High Efficiency Technologies** **3. High Efficiency Technologies** This part will mainly describe the performance of high efficiency technology applications inThis part will mainly describe the performance of high efficiency technology applications in next-generation energy and social systems demonstration projects in Kyushu, Japan. Firstly, dailynext-generation energy and social systems demonstration projects in Kyushu, Japan. Firstly, daily residential power load curves for each month are calculated by averaging 200 residential households,residential power load curves for each month are calculated by averaging 200 residential households, and the power consumption ratios of heat pump water systems over a week in different seasons wereand the power consumption ratios of heat pump water systems over a week in different seasons were investigated in detail in 10 households. Then, the detailed power flows of EV over 153 days in ainvestigated in detail in 10 households. Then, the detailed power flows of EV over 153 days in a residential application are described. Finally, the production scenario for a PV/fuel cell hybrid power residential application are described. Finally, the production scenario for a PV/fuel cell hybrid power system is presented based on a social experiment demonstration project in Kitakyushu. system is presented based on a social experiment demonstration project in Kitakyushu. _3.1. Heat Pump Water Heaters_ _3.1. Heat Pump Water Heaters_ Energy for hot water accounts for about 30% of total residential energy consumption in Japan Energy for hot water accounts for about 30% of total residential energy consumption in According to Zhang, Qin [26], numerous heat pump water heaters have been developed for the Japan According to Zhang, Qin [residential sector, alongside the promotion of all-electrification households over recent years. 
26], numerous heat pump water heaters have been developed for the residential sector, alongside the promotion of all-electrification households over recent years.Thermal storage applications are integrated to shift daily energy consumption patterns, and generally Thermal storage applications are integrated to shift daily energy consumption patterns, and generallyschedule the working time of heat pump water heater in the lower pricing region (early morning and schedule the working time of heat pump water heater in the lower pricing region (early morning anddeep night) to provide potential economic benefits for customers. Figure 6 presents the structure of a deep night) to provide potential economic benefits for customers. Figureresidential household with a heat pump water system, the unit capacity and water tank volume 6 presents the structure of generally falls in the range of 4.5/6.0 kW and 370/460 L, respectively. Annual average coefficient of a residential household with a heat pump water system, the unit capacity and water tank volume performance (COP) of the Eco-cute CO2 heat pump water heater normally ranges from 3.2 to 3.8. ----- _Sustainability 2018, 10, 2117_ 7 of 19 generally falls in the range of 4.5/6.0 kW and 370/460 L, respectively. Annual average coefficient of performance (COP) of the Eco-cute COSustainability 2018, 10, x FOR PEER REVIEW 2 heat pump water heater normally ranges from 3.2 to 3.8.7 of 19 There has been a steady increasing trend in the uptake of heat pump water heaters in recent years; the cumulative number of heat pump water heaters in Japan’s residential sector has reached aroundSustainability There has been a steady increasing trend in the uptake of heat pump water heaters in recent years; 2018, 10, x FOR PEER REVIEW 7 of 19 six million. In order to investigate the operational scenario of the residential heat pump water heater,There has been a steady increasing trend in the uptake of heat pump water heaters in recent years; the cumulative number of heat pump water heaters in Japan’s residential sector has reached around six million. In order to investigate the operational scenario of the residential heat pump water heater, we collected the monitored historical loads at hourly interval of 200 residential households with thethe cumulative number of heat pump water heaters in Japan’s residential sector has reached around we collected the monitored historical loads at hourly interval of 200 residential households with the Eco-cute system in the Kitakyushu Smart Community Demonstration Project.six million. In order to investigate the operational scenario of the residential heat pump water heater, Eco-cute system in the Kitakyushu Smart Community Demonstration Project. we collected the monitored historical loads at hourly interval of 200 residential households with the Eco-cute system in the Kitakyushu Smart Community Demonstration Project. _Import power_ Controller **Figure 6.Figure 6. Structure of residential household with heat pump water heater system. Structure of residential household with heat pump water heater system.** **Figure 6. Structure of residential household with heat pump water heater system.** Figure 7 presents the color-scale distribution of residential load each month for households Figure 7 presents the color-scale distribution of residential load each month for households equipped with a heat pump water heater system. The daily energy consumption pattern has a strong equipped with a heat pump water heater system. 
The daily energy consumption pattern has aFigure 7 presents the color-scale distribution of residential load each month for households relationship with the customer’s habits, with two daily peak periods of household load mainly equipped with a heat pump water heater system. The daily energy consumption pattern has a strong strong relationship with the customer’s habits, with two daily peak periods of household load mainlyoccurring in the early morning and the evening. It can also be clearly seen that the baseload increases relationship with the customer’s habits, with two daily peak periods of household load mainly occurring in the early morning and the evening. It can also be clearly seen that the baseload increasesduring air conditioning seasons and that early morning peak load driven by the utilization of the heat occurring in the early morning and the evening. It can also be clearly seen that the baseload increases during air conditioning seasons and that early morning peak load driven by the utilization of the heatpump water heater increases, due to the production of hot water that generally lasts from 0:00 to 6:00 during air conditioning seasons and that early morning peak load driven by the utilization of the heat pump water heater increases, due to the production of hot water that generally lasts from 0:00 to 6:00a.m. when the electricity price is cheap according to time-of-use pricing schemes. pump water heater increases, due to the production of hot water that generally lasts from 0:00 to 6:00 a.m. when the electricity price is cheap according to time-of-use pricing schemes.a.m. when the electricity price is cheap according to time-of-use pricing schemes. a.m. when the electricity price is cheap according to time-of-use pricing schemes.a.m. when the electricity price is cheap according to time-of-use pricing schemes. **Figure 7. Load color scale distribution of residential household equipped with Eco-cute.** **Figure 7. Load color scale distribution of residential household equipped with Eco-cute.** **Figure 7. Load color scale distribution of residential household equipped with Eco-cute.** Load color scale distribution of residential household equipped with Eco-cute. ----- _Sustainability 2018, 10, 2117_ 8 of 19 _Sustainability 2018, 10, x FOR PEER REVIEW_ 8 of 19 In order to investigate the detailed consumption structure and seasonal variations, the powerSustainability 2018, 10, x FOR PEER REVIEW 8 of 19 In order to investigate the detailed consumption structure and seasonal variations, the power consumption of 10 selected households were collected over a week, including the detailed consumption In order to investigate the detailed consumption structure and seasonal variations, the power consumption of 10 selected households were collected over a week, including the detailed of the heat pump, lights, air conditioner and others. Figure 8 illustrates the distributions of monitored consumption of 10 selected households were collected over a week, including the detailed consumption of the heat pump, lights, air conditioner and others. Figure 8 illustrates the distributions heat pump water heater power consumption ratios of daily load in different seasons, generally rangeconsumption of the heat pump, lights, air conditioner and others. Figure 8 illustrates the distributions of monitored heat pump water heater power consumption ratios of daily load in different seasons, from 20~45%. 
Increasing heating demand, drop in COP of heat pump and rising energy loss jointlyof monitored heat pump water heater power consumption ratios of daily load in different seasons, generally range from 20~45%. Increasing heating demand, drop in COP of heat pump and rising lead the increases of heat pump power consumption during winter period.energy loss jointly lead the increases of heat pump power consumption during winter period.generally range from 20~45%. Increasing heating demand, drop in COP of heat pump and rising energy loss jointly lead the increases of heat pump power consumption during winter period. **Figure 8. Distribution of power consumption of the heat pump water heater to daily power load ratio.** **Figure 8. Distribution of power consumption of the heat pump water heater to daily power load ratio.** **Figure 8. Distribution of power consumption of the heat pump water heater to daily power load ratio.** Figure 9 presents the color scale distributions of the power consumption of a heat pump water Figureheater in a typical residential household. The working period of the heat pump is usually from 0:00 Figure 9 presents the color scale distributions of the power consumption of a heat pump water 9 presents the color scale distributions of the power consumption of a heat pump water heater in a typical residential household. The working period of the heat pump is usually from 0:00 heater in a typical residential household. The working period of the heat pump is usually from 0:00to 7:00 a.m., in the valley period of the demand load. Operating time becomes shorter with daily to 7:00 a.m., in the valley period of the demand load. Operating time becomes shorter with daily to 7:00 a.m., in the valley period of the demand load. Operating time becomes shorter with dailydecreasing heating demand, and the heat pump water heater system shows higher power decreasing heating demand, and the heat pump water heater system shows higher power consumptiondecreasing heating demand, and the heat pump water heater system shows higher power consumption density in the winter, which can be attributed to the higher heating demand and lower consumption density in the winter, which can be attributed to the higher heating demand and lower generating efficiency under low ambient temperature. Heat pump water heaters tend to operate density in the winter, which can be attributed to the higher heating demand and lower generating generating efficiency under low ambient temperature. Heat pump water heaters tend to operate earlier in winter time to meet the daily heating load, which may be highly dependent on the activity efficiency under low ambient temperature. Heat pump water heaters tend to operate earlier in winter earlier in winter time to meet the daily heating load, which may be highly dependent on the activity based load. time to meet the daily heating load, which may be highly dependent on the activity-based load.based load. **Figure 9. Color-scale distribution of power consumption of a heat pump water heater system in a** typical residential house. ----- _SustainabilityFigure 9. 2018 Color-scale distribution of power consumption of a heat pump water heater system in a, 10, 2117_ 9 of 19 typical residential house. _3.2. EV (V2H)_ _3.2. 
EV (V2H)_ Grid utilities has been making efforts by giving incentives to V2H customers to modify their Grid utilities has been making efforts by giving incentives to V2H customers to modify their power consumption using a scheduling strategy that enables EV to charge during the grid valley period power consumption using a scheduling strategy that enables EV to charge during the grid valley and to discharge power to the home at night; this is typically accomplished in a HEMS environment. period and to discharge power to the home at night; this is typically accomplished in a HEMS Electrical vehicles for residential demand response could bring potential benefits to both the power environment. Electrical vehicles for residential demand response could bring potential benefits to supply and demand side, supporting peak reduction from the aggregated form and reducing customer both the power supply and demand side, supporting peak reduction from the aggregated form and energy costs under time-of-use tariff schemes. Figure 10 illustrates the structure of the examined reducing customer energy costs under time-of-use tariff schemes. Figure 10 illustrates the structure residential V2H system in the Kitakyushu Jono Smart Community Project. of the examined residential V2H system in the Kitakyushu Jono Smart Community Project. ##### Grid Price signal Controller Charge/discharge signal _Residential consumer_ _EV Car_ _Smart V2H_ **Figure 10.Figure 10. Structure of residential household with EV car system. Structure of residential household with EV car system.** The working condition of an EV in a typical household is illustrated in FigureThe working condition of an EV in a typical household is illustrated in Figure 10. The controller 10. The controller can determine the charge/discharge condition of the battery considering the price signal from the grid.can determine the charge/discharge condition of the battery considering the price signal from the Figuregrid. Figure 11 describes the distribution of power flows from residential EV system in Jono, 11 describes the distribution of power flows from residential EV system in Jono, Kitakyushu, over 153 days. Plug-in conditions are mainly concentrated in the middle of the night from 23:30 to 3:00Kitakyushu, over 153 days. Plug-in conditions are mainly concentrated in the middle of the night and the discharge domain generally occurs after work and lasts from 17:00 and 23:00. This actuallyfrom 23:30 to 3:00 and the discharge domain generally occurs after work and lasts from 17:00 and charging/discharging operation of the battery coincides with grid valley and peak demand. EV uptake23:00. This actually charging/discharging operation of the battery coincides with grid valley and peak could lead to a valley increase of 2.5 kW and provide around 1.5 kW peak reduction in the evening,demand. EV uptake could lead to a valley increase of 2.5 kW and provide around 1.5 kW peak meaning that the daily charge power is around 8.5 kWh and around 41% of charged power will bereduction in the evening, meaning that the daily charge power is around 8.5 kWh and around 41% released to home electricity consumption, that is, around half of the stored power will be used toof charged power will be released to home electricity consumption, that is, around half of the stored replace the oil consumption of the EV car. The color distribution of power flows from EVs confirmspower will be used to replace the oil consumption of the EV car. 
The color distribution of power flows that expanded use of EVs could be scheduled to support grid operation during daily use from anfrom EVs confirms that expanded use of EVs could be scheduled to support grid operation during aggregated perspective.daily use from an aggregated perspective. ----- _Sustainability 2018, 10, 2117_ 10 of 19 _Sustainability 2018, 10, x FOR PEER REVIEW_ 10 of 19 _Sustainability 2018, 10, x FOR PEER REVIEW_ 10 of 19 **Figure 11.Figure 11. Color-scale distribution of power flows in V2H system. Color-scale distribution of power flows in V2H system.** **Figure 11. Color-scale distribution of power flows in V2H system.** _3.3. On-Site Generators_ _3.3. On-Site Generators_ _3.3. On-Site Generators_ Distributed on-site generators, such as PV and cogeneration systems are playing an increasing Distributed on-site generators, such as PV and cogeneration systems are playing an increasing Distributed on-site generators, such as PV and cogeneration systems are playing an increasing role in enhancing local energy self-sufficiency in Japan. Figure 12 describes the structure of a role in enhancing local energy self-sufficiency in Japan. Figure 12 describes the structure of a residential role in enhancing local energy self-sufficiency in Japan. Figure 12 describes the structure of a residential hybrid on-site energy supply system, the grid connected PV capacity is 4.84 kWp, the fuel hybrid on-site energy supply system, the grid connected PV capacity is 4.84 kWp, the fuel cell has residential hybrid on-site energy supply system, the grid connected PV capacity is 4.84 kWp, the fuel cell has 0.70 kWp nominal output equipped with 140 L thermal tank for hot water storage; 0.70 kWp nominal output equipped with 140 L thermal tank for hot water storage; cogeneration cell has 0.70 kWp nominal output equipped with 140 L thermal tank for hot water storage; cogeneration runs in the combined heating and power mode tracking thermal load. When the PV runs in the combined heating and power mode tracking thermal load. When the PV production is cogeneration runs in the combined heating and power mode tracking thermal load. When the PV production is greater than the simultaneous electrical demand, excess generation will be sold into the greater than the simultaneous electrical demand, excess generation will be sold into the grid. If the production is greater than the simultaneous electrical demand, excess generation will be sold into the grid. If the total production from the PV and fuel cell is still unable to cover the residential load, total production from the PV and fuel cell is still unable to cover the residential load, electricity will be grid. If the total production from the PV and fuel cell is still unable to cover the residential load, electricity will be imported from the grid to cover the shortage. imported from the grid to cover the shortage. electricity will be imported from the grid to cover the shortage. **Figure 12. PV and fuel cell hybrid residential energy supply system.** **Figure 12.Figure 12. PV and fuel cell hybrid residential energy supply system. 
PV and fuel cell hybrid residential energy supply system.** Figures 13–15 demonstrate the detailed daily variabilities in PV (a) and fuel cell outputs (b) in Figures 13–15 demonstrate the detailed daily variabilities in PV (a) and fuel cell outputs (b) in color scale distributions for August, October and January, which represent the summer, mid-season Figures 13–15 demonstrate the detailed daily variabilities in PV (a) and fuel cell outputs (b) in color scale distributions for August, October and January, which represent the summer, mid-season and winter, respectively. The operation of cogeneration follows a thermal tracking strategy, and the color scale distributions for August, October and January, which represent the summer, mid-season and winter, respectively. The operation of cogeneration follows a thermal tracking strategy, and the working period and output from the fuel cell have a strong relationship with the amount of daily and winter, respectively. The operation of cogeneration follows a thermal tracking strategy, and the working period and output from the fuel cell have a strong relationship with the amount of daily heating demand. The output of PV highly depends on the weather conditions and shows low power working period and output from the fuel cell have a strong relationship with the amount of daily heating demand. The output of PV highly depends on the weather conditions and shows low power supply credit on winter or mid-season days, it shows higher power density in the summer period. supply credit on winter or mid-season days, it shows higher power density in the summer period. ----- _Sustainability 2018, 10, 2117_ 11 of 19 heating demand. The output of PV highly depends on the weather conditions and shows low power _Sustainability supply credit on winter or mid-season days, it shows higher power density in the summer period.2018, 10, x FOR PEER REVIEW_ 11 of 19 _Sustainability 2018, 10, x FOR PEER REVIEW_ 11 of 19 _Sustainability 2018, 10, x FOR PEER REVIEW_ 11 of 19 (a) (b) (a) (b) (a) (b) **Figure 13. Distributions of power output from fuel cell (0.70 kWp) and PV (4.84 kWp) in August.** **Figure 13.Figure 13. Distributions of power output from fuel cell (0.70 kWp) and PV (4.84 kWp) in August. Distributions of power output from fuel cell (0.70 kWp) and PV (4.84 kWp) in August.** **Figure 13. Distributions of power output from fuel cell (0.70 kWp) and PV (4.84 kWp) in August.** (a) (b) (a) (b) (a) (b) **Figure 14. Distributions of power output from fuel cell (0.70 kWp) and PV (4.84 kWp) in October.** **Figure 14. Distributions of power output from fuel cell (0.70 kWp) and PV (4.84 kWp) in October.** **Figure 14.Figure 14. Distributions of power output from fuel cell (0.70 kWp) and PV (4.84 kWp) in October. Distributions of power output from fuel cell (0.70 kWp) and PV (4.84 kWp) in October.** (a) (b) (a) (b) (a) (b) (a) (b) **Figure 15. Distributions of power output from fuel cell (0.70 kWp) and PV (4.84 kWp) in January.** **Figure 15. Distributions of power output from fuel cell (0.70 kWp) and PV (4.84 kWp) in January.** **Figure 15. Distributions of power output from fuel cell (0.70 kWp) and PV (4.84 kWp) in January.** **Figure 15. Distributions of power output from fuel cell (0.70 kWp) and PV (4.84 kWp) in January.** **4. Analysis and Results** **4. Analysis and Results** **4. Analysis and Results** (b) (a) _4.1. Impacts on Grid_ _4.1. Impacts on Grid_ _4.1. Impacts on Grid_ Currently, the residential sector accounts for around 33% of total power consumption in Japan. 
Currently, the residential sector accounts for around 33% of total power consumption in Japan. Currently, the residential sector accounts for around 33% of total power consumption in Japan. This part takes a bottom-up engineering approach to estimate the impact of aggregated residential h k b h h f d d l ----- _Sustainability 2018, 10, 2117_ 12 of 19 **4. Analysis and Results** _4.1. Impacts on Grid_ Currently, the residential sector accounts for around 33% of total power consumption in Japan. This part takes a bottom-up engineering approach to estimate the impact of aggregated residential high efficiency energy technologies on the real world public grid. Assuming the load shape effects _Sustainability have a linear relationship with, and the participation rate of high efficiency technologies, we estimated2018, 10, x FOR PEER REVIEW_ 12 of 19 the load leveling potential for 500,000 participants for the abovementioned technologies. Considering Considering that the need for load leveling and power balancing pressure mainly occur in air that the need for load leveling and power balancing pressure mainly occur in air conditioning seasons, conditioning seasons, we examine daily load shifting performances in August and January. As shown we examine daily load shifting performances in August and January. As shown in Figure 16, the red in Figure 16, the red dotted line represents the original daily demand curves in August, with cooling dotted line represents the original daily demand curves in August, with cooling demand leading to two demand leading to two peak periods in the daytime and early night. The EVs and heat pump water peak periods in the daytime and early night. The EVs and heat pump water heaters mainly bottom-up heaters mainly bottom-up the valley load during deep night time and early morning, the PV systems the valley load during deep night time and early morning, the PV systems largest generating ability largest generating ability coincides with the grid daytime peak period, fuel cells contribute less to the coincides with the grid daytime peak period, fuel cells contribute less to the daily power consumption daily power consumption and are greatly limited to the lower heating demand in summer period. and are greatly limited to the lower heating demand in summer period. Released energy from EVs can Released energy from EVs can effectively reduce the night peak load in the absence of PV production.effectively reduce the night peak load in the absence of PV production. **Figure 16. Load shifting performance of high efficiency technologies in August.** **Figure 16. Load shifting performance of high efficiency technologies in August.** Figure 17 presents the load shifting scenario of January, the original grid load is described in the Figure 17 presents the load shifting scenario of January, the original grid load is described in the red dotted line, power consumption increases in the periods of early morning and after dinner time, red dotted line, power consumption increases in the periods of early morning and after dinner time, which coincides with the power consumption in residential sector. Fuel cells contribute more to the which coincides with the power consumption in residential sector. Fuel cells contribute more to the residential daily load when there is an increase in heating demand, including hot water and space residential daily load when there is an increase in heating demand, including hot water and space heating. 
Figure 17 presents the load shifting scenario of January; the original grid load is described by the red dotted line, and power consumption increases in the periods of early morning and after dinner time, which coincides with the consumption pattern of the residential sector. Fuel cells contribute more to the residential daily load when there is an increase in heating demand, including hot water and space heating. Heat pumps consume more electricity to meet the daily increasing heat demand and lift more in the grid valley period. PVs show lower generating ability in the winter compared with the summer period and have low correlation with the grid load; it should be noted that high PV penetration may lead to the 'duck curve' and increase the net load fluctuation. Scheduled EVs and fuel cells jointly contribute a 5.0% peak reduction at 20:00, enhancing grid flexibility on a daily basis.

**Figure 17. Load shifting performance of high efficiency technologies in January.**

_4.2. Economic Performance_

In order to incentivize the uptake of high efficiency technologies and encourage customers to participate more in district grid operation management, Japanese policy makers are liberalizing the retail electricity market to increase its economic efficiency and produce benefits for consumers, mainly through price reductions (Shin and Managi [27]). In order to reinforce industrial competitiveness, Japan achieved full liberalization of the retail electricity market in April 2016.
Policy makers opened the retail electricity market to competition and enabled consumers more options to manage their energy consumption; consumers can choose to buy electricity from the retailer of their choice that best meets their needs, with offerings such as optimal tariff design, feed-in tariffs and relevant capacity subsidies. This part analyzes the electricity cost for customers with the uptake of different high efficiency technologies. Economic benefits for consumers are critical for the development of high efficiency appliances. There are two main types of tariff schemes for customers to choose from to achieve a high energy saving benefit, as shown in Figure 18.

**Figure 18. Different electricity tariff schemes for residential customer.**

Figure 18A shows a typical electricity tariff scheme composed of a base charge and a volume charge that is favorable for customers with on-site generators. Electricity tariffs increase with higher monthly consumption volume, which also indicates the potential for electricity bill savings by introducing an on-site generator to reduce the amount of electricity imported from the grid. The time-of-use tariff structure is described in Figure 18B, with 0.22 $/kWh in the daytime lasting from 8:00 to 22:00, and 0.11 $/kWh from 23:00 to 7:00. This scheme is suitable for heat pump and EV users, and encourages customers to schedule their daily energy consumption according to the time-of-use scheme for cost reduction. In order to investigate the potential economic benefit, Table 1 lists the cost and technical input parameters used for the assessment.
**Table 1. Cost and technical input parameters [28–30].**

| Variables | Value |
| --- | --- |
| Annual COP of heat pump | 3.4 |
| Daily heat pump power consumption | 5.5 kWh (30% of average daily load) |
| Cost of heat pump | 8000 $ (4.5 kW, 370 L tank) |
| PV feed-in tariff | 0.25 $/kWh |
| PV cost | 1000 $/kW |
| Gas pricing | 1.86 $/Nm³ |
| Lower heating value | 45 MJ/Nm³ |
| Oil pricing | 1.18 $/L |
| EV car consumption | 9.5 km/kWh (electricity), 12.5 km/L (oil) |
| EV battery cost | 1200 $/kW |
| Fuel cell efficiency | Electricity 39%, thermal 46% |
| Fuel cell cost | 13,000 $ (0.70 kW nominal output) |
| Gas boiler thermal efficiency | 85% |

Figure 19 illustrates the average cost reduction achieved via application of high efficiency technologies; the savings from use of a heat pump water heater are calculated and compared to the cost of a conventional gas boiler. Assuming that an annual average of 35.0% of the total electricity demand is shiftable and the average COP is 3.4, the high efficiency of the heat pump water heater system contributed a cost saving of around $3.25 per day. The benefits of EV are the reduction in the gasoline fuel cost of a car and the electricity tariff reduction due to the time-of-use rates. Results show that the fuel cost reduction accounted for a large ratio of the car fuel cost saving, although the potential cost saving is still small when the battery only participates in home load management through the discharging/charging cycle. On-site generators can reduce customers' electricity costs under the electricity tariff scheme with volume charges. PV feed-in production brings cash flow from the grid utility, although it should be noted that while cogeneration systems can provide high overall energy supply efficiency, the gas fuel cost will rise with increasing amounts of power production from the fuel cell.

**Figure 19. Average daily profits for customers with different applications of high efficiency technologies.**
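As a rough cross-check of the savings figures above, the sketch below plugs the Table 1 parameters into the two comparisons the paper describes: heat pump versus gas boiler for daily water heating, and EV electricity versus gasoline per kilometre. It is a back-of-envelope illustration only; the paper's exact tariff and usage assumptions are not fully restated here, so assigning the heat pump consumption to the night rate is an assumption, and the result is of the same order as, not identical to, the reported ~$3.25/day.

```python
# Table 1 parameters (Sustainability 2018, 10, 2117)
COP = 3.4            # heat pump coefficient of performance
HP_ELEC_KWH = 5.5    # daily heat pump power consumption, kWh
NIGHT_RATE = 0.11    # $/kWh time-of-use night tariff (assumed for the heat pump)
GAS_PRICE = 1.86     # $/Nm3
LHV = 45.0           # MJ/Nm3, lower heating value of natural gas
BOILER_EFF = 0.85    # gas boiler thermal efficiency

heat_mj = HP_ELEC_KWH * COP * 3.6        # heat delivered per day, MJ
gas_nm3 = heat_mj / (BOILER_EFF * LHV)   # gas a boiler would burn for the same heat
boiler_cost = gas_nm3 * GAS_PRICE        # $/day for the gas boiler
hp_cost = HP_ELEC_KWH * NIGHT_RATE       # $/day for the heat pump at night rate
print(f"heat pump saving ~ {boiler_cost - hp_cost:.2f} $/day")  # ~2.7 $/day here

# EV running cost per km: night-rate electricity vs gasoline (Table 1 values)
EV_KM_PER_KWH, CAR_KM_PER_L, OIL_PRICE = 9.5, 12.5, 1.18
print(f"EV:  {NIGHT_RATE / EV_KM_PER_KWH:.4f} $/km (night charging)")
print(f"ICE: {OIL_PRICE / CAR_KM_PER_L:.4f} $/km")
```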
In order to investigate the economic feasibility of the high efficiency technologies, choosing an annual discount rate i equal to 4.0%, the NPV (net present value) performance of the high efficiency energy systems in their 10th year was calculated based on the parameter values in Table 1, as shown in Figure 20. As given in Equation (1), R_j refers to the annual net profit of a technology for the generic year j = {1, 2, ..., 10}, and C_0 represents the installation cost. The EV system achieves a promising net benefit within 10 years due to the cost differences between electricity and gasoline. Heat pump water heater systems can achieve a net profit within their lifespan (12 years), but the payback period is longer than 10 years; proper subsidies or a further drop in capital costs may encourage the customer's preference for Eco-cute. It is hard to achieve benefits for customers with the PV/fuel cell hybrid system, because the economic feasibility of the hybrid energy system is still highly dependent on direct subsidies or adjustments in energy pricing, and the high initial investment is still the main obstacle to its wide adoption in the coming decades.

$$NPV = \sum_{j=1}^{n} R_j \cdot (1 + i)^{-j} - C_0 \qquad (1)$$

**Figure 20. Net present value of high efficiency technologies within 10 years.**
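Equation (1) is straightforward to evaluate. The sketch below implements it directly; the annual profit series and installation cost in the example are hypothetical placeholders, not the paper's actual cash flow data.

```python
def npv(annual_profits, installation_cost, i=0.04):
    """Equation (1): discount each year's net profit R_j at rate i
    and subtract the installation cost C_0."""
    return sum(
        r / (1.0 + i) ** j for j, r in enumerate(annual_profits, start=1)
    ) - installation_cost

# Hypothetical example: an appliance earning $400/year and costing $3000,
# evaluated over a 10-year horizon at the paper's 4.0% discount rate.
profits = [400.0] * 10
print(f"NPV after 10 years: {npv(profits, 3000.0):.0f} $")  # ~244 $
```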
Assuming an electric efficiency of 39.0% and a thermal efficiency of 46.0% for fuel cells, an 85.0% thermal efficiency for conventional hot water boilers, and CO2 emission factors for natural gas and gasoline of 2.29 kg/Nm³ and 2.32 kg/L, respectively, a 0.483 kg/kWh CO2 emission factor was calculated for the power imported from the Kyushu public grid. The annual average daily CO2 emission reductions per capacity of the high efficiency technologies were estimated as illustrated in Figure 21. Environmental benefits from the heat pump and EV were achieved due to the replacement of natural gas and gasoline consumption. The emission reductions of on-site generators can be attributed to PV production and the use of recycled waste gas from the fuel cell for heating demand.

**Figure 21. Comparison of various high efficiency technologies for average daily CO2 reduction.**
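The emission accounting above reduces to multiplying avoided fuel volumes and added grid electricity by their emission factors. The sketch below applies that logic to the heat pump case as an illustration; the inputs come from Table 1 and the factors quoted above, but the per-kW normalization used in Figure 21 rests on assumptions not fully restated here, so the absolute number is indicative only.

```python
# Emission factors quoted in the text
EF_GAS = 2.29    # kg CO2 per Nm3 of natural gas
EF_GRID = 0.483  # kg CO2 per kWh of Kyushu grid electricity

# Heat pump case, reusing the Table 1 values
COP, HP_ELEC_KWH = 3.4, 5.5
LHV, BOILER_EFF = 45.0, 0.85

heat_mj = HP_ELEC_KWH * COP * 3.6              # daily heat delivered, MJ
gas_avoided_nm3 = heat_mj / (BOILER_EFF * LHV) # boiler gas displaced per day

boiler_emissions = gas_avoided_nm3 * EF_GAS    # kg CO2/day avoided at the boiler
hp_emissions = HP_ELEC_KWH * EF_GRID           # kg CO2/day added on the grid side
print(f"net daily CO2 reduction ~ {boiler_emissions - hp_emissions:.2f} kg")
```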
**5. Conclusions**

This research examined the performance of scheduled efficient technologies, including heat pumps, thermal/battery storage and on-site generators in the residential sector, in order to obtain a better understanding of the behaviors of decentralized high efficiency energy systems, and it estimated their cost saving and environmental benefits based on real tested applications in social demonstration projects in Kyushu, Japan. The results provide a good reference for planning mixed high efficiency energy technologies, especially when they are managed to participate in grid load management. The main findings of this research can be summarized as follows:

1. Aggregated heat pump and V2G systems can effectively be used for grid peak load leveling; heat pump water heaters can flexibly shift heating demand to the early morning to bottom-up the grid valley load, with the daily power consumption of heat pumps varying from 4.0 kWh to 10.0 kWh over the year. Scheduled V2G can effectively cover the night peak load via an optimal discharging strategy.

2. Due to limited heating demand, fuel cells hardly run at nominal output during the summer period. Fuel cells contribute more to the customer electricity load under higher heating demand, and can be used as a reliable peak power resource, independent of the weather conditions. PV production coincides with the grid peak period in summer and presents high peak capacity credit, although PV generating ability shows great variation among days over a year.

3. The heat pump provides the opportunity to reduce CO2 emissions by 0.40 kg/(kW·day) by reducing fuel consumption; EV systems with 2.5 kW charging capacity produce around $3.2/day profit by replacing gasoline consumption, and achieve economic benefits within six years. Heat pump water heater systems have a relatively longer payback period (10 years) in the current energy market, and the feasibility of the on-site cogeneration system still highly depends on access to capacity subsidies under the current energy market in Japan, despite its higher CO2 reduction of 1.76 kg/(kW·day).

4. Different technologies play different roles in load leveling. An optimal mix plan and coordinated management strategies are important to regulate local or community energy systems; 500,000 scheduled EVs and fuel cells could contribute 5.0% of reliable peak power capacity at 20:00 in winter.

This paper found that aggregated high efficiency technologies can not only help grid regulation but also reduce social carbon emissions. Higher initial investment is perhaps the most serious obstacle for installation of high efficiency technologies on the demand side. When home appliances or on-site generators are scheduled for grid load regulation, financial incentives for customers to shoulder part of the capacity cost may be favorable for the adoption of high efficiency technologies. In terms of storage systems for power regulation, especially under massive integration of intermittent renewable resources, future work will explore the performance of a combination of EV and PV systems. Meanwhile, considering the decreasing trend in feed-in tariffs over the coming years, future research will focus on increasing local renewable energy consumption with local power resource sharing on a community scale.

**Author Contributions:** W.C. methodology; Y.L. software and validation; Y.R. and Y.U. resources.

**Conflicts of Interest:** The authors declare that there is no conflict of interest regarding the publication of this paper.

**Abbreviations**

The following abbreviations are used in this manuscript:

EVs: Electrical vehicles
GHG: Greenhouse gas
VRE: Variable renewable energy
PV: Photovoltaic
V2H: Vehicle to home
V2G: Vehicle to grid
HEMS: Home energy management system
METI: Ministry of Economy, Trade and Industry
COP: Coefficient of performance
FC: Fuel cell
HP: Heat pump
NPV: Net present value

**References**

1. Komiyama, R.; Fujii, Y. Long-term scenario analysis of nuclear energy and variable renewables in Japan's power generation mix considering flexible power resources. Energy Policy 2015, 83, 169–184. [CrossRef](http://dx.doi.org/10.1016/j.enpol.2015.04.005)
2. Hirth, L. The market value of variable renewables. Energy Econ. 2013, 38, 218–236. [CrossRef](http://dx.doi.org/10.1016/j.eneco.2013.02.004)
3. Winkler, J.; Pudlik, M.; Ragwitz, M.; Pfluger, B. The market value of renewable electricity—Which factors really matter? Appl. Energy 2016, 184, 464–481. [CrossRef](http://dx.doi.org/10.1016/j.apenergy.2016.09.112)
4. Roos, A.; Bolkesjø, T.F. Value of demand flexibility on spot and reserve electricity markets in future power system with increased shares of variable renewable energy. Energy 2018, 144, 207–217. [CrossRef](http://dx.doi.org/10.1016/j.energy.2017.11.146)
5. Li, Y.; Gao, W.; Ruan, Y. Feasibility of virtual power plants (VPPs) and its efficiency assessment through benefiting both the supply and demand sides in Chongming country, China. Sustain. Cities Soc. 2017, 35, 544–551. [CrossRef](http://dx.doi.org/10.1016/j.scs.2017.08.030)
6. Kim, J.J. Economic analysis on energy saving technologies for complex manufacturing building. Resour. Conserv. Recycl. 2016. [CrossRef](http://dx.doi.org/10.1016/j.resconrec.2016.03.018)
7. Klein, K.; Herkel, S.; Henning, H.-M.; Felsmann, C. Load shifting using the heating and cooling system of an office building: Quantitative potential evaluation for different flexibility and storage options. Appl. Energy 2017, 203, 917–937. [CrossRef](http://dx.doi.org/10.1016/j.apenergy.2017.06.073)
8. Goto, H.; Goto, M.; Sueyoshi, T. Consumer choice on ecologically efficient water heaters: Marketing strategy and policy implications in Japan. Energy Econ. 2011, 33, 195–208. [CrossRef](http://dx.doi.org/10.1016/j.eneco.2010.09.004)
9. Mah, D.N.-y.; Wu, Y.-Y.; Ip, J.C.-m.; Hills, P.R. The role of the state in sustainable energy transitions: A case study of large smart grid demonstration projects in Japan. Energy Policy 2013, 63, 726–737. [CrossRef](http://dx.doi.org/10.1016/j.enpol.2013.07.106)
10. Love, J.; Smith, A.Z.P.; Watson, S.; Oikonomou, E.; Summerfield, A.; Gleeson, C.; Biddulph, P.; Chiu, L.F.; Wingfield, J.; Martin, C.; et al. The addition of heat pump electricity load profiles to GB electricity demand: Evidence from a heat pump field trial. Appl. Energy 2017, 204, 332–342. [CrossRef](http://dx.doi.org/10.1016/j.apenergy.2017.07.026)
11. Fischer, D.; Wolf, T.; Wapler, J.; Hollinger, R.; Madani, H. Model-based flexibility assessment of a residential heat pump pool. Energy 2017, 118, 853–864. [CrossRef](http://dx.doi.org/10.1016/j.energy.2016.10.111)
12. Baeten, B.; Rogiers, F.; Helsen, L. Reduction of heat pump induced peak electricity use and required generation capacity through thermal energy storage and demand response. Appl. Energy 2017, 195, 184–195. [CrossRef](http://dx.doi.org/10.1016/j.apenergy.2017.03.055)
13. Komiyama, R.; Fujii, Y. Assessment of massive integration of photovoltaic system considering rechargeable battery in Japan with high time-resolution optimal power generation mix model. Energy Policy 2014, 66, 73–89. [CrossRef](http://dx.doi.org/10.1016/j.enpol.2013.11.022)
14. Rodriguez-Calvo, A.; Cossent, R.; Frías, P. Integration of PV and EVs in unbalanced residential LV networks and implications for the smart grid and advanced metering infrastructure deployment. Int. J. Electr. Power Energy Syst. 2017, 91, 121–134. [CrossRef](http://dx.doi.org/10.1016/j.ijepes.2017.03.008)
15. White, C.D.; Zhang, K.M. Using vehicle-to-grid technology for frequency regulation and peak-load reduction. J. Power Sources 2011, 196, 3972–3980. [CrossRef](http://dx.doi.org/10.1016/j.jpowsour.2010.11.010)
16. Mohammadi, A.; Mehrtash, M.; Kargarian, A.; Barati, M. Tie-Line Characteristics based Partitioning for Distributed Optimization of Power Systems. arXiv, 2018.
17. Amini, M.; Islam, A. Allocation of electric vehicles' parking lots in distribution network. In Proceedings of the Innovative Smart Grid Technologies Conference (ISGT), Washington, DC, USA, 19–22 February 2014; pp. 1–5.
18. Bahrami, S.; Parniani, M. Game theoretic based charging strategy for plug-in hybrid electric vehicles. IEEE Trans. Smart Grid 2014, 5, 2368–2375. [CrossRef](http://dx.doi.org/10.1109/TSG.2014.2317523)
19. Rahmani-andebili, M. Risk-cost-based generation scheduling smartly mixed with reliability-driven and market-driven demand response measures. Int. Trans. Electr. Energy Syst. 2015, 25, 994–1007. [CrossRef](http://dx.doi.org/10.1002/etep.1884)
20. Rahmani-andebili, M. Investigating effects of responsive loads models on unit commitment collaborated with demand-side resources. IET Gener. Transm. Distrib. 2013, 7, 420–430. [CrossRef](http://dx.doi.org/10.1049/iet-gtd.2012.0552)
21. Rahmani-andebili, M. Modeling nonlinear incentive-based and price-based demand response programs and implementing on real power markets. Electr. Power Syst. Res. 2016, 132, 115–124. [CrossRef](http://dx.doi.org/10.1016/j.epsr.2015.11.006)
22. Rahmani-andebili, M. Nonlinear demand response programs for residential customers with nonlinear behavioral models. Energy Build. 2016, 119, 352–362. [CrossRef](http://dx.doi.org/10.1016/j.enbuild.2016.03.013)
23. Rahmani-Andebili, M.; Shen, H. Price-Controlled Energy Management of Smart Homes for Maximizing Profit of a GENCO. IEEE Trans. Syst. Man Cybern. Syst. 2017. [CrossRef](http://dx.doi.org/10.1109/TSMC.2017.2690622)
24. Grid Electricity Load and PV Integration in Kyushu, Kyuden Electrical Company. Available online: http://www.kyuden.co.jp/wheeling_disclosure.html (accessed on 20 March 2018).
25. Hainoun, A. Construction of the hourly load curves and detecting the annual peak load of future Syrian electric power demand using bottom-up approach. Int. J. Electr. Power Energy Syst. 2009, 31, 1–12. [CrossRef](http://dx.doi.org/10.1016/j.ijepes.2008.09.006)
26. Zhang, J.-F.; Qin, Y.; Wang, C.-C. Review on CO2 heat pump water heater for residential use in Japan. Renew. Sustain. Energy Rev. 2015, 50, 1383–1391. [CrossRef](http://dx.doi.org/10.1016/j.rser.2015.05.083)
27. Shin, K.J.; Managi, S. Liberalization of a retail electricity market: Consumer satisfaction and household switching behavior in Japan. Energy Policy 2017, 110, 675–685. [CrossRef](http://dx.doi.org/10.1016/j.enpol.2017.07.048)
28. Residential Fuel Cell Cogeneration System, Panasonic Company. Available online: https://panasonic.biz/appliance/FC/ (accessed on 20 April 2018).
29. Eco-Cute in Japan, Panasonic Company. Available online: http://sumai.panasonic.jp/hp/ (accessed on 21 April 2018).
30. Smart V2G in Japan. Available online: http://www.shouene.com/photovoltaic/smarthouse/ev.html (accessed on 21 April 2018).

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/SU10072117?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/SU10072117, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/2071-1050/10/7/2117/pdf?version=1529571820" }
2,018
[]
true
2018-06-21T00:00:00
[ { "paperId": "b5aa85f2521c2d15fa3196ef67a5a09a073001cc", "title": "Price-Controlled Energy Management of Smart Homes for Maximizing Profit of a GENCO" }, { "paperId": "7d592ac2e238afc8a96724ea92d97ad69a646d44", "title": "Tie-Line Characteristics based Partitioning for Distributed Optimization of Power Systems" }, { "paperId": "ecdfac57762b9c16148ece581afefffac191c3bf", "title": "Value of demand flexibility on spot and reserve electricity markets in future power system with increased shares of variable renewable energy" }, { "paperId": "fd0096445d90537894d468ab7826384a5c4a7d87", "title": "Liberalization of a retail electricity market: Consumer satisfaction and household switching behavior in Japan" }, { "paperId": "ab00adb6f014eecf6fbc75f4fbcddb8ea7db6d1e", "title": "Feasibility of virtual power plants (VPPs) and its efficiency assessment through benefiting both the supply and demand sides in Chongming country, China" }, { "paperId": "c402596ec5cfcc1bbc3fad4d9aeb74e0d39be9ec", "title": "The addition of heat pump electricity load profiles to GB electricity demand: Evidence from a heat pump field trial" }, { "paperId": "ea1e9c1f65b3953697020c0720872694f0bdd7eb", "title": "Integration of PV and EVs in unbalanced residential LV networks and implications for the smart grid and advanced metering infrastructure deployment" }, { "paperId": "ef99a93330e0d1fa30839318985b754ff019c6b6", "title": "Load shifting using the heating and cooling system of an office building: Quantitative potential evaluation for different flexibility and storage options" }, { "paperId": "30f8c59a409a0596d9d7c6e82230adbcfa32412b", "title": "Economic analysis on energy saving technologies for complex manufacturing building" }, { "paperId": "30449dac8e90bbe2413c9ce7cc60f132f7f6145b", "title": "Reduction of heat pump induced peak electricity use and required generation capacity through thermal energy storage and demand response" }, { "paperId": "5b141a3987e1a2e80517259a5e17717d0dbf46e4", "title": "The market value of renewable electricity – Which factors really matter?" 
}, { "paperId": "be4111c4fa69c891c291872620d90198f5a32a8a", "title": "Nonlinear demand response programs for residential customers with nonlinear behavioral models" }, { "paperId": "965ed8bc7cffb568a5662e0478cd8c1ed97b32e5", "title": "Modeling nonlinear incentive-based and price-based demand response programs and implementing on real power markets" }, { "paperId": "e71bf71c27ea2a106a748da5295b427e6b8edf0a", "title": "Review on CO2 heat pump water heater for residential use in Japan" }, { "paperId": "0f3d1e3d545625eb2d2e898aee47d233637dee4e", "title": "Long-term scenario analysis of nuclear energy and variable renewables in Japan's power generation mix considering flexible power resources" }, { "paperId": "ae9713409c5b729dcd8efe184dceee28017b7154", "title": "Risk‐cost‐based generation scheduling smartly mixed with reliability‐driven and market‐driven demand response measures" }, { "paperId": "0f0a4a2af8724018b8c2cae2aee7401b8521580d", "title": "Game Theoretic Based Charging Strategy for Plug-in Hybrid Electric Vehicles" }, { "paperId": "e33e309ca8dc4ecd34dacfb4a6793c269f8aff73", "title": "Allocation of electric vehicles' parking lots in distribution network" }, { "paperId": "807e1e4203273d15eed88d258fb202a3c766dc56", "title": "Assessment of massive integration of photovoltaic system considering rechargeable battery in Japan with high time-resolution optimal power generation mix model" }, { "paperId": "cca3add14b05ea3548d4f6421562729c6cc7040b", "title": "The role of the state in sustainable energy transitions: A case study of large smart grid demonstration projects in Japan" }, { "paperId": "b99f843ca88d519dcaa8c91bfc173aafce9f45d5", "title": "The Market Value of Variable Renewables The Effect of Solar and Wind Power Variability on their Relative Price" }, { "paperId": "6f2296c2fb07184cfe7023d4d8538664a082a3a8", "title": "Investigating effects of responsive loads models on unit commitment collaborated with demand-side resources" }, { "paperId": "a74286cbcd9153d44ab88c40fab504e0fd0a4b37", "title": "Using vehicle-to-grid technology for frequency regulation and peak-load reduction" }, { "paperId": "89ad40fce853969d455c4d54614bc2f91b192c87", "title": "Consumer choice on ecologically efficient water heaters: Marketing strategy and policy implications in Japan" }, { "paperId": "6cadafacd4102a8df7fa1e65b8a59dffe6f38b96", "title": "Model-based flexibility assessment of a residential heat pump pool" }, { "paperId": "9dda0ca1440bf9b83e673f247b1e3cdb87b98fbf", "title": "Construction of the hourly load curves and detecting the annual peak load of future Syrian electric power demand using bottom-up approach" }, { "paperId": null, "title": "Grid Electricity Load and PV Integration in Kyushu , Kyuden Electrical Company" } ]
17,205
en
[ { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01af0f2d89d49b44ee6cdda55a01b449de3f3081
[]
0.915372
DECENTRALIZED MACHINE LEARNING ON BLOCKCHAIN: A REVIEW OF RECENT DEVELOPMENTS
01af0f2d89d49b44ee6cdda55a01b449de3f3081
International Research Journal of Modernization in Engineering Technology and Science
[]
{ "alternate_issns": null, "alternate_names": [ "Int Res J Mod Eng Technol Sci" ], "alternate_urls": null, "id": "f963a178-be9f-4f29-95bb-0d56a45d63db", "issn": "2582-5208", "name": "International Research Journal of Modernization in Engineering Technology and Science", "type": "journal", "url": null }
null
## e-ISSN: 2582-5208
International Research Journal of Modernization in Engineering Technology and Science
( Peer-Reviewed, Open Access, Fully Refereed International Journal )
Volume:05/Issue:04/April-2023 Impact Factor- 7.868 www.irjmets.com

# DECENTRALIZED MACHINE LEARNING ON BLOCKCHAIN: A REVIEW OF RECENT DEVELOPMENTS

## Donald Ashwin Dsouza[*1]

*1 Student Of Master Of Computer Application, N.M.A.M. Institute Of Technology, Nitte, Karnataka, India.

DOI : https://www.doi.org/10.56726/IRJMETS37762

## ABSTRACT

Decentralized machine learning (DML) is a new paradigm in artificial intelligence (AI) that combines the power of distributed computing and blockchain technology to enable secure and privacy-preserving machine learning. In DML, multiple devices or nodes collaborate to train a machine learning model without sharing their data, thereby enhancing data privacy and security. This research paper provides a comprehensive review of recent developments in DML on the blockchain, including its applications, challenges, and potential solutions. The paper analyzes relevant literature and case studies to highlight the advantages and limitations of DML on the blockchain. The study looks at the various consensus techniques used in DML, including proof-of-work, proof-of-stake, and proof-of-authority, and how they affect system performance. The function of smart contracts in DML and how they might improve the system's security and transparency is also discussed. The paper also covers the difficulties of DML on the blockchain and potential solutions, including scalability, interoperability, and privacy issues. According to the study's results, DML on the blockchain has the power to change the AI industry by providing safe and private machine learning; however, further research and development is required to address the technical and non-technical problems. To fully realize the potential of DML on the blockchain, the study emphasizes the necessity of a coordinated effort by researchers, developers, and policymakers.

**Keywords: Decentralized Machine Learning, Blockchain, Distributed Computing, Data Privacy, Security, Consensus Mechanisms, Proof-Of-Work, Proof-Of-Stake, Proof-Of-Authority, Smart Contracts, Scalability, Interoperability, Privacy Concerns, Collaborative Effort.**

## I. INTRODUCTION

Decentralized machine learning (DML) is a new approach in artificial intelligence (AI) that combines the power of distributed computing and blockchain technology to enable secure and privacy-preserving machine learning. Traditional machine learning relies on the centralization of large amounts of data, which leaves it open to security risks and privacy violations. DML, by contrast, allows numerous devices or nodes to work together to train a machine learning model without revealing any of their data, improving data privacy and security. Because of its potential to change the AI landscape, this strategy has attracted a lot of interest from researchers, developers, and corporations.

Blockchain technology, initially created for the decentralized management of cryptocurrencies, has since found several uses outside of banking. The primary characteristics of blockchain, including immutability, transparency, and decentralization, make it a natural foundation for DML. In DML on the blockchain, the nodes work together to train the machine learning model, while the blockchain securely and openly stores the transactional data.
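To ground the "immutable ledger" idea before the review proper, here is a minimal sketch of hash chaining, the core mechanism behind blockchain's tamper evidence. The block structure and field names are invented for illustration; a real blockchain adds consensus, signatures, and networking on top of this.

```python
import hashlib
import json
import time

def block_hash(block):
    """SHA-256 over the block's payload fields, including the previous
    block's hash, so editing any earlier block changes every hash after it."""
    payload = {k: block[k] for k in ("timestamp", "data", "prev_hash")}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(data, prev_hash):
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def verify_chain(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for i, block in enumerate(chain):
        if block["hash"] != block_hash(block):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

genesis = make_block({"note": "genesis"}, "0" * 64)
record = make_block({"model_update_id": 42}, genesis["hash"])
chain = [genesis, record]
print(verify_chain(chain))          # True
genesis["data"]["note"] = "edited"  # tamper with history...
print(verify_chain(chain))          # False: the chain exposes the edit
```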
Another way to automate the training process and improve the security and transparency of the system is to employ smart contracts, which are self-executing contracts with the conditions of the agreement put directly into code. This study offers a thorough analysis of current advancements in DML on the blockchain, including its uses, difficulties, and prospective remedies. The advantages and restrictions of DML on the blockchain are highlighted through the paper's analysis of pertinent literature and case studies. The effectiveness of the system is examined in relation to the various consensus techniques employed in DML, such as proof-of-work, proof-of-stake, and proof-of-authority. The function of smart contracts in DML and how they might improve the system's security and transparency is also discussed.

The study discusses the challenges and potential solutions in DML on the blockchain, including scalability, interoperability, and privacy concerns. According to the study's findings, DML on the blockchain has the power to change the AI industry by providing safe and private machine learning; however, further research and development is required to address the technical and non-technical problems. To fully realize the potential of DML on the blockchain, the study emphasizes the necessity of a coordinated effort by researchers, developers, and policymakers.

## II. LITERATURE SURVEY

Adeel and Zeadally (2021) review the applications and challenges of blockchain-based decentralized machine learning and propose future research directions [1].

Li et al. (2020) propose a framework for decentralized machine learning on blockchain, which includes secure and privacy-preserving machine learning capabilities. Their framework also addresses the challenges of scalability, performance, and data heterogeneity [2].

Yuan et al. (2021) review the potential of federated learning on blockchain to enable secure and privacy-preserving machine learning in a decentralized environment. They also discuss the challenges and limitations of federated learning on blockchain [3].

Salama and Mohamed (2021) perform a systematic review of the existing literature, frameworks, and architectures of decentralized machine learning on blockchain. Their review highlights the challenges and opportunities of this approach, including the need for efficient consensus mechanisms and secure data sharing [4].

Zhang et al. (2021) provide a survey of decentralized machine learning on blockchain, including the current state-of-the-art, research challenges, and future directions. They also discuss the potential of blockchain-based decentralized machine learning to address the challenges of privacy, security, and data sharing in machine learning [5].

## III. PROPOSED APPROACH

Decentralized machine learning on blockchain is an emerging research area that aims to address some of the challenges associated with traditional machine learning approaches, such as data privacy, data security, and data ownership. Our proposed approach involves creating a decentralized network of nodes that can execute machine learning models in a secure and transparent manner.
The first step in our proposed approach is to create a decentralized network of nodes that can communicate with each other using a peer-to-peer (P2P) protocol. Each node in the network keeps a copy of the blockchain, which houses encrypted data and smart contracts that control how machine learning tasks are carried out. The blockchain provides a tamper-proof and transparent ledger, so all network participants may independently confirm the accuracy and legitimacy of the data as well as the successful completion of the machine learning tasks.

To start a new machine learning task, a smart contract is established on the blockchain that specifies the work, such as the dataset, the model architecture, and the learning rate. The smart contract also specifies the terms under which the work will be carried out, such as the number of nodes required to engage in the training process, its duration, and the reward for participating nodes.

Once the smart contract has been formed, the network nodes can take part in the machine learning task by running a federated learning algorithm. Federated learning is a distributed machine learning technique in which the model is trained on each node's local data, with the results being combined to create a global model. Because the nodes never exchange their raw data with one another, federated learning protects data privacy and security.

The outcomes of the machine learning task are then combined and kept on the blockchain, where they may be accessed by parties with the necessary permissions. The blockchain ensures data ownership and data privacy while offering a transparent and secure method of storing and sharing machine learning results. Authorised parties gain access to the machine learning results by executing a smart contract on the blockchain that sets the terms under which the results can be accessed.
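The aggregation step described above is usually federated averaging: each node trains locally, and only its model weights (not its data) are combined, weighted by local dataset size. A minimal sketch follows; the weight vectors and dataset sizes are placeholders invented for illustration, and the on-chain storage step is omitted.

```python
import numpy as np

def federated_average(local_weights, data_sizes):
    """FedAvg aggregation: weight each node's locally trained parameters
    by the share of training data that node holds."""
    total = sum(data_sizes)
    return sum(w * (n / total) for w, n in zip(local_weights, data_sizes))

# Three hypothetical nodes report parameters for the same small model;
# no raw data leaves any node, only these weight vectors.
local_weights = [
    np.array([0.20, 0.50]),
    np.array([0.40, 0.10]),
    np.array([0.30, 0.30]),
]
data_sizes = [100, 300, 600]  # local dataset sizes per node

global_weights = federated_average(local_weights, data_sizes)
print(global_weights)  # [0.32 0.26], the update a smart contract would record
```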
The use of blockchain technology also ensures that the system is secure and transparent, making it less vulnerable to cyber threats and fraud. As the field of DMLB continues to evolve, it has the potential to transform various industries by enabling more efficient and secure machine learning operations. ## V. THE RELATIONS BETWEEN BLOCKCHAIN AND MACHINE LEARNING The integration of blockchain and machine learning technologies has gained popularity recently because of its potential to completely transform data processing and management. While machine learning enables computers to learn from that data and make decisions based on it, blockchain offers a secure and transparent way to store and share data. Blockchain's capacity to address data privacy issues is one of the technology's primary benefits for machine learning. Users can keep control of their data by keeping it on a decentralized network and restricting who can access it, lowering the risk of data breaches and unauthorized access. Additionally, because the data is stored on a decentralised network that is difficult to hack, blockchain-based systems are more transparent and secure than conventional centralized systems. Additionally, it is possible to analyze and glean insights from the data kept on the blockchain using machine learning algorithms. Blockchain-based systems, for instance, can be used to develop predictive models for various purposes, including fraud detection and financial transactions. Overall, the fusion of machine learning with blockchain technology has the potential to revolutionize data processing and management, resulting in more private, open, and secure platforms. ## VI. PROMISING DIRECTIONS Even though DMLB has the power to completely alter how we develop and use machine learning models, there are still a number of obstacles to be overcome. Scalability is one of the most important issues since DMLB systems need a lot of processing power to carry out machine learning activities. The performance of the system may be constrained by the network's weakest node because each node must carry out computations locally. By creating more effective decentralized training methods for machine learning models, this problem can be solved. The requirement for more effective consensus methods presents another difficulty. The current proof-of-work and proof-of-stake consensus processes employed in blockchain systems can be slow and resource-intensive, which restricts the scalability of DMLB systems. Therefore, new consensus mechanisms that are more effective and scalable are required for DMLB applications. Another promising direction is the development of edge intelligence frameworks that can provide the necessary tools and platforms for deploying and managing smart and collaborative AI systems at the edge. Edge intelligence frameworks can include the necessary components for data management, model training and inference, communication and coordination, security and privacy, and performance and optimization. Edge intelligence frameworks can also enable the integration of different AI techniques and algorithms, such as deep learning reinforcement learning and transfer learning ----- ## e-ISSN: 2582-5208 International Research Journal of Modernization in Engineering Technology and Science ( Peer-Reviewed, Open Access, Fully Refereed International Journal ) Volume:05/Issue:04/April-2023 Impact Factor- 7.868 www.irjmets.com Finally, the issue of data privacy needs to be addressed. 
By enabling several parties to work together on machine learning tasks without revealing their data, DMLB systems provide a high level of anonymity, but there is still a chance that the data can be linked back to its original source. Therefore, new privacy-preserving methods must be developed in order to shield the data from unauthorized access.

## VII. CONCLUSION

In conclusion, DMLB technology, which combines blockchain and machine learning, has the potential to completely alter how we store, use, and exchange data. DMLB has a number of benefits over conventional centralized systems, including increased security, openness, and privacy. The review of recent developments in DMLB reveals that significant progress has been made in this field, with numerous successful implementations and promising research directions. However, a number of issues still need to be resolved in subsequent studies, including scalability, effectiveness, and interoperability. Despite these obstacles, DMLB, which enables secure and transparent cooperation on machine learning tasks without sacrificing data privacy, has the potential to transform a number of industries, including healthcare, banking, and logistics. A new business model that makes use of the combined intelligence of several parties while protecting their confidential information can be developed with the help of DMLB. As a result, DMLB is an exciting area of study that is likely to spur innovation and change in the years to come.

## VIII. REFERENCES

[1] Adeel, M., & Zeadally, S. (2021). A review of blockchain-based decentralized machine learning: applications, challenges, and future directions. Journal of Parallel and Distributed Computing, 154, 161-171.

[2] Li, Y., Xie, J., Zhang, X., Liu, Y., & Luan, T. H. (2020, November). Towards decentralized machine learning on blockchain. In 2020 IEEE International Conference on Blockchain and Cryptocurrency (ICBC) (pp. 1-10). IEEE.

[3] Yuan, S., Zhang, X., Chen, Y., Zhao, X., & Gao, H. (2021). Federated learning on blockchain: a review. IEEE Transactions on Computational Social Systems, 8(1), 127-139.

[4] Salama, T., & Mohamed, A. (2021). Decentralized machine learning on blockchain: a systematic review. Journal of Parallel and Distributed Computing, 157, 224-238.

[5] Zhang, R., Yang, Y., Liu, C., & Xue, Y. (2021). Decentralized machine learning on blockchain: a survey. IEEE Access, 9, 29287-29302.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.56726/irjmets37762?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.56726/irjmets37762, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://doi.org/10.56726/irjmets37762" }
2,023
[ "JournalArticle", "Review" ]
true
2023-05-07T00:00:00
[]
3,493
en
[ { "category": "Law", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01af3da360d1ad0806873d3fc887d316ce8bb9eb
[]
0.905708
Implementing the electronic signature law in Tanzania – successes, challenges, and prospects
01af3da360d1ad0806873d3fc887d316ce8bb9eb
Digital Evidence and Electronic Signature Law Review
[ { "authorId": "98415123", "name": "Ubena John" } ]
{ "alternate_issns": [ "2054-8508", "1744-0882" ], "alternate_names": [ "Digit Évid Electron Signat Law Rev" ], "alternate_urls": [ "https://journals.sas.ac.uk/deeslr/index" ], "id": "a5a22387-62fc-4433-90ea-4741fbc93284", "issn": "1756-4611", "name": "Digital Evidence and Electronic Signature Law Review", "type": "journal", "url": "https://journals.sas.ac.uk/deeslr/" }
Abstract  In a bid to implement the Electronic Transactions Act 2015, Tanzania initiated the adoption of a National Public Key infrastructure (PKI) framework. However, the plan has not been executed as expected because of certain gaps and ambiguities in the laws. This article examines the existing laws providing for the legal validity, admissibility and enforceability of electronic signatures especially using PKI; identifies the weaknesses of the existing laws and recommends new laws relevant to PKI that should be considered, and their rationale.  Index words: Tanzania, electronic signature, PKI, cryptography, certification
#### ARTICLE :

# Implementing the electronic signature law in Tanzania – successes, challenges, and prospects

## By Ubena John

### Introduction

Electronic commerce is an example of electronic transactions that have recently been taken up in Tanzania. Thanks to the enabling legal environment, online services such as eHealth services, mobile and electronic banking services, and payment systems have thrived. However, security and trust have not been forthcoming where people conduct business transactions on the internet. From a legal standpoint, electronic signatures are used for a variety of purposes, including to signify willingness to be bound by the terms of a contract, to sign a bank order, to authorise an invoice and to provide authority (for payments). An electronic signature signifies the consent of the signatory and their intent to be bound by the transaction. Thus, where the law requires a signature, that requirement may be met by using an electronic signature, as provided for by s6 of the Electronic Transactions Act 2015 (ETA). For electronic transactions, signatures may include a name at the bottom of an email, a personal identity number (PIN), a scanned handwritten signature, etc. Nonetheless, these forms of signature may not be helpful to identify a party or to ensure the integrity of the data. To overcome this challenge, a signature using PKI and involving trusted third parties is utilised.[1] Tanzania has provided for the legal validity of a signature using a PKI.[2] However, the legal effects of an electronic signature within a PKI do have some deficiencies. It is undisputed that electronic transactions require trust and security. Online market sellers and buyers would like to know with whom they are transacting[3] and to be assured that the documents they are exchanging, or transactions in which they are engaging, are trustworthy. Signatures have a range of functions, which include: identifying the signatory; showing that the signatory intended the signature to be his signature; showing that the signatory signified his assent to be bound by the content of the document he signed; and guaranteeing trust or offering assurance to the respective parties to a particular transaction.[4] In the online world, it is possible to rely on digital signatures for the purposes of trust, integrity, and confidentiality, although online traders tend to rely on the means of payment being linked to the person making the order for goods or services, rather than rely on any form of electronic signature, and this works very well. While the law provides for the legal validity of electronic signatures in Tanzania, the reality of how they are used is somewhat different.

1 Adam Mambi, ICT Law Book: A Source Book for Information & Communication Technologies and Cyber-Crime (Dar es Salaam: Mkuki na Nyota Publishers, 2010) 103-105; Ubena John, 'E-documents & E-signatures in Tanzania: Their Role, Status, and the Future', in Kelvin Joseph Bwalya and Saul F.C. Zulu, (eds), A Handbook of Research on e-Government in Emerging Economies: Adoption, E-Participation, and Legal Frameworks, Vol.1 (Hershey, PA, USA, IGI, 201), pp. 90-122.
2 See ss 6-7 ETA providing for validity of electronic signatures in Tanzania.
3 Stephen Mason and Timothy S. Reiniger, '"Trust" Between Machines? Establishing Identity Between Humans and Software Code, or whether You Know it is a Dog, and if so, which Dog?', Computer and Telecommunications Law Review, 2015, Volume 21, Issue 5, pp. 135-148.
4 Andrew Murray, Information Technology Law (Oxford, OUP, 2011), p. 428; for a comprehensive list of the functions of a signature, see Stephen Mason, Chapter 7 'Electronic signatures' in Stephen Mason and Daniel Seng, editors, Electronic Evidence and Electronic Signatures (5th edn, Institute of Advanced Legal Studies for the SAS Humanities Digital Library, School of Advanced Study, University of London, 2021), 7.11-7.19, open access at https://humanities-digital-library.org/index.php/hdl/catalog/book/electronic-evidence-and-electronic-signatures.

### Defining the electronic signature

Section 3 of ETA provides:

'electronic signature' means data, including an electronic sound, symbol, or process, executed, or adopted to identify a party, to indicate that party's approval or intention in respect of the information contained in the electronic communication and which is attached to or logically associated with such electronic communication.

Section 7 of ETA provides that an electronic signature is secure if it:

(a) is unique for the purpose for which it is used;
(b) can be used to identify the person who signs off the electronic communication;
(c) is created and affixed to the electronic communication by the signer;
(d) is under control of the person who signs; and
(e) is created and linked to the electronic communication to which it relates in a manner such that any changes in the electronic communication would be revealed.[5]

Beside the statutory definition of an electronic signature, there are several legal scholars who have attempted to define the term 'electronic signature' and identify the purposes of a signature. According to Professor Chris Reed, the electronic signature serves three purposes: the identity of the signatory; the intention to make a signature; and that the signatory adopts the contents of the document.[6] Mason outlines a number of aspects of the signature, including the purpose and functions, considers dictionary definitions,[7] discusses the difference between the manuscript (handwritten) signature and a digital signature, and explains what a digital signature is.[8] The oft-cited example of an ideal electronic signature is the digital signature within the framework of a PKI, because the PKI involves trusted third parties. At this juncture, it is noteworthy that the term 'digital signature' is used interchangeably with 'electronic signature'.[9] Any form of electronic signature is capable of being binding, but some forms of electronic signature do not have the same status in law in some jurisdictions.[10] The digital signature can achieve technical efficacy in security, confidentiality, and integrity. It is used to secure databanks, online shops, critical infrastructure, and such like. Electronic signatures take various forms, such as a name typed at the foot of an email, a sound, clicking 'OK', or the accept button on a web page signifying assent to the terms and conditions on that page.[11] Other forms of signature include biometric measurements, such as the scanned retina, fingerprint, and DNA samples.[12]

5 ETA s7.
6 Chris Reed, ‘What is a signature?’, (2000) 3 Journal of Information, Law and Technology (JILT), at https://warwick.ac.uk/fac/soc/law/elj/jilt/2000_3/reed/.
7 Mason, Chapter 7 ‘Electronic signatures’, 7.1-7.10.
8 Mason, Chapter 7 ‘Electronic signatures’, 7.30; a full technical overview of how a digital signature works is set out at 7.203-7.227.
9 Mason, Chapter 7 ‘Electronic signatures’, 7.30-7.32.
10 Anna Nordén, ‘Electronic signatures in a legal context’, in Cecilia Magnusson Sjöberg, editor, IT Law for IT Professionals – an introduction (Studentlitteratur AB, 2005) pp. 152-154; Ubena John, ‘E-documents & E-signatures in Tanzania: Their Role, Status, and the Future’, p 104; Stephen Mason, ‘The practical issues in using electronic signatures in different jurisdictions’, Computer and Telecommunications Law Review, 2021, Volume 27, Issue 6, pp. 165-179.
11 By way of example, see the USA case of Moore v Microsoft Corporation, 293 A.D.2d 587, 741 N.Y.S.2d 91 (N.Y. App. Div. 2002); see also eBay International AG v Creative Festival Entertainment Pty Ltd (2006) 170 FCR 450 (Australian Federal Court held that the act of clicking acceptance of terms and conditions appearing in a website is as good as signing of a contract in writing); Stephen Mason, Electronic Signatures in Law (4th edn, Institute of Advanced Legal Studies for the SAS Humanities Digital Library, School of Advanced Study, University of London, 2016), 3.10, currently available online at https://ials.sas.ac.uk/digital/humanities-digital-library/observing-law-ials-open-book-service-law/electronic-signatures; Cecilia Magnusson Sjöberg and Anna Nordén, ‘Managing Electronic Signatures – Current challenges’ in Peter Wahlgren, editor, IT Law Volume 47 (Stockholm Institute for Scandinavian Law, 2004), pp 81-95; Anna Nordén, ‘Electronic signatures in a legal context’, pp. 149-183.
12 See Mason, Chapter 7 ‘Electronic Signatures’, for a complete list and relevant case law.

-----

fingerprint – especially a thumb print – as a signature is common in Tanzania, and individuals without a handwritten signature who file their pleadings will sign them by affixing their thumb prints to the documents in proceedings.

### Development of the electronic signature law in Tanzania

Prior to the enactment of the ETA in 2015, the electronic signature lacked legal recognition in Tanzania. The legislature took cognizance of the developments in electronic commerce and electronic government services. In 2015, it enacted the ETA to provide for a range of issues, including the legal validity of electronic transactions, electronic contracts, electronic signatures, and the admissibility of electronic evidence. Despite these developments, trust in electronic transactions was difficult to achieve without the parties being able to identify or know their counterparties in online transactions. The attributes of the secure electronic signature set out above are commendable. However, the law has not defined the rights, duties and liabilities of the parties creating, using, or relying on electronic signatures.
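The technical efficacy claimed for the digital signature above – integrity of the data and identification of the key holder – can be made concrete with a minimal sketch. The example below is not drawn from any Tanzanian system; it assumes the open-source Python `cryptography` library, and the message and variable names are hypothetical.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding, rsa
from cryptography.exceptions import InvalidSignature

# The signatory generates a key pair: the private key stays secret, while
# the public key is distributed (in a PKI, inside a certificate issued by
# a trusted third party that vouches for the holder's identity).
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"I agree to be bound by the terms of this contract."
pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)

# Signing: only the holder of the private key can produce this value.
signature = private_key.sign(message, pss, hashes.SHA256())

# Verification: anyone holding the public key can check both origin and
# integrity; altering even one byte of the message makes it fail.
try:
    public_key.verify(signature, message + b"!", pss, hashes.SHA256())
except InvalidSignature:
    print("Signature does not match: the message was altered.")

public_key.verify(signature, message, pss, hashes.SHA256())  # no exception
```

The sketch only proves that some holder of the private key signed; it is the trusted third parties of a PKI that bind the public key to a named person – which is why the rights, duties and liabilities of those parties, noted above as undefined, matter.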
To address this shortcoming, Tanzania intended to implement PKI signatures as mandated by the ETA.[13] The ETA embodies provisions for the regulation of cryptographic and certification services.[14] Digital authentication is undertaken by the Electronic Government Authority (eGA) on behalf of public entities via PKI and the Digital Signature Management System as mandated by the e-Government Act, No 10 of 2019.[15] These provisions confirm that Tanzania’s preference is for a PKI.[16] The law further states, at s6(1), that ‘where a law requires the signature of a person to be entered, that requirement shall be met by a secure electronic signature made under this Act.’ Having depicted the development of the electronic signature agenda in Tanzania, it is worthwhile to elaborate, albeit briefly, the approaches to electronic signature laws adopted in other jurisdictions.

### Approaches to electronic signatures law

This section makes a short comparison with the development of the law of electronic signatures in Australia and South Africa. The electronic signature laws in these countries seem to have been influenced by the UNCITRAL Model Law on Electronic Commerce. The three types of approach that jurisdictions have taken to electronic signatures are briefly explained.[17] They are prescriptive, minimalistic, and two-tier.

#### Prescriptive approach

The prescriptive approach to electronic signatures specifies the particular type of electronic signature technology to be adopted. It is strict and inflexible. This approach may act to stifle innovation because other types of electronic signature technology are excluded. The jurisdictions that have opted for the prescriptive approach are Brazil, Indonesia, Israel, Peru, Philippines, Russia, Turkey, and Uruguay.[18] The prescriptive approach stipulates the purpose of the electronic signature, but also specifies the technology for a signature to be legally valid. Some jurisdictions adopted this approach, but later revised the legislation.[19]

13 See Ministry of Works, Transport and Communication, Consultancy report dated 20 February 2017 in respect of Tender No. ME.006/RCIP/2015-2016/HQ/C/03 Business, Functional, Non-Functional Requirements and System Design Specification for the Tanzania National Public Key Infrastructure includes policy, legislative and regulation requirements. Herein referred to as the NPKI consultancy report (on file with the author). See also International Competitive Selection Tender from the Tanzania Communications Regulatory Authority available at https://www.tcra.go.tz/uploads/documents/sw-1619170675-PROVISION%20OF%20CONSULTANCY%20SERVICES%20FOR%20IMPLEMENTATION%20OF%20NATIONAL%20PUBLIC%20KEY%20INFRASTRUCTURE%20(NPKI)%20IN%20TANZANIA.pdf.
14 ETA ss33-36.
15 The e-Government Act s5.
16 India took the prescriptive approach and preferred the PKI model, but the law was amended to provide for all forms of electronic signature: Mason, Electronic Signatures in Law, 3.3.
17 For details on the approach to electronic signature law from various jurisdictions see Mason, Electronic Signatures in Law, 3.2-3.21.
18 See CERTIPHI, electronic signatures, https://www.certiphi.com/resource-center/compliance-services/electronic-signatures/; Mason, Electronic Signatures in Law, 3.3.
19 India is a good example.

-----

#### Minimalistic approach

The minimalistic approach permits the use of any form of electronic signature. All types of electronic signature are legally recognized. The countries that have preferred the minimalistic approach include Australia, Canada, New Zealand, Thailand, and the USA.[20] The advantages of the minimalistic approach are that it promotes innovation and that it is simple: any type of electronic signature is legally recognized, and the market is left to supply any signature technology. Nevertheless, besides other deficiencies, the minimalistic approach has left room for signatures of poor quality to be used and hence they may be easily forged, although it must be noted that, given the millions of contracts entered into remotely across the world every day, there are very few cases of forgery.[21]

#### Two-tier approach

The two-tier approach is a hybrid model in which most types of signature technology will be legally recognized. The legislation generally provides for a certain class of approved electronic signature technologies that may be used.[22] The EU has indicated it prefers the qualified electronic signature (digital signature) over other types of electronic signatures. Tanzania has similarly expressed a preference for the secure electronic signature over other types of electronic signature. The two-tier approach has been adopted in the EU, China, Japan, South Africa, and Tanzania. The advantage of this approach is that the law recognizes any type of electronic signature. The legislation also tends to include attributes linked to an electronic signature that are considered to be reliable or secure.[23] The problem is that not every signature is reliable or secure. ETA section 6(1) provides that where the law requires a signature to be appended, such requirement shall be met by entering or using a secure electronic signature as defined under section 7. Regardless, many Tanzanians use simple electronic signatures such as the name at the foot of an email and a scanned version of a handwritten signature. This is probably because a PKI digital signature is expensive to buy and keep up to date, and complex to install and use.

#### Australia

Although Australia adopted the minimalistic approach to electronic signatures, the Gatekeeper Public Key Infrastructure Framework issued by the Digital Transformation Office (DTO) suggests that some Australian government agencies preferred to use the PKI signature. The Gatekeeper PKI Framework is a guide issued to assist those who are using or relying on a signature affixed within a PKI to authenticate online transactions.
It helps the parties (accreditation authority, registration authority, certification authorities, key issuers, certificate holders, users or relying parties) involved in the PKI signature cycle to understand the technical and legal requirements. Moreover, it helps them appreciate their roles, rights, duties, and liabilities. The use of the PKI signature is not mandatory in Australia.[24] Parties are free to choose any electronic signature technology that meets the attributes set out in the Electronic Transactions Act.[25] Nonetheless, when government agencies or other organisations use the PKI (including a digital certificate to authenticate the signing party), Gatekeeper accredited service providers must be used.[26] Unlike South Africa, where the South Africa Accreditation Authority (SAAA) accredits both private and government electronic signature service providers, in Australia, the Gatekeeper PKI Framework is for government agencies that use PKI signatures. Section 2 of the Gatekeeper PKI Framework provides: The Gatekeeper PKI Framework is a whole-of-government suite of policies, standards and procedures that governs the use of PKI in Government for the authentication of individuals, organisations, and non-person entities (NPE) – such as devices, applications, or computing components. [20 CERTIPHI, electronic signatures, https://www.certiphi.com/resource-center/compliance-services/electronic-signatures/;](https://www.certiphi.com/resource-center/compliance-services/electronic-signatures/) Mason, Electronic Signatures in Law, 3.8. 21 Mason, Chapter 7 ‘Electronic signatures’, 7.35-7.37, 7.227. [22 CERTIPHI, electronic signatures, https://www.certiphi.com/resource-center/compliance-services/electronic-signatures/;](https://www.certiphi.com/resource-center/compliance-services/electronic-signatures/) Mason, Electronic Signatures in Law, 3.15; Article 7 of UNCITRAL Model Law on Electronic Commerce also adopted a two-tier approach to electronic signature; Article 6(3) of UNCITRAL Model Law on Electronic Signature echoes the foregoing law. 23 This has been done in South Africa and Tanzania. 24 Section 5.4 of Gatekeeper PKI Framework. 25 Electronic Transactions Act 1999 (Cth), s10(1). 26 See Section 5.4 of Gatekeeper PKI Framework. ----- **Implementing the electronic signature law in Tanzania** The Digital Transformation Office is responsible for scrutinizing the application for accreditation of the Gatekeeper of PKI and making recommendations to the Gatekeeper Competent Authority. The latter is responsible for decisions in relation to the accreditation of service providers. 
Although the Gatekeeper PKI Framework appears to be for government agencies, it applies to organisations that choose to obtain and maintain gatekeeper’s accreditation.[27] Under Section 10(1)(a) and (b) of the Electronic Transactions Act 1999, an electronic signature is legally recognized in Australia if it has the following attributes: …(a) in all cases—a method is used to identify the person and to indicate the person’s intention in respect of the information communicated; and (b) in all cases—the method used was either: (i) as reliable as appropriate for the purpose for which the electronic communication was generated or communicated, in the light of all the circumstances, including any relevant agreement; or (ii) proven in fact to have fulfilled the functions described in paragraph (a), by itself or together with further evidence… These attributes apply to any electronic signature regardless of its underlying technology. In the three countries (Tanzania, Australia, and South Africa), a notable similarity is that all have electronic signature laws that have been highly influenced by the UNCITRAL Model Law on Electronic Commerce. Principles such as functional equivalence found in this Model Law have found their way into the electronic signature laws of these jurisdictions. Also, South Africa and Tanzania have provisions that stipulate the attributes of electronic signature to be secure or reliable. These match the attributes set under Article 7 of the UNCITRAL Model Law on Electronic Commerce. While Australia has adopted the minimalistic approach, South Africa and Tanzania have opted for the two-tier approach. This might be because the two-tier approach is not only found in the UNCITRAL Model Law on Electronic Commerce, but it is also found in the Southern African Development Community (SADC) Model law.[28] Because electronic signatures comprise many types, their qualities vary. Many jurisdictions give a higher value to an electronic signature that has the capability of achieving confidentiality, integrity, authenticity, and identifying the signatory. It is for this reason that Tanzania adopted the secure electronic signature (in EU parlance[29]) that in practice, and according to the government plan, is the public key infrastructure (PKI) signature.[30] #### South Africa In South Africa, a PKI has been implemented via the Electronic Communications and Transactions Act.[31] Under that law, the advanced electronic signature (AES) is defined, in section 1, as ‘an electronic signature which results from a process which has been accredited by the Authority as provided for in section 37’. Where the law requires a transaction to be endorsed by signature, that requirement is met only if the AES is used.[32] The AES underlying framework is the use of PKI. In South Africa accredited authentication and certification products and certification services also known as PKI or AES services are carried out by two accredited agencies: Law Trust Party Services (Pty) Limited and the South African Post Office Limited (SAPO).[33] The latter is a government agency accredited by the South Africa Accreditation Authority (SAAA) to provide cryptography and certification services. The SAPO first launched its Trust Centre, which is a digital signature and authentication hub, in July 2013.[34] Its Trust Centre is AES Class 4 Certificate and related certificates are compatible with all applications that support the use of the X.509 27 Section 2 of Gatekeeper PKI Framework. 
28 https://www.itu.int/en/ITU-D/Projects/ITU-EC[ACP/HIPSSA/Documents/FINAL%20DOCUMENTS/FINAL%20DOCS%20ENGLISH/sadc_model_law_e-transactions.pdf .](https://www.itu.int/en/ITU-D/Projects/ITU-EC-ACP/HIPSSA/Documents/FINAL%20DOCUMENTS/FINAL%20DOCS%20ENGLISH/sadc_model_law_e-transactions.pdf) 29 Regulation (EU) No 910/2014 of the European Parliament and of the Council of 23 July 2014 on electronic identification and trust services for electronic transactions in the internal market and repealing Directive 1999/93/EC, OJ L 257, 28.8.2014, p. 73114, for which see Article 26 Requirements for advanced electronic signatures (eIDAS). 30 Section 7 of ETA. 31 Act No. 25 of 2002 (ECTA). 32 See ECTA s13(1). 33 South Africa Accreditation Authority (SAAA), accredited authentication and certification products and certification services, at [http://www.saaa.gov.za/index.php/accreditation.html.](http://www.saaa.gov.za/index.php/accreditation.html) 34 There is a link to the SAPO Trust Centre from the South Africa Accreditation Authority, although the link does not appear to be working. ----- **Implementing the electronic signature law in Tanzania** digital certificate.[35] The other provider, LAWTRUST, is a private company accredited by the SAAA to offer digital authentication services.[36] The LAWTRUST AES product is based on a (claimed) high assurance digital certificate, compatible with products or services that support the X.509 digital certificate.[37] Recent cases in South Africa regarding electronic signatures[38] include Spring Forest Trading 599 CC v Wilberry (Pty) _Ltd t/a Ecowash.[39]_ The issue in this case was whether the names of the parties at the bottom or foot of each email constituted the required consensual cancellation of the agreement. It was held the names at the foot of the emails constituted a signature and was binding. Cachalia JA giving judgment for the court said, at [28] that The typewritten names of the parties at the foot of the emails, which were used to identify the users, constitute ‘data’ that is logically associated with the data in the body of the emails, as envisaged in the definition of an ‘electronic signature’. They therefore satisfy the requirement of a signature and had the effect of authenticating the information contained in the emails. _Global & Local Investment Advisors (Pty) Ltd v Nickolaus Ludick Fouché,[40]_ involved emails sent fraudulently to a bank authorising the transfer of funds. The issue for determination was whether a series of fraudulent emails bound _Fouché. The court held, at [16], that ‘[The emails] were not written nor sent by the person they purported to_ originate from. They are fraudulent as they were written and dispatched by person or persons without the authority to do so. They are not binding on Mr Fouché’ – hence the typed signature was a forgery and could not be relied upon. The case of First Rand Bank t/a Wesbank v Molamuagae,[41] was an action against Andrew Molamuagae for the cancellation of an instalment sale agreement and the repossession of a vehicle which Molamuagae purchased under the contract. The contract, called an ‘iContract’, was signed by Molamuagae online with a personal information number, which had been sent to his cellular telephone number, together with his identity number. One of the issues before the court was whether the electronic signature complied with the Electronic Communications and Transactions Act 2002 (ECTA). 
Senyatsi AJ said that it did, at [43]: ‘The NCA [National Credit Act 2005] does not provide for the form that the signature to the instalment sale agreement needs to take. As a result, it is quite possible to sign the agreement electronically and in compliance with the ECTA.’ It followed that the instalment sale agreement had been concluded by the parties.

### Public Key Infrastructure signature

The PKI involves trusted third parties in the creation and management of keys and certificates for the purposes of a digital signature. The signature scheme uses a pair of keys: one private and one public.[42] Public key encryption uses two different keys, each of which will decrypt documents encrypted by the other key. This means the private key can be kept secret by the signatory, while the other is made public.[43] With a PKI signature, the rights, duties/obligations and liabilities and other PKI-specific issues of certification and supervision are defined in the ETA in Part VI and Part VII. The PKI signature is required to have the following

35 SAAA, accredited authentication and certification products and certification services, at http://www.saaa.gov.za/index.php/accredited-authentication-and-certification-products-services.html.
36 LAWtrust, PKI, at https://www.lawtrust.co.za/solutions/pki.
37 SAAA, accredited authentication and certification products and certification services, at http://www.saaa.gov.za/index.php/accredited-authentication-and-certification-products-services.html. See also SAPO Trust Centre at https://docplayer.net/96041888-The-sapo-trust-centre.html; X.509 at https://en.wikipedia.org/wiki/X.509.
38 See Mason, ‘Electronic signatures’, Chapter 7 for earlier cases from South Africa.
39 (725/13) [2014] ZASCA 178; 2015 (2) SA 118 (SCA) (21 November 2014); mentioned by Mason, Chapter 7 ‘Electronic signatures’, 7.129.
40 (71/2019) [2019] ZASCA 08; 2021 (1) SA 371 (SCA) (18 March 2020).
41 (24558/2016) [2018] ZAGPPHC 762 (26 February 2018).
42 Anna Nordén, ‘Electronic signatures in a legal context’, at pp. 156-157; John, ‘E-documents & E-signatures in Tanzania: Their Role, Status, and the Future’, p 105.
43 Chris Reed, ‘What is a Signature?’ 2000(3); for a comprehensive explanation of how PKI works, including the risks, see Mason, Chapter 7 ‘Electronic signatures’, 7.203-7.277.

-----

attributes: to ensure confidentiality, integrity, authenticity and identify the signatory.[44] Other scholars have added non-repudiation as another attribute,[45] although, as Mason indicates, non-repudiation is impossible.[46]

### Existing laws that address the issues of PKI electronic signatures

Tanzania has several relevant items of legislation and regulations that affect the legal position of PKI electronic signatures. They include the ETA and the Electronic Transactions (Cryptographic and Certification Services Providers) Regulations 2016 (G.N. No. 228), the Electronic Transactions (Cryptographic and Certification Services Providers) Regulations 2016 (G.N. No.
224), Electronic and Postal Communications Act of 2010 and the Electronic and Postal Communications (Computer Emergency Response Team) Regulations 2018 (G.N. No. 60); The Tanzania Communications Regulatory Authority Act 2003; and the Evidence Act (Chapter 6).[47] Each are examined below. ### Electronic Transactions Act The Electronic Transactions Act, Act No. 13 of 2015 (ETA) is the first Act to provide for the validity, admissibility and enforceability of electronic signatures in Tanzania.[48] The ETA provides for a secure electronic signature and its functions, together with the regulation of Cryptographic and Certification Services.[49] The ETA provides a definition of the electronic signature;[50] the legal recognition of an electronic signature;[51] the secure electronic signature, its attributes and application;[52] the liability of the relying party;[53] the use of electronic signatures in electronic record keeping;[54] the use of an electronic signature for the purposes of notarisation, acknowledgement, and certification;[55] the regulation of cryptographic and certification services,[56] and the admissibility and authenticity of evidence in electronic form.[57] The following Ministries are responsible for information and communications technology: the Ministry of Works, Transport and Communication, the Tanzania Communications Regulatory Authority (TCRA), the Bank of Tanzania (BoT), and the Electronic Government Authority (eGA). These government institutions are also involved in regulating electronic signatures. The ETA mandates the Minister responsible for ICT to select and designate a regulator of cryptographic and certification services[58] and approves policies and regulations for cryptographic and Certification Services Providers[59] and putting the NPKI into operation. The law also stipulates the functions of the regulator of cryptographic and certification services, including, among other things, the licensing of electronic signature services and issuing of digital certificates.[60] The Tanzania Communications Regulatory Authority (TCRA) is the regulator of cryptographic and certification services. The communication sector is vast, which means the TCRA might have difficulties undertaking its duties. Ideally, the regulation of electronic signatures should have been left to another institution. Nevertheless, there are other institutions playing a role in regulating electronic signatures not set out in legislation. They do so by virtue of their position. These institutions are the BoT, eGA, National Identity Agency (NIDA), and private commercial banks. The BoT regulates commercial banks, the eGA approves and monitors development of all electronic government projects, and the NIDA issues National Identities both manual and electronic. In so far as the regulation of electronic 44 ETA s7. 45 Anna Nordén, ‘Electronic signatures in a legal context’, pp. 156-157. 46 Mason, Chapter 7 ‘Electronic signatures’, 7.286-7.297. 47 [Cap 6 R.E. 2019]. 48 ETA s6. 49 ETA ss33-36. 50 ETA s3. 51 ETA s6. 52 ETA s7 and s8. 53 ETA s12. 54 ETA s9. 55 ETA s10. 56 ETA ss33-36. 57 ETA ss18 and 46; the Evidence Act [Cap. 6 R.E. 2019] (TEA) s64A. 58 ETA s13(4). 59 ETA s33. 60 ETA s34; ETA Regulations (G.N. No. 228 of 2016). 
-----

signatures is concerned (with exception of the eGA managing the government’s digital authentication framework[61]), the powers and functions of these institutions in the electronic signature cycle are not clearly articulated in legislation. Hence the rights and duties of the parties involved may be contractual. For example, the issuance of a PIN for bank cards that can be used in ATMs remains a contractual arrangement between a bank and its customer.

Despite the above legal framework, there is uncertainty. While the parties to a contract are free to use the electronic signature of their choice unless the law prescribes otherwise,[62] this freedom of choice is qualified with the preference for the secure electronic signature.[63] Interestingly, the provision for secure electronic signatures may also be regarded as non-discriminatory, for it merely sets out what attributes an electronic signature needs if it is to be considered secure.[64] Thus, any electronic signature is legally recognized providing it meets the attributes set out in ETA s7.

### Forms of electronic signature other than PKI signatures

Despite the ETA recognizing PKI electronic signatures, the implementation of the intimated PKI electronic signatures regime in Tanzania has not been realized. There is no PKI infrastructure in place. That is not to say people are not using electronic signatures. As mentioned above, other forms of electronic signature are used in sending text messages, sending email, and using the PIN to take out money from an ATM.[65] Further, it must be emphasised that the definition of an electronic signature is very wide, and includes all forms of electronic signature, not just digital signatures using a PKI, as discussed below.

Clearly, commercial organisations incorporate security features when dealing with customers. For instance, where a bank offers electronic banking services to its customers, the bank issues the customer with a username and password to obtain access to online services. This process differs from one bank to another. When a customer logs onto their electronic banking platform, some banks will send a notification to their mobile telephone that the account is being viewed. During this interaction, the browser and the Internet Protocol address will be recorded by the bank (unless the customer uses a VPN or other mechanism to make it appear that they are obtaining access to the account from another country).[66] Additionally, the customer receives a code on their mobile telephone or at their email address instantly, which must be used within a short period of time to authenticate the customer before the funds transfer is approved or confirmed.

The electronic signature at the bottom of an email is used widely. There are instances where a signature applied via text message may be valid, for example, in a loan agreement over text message, where the court in China held that the data exchanged via mobile telephones in text messages can be admitted in evidence.[67] In other jurisdictions this has extended to torts such as defamation. The latter was the dispute in Lazarus Mirisho Mafie and M/S Shidolya Tours and Safaris v. Odilo Gasper Kilenga alias Moiso Gasper[68] where an email was admitted as evidence to prove that a defamatory email was from the defendant. The court also examined whether an email address may be used to prove that it was indeed the defendant who sent the defamatory email.
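The one-time code flow just described can be illustrated with a short server-side sketch. This is not any bank’s actual system; it is a minimal reconstruction in Python, and the six-digit format, five-minute validity window, and function names are all assumptions made for the example.

```python
import secrets
import time

CODE_TTL_SECONDS = 300  # assumed five-minute validity window
_pending = {}  # customer id -> (code, expiry); a real system would use a secure store

def issue_code(customer_id: str) -> str:
    """Generate a short-lived one-time code and record it for later checking."""
    code = f"{secrets.randbelow(10**6):06d}"  # cryptographically strong 6 digits
    _pending[customer_id] = (code, time.time() + CODE_TTL_SECONDS)
    return code  # in practice sent to the customer's telephone or email, never echoed

def verify_code(customer_id: str, submitted: str) -> bool:
    """Accept the code only once, and only before it expires."""
    entry = _pending.pop(customer_id, None)  # single use: removed on first attempt
    if entry is None:
        return False
    code, expiry = entry
    return time.time() <= expiry and secrets.compare_digest(code, submitted)
```

The legal point the sketch makes visible is that what links the instruction to the customer is possession of the telephone or mailbox at a particular moment, not anything resembling a handwritten mark.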
### ATM cases in Tanzania There have been a few cases where people have relied on the PIN in an ATM as evidence, in disputes brought before the courts. That extends to where the PIN for use in an ATM or mobile banking such as SIM Banking or an M-PESA 61 The e-Government Act s5. 62 ETA s6(3). 63 ETA s7. 64 Thanks to Stephen Mason for this observation. 65 For cases where the PIN is compromised or money fraudulently withdrawn from ATMs, see National Microfinance Bank (PLC) _v Delphina Ikanda Mama, Civil Appeal No.149 of 2017, High Court of Tanzania, Dar es salaam District Registry at Dar es salaam_ (unreported); Mwanswa Jones’ case. [66 See, for instance, https://en.wikipedia.org/wiki/Internet_geolocation and https://en.wikipedia.org/wiki/Geo-blocking .](https://en.wikipedia.org/wiki/Internet_geolocation) 67 _Yang Chunning v. Han Ying. (2005) hai min chu zi NO.4670, Beijing Hai Dian District People’s Court. See case translation and_ commentary in Digital Evidence and Electronic Signature Law Review, 5 (2008), pp. 103–5; see Mason, Electronic Signatures in _Law, Chapter 3 ‘The practical issues in using electronic signatures’, at 125._ 68 Commercial Case No. 10 of 2008, High Court of Tanzania Commercial Division at Arusha (Unreported). ----- **Implementing the electronic signature law in Tanzania** account have been compromised and money has been fraudulently withdrawn, for which see National Microfinance _Bank Ltd v Michael Obey Daud.[69]_ The more frequent issues before the court are customer protection, customer negligence in handling the PIN, the bank breaching fiduciary duty, weak security of the system, vulnerabilities of the mobile banking system, etc.[70] Surprisingly, the reliability of the PIN is not examined. The customer trusts the system without having knowledge about the system itself.[71] A bank might claim that the customer divulged the PIN to third parties. If proved, the bank will not be liable. But where the customer proves he or she did not authorise a third party to obtain access to his or her account, the bank may be liable.[72] The customer has a duty to notify the bank once the PIN is compromised. What is gathered from the relationship between banker and customer in electronic banking is that it relies on trust and this may include trust that the software and the machine are working correctly.[73] Assessing the evidence where an electronic signature is in dispute can be of significant concern. For instance, some judges may tend to believe the assurance given by a witness for the bank in the absence of any evidence.[74] The bank customer may be accused of negligence that he or she has shared his PIN with a third party who in turn obtained access to his or her bank account.[75] This conclusion is reached in ignorance of the fact that the software may have its inherent problems or may be accessed without the knowledge of the customer.[76] The possession of an ATM card and PIN is not conclusive evidence that a thief cannot obtain access to the customer’s bank account and withdraw cash.[77] It is suggested that the advice offered by the Supreme Court of Lithuania in their sage judgment in the case of Ž.Š. 
v _Lietuvos taupomasis bankas is of great value in aiding judges in assessing the evidence, as set out at page 259:[78]_ … in the event of a dispute between the bank and the card holder concerning the use of PIN code (electronic signature), the bank must provide the probative evidence regarding the particular actions or inaction of the card holder that would prove the use of the PIN code (electronic signature) with the card holder’s knowledge 69 Civil Appeal No.51 of 2020, High Court of Tanzania at Mwanza (unreported) available at [https://tanzlii.org/tz/judgment/high-](https://tanzlii.org/tz/judgment/high-court-tanzania/2021/3154) [court-tanzania/2021/3154.](https://tanzlii.org/tz/judgment/high-court-tanzania/2021/3154) 70 See Ubena John and Caroline Mutalemwa, ‘Are the customers’ rights protected against fraud in mobile banking in Tanzania: a review of laws and practice’, Institute of Judicial Administration Law Journal (forthcoming 2022). 71 Mason and Reiniger, ‘“Trust” Between Machines? Establishing Identity Between Humans and Software Code, or whether You Know it is a Dog, and if so which Dog?’, p. 135. 72 See Vodacom (T) Limited and NMB v Mwanswa Jonas Consolidated Civil Appeals No. 1 and No. 2 of 2016, High Court of Tanzania at Mbeya (unreported). 73 For history of trust in machines see Richard Warner and Robert H. Sloan, “Vulnerable Software: Product-Risk Norms and the Problem of Unauthorized Access” (2012) 45 Journal of Law, Technology & Policy 45; see also Mason and Reiniger, ‘“Trust” Between Machines? Establishing Identity Between Humans and Software Code, or whether You Know it is a Dog, and if so which Dog?’, p. 135. 74 Maryke Silalahi Nuth, “Unauthorized use of bank cards with or without the PIN: a lost case for the customer?” (2012) 9 Digital _Evidence and Electronic Signature Law Review 95; see National Microfinance Bank (PLC) v Delphina Ikanda Mama, Civil Appeal_ No.149 of 2017, High Court of Tanzania, Dar es salaam District Registry at Dar es salaam (unreported). 75 _National Microfinance Bank (PLC) v Delphina Ikanda Mama, Civil Appeal No.149 of 2017, High Court of Tanzania, Dar es_ salaam District Registry at Dar es salaam (unreported). 76 Stephen Mason, “Debit cards, ATMs and negligence of the bank and customer” (2012) 27(3) Butterworths Journal of _International Banking and Financial Law 163; Stephen Mason, “Electronic banking and how courts approach the evidence”_ (2013) 29(2) Computer Law and Security Review 144; Mason and Reiniger Esq., ‘“Trust” Between Machines? Establishing Identity Between Humans and Software Code, or whether You Know it is a Dog, and if so which Dog?’, p. 135. 77 There seems to be a wrong assumption in some cases (NMB v Michael Obey Daud HC, Civil Appeal No.51 of 2020, HCT Mwanza (unreported) and NMB v Delphina Ikanda Mama Civil Appeal No.149 of 2017, High Court of Tanzania, Dar es salaam District Registry at Dar es salaam (unreported)) that it is a bank customer who knew his PIN, which meant that the withdrawal from the ATM could not be done by anybody save by the customer or a person who had been given the PIN by that customer. That was held without a critical analysis being done. 78 Civil case No. 
3K-3-390/2002, Supreme Court of Lithuania, translated by Sergejs Trofimovs, 6 Digital Evidence and Electronic _Signature Law Review (2009) 255 – 262; see also the helpful advice offered to members of the judiciary in two important papers:_ Paul Marshall, James Christie, Peter Bernard Ladkin, Bev Littlewood, Stephen Mason, Martin Newby, Jonathan Rogers, Harold Thimbleby and Martyn Thomas CBE, Recommendations for the probity of computer evidence’, 18 Digital Evidence and Electronic _Signature Law Review (2021) pp. 18-26 and Michael Jackson, ‘An approach to judging evidence from computers and computer_ systems’ 18 Digital Evidence and Electronic Signature Law Review (2021) pp. 50-55. ----- **Implementing the electronic signature law in Tanzania** or due to his negligence or lack of care. The bank also bears the obligation to prove that the original PIN code (electronic signature) was used, i.e. the electronic signature, which identifies the specific person – the bank’s client. The sufficient basis of transfer of burden of proof to the card holder may be established only in those cases where the original PIN code is used, and in accordance with the present level of equipment and in accordance with the requirements as to the formation and usage of such a signature, this signature could not have been reproduced without the holder’s knowledge or negligence.’ ### Electronic Transactions (Cryptographic and Certification Service Providers) Regulations Cryptography and certification are at the core of PKI. It is for this reason the Cryptographic and Certification Services Providers Regulations[79] were promulgated. The regulations regulate cryptographic and certification services in Tanzania, and the Minister responsible for communications is empowered to designate an institution to regulate electronic signatures, especially cryptographic and certification services.[80] ### Electronic evidence law Prior to the enactment of ETA in 2015, the electronic signature lacked statutory legal validity. An electronic signature was inadmissible as evidence and hence unenforceable in the courts of law in Tanzania. The ETA recognized electronic transactions. It also recognized data message as evidence. The ETA amended the Evidence Act [Cap 6 R.E. 2019] to the effect that electronic evidence is admissible in the courts of law in Tanzania.[81] In determining the admissibility and evidential weights of evidence in electronic form, s18(2) of the ETA provides for the following to be considered: (a) the reliability of the manner in which the data message was generated or communicated; (b) the reliability of the manner in which the integrity of the data message was maintained; (c) the manner in which the originator was identified; and (d) any other factor that may be relevant in assessing the weight of evidence. #### Electronic evidence cases in Tanzania The role of the judiciary towards the change of legal framework on the admissibility of electronic evidence and the issue of authenticity should not be understated. 
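One of the s18(2) attributes – the manner in which the integrity of the data message was maintained – corresponds, in technical terms, to comparing cryptographic digests: any alteration to the message yields a different digest. The following sketch is purely illustrative (the messages are invented) and uses only Python’s standard library.

```python
import hashlib

def digest(data: bytes) -> str:
    """Return the SHA-256 digest of a data message in hexadecimal."""
    return hashlib.sha256(data).hexdigest()

original = b"Transfer TZS 1,000,000 to account 12345."
altered  = b"Transfer TZS 9,000,000 to account 12345."

recorded = digest(original)          # digest made when the message was created
print(recorded == digest(original))  # True: the message is unchanged
print(recorded == digest(altered))   # False: a single altered character shows
```

A bare digest reveals alteration only if the recorded digest is itself trustworthy; a digital signature binds the digest to a signatory, which is why, as the discussion below observes, a digital signature can help to prove reliability without providing absolute certainty.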
Several cases have been decided by the High Court of Tanzania, which include Trust Bank Ltd v Le-marsh Enterprises Ltd, Joseph Mbui Magari, Lawrence Macharia;[82] Lazarus Mirisho Mafie and M/S Shidolya Tours and Safaris v Odilo Gasper Kilenga alias Moiso Gasper;[83] Exim Bank (T) Ltd v Kilimanjaro Coffee Company Limited;[84] and William Mungai v Cosatu Chumi.[85] Some of these cases focused on the issue of the authenticity of the electronic evidence.[86]

Controversies have emerged in the courts as to whether a signature is essential in determining the reliability of data messages as evidence. In Ami Tanzania Limited v Prosper Joseph Msele,[87] the Court of Appeal of Tanzania held that a signature is not required under section 18 of ETA to fulfil data message reliability requirements. However, in Stanley Murithi Mwaura v R,[88] the Court of Appeal held that the admissibility of electronic evidence depends on fulfilling the requirements[89] of proving the reliability of the data message as stipulated in s18(2) of the ETA. As held in many cases in the High Court, the reliability of a data message can be proved by showing that the manner through which the

79 G.N. No. 228 of 2016.
80 ETA s33.
81 ETA s46; TEA s64A.
82 [2002] TLR 144.
83 Commercial Case No.10 of 2008, HC Commercial Division at Arusha (Unreported).
84 Commercial Case No. 29 of 2011 (HC Commercial Division at Dar es salaam) (Unreported).
85 Election Petition No.8 of 2015 (HC at Iringa) (Unreported).
86 Each of these cases is discussed in Ubena John, ‘Legal issues surrounding the admissibility of electronic evidence in Tanzania’, 18 Digital Evidence and Electronic Signature Law Review (2021) 56-67.
87 Civil Appeal No. 159 of 2020, Court of Appeal of Tanzania at Dar es salaam (Unreported) (judgment delivered on 11 November 2021).
88 Criminal Appeal No. 144 of 2019 Court of Appeal of Tanzania at Dar es salaam (Unreported) (decided on 22 November 2021).
89 Holding that they are ‘requirements’ may be controversial as these are ‘attributes’ that ought to be considered.

-----

message was created, stored, or communicated was reliable, or how the originator was identified was reliable. These may be partly achieved by using a digital signature, because it has a capacity to provide for the confidentiality, integrity, and authenticity of the data, although using a digital signature will not provide for absolute certainty, because of the weakness of the IT systems.[90]

### Electronic and Postal Communications Act

The Electronic and Postal Communications Act, Act No. 3 of 2010 (EPOCA) provides for the functions of the TCRA. It provides for a licensing framework for electronic communications service providers. It also empowers the TCRA to regulate standards and competition in electronic communications. The EPOCA provides for the Computer Emergency Response Team (CERT). The team is charged with a duty to investigate internet security issues in Tanzania, including identifying criminal activities and malicious code.

### Tanzania Communications Regulatory Authority Act

The Tanzania Communications Regulatory Authority Act, Act No. 12 of 2003 (TCRA) established the post of Regulator of Communications.[91] The main functions of the TCRA are to regulate the communications sector with the aim of guaranteeing the availability of communications services, interconnection, interoperability, and competition. However, because the TCRA has many duties to perform (under EPOCA, TCRA, CCA, etc.)
it is debatable whether it has the capacity to regulate PKI digital signatures effectively under the ETA.[92] ### The e-Government Act, 2019 To implement electronic government in Tanzania, the legislature in 2019 enacted the e-Government Act,[93] although the adoption of ICT for the provision of public services started earlier. It was in 2009 that the government issued a circular dated 9 October 2009 on the use of ICT in public services. The circular was issued by the Permanent Secretary, in the Ministry of President’s Office – Public Service. It provides, among other things, for the proper and secure use of ICT in government services. There are also the e-Government General Regulations.[94] The overall purpose of the Regulations is to implement electronic government in Tanzania. In 2012 under the Executives Agencies Act of 1997,[95] the Electronic Government Agency (eGA) was established as a semi-autonomous agency. The agency later became the e-Government Authority regulating the development and use of e-government systems. The e-Government Act provides for the authority to coordinate, oversee, and promote e-government initiatives and enforce e-government related policies, laws, regulations, standards, and guidelines in public institutions.[96] Another function of the eGA is to establish and maintain a secure shared government ICT infrastructure and systems.[97] A good example is the development of the e-office, explained below. It further develops mechanisms for the enforcement of ICT Security standards and guidelines, the provision of support for ICT security operations, and implementation of government wide cyber security strategies.[98] The eGA has been instrumental in developing various ICT systems and applications for the government and public services generally. For example, the government mailing system, electronic office (e-office) management system, land use and management system, etc. #### The Electronic Government Authority and PKI signature The above laws and the Electronic Government Authority (eGA) played a significant role in operationalization of digital (PKI) signature in government and public service in Tanzania. In 2016, the eGA was charged with a task to 90 The weakness of software is discussed in depth, with numerous examples, in Mason, Chapter 5 ‘The presumption that computers are “reliable’’’ in Mason and Seng, Electronic Evidence and Electronic Signatures, and the proof that digital signatures can be undermined and forged is discussed at 7.254. 91 TCRA Act, 2003 s4. 92 ETA s34. 93 Act No. 10 of 2019. 94 e-Government General Regulations of 2020, G.N. No. 70 published on 7 February 2020. 95 [Cap 245 R.E. 2002]. 96 The e-Government Act s5. 97 The e-Government Act s5. 98 The e-Government Act s5. ----- **Implementing the electronic signature law in Tanzania** develop the electronic office (e-office) management system.[99] The development of the system was completed in September 2017. This system has minimized the use of paper in government offices. It is now possible to manage files electronically. All government entities are required to use the government mailing system and be connected to the government network – GovNet – to use the e-office management system.[100] The e-office management system also uses the digital (PKI) signature. The eGA is the Certification Authority.[101] This PKI infrastructure is called the eGov PKI and Digital Signature Management System (DSMS).[102] The users are chief executives and the officers managing government registries. 
The PKI adopted the X.509 standard.[103] Under the eGA PKI and DSMS, there is registration authority which receives digital certificate requests from a particular government entity. It verifies the identity of the requestor and approves it. Thereafter, the approved request is forwarded to the Certification Authority (eGA).[104] While the above PKI and DSMS works fine, no guidelines have been issued on the rights, duties and liabilities of parties involved in the PKI cycle. Neither the eGA nor TCRA has issued the certificate practice statement. ### Weaknesses in the existing laws Among the major defects of the current laws relevant to PKI in Tanzania is the failure to address the rights, duties and liabilities of the parties involved in PKI. For instance, the ETA and G.N. No. 228 do not provide for the rights, duties, and liabilities[105] of the main participants in the PKI framework, although it does provide for the liability of the electronic signature on the relying party in s12: A person who relies on an electronic signature shall bear the legal consequences of failure to take reasonable steps to verify the (a) authenticity of an electronic signature; or (b) validity of a certificate or observe any limitation with respect to the certificate where an electronic signature is supported by a certificate. It should be noted that this provision merely reinforces the need for the relying party to satisfy themselves that the signature is of the person who it claims to be. The relying party has always had the burden of proving a signature is not a forgery. Another weakness is the provision of non-exhaustive elements of PKI in the regulations (G.N. No. 228). Some elements such as registration authority, repository, validation authority, subscriber, certificate policy and subscriber (and relying party) agreement are excluded. Admittedly, these may be included in the certificate policy or certificate practice statement. The lack of provision for the vetting or verification of PKI signature users is a shortcoming. Although the Electronic and Postal Communications Act[106] requires the registration of SIM cards, there are no requirements for the registration of laptops or desktop computers. Moreover, there is no system for registration of internet users. Similarly, the G.N. No. 228 deals with the registration and licensing of cryptographic and certification service providers and not PKI signature users. [99 The system website is at http://eoffice.gov.go.tz.](http://eoffice.gov.go.tz/) 100 The eGA and Department of Archives and Records, training material on e-office (Mfumo wa Ofisi Mtandao), May 2022 (unpublished) (on file with the author). [101 For details on systems developed by eGA see https://www.ega.go.tz/e-services/government-to-government-g2g.](https://www.ega.go.tz/e-services/government-to-government-g2g) 102 The eGA and Department of Archives and Records, training material on e-office, May 2022 (unpublished) (on file with the author). 103 The eGA training material on Digital signatures, (unpublished) (on file with the author). 104 The eGA training material on Digital signatures, (unpublished) (on file with the author). 105 These may however be stated in the certificate practice statement. 106 Act No.3 of 2010. ----- **Implementing the electronic signature law in Tanzania** Moreover, there is a lack of legal provisions to establish an institution to undertake the verification of PKI signature users. Both the TCRA Act and EPOCA are silent on this point. 
Without verifying a user, there is a risk that cybercriminals can use the service. Furthermore, current agreements between vendors, Certification Authorities (CAs) and users seem to be self-regulating. They are unregulated by the relevant authorities. Neither the ETA nor the G.N. No. 228 regulates these agreements. It is unclear whether the regulator (TCRA) under G.N. No. 228 is responsible. Without such regulation, the interests of consumers and end users may be at stake. The terms contained in the agreements may be used by the CAs to exempt themselves from liability.

The above discussion makes it clear that the regulations do not include the issues raised. It is possible that these matters might be covered under the provisions of ETA s11, where it provides that before the grant of a licence, the regulator has to approve a certification practice statement – all of the issues raised in the four paragraphs above can (and should) be covered in the certification practice statement. What seems to be missing is a guide on certificate policies and certificate practice statements, as with the Australian framework.

Additionally, although the accreditation process or licensing is covered in the regulations, it remains unclear who has been granted licences or has been accredited to provide cryptographic and certification services. The media outlets, including the TCRA website, are silent on this. Thus, there is no evidence that the PKI framework has been implemented except for the eGA PKI and DSMS that is used as the digital authentication framework in the e-Office Management System.

An additional problem is the lack of a privacy and data protection law in Tanzania. A privacy and data protection law ought to be enacted because the absence of such a law jeopardizes the security of PKI signature users’ personal data. Arguably, the cryptographic and certification service providers should be subject to appropriate legally binding requirements regarding privacy and data protection in association with the use of their services and the technologies used. This suggestion is informed by the fact that even the EU eIDAS has recognized the need to embrace and employ data protection rules and principles within the electronic signature framework.[107] Similarly, the Australian Gatekeeper PKI Framework has included a privacy impact assessment component which is drawn from the Privacy Amendment (Enhancing Privacy Protection) Act 2012. The Act sets out standards, rights, and obligations for the handling, holding, accessing and correction of personal information (including sensitive data).[108]

Moreover, there is an absence of an information security policy, although a National Information and Communications Technology Policy does exist.[109] We believe that is not enough. There ought to be a National Information Security Policy that will provide a vision and strategies of the Tanzania government on information security. There is a need for a clearly established or accredited institution to deal with authentication frameworks. To that end Tanzania may borrow a leaf from other countries such as Australia, where the Digital Transformation Office is responsible for regulating all government authentication frameworks,[110] and South Africa’s SAPO and LAWTRUST.
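The identity-vetting gap discussed above is visible in the structure of the X.509 certificates that the eGov PKI adopts: a certificate merely binds a public key to whatever identity the registration authority recorded. A minimal sketch of reading those identity fields, again assuming the Python `cryptography` library and a hypothetical certificate file name, follows.

```python
from cryptography import x509

# "officer.pem" is a hypothetical PEM-encoded certificate file.
with open("officer.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

# The Subject is the identity the registration authority claims to have
# verified; the Issuer identifies the certification authority that signed
# the certificate and thereby vouches for that binding.
print("Subject:", cert.subject.rfc4514_string())
print("Issuer: ", cert.issuer.rfc4514_string())
print("Valid:  ", cert.not_valid_before, "to", cert.not_valid_after)
```

None of these fields is self-proving: their evidential weight depends entirely on how carefully the registration and certification authorities vetted the subscriber, which is precisely the process the current regulations leave undefined.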
The tasks of managing the government digital authentication frameworks, including the PKI, have been given to the Electronic Government Authority in accordance with the e-Government Act.[111] There is a need to accredit companies to provide a digital authentication framework for the private sector. There seems to be no encumbrance on this because any private company may apply to TCRA, which is the accreditation entity for issuing a licence to provide cryptography and certification services. Nevertheless, even prior to the implementation of the ETA regulations, private commercial banks, owners of online shops and online marketplaces appear to have offered cryptography and

107 Article 5 of the eIDAS provides for the application of the Consolidated text: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) (Text with EEA relevance), OJ L 119 4.5.2016, p. 1, Corrigendum, OJ L 127, 23.5.2018, p. 2 ((EU) 2016/679), especially with respect to data processing and protection.
108 Section 8.2 of Gatekeeper PKI Framework.
109 https://www.ega.go.tz/uploads/publications/sw-1574848612-SERA%202016.pdf.
110 Australia DTO manages all government authentication frameworks, at https://www.dta.gov.au/news/dto-now-manage-government-authentication-frameworks.
111 See The e-Government Act s5.

-----

certification services. But for legal validity, admissibility, and enforceability of an electronic signature, regardless of who manages the authentication framework, the requirements set under the ETA must be observed.[112]

### Adjusting to PKI

As has been observed in the discussion above, there are several gaps in the existing laws that support PKI and NPKI. The identified gaps ought to be addressed if the operationalization of the NPKI is to be successful. The following measures are recommended.

One, a law needs to be enacted to make provision for the rights, duties, obligations, and liabilities of the parties involved in PKI. This may be achieved by amending G.N. No. 228. It is essential for the rights, duties and liabilities of the parties involved in the PKI cycle to be defined. This may also be achieved by a certificate practice statement. Without setting out these rights, duties and liabilities, the NPKI might never be put into operation.[113]

Two, the amendment to G.N. No. 228 should, ideally, include the following: registration authority, repository, validation authority, subscriber, certificate policy and subscriber (and relying party) agreement. Moreover, the regulations should be reformed to provide for the registration and digital authentication agencies (Bank of Tanzania and Tanzania Posts Corporation, as rightly suggested in the NPKI consultancy report). If that is not viable at the time of writing, then more private companies should be encouraged to apply to TCRA for accreditation or licensing for the provision of digital authentication or cryptography and certification services.

Three, the G.N. No. 228 should further provide for proofing and vetting or verification of PKI signature subscribers and users. The identity verification of electronic signature users is essential. Without such identity proofing, the key holders may not be known.
Four, it is important to amend the Bank of Tanzania Act 2006 and Tanzania Posts Corporation Act.[114] G.N. No. 228 should include legal provisions to establish which institution is to undertake verification or identity proofing of PKI signature subscribers. This is like the role of the SAPO Trust Centre in South Africa. In Tanzania, the users’ identity verification process may be carried out by the Bank of Tanzania and Tanzania Posts Corporation. If that were to be the case, the laws establishing these institutions would need to be amended to provide for such a role. While the authentication of digital transactions is an important factor for the prosperity of electronic commerce and electronic government transactions, one may wonder about the readiness of the Tanzania Posts Corporation to assume such a role, or whether there should be new institutions accredited to support implementation of cryptography and certification services. Although there may seem to be bias in suggesting the use of Tanzania Posts for this purpose, Tanzania Posts has a vast network and an online shop,[115] which might be useful in establishing these types of service. This would not restrict the adoption of PKI by other organizations. Although the BoT and Tanzania Posts Corporation were considered to be in a better position to manage the government’s PKI and Digital Signature Management System (DSMS),[116] it was the eGA that developed and manages it.[117] Additionally, it has developed many other e-government systems. Intriguingly, the eGA is a regulatory authority whose function as the regulator may be reconsidered if it concentrates on developing systems instead of regulating others to develop and use the e-government systems. It is not too late to apportion and align the roles in developing and regulating the digital authentication framework for public entities.

Five, the enactment of a Privacy and Data Protection Act is equally important. A privacy and data protection law should be enacted to secure the privacy and personal data of PKI signature users. This law should aim to impose obligations on cryptographic and certification service providers to adopt strategies to secure the privacy of users. As evidenced in countries such as South Africa, the operation of NPKI involves the massive use of personal data. Thus, a privacy and

112 ETA s7.
113 Wikipedia, certificate policy, available at https://en.wikipedia.org/wiki/Certificate_policy; see also RFC 2527; S. Chokhani and W. Ford (November 2003) Internet X.509 Public Key Infrastructure Certificate Policy and Certification Practices Framework at https://datatracker.ietf.org/doc/html/rfc3647#page-16. See also section 4.3 of the Gatekeeper PKI Framework.
114 [Chapter 303 R.E. 2019].
115 See Tanzania Posts online shop at https://www.postashoptz.post/.
116 It can safely be stated that eGA took advantage of its mandate given under the e-Government Act s5.
117 It derived its mandate from the e-Government Act s5.

-----

data protection law, if enacted, will help to set the parameters on the use of such personal data in the NPKI framework in Tanzania. Although there has been delay in enacting such an act, a Bill has already been drafted. What is unclear, though, is when it will be enacted into law.
Except for point five, and as an alternative to some of the changes to the laws suggested at points one to four, it might be possible to adopt a Certification Policy and Certification Practice Statement similar to the Australian Gatekeeper PKI Framework. A similar result can also be achieved via a certification practice statement. Tanzania may draw lessons from the Gatekeeper PKI Framework of Australia. Even though the framework was meant for government agencies, private organisations are not precluded from using it as a model. Six, the formulation of a National Information Security Policy. There ought to be a National Information Security Policy that will provide a vision and strategies for the Tanzanian government on information security. The cryptographic and certification providers will equally be required to have in place information security documentation which will indicate their risk management approach. The regulator may be empowered to impose a penalty on providers who lack adequate information security documentation. For government entities, information security issues are addressed by the e-Government Act, whose implementation is through the eGA.[118] ### Conclusion This article has examined the implementation of the electronic signature law in Tanzania and identified several gaps in the laws. Suggestions have been made to remedy the lacunae. Changes can be made swiftly via the relevant regulatory authorities – although it will be necessary to provide for greater certainty via changes in the law. The recommendations in this article are offered in the diligent hope that those in government acknowledge the need to act, and to act swiftly. © Ubena John, 2022 **Ubena John** is a Judge in the High Court of Tanzania, and senior lecturer at the Faculty of Law, Mzumbe University, Tanzania. jubena@mzumbe.ac.tz 118 The e-Government Act s5 & ss36-46.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.14296/deeslr.v19i0.5467?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.14296/deeslr.v19i0.5467, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GOLD", "url": "https://journals.sas.ac.uk/deeslr/article/download/5467/5231" }
2022
[]
true
2022-10-10T00:00:00
[ { "paperId": "9e84cd1e7dfe1f071940b9073ebcb0c4e281ebf4", "title": "Unauthorized use of bank cards with or without the PIN: a lost case for the customer?" }, { "paperId": "dea5befe63ac09ae18e5d076f5d857530253aa79", "title": "Vulnerable Software: Product-Risk Norms and the Problem of Unauthorized Access, co-authored with Robert Sloan" }, { "paperId": "c79d4e3991c056ec787975b1868bd95871331164", "title": "Internet X.509 Public Key Infrastructure Certificate Policy and Certification Practices Framework" }, { "paperId": null, "title": "It derived its mandate from the e-Government Act s5" }, { "paperId": null, "title": "Are the customers' rights protected against fraud in mobile banking in Tanzania: a review of laws and practice" }, { "paperId": null, "title": "It can safely be stated that eGA took advantage of its mandate given under the e-Government Act s5" }, { "paperId": null, "title": "High Court of Tanzania at Mwanza (unreported)" }, { "paperId": null, "title": "255 -262; see also the helpful advice offered to members of the judiciary in two important papers" }, { "paperId": null, "title": "Debit cards, ATMs and negligence of the bank and customer" }, { "paperId": null, "title": "Trust\" Between Machines? Establishing Identity Between Humans and Software Code, or whether You Know it is a Dog, and if so which Dog?" } ]
15,879
en
[ { "category": "History", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Linguistics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01af3e512ef0d7cec8722b2a2290346e7d690d39
[]
0.905632
Medieval manuscripts and their migrations: Using SPARQL to investigate the research potential of an aggregated Knowledge Graph
01af3e512ef0d7cec8722b2a2290346e7d690d39
Digital Medievalist
[ { "authorId": "2060463635", "name": "H. Wijsman" }, { "authorId": "145536615", "name": "Toby Burrows" }, { "authorId": "100841083", "name": "L. Cleaver" }, { "authorId": "49613615", "name": "Doug Emery" }, { "authorId": "2930307", "name": "E. Hyvönen" }, { "authorId": "2106653", "name": "M. Koho" }, { "authorId": "145022714", "name": "Lynn Ransom" }, { "authorId": "47586165", "name": "E. Thomson" } ]
{ "alternate_issns": null, "alternate_names": [ "Digit Médiév" ], "alternate_urls": [ "https://journal.digitalmedievalist.org/", "http://www.digitalmedievalist.org/journal/" ], "id": "a9bd1972-c0be-4c24-a14d-e6144ef03137", "issn": "1715-0736", "name": "Digital Medievalist", "type": null, "url": "https://digitalmedievalist.wordpress.com/" }
Although the RDF query language SPARQL has a reputation for being opaque and difficult for traditional humanists to learn, it holds great potential for opening up vast amounts of Linked Open Data to researchers willing to take on its challenges. This is especially true in the field of premodern manuscript studies as more and more datasets relating to the study of manuscript culture are made available online. This paper explores the results of a two-year-long process of collaborative learning and knowledge transfer between the computer scientists and humanities researchers from the Mapping Manuscript Migrations (MMM) project to learn and apply SPARQL to the MMM dataset. The process developed into a wider investigation of the use of SPARQL to analyse the data, refine research questions, and assess the research potential of the MMM aggregated dataset and its Knowledge Graph. Through an examination of a series of six SPARQL query case studies, this paper will demonstrate how the process of learning and applying SPARQL to query the MMM dataset returned three important and unexpected results: 1) a better understanding of a complex and imperfect dataset in a Linked Open Data environment, 2) a better understanding of how manuscript description and associated data involving the people and institutions involved in the production, reception, and trade of premodern manuscripts needs to be presented to better facilitate computational research, and 3) an awareness of the need to further develop data literacy skills among researchers in order to take full advantage of the wealth of unexplored data now available to them in the Semantic Web.
Burrows, Toby, Laura Cleaver, Doug Emery, Eero Hyvönen, Mikko Koho, Lynn Ransom, Emma Thomson, and Hanno Wijsman. 2022. "Medieval Manuscripts and Their Migrations: Using SPARQL to Investigate the Research Potential of an Aggregated Knowledge Graph." _Digital Medievalist_, 15(1): 3, pp. 1–48. DOI: https://doi.org/10.16995/dm.8064 # Medieval Manuscripts and Their Migrations: Using SPARQL to Investigate the Research Potential of an Aggregated Knowledge Graph **Toby Burrows**, University of Oxford, UK, toby.burrows@oerc.ox.ac.uk **Laura Cleaver**, University of London, UK, laura.cleaver@sas.ac.uk **Doug Emery**, University of Pennsylvania Libraries, US, emery@pobox.upenn.edu **Eero Hyvönen**, University of Helsinki, FI, eero.hyvonen@aalto.fi **Mikko Koho**, University of Helsinki & Aalto University, FI, mikko.koho@aalto.fi **Lynn Ransom**, University of Pennsylvania Libraries, US, lransom@upenn.edu **Emma Thomson**, University of Pennsylvania Libraries, US, emmacaw@upenn.edu **Hanno Wijsman**, Institut de recherche et d'histoire des textes (CNRS), FR, hannowijsman@gmail.com Although the RDF query language SPARQL has a reputation for being opaque and difficult for traditional humanists to learn, it holds great potential for opening up vast amounts of Linked Open Data to researchers willing to take on its challenges. This is especially true in the field of premodern manuscript studies as more and more datasets relating to the study of manuscript culture are made available online. This paper explores the results of a two-year-long process of collaborative learning and knowledge transfer between the computer scientists and humanities researchers from the Mapping Manuscript Migrations (MMM) project to learn and apply SPARQL to the MMM dataset. The process developed into a wider investigation of the use of SPARQL to analyse the data, refine research questions, and assess the research potential of the MMM aggregated dataset and its Knowledge Graph. Through an examination of a series of six SPARQL query case studies, this paper will demonstrate how the process of learning and applying SPARQL to query the MMM dataset returned three important and unexpected results: 1) a better understanding of a complex and imperfect dataset in a Linked Open Data environment, 2) a better understanding of how manuscript description and associated data involving the people and institutions involved in the production, reception, and trade of premodern manuscripts needs to be presented to better facilitate computational research, and 3) an awareness of the need to further develop data literacy skills among researchers in order to take full advantage of the wealth of unexplored data now available to them in the Semantic Web. _Digital Medievalist_ is a peer-reviewed open access journal published by the Open Library of Humanities. © 2022 The Author(s). This is an open-access article distributed under the terms of the Creative Commons Attribution 4.0 International License (CC-BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited. See http://creativecommons.org/licenses/by/4.0/.
## 1 Introduction §1 The primary goals of the Mapping Manuscript Migrations (MMM) project (for the project blog, see: http://blog.mappingmanuscriptmigrations.org/; technical descriptions and publications of the project are available at: https://mappingmanuscriptmigrations.org/en/), funded by the Digging into Data Challenge of the Trans-Atlantic Platform between 2017 and 2020, were to bring together data relating to the history and provenance of medieval and Renaissance manuscripts and to explore the research potential of the aggregated dataset. Based on the Linked Data publishing model (Heath and Bizer 2011) and the W3C Semantic Web standards and technologies (https://www.w3.org/standards/semanticweb), including Universal Resource Identifiers (URI), the RDF data model, ontologies (Staab and Studer 2009), and the SPARQL query language for querying RDF data (SPARQL recommendation of the W3C: https://www.w3.org/TR/sparql11-query/), the project resulted in establishing a Linked Open Data (LOD) service and a public MMM portal (available at: https://mappingmanuscriptmigrations.org). The data and the portal (Hyvönen et al. 2021) allow users to access and query across three distinct datasets, each focusing on premodern manuscript data but built to serve three different purposes: the University of Pennsylvania's _Schoenberg Database of Manuscripts_, the Institut de recherche et d'histoire des textes' _Bibale_ database, and the Bodleian Library's online catalogue _Medieval Manuscripts in Oxford Libraries_ (respectively, https://sdbm.library.upenn.edu; https://bibale.irht.cnrs.fr; and https://medieval.bodleian.ox.ac.uk). The MMM project also made the transformed datasets (for a full report on MMM data modelling and transformation from legacy databases, see Koho et al. 2021) available for direct searching and downloading on the Zenodo repository (https://zenodo.org/record/4019643). §2 The work of modelling, combining, and presenting the MMM data was carried out by project team members from the e-Research Centre at Oxford University and the Semantic Computing Research Group at Aalto University, and was based on a series of twenty-four research questions determined at the outset by the project's manuscript researchers at the IRHT and the Schoenberg Institute for Manuscript Studies, as well as by members of a focus group gathered in the early stages of the project. The questions were designed to serve as examples of the kinds of inquiries that researchers would want to make, in order to identify for the data modelling team the key data points researchers would want to access and query. They were also used to analyze and test the data model and the viability of the aggregated data, and were then used in the evaluation of the public MMM portal (Burrows et al. 2020).
To these ends, the original research questions were fundamental to the shaping and successful implementation of the project. §3 While the launch of the MMM LOD service and portal marked the formal end of the project, for the MMM project team it represented a path to a new frontier for research. The portal, based on the Sampo model (the Sampo model and series of semantic portals are described in: https://seco.cs.aalto.fi/applications/sampo/) and the Sampo-UI framework (Ikkala et al. 2021), with its search, data exploration, and data analysis functionalities, is an interface that lies between the users and the underlying RDF data. The portal can be used without programming skills or knowledge of the SPARQL language. The user can choose from five perspectives—Manuscripts, Works, People, Places, and Events—that provide easy entrée into the dataset from different perspectives and facilitate searching and analyzing the data for users new to Linked Data. The perspectives are implemented using SPARQL queries to the underlying LOD service that mediate, but also ultimately limit, users' ability to query the data flexibly, extensively, and expansively. The perspectives are grounded in traditional research questions that were created outside of a computational context and are therefore not suited to take full advantage of the data model they helped to create. The really interesting data digging happens when the user confronts the RDF data directly via the SPARQL endpoint, using custom-made SPARQL queries to solve particular research questions. For this purpose, SPARQL editors such as YASGUI (Rietveld and Hoekstra 2017) can be used, or alternatively programming environments such as Google Colab (https://colab.research.google.com/notebooks/intro.ipynb) and Jupyter notebooks (https://jupyter.org) for Python scripting for visualizations and data analyses based on SPARQL queries. §4 This paper explores this process as it was undertaken by members of the project team, the primary authors of the present article, who participated in a two-year-long process of collaborative learning and knowledge transfer between computer scientists and humanities researchers. The process developed into a wider investigation of the use of SPARQL to analyze the data, explore broader types of research questions, and assess the research potential of the MMM aggregated dataset and its Knowledge Graph. Through an examination of a series of six SPARQL query case studies, we will show that the more adept we became at querying, the better we understood that the scope of the original research questions had fallen short of both the abilities and the potential of the MMM data to create new knowledge about the production and transmission of manuscripts across time, and that a new approach to research questions would produce better and more transparent results. In addition to analyzing the queries themselves, we will also show what the case studies reveal about the structure and contents of the MMM data, and how lacunae in the data (especially around biographical details of persons) can be compensated for by drawing in information from other Linked Open Data resources like Wikidata.
## 2 The research questions §5 Before turning to the SPARQL case studies, it is useful to provide further background on the development of the original research questions, to provide context and highlight some of the key problems they presented when applied to the aggregated dataset. A research question is typically understood to be a question that a research project seeks to answer. Identifying a research question or set of questions is generally one of the first steps in developing the methods and techniques for scholarship, whether that scholarship is traditional or digital, because it provides a basis and a goal for starting work. The MMM research questions were based on the team's pre-existing knowledge of each dataset, but they also represented a set of expectations for what manuscript researchers might want to know about manuscripts in general (Table 1).

1. How many manuscripts produced before 1600 in European countries survive?
2. How many manuscripts were produced in Northern Italy and/or Lombardy?
3. How many manuscripts were produced in the Low Countries?
4. How many manuscripts were produced in London in the fifteenth century?
5. How many manuscripts formerly owned by Sir Thomas Phillipps are in British libraries?
6. What is the average number of folios in a book of hours?
7. How many surviving manuscripts that contain Spanish texts written in gothic rotunda were produced in Castile for an abbey or convent? How many were owned during the nineteenth century by English private collectors? Which of these are now owned by an institution in North America?
8. What French collectors purchased manuscripts since the end of the Wars of Religion (after 1598)? Where are their manuscripts now?
9. How many manuscripts containing texts by Ramon Llul were sold in the 19th century?
10. Who collects manuscripts with texts by Ramon Llul?
11. How many times do texts by Ramon Llul appear with texts by Albertus Magnus in the same manuscript?
12. What was the most popular text by a medieval author in France in the seventeenth century?
13. Did Sir Thomas Phillipps own a thirteenth-century bible with historiated initials?
14. How many illuminated manuscripts were in a specific collection?
15. Who are the donors and owners of a collection?
16. Research by subject, technique, language, artist, even the use of pigments in a collection?
17. Details of a collection (subject, technique, place of production, etc.)? What are its gaps? What are its dominant features?
18. Life of a collection, or of an illuminated book?
19. Which manuscripts have probably been lost?
20. Which manuscript has been sold and can no longer be identified as part of a collection today?
21. Which copies of a text are illuminated?
22. What position does a copy of a text occupy in its transmission? Are there unique exemplars of works?
23. What are the surviving versions of a work? Who made a French translation of an old text? When?
24. What are the different surviving publications [copies] of a text (date, place of production, person(s) responsible, etc.)?

**Table 1: Mapping Manuscript Migrations Original Research Questions.** This list is also referenced in Burrows et al. (2020). Questions 14 to 24 were borrowed from the _Biblissima_ project's list of research questions, available here: https://doc.biblissima.fr/ontologie-biblissima#m%C3%A9thodologie.

§6 The questions were designed to include different levels of complexity to test how well results could be retrieved. Simple questions such as 1–6 are based on elements easily identified across all datasets. For example, Questions 1 and 2 require results to be filtered by only one element: by date (before 1600) and by place (Northern Italy and Lombardy) respectively. The remaining questions introduce more complexity. For many of these, simply adding more elements elevated the level of complexity. For example, Question 7, "How many surviving manuscripts that contain Spanish texts written in gothic rotunda were produced in Castile for an abbey or convent?", requires five data elements: language, script type, place of production, former owner, and institution type. §7 The questions provided a template of data elements for the data model development and helped to define the semantic relationships among the elements that would need to be encoded within the model. But were they good research questions in the sense defined above? Testing them against the RDF in the SPARQL endpoint revealed structural weaknesses in the questions. As the case studies will show, these included semantic ambiguity and misleading assumptions about certain data elements, or about what the combined datasets were capable of answering. §8 A successful answer to a research question depends on how well the methods and techniques determined to answer that question are developed and applied to the research process. A successful answer will also depend on how well the research data is understood by those posing the question and how well the question can be mapped to the underlying data model. Querying the dataset using SPARQL exposed the difficulties arising from questions that had too much ambiguity to make computational querying possible, or that were based on flawed assumptions made by users about the abilities of the data to return the expected results. Gaining an awareness of these problems also helped the team refine the questions as their understanding of the available evidence and nature of the data increased.
Querying the dataset using SPARQL exposed the difficulties arising from questions that had too much ambiguity to make computational querying ----- possible or that were based on flawed assumptions made by users about the abilities of the data to return the expected results. Gaining an awareness of these problems also helped the team refine the questions as their understanding of the available evidence and nature of the data increased. ## 3 SPARQL query language §9 SPARQL is the query language designed for data that conform to the RDF model, and hence is a key component of Semantic Web and LOD services and platforms (DuCharme 2013). SPARQL queries follow the pattern of RDF triples, in that they are expressed in the “subject–predicate–object” pattern. Queries are usually run against a SPARQL endpoint exposed by a triple store. Multiple namespaces can be queried in the same query; so can multiple SPARQL endpoints. Some Linked Open Data triple stores containing humanities data offer a public SPARQL endpoint, such as the Getty [Vocabularies endpoint and the Wikidata endpoint (http://vocab.getty.edu/sparql;](http://vocab.getty.edu/sparql) [https://query.wikidata.org/sparql).](https://query.wikidata.org/sparql) §10 SPARQL has something of a reputation for being difficult to learn, however, and appears to have been little used by humanities researchers—or at least rarely promoted to them as an active tool for digital humanities projects (Schweizer and Geer 2021). There are few previous specific evaluations of SPARQL in a digital humanities setting. (One exception is: Ichinose et al. 2014. SPARQL is only mentioned briefly in: Meroño Peñuela et al. 2015.) The best available resource for humanities researchers interested in learning SPARQL is the 2015 tutorial by Matthew Lincoln on the Programming Historian Website (Lincoln 2015). This site, however, has been officially “retired”; the examples depended on the British Museum’s SPARQL endpoint to its Collections database which is no longer reliably available. Lincoln (2014), an earlier but much shorter introduction to SPARQL by Lincoln, uses Europeana as its basis. §11 As noted above, the MMM team became interested in exploring different approaches to the aggregated data that went beyond the functionality of the public portal. Guided by the expertise of Semantic Web specialists from Aalto University, the project team conducted a weekly online SPARQL training workshop over the course of two years (May 2019–May 2021). During these sessions, the specialists were able to transfer knowledge to the humanists and in return the humanists provided insight into the research process for the Semantic Web specialists. The MMM project has also published its own introductory tutorial for using SPARQL queries with the MMM data [(https://mapping-manuscript-migrations.github.io/sparql/sparql_tutorial.](https://mapping-manuscript-migrations.github.io/sparql/sparql_tutorial.html) [html).](https://mapping-manuscript-migrations.github.io/sparql/sparql_tutorial.html) ----- ## 4 The MMM data model and knowledge graph §12 The MMM data model, which draws on the CIDOC-CRM (Doerr 2003; for the CRM standard online, see: [http://www.cidoc-crm.org/) and FRBRoo (Riva, Doerr,](http://www.cidoc-crm.org/) and Žumer 2009) ontologies for its entity classes and properties but also adds some specific to MMM, has been discussed in detail elsewhere (Koho et al. 2021, 4–10). 
## 4 The MMM data model and knowledge graph §12 The MMM data model, which draws on the CIDOC-CRM (Doerr 2003; for the CRM standard online, see: http://www.cidoc-crm.org/) and FRBRoo (Riva, Doerr, and Žumer 2009) ontologies for its entity classes and properties but also adds some specific to MMM, has been discussed in detail elsewhere (Koho et al. 2021, 4–10). It was constructed mainly by inspecting and comparing the different data models used by the three data sources, with additional verification from the twenty-four MMM research questions. It is used to structure the MMM Knowledge Graph, which contains the following entities (as of January 2021):

- 222,605 manuscripts
- 435,428 works and expressions
- 56,685 actors (persons and organizations)
- 5,077 places
- 937,158 events

A significant number of resources in the MMM Knowledge Graph (primarily actors and places) are linked to external authorities. These links originate from the source datasets and from the work done in the MMM project to add shared identifiers to resources in the source datasets for reconciliation purposes. External linkages, in addition to resource-level links to the original source datasets, include:

- 15,868 links to VIAF
- 4,617 links to Wikidata
- 4,311 links to data.bnf.fr
- 4,236 links to the Getty TGN
- 3,470 links to the ISNI database
- 3,066 links to the German national library catalogue
- 3,060 links to IdRef (Identifiers and Referentials)
- 2,572 links to The Library of Congress Linked Data Service
- 1,909 links to the Bibliothèque nationale de France catalogue

The vocabularies for actors and places were automatically harmonized across the source data using these identifiers. Manuscripts were harmonized using shelf-marks or Phillipps numbers (assigned by the 19th-century collector Thomas Phillipps). The names of works were harmonized by manual review of string matching on titles; this only covered titles in the same language, not translated titles in other languages. §13 A temporal distribution of the events in the MMM data by decades is shown in Figure 1, with separate categories for (1) manuscript production events (ecrm:E12_Production), (2) manuscript observations (ecrm:E10_Transfer_of_Custody and mmms:ManuscriptActivity), and (3) all other events. Only events with an associated timespan are visualized, which accounts for 22.5% of all events. Some events span multiple decades, in which cases an event is counted for each decade. The data are skewed by manuscript survival, cataloguing practices, and most of all by what is catalogued and included in the databases. The SPARQL query used is as follows: https://api.triplydb.com/s/OYKNfOimm. **Figure 1: Distribution of events in MMM data, by decades.**
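The saved query at that link is canonical. The sketch below reconstructs only its core idiom, binning dated events into decades; the timespan property path is an assumption (ecrm:P4_has_time-span with the CRM extension property ecrm:P82a_begin_of_the_begin), since the prose does not spell it out, and it presumes the begin values are typed as dates.

```sparql
PREFIX ecrm: <http://erlangen-crm.org/current/>   # assumed prefix URI

SELECT ?decade (COUNT(?event) AS ?events)
WHERE {
  # Dated production events; the timespan path below is an assumption
  ?event a ecrm:E12_Production ;
         ecrm:P4_has_time-span/ecrm:P82a_begin_of_the_begin ?begin .
  # Bin the starting year into a decade, e.g. 1473 -> 1470
  BIND (FLOOR(YEAR(?begin) / 10) * 10 AS ?decade)
}
GROUP BY ?decade
ORDER BY ?decade
```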
§14 One of the important lessons learned from the SPARQL workshops was the necessity of understanding the underlying RDF data model and the semantic links between the data elements in order to perform functional queries. In his explanation of RDF, Joshua Tauberer notes: "What is meant by 'semantic' in Semantic Web is not that computers are going to understand the meaning of anything, but that the logical pieces of meaning can be mechanically manipulated by a machine to useful _human_ ends" (Tauberer 2006). The humans using the machine, we learned, must therefore understand the logical structure in order to manipulate it for useful computational ends. §15 When considering the MMM data model, it is important to keep in mind its relationship to the research questions. The data model is expressed in RDF, a method for describing data by defining relationships between data objects. The "subject–predicate–object" pattern produces triples that express the relationships. A triple is the basic unit of an RDF knowledge graph. For many, the concept of triples is difficult to digest. Unlike most other data models that present data as lists of elements, such as a spreadsheet with well-defined columns or the tables in a relational database, the elements in RDF exist in something more comparable to a cloud of data, seemingly loosely connected by semantic statements. It is much harder to visualize and internalize the structure in one's mind, which may explain why understanding RDF and ways to query it are difficult for non-semantic-web specialists. §16 As the syntactical naming of the units comprising a triple suggests, triples work much like sentences. In a sentence, which can also be a question, subjects and objects are related by the action or state of being that links them. If one considers triples as a list of answers to questions (who did what, what is something, when was something done), then a query in RDF is simply a triple or series of triple statements expressed in the context of a search to identify desired data elements possessing certain relationships. A simple SPARQL query can be expressed as "Show me all things associated with this thing." Then, a further relationship can be added to refine results: "Then show me all the things associated with those things that share this value." Further triple statements can be added to the query indefinitely to execute a variety of search functions. The query, then, is only limited by three things: the researcher's ability to think of new questions to ask or new associations to make; how well the associations have been expressed in the data model in relation to the data; and how well the data has been structured so that the required data elements are accessible to the computer performing the search. §17 As we noted above, the MMM RDF data model was derived in large part from the data elements identified in the research questions described in the previous section (manuscripts, texts, owners, places of production, dates, etc.) (Figure 2). These elements are the nodes represented in the model. The nodes are connected to each other by the properties derived from the MMM ontologies, which express all the possible relationships between the nodes, for example, "is composed of," "has former or current owner," "took place at," "has timespan," etc. In the RDF schema, the nodes are the subjects and objects connected to each other by the properties or predicates; the connections form the triples that can then be queried in a medium like SPARQL. To construct a query, one starts with a node, then follows the associations in any direction where there is a link. In such a flexible structure, the possibilities for what one can query and how are greatly expanded. For the MMM project team, achieving a high degree of familiarity with the data model enhanced the ability to query it and opened up new ways to approach the data well beyond the scope that the original research questions set out to achieve. **Figure 2: The MMM Data Model.**
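In SPARQL terms, the two plain-language steps in §16 become two chained triple patterns. The sketch below is illustrative only: the place URI is hypothetical and the prefix URI is assumed, though both relationships ("has produced", "has former or current owner") are among the properties named in §17 and §22; the CIDOC-CRM property codes used here are an assumption.

```sparql
PREFIX ecrm: <http://erlangen-crm.org/current/>   # assumed prefix URI

SELECT ?manuscript ?owner
WHERE {
  # "Show me all things associated with this thing":
  # manuscripts whose production took place at a given place
  # (the place URI below is hypothetical).
  ?production ecrm:P7_took_place_at <http://ldf.fi/mmm/place/example> ;
              ecrm:P108_has_produced ?manuscript .
  # "Then show me all the things associated with those things":
  # each of those manuscripts' former or current owners.
  ?manuscript ecrm:P51_has_former_or_current_owner ?owner .
}
```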
SPARQL queries were used for this purpose, and ----- the remainder of this paper discusses a selection of these queries as case studies for investigating the research potential of the aggregated dataset and the MMM Knowledge Graph. They include three of the original MMM research questions, as well as further questions arising from another large European manuscript provenance project Cultivate [MSS (https://www.ies.sas.ac.uk/research-projects-archives/cultivate-mss-project),](https://www.ies.sas.ac.uk/research-projects-archives/cultivate-mss-project) an ERC-funded project led by Laura Cleaver at the Institute for English Studies at the University of London, and other questions intended to add data from sources outside the MMM Knowledge Graph. The queries based on these questions did not always return the expected results, but the lessons learned from them led to better questions and better results. ### 5.1 Query 1: How many manuscripts were produced in Lombardy or Northern Italy? §19 The first query, based on Question 2 of the original research questions, is a simple one requiring only two data elements: show me all of the manuscripts associated with a certain _production_ _place. In this case, a researcher may want to find all illuminated_ manuscripts produced in or around Milan. Milan, a specific city located within the region of Lombardy, was a major and influential centre of illumination in northern Italy especially in the late Middle Ages, but only a fraction of manuscripts produced in this area during this time have been securely localized to the city in available sources. Shared stylistic features in script or decoration or textual references (e.g., calendars) of otherwise unlocalizable manuscripts can, however, point to affinities with this particularly “northern” style, leading cataloguers to tentatively assign “Northern Italy” as the place of production if the case for a secure tie to Milan or Lombardy is too tenuous to justify. §20 The researcher will therefore want to cast a wide net to find all manuscripts with a possible connection to Milan. The query “show me all manuscripts produced in Lombardy and Northern Italy” will return a reasonable set of results to allow narrowing down the search for manuscripts produced in Milan. A search for manuscripts produced in Lombardy will helpfully limit results, but a wider search for manuscripts produced in Northern Italy could return more expansive results and more chances for finding manuscripts that have not yet been more accurately localized. [5.1.1 Query explanation https://api.triplydb.com/s/l6M4n5Eff](https://api.triplydb.com/s/l6M4n5Eff) §21 The query (Figure 3) begins with a SELECT statement, which identifies the variable values to be returned by the query. The SELECT statement here (lines 9 to 10) includes variables that will return only distinct, or different, manuscript values and production place values. Also included are production timespans, though the timespan is not ----- essential to the original query. Multiple production place and production timespan values associated with the same manuscript value are concatenated to avoid showing the duplicated values within the same manuscript record. **Figure 3: SPARQL query for Query 1.** §22 The places are limited to those associated with the Getty’s Thesaurus of Geographical Names (TGN) identifiers for Northern Italy (tgn_4005363) and Lombardy (tgn_7003237) (line 13). 
5.1.2 Results §23 The query returns 1,702 instances of manuscripts, or manifestation singletons, in the combined dataset that contain the TGN IDs for Northern Italy (tgn_4005363) and for Lombardy (tgn_7003237) as a production place value. The predicate gvp:broaderPreferred* in line 15 of the query also enables the capture of cities and sites within Lombardy without having to identify and enter all TGN IDs associated with Lombardy. For example, Results 3 to 6 show "manifestation singletons," which is how the FRBRoo ontology defines a manuscript object, with Milan as the production place because Milan is contained within Lombardy in the TGN hierarchy (http://vocab.getty.edu/tgn/7003150). §24 The results also show manifestation singletons with multiple production places. These results indicate more than one place attribution has been assigned to a particular manifestation singleton. There are two reasons for this result. The first has to do with the way that manuscripts are often described: a source description identifies two or more possible places of production, either because a cataloguer is hedging bets (for example, a manuscript could be described as from "Austria or Northern Italy": http://ldf.fi/mmm/manifestation_singleton/sdbm_24767), or because a manuscript contains two component parts that were produced in different places and later bound together. The second reason is due to the data modelling: two or more of the sources could give two different places, as in this example in which the Bodleian record gives one place of production and the SDBM gives another: http://ldf.fi/mmm/manifestation_singleton/bodley_manuscript_2010. In the case of the SDBM, a manuscript record may contain two or more entries that give different location data. 5.1.3 Lessons learned §25 Some general conclusions can be drawn about the interpretation of the dataset based on these results. The results highlight inconsistencies inherent to manuscript description dependent upon human observation: differing opinions (Austria or Northern Italy?), knowledge changes across time (a manuscript was considered to be made in Northern Italy, but recent studies now indicate that it may have been produced in Siena), and inconsistencies in data entry (production place was not provided in the source data).
(The SDBM draws its data from catalogue sources that can vary widely in the amount of detail provided in manuscript description, from simple identification of author, title, and date to full codicological descriptions; it is therefore common for many details relating to the physical description of a manuscript not to be provided.) The query results therefore cannot be taken at face value, and researchers must navigate through the manuscript links in the MMM record for further exploration and discovery. §26 A review of the results for this query raises the question: are the SPARQL results better than the results from a similar query in the MMM portal, or from separate queries in the original source datasets? The MMM portal and all three source datasets represent places hierarchically based on LOD authorities, including TGN. Querying the original data sources would obviously lack the efficiency of the aggregated dataset, but a search in the MMM portal returns the same results as the SPARQL query (https://mappingmanuscriptmigrations.org/en/manuscripts/faceted-search/table?page=0), and the visualization tool allows drilling down in the search results much more effectively. Thus, this particular SPARQL query offers only limited advantage over more direct searching in the source datasets and no advantage over the portal. §27 While the query did not improve on the results provided by the MMM portal, the process of building the query gave shape to the data and insight into the limitations and character of the source data. This exercise, along with other early, relatively simple queries the group created, introduced the building blocks for SPARQL queries, like place and timespan techniques, that were returned to time and again. ### 5.2 Query 2: How many manuscripts survive that contain Spanish texts written in gothic rotunda script that were produced in Castile for an abbey or convent? How many of these were owned during the nineteenth century by English private collectors and are now owned by an institution in North America? §28 The first case study shows a simple query that produces no further information beyond what can be gained from a filtered browse in the MMM portal. The second case study demonstrates how adding complexity to the question expands the potential of using SPARQL to query an RDF dataset. The question attempts to determine how many manuscripts exist today that were written in a certain _script type_ and that were produced for a certain _institution type_ existing in a specific _production place_. The question then proposes that those results be further limited to those manuscripts owned by a certain _collector type_ from a specific _location_ during a specific _timespan_. A final additional query limits the search again to those manuscripts with a specific _current location_. §29 As one of the original research questions, this question was designed to be complex for complexity's sake, in order to demonstrate for the data modellers how a researcher might want to drill down with increasing specificity using a wide range of qualifiers. It is an intentionally challenging question that tests the limits of the source datasets. The question contains an element, "script type," that was ultimately not included in the final data model because it was not adequately represented in the original data sources. The question also requires that the query be able to distinguish between types of institutions (religious, monastic) and types of collectors (private versus public), as well as distinguishing current locations among all locations identified in the data. Unlike the first case study, this question produced, not surprisingly, a fundamentally more complex query that tests not only the data model but also the user's ability to interpret the results of the query. The query was developed in two steps: first, to identify Castilian manuscripts with Spanish texts; then, to determine who produced them.
5.2.1 Query explanation 5.2.1.1 Step 1 (Figure 4): https://api.triplydb.com/s/GfPEtMgxX **Figure 4: SPARQL query for Query 2: Step 1.** §30 This stage of the query finds manuscripts that were produced in Castile and contain texts written in the Spanish language. Following the SELECT statement, line 10 uses a VALUES clause to assign the specific URIs for the ?place variable, including the general region of Castile and more specific locations, both historical and modern, within the region (Valladolid, Toledo, Burgos, Madrid, Ciudad Real, Ávila, and Guadalajara). Lines 11 to 13 include statements defining the ?manuscript variable as a "manifestation singleton" (efrbroo:F4_Manifestation_Singleton). This variable links to the ?expression variable and returns the human-readable label for the manuscript (?manuscript_label). The ?expression variable is the text within a manuscript, following the FRBRoo conceptual model that defines the relationships between works, expressions, manifestations, and items in bibliographic records. Line 15 collects the label of the expression. Line 16 states that all expressions included in the results must be written in the Spanish language, encoded as the URI <http://ldf.fi/mmm/language/sdbm_8>. §31 In MMM, manuscripts are linked to information about their place of production via a production event class. Lines 18–19 return information about these production events by stating that production events (represented by the ?production variable) are linked to manuscripts via the ecrm:P108_has_produced property, and occurred at a specific place (represented by the ?place variable). This variable is the same variable defined in line 10; all manuscripts included in the results will thus have a production place that matches one of those values. Line 21 returns the human-readable label for the ?place variable by using the skos:prefLabel predicate.
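A sketch of Step 1, rebuilt from §30–§31 (the saved query at the link above is canonical, and its line numbers will differ). Only the Spanish-language URI is taken directly from the text; the Castile place list is collapsed into a single hypothetical URI, and the manuscript-to-expression property (ecrm:P128_carries) and language property (ecrm:P72_has_language) are assumptions:

```sparql
PREFIX ecrm:    <http://erlangen-crm.org/current/>   # assumed prefix URIs
PREFIX efrbroo: <http://erlangen-crm.org/efrbroo/>
PREFIX skos:    <http://www.w3.org/2004/02/skos/core#>

SELECT ?manuscript ?manuscript_label ?expression_label ?place_label
WHERE {
  # The saved query binds Castile plus specific TGN places (Valladolid,
  # Toledo, Burgos, Madrid, Ciudad Real, Avila, Guadalajara); a single
  # hypothetical URI stands in for that list here.
  VALUES ?place { <http://ldf.fi/mmm/place/castile_example> }

  ?manuscript a efrbroo:F4_Manifestation_Singleton ;
              skos:prefLabel ?manuscript_label ;
              ecrm:P128_carries ?expression .          # assumed linking property
  ?expression skos:prefLabel ?expression_label ;
              ecrm:P72_has_language <http://ldf.fi/mmm/language/sdbm_8> .  # Spanish

  ?production ecrm:P108_has_produced ?manuscript ;
              ecrm:P7_took_place_at ?place .
  ?place skos:prefLabel ?place_label .
}
```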
5.2.2 Results §33 The initial query produced a list of 286 texts, or “expressions,” in manuscripts which were produced in Castile and written in the Spanish language. Amending the query to look for the person or organization who commissioned the production of these manuscripts produced no results. Further exploration of the data showed that the ----- MMM-specific property “carried_out_by_as_commissioner” is only relevant to the Bibale data. The SDBM identifies provenance agents but does not distinguish whether former owners could also be commissioners. The Bodleian records sometimes include ownership information, like the presence of a coat of arms, that suggests a manuscript was commissioned, but this encoding is not expressed as structured data that can be mapped to the MMM data model. A separate query to show which manuscripts in the [whole dataset have a named commissioner (https://api.triplydb.com/s/1tQPkY-au)](https://api.triplydb.com/s/1tQPkY-au) returned 234 records from Bibale. These results have no overlap with the manuscripts produced in Castile. §34 As a result, this research question cannot be answered directly. An alternative would be to find the earliest owners of Castilian manuscripts as a proxy for potential commissioners. In the course of investigating this problem, the team produced an ancillary query to locate the distinct places of production and the owners of Spanish [language manuscripts produced in Castile: https://api.triplydb.com/s/M5lTr-KYy. The](https://api.triplydb.com/s/M5lTr-KYy) results can be inspected manually to find religious houses as the earliest owners, since the MMM data do not specify types of institutions. This approach avoids the dead-end of the commissioning relationship and can then be refined to look for 19th-century English owners and present-day American owners. This query in fact finds three Spanish language manuscripts produced in Castile with religious houses as their (presumably [first) owners: two from Madrid, (http://ldf.fi/mmm/manifestation_singleton/](http://ldf.fi/mmm/manifestation_singleton/bibale_40694) [bibale_40694 and](http://ldf.fi/mmm/manifestation_singleton/bibale_40694) [http://ldf.fi/mmm/manifestation_singleton/sdbm_23689), and](http://ldf.fi/mmm/manifestation_singleton/sdbm_23689) [one from Burgos (http://ldf.fi/mmm/manifestation_singleton/sdbm_5013). One of](http://ldf.fi/mmm/manifestation_singleton/sdbm_5013) these was later owned by a 19th-century British collector (Thomas Phillipps), while another is now in a North American library (University of California Berkeley). 5.2.3 Lessons learned §35 This research question, which combined place of origin, text language, script, type of commissioner, and then added the later location and ownership of manuscripts, was designed to test the limits of the dataset and is plainly artificial. A pattern emerged during the development of this query that we saw frequently. Often, the source datasets do not contain the specific information sought in the question, or do not encode it in a way that can be mapped to the specific MMM property. The Bodleian catalogue includes 78 cases where persons have the role statement “commissioner, dedicatee, or patron,” including MS. Lat. class. d. 38, a Latin manuscript containing the arms of King [Alfonso V of Aragon (https://medieval.bodleian.ox.ac.uk/catalog/manuscript_6383).](https://medieval.bodleian.ox.ac.uk/catalog/manuscript_6383) In the MMM data, he is encoded as an owner of this manuscript, but is not linked to its production event. 
This suggests that some re-thinking of the transformation and ----- mapping of personal role statements from the Bodleian data in particular might be worth considering. §36 Both “Castile” and “Spanish” are also problematic in this query. Historical regions like the Kingdom of Castile are not reflected in the TGN hierarchy of places, which is based on current administrative and jurisdictional boundaries, so the property ``` gvp:broaderpreferred cannot be used. For this query, the ?place variable had to be ``` bound to a list of specific place URLs from the TGN that was roughly comprehensive. The lack of availability of geographical hierarchy information, and the fact that historical boundaries change over time, mean that there is no simple method for capturing places within historical regions. Records that represent Castilian manuscripts may simply list Spain as the place of production, but there is no way to determine more specific locations within Spain in the query. §37 The term “Spanish” for language is also ambiguous. Spanish in its modern sense is a post-medieval phenomenon (Penny 2002); the MMM data sources are inconsistent in their encoding of medieval languages from the Iberian peninsula. The fullest and most accurate way of constructing this query would involve inspecting all these varieties of languages and places in the data sources, seeing the extent to which they are reflected in the MMM data, and ascertaining how best to specify them in the SPARQL query. Even a cursory look suggests a significant level of inconsistency in the source data. These considerations would still apply if the question was made much more specific along these lines: Which manuscripts containing texts in a vernacular language were produced in the Kingdom of Castile as it existed in 1217? ### 5.3 Query 3: What was the most popular text by a medieval author in France in the 17th century? §38 This third query offers a further example of how an original research question can be difficult to translate into a satisfactory form that is appropriate for the MMM data model. It requires building a search around the data elements: _author,_ _work,_ and _place and_ _date_ associated with a specific _event, in this case the acquisition of a_ manuscript with a certain text by a French collector in the 17th century, which is defined in the MMM data model as a “provenance event.” All of these elements are included in the data model, but the challenge is to identify what data or combination of data determines popularity. What in the context of the MMM dataset does popularity mean? The following query explanation attempts to extract results based on this assumption. Because of the complexity of the query, the team broke the investigation down into a ----- series of four query steps in which each query builds upon the results of the previous one. 5.3.1 Query explanation _[5.3.1.1 Step 1 (Figure 6): Provenance events occurring in France: https://api.triplydb.com/s/ZWE5m487i](https://api.triplydb.com/s/ZWE5m487i)_ §39 The first step of this query aims to identify all provenance events (dates optional) that occurred in France. Following the `SELECT` statement, Line 9 assigns a specific value to the `?event_type_uri variable,` `ecrm:E10_Transfer_of_Custody, by using` the VALUES clause. 
Thus, every event type returned in the results will be a provenance event involving the transfer of a manuscript from one owner to another, as opposed to more generic provenance events where a direct transfer of ownership is not necessarily known or confirmed by the data. Line 11 states that every location returned in the results (represented by the `?place_uri variable) must be within the boundaries of France` using the same predicate gvp:broaderPreferred* that was used in the first case study. Line 14 introduces the ?event_uri variable, stating that every ?event_uri must have occurred at the places assigned to (ecrm:P7_took_place_at) the ?place_uri variable in Line 11. Lines 16–17 further define the types of information we return about events. In line 16, the symbol a is a shorthand for the rdf:type predicate to indicate that the ``` ?event_uri variable is an instance of the ?event_type_uri class, which we defined in ``` line 9 as a transfer of custody event. Line 18 is an optional clause that includes the date that an event took place, if that information is present in the data. **Figure 6: SPARQL query for Query 3: Step 1.** ----- _5.3.1.2 Step 2 (Figure 7): Manuscripts and their provenance events (dates optional) that occurred in_ _[France: https://api.triplydb.com/s/L1Pd3P9ZM](https://api.triplydb.com/s/L1Pd3P9ZM)_ **Figure 7: SPARQL query for Query 3: Step 2.** §40 Building on the above query, the second query’s results include all the manuscripts associated with provenance events that occurred in France, with the dates on which they occurred if known. Line 19 states that the `?event_uri variable is linked to the` ``` ?manuscript variable via two potential provenance event predicates: either transfer of ``` custody events or observed manuscript events, which are provenance events where a direct transfer of custody is not confirmed in the data. _5.3.1.3 Step 3 (Figure 8). Manuscripts with their titles (optional), that had a provenance event that_ _[occurred in France in the 17th century: https://api.triplydb.com/s/WVeDNDp7V](https://api.triplydb.com/s/WVeDNDp7V)_ **Figure 8: SPARQL query for Query 3: Step 3.** ----- §41 This query expands and refines the results further by adding the titles of works within the manuscripts and limiting the timeframe of provenance events to those that occurred in the 17th century. Lines 24–25 feature an `OPTIONAL clause to retrieve the` works included in the manuscripts (if known) and the labels of those works, represented by the variable ?titles. §42 Lines 29–34 include statements related to the dates when a provenance event took place. For this research question, we are interested in events that occurred in the 17th century. The range of timespans included in the results need to have begun after the year 1599 but before the year 1700. To specify these parameters in SPARQL, we take the beginning of the timespan specified in each event (the `?begin variable), use the` ``` BIND and YEAR functions to extract the year from each timespan and assign each year ``` to a new variable, ?year, and then FILTER the results to include only those years that are less than 1700 but greater than 1599. _5.3.1.4 Step 4 (Figure 9). Manuscript with texts by authors who lived between 450–1500, with_ _[provenance events that occurred in France in the 17th century: https://api.triplydb.com/s/_9cC7UFM-](https://api.triplydb.com/s/_9cC7UFM-)_ **Figure 9: SPARQL query for Query 3: Step 4.** §43 This query adds information about authors and their life dates to the results. 
5.3.2 Results

§44 Query 3: Step 1 returned 1,765 results. Event dates ranged from “after 877” (e.g., <http://ldf.fi/mmm/event/bibale_transfer_association:3972>) to “2020-04-01 – 2020-04-03” (e.g., <http://ldf.fi/mmm/event/sdbm_source_observation_260557>). About 20% of these records had no associated date. The second query returned 47,805 results. More than 96% of these were generic “manuscript-related events” rather than transfers of custody. About 50% of them had no associated date. The third query reduced the number of records drastically. Only 1,757 records were identified as “transfer of custody” or “observed ownership” events that could be localized to 17th-century France.

§45 When authors’ birth and death dates were added to find “medieval” authors in the fourth query, the number of results was further reduced to 1,262 records. This list contains all the possible combinations of authors, works, dates, and manuscripts. The query can be analyzed to reveal that the list contains 264 manuscripts, 757 distinct works, and 153 different authors. Because the titles of works have not been harmonized across versions in different languages—and also because of the way in which the SDBM records multiple works contained in a single manuscript—it is impossible to say with any certainty which work occurs most frequently. (In the SDBM, multiple works and multiple authors occurring in the same manuscript are listed separately and are not linked to each other. This means that the MMM mapping has to describe each author as the “possible author” of each work, even though they may only be the author of one of the works in question.) But the most frequently occurring authors are clear: Isidore of Seville (25 manuscripts), Bede (22), Bernard of Clairvaux (13), Anselm of Canterbury (12), and Boethius (11).

5.3.3 Lessons learned

§46 The complexity of this research question meant that we had to break it down into parts, write queries for each part, and then assemble these into a single SPARQL query.
It also revealed that questions can be expressed in a form that is difficult to map to the terms and relationships used in the data model and the aggregated dataset. To approach a set of results that could be used to answer the question “what was the most popular text?” meant tackling a series of definitional problems and making choices about how best to define them in the context of the MMM data.

§47 The phrase “most popular text” is ambiguous, for a start. It could mean the most-read text, the most-quoted text, the most-owned text, or the most-circulated text. Only the latter two have any relevance to the MMM data, since they can be expressed respectively in terms of “the text in those manuscripts with the most recorded owners” in 17th-century France, or “the text in those manuscripts with the most ownership events” in the same period. Does the question refer to manuscript owners associated with France, or manuscript provenance events which occurred in France? Does “medieval author” cover anonymous or pseudonymous works and expressions as well as those with known authors? If so, how do we identify anonymous “medieval” texts, since works and expressions do not have dates directly associated with them?

§48 Whatever choices were made in relation to these definitional difficulties, the important point was to ensure that those choices were documented and explained. It might also have been possible to consider reframing the question in a less prescriptive way: “Which manuscripts with medieval texts were owned by French collectors in the 17th century?” This could have been addressed by identifying owners living in France in the 17th century and looking at the manuscripts they owned and the associated works.

§49 As mentioned earlier, one factor affecting these results significantly is that titles of works have not been harmonized across translations in different languages. There is little in the way of authoritative Linked Open Data vocabularies and identifiers for medieval and Renaissance works, and the absence of consistent conventional titles for works in this period makes the process of reconciling them between their occurrences in different manuscripts extremely difficult (Sharpe 2003). Without this kind of reconciliation, we cannot easily construct a query that takes a work and looks for all manuscripts containing that work. We should either try to identify all the variant titles of a work and include them in the query, or focus on manuscripts and authors instead. The way in which the SDBM treats multiple works in a single manuscript (as described above) also has a significant effect on queries of this kind.

5.3.4 New research questions and wider explorations

§50 The three case studies considered so far attempted to apply research questions devised before the data model was designed and implemented. The results of each query were mixed. While the simplest query produced the expected results primarily because of its simplicity, it did not really test the ability of the dataset to return results. The more complex questions with a significantly greater degree of ambiguity were much harder to translate into the elements and relationships expressed in the data model. They revealed, amongst other things, that some queries could be too specific or too complex in their combination of criteria to produce meaningful results. They also revealed that some relationships in the MMM data (e.g., between authors and works) were too ambiguous to produce reliable results.
And they showed that questions involving pre-modern languages or pre-modern political and administrative jurisdictions needed careful mapping to modern authoritative vocabularies for places and languages. But they helped to teach some of the intricacies of SPARQL in the context of a relatively complex data model and a dataset that contains important ambiguities.

§51 Moving the SPARQL queries beyond the initial set of research questions became an important goal and the focus of more recent workshop sessions. For this second round of investigations, we looked particularly at ways of visualizing data in response to comparative quantitative and exploratory questions. We also examined ways of extending the reach of questions by using data sources outside MMM to add missing contextual information. The questions were mostly derived from active research projects into the history of manuscript collecting, a topic for which the MMM data should be particularly relevant.

### 5.4 Query 4: What are the ratios of height to width in medieval liturgical manuscripts?

§52 The aggregated MMM data (like the source datasets) contain various quantifiable elements relating to the physical properties of individual manuscripts. These include height, width, folio count, and number of lines on a page, as well as the numbers of miniatures and decorated initials. A Twitter thread in December 2020 devoted to the importance of recording a manuscript’s size and folio count (Smith 2020) included a visualization of height-to-width ratios in 3,413 manuscripts, using data from the SDBM (Davis 2020). This prompted an exploration of the same kind of data in MMM using a new set of SPARQL queries, in an effort to confirm this visualization and to correct the problem of multiple entries referring to the same manuscript, which could potentially skew the results. In the MMM dataset, duplicate manuscript entries from the SDBM data as well as from the Bodleian and Bibale data were reconciled into one record, thus reducing a certain amount of noise in the results.

§53 To construct the query, a formula to calculate ratios is applied to two elements, height and width, restricted to a specific manuscript type, in this case liturgical manuscripts. These manuscripts contain the prayers, readings, and hymns recited or sung during the Mass or as part of the Divine Office. They include missals and graduals for the Mass, and breviaries and antiphonaries for the Divine Office. Other less common types of liturgical manuscripts include sacramentaries, sequentaries, pontificals, and ordinals. While manuscript dimensions are available in all three source datasets, none of the datasets include the element “manuscript type” in their respective data models. The solution was to query for records containing specific titles reflecting liturgical manuscripts.

5.4.1 Query explanation

*5.4.1.1 Step 1 (Figure 10): Manuscript production year averages and ratios of height: [https://api.triplydb.com/s/nfhgCrlyB](https://api.triplydb.com/s/nfhgCrlyB)*

**Figure 10: SPARQL query for Query 4: Step 1 (lines 1–24).**

§54 This query begins with a very simple `SELECT` statement that includes only two variables: one representing a manuscript’s production year average, and another representing the ratio of a manuscript’s size. We defined this ratio as a manuscript’s average height divided by its average width.
In MMM, manuscripts often have multiple different values for their heights, widths, and production years because our data about them comes from many different sources created over time. This necessitates that we use the averages of these values.

§55 Calculating averages requires a subquery nested within the `WHERE` clause of our main query, beginning on lines 12–14. The `SELECT` statement in the subquery begins with the `?manuscript` variable, followed by three instances of the average aggregate function (`AVG`) that will calculate the averages of the `?height_mm`, `?width_mm`, and `?production_year` variables. The `WHERE` clause beginning on line 15 defines the desired triple (i.e., subject–predicate–object) patterns in these variables. Lines 16–19 pertain to the `?manuscript` variable, defining it as a manifestation singleton and returning height, width, and work data. Lines 21–24 refine the height and width information by returning the value of those fields and defining that value as a unit of length in millimeters.

§56 To narrow the results of this query to only include liturgical manuscripts, we can filter the manuscripts based on their text titles, which are modeled as work labels in MMM (Figure 11). Line 26 includes a `FILTER` clause that refines the results to only include manuscripts that contain works that include the characters “missal,” “gradual,” “breviar,” or “antiphon” in their titles. The character strings will be matched exactly, so the work labels are abbreviated in order to accommodate the various spellings found in MMM’s data sources. Likewise, each `CONTAINS` function also includes the `LCASE` function to convert the values in the `?work_label` field to all lowercase letters, so as to include both upper- and lowercase title variations in our `FILTER` clause.

**Figure 11: SPARQL query for Query 4: Step 1 (lines 25–35).**

§57 Lines 28–31 construct the `?production_year` variable. Production event information is its own class in MMM, which links to specific manuscripts via the `ecrm:P108_has_produced` predicate (line 28) and timespan data via the `ecrm:P4_has_time-span` predicate (line 29). MMM timespans follow the CIDOC-CRM model for modeling the range of a timespan. For this query, we’ve elected to use the “beginning of the begin” of the timespan data (the terminus post quem) as a manuscript’s production date (line 30). Line 31 uses the `BIND` and `YEAR` functions to extract the year information out of the production timespan and assign it as the `?production_year` variable. This concludes the subquery. As a last step, we group our results according to the `?manuscript` variable on line 33 to ensure that our results table displays information for only one manuscript per row.
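Putting §54–57 together, the overall shape of Step 1 can be sketched as follows. Because the published query is visible only in Figures 10 and 11, the patterns inside the subquery are compressed here into hypothetical shortcut properties (the `mmmp:` namespace below is invented for readability and does not exist in MMM); the real query spells these out as full CIDOC-CRM and FRBRoo paths through dimension and production events, as described above.

```sparql
PREFIX mmmp: <http://example.org/mmm-shortcuts/>  # hypothetical namespace

SELECT ?production_year_average (?height_avg / ?width_avg AS ?ratio)
WHERE {
  {
    # Subquery (§55): average each manuscript's measurements and dates,
    # since values from different sources vary
    SELECT ?manuscript
           (AVG(?height_mm) AS ?height_avg)
           (AVG(?width_mm) AS ?width_avg)
           (AVG(?production_year) AS ?production_year_average)
    WHERE {
      ?manuscript mmmp:height_mm ?height_mm ;
                  mmmp:width_mm ?width_mm ;
                  mmmp:work_label ?work_label ;
                  # In the real query, ?production_year is derived via
                  # BIND(YEAR(?begin)...) from the production event's timespan (§57)
                  mmmp:production_year ?production_year .
      # §56: keep only liturgical manuscripts, matched on title strings
      FILTER (CONTAINS(LCASE(?work_label), "missal") ||
              CONTAINS(LCASE(?work_label), "gradual") ||
              CONTAINS(LCASE(?work_label), "breviar") ||
              CONTAINS(LCASE(?work_label), "antiphon"))
    }
    GROUP BY ?manuscript
  }
}
```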
*5.4.1.2 Step 2 (Figure 12): Revised query to filter results for manuscripts produced after 700: [https://api.triplydb.com/s/-9C8qoZtb](https://api.triplydb.com/s/-9C8qoZtb)*

**Figure 12: SPARQL query for Query 4: Step 2 (lines 21–37).**

§58 This query is nearly identical to the previous query, except that it includes three extra `FILTER` functions to refine the results further. On lines 30–31, two `FILTER` functions state that all height and width measurements included in the calculations must be greater than 39 millimeters and less than 500 millimeters. Filtering the results in this way helps ensure that our results do not include typos or other data entry mistakes that sometimes appear in the measurement data. Line 37 filters the production year results to include only manuscripts produced on or after 700 CE. The choice to filter by this production year stems from a cosmetic need to produce a chart of the results that is easier to read. Since few manuscripts in the MMM dataset were produced before 700 CE, removing those manuscripts from the results creates a more efficient x-axis and greater legibility of the individual data points in the chart.

5.4.2 Alternative Query 4 (Figures 13a–c): Comparing ratios of different liturgical books: breviaries and missals: [https://api.triplydb.com/s/qrzY6bd0e](https://api.triplydb.com/s/qrzY6bd0e)

**Figure 13a: SPARQL query for Alternative Query 4 (lines 10–14).**

**Figure 13b: SPARQL query for Alternative Query 4 (lines 15–40).**

**Figure 13c: SPARQL query for Alternative Query 4 (lines 40–65).**

§59 This alternative query copies the basic structure of the previous query to produce results that compare the average ratios of missals to the average ratios of breviaries. The `SELECT` statement includes two different sets of ratios, one for missals (line 12) and one for breviaries (line 13).

§60 To calculate these two different ratios, we use the same subquery strategy as employed previously, but a `UNION` clause (line 40) allows the results to be displayed together. The first subquery (beginning on line 17) calculates the data for missals, using the `FILTER` function to isolate those manuscripts that have the characters “missal” in their work label (line 29).

§61 This exact structure is copied in the second subquery (beginning on line 41), except in this case the `FILTER` function finds works containing the characters “breviar” (line 53). To distinguish the two results, the averages related to breviaries are called `?b_height_mm_average` and `?b_width_mm_average`.
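In outline, and reusing the hypothetical `mmmp:` shortcut properties from the Step 1 sketch, the `UNION` structure of the alternative query looks like this:

```sparql
PREFIX mmmp: <http://example.org/mmm-shortcuts/>  # hypothetical namespace

SELECT ?production_year_average ?m_ratio ?b_ratio
WHERE {
  {
    # First subquery (§60): averages for missals
    SELECT ?manuscript (AVG(?h) AS ?m_h) (AVG(?w) AS ?m_w)
           (AVG(?year) AS ?production_year_average)
    WHERE {
      ?manuscript mmmp:height_mm ?h ; mmmp:width_mm ?w ;
                  mmmp:work_label ?label ; mmmp:production_year ?year .
      FILTER (CONTAINS(LCASE(?label), "missal"))
    }
    GROUP BY ?manuscript
  }
  UNION
  {
    # Second subquery (§61): the same structure for breviaries,
    # with renamed averages so the two series remain distinct
    SELECT ?manuscript (AVG(?h) AS ?b_h) (AVG(?w) AS ?b_w)
           (AVG(?year) AS ?production_year_average)
    WHERE {
      ?manuscript mmmp:height_mm ?h ; mmmp:width_mm ?w ;
                  mmmp:work_label ?label ; mmmp:production_year ?year .
      FILTER (CONTAINS(LCASE(?label), "breviar"))
    }
    GROUP BY ?manuscript
  }
  # Each row carries one ratio; the variables of the other branch stay unbound
  BIND (?m_h / ?m_w AS ?m_ratio)
  BIND (?b_h / ?b_w AS ?b_ratio)
}
```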
5.4.3 Results

§62 Step 1 of the original query visualizes the height-to-width ratios for 4,513 liturgical manuscripts (Figure 14). It includes missals, graduals, breviaries, and antiphonaries, but the ratios are not distinguished by type of manuscript. There are no limits on the date of production, or on the size of the ratios. Because there are four outlying ratios between 8.636 and 30.831, as well as a small number of early production dates, the other results are heavily compressed, and the details of the other ratios cannot easily be seen.

**Figure 14: Height-to-width ratios of liturgical manuscripts.**

§63 Step 2 of the original query visualizes the height-to-width ratio of 4,030 liturgical manuscripts by limiting the production period to 700 to 1800 CE (Figure 15). The variation in ratios is also limited by the exclusion of manuscripts larger than 500mm or smaller than 39mm. It includes missals, graduals, breviaries, and antiphonaries, but the ratios are not distinguished by type of manuscript. The results are clustered around 1.25 to 1.6; most manuscripts were produced in the 14th or 15th centuries. The clusters of results for the years 900, 1000, 1100, 1200, 1300, and 1400 reflect the use of start dates for estimated production year ranges. Another version of this query makes use of end dates as well, to smooth out this kind of clustering: [https://api.triplydb.com/s/uG86O-AIC](https://api.triplydb.com/s/uG86O-AIC).

**Figure 15: Height-to-width ratios of liturgical manuscripts produced between 700–1800 AD.**

§64 The alternative query compares the height-to-width ratio of two different types of liturgical manuscripts produced during the period 700 to 1700 CE (Figure 16). The total number of manuscripts involved is 12,169. Missals are shown as blue dots and breviaries appear in red. Most of the manuscripts fall within the range 1.0 to 2.0, though the majority fall between 1.25 and 1.6. There is considerable similarity between the two different types. Relatively few manuscripts have ratios less than 1.0 (i.e., with their width greater than their height).

**Figure 16: Height-to-width ratios of breviaries and missals (outliers removed).**

5.4.4 Lessons learned

§65 Neither MMM nor the source datasets provide information about the categories or subjects of works, so liturgical manuscripts had to be identified by keyword searches on uniform titles. Fortunately, these are generally common to Latin, English, and French, such as missal/missale, antiphonal/antiphonarium, breviary/breviarium, and so on. The initial query produced a single set of ratios regardless of the specific type of liturgical manuscript; later refinement visualized the ratios for the specific types separately, enabling comparisons between them.

§66 Dimensions are likely to have multiple values in the SDBM, reflecting different descriptions from different observations of the same manuscript. The same kind of variation can also be found for the same manuscript in two or three of the data sources. We dealt with this by averaging the height and the width across the different values.

§67 Some problems were identified with the source data, including records that had height but not width, and some cases where mm and cm measurements were mixed together. These could produce incorrect ratios, since the query works by adding up the raw figures and then dividing by the number of values.

§68 Production date ranges are often approximate, for example, “1300–1400” or “1225–1250.” We dealt with this initially by taking the earliest date in the date range, that is, “1300” and “1225” in these two cases. Further refinement of this query involved calculating an average for production date ranges (e.g., 1400–1450 as 1425), to avoid results bunching together at 1400 for 15th-century manuscripts.

§69 Several outliers were noticeable in Figure 16, including one with a ratio of 30 (not shown). These were checked to see if they reflected an error in the source data, but the extreme outlier was found to be a roll rather than a codex, an unexpected result that could challenge assumptions about the use and readership of liturgical manuscripts in the Middle Ages. Our choice to remove outliers from the results meant that a more granular display of results in Yasgui became possible, but at the expense of a fuller and more accurate representation of variations in the data, as the roll breviary indicates. Further, excluding outlying values for height and width actually affected the ratio calculations for some manuscripts and produced incorrect values. Excluding outlying ratios might be a better way of achieving this goal.

§70 As originally formulated, the query obscured whether height or width was the larger dimension, since the ratio was constructed by dividing the larger dimension by the smaller one, regardless of which was the height or width. The resulting ratios were always 1.0 or greater. A different formulation of the query was required to show the ratio of height to width consistently; the results then included ratios lower than 1.0, in cases where a codex was wider than it was tall.
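The two formulations differ only in how the ratio is bound. A minimal, self-contained illustration, with sample values standing in for a codex 120mm tall and 180mm wide:

```sparql
SELECT ?ratio_agnostic ?ratio_consistent
WHERE {
  VALUES (?height_avg ?width_avg) { (120 180) }  # a wide-format example
  # Original formulation: larger dimension over smaller, always >= 1.0
  BIND (IF(?height_avg > ?width_avg,
           ?height_avg / ?width_avg,
           ?width_avg / ?height_avg) AS ?ratio_agnostic)  # 1.5
  # Revised formulation: height over width, can fall below 1.0
  BIND (?height_avg / ?width_avg AS ?ratio_consistent)    # ~0.667
}
```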
Choosing between these queries depends on the ultimate goal of the research: is it simply to find the average relative proportions of a manuscript, or is it examining the orientation and layout of the pages as well?

§71 The resulting scatter plot showing ratios for 12,169 individual manuscripts, coloured according to their type, provided a very effective visual representation of a relatively large body of data. But these queries also made clear the importance of consistent approaches to recording this kind of data and of documenting the assumptions made in analyses of the data.

### 5.5 Query 5: How long did the bookseller James Tregaskis keep manuscripts in his stock?

§72 The next query, derived from the work of the Cultivate MSS project, considers the length of time books remained in the stock of a particular dealer. In this case, we looked at the London dealer James Tregaskis, who was a prolific producer of catalogues, many of which have been entered into the SDBM as part of the project and are now searchable as LOD within the MMM portal (Worms 2016).

§73 A manuscript might appear in Tregaskis’ catalogues multiple times a year, allowing the duration of a manuscript’s time in his stock to be traced with a relatively high degree of precision. This is particularly valuable because, unlike some other firms (notably J. & J. Leighton), no sales records are known to survive. Tregaskis’ activities therefore have to be reconstructed from his catalogues and records of his purchases at auctions. The SDBM allows for records pertaining to a single manuscript to be linked to a manuscript record, but it is difficult to compare those manuscript records. SPARQL provides the potential to calculate the length of time a manuscript remained in Tregaskis’ stock and to compare these figures. Comparing this with the same information for a larger and longer-lived firm like Bernard Quaritch Ltd. would help to assess the significance of the Tregaskis data.

§74 Tregaskis’ catalogues provide price data for each manuscript as it is offered for sale. It is therefore possible to track changes in the prices asked for a manuscript over time. However, in the time period covered by the catalogues in the SDBM (1892–1936), Great Britain was not using a decimal currency. Moreover, Tregaskis expressed prices in both pounds, shillings and pence, and guineas (a guinea was £1 1s). Using SPARQL to query price movements over time is therefore not feasible without some normalization of the raw price data.

5.5.1 Query explanation

*5.5.1.1 Step 1 (Figure 17): Manuscripts sold by Tregaskis, their dates of transfer, their transfer counts, and the number of days they stayed in Tregaskis’ stock: [https://api.triplydb.com/s/euJRw2LfK](https://api.triplydb.com/s/euJRw2LfK)*

**Figure 17: SPARQL query for Query 5: Step 1.**

§75 This query includes several calculations in its `SELECT` statement to determine the amount of time manuscripts remained in Tregaskis’ stock. The `MIN` aggregate function extracts the earliest date in a manuscript’s transfer timespan (`MIN(?timespan_datetime) AS ?earliest_date`). An identical strategy calculates the last date in the same timespan with the `MAX` function (`MAX(?timespan_datetime) AS ?last_date`). With these two new variables, `?earliest_date` and `?last_date`, the `DAY` function can calculate the duration of time a manuscript remained in Tregaskis’ possession (`DAY(?last_date - ?earliest_date) AS ?duration`). The `COUNT` function calculates the number of times each manuscript appeared in a Tregaskis catalogue as the `?transfer_count` variable.
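A sketch of the aggregation core of §75 follows. The triple patterns tying transfers to Tregaskis are placeholders (again using the invented `mmmp:` shortcuts rather than his actual MMM actor URI), and the date subtraction inside `DAY` relies on endpoint-specific arithmetic over `xsd:date` values, as the published query evidently does. The sketch repeats the aggregates inside `DAY` rather than reusing the `?earliest_date` and `?last_date` aliases, to stay closer to baseline SPARQL 1.1.

```sparql
PREFIX mmmp: <http://example.org/mmm-shortcuts/>  # hypothetical namespace
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>

SELECT ?manuscript
       (MIN(?timespan_datetime) AS ?earliest_date)
       (MAX(?timespan_datetime) AS ?last_date)
       (DAY(MAX(?timespan_datetime) - MIN(?timespan_datetime)) AS ?duration)
       (COUNT(?transfer) AS ?transfer_count)
WHERE {
  # Placeholder pattern: transfers in which Tregaskis is the seller
  ?transfer mmmp:seller ?seller ;
            mmmp:manuscript ?manuscript ;
            mmmp:date ?timespan_datetime .
  ?seller skos:prefLabel ?seller_label .
  FILTER (CONTAINS(?seller_label, "Tregaskis"))
}
GROUP BY ?manuscript
```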
*5.5.1.2 Step 2 (Figure 18): Duration and transfer count of Quaritch stock: [https://api.triplydb.com/s/0BBppvlWj](https://api.triplydb.com/s/0BBppvlWj)*

§76 Step 2 mirrors the process used in Step 1 to find transfers associated with Bernard Quaritch Ltd., but reduces the amount of information displayed so that a scatter-plot visualization becomes possible. The `SELECT` statement is reduced to two calculated variables: duration and transfer count (line 9). The `MAX` and `MIN` calculations are included within the `DAY` function to calculate duration. The URI for Quaritch is swapped in for Tregaskis’ in lines 12–13, and the transfer count is limited to those manuscripts with 2 or more transfers (line 25).

**Figure 18: SPARQL query for Query 5: Step 2.**

*5.5.1.3 Step 3 (Figures 19a–b): Tregaskis and Quaritch duration and transfers compared: [https://api.triplydb.com/s/RY-FOOqM4](https://api.triplydb.com/s/RY-FOOqM4)*

**Figure 19a: SPARQL query for Query 5: Step 3 (lines 9–25, relating to Tregaskis).**

**Figure 19b: SPARQL query for Query 5: Step 3 (lines 24–43, relating to Quaritch).**

§77 Step 3 of the query brings together the results of the previous two queries for easier comparison. This involves creating two similar sub-queries—one for Tregaskis (lines 11 to 27) and one for Quaritch (lines 29 to 43), combining them with a `UNION` command (line 28), and displaying the duration and the transfer counts from the two sub-queries in an overarching `SELECT` statement (line 9). The transfer count in each sub-query is limited to those manuscripts with 2 or more transfers (lines 26 and 42).

*5.5.1.4 Step 4 (Figures 20a–b): Comparison of Tregaskis and Quaritch stock between 1901–1920: [https://api.triplydb.com/s/syyzeyQ_q](https://api.triplydb.com/s/syyzeyQ_q)*

**Figure 20a: SPARQL query for Query 5: Step 4 (lines 10–29, relating to Tregaskis).**

**Figure 20b: SPARQL query for Query 5: Step 4 (lines 28–47, relating to Quaritch).**

§78 Step 4 is designed to limit the comparison between Tregaskis and Quaritch to a period when they were both active: between 1901 and 1920. This is done by adding a statement to each sub-query (at lines 25 and 43) to filter the timespan for values after 31 December 1900 and before 1st January 1921: `FILTER (?timespan_datetime > "1900-12-31"^^xsd:date && ?timespan_datetime < "1921-01-01"^^xsd:date)`.

*5.5.1.5 Step 5 (Figure 21): An improved scatter-plot visualization: [https://api.triplydb.com/s/qyGoY07li](https://api.triplydb.com/s/qyGoY07li)*

**Figure 21: SPARQL query for Query 5: Step 5.**

§79 Step 5 of this query is designed to address a significant limitation in the scatter-plot visualizations: one coloured dot could hide several manuscripts with the same duration and number of transfers (e.g., two transfers and zero days duration). We wanted to use a bubble chart to show the relative frequency of each combination of duration and transfers. This involved re-working the query to match the pattern of variables required for a bubble chart: (1) Text – the label for each bubble; (2) Numeric – X axis; (3) Numeric – Y axis; (4) Text – determines the colour of bubbles; (5) Numeric – determines the relative size of bubbles.
§80 The query uses two sub-queries to find transfers associated with Tregaskis or Quaritch, and binds the relevant name as the seller (lines 14 to 19; 21 to 26). The sub-queries are joined with a `UNION` command (line 20). We then find the manuscripts involved in these transfers and the dates of the transfers (lines 28 to 33). The results are limited to those with a transfer count greater than one, and a duration greater than zero days (line 36). The calculation of transfer counts and durations is done in a `SELECT` statement at line 12.

§81 To construct the pattern of variables required for the bubble chart, an outer `SELECT` statement is added (line 9). This also counts the number of manuscripts with the same combination of duration and number of transfers. The manuscript names, although required for the bubble chart, have been replaced with a blank space enclosed between quotation marks, since their inclusion would make the chart unreadable.
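The nesting described in §80–81 can be sketched as follows, with the inner patterns again reduced to the hypothetical `mmmp:` shortcuts (including placeholder URIs for the two dealers). The empty-string label in the outer `SELECT` is exactly the device described above, and the final `GROUP BY` is what turns identical duration/transfer combinations into a single, sized bubble:

```sparql
PREFIX mmmp: <http://example.org/mmm-shortcuts/>  # hypothetical namespace

SELECT ("" AS ?label) ?duration ?transfer_count ?seller
       (COUNT(?manuscript) AS ?bubble_size)
WHERE {
  {
    # Inner query (§80): per-manuscript counts and durations for each seller
    SELECT ?manuscript ?seller
           (COUNT(?transfer) AS ?transfer_count)
           (DAY(MAX(?date) - MIN(?date)) AS ?duration)
    WHERE {
      { ?transfer mmmp:seller mmmp:Tregaskis . BIND ("Tregaskis" AS ?seller) }
      UNION
      { ?transfer mmmp:seller mmmp:Quaritch . BIND ("Quaritch" AS ?seller) }
      ?transfer mmmp:manuscript ?manuscript ;
                mmmp:date ?date .
    }
    GROUP BY ?manuscript ?seller
  }
  FILTER (?transfer_count > 1 && ?duration > 0)
}
GROUP BY ?duration ?transfer_count ?seller
```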
5.5.2 Results

§82 Step 1 of the query produces 87 results, with transfer counts ranging from 2 to 8, and durations ranging from zero to 3,927 days. The transfer dates range from 1900 to 1935.

§83 Step 2 of the query produces 750 results, with transfer counts ranging from 2 to 12, and durations ranging from zero to 36,159 days (99 years). The results can now be visualized as a scatter plot (Figure 22).

**Figure 22: Multiple catalogue listings by Bernard Quaritch Ltd.**

§84 Step 3 of the query produces 837 results, with transfer counts ranging from 2 to 12, and durations ranging from zero to 36,159 days (99 years). This is a simple addition of the separate Tregaskis and Quaritch results, which can now be distinguished and compared on the same visualization, with Tregaskis manuscripts shown in blue and Quaritch in red (Figure 23).

**Figure 23: Tregaskis and Quaritch transfer counts and durations compared.**

§85 The durations between first and last listings are much greater for Quaritch, as are the number of listings, but does this reflect anything more than the much longer time period over which this firm has operated? In some cases, Quaritch had bought back (and re-sold) a manuscript originally sold by the firm some decades earlier, so the manuscript was not actually kept in stock for the whole period in question.

§86 Step 4 of the query produced 203 results for manuscript transfers between 1901 and 1920. The maximum duration was 6,605 days (18 years), and the maximum number of transfers during these 20 years was eight. This visualization makes it clear that Tregaskis was likely to list the same manuscript many more times than Quaritch during this period, and usually within a significantly shorter period of time (Figure 24).

**Figure 24: Comparison between Tregaskis and Quaritch 1901–1920.**

§87 Step 5 of this query produces a bubble chart in which each bubble shows the duration, the number of transfer events, the seller (Tregaskis in red, Quaritch in blue), and the number of manuscripts with that combination of variables (in the size of the bubble). The most common combination is visible in the largest blue bubble in the lower left of the chart: a duration of 792 days and a transfer count of 2, with Quaritch as the seller. A total of 30 manuscripts have this combination. The configuration of the bubble chart has been used to limit the maximum duration shown to 5,000 days, for the sake of visibility (Figure 25).

**Figure 25: Bubble chart comparing Quaritch and Tregaskis.**

5.5.3 Lessons learned

§88 SPARQL can be used with the MMM data to find and compare patterns in the stock retention and catalogue listings of manuscripts by dealers like Tregaskis and Quaritch over multiple years. Visualizations in the form of scatter plots and bubble charts are a valuable way of displaying this information; in the case of bubble charts, four different variables can be combined in the same chart. This cannot be done with the Sampo-UI interface to MMM, nor in the interfaces of the three source datasets.

§89 Nevertheless, these visualizations—and the underlying SPARQL results—need to be treated with some caution, since they conceal various assumptions about the data. There is no way of distinguishing manuscripts that were sold, bought back, and sold again by Quaritch from those that were kept in stock for a number of consecutive years and advertised in multiple catalogues during that period. The duration in stock is simply calculated from the earliest listing to the last recorded listing. Manuscripts with a duration of zero days between two listings may have been advertised twice in the same year, without a specific day or month being recorded, but these entries may also reflect two versions of the same catalogue or stock list entered separately in the Schoenberg Database.

### 5.6 Query 6: What is known about the social backgrounds of 19th- and 20th-century British collectors?

§90 Interested in researching the social backgrounds of 19th- and 20th-century manuscript collectors, author Toby Burrows raised the possibility of using a federated query to add information from external sources like Wikidata to an MMM SPARQL query. A federated query in SPARQL connects one endpoint to any other openly available endpoint, thus greatly expanding the possibilities for finding associations among datasets not otherwise obviously connected by topic or content. For Burrows’ purposes, the MMM data on its own could not provide information about the occupation, gender, life events, and other data that would build a fuller picture of the lives of these collectors and provide more insight into their habits and motivations for collecting. Mechanisms such as the shared use of identifiers from resources like the Virtual International Authority File (VIAF; [http://viaf.org/](http://viaf.org/)), which are included when available in the name authority metadata associated with people and institutions in both MMM and Wikidata, provide one of the easier ways to execute a federated search and present an opportunity to pull the personal data of persons and institutions provided in Wikidata entries together into MMM query results.

5.6.1 Query explanation

§91 Query 6 (Figure 26): [https://api.triplydb.com/s/44t3wQfOg](https://api.triplydb.com/s/44t3wQfOg). This federated query combines information from the MMM and Wikidata datasets with a single SPARQL query sent to the MMM endpoint. Line 15 limits the geographical scope of this query to England. We then find the actors associated with events occurring in England (lines 17–18), together with their death events and names (lines 20–21). The dates of these death events are then filtered for those occurring before 1900 and after 1800 (lines 23 to 26). Actors are then limited to those with VIAF identifiers (lines 28–29). Line 28 equates the `?actor` and `?identifier` variables by using the `owl:sameAs` predicate, which is how the query will connect the VIAF data between the MMM and Wikidata datasets. These VIAF identifiers are then passed to Wikidata using the `SERVICE` keyword. To find the corresponding “person” record in Wikidata, lines 31–32 return Wikidata resources that have VIAF identifiers that match the identifiers returned for the MMM actors above, along with their occupations. The `FILTER` on line 35 returns the occupation labels in English, rather than in any other language that may appear in the Wikidata data.

**Figure 26: Query 6.**
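As Figure 26 is likewise an image, the sketch below reconstructs the federated pattern from §91. The MMM-side patterns are compressed into the hypothetical `mmmp:` shortcuts; the TGN identifier for England (`tgn:7002445`), the Wikidata properties (`wdt:P214` for VIAF ID, `wdt:P106` for occupation), and the extraction of the bare VIAF number from the `owl:sameAs` URI are all our assumptions about how the linkage is made.

```sparql
PREFIX mmmp: <http://example.org/mmm-shortcuts/>  # hypothetical namespace
PREFIX tgn:  <http://vocab.getty.edu/tgn/>
PREFIX owl:  <http://www.w3.org/2002/07/owl#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX wdt:  <http://www.wikidata.org/prop/direct/>

SELECT ?actor ?name ?death_year ?identifier ?occupation ?occupation_label
WHERE {
  # MMM side: actors tied to events in England who died 1800-1900
  ?actor mmmp:related_event_place tgn:7002445 ;
         mmmp:death_year ?death_year ;
         mmmp:name ?name .
  FILTER (?death_year > 1800 && ?death_year < 1900)
  # Lines 28-29: the actor's VIAF identifier, published via owl:sameAs
  ?actor owl:sameAs ?identifier .
  FILTER (STRSTARTS(STR(?identifier), "http://viaf.org/viaf/"))
  BIND (STRAFTER(STR(?identifier), "http://viaf.org/viaf/") AS ?viaf_id)
  # Lines 31-35: fetch matching persons and their occupations from Wikidata
  SERVICE <https://query.wikidata.org/sparql> {
    ?person wdt:P214 ?viaf_id ;
            wdt:P106 ?occupation .
    ?occupation rdfs:label ?occupation_label .
    FILTER (LANG(?occupation_label) = "en")
  }
}
```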
§92 The results show each actor’s MMM ID and name, together with their year of death, their VIAF identifier, and their occupation identifier and occupation (the latter two from Wikidata).

5.6.2 Results

§93 The query produces 205 results with 91 distinct names of people with death dates in the 19th century, who have an average of two occupations, though some have many more than this: 19 in the case of William Morris! There are a total of 83 different occupations. Politicians (21) and writers (21) are the most common, though there is also an astrologer, a brewer, and at least three slave holders. (Readers can find these results by clicking on the Yasgui link for Query 6.) We are thus presented with a cross-section of occupations associated with those with the means and motivations to collect manuscripts in the 19th century.

5.6.3 Lessons learned

§94 As this query shows, Wikidata can be a valuable source of additional information about people and institutions in the MMM dataset that is not otherwise captured by the source datasets, in this case, the occupations of individual collectors. The results show that the personal, professional, and academic interests of collectors of premodern European manuscripts in the 19th century are diverse and sometimes surprising, including “singer-songwriter” or “science fiction writer.” The results may also show a certain bias. For example, why are there so many occupations associated with William Morris compared to other collectors? Is it because he was that much more active than anyone else, or that, as a seminal figure in the Arts and Crafts movement, we have simply collected more data about him than about other 19th-century manuscript collectors?

§95 The question of bias cannot be ignored, as it has implications for how we collect data and, in this case, data about people. Well-known people or institutions will have more data about their lives associated with them in online resources. But it is also interesting to note that women are not included in these results, though we know that there were women involved in the book trade in Britain with death dates before 1900. (For example, Henrietta Katherine Burrell, recorded in the SDBM Name Authority: [https://sdbm.library.upenn.edu/names/40365/](https://sdbm.library.upenn.edu/names/40365/).) Why is this so? The simple answer is that the overlap of persons with the same VIAF identifiers in both Wikidata and MMM is small. Indeed, while there are 56,685 actors (persons and organizations) in MMM, only a fraction have VIAF identifiers. At present, MMM has more than 15,300 VIAF identifiers for actors, but only 4,400 Wikidata identifiers.

§96 This lack of representation in VIAF could be due in large part to a systemic lack of recognition for the contributions that women have made to the book trade in the 19th century and to society in general. Following these results, we performed a similar query asking for actors who had a matching VIAF number but were identified in Wikidata as female. The best set of results was found among women collectors in the United States born between 1900 and 1950: [https://api.triplydb.com/s/OZlC0ieHo](https://api.triplydb.com/s/OZlC0ieHo), which returns 14 results showing 9 different women, with occupations ranging from librarian, book collector, and archaeologist to politician, statistician, and lawyer, among others.
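In terms of the sketch above, the follow-up query described in §96 needs only one extra pattern inside the `SERVICE` block (with `PREFIX wd: <http://www.wikidata.org/entity/>` declared); `wdt:P21` is Wikidata’s “sex or gender” property and `wd:Q6581072` its item for “female”:

```sparql
# Added inside the SERVICE <https://query.wikidata.org/sparql> block:
?person wdt:P21 wd:Q6581072 .  # keep only persons recorded as female
```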
§97 As these results show, this query strategy requires both the MMM person and the Wikidata person to have a shared VIAF identifier in order to return results. Our results point to the broader problem of the lack of representation of a large number of actors in the available authorities and LOD resources. A systematic import of Wikidata identifiers into MMM (or into the source datasets) would increase the results, but the problem will not be fully addressed until actors in underrepresented social groups and minorities are given better data representation in these resources.

## 6 Conclusion

§98 The weekly SPARQL workshop held by the MMM project began as a knowledge transfer activity designed to teach the practical skill of learning how to perform SPARQL queries, but gradually developed into a wider investigation of the use of SPARQL to analyze the data, explore broader types of research questions, and assess the research potential of the MMM aggregated dataset and its Knowledge Graph. The benefits of investing over 500 hours of staff time in learning and practicing SPARQL queries can be seen in various ways, beginning with a diagnostic approach to identifying limitations in the data aggregated by the MMM project. This includes areas (like the different types of events) where the data sources do not enable an optimum level of granularity in the MMM data model. The source datasets do not collect the same information or, sometimes, when they *do* collect the same information, it is not computationally accessible via the same methods. This is more than a matter of improved mapping and transformation. Information that is explicit in one dataset may be only inferred from another. Discrete pieces of information in one source may be stored in aggregated form in another.

§99 Like most collection-based humanities datasets and their interfaces, the MMM data sources are designed to produce lists of items (manuscripts) meeting certain criteria, rather than supporting statistical analyses. The price data in the SDBM, for example, are purely descriptive and do not provide an adequate basis for quantitative analysis, even within a SPARQL query. On the other hand, some contextual information that is outside the scope of the source datasets can be added on the fly in SPARQL queries, as our work with person data from Wikidata shows. This also reinforced the importance of Linked Open Data identifiers in enabling this kind of approach and raised some significant questions about future strategies for including identifiers in datasets like those used by MMM.

§100 There are signs that being able to write SPARQL queries is becoming a useful practical skill for humanities researchers. The popular humanities data management, network analysis, and visualization environment nodegoat, for example, recently added functionality for using SPARQL queries to import contextual data from Linked Open Data sources (nodegoat 2021).
SPARQL remains challenging to learn, even when using a detailed and well-documented data model like MMM’s, and requires a certain amount of trial and error. The Yasgui interface used in the MMM workshop offers some diagnostic help with formulating queries correctly, but its main advantages are the built-in visualizations. Its new “Geo events” display, which can produce timelines and map-based event sequences, has also been tested against MMM data. (See this query: [https://api.triplydb.com/s/u_-KEd-US](https://api.triplydb.com/s/u_-KEd-US).) But it would help to have a more visual approach to constructing the SPARQL queries themselves, in which data models and namespaces can be visualized for selecting entities and properties. One recent project has designed a visual interface for constructing SPARQL queries in the humanities, known as Gravsearch, but this has to be used within the Knora software package (Schweizer and Geer 2021).

§101 More generally, the workshop resulted in a better understanding of how querying data in a computational context works. For the humanists on the team, learning the technical language and structures of SPARQL also showed them how to develop more ambitious approaches to the MMM data, transforming the traditional research questions that had shaped the initial data modelling work into more sophisticated and expansive queries that took full advantage of the MMM data model. As a result, the returned data from these queries better reflected the true value of the combined dataset for humanistic research. For the computer scientists, the more evolved approach to querying led to more understanding of the complex research questions that are of interest to manuscript researchers, and to better analysis to determine the success of the project.

§102 As these case studies show, querying the MMM dataset via its SPARQL endpoint does not produce perfect results, or results that provide a definitive answer in the traditional sense to the research questions. The methodology presented in these case studies follows the principles of distant reading, whereby computational aggregation and analysis of the data presented in returned results brings new insights into, and raises new questions about, the nature of the data and the subject it represents—in this case pre-modern manuscripts (Moretti 2013). While one would not want to draw hard conclusions from the results achieved in these queries, we hope to have shown that the process of learning and experimenting in a SPARQL environment brings three important benefits: 1) a better understanding of a complex and imperfect dataset, 2) a better understanding of how manuscript description and associated data involving the people and institutions involved in the production, reception, and trade of premodern manuscripts needs to be presented to better facilitate computational research, and 3) an awareness of the need to further develop data literacy skills among researchers in order to take full advantage of the wealth of unexplored data now available to them in the Semantic Web (Koltay 2015).

**Acknowledgements**

This work was funded by the Trans-Atlantic Platform under its Digging into Data Challenge ([https://diggingintodata.org](https://diggingintodata.org)) for 2017–2020.
The Mapping Manuscript Migrations project was led by the University of Oxford, in partnership with the University of Pennsylvania, Aalto University and the Helsinki Centre for Digital Humanities (HELDIG) at the University of Helsinki, and the Institut de recherche et d’histoire des textes (IRHT). The authors wish to acknowledge CSC–IT Center for Science, Finland, for computational resources. The transformation of the Oxford manuscript data into RDF builds upon earlier work by the OXLOD project. The authors acknowledge the contributions of the following: Antoine Brix (IRHT), Petri Leskinen (Aalto University), Synnøve Myking (IRHT), Pierre-Louis Pinault (IRHT), and Jouni Tuominen (University of Helsinki).

**Competing interests**

LR currently serves as the Director of Digital Medievalist; her tenure on the board ends July 2022.

**Contributions**

Authorial contributions

Authorship is alphabetical after the drafting author and principal technical lead. Author contributions, described using the CASRAI CRediT typology, are as follows:

The corresponding author is: Lynn Ransom (lr)

List of contributors and roles in alphabetical order

- Toby Burrows: tb
- Laura Cleaver: lc
- Doug Emery: de
- Eero Hyvönen: eh
- Mikko Koho: mk
- Lynn Ransom: lr
- Emma Thomson: et
- Hanno Wijsman: hw

- Conceptualization: tb; lc; eh; de; mk; lr; et
- Methodology: tb; lc; de; eh; mk; lr; et
- Investigation: tb; lc; de; mk; lr; et; hw
- Writing – Original Draft Preparation: tb; lc; de; mk; lr; et
- Writing – Review & Editing: tb; de; eh; lr; et
- Visualization: tb; mk
- Supervision: tb; eh; lr; hw
- Project Administration: tb; eh; lr; hw
- Funding Acquisition: tb; eh; lr; hw

Editorial contributions

Recommending editors: Mike Kestemont, University of Antwerp, Belgium

Recommending referees: Tiziana Mancinelli, Ca’ Foscari Università Venezia, Italy; Roman Bleier, University of Graz, Austria

Section/copy/layout editors: Morgan Pearce, The Journal Incubator, University of Lethbridge, Canada; Christa Avram, The Journal Incubator, University of Lethbridge, Canada

**References**

Burrows, Toby, Nicole Bergk Pinto, Mahaut Cazals, Alexandre Gaudin, and Hanno Wijsman. 2020. “Evaluating a Semantic Portal for the ‘Mapping Manuscript Migrations’ Project.” *DigItalia: Rivista del Digitale nei Beni Culturali* 2: 178–185. Accessed May 3, 2022. http://digitalia.sbn.it/article/view/2643.

Davis, Lisa Fagin (@lisafdavis). 2020. “Voilà! 3,413 data points, height/width over time for several different genres of liturgical manuscripts. Data from @schoenbergdb (Caveat! Some of these data points are duplicates, since each database record is an observation of a particular manuscript at a particular time).” Twitter, December 5, 8:09 a.m. Accessed May 3, 2022. https://twitter.com/lisafdavis/status/1335239769765392386.

Doerr, Martin. 2003. “The CIDOC Conceptual Reference Module: An Ontological Approach to Semantic Interoperability of Metadata.” *AI Magazine* 24(3): 75–92. DOI: https://doi.org/10.1609/aimag.v24i3.1720.

DuCharme, Bob. 2013. *Learning SPARQL: Querying and Updating with SPARQL 1.1*. 2nd ed. Sebastopol, CA: O’Reilly.
Heath, Tom, and Christian Bizer. 2011. “Linked Data: Evolving the Web into a Global Data Space.” *Synthesis Lectures on the Semantic Web: Theory and Technology* 1(1): 1–136. Accessed February 17, 2022. DOI: https://doi.org/10.2200/S00334ED1V01Y201102WBE001.

Hyvönen, Eero, Esko Ikkala, Mikko Koho, Jouni Tuominen, Toby Burrows, Lynn Ransom, and Hanno Wijsman. 2021. “Mapping Manuscript Migrations on the Semantic Web: A Semantic Portal and Linked Open Data Service for Premodern Manuscript Research.” In *Proceedings of the 20th International Semantic Web Conference (ISWC 2021)*, virtual, October 24–28, 615–630. New York: Springer. DOI: https://doi.org/10.1007/978-3-030-88361-4_36.

Ichinose, Shiori, Ichiro Kobayashi, Michiaki Iwazume, and Kouji Tanaka. 2014. “Ranking the Results of DBpedia Retrieval with SPARQL Query.” In *JIST 2013: Semantic Technology* (Lecture Notes in Computer Science, vol. 8388), edited by Wooju Kim, Ying Ding, and Hong-Gee Kim, 306–319. Cham: Springer. DOI: https://doi.org/10.1007/978-3-319-06826-8_23.

Ikkala, Esko, Eero Hyvönen, Heikki Rantala, and Mikko Koho. 2021. “Sampo-UI: A Full Stack JavaScript Framework for Developing Semantic Portal User Interfaces.” *Semantic Web* 13(1): 69–84. DOI: https://doi.org/10.3233/SW-210428.

Koho, Mikko, Toby Burrows, Eero Hyvönen, Esko Ikkala, Kevin Page, Lynn Ransom, Jouni Tuominen, Doug Emery, Arthur Mitchell Fraas, Benjamin Heller, David Lewis, Andrew Morrison, Guillaume Porte, Emma Thomson, Athanasios Velios, and Hanno Wijsman. 2021. “Harmonizing and Publishing Heterogeneous Premodern Manuscript Metadata as Linked Open Data.” *Journal of the Association for Information Science and Technology* 73(2): 240–257. DOI: https://doi.org/10.1002/asi.24499.

Koltay, Tibor. 2015. “Data Literacy for Researchers and Data Librarians.” *Journal of Librarianship and Information Science* 49(1): 3–14. DOI: https://doi.org/10.1177/0961000615616450.

Lincoln, Matthew. 2014. “SPARQL for Humanists.” *Matthew Lincoln, PhD* (blog), July 10. Accessed February 17, 2022. https://matthewlincoln.net/2014/07/10/sparql-for-humanists.html.

———. 2015. “Using SPARQL to Access Linked Open Data.” *Programming Historian*. Accessed February 17, 2022. https://programminghistorian.org/en/lessons/retired/graph-databases-and-SPARQL.

Meroño-Peñuela, Albert, Ashkan Ashkpour, Marieke van Erp, Kees Mandemakers, Leen Breure, Andrea Scharnhorst, Stefan Schlobach, and Frank van Harmelen. 2015. “Semantic Technologies for Historical Research: A Survey.” *Semantic Web* 6(6): 539–564. Accessed May 3, 2022. http://www.semantic-web-journal.net/sites/default/files/swj301.pdf. DOI: https://doi.org/10.3233/SW-140158.

Moretti, Franco. 2013. *Distant Reading*. Verso Books.
nodegoat. 2021. “nodegoat Workshop Series Organised by the SNSF SPARK Project ‘Dynamic Data Ingestion.’” *nodegoat* (blog), April 6. Accessed May 3, 2022. https://nodegoat.net/blog.p/82.m/54/nodegoat-workshop-series-organised-by-the-snsf-spark-project-dynamic-data-ingestion.

Penny, Ralph. 2002. *A History of the Spanish Language*. Cambridge: Cambridge University Press. DOI: https://doi.org/10.1017/CBO9780511992827.

Rietveld, Laurens, and Rinke Hoekstra. 2017. “The YASGUI Family of SPARQL Clients.” *Semantic Web* 8(3): 373–383. DOI: https://doi.org/10.3233/SW-150197.

Riva, Pat, Martin Doerr, and Maja Žumer. 2009. “FRBRoo: Enabling a Common View of Information from Memory Institutions.” *International Cataloguing and Bibliographic Control* 38(2): 30–34. Accessed February 17, 2022. https://archive.ifla.org/IV/ifla74/papers/156-Riva_Doerr_Zumer-en.pdf.

Schweizer, Tobias, and Benjamin Geer. 2021. “Gravsearch: Transforming SPARQL to Query Humanities Data.” *Semantic Web* 12(6): 379–400. DOI: https://doi.org/10.3233/SW-200386.

Sharpe, Richard. 2003. *Titulus: Identifying Medieval Latin Texts, an Evidence-Based Approach*. Turnhout: Brepols.

Smith, Innocent (@InnocentOP). 2020. “Leafing through Emmanuel Borque’s Etude sur les sacramentaires romains (published in the 40s and 50s), I’m struck by how he neglects to give any details about the physical aspects of the manuscripts, e.g. size and number of folios.” Twitter, December 5, 7:13 a.m. Accessed May 3, 2022. https://twitter.com/InnocentOP/status/1335225723859169282.

Staab, Steffen, and Rudi Studer, eds. 2009. *Handbook on Ontologies*. 2nd ed. Berlin: Springer-Verlag. DOI: https://doi.org/10.1007/978-3-540-92673-3.

Tauberer, Joshua. 2006. “What Is RDF.” XML.com. Accessed May 3, 2022. https://www.xml.com/pub/a/2001/01/24/rdf.html.

Worms, Laurence. 2016. “James Tregaskis.” Antiquarian Booksellers’ Association. https://aba.org.uk/page/james-tregaskis.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.16995/dm.8064?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.16995/dm.8064, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://journal.digitalmedievalist.org/article/id/8064/download/pdf/" }
2,021
[]
true
2021-12-23T00:00:00
[ { "paperId": "99d691a42b96363f04ae930c0253d309c2867185", "title": "Harmonizing and publishing heterogeneous premodern manuscript metadata as Linked Open Data" }, { "paperId": "843a0cec71ae411865373f171969be9badcf34fd", "title": "Sampo-UI: A full stack JavaScript framework for developing semantic portal user interfaces" }, { "paperId": "5d7be902025a233c85767e78b8076a49125cdbd4", "title": "The Mise-En-Page In Western Manuscripts" }, { "paperId": "9ade2e622faa14004919eaed2f5c7f545adeb90a", "title": "Gravsearch: Transforming SPARQL to query humanities data" }, { "paperId": "bd52d8dee23617d3531702124fc87402a42a32a4", "title": "Data literacy for researchers and data librarians" }, { "paperId": "565a484c4579d099953fa0380ef736bf19f19731", "title": "The YASGUI family of SPARQL clients" }, { "paperId": "d87b8bdbd3b6a04bfedf8442fec0720db21e9062", "title": "Using SPARQL to access Linked Open Data" }, { "paperId": "c5224f327be6e4c2d449d8feb2a980c6cfe36c64", "title": "Semantic technologies for historical research: A survey" }, { "paperId": "995d6b55cab845ae7efa3876bdd5ddbef8e3fa0c", "title": "Ranking the Results of DBpedia Retrieval with SPARQL Query" }, { "paperId": "422c689b929e7ae7698ae8ed05bc16bbf637f3a9", "title": "Linked Data: Evolving the Web into a Global Data Space" }, { "paperId": "2c4c774d79363b78ead2550075b878432c5c24fa", "title": "The CIDOC Conceptual Reference Module: An Ontological Approach to Semantic Interoperability of Metadata" }, { "paperId": "686be738209fe686c8999f481d7a9643838f7012", "title": "A History of the Spanish Language" }, { "paperId": "37ec5013510022cb075469aa8816adf066e0c42c", "title": "Mapping Manuscript Migrations on the Semantic Web: A Semantic Portal and Linked Open Data Service for Premodern Manuscript Research" }, { "paperId": "4e98831ca6e7740fb9cf4b3a04fe56e55e5598e6", "title": "Learning SPARQL: querying and updating with SPARQL 1.1" }, { "paperId": "0d81b621b592a7213248ee899ebb1d1e82f7f224", "title": "FRBRoo: Enabling a Common View of Information from Memory Institutions" }, { "paperId": "2521c1d36286ea3a919bf413dddba2388d000377", "title": "Handbook on Ontologies" }, { "paperId": "a4ca58e8a3513f4f75ac17466a3ba167bf836e4c", "title": "Titulus. Identifying Medieval Latin Texts: An Evidence-Based Approach" }, { "paperId": "cdaca08dbecfd2b5bfb7e4d1833789fbe4788c92", "title": "What Is RDF" }, { "paperId": null, "title": "Petri Leskinen" }, { "paperId": null, "title": "Antiquarian Booksellers" }, { "paperId": null, "title": "The Journal Incubator" }, { "paperId": null, "title": "nodegoat Workshop Series Organised by the SNSF SPARK Project 'Dynamic Data Ingestion" }, { "paperId": null, "title": "SPARQL for Humanists." }, { "paperId": null, "title": "Evaluating a Semantic Portal for the ‘Mapping Manuscript Migrations" }, { "paperId": null, "title": "Distant Reading" } ]
27,735
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01b198ed09d52a7c601bcf229705508847cf48ca
[ "Computer Science" ]
0.888783
Lay Down the Common Metrics: Evaluating Proof-of-Work Consensus Protocols' Security
01b198ed09d52a7c601bcf229705508847cf48ca
IEEE Symposium on Security and Privacy
[ { "authorId": "31452614", "name": "Ren Zhang" }, { "authorId": "144795520", "name": "B. Preneel" } ]
{ "alternate_issns": null, "alternate_names": [ "S&P", "IEEE Symp Secur Priv" ], "alternate_urls": null, "id": "29b9c461-963e-4d11-b2ab-92c182243942", "issn": null, "name": "IEEE Symposium on Security and Privacy", "type": "conference", "url": "http://www.ieee-security.org/" }
Following Bitcoin's Nakamoto Consensus protocol (NC), hundreds of cryptocurrencies utilize proofs of work (PoW) to maintain their ledgers. However, research shows that NC fails to achieve perfect chain quality, allowing malicious miners to alter the public ledger in order to launch several attacks, i.e., selfish mining, double-spending and feather-forking. Some later designs, represented by Ethereum, Bitcoin-NG, DECOR+, Byzcoin and Publish or Perish, aim to solve the problem by raising the chain quality; other designs, represented by Fruitchains, DECOR+ and Subchains, claim to successfully defend against the attacks in the absence of perfect chain quality. As their effectiveness remains self-claimed, the community is divided on whether a secure PoW protocol is possible. In order to resolve this ambiguity and to lay down the foundation of a common body of knowledge, this paper introduces a multi-metric evaluation framework to quantitatively analyze PoW protocols' chain quality and attack resistance. Subsequently we use this framework to evaluate the security of these improved designs through Markov decision processes. We conclude that to date, no PoW protocol achieves ideal chain quality or is resistant against all three attacks. We attribute existing PoW protocols' imperfect chain quality to their unrealistic security assumptions, and their unsatisfactory attack resistance to a dilemma between "rewarding the bad" and "punishing the good". Moreover, our analysis reveals various new protocol-specific attack strategies. Based on our analysis, we propose future directions toward more secure PoW protocols and indicate several common pitfalls in PoW security analyses.
2019 IEEE Symposium on Security and Privacy

# Lay Down the Common Metrics: Evaluating Proof-of-Work Consensus Protocols' Security

## Ren Zhang, *Nervos* and *imec-COSIC, KU Leuven*, ren@nervos.org
## Bart Preneel, *imec-COSIC, KU Leuven*, bart.preneel@esat.kuleuven.be

**Abstract**—Following Bitcoin's Nakamoto Consensus protocol (NC), hundreds of cryptocurrencies utilize proofs of work (PoW) to maintain their ledgers. However, research shows that NC fails to achieve perfect chain quality, allowing malicious miners to alter the public ledger in order to launch several attacks, i.e., selfish mining, double-spending and feather-forking. Some later designs, represented by Ethereum, Bitcoin-NG, DECOR+, Byzcoin and Publish or Perish, aim to solve the problem by raising the chain quality; other designs, represented by Fruitchains, DECOR+ and Subchains, claim to successfully defend against the attacks in the absence of perfect chain quality. As their effectiveness remains self-claimed, the community is divided on whether a secure PoW protocol is possible. In order to resolve this ambiguity and to lay down the foundation of a common body of knowledge, this paper introduces a multi-metric evaluation framework to quantitatively analyze PoW protocols' chain quality and attack resistance. Subsequently we use this framework to evaluate the security of these improved designs through Markov decision processes. We conclude that to date, no PoW protocol achieves ideal chain quality or is resistant against all three attacks. We attribute existing PoW protocols' imperfect chain quality to their unrealistic security assumptions, and their unsatisfactory attack resistance to a dilemma between "rewarding the bad" and "punishing the good". Moreover, our analysis reveals various new protocol-specific attack strategies. Based on our analysis, we propose future directions toward more secure PoW protocols and indicate several common pitfalls in PoW security analyses.

**Index Terms**—blockchain, proof-of-work consensus, incentive compatibility, double-spending, censorship resistance

I. INTRODUCTION

By November 2018, more than six hundred digital currencies leverage *proofs of work (PoW)*, i.e., moderately hard computational tasks, to maintain consensus on a public ledger of transactions [1]. All PoW consensus protocols originate from Bitcoin's *Nakamoto Consensus (NC)* [2], in which participants, called *miners*, compete in generating the latest *block*—a group of new transactions bound with a solution to a computational puzzle. The protocol helps participants reach agreement on a sequence of blocks named the *blockchain*. The miner of each blockchain block is entitled to a *block reward* of new bitcoins to incentivize protocol participation. Remarkably, NC is the first scheme that promises an inalterable public ledger without prior knowledge of participants' identities. Unfortunately, the security of NC is challenged by several studies [3]–[7], in which researchers identify a wide range of strategies that allow attackers with less than 50% of total computing power to rewrite part of the blockchain with high success rate.

Given NC's security weakness, a considerable number of non-NC PoW protocols [6]–[23] have emerged in the past few years, which all claim to achieve stronger security properties.
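For readers less familiar with PoW, the minimal Python sketch below shows the kind of hash puzzle a miner solves: find a nonce such that the block hash, read as an integer, falls below a difficulty target. This is our own illustration, not code from the paper or from any Bitcoin implementation; the constant `TARGET` is chosen purely so the demo finishes quickly.

```python
import hashlib

# Illustrative difficulty target: a valid hash, read as a 256-bit
# integer, must be below this value. Real targets come from the
# protocol's difficulty adjustment; this one just makes a quick demo.
TARGET = 1 << 240

def block_hash(parent_hash: str, tx_root: str, nonce: int) -> int:
    """Hash the block header fields into a 256-bit integer."""
    header = f"{parent_hash}|{tx_root}|{nonce}".encode()
    return int.from_bytes(hashlib.sha256(header).digest(), "big")

def mine(parent_hash: str, tx_root: str) -> int:
    """Search nonces until the block hash meets the target (the PoW)."""
    nonce = 0
    while block_hash(parent_hash, tx_root, nonce) >= TARGET:
        nonce += 1
    return nonce

print("found nonce", mine("00" * 32, "deadbeef"))
```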
Nevertheless, in the absence of a systematic evaluation, such advancements remain self-claimed and not widely acknowledged. Moreover, some protocols introduce new issues, like lowering the chain-growth rate [24], [25] or facilitating an attacker to create disagreements among the compliant miners [26]. This inconclusive situation also feeds the pessimistic atmosphere surrounding PoW, leading new digital currencies to abandon PoW and turn to other consensus mechanisms such as proofs of stake (PoS), which all rely on stronger security assumptions, yet open new attack vectors [27]–[29]. In this paper, we address this situation and explore the (im)possibility of more secure PoW protocols. Our work and contributions include:

**A quantitative security evaluation framework.** We identify that NC's key weakness lies in its low *chain quality*, defined as the fraction of blockchain blocks mined by the compliant miners. The unsatisfactory chain quality allows attackers to substitute other miners' blocks in the blockchain with the attackers', which impairs NC's inalterability promise and can be utilized by attackers to cause three kinds of damage: they can (1) gain relative block rewards larger than their fair share with a *selfish mining attack* [6]; (2) spend the same coin more than once with a *double-spending attack*; and (3) force rational miners to collectively censor certain target transactions with a *feather-forking attack* [30]. Accordingly, to verify the self-claimed improvements of recent non-NC protocols and to detect the security flaws in PoW designs, we propose a comprehensive evaluation framework including *chain quality* and three attack-resistance metrics of *incentive compatibility*, *subversion gain* and *censorship susceptibility*, corresponding to the aforementioned attacks.

**Generalizing MDP-based methods for analyzing PoW protocols.** While Markov decision processes are commonly used to explore an actor's utility-maximizing strategies in a stochastic environment, previous MDP-based analyses mostly focus on NC with a rational, i.e., profit-driven, adversary [4], [31], [32]. We generalize their methods on two dimensions. First, by redefining the attacker's utility, we extend the model to include *byzantine adversaries*, whose goals are not limited to their economic gains. This generalization allows our model to capture more real-world attack scenarios, such as censorship or chain quality attacks. Second, by introducing new modeling and acceleration techniques, our MDPs can model more complicated systems and support longer block races than previous works, which enables cross-protocol security comparison.
Moreover, our approach opens the possibility of applying artificial intelligence techniques to analyzing protocol security. By properly simplifying the protocol and confining the attacker's reasonable actions, these techniques enable systematic exploration of a protocol's vulnerabilities with a given attacker goal, which helps improve the protocol design iteratively.

**Systematic evaluation of non-NC PoW protocols.** Based on their self-claimed properties, we divide PoW protocols claiming to improve NC's security into two groups: *better-chain-quality protocols* and *attack-resistant protocols*, differing in whether they accept imperfect chain quality as a given condition. We then use our framework to evaluate the two groups accordingly. Our findings are summarized as follows:

- *No PoW protocol achieves perfect chain quality facing a strong attacker.* We first evaluate the chain quality of two influential better-chain-quality protocols that were previously unverified: smallest-hash tie-breaking (SHTB) [12] and unpredictable deterministic tie-breaking (UDTB) [18], [21]. Joining the results of previous studies [4], [13], [31], we confirm that an attacker with more than a quarter of total mining power can obtain an unfair fraction of blockchain blocks in all better-chain-quality protocols. We attribute the low chain quality to information asymmetry between the attacker and the compliant miners, which is inherent to the unrealistic security assumptions in PoW protocols, including the participants' unawareness of their own network connectivity and the lack of a globally synchronous clock.
- *No attack-resistant protocol is resistant against all three attacks.* We then evaluate the attack-resistant protocols based on the metrics of incentive compatibility, subversion gain, and censorship susceptibility. We further divide these protocols into three groups based on their technical approaches: *reward-all protocols*, *punishment protocols* and *reward-lucky protocols*. We choose a representative and most influential protocol from each approach for evaluation: Fruitchains [20], a variant of DECOR+ [12], [21] named the reward-splitting protocol (RS), and Subchains [11]. Our analysis shows that all three approaches suffer from certain drawbacks: reward-all protocols remove the attacker's risk of losing block rewards in double-spending attacks; punishment protocols aid feather-forking attacks; reward-lucky protocols facilitate all three attacks.

We attribute these empirical results to a dilemma between "rewarding the bad" and "punishing the good". Our findings show that no better-chain-quality protocol outperforms NC's chain quality in all attacker settings, nor does any attack-resistant protocol outperform NC in defending against all three attacks. Starting from our identified cruxes hindering substantial improvement in both chain quality and attack resistance, we point out several directions of future improvement towards more secure PoW protocols.

**Exposing limitations in existing PoW protocols' security analyses.** The unsatisfactory security of PoW protocols is rooted in the designers' lack of security analyses [7], [8], [11], [12], [17]–[19], or in incomplete ones. Existing analyses are limited either to only one attack strategy [6], [9], [21]–[23], ignoring protocol-specific attack strategies, or to one or two security properties [10], [13]–[16], [20], [33], leaving the protocols more vulnerable to other attacker incentives.
In addition, our analysis reveals that, in some designers' analyses, certain parameters are artificially anchored to an unrealistic range in order to prove the properties of the protocol, leaving the real-world security unexplored. Of the five protocols we model in this paper, a comparison between our results and the designers' own analyses is summarized in Table I. Our results highlight that PoW protocols' security is not a unidimensional index, but rather a multi-metric property subject to *the law of the minimum*—security is decided by the weakest point in the design. Therefore, future protocol analyses need to consider a broad strategy space covering all reasonable actions with a given attacker goal, and to incorporate multiple attacks with real-world parameters.

TABLE I. SECURITY ANALYSES BY THE PROTOCOL DESIGNERS AND OUR NEW RESULTS.

| Group | Protocol | Designers' analysis | Our results |
|---|---|---|---|
| Better-chain-quality | SHTB [12] | None | New protocol-specific attack strategy |
| Better-chain-quality | UDTB [18], [21] | Analysis against one attack strategy | New protocol-specific attack strategy |
| Attack-resistant: reward-all | Fruitchains [20] | Formal analysis against selfish mining assuming some parameters are large enough | Vulnerable to selfish mining and double-spending attacks with reasonable parameters |
| Attack-resistant: punishment | RS [12], [21] | Analysis against one attack strategy | Vulnerable to censorship attack |
| Attack-resistant: reward-lucky | Subchains [11] | None | Vulnerable to all three attacks |

II. NAKAMOTO CONSENSUS'S SECURITY ISSUES AND ALTERNATIVE PoW PROTOCOLS

*A. Nakamoto Consensus*

NC helps all network participants agree on and order the set of confirmed transactions in a decentralized, pseudonymous way. Each block contains its *height*—its distance from the hardcoded *genesis block*—the hash value of the *parent block*, a set of transactions, and a nonce. Embedding the parent hash ensures that a miner chooses which chain to mine on before starting to mine. To construct a valid block, miners work on finding the right nonce so that the block hash is smaller than the *block difficulty target*. This target is adjusted every 2016 blockchain blocks so that on average one block is appended to the blockchain every ten minutes. Compliant miners publish blocks to the network the moment they are found.

Miners are incentivized by two kinds of rewards. First, a *block reward* is allocated to the miner of every blockchain block. Second, the value difference between the inputs and the outputs of a transaction is called the *transaction fee*, which goes to the miner who includes the transaction in the blockchain.

When more than one block extends the same preceding block, a miner adopts and mines on the *main chain* that is most computationally challenging to produce, which is commonly, although inaccurately, referred to as the *longest chain*. When several chains are of the same "length", miners choose the first chain they receive. We refer to this *forked* situation where miners work on different parent blocks as a *block race*, an equal-length block race as a *tie*, and blocks of the same height as *competing blocks*. Mining on the longest chain, or on the first-received block during a tie, is denoted the *compliant strategy* [5], [26], [34], [35]. Blocks that are not on the longest chain are orphaned and discarded by all miners. By convention, Bitcoin users will not consider a transfer of funds settled until it is confirmed by six blocks, including the block containing the transaction. We refer to Narayanan et al. [36] for a complete view of the system.

*B. Nakamoto Consensus's Security Issues*

Bitcoin's designer believed that the protocol achieves *perfect chain quality*: as long as more than half of total mining power is compliant, any attempt to substitute blocks in the blockchain fails with large probability [2]. Unfortunately, this belief is disproved by several later studies [3]–[7], which discover a family of strategies to replace the compliant miners' blocks at the end of the blockchain with the attacker's, with high success rate. The imperfect chain quality can be directly exploited to manipulate vote results in some blockchains [37].
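For concreteness, the compliant fork choice of Sect. II-A can be written down in a few lines. This is our own simplification: chain "length" stands in for cumulative work, which real clients compare instead.

```python
from dataclasses import dataclass

@dataclass
class Chain:
    tip_id: str
    length: int            # stand-in for cumulative work
    received_at: float     # local receive time of the tip

def choose_chain(chains: list[Chain]) -> Chain:
    """Compliant fork choice: longest chain, first-received tip on ties."""
    best_len = max(c.length for c in chains)
    candidates = [c for c in chains if c.length == best_len]
    return min(candidates, key=lambda c: c.received_at)

chains = [Chain("A", 5, 10.0), Chain("B", 5, 9.5), Chain("C", 4, 8.0)]
print(choose_chain(chains).tip_id)  # "B": same length as A, seen first
```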
Moreover, the imperfect chain quality enables a variety of other attacks, differing in the attackers' goals:

- *Selfish mining.* In this attack, first analyzed by Eyal and Sirer [6], a *selfish miner* keeps discovered blocks secret and mines on top of them, hoping to gain a larger lead on the public chain of *honest blocks* mined by the compliant miners. The selfish miner publishes the secret chain if it has one block and the public chain catches up, or if it has more than one block and the lead is reduced to one. Though risking the reward of the first secret block, once the selfish chain is two blocks ahead of its competitor, the selfish miner can securely invalidate the compliant miners' competing blocks. This strategy has been generalized by Sapirshtein et al. [4] and Nayak et al. [5] to a family of strategies. The attack allows the selfish miner to gain unfair block rewards (a minimal simulation of this strategy is sketched below). As the attacker's revenue rises superlinearly with the mining power share, rational miners are incentivized to attack collectively for a higher input-output ratio. This situation not only damages the system's decentralized structure, but also raises the success rates of various other attacks.
- *Double-spending.* A successful double-spending attack reverses a payment after the service or goods are delivered. The transaction to the merchant is replaced by a *conflicting transaction* transferring the funds back to the attacker. Double-spending was once believed to be difficult with less than 50% of total mining power [2]. However, a 2016 paper by Sompolinsky and Zohar [32] indicates that an attacker with arbitrarily low mining power can profitably implement the attack by combining it with selfish mining: the attacker mines in secret to perform double-spending attacks, and when there is little hope of orphaning six blocks in a row, the attacker publishes the secret blocks to claim the block rewards, switching to selfish mining instead.
- *Feather-forking.* In this attack, proposed by Miller [30], the attacker publicly promises to fork the blockchain to invalidate all blocks confirming the target transactions, and keeps mining on the forked chain until the main chain is k blocks ahead. Although the attack is not profitable and the success rate is low with minority mining power, the rational choice for other miners is to join the attacker in the censorship in order to avoid the potential loss. A successful attacker can approve and decline transactions at will, becoming the system's de facto owner, which violates the motivation of the permissionless design.

Researchers identify several other attacks against NC [35], [38]–[42]. Nevertheless, these attacks either do not have their roots, and hence their solutions, in the consensus protocol, or do not pose realistic threats in the coming decades.

*C. Alternative PoW Protocols*

A substantial number of alternative PoW protocols have been proposed to address NC's security issues. In this part we split these designs into two groups, better-chain-quality and attack-resistant protocols, based on their claims, and selectively introduce the most influential designs. These two groups are not mutually exclusive. Although we omit non-security-related innovations and hybrid protocols, i.e., protocols that combine PoW with other consensus mechanisms [43]–[45], our security analysis is still applicable to their underlying PoW protocols. We refer interested readers to the recent SoK paper of Bano et al. [28] for a more complete overview of consensus protocols.
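Before turning to the individual designs, the Monte Carlo sketch below makes the basic Eyal–Sirer selfish mining strategy of Sect. II-B concrete. It is our own illustration, not the paper's MDP machinery; the parameters follow the paper's α (attacker mining power share) and γ (share of compliant power mining on the attacker chain in a tie).

```python
import random

def selfish_mining_revenue(alpha, gamma, steps=1_000_000, seed=1):
    """Estimate the selfish miner's relative revenue under the basic
    Eyal-Sirer strategy by simulating one block discovery per step."""
    rng = random.Random(seed)
    a_blocks = h_blocks = 0   # main chain blocks earned by each side
    lead = 0                  # attacker's private lead
    tie = False               # a 1-vs-1 public block race is ongoing
    for _ in range(steps):
        if rng.random() < alpha:           # attacker finds the block
            if tie:                        # extends own branch: wins race
                a_blocks += 2
                lead, tie = 0, False
            else:
                lead += 1                  # keep the block secret
        else:                              # compliant miners find it
            if tie:
                if rng.random() < gamma:   # built on the attacker branch
                    a_blocks += 1; h_blocks += 1
                else:                      # built on the honest branch
                    h_blocks += 2
                lead, tie = 0, False
            elif lead == 0:
                h_blocks += 1
            elif lead == 1:                # publish, creating a tie
                lead, tie = 0, True
            elif lead == 2:                # publish both, override
                a_blocks += 2
                lead = 0
            else:                          # lead >= 3: release one block
                a_blocks += 1
                lead -= 1
    return a_blocks / (a_blocks + h_blocks)

print(selfish_mining_revenue(0.35, 0.5))  # > 0.35: unfair revenue
```

With α = 0.35 and γ = 0.5, the estimate comes out around 0.42, well above the fair share of 0.35, matching the superlinear-gain observation above.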
*1) Better-chain-quality protocols:* These designs usually modify NC's *fork-resolving policy*, hoping to reduce the probability that the compliant miners work on the attacker's chain during a block race. The first three designs abandon NC's first-received tie-breaking rule yet still follow the longest-chain rule, whereas the others abandon both rules (a short sketch comparing the tie-breaking policies appears at the end of this subsection).

*a) Uniform tie-breaking:* Eyal and Sirer suggest that during a tie, miners choose which chain to mine on uniformly at random, regardless of which one they receive first [6]. This policy is adopted by the PoW component of Ethereum, the cryptocurrency with the second largest market capitalization [46]. Bitcoin-NG, a high-throughput blockchain protocol [47] implemented in the two cryptocurrencies Waves [48] and Aeternity [49], also follows the uniform tie-breaking policy.

*b) Largest-fee and smallest-hash tie-breaking:* Lerner proposes DECOR+, in which during a tie, miners choose the chain whose *tip*, i.e., last block, has the largest transaction fees, and when multiple tips have the same amount of fees, choose the one with the smallest hash [12]. A variant of DECOR+ is implemented in Rootstock [17], a Bitcoin sidechain [50]. The author believes a *deterministic tie-breaking policy* helps the compliant miners choose the same chain in a tie, thus limiting the attacker's ability.

*c) Unpredictable deterministic tie-breaking:* In Byzcoin [18], Kokoris-Kogias et al. recommend that ties be resolved deterministically via a pseudorandom function taking all competing blocks as inputs. This tie-breaking policy is also described by Camacho and Lerner in an updated version of DECOR+ [21]. Under this policy, the attacker can neither determine whether a secretly-mined block can win a tie with unfair probability before all competing blocks are mined, nor split the compliant mining power.

*d) Publish or perish:* Zhang and Preneel present a design, *Publish or Perish*, in which forks are resolved by comparing the chains' *weights* [13]. Blocks published after their competitors do not contribute to the weight of their chain, and blocks that incorporate links to their parents' competitors are appreciated more. Consequently, a block that is kept secret until a competing block is published contributes to neither or both branches, hence it confers no advantage in winning the block race.

*e) Others:* Other better-chain-quality protocols include the GHOST protocol designed by Sompolinsky and Zohar [33] and Chainweb by Martino et al. [23].
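The tie-breaking policies above differ only in how a compliant miner picks among equal-length tips. A compact sketch of three of them, in our own notation; the hash-based function in `udtb` merely stands in for whatever pseudorandom function a concrete protocol specifies:

```python
import hashlib, random

def utb(tips):   # uniform tie-breaking (Ethereum-style): pick at random
    return random.choice(tips)

def shtb(tips):  # smallest-hash tie-breaking (DECOR+): smallest tip hash
    return min(tips, key=lambda t: int(t, 16))

def udtb(tips):  # unpredictable deterministic tie-breaking (Byzcoin):
    # a pseudorandom function over *all* competing tips picks the winner,
    # so no miner can bias the outcome before every tip is known.
    seed = hashlib.sha256("".join(sorted(tips)).encode()).digest()
    return sorted(tips)[seed[0] % len(tips)]

tips = ["c0ffee", "deadbe", "0badf0"]
print(utb(tips), shtb(tips), udtb(tips))
```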
*2) Attack-resistant protocols:* These designs usually modify NC's *blockchain topology* and *reward distribution policy*, hoping to reduce the attacker's profitability or to reduce the compliant miners' losses. They can be categorized into three types: the first two types issue rewards based on a block's topological position in the blockchain, whereas the third type issues rewards based on the block content.

*a) Reward-all protocols:* In these designs, most recent PoW solutions receive a fraction of a full reward, although some of them may not contribute to the transaction confirmation. Consequently, the compliant miners' losses due to malicious orphaning of their blocks are compensated.

Fruitchains by Pass and Shi [20] distributes rewards to all recent *fruits*, which are parallel products of block mining. Similar to "a block candidate is a block if its hash's first l bits are smaller than a predefined target", the candidate is a *fruit* if its hash's *last* l bits are smaller than another target. Although generated from the same mining process, fruits and blocks have different functionalities. Each block embeds an ordered fruit list, just as each block in NC embeds an ordered transaction list; transactions are embedded in fruits instead. Transactions are ordered based on their first fruit appearances in the blockchain. In addition to the transactions, each fruit contains a *pointer* to a recent main chain block which the fruit miner is certain will not be orphaned. A fruit is valid if its pointer block is not orphaned and its *gap*—the height difference between its pointer block and the main chain block containing the fruit—is smaller than a timeout threshold T_o. All valid fruits receive the same reward and blocks receive nothing. This incentive mechanism is also adopted by Thunderella, a blockchain design by the same authors [43]. Other designs of this type include the PoW component of Ethereum, the Inclusive protocol by Lewenberg et al. [10], SPECTRE by Sompolinsky et al. [14], Meshcash by Bentov et al. [15], and PHANTOM by Sompolinsky and Zohar [16].

Fig. 1. A Fruitchains execution. Banana's gap is height(E) − height(B) = 2. Tomato is not a valid fruit because its pointer block (D) is orphaned. When T_o = 3, pear is not valid even if it is included in E, as its gap reaches T_o.

*b) Punishment protocols:* As it is often hard to tell which of the competing blocks are mined by the attacker, these designs forfeit the rewards of all competing blocks to deter attacks. In DECOR+, the block reward is split evenly among all competing blocks of the same height [12], [21]. The authors propose some other punishment rules for suspected malicious behaviors. Bahack suggests another punishment protocol [7].

*c) Reward-lucky protocols:* These designs selectively reward PoW solutions, hoping that these solutions serve as anchor points to stabilize the blockchain. Subchains by Rizun demands that miners broadcast *weak blocks*, i.e., block candidates with a larger difficulty target, in addition to blocks [11]. Weak blocks also count in chain length and contribute to transaction confirmation, though they receive no reward. Subchains follows NC's longest-chain and first-received rules. Bobtail by Bissias and Levine [22] is another reward-lucky protocol.

III. EVALUATION FRAMEWORK AND SECURITY MODEL

As non-NC PoW protocols' security improvements remain self-claimed, we propose our evaluation framework in order to investigate whether they have fixed NC's weaknesses, and to shed light on the possibility of such improvement.

*A. Evaluation Framework*

We present four metrics for a more comprehensive view of PoW protocols' security. This is not an exhaustive list of all metrics proposed in the literature, but rather a comparative framework with NC as the benchmark. In particular, though the chain-growth and common-prefix properties are also used to quantify consensus protocol security [3], [15], [25], [51], they are not included, because the attack vectors on these properties are only introduced by certain non-NC protocols.

*1) Chain quality:* This metric measures the difficulty of substituting the honest main chain blocks. In line with previous research [3], [25], [51], we define the chain quality Q as the expected lower bound on the fraction of honest main chain blocks, given that the attacker controls a fraction α of total mining power.
Defining B_c and B_a as the total number of main chain blocks mined by the compliant miners and the attacker respectively, and s as the attacker's strategy, we have:

$$Q(\alpha) = \min_{s} \lim_{t \to \infty} \frac{B_c}{B_a + B_c}.$$

Ideally, Q(α) = 1 − α; namely, the attacker gets main chain blocks at most proportional to its mining power. A protocol's chain quality is not related to its reward distribution policy.

*2) Incentive compatibility:* This metric measures a protocol's selfish mining resistance. It is defined as the expected lower bound on the *relative revenue* of the compliant miners [4]–[7], [13], [26], [31], namely:

$$I(\alpha) = \min_{s} \lim_{t \to \infty} \frac{\sum R_c}{\sum R_a + \sum R_c},$$

where ∑R_a and ∑R_c are the cumulative rewards received by the attacker and the compliant miners, respectively. Incentive compatibility shares the same ideal value 1 − α with chain quality. Unlike chain quality, all three attack-resistance metrics are tightly related to the reward distribution policy.

*3) Subversion gain:* This metric measures the profitability of double-spending attacks, quantified as the time-averaged illegal upper-bound profit in a specific attack model, in line with several previous papers [26], [31], [32]. In this model, every honest block contains a *payment transaction* to the merchant, whose conflicting version is embedded in the block's secret competitor, if the competitor exists. The service or goods are delivered when the block containing the payment transaction reaches σ confirmations, with σ = 6 in Bitcoin, or when the attacker gives up on attacking this block. In the former case, if the payment transaction is later invalidated, for every block that is orphaned after confirmation, the attacker receives a double-spending reward V_ds, in units of block rewards. In other words, if the attacker successfully orphans k blocks in a row, the double-spending reward is defined as

$$R_{\mathrm{ds}}(k, \sigma, V_{\mathrm{ds}}) = \begin{cases} 0, & k < \sigma, \\ (k + 1 - \sigma)\, V_{\mathrm{ds}}, & k \ge \sigma, \end{cases} \qquad (1)$$

where k + 1 − σ is the number of σ-confirmation blocks that are orphaned. In addition, if the first payment transaction is invalidated before reaching σ confirmations, R_ds = 0. The attacker receives no punishment for failed double-spending attempts, because if an attack fails, the service or goods will be delivered eventually, compensating the attacker's loss.

This metric captures multiple aspects of a protocol's double-spending resistance. First, incorporating ∑R_a forces the attacker to balance the risk of losing block rewards against the double-spending gain. Second, the merchant is allowed to delay delivery if the conflicting transaction is broadcast before σ confirmations, counteracting the attack. Third, longer forks, which cause more damage in reality, result in higher rewards. The subversion gain of the attacker is defined as:

$$S(\alpha, \sigma, V_{\mathrm{ds}}) = \max_{s} \lim_{t \to \infty} \frac{\sum R_a + \sum R_{\mathrm{ds}}}{t} - \alpha,$$

where t represents the elapsed time, measured in block generation intervals, and α is the time-averaged mining reward without the double-spending attack. Ideally, the attacker complies with the protocol to avoid losing any block reward, namely S(α, σ, V_ds) = 0. However, an attacker is always incentivized to deviate as long as V_ds is large enough.
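To make Eqn. (1) concrete, here is a direct transcription in Python (the variable names are ours):

```python
def double_spend_reward(k: int, sigma: int, v_ds: float) -> float:
    """Eqn. (1): reward for orphaning k blocks in a row when goods are
    released after sigma confirmations; v_ds is the double-spending
    reward per reverted sigma-confirmed block, in block-reward units."""
    return 0.0 if k < sigma else (k + 1 - sigma) * v_ds

# Orphaning a 7-block fork against sigma = 6 reverts two
# sigma-confirmed payments:
print(double_spend_reward(7, 6, 3.0))  # 6.0
print(double_spend_reward(5, 6, 3.0))  # 0.0: fork too short
```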
*4) Censorship susceptibility:* Inspired by feather-forking attacks, we measure censorship susceptibility as the maximum fraction of income loss the attacker can inflict on compliant miners in a censorship retaliation attack. We choose not to incorporate the attacker's economic loss, as the retaliation does not happen if the censorship threat succeeds. As long as the other miners are convinced of the attacker's determination, the only factor affecting their strategy is the expected loss of not cooperating. Unlike feather-forking, in which the retaliation starts after receiving the block containing the target transaction, in our model the attack is initiated as soon as compliant miners start mining the block. This setting is practical, as the attacker can learn of the transaction's inclusion as soon as the mining starts by eavesdropping in compliant mining pools. Another difference with feather-forking is that we remove the reliance on the parameter k by allowing the attacker to drop the falling-behind chain and try to orphan the next honest block at any time. As the attacker's goal is to maximize the compliant miners' loss, mining on a falling-behind chain is not always optimal. Our generalized setting captures multiple attack scenarios. For example, in an extreme form of the attack, attackers degrade the system's availability by replacing honest blocks with empty blocks, delaying all transactions' confirmation. A protocol's censorship susceptibility is defined as:

$$C(\alpha) = \max_{s} \lim_{t \to \infty} \frac{\sum O_c}{\sum O_c + \sum R_c},$$

where ∑O_c is the compliant miners' cumulative reward loss due to the attack, in units of block rewards. Ideally, C(α) = 0; namely, the compliant miners run no risk in rejecting a censorship request.

*B. Threat Model*

We follow the threat model of most studies on PoW security [3]–[7], [9], [13], [26], [31], [32], [52]. In this model, there is only one colluding pool of malicious miners, denoted "the attacker", with less than half of total mining power. All other miners are compliant. This is the strongest form of the attacks, as multiple attackers cause more damage when combining their mining power. We do not consider the effect of transaction fees as in [35], [42], [53]. Neither do we incorporate the difficulty adjustment mechanism as in [39]. In terms of network connectivity, the attacker cannot drop other miners' messages or downgrade their propagation speed. However, the attacker may, after seeing a compliant miner's message, send a new one to certain miners that arrives before the original message. The propagation delay is modeled as a fixed natural orphan rate, as in [31]. Unfortunately, as many protocols we evaluate are under development and their parameters are not specified, it is difficult to estimate their orphan rates. Therefore we assume all protocols in this work have the same expected block interval and zero natural orphan rate, in order to ensure a fair comparison at the protocol level.
We refer to the last common block of these two chains, namely the last block recognized by all miners, as the *consensus block* . *C. Modeling Mining Processes as MDPs* An MDP is a discrete time stochastic control process that models the decision making in situations where outcomes are partly random and partly under the control of a decision maker. To model a system as an MDP, we need to encode all status and history information that might influence the strategic player’s decisions into a *state*, and the player’s available decisions into several *actions* . Moreover, a *state* *transition matrix* describes the probability distribution of the next state over every ( *state, action* ) pair. At last, when certain ( *state, action, ne* *w* *state* ) transition happens, a *reward* is allocated to the player to facilitate utility computation. In line with previous studies [4], [13], [26], [31], [32], mining is modeled as a sequence of steps. The MDP *state* describes the blockchain’s status at the beginning of a step, which incorporates all information that might affect the attacker and the compliant miners’ decisions, e.g., the lengths of competing chains, the miners of the last several blocks, and the number of unpublished attacker blocks. Encoding a blockchain status into a state is challenging, as despite the sparseness of the transition matrices and our optimization, an MDP solver gives the exact solution only when the number of states is less than about 10 [7] . In each step, the attacker first decides how many secret blocks to publish. Next, the rewards are distributed for certain blocks if all miners agree that these blocks are settled, either as main chain blocks, orphans or *uncles*, i.e., orphans that are referred to in the main chain. Afterwards, all miners start mining. The compliant miners choose which chain to mine on based on public information, whereas the attacker may choose either chain. The *action* in an MDP describes the attacker’s choices on how many blocks to publish and which chain to mine on. A new block is then mined by either the attacker or the compliant miners, with probability distribution according to their mining power shares. New honest blocks are published immediately, whereas the attacker decides whether to publish his new block at the beginning of the next step. The attacker’s old blocks published in the next step might reach the compliant miners before the new honest block. The MDP *state transition* is triggered by the new mining event. The rationale behind this publish-reward-mine-found sequence is that rational decisions may only change when a new block is available [4], [7]. Whenever it is infeasible to model the exact system, we choose to favor the compliant miners and limit the attacker’s ability, ensuring the attacker’s utility is achievable in reality to better demonstrate the protocols’ weaknesses. IV. C HAIN Q UALITY A NALYSIS ON B ETTER -C HAIN -Q UALITY P ROTOCOLS This section evaluates the chain quality *Q* ( *α* ) of NC, uniform tie-breaking (UTB), smallest-hash tie-breaking (SHTB), unpredictable deterministic tie-breaking (UDTB) and Publish or Perish (PoP). We do not consider largest-fee tie-breaking, as it enables a malicious miner to locally generate a hugefee transaction and to embed it in the miner’s own block to increase the chance of winning a tie. Neither do we consider GHOST, as it behaves identically to NC when the network delay is negligible [33]. 
IV. CHAIN QUALITY ANALYSIS ON BETTER-CHAIN-QUALITY PROTOCOLS

This section evaluates the chain quality Q(α) of NC, uniform tie-breaking (UTB), smallest-hash tie-breaking (SHTB), unpredictable deterministic tie-breaking (UDTB) and Publish or Perish (PoP). We do not consider largest-fee tie-breaking, as it enables a malicious miner to locally generate a huge-fee transaction and embed it in the miner's own block to increase the chance of winning a tie. Neither do we consider GHOST, as it behaves identically to NC when the network delay is negligible [33]. Finally, we leave the evaluation of DAG-based protocols, such as [23], to future work, as the notion of chain quality is not directly applicable to them.

When orphaned blocks receive no reward and main chain blocks receive full rewards, the chain quality is equivalent to the compliant miners' relative revenue. Therefore, we implement this reward distribution policy in all MDPs of this section, and define the utility as the attacker's relative revenue 1 − Q(α), in order to find the chain quality. This equivalence also allows us to reuse the relative-revenue MDP designs of previous studies. We re-implement the NC, UTB and PoP MDPs as described in [4] and [13]. Our implementation can model block races longer than previous studies, as we accelerate the programs by allocating memory only once before assigning values to the state transition matrices. In this section, we first model the mining processes of SHTB and UDTB, and then present the evaluation results.

*A. Modeling SHTB*

The key challenge of modeling SHTB is to encode in a state the hashes of the latest blocks, as compliant miners resolve ties by comparing these hashes. Unfortunately, a block hash is usually a 256-bit value; encoding it makes the total number of states too large to be solvable. Therefore, we split the hash value space into a small number of *regions* and only encode the hash region number (a minimal sketch of this encoding appears below). When comparing two hashes from the same region, we consider the public chain tip to be smaller, which favors the compliant miners. As this simplification discourages the attacker, our MDP computes an upper bound on SHTB's chain quality. We defer the detailed MDP design to Appendix A.

*B. Modeling UDTB*

The main challenge is to model the pseudorandom function (PRF) determining a tie's winner. We address this challenge by introducing a binary field *tie* in the state representation, denoting whether the public chain tip has priority over its competitor after applying the PRF. This field is meaningful when the attacker chain is no shorter than the public chain. Every time the public chain tip is updated, the field has equal probability of being 0 or 1. The design can be found in Appendix B.

*C. Evaluation Results*

*1) Solving for the optimal policies:* Our MDPs output the attacker's optimal policy and the expected fraction of main chain blocks following this policy, namely 1 − Q(α), allowing us to compute Q(α). Besides α, another input in NC is γ, defined as the proportion of compliant mining power that works on the attacker chain during a tie. We compute Q(α) for all five protocols with α between 0.1 and 0.45 with interval 0.05. Three different γ values are chosen for NC: 0, 0.5, and 1. The fail-safe parameter k in PoP is set to 3, following the authors' recommendation [13]. For NC, UTB and UDTB, we set the maximum block race length, denoted l_max, to 160, which is large enough that Q(α)'s lower and upper bounds differ by less than 4 × 10^−5 for all inputs. The detailed computation of these bounds can be found in Sect. 4.2 of [4]. For PoP, l_max is set to 30, which is larger than the value 12 in the authors' implementation [26]. For SHTB, we set l_max to 40 and split the valid hash space into 15 equal-size regions. Once l_max is reached, the attacker is forced to publish the attacker chain and end the block race. For the latter two protocols, we check convergence by examining whether the results are affected if l_max decreases by two. Data points that do not converge are discarded.
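The region encoding just mentioned for SHTB can be illustrated in a few lines; the constant names are ours, and the region count matches the 15 regions used above.

```python
N_REGIONS = 15
HASH_SPACE = 1 << 256          # size of the (simplified) hash space

def hash_region(block_hash: int) -> int:
    """Map a 256-bit block hash to one of N_REGIONS equal-size regions.
    The MDP state stores only this region number instead of the hash."""
    return block_hash * N_REGIONS // HASH_SPACE

def public_tip_wins(public_region: int, attacker_region: int) -> bool:
    """Within the same region, the public tip is treated as smaller,
    which favors the compliant miners (hence an upper bound on Q)."""
    return public_region <= attacker_region

print(hash_region(HASH_SPACE // 2), public_tip_wins(7, 7))  # 7 True
```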
*2) Chain quality:* Our results for NC, UTB and PoP in Fig. 2 match those from previous studies [4], [13], [31]. We list our new insights as follows.

Fig. 2. The difference between the chain quality Q(α) and the ideal value 1 − α for NC, UTB, SHTB, UDTB and PoP. A larger number indicates worse performance. Q(α) does not converge for PoP when α = 0.45 and for SHTB when α ≥ 0.4.

*Result 1:* UTB's and UDTB's Q(α) are almost identical; they perform no better than NC with γ ≤ 0.5 for all inputs.

For all our inputs, UTB's and UDTB's Q(α) differ by at most 1%. UDTB may outperform UTB when natural forks happen frequently, as these forks are resolved faster in UDTB due to the compliant miners' convergence. UTB's and UDTB's unsatisfactory performance is attributed to the following protocol-specific strategy: as neither policy takes the block receiving time into consideration, an attacker who keeps mining from behind the public chain may still win the block race with a tie. Consequently, their chain quality is lower than that of NC when γ = 0.5.

*Result 2:* SHTB achieves the lowest chain quality among all better-chain-quality protocols.

An examination of the optimal strategies reveals the cause of SHTB's poor chain quality. In SHTB, when α = 0.1, the optimal action when the attacker finds a smallest-hash-region block before the compliant miners find anything is to keep mining privately, whereas in all other protocols except NC with γ = 1, the weak attacker publishes the block. In other words, resolving ties by comparing hashes allows the attacker to better estimate the probability of winning, hence he is more inclined to deviate when the odds are favorable. Moreover, SHTB enables the "catching up from behind" strategy, like UTB and UDTB.

*3) Profitable threshold:* We calculate the *profitable threshold (PT)*, the maximum α that achieves the ideal chain quality 1 − α, and display the results in Table II.

TABLE II. THE PROFITABLE THRESHOLD PT OF NC, UTB, SHTB, UDTB AND PoP.

| Protocol | PT | Protocol | PT |
|---|---|---|---|
| NC, γ = 0 | 0.3333 | SHTB (upper bound) | 0.0652 |
| NC, γ = 0.5 | 0.2500 | UDTB | 0.2321 |
| NC, γ = 1 | 0.0000 | PoP | 0.2500 |
| UTB | 0.2321 | | |

*Result 3:* To date, no PoW protocol achieves the ideal chain quality when α > 0.25.

SHTB's actual PT should be zero, because as long as a secret block's hash is small enough, the probability of winning a tie can be arbitrarily high, encouraging the attacker to withhold the block. The seemingly above-zero result arises because we are unable to encode the hash at arbitrary granularity.

*Result 4:* No protocol modification outperforms NC with γ = 0 when α ≤ 0.39.

NC with γ = 0 achieves the best chain quality for all α ≤ 0.35 in Fig. 2. It is only outperformed by PoP when α ≥ 0.4. We locate the exact value where PoP starts to outperform NC with a binary search: in both PoP and NC, Q(0.3901) = 0.5372.

*D. What Goes Wrong: Information Asymmetry*

We attribute NC's poor chain quality to the protocol's incapability of distinguishing the honest chain from the attacker chain, due to information asymmetry. When two competing chains emerge simultaneously, no information can help the compliant miners identify the attacker chain, or even whether there is an attacker chain, as the fork might be caused by a temporary network partition.
In contrast, possessing information about both chains, the attacker makes more informed decisions, "gambling" only when the odds are favorable. Since this information asymmetry is not addressed in non-NC protocols, their attempts to raise the chain quality remain unsatisfactory.

Unfortunately, we believe it is difficult to solve this information asymmetry within PoW protocols' security assumptions. Under these assumptions, compliant miners can rely, almost exclusively, only on limited public information, namely the blockchain topology and block content, to choose which chain to mine on. Other public information, such as the network partition status, which is highly likely to be available to all miners in reality, as well as the compliant miners' private information, such as their network connectivity or the difference between a block's timestamp and its receiving time, is ignored in identifying the attacker chain. The attacker, on the other hand, is able to act on all available information. In other words, the information asymmetry is anchored and intensified in these protocols through their unrealistic and inconsistent security assumptions.

V. INCENTIVE COMPATIBILITY ANALYSIS ON TYPICAL ATTACK-RESISTANT PROTOCOLS

In the following sections, we analyze the attack resistance of NC and the three most influential designs, one from each type of attack-resistant protocol introduced in Sect. II-C2. For reward-all and reward-lucky protocols, we choose Fruitchains and Subchains, respectively. For punishment protocols, we implement our own variant of DECOR+ named the *reward-splitting protocol (RS)*. Unlike DECOR+, RS follows NC's longest-chain and first-received fork-resolving policy. This modification excludes the influence of chain quality from our attack resistance analysis, as all four protocols in comparison share the same chain quality. Most insights we gain are directly generalizable to all protocols of the same type.

*A. Modeling Fruitchains*

We use Ratio_f2b to denote the ratio of the fruit difficulty target to the block difficulty target. For example, Ratio_f2b = 2 means that of all the *units*—mining products—two thirds are fruits and one third are blocks in expectation. The main challenge of modeling Fruitchains is to encode each fruit's pointer block. The number of states grows exponentially with the number of steps if we encode all possible choices for each fruit. To address this complexity, we assume all compliant miners know when the block race starts and act optimally to avoid honest fruits being orphaned. Moreover, the attacker's action to cause a tie is disabled, so that no honest fruit points at attacker-chain blocks. These assumptions favor the compliant miners. Consequently, incentive compatibility is computed as an upper bound, while subversion gain and censorship susceptibility are computed as lower bounds. Our Fruitchains MDP design can be found in Appendix C.

*B. Defining and Modeling RS*

In RS, we define a block's *gap* as the height difference between the first main chain block that refers to the block and the block itself. A main chain block's gap is defined as zero. This definition, unlike that of Fruitchains, enables an accurate modeling of our protocol. A block is *visible* if its gap is strictly smaller than the *timeout threshold T_o*. Each block reward is split among all visible blocks of the same height.

Fig. 3. An RS execution. gap(C′) = height(E) − height(C′) = 2. When T_o = 3, B′ is not visible even if it is referred to in E, as its gap reaches T_o.
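A toy transcription of the splitting rule just defined (our own helper, purely illustrative; the real protocol splits rewards inside consensus, not in a post-hoc function):

```python
def rs_rewards(heights, gaps, timeout):
    """Split one block reward per height among all *visible* blocks of
    that height, per the RS rule above. heights[i] is block i's height,
    gaps[i] its gap (0 for main chain blocks); a block is visible iff
    gap < timeout. Returns each block's reward share."""
    visible = [g < timeout for g in gaps]
    count = {}
    for h, v in zip(heights, visible):
        if v:
            count[h] = count.get(h, 0) + 1
    return [1.0 / count[h] if v else 0.0
            for h, v in zip(heights, visible)]

# Height 5 has a main chain block (gap 0) and a visible competitor
# (gap 2): with timeout 3 they split the reward; a gap-3 block at
# height 6 is invisible and gets nothing.
print(rs_rewards([5, 5, 6], [0, 2, 3], timeout=3))  # [0.5, 0.5, 0.0]
```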
Other reward-forfeiting mechanisms of DECOR+ are omitted, as they are tied to its own fork-resolving policy. Therefore, RS's numerical results are not the same as those of DECOR+.

To model RS, we observe that when the attacker wins a block race, it is uncertain whether the orphaned honest blocks are rendered invisible, as they might still be included in the blockchain as uncles. Therefore, we introduce an extra field *history*, a string of at most T_o − 1 bits, in our state representation to encode blocks whose rewards are not settled prior to the current block race. Each bit in *history* denotes the blockchain's status at a specific height. Interested readers can find the MDP design in Appendix D.

*C. Modeling Subchains*

The ratio of the weak-block difficulty target to the block difficulty target is denoted Ratio_w2b. Note that Ratio_w2b is not equivalent to Ratio_f2b in Fruitchains. In Fruitchains, a unit is a fruit as long as the fruit target is met; in Subchains, a unit is a weak block when the weak-block target is met and the block target is *not* met. When Ratio_w2b = 2, half of the units are weak blocks while the other half are blocks in expectation.

A straightforward encoding of a Subchains state includes both chains' block/weak-block mining sequences, in which the number of states grows exponentially with the block race length. To compress the state space, we observe that in all outcomes of a block race, the public chain is either adopted or abandoned by both miners as a whole. A similar argument applies to the public chain's competing attacker-chain units. Therefore, we encode only the number of blocks in both chains, the attacker chain's last three units and the length difference between the two chains, instead of two full mining sequences. This simplification limits the attacker's ability: the attacker can keep no more than three private units after every publication. Hence our Subchains MDP favors the compliant miners. The complete MDP design is in Appendix E.

*D. Evaluation Results*

Our MDPs output the attacker's optimal strategies and their expected relative revenue, namely 1 − I(α). For all three protocols, we compute I(α) with α between 0.1 and 0.45 with interval 0.05 and γ = 0, 0.5 and 1, except that our Fruitchains MDP does not support γ = 0.5.

*1) Fruitchains:* Fruitchains is evaluated with the following set of parameters. Ratio_f2b is set to 1, so that the expected number of fruits equals that of blocks, which is the simplest case. The maximum block race length l_max is set to 20. Two different T_o values, 7 and 13, are selected so that we can verify whether a larger T_o results in a higher I(α). In practice, T_o should be no bigger than σ + 1, where σ is the confirmation threshold; otherwise an attacker can start mining a competing chain to double-spend a confirmed transaction without risking any fruit rewards. Hence the maximum T_o required by Bitcoin's six-confirmation convention and Ethereum's twelve-confirmation convention are 7 and 13, respectively. Other MDP thresholds are set so that the probability that these thresholds are reached before l_max is around one percent. The attacker is forced to publish the entire chain if any threshold is reached. The results can be found in the first four data lines of Table III.
TABLE III. INCENTIVE COMPATIBILITY I(α) OF FRUITCHAINS, COMPUTED AS UPPER BOUNDS, SELECTIVELY SHOWN. Entries that perform worse than NC are italicized. I(0.1) = 0.9 for all (T_o, Ratio_f2b, γ) combinations.

| (T_o, Ratio_f2b, γ) \ α | 0.15 | 0.2 | 0.25 | 0.3 | 0.35 |
|---|---|---|---|---|---|
| (7, 1, 0) | *0.8494* | *0.7961* | *0.7356* | *0.6614* | *0.5658* |
| (7, 1, 1) | 0.8493 | 0.7956 | 0.7337 | 0.6557 | 0.5532 |
| (13, 1, 0) | 0.8500 | *0.7997* | *0.7472* | *0.6864* | *0.6068* |
| (13, 1, 1) | 0.8500 | 0.7997 | 0.7470 | 0.6854 | 0.6036 |
| (13, 2, 0) | 0.8500 | *0.7997* | *0.7472* | *0.6866* | *0.6072* |
| (13, 2, 1) | 0.8500 | 0.7997 | 0.7470 | 0.6856 | 0.6040 |
| (13, 0.5, 0) | 0.8500 | *0.7997* | *0.7472* | *0.6864* | *0.6065* |
| (13, 0.5, 1) | 0.8500 | 0.7997 | 0.7470 | 0.6853 | 0.6033 |

Fig. 4. Selfish mining in Fruitchains, T_o = 3. Attacker fruits mined before the T_o-th attacker block are embedded in both chains, whereas honest fruits are only embedded in honest blocks. The attacker loses only the strawberry if losing the block race; however, if the attacker wins the race with ≥ T_o attacker blocks, all honest fruits are invalidated.

*Result 5:* In terms of I(α), Fruitchains performs worse than NC for various parameter choices when γ = 0.

In NC, when γ = 0, a weak attacker publishes blocks immediately after they are mined, giving up the temporary lead to avoid losing the block rewards. In contrast, in Fruitchains, as blocks receive no reward, the attacker has no incentive to publish any blocks when neither chain reaches length T_o. This property encourages more audacious block-withholding behaviors aiming to orphan all honest fruits with a long attacker chain. Moreover, this property decreases the profitable threshold to zero: the attacker can withhold blocks as long as the attacker chain is in the lead, regardless of how small α is. An examination of the optimal strategies verifies our inference. Fruitchains performs better than NC when γ = 1. This is because in Fruitchains—unlike in NC—winning a block race with a short chain does not increase the attacker's relative revenue.

*Result 6:* In Fruitchains, I(α) increases along with T_o, at the price of longer transaction confirmation delay.

As T_o increases, the chance that the attacker chain reaches T_o before the public chain decreases, limiting the attacker's unfair relative revenue. According to the authors, I(α) gets arbitrarily close to the ideal value 1 − α with a large enough T_o. Unfortunately, as T_o ≤ σ + 1, σ must increase along with T_o, resulting in longer transaction confirmation times. Fruitchains's authors have not specified the value of T_o.

Next we study the influence of Ratio_f2b on I(α). Two other Ratio_f2b values, 2 and 0.5, are chosen for T_o = 13. The results can be found in the last four lines of Table III.
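Results 5 and 6 hinge on the fruit timeout rule of Sect. II-C2. For reference, a literal transcription (the function name and signature are ours; we read the two validity conditions conjunctively, as the examples of Fig. 1 require):

```python
def fruit_valid(pointer_height: int, inclusion_height: int,
                pointer_orphaned: bool, timeout: int) -> bool:
    """A fruit is valid iff its pointer block is on the main chain and
    its gap -- the height difference between the block embedding the
    fruit and its pointer block -- is below the timeout threshold T_o."""
    if pointer_orphaned:
        return False
    return (inclusion_height - pointer_height) < timeout

# The "pear" example of Fig. 1: with T_o = 3, a gap of 3 invalidates it.
print(fruit_valid(2, 5, False, 3))  # False
print(fruit_valid(3, 5, False, 3))  # True: gap of 2, like the banana
```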
| (T_o, γ) \ α | 0.3 | 0.35 | 0.4 | 0.45 | PT |
|---|---|---|---|---|---|
| (3, 0) | – | 0.6084 | 0.4842 | *0.3097* | *0.3022* |
| (3, 0.5) | – | 0.5997 | 0.4534 | 0.2575 | 0.3021 |
| (3, 1) | 0.6921 | 0.5771 | 0.4292 | 0.2406 | 0.2918 |
| (6, 0) | – | – | 0.5283 | 0.3454 | 0.3549 |
| (6, 0.5) | – | – | 0.5056 | 0.2945 | 0.3509 |
| (6, 1) | – | 0.6397 | 0.4899 | 0.2816 | 0.3428 |
| (9, 0) | – | – | 0.5566 | 0.3690 | 0.3752 |
| (9, 0.5) | – | – | 0.5388 | 0.3210 | 0.3702 |
| (9, 1) | – | – | 0.5269 | 0.3098 | 0.3647 |

*Result 7:* In Fruitchains, I(α) increases along with Ratio_f2b, at the price of more repeating transactions in different fruits. This result is similar to that of the Newton-Pepys problem [54]: a higher Ratio_f2b lowers the execution's variance, and thus favors the compliant miners with majority mining power. However, the gain comes with a trade-off: more parallel fruits contain more repeating transactions, which demands better network optimization to avoid wasting bandwidth.

*2) RS:* Three different T_o values are chosen: 3, 6 and 9. T_o = 6 here is roughly equivalent to T_o = 7 in Fruitchains: in both cases, the first honest unit's reward is removed when the sixth attacker chain block is accepted by all miners. The profitable thresholds are also calculated. We set l_max = 30 and all data points converge. The results are shown in Table IV.

*Result 8:* In RS, I(α) increases along with T_o. RS with T_o = 3 outperforms Fruitchains with T_o = 7 for all inputs. I(α) is further improved when T_o increases. For any α < 0.5, RS is able to achieve the ideal I(α) with a large enough T_o, rather than getting asymptotically close to the ideal value as in Fruitchains. This is because, unlike in Fruitchains where block withholding has no risk, in RS half of the secret blocks' rewards are at risk even if the attacker wins the block race. Therefore, when the potential risk outweighs the relative revenue gain in selfish mining, the attacker follows the compliant strategy and I(α) = 1 − α.

*3) Subchains:* The maximum numbers of blocks in both chains are set to 20. The length difference of the chains, diff_u, is set in the range [−5, 20]. The attacker is forced to end the block race once the border numbers are reached. Two different Ratio_w2b values, 2 and 3, are selected to verify whether a larger weak-block-to-block ratio results in a higher I(α). The results are selectively displayed in Table V.

Table V. I(α) of Subchains, upper bounds, selectively shown. Entries that perform worse than NC are in italics.

| (Ratio_w2b, γ) \ α | 0.1 | 0.15 | 0.2 | 0.25 | 0.3 |
|---|---|---|---|---|---|
| (2, 0) | *0.8990* | *0.8467* | *0.7922* | *0.7342* | *0.6712* |
| (2, 0.5) | *0.8970* | *0.8426* | *0.7853* | *0.7241* | *0.6570* |
| (2, 1) | 0.8889 | 0.8235 | 0.7500 | 0.6667 | 0.5714 |
| (3, 0) | *0.8987* | *0.8456* | *0.7895* | *0.7288* | *0.6613* |
| (3, 0.5) | *0.8960* | *0.8401* | *0.7804* | *0.7156* | *0.6432* |
| (3, 1) | 0.8889 | 0.8235 | 0.7500 | 0.6667 | 0.5714 |

Fig. 5. A typical selfish mining strategy for a weak attacker in Subchains. The attacker withholds only weak blocks to invalidate honest blocks. In this example, honest block B is invalidated by attacker weak blocks v and w. (Legend: publishing time, honest block, honest weak block, attacker block, attacker weak block.)

*Result 9:* In Subchains, PT = 0 for all parameter combinations. In other words, Subchains is not incentive compatible regardless of how weak the attacker is. We examine the optimal strategies and discover a series of attacks. For example, when the first several units in a block race are attacker weak blocks, the attacker will not publish them regardless of how small α is, as weak blocks receive no reward. These weak blocks are used to invalidate honest blocks, thus increasing the attacker's relative revenue. Consequently, Subchains is never incentive compatible. Subchains always performs worse than NC with γ < 1. The two protocols are equally bad when γ = 1, because in both protocols, every attacker unit can orphan an honest unit without any risk.

*Result 10:* In Subchains, I(α) decreases as Ratio_w2b increases. Unfortunately, a larger Ratio_w2b does not help I(α). This is because more weak blocks give the attacker more windows to orphan honest blocks with attacker weak blocks.

VI. SUBVERSION GAIN ANALYSIS

*A. Modeling Subversion Gain*

Similar to previous works [26], [31], [32], all subversion gain MDPs output the average reward per step, rather than the relative revenue, as the latter value has no practical meaning.

*1) NC and RS:* Our NC subversion gain MDP extends previous works [26], [31], [32] by allowing the merchant to delay delivery if the conflicting transaction is broadcast before the first payment transaction in a block race receives σ confirmations. In order to carry this "early publication" information to reward allocation, we introduce an extra field, matched, in the state representation, which is a binary value encoding whether the earliest attacker block in this block race is published to cause a tie before σ confirmations. When all miners accept some attacker blocks into the blockchain and matched = false, the attacker receives the double-spending reward R_ds in addition to the block rewards, which is defined according to Eqn. (1) in Sect. III-A3. RS's subversion gain MDP follows the same modifications.

*2) Fruitchains:* Fruitchains's subversion gain MDP issues R_ds according to Eqn. (1) when the attacker wins a block race. There is no need to introduce a matched field, as our Fruitchains MDP does not allow publishing part of the attacker chain. Note that k and σ in the equation only count the number of blocks, as fruits do not contribute to the transaction ordering. The outputs are normalized to average reward per confirmation, namely per block, rather than per unit, in line with the other protocols in comparison.

*3) Subchains:* As our Subchains MDP does not encode the public chain's length, we assume the service or goods are delivered when the transaction is confirmed by σ′ blocks, so that σ′ × Ratio_w2b is roughly equivalent to σ in the other protocols. In line with the other protocols' "one unit of block reward per confirmation" rule, each main chain block receives Ratio_w2b reward units. The double-spending reward R_ds is also multiplied by Ratio_w2b to incorporate the transactions embedded in weak blocks and later reverted. A matched field is added to the state encoding, similar to that of NC and RS.
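To make the role of the matched field concrete, the following is a minimal Python sketch of the reward rule it encodes. The function name and the fixed-bounty form of R_ds are our own illustrative assumptions (Eqn. (1) of Sect. III-A3 is not reproduced here); only the matched semantics follow the text above.

```python
def settle_block_race(attacker_blocks_accepted: int,
                      matched: bool,
                      block_reward: float = 1.0,
                      r_ds: float = 3.0) -> float:
    """Attacker's reward when a block race ends (illustrative sketch).

    `matched` is True if the earliest attacker block in this race was
    published to cause a tie before the payment transaction collected
    sigma confirmations; in that case the merchant delays delivery and
    the double-spending bounty r_ds is NOT paid out.
    """
    reward = attacker_blocks_accepted * block_reward  # block rewards
    if attacker_blocks_accepted > 0 and not matched:
        reward += r_ds  # double-spending succeeds only if unmatched
    return reward
```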
*B. Evaluation Results*

*1) Subversion gain:* We display results from one set of parameters and inputs that cover all new insights in Fig. 6. The attacker has the strongest propagation advantage, i.e., γ = 1. We set σ = 6 following Bitcoin's convention. R_ds is set to 3, which is of the same order of magnitude as the block reward, forcing the attacker to balance the two kinds of rewards. The timeout thresholds T_o are set to 7 and 6 in Fruitchains and RS, respectively. Ratio_f2b and Ratio_w2b are set to 1 and 2 in Fruitchains and Subchains. We set the maximum number of blocks in a block race in Subchains to b_max = 12, and l_max = 24 in the three other protocols to ensure a fair comparison.

*Result 11:* The subversion gain S(α, σ, V_ds) of Fruitchains and Subchains is larger than that of NC in our setting, while that of RS is smaller. Fruitchains and Subchains perform worse than NC for most α values. Fruitchains appears to achieve better performance when α = 0.45 due to its MDP's limited action set. Indeed, if we truncate NC's and RS's action sets to the same as Fruitchains's, they outperform Fruitchains when α = 0.45. The reasons for Subchains's and Fruitchains's unsatisfactory performance are similar to those of their I(α). As blocks in Fruitchains and weak blocks in Subchains have no reward, withholding them is risk-free. More audacious block-withholding behaviors result in a higher expected double-spending reward regardless of how small R_ds is. RS achieves better double-spending resistance than NC, and sometimes even achieves the ideal value 0, because the attacker has to balance the potential gain of double-spending against the potential loss in block rewards. When the risk outweighs the benefit, the attacker follows the compliant strategy.

*2) Subversion bounty:* To further evaluate a protocol's double-spending resistance, we define the *subversion bounty* R_sb(α, σ) as the minimum R_ds that causes a rational attacker to deviate from the compliant strategy. We only compute R_sb(α, σ) for NC and RS, as R_sb(α, σ) ≡ 0 in the two other protocols. We choose γ = 0.95 rather than 1, because in the latter case the attacker never follows the compliant strategy in NC, as every attacker block can orphan an honest block without any risk. The results are shown in Fig. 7.

Fig. 7. Subversion bounty R_sb(α, σ) of NC and RS, γ = 0.95.

*Result 12:* Raising σ drastically increases R_sb for weak attackers, but it is less effective against strong attackers. Strong attackers can often find more than one block in a row, allowing them to initiate double-spending for smaller rewards.

*Result 13:* R_sb(α, σ) decreases superlinearly with α. The subversion bounty provides some guidance for merchants to choose the maximum value received in a block and the number of confirmations, based on the estimated attacker ability and the consensus protocol.
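Since R_sb is defined as a minimum over R_ds, it can be located by a simple bisection around an MDP solver. The sketch below is ours, not part of the paper's artifacts: `optimal_gain` is a hypothetical stand-in for the solver, and we assume the subversion gain is normalized so that the compliant strategy yields 0 and that deviation profitability is monotone in R_ds.

```python
def subversion_bounty(alpha: float, sigma: int,
                      optimal_gain,          # hypothetical MDP solver:
                                             # (alpha, sigma, r_ds) -> gain
                      r_hi: float = 100.0,
                      tol: float = 1e-3) -> float:
    """Bisect for the minimum r_ds that makes deviation profitable.

    A rational attacker deviates as soon as the optimal subversion
    gain exceeds the compliant strategy's gain (taken to be 0 here).
    """
    lo, hi = 0.0, r_hi
    if optimal_gain(alpha, sigma, hi) <= 0:
        return float("inf")  # compliant even at the largest bounty tried
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if optimal_gain(alpha, sigma, mid) > 0:
            hi = mid  # deviation already profitable; bounty can be lower
        else:
            lo = mid
    return hi
```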
VII. CENSORSHIP SUSCEPTIBILITY ANALYSIS

*A. Modeling Censorship Susceptibility*

The censorship susceptibility MDPs differ from the incentive compatibility MDPs in their reward calculation. Here, the attacker's reward in a step is calculated as the compliant miners' loss O_c due to the attack. In NC, O_c is defined as the number of orphaned honest blocks. In Fruitchains, the attacker receives all compliant miners' fruit rewards if the attacker wins a block race no shorter than T_o. In RS, the attacker receives one block reward for each honest block rendered invisible and half of a block reward for each visible honest block with a competitor. In Subchains, the attacker receives Ratio_w2b units of reward for each invalidated honest block.

*B. Evaluation Results*

The protocols' C(α) are computed with the following parameters. Three γ values are considered: 0, 0.5 and 1, with the exception that our Fruitchains MDP does not support γ = 0.5. We set b_max in Subchains to 20 and l_max = 40 in the three other protocols to ensure a fair comparison. We truncate a field representing the attacker's own fruits in the Fruitchains MDP to enable larger values of l_max, as these fruits do not contribute to the censorship attack. The other parameters are the same as in our subversion gain evaluation. The results are shown in Fig. 8.

Fig. 8. Censorship susceptibility C(α) of four protocols, l_max = 40. We put γ = 0.5 and γ = 1 in the same chart to save space. A larger number indicates worse performance.

*Result 14:* Subchains's C(α) performs worse than NC's, whereas Fruitchains performs better. RS's C(α) is worse than NC's when γ = 0, but better when γ ≥ 0.5. Subchains performs worse than NC for all parameter sets with α < 0.45 and γ < 1. When γ = 1, its performance is almost identical to that of NC. The reason for Subchains's poor performance in C(α) is similar to that of its I(α). RS performs worse than NC when γ = 0 because in NC, the attacker cannot orphan an honest block with just one attacker block in a block race, whereas in RS, the attacker block can "loot" half of a block reward from its honest competitor. Fruitchains performs the best for all α ≤ 0.3 because in Fruitchains, the attacker cannot invalidate any honest fruit without winning a block race of length T_o, which is difficult for weak attackers. An interesting fact is that when α ≥ 0.4, RS's C(α) outperforms that of Fruitchains, due to their different gap definitions. In Fruitchains, winning a block race with at least T_o blocks invalidates all honest fruits mined in the current block race, as their gaps are calculated from their pointer blocks, which are either "outdated" (mined before the current block race) or invalidated (not in the main chain). On the other hand, RS's gap is calculated from an uncle's own height; therefore, when the attacker wins a long block race, the last several honest blocks may still be referred to in the blockchain as valid uncles, splitting the attacker's rewards.

*Result 15:* Fruitchains's and RS's gap definitions perform better in terms of censorship resistance facing weak and strong attackers, respectively.

VIII. SECURITY TRADE-OFFS IN ATTACK RESISTANCE

*A. Security vs. Performance*

Our results confirm two security-performance trade-offs. First, a longer confirmation delay contributes to better attack resistance, as shown in Results 6 and 8 and in our subversion bounty analysis. Second, higher bandwidth consumption, if properly utilized, strengthens the system by reducing the attacker's "lucky" space of gambling, as shown in Result 7. Moreover, our model quantifies the influence of each parameter on the protocols' attack resistance, allowing practitioners to choose these parameters according to their use cases.

*B. "Rewarding the Bad" vs. "Punishing the Good"*

None of the protocols we have studied successfully defends against all three attacks. Their weaknesses are not protocol-specific, but inherent to their technical approaches. Reward-all protocols improve censorship resistance by increasing the difficulty of invalidating other miners' rewards, at the price of removing the risk of forking the blockchain, thus encouraging double-spending attacks. Punishment protocols improve selfish mining and double-spending resistance by discouraging malicious behaviors, at the price of lowering the attacker's difficulty to damage the compliant miners' income, thus facilitating censorship.
Reward-lucky protocols, contrary to their designers' intention, allow the attacker to invalidate the compliant miners' "lucky" blocks with the attacker's "unlucky" units in a risk-free manner, leaving them more vulnerable to all three attacks. We conclude that none of the three approaches can improve the security of PoW against the three major attacks; they only offer different trade-offs in resistance. In other words, to date, no protocol achieves better resistance than NC in defending against all three attacks.

We further summarize these weaknesses into a dilemma between "rewarding the bad" and "punishing the good", which is rooted in the information asymmetry we identified in Sect. IV-D. Recall that due to this asymmetry, when the blockchain is forked, the protocol is unable to distinguish whether a contentious unit, be it a block, fruit or weak block, is a product of compliant or malicious behavior. As a result, if all contentious units are rewarded or punished equally, either "the bad" are rewarded, as in reward-all protocols, or "the good" are punished, as in punishment protocols. Selectively rewarding some contentious units without solving the information asymmetry, as in reward-lucky protocols, usually increases the vulnerability to malicious manipulation, allowing both undesirable consequences to happen. This dilemma reveals that it is difficult, if not impossible, to defend against all three attacks with just a novel reward distribution policy.

IX. DISCUSSION

*A. Future Directions for PoW Protocol Designs*

First, we highlight an empirical lesson summarized from our findings: complexity is the enemy of security. As demonstrated by our results, despite the simplicity of NC, to date there is no protocol that surpasses NC in all our security metrics when the attacker has no network propagation advantage. The seemingly more sophisticated later designs, contrary to their own claims, not only invite new attack strategies, but also complicate the analysis. In fact, some protocols are so complicated that their vulnerabilities could only be revealed through our MDP modeling. As we have identified the cruxes of existing designs' unsatisfactory chain quality and attack resistance as, respectively, their unrealistic and inconsistent security assumptions and the dilemma between "rewarding the bad" and "punishing the good", we present our suggestions for more secure PoW designs in the following two directions.

*1) Introducing and realizing practical assumptions to raise the chain quality:* Such assumptions may include:

• *Awareness of network conditions.* Knowledge of whether the network is partitioned and of the slowest block propagation time allows the participants to identify block-withholding behaviors with a higher level of confidence. This information helps distinguish between honest and attacker blocks, and thus contributes to raising the chain quality. In the real world, well-established techniques from distributed databases can help to detect network partitions. The block propagation delay can be estimated from various measurement data, such as the current orphan rate [55], which are locally available or accessible from multiple online sources [56].

• *A loosely synchronized clock.* With a loosely synchronized clock, participants can use the gap between a block's receiving time and its timestamp as an indicator of malicious behaviors (a minimal sketch of such a check follows this list). This indicator could help to further raise the chain quality in combination with the previous assumption. Note that the assumption of a roughly accurate clock is necessary for all PoS protocols and is inherent to NC, as Bitcoin adjusts the block difficulty and the block reward according to the block timestamps reported by the miners.

• *Responsible parties with large deposits or public real-world identities.* The absence of legislation in permissionless blockchains is not in favor of security. This situation can be mitigated by demanding a large deposit before performing certain actions, in order to increase the amount of penalty, or by limiting these actions to parties with publicly verified real-world identities, in order to put their reputation at stake.

Realizing these assumptions requires continuous work from researchers and developers, as they seem to be necessary preconditions for improving the chain quality.
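As a concrete illustration of the timestamp indicator suggested in the clock bullet above, the following sketch flags blocks whose receiving time lags their reported timestamp by more than the estimated propagation delay plus clock skew. The function and all threshold values are hypothetical; the paper only proposes the indicator, not an implementation.

```python
def looks_withheld(timestamp: float, receive_time: float,
                   max_propagation_delay: float = 30.0,  # seconds, assumed
                   max_clock_skew: float = 10.0) -> bool:
    """Heuristic withholding indicator under a loosely synchronized clock.

    A compliant miner publishes immediately, so its block should arrive
    within the slowest propagation time of its (roughly accurate)
    timestamp. A much larger gap suggests the block was mined earlier
    and withheld.
    """
    gap = receive_time - timestamp
    return gap > max_propagation_delay + max_clock_skew
```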
*2) Outsourcing liability to raise attack resistance:*

• *Introducing additional punishment rules.* The unfair rewards that go to the malicious miners can be balanced with additional punishment. This approach demands that cryptographic proofs of the malicious behaviors are embedded in the blockchain. For example, accountable assertions can be used to deter double-spending [57]. Designing such proofs for censorship attacks is an interesting research direction.

• *Relying on "layer 2" protocols to protect against specific attacks.* This approach reduces the consensus protocol's pressure in defending against certain attacks. For example, as Bitcoin's layer 2 solution, the Lightning Network [58] guarantees double-spending resistance for its transactions, requiring the underlying consensus protocol to resist only selfish mining and censorship attacks.

*B. Future Directions for PoW Protocols' Security Analyses*

Three common pitfalls in existing security analyses prevent these vulnerabilities from being discovered in the first place:

• *Limiting the analysis to only one attack strategy.* Our work shows that such an analysis is far from sufficient: protocol-specific rules often inspire new attack strategies, causing more damage than the generic strategy analyzed by the designer. Typical examples include SHTB's "smallest hash first" rule, which inspires a "withhold when the hash is small enough" strategy, and Subchains's "weak block counts in chain length" rule, which inspires a "withhold weak blocks to invalidate honest blocks" strategy. In particular, given the recent advancement of artificial intelligence, we can expect future attackers to be equipped with more sophisticated strategies. Therefore, a solid protocol design calls for a formal, rather than a heuristic, security analysis.

• *Limiting the analysis to just one type of attacker incentive.* The blockchain ecosystem results in complex interactions between attackers and other players: an attacker may focus on short-term rewards, as in double-spending attacks, or risk short-term rewards for higher future returns, as in selfish mining, or even sacrifice all rewards to cause damage to other players, as in censorship attacks. This complexity, together with the multifunctional nature of blockchains, demands that the security evaluation be more comprehensive in terms of attacker incentive. Nevertheless, existing analyses typically focus on short-term reward seekers, leaving the protocol vulnerable to attackers with the two other incentives. The problem is more prominent for permissionless designs, where transactions are processed by anonymous parties, who abide by the protocol only out of their will and interests as defined by themselves.
The lack of outside-the-blockchain negative consequences, especially legislative ones, opens the door for various attacker incentives, which need to be taken into account.

• *Proving the system's security within an unrealistic parameter range.* Even if the security proofs give solid results, it is unclear whether the system is secure in a more realistic parameter range. For example, we reveal that Fruitchains is susceptible to selfish mining and double-spending attacks if the confirmation delay is shortened to more reasonable values. Therefore, we argue that future security analyses should take real-world parameters as their point of departure, in order to provide more objective and meaningful results.

As demonstrated in this research, analyzing protocol security with artificial intelligence techniques has a three-fold advantage. First, it simplifies the analysis with well-established algorithms, which enables us to analyze protocols more complicated than NC. Second, it allows accurate evaluation of the parameter choices. Third, these techniques can compute the attacker's optimal strategies, allowing designers to gain direct insights and iteratively improve their designs. Note that, although vulnerability identification is simplified, it is more difficult to prove with these techniques that a protocol resists an attack: security cannot be claimed without proving that the strategy space used to compute the utility covers all rational strategies.

X. RELATED WORK

Most research analyzing PoW protocol security focuses on NC [3], [51], [52], [59]–[62]. To the best of our knowledge, this paper presents the first cross-protocol, multi-metric blockchain security evaluation. Modeling a consensus protocol as a Markov process allows researchers to quantify the attacker's optimal utility with well-studied algorithms. Specifically, Gervais et al. study the selfish mining and double-spending resistance of NC with different parameters [31]. Zhang and Preneel evaluate the security of Bitcoin Unlimited, a Bitcoin scaling proposal [26]. Kiffer et al. [63] analyze Chainweb's and GHOST's *consistency*, namely whether all compliant parties share the same ledger, regardless of whether the ledger is biased by an attacker.

XI. CONCLUSION

Since the introduction of Bitcoin, new PoW designs have emerged on a daily basis from both industry and academia. However, technological advancement cannot be measured simply by the number of protocols, but only by convincing improvements in performance or security. Unfortunately, the security of most of these alternative protocols remains self-claimed, and many of them seem to share similar vulnerabilities. To address this situation, this paper systematically analyzes the security of seven of the most representative and influential alternative designs. Our results show that none of these designs outperforms NC in terms of either chain quality or attack resistance in all scenarios. We identify the roots of their unsatisfactory performance as PoW protocols' unrealistic assumptions and the information asymmetry between the compliant miners and the attacker. Moreover, we discover a considerable number of protocol-specific attacks and quantify two security-performance trade-offs with finer granularity. These results allow us to pinpoint some promising directions towards more secure PoW protocol designs and more solid security analysis.

ACKNOWLEDGEMENTS

This work was supported in part by Blockstream, the Flemish government imec ICON BoSS project, and the Research Council KU Leuven: C16/15/058.
We would like to thank Yonatan Sompolinsky, Andrew Miller, Kaiyu Shao, Pieter Wuille, Gregory Maxwell, Adam Back and the anonymous reviewers for their valuable comments and suggestions.

REFERENCES

[1] mapofcoins. (2018) Map of coins: BTC map. [Online]. Available: http://mapofcoins.com/bitcoin
[2] S. Nakamoto. (2008) Bitcoin: A peer-to-peer electronic cash system. [Online]. Available: http://www.bitcoin.org/bitcoin.pdf
[3] J. A. Garay, A. Kiayias, and N. Leonardos, "The Bitcoin backbone protocol: Analysis and applications," in EUROCRYPT, 2015, pp. 281–310.
[4] A. Sapirshtein, Y. Sompolinsky, and A. Zohar, "Optimal selfish mining strategies in Bitcoin," in Financial Cryptography and Data Security, 2016, pp. 515–532.
[5] K. Nayak, S. Kumar, A. Miller, and E. Shi, "Stubborn mining: Generalizing selfish mining and combining with an eclipse attack," in IEEE European Symposium on Security and Privacy (EuroS&P). IEEE, 2016, pp. 305–320.
[6] I. Eyal and E. G. Sirer, "Majority is not enough: Bitcoin mining is vulnerable," in Financial Cryptography and Data Security. Springer, 2014, pp. 436–454.
[7] L. Bahack, "Theoretical Bitcoin attacks with less than half of the computational power (draft)," arXiv preprint arXiv:1312.7013, 2013.
[8] Ethereum white paper: Modified Ghost implementation. [Online]. Available: https://github.com/ethereum/wiki/wiki/White-Paper#modified-ghost-implementation
[9] E. Heilman, "One weird trick to stop selfish miners: Fresh Bitcoins, a solution for the honest miner." Cryptology ePrint Archive, Report 2014/007, 2014, https://eprint.iacr.org/2014/007.
[10] Y. Lewenberg, Y. Sompolinsky, and A. Zohar, "Inclusive block chain protocols," in Financial Cryptography and Data Security, 2015, pp. 528–547.
[11] P. R. Rizun, "Subchains: A technique to scale Bitcoin and improve the user experience," Ledger, 2016. [Online]. Available: https://www.ledgerjournal.org/ojs/index.php/ledger/article/view/40
[12] S. D. Lerner. (2015) DECOR+HOP: A scalable blockchain protocol. [Online]. Available: https://scalingbitcoin.org/papers/DECOR-HOP.pdf
[13] R. Zhang and B. Preneel, "Publish or Perish: A backward-compatible defense against selfish mining in Bitcoin," in CT-RSA 2017: The Cryptographers' Track at the RSA Conference, 2017, pp. 277–292.
[14] Y. Sompolinsky, Y. Lewenberg, and A. Zohar. (2016) SPECTRE: Serialization of proof-of-work events: Confirming transactions via recursive elections. [Online]. Available: https://eprint.iacr.org/2016/1159.pdf
[15] I. Bentov, P. Hubáček, T. Moran, and A. Nadler, "Tortoise and Hares consensus: the Meshcash framework for incentive-compatible, scalable cryptocurrencies," IACR Cryptology ePrint Archive, 2017.
[16] Y. Sompolinsky and A. Zohar, "PHANTOM: A scalable blockdag protocol," IACR Cryptology ePrint Archive, 2018. [Online]. Available: https://eprint.iacr.org/2018/104.pdf
[17] S. D. Lerner. (2015) RSK white paper overview. [Online]. Available: https://zh.scribd.com/document/371006520/RSK-White-Paper-Overview
[18] E. K. Kogias, P. Jovanovic, N. Gailly, I. Khoffi, L. Gasser, and B. Ford, "Enhancing Bitcoin security and performance with strong consistency via collective signing," in Proc. 25th conference on USENIX Security Symposium, 2016.
[19] E. Kokoris-Kogias, P. Jovanovic, L. Gasser, N. Gailly, E. Syta, and B. Ford, "OmniLedger: A secure, scale-out, decentralized ledger via sharding," Proc. 38th IEEE Symposium on Security and Privacy, 2018.
[20] R. Pass and E. Shi, "Fruitchains: A fair blockchain," in Proceedings of the ACM Symposium on Principles of Distributed Computing, ser. PODC '17. ACM, 2017, pp. 315–324. [Online]. Available: http://doi.acm.org/10.1145/3087801.3087809
[21] P. Camacho and S. D. Lerner. (2016) DECOR+LAMI: A scalable blockchain protocol. [Online]. Available: https://scalingbitcoin.org/papers/DECOR-LAMI.pdf
[22] G. Bissias and B. N. Levine, "Bobtail: A proof-of-work target that minimizes blockchain mining variance (draft)," CoRR, vol. abs/1709.08750, 2017. [Online]. Available: http://arxiv.org/abs/1709.08750
[23] W. Martino, M. Quaintance, and S. Popejoy. (2018) Chainweb: A proof-of-work parallel-chain architecture for massive throughput. [Online]. Available: http://kadena.io/docs/chainweb-v15.pdf
[24] C. Natoli and V. Gramoli, "The balance attack against proof-of-work blockchains: The R3 testbed as an example," 2016.
[25] A. Kiayias and G. Panagiotakos, "On trees, chains and fast transactions in the blockchain." IACR Cryptology ePrint Archive, vol. 2016, p. 545, 2016.
[26] R. Zhang and B. Preneel, "On the necessity of a prescribed block validity consensus: Analyzing Bitcoin Unlimited mining protocol," in Proceedings of the 13th International Conference on emerging Networking EXperiments and Technologies. ACM, 2017, pp. 108–119.
[27] H. Nguyen. (2018) Proof-of-stake & the wrong engineering mindset. [Online]. Available: https://medium.com/@hugonguyen/proof-of-stake-the-wrong-engineering-mindset-15e641ab65a2
[28] S. Bano, A. Sonnino, M. Al-Bassam, S. Azouvi, P. McCorry, S. Meiklejohn, and G. Danezis, "Consensus in the age of blockchains," CoRR, vol. abs/1711.03936, 2017. [Online]. Available: http://arxiv.org/abs/1711.03936
[29] J. Brown-Cohen, A. Narayanan, C.-A. Psomas, and S. M. Weinberg, "Formal barriers to longest-chain proof-of-stake protocols," arXiv preprint arXiv:1809.06528, 2018.
[30] A. Miller. (2013) Feather-forks: enforcing a blacklist with sub-50% hash power. [Online]. Available: https://bitcointalk.org/index.php?topic=312668.0
[31] A. Gervais, G. O. Karame, K. Wüst, V. Glykantzis, H. Ritzdorf, and S. Capkun, "On the security and performance of proof of work blockchains," in Proc. the 2016 ACM SIGSAC Conference on Computer and Communications Security, ser. CCS '16. ACM, 2016, pp. 3–16. [Online]. Available: http://doi.acm.org/10.1145/2976749.2978341
[32] Y. Sompolinsky and A. Zohar, "Bitcoin's security model revisited," arXiv preprint arXiv:1605.09193, 2016.
[33] ——, "Secure high-rate transaction processing in Bitcoin," in Financial Cryptography and Data Security, 2015, pp. 507–527.
[34] J. Bonneau, A. Miller, J. Clark, A. Narayanan, J. A. Kroll, and E. W. Felten, "SoK: Research perspectives and challenges for Bitcoin and cryptocurrencies," in IEEE Symposium on Security and Privacy (S&P). IEEE, 2015, pp. 104–121.
[35] M. Carlsten, H. Kalodner, S. M. Weinberg, and A. Narayanan, "On the instability of Bitcoin without the block reward," in Proc. 2016 ACM SIGSAC Conference on Computer and Communications Security, ser. CCS '16. ACM, 2016, pp. 154–167. [Online]. Available: http://doi.acm.org/10.1145/2976749.2978408
[36] A. Narayanan, J. Bonneau, E. Felten, A. Miller, and S. Goldfeder, Bitcoin and Cryptocurrency Technologies. Princeton University Press, 2016.
[37] Decred Developers. (2018) Decred - autonomous digital currency. [Online]. Available: https://www.decred.org/
[38] J. Bonneau, "Why buy when you can rent? Bribery attacks on Bitcoin-style consensus," in BITCOIN workshop, Financial Cryptography and Data Security. Springer, 2016.
[39] D. Meshkov, A. Chepurnoy, and M. Jansen, "Revisiting difficulty control for blockchain systems," in DPM/CBT@ESORICS 2017, 2017. [Online]. Available: https://eprint.iacr.org/2017/731.pdf
[40] I. Eyal, "The miner's dilemma," in IEEE Symposium on Security and Privacy (S&P). IEEE, 2015, pp. 89–103.
[41] Y. Kwon, D. Kim, Y. Son, E. Vasserman, and Y. Kim, "Be selfish and avoid dilemmas: Fork after withholding (FAW) attacks on Bitcoin," in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2017, pp. 195–209.
[42] I. Tsabary and I. Eyal, "The gap game," in Proceedings of the 11th ACM International Systems and Storage Conference. ACM, 2018.
[43] R. Pass and E. Shi, "Thunderella: Blockchains with optimistic instant confirmation," in Advances in Cryptology – EUROCRYPT 2018. Springer International Publishing, 2018, pp. 3–33.
[44] I. Bentov, C. Lee, A. Mizrahi, and M. Rosenfeld, "Proof of activity: Extending Bitcoin's proof of work via proof of stake [extended abstract]," ACM SIGMETRICS Performance Evaluation Review, vol. 42, no. 3, pp. 34–37, 2014.
[45] T. Duong, A. Chepurnoy, L. Fan, and H.-S. Zhou, "TwinsCoin: A cryptocurrency via proof-of-work and proof-of-stake," in Proceedings of the 2nd ACM Workshop on Blockchains, Cryptocurrencies, and Contracts, ser. BCC '18. ACM, 2018, pp. 1–13. [Online]. Available: http://doi.acm.org/10.1145/3205230.3205233
[46] V. Buterin. (2014) Ethereum: A next-generation smart contract and decentralized application platform. [Online]. Available: https://github.com/ethereum/wiki/wiki/White-Paper
[47] I. Eyal, A. E. Gencer, E. G. Sirer, and R. V. Renesse, "Bitcoin-NG: A scalable blockchain protocol," in 13th USENIX Symposium on Networked Systems Design and Implementation (NSDI 16). Santa Clara, CA: USENIX Association, 2016, pp. 45–59. [Online]. Available: https://www.usenix.org/conference/nsdi16/technical-sessions/presentation/eyal
[48] (2018) Waves platform. [Online]. Available: https://wavesplatform.com/
[49] (2018) Aeternity blockchain. [Online]. Available: https://aeternity.com/
[50] A. Back, M. Corallo, L. Dashjr, M. Friedenbach, G. Maxwell, A. Miller, A. Poelstra, J. Timón, and P. Wuille, "Enabling blockchain innovations with pegged sidechains," URL: http://www.opensciencereview.com/papers/123/enablingblockchain-innovations-with-pegged-sidechains, 2014.
[51] R. Pass, L. Seeman, and A. Shelat, "Analysis of the blockchain protocol in asynchronous networks," in Annual International Conference on the Theory and Applications of Cryptographic Techniques. Springer, 2017, pp. 643–673.
[52] A. Kiayias, E. Koutsoupias, M. Kyropoulou, and Y. Tselekounis, "Blockchain mining games," in Proceedings of the 2016 ACM Conference on Economics and Computation. ACM, 2016, pp. 365–382.
[53] K. Liao and J. Katz, "Incentivizing blockchain forks via whale transactions," in Financial Cryptography and Data Security. Springer International Publishing, 2017, pp. 264–279.
[54] Wolfram Research, Inc. (2018) Newton-Pepys problem. [Online]. Available: http://mathworld.wolfram.com/Newton-PepysProblem.html
[55] C. Decker and R. Wattenhofer, "Information propagation in the Bitcoin network," in 13th IEEE International Conference on Peer-to-Peer Computing (P2P), 2013.
[56] Bitcoin stats - data propagation. [Online]. Available: http://bitcoinstats.com/network/propagation/
[57] T. Ruffing, A. Kate, and D. Schröder, "Liar, liar, coins on fire!: Penalizing equivocation by loss of Bitcoins," in Proceedings of the 22nd ACM SIGSAC Conference on Computer and Communications Security, ser. CCS '15. New York, NY, USA: ACM, 2015, pp. 219–230. [Online]. Available: http://doi.acm.org/10.1145/2810103.2813686
[58] J. Poon and T. Dryja, "The Bitcoin lightning network: Scalable off-chain instant payments," 2016.
[59] J. Garay, A. Kiayias, and N. Leonardos, "The Bitcoin backbone protocol with chains of variable difficulty," in Annual International Cryptology Conference. Springer, 2017, pp. 291–323.
[60] J. A. Garay, A. Kiayias, and G. Panagiotakos, "Proofs of work for blockchain protocols," Cryptology ePrint Archive, Report 2017/775, Tech. Rep., 2017.
[61] P. Wei, Q. Yuan, and Y. Zheng, "Security of the blockchain against long delay attack," in Advances in Cryptology – ASIACRYPT 2018, 2018.
[62] T. Duong, A. Chepurnoy, and H.-S. Zhou, "Multi-mode cryptocurrency systems," in Proceedings of the 2nd ACM Workshop on Blockchains, Cryptocurrencies, and Contracts. ACM, 2018, pp. 35–46.
[63] L. Kiffer, R. Rajaraman et al., "A better method to analyze blockchain consistency," in Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security. ACM, 2018, pp. 729–744.

APPENDIX A: SHTB MDP DESIGN

*A. Properties of Deterministic Tie-Breaking Protocols*

We can simplify the state representation in these protocols by omitting two kinds of information that do not affect the miners' choices of parent blocks. First, we do not need to encode the mining history, as "latecomer" blocks can still win a tie. Second, we do not need to explicitly encode how many attacker chain blocks are published, as this can be deduced from the public chain length l_c. As compliant miners always work on the same chain in deterministic tie-breaking protocols, if the attacker publishes enough blocks so that the compliant miners switch to the attacker chain, the public chain is abandoned and l_c is updated to zero; otherwise, as long as l_c > 0, we can safely assume the compliant miners are working on the public chain, thus different numbers of published attacker chain blocks make no difference to any miner. This analysis also shows that compliant miners always work on the public chain in deterministic tie-breaking protocols.

*B. State Space*

We use l_a and l_c to denote the lengths of the attacker chain and the public chain, respectively, excluding their common blocks. The hash region of the public chain tip is denoted as Hash_c. If we normalize the space of valid block hashes to [0, 1) and split it into 10 regions, Hash_c = h means the hash resides in [0.1h, 0.1(h + 1)), where h is the region number, an integer ranging from 0 to 9. Hash_a^1 and Hash_a^2 represent the hash regions of the last and the second-last attacker chain blocks, respectively. When l_a ≥ l_c > 0, tie denotes whether the public chain tip is smaller than its attacker chain competitor. It has two possible values: aWin, meaning the attacker chain competitor is smaller, and aLose, meaning the public chain tip is smaller. The state representation differs according to the length difference of the chains. (1) When l_a < l_c, a state is represented as a 3-tuple (l_a, l_c, Hash_c).
As the public chain is longer, the compliant miners will not mine on the attacker chain, thus there is no need to encode Hash_a^1 and Hash_a^2. Hash_c is encoded in case the attacker catches up from behind. (2) When l_a = l_c, a state is a 3-tuple (l_a, l_c, tie). When tie = aWin, the attacker can orphan the public chain by publishing the entire attacker chain. (3) When l_a = l_c + 1, a state is a 4-tuple (l_a, l_c, tie, Hash_a^1). When tie = aWin, the attacker can orphan the public chain by publishing until the l_c-th attacker block; otherwise the attacker needs to publish the entire chain to win the race. When l_c = 0, tie is undefined, denoted as ∅. (4) When l_a > l_c + 1, a state is a 4-tuple (l_a, l_c, Hash_a^1, Hash_a^2). Instead of encoding the hash regions of all attacker blocks in the leading part, we only encode the last two. The attacker is not allowed to orphan the public chain by winning a tie when more than one block ahead, which favors the compliant miners.

*C. Actions*

The attacker can choose from four actions:

*Adopt.* Give up the attacker chain and mine on the public chain. This action is always available.

*OverrideWithTie.* Publish until the l_c-th attacker block to orphan the public chain, and keep mining on the attacker chain after publication. Available when tie = aWin.

*OverrideWithMore.* Publish until the (l_c + 1)-th attacker block to orphan the public chain, and keep mining on the attacker chain. Available when l_a > l_c.

*Wait.* Do not publish anything and keep mining on the attacker chain. Always available.

We do not claim that this action set covers all optimal actions. It is possible that in certain states, the optimal action is to publish more than l_c + 1 blocks, which is not in our action set. This constrained attacker action set favors the compliant miners. An interesting implication of this action set definition is that we can assume the attacker always mines on the attacker chain: *Adopt* can be considered as working on an empty attacker chain. Note that this does not exclude the compliant strategy from the strategy space. The compliant strategy is equivalent to choosing *Adopt* when the last block is honest and choosing *OverrideWithMore* when the last block is the attacker's. This implication applies to all our MDPs.
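To make the availability rules concrete, here is a minimal Python sketch of the action set just described. The function name and string encoding are our own illustrative choices; the availability conditions follow the text.

```python
def available_actions(l_a: int, l_c: int, tie) -> list:
    """Attacker actions available in an SHTB MDP state (sketch).

    `tie` is "aWin", "aLose", or None (undefined, e.g. when l_c = 0;
    note tie is only defined when l_a >= l_c > 0). Adopt and Wait are
    always available; the two Override actions require a winnable tie
    or a strictly longer attacker chain, respectively.
    """
    actions = ["Adopt", "Wait"]
    if tie == "aWin":
        actions.append("OverrideWithTie")    # publish up to the l_c-th block
    if l_a > l_c:
        actions.append("OverrideWithMore")   # publish up to the (l_c+1)-th
    return actions
```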
*D. Reward Allocation and State Transition*

The compliant miners get R_c = l_c only after *Adopt*. The attacker gets R_a = l_c or l_c + 1 after *OverrideWithTie* or *OverrideWithMore*, respectively. After each of these three actions, information regarding blocks that are permanently abandoned or accepted by both miners is cleared in the new temporary state. No reward is allocated after *Wait*.

When a new block is mined, it has equal probability of residing in every hash region. For example, when there are 10 valid hash regions, the probability that the compliant miners find the next block in region 3 is (1 − α)/10. Assuming the new block's hash region is Hash^new, if the new block's chain is longer than its competitor, Hash^new will be encoded in the next state as Hash_a^1 or Hash_c, depending on the miner. Before replacing a non-empty Hash_a^1, the old Hash_a^1 is stored as the new Hash_a^2: Hash_a^{2,new} = Hash_a^1. If l_a = l_c − 1 in the post-publishing temporary state and the new block is mined by the attacker, we have tie = aWin in the new state if Hash^new is smaller than the previous Hash_c, or tie = aLose if Hash^new is equal to or bigger than the previous Hash_c. As an example, if l_a = l_c − 1 and Hash_c = 3 in the post-publishing state, the probability that the next state is (l_a + 1, l_c, tie^new = aWin) is α × 3/10, as the attacker can only win the tie with Hash^new = 0, 1, or 2; the probability that the next state is (l_a + 1, l_c, tie^new = aLose) is α × (10 − 3)/10. The same rule is followed for updating tie when the public chain is catching up from behind the attacker chain.

APPENDIX B: UDTB MDP DESIGN

*A. State Space*

As the probability of winning a tie is fixed to 50%, there is no need to encode the hashes of the latest blocks. Therefore, we can simplify the state representation of the previous MDP as follows. (1) When l_a < l_c or l_c = 0, a state is a 2-tuple (l_a, l_c). (2) When l_a ≥ l_c > 0, a state is a 3-tuple (l_a, l_c, tie).

*B. Actions*

The action set is the same as in the previous MDP. According to the action set completeness proof in Appendix A of [4], this set covers all rational actions. Note that the proof is not applicable to SHTB, as blocks in SHTB are not interchangeable: a block with a smaller hash is more likely to win a tie.

*C. Reward Allocation and State Transition*

The reward allocation mechanism is identical to that of the previous MDP. The state transition rules for updating l_a and l_c are straightforward, hence we only highlight the updating rule for tie here. The new tie differs from the previous one on three occasions. First, when l_a = l_c − 1 in the post-publishing temporary state and the new block is mined by the attacker, the transition probability to the new state (l_a + 1, l_c, aWin) is α/2, and the same to (l_a + 1, l_c, aLose). Second, when l_a > l_c in the post-publishing state and the new block is mined by the compliant miners, the transition probability to the new state (l_a, l_c + 1, aWin) is (1 − α)/2, and the same to (l_a, l_c + 1, aLose). At last, when l_a = l_c in the post-publishing state and the new block is mined by the compliant miners, tie is cleared and the transition probability to the new state (l_a, l_c + 1) is 1 − α. In all other situations, tie remains unchanged.
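The tie-updating rules in both appendices reduce to a one-line probability computation. The following sketch (names are ours) reproduces the worked SHTB example above and the fixed 50% rule of UDTB.

```python
def tie_probs(alpha: float, hash_c=None, regions: int = 10):
    """P(next state has tie = aWin / aLose) when l_a = l_c - 1 in the
    post-publishing state and the attacker mines the next block.

    SHTB: pass the public tip's hash region `hash_c`; the attacker wins
    the future tie iff its new block's hash region is strictly smaller.
    UDTB: pass hash_c=None; each side of the tie is equally likely.
    """
    if hash_c is None:                     # UDTB: fixed 50% tie-breaking
        return alpha / 2, alpha / 2
    p_awin = alpha * hash_c / regions      # Hash_new in {0, ..., hash_c-1}
    p_alose = alpha * (regions - hash_c) / regions
    return p_awin, p_alose

# Worked example from the text: Hash_c = 3 gives (0.3 * alpha, 0.7 * alpha).
```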
APPENDIX C: THE FRUITCHAINS MDP DESIGN

Unlike in previous MDPs, where a block is found at the end of each step, in the Fruitchains MDP each step ends with the discovery of a *unit*, which might be a fruit or a block.

*A. State Space*

Encoding each fruit's pointer block in a state is computationally infeasible due to the potentially large number of fruits. Therefore, we split all fruits into three groups and deal with them separately: (1) attacker fruits mined before the T_o-th attacker block; (2) attacker fruits mined after the T_o-th attacker block; (3) honest fruits. As the attacker knows which block is the consensus block, it is rational that fruits in group (1) point to the consensus block, so that they can be published before expiration and embedded in both chains. As these fruits always receive rewards, we can issue their rewards the moment they are found, and forget them in the next state. Fruits in group (2) gain rewards if and only if the attacker wins the block race, because otherwise the pointer blocks of these fruits are invalidated. Fruits in group (3) lose their rewards when the attacker wins the block race with at least T_o blocks, either because their pointer blocks are invalidated or because their gaps exceed T_o. For all other scenarios, where the attacker either loses or wins with fewer than T_o blocks, we assume all honest fruits receive rewards. This setting favors the compliant miners, as the attacker may still invalidate some honest fruits when winning with fewer than T_o blocks: either the honest fruits expire after the current block race, as their pointers are before the consensus block, or the attacker wins the following block races and eventually obtains T_o consecutive main chain blocks, causing the honest fruits mined in the first block race to expire.

A state is represented as a 4-tuple (l_a, l_c, f_c, isLastHB) when l_a < T_o, or a 5-tuple (l_a, l_c, f_c, f_a^afterTo, isLastHB) when l_a ≥ T_o, where f_c denotes the number of honest fruits and f_a^afterTo denotes the number of attacker fruits mined after the T_o-th attacker block. The Boolean value isLastHB stores whether the last unit is an honest block.

*B. Actions*

The attacker can choose from three actions:

*Adopt.* Give up the attacker chain. Same as in previous MDPs.

*Override.* Publish all fruits and blocks to orphan the public chain. When γ = 0, this action is only available when l_a ≥ l_c + 1; when γ = 1, this action is also available when l_a = l_c and isLastHB = true. Due to the complexity of Fruitchains, we do not consider other γ values.

*Wait.* Keep mining on the attacker chain. Same as in previous MDPs.

This limited set of actions does not allow pre-mining; namely, the attacker cannot publish some blocks and fruits while carrying other secret units to the next block race.

*C. Reward Allocation and State Transition*

A valid fruit receives 1/Ratio_f2b, so that on average one unit of reward is issued per block. The attacker receives one fruit reward for each fruit mined before the T_o-th attacker block. If the attacker chooses *Override* when l_a < T_o, or *Adopt*, the compliant miners receive f_c fruit rewards. If the attacker chooses *Override* when l_a ≥ T_o, the compliant miners receive nothing and the attacker receives f_a^afterTo fruit rewards. All settled fruits and blocks are cleared in the new temporary state.

The new unit found at the end of a step can be an attacker block, an attacker fruit, an honest block or an honest fruit, with probability α/(1 + Ratio_f2b), α · Ratio_f2b/(1 + Ratio_f2b), (1 − α)/(1 + Ratio_f2b) and (1 − α) · Ratio_f2b/(1 + Ratio_f2b), respectively. For example, when α = 1/3 and Ratio_f2b = 2, the probabilities of finding an attacker block, an attacker fruit, an honest block and an honest fruit are 1/9, 2/9, 2/9 and 4/9, respectively. When the latest unit is an honest block, isLastHB = true; otherwise isLastHB = false.
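The four unit probabilities follow directly from the fruit-to-block ratio. The sketch below (names are ours) computes them and checks the worked example above.

```python
def fruitchains_unit_probs(alpha: float, ratio_f2b: float) -> dict:
    """Probability that the next mined unit is each of the four kinds.

    With `ratio_f2b` fruits found per block on average, a fraction
    1/(1+ratio_f2b) of units are blocks and ratio_f2b/(1+ratio_f2b)
    are fruits; the attacker finds any given unit with probability alpha.
    """
    p_block = 1 / (1 + ratio_f2b)
    p_fruit = ratio_f2b / (1 + ratio_f2b)
    return {
        "attacker_block": alpha * p_block,
        "attacker_fruit": alpha * p_fruit,
        "honest_block": (1 - alpha) * p_block,
        "honest_fruit": (1 - alpha) * p_fruit,
    }

# With alpha = 1/3 and ratio_f2b = 2 this yields 1/9, 2/9, 2/9 and 4/9.
assert abs(sum(fruitchains_unit_probs(1 / 3, 2).values()) - 1) < 1e-9
```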
APPENDIX D: REWARD-SPLITTING PROTOCOL MDP DESIGN

It is never optimal for the attacker to hide a block forever, as a late publication still gains at least half of a block reward. Similarly, the attacker blocks never embed honest uncles in the hope that they could be rendered invisible.

*A. State Space*

An honest block of height h becomes invisible when the main chain blocks between height h and h + T_o − 1 are all mined by the attacker. Therefore, our state representation needs to encode previous consecutive block races won by the attacker, up to T_o − 1 height values. We encode this history information as history, a binary string of at most T_o − 1 bits. The length of history represents the number of consecutive attacker main chain blocks. Each bit indicates whether the attacker block has an honest competitor: 0 means no, 1 means yes. The least significant bit represents the blockchain status at the consensus block's height, denoted as h_con, and the second least significant bit represents that of height h_con − 1. Other bits follow similar definitions. A substring from height h1 to h2, where h1 ≤ h2, is denoted as history[h1 : h2]; thus history is equivalent to history[h_con − T_o + 2 : h_con]. When h1 > h2 the substring is empty. We do not need to encode blocks at height h_con − T_o + 1 and lower, as their rewards are settled along with the current consensus block. Neither do we need to encode whether a leading zero is an attacker block without an honest competitor or a block race won by the compliant miners, as in both cases the rewards are settled already, which will be further explained when describing the reward allocation. The number of 1s in the substring is denoted as Σ history[h1 : h2].

A state is represented as a 4-tuple (l_a, l_c, fork, history), where fork has three possible values. If there is an ongoing tie, namely the attacker chain is published until the l_c-th block and this block is published along with the latest honest block, fork = active. Otherwise, if the latest block is mined by the compliant miners, fork = cLast; fork = aLast if the attacker finds the last block.

*B. Actions*

There are T_o + 2 possible optimal actions:

*Adopt.* Give up the attacker chain. Same as in previous MDPs.

*Wait.* Keep mining on the attacker chain. Same as in previous MDPs.

*Match.* Publish until the l_c-th attacker block to cause a tie, then keep mining on the attacker chain. Feasible when l_a ≥ l_c and fork = cLast, namely when the attacker has enough blocks to match the newly-mined honest block.

*Override_k.* Publish until the (l_c + k)-th attacker block to orphan the public chain, then keep mining on the attacker chain, where 1 ≤ k ≤ T_o − 1. Feasible when the attacker has enough blocks.

This action set covers all optimal actions. It is never optimal to publish the (l_c + T_o)-th attacker block, as the attacker can invalidate one more honest block without risking any block reward by deferring this attacker block's publication until the next honest block is mined.

*C. Reward Allocation and State Transition*

An attacker block is certain to receive the full reward if it has no competing honest block when published. Therefore, we issue block rewards to these "no competitor" attacker blocks the moment they are published. Consequently, the rewards of all 0s in history are settled before they enter history.

When choosing *Adopt*, the compliant miners receive l_c − l_a full rewards for honest blocks without a competitor, and (Σ history + l_a)/2 for honest blocks with a competitor. The attacker receives (Σ history + l_a)/2 for the attacker blocks. We assume l_a ≤ l_c here, as otherwise *Override_1* is clearly more profitable than *Adopt*. After *Adopt*, history^new is empty.

When choosing *Override_k*, the attacker receives two kinds of rewards. The first kind are for attacker blocks that have competitors, but whose competitors are pushed out of history after this action. We first append 1^{l_c} || 0^k, a string denoting the current block race, to the end of history, then truncate the resulting string to its T_o − 1 least significant bits. When T_o − 1 ≥ l_c + k, history^new = history[h_con − T_o + 2 + l_c + k : h_con] || 1^{l_c} || 0^k, and the attacker receives Σ history[h_con − T_o + 2 : h_con − T_o + 1 + l_c + k] for all 1s in the discarded history bits. Otherwise, when T_o − 1 < l_c + k, the attacker receives Σ history + l_c + k − (T_o − 1) for all 1s in history and for the first l_c + k − (T_o − 1) attacker blocks in the current block race, as their competitors are invalidated, and history^new = 1^{T_o − 1 − k} || 0^k. The second kind of rewards are for the last k published attacker blocks, as they have no honest competitor.

No reward is allocated after *Wait* if fork ≠ active. After *Wait* (when fork ≠ active), *Adopt* and *Override_k*, there are two possible successor states: either the next block is mined by the attacker on the attacker chain with probability α, or the next block is mined by the compliant miners on the public chain with probability 1 − α. In the former case, fork^new = aLast; in the latter case, fork^new = cLast. Unlike the previous actions, there are three possible successor states after *Wait* if fork = active, or after *Match*. First, the attacker mines a block on the attacker chain with probability α. This is the only transition in the entire MDP where fork^new = active. Second, the compliant miners mine on the public chain with probability (1 − α)(1 − γ), and fork^new = cLast. In the first two cases, no reward is allocated and history^new = history. Third, the compliant miners mine on the attacker chain with probability (1 − α)γ. In this case, history is appended with 1^{l_c} and truncated to at most T_o − 1 bits. The attacker receives rewards for all 1s in the discarded history bits. The new state is (l_a − l_c, 1, cLast, history^new).
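The *Override_k* bookkeeping above is easiest to see as string manipulation. The sketch below is ours, following the settlement rules just described (one full block reward per 1-bit pushed out of the window, plus the k trailing no-competitor blocks), and assumes T_o ≥ 2.

```python
def rs_override(history: str, l_c: int, k: int, t_o: int):
    """History update and settled attacker rewards for Override_k in RS.

    `history` is a bit string of at most t_o - 1 chars; the rightmost
    char corresponds to the consensus height h_con. '1' marks an attacker
    main-chain block that still has a visible honest competitor, '0' one
    whose reward is already settled. Override_k appends the current block
    race -- l_c attacker blocks with competitors ('1' * l_c) followed by
    k without ('0' * k) -- then keeps the t_o - 1 most recent bits. Every
    '1' pushed out of the window is an attacker block whose competitor
    became invisible, earning one full block reward; the k trailing
    blocks settle immediately, as they have no competitor.
    """
    extended = history + "1" * l_c + "0" * k
    discarded = extended[:-(t_o - 1)]
    new_history = extended[-(t_o - 1):]
    attacker_reward = discarded.count("1") + k
    return new_history, attacker_reward

# Example (hypothetical numbers): t_o = 4, history = "11", Override_1
# after a race with l_c = 2 -> rs_override("11", 2, 1, 4) == ("110", 3).
```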
APPENDIX E: SUBCHAINS MDP DESIGN

*A. State Space*

Similar to the Fruitchains MDP, in the Subchains MDP each step ends with the discovery of a unit, either a block or a weak block. Based on our key observation in Sect. V-C, of the two mining sequences, only the leading unit sequence of the attacker chain, i.e., the units whose heights are larger than the public chain tip, needs to be encoded, as the other units are either adopted or abandoned as a whole. Therefore, we introduce two extra fields to facilitate state representation compression. First, lead denotes the attacker chain's leading unit sequence. Each bit in the string indicates whether the unit is a block or a weak block: 0 means a weak block, 1 means a block. The most significant bit represents the oldest unit in the chain, while the least significant bit represents the latest. Second, we encode the length difference between the two chains as diff_u.

The state representation differs according to the length difference of the chains. (1) When diff_u < 0, a state is a 3-tuple (b_a, b_c, diff_u), where b_a and b_c denote the number of blocks in the attacker and the public chain, respectively. (2) When diff_u = 0, a state is a 4-tuple (b_a, b_c, diff_u, fork). Similar to fork in the RS MDP, fork here denotes whether there is an ongoing tie and, if not, the miner of the last unit. There is no need to encode fork in the previous case, as it is infeasible for the attacker to cause a tie. (3) When diff_u > 0, a state is a 5-tuple (b_a, b_c, diff_u, lead, fork). For example, (1, 3, 2, "01", aLast) means: the attacker chain and the public chain have one and three blocks, respectively; the attacker chain is two units longer than the public chain, of which the penultimate unit is a weak block and the last unit is a block mined in the last round.

*B. Actions*

The attacker can choose from four actions: *Adopt*, *Override*, *Match* and *Wait*. *Adopt* and *Wait* are the same as in previous MDPs.

*Match.* Publish until the published attacker chain is of the same length as the public chain to cause a tie, then keep mining on the attacker chain. Feasible when fork = cLast and diff_u = 0, 1, 2, or 3. The requirement on diff_u is because we set the maximum length of lead to three in order to further compress the state space. When diff_u > 3, lead only encodes the last three attacker units.

*Override.* When diff_u = 1, 2 or 3, publish until the published attacker chain is one unit longer than the public chain; when diff_u > 3, publish all attacker units except the last three. This limited action set favors the compliant miners.

*C. Reward Allocation and State Transition*

We issue each block Ratio_w2b units of reward, so that on average each block or weak block receives one unit of reward. As both weak blocks and blocks contribute to transaction confirmation, this "one reward per confirmation" rule is consistent with the reward allocation mechanisms of NC, Fruitchains and RS. The compliant miners get R_c = b_c × Ratio_w2b only after *Adopt*. After *Override*, the attacker gets rewards for all published attacker blocks, which is R_a = (b_a − Σ lead) × Ratio_w2b when diff_u > 3, or when diff_u ≤ 3 and the highest-order bit of lead is 0; or R_a = (b_a − Σ lead + 1) × Ratio_w2b when diff_u ≤ 3 and the highest-order bit of lead is 1. If the next unit is mined by the compliant miners on the attacker chain after *Wait* when fork = active, or after *Match*, the attacker gets R_a = (b_a − Σ lead) × Ratio_w2b. After each of these actions, information regarding blocks and weak blocks that are permanently abandoned or accepted by both miners is cleared in the new temporary state.

No reward is allocated after *Wait* when fork ≠ active. There are four outcome states after *Wait* (when fork ≠ active), *Adopt* or *Override*, depending on the next unit. The new mining product can be an attacker block, an attacker weak block, an honest block or an honest weak block, with probability α/Ratio_w2b, α · (Ratio_w2b − 1)/Ratio_w2b, (1 − α)/Ratio_w2b and (1 − α) · (Ratio_w2b − 1)/Ratio_w2b, respectively. Meanwhile, after *Wait* when fork = active, or after *Match*, the new honest unit might be mined on either chain, resulting in six outcome states. For example, the probability of an honest block mined on the attacker chain is (1 − α)γ/Ratio_w2b.

We now describe how to obtain the new state from the temporary state after publication and the new unit. The rule for updating fork is identical to that of RS. If the next unit is honest, diff_u decreases by one; otherwise it increases by one. If the next unit is a block, b_a or b_c increases by one according to the miner.
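The *Override* reward formula above is a direct function of the lead bit string. The following sketch (function name and comments are ours) implements it verbatim.

```python
def subchains_override_reward(b_a: int, lead: str, diff_u: int,
                              ratio_w2b: int) -> int:
    """Attacker reward for Override in the Subchains MDP.

    `b_a` counts blocks in the attacker chain; `lead` is the bit string
    of the chain's leading units (most significant bit = oldest unit,
    '1' = block, '0' = weak block). Override publishes everything up to
    one unit past the public chain (diff_u <= 3) or all but the last
    three units (diff_u > 3); only published blocks are rewarded, each
    worth ratio_w2b units, so blocks remaining in `lead` are deducted.
    """
    published_blocks = b_a - lead.count("1")   # blocks outside the lead
    if diff_u <= 3 and lead and lead[0] == "1":
        published_blocks += 1   # the oldest leading unit is a block and
                                # is the one unit published past the tip
    return published_blocks * ratio_w2b
```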
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/SP.2019.00086?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/SP.2019.00086, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://ieeexplore.ieee.org/ielx7/8826229/8835208/08835227.pdf" }
2019
[ "JournalArticle" ]
true
2019-04-01T00:00:00
[]
30,985
en
[ { "category": "Business", "source": "external" }, { "category": "Agricultural and Food Sciences", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01b40f5393953927507ab537dcb42428b38eb8e8
[ "Business" ]
0.816047
The Use of Blockchain Technology in Agriculture
01b40f5393953927507ab537dcb42428b38eb8e8
Zeszyty Naukowe Uniwersytetu Ekonomicznego w Krakowie
[ { "authorId": "8373449", "name": "M. Aldag" } ]
{ "alternate_issns": null, "alternate_names": [ "Zesz Nauk Uniw Èkon w Krakowie" ], "alternate_urls": null, "id": "3ffeb5da-e0c9-4840-8b03-81df5f40986e", "issn": "1898-6447", "name": "Zeszyty Naukowe Uniwersytetu Ekonomicznego w Krakowie", "type": null, "url": "https://www.ceeol.com/search/journal-detail?id=97" }
Objective : This paper explores the use of blockchain technology in agriculture and agricultural products. Research Design & Methods : The article is based on a critical analysis of the literature with a view to understanding the current state of use of blockchain technology in agriculture. It was assumed that blockchain technology is used in the agricultural sector to promote food security, prevent food fraud and verify the origin and authenticity of agricultural products and agricultural inputs. Findings : Blockchain technology improves traceability and transparency, allowing parties within the agricultural value chain to identify faulty or suboptimal processes as well as bad actors. This ensures that ideal conditions are pursued from farm to market. The ability to trace the origin of food products is essential when food safety breaks down. The early identification of the origin of contamination will enable food companies to swing into action quickly to prevent illness and thus save lives. Such a timely response will also help limit food wastage and will save money by containing financial fallout. Implications / Recommendations : Blockchain technology has strong potential for success within the agricultural sector. It can be used to ensure food safety by enabling the source of agricultural products, as well as the source of their potential contamination, to be traced, and the authenticity of farming inputs to be verified. Blockchain can also be employed in the process of disbursing subsidies to farmers to ensure that they benefit from subsidy programmes. Finally, blockchain technology will offer farmers better prices and better payment methods and solve challenges in land title sales and purchase registration. Contribution : Blockchain is a new technology to the agricultural sector, and enormous challenges remain. There is still no established system to regulate blockchain transactions. Nevertheless, the application of blockchain in agriculture holds promising rewards.
# The Use of Blockchain Technology in Agriculture

### Mustafa Cem Aldag

Zeszyty Naukowe Uniwersytetu Ekonomicznego w Krakowie, 4 (982), 2019: 7–17. ISSN 1898-6447, e-ISSN 2545-3238. https://doi.org/10.15678/ZNUEK.2019.0982.0401

Mustafa Cem Aldag, Bandırma Onyedi Eylül University, Merkez Yerleşkesi 10200 Bandırma, Balikesir, Turkey, e-mail: maldag@bandirma.edu.tr, ORCID: https://orcid.org/0000-0001-7224-2277. This is an open access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 License (CC BY-NC-ND 4.0); https://creativecommons.org/licenses/by-nc-nd/4.0/

**Abstract**

_Objective: This paper explores the use of blockchain technology in agriculture and agricultural products._

_Research Design & Methods: The article is based on a critical analysis of the literature with a view to understanding the current state of use of blockchain technology in agriculture. It was assumed that blockchain technology is used in the agricultural sector to promote food security, prevent food fraud and verify the origin and authenticity of agricultural products and agricultural inputs._

_Findings: Blockchain technology improves traceability and transparency, allowing parties within the agricultural value chain to identify faulty or suboptimal processes as well as bad actors. This ensures that ideal conditions are pursued from farm to market. The ability to trace the origin of food products is essential when food safety breaks down. The early identification of the origin of contamination will enable food companies to swing into action quickly to prevent illness and thus save lives. Such a timely response will also help limit food wastage and will save money by containing financial fallout._

_Implications / Recommendations: Blockchain technology has strong potential for success within the agricultural sector. It can be used to ensure food safety by enabling the source of agricultural products, as well as the source of their potential contamination, to be traced, and the authenticity of farming inputs to be verified. Blockchain can also be employed in the process of disbursing subsidies to farmers to ensure that they benefit from subsidy programmes. Finally, blockchain technology will offer farmers better prices and better payment methods and solve challenges in land title sales and purchase registration._

_Contribution: Blockchain is a new technology to the agricultural sector, and enormous challenges remain. There is still no established system to regulate blockchain transactions. Nevertheless, the application of blockchain in agriculture holds promising rewards._

**Keywords: blockchain, agriculture, food fraud, food safety.**

**JEL Classification: Q16.**

### 1. Introduction

Blockchain technology is part of Industry 4.0, which encompasses automation and data exchange in production processes. Industry 4.0 integrates the internet of things (IoT), cyber-physical systems, cognitive computing and cloud computing. Blockchain technology is gaining in popularity alongside cryptocurrencies such as Bitcoin. Even though the first use of blockchain was in cryptocurrencies, the technology holds great potential for other types of transactions. This paper explores the use of blockchain technology in agriculture and agricultural products. Blockchain is a modern technology used in business transactions. At root, it consists of structured data holding transactional records, and at the same time it ensures transparency, security and decentralisation.
Satoshi Nakamoto first applied blockchain technology in 2009, creating Bitcoin, a digitised currency that can be traded in place of fiat money. Transactions done with blockchain technology are secured with a digital, encrypted, tamper-proof signature, making them very difficult to change. Blockchain makes financial transactions possible while removing the need for intermediaries such as banks. However, blockchain has been used for other purposes in agriculture, including supporting small-scale farmers and the evolution of ICT E-Agriculture, as well as ensuring food security and safety.

### 2. The Role of Blockchain in Agriculture

Blockchain can be used to ensure food safety within the agricultural supply chain by improving traceability and transparency, allowing parties within the agricultural value chain to identify poor or faulty processes as well as bad actors (Tian 2017). This ensures that the best conditions possible are maintained from the farms up to the market. The ability to trace the origin of food products becomes important when there is a breakdown in or threat to food safety. Employing blockchain, industry regulators can quickly pinpoint the source of the contaminant as well as determine the scope of the affected products (Underwood 2016). The early identification of the origin of contamination will enable food companies to swing into action quickly to prevent illness and thus save lives. Such a timely response will also help limit food wastage and will save money by containing the financial fallout. There is already clear vested interest from both producers and consumers, and companies such as IBM and Walmart have begun work in the area of food safety by employing blockchain technology.

Food security has been defined as the ability of an individual, at all times, to have financial, physical, and social access to safe, sufficient, and nutritious food, meeting their desire for particular quality and their food preferences for a life that is active and healthy. Achieving such goals has been limited by various humanitarian disasters, including environmental calamities as well as ethnic and political conflict. Blockchain technology has gained success where its workability has been affirmed with the use of cryptocurrency; hence, different agricultural organisations are using the technology to harness its transparency (De Fazio 2016). That clarity can help solve the challenges posed by intermediaries that hinder the distribution of resources and financial transactions.

Agriculture and the supply chain are essential areas in terms of both products and the cultivation of the land. Agriculture is connected to the food supply chain, with the end products necessary as inputs in multi-actor supply chain distributions. Along the supply chain, consumers are the end clients (Ge et al. 2017). Blockchain can be used in many sectors. One of them is international aid, where the technology can be used to track donations and make them more secure. Because people purchase goods locally, they are often unaware of their origin or production footprint (Kamilaris, Prenafeta-Boldú & Fonts 2018). Due to this lack of awareness, when issues related to the buying and supplying of food erupt, blockchain technology can offer a solution, solving real-life problems that crop up in the agricultural supply chain. When a product is traceable, both retailers and consumers will trust it more.
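As a rough illustration of the tamper-evident record keeping just described (our sketch, not an implementation from any cited project; the record fields are made up), each custody transfer can be hashed together with the previous entry's hash, so altering any earlier record invalidates every later one:

```python
import hashlib
import json

def record_step(prev_hash: str, step: dict) -> dict:
    """Append one supply-chain event to a hash chain."""
    payload = json.dumps({"prev": prev_hash, "step": step}, sort_keys=True)
    return {"prev": prev_hash, "step": step,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

def verify(chain: list) -> bool:
    """Recompute every link; any tampering breaks the chain."""
    for i, entry in enumerate(chain):
        payload = json.dumps({"prev": entry["prev"], "step": entry["step"]},
                             sort_keys=True)
        if hashlib.sha256(payload.encode()).hexdigest() != entry["hash"]:
            return False
        if i > 0 and entry["prev"] != chain[i - 1]["hash"]:
            return False
    return True

# A product's journey from farm to shop, one entry per custody transfer.
chain = []
prev = "0" * 64  # genesis value
for step in [{"actor": "farm", "event": "harvest"},
             {"actor": "transporter", "event": "pickup"},
             {"actor": "retailer", "event": "received"}]:
    entry = record_step(prev, step)
    chain.append(entry)
    prev = entry["hash"]

assert verify(chain)
```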
If the entire supply chain for agricultural products is embedded in a blockchain-driven ecosystem, from product registration and payment to transport and delivery, then retailers can verify that the product they are receiving is what they paid for. Since every step of the transaction process is recorded in the blockchain, any claim by a supplier about the origins of his products can be confirmed by tracing the journey of the product from the farmer up to the point it was received at the shop, thus alleviating concerns of misrepresentation. A transparently distributed ledger will increase consumer confidence in the origins of their food as well as in the efficiency of its production (Lemieux 2016). In monitoring their food chain, consumers will be better informed of the origin of their food, the dates of its manufacture and the efficiency with which products are created. Startups such as Provenance are already using blockchain to provide concrete proof of the origin of their food supplies. The startup uses blockchain to secure and keep track of its food supply chains and makes such information public, thus ensuring the process is inclusive of all partners in the supply chain (Kim & Laskowski 2018). Provenance uses the ledger to comprehensively document ingredients, supply chain materials, and products, thus giving its customers greater transparency about the authenticity and origin of its products. The startup provides buyers with a fully transparent record in the format of a real-time data platform. This allows the buyers to see each step in the product's journey: the current location of the product, the current owner, and the period the product was with a particular person.

### 3. Findings

Despite the limitations blockchain technology is still experiencing, it has, within the broader transformation of information and communication technology (ICT), changed the trust people must place in third parties in financial transactions. Intermediary parties such as banks are no longer required when transferring money thanks to the blockchain technology in place. Similarly, blockchain is being used to develop greater efficiency in agriculture, as ICT has enabled access to banking knowledge and digital resources. Blockchain technology has provided sufficient infrastructure in e-agriculture, for which assessing ICT's potential and formulating priorities and aims is a crucial first step (Yu-Pin Lin et al. 2017). For several decades, initiatives for monitoring agricultural environments have embraced a wide range of ICT. This includes technologies for long-distance monitoring of farmland conditions and for managing equipment with smartphone applications. Agricultural systems that help in monitoring environments support both timely deterrent systems and the baseline measurement of data that can be used by managers in planning (Prasannan, Varghese & Smita 2019). At the same time, the availability of blockchain, environmental, and agricultural data, monitored and kept in a distributed cloud, creates a space for trust, thus securing sustainable agricultural development using ICT and free, transparent data. Blockchain technology, as it is interlinked with crypto-economic security, ensures that all data recorded at the national level adheres to international agricultural standards and naming conventions, and remains unreachable to malicious attackers (Yu-Pin Lin et al. 2017).
Indeed, agrarian networks making use of blockchain are decentralized and immutable systems with groundbreaking control. This immutability can revolutionise how all biophysically documented resources, captured from the sources used and reused, are maintained in a wide-ranging data set.

The traceability and accompanying transparency offered by blockchain models play a crucial role in preventing food fraud, which occurs mostly through false labeling. As the demand for antibiotic-free, organic, and non-GMO food soars, fraudulent labeling is becoming common. However, blockchain technology and the internet of things can be used to efficiently monitor the entire supply chain. Even the smallest transactions occurring at the warehouse, farm, or factory can be monitored by IoT technologies such as RFID tags and sensors, with the information then communicated across the supply chain (Tian 2016). Blockchain will thus save giant shipping companies millions by ensuring efficiency and reducing the incidence of fraud anywhere in the hundreds of interactions involved in supply chains.

Blockchain technology reduces transaction costs and leads to fair pricing. It also enables commodity buyers to deal directly with their suppliers and make payments through mobile transfer. Buyers and suppliers will thus find it easier to negotiate fair prices for their agricultural products. The farmer will receive a reasonable amount for their agricultural produce, and the retailer will equally pay a fair price for the agricultural products supplied. The retailer saves money because the technology eliminates agents and middlemen. Blockchain technology ultimately allows the farmers and producers to justify the premiums they set for certain agricultural products (Ge et al. 2017). Blockchain technology will also help reduce the transaction costs caused by the heavily fragmented market for farm products. The trade in agricultural goods depends heavily on personally knowing a party along the supply chain before one can trust them to do business. The trust and accountability created by a ledger that is available to all parties can reduce or even eliminate the need to evaluate each party individually on their trustworthiness and their ability to execute a deal. Those who deal in agricultural goods can, therefore, do business without the need to broker trust.

Food safety encompasses how food is prepared, handled and stored for consumption, and is key to consumers not being made ill (Ray et al. 2019). To avoid that, digitisation should provide information that is trustworthy and reliable as concerns the source of a food product. At the same time, traceability can enhance food safety, with the appropriate department able to step in to ascertain the cause of challenges facing food production (Yu-Pin Lin et al. 2017). Using blockchain, food organisations can locate outbreaks by tracing particular sources, and thereby reduce food theft. Blockchain technology may be used to track goods moving from one destination to another down the supply chain and overseas (Allen et al. 2019). Many food organisations are embracing blockchain to enhance food integrity and safety. In 2006, Oceana carried out research on deceitful practices in the seafood industry and concluded that twenty percent of seafood is mislabeled. Blockchain technology can be used to trace the origins of the products in such cases.
This is done with the help of an application in a decentralized cloud (Fernández-Caramés & Fraga-Lamas 2018). Additionally, other researchers have noted that food supply chains earn little trust, while quality and complexity requirements entail long-distance shipping and long procedural times. Here too blockchain can help, by providing an effective solution in which advanced traceability of food is achieved based on increased transparency and safety. At the same time, problems surrounding food safety should be identified and authorities notified quickly.

For entities involved in agri-commerce, the application of blockchain technology will help provide faster payment options at reduced costs. Across the globe, farmers experience massive delays in the release of funds for produce submitted to various national agricultural boards. Adding to the farmer's misery is the costly nature of payment options such as wire transfers. Some of these inefficiencies can be solved by blockchain. There are already blockchain-based apps designed by some developers for peer-to-peer fund transfers that are secure, near-instantaneous, and cheap (Chinaka 2016). By using smart contracts, payments are automatically triggered as soon as the buyer confirms the fulfillment of certain conditions (a minimal sketch of this pattern follows at the end of this section).

### 4. Recommendations

Blockchain technology can also be used to verify the authenticity of agricultural inputs. More often than not, farmers are not sure if the inputs they buy are authentic. Local retailers likewise sell fake products to farmers, raking in huge profits as a result. Sometimes the retailers themselves may not know if the products they purchase from their suppliers are authentic. Even large companies that produce agricultural inputs are losing millions of dollars as a result of duplication or pilferage, which also negatively affects the companies' brand image. Blockchain may be a solution to this problem, as it will increase the traceability of each input sold, from the manufacturer to the last buyer. Blockchain will also make it possible for farmers and retailers to check the authenticity and origin of the inputs they buy. All they need to do is scan the blockchain barcode on each product with a smartphone (Crosby et al. 2016).

Yet another area of application for blockchain in agriculture is land title registration. Globally, the process of registering the sale or purchase of land is often complicated and highly susceptible to fraud. Land cartels corrupt the land registration process, making it difficult for buyers to know if the land they are buying or leasing is litigation-free. Blockchain can make the recording of property transactions more efficient and, because the recorded data is accessible and publicly available, more transparent as well (Chavez-Dreyfuss 2016). Blockchain is already being used in land registration, with one of the first movers in this space being the Indian state of Andhra Pradesh, which has partnered with ChromaWay, a blockchain startup from Sweden, to build a blockchain solution for land registration and record keeping (Anand, McKibbin & Pichel 2017). Record keeping requires significant labour and financial outlays, and blockchain is expected to reduce both. Moreover, with smart contracting between farmers and corporate farming firms, contracting for leasing land will become easier. Ethereum is an example of a blockchain project built to realise the potential of intelligent contracting.
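The following minimal sketch (ours, written in Python purely for illustration rather than in an actual smart-contract language; all names and values are made up) models that conditional-payment pattern: the contract holds the buyer's deposit and releases it to the farmer only once the buyer confirms delivery.

```python
class EscrowContract:
    """Toy model of a smart-contract escrow for a produce sale."""

    def __init__(self, buyer: str, farmer: str, price: int):
        self.buyer, self.farmer, self.price = buyer, farmer, price
        self.deposited = False
        self.released = False

    def deposit(self, sender: str, amount: int) -> None:
        # The buyer locks the funds up front; the contract now holds them.
        assert sender == self.buyer and amount == self.price
        self.deposited = True

    def confirm_delivery(self, sender: str) -> str:
        # Only the buyer's confirmation triggers the payout to the farmer.
        assert sender == self.buyer and self.deposited and not self.released
        self.released = True
        return f"pay {self.price} to {self.farmer}"

escrow = EscrowContract(buyer="coop", farmer="farm-17", price=100)
escrow.deposit("coop", 100)
print(escrow.confirm_delivery("coop"))  # -> "pay 100 to farm-17"
```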
Supporting small-scale farmers and the emerging cooperatives is one essential way to both impart and boost efficiency in less developed nations. Organisations should be able to bring the technology to future generations by using digitised networks to market small-scale products and support their producers (Chang et al. 2018). Other cooperatives established by farmers use methods that actively raise competition in less developed countries, thereby giving farmers a chance of winning a larger share of the market for the crops they are farming. AgriLedger works through a mobile app, which helps record information that, thanks to blockchain, is incorruptible. Small-scale farmers can use a distributed crypto ledger and mobile apps to create trusted circles. OlivaCoin, a B2B platform that supports the trading of olive oil, aims at reducing overall capital costs and maximising transparency, hence speeding up access to global markets (Kamilaris, Prenafeta-Boldú & Fonts 2018). Start-ups including Arc Net, Bext 360, Provenance, and Bart Digital provide small farm cultivators with tools and thus swift traceability for a growing number of products. Additionally, small-scale farmers may benefit from blockchain technology when they focus on carving out niches separate from major corporations. Currently, blockchain is swiftly gaining acceptance at major mainstream firms, suggesting the roles and uses of data analysis will grow (Elizur 2018). Small-scale farmers are therefore advised to begin maximising their options and to get in the game. Ultimately, cooperatives can include either small or medium farmers and grow into big entities capable of satisfying consumers; all this can be achieved with the use of blockchain technology, which aids in peacefully resolving disputes and feuds between farmers and cooperatives.

Across the globe, agriculture relies heavily on government subsidies, though how much of the subsidies actually reach farmers is an open question. Much of the money is grabbed up by cartels who purchase large quantities of agricultural inputs such as fertilisers, then exhaust the stock in order to force farmers to buy their inputs from the cartels. The application of blockchain will, however, improve transparency in the distribution and delivery of subsidies. This will ensure that the targeted disbursement of grants reaches local farmers and will help reduce theft and corruption in the system (Swan 2015). Establishing such a network is a complex process that calls for multiple stakeholders to come together, but that is not impossible with today's technology.

### 5. Conclusion

Blockchain has become a modern technology that is used in financial business transactions. Structured data records held in a blockchain are seen as secure, decentralised, and transparent. Data kept in a blockchain is digitally recorded and has a history that is standard and available to each user of the network. A digitised signature is used to secure the information stored on each blockchain, while network nodes validate every transaction that transpires on a blockchain. Blockchain technology has proved that technological advances in agriculture provide a solution to the crisis that has embroiled food production and human food consumption. Blockchain plays a critical role in food security, food safety, support for small-scale farmers and the evolution of ICT E-Agriculture. Blockchain technology has strong potential for success in agriculture.
It can help ensure food safety by making it possible to trace the source of contamination. It can also be used to trace the origin of an agricultural product and to verify the authenticity of farming inputs. Blockchain can also be employed in the disbursement of subsidies to farmers to ensure that they benefit from such programmes. Blockchain technology will bring better prices and better payment methods to farming, as well as solve challenges in land title sales and purchase registration. Blockchain is still at a relatively nascent stage, especially in the agricultural sector, and the challenges that remain are enormous. One of these challenges concerns regulation across the globe, as there is still no established system to regulate blockchain transactions. Nevertheless, the application of blockchain in agriculture holds promising rewards.

**Acknowledgment**

This work was supported by the Scientific Research Projects Coordination Unit of Bandırma Onyedi Eylül University, BAP-18-REKT-1009-079.

**Bibliography**

Allen D. W. E., Berg Ch., Davidson S., Novak M., Potts J. (2019), _International Policy Coordination for Blockchain Supply Chains_, "Asia and the Pacific Policy Studies", May, https://doi.org/10.1002/app5.281.

Anand A., McKibbin M., Pichel F. (2017), _Colored Coins: Bitcoin, Blockchain, and Land Administration_, 24 March, https://cadasta.org/resources/white-papers/bitcoin-blockchain-land/ (accessed: 24 November 2019).

Chang J., Katehakis M. N., Melamed B., Shi J. (2018), _Blockchain Design for Supply Chain Management_, "SSRN Solutions", 27 December, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3295440 (accessed: 24 November 2019), https://doi.org/10.2139/ssrn.3295440.

Chavez-Dreyfuss G. (2016), _Sweden Tests Blockchain Technology for Land Registry_, June 16, https://www.reuters.com/article/us-sweden-blockchain/sweden-tests-blockchain-technology-for-land-registry-idUSKCN0Z22KV (accessed: 24 November 2019).

Chinaka M. (2016), _Blockchain Technology – Applications in Improving Financial Inclusion in Developing Economies: Case Study for Small Scale Agriculture in Africa_, "Research Gate Journal".

Crosby M., Nachiappan, Pattanayak P., Verma S., Kalyanaraman V. (2016), _BlockChain Technology: Beyond Bitcoin_, "AIR Applied Innovation Review", no 2.

De Fazio M. (2016), _Agriculture and Sustainability of the Welfare: The Role of the Short Supply Chain_, "Agriculture and Agricultural Science Procedia", vol. 8, https://doi.org/10.1016/j.aaspro.2016.02.044.

Elizur I. (2018), _How to Use Blockchain and Big Data for Better Small Business Profits_, 26 December, https://smallbiztrends.com/2018/02/big-data-blockchain-small-business.html (accessed: 24 November 2019).

Fernández-Caramés T. M., Fraga-Lamas P. (2018), _A Review on the Use of Blockchain for the Internet of Things_, "IEEE Access", vol. 6, https://doi.org/10.1109/ACCESS.2018.2842685.

Ge L., Brewster C., Spek J., Smeenk A., Top J. (2017), _Blockchain for Agriculture and Food: Findings from the Pilot Study_, Wageningen Economic Research Report 2017-112, http://ictupdate.cta.int/2018/09/04/the-rise-of-blockchain-technology-in-agriculture/ (accessed: 24 November 2019).

Iuon-Chang Lin, Hsuan Shih, Jui-Chun Liu, Yi-Xiang Jie (2018), _Food Traceability System Using Blockchain_, Proceedings of 79th IASTEM International Conference, Tokyo, Japan, 6th–7th October 2017, August 30, http://www.worldresearchlibrary.org/up_proc/pdf/1121-151134311859-64.pdf (accessed: 24 November 2019).
Kamilaris A., Prenafeta-Boldú F., Fonts A. (2018), _The Rise of Blockchain Technology in Agriculture_, http://ictupdate.cta.int/2018/09/04/the-rise-of-blockchain-technology-in-agriculture/ (accessed: 24 November 2019).

Khan M. A., Salah K. (2017), _IoT Security: Review, Blockchain Solutions, and Open Challenges_, "Future Generation Computer Systems", vol. 82, https://doi.org/10.1016/j.future.2017.11.022.

Kim H., Laskowski M. (2018), _Toward an Ontology-driven Blockchain Design for Supply-chain Provenance_, "Intelligent Systems in Accounting, Finance, and Management", vol. 25, no. 1, https://doi.org/10.1002/isaf.1424.

Lemieux V. L. (2016), _Trusting Records: Is Blockchain Technology the Answer?_, "Records Management Journal", vol. 26, no. 2, https://doi.org/10.1108/rmj-12-2015-0042.

Prasannan K., Varghese B., Smita C. T. (2019), _Evaluation of Supply Chain Management Based on Block Chain Technology and Homomorphism Encryption_, "International Journal of Information Systems and Computer Sciences", vol. 8, no 2, https://doi.org/10.30534/ijiscs/2019/15822019.

Ray P., Om Harsh H., Daniel A., Ray A. (2019), _Incorporating Block Chain Technology in Food Supply Chain_, "International Journal of Management Studies", vol. VI, no 1(5), https://doi.org/10.18843/ijms/v6i1(5)/13.

Swan M. (2015), _Blockchain: Blueprint for a New Economy_, O'Reilly Media, Sebastopol, CA.

Swan M. (2018), _Blockchain: Blueprint for a New Economy_, O'Reilly Media, May 21, https://dl.acm.org/citation.cfm?id=3006358 (accessed: 24 November 2019).

Tian F. (2016), _An Agri-food Supply Chain Traceability System for China Based on RFID & Blockchain Technology_, 2016 13th International Conference on Service Systems and Service Management (ICSSSM), https://doi.org/10.1109/ICSSSM.2016.7538424.

Tian F. (2017), _A Supply Chain Traceability System for Food Safety Based on HACCP, Blockchain & Internet of Things_, 2017 International Conference on Service Systems and Service Management, https://doi.org/10.1109/ICSSSM.2017.7996119.

Underwood S. (2016), _Blockchain beyond Bitcoin_, "Communications of the ACM", vol. 59, no 11, https://doi.org/10.1145/2994581.

Yu-Pin Lin, Petway J., Anthony J., Mukhtar H., Shih-Wei Liao, Cheng-Fu Chou, Yi-Fong Ho (2017), _Blockchain: The Evolutionary Next Step for ICT E-Agriculture_, "Environments", vol. 4, no 3, https://doi.org/10.3390/environments4030050.

**The Use of Blockchain Technology in Agriculture**

(Summary)

_Objective:_ The aim of the article is to present selected aspects of the use of blockchain technology in agriculture and in the production and distribution of agricultural products.

_Research methodology:_ The article was prepared on the basis of a critical analysis of the world literature on the current application of blockchain technology in agriculture. It was assumed that blockchain technology is used in the agricultural sector to promote food security, prevent fraud on the food market, and verify the origin and authenticity of agricultural products and the means of agricultural production.

_Findings:_ Blockchain technology makes it possible to identify, in a transparent and comprehensible manner, all the elements of the agricultural value chain, to detect suboptimal processes, and to recognise actors whose intentions may be considered dishonest.
Using blockchain technology makes it possible to optimise the overall conditions in which the food market functions and to monitor the origin of food products, which is essential when irregularities in food safety occur. Early identification of, for example, the source of contamination of food products enables the rapid launch of the actions necessary to prevent disease, and thereby to save lives. A quick response to observed irregularities also helps limit food waste, which makes it possible to minimise financial losses.

_Conclusions:_ Blockchain technology today shows great potential for use in the agricultural sector. Thanks to the ability to track the food production process in detail, it can be used to ensure food safety. Given its capacity to store and transmit transaction information, blockchain technology can also be used in the process of paying subsidies to farmers, which guarantees broad access to agricultural subsidy programmes. In addition, blockchain technology offers farmers the possibility of negotiating the prices of agricultural products online and of using various payment methods, and it supports the processes of land sales and purchase registration.

_Contribution to the discipline:_ Blockchain technology is a new solution that can be used in the agricultural sector, which today faces enormous challenges. Although no coherent regulations on blockchain transactions have yet been developed, in the author's opinion the application of this technology in agriculture brings promising benefits.

**Keywords: blockchain technology, agriculture, food market fraud, food safety.**
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.15678/znuek.2019.0982.0401?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.15678/znuek.2019.0982.0401, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GOLD", "url": "https://zeszyty-naukowe.uek.krakow.pl/article/download/1918/1447" }
2019
[]
true
null
[ { "paperId": "0c23c4bbf358809ef4cefbade5bea60b1aade0c5", "title": "A Review on the Use of Block chain for the Internet of Things" }, { "paperId": "ada891ef7a359cd74fab24bc5bbe8c8c56130d19", "title": "Blockchain in Agriculture" }, { "paperId": "906551a4090bb022509652b493c551caa64bb1a6", "title": "Evaluation of Supply Chain Management based on Block Chain Technology and Homomorphism Encryption" }, { "paperId": "660b8debffd5ecc8d184b2810cbc8880724e6af7", "title": "International Policy Coordination for Blockchain Supply Chains" }, { "paperId": "273e26febae15e1b8da30fc6236d9038c01d84f1", "title": "Incorporating Block Chain Technology in Food Supply Chain" }, { "paperId": "1d563457e29bfe64a4a8d1a1e3c490144b2429d3", "title": "Blockchain Design for Supply Chain Management" }, { "paperId": "56740c50e90c62c8b3f3a5cc1a8176e91cad6abc", "title": "The rise of blockchain technology in agriculture" }, { "paperId": "02458904f9bd718bd8c6a1a36e9847ad83b0410b", "title": "A Review on the Use of Blockchain for the Internet of Things" }, { "paperId": "81f6442e50890b990598e637a44b2d8d10329710", "title": "IoT security: Review, blockchain solutions, and open challenges" }, { "paperId": "5312d8d6ce129c38e0788723bf5015700f9dcdd7", "title": "Blockchain: The Evolutionary Next Step for ICT E-Agriculture" }, { "paperId": "304083f2a7b00d07d7c33883e2e74ac0fd8245c5", "title": "A supply chain traceability system for food safety based on HACCP, blockchain & Internet of things" }, { "paperId": "efe573cbfa7f4de4fd31eda183fefa8a7aa80888", "title": "Blockchain beyond bitcoin" }, { "paperId": "69a22ec0bb3aeb424bc7d7ee2b8d1b4b59cda3cb", "title": "Trusting records: is Blockchain technology the answer?" }, { "paperId": "24cdeb7d7421012c2fdd362b8e2816c105b7071f", "title": "An agri-food supply chain traceability system for China based on RFID & blockchain technology" }, { "paperId": "97fddbbfd681bce9eeb8e0a013353b4d5b2ba0db", "title": "Blockchain: Blueprint for a New Economy" }, { "paperId": "44ee1bf827396f8a08f54be78e1b868c11de23bc", "title": "Toward an ontology-driven blockchain design for supply-chain provenance" }, { "paperId": null, "title": "How to Use Blockchain and Big Data for Better Small Business Profits" }, { "paperId": "e8eeebd53272ae5c7d6534cd7199c7308da5128d", "title": "Blockchain for agriculture and food: Findings from the pilot study" }, { "paperId": "2129a3227258bd7854008a8f2d1d20296d208694", "title": "Blockchain technology -- applications in improving financial inclusion in developing economies : case study for small scale agriculture in Africa" }, { "paperId": "ed31a604eeda368c3f9269d1a150cf098f274d00", "title": "Agriculture and Sustainability of the Welfare: The Role of the Short Supply Chain" }, { "paperId": "d23e3b0fecc9f24900a3e3dd4d31dda934c6a88d", "title": "Colored Coins: Bitcoin, Blockchain, and Land Administration" }, { "paperId": null, "title": "Food Traceability System Using Blockchain" } ]
7,304
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01b4bf3b85b406774ca35eb3b980a4fd6bf87dbc
[ "Computer Science" ]
0.861197
Towards Blockchain Interoperability: Improving Video Games Data Exchange
01b4bf3b85b406774ca35eb3b980a4fd6bf87dbc
International Conference on Blockchain
[ { "authorId": "1387489007", "name": "Léo Besançon" }, { "authorId": "143924526", "name": "Catarina Ferreira Da Silva" }, { "authorId": "71300895", "name": "P. Ghodous" } ]
{ "alternate_issns": null, "alternate_names": [ "ICBC", "IEEE Int Conf Blockchain Cryptocurrency", "IEEE International Conference on Blockchain and Cryptocurrency", "Int Conf Blockchain" ], "alternate_urls": null, "id": "f1ab8d75-7f15-4bb4-ad88-e834ec6ed604", "issn": null, "name": "International Conference on Blockchain", "type": "conference", "url": null }
Current solutions for designing and building decentralized blockchain applications lack interoperability. Consequently, blockchains and existing technologies do not integrate well in a unified framework. This integration is necessary to work around some of the blockchains constraints, such as scalability of transactions and ergonomics. Indeed, blockchains are not suitable for huge data storage, but there are distributed data storage solutions that can be used in a decentralized blockchain application. Regarding ergonomics, the use of blockchain technology should be in the background and transparent for users that may not know how to set up and secure a blockchain-based application.We propose an architecture aiming to easily link existing decentralized technologies and blockchains. We then discuss the impact of this architecture for the video game industry. As a result, we propose an original data representation of blockchain gaming assets in order to improve data exchanges in this industry.
## Towards Blockchain Interoperability: Improving Video Games Data Exchange

### Léo Besançon, Catarina Ferreira da Silva, Parisa Ghodous

To cite this version: Léo Besançon, Catarina Ferreira da Silva, Parisa Ghodous. Towards Blockchain Interoperability: Improving Video Games Data Exchange. IEEE International Conference on Blockchain and Cryptocurrency, May 2019, Seoul, South Korea. pp. 81-85, 10.1109/BLOC.2019.8751347. HAL Id: hal-02085698, https://hal.science/hal-02085698v1, submitted on 14 May 2019.

# Towards Blockchain Interoperability: Improving Video Games Data Exchange

Léo Besançon, Catarina Ferreira Da Silva, Parisa Ghodous

_Univ Lyon, Université Claude Bernard Lyon 1, LIRIS, F-69100, Villeurbanne, France_

{leo.besancon, catarina.ferreira-da-silva, parisa.ghodous}@liris.cnrs.fr

**_Abstract_—Current solutions for designing and building decentralized blockchain applications lack interoperability. Consequently, blockchains and existing technologies do not integrate well in a unified framework. This integration is necessary to work around some of the blockchains constraints, such as scalability of transactions and ergonomics. Indeed, blockchains are not suitable for huge data storage, but there are distributed data storage solutions that can be used in a decentralized blockchain application. Regarding ergonomics, the use of blockchain technology should be in the background and transparent for users that may not know how to set up and secure a blockchain-based application. We propose an architecture aiming to easily link existing decentralized technologies and blockchains. We then discuss the impact of this architecture for the video game industry. As a result, we propose an original data representation of blockchain gaming assets in order to improve data exchanges in this industry.**

**_Index Terms_—Blockchain, interoperability, standards, video games**

I. INTRODUCTION

Blockchain (BC) is an innovative technology which can have a high impact in numerous industries, such as healthcare [1], supply chain [2], finance [3] and video games [4]. BC are append-only ledgers shared across a network of clients. Zheng et al. show in [5] some of the promises of this technology: decentralization, anonymity, persistency of information and auditability.
However, they also highlight some of its current challenges: each node needs to keep the history of all the transactions made in the network, so the storage space keeps increasing, and the number of transactions that can be processed by the network is quite limited, around 7 transactions per second for Bitcoin. Deshpande et al. [6] also show the importance of resolving interoperability issues and developing standards in the BC field. This interoperability need can be found at multiple levels: a) between different BC, b) between different projects running on the same BC, and c) between BC and other technologies used to create decentralized applications. BC are usually distributed, meaning the record of all transactions is replicated across multiple physical nodes. They can also be decentralized, meaning they are not controlled by a single entity (e.g. a government or company). In this case, control is determined by a consensus mechanism, which determines which blocks are considered valid for the network. In this paper, we mainly focus on decentralized blockchain applications (DBA).

II. RELATED WORK

_A. Interoperability between blockchains_

Since the creation of Bitcoin, various new BC designs have tried to improve the technology. For example, EOS [7] uses Delegated Proof of Stake (DPoS) as a method for achieving consensus, which compromises decentralization in order to increase throughput. There is no unified standard across all BC designs, and this leads to the need for research regarding interoperability between BC [8]–[11]. In particular, [12] proposes a layered architecture to improve communication between BC.

_B. Interoperability in a particular field_

Some research works also try to solve interoperability issues within a particular field. This is the case of [13], which analyzes how to leverage BC technology to improve data sharing between patients and healthcare institutions. Standardization efforts have come from the IEEE Blockchain Initiative [14] and the IEEE Standards Association [15]. For example, a framework focused on the Internet of Things is proposed in [16]. Concurrently, the Enterprise Ethereum Alliance (EEA) [17] designs specifications for BC clients, built for the Ethereum ecosystem, that could have enterprise usage. Unfortunately, these proposals cannot easily be extended to other applications or applied to other BC. For example, the EEA aims to reach enterprises, so they do not take decentralization into account in their specifications [18]. The architecture proposed by IBM [19] has similar limitations: even though it includes a public network for customers, the BC is managed by an administrator and its consensus is achieved by trusted participants. Similarly, in the video game industry, Hoard [20] aims to better integrate BC in game engines for developers, as well as to abstract complexities of the BC for players. However, they do not propose a generic framework for DBA. Approaching the problem from a semantic perspective, like [39] did for smart contract security, could improve interoperability.

_C. Standardization of a particular blockchain_

Protocols and commonly used interfaces in the BC space have mostly been standardized with a bottom-up approach. This is achieved mainly through Bitcoin Improvement Proposals (BIPs) and Ethereum Improvement Proposals (EIPs), as well as Ethereum Requests for Comments (ERCs). The latter has seen several proposed standards for asset management, each built on the ERC-20 token standard [21].
In the video game industry, for non-fungible tokens, the most used token standard is ERC-721 [22] (e.g. collectible virtual objects such as CryptoKitties [23]). More recently, ERC-1155 [24] proposes a unified interface able to manage both fungible and non-fungible assets. Currently, these standards only cover the BC side of an asset, by specifying the smart contract interfaces tools need to support in order to manage the assets. However, this approach is limited, as it does not take the ecosystem as a whole into account. For example, collectible assets such as CryptoKitties are represented by images. These images are centralized and controlled by the servers of the project's company. This design choice could be challenged if any decentralized image storage standard were associated with the ERC-721 standard (a short sketch below illustrates this combination). Indeed, most decentralized applications cannot use only BC technology, as it currently has several limitations. For example, the cost of permanently storing large amounts of data (e.g. images) on the Ethereum BC is prohibitive [25]. As a result, developers need to use BC only for the core processing of the application. Non-crucial processing, storage and other ancillary tasks have to be managed by other tools, such as distributed file storage solutions. Interoperability between a BC and these tools is a challenge, and it should be better taken into consideration when building standards.

_D. Decentralized application example_

The project Decentraland [26] uses a novel approach in order to create a virtual universe where users can purchase virtual land. Users can add 3D models, videos or sounds to their land, and script their content to interact with other users. Concretely, it is possible to design games that will run inside this virtual world. However, with the current specifications, developers need to design their games around the project's ecosystem. For example, the game's logic can only be programmed using the project's language. Having a more generic design could help bring support for existing game engines more easily, and attract more content creators to the platform. To summarize, to the best of our knowledge, no well-defined and complete architecture specifications for generic DBA have been proposed yet. As a result, integrating BC into video games is difficult to do with current technology and development tools. The National Institute of Standards and Technology confirms [27] that current and future work regarding BC standardization concerns BC interoperability, among other topics. This is why our work focuses on the proposition of a generic design for DBA that could be applied in most potential applications of BC technology. We propose a design pattern to help developers better integrate BC into their applications.

III. PROPOSED ARCHITECTURE

The goal of the layered architecture shown in Fig. 1 is to provide the building blocks to support a DBA. It also avoids using a BC for non-suitable tasks. For the sake of genericity, it is analogous to the OSI model, and each layer needs to communicate with its neighbors. As seen previously in [25], BC technology is not suitable for huge data storage. Fortunately, other decentralized tools can be used in addition to a BC to implement a complete application [28]. For example, static storage can be distributed on the InterPlanetary File System (IPFS) [29] for free, as any node can choose the content they host.
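For instance, here is a rough sketch (ours; the asset name and IPFS identifier are made up) of the kind of metadata document an ERC-721 tokenURI could resolve to if the referenced image were content-addressed on IPFS instead of being hosted on a company server, following the standard's name/description/image metadata schema:

```python
import json

# Illustrative ERC-721-style metadata; the CID below is a placeholder.
asset_metadata = {
    "name": "Sword of Interop",
    "description": "A tradable in-game collectible",
    # A content-addressed URI commits to the image bytes themselves,
    # so the artwork cannot be silently swapped on a central server.
    "image": "ipfs://QmYwAPJzv5CZsnA625s3Xf2nemtYgPpHdWEz79ojWnPbdG",
}
print(json.dumps(asset_metadata, indent=2))
```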
However, in practice, if a company does not want to lose the files needed for a product, they can host a node which acts as a gateway if no one else is incentivized to host the files. Other projects try instead to give an economic incentive to store data. For example, FileCoin [30] is a BC layer built on top of IPFS, and Swarm [31] is an Ethereum Foundation project aiming at bringing decentralized storage to the Ethereum ecosystem. One of the main challenges faced by the file storage layer is to correctly estimate the needs of an application in terms of data availability, decentralization and data loss prevention. For dynamic content and queries, we cannot directly use the decentralized storage tools mentioned above, as they only support static files. But several projects (OrbitDB [32], a layer on top of IPFS, and Gun [33]) make use of conflict-free replicated data types. These databases use data types which are suitable for a distributed environment, as it is always possible to resolve incoherence between peers, even when they go offline regularly. Brewer's theorem states that a distributed database can have at most two of the following properties: consistency, availability and partition tolerance. Using this theorem, an application-specific choice has to be made in this layer in order to have the suitable trade-off for the considered use case.

[Fig. 1. Decentralized blockchain application architecture; the top row of the diagram reads "Communication layer (HTTP, WebRTC, libp2p, JSON-RPC, gRPC, …)".]

The processing layer aims to validate data integrity and to manage crucial game mechanics (ensuring financial integrity, preventing cheating, etc.). This is done by any BC which supports smart contracts, e.g. Ethereum [34], EOS [7], Hyperledger Fabric [35]. The choice of the specific BC used in this layer has to be made by the developers depending on the use case. Indeed, it is sometimes preferable to prioritize throughput over decentralization or security. In these cases, it makes sense to use EOS or Hyperledger instead of Ethereum to process the application's smart contracts. However, the BC chosen in the processing layer may not have the exact properties needed. Platforms and second-layer solutions can help to improve the interoperability between projects, as they can abstract the processing layer so that the application layer can interact with any of the possible BC. This abstraction is useful because it allows developers of projects built on different BC to use similar terminology, designs and mechanisms. Second-layer solutions can also improve the scalability of the BC. Developers of DBA currently have two means to scale up the number of transactions, both of which aim to avoid sending transactions on the public BC: sidechains [37] and state channels, popularized by the work on the Lightning Network [36]. If a decentralized multiplayer game must have low latency, the developers can implement a token ring network structure through state channels for player communications, instead of having all players transact on the BC. The application layer is related to the interface the user connects to in order to use the application. It needs to interact with the BC but also to abstract complicated concepts regarding cryptography for the user. Indeed, developers cannot expect the users to know how the BC works, or the consequences regarding the security of their funds. This is why the ease of use and ergonomics of the technology are crucial.
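To make the layering concrete, the following minimal sketch (ours, not from the paper; all class and method names are illustrative) shows how the storage, processing and application layers could be kept behind narrow interfaces, so that one BC or storage backend can be swapped for another:

```python
from abc import ABC, abstractmethod

class StorageLayer(ABC):
    """Static or dynamic content: e.g. IPFS, FileCoin, Swarm, OrbitDB, Gun."""

    @abstractmethod
    def put(self, data: bytes) -> str:
        """Store data and return its content hash."""

    @abstractmethod
    def get(self, content_hash: str) -> bytes:
        """Retrieve data by its content hash."""

class ProcessingLayer(ABC):
    """Crucial logic on a smart-contract BC: e.g. Ethereum, EOS, Fabric."""

    @abstractmethod
    def verify(self, content_hash: str) -> bool:
        """Check that a hash is recorded and validated on-chain."""

class ApplicationLayer:
    """User-facing layer; hides keys and cryptographic details."""

    def __init__(self, storage: StorageLayer, processing: ProcessingLayer):
        self.storage = storage
        self.processing = processing

    def load_asset(self, content_hash: str) -> bytes:
        # Only load content whose hash the processing layer has validated.
        if not self.processing.verify(content_hash):
            raise ValueError("asset not validated on-chain")
        return self.storage.get(content_hash)
```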
For example, gamifying wallet creation is an interesting way to make sure the user has securely stored the seed words for their wallet. To enable communication between peers and these layers, a decentralized application should use peer-to-peer networking tools. Peer discovery may be difficult to achieve without a server, but it is possible to use Distributed Hash Tables (DHTs), as Guidi et al. did in [38]. Finally, we see that each layer of this architecture has its own challenges, both from research and engineering perspectives. Design and interface specifications would greatly help resolve these challenges.

IV. APPLICATION TO THE VIDEO GAME INDUSTRY

In the video game industry, BC can be used to improve trust between players and developers, as well as to reduce friction in the game implementation. For example:

• The founder of Ethereum, Vitalik Buterin, realized the importance of decentralization when Blizzard unilaterally updated the rules regarding one of his World of Warcraft assets [40]. The player felt cheated by the developers because he understood he didn't truly own his assets.

• If players can interact with peer-to-peer technologies, game developers don't have to pay for expensive servers, as all the processing can be done by the players' machines.

• BC are especially suitable for ownership management. Games like Lunar Mines [41] take advantage of BC by letting players easily craft and trade items with other players. This type of game mechanic could be implemented in a non-BC game, but developers would need to recreate the asset ownership database and trading features the BC provides, so it would be harder to implement.

The main drawbacks of games using BC technology compared to centralized games are the technological complexity of BC systems and the lack of control over certain aspects. For example, unwanted or illegal content could be harder to censor. The video game industry entails various additional constraints. For example, most video games need real-time data exchange between multiple players. Moreover, graphical assets generally need a lot of storage space and bandwidth. In order to apply the proposed architecture to this industry, we show in Fig. 2 a possible life cycle of a BC game asset. [Fig. 2. Life cycle of a blockchain game asset.] Once it is created, we need to store it in the suitable format and storage solution. In order to validate the properties of the asset with the BC, hashes and the main properties should be stored in a smart contract that manages the asset. Depending on the application, the validation step can also aim to ensure the data inside the asset can be used by the application, and does not contain unwanted or illegal content. This step can be achieved by a centralized entity that stakes its reputation on the asset validity, by a community vote or by any other consensus method. Finally, whenever a player wants to use the asset (because they or another player own it in-game), the application layer should check its properties. To correctly represent a video game asset on the BC, we have seen that ERC token standards currently do not interface well with the other layers. For example, only the ERC-721 interface allows for a reference to metadata, and it only consists of one URI that could potentially become obsolete. The approach described in [26] has a similar issue. The representation of a video game asset needs to be generic and quickly implementable with existing technologies. We define two types of parameters: asset handling
properties, and asset-specific properties. The first type describes the properties required to identify the asset: for example its name, its hash, or its validation status. The second type contains anything else. In order to be able to represent generic assets, we focused on an archiving format similar to Java archives (JAR). The content of this representation is described in Fig. 3. [Fig. 3. Content of a blockchain game asset archive, including folders for shaders, sounds, images and videos.] Most of the elements mentioned are self-explanatory. However, the following list adds precision to some of them:

• Hash (multihash [42] format) - can be used to quickly reference the asset. For example, if the hash is stored in a smart contract, one can retrieve the asset from the hash and then recompute it to ensure data integrity. The multihash format is self-describing, and we can implement it with any hashing function. This means that if an existing hashing function becomes obsolete because of hash collisions, we can change it with back-compatibility. However, hash collisions are less critical in our use case, as we do not transfer value between users.

• Properties - related to the asset (e.g. its in-game effects).

• Smart contract's Application Binary Interface (ABI) - describes the prototypes of the functions of the contract.

• Smart contract's and creator's address reference (string) - a reference to the smart contract's address with a naming system such as the Ethereum Naming System (ENS).

• Child assets hashes - can be used for crafting different assets into one.

With this asset specification, game developers and BC engineers can use and agree on the same data representation. Also, it will be easier to develop tools to quickly import the assets and interact with the BC from game engines (Unity or Unreal Engine). To assess the feasibility of our proposal, we want to release follow-up research showing a prototype of a fully decentralized game implementing the architecture we propose, and our proposed data representation for data transfers between players. This interoperability between game engines and the BC also allows for new game mechanisms, as shown in the next section.

V. USER GENERATED CONTENT

User Generated Content (UGC) gives players the opportunity to create assets and share them with anyone in the community. Our asset data representation and architecture can help game creators and users by providing a unified distributed game design framework, which supports interoperability. Indeed, anyone can create an asset following our representation. Then, the asset is verified on the BC by a smart contract and referenced by its hash, which ensures data integrity. We can ensure the asset follows community rules by Proof of Authority, as it is easy to implement, but it compromises decentralization. A more decentralized approach could use a community vote. In this case, to avoid Sybil attacks, only players above a certain level in-game (or having played a certain time) could vote. Another possibility is to automatically filter unwanted content using machine learning in a decentralized cloud computing framework (using, for example, the products proposed by iExec [43] or Golem [44]), but this approach brings a different set of constraints than reaching consensus within the community. For example, an artificial intelligence needs data for its training, and errors need to be handled.
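A minimal sketch (ours; the field names and values are illustrative, and the hash prefixing below is multihash-inspired rather than the actual multihash wire format) of how such an asset manifest could be assembled and referenced by a self-describing hash:

```python
import hashlib
import json

# Illustrative asset manifest following the two parameter types described
# above: handling properties (identification) and asset-specific properties.
manifest = {
    "handling": {
        "name": "Sword of Interop",
        "validated": False,
        "contract_abi": [],                  # smart contract's ABI would go here
        "contract_ref": "game.example.eth",  # ENS-style name, illustrative
        "children": [],                      # hashes of crafted child assets
    },
    "specific": {"damage": 12, "rarity": "rare"},
}

def self_describing_hash(data: bytes, algo: str = "sha256") -> str:
    """Prefix the digest with the algorithm name, so the hash function can
    later be replaced with back-compatibility, as noted above."""
    digest = hashlib.new(algo, data).hexdigest()
    return f"{algo}:{digest}"

payload = json.dumps(manifest, sort_keys=True).encode()
asset_hash = self_describing_hash(payload)
# This is the value a smart contract would store: to check integrity, one
# re-downloads the archive, recomputes the hash and compares it on-chain.
print(asset_hash)
```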
An advantage of using BC for UGC is that content creators can automatically receive royalties for the usage of their assets. An example of a business model would be to reserve part of the game's revenue for community content creators, based on how much content they provided and how much it is used by the community. This incentivizes content creation and the involvement of players in the game.

VI. CONCLUSIONS AND PROSPECTS

Better interoperability between BC and existing technologies is needed. This interoperability can be obtained by formalizing specifications for intercommunication between the layers of the architecture of DBA. In this work, we presented an architecture applied to the video game industry. We saw that existing BC data representations could not easily be used throughout the whole architecture. That is why we described a new data representation for BC assets that contains all the necessary information to be used in a BC environment. It also takes into account the scalability issues of the BC by allowing easier data sharing of BC assets. The next steps will be to generalize what we learned from the application of our proposal to the video game industry and refine our proposed design by considering applications in other industries, e.g., the Internet of Things, banking, or supply chains. For example, we can use BC assets to create diploma certifications. Assets would contain the diploma, and hashes would be referenced on the BC. The advantage of BC technology here would be to timestamp the certification and allow for revocation. The validation of our framework will be achieved by developing a proof-of-concept of a decentralized, real-time BC game using our architecture and asset data representation. Besides the decentralization, auditability, and security benefits of BC, this allows the game's community to be more involved in the governance and content creation of the game.

ACKNOWLEDGMENT

The PhD work of Léo Besançon is supported by B2Expand, 69100 Villeurbanne, France. We thank Éric Burgel, chairman of B2Expand, for his help and advice.

REFERENCES

[1] A. Azaria, A. Ekblaw, T. Vieira, and A. Lippman, "MedRec: Using blockchain for medical data access and permission management," in International Conference on Open and Big Data (OBD), 2016, pp. 25–30.
[2] S. A. Abeyratne and R. P. Monfared, "Blockchain ready manufacturing supply chain using distributed ledger," International Journal of Research in Engineering and Technology, vol. 5, no. 9, pp. 1–10, Sep. 2016.
[3] Y. Guo and C. Liang, "Blockchain application and outlook in the banking industry," Financial Innovation, vol. 2, no. 1, Dec. 2016.
[4] XAYA, "The ultimate blockchain gaming platform," XAYA White Paper, 2018. [Online]. Available: https://xaya.io/downloads/XAYA White Paper.pdf. [Accessed: 5-Dec-2018].
[5] Z. Zheng, S. Xie, H.-N. Dai, X. Chen, and H. Wang, "Blockchain challenges and opportunities: A survey," International Journal of Web and Grid Services, vol. 14, no. 4, pp. 352–375, 2018.
[6] A. Deshpande, K. Stewart, L. Lepetit, and S. Gunashekar, "Distributed Ledger Technologies/Blockchain: Challenges, opportunities and the prospects for standards," prepared for the British Standards Institution (BSI), May 2017.
[7] I. Grigg, "EOS - An Introduction," 2017. [Online]. Available: https://eos.io/documents/EOS An Introduction.pdf. [Accessed: 12-Dec-2018].
[8] V. Buterin, "Chain Interoperability," R3 Research Paper, 2016.
[9] J. Kwon and E. Buchman, "Cosmos: A network of distributed ledgers," 2017. [Online]. Available: https://cosmos.network/resources/whitepaper. [Accessed: 12-Dec-2018].
[10] G. Wood, "Polkadot: Vision for a heterogeneous multichain framework," White Paper, 2016. [Online]. Available: https://polkadot.network/PolkaDotPaper.pdf. [Accessed: 12-Dec-2018].
[11] T. Hardjono, A. Lipton, and A. Pentland, "Towards a Design Philosophy for Interoperable Blockchain Systems," arXiv:1805.05934 [cs], May 2018.
[12] H. Jin, X. Dai, and J. Xiao, "Towards a Novel Architecture for Enabling Interoperability amongst Multiple Blockchains," in 2018 IEEE 38th International Conference on Distributed Computing Systems (ICDCS), 2018, pp. 1203–1211.
[13] W. J. Gordon and C. Catalini, "Blockchain Technology for Healthcare: Facilitating the Transition to Patient-Driven Interoperability," Computational and Structural Biotechnology Journal, vol. 16, pp. 224–230, Jan. 2018.
[14] "Standards - IEEE Blockchain Initiative." [Online]. Available: https://blockchain.ieee.org/standards. [Accessed: 11-Dec-2018].
[15] "IEEE-SA - The IEEE Standards Association - Home." [Online]. Available: https://standards.ieee.org. [Accessed: 19-Dec-2018].
[16] Standard for the Framework of Blockchain Use in Internet of Things (IoT), P2418.1, 2017. [Online]. Available: https://standards.ieee.org/project/2418 1.html. [Accessed: 12-Dec-2018].
[17] "Enterprise Ethereum Alliance - Home." [Online]. Available: https://entethalliance.org. [Accessed: 19-Dec-2018].
[18] Enterprise Ethereum Alliance, Enterprise Ethereum Client Specification V2, 2018. [Online]. Available: https://entethalliance.org/wp-content/uploads/2018/11/EEA Enterprise Ethereum Client Specification V2.pdf. [Accessed: 12-Dec-2018].
[19] "Blockchain reference architecture - IBM Cloud Garage Method." [Online]. Available: https://www.ibm.com/cloud/garage/architectures/blockchainArchitecture/reference-architecture/. [Accessed: 11-Dec-2018].
[20] Hoard — Facilitating True Ownership of Virtual Gaming Assets on the Ethereum Blockchain. Buy, Sell and Rent Downloadable Content on a Marketplace. Powered by Blockchain Technology. [Online]. Available: https://www.hoard.exchange/index.html.
[21] F. Vogelsteller and V. Buterin, "ERC-20 Token Standard," 2015. [Online]. Available: https://github.com/ethereum/EIPs/blob/master/EIPS/eip-20.md. [Accessed: 5-Dec-2018].
[22] W. Entriken, S. Dieter, E. Jacob, and N. Sachs, "ERC-721 Non-Fungible Token Standard," 2018. [Online]. Available: https://github.com/ethereum/EIPs/blob/master/EIPS/eip-721.md. [Accessed: 5-Dec-2018].
[23] CryptoKitties, "CryptoKitties — Collect and breed digital cats!," 2018. [Online]. Available: https://www.cryptokitties.co/. [Accessed: 6-Dec-2018].
[24] W. Radomski, A. Cooke, P. Castonguay, J. Therien, and E. Binet, "Multi Token Standard," 2018. [Online]. Available: https://github.com/ethereum/EIPs/blob/master/EIPS/eip-1155.md. [Accessed: 5-Dec-2018].
[25] A. Palau, "Storing on Ethereum. Analyzing the costs," Coinmonks, 2018. [Online]. Available: https://medium.com/coinmonks/storing-on-ethereum-analyzing-the-costs-922d41d6b316. [Accessed: 6-Dec-2018].
[26] E. Ordano, A. Meilich, Y. Jardi, and M. Araoz, "Decentraland: A blockchain-based virtual world," 2017. [Online]. Available: https://decentraland.org/whitepaper.pdf. [Accessed: 6-Dec-2018].
[27] A. Regenscheid and D. Yaga, "Blockchain and Distributed Ledger Technologies: Opportunities, Challenges and Future Work," 2017. [Online]. Available: https://csrc.nist.gov/CSRC/media/Presentations/NIST-BlockChain-Research-Project/images-media/ar-dy-blockchain-combined.pdf. [Accessed: 6-Dec-2018].
[28] A. P. Kryukov and A. P. Demichev, "Decentralized Data Storages: Technologies of Construction," Program. Comput. Soft., vol. 44, no. 5, pp. 303–315, Sep. 2018.
[29] J. Benet, "IPFS - Content Addressed, Versioned, P2P File System," arXiv:1407.3561 [cs], Jul. 2014.
[30] Protocol Labs, "Filecoin: A Decentralized Storage Network," 2018. [Online]. Available: https://filecoin.io/filecoin.pdf. [Accessed: 12-Dec-2018].
[31] V. Trón, A. Fischer, D. A. Nagy, Z. Felföldi, and N. Johnson, "Swap, swear and swindle: Incentive system for swarm," Technical Report, Ethersphere Orange Papers 1, 2016.
[32] G. Agrawal, "OrbitDB: A peer-to-peer database for the decentralized web," 2018. [Online]. Available: https://medium.com/coinmonks/orbitdb-a-peer-to-peer-database-for-the-decentralized-web-30bac1d056fe. [Accessed: 6-Dec-2018].
[33] M. Nadal, "amark/gun: A realtime, decentralized, offline-first, graph database engine," GitHub, 2018. [Online]. Available: https://github.com/amark/gun. [Accessed: 6-Dec-2018].
[34] V. Buterin et al., "A next-generation smart contract and decentralized application platform," 2014. [Online]. Available: https://github.com/ethereum/wiki/wiki/White-Paper. [Accessed: 6-Dec-2018].
[35] E. Androulaki et al., "Hyperledger Fabric: A distributed operating system for permissioned blockchains," in Proceedings of the Thirteenth EuroSys Conference (EuroSys '18), Porto, Portugal, 2018, pp. 1–15.
[36] J. Poon and T. Dryja, "The Bitcoin Lightning Network: Scalable Off-Chain Instant Payments," 2016. [Online]. Available: https://lightning.network/lightning-network-paper.pdf. [Accessed: 12-Dec-2018].
[37] A. Back et al., "Enabling Blockchain Innovations with Pegged Sidechains," 2014. [Online]. Available: http://kevinriggen.com/files/sidechains.pdf. [Accessed: 12-Dec-2018].
[38] B. Guidi, M. Conti, A. Passarella, and L. Ricci, "Managing social contents in Decentralized Online Social Networks: A survey," Online Social Networks and Media, vol. 7, pp. 12–29, Sep. 2018.
[39] I. Grishchenko, M. Maffei, and C. Schneidewind, "A Semantic Framework for the Security Analysis of Ethereum Smart Contracts," in Principles of Security and Trust, 2018, pp. 243–269.
[40] V. Buterin, "Vitalik Buterin on about.me," about.me. [Online]. Available: https://about.me/vitalik buterin. [Accessed: 11-Mar-2019].
[41] Lunar Mines - Own your space. [Online]. Available: https://lunarmines.io. [Accessed: 11-Mar-2019].
[42] Multiformats/multihash: Self-describing hashes - for future proofing. [Online]. Available: https://github.com/multiformats/multihash. [Accessed: 11-Mar-2019].
[43] G. Fedak, B. Wassim, and A. Eduardo, "iExec: Blockchain-Based Decentralized Cloud Computing," Version 3.0, 2018. [Online]. Available: https://iex.ec/whitepaper/iExec-WPv3.0-English.pdf. [Accessed: 8-Dec-2018].
[44] Golem, "The Golem Project. Whitepaper," 2016. [Online]. Available: https://golem.network/crowdfunding/Golemwhitepaper.pdf. [Accessed: 8-Dec-2018].
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/BLOC.2019.8751347?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/BLOC.2019.8751347, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "other-oa", "status": "GREEN", "url": "https://hal.archives-ouvertes.fr/hal-02085698/file/Final_version_IEEE_Blockchain_Conf-HAL.pdf" }
2019
[ "JournalArticle", "Conference" ]
true
2019-05-01T00:00:00
[]
8,203
en
[ { "category": "Engineering", "source": "external" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Economics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01b5eadb819072a88c5ec8db6af70d3bb4b033f9
[ "Engineering" ]
0.893853
Forecasting in Blockchain-based Local Energy Markets
01b5eadb819072a88c5ec8db6af70d3bb4b033f9
Energies
[ { "authorId": "134417826", "name": "Michael Kostmann" }, { "authorId": "2015030", "name": "W. Härdle" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-155563", "https://www.mdpi.com/journal/energies", "http://www.mdpi.com/journal/energies" ], "id": "1cd505d9-195d-4f99-b91c-169e872644d4", "issn": "1996-1073", "name": "Energies", "type": "journal", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-155563" }
Increasingly volatile and distributed energy production challenges traditional mechanisms to manage grid loads and price energy. Local energy markets (LEMs) may be a response to those challenges as they can balance energy production and consumption locally and may lower energy costs for consumers. Blockchain-based LEMs provide a decentralized market to local energy consumer and prosumers. They implement a market mechanism in the form of a smart contract without the need for a central authority coordinating the market. Recently proposed blockchain-based LEMs use auction designs to match future demand and supply. Thus, such blockchain-based LEMs rely on accurate short-term forecasts of individual households’ energy consumption and production. Often, such accurate forecasts are simply assumed to be given. The present research tested this assumption by first evaluating the forecast accuracy achievable with state-of-the-art energy forecasting techniques for individual households and then, assessing the effect of prediction errors on market outcomes in three different supply scenarios. The evaluation showed that, although a LASSO regression model is capable of achieving reasonably low forecasting errors, the costly settlement of prediction errors can offset and even surpass the savings brought to consumers by a blockchain-based LEM. This shows that, due to prediction errors, participation in LEMs may be uneconomical for consumers, and thus, has to be taken into consideration for pricing mechanisms in blockchain-based LEMs.
# energies
_Article_

## Forecasting in Blockchain-Based Local Energy Markets

Michael Kostmann 1,* and Wolfgang K. Härdle 2,3,4

1 School of Business and Economics, Humboldt-Universität zu Berlin, Spandauer Str. 1, 10178 Berlin, Germany
2 Ladislaus von Bortkiewicz Chair of Statistics, School of Business and Economics, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099 Berlin, Germany
3 Wang Yanan Institute for Studies in Economics, Xiamen University, 422 Siming Road, Xiamen 361005, China
4 Department of Mathematics and Physics, Charles University Prague, Ke Karlovu 2027/3, 12116 Praha 2, Czech Republic
* Correspondence: michael.kostmann@hu-berlin.de

Received: 2 June 2019; Accepted: 9 July 2019; Published: 16 July 2019

**Abstract:** Increasingly volatile and distributed energy production challenges traditional mechanisms to manage grid loads and price energy. Local energy markets (LEMs) may be a response to those challenges as they can balance energy production and consumption locally and may lower energy costs for consumers. Blockchain-based LEMs provide a decentralized market to local energy consumer and prosumers. They implement a market mechanism in the form of a smart contract without the need for a central authority coordinating the market. Recently proposed blockchain-based LEMs use auction designs to match future demand and supply. Thus, such blockchain-based LEMs rely on accurate short-term forecasts of individual households' energy consumption and production. Often, such accurate forecasts are simply assumed to be given. The present research tested this assumption by first evaluating the forecast accuracy achievable with state-of-the-art energy forecasting techniques for individual households and then, assessing the effect of prediction errors on market outcomes in three different supply scenarios. The evaluation showed that, although a LASSO regression model is capable of achieving reasonably low forecasting errors, the costly settlement of prediction errors can offset and even surpass the savings brought to consumers by a blockchain-based LEM. This shows that, due to prediction errors, participation in LEMs may be uneconomical for consumers, and thus, has to be taken into consideration for pricing mechanisms in blockchain-based LEMs.

**Keywords:** blockchain; local energy market; smart contract; smart meter; short-term energy forecasting; machine learning; least absolute shrinkage and selection operator (LASSO); long short-term memory (LSTM); prediction errors; market mechanism; market simulation

**JEL Classification:** Q47; D44; D47; C53

**1. Introduction**

The "Energiewende", or energy transition, is a radical transformation of Germany's energy sector towards carbon-free energy production. This energy revolution has led in recent years to the widespread installation of renewable energy generators [1,2]. In 2017, more than 1.6 million photovoltaic micro-generation units were already installed in Germany [3]. Although this is a substantial step towards carbon-free energy production, there is a downside: The increasing amount of distributed and volatile renewable energy resources, possibly combined with volatile energy consumption, presents a serious challenge for grid operators. As energy production and consumption have to be balanced in electricity grids at all times [4], modern technological solutions to manage grid loads and price renewable energy are needed.
One possibility to increase the level of energy distribution efficiency on low aggregation levels is the implementation of local energy markets (LEMs) in a decentralized approach, an example being the Brooklyn Microgrid [5]. LEMs enable interconnected energy consumers, producers, and prosumers to trade energy in near real-time on a market platform with a specific pricing mechanism [6]. A common pricing mechanism used for this purpose is the discrete double auction [7–9]. Blockchain-based LEMs utilize a blockchain as the underlying information and communication technology and a smart contract to match future supply and demand and to settle transactions [10]. As a consequence, a central authority that coordinates the market is obsolete in a blockchain-based LEM. Major advantages of such LEMs are the balancing of energy production and consumption in local grids [11], lower energy costs for consumers [12], more customer choice (empowerment) [13], and less power line loss due to shorter transmission distances [14].

In the currently existing energy ecosystem, the only agents involved in electricity markets are utilities and large-scale energy producers and consumers. Household-level consumers and prosumers do not actively trade in electricity markets. Instead, they pay for their energy consumption, or they are reimbursed for their infeed of energy into the grid according to fixed tariffs. In LEMs, on the contrary, households are the participating market agents that typically submit offers in an auction [7,15]. This market design requires the participating households to estimate their future energy demand and/or supply to be able to submit a buy or sell offer to the market [16]. Therefore, accurate forecasts of household energy consumption/production are a necessity for such LEM designs. This is due to the market mechanism employed and does not depend on whether an LEM is implemented on a blockchain or not. However, research on blockchain-based LEMs mostly employs market mechanisms that require accurate forecasts of household energy consumption/production, making the aspect of forecasting especially relevant here. Despite this, it is frequently assumed in existing research on (blockchain-based) LEMs that such accurate forecasts are readily available (see, e.g., [6–8,16,17]). However, forecasting the consumption/production of single households is difficult due to the inherently high degree of uncertainty, which cannot be reduced by the aggregation of households [18]. Hence, the assumption that accurate forecasts are available cannot be taken to be correct in practice. Additionally, given the substantial uncertainty in individual households' energy consumption or production, prediction errors may have a significant impact on market outcomes.

This is where we focused our research: We evaluated the possibility of providing accurate short-term household-level energy forecasts with existing methods and currently available smart meter data. Moreover, our study aimed to quantify the effect of prediction errors on market outcomes in blockchain-based LEMs. For the future advancement of the field, it seemed imperative that the precondition of accurate forecasts of individual households' energy consumption and production for LEMs is assessed. Because, if the assumption cannot be met, the proposed blockchain-based LEMs may not be a sensible solution to support the transformation of our energy landscape.
This, however, is urgently necessary to limit CO2 emissions and the substantial risks of climate change.

_1.1. Related Research_

Although LEMs started to attract interest in academia already in the early 2000s, it is still an emerging field [11]. Mainly driven by the widespread adoption of smart meters and Internet-connected home appliances, recent work on LEMs focuses on use cases in developed and highly technologized energy grid systems [19]. While substantial work regarding LEMs in general has been done (e.g., [7,8,15]), there are only few examples of blockchain-based LEM designs in the existing literature. Mengelkamp et al. [10] derived seven principles for microgrid energy markets and evaluated the Brooklyn Microgrid according to those principles. With a more practical focus, Mengelkamp et al. [6] implemented and simulated a local energy market on a private Ethereum blockchain that enables participants to trade local energy production on a decentralized market platform with no need for a central authority. Münsing et al. [20] similarly elaborate a peer-to-peer energy market concept on a blockchain but focus on operational grid constraints and a fair payment rendering. Additionally, there are several industry undertakings to put blockchain-based energy trading into practice, such as Grid Singularity (gridsingularity.com) in Austria, Powerpeers (powerpeers.nl) in the Netherlands, Power Ledger (powerledger.io) in Australia, and LO3 Energy (lo3energy.com) in the United States.

Interestingly, none of the above-cited works that employ market mechanisms requiring household energy forecasts for bidding check whether the assumed availability of such forecasts is given. However, without this assumption, trading through an auction design, as described by, e.g., Block et al. [9] or Buchmann et al. [8], and implemented in a smart contract by Mengelkamp et al. [6], is not possible. Unfortunately, this forecasting task is not trivial due to the extremely high volatility of individual households' energy patterns [18]. However, research by Arora and Taylor [21], Kong et al. [22], Shi et al. [23], and Li et al. [24] shows that advances in the energy forecasting field also extend to household-level energy forecasting problems, and it serves as a promising basis for the present study.

_1.2. Present Research_

We investigated the prerequisites necessary to implement blockchain-based distributed local energy markets. In particular, this means: (a) forecasting net energy consumption and production of private consumers and prosumers one time-step ahead; (b) evaluating and quantifying the effects of forecasting errors; and (c) evaluating the implications of low forecasting quality for a market mechanism. The prediction task was fitted to the setup of a blockchain-based LEM. Thereby, the present research distinguishes itself notably from previous studies that solely try to forecast smart meter time series in general. The evaluation of forecasting errors and their implications was based on the commonly used market mechanism for discrete-interval, double-sided auctions, while the forecasting error settlement structure was based on the work of Mengelkamp et al. [6]. The following research questions were examined:
1. Which prediction technique yields the best 15-min-ahead forecast for smart meter time series measured in 3-min intervals, using only input features generated from the historical values of the time series and calendar-based features?
2. Assuming a forecasting error settlement structure, what is the quantified loss of households participating in the LEM due to forecasting errors by the prediction technique identified in Question 1?
3. Depending on Question 2, what implications and potential adjustments for an LEM market mechanism can be identified?

The present research found that regressing with a least absolute shrinkage and selection operator (LASSO) on one week of historical consumption data is the most suitable approach to household-level energy forecasting. However, this method's forecasting errors still substantially diminish the economic benefit of a blockchain-based LEM. Thus, we conclude that changes to the market designs are the most promising way to still employ blockchain-based LEMs as a means to meet some of the challenges generated by Germany's current energy transition.

The remainder of the paper is structured as follows: Section 2 presents the forecasting models and error measures used to evaluate the prediction accuracy. Moreover, it introduces the market mechanism and simulation used to evaluate the effect of prediction errors in LEMs. Section 3 describes the data used. Section 4 presents the prediction results of the forecasting models, evaluates their performance relative to a baseline model, and assesses the effect of prediction errors on market outcomes. The insights gained from this are then used to identify potential adjustments for future market mechanisms. Finally, Section 5 concludes with a summary, limitations, and an outlook on further research questions that emerge from the findings of the present research.

All code and data used in the present research are available through the Quantnet website (www.quantlet.de). They can be easily found by entering BLEM (Blockchain-based Local Energy Markets) into the search bar. As part of the Collaborative Research Center, the Center for Applied Statistics and Economics and the International Research Training Group (IRTG) 1792 at the Humboldt-University Berlin, Quantnet contributes to the goal of strengthening and improving empirical economic research in Germany.

**2. Method**

To select the forecasting technique, we applied the following criteria:

1. The forecasting technique has to produce deterministic (i.e., point) forecasts.
2. The forecasting technique had to have been used in previous studies, for comparison.
3. The previous study or studies using the forecasting technique had to use comparable data, i.e., recorded by smart meters in 60-min intervals or higher resolution, recorded in multiple households, and not recorded in small and medium enterprises (SMEs) or other business or public buildings.
4. The forecasting task had to be comparable to the forecasting task of the present research, i.e., single consumer households (in contrast to the prediction of aggregated energy time series) and a very short forecasting horizon (≤ 24 h).
5. The forecasting technique had to take only historical and calendar features as input for the prediction.
6. The forecasting technique had to produce predictions that were promisingly accurate, both in absolute terms and relative to other studies.

Based on these criteria, two forecasting techniques were selected for the prediction task at hand.
As short-term energy forecasting techniques are commonly categorized into statistical and machine learning (or artificial intelligence) methods [25–27], one method of each category was chosen: a long short-term memory recurrent neural network (LSTM RNN), adapted from the procedure outlined by Shi et al. [23], and an autoregressive LASSO, as implemented by Li et al. [24]. Instead of LSTM RNNs, gated recurrent unit (GRU) neural networks could have been used as well. However, despite needing fewer computational resources, their representational power may be lower compared to LSTM RNNs [28], and their successful applicability to household-level energy forecasting has not been proven in previous studies.

The forecasting techniques used data from 1 January 2017 to 30 September 2017 as training input, and the forecast was evaluated on data from 1 October 2017 to 31 December 2017. This means that no data from autumn were included in the training data. However, this seems unlikely to influence the forecasting performance: the German climate in the months from February to April (which are included in the training data) is comparable to the climate in the months from October to December; the forecasting horizon is very short-term; and the input for the forecasting techniques is too short to reflect any seasonal changes in temperature or sunshine hours.

_2.1. Baseline Model_

A frequent baseline model used for deterministic forecasts is the simple persistence model [29]. This model assumes that the conditions at time t persist at least up to the period of forecasting interest at time t + h. The persistence model is defined as

$\hat{x}_{t+1} = x_t.$  (1)

There are several other baseline models commonly used in energy load forecasting. Most of them are, in contrast to the persistence model, more sophisticated benchmarks. However, as the forecasting task at hand serves the specific use case of being an input for the bidding process in a blockchain-based LEM, the superiority of the forecasting model over a benchmark model is of secondary importance. Hence, in the present research, only the persistence model served as a baseline for the forecasting techniques presented in Sections 2.2 and 2.3.
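As a toy illustration (all values invented), the persistence forecast at the 15-min resolution used later reduces to shifting the observed series by one interval:

```python
import numpy as np

# Toy 15-min consumption values in kWh (invented for illustration).
consumption = np.array([0.12, 0.10, 0.35, 0.28, 0.30])

forecast = consumption[:-1]              # x_hat_{t+1} = x_t, Eq. (1)
actual = consumption[1:]
print(np.abs(forecast - actual).mean())  # in-sample MAE of the baseline
```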
_2.2. Machine Learning-Based Forecasting Approach_

The first sophisticated forecasting technique that was employed in the present research to produce as accurate as possible predictions for the blockchain-based LEM is a machine learning algorithm. Long short-term memory (LSTM) recurrent neural networks (RNNs) have been introduced only very recently in load forecasting studies (e.g., [22,23,27,30]). Neural networks do not need any strong assumptions about their functional form, unlike traditional time series models (e.g., autoregressive moving average, ARMA). Rather, they are universal approximators for finite input [31] and, therefore, are especially well suited for the prediction of volatile time series such as energy consumption or production.

The most basic building blocks of any neural network are three types of layers: an input layer, one or more hidden layer(s), and an output layer. Each layer consists of one or more units (sometimes called neurons). Each unit in a layer takes in an input, applies a transformation to this input, and outputs it to the next layer. Formally, this can be written as

$h_{1,i} = \phi_1(W_1 x_i + b_1)$
$h_{2,i} = \phi_2(W_2 h_{1,i} + b_2)$
$\vdots$
$o_i = \phi_n(W_n h_{(n-1),i} + b_n) = \hat{y}_i,$  (2)

where n denotes a layer, $\phi_n$ is the activation function, $W_n$ is the weight matrix, and $b_n$ is the bias vector in layer n. $x_i$ is the i-th input vector, and $o_i$ is the output value of the output layer, which is the estimation of the true value $y_i$. The weight matrices and bias vectors in each layer are parameters that are adjusted during the training of the model.

However, such a simple neural network is not particularly well suited for time series learning [28]. This is because simple neural networks, such as the one described above, do not have an internal state that could retain a memory of previously processed input. That is, to learn a sequence or time series, the described neural network would always need the complete time series as a single input. It cannot retain a memory of something learned in a previous chunk of the time series to apply it to the next chunk that is fed into the model. This problem is tackled by recurrent neural networks. RNNs still consist of the basic building blocks of units and layers. However, the units not only feed forward the transformed input as output but also have a recurrent connection that feeds an internal state back into the unit as input. Thereby, an RNN unit loops over individual elements of an input sequence, instead of processing the whole sequence in a single step. This means that the RNN unit applies the transformation to the first element of the input sequence and combines it with its internal state. This introduces the notion of time into neural networks. Formally, this can be written as

$h_{1,t} = \phi_1(W_1^{(i)} x_t + W_1^{(r)} h_{1,(t-1)} + b_1)$
$h_{2,t} = \phi_2(W_2^{(i)} h_{1,t} + W_2^{(r)} h_{2,(t-1)} + b_2)$
$\vdots$
$o_t = \phi_n(W_n^{(i)} h_{(n-1),t} + b_n) = \hat{y}_t,$  (3)

where n denotes a layer, $\phi_n$ is the activation function, $W_n^{(i)}$ is the weight matrix for the input, $W_n^{(r)}$ is the weight matrix for the recurrent input (i.e., the output of layer n in the previous time step), and $b_n$ is the bias vector in layer n. $x_t$ is the input vector at time t, and $o_t$ is the output value of the output layer, which is the estimation of the true value $y_t$. Note that the output layer has no recurrent units but is the same as in a simple feed-forward network.

The cyclical structure of an RNN unit can be unrolled across time (see Figure 1). This illustrates that an RNN is basically a simple neural network that has one layer for each time step that has to be processed per input. Theoretically, this feedback structure enables RNNs to retain information about sequence elements that have been processed many steps before the current step and use it for the prediction of the current step. However, in practice, the vanishing gradient problem occurs (for more details on the vanishing gradient problem, see, e.g., [32]). This problem makes RNNs basically untrainable for very long sequences.

**Figure 1.** Schematic representation of an unfolded RNN unit. Adapted from [28].
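The per-step computation in Eq. (3) can be sketched in a few lines of NumPy; the dimensions, weight initializations, and the tanh activation here are illustrative choices, not taken from the paper:

```python
import numpy as np

def rnn_step(x_t, h_prev, W_in, W_rec, b):
    """One recurrent layer step of Eq. (3): h_t = tanh(W_in x_t + W_rec h_{t-1} + b)."""
    return np.tanh(W_in @ x_t + W_rec @ h_prev + b)

d_in, d_h = 3, 4
rng = np.random.default_rng(0)
W_in = rng.normal(scale=0.1, size=(d_h, d_in))
W_rec = rng.normal(scale=0.1, size=(d_h, d_h))
b = np.zeros(d_h)

h = np.zeros(d_h)
for x_t in rng.normal(size=(5, d_in)):  # loop over a length-5 input sequence
    h = rnn_step(x_t, h, W_in, W_rec, b)
print(h)
```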
To overcome the vanishing gradient problem, Hochreiter and Schmidhuber [33] developed LSTM units. The LSTM RNN is an advanced RNN architecture that is particularly well suited to learning long sequences or time series due to its ability to retain information over many time steps [28]. LSTM units extend RNN units by an additional state. This state can retain information for as long as needed. In which step this additional state is updated and in which step the information it retains is used in the transformation of the input is controlled by three so-called gates [34]. These three gates have the form of a simple RNN cell. Formally, by slightly adapting the notation of Lipton et al. [35] (who used $h_{t-1}$ instead of $s_{t-1}$, whereas the notation used here, $s_{t-1}$, accounts for the modern LSTM architecture with peephole connections), the gates can be written as

$i_t = \sigma(W^{(ix)} x_t + W^{(is)} s_{t-1} + b_i)$
$f_t = \sigma(W^{(fx)} x_t + W^{(fs)} s_{t-1} + b_f)$
$o_t = \sigma(W^{(ox)} x_t + W^{(os)} s_{t-1} + b_o),$  (4)

where $\sigma$ is the sigmoid activation function $\sigma(z) = \frac{1}{1+e^{-z}}$, $W$ denotes the weight matrices, which are intuitively labeled ($ix$ for the weight matrix of gate $i_t$ multiplied with the input $x_t$, etc.), and $b$ denotes the bias vectors. Again, following the notation of Lipton et al. [35], the full algorithm of an LSTM unit is given by the three gates specified above, the input node,

$g_t = \sigma(W^{(gx)} x_t + W^{(gh)} h_{t-1} + b_g),$  (5)

the internal state of the LSTM unit at time step t,

$s_t = g_t \odot i_t + s_{t-1} \odot f_t,$  (6)

where $\odot$ is pointwise multiplication, and the output at time step t,

$h_t = \phi(s_t) \odot o_t.$  (7)

The internal structure of an LSTM cell is further clarified in Figure 2. For an intuitive but more detailed explanation of LSTM neural networks, see [28] (Ch. 6.2).

**Figure 2.** Schematic representation of an LSTM unit. Adapted from [36]. The filled-in circles represent the pointwise multiplication operation denoted by $\odot$ in Equations (6) and (7).

In summary, LSTM RNNs are capable of learning highly complex, non-linear relationships in time series data, which makes them a promising forecasting technique to predict households' very short-term energy consumption and production.
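A direct NumPy transcription of Eqs. (4)–(7), including the peephole inputs $s_{t-1}$ to the gates, might look as follows. All shapes and initializations are illustrative; following the paper's notation, the input node uses $\sigma$, and $\phi$ in Eq. (7) is taken to be tanh:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, s_prev, P):
    i = sigmoid(P["W_ix"] @ x_t + P["W_is"] @ s_prev + P["b_i"])  # input gate, Eq. (4)
    f = sigmoid(P["W_fx"] @ x_t + P["W_fs"] @ s_prev + P["b_f"])  # forget gate, Eq. (4)
    o = sigmoid(P["W_ox"] @ x_t + P["W_os"] @ s_prev + P["b_o"])  # output gate, Eq. (4)
    g = sigmoid(P["W_gx"] @ x_t + P["W_gh"] @ h_prev + P["b_g"])  # input node, Eq. (5)
    s = g * i + s_prev * f                                        # internal state, Eq. (6)
    h = np.tanh(s) * o                                            # output, Eq. (7)
    return h, s

d_in, d_h = 3, 4
rng = np.random.default_rng(1)
P = {k: rng.normal(scale=0.1, size=(d_h, d_in if k.endswith("x") else d_h))
     for k in ["W_ix", "W_is", "W_fx", "W_fs", "W_ox", "W_os", "W_gx", "W_gh"]}
P.update({k: np.zeros(d_h) for k in ["b_i", "b_f", "b_o", "b_g"]})

h = s = np.zeros(d_h)
for x_t in rng.normal(size=(5, d_in)):  # run a length-5 toy sequence
    h, s = lstm_step(x_t, h, s, P)
print(h)
```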
The specific LSTM RNN approach adopted in the present research was based on the procedure employed by Shi et al. [23] to forecast individual households' energy consumption. According to the relevant use case in the present research, LSTM RNNs were trained for each household individually, using only the household's historic consumption patterns and calendar features. Specifically, seven days of past consumption, an indicator for weekends, and an indicator for Germany-wide holidays were used as input for the neural network in the present research. This follows the one-hot encoding used by Chen et al. [30]. Seven days of lagged data were used as input because preliminary results indicated that the autocorrelation in the time series becomes very weak at lags beyond one week. Moreover, using the previous week as input data still preserves the weekly seasonality and represents a reasonable compromise between as much input as possible and the computational resources needed to process the input in the training process of the LSTM neural network. The target values in the model training were single consumption values in 15-min aggregation. The following example serves as illustration: Assume the consumption values in 3-min intervals from 13 November 2017 13:00 to 20 November 2017 13:00 and zero/one-indicators for weekends and holidays (i.e., 3 × 3360 data points) are fed into the neural network. The model then produces a single output value that estimates the household's energy consumption in kWh from 20 November 2017 13:00 to 20 November 2017 13:15.

A neural network is steered by several hyperparameters: the number and type of layers, the number of hidden units within each layer, the activation functions used within each unit, dropout rates for the recurrent transformation, and dropout rates for the transformation of the input. To identify a well-working combination of hyperparameter values, tuning is necessary, which is unfortunately computationally very resource-intensive. Table 1 presents the hyperparameters that were tuned and their respective value ranges. The tuning was done individually for each network layer. Optimally, the hyperparameters of all layers should be tuned simultaneously. However, due to computational constraints, that was not possible here, and thus, the described second-best option was chosen. As the hyperparameter values specified in Table 1 for layer 1 alone result in 81 possible hyperparameter combinations, only random samples of these combinations were taken, and the resulting models were trained on a randomly chosen dataset and compared. In total, 16 models with one layer, 13 models with two layers, and 13 models with three layers were tuned. The model tuning was conducted on four Tesla P100 graphical processing units (GPUs) through the Machine Learning (ML) Engine of the Google Cloud Platform. The job was submitted to the Google Cloud ML Engine via the Google Cloud SDK and the R package cloudml. Although neural networks can be trained much faster on GPUs than on conventional central processing units (CPUs) [28], usage of GPUs through the Google Cloud ML Engine incurs substantial monetary cost. Thus, they were only used for the model tuning in this study.

**Table 1.** The hyperparameters that were tuned for an optimal LSTM RNN model specification.

| Layer | Hyperparameter | Possible Values | Possible Combinations | Sampling Rate | # of Assessed Combinations |
|---|---|---|---|---|---|
| layer 1 | batch size | {128, 64, 32} | 81 | 0.2 | 16 |
| | hidden units | {128, 64, 32} | | | |
| | recurrent dropout | {0, 0.2, 0.4} | | | |
| | dropout | {0, 0.2, 0.4} | | | |
| layer 2 | hidden units | {128, 64, 32} | 26 | 0.5 | 13 |
| | recurrent dropout | {0, 0.2, 0.4} | | | |
| | dropout | {0, 0.2, 0.4} | | | |
| layer 3 | hidden units | {128, 64, 32} | 26 | 0.5 | 13 |
| | recurrent dropout | {0, 0.2, 0.4} | | | |
| | dropout | {0, 0.2, 0.4} | | | |

Based on the hyperparameter tuning results, a model with the specification shown in Table 2 was used for the prediction of a single energy consumption value for the next 15 min. The total length of data points covered in the training process equals the number of training samples times the batch size times the number of data points that are aggregated for each prediction (i.e., 5 data points): 700 × 32 × 5 = 112,000 data points. This is equivalent to the time period from 1 January 2017 00:00 to 22 August 2017 09:03. The tuning process and results can be replicated by following the Quantlet link in the caption of Table 2.
**Table 2.** Tuned hyperparameters for the LSTM RNN prediction model. BLEMtuneLSTM (github.com/QuantLet/BLEM/tree/master/BLEMtuneLSTM)

| Hyperparameter | Tuned Value |
|---|---|
| layers | 1 |
| hidden units | 32 |
| dropout rate | 0 |
| recurrent dropout rate | 0 |
| batch size | 32 |
| number of input data points | 3360 |
| number of training samples | 700 |
| number of validation samples | 96 |

The general procedure of model training, model assessment, and prediction generation is shown in Procedure 1. The parameter tuple was set globally for all household datasets based on the hyperparameter tuning. Thereafter, the same procedure was repeated for each dataset: First, the consumption data time series was loaded, target values were generated, and the input data were transformed. The transformation consisted of normalizing the log-values of the consumption per 3-min interval between 0 and 1. This ensured fast convergence of the model training process. The data batches for the model training and the cross-validation were served to the training algorithm by so-called generator functions. Second, the LSTM RNN was compiled and trained with Keras, which is a neural network application programming interface (API) written in Python. The Keras R package (v2.2.0.9), which was used with RStudio v1.1.453 and TensorFlow 1.11.0 as back-end, is a wrapper of the Python library and is maintained by Chollet et al. [37]. The model training and prediction for each household were performed on a Windows Server 2012 with 12 cores and 24 logical processors of Intel Xeon 3.4 GHz CPUs. The model training was done in a differing number of epochs, as early stopping was employed to prevent overfitting: Once the mean absolute error on the validation data did not decrease by more than 0.001 in three consecutive epochs, the training process was stopped. Third, the trained model was used to generate predictions on the test set that comprised data from 1 October 2017 00:00 to 1 January 2018 00:00 (i.e., 44,180 data points). As the prediction was made in 15-min intervals, in total, 8836 data points were predicted. Using the error measures described in Section 2.4, the model performance was assessed. Finally, the predictions for all datasets were saved for the evaluation in the LEM market mechanism.
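For illustration, the following minimal Python/Keras sketch mirrors the tuned specification in Table 2 (one LSTM layer with 32 hidden units, no dropout, batch size 32, and 3360 input time steps with three features). The paper itself used the Keras R interface; the optimizer below is an illustrative assumption, not the authors' original configuration.

```python
import numpy as np
from tensorflow import keras

TIMESTEPS = 3360  # one week of 3-min readings (number of input data points)
FEATURES = 3      # consumption value + weekend indicator + holiday indicator

model = keras.Sequential([
    keras.layers.LSTM(32, input_shape=(TIMESTEPS, FEATURES)),  # 1 layer, 32 units
    keras.layers.Dense(1),  # next 15-min consumption (normalized)
])
# The paper monitors the mean absolute error on validation data;
# the choice of optimizer here is an assumption.
model.compile(optimizer="adam", loss="mae")

x = np.random.rand(32, TIMESTEPS, FEATURES).astype("float32")  # dummy batch
y = np.random.rand(32, 1).astype("float32")
model.fit(x, y, batch_size=32, epochs=1, verbose=0)
print(model.predict(x[:1], verbose=0).shape)  # -> (1, 1)
```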
**Procedure 1** Supervised training of and prediction with the LSTM RNN.

1: Set parameter tuple ⟨l, u, b, d⟩: number of layers l ⊆ L, number of hidden LSTM units u ⊆ U, batch size b ⊆ B, and dropout rate d ⊆ D.
2: Initiate prediction matrix P and list for error measures Θ.
3: **for** household i in dataset pool I **do**
4: Load dataset Ψi.
5: Generate target values y by aggregating data to 15-min intervals.
6: Transform time series in dataset Ψi and add calendar features.
7: Set up training and validation data generators according to parameter tuple ⟨b, d⟩.
8: Split dataset Ψi into training dataset Ψi,tr and testing dataset Ψi,ts.
9: Build LSTM RNN ζi on TensorFlow with network size (l, u).
10: **repeat**
11: At the k-th epoch do:
12: Train LSTM RNN ζi with data batches ϕtrain ⊆ Ψi,tr supplied by the training data generator.
13: Evaluate performance with mean absolute error Λk on cross-validation data batches ϕval ⊆ Ψi,tr supplied by the validation data generator.
14: **until** Λk−1 − Λk < 0.001 for the last 3 epochs.
15: Save trained LSTM RNN ζi.
16: Set up testing data generator according to tuple ⟨b, d⟩.
17: Generate predictions ŷi with batches ϕts ⊆ Ψi,ts fed by the testing data generator into LSTM RNN ζi.
18: Calculate error measures Θi to assess the performance of ζi.
19: Write prediction vector ŷi into column i of matrix P.
20: **end for**
21: Save matrix P.
22: End.

_2.3. Statistical Method-Based Forecasting Approach_

To complement the machine learning approach of an LSTM RNN with a statistical approach, a second, regression-based method was used. For this purpose, the autoregressive LASSO approach proposed by Li et al. [24] seemed most suitable. Statistical methods have the advantage of much lower model complexity compared to neural networks, which makes them computationally much less resource-intensive. Li et al. [24] used the LASSO [38] to find a sparse autoregressive model that generalizes better to new data. Formally, the LASSO estimator can be written as

$\hat{\beta}_{\mathrm{LASSO}} = \arg\min_{\beta} \frac{1}{2} \|y - X\beta\|_2^2 + \lambda \|\beta\|_1,$  (8)

where X is a matrix with row t being $[1 \; x_t^{\top}]$ (the length of $x_t^{\top}$ is the number of lag-orders n available), and λ is a parameter that controls the level of sparsity in the model, i.e., which of the n available lag-orders are included to predict $y_{t+1}$. This model specification selects the best recurrent pattern in the energy time series by shrinking the coefficients of irrelevant lag-orders to zero and, thereby, improves the generalizability of the prediction model.

In the present research, the sparse autoregressive LASSO approach was implemented using the R package glmnet [39]. As for the LSTM RNN approach, model training and prediction were performed for every household individually. Following the procedure of Li et al. [24], only historical consumption values were used as predictors. Specifically, for comparability to the LSTM approach, seven days of lagged consumption values served as input to the LASSO model. The response vector consisted of single consumption values in 15-min aggregation. The same example as above serves as illustration: Assume the consumption values in 3-min intervals from 13 November 2017 13:00 to 20 November 2017 13:00 (i.e., 3360 data points) are available to the model for prediction. Based on the training data, the model chooses the lagged values with the highest predictive power
However, the glmnet algorithm used early-stopping to reduce computing times if the percent of null deviance explained by the model with a certain λ did not change sufficiently from one to the next λ-value. The cross-validation procedure identified the biggest λ that is still within one standard deviation of the λ with the lowest mean absolute error. The final coefficients for each household were then computed by solving Equation (8) for the complete predictor matrix. Thereafter, the predictions were made on the testing data. Again, the time series was sliced according to the sliding window of length _n = 3360 skipping five data points and written into a predictor matrix. This matrix comprised data_ from 1 October 2017 00:00 to 1 January 2018 00:00 (i.e., 8836 cases of 3360 lagged values), resulting again in 8836 predicted values as in the case of the LSTM approach. The predictions on all datasets were assessed using the error measures described in Section 2.4 and saved for the evaluation of the prediction in the context of the LEM market mechanism. **Procedure 2 Cross-validated selection of λ for LASSO and prediction.** 1: Initiate prediction matrix P and list for error measures Θ. 2: for Household i in dataset pool I do 3: Load dataset Ψi. 4: Generate target values y by aggregating data to 15-min intervals. 5: Split dataset Ψi into training dataset Ψi,tr and testing dataset Ψi,ts. 6: Generate predictor matrix Mtr by slicing time series Ψi,tr with sliding window. 7: Generate sequence of λ-values {ls}s[L]=1[.] 8: Set number of cross-validation (CV) folds K. 9: Split predictor matrix Mtr into K folds. 10: **for k in K do** 11: Select fold k as CV testing set and folds j ̸= k as CV training set. 12: **for each ls in {ls}s[L]=1** **[do]** 13: Compute vector **_β[�]k,ls on CV training set._** 14: Compute mean absolute error Λk,ls on CV testing set. 15: **end for.** 16: **end for.** 17: For each **_β[�]k,ls calculate average mean absolute error Λ[¯]_** _s across the K folds._ 18: Select cross-validated λ-value ls[CV] with the highest regularization (min no. of non-zero β-coeff.) within one SD of the minimum Λ[¯] _s._ 19: Compute **_β[�]lCVs_** on complete predictor matrix Mtr. 20: Generate predictor matrix Mts by slicing time series Ψi,ts with sliding window. 21: Generate predictions �yi from predictor matrix Mts and coefficients **_β[�]lCVs_** . 22: Calculate error measures Θi to assess performance. 23: Write prediction vector �yi into column i of matrix P. 24: end for. 25: Save matrix P. 26: End. ----- _Energies 2019, 12, 2718_ 11 of 27 _2.4. Error Measures_ Forecasting impreciseness is measured by a variety of norms. The L1-type mean absolute error (MAE) is defined as the average of the absolute differences between the predicted and true values [40]: MAE = [1] _N_ _N_ ### ∑ |x�t − xt|, (9) _t=1_ where N is the length of the forecasted time series, _xt is the forecasted value and xt is the observed_ � value. As MAE is only a valid error measure if one can assume that for the forecasted distribution the mean is equal to the median (which might be too restrictive), an alternative is the root mean square error (RMSE), i.e., the square root of the average squared differences [29,41]: _N_ ### ∑ (x�t − xt)[2]. (10) _t=1_ RMSE = � � � � [1] _N_ Absolute error measures are not scale independent, which makes them unsuitable to compare the prediction accuracy of a forecasting model across different time series. 
_2.4. Error Measures_

Forecasting impreciseness is measured by a variety of norms. The L1-type mean absolute error (MAE) is defined as the average of the absolute differences between the predicted and true values [40]:

$\mathrm{MAE} = \frac{1}{N}\sum_{t=1}^{N} |\hat{x}_t - x_t|,$  (9)

where N is the length of the forecasted time series, $\hat{x}_t$ is the forecasted value, and $x_t$ is the observed value. As the MAE is only a valid error measure if one can assume that the mean of the forecasted distribution is equal to its median (which might be too restrictive), an alternative is the root mean square error (RMSE), i.e., the square root of the average squared differences [29,41]:

$\mathrm{RMSE} = \sqrt{\frac{1}{N}\sum_{t=1}^{N} (\hat{x}_t - x_t)^2}.$  (10)

Absolute error measures are not scale-independent, which makes them unsuitable for comparing the prediction accuracy of a forecasting model across different time series. Therefore, they are complemented with the percentage error measures mean absolute percentage error (MAPE) and normalized root mean square error (NRMSE), normalized by the true value:

$\mathrm{MAPE} = \frac{100}{N}\sum_{t=1}^{N} \left|\frac{\hat{x}_t - x_t}{x_t}\right|,$  (11)

and

$\mathrm{NRMSE} = 100 \sqrt{\frac{1}{N}\sum_{t=1}^{N} \left(\frac{\hat{x}_t - x_t}{x_t}\right)^2}.$  (12)

However, as Hyndman and Koehler [42] pointed out, using $x_t$ as the denominator may be problematic, as the fraction $\frac{\hat{x}_t - x_t}{x_t}$ is not defined for $x_t = 0$. Therefore, time series containing zero values cannot be assessed with this definition of the MAPE and NRMSE. To overcome the shortage of an undefined fraction in the presence of zero values in the case of the MAPE and NRMSE, the mean absolute scaled error (MASE), as proposed by Hyndman and Koehler [42], was used. That is, the MAE was normalized with the in-sample mean absolute error of the persistence model forecast:

$\mathrm{MASE} = \frac{\mathrm{MAE}}{\frac{1}{N-1}\sum_{t=2}^{N} |x_t - x_{t-1}|}.$  (13)

In summary, in the present research, the forecasting performance of the LSTM RNN and the LASSO was evaluated using the MAE, RMSE, MAPE, NRMSE, and MASE.

_2.5. Market Simulation_

We used a market mechanism with discrete closing times in 15-min intervals. Each consumer and each prosumer submits one order per interval, and the asks and bids are matched in a closed double auction that yields a single equilibrium price. The market mechanism was implemented in R. This allows for a flexible and time-efficient analysis of the market outcomes with and without prediction errors. The simulation of the market mechanism followed five major steps: First, the consumption and production values of each market participant per 15-min interval from 1 October 2017 00:00 to 1 January 2018 00:00 were retrieved. These values are either the true values as yielded by the aggregation of the raw data or the prediction values as estimated by the best-performing prediction model. Second, for each market participant, a zero-intelligence limit price was generated by drawing randomly from the discrete uniform distribution U{12.31, 28.69}. The lower bound is the German feed-in tariff of 12.31 EURct/kWh, and the upper bound is the average German electricity price in 2016 of 28.69 EURct/kWh [43]. This agent behavior has been shown to generate efficient market outcomes in double auctions [44] and is rational insofar as electricity sellers would not accept a price below the feed-in tariff and electricity buyers would not pay more than the energy utility's price per kWh. However, this assumes that the agents do not consider any non-price-related preferences, such as strongly preferring local renewable energy [6]. Third, for each trading slot (i.e., every 15-min interval), the bids and asks were ordered in price-time precedence. Given that the total supply is lower than the total demand, the lowest bid price that can still be served determines the equilibrium price. Given that the total supply is higher than the total demand, the overall lowest bid price determines the equilibrium price. In the case of over- or undersupply, the residual amounts are traded at the feed-in tariff (12.31 EURct/kWh) or the regular household consumer electricity tariff (28.69 EURct/kWh) with the energy utility. Fourth, the applicable price for each bid and ask was determined, and the settlement amounts, resulting from this price and the energy amount ordered, were calculated. In the case of using predicted values for the bids, there was an additional fifth step: After the next trading period, when the actual energy readings were known, any deviations between predictions and true values were settled with the energy utility using the feed-in or household consumer electricity tariff. This led to correction amounts that were deducted from or added to the original settlement amounts. For the market simulation, perfect grid efficiency and, hence, no transmission losses were assumed.
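The clearing rule for one 15-min slot can be sketched compactly. The function below follows the price rules described above, but the order format, names, and toy orders are our own simplifications (time precedence among equal prices and partial fills are omitted):

```python
FEED_IN, RETAIL = 12.31, 28.69  # EURct/kWh utility tariffs

def clear_interval(bids, asks):
    """bids/asks: lists of (limit_price, kwh). Returns the local equilibrium
    price and the residual demand settled with the utility."""
    bids = sorted(bids, key=lambda order: -order[0])  # highest bid first
    supply = sum(kwh for _, kwh in asks)
    demand = sum(kwh for _, kwh in bids)
    if supply >= demand:
        # Oversupply: the overall lowest bid price clears the market.
        price = bids[-1][0]
    else:
        # Undersupply: the lowest bid that can still be served clears it.
        served, price = 0.0, bids[0][0]
        for limit, kwh in bids:
            price = limit
            served += kwh
            if served >= supply:
                break
    residual = demand - supply  # >0 bought at RETAIL, <0 sold at FEED_IN
    return price, residual

price, residual = clear_interval(
    bids=[(24.0, 1.2), (18.5, 0.8), (14.2, 0.5)],
    asks=[(13.0, 1.5)],
)
print(price, residual)  # -> 18.5 1.0
```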
**3. Data**

The raw data used for the present research were provided by Discovergy GmbH and are available at BLEMdata (github.com/QuantLet/BLEM/tree/master/data), hosted at GitHub. Discovergy describes itself as a full-range supplier of smart metering solutions offering transparent energy consumption and production data for private and commercial clients [45]. To be able to offer such data-driven services, Discovergy smart meters record energy consumption and production in near real-time, i.e., in 2-s intervals, and send the readings to Discovergy's servers for storage and analysis. Therefore, Discovergy has extremely high-resolution energy data of their customers at their disposal. This high resolution is in stark contrast to the half-hourly or even hourly recorded data used in previous studies on household energy forecasting (e.g., [21,23,46,47]). To our knowledge, there is no previous research using Discovergy smart meter data, apart from Teixeira et al. [48], who used the data as simulation input but not for analysis or prediction.

The data come in 200 individual datasets, each containing the meter readings of a single smart meter; 100 datasets belong to pure energy consumers, and 100 datasets belong to energy prosumers (households that produce and consume energy). The meter readings were aggregated to 3-min intervals and range from 1 January 2017 00:00 to 1 January 2018 00:00. This translates into 175,201 observations per dataset. Each observation consists of the total cumulative energy consumption and the total cumulative energy production from the date of installation until time t, the current power over all phases installed in the meter at time t, and a timestamp in Unix milliseconds. For further analysis, the power readings were dropped, and the first differences of the energy consumption and production readings were calculated. These first differences are equivalent to the energy consumption and production within each 3-min interval between two meter recordings. The result of this computation leaves each dataset with two time series (energy consumption and energy production in kWh) and 175,200 observations.

Figure 3 shows the energy consumption time series of Consumer 082. In the first panel of Figure 3, the consumption per 3-min interval for all of 2017 is shown. Notably, there are two extended periods (in March and June) and three shorter periods (in July, September, and December) with a clearly distinguishable low consumption level and low fluctuation. The most likely explanations for these low, stable energy consumption periods are holidays, in which the household members are on vacation and leave appliances that are on standby or always turned on as the only energy consumers. The second panel zooms in to just one month, making daily fluctuation patterns visible. The last panel zooms in to a single day of energy consumption. It exemplifies well a usual pattern of energy consumption: There is low and rather stable energy consumption from midnight until about 07:30, which only fluctuates in a systematic and repeated way due to standby and "always on" appliances, such as a fridge and/or freezer. At around 07:30, the household members probably wake up, and the energy consumption spikes for the next 30 min: the lights are turned on, coffee is made, the stove is turned on, and maybe a flow heater is used to shower with hot water. As the household members leave the house (13 May is a Monday), the consumption slowly decreases again. In the evening, at about 18:30, the energy consumption spikes again, probably caused by dinner preparations.

**Figure 3.** Energy consumption recordings of Consumer 082. The first panel shows the full year 2017, the second panel zooms in to one month (May), and the third panel zooms in to one day (13 May). BLEMplotEnergyData (github.com/QuantLet/BLEM/tree/master/BLEMplotEnergyData)
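A minimal pandas sketch of this preprocessing step (the column values are invented) turns cumulative readings into per-interval consumption and 15-min targets:

```python
import pandas as pd

idx = pd.date_range("2017-01-01", periods=11, freq="3min")
cumulative = pd.Series(range(11), index=idx) * 0.01  # cumulative kWh readings

interval = cumulative.diff().dropna()       # kWh per 3-min interval (first differences)
targets = interval.resample("15min").sum()  # 15-min aggregation for forecasting
print(targets)
```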
Figure 3 shows the energy consumption time series of Consumer 082. In the first panel of Figure 3, the consumption per 3-min interval for all of 2017 is shown. Notably, there are two extended periods (in March and June) and three shorter periods (in July, September, and December) with a clearly distinguishable low consumption level and low fluctuation. The most likely explanations for these low, stable energy consumption periods are holidays, in which the household members are on vacation and leave appliances that are on standby or always turned on as the only energy consumers. The second panel zooms in to just one month, making daily fluctuation patterns visible. The last panel zooms in to a single day of energy consumption. It exemplifies well a usual pattern of energy consumption: There is low and rather stable consumption from midnight until about 07:30, which fluctuates only in a systematic and repeated way due to standby appliances and "always on" appliances, such as a fridge and/or freezer. At around 07:30, the household members probably wake up and the energy consumption spikes for the next 30 min—the lights are turned on, coffee is made, the stove is turned on, and maybe a flow heater is used to shower with hot water. As the household members leave the house (13 May is a Monday), the consumption slowly decreases again. In the evening, at about 18:30, the energy consumption spikes again, probably caused by dinner preparations.

**Figure 3.** Energy consumption recordings of Consumer 082. The first panel shows the full year 2017, the second panel zooms in to one month (May), and the third panel zooms in to one day (13 May). [BLEMplotEnergyData](https://github.com/QuantLet/BLEM/tree/master/BLEMplotEnergyData)

Out of the 100 consumer datasets, five exhibited non-negligible shares of zero consumption values, leading to their exclusion. One consumer dataset was excluded as the consumption time series was flat for most of 2017, and one consumer was excluded due to very low and stable consumption values with very rare, extreme spikes. Four more consumers were excluded due to conspicuous regularity in daily or weekly consumption patterns. Lastly, one consumer was excluded not due to peculiarities in the consumption patterns but due to missing data. As the inclusion of this shorter time series would have led to difficulties in the forecasting algorithms, this dataset was excluded as well. Out of the 100 prosumer datasets, 86 were excluded due to zero total net energy production in 2017. These "prosumers" would not act as prosumers in an LEM, as they would never actually supply a production surplus to the market. Of the remaining 14 prosumer datasets, one was excluded because the total net energy it fed into the grid in 2017 was just 22 kWh. Additionally, one prosumer dataset was excluded as it only fed energy into the grid in the period from 6 January 2017 to 19 January 2017; for all other measurement points, its net energy production was zero. Overall, 88 consumer and 12 prosumer datasets remained for the analysis. All datasets include a timestamp and the consumption time series for consumers or the production time series for prosumers, with a total of 175,200 data points each.

**4. Results**

_4.1. Evaluation of the Prediction Models_

Three prediction methods were used to forecast the energy consumption of 88 consumer households 15 min ahead: a baseline model, an LSTM RNN model, and a LASSO regression. All three prediction models were compared and evaluated using the error measures presented in Section 2.4. The performance of the prediction models was tested on a quarter of the available data. That is, the prediction models were fitted on the consumption values from 1 January 2017 00:00 to 30 September 2017 00:00, which is equivalent to 131,040 data points per dataset. For all 88 consumer datasets, the models were fitted separately, resulting in as many distinct LASSO and LSTM prediction models. The fitted models were then used to make energy consumption predictions in 15-min intervals for each household individually on the data from 1 October 2017 00:00 to 1 January 2018 00:00. This equates to 8836 predicted values per dataset per prediction method.
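The fitting and evaluation arrangement just described can be sketched in R as follows; the list `households` (per-household data frames with columns `time` and `energy`) is an illustrative assumption about the data layout.

```r
# Time-based split per household: roughly three quarters of 2017 for fitting,
# the last quarter (October-December 2017) for out-of-sample evaluation.
cutoff <- as.POSIXct("2017-10-01 00:00:00", tz = "UTC")

splits <- lapply(households, function(d) {
  list(train = d$energy[d$time <  cutoff],
       test  = d$energy[d$time >= cutoff])
})
```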
Figure 4 displays the total sum of over- and underestimation errors in kWh of each prediction method per dataset. That is, for each consumer i, the total sum of overestimation errors is calculated by summing all differences between forecasted and true values where the forecast exceeds the true value, formally \delta_i^{o} = \sum_{t=1}^{N} (\hat{x}_{i,t} - x_{i,t}) \, \mathbf{1}[(\hat{x}_{i,t} - x_{i,t}) > 0] (red bars), and the total sum of underestimation errors is calculated by summing all differences where the forecast falls short of the true value, formally \delta_i^{u} = \sum_{t=1}^{N} (\hat{x}_{i,t} - x_{i,t}) \, \mathbf{1}[(\hat{x}_{i,t} - x_{i,t}) < 0] (blue bars). Thus, the red and blue bars added together depict the total sum of errors in kWh for each prediction method per dataset. The LASSO technique achieved overall lower total sums of errors than the baseline model. Notably, the sum of underestimation errors is higher across the datasets than the sum of overestimation errors. This points towards a general tendency of the LASSO technique to underestimate sudden increases in energy consumption. The LSTM model, on the other hand, shows a much higher variability in the sums of over- and underestimation errors. By tendency, the overestimation errors of the LSTM model are smaller than those of the LASSO and baseline models. Nevertheless, the underestimation is much more pronounced in the case of the LSTM model; some datasets stand out with especially high sums of underestimation errors. This points towards a much higher heterogeneity in the suitability of the LSTM model for predicting consumption values, depending on the energy consumption pattern of the specific dataset. The LASSO technique, on the other hand, seems to be more equally well suited for all datasets and their particular consumption patterns.

**Figure 4.** Sum of total over- and underestimation errors of energy consumption per consumer dataset and prediction model. [BLEMplotPredErrors](https://github.com/QuantLet/BLEM/tree/master/BLEMplotPredErrors)

The performance of the three prediction models across all 88 datasets is summarized in Table 3. As can be seen, LASSO and LSTM consistently outperformed the baseline model according to MAE, RMSE, MAPE, NRMSE, and MASE. The LASSO model performed best overall, with the lowest median error measure scores across the 88 consumer datasets.

**Table 3.** Median of error measures for the prediction of energy consumption across all 88 consumer datasets.
[BLEMevaluateEnergyPreds](https://github.com/QuantLet/BLEM/tree/master/BLEMevaluateEnergyPreds)

| Model | MAE | RMSE | MAPE | NRMSE | MASE |
| --- | --- | --- | --- | --- | --- |
| LSTM | 0.04 | 0.09 | 22.22 | 3.30 | 0.85 |
| LASSO | 0.03 | 0.05 | 17.38 | 2.31 | 0.57 |
| Benchmark | 0.05 | 0.10 | 27.98 | 5.08 | 1.00 |
| Improvement LSTM (in %) | 16.21 | 12.61 | 20.57 | 34.98 | 14.78 |
| Improvement LASSO (in %) | 44.02 | 48.73 | 37.88 | 54.61 | 43.02 |

The superior performance of the LASSO model is also clearly visible in Figure 5. This might be surprising, as from a theoretical point of view, a linear model should not outperform a non-linear neural network that fulfills the conditions for a universal approximator for finite input. The most reasonable explanation seems to be that the LSTM RNN model used here missed a good local minimum for a number of datasets and converged to suboptimal parameter combinations. If the main focus of this paper were finding an optimal forecasting algorithm for individual households' short-term energy consumption, this would require further investigation. However, this study focused on the forecasting accuracy achievable with state-of-the-art methods already employed in previous studies. The results imply that it is unwise to use a general set of hyperparameters on a number of household energy consumption datasets that differ quite substantially in their consumption patterns. However, as the LASSO technique employed here achieved an error score that is competitive with comparable research applications, the underperformance of the LSTM RNN compared to the LASSO technique is of no further concern.

**Figure 5.** Box plots of RMSE and MASE scores across 88 consumer datasets for the three different prediction models (the upper 3%-quantile of the error measures is cut off for better readability). [BLEMevaluateEnergyPreds](https://github.com/QuantLet/BLEM/tree/master/BLEMevaluateEnergyPreds)

Interestingly, some consumer datasets exhibit consumption patterns that are apparently much harder to predict than those of the other datasets. This is exemplified by the outliers of the box plots, as well as by the heat map displayed in Figure 6. It confirms that there is considerable variation for the same prediction method across different households. Therefore, one may conclude that there is no "golden industry standard" approach for households' very short-term energy consumption forecasting. Nevertheless, it is obvious that the LASSO model performed best overall. Hence, the predictions on the last quarter of the data produced by the fitted LASSO model for each consumer dataset were used for the evaluation of the market simulation presented next.

**Figure 6.** Heat map of MASE scores for the prediction of consumption values per consumer dataset. [BLEMevaluateEnergyPreds](https://github.com/QuantLet/BLEM/tree/master/BLEMevaluateEnergyPreds)
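For concreteness, the LASSO step and its evaluation can be sketched in R with the glmnet package [39]. The lag structure (one week of 15-min lags) and all object names are illustrative assumptions, not necessarily the exact feature set used in the paper.

```r
library(glmnet)

# `train` and `test` are numeric vectors of one household's 15-min consumption.
p <- 96 * 7                  # one week of 15-min lags (assumed)
Z <- embed(train, p + 1)     # row t holds (x_t, x_{t-1}, ..., x_{t-p})
y <- Z[, 1]
X <- Z[, -1]

# Cross-validated LASSO; lambda.min is the penalty with the lowest CV error.
cv <- cv.glmnet(X, y, alpha = 1)

# Rolling one-step-ahead forecasts over the test period.
full <- c(train, test)
fc <- sapply(seq_along(test), function(i) {
  lags <- rev(full[(length(train) + i - p):(length(train) + i - 1)])
  predict(cv, newx = matrix(lags, nrow = 1), s = "lambda.min")
})

# Evaluate with MASE (Equation (13)); the denominator is the in-sample
# persistence MAE.
mase <- mean(abs(fc - test)) / mean(abs(diff(train)))
```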
_4.2. Evaluation of the Market Simulation_

The market simulation used the market mechanism of a discrete-interval, closed double auction to assess the impact of prediction errors on market outcomes. In total, 88 consumer and 12 prosumer datasets were available. To evaluate different supply scenarios, the market simulation was conducted three times with a varying number of prosumers included. The three scenarios consisted of a market simulation with balanced energy supply and demand, a simulation with severe oversupply, and a simulation with severe undersupply. To avoid extreme and unusual market outcomes over the time period of the simulation, two prosumers with high production levels but long periods of no energy production in the simulation period were not included as energy suppliers in the market. The remaining prosumers were in- or excluded according to the desired supply scenario. That is, the undersupply scenario comprised six prosumers, the balanced supply scenario included one additional prosumer, and the oversupply scenario included two more prosumers than the balanced supply scenario.

4.2.1. Market Outcomes in Different Supply Scenarios

The difference between supply and demand for each trading period, the equilibrium price of each double auction, and the weighted average price—termed LEM price—are shown in Figure 7. The LEM price is computed in each trading period as the average of the auction's equilibrium price and the energy utility's price (28.69 EURct/kWh), weighted by the amount of kWh traded at the respective price. The three panels of Figure 7 show results of the market simulation with true consumption values. As can be seen, the equilibrium price shown in the middle panel of Figure 7 moves roughly synchronously with the over-/undersupply shown in the top panel. As there is by tendency more undersupply in the balanced scenario (the red line in the top panel indicates perfectly balanced supply and demand), the equilibrium price is in most trading periods close to its upper limit, and the LEM price is almost always above the equilibrium price. The tendency towards undersupply arises because four of the relevant prosumer datasets are from producers with large capacities (>10 kWh per 15-min interval) that substantially dominated the remaining prosumers' production capacity, so a more balanced supply scenario could not be created.

[Figure 7: three panels per trading period, October–December 2017 — over-/undersupply in kWh, equilibrium price in EURct, and LEM price in EURct.]

**Figure 7.** Market outcomes per trading period simulated with true values and a balanced supply scenario. [BLEMmarketSimulation](https://github.com/QuantLet/BLEM/tree/master/BLEMmarketSimulation)

This observation is in contrast to the oversupply scenario shown in Figure 8. Here, the prosumers' energy supply surpasses the consumers' energy demand in the majority of trading periods. Accordingly, the equilibrium price in each auction is close to its lower limit, the feed-in tariff of 12.31 EURct/kWh. However, trading periods with undersupply lead to visible spikes in the equilibrium price, which are, as expected, even more pronounced in the LEM price. In all other periods, the equilibrium price equals the LEM price, as all demand is served by the prosumers and no energy is purchased from the grid.
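The LEM price defined above can be sketched in R as follows; the function and argument names are our own illustration.

```r
utility <- 28.69   # EURct/kWh, energy utility's household tariff

# Weighted average of the auction's equilibrium price and the utility price,
# with weights given by the kWh traded in the auction and with the grid.
lem_price <- function(eq_price, kwh_auction, kwh_grid) {
  (eq_price * kwh_auction + utility * kwh_grid) / (kwh_auction + kwh_grid)
}

# Example: 18 kWh cleared at 24.60 EURct/kWh, 6 kWh bought from the grid.
lem_price(24.60, kwh_auction = 18, kwh_grid = 6)   # about 25.62 EURct/kWh
```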
Figure 9 shows the market simulation performed in an undersupply scenario. Here, the market outcomes are the opposite of the oversupply scenario: the equilibrium prices move in a band between 20 EURct/kWh and the upper limit of 28.69 EURct/kWh. The LEM prices are even higher, as the deficit in supply has to be compensated by energy purchases from the grid. This means that the more severe the undersupply is, the more energy has to be purchased from the grid, and the more the LEM price surpasses the equilibrium price.

In summary, one can conclude that market outcomes become more favorable to consumers the more locally produced energy is offered by prosumers. Assuming a closed double auction as market mechanism and zero-intelligence bidding behavior of market participants, oversupply reduces the LEM prices substantially, leading to savings on the consumer side. On the other hand, prosumers will favor undersupply in the market, as they profit from the high equilibrium prices while still being able to sell their surplus energy generation at the feed-in tariff without a loss compared to no LEM.

[Figure 8: three panels per trading period, October–December 2017 — oversupply in kWh, equilibrium price in EURct, and LEM price in EURct.]

**Figure 8.** Market outcomes per trading period simulated with true values and an oversupply scenario. [BLEMmarketSimulation](https://github.com/QuantLet/BLEM/tree/master/BLEMmarketSimulation)

[Figure 9: three panels per trading period, October–December 2017 — undersupply in kWh, equilibrium price in EURct, and LEM price in EURct.]

**Figure 9.** Market outcomes per trading period simulated with true values and an undersupply scenario. [BLEMmarketSimulation](https://github.com/QuantLet/BLEM/tree/master/BLEMmarketSimulation)

4.2.2. Loss to Consumers due to Prediction Errors

To assess the adverse effect of prediction errors on market outcomes, the LASSO-predicted energy consumption values per 15-min interval were used. The predictions of the model served as order amounts in the auction bids. After the true consumption in the respective trading period was observed, payments to settle over- or underestimation errors were made. That is, if a consumer bid a higher amount than it actually consumed, it still bought the full bid amount from the prosumers but had to sell the surplus to the energy utility over the grid at the feed-in tariff. On the other hand, if a consumer bid a lower amount than it actually consumed, it bought the bid amount from the prosumers but had to purchase the additional consumption from the grid at the energy utility's tariff. Thus, prediction errors are costly, as the consumer always has to clear the order under less favorable conditions than the equilibrium price provides.
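The fifth step, settling prediction errors after the fact, can be sketched in R as follows; the function name and sign conventions are ours, illustrating the rule described above.

```r
# Settlement of one consumer's order once the true consumption is known.
# bid_kwh:  amount ordered in the auction (based on the forecast)
# true_kwh: consumption actually observed in the trading period
# eq_price: equilibrium price of the auction in EURct/kWh
settle <- function(bid_kwh, true_kwh, eq_price,
                   feed_in = 12.31, utility = 28.69) {
  base_cost  <- bid_kwh * eq_price
  diff_kwh   <- true_kwh - bid_kwh
  correction <- if (diff_kwh > 0) {
    diff_kwh * utility   # under-bid: buy the shortfall from the grid
  } else {
    diff_kwh * feed_in   # over-bid: sell the surplus back at the feed-in tariff
  }
  base_cost + correction   # total cost in EURct
}
```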
Table 4 contrasts the results of the market simulation with true consumption values and with predicted consumption values in three different supply scenarios. The equilibrium and LEM prices barely differ within each scenario, whether the true or the predicted consumption values are used. The prices between the scenarios, however, differ substantially. The average total revenue of prosumers over the three-month simulation period is largely unaffected by the use of true or predicted consumption values. This is not surprising, as the revenue is a function of the equilibrium price, which is apparently largely unaffected by whether true or predicted consumption values are used, and of the electricity produced, which is obviously completely unaffected by it.

**Table 4.** Average results of the market simulation for three different supply scenarios. Prices are averaged across all trading periods. Revenues and costs for the whole simulation period are averaged across all prosumers and consumers, respectively. [BLEMevaluateMarketSim](https://github.com/QuantLet/BLEM/tree/master/BLEMevaluateMarketSim)

| Mean | Balanced: True | Balanced: Predicted | Oversupply: True | Oversupply: Predicted | Undersupply: True | Undersupply: Predicted |
| --- | --- | --- | --- | --- | --- | --- |
| Equilibrium price (in EURct) | 24.64 | 24.61 | 12.50 | 12.49 | 25.68 | 25.69 |
| LEM price (in EURct) | 27.31 | 27.28 | 12.51 | 12.49 | 28.08 | 28.10 |
| Revenue (in EUR) | 1113.84 | 1108.88 | 3454.62 | 3451.69 | 1035.90 | 1036.12 |
| Cost with LEM (in EUR) | 439.26 | 457.94 | 200.75 | 226.61 | 451.60 | 470.69 |
| Cost without LEM (in EUR) | 459.83 | 446.93 | 459.83 | 446.93 | 459.83 | 446.93 |

What differs according to Table 4, however, is the cost for consumers. The cost without the LEM is, on average across all consumers, smaller when using predicted consumption values than when using true consumption values. This can be explained by the LASSO model's tendency to underestimate on the data at hand and by the fact that correction payments for the prediction errors are not factored into this number. The average total cost for electricity consumption over the whole simulation period with an LEM is higher when using predicted consumption values than when using true consumption values. This is due to the above-mentioned need to settle prediction errors at unfavorable terms. The percentage loss induced by prediction errors is shown in Table 5. Depending on the supply scenario, it ranges between about 4.8% and 13.75%. These numbers have to be judged relative to the savings that participation in an LEM brings to consumers. It turns out that, in the balanced supply scenario, the savings due to the LEM are almost completely offset by the loss due to prediction errors. As consumers profit more from an LEM the lower the equilibrium prices are, this is not the case in the oversupply scenario. Here, the savings are substantial and amount to about 130%, which is almost ten times more than the percentage loss due to the prediction errors. However, the problem of the settlement structure for prediction errors becomes very apparent in the undersupply scenario. Here, the savings due to an LEM are more than offset by the loss due to prediction errors.
Consequently, consumers would be better off not participating in an LEM.

**Table 5.** Average savings for consumers due to the LEM and average loss for consumers due to prediction errors in the LEM. [BLEMevaluateMarketSim](https://github.com/QuantLet/BLEM/tree/master/BLEMevaluateMarketSim)

| Mean | Balanced Supply | Oversupply | Undersupply |
| --- | --- | --- | --- |
| Cost without LEM (in EUR) | 459.83 | 459.83 | 459.83 |
| Cost predicted values (in EUR) | 457.94 | 226.61 | 470.69 |
| Cost true values (in EUR) | 439.26 | 200.75 | 451.60 |
| Savings due to LEM (in %) | 4.82 | 129.08 | 1.90 |
| Loss due to pred. errors (in %) | −4.80 | −13.75 | −4.76 |

This result is visualized in a more differentiated way in Figure 10. For each supply scenario, the figure shows each consumer's total energy cost over the whole simulation period in: (1) no LEM; (2) an LEM with the use of predicted consumption values; and (3) an LEM with the use of true consumption values. For each supply scenario, the bottom panel shows the percentage loss due to not participating in the LEM and the loss due to participating while using predicted consumption values, both compared to participating and using true consumption values. In the balanced scenario, there are some consumers who would make a loss by participating in the LEM while relying on predicted values. For them, the loss due to no LEM (yellow bars) is smaller than the loss due to prediction errors (green bars). However, 56 out of 88 consumers (i.e., 64%) profit from the participation in the LEM despite the costs induced by prediction errors. Due to the much lower equilibrium prices in the oversupply scenario, LEM participation there is, despite prediction errors, profitable for all consumers. However, even in this scenario, the savings for the consumers are diminished by more than 10%, which is quite substantial. In contrast, in the undersupply scenario, the loss due to the prediction errors leaves the participation in the LEM unprofitable for almost all consumers. Merely three consumers would profit and have lower costs in an LEM than without one, despite prediction errors.

Overall, it becomes clear that prediction errors significantly lower the economic profitability of an LEM for consumers, which, however, is often argued to be one of the main advantages of LEMs. The result is especially concerning in LEMs where locally produced energy is undersupplied. Here—still assuming the closed double auction market mechanism and zero-intelligence bidding strategies—the savings from the participation in the LEM are marginal. Therefore, the costs induced by prediction errors mostly outweigh the savings from the participation. This results in an overall loss for consumers due to the LEM, which makes participation economically irrational. Only in cases of substantial oversupply does the much lower equilibrium price, compared to the energy utility's price, compensate for the costs from prediction errors. In conclusion, this means that LEMs with a discrete-interval, closed double auction as market mechanism and a prediction error settlement structure as proposed in [6], combined with the prediction accuracy of state-of-the-art energy forecasting techniques, require substantial oversupply in the LEM to be beneficial to consumers.
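The per-consumer comparison behind Figure 10 can be sketched in R as follows. The percentage definitions are our reading of the figure (note that the paper's Table 5 averages per-consumer percentages, which need not equal percentages computed from averaged costs), and all names are illustrative.

```r
# Per-consumer totals over the simulation period, all in EUR; each argument
# is a vector with one entry per consumer.
# cost_no_lem: consumption bought entirely at the utility tariff
# cost_pred:   LEM cost when bidding with predicted values (incl. corrections)
# cost_true:   LEM cost when bidding with perfect forecasts
lem_profitability <- function(cost_no_lem, cost_pred, cost_true) {
  data.frame(
    loss_no_lem_pct = 100 * (cost_true - cost_no_lem) / cost_true,
    loss_pred_pct   = 100 * (cost_true - cost_pred)  / cost_true,
    worthwhile      = cost_pred < cost_no_lem  # LEM pays off despite errors
  )
}
```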
[Figure 10: for each supply scenario (balanced, oversupply, undersupply), a top panel with each consumer's cost without LEM, cost with predicted values, and cost with true values in EUR, and a bottom panel with the percentage loss due to no LEM and the loss due to prediction errors.]

**Figure 10.** Total energy cost to consumers from 1 October 2017 to 31 December 2017 in the case of no LEM, LEM with true values, and LEM with predicted values in three different supply scenarios. [BLEMevaluateMarketSim](https://github.com/QuantLet/BLEM/tree/master/BLEMevaluateMarketSim)

_4.3. Implications for Blockchain-Based Local Energy Markets_

In light of these results, it remains to derive implications and to propose potential adjustments to an LEM market mechanism. After all, there are substantial advantages of LEMs, established in various studies, that still make LEMs an attractive solution for the challenges brought about by the current energy transition. Adjustments mitigating the negative effect of prediction errors on the profitability of LEMs could address one or more of the following areas: first, the forecasting techniques employed; second, the demand and supply structure of the LEM; and, third, the market mechanism used in the blockchain-based LEM.

The first and most intuitive option is to improve the forecasting accuracy with which the predictions, which serve as the basis of bids and asks, are made. For example, a common approach to reduce the bias of LASSO-based predictions is the use of post-LASSO techniques such as the one presented by Chernozhukov et al. [49]. Another aspect that seems relevant for the improvement of forecasting models is the evaluation method. Using economic measures for the evaluation of forecasting model performance may address a potential mismatch between statistical measures of forecasting accuracy and the resulting economic profits [50]. However, these approaches most likely yield only small improvements. Thus, the most obvious way to achieve a substantial improvement is the inclusion of more data. More data may refer either to a higher resolution of recorded energy data or to a wider range of data sources, such as behavioral data of household members or data from smart appliances. A higher resolution of smart meter readings is already easily achievable. The smart meters installed by Discovergy that supplied the data for the present research are capable of recording energy measurements as often as every two seconds. However, data at such a fine granularity require substantial data storage and processing capacities, which are unlikely to be available in an average household. In particular, the training of prediction models with such vast amounts of input data is computationally very resource intensive. The potential solution of outsourcing this computation, however, introduces new data privacy concerns, which are already a sensitive topic in smart meter usage and blockchain-based LEMs (e.g., [8,51]).
Increasing the forecasting intervals to 30 or 60 min, as an alternative way to reduce the computational resources needed, would presumably decrease the forecasting accuracy, which, in turn, might increase the cost for consumers. However, the effect of this potential solution on the cost for consumers due to forecasting errors is worth investigating in future studies. The inclusion of behavioral data into prediction models, such as the location of a person within their house, or the inclusion of smart appliances' energy consumption (as done by Kong et al. [22]) and running schedules, raises important privacy concerns as well. Pooling and using energy consumption data of several households, as done by Shi et al. [23], again introduces privacy concerns, as it implies data sharing between households, which in relatively small LEMs cannot be guaranteed to preserve the anonymity of market participants. For all these reasons, it seems unlikely that qualitative jumps in the prediction accuracy of very short-term energy consumption or production of individual households will be available in the near future.

The second option addresses the demand and supply structure in the blockchain-based LEM. As shown in Section 4.2, the cost induced by prediction errors and their settlement is more than compensated in an oversupply scenario. Hence, employing LEMs only in neighborhoods in which energy production surpasses energy consumption would mitigate the problem of unprofitability due to prediction errors as well. Where this is not possible, participation in the LEM could be restricted such that oversupply is ensured in a majority of trading periods. However, this might amount to a market manipulation that most likely renders most of the LEM's advantages obsolete. Moreover, it is unclear on what basis the restriction to participate in the market should be grounded.

The third option to mitigate the problem concerns the market mechanism and the prediction error settlement structure. A simple approach to reduce forecasting errors is to decrease the forecasting horizon. Thus, instead of having 15-min trading periods, which also require 15-min-ahead forecasts, the trading periods could be shrunk to just 3 min. This would increase the forecasting accuracy and thereby lead to lower costs due to the settlement of prediction errors. However, in a blockchain-based LEM, more frequent market closings come at the cost of more computational resources needed for transaction verification and cryptographic block generation. Depending on the consensus mechanism used for the blockchain, the energy requirements for the computations that secure transactions and generate new blocks may be substantial. This, of course, is rather detrimental to the idea of promoting more sustainable energy generation and usage. Nevertheless, consensus mechanisms based on identity verification of the participating agents may serve as a less computation-intensive, and thus less energy-intensive, alternative, which might make shorter trading intervals reasonable. Another, more radical, approach might be to change the market mechanism of closed double auctions altogether and use an ex-post market instead. Here, the energy consumption and production is settled in an auction after the true values are known, instead of in advance. This means that market participants submit just limit prices in their bids and asks, without related amounts, and the offers are matched in an auction at regular time intervals.
Then, the electricity actually consumed and produced in the preceding period is settled according to the market clearing price. Related to this approach is a solution where bidding is based on forecasted energy values, while the settlement is shifted by one period such that the actual amounts can be used for clearing. This approach, however, may introduce the possibility of fraud and market manipulation, as agents can try to deliberately bid false amounts. In the smart contract developed by Mengelkamp et al. [6], the funds needed to back up a bid are held as pledges until the contract is settled, which ensures the availability of the necessary funds to pay the bid; this safeguard would be pointless if settlement were based only on actual consumption without considering the amount specified in the offer. However, the extent of this problem and ways to mitigate it should be assessed from a game-theoretical perspective that is beyond the scope of the present research.

Overall, prediction errors have to be taken into account in future designs of blockchain-based LEMs. Otherwise, they may substantially lower the profitability of an LEM and diminish consumers' incentive to participate. In addition, the psychological component of having to rely on a prediction algorithm that may be more or less accurate depending on the household's energy consumption patterns seems unattractive. Even though possible solutions are not trivial and each comes with certain trade-offs, there is room for future improvement of the smart contracts and the market mechanism they reproduce.

**5. Conclusions**

The present research had three objectives: (1) to evaluate the prediction accuracy achievable for household energy consumption with state-of-the-art forecasting techniques; (2) to assess the effect of prediction errors on an LEM that uses a closed double auction with discrete time intervals as market mechanism; and (3) to infer implications from the results for the future design of blockchain-based LEMs. In the performance assessment of currently used forecasting techniques, the LASSO model yielded the best results, with a median MAPE across all consumer datasets of about 17%. It was subsequently used to make predictions for the market simulation. The evaluation of the market mechanism and prediction error settlement structure revealed that, in a balanced supply and demand scenario, the costs of prediction errors almost completely offset the savings brought by the participation in the LEM. In an undersupply scenario, the cost due to prediction errors even surpassed the savings and made market participation uneconomical. The most promising approach to mitigate this problem seems to be an adjustment of the market design, which can be two-fold: either shorter trading periods could be introduced, which would reduce the forecasting horizon and therefore the prediction errors, or the auction mechanism could be altered so that predicted consumption values are not used to settle transactions.

For the present research, data from a greater number of smart meters and more context information about the data would have been desirable. However, due to data protection legislation, no information regarding the locality of the households, household characteristics, or the type of power plant used by prosumer households could be provided. Thus, unfortunately, no other covariates (e.g., temperature) could be used in the forecasting of energy consumption.
In addition, the large differences in the production capacities of the prosumers contained in the data further complicated the analysis of the market simulation. Additionally, it is worth mentioning that the market simulation did not account for taxes or fees, especially grid utilization fees, which can make up a substantial share of the total electricity cost of households. The simulation also did not take into account compensation costs for blockchain miners that reimburse them for the computational cost they bear.

Evidently, future research concerned with blockchain-based LEMs should take into account the potential cost of prediction errors. Furthermore, to our knowledge, no simulation of a blockchain-based LEM with actual consumption and production data has been conducted. Doing so on a private blockchain with the market mechanism coded in a smart contract should be the next step for the assessment of potential technological and conceptual weaknesses. In conclusion, previous research has shown that blockchain technology and smart contracts combined with renewable energy production can play an important role in tackling the challenges of climate change. The present research, however, emphasizes that advancement on this front cannot be made without a holistic approach that takes all components of blockchain-based LEMs into account. Simply assuming that reasonably accurate energy forecasts for individual households will be available once the technical challenges of implementing an LEM on a blockchain are solved may steer research in a wrong direction and bears the risk of missing the opportunity to move quickly towards a more sustainable and less carbon-intensive future.

**Author Contributions:** Conceptualization, M.K. and W.K.H.; Data curation, M.K.; Formal analysis, W.K.H.; Methodology, M.K.; Software, M.K.; Supervision, W.K.H.; Validation, M.K. and W.K.H.; Visualization, M.K.; Writing—original draft, M.K.; and Writing—review and editing, M.K. and W.K.H.

**Funding:** This research received no external funding.

**Acknowledgments:** We would like to thank Discovergy GmbH for the kind provision of their smart meter data, the Humboldt Lab for Empirical and Quantitative Research (LEQR) at the School of Business and Economics, Humboldt University of Berlin for the kind provision of computing resources, and the IRTG 1792 at the School of Business and Economics, Humboldt University of Berlin for valuable support.

**Conflicts of Interest:** The authors declare no conflict of interest.

**Data Availability:** All data and algorithms are freely available through www.quantlet.de with the keyword _BLEM_ and at GitHub: [github.com/QuantLet/BLEM](https://github.com/QuantLet/BLEM).

**Abbreviations**

The following abbreviations are used in this manuscript:

LEM Local energy market
LASSO Least absolute shrinkage and selection operator
RNN Recurrent neural network
LSTM Long short-term memory
ML Machine learning
GPU Graphical processing unit
CPU Central processing unit
CV Cross-validation
SD Standard deviation
MAE Mean absolute error
RMSE Root mean square error
MAPE Mean absolute percentage error
NRMSE Normalized root mean square error
MASE Mean absolute scaled error

**References**

1. Sinn, H.W. Buffering volatility: A study on the limits of Germany's energy revolution. Eur. Econ. Rev. 2017, 99, 130–150. [CrossRef](http://dx.doi.org/10.1016/j.euroecorev.2017.05.007)
2. Bayer, B.; Matschoss, P.; Thomas, H.; Marian, A. The German experience with integrating photovoltaic systems into the low-voltage grids. Renew. Energy 2018, 119, 129–141. [CrossRef](http://dx.doi.org/10.1016/j.renene.2017.11.045)
3. BSW-Solar. Statistische Zahlen der deutschen Solarstrombranche (Photovoltaik); Bundesverband Solarwirtschaft e.V.: Berlin, Germany, 2018.
4. Weron, R. Modeling and Forecasting Electricity Loads and Prices: A Statistical Approach; John Wiley & Sons: Chichester, UK, 2006.
5. Rutkin, A. Blockchain-Based Microgrid Gives Power to Consumers in New York. Available online: [newscientist.com/article/2079334-blockchain-based-microgrid-gives-power-to-consumers-in-new-york/](https://newscientist.com/article/2079334-blockchain-based-microgrid-gives-power-to-consumers-in-new-york/) (accessed on 13 July 2019).
6. Mengelkamp, E.; Gärttner, J.; Rock, K.; Kessler, S.; Orsini, L.; Weinhardt, C. Designing microgrid energy markets—A case study: The Brooklyn Microgrid. Appl. Energy 2018, 210, 870–880. [CrossRef](http://dx.doi.org/10.1016/j.apenergy.2017.06.054)
7. Lamparter, S.; Becher, S.; Fischer, J.G. An Agent-based Market Platform for Smart Grids. In Proceedings of the 9th International Conference on Autonomous Agents and Multiagent Systems (AAMAS): Industry Track, Toronto, ON, Canada, 10–14 May 2010; pp. 1689–1696.
8. Buchmann, E.; Kessler, S.; Jochem, P.; Böhm, K. The Costs of Privacy in Local Energy Markets. In Proceedings of the 2013 IEEE 15th Conference on Business Informatics, Vienna, Austria, 15–18 July 2013; pp. 198–207.
9. Block, C.; Neumann, D.; Weinhardt, C. A Market Mechanism for Energy Allocation in Micro-CHP Grids. In Proceedings of the 41st Annual Hawaii International Conference on System Sciences (HICSS 2008), Waikoloa, HI, USA, 7–10 January 2008; pp. 172–183.
10. Mengelkamp, E.; Notheisen, B.; Beer, C.; Dauer, D.; Weinhardt, C. A blockchain-based smart grid: Towards sustainable local energy markets. Comput. Sci. Res. Dev. 2018, 33, 207–214. [CrossRef](http://dx.doi.org/10.1007/s00450-017-0360-9)
11. Stadler, M.; Cardoso, G.; Mashayekh, S.; Forget, T.; DeForest, N.; Agarwal, A.; Schönbein, A. Value streams in microgrids: A literature review. Appl. Energy 2016, 162, 980–989. [CrossRef](http://dx.doi.org/10.1016/j.apenergy.2015.10.081)
12. Mengelkamp, E.; Gärttner, J.; Weinhardt, C. Intelligent Agent Strategies for Residential Customers in Local Electricity Markets. In Proceedings of the Ninth International Conference on Future Energy Systems (e-Energy '18), Karlsruhe, Germany, 12–15 June 2018; pp. 97–107.
13. Koirala, B.P.; Koliou, E.; Friege, J.; Hakvoort, R.A.; Herder, P.M. Energetic communities for community energy: A review of key issues and trends shaping integrated community energy systems. Renew. Sustain. Energy Rev. 2016, 56, 722–744. [CrossRef](http://dx.doi.org/10.1016/j.rser.2015.11.080)
14. Hvelplund, F. Renewable energy and the need for local energy markets. Energy 2006, 31, 2293–2302. [CrossRef](http://dx.doi.org/10.1016/j.energy.2006.01.016)
15. Ilic, D.; Silva, P.G.D.; Karnouskos, S.; Griesemer, M. An energy market for trading electricity in smart grid neighbourhoods. In Proceedings of the 2012 6th IEEE International Conference on Digital Ecosystems and Technologies (DEST), Campione d'Italia, Italy, 18–20 June 2012; pp. 1–6.
2013, _[56, 168–179. [CrossRef]](http://dx.doi.org/10.1016/j.dss.2013.05.022)_ 17. Mengelkamp, E.; Gärttner, J.; Weinhardt, C. Decentralizing Energy Systems Through Local Energy Markets: The LAMP-Project. In Proceedings of the Multikonferenz Wirtschaftsinformatik (MKWI), Lüneburg, Germany, 6–9 March 2018; pp. 924–930. 18. Wang, Y.; Chen, Q.; Hong, T.; Kang, C. Review of smart meter data analytics: Applications, methodologies, [and challenges. IEEE Trans. Smart Grid 2018, 10, 1–24. [CrossRef]](http://dx.doi.org/10.1109/TSG.2018.2880680) 19. Burger, C.; Kuhlmann, A.; Richard, P.; Weinmann, J. Blockchain in the Energy Transition. A Survey among _Decision-Makers in the German Energy Endustry; Report; ESMT Berlin, Berlin, Germany, 2016._ 20. Münsing, E.; Mather, J.; Moura, S. Blockchains for decentralized optimization of energy resources in microgrid networks. In Proceedings of the 2017 IEEE Conference on Control Technology and Applications (CCTA), Mauna Lani, HI, USA, 27–30 August 2017; pp. 2164–2171. 21. Arora, S.; Taylor, J.W. Forecasting electricity smart meter data using conditional kernel density estimation. _[Omega 2016, 59, 47–59. [CrossRef]](http://dx.doi.org/10.1016/j.omega.2014.08.008)_ 22. Kong, W.; Dong, Z.Y.; Jia, Y.; Hill, D.J.; Xu, Y.; Zhang, Y. Short-Term Residential Load Forecasting Based on [Resident Behaviour Learning. IEEE Trans. Power Syst. 2018, 33, 1087–1088. [CrossRef]](http://dx.doi.org/10.1109/TPWRS.2017.2688178) 23. Shi, H.; Xu, M.; Li, R. Deep learning for household load forecasting—A novel pooling deep RNN. IEEE Trans. _[Smart Grid 2018, 9, 5271–5280. [CrossRef]](http://dx.doi.org/10.1109/TSG.2017.2686012)_ 24. Li, P.; Zhang, B.; Weng, Y.; Rajagopal, R. A sparse linear model and significance test for individual [consumption prediction. IEEE Trans. Power Syst. 2017, 32, 4489–4500. [CrossRef]](http://dx.doi.org/10.1109/TPWRS.2017.2679110) 25. Bansal, A.; Rompikuntla, S.K.; Gopinadhan, J.; Kaur, A.; Kazi, Z.A. Energy Consumption Forecasting for Smart Meters. In Proceedings of the third Internatial Conference on Business Analytics and Intelligence (BAI) 2015, Bangalore, India, 17–19 December 2015; pp. 1–20. 26. Diagne, M.; David, M.; Lauret, P.; Boland, J.; Schmutz, N. Review of solar irradiance forecasting methods [and a proposition for small-scale insular grids. Renew. Sustain. Energy Rev. 2013, 27, 65–76. [CrossRef]](http://dx.doi.org/10.1016/j.rser.2013.06.042) 27. Gan, D.; Wang, Y.; Zhang, N.; Zhu, W. Enhancing short-term probabilistic residential load forecasting with [quantile long–short-term memory. J. Eng. 2017, 2017, 2622–2627. [CrossRef]](http://dx.doi.org/10.1049/joe.2017.0833) 28. Chollet, F.; Allaire, J. Deep Learning with R; Manning Publications Co.: Shelter Island, NY, USA, 2018. ----- _Energies 2019, 12, 2718_ 27 of 27 29. Van der Meer, D.W.; Widén, J.; Munkhammar, J. Review on probabilistic forecasting of photovoltaic power [production and electricity consumption. Renew. Sustain. Energy Rev. 2018, 81, 1484–1512. [CrossRef]](http://dx.doi.org/10.1016/j.rser.2017.05.212) 30. Chen, K.; Chen, K.; Wang, Q.; He, Z.; Hu, J.; He, J. Short-term Load Forecasting with Deep Residual [Networks. IEEE Trans. Smart Grid 2018, 1–10. [CrossRef]](http://dx.doi.org/10.1109/TSG.2018.2844307) 31. Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. _[Neural Netw. 1989, 2, 359–366. [CrossRef]](http://dx.doi.org/10.1016/0893-6080(89)90020-8)_ 32. Bengio, Y.; Simard, P.; Frasconi, P. 
32. Bengio, Y.; Simard, P.; Frasconi, P. Learning long-term dependencies with gradient descent is difficult. IEEE Trans. Neural Netw. 1994, 5, 157–166. [CrossRef](http://dx.doi.org/10.1109/72.279181)
33. Hochreiter, S.; Schmidhuber, J. Long Short-Term Memory. Neural Comput. 1997, 9, 1735–1780. [CrossRef](http://dx.doi.org/10.1162/neco.1997.9.8.1735)
34. Gers, F.A.; Schmidhuber, J.; Cummins, F. Learning to Forget: Continual Prediction with LSTM. Neural Comput. 2000, 12, 2451–2471. [CrossRef](http://dx.doi.org/10.1162/089976600300015015)
35. Lipton, Z.C.; Berkowitz, J.; Elkan, C. A Critical Review of Recurrent Neural Networks for Sequence Learning. arXiv 2015, arXiv:1506.00019v4.
36. Graves, A. Supervised Sequence Labelling. In Supervised Sequence Labelling with Recurrent Neural Networks; Springer: Berlin/Heidelberg, Germany, 2012; pp. 5–13.
37. Chollet, F.; Allaire, J. R Interface to Keras. 2017. Available online: [https://github.com/rstudio/keras](https://github.com/rstudio/keras) (accessed on 30 September 2018).
38. Tibshirani, R. Regression Shrinkage and Selection via the Lasso. J. R. Statist. Soc. Ser. B (Methodol.) 1996, 58, 267–288. [CrossRef](http://dx.doi.org/10.1111/j.2517-6161.1996.tb02080.x)
39. Friedman, J.; Hastie, T.; Tibshirani, R. Regularization Paths for Generalized Linear Models via Coordinate Descent. J. Stat. Softw. 2010, 33, 1–22. [CrossRef](http://dx.doi.org/10.18637/jss.v033.i01)
40. Hoff, T.; Perez, R.; Kleissl, J.; Renne, D.; Stein, J. Reporting of irradiance modeling relative prediction errors. Prog. Photovolt. Res. Appl. 2013, 21, 1514–1519. [CrossRef](http://dx.doi.org/10.1002/pip.2225)
41. Zhang, J.; Florita, A.; Hodge, B.M.; Lu, S.; Hamann, H.F.; Banunarayanan, V.; Brockway, A.M. A suite of metrics for assessing the performance of solar power forecasting. Sol. Energy 2015, 111, 157–175. [CrossRef](http://dx.doi.org/10.1016/j.solener.2014.10.016)
42. Hyndman, R.J.; Koehler, A.B. Another look at measures of forecast accuracy. Int. J. Forecast. 2006, 22, 679–688. [CrossRef](http://dx.doi.org/10.1016/j.ijforecast.2006.03.001)
43. Heidjann, J. Strompreise in Deutschland - Vergleichende Analyse der Strompreise für 1437 Städte in Deutschland; StromAuskunft: Münster, Germany, 2017.
44. Gode, D.K.; Sunder, S. Allocative Efficiency of Markets with Zero-Intelligence Traders: Market as a Partial Substitute for Individual Rationality. J. Political Econ. 1993, 101, 119–137. [CrossRef](http://dx.doi.org/10.1086/261868)
45. Discovergy GmbH. Intelligente Stromzähler und Messsysteme; Discovergy GmbH: Heidelberg, Germany, 2018.
46. Auder, B.; Cugliari, J.; Goude, Y.; Poggi, J.M. Scalable Clustering of Individual Electrical Curves for Profiling and Bottom-Up Forecasting. Energies 2018, 11, 1893. [CrossRef](http://dx.doi.org/10.3390/en11071893)
47. Gerossier, A.; Girard, R.; Kariniotakis, G.; Michiorri, A. Probabilistic day-ahead forecasting of household electricity demand. CIRED Open Access Proc. J. 2017, 2017, 2500–2504. [CrossRef](http://dx.doi.org/10.1049/oap-cired.2017.0625)
48. Teixeira, B.; Silva, F.; Pinto, T.; Santos, G.; Praça, I.; Vale, Z. TOOCC: Enabling heterogeneous systems interoperability in the study of energy systems. In Proceedings of the 2017 IEEE Power Energy Society General Meeting, Chicago, IL, USA, 16–20 July 2017; pp. 1–5.
49. Chernozhukov, V.; Härdle, W.K.; Huang, C.; Wang, W. LASSO-Driven Inference in Time and Space. arXiv 2018, arXiv:1806.05081v3.
50. Maciejowska, K.; Nitka, W.; Weron, T. Day-Ahead vs. Intraday—Forecasting the Price Spread to Maximize Economic Benefits. Energies 2019, 12, 631. [CrossRef](http://dx.doi.org/10.3390/en12040631)
51. Greveler, U.; Justus, B.; Loehr, D. Forensic content detection through power consumption. In Proceedings of the 2012 IEEE International Conference on Communications (ICC), Ottawa, ON, Canada, 10–15 June 2012; pp. 6759–6763.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/EN12142718?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/EN12142718, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/1996-1073/12/14/2718/pdf?version=1563357630" }
2,019
[]
true
2019-06-02T00:00:00
[ { "paperId": "049fb4055514c35a4675805098f15ea2a168f418", "title": "Renewable" }, { "paperId": "21e7683f642967ff4675e8f3b79dc9b8d3a4a6c4", "title": "Smart Meters" }, { "paperId": "513578741617a4bbc12bf3dee63129cb9836260b", "title": "Forex exchange rate forecasting using deep recurrent neural networks" }, { "paperId": "216d7c2230a4b421bfbb97c260025f02f4004773", "title": "Optimal Operation of Active Distribution Network Involving the Unbalance and Harmonic Compensation of Converter" }, { "paperId": "d1faf825b22ead23bd9afe07027b8a747753e8a8", "title": "Information Arrival, News Sentiment, Volatilities and Jumps of Intraday Returns" }, { "paperId": "542df9c07b06202e6df417d80e4a026cd567f30a", "title": "Deep Learning with R" }, { "paperId": "bed3e7e555fbb3a5ff9c724b67a29a94cac72b64", "title": "Localizing Multivariate CAViaR" }, { "paperId": "74acfbf241ee27710892d6a92d1d62f66326ce12", "title": "Day-Ahead vs. Intraday—Forecasting the Price Spread to Maximize Economic Benefits" }, { "paperId": "d140fb7f9537c309e50845dd6cd06a4482ac57c5", "title": "Deep Learning for Household Load Forecasting—A Novel Pooling Deep RNN" }, { "paperId": "5ee947732bc1e7fe3ff9b593264252fd4071f227", "title": "Scalable Clustering of Individual Electrical Curves for Profiling and Bottom-Up Forecasting" }, { "paperId": "7d8096a78105a8284e74e9ee0340e589fc502b93", "title": "LASSO-Driven Inference in Time and Space" }, { "paperId": "1df845cba1fecf2f3271ad13b44086d48d05b658", "title": "Intelligent Agent Strategies for Residential Customers in Local Electricity Markets" }, { "paperId": "6aae2606ec782a2da4268618c9301881f9cdc44d", "title": "Short-Term Load Forecasting With Deep Residual Networks" }, { "paperId": "df973c4acc8b135ccffc880fd6e18913c06820f3", "title": "The German experience with integrating photovoltaic systems into the low-voltage grids" }, { "paperId": "4ffbd6860cc2cada1f4c6c77ad3f9b6535c4b3cf", "title": "Review of Smart Meter Data Analytics: Applications, Methodologies, and Challenges" }, { "paperId": "8d3f008403e73226955699e31361f5ae02628dc8", "title": "Designing microgrid energy markets" }, { "paperId": "57d6de06108aab568915c1590b2fc114947cd35c", "title": "A blockchain-based smart grid: towards sustainable local energy markets" }, { "paperId": "546dae9d8b69c8010b8a4c9447891b54ed8a1ae2", "title": "TOOCC: Enabling heterogeneous systems interoperability in the study of energy systems" }, { "paperId": "71fbb7b01136466f459a73bd5c613dc7bb16c5b3", "title": "Probabilistic day-ahead forecasting of household electricity demand" }, { "paperId": "b3427fadb79e813d3fad9f7ec815b2ca7958031e", "title": "Blockchains for decentralized optimization of energy resources in microgrid networks" }, { "paperId": "b3a9e5f3dfe8a152b4852d2cd6bfd3d68cbb7096", "title": "Buffering Volatility: A Study on the Limits of Germany's Energy Revolution" }, { "paperId": "747c2f963d3b5f90603ea8f38726b4d9a5a5bfb2", "title": "Energetic communities for community energy: A review of key issues and trends shaping integrated community energy systems" }, { "paperId": "b7dcb6407be265e1f86e716970757a67dadda891", "title": "Value streams in microgrids: A literature review" }, { "paperId": "bb8fb2e373551abb2d50d001e903051a7a6c7f16", "title": "Energy Consumption Forecasting for Smart Meters" }, { "paperId": "0c61a23cf344b7cb200148f9b6516423268e3448", "title": "A Sparse Linear Model and Significance Test for Individual Consumption Prediction" }, { "paperId": "a6336fa1bcdeb7c84d2c4189728f0c1b2b7d0883", "title": "A Critical Review of Recurrent Neural Networks for 
Sequence Learning" }, { "paperId": "8e492617bbcd785e6c714ad53de2b895a718b744", "title": "Forecasting electricity smart meter data using conditional kernel density estimation" }, { "paperId": "408e0b009a7e3b1cf3f3510bae460938d742bedf", "title": "An auction design for local reserve energy markets" }, { "paperId": "54a4d9d268b8b0ab9c4634133ad0cb379ed94ea6", "title": "Review of solar irradiance forecasting methods and a proposition for small-scale insular grids" }, { "paperId": "cf80c5fe15f1c718d761ffb6cdeddccda4203aaf", "title": "Reporting of irradiance modeling relative prediction errors" }, { "paperId": "14f9fb13b2dd131f763023b30f62965ead0b7002", "title": "The Costs of Privacy in Local Energy Markets" }, { "paperId": "cf2ddf35ac0acc96f3a668a882443929da6aae5e", "title": "An energy market for trading electricity in smart grid neighbourhoods" }, { "paperId": "c371b864eb4d9c938ec4e366b86666e4e3d4697c", "title": "Forensic content detection through power consumption" }, { "paperId": "e961c7ef22dae74a192776ab9ab1e7eb8491f09c", "title": "An agent-based market platform for Smart Grids" }, { "paperId": "0a14824290453b95d051ec9cc299d0f61ad82b23", "title": "Regularization Paths for Generalized Linear Models via Coordinate Descent." }, { "paperId": "5535daf8f0702925ca6020a596767bde2a714385", "title": "Enhancing short-term probabilistic residential load forecasting with quantile long–short-term memory" }, { "paperId": "9768db71a74de325f85af81395d6a5f7265b4f2b", "title": "A Market Mechanism for Energy Allocation in Micro-CHP Grids" }, { "paperId": "234811c79ba579eaf72697a35f767f9d37c81e8d", "title": "Modeling and Forecasting Electricity Loads and Prices: A Statistical Approach" }, { "paperId": "784f972682a958d65a6d48b2c8025736caa45fb4", "title": "Another look at measures of forecast accuracy" }, { "paperId": "890affffdee22753a0cafd88f3efda38c7559896", "title": "Renewable energy and the need for local energy markets" }, { "paperId": "11540131eae85b2e11d53df7f1360eeb6476e7f4", "title": "Learning to Forget: Continual Prediction with LSTM" }, { "paperId": "2e9d221c206e9503ceb452302d68d10e293f2a10", "title": "Long Short-Term Memory" }, { "paperId": "d0be39ee052d246ae99c082a565aba25b811be2d", "title": "Learning long-term dependencies with gradient descent is difficult" }, { "paperId": "08fa207ef2db3c88a5e5d188f721ffb0e8274518", "title": "Allocative Efficiency of Markets with Zero-Intelligence Traders: Market as a Partial Substitute for Individual Rationality" }, { "paperId": "f22f6972e66bdd2e769fa64b0df0a13063c0c101", "title": "Multilayer feedforward networks are universal approximators" }, { "paperId": "9ec768707292a4e5e46e05ee84dc9912558181ba", "title": "Voting for Health Insurance Policy: the U.S. 
versus Europe" }, { "paperId": "392fe6f21d9b735c719a742ed987702b893824dd", "title": "Designing microgrid energy markets A case study: The Brooklyn Microgrid" }, { "paperId": "3f26ec0ec56bc0359ab1b3c685f9ba06ca6e3375", "title": "Review on probabilistic forecasting of photovoltaic power production and electricity consumption" }, { "paperId": "68798bd266b0e8c15bee46aa765e0aafd1d82e73", "title": "Short-Term Residential Load Forecasting Based on Resident Behaviour Learning" }, { "paperId": "bbd2976f87d1ffa309f4871ba5973b66938263c7", "title": "Decentralizing Energy Systems Through Local Energy Markets: The LAMPProject" }, { "paperId": "b9a5764785ca7aa5e495c3dc4941250ea5182116", "title": "A suite of metrics for assessing the performance of solar power forecasting" }, { "paperId": "f4b4a1f89046325c5f6526b3df3eb09284cb73e9", "title": "Supervised Sequence Labelling" }, { "paperId": "49c03ab2a1fd1b469de370142e564a120e8e4961", "title": "ELECTRICITY MARKETS" }, { "paperId": "b365b8e45b7d81f081de44ac8f9eadf9144f3ca5", "title": "Regression Shrinkage and Selection via the Lasso" }, { "paperId": null, "title": "Blockchain-based microgrid gives power to consumers in New York, 2016" }, { "paperId": null, "title": "Intelligente Stromzähler und Messsysteme" }, { "paperId": null, "title": "This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license" }, { "paperId": null, "title": "The role of medical expenses in the saving decision of elderly: a life cycle model\" by Xinwen Ni" }, { "paperId": null, "title": "Blockchain-Based Microgrid Gives Power to Consumers in New York. Available online: newscienti st.com/article/2079334-blockchain-based-microgrid-gives-power-to-consumers-in-new-york" }, { "paperId": null, "title": "Strompreise in Deutschland - Vergleichende Analyse der Strompreise für 1437 Städte in 711 Deutschland" }, { "paperId": null, "title": "Adaptive Nonparametric Community Detection" }, { "paperId": null, "title": "Statistische Zahlen der deutschen Solarstrombranche (Photovoltaik)" }, { "paperId": null, "title": "Interface to Keras. 2017" }, { "paperId": null, "title": "Blockchain in the energy transition. A survey among 671 decision-makers in the German energy industry" } ]
24,344
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01b830e8b5f61b017ce43ee51ba8b801fb0ec1ca
[ "Computer Science" ]
0.894901
Research on Block-Chain-Based Intelligent Transaction and Collaborative Scheduling Strategies for Large Grid
01b830e8b5f61b017ce43ee51ba8b801fb0ec1ca
IEEE Access
[ { "authorId": "151480570", "name": "Xiaolin Fu" }, { "authorId": "48017506", "name": "Hong Wang" }, { "authorId": "2157177606", "name": "Zhijie Wang" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://ieeexplore.ieee.org/servlet/opac?punumber=6287639" ], "id": "2633f5b2-c15c-49fe-80f5-07523e770c26", "issn": "2169-3536", "name": "IEEE Access", "type": "journal", "url": "http://www.ieee.org/publications_standards/publications/ieee_access.html" }
In view of the problems of large-grid-level centralized transactions and dispatch centers with information asymmetry and high processing costs, a completely decentralized transaction architecture and a weakly centralized scheduling strategy based on block-chain are proposed. Firstly, the concepts of transaction decentralization and scheduling decentralization are defined, the reliability of distributed transaction communication is studied, and a blockchain transaction risk control model based on the communication credit consensus mechanism is built. Secondly, under the weakly centralized scheduling architecture based on the autonomous chain of substations, security checks are performed, and temporary central nodes are set up to perform scheduling tasks. Finally, an improved evolutionary game algorithm is used to solve the above model, and the optimal solution is obtained by dynamically updating the credibility.
Received August 2, 2020, accepted August 13, 2020, date of publication August 18, 2020, date of current version August 28, 2020. _Digital Object Identifier 10.1109/ACCESS.2020.3017694_

# Research on Block-Chain-Based Intelligent Transaction and Collaborative Scheduling Strategies for Large Grid

XIAOLIN FU 1, HONG WANG2, AND ZHIJIE WANG1 1College of Electrical Engineering, Shanghai Dianji University, Shanghai 201306, China 2School of Economics and Management, Tongji University, Shanghai 200092, China Corresponding author: Zhijie Wang (wangzj@sdju.edu.cn) This work was supported by the Shanghai Natural Science Foundation Project under Grant 15ZR1417300.

**ABSTRACT** In view of the problems of large-grid-level centralized transactions and dispatch centers with information asymmetry and high processing costs, a completely decentralized transaction architecture and a weakly centralized scheduling strategy based on block-chain are proposed. Firstly, the concepts of transaction decentralization and scheduling decentralization are defined, the reliability of distributed transaction communication is studied, and a blockchain transaction risk control model based on the communication credit consensus mechanism is built. Secondly, under the weakly centralized scheduling architecture based on the autonomous chain of substations, security checks are performed, and temporary central nodes are set up to perform scheduling tasks. Finally, an improved evolutionary game algorithm is used to solve the above model, and the optimal solution is obtained by dynamically updating the credibility.

**INDEX TERMS** Block-chain, large grid, intelligent trading, collaborative scheduling.

**I. INTRODUCTION**

With the continuous development of modern power systems, a unified power trading and dispatching platform suffers from information asymmetry and low transaction reliability, which do not match the characteristics of openness, equality and sharing expected of the power system. Therefore, the State Grid Corporation has proposed to build a smart grid in China and has successfully developed a smart charging and feeding service monitoring system; at the same time, it has achieved outstanding results in large-scale new-energy grid connection and operation control technology [1]. Northeast Asian countries signed the Memorandum of Cooperation on Northeast Asia Power Interconnection in 2016 [2], [3]. Several studies take the Northeast Asian transnational power grid as an example to analyze the feasibility of its dispatch method and long-term transaction mode [4], [5]. The establishment of a maturity evaluation model for cross-border power trading is conducive to the further improvement of power trading and dispatch mechanisms [6]. The associate editor coordinating the review of this manuscript and approving it for publication was Zhouyang Ren. As a new type of decentralized computing model, blockchain can simplify operating procedures and reduce execution costs, which enables the power system to gradually transition from partial decentralization to complete decentralization [7]. A weak central organization can be set up for congestion management, with the process executed automatically through smart contracts, saving transaction execution time [8]. By proposing a heterogeneous block-chain interaction method, interconnection between all levels of the energy layer is achieved [9]. To improve the adaptability of power dispatch, the advantages of blockchain autonomous consensus have been introduced into demand management [10]–[12].
In fact, there is little work that systematically analyzes trading and dispatching strategies in power systems. Moreover, some existing results in [7]–[12] cannot be verified by simulation based on blockchain theory. Motivated by this discussion, in this paper the influence of distributed transactions on the stability of large power grids is considered. By defining the degree of decentralization and quantifying blockchain participation, a blockchain transaction risk control model based on the communication credit consensus mechanism is built. On the basis of the completed transactions, a weakly centralized scheduling architecture based on the autonomous chain of substations is designed for security verification. At the same time, a temporary central node is set to perform scheduling tasks, and a collaborative optimization scheduling strategy is proposed. Finally, an improved evolutionary game algorithm is used to solve the problem, and a stable scheduling scheme is obtained by dynamically updating the credibility. The rest of this paper is organized as follows. Section 2 analyzes the feasibility of the integration of blockchain and large power grids. Section 3 proposes distributed trading strategies based on blockchain. Section 4 proposes the weakly centralized scheduling strategy based on the blockchain and solves it through an improved evolutionary game algorithm. A simulation example is given in Section 5, which is followed by the conclusion in Section 6.

**II. THE FEASIBILITY ANALYSIS OF THE INTEGRATION OF LARGE POWER GRID AND BLOCK-CHAIN**

_A. LARGE-GRID AND BLOCK-CHAIN INTEGRATION FUNCTION_

The decentralized nature of the blockchain naturally corresponds to the distributed nature of the main bodies in the power grid, which can meet the demand for direct electricity trading. The data transparency, traceability and anti-tampering characteristics of the blockchain can improve the security and reliability of transactions. Blockchain provides solutions for a number of problems that cannot be implemented in the current smart grid [13]. It can be integrated with smart grid functions as shown in Table 1.

**TABLE 1. Microgrid and block-chain integration function.**

_B. SUBSTATION AUTONOMOUS CHAIN MODEL_

The substation autonomous chain is composed of substations at all levels. It is a partially decentralized structure with a trusted center; data access and reading are subject to strict rights management, privacy protection is good, and it is applicable to the inside of the power grid. As shown in Figure 1, the six substations in the center represent large power consumers in each transaction dispatch layer, adopting direct power purchase to improve overall operating efficiency. The four small-capacity substations in the outer ring still use centralized power trading, which is conducive to maintaining the stable operation of the platform. When the electric energy at this level does not meet the demand for electricity, the neighboring substation submits a transaction application, and transaction authority is granted after the review is passed.

**FIGURE 1. Substation autonomous chain.**

Since the optimization strategy in this paper is analyzed under a decentralized structure, randomness and volatility must be prevented from affecting the safe and stable operation of the power grid.
Therefore, equation (1) defines the degree of transaction decentralization:

$D_{dt} = \frac{T_{bc}}{T_{tc}} \times 100 \quad (1)$

where $T_{bc}$ is the number of distributed transactions in which the block-chain participates and $T_{tc}$ is the number of dispatches under the participation of a single center. $D_{dt} = 100$ is fully decentralized: no transaction execution goes through a third-party centralized agency, and all transactions are peer-to-peer (P2P). $50 < D_{dt} < 100$ is weakly centralized: more than 50% of transactions are executed as P2P transactions, so transaction centralization is weak. $0 < D_{dt} < 50$ is weakly decentralized: more than 50% of transactions are executed as centralized transactions, so transaction decentralization is weak. $D_{dt} = 0$ is fully centralized: all transactions are executed through a third-party trading center.

**III. DISTRIBUTED TRADING STRATEGY BASED ON BLOCK-CHAIN**

_A. SMART CONTRACT MODEL_

The core value of the block-chain is to achieve mutual trust through multi-party co-governance. It can also ensure the authenticity and reliability of information without the need for a third party [14]. Its trustworthy features are embodied in the form of smart contracts, which can automatically execute transaction settlement [15]. As shown in Figure 2, the main implementation steps are as follows: i) An electronic agreement is reached between the transaction nodes on the signatures of the two parties, the transaction amount, the electronic currency, the transaction rules, and the complete state machine. ii) After spreading through the P2P network, the transaction information verified by consensus is written to the block-chain. iii) The oracle and its external data are checked. Although the smart contract itself does not have the ability to access data external to the blockchain, it can obtain such data through an oracle: using external adapters, the blockchain can safely connect with the oracle API. Developers can easily connect their smart contracts with a pre-written oracle API suite to establish a complete off-chain oracle connection, so as to get in touch with the outside world and obtain reliable external data. iv) The conditions for triggering the smart contract can be states on the chain, such as whether the payment is completed or whether there are marked electricity purchase and sale prices (i.e., electricity demand), together with external information (such as weather conditions). After the user gets the returned contract address and contract interface information, the user can call the contract by initiating a transaction. When the transaction satisfies the trigger condition, it is pushed to the queue for verification, and the transaction is completed after the verification is passed and recognized by more than half of the nodes. When the transaction does not meet the trigger condition, it is not recorded in the block, and the process returns to step i) to find transaction data that meets the requirements again.

**FIGURE 2. Model of smart contract.**

Most power generation companies at the large-power-grid level use traditional energy sources. It is necessary to give priority to ensuring the safe and stable operation of the power grid, and on this basis to improve corporate efficiency. The fully distributed transaction framework based on the blockchain designed in this paper is shown in Figure 3. The red line in the figure connects power generation companies, power users and grid companies.

**FIGURE 3. Design of a fully distributed transaction framework based on block-chain in large power grid.**
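To make the trigger-and-settle flow of steps i)–iv) concrete, the following Python sketch simulates it end to end. It is an illustrative toy, not the authors' implementation: the names `Transaction`, `trigger_met` and `settle`, the lambda-based verifier nodes, and the in-memory `chain` list are all our own assumptions.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    buyer: str
    seller: str
    amount_kwh: float
    price: float
    paid: bool            # on-chain state (payment completed?)
    weather_ok: bool      # external condition supplied by an oracle

def trigger_met(tx: Transaction) -> bool:
    # Step iv): the contract fires only if the on-chain payment state
    # and the oracle-supplied external condition are both satisfied.
    return tx.paid and tx.weather_ok

def settle(tx: Transaction, nodes: list, chain: list) -> bool:
    """Push a triggered transaction to the verification queue; it is
    recorded on-chain only if more than half of the nodes accept it."""
    if not trigger_met(tx):
        return False                      # back to step i): find new data
    votes = sum(1 for verify in nodes if verify(tx))
    if votes > len(nodes) / 2:            # recognized by a majority
        chain.append(tx)                  # written to the block-chain
        return True
    return False

# toy usage: three honest nodes that accept any positive-amount trade
nodes = [lambda tx: tx.amount_kwh > 0] * 3
chain = []
tx = Transaction("user_A", "plant_B", 120.0, 0.4, paid=True, weather_ok=True)
print(settle(tx, nodes, chain), len(chain))   # True 1
```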
With the support of blockchain technology, direct electric energy transactions and electricity bill settlement are completed. Using the block-chain to automatically share tamper-proof transaction records simplifies transaction settlement and, in addition, improves the operating and settlement efficiency of large-user enterprises [16]. In the transaction implementation stage, the smart meter records the actual consumption or output of electricity over a period of time and broadcasts it to other nodes, and the bookkeeping node records it on the chain. The amount of change in the user's electronic currency is obtained through a smart contract. Each transaction node in the power grid needs to reach a consensus on the generation and consumption of electrical energy, and the electronic money paid to the generator is related to the amount of power generated and to supply and demand. The cost of electricity is:

$f(x) = a_{ele} \cdot p_{ele} \quad (2)$

where $a_{ele}$ is the actual power consumption by users and $p_{ele}$ is the price of unit electricity. The balance fee is composed of the actual electricity cost of the user and the penalty fee for the unfinished transaction indicator; the former is sold at a lower transaction price, while the latter requires paying a higher actual electricity price. The balance fee can be expressed as:

$g(a_{ele}, d_s, d_r) = a_{ele} \cdot p_{ele} + \frac{\delta (d_s - d_r)^2}{e^{\tau}} \cdot p_{punish} \quad (3)$

where $d_s$ is the power supply, $d_r$ is the power demand, $\delta$ and $\tau$ are coefficients, and $p_{punish}$ is the penalty price per unit of electricity. The fees payable by users are inversely related to supply and demand:

$l(d_c, d_s, d_r) = \frac{d_c \cdot \varepsilon \cdot d_r}{d_s + d_r} \quad (4)$

where $d_c$ is the power consumption and $\varepsilon$ is a coefficient.

_B. RELIABILITY RESEARCH OF DISTRIBUTED TRANSACTION COMMUNICATION_

Reliable communication is studied from the two aspects of link connectivity and transaction interdependence. Link connectivity considers the link connectivity probability of the communication network topology: on the premise of ensuring link connectivity, nearby transactions can be realized, reducing network loss and improving transaction efficiency. Combined with the definition of the degree of decentralization above, the link connectivity of a completely distributed structure is defined as

$link_{con} = 1 - \prod_{q=1}^{L_p}\left(1 - e_{pq} \cdot e_p^2\right) \quad (5)$

and under a fully centralized structure, the link connectivity of the system is defined as

$link_{net} = \frac{1}{M} \sum_{p=1}^{m} link_{con} \quad (6)$

where $L_p$ is the number of links connected to node p; $e_{pq}$ is the extensibility of the q-th link connected to node p, extensibility being defined as the probability that link q is connected to nodes other than node p; $e_p$ is the sum of the scalability of node p; $link_{net}$ is the link connectivity of the system; and M is the total number of nodes. Trade interdependence of the power trading communication network means that when the integrity of the communication network is damaged, that is, when a line is overhauled, the remaining nodes and links can still maintain real-time power trading.
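The cost and connectivity formulas above translate directly into code. The sketch below implements equations (2)–(6) as reconstructed here; the grouping of the penalty term in equation (3) follows the most plausible reading of the garbled source, all numeric values are made-up placeholders, and the use of the mean extensibility as $e_p$ in the toy run (to keep the probability in [0, 1]) is our simplification, not the paper's definition.

```python
import math
import numpy as np

def electricity_cost(a_ele, p_ele):
    """Eq. (2): actual consumption times the unit electricity price."""
    return a_ele * p_ele

def balance_fee(a_ele, d_s, d_r, p_ele, p_punish, delta, tau):
    """Eq. (3): electricity cost plus a penalty that grows with the
    squared supply/demand mismatch (reading assumed from the source)."""
    return a_ele * p_ele + delta * (d_s - d_r) ** 2 / math.exp(tau) * p_punish

def user_fee(d_c, d_s, d_r, eps):
    """Eq. (4): user fee, inversely related to total supply and demand."""
    return d_c * eps * d_r / (d_s + d_r)

def link_con(e_pq, e_p):
    """Eq. (5): per-node link connectivity from its links' extensibility."""
    return 1.0 - np.prod(1.0 - np.asarray(e_pq) * e_p ** 2)

def link_net(per_node_con):
    """Eq. (6): system-level link connectivity."""
    return float(np.mean(per_node_con))

# toy numbers, for illustration only
print(electricity_cost(100, 0.4))                         # 40.0
print(round(balance_fee(100, 120, 100, 0.4, 0.6, 0.1, 2.0), 2))
print(round(user_fee(100, 120, 100, 0.05), 2))
rng = np.random.default_rng(0)
ext = rng.uniform(0.91, 0.99, size=(20, 4))               # 20 nodes, 4 links
cons = [link_con(ext[p], ext[p].mean()) for p in range(20)]
print(round(link_net(cons), 4))
```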
Transaction interdependence can effectively reduce the negative impact of unbalanced electricity on the power grid, as defined by equation (7):

$Aid_{ele} = sen_p \cdot \sum_{j=1}^{h} \frac{L_{pj}}{sen_{pj} \cdot m_{pj}(M-1)} \quad (7)$

where h is the shortest path length connecting two nodes, $sen_p$ is the communication response speed of node p, $sen_{pj}$ is the set of communication response speeds of nodes at the same distance as node p, $L_{pj}$ is the number of links between node p and the equal-distance communication set, and $m_{pj}$ is the total number of nodes equidistant from node p. The whole-network transaction interdependence of the communication network is expressed as in equation (8), with the weighting coefficient given by equation (9):

$Aid^{1} = \frac{1}{M} \sum_{p=1}^{M} \partial_p \cdot Aid_{ele} \quad (8)$

$\partial_p = \frac{z_p}{z_{max}} \quad (9)$

where $\partial_p$ is the weighting coefficient of node p's transaction compatibility, $z_p$ is the number of nodes in node p's equal-distance communication set, and $z_{max}$ is the maximum number of nodes in any node's equidistant communication set. Assuming that the connectivity of nodes and links is the same, take the extensibility interval of nodes and links as [0.91, 0.99]. The comparison of link connectivity between fully centralized communication and fully decentralized communication is shown in Figure 4, and the transaction interdependence degree under the two communication methods is shown in Figure 5.

**FIGURE 4. Comparison of link connectivity with a scalability of [0.91, 0.99].**

**FIGURE 5. Comparison of transaction interdependence with a scalability of [0.91, 0.99].**

It can be seen from Figure 4 that within the extensibility range [0.91, 0.99], the link connectivity of fully decentralized power communication based on block-chain technology is superior to that of fully centralized power communication: the link connectivity of fully decentralized communication networks is 1.9% to 2.4% higher than that of fully centralized communication architectures. That is, on the premise of ensuring link connectivity, it can effectively promote nearby trading of electrical energy and reduce network loss. Figure 5 shows that the completely decentralized power communication architecture based on block-chain technology also has better transaction interdependence than the fully centralized architecture; the latter's transaction completion rate is only 16.4%∼20.7% of the former's. That is, when the completely decentralized communication architecture network is damaged, power transactions can, thanks to the decentralized interconnected network structure, reach equal-distance transaction nodes through other connected links and so maintain the continued operation of power trading.

_C. BLOCK-CHAIN TRANSACTION RISK MANAGEMENT AND CONTROL MODEL BASED ON COMMUNICATION CREDIT CONSENSUS MECHANISM_

On the basis of the above reliability research, this section improves the proof-of-stake mechanism and proposes a transaction risk management and control model based on the communication Proof-of-Credit (cPoC). It incorporates communication reliability and data transmission speed into the credit scoring system as a competitive mechanism for transaction nodes to obtain the right to keep accounts. The consensus mechanism is central to the agreement reached by the nodes in the decentralized system [17]–[19].
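Equations (7)–(9), as reconstructed above, can be checked numerically with a small script. The topology below is a random toy network, and the grouping of terms in (7) is an assumption carried over from the reconstruction.

```python
import numpy as np

def aid_ele(sen_p, L_pj, sen_pj, m_pj, M):
    """Eq. (7): transaction interdependence of one node, summed over
    the h distance classes j around node p."""
    L_pj, sen_pj, m_pj = map(np.asarray, (L_pj, sen_pj, m_pj))
    return sen_p * np.sum(L_pj / (sen_pj * m_pj * (M - 1)))

def aid_network(aid_per_node, z_p, z_max):
    """Eqs. (8)-(9): whole-network interdependence with the
    compatibility weights z_p / z_max."""
    weights = np.asarray(z_p) / z_max
    return float(np.mean(weights * np.asarray(aid_per_node)))

# toy example: 10 nodes, 3 distance classes each, random speeds/links
rng = np.random.default_rng(1)
M = 10
aids = [aid_ele(rng.uniform(0.8, 1.0),
                rng.integers(1, 4, 3),        # links per distance class
                rng.uniform(0.8, 1.0, 3),     # response speeds per class
                rng.integers(1, 5, 3), M)     # nodes per distance class
        for _ in range(M)]
z = rng.integers(1, 5, M)
print(round(aid_network(aids, z, z.max()), 4))
```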
In the process of distributed transactions, the speed and reliability of data broadcasting should also be used as constraints when nodes compete for bookkeeping rights, reflecting the value provided by transaction entities participating in direct transactions, which is an important right of market entities. Therefore, this paper proposes a cPoC consensus mechanism that considers communication reliability and data transmission speed in the setting of the difficulty coefficient. The competition algorithm for the accounting rights of each node in this mechanism is shown in equations (10) and (11):

$H(R_i, k_i) \le N_{diff} \cdot e^{c_i} \cdot tran_i \quad (10)$

$N_{diff} = N_{ba} + N(tran_i, v_i) \quad (11)$

where $H(\cdot)$ is the hash function, $R_i$ is the root hash of all the data packed into the block by node i, $k_i$ is the random number that node i needs to find, $c_i$ is the credit score of node i, $N_{diff}$ is the difficulty coefficient, $N_{ba}$ is the default basic difficulty coefficient, $N(\cdot)$ is the data transmission network function, $tran_i$ is the reliability of data transmission, and $v_i$ is the data transmission speed (bits/sec). According to the optimization strategy for accounting rights given by equations (10) and (11), a node obtains the accounting rights following the flow of Figure 6.

**FIGURE 6. Node competition accounting right rule.**

Under the cPoC consensus mechanism, the values obtained by each node's single run of the hash function are evenly distributed between 0 and $2^{256}-1$. Assume that there are F transaction entities in the network; then the probability that one of them gains the block accounting right is shown in equation (12):

$pr_i^{block} = \frac{(N_{diff} \cdot e^{c_i} \cdot tran_i)/2^{256}}{\sum_{j=1}^{F} (N_{diff} \cdot e^{c_j} \cdot tran_j)/2^{256}} = \frac{e^{c_i} \cdot tran_i}{\sum_{j=1}^{F} e^{c_j} \cdot tran_j} \quad (12)$

where $2^{256}$ is the size of the space mapped by the SHA-256 algorithm. In the above equation, the numerator represents the probability that node i successfully obtains the accounting right in one hash function calculation. From (12), the mining difficulty of a node is related to its credit score and communication reliability: the higher the credit score and the communication reliability, the lower the mining difficulty and the greater the probability of obtaining accounting rights. This rewards highly credible subjects and punishes subjects with low credibility. Compared with existing electricity trading methods, the increased difficulty of selection can control trading risks. The cPoC algorithm reduces the attack success rate of malicious nodes by increasing the difficulty of choosing the transaction subject, thereby realizing the management and control of distributed energy transaction risks, as shown in Figure 7. When the system is attacked by malicious nodes, it has a strong ability to maintain stable operation. In Figure 7, the number of malicious nodes gradually increases from 0 to 40 with a step size of 2. It can be seen from the figure that when the number of malicious nodes is less than 62% of the total, the attack success rate is 0. Therefore, compared with the continuous double auction mechanism, using blockchain technology for transaction authentication offers higher security and reliability.

**FIGURE 7. Success rate of malicious node attack.**

**FIGURE 8. Transaction throughput comparison.**
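The accounting-rights competition of equations (10)–(12) can be mimicked with a few lines of Python: a node searches for a nonce whose SHA-256 hash falls below a credit- and reliability-dependent target. This is a minimal sketch under stated assumptions; the scale of the basic difficulty `base` and the nonce search budget are made-up, and the target construction folds eq. (10) together with an assumed constant $N_{diff}$.

```python
import hashlib
import math

def h256(data: bytes) -> int:
    """Map arbitrary data to an integer in [0, 2**256 - 1] via SHA-256."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

def target(n_diff: float, credit: float, tran: float) -> int:
    """Right-hand side of eq. (10): higher credit / reliability means
    a larger target, i.e. a lower effective mining difficulty."""
    return int(n_diff * math.exp(credit) * tran)

def try_accounting_right(root: bytes, n_diff, credit, tran,
                         max_nonce: int = 200_000):
    """Search for a nonce k_i with H(R_i, k_i) <= target (eq. (10))."""
    t = target(n_diff, credit, tran)
    for k in range(max_nonce):
        if h256(root + k.to_bytes(8, "big")) <= t:
            return k
    return None

# toy run: a reliable, high-credit node finds a nonce far sooner than
# a low-credit one, matching the reward/punish behavior of eq. (12)
base = 2 ** 245                      # made-up basic difficulty scale
print(try_accounting_right(b"block-root", base, credit=3.0, tran=0.99))
print(try_accounting_right(b"block-root", base, credit=0.5, tran=0.60))
```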
Figure 8 shows the comparison of throughput under different transaction strategies. Transaction throughput refers to the number of transactions completed by the system in a given time period: the greater the throughput of the system, the more user or system requests the system completes per unit time, and the more fully system resources are utilized. Figure 8 takes the average value over different transaction states. When the number of nodes is less than 40, the blockchain-based transaction strategy proposed in this article has low transaction delay and high consensus speed, and transaction settlement is completed through an automatically executed smart contract, so it has an obvious advantage in throughput performance. When the number of nodes exceeds 40, the throughput under the proposed strategy drops slightly and finally stabilizes at about 32 transactions, which still leaves room for improvement. Figure 9 shows the effective supply rate of transactions in each period. The effective supply rate refers to the ratio of the number of transactions successfully completed according to the transaction intention to the total transaction volume; the higher the effective supply rate, the smaller the transaction defaults and transaction adjustments, and the better the transaction quality. As shown in Figure 9, although the continuous double auction mechanism can maintain the supply rate at a relatively high level, there is a significant decline during the peak load period. In the blockchain transaction strategy proposed in this paper, the cPoC consensus mechanism introduces credit scoring and communication reliability to promptly correct entities that do not meet the transaction needs, rewarding high-trust entities and punishing low-trust entities. During the peak transaction period from 18:00 to 20:00, the highest supply rate can be increased by 11.7% and the average effective supply rate by 5.8%, effectively reducing the transaction default rate and adjustment volume.

**FIGURE 9. Effective supply rate of transactions in each period.**

As shown in Table 2 and Figure 10, the existing continuous double auction mechanism has high requirements for local servers, so it is difficult to implement in a decentralized low-cost network. The block-chain-based transaction strategy proposed in this paper can effectively reduce the daily operating cost of the microgrid by 8.45%, because block-chain technology can break the information barrier between the generator and the user and reduce both the credit cost in the transaction process and the construction cost of a third-party platform. 6:00 p.m.∼9:00 p.m. is the peak load period, where the optimization effect is more obvious.

**TABLE 2. Daily operating costs under different mechanisms.**

**FIGURE 10. Operating costs of microgrids at different times.**

**IV. WEAK CENTRALIZED SCHEDULING STRATEGY BASED ON EVOLUTIONARY GAME ALGORITHM**

_A. WEAK CENTRALIZED ARCHITECTURE BASED ON SUBSTATION AUTONOMOUS CHAIN_

At present, electricity market transactions are mainly divided into two types: annual transactions and monthly transactions. This paper first uses the monthly transaction method of centralized bidding as an example to illustrate the relationship between the transaction center and the dispatch center, as shown in Figure 11. The two are jointly responsible for the electricity market: the former is mainly responsible for declaration, clearing and settlement, and the latter is mainly responsible for security checks, congestion management and transaction execution.

**FIGURE 11. Monthly centralized bidding process.**
All transaction intentions need to pass the security check of the dispatch center to finally form a transaction plan [20], [21]. Considering that a dispatch center still exists in the current grid company system, this paper proposes the weak-centralization idea of decentralizing part of the dispatching while retaining the function of the dispatch center. A temporary scheduling center is selected through the blockchain consensus mechanism to perform scheduling tasks at all levels. At the same time, the substation autonomous chain approves transaction scheduling information to provide safety supervision for the stable operation of the power grid. The temporary center node is affected by factors such as load location, power supply location, power supply unit and network delay; depending on the transaction information, the selected temporary center changes, as shown in Figures 12 and 13.

**FIGURE 12. Temporary central node at t1.**

**FIGURE 13. Temporary central node at t2.**

Figure 12 shows the process of selecting the temporary central node at time t1. The power plants providing electrical energy include three thermal power plants, one wind power plant, and one photovoltaic power plant. The system communication node broadcasts the random number that needs to be solved in the current round of scheduling data, and each node stores the transaction data in a distributed manner while updating the local transaction scheduling data. The substation node that first calculates the correct random number result becomes the temporary center of this round of scheduling; it performs the scheduling tasks and receives certain rewards. Figure 13 shows the selection process of the temporary central node at time t2. The power plants providing electrical energy now include two thermal power plants, two wind power plants, and one photovoltaic power plant, whose geographical locations and power supply situations differ from those at time t1. Therefore, the temporary central node is re-selected and the random number calculation is performed again. From the uploaded data, the active power applied for by the substation in this round of transactions is known. Using the data stored in the block-chain network, the maximum load during the application period of the substation is known, so the available power and the total power required to ensure the stable operation of the power grid can be obtained. According to the submitted address information, the substation autonomous chain automatically recognizes the highest substation level between power purchaser A and power seller B in this round of transactions:

$f(A, B) = n, \quad n = 1, 2, 3, 4, 5 \quad (13)$

where 1, 2, 3, 4, and 5 represent 35 kV, 110 kV, 220 kV, 330 kV, and 500 kV substations, respectively. Assuming that the level of the substation directly connected to power purchaser A is m, and the level of the substation directly connected to power seller B is o, then a total of $N_{station}$ substation levels need to be passed:

$N_{station} = 2n - m - o + 1 \quad (14)$

For example, assume A is connected to user B through 500 kV, 330 kV, 220 kV, 110 kV and 35 kV substations; then n = 5, and m = 5 because A is directly connected to the 500 kV substation, while B, the user, is directly connected to the 35 kV substation, so o = 1. The number of substations passed between the two is then 5, which is in line with the real situation.
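Equation (14) and the worked example above can be transcribed directly; the function name below is ours, not the paper's.

```python
def substations_passed(n: int, m: int, o: int) -> int:
    """Eq. (14): number of substation levels a trade must traverse,
    given the highest level n and the levels m, o directly connected
    to the buyer and the seller (1 = 35 kV ... 5 = 500 kV)."""
    return 2 * n - m - o + 1

# worked example from the text: A at 500 kV (m=5), user B at 35 kV (o=1)
print(substations_passed(n=5, m=5, o=1))  # 5
```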
_B. SMART CONTRACT COLLABORATIVE SCHEDULING MODEL_

On the basis of traditional power grid economic dispatch, block-chain technology is incorporated, effectively introducing the advantages of block-chain in data storage, information security, and data interaction into power grid economic dispatch [22]–[25]. The economic dispatch plan of the power grid is formed in a smart contract and is checked and confirmed by the energy management system. Finally, reliable power supply from the power generation unit to the power consumption unit is realized. The specific steps are as follows: i) Each power generation unit and power user accesses historical data and current status information in the blockchain network, receives existing transaction requests, and performs data backup after authentication by the entire network. ii) According to all the transaction information that has passed authentication, each node calls the smart contract to perform economic dispatch calculations. The information format released by the power supply is:

$GEN = (ID_{GEN}, H_{GEN}, R_{GEN}, J_{GEN}, K_{GEN}, \Omega_{GEN}) \quad (15)$

where GEN is the controllable power information, $ID_{GEN}$ is the unique identification obtained when the controllable power supply joins the block-chain network, $H_{GEN}$ is the output capacity, $R_{GEN}$ is the cost information, $J_{GEN}$ is the controllable energy type, $K_{GEN}$ is the current start/stop status of the unit, and $\Omega_{GEN}$ is the ramping rate. iii) All the effective information received by the smart contract is integrated to form an economic dispatch objective function and constraint conditions, thereby generating a dispatch plan. The scheduling model in this paper is given by equations (16) to (20). The scheduling scheme is propagated through the P2P network, waiting for other nodes to verify it. iv) If the scheduling plan is verified, it is recorded in the blockchain in the form of a smart contract; otherwise, go back to step iii) to re-formulate the scheduling plan. v) When the preset trigger conditions are met, each power generation and consumption unit automatically executes the scheduling plan in the smart contract, which marks the end of a scheduling task. In hierarchical scheduling, the main task of the national dispatch center is to formulate a cross-provincial tie-line plan, which is determined by balancing large power distribution and power trading. With a known output curve, if the power transaction situation needs to be adjusted due to security constraints, a tie-line model with the goal of minimum adjustment cost is established, as shown in equation (16):

$\min \sum_{t=1}^{T} \sum_{n=0}^{N} \sigma_i \mu_n \left| C_{n,t}^{s} - C_{n,t} \right| \quad (16)$

where $C_{n,t}$ is the output of the inter-provincial power supply at time t according to the original transaction plan, $C_{n,t}^{s}$ is the suggested output after the cross-provincial power supply fails to meet the security constraints at time t, N is the total number of power supplies, $\sigma_i$ is the power distribution ratio of the power supply to tie line i, with a value in [0, 1], and $\mu_n$ is the adjustment cost of power supply n. The corresponding constraints under the objective function are: i) Tie-line transmission constraints

$C_{n,t,min} \le C_{n,t} \le C_{n,t,max} \quad (17)$

where $C_{n,t,min}$ and $C_{n,t,max}$ are respectively the minimum and maximum power that can be received or sent at time t.
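The adjustment-cost objective (16), together with the clipping implied by constraint (17), is simple enough to evaluate numerically. The following sketch uses made-up data; treating $\sigma_i$ as a scalar for a single tie line is our simplifying assumption.

```python
import numpy as np

def adjustment_cost(c_plan, c_sugg, sigma_i, mu):
    """Eq. (16): total tie-line adjustment cost.
    c_plan, c_sugg: arrays of shape (T, N) with planned and
    security-adjusted outputs; mu: per-source adjustment costs (N,);
    sigma_i: share of each source allocated to tie line i (scalar)."""
    return float(np.sum(sigma_i * mu * np.abs(c_sugg - c_plan)))

# toy data: T=4 periods, N=3 sources, made-up costs and limits
rng = np.random.default_rng(0)
c_plan = rng.uniform(50, 100, size=(4, 3))
# security-driven shifts, clipped to the band of eq. (17)
c_sugg = np.clip(c_plan + rng.normal(0, 5, size=(4, 3)), 40, 110)
mu = np.array([1.0, 1.5, 2.0])
print(round(adjustment_cost(c_plan, c_sugg, sigma_i=0.8, mu=mu), 2))
```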
ii) Control constraints of unit groups in the control area:

$\chi_{g,min} \le \sum_{i \in g} L_{i,t} \le \chi_{g,max}, \quad g \in G; \qquad L_{G,t} = \sum_{i \in g} L_{i,t} - C_{G,t} \quad (18)$

where G is the large grid, g is a provincial power grid, $\chi_{g,min}$ and $\chi_{g,max}$ are respectively the minimum and maximum output of the provincial grid units, $L_{G,t}$ is the load demand of the large power grid, and $C_{G,t}$ is the planned output of the large-grid tie line. The first relation in equation (18) indicates that the total output of units in the provincial grid fluctuates within the interval $[\chi_{g,min}, \chi_{g,max}]$, and the second ensures the load balance of the large grid. iii) Power-flow check constraints. The following equations give the power balance constraint together with the node power and node voltage constraints:

$C_e^t = V_e^t \sum_{f} \left(V_e^t - V_f^t\right) \cdot r_{ef}, \quad e \in E_l \quad (19)$

$\text{s.t.} \quad V_{min} \le V_e^t \le V_{max}, \quad C_{min} \le C_e^t \le C_{max} \quad \forall e \quad (20)$

where f ranges over all nodes connected to node e; $C_e^t$ is the power of node e at time t, with inflow positive and outflow negative; $r_{ef}$ is the current value flowing between the two nodes, with the flow direction from e to f positive and from f to e negative; $E_l$ is the set of system nodes; $C_{min}$ and $C_{max}$ are the minimum and maximum values of the node power $C_e^t$; and $V_{min}$ and $V_{max}$ are the minimum and maximum values of the node voltage $V_e^t$.

_C. IMPROVED EVOLUTIONARY GAME ALGORITHM_

Evolutionary game theory is based on individuals with limited rationality and describes trends of behavioral change well [26]. It makes up for the difficulty of assuming complete rationality and Nash equilibrium in classical game theory, and actively explores evolutionarily stable strategies and evolutionary processes [27], [28]. In the evolutionary game algorithm, the large power grid and the provincial power grids, as game participants, generate two populations denoted P1 and P2, with p1 and p2 the distribution probabilities in the initial population; P1 and P2 take y1 and y2 as their respective benefit targets. When two agents in the group compete for the same benefit, a game is triggered. Let two agents $x \in \hat{E}_{P_i}$ and $x' \in \hat{E}_{P_j}$ play a benefit-maximization game. The scheduling function obtained by x depends on the relationship between i and j. When i and j are equal, the scheduling function is given by equation (21):

$Dispatch(x) = \frac{y_i(x) - y_{i,min}}{y_{i,max} - y_{i,min}} \quad (21)$

When i and j are not equal, the scheduling function is given by equation (22):

$Dispatch(x) = \frac{\left(y_i(x) - y_j(x')\right) - \left(y_{i,min} - y_{j,max}\right)}{\left(y_{i,max} - y_{j,min}\right) - \left(y_{i,min} - y_{j,max}\right)} \quad (22)$

In each generation of the evolutionary algorithm, a pair of agents is randomly selected to play a number of repeated games, and the average scheduling value is taken as the agent's fitness value. The best dispatch decision is obtained by flexibly adjusting the game status between the large power grid and the provincial power grids.
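The two normalized payoff functions (21)–(22) and the repeated-game fitness averaging transcribe directly into Python; the numbers in the toy run are arbitrary placeholders.

```python
def dispatch_same(y_i, y_i_min, y_i_max):
    """Eq. (21): min-max normalized payoff when both agents come
    from the same population (i == j)."""
    return (y_i - y_i_min) / (y_i_max - y_i_min)

def dispatch_cross(y_i, y_j, y_i_min, y_i_max, y_j_min, y_j_max):
    """Eq. (22): normalized payoff difference across populations."""
    num = (y_i - y_j) - (y_i_min - y_j_max)
    den = (y_i_max - y_j_min) - (y_i_min - y_j_max)
    return num / den

def fitness(payoffs):
    """Average scheduling value over repeated games (the fitness)."""
    return sum(payoffs) / len(payoffs)

# toy repeated game between a large-grid agent and a provincial agent
games = [dispatch_cross(80, 60, 0, 100, 0, 100) for _ in range(5)]
print(fitness(games))  # 0.6
```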
Since the dispatch strategy in this paper is analyzed under a partially decentralized structure, the randomness and volatility of distributed dispatch must be prevented from affecting the operation of large power grids. Therefore, the degree of scheduling decentralization is defined by equation (23). Considering the credibility of stable decision-making under distribution, where the credibility represents the feasibility of a scheduling scheme satisfying the operational stability of the power grid, its definition is given in equation (24), so that the algorithm parameters can be dynamically adjusted as the game evolves.

$D_{sc} = \frac{S_{bc}}{S_{sc}} \times 100 \quad (23)$

where $S_{bc}$ is the number of distributed schedules and $S_{sc}$ is the number of centralized schedules. $D_{sc} = 100$ is completely decentralized: no scheduling execution goes through a third-party centralized agency, and all schedules are P2P. $50 < D_{sc} < 100$ is weakly centralized: more than 50% of schedules are executed as P2P schedules, so scheduling centralization is weak. $0 < D_{sc} < 50$ is weakly decentralized: more than 50% of schedules are executed as centralized schedules, so scheduling decentralization is weak. $D_{sc} = 0$ is completely centralized: all scheduling is executed through a third-party dispatch center.

$S_{cred} = \Delta u_{error} + \Delta f_{error} \quad (24)$

where $\Delta u_{error}$ is the voltage deviation value in the power grid and $\Delta f_{error}$ is the frequency deviation value in the power grid. The credibility sets its constraint range according to the allowable deviation under each voltage level. Let the dispatching decision of the large power grid and the provincial power grid at the i-th generation of evolution be $M_{dec}$. In a variety of random scenarios, if the provincial power grid cannot complete the dispatch task, the impact of the resulting electric energy fluctuation on the operation of the large power grid can be calculated. A compensation model corresponding to the impact of the provincial power grid on the operation of the large power grid is formulated and expressed as the penalty cost of the provincial power grid's output deviation:

$S_{comp} = \sum_{t=1}^{T} \sum_{q=1}^{Q} \omega_q \cdot \alpha \cdot \Delta M_{diff}^{2} \cdot (1 - D_{sc}) \quad (25)$

where $\omega_q$ is the probability weight corresponding to scene q, Q is the total number of random scenes, $\Delta M_{diff}$ is the gap between the actual output of the provincial power grid and the output of dispatching decision $M_{dec}$, and $\alpha$ is the unit penalty cost. Assuming that the provincial grid operation cost under this dispatch decision is $S_{pro}$ and the minimum credibility is $S_{cred,min}$, the population distribution probability is adjusted appropriately in the following two situations: i) $S_{comp}/S_{pro} > S_{cred,min}$: the impact of distributed dispatch on the stability of the large power grid is greater than the minimum credibility, so the population distribution probability is not adjusted. ii) $S_{comp}/S_{pro} \le S_{cred,min}$: the impact of distributed dispatch on the stability of the large power grid is less than the minimum credibility; starting from the (i+1)-th evolution, the population distribution probability is adjusted so that the stability impact on the large power grid caused by the randomness of distributed dispatch remains within the tolerable range.

**V. EXAMPLE ANALYSIS**

In order to verify the effectiveness of the mechanism proposed in this paper, a weakly centralized scheduling model is built in MATLAB, and the smart contracts are written in the C language. web3 uses HTTP Provider as the connector to the database; after the connection is completed, the scheduling model can be called from the smart contract. In the decision-making phase, the provincial power grid obtains the expected power through the web3.eth.call interface. The clearing solution and optimization scheduling are completed in MATLAB, and the optimization results are written into the smart contract through the web3.eth.sendTransaction interface. The parameters of units with different capacities are shown in Table 3.

**TABLE 3. Minimum stable combustion load and adjustment range of units with different capacity.**

Taking the provincial power grid as an example, a simulation of the optimal dispatching problem of coal-fired generating units in the province is carried out.
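For readers unfamiliar with the web3.eth.call / web3.eth.sendTransaction round trip described above, here is a hedged web3.py equivalent. It is a sketch under stated assumptions: the RPC endpoint, the contract address, the minimal ABI, and the function names `expectedPower` and `submitSchedule` are illustrative placeholders of ours, not taken from the paper, and running it requires a local node with such a contract deployed.

```python
from web3 import Web3

# Minimal ABI for a hypothetical scheduling contract (placeholder)
ABI = [
    {"name": "expectedPower", "type": "function", "inputs": [],
     "outputs": [{"name": "", "type": "uint256"}],
     "stateMutability": "view"},
    {"name": "submitSchedule", "type": "function",
     "inputs": [{"name": "kw", "type": "uint256"}], "outputs": [],
     "stateMutability": "nonpayable"},
]

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # placeholder node
contract = w3.eth.contract(
    address="0x0000000000000000000000000000000000000001",  # placeholder
    abi=ABI,
)

# decision phase: read the expected power (a constant call, no gas)
expected_kw = contract.functions.expectedPower().call()

# after optimizing (in MATLAB, per the paper), write the result back
tx_hash = contract.functions.submitSchedule(int(expected_kw)).transact(
    {"from": w3.eth.accounts[0]}
)
w3.eth.wait_for_transaction_receipt(tx_hash)
```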
The output plan of the units determined by the evolutionary game method is shown in Figure 14, which is consistent with the variation of the load curve at various times of the day. The peak output of different units is positively correlated with the installed capacity. In the evolutionary game, when a power transaction adjustment is required, each unit takes the minimum change in power on the tie line as its objective function. Through equation (25), the minimum credibility is used as the basis for judgment and the population distribution probability is dynamically adjusted, so that the output of the units can meet the requirements for safe and stable operation of the large power grid.

**FIGURE 14. Output curves of different coal-fired units.**

As shown in Figure 15, setting different degrees of scheduling decentralization affects the output of the units. Under weak centralization, the unit output is smoother, which can reduce peak and valley fluctuations, because each unit uses block-chain technology to maintain the weak center. The consensus scheme can realize information sharing and multi-party governance; the output curve of a unit is affected not only by the dispatch center but also by the remaining power stations. To a certain extent, the output of the units can be optimized to make it smoother, thereby improving the dispatch efficiency of each power station. However, owing to the storage efficiency of the block itself, with an increasing number of transaction and scheduling bodies the limited storage space will reduce the block's response speed, so further research on block management is needed.

**FIGURE 15. Unit output curves under different dispatching decentralization degrees.**

The power deviation values under different optimization strategies are shown in Figure 16. It can be seen from the figure that before optimization, the power deviation in the grid is high, the power fluctuation is large, and the system state is unstable. The blockchain-based optimization strategy proposed in this paper is compared with the optimization effect of a genetic algorithm. Although the genetic algorithm can find the optimal solution more effectively, the power deviation after blockchain optimization is lower, which reduces power fluctuation. Therefore, the blockchain-based scheduling optimization strategy is more conducive to the safe and stable operation of the power grid.

**FIGURE 16. Power deviation diagram under different optimization strategies.**
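The credibility judgment behind Figures 14 and 15 rests on the compensation cost of equation (25), which is easy to compute directly. In this sketch the scene weights, gaps and penalty price are made-up, and normalizing $D_{sc}$ from the 0–100 scale of eq. (23) to [0, 1] is our assumption.

```python
import numpy as np

def compensation_cost(weights, m_diff, alpha, d_sc):
    """Eq. (25): scenario-weighted penalty for provincial output
    deviating from the dispatch decision; d_sc in [0, 1] is the
    degree of scheduling decentralization (eq. (23) divided by 100)."""
    w = np.asarray(weights)            # scene probabilities, shape (Q,)
    d = np.asarray(m_diff)             # output gaps, shape (T, Q)
    return float(np.sum(w * alpha * d ** 2 * (1.0 - d_sc)))

# toy numbers: 3 scenes over 4 periods, made-up penalty price alpha
rng = np.random.default_rng(2)
gaps = rng.normal(0, 10, size=(4, 3))
print(round(compensation_cost([0.5, 0.3, 0.2], gaps, alpha=0.8, d_sc=0.7), 2))
```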
In order to verify the impact on tie-line power flow under the weakly centralized dispatch mode, a large power grid is used as a test case. Assume that the initial conditions are: a certain province has an electricity shortfall of 3385 MW, the grid consists of 16 physical lines, and the power adjustment space is ±10%. As shown in Figure 17, after power adjustment under distributed scheduling, most branch deviations are distributed around 15%, so calculation and operation costs are reduced at the expense of power-flow accuracy near the tie line. Based on the weak centralization of the blockchain, the power-flow data saves computing memory and improves computing efficiency through multi-party consensus.

**FIGURE 17. Impact on tie line power under distributed scheduling.**

In the evolutionary game algorithm, the dynamic credibility trends of the large power grid and the provincial power grid are shown in Figure 18. It can be seen from the figure that the large power grid and the provincial power grid undergo a dynamic evolutionary game process, which eventually keeps the stability of weakly centralized dispatch to the large power grid above the minimum credibility level, so that the deviations of voltage and frequency meet the requirements of power grid operation while the penalty cost of the provincial grid calculated by equation (25) is minimized. Compared with the traditional scheduling method, the two-way security system established under blockchain technology can maintain the continuous stability of both parties and obtain better economic benefits.

**FIGURE 18. Dynamic trend of dynamic credibility.**

**VI. CONCLUSION**

This paper focuses on large-grid-level transaction and collaborative scheduling strategies in smart grids and systematically analyzes the advantages of block-chain-based transactions and scheduling. All the models and strategy analyses described in this paper are based on the substation autonomous chain. Compared with existing distributed transaction methods, although the robustness is not significantly improved, the proposed approach can significantly improve transaction throughput and calculation efficiency. Since the storage efficiency of the block itself restricts the response speed of the block under large numbers of transaction bodies, further study of block congestion management should be conducted as a more effective path to scheduling optimization.

**ACKNOWLEDGMENT**

Thanks to all the staff members of the Shanghai Natural Science Foundation Project for their help and to the staff of Beijing Jin-Feng Energy Internet Park for providing the data source.

**REFERENCES**

[1] N. C. Zhou, J. Q. Liao, Q. G. Wang, C. Y. Li, and Y. Li, ‘‘Analysis and prospect of application status of deep learning in smart grid,’’ Automat. _Electr. Power Syst., vol. 43, no. 4, pp. 180–197, 2019._ [2] Z. Y. Liu, ‘‘Research and prospects on transnational and intercontinental interconnection of global energy Internet,’’ Proc. CSEE, vol. 36, no. 19, pp. 5103–5110, 2016. [3] Y. H. Zhang, Y. T. Zhang, D. Zhang, P. Y. You, C. Gao, and Y. L. Jiang, ‘‘Preliminary study on transnational power interconnection model and technical feasibility in Northeast Asia,’’ Global Energy Internet, vol. 1, no. S1, pp. 213–221, 2018. [4] Q. R. Yang, T. Ding, W. Q. Ma, H. M. Zhang, Z. Y. Jia, W. Tian, and Y. Cao, ‘‘Decentralized security-constrained economic dispatch for global energy Internet and practice in Northeast Asia,’’ in Proc. IEEE Conf. Energy _Internet Energy Syst. Integr. (EI2), Nov. 2017, pp. 1–6._ [5] Y. Cao, T. Ding, Y. T. Hou, and M. H. Shan, ‘‘Design and simulation of long-term trading mode of multinational electricity market under the background of global energy Internet,’’ Global Energy Internet, vol. 1, no. S1, pp. 242–248, 2018. [6] X. L. Li, Y. M. Song, C. T. Tang, F. Z. Shan, J.
Xu, and Y. W. Liu, ‘‘Research on maturity model of cross-border electricity trading market based on global energy Internet,’’ Electr. Power Inf. Commun. Technol., vol. 15, no. 3, pp. 7–13, 2017. [7] G. N. Wang, J. F. Yang, S. Wang, L. L. Duan, J. Zhang, and Y. T. Wu, ‘‘Distributed optimization of power grid considering EV swap station scheduling and blockchain data storage,’’ Autom. Electr. Power Syst., vol. 43, no. 8, pp. 110–116, 2019. [8] X. Tai, H. B. Sun, and Q. L. Guo, ‘‘Blockchain-based power transaction and congestion management method in energy Internet,’’ Power Syst. Technol., vol. 40, no. 12, pp. 3630–3638, 2016. [9] B. Li, W. Z. Cao, J. Zhang, S. S. Chen, B. Yang, Y. Sun, and B. Qi, ‘‘Multi-energy system transaction system and key technology based on heterogeneous blockchain,’’ Autom. Electr. Power Syst., vol. 42, no. 4, pp. 183–193, 2018. [10] G. Wu, B. Zeng, R. Li, and M. Zeng, ‘‘Research on the application mode of blockchain technology in comprehensive demand side response resource trading,’’ Proc. Chin. Soc. Electr. Eng., vol. 37, no. 13, pp. 3717–3728, 2017. [11] M. Zeng, J. Cheng, Y. Q. Wang, Y. F. Li, Y. Q. Yang, and J. Y. Dou, ‘‘A preliminary study on the multi-module collaborative autonomy model of the energy Internet under the blockchain framework,’’ Proc. CSEE, vol. 37, no. 13, pp. 3672–3681, 2017. [12] B. Li, C. Lu, W. Z. Cao, B. Qi, D. Z. Li, S. S. Chen, and G. Y. Ciu, ‘‘Application of automatic demand response system based on blockchain technology,’’ Proc. CSEE, vol. 37, no. 13, pp. 3691–3702, 2017. [13] Y. Yan, J. H. Zhao, F. S. Wen, and X. Y. Chen, ‘‘Blockchain in energy system: Concept, application and outlook,’’ Electr. Power Construct., vol. 38, no. 2, pp. 12–20, 2017. [14] B. Meng, J. B. Liu, Q. Liu, X. X. Wang, X. R. Zheng, and D. B. Wang, ‘‘Survey of smart contract security,’’ Chin. J. Netw. Inf. Secur., vol. 6, no. 3, pp. 1–13, 2020. [15] M. L. Fu, L. F. Wu, Z. Hong, and W. B. Feng, ‘‘Research on smart contract security vulnerability mining technology,’’ J. Comput. Appl., vol. 39, no. 7, pp. 1959–1966, 2019. [16] J. Ping, S. J. Chen, and Z. Yan, ‘‘Energy blockchain underlying technology suitable for convex optimization scenarios of power systems,’’ Proc. CSEE, vol. 40, no. 1, pp. 108–116, 2020. [17] J. Wang, W. B. Liu, and L. L. Gong, ‘‘A consensus mechanism for blockchain dynamic authorization,’’ J. Heilongjiang Univ. Sci. Technol., vol. 30, no. 2, pp. 193–199, 2020. [18] Y. Yuan, X. C. Ni, S. Zeng, and F. Y. Wang, ‘‘The development status and prospect of blockchain consensus algorithm,’’ Acta Automatica Sinica, vol. 44, no. 11, pp. 2011–2022, 2018. [19] J. Wang, T. Yang, and Y. Li, ‘‘Design of integer chaotic key generator for wireless sensor network,’’ Int. J. Future Gener. Commun. Netw., vol. 9, no. 11, pp. 327–336, Nov. 2016. [20] L. J. He, S. Cheng, and Z. M. Chen, ‘‘Two-tier optimal dispatch of multimicrogrids considering interactive power control and bilateral bidding transactions,’’ Power Syst. Protection Control, vol. 48, no. 11, pp. 10–17, 2020. [21] X. L. Li, F. Z. Shan, Y. M. Song, M. H. Zhou, C. Q. Liu, and C. T. Tang, ‘‘Optimal dispatch of multi-region integrated energy system considering heating network constraints and carbon trading,’’ Autom. Electr. Power _Syst., vol. 43, no. 19, pp. 52–59, 2019._ [22] H. Y. Zhou, W. H. Qian, J. J. Bo, Z. N. Wei, G. Q. Sun, and H. X. Zang, ‘‘Analysis of typical application scenarios and project practice of energy blockchain,’’ Electr. Power Construct., vol. 41, no. 2, pp. 11–20, 2020. [23] M. T. Yang, B. 
X. Zhou, S. Dong, N. Lin, Z. G. Li, and F. Y. He, ‘‘Design and dispatch optimization of microgrid electricity market supported by blockchain,’’ Electr. Power Autom. Equip., vol. 39, no. 12, pp. 155–161, 2019. [24] B. X. Zhou, M. T. Yang, S. Q. Shi, J. X. Wei, Z. G. Li, and S. Dong, ‘‘Blockchain based potential game model of microgrid market,’’ Autom. _Electr. Power Syst., vol. 44, no. 7, pp. 15–22, 2020._ [25] Q. S. Li, X. Y. Tang, and Q. M. Zhao, ‘‘Analysis of applying weak centralized blockchain technology in energy trading system of energy Internet,’’ _Power Syst. Big Data, vol. 22, no. 6, pp. 22–27, 2019._ [26] N. T. Huang, J. R. Q. Bao, G. W. Cai, S. Y. Zhao, D. B. Liu, J. S. Wang, and P. P. Wang, ‘‘Multi-agent joint investment in microgrid source-storage multi-strategy bounded rational decision-making evolutionary game capacity planning,’’ Proc. CSEE, vol. 40, no. 4, pp. 1212–1224, 2020. [27] C. H. Peng, K. Qian, and J. L. Yan, ‘‘Differential evolutionary game bidding strategy on generation side under new energy grid-connected environment,’’ Power Syst. Technol., vol. 43, no. 6, pp. 2002–2009, 2019. [28] H. B. Cheng, T. Luo, C. Kang, X. Tian, Y. X. Guo, and X. Wang, ‘‘Multilayer game bidding model for electric vehicle aggregators participating in demand response,’’ Adv. Technol. Electr. Eng. Energy, vol. 39, no. 2, pp. 46–56, 2020.

XIAOLIN FU was born in Jinan, China. She received the bachelor's degree in 2018 and is currently pursuing the master's degree at Shanghai Dianji University, where she serves as the Head of the Academic Department of the Graduate Association. She has published an article in an international journal and has participated in the compilation of a book. Her research direction is smart grid multi-layer transaction and collaborative scheduling strategies based on blockchain. She participated in the Shanghai Green Motors and Intelligent Manufacturing graduate academic forum and won the third prize in 2018. In 2019, she won a national scholarship. In October 2019, she visited the French Higher School of Science, Technology and Economics and the University of Applied Sciences, Kaiserslautern, Germany, and participated in the 2019 Sino-German Intelligent Manufacturing Technology Postgraduate Academic Forum, winning the first prize.

HONG WANG received the bachelor's degree in 2017 and the master's degree in electrical and electronic engineering from Strathclyde University in 2018. He is currently pursuing the MPAcc degree at Tongji University, studying accounting, auditing and financial management. He has published two international conference papers, participated in the compilation of a book, and holds a total of eight invention patents and utility model patents. His research direction is the application of blockchain in the power economy and the power market.

ZHIJIE WANG carried out postdoctoral research on power transmission and new energy power generation technology at the postdoctoral station of electrical engineering of the China University of Mining and Technology in 2005. He is an Academic Leader of the key disciplines of power electronics and power transmission of the Shanghai Institute of Electrical Engineering, a recipient of the Shanghai Talent Development Fund Program, and a Shanghai Electric Group Technology Leader.
He participated in the completion of the National 863 Program sub-project on optimal control strategies for robot patrol system path planning and the National Natural Science Foundation of China sub-project on decision-making methods for service robots based on information fusion technology, and has presided over more than ten projects, including the Shanghai Natural Science Foundation project, the Shanghai Municipal Science and Technology Commission project, and the Shanghai Talent Development Fund project. His main research direction is energy Internet collaborative optimization dispatch control and active distribution network technology based on blockchain.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/ACCESS.2020.3017694?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/ACCESS.2020.3017694, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://ieeexplore.ieee.org/ielx7/6287639/8948470/09170507.pdf" }
2,020
[ "JournalArticle" ]
true
null
[ { "paperId": "fcc90c261e344ea95a300894e96d079e26196ffc", "title": "Decentralized security-constrained economic dispatch for global energy internet and practice in Northeast Asia" }, { "paperId": "7b2aec6db016b4caedc6309c35d891fc4b7c3cd8", "title": "Design of Integer Chaotic Key Generator for Wireless Sensor Network" }, { "paperId": null, "title": "Two-tier optimal dispatch of multimicrogrids considering interactive power control and bilateral bidding transactions" }, { "paperId": null, "title": "‘‘Multi-layer game bidding model for electric vehicle aggregators participating in demand response,’’" }, { "paperId": null, "title": "Multi-agent joint investment in microgrid source-storagemulti-strategy bounded rational decision-making evolutionary game capacity planning" }, { "paperId": null, "title": "‘‘Blockchain based potential game model of microgrid market,’’" }, { "paperId": null, "title": "‘‘Analysis of typical application scenarios and project practice of energy blockchain,’’" }, { "paperId": null, "title": "Survey of smart contract security,’’Chin" }, { "paperId": null, "title": "‘‘Optimal dispatch of multi-region integrated energy system considering heating network constraints and carbon trading,’’" }, { "paperId": null, "title": "‘‘Design and dispatch optimization of microgrid electricity market supported by blockchain,’’" }, { "paperId": null, "title": "‘‘Analysis of applying weak centralized blockchain technology in energy trading system of energy Internet,’’" }, { "paperId": null, "title": "‘‘Distributed optimization of power grid considering EV swap station scheduling and blockchain data storage,’’" }, { "paperId": null, "title": "‘‘Differential evolutionary game bidding strategy on generation side under new energy grid-connected environment,’’" }, { "paperId": null, "title": "‘‘Design and simulation of long-term trading mode of multinational electricity market under the background of global energy Internet,’’" }, { "paperId": null, "title": "‘‘Preliminary study on transnational power interconnection model and technical feasibility in Northeast Asia,’’" }, { "paperId": null, "title": "‘‘Multi-energy system transaction system and key technology based on heterogeneous blockchain,’’" }, { "paperId": null, "title": "‘‘Application of automatic demand response system based on blockchain technology,’’" }, { "paperId": null, "title": "‘‘A preliminary study on the multi-module collaborative autonomy model of theenergyInternetundertheblockchainframework,’’" }, { "paperId": null, "title": "‘‘Researchontheapplicationmodeof blockchaintechnologyincomprehensivedemandsideresponseresourcetrading,’’" }, { "paperId": null, "title": "‘‘Research on maturity model of cross-border electricity trading market based on global energy Internet,’’" }, { "paperId": null, "title": "‘‘Blockchain in energy sys-tem: Concept, application and outlook,’’" }, { "paperId": null, "title": "Analysis and prospect of application status of deep learning in smart grid ‘ Research and prospects on transnational and intercontinental interconnection of global energy Internet" }, { "paperId": null, "title": "‘‘Blockchain-basedpowertransactionandcongestionmanagementmethodinenergyInternet,’’" }, { "paperId": null, "title": "The development status and prospect of blockchain consensus algorithm" }, { "paperId": null, "title": "Energy blockchain underlying technology suitable for convex optimization scenarios of power systems ‘ A consensus mechanism for blockchain dynamic authorization" }, { "paperId": null, "title": "Survey of 
smart contract security Research on smart contract security vulnerability mining technology" } ]
12,775
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01b92e8b9b8c0a7f67fda9e34a9d4a54ef2d4da9
[ "Computer Science" ]
0.879208
Fine-Filtered Attributed Key Based Data Storage in Cloud Computing
01b92e8b9b8c0a7f67fda9e34a9d4a54ef2d4da9
[ { "authorId": "144042460", "name": "L. Krishna" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
null
Vol.2, No.3, September 30 (2016), pp. 35-41
http://dx.doi.org/10.21742/apjcri.2016.09.05

# Fine-Filtered Attributed Key Based Data Storage in Cloud Computing

## Lohit Krishna 1)

Received (May 23, 2016), Review Result (1st: June 10, 2016, 2nd: July 15, 2016), Accepted (September 10, 2016)
1) (Corresponding Author) Machine Intelligence Research Labs, India, email: lohitkrishna39@gmail.com

Abstract

With the growth of cloud computing, outsourcing data to cloud servers has attracted considerable attention. To protect data and achieve flexible, fine-grained file access control, attribute-based encryption (ABE) was proposed and applied in cloud storage systems. However, user revocation remains the central open problem in ABE schemes. In this article, we present a ciphertext-policy attribute-based encryption (CP-ABE) scheme with efficient user revocation for cloud storage systems. The revocation problem is solved efficiently by introducing the concept of a user group: when any user leaves, the group manager updates the private keys of all users except those who have been revoked. In addition, CP-ABE schemes carry a heavy computation cost that grows linearly with the complexity of the access structure. To reduce this cost, we outsource the heavy computation to cloud service providers without leaking file contents or secret keys. Notably, our scheme withstands collusion attacks mounted by revoked users cooperating with existing users. We prove the security of our scheme under the divisible computation Diffie-Hellman (DCDH) assumption. Our analysis shows that the computation cost on local devices is low and close to constant, so the scheme is suitable for resource-constrained devices.

Keywords: cloud computing, attribute-based encryption, outsourced decryption, user revocation, collusion attack

## 1. Introduction

Cloud computing is viewed as a prospective computing paradigm in which resources are provided as services over the Internet. It has met enterprises' growing needs for computing and storage resources thanks to its advantages of economy, scalability, and availability. Recently, several cloud storage services, such as Microsoft Azure and Google App Engine, have been built to supply users with scalable and dynamic storage.

With the increase of sensitive data outsourced to the cloud, cloud storage services face many challenges, including data security and data access control. To address these issues, attribute-based encryption (ABE) schemes [1-3] have been applied to cloud storage services. Sahai and Waters [1] first proposed an ABE scheme, named fuzzy identity-based encryption, which is derived from identity-based encryption (IBE) [4]. As a newly proposed cryptographic primitive, ABE not only keeps the advantages of IBE but also provides "one-to-many" encryption. ABE schemes mainly fall into two classes: ciphertext-policy ABE (CP-ABE) [2] and key-policy ABE (KP-ABE) [3]. In CP-ABE, ciphertexts are associated with access policies and users' private keys are associated with attribute sets; a user can decrypt a ciphertext if his attributes satisfy the access policy embedded in the ciphertext.
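To make this decryption condition concrete, here is a minimal Python sketch of testing whether an attribute set satisfies a threshold access tree. It is purely illustrative: the names are ours, and a real CP-ABE scheme such as [2] enforces this condition inside the cryptography rather than as an explicit check.

```python
# Illustrative only: the CP-ABE decryption condition "the user's
# attributes satisfy the access policy", modeled as an explicit check.
# A policy is either a leaf attribute (string) or a threshold gate
# (k, [children]): satisfied when at least k children are satisfied.
# k = len(children) models an AND gate; k = 1 models an OR gate.

def satisfies(policy, attributes):
    if isinstance(policy, str):          # leaf node
        return policy in attributes
    k, children = policy                 # threshold gate
    return sum(satisfies(c, attributes) for c in children) >= k

# "(professor AND cryptography) OR admin"
policy = (1, [(2, ["professor", "cryptography"]), "admin"])
print(satisfies(policy, {"professor", "cryptography"}))  # True
print(satisfies(policy, {"student", "cryptography"}))    # False
```

Threshold gates of this kind are the standard way monotone access structures are expressed in tree-based CP-ABE constructions such as [2].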
KP-ABE reverses this association. CP-ABE is better suited to the data-outsourcing setting than KP-ABE, because in CP-ABE the access policy is defined by the data owners. In this article, we present an efficient CP-ABE scheme with user revocation capability.

## 2. Proposed System

2.1 Related Work

Although ABE has demonstrated its merits, user revocation and attribute revocation are the primary concerns. The revocation problem is considerably harder in CP-ABE schemes, because each attribute is shared by many users; revoking any attribute or any single user may therefore affect the other users in the system. Recently, several works [5-9] have been proposed to address this problem efficiently. Boldyreva et al. [5] gave an IBE scheme with efficient revocation, which is also suitable for KP-ABE; however, it is unclear whether their scheme is suitable for CP-ABE. Yu et al. [6] gave an attribute-based data sharing scheme with attribute revocation capability. The scheme was proven secure against chosen plaintext attacks (CPA) under the DBDH assumption, but the lengths of the ciphertext and of a user's private key are proportional to the number of attributes in the attribute universe, and key generation, encryption, and decryption all involve every attribute in the universe, which makes communication and computation expensive for users. Tysowski et al. [8] gave a simple method to perform user revocation by combining CP-ABE with re-encryption. In their scheme, every user belongs to a group and holds a group secret key issued by the group. However, their scheme does not prevent collusion attacks performed by revoked users cooperating with existing users: because every user's group secret key is the same within a group, the attributes of the revoked users can be exploited by any user in the same group who lacks the specified attributes. Moreover, we point out that the same security risk exists in the schemes of [7] and [9].

2.2 Existing System

Boldyreva et al. [5] gave an IBE scheme with efficient revocation that is also suitable for KP-ABE, but, as noted above, it is unclear whether it carries over to CP-ABE. Yu et al. [6] gave an attribute-based data sharing scheme with attribute revocation that is CPA secure under the DBDH assumption, at the cost of ciphertexts and private keys whose length grows with the entire attribute universe. Yu et al. [6] also designed a KP-ABE scheme with fine-grained data access control, which requires the root node of the access tree to be an AND gate with one child being a leaf node associated with a dummy attribute. In the existing group-based scheme, when a user leaves a user group, the group manager only revokes his group secret key, which means that the user's attribute-related private key remains valid [10-12]. If someone in the group deliberately exposes the group secret key to the revoked user, the revoked user can again perform decryption operations using his own private key [13-15].
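The following minimal sketch (again our own illustration with hypothetical names; real schemes enforce these checks cryptographically, and `satisfies` here is a flat variant of the earlier helper) models why a single shared group secret key is fragile under exactly this kind of collusion:

```python
# Toy model of the flaw described above: decryption requires BOTH a
# satisfying attribute set AND the current group secret key, and every
# member of a group holds the SAME group key.
import secrets

def satisfies(policy, attrs):
    # Flat threshold policy: (k, leaves) holds when >= k leaves match.
    k, leaves = policy
    return sum(a in attrs for a in leaves) >= k

class GroupManager:
    def __init__(self):
        self.group_key = secrets.token_hex(8)
        self.members = set()

    def join(self, uid):
        self.members.add(uid)
        return self.group_key                 # same key for everyone

    def revoke(self, uid):
        self.members.discard(uid)
        self.group_key = secrets.token_hex(8)             # rotate key
        return {u: self.group_key for u in self.members}  # re-issue

def can_decrypt(attrs, group_key, policy, current_key):
    return satisfies(policy, attrs) and group_key == current_key

gm = GroupManager()
k1, k2 = gm.join("user1"), gm.join("user2")
policy = (2, ["professor", "cryptography"])       # professor AND crypto
attrs1 = {"male", "professor", "cryptography"}    # satisfies the policy
attrs2 = {"male", "student", "cryptography"}      # does not

k2 = gm.revoke("user1")["user2"]                  # user1 leaves the group
print(can_decrypt(attrs1, k1, policy, gm.group_key))  # False: stale key
print(can_decrypt(attrs2, k2, policy, gm.group_key))  # False: bad attrs
# Collusion: user2 leaks the fresh group key to the revoked user1.
print(can_decrypt(attrs1, k2, policy, gm.group_key))  # True: attack works
```

The final line is precisely the collusion scenario the prose example below walks through.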
To illustrate this attack, consider a concrete example. Assume the data is encrypted under the policy "professor AND cryptography" and the group public key, and suppose there are two users, user1 and user2, whose private keys are associated with the attribute sets {male, professor, cryptography} and {male, student, cryptography} respectively. If both are in the group and hold the group secret key, then user1 can decrypt the data but user2 cannot. When user1 is revoked from the group, he can no longer decrypt alone, because he does not have the updated group secret key. However, user1's attributes are not revoked and user2 does have the updated group secret key, so user1 can collude with user2 to perform the decryption operation. Moreover, no security model or proof was given in their scheme [16-20].

2.2.1 Disadvantages of the Existing System

It is expensive in communication and computation cost for users. ABE schemes impose a high computation overhead when performing encryption and decryption operations, and this drawback becomes more severe for lightweight devices because of their constrained computing resources. There is also a major limitation to single-authority ABE, as in IBE: every user must authenticate himself to the authority, prove that he holds a certain attribute set, and then obtain a secret key associated with each of those attributes. The authority must therefore be trusted to monitor all of the attributes, which is impractical and cumbersome for the authority.

2.3 Proposed System

In this work, we focus on designing a CP-ABE scheme with efficient user revocation for cloud storage systems. We model the collusion attack performed by revoked users cooperating with existing users, construct an efficient user-revocable CP-ABE scheme by improving the existing scheme, and prove that our scheme is CPA secure under the specified model. To fix the existing security flaw, we embed a certificate into every user's private key; in this way, each user's group secret key is different from the others' and is bound to his attribute-related private key. To reduce users' computation loads, we introduce two cloud service providers: an encryption cloud service provider (E-CSP), whose role is to perform the outsourced encryption operation, and a decryption cloud service provider (D-CSP), whose role is to perform the outsourced decryption operation. In the encryption phase, the operation related to the dummy attribute is performed locally, while the operation related to the sub-tree is outsourced to the E-CSP.

2.3.1 Advantages of the Proposed System

The heavy computation load on users is reduced: we outsource the majority of the computation to the E-CSP and D-CSP and leave only a small computation cost to local devices, so the scheme is efficient for resource-constrained devices such as mobile phones. Our scheme can be used in cloud storage systems that require user revocation and fine-grained access control.

[Fig. 1] System Architecture
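As rough intuition for how a D-CSP can shoulder the heavy decryption work without learning anything useful, the sketch below applies the key-blinding idea behind outsourced decryption [11] to a toy ElGamal group with tiny parameters. This is an assumption-laden analogy of ours, not the paper's pairing-based construction.

```python
# Toy sketch of outsourced decryption by key blinding, in the spirit of
# Green et al. [11]. Small-number ElGamal, for readability only.

p, g = 467, 2                      # toy prime modulus and base
x = 127                            # user's long-term secret key
h = pow(g, x, p)                   # corresponding public key

# Encrypt m under the public key: (c1, c2) = (g^r, m * h^r)
m, r = 123, 51
c1, c2 = pow(g, r, p), (m * pow(h, r, p)) % p

# User blinds the key: hands tk = x / z (mod p-1) to the D-CSP.
z = 5                              # blinding factor, coprime with p-1
tk = (x * pow(z, -1, p - 1)) % (p - 1)

# D-CSP performs the heavy exponentiation; alone, tk reveals neither
# x nor the plaintext:
partial = pow(c1, tk, p)           # = g^(r*x/z)

# User finishes with one cheap exponentiation and one inversion:
shared = pow(partial, z, p)        # = g^(r*x) = h^r
recovered = (c2 * pow(shared, -1, p)) % p
print(recovered == m)              # True
```

The user's only online cost is one exponentiation and one modular inversion, which is the kind of "low and close to constant" local cost claimed above.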
## 3. Conclusion

In this article, we gave a formal definition and security model for CP-ABE with user revocation, and we constructed a concrete CP-ABE scheme that is CPA secure under the DCDH assumption. To resist collusion attacks, we embed a certificate into each user's private key, so that malicious users and revoked users cannot produce a valid private key by combining their private keys. In addition, we outsource the operations with high computation cost to the E-CSP and D-CSP to reduce users' computation loads. By applying this outsourcing framework, the computation cost for local devices is much lower and relatively fixed. The results of our analysis show that our scheme is efficient for resource-constrained devices.

References

[1] A. Sahai and B. Waters, Fuzzy Identity-Based Encryption, EUROCRYPT '05, LNCS, (2005), Vol.3494, pp.457-473.
[2] J. Bethencourt, A. Sahai and B. Waters, Ciphertext-Policy Attribute-Based Encryption, Proc. IEEE Symposium on Security and Privacy, (2007) May, pp.321-334, doi: 10.1109/SP.2007.11.
[3] V. Goyal, O. Pandey, A. Sahai, and B. Waters, Attribute-Based Encryption for Fine-Grained Access Control of Encrypted Data, Proc. 13th ACM Conference on Computer and Communications Security (CCS '06), (2006), pp.89-98, doi: 10.1145/1180405.1180418.
[4] D. Boneh and M. K. Franklin, Identity-Based Encryption from the Weil Pairing, SIAM Journal of Computing, (2003), Vol.32, No.3, pp.586-615.
[5] A. Boldyreva, V. Goyal, and V. Kumar, Identity-Based Encryption with Efficient Revocation, Proc. 15th ACM Conference on Computer and Communications Security (CCS '08), (2008), pp.417-426.
[6] S. Yu, C. Wang, K. Ren, and W. Lou, Attribute Based Data Sharing with Attribute Revocation, Proc. 5th ACM Symposium on Information, Computer and Communications Security (ASIACCS '10), (2010), pp.261-270.
[7] M. Yang, F. Liu, J. Han, and Z. Wang, An Efficient Attribute based Encryption Scheme with Revocation for Outsourced Data Sharing Control, Proc. 2011 International Conference on Instrumentation, Measurement, Computer, Communication and Control, (2011), pp.516-520.
[8] P. K. Tysowski and M. A. Hasan, Hybrid Attribute-Based Encryption and Re-Encryption for Scalable Mobile Applications in Clouds, IEEE Transactions on Cloud Computing, (2013), pp.172-186.
[9] J. Hur and D. K. Noh, Attribute-Based Access Control with Efficient Revocation in Data Outsourcing Systems, IEEE Transactions on Parallel and Distributed Systems, (2011), pp.1214-1221.
[10] S. Yu, C. Wang, K. Ren, and W. Lou, Achieving Secure, Scalable, and Fine-Grained Data Access Control in Cloud Computing, Proc. of IEEE INFOCOM '10, (2010), pp.1-9.
[11] M. Green, S. Hohenberger and B. Waters, Outsourcing the Decryption of ABE Ciphertexts, Proc. 20th USENIX Conference on Security (SEC '11), (2011), pp.34.
[12] J. Li, X. F. Chen, J. W. Li, C. F. Jia, J. F. Ma, and W. J. Lou, Fine-Grained Access Control System Based on Outsourced Attribute-Based Encryption, Proc. 18th European Symposium on Research in Computer Security (ESORICS '13), LNCS 8134, Berlin: Springer-Verlag, (2013), pp.592-609.
[13] J. W. Li, C. F. Jia, J. Li, and X. F. Chen, Outsourcing Encryption of Attribute-Based Encryption with MapReduce, Proc. 14th International Conference on Information and Communications Security (ICICS '12), LNCS 7618, Berlin: Springer-Verlag, (2012), pp.191-201, doi: 10.1007/978-3-642-34129-8_17.
[14] M. Chase, Multi-authority Attribute Based Encryption, Proc. 4th Theory of Cryptography Conference (TCC '07), LNCS 4392, Berlin: Springer-Verlag, (2007), pp.515-534.
[15] Z. Liu, Z. Cao, Q. Huang, D. S. Wong, and T. H. Yuen, Fully Secure Multi-Authority Ciphertext-Policy Attribute-Based Encryption without Random Oracles, Proc. 16th European Symposium on Research in Computer Security (ESORICS '11), LNCS 6879, Berlin: Springer-Verlag, (2011), pp.278-297.
[16] J. G. Han, W. Susilo, Y. Mu, and J. Yan, Privacy-Preserving Decentralized Key-Policy Attribute-Based Encryption, IEEE Transactions on Parallel and Distributed Systems, (2012), Vol.23, No.11, pp.2150-2162, doi: 10.1109/TPDS.2012.50.
[17] H. L. Qian, J. G. Li, and Y. C. Zhang, Privacy-Preserving Decentralized Ciphertext-Policy Attribute-Based Encryption with Fully Hidden Access Structure, Proc. 15th International Conference on Information and Communications Security (ICICS '13), LNCS 8233, Berlin: Springer-Verlag, (2013), pp.363-372.
[18] H. L. Qian, J. G. Li, Y. C. Zhang, and J. G. Han, Privacy Preserving Personal Health Record Using Multi-Authority Attribute-Based Encryption with Revocation, International Journal of Information Security, doi: 10.1007/s10207-014-0270-9.
[19] Z. Liu, Z. F. Cao, and D. S. Wong, Blackbox Traceable CP-ABE: How to Catch People Leaking Their Keys by Selling Decryption Devices on eBay, Proc. 2013 ACM SIGSAC Conference on Computer and Communications Security (CCS '13), (2013), pp.475-486, doi: 10.1145/2508859.2516683.
[20] Z. Liu, Z. F. Cao, and D. S. Wong, White-Box Traceable Ciphertext-Policy Attribute-Based Encryption Supporting Any Monotone Access Structures, IEEE Transactions on Information Forensics and Security, (2013), Vol.8, No.1, pp.76-88, doi: 10.1109/TIFS.2012.2223683.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.21742/APJCRI.2016.09.05?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.21742/APJCRI.2016.09.05, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://doi.org/10.21742/apjcri.2016.09.05" }
2,016
[]
true
2016-09-30T00:00:00
[]
4,290
en
[ { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Sociology", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01b9dc24c13e77b4611c2daa43cce60c3b281af0
[]
0.951387
A Short, Qualitative Analysis Of Virtual Private Networks
01b9dc24c13e77b4611c2daa43cce60c3b281af0
[ { "authorId": "2138835367", "name": "Alexandra Bonder" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
This paper provides an overview of the current state of Virtual Private Networks (VPNs) by combining a general analysis of key issues with the perspectives of employees working at five popular VPN companies. This paper argues that VPN technology cannot be analyzed in a meaningful way without reference to the values and motivations of the people of whom the companies are composed. A key finding is the differences observed between different employees' understanding of terms essential to VPN competence: "security" and "privacy". These differences highlight the difficulty of judging VPNs objectively, as their perceived functionality ultimately depends on an affective alignment of values between user and company.
A SHORT, QUALITATIVE ANALYSIS OF VIRTUAL PRIVATE NETWORKS

By Alexandra Bonder
Bachelor of Humanities, Carleton University, 2010

A major research paper presented to Ryerson University and York University in partial fulfillment of the requirements for the degree of Master of Arts in the joint program of Communication and Culture
Toronto, Ontario, Canada, 2018
©Alexandra Bonder, 2018

**Author's Declaration**

I hereby declare that I am the sole author of this MRP. This is a true copy of the MRP, including any required final revisions. I authorize Ryerson University to lend this MRP to other institutions or individuals for the purpose of scholarly research. I further authorize Ryerson University to reproduce this MRP by photocopying or by other means, in total or in part, at the request of other institutions or individuals for the purpose of scholarly research. I understand that my MRP may be made electronically available to the public.

A SHORT, QUALITATIVE ANALYSIS OF VIRTUAL PRIVATE NETWORKS
Master of Arts, 2018
Alexandra Bonder
Communication and Culture
Ryerson University and York University

**ABSTRACT**

This paper provides an overview of the current state of Virtual Private Networks (VPNs) by combining a general analysis of key issues with the perspectives of employees working at five popular VPN companies. This paper argues that VPN technology cannot be analyzed in a meaningful way without reference to the values and motivations of the people of whom the companies are composed. A key finding is the differences observed between different employees' understanding of terms essential to VPN competence: "security" and "privacy". These differences highlight the difficulty of judging VPNs objectively, as their perceived functionality ultimately depends on an affective alignment of values between user and company.

**Acknowledgements**

I would like to acknowledge my MRP supervisor, Professor Gregory Elmer, who has supported me throughout my time at Ryerson University, always encouraging me to think critically and creatively. I would also like to acknowledge my second reader, Professor Catherine Middleton, who reignited my curiosity about communication policy upon my return to academia. Lastly, but very importantly, I would like to acknowledge my father, Arieh Bonder, whose unwavering belief in me has allowed me to complete this project.

**Table of Contents**

Author's Declaration
Abstract
Acknowledgements
Introduction
Background
A Brief History Of VPNs
The Depiction Of Personal-Use VPNs In The Media
Main Criticisms Of VPNs
Theoretical Framework for the Analysis of the Underlying Philosophies Of VPNs
Interview Overview
Interview Questions And Methodology
Quote Style
Results And Analysis
Trust
Values
Conclusion
Works Cited

INTRODUCTION

In 1998, writing on Wired.com, in a column dedicated to "deflating this month's overblown memes", author Steve Steinberg described Virtual Private Networks (VPNs) as a fad with a life expectancy of 18 months.
"The wonderful thing about virtual private networks," he wrote, "is that its myriad of definitions give every company a fair chance to claim that its existing product is actually a VPN. But no matter what definition you choose, the networking buzz-phrase doesn't make sense. The idea is to create a private network via tunneling and/or encryption over the public Internet. Sure, it's a lot cheaper than using your own frame relay connections, but it works about as well as sticking cotton in your ears in Times Square and pretending nobody else is around."

Twenty years later, VPNs still exist and thrive, being used by millions of individual users all over the world. Though once used primarily by businesses to provide secure, remote server access to employees, individuals are now using VPNs for purposes that go far beyond their original corporate roles, from accessing geo-blocked content to evading state-sanctioned censorship of social media sites and more (Longworth). And yet, many of the same issues alluded to in the above short quote remain true today. The aim of this paper is to get a better appreciation of the intentions of VPN creators in order to gain a more holistic understanding of VPNs beyond the technology itself. As the political theorist Michel Foucault says, the effects of power are not only negative; rather, power creates its own reality (194). If this is the case, VPNs do not only create free spaces, but also inject these spaces with meaning. What type of meaning does the VPN world hold, and, on a larger scale, what impact do VPNs have on online security, beyond simply upholding concepts of a "free Internet"? Ultimately, looking at VPNs through this lens may help us to define VPNs more accurately, and help to determine if they can indeed act as powerful tools for online security and privacy.

In order to do this, I have chosen to conduct interviews, because the competence of VPN technology is highly reliant on how it is being administered. And though it is never possible to completely understand the true intentions of those working at VPN companies, I hope to provide a small peek into their values and motivations and how these may potentially inform the functionality of their product.

BACKGROUND

What exactly is a VPN? And are VPNs actually effective in providing the security and privacy for citizens that they claim to offer? Are the promises made by VPNs, in fact, more hype than substance? The purpose of VPNs is to provide subscribers with secure and private Internet connections. This is carried out through the application of security protocols, most commonly the use of "tunneling", the hiding of a user's IP address, and the encryption of data (Microsoft 2001). Tunneling in personal-use VPNs generally refers to the transmission of VPN protocols, encapsulated with more VPN protocols, over a protected network. This ensures that whatever is being passed over the network is kept private until it is received on the other side of the network ("How VPN Works").

As governments and corporations attempt to restrict and influence Internet access, VPNs are widely seen and used as a tool to fight back against Internet constraints, and to keep the Internet "free" (Chen) (Amnesty International).
Freedom can be defined in many different ways: it can be in reference to the Internet as it was first conceived, i.e. with a lack of centralized control, or it can be defined in its democratic sense, i.e. a space that allows for freedom of speech and expression (Amnesty International). Though VPNs have been outlawed or heavily restricted in many countries, for example in China and Russia, this has not stopped their user bases from growing ("VPN Market Worth $41.702 Billion"). Their lack of regulation is essential to their use in some countries as a tool of resistance. However, the lack of information about them, because of the absence of regulation, also leaves users vulnerable to security risks, as they have little way of knowing how secure and competent the service provided is. This reality was brought to light in a 2016 study by Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO), which revealed that many of the most popular VPNs actually do the exact opposite of what they were assumed to do. In many cases, rather than providing privacy, they tracked user data, failed to encrypt Internet traffic, and even shared and sold user data to third parties (CDT) (Ikram) (White). As Internet security expert Kevin Wriggle said in a TechDirt podcast on the subject, "The median VPNs are somewhere between incompetent and actively malicious…" From this alone, it can be inferred that VPNs, as they exist today, do not categorically provide privacy and security for citizens. And so, the "freedom" implied by the use of VPNs cannot be assured. Is this lack of security an intrinsic shortcoming of VPN technology, or the "incompetence" of subpar developers? Or rather, is a VPN's level of privacy and security a conscious choice of its creators? If so, what are the reasons behind their choices, and what are the effects of these choices on their products?

Though studies have looked at the technological functioning of VPNs, none have looked at their moral positioning and the political implications of this, information which could help to better understand the reasons behind some of their shortcomings, as well as shed light on their potential strengths as an Internet security tool for everyday citizens (Ikram) (CSIRO). I use the term "motivations" to describe the subjective positioning of those working at VPN companies, and "political" to describe the implications of this positioning when it comes to the choices that are being made. I would argue that the value of a VPN is at least partially determined by its application. In other words, VPNs, as a category, cannot be judged by their technology alone. They must also be evaluated within the context of their creation, which includes examining the subjective political/ethical positioning of their creators. When the Australian CSIRO first published their 2016 study, at least one VPN company, TunnelBear, took the initiative to hire a third-party security auditing group to evaluate the legitimacy of their platform and to confirm its ability to provide security and privacy (TunnelBear 2017). TunnelBear did this despite claiming to have had a 200% increase in sales due to media coverage of the United States' Federal Communications Commission's (FCC) "attack" on net neutrality (Silverman). This seems to have proven genuine interest in the quality of their product, an ethical stance, that allowed them to improve their product. However, one might ask why it took negative publicity to instigate action. Understanding the "why" might help to better predict how VPN companies could seek to improve themselves in the future.
A BRIEF HISTORY OF VPNS

As outlined by Janet Abbate in the book Inventing the Internet, the Internet was initially created to establish a more secure communication infrastructure in case of a major terrorist attack (2). Before its creation, military communication relied on telecommunications infrastructure that transmitted data through centralized hubs (4). Basically, data was passed through a single series of hubs from one party to another. The centralization of these hubs made this type of communication physically vulnerable: if one hub were to go down, communication between two parties would be terminated (6). The Internet, on the other hand, passes data through millions of routers or "Secure Internet Servers". Even if hundreds of routers were to go down, data could automatically re-route itself along a different series of routers to get to its target. The only problem is, although the Internet is more physically secure than traditional telecommunications infrastructure, data-wise it is not (Gupta, 5). Each router through which data passes can be easily accessed; its content can be viewed by those maintaining the server, making it vulnerable to security threats (for example, hacking or government surveillance) (Gupta, 4). VPNs were created to help remedy these vulnerabilities, and were originally used as an affordable way for organizations to connect remote points, such as users, databases, or whole offices, to an organization's central secured network (Mohta) (LaBorde) (Dawson). Many cite the first VPN as being created in 1995 by Gurdeep Singh-Pall, who is currently Vice President of Skype at Microsoft, but who was, at that time, a Microsoft computer engineer (Crunchbase).

So what exactly is a VPN? Internet security giant, and one of the first VPN providers, Cisco provided a "common sense and simple" definition for VPNs in 1998: "A VPN is a private network constructed within a public network infrastructure, such as the global Internet." In other words, a VPN is a secure and private space created within the larger, open Internet (Robinson). VPN expert John Longworth provides a little more detail, defining them as follows:

VPNs are used to protect data from being accessed or altered as it travels over another network (e.g. the Internet). This is possible through the use of a wide variety of computer protocols that securely 'wrap' your data in a layer of encryption and ensure that the destination for that encrypted data is authenticated (i.e. the person or system is who it says it is) and authorised (allowed) to 'unwrap' it.

In other words, VPNs allow users to securely access a private network and also share data remotely. VPNs work by combining security protocols and layers of encryption. For example, a VPN usually uses "tunneling" protocols, which, in common terms, means creating a virtual "tunnel" between routers (Norton). These "tunnels" create a private network within the larger open Internet through which data can be passed. In addition, if the VPN detects it is being attacked, it will automatically re-route, creating a new protected tunnel along a different set of routers (Upfal). The information within the tunnel is encrypted, so even if attackers penetrate the tunnel, it would be difficult to decipher the data carried within (Norton). There is also ideally a layer of "authentication" to ensure you are who you say you are, which prevents anyone else from intercepting your communications disguised as you.
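As a toy illustration of the "wrapping" described above, the following Python sketch (our own; it assumes the third-party `cryptography` package, and real protocols such as IPsec, OpenVPN, or WireGuard are far more involved) encrypts an inner packet and addresses the outer packet to the VPN server. It also previews the IP-masking effect discussed next.

```python
# Toy model of VPN tunneling: an inner packet (real destination plus
# payload) is wrapped, encrypted, inside an outer packet addressed to
# the VPN server. All names and addresses here are illustrative.
import json
from cryptography.fernet import Fernet

tunnel_key = Fernet.generate_key()   # agreed during VPN authentication
tunnel = Fernet(tunnel_key)

inner = {"dst": "203.0.113.9", "payload": "GET /index.html"}
outer = {
    "src": "198.51.100.20",          # the user's real address
    "dst": "192.0.2.1",              # the VPN server
    "data": tunnel.encrypt(json.dumps(inner).encode()).decode(),
}

# An on-path observer sees only user <-> VPN server and opaque bytes:
print(outer["src"], "->", outer["dst"], len(outer["data"]), "bytes")

# The VPN server unwraps and forwards; the site sees the server's IP:
unwrapped = json.loads(tunnel.decrypt(outer["data"].encode()))
print("forwarded from 192.0.2.1 ->", unwrapped["dst"])
```

Real VPN clients do this wrapping at the network layer rather than on JSON dictionaries, but the division of knowledge is the same: the observer learns only the endpoints of the tunnel, while the VPN server learns everything.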
One important effect of VPNs is that, to outsiders, your IP address will appear to come from wherever the VPN server is located. An IP address is a unique string of numbers ----- that identifies your specific computer over a network. It also holds information about where your computer is geographically located. This allows Internet Service Providers (ISP), the government, or other regulatory bodies to create barriers around your Internet experience. For example, because of copyright laws, certain shows on Netflix might only be available in the US versus Canada. It is important that those in the “private” network established by a VPN are not only unaware of the content of the data, but of the private relationship itself (Cisco). As Internet security protocols and encryptions are constantly being updated, a well-functioning VPN will use the most secure and up-to-date ones, in order to maintain the functions mentioned above, intrinsic to competent functioning (Cisco). This is why VPN companies often advertise the fact that they do not keep “logs” (i.e. personal details) of user data (Nord) (TorGuard) (TunnelBear). Using most commercial VPNs is relatively easy. An individual will download a client VPN on to their computer, usually logging in with a username and password. This will connect the individual to a server VPN. The individual’s IP address will now appear to be that of the server VPN (Microsoft). The user can now, supposedly, carry out his or her Internet activities with complete confidence about the security of their transactions. THE DEPICTION OF PERSONAL-USE VPNS IN THE MEDIA For the purposes of this paper, I will define “personal-use VPN” as a VPN that is being used by a private user, versus a VPN that has been established by a company or organization. It is difficult to find information as to when the first personal-use VPNs began to gain popularity, but articles describing personal-use VPNs seemed to gain mention in the mid to late 2000s. News articles from major outlets during that time ----- usually mentioned VPNs as tools to combat Internet restraint and censorship. For example, a 2011 BBC article, quoted VPN, Hotspot Shield as reporting a 1000% increase in usage during the Arab Spring (“Turkish people turn to VPNs”). Another early market to adopt personal VPN technology, beginning in 2010, were Chinese users, who were attempting to circumvent the Great Firewall of China (Nie). Another early instance of VPN use was in 2011, when the Iranian government released plans to build its own national, limited Internet service (Bazley). In such cases, the consequences of using and administrating a VPN are clearly and primarily political. “Political”, in this case, meaning the VPN is being used as a tool to avoid censorship, mobilize people, in an environment that is hostile to such things. Though VPNs may have first gained popularity for personal use due to their political potential, most VPNs generally advertised themselves as providing the same things as VPNs that are being used for business purposes, that being security and privacy. And, it turns out many people are indeed using VPNs. Two GlobalWebIndex studies found that 1 in 4 people had used VPNs in 2016, with this number up to 1 in 3 in January 2017. While Indonesia was the country with the highest concentration of VPN use from 2013 to 2016, with 41% of users relying on a VPN connection in 2016, in 2017 Turkey took the top spot with close to 50% of Internet users using VPNs. 
Heavy government censorship was cited as the reason for this uptake (GlobalWebIndex). Countries where VPNs are illegal also have significant concentrations of VPN users, with China at 29% and Vietnam at 35% (GlobalWebIndex). US saturation is at a lower 25%, although since this study was put out in 2016, there is a chance things may already have changed. One VPN company, TunnelBear, claims its North American sales have rocketed throughout 2017, with policy changes surrounding net neutrality in the United States (Silverman). VPN use also skews to a younger demographic, with a separate study by GlobalWebIndex stating: "If we split the overall figures for VPN usage by age then it's 16-34s who dominate. In fact, with 16-24s on 35% and 55-64s on just under 15%, the youngest demographics are over twice as likely to be using VPNs as the oldest ones. Such a pattern suggests that overall numbers will rise still higher in the years ahead." (Young)

Though security and privacy may be the main purpose of VPNs, as Jason Mander of GlobalWebIndex says, "In some countries, China, Indonesia and Thailand being prime examples, people use VPNs to overcome governmental restrictions on sites like Facebook and Twitter. In Western Europe, privacy is the biggest factor. But by far the most popular one globally is the need to access [geographically blocked] entertainment content" (Nave). This refers to content that is not available to the consumer due to either licensing or political barriers. Although the general purposes of using VPNs may be similar globally, the consequences of doing so vary from country to country. For example, in Vietnam, where VPNs are illegal, what you say and consume online can have serious consequences. In the past year, activist Tran Thi Nga was sentenced to nine years in prison, female blogger Ngoc Nhu Quynh was sentenced to 10 years, and four other activists, Pham Van Troi, Nguyen Trung Ton, Truong Minh Duc and Nguyen Bac Truyen, are still awaiting trial. All the aforementioned were arrested and/or charged based on their online activity. In the United Arab Emirates, using a VPN could cost you a fine of up to $7,000,000 CA ("Federal Decree-Law no. (5) of 2012"). In early 2017, the owner of a Chinese VPN provider was jailed for seven years.

In countries where you can be jailed for any type of online activism, or where there is heavy online censorship, the reasons a citizen would want to protect their privacy and security are obvious. But why would a Canadian, or an American, who has far greater civil liberties, and with access to an Internet that is relatively free of censorship, need a VPN? One reason is to circumvent geo-blocking, i.e. gain access to geographically restricted content, usually due to copyright laws. For example, American Netflix has a different offering than Canadian Netflix, therefore Canadians might use (or at least try to use) a VPN to access this content. Whereas there have been no legal ramifications of using a VPN so far in Canada, there is vocal discouragement from some content distributors and creators, such as Bell Media and Netflix, who view this type of access as akin to piracy (Fullagar) (Evans). Many Canadians, however, do not see circumventing geo-blocking as "piracy" but rather view it as their intrinsic right to access whatever they want online. Canadians also use VPNs for more general privacy and security concerns.
For instance, they may be accessing the Internet on an open connection at a local café and want to ensure others cannot track their details. There are also general concerns about surveillance by corporations and the government (Khazan). As infamous leaker Edward Snowden has proven, the government, even in rich, democratic nations, is liable to stick its nose into places where it (arguably) does not belong. But, although most Canadians and Americans may care about security and freedom in theory, the majority has proven to care to a lesser extent about security and freedom in practice. A 2015 American study by Pew Research found that "Americans feel privacy is important in their daily lives in a number of essential ways… Americans also have exceedingly low levels of confidence in the privacy and security of the records that are maintained by a variety of institutions in the digital age." (Madden and Lee) At the same time, the studies show that although Americans believe policy should be put in place to protect their privacy, "few have adopted advanced privacy-enhancing measures" (Madden and Lee). This is all to say that VPNs are used to access different types of content in different countries, and the need for VPNs to actually provide a high level of privacy and security differs from country to country.

MAIN CRITICISMS OF VPNS

Although VPN companies have been widely heralded as essential to security and privacy online, they have also been widely criticized for a number of reasons. One common criticism is that they provide a haven for illegal activity, although what constitutes criminal activity can be vague and wide-ranging. For example, as one interviewee pointed out, illegal activity associated with VPNs in North America consists of more isolated and "acute crimes", for example, a person downloading child pornography (Reed et al.). For select content creators and distributors, downloading copyrighted material or bypassing geo-blocked content via a VPN may be frowned upon and discouraged but, even then, not considered illegal. This is the case in Canada, where it is considered legal grey territory (Jackson).
In the VPN world, this security function is known as “not keeping logs” and many VPN companies will advertise “no logging” directly on their websites. However, VPN companies, under legal pressure, have often proven this is not the case. One example can be seen in the case of 24-year old Ryan Lin. In March of 2017, Lin was arrested and charged with cyber stalking, amongst other related crimes, with help from information from his VPN service Pure VPN (one of the largest VPN providers). As stated in the official criminal complaint, records from Pure VPN show that the same email accounts, Lin's Gmail account, and the teleport Gmail account, were accessed from ----- the same WANSecurity IP address (“United States of America v. Ryan S. Lin”). Significantly, Pure VPN was able to determine that their service was accessed by the same customer from two originating IP addresses: the IP address from the home Lin was living in at the time, and the software company where Lin was employed at the time. Take HotSpot Shield, a Silicon Valley-based and well-promoted VPN company that was founded far back in 2005, and was credited for being used to help activists during the Arab Spring in Egypt, Tunisia and Lebanon (Whittacker). Advertising initiatives for HotSpot Shield have included a political billboard reading, “Angela Merkel was hacked should have used HotSpot Shield” (see image 1). This leads us to another critique of VPNs: their potential maliciousness. In a 2016 article by popular Internet security website ZDNet entitled “Why Hotspot Shield's co founder puts privacy over profits” co-founder David Gorodyansky explained that 97% of his users got the service for free, through an “ad-supported” version of the service (Whittacker). He added that they did not know data-per user or names, and they promised “shielded connections, security, privacy enhancement for individuals and small businesses and an “ad-free browsing” environment (Whittacker). However in the CSIRO report previously mentioned, it was found that HotSpot Shield actively tracked its users, injecting Javascript for tracking and advertising purposes, and redirected “e-commerce traffic to partnering domains”. In light of these revelations, the Centre for Democracy and Technology has submitted a complaint to America’s communications regulator, The Federal Communications Commission (FCC), stating, in summary that their service is “unfair and deceptive” in its promise of “secure, private and anonymous” access to the Internet (“Complaint, Request for Investigation”). ----- Image 1 For the average user, it is often very difficult to tell the intentions and legitimacy of a VPN company. Take Israeli-based Hola, for example. In 2015, Hola had an overall 46 million users, many of them opting for the “free” version (Andy). As first reported by TorrentFreak, but confirmed by security firm Vectra, with additional confirmation by Hola’s founders, Ofer Vilenski and Derry Shribman, the free version routed traffic between VPN users. Essentially, a user’s IP address was re-associated with other traffic so that the company did not have to buy bandwidth (Andy) (Vectra Threat Labs). This left users unprotected from the traffic that was now being associated with their IP. Hola also sells their user bandwidth to others through their own affiliated company, luminati.org (“Multiple Critical Vulnerabilities”). When a free user’s bandwidth is sitting idle, Hola would allow third parties to buy it. 
These third parties used it to host botnet attacks (“Multiple Critical Vulnerabilities”). A botnet attack is when a string of computers are used together, often to spam on a large scale. In this case, the ----- consumers were also the product. Is this really a VPN or simply a “geo-unblocking” service, that doesn’t really provide privacy or security? More disturbing is something computer science researcher and popular Internet personality Eli Upfal pointed out in a 2016 post entitled “Are Free VPN Services As Safe As Paid VPN?”, “Let me tell you what. If I was in charge of the fucking NSA and I had billions upon billions upon billions of dollars to spend, you are damn motherfucking right I would drop 10 million dollars to create one of the best VPN services the world has ever seen…if I was in charge of the NSA I would create free VPN services. If I was part of the Russian Intelligence Service I would create free VPN services. If I was part of the Chinese Intelligence Service I would create free VPN services. Because isn’t that a great, phenomenal idea?” (Upfal). Indeed, it looks like this has been the case, starting in Syria in 2012 to devastating effects. Freedom House reported that, “Due to the prevailing need for circumvention and encryption tools among activists and other opposition members, Syrian authorities have developed fake Skype encryption tools and a fake VPN application, both containing harmful Trojans.” (“Syria”). Basically, the VPN service would appear to be protecting the anonymity of individuals, but would in fact be feeding all data to Syrian authorities. THEORETICAL FRAMEWORK FOR THE ANALYSIS OF THE UNDERLYING PHILOSOPHIES OF VPNS As it can be seen in the preceding sections, VPNs have been both praised for their capacity to create “freedom” and criticized for things such as hiding criminals, and ignoring copyright laws (Reed et al.). In addition, VPNs have also found themselves in ----- controversy for saying one thing and doing another, like in the case of Hola. My goal is to try to get an initial understanding of the motivations and decision-making processes that direct VPNs, as well as an understanding of the political implications of their application and operations. This analysis will be based both on the research collected on the operations of VPNs and on interviews carried out with VPN providers. I will critically analyse my data through the theoretical perspectives of Zizi Papacharissi and Chantal Mouffe, two philosophers whose work has provided a useful prism through which to view and assess the interactions of individuals and organizations. Of particular interest to me, is Papacharissi’s description of “affect” in her book Affective Publics, and Mouffe’s description of decision-making, as described in Agnostics: Thinking the World Politically. Applying these conceptual understandings to the data I collected on VPN providers could help provide a better understanding of the values held by the VPN providers interviewed and the political implications of these values in the context of the operation of their companies. Affect, most generally, is focused on the “forces other than conscious knowing” that position us to make choices, join movements, and ultimately direct us within the world (Gregg and Seigworth). My goal in interviewing those working at VPN companies is to look at affective qualities that feed into both motivations and decision-making processes. 
These are ultimately the qualities that direct and define concepts like “security” and “freedom” when these concepts are applied in the world. In other words, concepts like “security” and “freedom”, which are essential to VPN technology, have different meanings depending on who is defining them, and how they are being applied. I ----- want to understand how they are being defined, and the implications of varying definitions. Mouffe and Papacharissi both argue that we are connected to each other “affectively”, that is, based not on rationality alone, but on multiple strong primordial affinities, which create and influence our subjective understanding of the world (Agnostics 46) (Papacharissi 8). Papacharissi describes new media as being particularly affective (4). Through new media, storytelling is facilitated across the world, triggering affective responses, and community building among different causes and geographically distant people (68). This allows people to “feel” their way through various movements and their impacts, despite never having experienced them first-hand (4). The new communities on the Internet are, in many ways, imagined — they are not based on lived experiences (4). And yet they can add momentum to any movement by bringing multiple differing perspectives together for a shared goal (37). Papacharissi uses the social media platform Twitter to trace these affective connections through activist-driven political movements. For example, the Arab Spring, which saw an international community come together to support the singular causes of oppressed peoples (6). I will argue that the “VPN world”, i.e. those who work for VPN companies, are affectively connected to both other companies and their users, creating a force of mutual affect. The owners of VPN companies make their own affectively derived positions concrete in the running and execution of their companies. Rather than just providing support through their voice, like a Twitter user, they also provide a service. In the running of their businesses they reveal a point of view, which becomes realized through their ----- decisions. And this does not happen in a vacuum; the similarities and differences in how these companies are run creates an unofficial, ever changing, standard of conduct in an industry that is impossible to formally standardize. Such companies are “policed” by each other, along with community members, for example, those who are interested in Internet security, and customers. But what informs these standards? Mouffe believes that decisions are necessarily exclusive (Agnostics 3). As she said in a 2005 interview, “…if you choose one thing, you necessarily exclude the other. Decisions have to be made, and to decide on one alternative is to exclude the other.” (Pluralt) This means you have to show preference to certain reasoning over all others. Though decisions may be arrived at affectively, the results will have real effects, which, can be analyzed in a more objective manner (6). For example, if a VPN owner says that they believe in “privacy and democracy” but is confronted with the choice of either helping authorities track down a child predator or refusing to give away data, which value will end up dominating? How does this choice, in a broader sense, change what it means to provide “privacy and security” for that company? And from what affective perspective is one able to come to this choice by? 
The practical reality of a VPN, and the affective reasoning of VPN employees, can be contrasted to reveal a point-of-view, which can then be used as a starting point to analyze their overall take on security and freedom. INTERVIEW OVERVIEW Given the above discussion and as a means to seek a better understanding of the role VPNs play in today’s world, I interviewed employees of five popular VPN companies. ----- The primary purpose of these interviews was to gain a better understanding of the basis upon which their companies were run, in the absence of formal regulation. To identify my candidates, I started by interviewing two people who had been referred to me by friends. My initial plan was to ask these employees to refer me to other employees at different VPN companies. Unfortunately, both connections were unable to refer me to anyone. The general impression I received throughout my interviews was that there was intense competition between companies, and there were few personal connections between employees at different companies. As one of my interviewees told me when I mentioned people in the industry were hard to track down, “we sell privacy and anonymity, so you didn’t select the easiest people to get in contact with” (Company C). I then reached out to over 50 popular VPN companies via contact details provided on their websites, and when available, LinkedIn. These companies could be found on multiple lists from top privacy, security, and VPN-focused websites (Eddy). Since my interviews were anonymous, I will refer to the companies as Companies A through E. Each employee interviewed has also been given a pseudonym. To provide a brief description of each company, the first, Company A, is a popular and highly rated VPN service, one of the few that has publicly sought to gain legitimacy by inviting third party scrutiny of its operations. On their own website, Company A described itself as “really, really simple privacy apps” which provided “simple, private, free access to the open Internet you love.” From this company, I interviewed Luke, the head of marketing and Jack, a programmer. I was connected to Jack through a friend, who connected me to Luke, as he thought Luke might be able to ----- answer questions he could not. There were no other employees available to interview. I asked if they knew of anyone else in the VPN world I could talk to, but they did not. Company B was co-founded by leaders in the anti-copyright movement. The first headline on their website read, “Big Brother is watching YOU ...we are not”. Reviews of the VPN service were relatively good, although one website, BestVPN.com, did describe them as keeping limited amounts of user data. I connected to Karl, of Company B, by reaching out on their online FAQ chat. Karl offered tech support for Company B, as well as working as a developer. The third company I spoke with, Company C, was more difficult to find information on, and its services had mixed reviews online. Company C’s homepage described it as allowing you to “bit torrent anonymously, bypass throttling, and unlimited speeds”. I connected with Thomas of Company C through a friend. Thomas is one of three employees who works at Company C, and does site maintenance, marketing, and tech support. Company D was very well rated by most websites surveyed. Their website lead with sales messaging for special pricing before describing “total security” and “absolute privacy” as their main goals. I connected to Heather, of Company D, by reaching out to the company directly. 
Heather does tech support, as well as marketing. Company E was also highly rated. They advertised themselves as a “Security and Privacy” VPN but open with a “Streaming Guarantee”, promising users would be able to watch live events with a strong and fast connection. At the time of this paper, the World Cup was being aired, and soccer images were present on Company E’s homepage. Philip ----- of Company E was the head of marketing. I reached out to him by contacting the company directly. Both Company A and Company C listed their addresses in Toronto. On their homepages, each company at least alluded to “privacy”, and most, with the exception of Company C, alluded to “security”. INTERVIEW QUESTIONS AND METHODOLOGY The questions I posed to the five companies were in the form of a semi-structured interview, as described by Anne Galletta in the book, “Mastering the Semi-Structured Interview and Beyond”. This involved encouraging more candid answers and conversations that provided a better understanding of the context from which VPN companies have emerged. My questions are provided below. In addition, in the tradition of semi-structured interviewing, I asked follow-up questions based upon my interviewees’ answers and on the general flow of our conversation. I divided my questions into two categories: Personal Motivational Questions, and Practical Questions. I did this because, as stated previously, my goal is to try to get an initial understanding of the motivations that may inform the decision making processes within VPN companies as well as an understanding of the political implications of their applications and operations, i.e. their “real consequences”. People, according to Papacharissi, have personally motivated reasons for acting, based on their own backgrounds, but some of these reasons, although reached to on an individual level, all feed into larger, common ideals (71). By speaking to individuals about their own personal thoughts, I can begin to consider what themes inform the decisions being ----- made at VPN companies, from the inside. I am asking “factual questions” to get background on the actual reality of the companies, and contrast this reality with the motivations of those working at them. The five VPN companies I interviewed have approximately 90,000,000 million users in around 183 countries worldwide. So, although there were a small number of interviews, the overall impact of the companies interviewed is significant. The smallest VPN company I spoke with had over 50,000 users and the largest had 50,000,000 users. Each interview lasted an hour to an hour and a half. Three interviews were conducted via video chat, one interview was conducted in-person, and one interview was conducting over an encrypted chat line, at the request of the employee. As previously mentioned, I reached out to over 50 of the most popular VPN companies, and these were the companies that responded to my requests. Interview questions grouped by categories: Personal Motivation Questions: - What is your background? - Why did you start a VPN company? - Were there reasons, beyond financial reasons, that you started a VPN company? - Have you started any other companies? - Are VPNs a passion or a job for you? - Why is VPN technology important? - Do you feel strongly about the capabilities of VPN technology as it relates to the current state of the Internet? ----- - Do you see yourself working in this sector long term? - Are there any moments in your company’s history that have made you feel proud? 
- What is the most exciting part of VPN technology for you?
- What is the most exciting part of the Internet for you?
- Are there any roadblocks you see in the future of your company?
- Are there any alternative technologies that you see promise in?
- Would you ever pivot, or redefine your company?

Practical Questions:
- Where do you have servers?
- Where do most of your users come from?
- What is the main reason they use your VPN?
- Do you keep any user data?
- Would it be possible for you to give away user data to authorities?
- Are there circumstances where you would give away user data to authorities?
- What is your VPN primarily for?
- Do you consider it better for some uses than others?
- Do you feel responsible for those who use your VPN?
- There are some VPNs that have been in the news for not doing exactly what they say they are going to do. Do you have an opinion on this?
- To you, most generally, what separates a competent — or good — VPN from an incompetent — or bad — VPN?
- Where do you see the future of VPN technology headed?
- Have you ever been in a moral or legal dilemma concerning the administration of your VPN?

QUOTE STYLE

This paper focuses on the motivations, feelings, affects, and opinions of those involved in VPNs and the impact of these on the delivery of their technology. For this reason I have provided longer-format quotes, so that the reader can appreciate the different personalities of each interviewee. This “personality” is something that is difficult to capture by paraphrasing or summarizing quotes, although I have also done this where appropriate.

RESULTS AND ANALYSIS

One overriding conclusion that emerged from my interviews was that the VPN world is particularly affective. Though it is based, to a large extent, on shared values and goals (i.e. privacy and security online and an “open” or “free” internet), it is made up of people from geographically distant places, who all have their own unique interpretation as to what makes for a functional VPN and, more generally, what the internet should be, as it relates to privacy and security. Moreover, each position makes up a part of the same ongoing conversation, where there are different sides, but no clear “right” or “wrong”. Drawing conclusions about the competence of a VPN relies on understanding why a company makes the choices it does through consideration of its motivations. Two main themes emerged from my interviews that help to highlight this affective nature and, when analyzed together, provide a partial understanding of the motivations of VPN companies: trust and values. Though the two themes sometimes overlap each other, they each have distinct attributes that are worth analyzing alone. Moreover, different responses relating to each category often highlight contradictions in the very areas in which they overlap. I will argue, based on my analysis of the interview responses, that the extent of security and privacy provided by VPNs is at least partially determined by the values and motivations of the particular VPN company. With respect to “trust,” the primary question is: how does one trust a VPN company? This question is two-fold. First, how do you trust a VPN, technologically speaking, to be functional, and second, how can you trust the ones who are maintaining and running the technology?
A VPN could have the technological capacity to provide a certain level of security, but if the VPN company decides, for example, to log data or sell data, the technology becomes useless from a privacy and security standpoint. Ultimately, trust, in the VPN business, is an elusive quality. VPN companies can “manufacture trust” through their marketing and by using key words that people can identify with, especially in the emotionally charged space of security and privacy. For example, even though, technologically speaking, all VPN companies are supposed to be doing the same thing, i.e. providing a secure, private Internet service, their motivations may differ. And, in turn, their users may also have different usage goals, and therefore different thresholds for determining whether their trust in a VPN company is well founded. And so this leads to the second theme to be discussed: values. Essentially, it’s impossible to analyze trust without analyzing the motivations of VPNs and the personal ethos from which these motivations arise. As a consequence, these will be the next focus of this paper after the discussion of trust. Given the small number of interviews that were able to be carried out, the interviews will be used primarily to inform the issues under discussion here, supported by evidence gathered in the media, rather than providing any conclusions in and of themselves. It is also important to note that the wide reach of these companies, in terms of the number of customers they have, gives this sample group intrinsic value. I begin with the issue of trust.

TRUST

While conducting my interviews, the issue of trust often surfaced, most often when we began to speak about the collection of user data. This led to questions like, “how can your users trust you do not keep data?”, which led to the more general question, “how can your users trust you?” No company was able to give a definitive answer. As Luke from Company A told me on the issue of trust, “Trust is the perennial problem, nobody has the solution.” Karl, of Company B, echoed this idea: “the VPN provider pinky swears that, while they could find out and tell the world who you are, they will not do so.” Thomas, of Company C, also concurred: “Our customers really can only take our word for it that we don’t keep any logs, and track their information…and we really don’t. But there’s no way to know if we’re telling a lie.” Heather, of Company D, said about the same thing: “This is just a matter of belief. You either trust us or you don’t. Maybe some tech savvy people can look into the code or run some tests or something, but like, just regular users just believe what other users say, what reporters say and what we say on our site.” Philip, of Company E, did acknowledge that trust is an issue, but pointed to third-party audits, consumer reviews, number of users, and privacy policies as a good place for people to begin to analyze whether they can, or cannot, trust their VPN company. This brings us to solutions for the trust issue. Thomas gave me the same answer as Philip:

Well there are a lot of websites that try to run some tests and they’re independent researchers that try to find leaks or issues in VPNs and security apps in general when they track this they publish the results after that users know if a particular VPN is bad or whatever.

Companies A and D both alluded to similar tactics for trusting a company, though all companies acknowledged that this is not 100% sufficient.
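Heather’s point that “tech savvy people can … run some tests” can be made concrete. The sketch below is my own illustration, not something any of the companies provide: it assumes the public ipify echo service (api.ipify.org) and uses only Python’s standard library. Run once with the VPN disconnected and once connected, it verifies nothing more than basic IP masking; as every interviewee conceded, no client-side test can verify a “no logs” claim.

```python
# A minimal sketch of the simplest "test" interviewees allude to: comparing
# the public IP address a machine presents with the VPN off and on. It uses
# the public ipify service (api.ipify.org), which returns the caller's
# public IP as plain text; any similar echo service would do.
import urllib.request

def public_ip() -> str:
    """Return the public IP address this machine appears to have."""
    with urllib.request.urlopen("https://api.ipify.org", timeout=10) as resp:
        return resp.read().decode("utf-8").strip()

if __name__ == "__main__":
    print("Public IP as seen from the outside:", public_ip())
    # Run once with the VPN disconnected and once connected. If both runs
    # print the same address, the VPN is not masking your IP at all; if
    # they differ, it is at least doing the minimum it advertises.
```

The more thorough checks independent reviewers run, such as DNS and WebRTC leak tests, follow the same compare-with-and-without pattern; none of them can see what a provider does with its server-side logs.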
The problem with reading the privacy policy of VPN companies is that sometimes they can be obtuse or even misleading. One example is PureVPN, one of the most popular VPN services, which VPN review site BestVPN.com found to keep many logs, including names, email addresses, phone numbers, IP addresses, bandwidth data, and connection timestamps, despite claiming to keep “no logs” (bestvpn.com). Relying on popular opinion may be of use for users who are looking to use a VPN for certain purposes (for example, if you are streaming content, the speed of the VPN will be valuable), but it does not provide an educated opinion on security and privacy measures. In terms of crowd-sourced online reviews, as has recently been seen with Facebook’s data-sharing scandal in spring of this year, popular opinion is not always right. In this example, millions of people used Facebook, and yet, at its peak popularity, user data was actually being compromised on a mass scale (Madrigal). According to a recent literature review, trust is a precondition for people’s adoption of electronic services, and positive reviews are an initial determining factor for initiating this trust (Beldad). And as Papacharissi points out, the Internet, in its equalizing nature, spreads some democratic ideas. If we apply this to the concept of mass online use, or mass reviews, they do seem like a properly democratic mode of judgment. But as French philosopher Alexis de Tocqueville opines, “In times of equality, because of their similarity men have no faith in one another; but this same similarity gives them an almost unlimited trust in the judgment of the public; for it does not seem plausible to them that when all have the same enlightenment truth is not found on the side of the greatest number” (409). So, although these means may help people trust a VPN, they are not foolproof. One other alternative answer to “how do you trust your VPN company” came from Karl:

At the start I used Company B in particular for privacy reasons. That is how I got to know the service, and I liked the concept. Then I got in contact with the staff, volunteered a bit in the project, and ended joining the staff…when it comes to anonymity in the VPN sense (one key node doing the "hiding", as opposed to the chained concept of TOR where one just gets lost in a twisty web), trust is important. Company B came from the people who ran [Internet Company X], and that is pretty much the best pedigree possible.

Here Karl implies that it is personal experience with the company, and the reputation of those behind it, that makes him prefer VPN technology — when in the right hands — to other Internet security methods (in this case, TOR), and legitimizes (or lets him trust) Company B. TOR is a different anonymizing network that passes a user’s traffic through a number of different nodes, with those in charge of each node only being aware of the IP address before and after it. This makes traffic difficult to trace back to a single computer. Unlike VPNs, TOR is not run by a single group, but rather relies on volunteer networks (Rankin).
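The structural difference Karl gestures at can be illustrated with a short sketch. The following toy is my own illustration, not drawn from the interviews or from Rankin; it relies on the third-party Python `cryptography` package and collapses Tor’s real machinery (circuits, directory authorities, per-hop key negotiation) into three fixed keys. It shows the layered encryption that distinguishes the “chained” model from a VPN’s single trusted node: each relay can peel exactly one layer, so no single relay holds both the sender’s identity and the readable message.

```python
# A toy model of layered ("onion") encryption. The sender wraps the message
# once per relay; each relay removes only its own layer, so it learns the
# previous and next hop but never both endpoints at once.
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]  # one key per relay

def build_onion(message: bytes) -> bytes:
    """Encrypt innermost layer first, so relay 0 peels the outermost."""
    for key in reversed(relay_keys):
        message = Fernet(key).encrypt(message)
    return message

def traverse(onion: bytes) -> bytes:
    """Each relay strips exactly one layer with its own key."""
    last = len(relay_keys) - 1
    for i, key in enumerate(relay_keys):
        onion = Fernet(key).decrypt(onion)
        if i < last:
            print(f"relay {i} peeled its layer; payload still opaque to it")
        else:
            print(f"relay {i} recovered the plaintext for delivery")
    return onion

assert traverse(build_onion(b"hello")) == b"hello"
```

With a VPN, by contrast, the single provider sees both who you are and where your traffic goes, which is exactly why the question of trusting the operator dominates these interviews.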
Luke, of Company A, echoed Karl’s sentiment, saying:

I think the biggest defence to be completely honest, is that we have 40 people here who legitimately care about privacy and like you can tell it’s all we talk about all day…

A second employee of Company A, Jack, added, “I know a bunch of people on my level who would just quit if we started logging. Like people wouldn’t work here anymore.” Thomas’s answer, at Company C, may at first appear to be more of a shoulder shrug than an answer. As a reason to trust, he said, “…you know, we’re like a small company; they (our users) don’t really have a reason not to trust us.” And as further proof he went on to say, “I use it myself to download and not get caught, buy stuff off the Dark Web, so you know, it’s nice…I know my boss isn’t going to rat me out.” Though this may appear to be different from Luke, Jack, and Karl’s answers, it has some similarities. The Company C employee trusts his company because he personally trusts the person who runs it. He trusts him so much he knows he won’t “rat him out”; this implies a shared set of values. Again, it is knowledge of the people who work at Companies A, B, and C that, in the eyes of these employees, provides the biggest objective security assurance to their users. Of course, the users themselves, more often than not, do not have the opportunity to “know” their VPN company on this personal level. But to what extent is trust important to users? As previously stated, some initial level of trust is necessary to attract people to a company (Beldad). But trust will inevitably mean different things to different users, as it’s based on affectively derived preconditions, such as values and emotions (Beldad). For example, those looking to access Netflix may not care if their data is being collected by the VPN company, so long as they can trust their VPN to provide them with a strong connection to Netflix. Others, who would like their VPN to provide privacy and security for its own sake, may have a different definition of trust, which is based on the actual technology. They will want their VPN company to take security and privacy as seriously as they do, for its own sake. In either case, if those in charge of the VPN have values that are aligned with your own, a strong bond of trust is possible. This brings us to the next theme to be discussed in this paper, the theme of values.

VALUES

In Isaiah Berlin’s essay “Two Concepts of Liberty”, he reveals the paradox of freedom. According to Berlin, a person is never completely free, if we consider “freedom” nothing more than a lack of restraint; our own aspirations, or a society’s aspirations, impose limits on our complete freedom (Two Concepts), and direct us. Berlin calls freedom, as a lack of restraint, “negative freedom”. To Berlin, this type of freedom is worthless without boundaries. Boundaries, whether they are found in laws or in our own values, allow us to actually use our freedom to do what we choose, and to pursue a meaningful life. And so, negative freedom must be balanced with “positive freedom”: the freedom to pursue “the good life”, whatever that may be. Berlin has been criticized for drawing too fine a line between “positive freedom” (the restraints that direct us to pursue our values) and coercion. For the purposes of this paper, this is beside the point. The useful part of this distinction is the idea that when people use VPNs they are not simply experiencing a lack of restraint; they are also experiencing the values of the company, values which direct the company’s choices, and thus affect the user experience. VPNs ideally provide a secure and private space, where a person is able to do what he or she wants to do. VPNs let us, ideally, use the Internet in an unrestrained and unlimited manner.
And yet, the values that VPN companies hold do have the capacity to restrain us. They set limits to our freedom, as users come to support a worldview that is necessarily value-laden. Even in their ideal state, VPN technologies cannot give us complete freedom. They come with their own values that direct us, and change our experience of the Internet. In trusting a VPN company, we choose to promote whichever values they promote, and make ourselves vulnerable to their ethical decisions, decisions which, as previously described, are based on affective qualities like values and emotions (Beldad). In other words, we are not just subscribing to freedom as a complete lack of boundaries; we are subscribing to an alignment of our values with those of the VPN. Trust, like freedom, is not something objective that someone, or some company, either has or does not. It is, rather, something that depends on the interaction between the parties seeking that trust, and thus requires mutual, shared values. And yet, as VPNs continue to grow in popularity, what exactly constitutes a VPN of “value” is continuously being redefined, internally and externally, as conversations within the world of internet security, among customers, and amongst companies themselves create affective boundaries, through the interplay of “emotions, affect, and feeling” that help to determine ideological standards (Papacharissi 3). As it turns out, there are some commonalities among the values held by different VPN companies, but there are also differences. Analyzing this helps to shed light on the boundaries of the VPN world today, and how this may inform Internet security in the future. To begin to understand “values” we can start by analyzing Company C. As previously demonstrated, Thomas trusted his company to do the right thing because he trusted his boss, who would “not rat him out”. This employee of Company C described his boss as someone “super paranoid” who “smokes a lot of weed, so that doesn’t help [with the paranoia]” and who has “also done some shady things in his past”. Thomas met his boss in a Parisian nightclub, at a time when he knew nothing about the VPN world, and was hired on to Company C after a short friendship. He did not know his boss’s real name until a year and a half into his employment. He saw his position as purely a job, though he did find the space interesting. The company markets itself as a Canadian, file-sharing-focused service; its identity will be kept anonymous here. Though this may seem like a strange rationale for trusting someone, Thomas, who alludes to using the VPN for nefarious reasons, sees kinship with his boss, who seems to promote what would look to others like a morally dubious lifestyle. From Thomas’s perspective, this “live and let live” lifestyle is what grounds his trust. Thomas went on to reveal possible security problems with the VPN, explaining that the protocols being used were now considered obsolete by Google. He also admitted what was enabling them to get good reviews: they paid money to websites like TorrentFreak to place them on Top 10 lists, which he claimed “everyone does” (Company A, incidentally, concurred with this observation, saying, “It’s kind of a necessary evil”). When I asked Thomas where in Canada they were headquartered, the response was, “That’s just a bullshit thing. We’re officially registered in America…Canada sounds better”. However, we cannot assume based on these facts that Company C was completely devoid of a moral compass.
Thomas went on to outline a situation where the VPN made an ethical choice to, ultimately, go against their own privacy terms and conditions, because they felt morally compelled to do so. He described a circumstance where Canadian authorities traced an individual uploading child pornography back to one of their VPN’s IP addresses. His boss activated the collection of logs, which is counter to the VPN’s security policy, in an attempt to catch the perpetrator. If anyone logged back on to the sites, they would, hypothetically, be able to trace them to a user account. They were never able to catch the perpetrator, who, according to Thomas, “…probably has like, you know, 20 VPN accounts or something like that, switching them back and forth…piggybacking VPN companies…if one of them is shady and keeps logs, it will connect you to another VPN company.” Ironically, in this case Thomas’s own VPN company was the “shady” VPN. I asked why they had chosen to break their own privacy rules. Thomas replied:

Of course with the child pornography, we were like ok. We can help you in anyway possible…because it’s fucking child pornography. No — we do not condone that. Even if we’re like, “yeah free internet”, blah blah blah, it’s child pornography…freedom of speech, whatever…that’s not something we accept.

He then went on to describe situations where his employer would not help authorities:

If they had come to us being like oh a hacker…we wouldn’t have done much to help them. It’s not the same thing…we receive thousands of DMCA notices for copyright violation; we just don’t do anything about them.

I mentioned that there was a definite moral line there for him, to which he replied, “Exactly, I’m guessing we would have been the same if they asked us about some serial killer.” I outlined the highly publicized case where PureVPN had been able to help authorities track down an Internet stalker, as outlined in Part 1 of this paper, which seemed to reveal the company had been keeping data. The employee explained that this was not necessarily the case. They could set up “honeypots”, i.e. start keeping data after the fact. In my understanding, either way, they are keeping user data. From a security perspective, whether it is being done before or after the fact makes no difference for those whose data is being compromised. This employee seemed to use his “common sense” morality to justify the company’s decisions and its current state. If we look at VPNs as things that are supposed to provide security and privacy, this VPN would seem to fall short. But if we look at morality in a larger sense, summarizing this employee’s position, he recognizes that most people use the VPN to access restricted material online. For this purpose he thinks the VPN could be better, but is good enough. And he thinks that principles like “freedom of speech” and “free internet” are not as important when the issue becomes the need to assist in capturing a child predator. This employee has no grand illusions about the Internet. He’s a guy with a job; he’s a pragmatist. When I relayed Company C’s handling of the honeypot-child-pornography story to Company A, they were visibly shocked. Luke asked if it was PureVPN, alluding to their dubious reputation, before responding with:

It’s so sketchy, it just makes me feel better about what we do.
You know, Company A would never dream about those sorts of things, like we’re… I think that comes back to my comment about us not being the moral arbitrator… like once you start down that road of choosing who deserves privacy and who doesn’t deserve privacy, you get in a really awkward position, where now you have to decide for the entire world.

Company A took a more “black and white”, or idealist, approach to the VPN world. For them, user data is not to be compromised under any circumstance. Luke explained to me that a VPN is simply a tool. It can be used to do good and it can be used to do bad. But Luke believed that, in the grand scheme of things, more good was being done through their service than bad. Compromising user data, even if it was to help catch a child predator, would only serve to compromise the good their company is doing. As Luke told me:

Do I like that our way to prove that we don’t log things is defending people that are being, you know, blamed for, [or] suspected of committing crimes? No, but that’s the North American example where we have much more acute crime…like if you get the same request for information on someone living in Iran for just using a VPN, regardless of what they’re using it for, they’re just using a VPN, it’s often illegal in some of these countries, so these are the cases that I especially want to have the system set up, so that the same rules of law apply, so that we’re not the ones making the moral judgment...it’s not my place to choose where we are on the spectrum, it’s my place to create a tool and allow people to use it. And I think at the end of the day, I think the world is a lot better that it exists rather than not existing.

Unlike Thomas, the employees from Company A were passionate about their jobs, and driven by the fact that they were “doing good” in the world. Luke described this again, in more depth:

…the big meaningful things for us are when we do things like offer free data to an entire country to get around censorship. My time since I’ve been here, I’ve done it for Turkey a couple of times, I’ve done it for Venezuela, I’ve done it for Iran, a whole host of countries, and it’s just such a great feeling that you get out of it, that you’re adding back, you’re actually giving back to communities and you’re not just creating a tool in isolation that may be useful but isn’t really adding back to society in any way.

So although Luke had argued, when presented with possible nefarious uses of a VPN (e.g. distributing child pornography), that the company had simply created a tool and should not be the moral arbitrator of how that tool was used, in this latter quote he argues quite the opposite. They were not creating a tool in isolation; they were giving back to the world by “doing good”. This was by actively giving data to countries for free, something quite outside their role as a VPN company. This reveals that security and privacy were not only being offered for their own sake (or for the sake of a “negative freedom”, or unrestrained freedom, as described by Isaiah Berlin) but in the hope that something good, in this case democratic values, would come from it. To this point, Jack, a second employee from Company A, said:

…the way I view VPNs, it’s as access to information, and the way I view access to information is as a valuable tool for democracy… this is a channel that people use to explore ideas and think about things and to communicate, and to think privately which is an important part of our society.
Thomas, from Company C, actually did echo this sentiment about the importance of VPNs in a non-North American context, saying:

I know journalists, for example if they are based in Egypt and they’re talking shit about the government, then they want to be protected, because if they’re not connected to a VPN and start posting articles about how the government is they could find out where he is in Egypt and come get him, you know?

But when asked if he feels proud of any moments in his company’s history, he gives a different answer:

Yeah, well I guess it’s kind of cool to know you’re helping some people you’re helping people download and…(laughs)…it’s kind of cool, because I use it myself to download and not get caught, buy stuff off the Dark Web, so you know, it’s nice.

Thomas was not attempting to influence the content, and thus did not take any ownership over what other people were doing. He was providing a tool, and people were using it, for good or bad. Is it possible for a company to distance itself from the bad, but claim responsibility for the good? I asked Heather of Company D if they had ever had any moral dilemmas concerning the administration of their VPN:

I mean we are all people and we all have hesitations, but we strongly believe that we do better than worse…anything can be used for bad, if it’s used by a bad person. That’s why we believe we create VPNs for a good reason.

And what was this “good” reason? Heather continues:

[Our CEO] actually decided to create a VPN because he believed the freedom of information is something worth working for…it’s unfair to limit people in whatever resources he wants to visit…it was right after Snowden (i.e. right after Edward Snowden released classified information from the National Security Agency (NSA) revealing global surveillance programs)…we should develop VPNs to contribute to the free Internet.

The mention of Snowden hints at a strong belief in anti-surveillance (Osborne). Company D did, however, as outlined by bestvpn.com, keep some logs (for example, how many people are using the VPN at one time), for reasons like providing a better customer experience. This could point to a slight discrepancy between making their product appealing to the general public and a complete adherence to non-surveillance principles. It is also important to note that Company D hails from Ukraine, although their business is officially registered in the U.S. Ukraine has far more government surveillance than North America, and for this reason the employees may be more sensitive to such issues. Company D believes that a lack of surveillance will lead to “good” and is a good in itself, despite the potential for “bad people” to abuse its platform. We can see how Company C, Company A, and Company D all differ. Company C takes a pragmatic approach, believing less in grandiose ideals about the Internet and more in a case-by-case moral code. Company A believes in actively helping their non-North American users spread ideals of democracy. Company D was founded on principles of non-surveillance, and comes from a country where surveillance is rampant. Philip, of Company E, again provides a different answer:

For me it’s really important that people can experience the Internet the way it’s meant to be. For example when I have friends from other countries that tell me they can’t connect I just find that a little weird, so we unblock for that.

Here we see him take a pragmatic approach, much like Company C.
He personally finds it “a little weird” that there are countries where the Internet cannot be accessed freely, where people can’t access what he can. The notion of the Internet being experienced “the way it’s meant to be” refers to its decentralized nature, where there is no single body mediating usage (Barrat & Shade, 298). He, however, does not consider this position a position at all, but rather a sort of “non-position”:

Non-western countries focus a lot on controlling the Internet where western countries too try to have some level of surveillance of the Internet, and I think for us as a VPN company we don’t want to be an activist in the middle of these arguments and we don’t want to take a political stance…as simple as an adult you have the right browse as you like and we would like to protect your privacy from advertisers, malware, trackers etc. So I personally, and this doesn’t necessarily reflect the company, this is my personal opinion, for me it is extremely important because it allows a user to do what he wants on the internet and at the same time blocks advertisers from tracking him and [lets him] experience the web in a much cleaner way.

It is easy to claim to be apolitical when you are equating your VPN with doing things you consider to be good. In Philip’s case, as outlined in the paragraph above, this meant “protecting your privacy from advertisers, malware, trackers, etc.” But what if someone is doing something bad? I relayed the child pornography case to him and asked what he would do. Philip replied:

… it does come down to an ethical dilemma, but for us, who are extremely focused on making sure that we uphold our promise to our users… I don’t know how to answer this because on one hand this would not be my decision, and on the other hand, I don’t have previous experience to use to indicate what we would do.

Philip’s suggestion that Company E has never dealt with this specific circumstance, and his hesitance to answer either way, speak to the moral ambiguity surrounding such decisions, which have to be dealt with on a case-by-case basis. Again, this ambiguous answer is very different from Company A’s, which was adamant about never compromising their data no matter the case. When I ask Company B if they have ever been morally conflicted about their platform, I am a bit surprised by their answer. As described before, Company B was founded on strong principles of privacy and the free sharing of information. BestVPN.com did find that it stored some user data, including email addresses. When I ask if he feels ethically responsible for what users do using their VPN, Karl says:

Kind of…there are moments when our users have misbehaved…I’m thinking harassment/spamming…(but) we might be a bit trigger happy at some points.

Karl forwards a blog article in which, in his opinion, they were “trigger happy”. The article describes a competitor VPN company that “tricked” Company B into booting a user off their platform for spreading right wing/racist propaganda that had in fact been planted by the competitor, in an apparent attempt to smear Company B by accusing them of not being “neutral” as a VPN company “should be”. Company B responded that they had received screenshots of the fake perpetrator’s aggressions, and the behavior clearly went against their terms and conditions. As stated in Company B’s response:

The ToS clearly states that we will not protect users spreading right wing material.
The author of the aforementioned article states that in his personal opinion a VPN service should be neutral. We see this differently. If a user spreads right wing propaganda then he/she/it is on the wrong side of history. We are not going to tolerate that our work is used to further the agenda of people who think that:
- Just because your skin has a different color,
- You have a different religion,
- You have a different sexuality,
- Or a disability

Here, Company B draws a strict line as to what would cause them to compromise user data: bigoted behaviour. Karl admits that it is hard to know exactly where the moral line between acceptable and non-acceptable behavior lies, and when action should be taken, but says that a VPN should not provide users with a “free for all” when it comes to their Internet use.

CONCLUSION

As has been described in this paper, VPNs as a technology are ultimately dependent on the choices of the humans who create and maintain them. That is why I have spent time highlighting key points in conversations with those who work at VPN companies, as they reflected the theme of trust and their own personal values throughout the course of this study. Ultimately, the attitudes of the VPN providers interviewed shine a light on the limits of VPN technology as it exists in the world, and the level of security it provides. VPNs function in a way that is affective, so to speak: the principles they function according to are based not on industry norms, but on affectively derived and highly personal values. Even in a small sample size there is much variance and disagreement on what the limits of “security” and “freedom” are, and where these concepts should be tempered in favour of other values (for example, stopping cybercrime, like child pornography). It is clear that there is no playbook for making principled or ethical decisions surrounding the administration of a VPN. Although all the companies interviewed said that they stand for privacy and security, the definitions of these terms actually blurred in practice. Two of the five companies, Company C and Company B, were upfront about where their adherence to the pure concepts of privacy and security stopped. In Company C’s case, it was child pornography, and in Company B’s case it was racism. They both admitted, however, that it was difficult to know when to step in on these issues and each, ultimately, applied a case-by-case approach to taking any action. Company A, on the other hand, affirmed that there was no circumstance under which they would compromise their principles. The way they saw it, any such compromise would undermine all the “good” their VPN did. Company D and E, although not as explicitly, suggested a similar conclusion. The “good” their VPNs provided was worth any “bad” behaviours. The definition of “good” also differed from company to company, ranging from helping bypass government censorship, to protecting people from nosey ISPs, to keeping the Internet “open the way it was meant to be”. This raises the question: does consciously and actively applying boundaries on freedom and privacy really take away from the “good” VPNs do? Can a VPN company help journalists in Uganda exercise principles of democracy and also help Canadian authorities track down child predators? Are these two things really mutually exclusive? On the flip side of this question lies a paradox. Refraining from giving away customer logs at any cost is a choice that is not “neutral”, but political.
For example, working to keep the Internet “open” is a choice, and assuming the Internet is “meant to be” a certain way is an opinion. In Company A’s case, giving free Internet access to those looking to spread “democratic principles” is reflective of a very particular worldview that is quite apart from “security” and “privacy”, even if these principles do, at points, overlap with democratic principles. My point is that even companies who claim to never compromise principles of security or privacy do, in fact, still make ethical decisions that compromise other widely held ethical beliefs. These compromises arise the moment concepts like “privacy” and “security” are applied within a dynamic world. Their application in the world requires decisions to be made that change them from positive ideals to affectively derived interpretations of ideals. This shows how particularly affective VPN companies are, given the lack of formalized regulations and standards. I would argue that insofar as VPN companies are simply providing a free and secure space, they are not doing good or bad. They are simply providing a tool for individuals to use the Internet as they see fit, whether this be to circumvent copyright or protest the government. It is these individuals who inject that space with content, which can be both “good” and “bad”, and who must affectively feel their way into a space that aligns with their own worldview as a standard for “trust” in an unregulated environment. Some VPN companies confuse Berlin’s two concepts of freedom. They equate the freedom they provide, freedom in the negative sense, with democratic principles. Though negative freedom may be a democratic principle, this freedom alone does not necessarily imply democracy. Democracy entails a series of values and sentiments that go beyond a simple lack of boundaries, for example the values of equality and the purposeful separation of church and state. As Papacharissi explains, the Internet pluralizes but does not necessarily democratize a space. As an example, the employees of Company A found pride in the “good” uses that came from their VPN, for example, family members reuniting in countries that were politically volatile. They felt justified in running their company, as they found it “important for democracy”. But insofar as a VPN company does nothing more than provide a free space, are they really doing something “good” or “bad”? If it is the users who take action through the VPN, how responsible is a company for these actions? Moreover, can their service simply be a neutral “tool” when someone uses it for “bad” purposes, for example for “acute crimes” in North America, but a “democratic service” when someone uses it for “good”?

Another example may shed more light on this question. In 2011, during the “Egyptian Revolution”, Facebook CEO Mark Zuckerberg at first distanced himself from taking any credit for the actions of Egyptians during the protests. But, at a shareholder meeting, he claimed that Facebook was indeed a vehicle for democracy, and that this was its main purpose. As the revolution went awry (an equally repressive regime took power), he again rejected ownership over any actions resulting from the use of Facebook, and went back to saying Facebook was nothing more than a tool. In this case, as in the situation of VPN users, the Internet pluralizes but does not necessarily democratize.
So for technology companies to facilitate spaces for negative freedom, and then claim ownership over the democratic elements that emerge from those spaces, seems like a tenuous connection to make on their part. Company A, however, does actively promote democracy in ways that are apart from their existence as a VPN provider. For example, they have, on occasion, provided free, secure Internet access to protesters who were fighting for democracy. Although the company believes they are acting ethically through the creation of a space for negative freedom, they are not necessarily promoting positive values like democracy. This is demonstrated by the fact that, because they don’t want to undermine their ethical position of a free internet, they refuse to do anything that could curb it, for example helping authorities track down a child predator. But doing so would simply be curbing a negative freedom. It would not impinge on the notion of “democracy”, because it is not at odds with it; it is simply at odds with a complete negative freedom. Democracy is a value that the company holds that is quite apart from the complete freedom provided by their VPN. The above example is raised because it helps reveal the potential ethical problems or paradoxes VPN companies can be confronted with, particularly if they misunderstand their stance to be a “neutral” democracy and fail to acknowledge the political dimension of their choices (Laclau and Mouffe 96). On the Internet, remaining neutral is not an option. As Chantal Mouffe says, each choice made ultimately exposes your beliefs, as a choice favours one option over another. I would suggest that VPN companies not shy away from making policies grounded in their own ethics, and that they should remain open about them. It is only through this that consumers can properly align themselves with a company that matches their own needs and values, thus fostering trust. So, where does this lead us in terms of the issue and theme of trust? As admitted by the majority of VPN companies interviewed, trust is a problem that cannot be resolved satisfactorily. Trust is a two-way street, which relies on a mutual understanding that both parties share the same values. Trust depends on a user’s own values and purposes for using a VPN. As Papacharissi says, on the Internet, we affectively feel our way into communities that we relate to. Choosing a VPN is much the same. Allegiance and trust are based on affective feelings, not only on concrete evidence, which is often difficult to find. Papacharissi says that “what reason, belief, and ideology suggest, affect, feeling, and emotion frequently overturn in favor of the irrational” (3). Yet the rational and the irrational remain inextricable from one another, and it is only hypothetically that we are able to divide them. Even through interviewing five companies, who work in the niche area of VPNs, we are able to see interpretations that lead to different applications of policy, though all companies claim to be abiding by the same principles: security and privacy. Through one lens, these discrepancies could be seen as immaturity and recklessness within the industry (Leyden). Through another lens, the precarious, affective nature of the technology could be an example of the nature of the Internet on a larger scale: a pluralized space through which people feel their way.

Works Cited

Abbate, Janet. Inventing the Internet. MIT Press, 1999.

Andy.
“Hola VPN Sells Users’ Bandwidth, Founder Confirms.” TorrentFreak, 28 May 2015, torrentfreak.com/hola-vpn-sells-users-bandwidth-150528/. Accessed 4 Dec. 2017.

Barrat, N., and L. R. Shade. “Net Neutrality: Telecom Policy and the Public Interest.” Canadian Journal of Communication, 2007, pp. 295-305.

Bazley, Tarek. “Iran internet plan ignites debate.” Al Jazeera, 29 Sept. 2012, www.aljazeera.com/indepth/features/2012/09/2012927132545740255.html. Accessed 4 Nov. 2017.

Beldad, Ardion, et al. “How shall I trust the faceless and the intangible? A literature review on antecedents of online trust.” Computers in Human Behavior, vol. 26, no. 5, Sept. 2010, pp. 857-869. Accessed 19 Aug. 2018.

Berlin, Isaiah. “Two Concepts of Liberty.” Four Essays on Liberty, Oxford University Press, 1969.

Buckle, Chase. “Turkey Leads for VPN Usage.” GlobalWebIndex, Chart of the Day, 10 Jan. 2017, www.vpnmentor.com/reviews/btguard-vpn/. Accessed 04 Jan. 2018.

Chen, Caleb. “Thousands march in Moscow, Russia to support Internet Freedom, protest VPN ban.” Privacy News Online, 24 July 2017, www.privateinternetaccess.com/blog/2017/07/thousands-march-moscow-russia-support-internet-freedom-protest-vpn-ban/. Accessed 25 Oct. 2017.

Company A Interview. Personal interview. 15 April 2018.

Company B Interview. Personal interview. 02 May 2018.

Company C Interview. Personal interview. 06 April 2018.

Company D Interview. Personal interview. 09 March 2018.

Company E Interview. Personal interview. 02 June 2018.

“Complaint, Request for Investigation, Injunction, and Other Relief in the Matter of AnchorFree, Inc. Hotspot Shield VPN.” Submitted by The Center for Democracy & Technology (CDT), 1 Aug. 2017, www.documentcloud.org/documents/3911863-FTC-Complaint-on-VPNs.html. Accessed 29 Dec. 2017.

Dawson, Dave. “Determining and integrating the best applications for VPNs.” Computer Technology Review, vol. 17, no. 9, 1997, pp. 10-12.

Eddy, Max. “The Best VPN Services of 2018.” PC Mag, 6 Aug. 2018, www.pcmag.com/article2/0,2817,2403388,00.asp. Accessed 6 Aug. 2018.

Evans, Pete. “Bell Media president says using VPNs to skirt copyright rules is stealing.” CBC, 5 Jun. 2015, www.cbc.ca/news/business/bell-media-president-says-using-vpns-to-skirt-copyright-rules-is-stealing-1.3099972.

“Federal Decree-Law no. (5) of 2012 On Combating Cybercrimes.” Khalifa Bin Zayed Al Nahyan, ejustice.gov.ae/downloads/latest_laws/cybercrimes_5_2012_en.pdf.

Foucault, Michel. Discipline and Punish: The Birth of the Prison. New York, Random House, 1977.

Fullagar, David. “Evolving Proxy Detection as a Global Service.” Netflix Media Center, 14 Jan. 2016, media.netflix.com/en/company-blog/evolving-proxy-detection-as-a-global-service.

Galletta, Anne. Mastering the Semi-Structured Interview and Beyond. New York University Press, 2013.

Gregg, Melissa, and Gregory Seigworth. The Affect Theory Reader. London, Duke University Press, 2010.

“Gurdeep Singh Pall.” Crunchbase, People, www.crunchbase.com/person/gurdeep-singh-pall. Accessed 04 Jan. 2018.

“GWI Social: GlobalWebIndex’s Quarterly Report on the Latest Trends in Social Networking.” GlobalWebIndex, Flagship Report Q1, 2017, cdn2.hubspot.net/hubfs/304927/Downloads/GWI-Social-Summary-Q1-2017.pdf.

Habermas, Jurgen. “The concept of human dignity and the realistic utopia of human rights.” Philosophical Dimensions of Human Rights, Springer, 2012, pp. 63-79.

Haraty, Ramzi, and Bassam Zantout. “The TOR data communication system.” Journal of Communications and Networks, vol. 16, no. 4, 2014, ieeexplore-ieee-org.ezproxy.lib.ryerson.ca/document/6896565/citations. Accessed 18 Aug. 2018.

Hawke, Robinson.
“Malware FAQ: Microsoft PPTP VPN.” Sans, www.sans.org/security-resources/malwarefaq/pptp-vpn. Accessed 3 Dec. 2017.

Ikram, Muhammad, et al. An Analysis of the Privacy and Security Risks of Android VPN Permission-enabled Apps. Internet Measurement Conference, 2016, www.icir.org/vern/papers/vpn-apps-imc16.pdf.

KennWhite [Kenn White]. “Most VPNs are Terrible.” GitHub Gist, 20 Jul. 2017, gist.github.com/kennwhite/1f3bc4d889b02b35d8aa. Accessed 25 Oct. 2017.

Khazan, Olga. “Actually, Most Countries Are Increasingly Spying on Their Citizens, the UN Says.” The Atlantic, 6 June 2013, www.theatlantic.com/international/archive/2013/06/actually-most-countries-are-increasingly-spying-on-their-citizens-the-un-says/276614/. Accessed 7 Aug. 2018.

Jackson, Allan. “Coalition asks CRTC to block websites with pirated content in a bid to fight illegal streaming.” The Financial Post, 29 Jan. 2018, business.financialpost.com/telecom/coalition-asks-crtc-to-block-websites-with-pirated-content-in-bid-to-fight-illegal-streaming. Accessed 19 Aug. 2018.

LaBorde, Doug. “Understanding and implementing effective VPNs.” Computer Technology Review, vol. 18, no. 2, 1998, pp. 12-16.

Laclau, Ernesto, and Chantal Mouffe. Hegemony and Socialist Strategy: Towards a Radical Democratic Politics. Verso, 1985.

Leyden, John. “90% of SSL VPNs are ‘hopelessly insecure’, say researchers.” The Register, 26 Jul. 2016, www.theregister.co.uk/2016/02/26/ssl_vpns_survey/. Accessed 15 Jul. 2018.

Longworth, James. “VPN: from an obscure network to a widespread solution.” Computer Fraud & Security, vol. 2018, no. 4, doi.org/10.1016/S1361-3723(18)30034-4. Accessed 20 Aug. 2018.

MacKinnon, Rebecca. Consent of the Networked: The World-Wide Struggle for Internet Freedom. Basic Books, 2012.

Madden, Mary, and Lee Rainie. “Americans’ Attitudes About Privacy, Security and Surveillance.” Pew Research Center, 20 May 2015, www.pewinternet.org/2015/05/20/americans-attitudes-about-privacy-security-and-surveillance/.

Madrigal, Alexis C. “What we know about Facebook’s latest data scandal.” The Atlantic, 04 Jun. 2018, www.theatlantic.com/technology/archive/2018/06/what-we-know-about-facebooks-latest-data-scandal/561992/. Accessed 20 Jul. 2018.

Gupta, Meeta. Building a Virtual Private Network. NIIT, 2003.

“Microsoft Leads Initiatives for Virtual Private Networks Across the Internet.” Microsoft, 4 March 1996, news.microsoft.com/1996/03/04/microsoft-leads-initiative-for-virtual-private-networks-across-the-internet/. Accessed 1 Nov. 2017.

“How VPN Works.” Microsoft TechNet, technet.microsoft.com/pt-pt/library/cc779919(v=ws.10).aspx#w2k3tr_vpn_how_niuh. Accessed 20 Aug. 2018.

Moczulski, J.P. “Protests in Iran lead to a surge in downloads of Canadian VPN tools.” The Globe and Mail, www.theglobeandmail.com/report-on-business/small-business/going-global/protests-in-iran-lead-to-a-surge-in-downloads-of-canadian-vpn-tools/article37599480/. Accessed 12 May 2018.

Mohta, Pushpendra. “VPNs bring interactivity to electronic commerce.” HP Chronicle, vol. 15, no. 7, 1998.

Mouffe, Chantal. Agonistics: Thinking the World Politically. Verso, 2013.

Mouffe, Chantal. Interview by Pluralt. Studies in Political Economy, Spring 1996, spe.library.utoronto.ca/index.php/spe/article/view/9368. Accessed 24 Oct. 2017.

Mouffe, Chantal. The Return of the Political. Verso, 1993.

“Multiple Critical Vulnerabilities in Hola Overlay Network Client.” Hola Security Advisory, adios-hola.org/advisory.txt. Accessed 20 Dec. 2017.

Nave, Kathryn.
“Infoporn: how VPN use varies by country.” Wired, 1 Jul. 2016, www.wired.co.uk/article/vpn-use-worldwide-privacy-censorship. Accessed 04 Dec. 2017.

Nie, Weiliang. “Chinese learn to leap the ‘Great Firewall’.” BBC, 19 March 2010, news.bbc.co.uk/2/hi/technology/8575476.stm. Accessed 4 Nov. 2017.

Osborne, Charlie. “Snowden wants to build anti-surveillance tech.” CNET, 21 Jul. 2014, www.cnet.com/news/snowden-to-build-anti-surveillance-tech/. Accessed 15 Jul. 2018.

Papacharissi, Zizi. Affective Publics: Sentiment, Technology, and Politics. Oxford University Press, 2015.

Paul, Ian. “VPNs have a Trust Issue: Here’s What TunnelBear did About it.” PC World, 8 Aug. 2018, www.pcworld.com/article/3213032/security/vpns-have-a-trust-issue-heres-what-tunnelbear-did-about-it.html. Accessed 7 Jul. 2018.

Rankin, Kyle. Linux Hardening in Hostile Networks: Server Security From TLS to Tor. Boston, Addison-Wesley, 2018.

Reed, Alan, et al. “Forensic Analysis of Epic Privacy Browser on Windows Operating Systems.” European Conference on Cyber Warfare and Security, Jun. 2017, pp. 341-350.

Rosen, Rebecca J. “The Fight for a Fair and Free Internet.” The Atlantic, 14 Feb. 2012, www.theatlantic.com/technology/archive/2012/02/the-fight-for-a-fair-and-free-internet/253027/.

“Russia: VPN ban is a major blow to Internet freedom.” Amnesty International, 31 July 2017, www.amnesty.org/en/latest/news/2017/07/russia-vpn-ban-is-a-major-blow-to-internet-freedom/. Accessed 25 Oct. 2017.

“Secure Internet servers (per 1 million people).” World Bank, 2016, data.worldbank.org/indicator/IT.NET.SECR.P6. Accessed 07 Jan. 2017.

Silverman, Laura. “Turning to VPNs for Online Privacy Might be Putting your Data at Risk.” All Tech Considered, NPR, www.npr.org/sections/alltechconsidered/2017/08/17/543716811/turning-to-vpns-for-online-privacy-you-might-be-putting-your-data-at-risk. Accessed 04 Dec. 2017.

Steinberg, Steve G. “Hype List.” Wired, 2 June 1998, www.wired.com/1998/02/hype-list-25/. Accessed 4 Nov. 2017.

“Syria.” Freedom on the Net 2017, Freedom House, 2017, freedomhouse.org/report/freedom-net/2017/syria. Accessed 4 Dec. 2017.

“The Truth About VPNs.” Techdirt, 4 April 2017, www.techdirt.com/blog/podcast/.

Tocqueville, Alexis de. Democracy in America. Translated by Harvey C. Mansfield and Delba Winthrop, University of Chicago Press, 2000.

“TunnelBear Completes Industry-First Consumer VPN Public Security Audit.” TunnelBear, tunnelbear.com/blog/tunnelbear_public_security_audit/. Accessed 04 Dec. 2017.

“Turkish people turn to VPNs as Istanbul protests spread.” BBC, 6 June 2013, www.bbc.com/news/technology-22799768. Accessed 4 Nov. 2017.

United States of America v. Ryan S. Lin. No. 17-MJ-4251-DHH. United States District Court, 2017, www.justice.gov/opa/press-release/file/1001841/download.

Upfal, Eli. “Are Free VPN Services As Safe As Paid VPN?” YouTube, uploaded by Failed Normal Redux, 13 Apr. 2016, www.youtube.com/watch?v=vDbPjgXstHg.

“Virtual Private Network (VPN) Market Analysis by Type, Deployment, Products, End User | VPN Market Worth US $41.702 Billion by 2023 at 18% CAGR.” MarketWatch, 12 June 2018, www.marketwatch.com/press-release/virtual-private-network-vpn-market-analysis-by-type-deployment-products-end-user-vpn-market-worth-us-41702-billion-by-2023-at-18-cagr-2018-06-12. Accessed 6 Aug. 2018.

Vectra Threat Labs. “Technical analysis of Hola.” Vectra, 1 Jun. 2015, blog.vectra.ai/blog/technical-analysis-of-hola. Accessed 4 Dec. 2017.

Young, Katie. “1 in 5 are weekly VPN users.” 18 Aug. 2016, blog.globalwebindex.net/chart-of-the-day/1-in-5-are-weekly-vpn-users/.

Young, Katie. “4 things to know about VPN users.” 2 Feb.
2016, blog.globalwebindex.net/chart-of-the-day/4-things-to-know-about-vpn-users/.

“What is a VPN? And why you should use a VPN on public Wi-Fi.” Norton, Security Center, us.norton.com/internetsecurity-privacy-what-is-a-vpn.html.

Whittaker, Zack. “Why Hotspot Shield’s co-founder puts privacy over profits.” ZDNet, 12 Jan. 2016, www.zdnet.com/article/why-hotspot-shield-co-founder-puts-privacy-over-profits/.

Winckworth, Kate. “Tinker, Torrentor, Streamer, Spy: VPN Privacy Alert.” CSIROscope, 25 Jan. 2017, blog.csiro.au/tinker-torrentor-streamer-spy-vpn-privacy-alert/. Accessed 25 Oct. 2017.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.32920/ryerson.14652117?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.32920/ryerson.14652117, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://rshare.library.ryerson.ca/articles/thesis/A_Short_Qualitative_Analysis_Of_Virtual_Private_Networks/14652117/1/files/28133880.pdf" }
null
[ "Review" ]
true
null
[]
23,767
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Business", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01ba758718a903ee35a12f0f667da8992f78ef3f
[ "Computer Science" ]
0.855066
Exploring the potential of blockchain technology within the fashion and textile supply chain with a focus on traceability, transparency, and product authenticity: A systematic review
01ba758718a903ee35a12f0f667da8992f78ef3f
Frontiers in Blockchain
[ { "authorId": "2209068064", "name": "Aayushi Badhwar" }, { "authorId": "51293595", "name": "S. Islam" }, { "authorId": "46387291", "name": "C. Tan" } ]
{ "alternate_issns": null, "alternate_names": [ "Front Blockchain" ], "alternate_urls": null, "id": "17d7865f-0af7-472c-b174-60948bf06d11", "issn": "2624-7852", "name": "Frontiers in Blockchain", "type": null, "url": "https://www.frontiersin.org/journals/blockchain#" }
Blockchain Technology has shown tremendous potential to be a foundation for the currently shifting paradigm towards more traceable and transparent supply chains. This review highlights the opportunities that exist in adapting Blockchain Technology in the fashion and textile supply chain, while also providing insight into the challenges of adopting this technology. This paper provides a systematic review of the potential of Blockchain Technology within the fashion and textile industry’s supply chain to analyse its role in traceability, transparency, and product authenticity. To achieve this, a substantial number of research papers and non-scholarly resources have been scrutinised. An emphasis was placed on topics regarding Blockchain Technology (BT), the fashion and textile industry and supply chain (manufacturing and distribution), traceability, transparency, and product authenticity. The selected research papers include empirical analyses, argumentative papers, case studies, opinion articles, review articles, short reports, and book chapters.
OPEN ACCESS

EDITED BY Meghana Kshirsagar, University of Limerick, Ireland

REVIEWED BY Renjith V. Ravi, MEA Engineering College, India; Robin Singh Bhadoria, GLA University, India

*CORRESPONDENCE Aayushi Badhwar, [aayushi.badhwar@rmit.edu.au](mailto:aayushi.badhwar@rmit.edu.au)

SPECIALTY SECTION This article was submitted to Blockchain in Industry, a section of the journal Frontiers in Blockchain

RECEIVED 15 September 2022 ACCEPTED 07 February 2023 PUBLISHED 20 February 2023

CITATION Badhwar A, Islam S and Tan CSL (2023), Exploring the potential of blockchain technology within the fashion and textile supply chain with a focus on traceability, transparency, and product authenticity: A systematic review. Front. Blockchain 6:1044723. [doi: 10.3389/fbloc.2023.1044723](https://doi.org/10.3389/fbloc.2023.1044723)

COPYRIGHT © 2023 Badhwar, Islam and Tan. This is an open-access article distributed under the [terms of the Creative Commons Attribution License (CC BY)](https://creativecommons.org/licenses/by/4.0/). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

# Exploring the potential of blockchain technology within the fashion and textile supply chain with a focus on traceability, transparency, and product authenticity: A systematic review

#### Aayushi Badhwar*, Saniyat Islam and Caroline Swee Lin Tan

School of Fashion and Textiles, RMIT University, Melbourne, VIC, Australia

Blockchain Technology has shown tremendous potential to be a foundation for the currently shifting paradigm towards more traceable and transparent supply chains. This review highlights the opportunities that exist in adapting Blockchain Technology in the fashion and textile supply chain, while also providing insight into the challenges of adopting this technology. This paper provides a systematic review of the potential of Blockchain Technology within the fashion and textile industry’s supply chain to analyse its role in traceability, transparency, and product authenticity. To achieve this, a substantial number of research papers and non-scholarly resources have been scrutinised. An emphasis was placed on topics regarding Blockchain Technology (BT), the fashion and textile industry and supply chain (manufacturing and distribution), traceability, transparency, and product authenticity. The selected research papers include empirical analyses, argumentative papers, case studies, opinion articles, review articles, short reports, and book chapters.

KEYWORDS blockchain technology, fashion, fashion industry, supply chain, transparency, traceability, product authenticity, Circular Economy

Abbreviations: BoF, Business of Fashion; BT, Blockchain technology; GFA, Global Fashion Agenda; ICN, India Committee of the Netherlands; ISO, International Organisation for Standardisation; SCM, Supply Chain Management; SDG, Sustainable Development Goals; UNICEF, United Nations International Children’s Emergency Fund; USD, United States Dollar.

## 1 Introduction

The exciting emergence of Blockchain Technology (BT) has revolutionised many industries over the past decade (Kimani et al., 2020). BT is a decentralised and distributed digital ledger that allows for unalterable record-keeping (Ahad et al., 2020). The immutable feature of this technology has tremendous potential for the fashion and textile industry to improve its transparency and traceability in supply chain operations (Agrawal et al., 2018).
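The mechanism behind that immutability can be made concrete with a minimal, self-contained sketch. The example below is an illustration added for clarity rather than a system from the reviewed literature; it uses only Python’s standard library, and the supply-chain records in it are hypothetical. It shows the core property the cited papers rely on: because each block stores the hash of its predecessor, altering any historical record invalidates every block that follows.

```python
# A minimal sketch of a hash-linked ledger, illustrating why blockchain
# records are tamper-evident. Illustrative only; real blockchains add
# consensus, digital signatures, and distribution across many nodes.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents (deterministic via sorted JSON keys)."""
    payload = json.dumps(block, sort_keys=True).encode("utf-8")
    return hashlib.sha256(payload).hexdigest()

def append_block(chain: list, record: str) -> None:
    """Append a record, linking it to the hash of the previous block."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"index": len(chain), "record": record, "prev_hash": prev})

def is_valid(chain: list) -> bool:
    """A chain is valid only if every stored link matches a recomputed hash."""
    return all(
        chain[i]["prev_hash"] == block_hash(chain[i - 1])
        for i in range(1, len(chain))
    )

chain: list = []
append_block(chain, "bale 41 ginned at supplier X")   # hypothetical records
append_block(chain, "fabric lot 7 dyed at mill Y")
append_block(chain, "garment batch 3 shipped to retailer Z")
assert is_valid(chain)

chain[0]["record"] = "bale 41 ginned at supplier W"   # tamper with history
assert not is_valid(chain)                            # tampering is detected
```

In a deployed blockchain, copies of such a chain are held by many independent nodes, so a would-be tamperer must rewrite every later block across most copies at once; this is what makes the ledger effectively immutable in practice.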
Understanding the application of BT in the fashion and textile industry is a complex task due to the scale of the industry, which was valued at 1.5 trillion US dollars in 2021 and is estimated to reach 2 trillion by 2026 (Smith, 2022), as shown in Figure 1.

Abbreviations: BoF, Business of Fashion; BT, Blockchain Technology; GFA, Global Fashion Agenda; ICN, India Committee of the Netherlands; ISO, International Organisation for Standardisation; SCM, Supply Chain Management; SDG, Sustainable Development Goals; UNICEF, United Nations International Children's Emergency Fund; USD, United States Dollar.

Moreover, the fashion and textile industry is cluttered with complex supply chains (Garcia-Torres et al., 2019). The industry is dependent on global supply chains for manufacturing and distribution processes, ranging from sourcing raw materials to delivering finished products to customers (Masson et al., 2007). The inherent complexity of supply chains is often used to obscure the origin, tracing, and authenticity of fashion and textile products (Li, 2013). In addition, unethical and corrupt practices within supply chains, such as forced child labour, modern slavery, and disregard for environmental consequences, can be hidden from both the retailer and the customer (De Aguiar Hugo et al., 2021). While these cost-cutting practices can increase the retailer's profit margin, they also have the potential to jeopardise the customer's trust in their favourite brands and, in extreme cases, the customer's health (Bikoff et al., 2015).

The emergence of the COVID-19 pandemic has reiterated how essential it is to maintain healthy living and a natural balance with the planet. The retail landscape is set to resume normality, compared to before the pandemic, by the fourth quarter of 2023 (BoF-McKinsey&Company, 2021). While the world faced multiple lockdowns, unpredictable fluctuations in consumer trends created new opportunities for emerging technology in businesses. BT is one such technology, which aided many businesses in maintaining their global supply chains while simultaneously enhancing the customer experience.

This paper systematically reviews the potential of BT, based on its inherent properties and capabilities, in the manufacturing and distribution areas of the fashion and textile supply chain. The complex structure of the fashion and textile industry will be discussed, emphasising the diverse and complex nature of its supply chains. This paper utilises a narrative approach to synthesising the information, presenting the findings of the systematic review of the key research topics in structured summaries. This is achieved by mapping the narrative paragraphs under thematic headings. This strategy has assisted in providing a structural flow to the evidence found within the course of conducting this review.

The diverse nature of the industry's supply chains permits numerous loopholes to be exploited. The lack of traceability and transparency creates a disconnect between what is publicised and the harsh reality. Currently, there is a dearth of research in this field that provides a universally feasible framework to solve the existing concerns. BT exhibits promising potential to revolutionise the fashion and textile industry's supply chain. In the fashion and textile industry, BT is yet to be applied to the entire supply chain, with current uses limited to solving niche problems rather than providing a holistic solution for the challenges of traceability and transparency.
The limited exploration of BT features through experimental research has left countless challenges to be resolved. This paper highlights the existing research gaps while summarising the direction of current research in the field of BT in the fashion and textile industry. This paper also sheds light on the lack of transparency and traceability within these supply chains and how it impacts the industry's ability to battle the counterfeit markets. Existing BT applications will be illustrated in sequence while highlighting the existing gaps in relevance to the fashion and textile industry's supply chains. The paper concludes with the limitations and future recommendations based on the review of the existing research.

TABLE 1 Inclusion and exclusion criteria of the review.

| Search parameters | |
|---|---|
| Keywords | "Blockchain Technology" AND "Supply Chain" AND "Fashion" OR "Fashion Industry" OR "Fashion and Textile Industry" AND "Traceability" OR "Transparency" OR "Product Authenticity" |
| Timespan | 1991–2022 |
| Search systems | Google Scholar, Emerald, ScienceDirect and Scopus |

| Criteria | Sources | No. of exclusions | No. of inclusions |
|---|---|---|---|
| Article type | Journal articles | 65 | 112 |
| | Book chapters | 17 | 8 |
| | Conference papers | 11 | 4 |
| | News articles, industry reports | 25 | 11 |
| | Master's/doctorate thesis | 4 | 1 |
| | Public case, webpages, video | 9 | 16 |
| Language | English | 122 | 152 |
| | Translated to English | 6 | 0 |
| | All other languages | 3 | 0 |
| Others | Irrelevant to the research area (e.g., other industries, blockchain models, cryptocurrencies, and economic applications of BT) | 108 | |
| | Irrelevant to the topic | 7 | |
| | Not accessible | 3 | |
| | Duplicates | 6 | |

### 1.1 Background

The presented systematic review is the first attempt at bridging the problem which emerges from the lack of transparency and traceability within fashion supply chains with the existing and potential solution provided by BT. Additionally, this paper presents the review as a narrative summarisation of the challenges existing in the fashion and textile industry from the grassroots level, such as highlighting the lack of universally adopted definitions of key research topics like transparency and traceability within the industry's context. It aims at illustrating the limitations and scope of future research which can boost the adoption of BT to solve the existing challenges within the fashion and textile supply chain. It should be noted that research within the context of the application of BT in the fashion and textile industry does exist (Choi, 2019; Tripathi et al., 2021); however, it has not been widely adopted by the industry (Caldarelli et al., 2021).

BT and the relevant phenomena within the fashion and textile industry are well-established and researched. The key research topics, such as Blockchain Technology (BT), Supply Chain, Fashion, Fashion Industry, Fashion and Textile Industry, Traceability, and Transparency, have further background information (history and definitions) incorporated in the narrative sections under the thematic headings, which are crucial for the findings of this review. The current systematic review addresses the lack of empirical evidence, which results in the scarcity of life-cycle-assessment case studies (Ahmed and Maccarthy, 2021) related to the framework of the adoption of BT within the fashion and textile industry.

## 2 Methodology

This systematic review paper follows the five-step method proposed by Khan et al. (2003) to research the topic and ensure the reliability and transparency of this review. The five steps are as follows: a) outline the question for review, b) classify related work, c) evaluate the quality of studies, d) summarise the findings, and e) interpret the results. The proposed research question is, "What are the applications of blockchain within the fashion and textile manufacturing and distribution channels, and how does it impact the product's traceability, transparency, and authenticity?" A range of sources was identified to outline the relevant research work to explore the research question, as shown in Table 1. These sources were accessed using databases such as Google Scholar, ScienceDirect, Emerald, and Scopus. The sources were selected from different databases to avoid bias in selection. The shortlisted sources contain only published material to maintain the quality of this review. These sources were limited primarily to scholarly research presented originally in, or translated into, English, while also utilising non-scholarly research for supplementary evidence. Classification narrowed down the scholarly research work to peer-reviewed and cited research papers. This review utilised only trusted and credible global sources for non-scholarly research work, such as Common Objective, Business of Fashion & McKinsey & Company industry reports, and The Washington Post.

The sources examined for this review were selected using a transparent selection process by employing Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) to improve the quality of this review (Moher et al., 2009). The terms and keywords investigated in this review are: Blockchain Technology (BT), Supply Chain, Fashion, Fashion Industry, Fashion and Textile Industry, Traceability, Transparency, and Product Authenticity. Boolean syntax was also applied to these terms, using AND and OR, to form the desired combinations and find the most relevant work. The keywords were searched within the titles of the research resources. In total, 283 research resources were identified and, after their abstracts were analysed for relevance and credibility, shortlisted to 152 research resources, which were analysed thoroughly to produce this review. The research for this review was gathered in 2021 and 2022, and it was found that BT was invented in 1991.
Additionally, it was discovered that "Traceability", one of the essential keywords, was defined by the International Organisation for Standardisation (ISO) in 1994. Considering the above-mentioned reasons, a range between 1991 and 2022 provides a thorough search period for this review. However, the majority of the relevant resources reviewed for this study, which focuses on the current state of the research topic, are from 2011 to 2022.

The exclusion and inclusion criteria for this review are based on relevancy, source type, language, and other factors, which are summarised in Table 1, while the distribution of source types can be found in Figure 2. Figure 3 illustrates the refinement process undertaken to build the research library for this review. The initial search resulted in 283 sources, of which 6 duplicates were removed. The remaining 277 sources were screened for their relevance to the review while also being filtered into those written in, or translated to, English. 82 sources were excluded after this screening process, and 195 sources were examined in detail. A further 43 sources were excluded after they were thoroughly read and found to be irrelevant.

Figure 4 presents the yearly distribution, from the 1990s to the 2020s, of the sources included in the systematic review. BT was invented in 1991 but only gained a platform from 2008 onwards. Figure 4 uses 1994 as the starting point of the annual distribution, as the ISO defined traceability in 1994. The bulk of the relevant and updated sources reviewed for this paper are from 2011 to 2022. Within this systematic review, 23 of the included articles were published in 2020. It is critical to note that the majority of the research articles used in this review were published during the last 5 years (2018–2022), accounting for over 50% of the articles. The growth in the number of research articles reveals the heightened interest of researchers in this specific area of study. Figure 5 represents the most significant nations contributing to the research included in this systematic review. The greatest numbers of research articles originated from the United Kingdom (25%) and the United States (24%), followed by India and Australia.

## 3 Understanding Blockchain: Technology of the decade

Blockchain is a technology that has provided numerous businesses with a competitive technological edge in the last decade (Tripathi, et al., 2021). BT was invented by Stuart Haber and Scott Stornetta in 1991 (Kushwaha and Joshi, 2021); however, its global debut only occurred in 2008, when it was successfully applied to create "Bitcoin: A Peer-To-Peer Electronic Cash System" by Satoshi Nakamoto (Nakamoto, 2008). This technology has become globally known through cryptocurrency; however, BT's application to the fashion and textile industry remains in the nascent stages of exploration.

BT uses a distributed ledger, meaning it is a consensus of shared and synchronised digital data which can be spread across multiple sites, institutions, and countries (Panda et al., 2021). There is no central administrator or centralised data storage, which provides this technology with its main features of decentralisation and an immutable history. BT uses cryptographic hashes to store digital information in a block with a digital signature, making it traceable. BT includes key features such as consensus and smart contracts, which make this technology user-friendly for different types of industries (Kushwaha and Joshi, 2021).
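The hashing behaviour described above can be illustrated concretely. The minimal Python sketch below uses SHA-256, one widely used cryptographic hash function; the record and its field names are invented for this example and do not correspond to any specific BT platform. It shows how an input of any length maps to a fixed-size digest, and how changing one detail yields a completely different value:

```python
import hashlib

# A hypothetical supply chain record (all field names invented for illustration).
record = "batch=1024;origin=mill-A;fibre=organic-cotton"

# SHA-256 maps input of any length to a fixed-size, 64-character hex digest.
print(hashlib.sha256(record.encode()).hexdigest())

# Changing one detail produces a completely different digest, and the original
# record cannot be recovered from the digest alone (the function is one-way).
tampered = record.replace("organic-cotton", "blended-fibre")
print(hashlib.sha256(tampered.encode()).hexdigest())
```

This one-way, fixed-size property is what allows a block to commit to its contents without exposing or depending on their length.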
Upon validation of the block by consensus of the network, the block forms an unalterable chain. This unalterable state is achieved because changing a block in the chain (which can only be done by making a new block containing the same predecessor) requires regenerating all successor blocks and redoing the work of creating them. These blocks also contain time stamps and geo-location tags, which makes this technology traceable in real time (Drescher, 2017). Figure 6 illustrates some of these above-mentioned features which make BT immutable and traceable (Abeyratne and Monfared, 2016).

To recognise the properties of this technology, it is important to comprehend the fundamentals of its structure and functioning. A blockchain is a digital ledger of individual blocks. These blocks contain information and form a chain or jigsaw-like structure by attaching themselves to the other blocks in the chain. The information stored in the blocks can be of multiple origins and natures, such as a record of valid network activity, documents, and transactions (Abeyratne and Monfared, 2016). This information is stored in an encrypted form that is immutable and has a traceable history, which can be shared with the participants or stakeholders in the chain.

The generation and addition of blocks to the chain take the form of a digital transaction (Bauerle, 2018). For example, a party (or participant) requests information or initiates a digital transaction that is recorded in a block. This block circulates in the chain, and other participants can react and process this information with an appropriate response. This new information is then stored in the block. Common examples of this information are smart contracts and financial transactions (Crosby et al., 2016). Once this block is verified by all the participants, it is added to the chain permanently and the transaction is considered completed. It is pivotal to comprehend the procedure of creating blocks and their functional know-how in terms of security and storage capacity to understand the technology's potential use cases (Drescher, 2017).

## 4 Creation of a block in a blockchain

The process of creating a block can be categorised into three important stages, as illustrated in Figure 7: recording; verifying and validating; and updating. Each stage has a different purpose in the formation of a block.

### 4.1 Recording

Information is received to initiate the transaction and converted, through a specific algorithm, into cryptographic hashes (fixed-size alphanumeric strings) called hash values (Lemieux, 2016). These values are irreversible, meaning the output cannot be converted back to the input. With every new input, a new hash value is generated to maintain its authenticity (Drescher, 2017).

### 4.2 Verifying and validating

The block is then circulated and distributed in the ledger network of miners, also known as nodes. Each miner has a public key and a private key, which together form a digital signature. Any interaction or action, which includes time and geographical stamping, requires these keys (Crosby et al., 2016). These miners are members with the authority to validate the information stored in the block. Once a majority of these miners reach the same conclusion, the block is approved and validated to enter the next stage (Heiskanen, 2017).

### 4.3 Updating

This is the final stage, when the block becomes part of the blockchain. The block will include the hash values of the preceding blocks (Drescher, 2017).
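To make the three stages above more tangible, the following is a deliberately simplified, self-contained Python sketch, with invented supply chain events as data; it omits the consensus voting, digital signatures, and peer-to-peer distribution that an actual BT platform requires, and is not the implementation of any specific system. It chains blocks by storing each predecessor's hash and shows why editing one historical block invalidates every successor:

```python
import hashlib
import json
import time

def block_hash(contents: dict) -> str:
    """Hash a block's contents (which include the predecessor's hash)."""
    payload = json.dumps(contents, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def new_block(data: str, prev_hash: str) -> dict:
    """Record stage: store the data, timestamp it, and link it to the predecessor."""
    block = {"timestamp": time.time(), "data": data, "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

def chain_is_valid(chain: list) -> bool:
    """Re-derive every hash; any edit to an earlier block breaks all links after it."""
    for i, block in enumerate(chain):
        contents = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(contents):
            return False  # block contents no longer match the stored hash
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # link to the predecessor is broken
    return True

# Hypothetical supply chain events, invented for illustration.
chain = [new_block("genesis", "0" * 64)]
chain.append(new_block("bale 17 ginned at mill A", chain[-1]["hash"]))
chain.append(new_block("fabric lot 42 dyed at plant B", chain[-1]["hash"]))
print(chain_is_valid(chain))   # True

chain[1]["data"] = "bale 17 ginned at mill C"  # attempt to rewrite history
print(chain_is_valid(chain))   # False: every successor block would need regenerating
```

In a production ledger, this re-hashing cost is multiplied by the consensus work described in Section 4.2, which is what makes recorded events practically immutable.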
Therefore, to change a block in the chain, one must regenerate and redo all the successor blocks. This makes each previous block unalterable, as the output cannot be converted back to the input.

BT has become well-known because of its applications and developments in cryptocurrencies such as Bitcoin and Ethereum. In addition to cryptocurrency, industries such as banking, health, and taxation have been actively implementing BT to reap the advantages of the numerous features provided by this technology (Kimani et al., 2020). Naturally, business operations and supply chain management functions, including those of the fashion and textile industry (Tripathi et al., 2021), have taken an interest in potential applications of BT to address the complex nature of the supply chain. It has already been incorporated by several renowned luxury brands in segments of their supply chains and user interfaces to provide improved services to their customers (Choi, 2019). However, it is yet to be widely adopted by the fashion and textile industry (Caldarelli et al., 2021). To further evaluate this, the nature of the fashion industry and the relationship between transparency and traceability in fashion's supply chain require investigation.

## 5 Encompassing the fashion industry: Definition to the conventional reality

Through countless perspectives and debatable concepts, fashion can be defined as the exhibition of ideas and conscious concepts, articulated in clothes and ensembles (Entwistle, 2000). Major and Steele (2019) defined the fashion industry as "part of a larger social and cultural phenomenon known as the 'fashion system,' a concept that embraces not only the business of fashion but also the art and craft of fashion, and not only production but also consumption" (Islam, 2021). The multi-dimensional and layered structure of this industry makes it difficult to capture in a single definition (Montagna, 2015). While philosophically fashion can be described as a form of art or self-expression, it is commonly conceptualised in a 'costume-based' meaning (Tamara et al., 2014).

The multi-billion-dollar fashion industry is constantly evolving while concurrently transitioning rapidly to digitisation and globalisation, even during the COVID-19 pandemic (Brydges et al., 2021). Witnessing massive consumer shifts and disrupted supply chains, the industry has been facilitating rigorous adaptation of sales via online media channels. The fashion industry is in a recovery phase after losing growth during the pandemic period, and researchers have predicted that growth will return to 2019 levels by the end of 2023 (BoF-McKinsey&Company, 2021).

Today's fashion and textile industries are among the most rapidly growing and advancing global industries, with inherently complex and diverse supply chains (BoF-McKinsey&Company, 2020; BoF-McKinsey&Company, 2022). The ever-increasing complexity of designs and competitive pricing force companies into a global hunt for new raw materials, manufacturing techniques, and technology. The resulting supply chains are difficult to trace due to their vastness and distributed operations (Marshall et al., 2016). The complex and asymmetrical nature of the supply chains in the fashion and retail industry leaves loopholes which can be easily exploited. The fashion and textile industry is constantly associated with sweatshops, human rights breaches, pollution of the environment, black markets, and unsustainable practices toward the planet and people (Kurpierz and Smith, 2020).
The fashion and textile industry is infamously known to focus solely on maximising profit margins and minimising production costs. However, those low costs are commonly achieved through child labour, modern-day slavery, fake audits, corruption, and fraudulent certifications (De Aguiar Hugo et al., 2021). The fashion and textile industries are rapidly growing sectors that are highly subject to the conceptions of business owners and consumers. Consequently, accountability is limited and becomes blurred, especially in hazardous circumstances that result in negative publicity. The fashion industry is divided mainly into luxury fashion and mass-produced fashion. This results in a large variance in the operations of the supply chains belonging to each sector. An understanding of the differences in the nature and functioning of these supply chains is essential to develop a global solution.

### 5.1 Fashion manufacturing and distribution: Fashion industry's complex supply chain

A supply chain is a series of activities to control and channel the flow of materials, parts, and products to the customer (Stevens and Johnson, 2016). Supply Chain Management (SCM) is a confusing term, as it can be defined as a flow of materials and products, a management philosophy, a branch of management, or a management process (Tyndall, 1998). Depending on the nature of supply chains, there are many stakeholders involved, including but not limited to producers, product assemblers, manufacturers, wholesalers, retail merchants, and transporters (La Londe and Masters, 1994). Manufacturing companies adopt a supply chain management philosophy to establish management practices that allow them to operate continuously (Mentzer et al., 2001). To understand the basic nature of supply chains in the fashion industry, the luxury fashion industry should be separated and differentiated from the mass-produced apparel retail industry (Yang et al., 2017), as illustrated in Figure 8. The distinctions and differences between these two categories are highlighted below.

##### 5.1.1 Mainstream and high-end luxury fashion industry

High-end luxury ensembles and accessories are exclusively designed and manufactured for special orders, for example, by couture houses that create fashion as a form of art (Raustiala and Sprigman, 2006). Luxury products are created specifically to cater to an exclusive segment of consumers who desire high-end products and can afford to pay for them. The cost of these high-end products is significantly higher than that of mainstream fashion products. The high cost and exclusivity are justified by the creators and the brands, as they incorporate the best quality raw materials and precision manufacturing (Choi, 2020). In most cases, luxury brands control the design and quality of their products by owning the entire supply chain or opting for the most reliable manufacturer (Karaosman et al., 2017). Luxury brands own small and medium-sized manufacturing units to have complete control over their merchandise, to protect their designs, and to gain competitive advantages (Jestratijevic and Rudd, 2018). Therefore, luxury brands commonly share only selective and generic information regarding their supply chains and instead communicate the product value through various marketing campaigns (Jestratijevic et al., 2020a). The discreet nature of supply chain operations and the limited information provided by luxury brands can raise suspicion about the transparency of the brands' operations.
##### 5.1.2 Mass-produced apparel and the retail industry

The mass-produced apparel and retail industry constitutes most of the affordable-fashion brands and labels, which operate on a completely different supply chain model compared to luxury brands (Niinimäki et al., 2020). Fashion brands that target mass-produced apparel seek to manufacture products at the lowest cost to improve their profit margin. Making cheap and fast fashion has evolved into a business strategy for the apparel industry (Bhardwaj and Fairhurst, 2010; Moore, 2019; Nguyen, 2020). Fast-fashion brands seek manufacturers who can provide the lowest manufacturing time to keep up with changing trends. These strategies require low-cost human labour, high availability of manufacturing machinery, and great ease in sourcing raw materials for continuous production (Igwe and Kanyembo, 2019; Chen et al., 2020). Historically, capitalist businesses and brands have exploited developing nations because of the vast socio-economic gaps in their population segments. As a result, the lower socio-economic segment of the population is usually exploited to provide low-cost human labour in unethical supply chains (Strauss, 2012; Ikumapayi et al., 2020).

Fast-fashion brands rapidly manufacture large volumes of fashion products, catering to the mass population with the latest trends (Tsay et al., 1999; Bhardwaj and Fairhurst, 2010), and generally outsource their supply chain (Arora and Mittal, 2011). Brands with the 'take-make-waste' business model commonly do not have ownership of their supply chains, resulting in limited control and increased complexity compared to the luxury brands in the fashion and textile industry (Arrigo, 2021). There is immense pressure for the timely delivery of large-volume orders within the fast-fashion industry. Therefore, it is not surprising that official manufacturers unofficially employ third parties to outsource manufacturing services, making the supply chain more complex and harder to trace (Farahani et al., 2014). Additionally, audits are often executed in a fraudulent manner, which prevents issues in the supply chain from surfacing. The major impacts of these poorly and unethically managed supply chains are reflected in the health, safety, and rights of workers, along with the environment (Cho et al., 2015; Ciasullo et al., 2017; Diouf and Boiral, 2017). These circumstances together create an opaque cloud over the traceability of the fashion and textile supply chains.

### 5.2 State of the fashion and textile industry: An ethical perspective

The fashion and textile industry infamously suffers from poor working conditions. In 2012, the Tazreen Fashion Factory in Dhaka, Bangladesh was engulfed in fire, claiming 117 workers' lives and leaving many injured. This devastating and catastrophic event was the first glimpse, for many consumers, into the dark side of the fashion and textile supply chains (Saxena, 2020). This unfortunate incident was the cornerstone of mass consumer-awareness campaigns in fashion history (Omotoso, 2018), such as "Who made my clothes?". However, over the past decade, there has been a continuous string of fatal occurrences throughout the supply chain (CleanClothesCampaign, 2022). The Common Objective reported that at least 1,600 workers were confirmed killed in fatal accidents within the garment industry between 2012 and 2017 (CommonObjective, 2018b).
In February 2020, another denim factory, in India, burnt to ashes, claiming the lives of seven workers because of poor fire-exit procedures and routes, demonstrating a broken supply chain system that still operates under extremely hazardous conditions (Bellware, 2020). The Pulse of the Fashion Industry Report 2017 estimated 1.4 million recorded injuries in the industry each year, a figure projected to rise by 7% to 1.6 million by 2030 (GFA, 2017). Figure 9 illustrates a timeline of fatal incidents which have been reported since 2012.

According to the United Nations International Children's Emergency Fund (UNICEF), more than 100 million children are affected by association with the fashion industry, specifically garments and footwear. Along with child labour, these children are also affected by a lack of maternity protection, the absence of childcare facilities, and poor living and working conditions in garment worker communities (UNICEF, 2020). The United States Department of Labour reported that 51 countries around the globe use children in at least one part of the fashion and garment supply chain (CommonObjective, 2018a). In India, over half a million children work on cotton seeds in agriculture (ICN, 2015), while forced child labour in Uzbekistan is used for cotton cultivation and Syrian child refugees are used as labourers in Turkey's garment industry. These are just some examples that have surfaced because of the challenges of fraudulent IDs and the lack of birth registration (UzbekForum, 2021). These examples are a small snapshot of the problems that arise without a transparent and traceable supply chain.

The fashion industry is currently facing the impacts of the pandemic in the form of logistics bottlenecks, increasing shipping costs, material shortages, and manufacturing delays (BoF-McKinsey&Company, 2022). As a result, the performance pressure on the supply chains is higher than ever. The fashion and textile industry is already the second most polluting industry because of its manufacturing processes. These processes involve the use of excessive amounts of chemicals that generate hazardous residues (Nimkar, 2018) and by-products (McFall-Johnsen, 2020). The fashion and textile industry is notorious for sharing selective information regarding its supply chain. Selective sharing is used as a technique to hide corrupt practices and remain unaccountable, or to gain competitive advantages over others (Cho et al., 2015). These practices are only expected to increase in severity due to the increased pressure from the pandemic, while there is still no overarching system to keep the industry accountable for sustainability. Traceability and transparency could be essential tools for sustainable supply chain management (Garcia-Torres et al., 2019).

Considering the above research, a conclusion can be drawn that this fully functional, operating industry has many loopholes. These grey areas exist in both sub-sectors of the industry mentioned above: luxury and mass-produced. Both sub-sectors of the fashion industry share a mutual lack of traceability and transparency. Selective information disclosure and opaqueness in supply chain operations make the industry hard to penetrate and the validity of its sustainable practices hard to analyse. To discuss the topics of traceability and transparency, it is important to understand their origin and impacts on the world.
### 5.3 Traceability and transparency challenges within the fashion and textile industry

United Nations Sustainable Development Goal (SDG) #12 (UnitedNations, 2015) specifies the relevant information and awareness for sustainable development and lifestyles in harmony with nature, as outlined in Figure 10. Attaining traceability should be the first action plan for any industry, including fashion and textiles. Honest communication across the supply chain and with consumers, as transparency, will be the next step (Papú Carrone, 2020). Understanding the true meaning of these two terms is as important as creating an action plan based on them.

##### 5.3.1 Traceability

A general definition given by ISO in 1994 was the ability to trace the history, application, or location of an entity by means of recorded identifications (ISO, 1994). ISO has since refined the definition of traceability with modifications and improvements for different industries. In the context of the food industry, in 2005, ISO defined it as the origin of materials and parts, the processing history, and the distribution and location of the product after delivery. However, it has not been defined in the context of the fashion and retail industry. Due to the inherent differences that are specific to the fashion and textile industry, the existing definitions and interpretations are not transferable (Olsen and Borit, 2013).

##### 5.3.2 Transparency

Transparency is defined as relevant, timely, and reliable information, in written and verbal form (Williams, 2005). Ray and Das (2009) see it as the degree of openness; when applied to corporate structures, it is called corporate transparency. Defined through a business-operations lens, it is a tool to support the organisation-stakeholder relationship (Wehmeier, 2018). However, all these attempts to define the term transparency blur its boundaries, as no official definition has been accepted in research (Phillips, 2011).

A growing cohort of customers is demanding transparency in the fashion and textile industry (James and Montgomery, 2017). Influential groups of customers are demanding action from their favourite brands to become transparent and accountable (Griplas, 2021). The use of terms like transparency and traceability to create a unique selling point is cultivating a lack of trust between customers and the fashion and textile industry (Dahl, 2010; Jestratijevic et al., 2020b). Fashion brands rely extensively on their brand image to keep customers engaged. The use of media and celebrity endorsements is one of the oldest and most common strategies undertaken by brands to create and uphold the brand image (Erdogan, 1999). The fashion and textile industry lacks credible and non-selective information disclosure due to the extremely high stakes of losing or tarnishing the brand image (Jestratijevic et al., 2020a). This has also given rise to anti-consumption values among customers, which is leading some major fashion brands to operate on similar trends (Lee et al., 2017). It can be challenging to ensure human rights and sustainable practices due to the complex structure of the supply chain, which sometimes involves multiple layers of undeclared stakeholders (Jestratijevic et al., 2018). This allows a huge gap for misinterpretation and uncertainty around brand perception, which also restricts genuine communication between the brand and customers (Kang and Hustvedt, 2014).
The driving force for the fashion and textile industry is consumer demand; however, some research indicates a lack of consumer interest in prioritising sustainable fashion (Carrigan and Attalla, 2001; Jørgensen et al., 2006). The lack of customer awareness of sustainable practices allows the industry to create confusion and greenwash its products. This leads to a lack of trust in green products and ethical production, impacting purchasing decisions around sustainable fashion (Bhaduri and Ha-Brookshire, 2011; Saicheua et al., 2011; Pookulangara and Shephard, 2013). Transparent supply chain operations would remove this grey area, which is often used by companies to greenwash their products, and would likely promote customer interest in truly green products.

Along with manufacturers and producers, it is important to highlight the consumer's role in shaping the fashion industry. For consumers, it filters down to self-awareness regarding consumption and willingness to make sufficient changes to create sustainable habits. SDG #17, as shown in Figure 10, contextualises the importance of partnership between governments, the private sector, and civil society (SDG, 2015). However, one of the biggest challenges in this regard is the flooding of the market with counterfeit copies of authentic products. This not only impacts brands directly but also shapes the consumer's mindset. Therefore, it is important to understand the challenges of the counterfeit and grey markets in the fashion and textile industry and their impact on product authenticity.

### 5.4 Counterfeiting and the grey market: Product authenticity

Counterfeit goods are one of the fastest-growing industries in the world; counterfeiting is also one of the oldest organised crimes in history (Hamelin et al., 2013). It thrives in a parasitic relationship with other industries whose products are either valuable and expensive, or consumed heavily enough to make counterfeiting profitable at moderate prices. Pharmaceuticals, antiques and art pieces, jewellery and watches, toys, and mobile phones and accessories are some of the industries, other than fashion and textiles, that suffer from counterfeit goods (Chaudhry and Stumpf, 2011; Antonopoulos et al., 2020). In 2015, the World Economic Forum estimated that the piracy and counterfeit markets cost the global economy an estimated USD 1.77 trillion, which is nearly 10% of global merchandise trade (Gregson and Crang, 2017).

Counterfeit fraud is described as an enormous drain on the global economy by the International Chambers of Commerce (Hardy, 2011). It steals billions of dollars from the legitimate economy to fund undisclosed, underground industries. For counterfeit products, the money trail is untraceable, which hinders revenue collection by governments and increases the burden on taxpayers. It also allows poor-quality merchandise to enter the market and exposes consumers to these dangerous products (Hardy, 2011). However, it is difficult to estimate the actual size of the global grey market and counterfeit economy because of its non-traceable existence. The variance of laws and regulations in various parts of the world also adds to this problem (Antonopoulos et al., 2020). Counterfeiting products is a multi-levelled activity that varies depending on the nature of the business operation, ranging from deceptive or non-deceptive; low-quality or high-quality fakes; condoned copies; or copies of genuine products (Dugato et al., 2015).
The counterfeiting of fashion products comes under the scope of non-safety-critical goods; however, beauty and fragrance products come under the scope of safety-critical goods, as they can significantly affect consumers' health and safety (Large, 2015; Van Duyne et al., 2015). The counterfeiting economy can be characterised as organised crime, and the crime money generated is usually a corruptive force in the global economy. The counterfeit market is a threat to social life and overall global stability (Reuter, 2013).

The counterfeit economy thrives on consumer demand for products that are popular and brands that are famous (Delener, 2000; Hamelin et al., 2013), as they serve a social-adjustive purpose for consumers (Wilcox et al., 2009; Pham et al., 2018). Consumers, who play a vital role in upholding the counterfeit economy, can transact for these products in two ways: deceptively or non-deceptively (Wilcox et al., 2009). A deceptive counterfeiting transaction takes place when a consumer buys a particular brand or product thinking it is from a known and authentic brand when, in reality, it is not an original product. On the other hand, in a non-deceptive transaction, a consumer willingly takes part in the counterfeit transaction (Hopkins et al., 2003). Counterfeiting fashion products can be easier, as they are aspirational goods that are relatively easy to produce. Fashion products also have non-uniform restrictions, and the act of copying designs, in the name of inspiration, is forgiven to some degree within the industry (Hilton et al., 2004). For a legitimate business, intellectual property infringements can vary from using a designer's or creator's name to using a brand's emblem or logo, or patented designs (Wall and Large, 2010). Not only do brands and businesses have to constantly safeguard the integrity of their products, but they also constantly struggle to uphold the brand image (Green and Smith, 2002).

The grey market refers to unauthorised distribution channels through which branded products are sold, in contrast to counterfeiting, which can be defined as selling products that are copied or not genuine (Li et al., 2016). Unlike the black market, where counterfeit or stolen products are sold, the grey market is more complex to combat (Autrey et al., 2014). The grey market has thrived with developments in technology and e-commerce channels that allow new ease of doing business under new trade treaties and policies (Meraviglia, 2018; Wang et al., 2020b). In general, luxury brands are the biggest targets of the grey market and usually lose 5%–10% of their sales because of it (Shannon, 2018). The grey market and its impact are not yet fully established in the literature. However, it is argued to erode brand image and reduce brands' profit margins while injecting inferior substitutes for authorised products into the market (Ahmadi et al., 2015). Figure 11 illustrates the difference between an authentic market, a black market, and a grey market.

The black market and grey market channels collaboratively challenge the credibility of brands. The different markets emphasise how crucial and significant product authenticity is (Bian and Moutinho, 2009). The authenticity of products is commonly only described when being compared to their inauthentic counterparts (Fionda and Moore, 2009). As a result, the value behind authenticity is reserved for high-value products, especially in the luxury fashion industry (Keller et al., 2011). The value of authenticity depends on consumer perception (Napoli et al., 2014).
For a brand, authenticity means incorporating features like the brand's history and heritage (Brown et al., 2003), craftsmanship (Beverland, 2006), nostalgia (Beverland et al., 2008), sincerity (Thompson et al., 2006), quality (Beverland, 2006), and design consistency (Beverland et al., 2008). Product authenticity is associated with brand authenticity and brand value (Turunen, 2018), which makes it a holistic marketing tool to initiate and maintain brand loyalty and attachment among consumers (Choi et al., 2015). This provides brands an edge to fight counterfeit products circulating in black and grey markets (Pham et al., 2018). Therefore, defending product authenticity is crucial, and many brands are using technology as a means to communicate the value of the product to consumers (Franco et al., 2019; Wang et al., 2020b).

TABLE 2 Blockchain technology in the fashion industry.

| Name of the BT platform | Targeted area in supply chain | Brands/labels association |
|---|---|---|
| VeChain by BitSE; Brandzledger | Anti-counterfeiting | BMW China, Baby Ghost, H&M, LVMH, Walmart China, Bayer China |
| Fibercoins; TextileGenesis; Provenance; Chronicled | Transparency and traceability | Lenzing, US Cotton Trust Protocol, H&M, Kering, Arvind Textiles, 17 Chicks, WWF, Textile Exchange, Bestseller, Martine Jarlgaard, Greats, DeBeers |
| Loomia; 1TrueID | Consumer engagement | Innovative and wearable technology: Festo, Analog Devices, Alessandro Gherardi |
| SourceMap | Administration and control | BeautyCounter, Timberland, Vans |
| Everledger and MYMCQ | Marketplace platform | Alexander McQueen, Brilliant Earth |

FIGURE 13 Application of blockchain technology in the fashion supply chain.

Businesses that are targeting sustainable operations to maintain balance among people, the planet, and profit are also exploring the application of technology to implement traceability and transparency throughout their networks (Kumar et al., 2017). However, it is also important to communicate with and educate consumers at the same time. For the same reason, a completely traceable supply chain is required to have transparent communication between the consumer and the industry (Ospital et al., 2022). Traceable supply chains allow sustainable development to be validated while also safeguarding businesses' own interests and profits. Current sustainability and transparency measurement tools, for example, the Higg Index, have not provided a solution to this problem (Gunther, 2016). There is a lack of technological solutions which address the concerns arising from the lack of traceability in the fashion and textile industry.

In summary, a holistic and feasible solution is urgently required for the fashion and textile industry to solve its plethora of supply chain issues. BT is a technological advancement that has gained interest in many industries, including the fashion and textile industry. BT is envisaged as a potential solution to improve the overarching issues of traceability and transparency in the fashion and textile industry (Putasso et al., 2019; Treiblmaier and Tumasjan, 2022).
## 6 Applications of blockchain

Blockchain has become popular because of features and advantages like the avoidance of data tampering and its capacity to facilitate large networks. Current applications are mostly fixated on using this technology to inform consumers about products and their features, rather than utilising BT to provide a transparent supply chain that is not infected by asymmetric information disclosure and the complexity of globalisation (Agrawal et al., 2018). As illustrated in Table 2, BitSE and Babyghost have collaborated (Martén, 2017) to develop VeChain and Brandzledger (Putasso et al., 2019), which are seamless applications of BT as anti-counterfeiting solutions (Kshetri, 2017). Fibercoins is another application of BT, based on a cryptocurrency model, to eliminate legal and financial risks for users (Ahmed and Maccarthy, 2021). TextileGenesis and Fibercoins have collaborated to provide a unified application of BT which discloses a traceable journey of a product from the fibre stage to the end-customer (FiberCoin, 2022; TextileGenesis, 2022). Figure 12 illustrates the potential mapping of BT applications to enhance the consumer experience and their access to the traceability and transparency of products. In the context of traceability, this technology has been implemented by Chronicled and Provenance for Martine Jarlgaard (Putasso et al., 2019). BT applications can also enhance a customer's trust in fashion and lifestyle brands. Leaders in this space include Socios, Loomia, 1TrueID, Alessandro Gherardi, NeuFund, AmaZix, and Timeless Luxury Group (Putasso et al., 2019; Panda et al., 2021). BT applications for supply chain management have been explored by Faizod and SourceMap (Panda et al., 2021). In the marketplace space, Alexander McQueen and Everledger have created a blockchain-enabled platform called MYMCQ. It quickly becomes apparent that an overarching and complete solution is yet to be implemented within the fashion and textile industry.

The fashion and textile industry is yet to explore the vast area of technical information which can help in the sustainable management of supply chains (Wang et al., 2020a). This technology offers the potential to solve some of the root problems in the industry. The fashion and textile industry has a distributed supply chain that is threatened by the infiltration of unauthorised parties into its manufacturing processes (ElMessiry and ElMessiry, 2018). Another advantage of this technology is to limit counterfeits in the market, which cause companies to lose profits and allow potentially hazardous products to circulate in the market. Fashion as an industry is exposed to many other industries which contribute to its supply chain. Therefore, BT can act as an additional security blanket to protect the authenticity of products (Tripathi et al., 2021). BT can also facilitate quality control and checking processes by making them time- and cost-efficient (ElMessiry and ElMessiry, 2018). A potential application of blockchain in the fashion industry's supply chain is illustrated in Figure 13. The implementation strategy of this technology in the fashion and textile industry is still in its infancy in the context of the supply chain. Industries such as civil construction and food-based agribusinesses have successfully moved their supply chains to blockchain platforms to create traceable and transparent operational structures (Hultgren and Pajala, 2018; Sander et al., 2018).
Several other industries exploring the benefits of this technology, alongside cryptocurrency, include food, health, education, events, entertainment, and cybersecurity (Tripathi et al., 2019; Ahad et al., 2020; Tripathi et al., 2020; Panda et al., 2021; Düdder et al., 2022; Thakur, 2022).

## 7 Key challenges in embracing blockchain technology

Despite a promising future, there is still a myriad of challenges for BT to overcome before it is adopted within the fashion industry. This technology is in its nascent stage and requires further investigation to test its feasibility and overcome the various challenges shown in Figure 14. Technological immaturity is one of the key challenges yet to be overcome, followed by its sunk cost (Agrawal et al., 2018; Kouhizadeh et al., 2020). The massive scale of operations involved in the fashion and textile supply chain could contribute significantly to this expense (Khanfar et al., 2021). Decentralisation is a feature of BT, representing the absence of a regulatory body; however, its application could leave the fashion and textile industry vulnerable (Trautman, 2014; Jabbar et al., 2020). A lack of standardisation and structure may result in unnecessary disclosure of sensitive information in the pursuit of increased compatibility of BT (Mistry et al., 2020). In the initial stages, integrating BT within business models may be difficult, as it replaces traditional practices and operations (Cole et al., 2019). As more aspects of the business are incorporated into a blockchain channel, the security of that channel becomes crucial. A weak security system may lead to intellectual property concerns and the loss of valuable information (Anderson, 2018).

Along with all the above-mentioned challenges, the complexity of the fashion and textile industry's supply chain makes its integration into BT challenging. There are examples of BT in fashion's supply chain in specific fields, as mentioned in Section 6. However, due to the inherent vastness of the supply chain, an overarching and singular application for traceable and transparent transactions has not yet been achieved. Additionally, the existing applications of BT have not been universally adopted by the industry.

## 8 Discussion, limitations, and future research directions

This literature review aims to bridge the knowledge gap between BT and the fashion and textile industry to promote the future application of BT to the benefit of both fields. The current review has three main limitations and highlights gaps in the extant literature that could be addressed with further research. First, although this review has been conducted in a well-organised manner, it lacks grey literature, as most of the sources reviewed for this research are either traditional commercial or academic publications in the English language only. Second, this review found limited applications of BT within the fashion and textile industry, therefore limiting the presentation of BT applications in the fashion and textile supply chain. Since BT has tremendous potential to recast the supply chain operations of the fashion and textile industry, this review instead highlights the novelty of BT in the fashion and textile industry. Further research is required, through theoretical and practical lenses, considering BT's application to the fashion and textile industry's complex and ungeneralised supply chains (Agrawal et al., 2018; Agrawal and Pal, 2019).
Third, this review found an abundance of research resources, regarding the keywords mentioned in Section 2, Methodology, in journals from the fields of law, technology, management, and innovation. However, very few resources within the journals of the fashion and textile industry were found. This review highlights that BT applications enabling a traceable supply chain specific to the fashion and textile industry lack empirical evidence and life-cycle-assessment case studies (Ahmed and Maccarthy, 2021). This review recommends further research to address this current gap in the literature. Fashion and textile industry-specific research will also assist with the concerns around the legal protection of creative designs, contracts, and other transactions which are available on any BT platform. Therefore, the application of BT in the fashion and textile industry requires more structure, as well as further research into its alignment with traditional law systems (Anderson, 2018). It is important to explore BT's applicability and to conduct studies based on real-life examples of businesses adopting it as a platform for supply chain operations in the fashion and textile industry (Cole et al., 2019).

This review paper aims to understand the potential of BT in assisting the fashion and textile industry in fighting its daily challenges, based on the existing research. The fashion and textile industry is one of the largest and fastest-growing industries in the world, providing employment opportunities and one of the primary requirements for people: clothing. The varied nature of its supply chains inherently leaves multiple loopholes within their functionality, while the industry also adopts concepts like sustainability, traceability, and transparency, which are subjective. Current research shows a large consensus that the fashion and textile industry's supply chain lacks traceability and transparency. There are established connections between important aspects of sustainability and traceability and transparency. However, there is a deficiency of research that suggests an executable plan to resolve the current concerns, which are creating long-lasting and undesirable effects on the planet. BT has recast many industries and contains the potential to refashion the fashion industry. However, the limited substantial use of BT in the supply chain of the fashion and textile industry and the limited exploration of its features through experimental research have left countless challenges to be resolved.

## Author contributions

AB conducted the systematic literature review and was the major contributor to writing the manuscript. SI and CT continuously reviewed the manuscript throughout the process. All authors read and approved the final manuscript.

## Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

## Publisher's note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

## References

Abeyratne, S. A., and Monfared, R. P. (2016). Blockchain ready manufacturing supply chain using distributed ledger. Int. J. Res. Eng. Technol. 5, 1–10. doi:10.15623/ijret.2016.0509001

Agrawal, T. K., and Pal, R. (2019).
Traceability in textile and clothing supply chains: Classifying implementation factors and information sets via Delphi study. Sustainability 11, 1698. doi:10.3390/su11061698

Agrawal, T. K., Sharma, A., and Kumar, V. (2018). Blockchain-based secured traceability system for textile and clothing supply chain. Singapore: Springer Singapore.

Ahad, M. A., Paiva, S., Tripathi, G., and Feroz, N. (2020). Enabling technologies and sustainable smart cities. Sustain. Cities Soc. 61, 102301. doi:10.1016/j.scs.2020.102301

Ahmadi, R., Iravani, F., and Mamani, H. (2015). Coping with gray markets: The impact of market conditions and product characteristics. Prod. Operations Manag. 24, 762–777. doi:10.1111/poms.12319

Ahmed, W. A. H., and Maccarthy, B. L. (2021). Blockchain-enabled supply chain traceability in the textile and apparel supply chain: A case study of the fiber producer, Lenzing. Sustainability 13, 10496. doi:10.3390/su131910496

Anderson, S. (2018). The missing link between blockchain and copyright: How companies are using new technology to misinform creators and violate federal law. N. C. J. Law Technol. 19, 1.

Antonopoulos, G. A., Hall, A., Large, J., and Shen, A. (2020). Counterfeit goods fraud: An account of its financial management. Eur. J. Crim. Policy Res. 26, 357–378. doi:10.1007/s10610-019-09414-6

Arora, S., and Mittal, S. (2011). Intensifying export performance through planned capacity building: A study of the Indian apparel sector. J. Glob. Fash. Mark. 2, 20–27. doi:10.1080/20932685.2011.10593079

Arrigo, E. (2021). Collaborative consumption in the fashion industry: A systematic literature review and conceptual framework. J. Clean. Prod. 325, 129261. doi:10.1016/j.jclepro.2021.129261

Autrey, R. L., Bova, F., and Soberman, D. A. (2014). Organizational structure and gray markets. Mark. Sci. 33, 849–870. doi:10.1287/mksc.2014.0869

Bauerle, N. (2018). How does blockchain technology work? CoinDesk. Available at: https://www.coindesk.com/information/how-does-blockchain-technology-work (Accessed March 19, 2022).

Bellware, K. (2020). Seven people died when the only escape from a fire at an Indian denim factory was up a ladder. The Washington Post.

Beverland, M. B., Lindgreen, A., and Vink, M. W. (2008). Projecting authenticity through advertising: Consumer judgments of advertisers' claims. J. Advert. 37, 5–15. doi:10.2753/joa0091-3367370101

Beverland, M. (2006). The 'real thing': Branding authenticity in the luxury wine trade. J. Bus. Res. 59, 251–258. doi:10.1016/j.jbusres.2005.04.007

Bhaduri, G., and Ha-Brookshire, J. E. (2011). Do transparent business practices pay? Exploration of transparency and consumer purchase intention. Cloth. Text. Res. J. 29, 135–149. doi:10.1177/0887302x11407910

Bhardwaj, V., and Fairhurst, A. (2010).
Fast fashion: Response to changes in the fashion industry. Int. Rev. retail, distribution consumer Res. 20, 165–173. [doi:10.1080/09593960903498300](https://doi.org/10.1080/09593960903498300) Bian, X., and Moutinho, L. (2009). An investigation of determinants of counterfeit purchase consideration. J. Bus. Res. 62, 368–378. [doi:10.1016/j.jbusres.2008.05.012](https://doi.org/10.1016/j.jbusres.2008.05.012) Bikoff, J. L., Heasley, D. K., Sherman, V., and Stipelman, J. (2015). Fake it 'til we make it: Regulating dangerous counterfeit goods. J. Intellect. Prop. Law Pract. 10, 246–254. [doi:10.1093/jiplp/jpv016](https://doi.org/10.1093/jiplp/jpv016) BOF-MCKINSEY&COMPANY (2020). The state of fashion 2020. The State of Fashion ed. London: McKinsey & Company. BOF-MCKINSEY&COMPANY (2021). The state of fashion 2021. The State of Fashion ed. London: McKinsey & Company. BOF-MCKINSEY&COMPANY (2022). The state of fashion 2022. The State of Fashion ed. London: McKinsey & Company. Brown, S., Kozinets, R. V., and Sherry, J. F., Jr (2003). Teaching old brands new tricks: Retro branding and the revival of brand meaning. J. Mark. 67, 19–33. [doi:10.1509/jmkg.67.3.19.18657](https://doi.org/10.1509/jmkg.67.3.19.18657) Brydges, T., Heinze, L., Retamal, M., and Henninger, C. E. (2021). Platforms and the pandemic: A case study of fashion rental platforms during COVID-19. Geogr. J. 187, 57–63. [doi:10.1111/geoj.12366](https://doi.org/10.1111/geoj.12366) Caldarelli, G., Zardini, A., and Rossignoli, C. (2021). Blockchain adoption in the fashion sustainable supply chain: Pragmatically addressing barriers. J. Organ. Change Manag. 34, 507–524. [doi:10.1108/jocm-09-2020-0299](https://doi.org/10.1108/jocm-09-2020-0299) Carrigan, M., and Attalla, A. (2001). The myth of the ethical consumer–do ethics matter in purchase behaviour? J. consumer Mark. 18, 560–578. [doi:10.1108/07363760110410263](https://doi.org/10.1108/07363760110410263) Chaudhry, P. E., and Stumpf, S. A. (2011). Consumer complicity with counterfeit products. J. Consumer Mark. 28, 139–151. [doi:10.1108/07363761111115980](https://doi.org/10.1108/07363761111115980) Chen, Y., Chung, S.-H., and Guo, S. (2020). Franchising contracts in fashion supply chain operations: Models, practices, and real case study. Ann. Operations Res. 291, 83–128. [doi:10.1007/s10479-018-2998-5](https://doi.org/10.1007/s10479-018-2998-5) Cho, C. H., Laine, M., Roberts, R. W., and Rodrigue, M. (2015). Organized hypocrisy, organizational façades, and sustainability reporting. Account. Organ. Soc. 40, 78–94. [doi:10.1016/j.aos.2014.12.003](https://doi.org/10.1016/j.aos.2014.12.003) Choi, H., Ko, E., Kim, E. Y., and Mattila, P. (2015). The role of fashion brand authenticity in product management: A holistic marketing approach. J. Prod. Innovation Manag. 32, 233–242. [doi:10.1111/jpim.12175](https://doi.org/10.1111/jpim.12175) Choi, K.-H. (2020). A systematic review exploring the current state of fashion criticism-A focus on the fashion designer exhibition reviews of fashion theory. J. Korean Soc. Cloth. Text. 44, 273–294. [doi:10.5850/jksct.2020.44.2.273](https://doi.org/10.5850/jksct.2020.44.2.273) Choi, T.-M. (2019). 
Blockchain-technology-supported platforms for diamond authentication and certification in luxury supply chains. Transp. Res. Part E Logist. [Transp. Rev. 128, 17–29. doi:10.1016/j.tre.2019.05.011](https://doi.org/10.1016/j.tre.2019.05.011) Ciasullo, M. V., Cardinali, S., and Cosimato, S. (2017). A strenuous path for sustainable supply chains in the footwear industry: A business strategy issue. [J. Glob. Fash. Mark. 8, 143–162. doi:10.1080/20932685.2017.1279066](https://doi.org/10.1080/20932685.2017.1279066) Cleanclothescampaign (2022). Deaths and injuries in the global garment industry. Netherlands: Clean Clothes Campaign. Cole, R., Stevenson, M., and Aitken, J. (2019). Blockchain technology: Implications for [operations and supply chain management. Supply chain Manag. 24, 469–483. doi:10.](https://doi.org/10.1108/scm-09-2018-0309) [1108/scm-09-2018-0309](https://doi.org/10.1108/scm-09-2018-0309) COMMONOBJECTIVE (2018a). Child labour in the fashion industry. [Online]. [Available: https://www.commonobjective.co/article/child-labour-in-the-fashion-](https://www.commonobjective.co/article/child-labour-in-the-fashion-industry) [industry (Accessed November 12, 2021).](https://www.commonobjective.co/article/child-labour-in-the-fashion-industry) Crosby, M., Pattanayak, P., Verma, S., and Kalyanaraman, V. (2016). Blockchain technology: Beyond bitcoin. Appl. Innov. 2, 71. Dahl, R. (2010). Green washing: Do you know what you’re buying? Durham, United States: National Institute of Environmental Health Sciences. De Aguiar Hugo, A., De, N. A. D. A. E., and Da Silva Lima, R. (2021). Can fashion Be circular? A literature review on circular economy barriers, drivers, and practices in the fashion [industry’s productive chain. Sustainability 13, 1224613–1231050. doi:10.3390/su132112246](https://doi.org/10.3390/su132112246) Delener, N. (2000). International counterfeit marketing: Success without risk. Rev. Bus. 21, 16. Diouf, D., and Boiral, O. (2017). The quality of sustainability reports and impression management: A stakeholder perspective. Account. Auditing, Account. 30, 643–667. [doi:10.1108/aaaj-04-2015-2044](https://doi.org/10.1108/aaaj-04-2015-2044) Drescher, D. (2017). Blockchain basics: A non-technical introduction in 25 steps. [Apress, Frankfurt-am-Mein. doi:10.1007/978-1-4842-2604-9](https://doi.org/10.1007/978-1-4842-2604-9) Düdder, B., Bager, S. L., Henglein, F., Herbert, J. M., and Wu, H. (2022). Event-based [supply chain network modeling: Blockchain for good coffee. Front. Blockchain 5. doi:10.](https://doi.org/10.3389/fbloc.2022.846783) [3389/fbloc.2022.846783](https://doi.org/10.3389/fbloc.2022.846783) Dugato, M., Favarin, S., and Camerini, D. (2015). Estimating the counterfeit markets in Europe. Milan: Transcrime Research in Brief. Elmessiry, M., and Elmessiry, A. (2018). Blockchain framework for textile supply chain management: Improving transparency, traceability, and quality. Cham: Springer International Publishing. Entwistle, J. (2000). The fashioned body: Fashion, dress, and modern social theory. Malden, MA: Polity Press. Erdogan, B. Z. (1999). Celebrity endorsement: A literature review. J. Mark. Manag. 15, [291–314. doi:10.1362/026725799784870379](https://doi.org/10.1362/026725799784870379) Farahani, R. Z., Rezapour, S., Drezner, T., and Fallah, S. (2014). Competitive supply chain network design: An overview of classifications, models, solution techniques and [applications. Omega 45, 92–118. 
doi:10.1016/j.omega.2013.08.006](https://doi.org/10.1016/j.omega.2013.08.006) ----- FIBERCOIN (2022). Welcome to FiberCoin official website. [Online]. Available: [https://fibercoin.tk (Accessed January 2, 2022).](https://fibercoin.tk) Fionda, A. M., and Moore, C. M. (2009). The anatomy of the luxury fashion brand. [J. brand Manag. 16, 347–363. doi:10.1057/bm.2008.45](https://doi.org/10.1057/bm.2008.45) Franco, J. C., Hussain, D., and Mccoll, R. (2019). Luxury fashion and sustainability: [Looking good together. J. Bus. Strategy 41, 55–61. doi:10.1108/jbs-05-2019-0089](https://doi.org/10.1108/jbs-05-2019-0089) Garcia-Torres, S., Albareda, L., Rey-Garcia, M., and Seuring, S. (2019). Traceability for sustainability – literature review and conceptual framework. Supply chain Manag. [24, 85–106. doi:10.1108/scm-04-2018-0152](https://doi.org/10.1108/scm-04-2018-0152) GFA (2017). “Pulse of the fashion industry 2017,” in Pulse Report of the Fashion industry. copenhagen: GFA, bcg, sac. Editor G. F. AGENDA (København, Denmark: Global Fashion Agenda). Green, R. T., and Smith, T. (2002). Executive insights: Countering brand [counterfeiters. J. Int. Mark. 10, 89–106. doi:10.1509/jimk.10.4.89.19551](https://doi.org/10.1509/jimk.10.4.89.19551) Gregson, N., and Crang, M. (2017). Illicit economies: Customary illegality, moral [economies and circulation. Trans. Inst. Br. Geogr. 42, 206–219. doi:10.1111/tran.12158](https://doi.org/10.1111/tran.12158) [Griplas, L. (2021). Fibre: Unravelling the seams. [Online]. Available: https://www.](https://www.woolmark.com/fibre/unravelling-the-seams/) [woolmark.com/fibre/unravelling-the-seams/ (Accessed January 17, 2021).](https://www.woolmark.com/fibre/unravelling-the-seams/) Gunther, M. (2016). Despite the Sustainable Apparel Coalition, there’s a lot you don’t know about that T-shirt. Kings Place London: The Gaurdian. Hamelin, N., Nwankwo, S., and El Hadouchi, R. (2013). ’Faking brands’: Consumer [responses to counterfeiting. J. consumer Behav. 12, 159–170. doi:10.1002/cb.1406](https://doi.org/10.1002/cb.1406) Hardy, J. (2011). Estimating the global economic and social impacts of counterfeiting and piracy. Paris, France: International Chamber of Commerce. Heiskanen, A. (2017). The technology of trust: How the Internet of Things and blockchain could usher in a new era of construction productivity. Constr. Res. [Innovation 8, 66–70. doi:10.1080/20450249.2017.1337349](https://doi.org/10.1080/20450249.2017.1337349) Hilton, B., Choi, C. J., and Chen, S. (2004). The ethics of counterfeiting in the fashion [industry: Quality, credence and profit issues. J. Bus. ethics 55, 343–352. doi:10.1007/](https://doi.org/10.1007/s10551-004-0989-8) [s10551-004-0989-8](https://doi.org/10.1007/s10551-004-0989-8) Hopkins, D. M., Kontnik, L. T., and Turnage, M. T. (2003). Counterfeiting exposed: Protecting your brand and customers. New York, United States: J. Wiley & Sons. Hultgren, M., and Pajala, F. 2018. Blockchain technology in construction industry: Transparency and traceability in supply chain. Independent thesis Advanced level (degree of Master (Two Years)), NCC, ICN (2015). Cotton’s forgotten children. India Committee of the Netherlands. Igwe, P. A., and Kanyembo, F. (2019). “The cage around internationalisation of smes and the role of government,” in International entrepreneurship in emerging markets: Nature, drivers, barriers and determinants (Bradford, United Kingdom: Emerald Group Publishing). Ikumapayi, O., Oyinbo, S., Akinlabi, E., and Madushele, N. (2020). 
Overview of recent advancement in globalization and outsourcing initiatives in manufacturing systems. [Mater. Today Proc. 26, 1532–1539. doi:10.1016/j.matpr.2020.02.315](https://doi.org/10.1016/j.matpr.2020.02.315) Islam, S. (2021). “Waste management strategies in fashion and textiles industry: Challenges are in governance, materials culture and design-centric,” in Waste management in the fashion and textile industries (Amsterdam, Netherlands: Elsevier). ISO, I. (1994). 8402: 1994 quality management and quality assurance–vocabulary. Geneva, Switzerland: ISO. Jabbar, S., Lloyd, H., Hammoudeh, M., Adebisi, B., and Raza, U. (2020). Blockchainenabled supply chain: Analysis, challenges, and future directions. Multimed. Syst. 27, [787–806. doi:10.1007/s00530-020-00687-0](https://doi.org/10.1007/s00530-020-00687-0) James, A., and Montgomery, B. (2017). Engaging the fashion consumer in a [transparent business model. Int. J. Fash. Des. Technol. Educ. 10, 287–299. doi:10.](https://doi.org/10.1080/17543266.2017.1378730) [1080/17543266.2017.1378730](https://doi.org/10.1080/17543266.2017.1378730) Jestratijevic, I., Nancy, A. R., and James, U. (2020a). Transparency of sustainability disclosures among luxury and mass-market fashion brands. J. Glob. Fash. Mark. 11, [99–116. doi:10.1080/20932685.2019.1708774](https://doi.org/10.1080/20932685.2019.1708774) Jestratijevic, I., Rudd, N. A., and Uanhoro, J. (2020b). Transparency of sustainability disclosures among luxury and mass-market fashion brands. J. Glob. Fash. Mark. 11, [99–116. doi:10.1080/20932685.2019.1708774](https://doi.org/10.1080/20932685.2019.1708774) Jestratijevic, I., and Rudd, N. (2018). “Making fashion transparent: What consumers know about the brands they admire,” in Fashion business cases. Online edition (London: Bloomsbury Academic). Jestratijevic, I., Uanhoro, J., and Rudd, N. A. (2018). “Policies versus Practices: Transparency of supply chain disclosures among luxury and mass market fashion brands,” in International Textile and Apparel Association Annual Conference Proceedings, Ames, Iowa, Jan 1st, 2018. Iowa State University Digital Press. Jørgensen, U., Olsen, S. I., Jørgensen, M. S., Lauridsen, E. H., Hauschild, M. Z., Hoffmann, L., et al. (2006). Waste prevention, waste policy and innovation. Lyngby, Denmark: Department of Manufacturing Engineering and Management, Technical University. Kang, J., and Hustvedt, G. (2014). Building trust between consumers and corporations: The role of consumer perceptions of transparency and social [responsibility. J. Bus. Ethics 125, 253–265. doi:10.1007/s10551-013-1916-7](https://doi.org/10.1007/s10551-013-1916-7) Karaosman, H., Brun, A., and Morales-Alonso, G. (2017). “Vogue or vague: Sustainability performance appraisal in luxury fashion supply chains,” in Sustainable management of luxury (Berlin, Germany: Springer). Keller, K. L., Parameswaran, M., and Jacob, I. (2011). Strategic brand management: Building, measuring, and managing brand equity. Tharamani, Chennai: Pearson Education India. Khan, K. S., Kunz, R., Kleijnen, J., and Antes, G. (2003). Five steps to conducting a [systematic review. J. R. Soc. Med. 96, 118–121. doi:10.1258/jrsm.96.3.118](https://doi.org/10.1258/jrsm.96.3.118) Khanfar, A. A. A., Iranmanesh, M., Ghobakhloo, M., Senali, M. G., and Fathi, M. (2021). Applications of blockchain technology in sustainable manufacturing and supply [chain management: A systematic review. Sustain. (Basel, Switz. 13, 7870. 
doi:10.3390/](https://doi.org/10.3390/su13147870) [su13147870](https://doi.org/10.3390/su13147870) Kimani, D., Adams, K., Attah-Boakye, R., Ullah, S., Frecknall-Hughes, J., and Kim, J. (2020). Blockchain, business and the fourth industrial revolution: Whence, whither, [wherefore and how? Technol. Forecast. Soc. Change 161, 120254. doi:10.1016/j.techfore.](https://doi.org/10.1016/j.techfore.2020.120254) [2020.120254](https://doi.org/10.1016/j.techfore.2020.120254) Kouhizadeh, M., Zhu, Q., and Sarkis, J. (2020). Blockchain and the circular economy: Potential tensions and critical reflections from practice. Prod. Plan. Control 31, 950–966. [doi:10.1080/09537287.2019.1695925](https://doi.org/10.1080/09537287.2019.1695925) Kshetri, N. (2017). Can blockchain strengthen the internet of things? IT Prof. 19, [68–72. doi:10.1109/mitp.2017.3051335](https://doi.org/10.1109/mitp.2017.3051335) Kumar, V., Agrawal, T. K., Wang, L., and Chen, Y. (2017). Contribution of traceability [towards attaining sustainability in the textile sector. Text. Cloth. Sustain. 3, 5. doi:10.](https://doi.org/10.1186/s40689-017-0027-8) [1186/s40689-017-0027-8](https://doi.org/10.1186/s40689-017-0027-8) Kurpierz, J. R., and Smith, K. (2020). The greenwashing triangle: Adapting tools from fraud to improve CSR reporting. Sustain. Account. Manag. Policy J. 11, 1075–1093. [doi:10.1108/sampj-10-2018-0272](https://doi.org/10.1108/sampj-10-2018-0272) Kushwaha, S. S., and Joshi, S. (2021). “An overview of blockchain-based smart contract,” in Computer networks and inventive communication technologies (Berlin, Germany: Springer), 899–906. La Londe, B. J., and Masters, J. M. (1994). Emerging logistics strategies: Blueprints for [the next century. Int. J. Phys. distribution Logist. Manag. 24, 35–47. doi:10.1108/](https://doi.org/10.1108/09600039410070975) [09600039410070975](https://doi.org/10.1108/09600039410070975) Large, J. (2015). ‘Get real, don’t buy fakes’: Fashion fakes and flawed policy–the problem with taking a consumer-responsibility approach to reducing the [‘problem’of counterfeiting. Criminol. Crim. Justice 15, 169–185. doi:10.1177/](https://doi.org/10.1177/1748895814538039) [1748895814538039](https://doi.org/10.1177/1748895814538039) Lee, M. S., Seifert, M., and Cherrier, H. (2017). Governing corporate social responsibility in the apparel industry after rana plaza. Berlin, Germany: Springer.Anti-consumption and governance in the global fashion industry: Transparency is key Lemieux, V. L. (2016). Trusting records: Is blockchain technology the answer? Rec. [Manag. J. 26, 110–139. doi:10.1108/rmj-12-2015-0042](https://doi.org/10.1108/rmj-12-2015-0042) Li, H., Zhu, S. X., Cui, N., and Li, J. (2016). Analysis of gray markets in [differentiated duopoly. Int. J. Prod. Res. 54, 4008–4027. doi:10.1080/00207543.](https://doi.org/10.1080/00207543.2016.1170906) [2016.1170906](https://doi.org/10.1080/00207543.2016.1170906) Li, L. (2013). Technology designed to combat fakes in the global supply chain. Bus. [Horizons 56, 167–177. doi:10.1016/j.bushor.2012.11.010](https://doi.org/10.1016/j.bushor.2012.11.010) Major, L. S., and Steele, V. (2019). Fashion industry. Illinois, United States: Encyclopædia Britannica, Inc. Marshall, D., Mccarthy, L., Mcgrath, P., and Harrigan, F. (2016). What’s your strategy for supply chain disclosure? MIT Sloan Manag. Rev. 57, 36–45. Martén, M. (2017). Digital rights management: Blockchain and digital music content management. Northlake Way: Semantic Scholar. Masson, R., Iosif, L., Mackerron, G., and Fernie, J. 
(2007). Managing complexity in [agile global fashion industry supply chains. Int. J. Logist. Manag. 18, 238–254. doi:10.](https://doi.org/10.1108/09574090710816959) [1108/09574090710816959](https://doi.org/10.1108/09574090710816959) Mcfall-Johnsen, M. (2020). These facts show how sustainable the fashion industry is World [Economic Forum. Available: https://www.weforum.org/agenda/2020/01/fashion-industry-](https://www.weforum.org/agenda/2020/01/fashion-industry-carbon-unsustainable-environment-pollution/) [carbon-unsustainable-environment-pollution/ (Accessed January 15, 2021).](https://www.weforum.org/agenda/2020/01/fashion-industry-carbon-unsustainable-environment-pollution/) Mentzer, J. T., Dewitt, W., Keebler, J. S., Min, S., Nix, N. W., Smith, C. D., et al. (2001). [Defining supply chain management. J. Bus. Logist. 22, 1–25. doi:10.1002/j.2158-1592.](https://doi.org/10.1002/j.2158-1592.2001.tb00001.x) [2001.tb00001.x](https://doi.org/10.1002/j.2158-1592.2001.tb00001.x) Meraviglia, L. (2018). Technology and counterfeiting in the fashion industry: Friends [or foes? Bus. horizons 61, 467–475. doi:10.1016/j.bushor.2018.01.013](https://doi.org/10.1016/j.bushor.2018.01.013) Mistry, I., Tanwar, S., Tyagi, S., and Kumar, N. (2020). Blockchain for 5G-enabled IoT for industrial automation: A systematic review, solutions, and challenges. Mech. Syst. [Signal Process. 135, 106382. doi:10.1016/j.ymssp.2019.106382](https://doi.org/10.1016/j.ymssp.2019.106382) Moher, D., Liberati, A., Tetzlaff, J., Altman, D. G., and Group*, P. (2009). Preferred reporting items for systematic reviews and meta-analyses: The PRISMA [statement. Ann. Intern. Med. 151, 264–269. doi:10.7326/0003-4819-151-4-](https://doi.org/10.7326/0003-4819-151-4-200908180-00135) [200908180-00135](https://doi.org/10.7326/0003-4819-151-4-200908180-00135) Montagna, G. (2015). Multi-dimensional consumers: Fashion and human factors. [Procedia Manuf. 3, 6550–6556. doi:10.1016/j.promfg.2015.07.954](https://doi.org/10.1016/j.promfg.2015.07.954) ----- Moore, K. (2019). report shows customers want responsible fashion, but don’t want to [pay for it. What should brands do? [Online]. Available: https://www.forbes.com/sites/](https://www.forbes.com/sites/kaleighmoore/2019/06/05/report-shows-customers-want-responsible-fashion-but-dont-want-to-pay-for-it/?sh=50a3344b1782) [kaleighmoore/2019/06/05/report-shows-customers-want-responsible-fashion-but-](https://www.forbes.com/sites/kaleighmoore/2019/06/05/report-shows-customers-want-responsible-fashion-but-dont-want-to-pay-for-it/?sh=50a3344b1782) [dont-want-to-pay-for-it/?sh=50a3344b1782 (Accessed January 10, 2021).](https://www.forbes.com/sites/kaleighmoore/2019/06/05/report-shows-customers-want-responsible-fashion-but-dont-want-to-pay-for-it/?sh=50a3344b1782) Nakamoto, S., and Bitcoin, A. (2008). A peer-to-peer electronic cash system. [Bitcoin.–URL. Available: https://bitcoin.org/bitcoin.pdf.4](https://bitcoin.org/bitcoin.pdf) Napoli, J., Dickinson, S. J., Beverland, M. B., and Farrelly, F. (2014). Measuring [consumer-based brand authenticity. J. Bus. Res. 67, 1090–1098. doi:10.1016/j.jbusres.](https://doi.org/10.1016/j.jbusres.2013.06.001) [2013.06.001](https://doi.org/10.1016/j.jbusres.2013.06.001) [Nguyen, T. (2020). Fast fashion, explained [online]. Vox. 
Available: https://www.vox.](https://www.vox.com/the-goods/2020/2/3/21080364/fast-fashion-h-and-m-zara) [com/the-goods/2020/2/3/21080364/fast-fashion-h-and-m-zara (Accessed November](https://www.vox.com/the-goods/2020/2/3/21080364/fast-fashion-h-and-m-zara) 10, 2021). Niinimäki, K., Peters, G., Dahlbo, H., Perry, P., Rissanen, T., and Gwilt, A. (2020). The [environmental price of fast fashion. Nat. Rev. Earth Environ. 1, 189–200. doi:10.1038/](https://doi.org/10.1038/s43017-020-0039-9) [s43017-020-0039-9](https://doi.org/10.1038/s43017-020-0039-9) Nimkar, U. (2018). Sustainable chemistry: A solution to the textile industry in a developing [world. Curr. Opin. Green Sustain. Chem. 9, 13–17. doi:10.1016/j.cogsc.2017.11.002](https://doi.org/10.1016/j.cogsc.2017.11.002) COMMONOBJECTIVE (2018b). in Death, injury and health in the fashion industry. Editor C. OBJECTIVE (London, UK: Common Objective). Co Data. Olsen, P., and Borit, M. (2013). How to define traceability. Trends food Sci. Technol. [29, 142–150. doi:10.1016/j.tifs.2012.10.003](https://doi.org/10.1016/j.tifs.2012.10.003) Omotoso, M. (2018). Who made my clothes” movement-how it all began [online]. [Available: https://fashioninsiders.co/features/inspiration/who-made-my-clothes-](https://fashioninsiders.co/features/inspiration/who-made-my-clothes-movement/) [movement/ (Accessed January 9, 2022).](https://fashioninsiders.co/features/inspiration/who-made-my-clothes-movement/) Ospital, P., Masson, D. H., Beler, C., and Legardeur, J. (2022). “Toward total traceability and full transparency communication in textile industry supply chain,” in INCOSE International Symposium (Wiley Online Library), 1–7. Panda, S. K., Jena, A. K., Swain, S. K., and Satapathy, S. C. (2021). Blockchain technology: Applications and challenges. Cham: Springer International Publishing. Papú Carrone, N. (2020). The UN sustainable development Goals for the textile and fashion industry. Springer.Traceability and transparency: A way forward for SDG 12 in the textile and clothing industry Pham, M., Valette-Florence, P., and Vigneron, F. (2018). Luxury brand desirability and fashion equity: The joint moderating effect on consumers’ commitment toward [luxury brands. Psychol. Mark. 35, 902–912. doi:10.1002/mar.21143](https://doi.org/10.1002/mar.21143) Phillips, J. W. (2011). Secrecy and transparency: An interview with samuel weber. [Theory, Cult. Soc. 28, 158–172. doi:10.1177/0263276411428339](https://doi.org/10.1177/0263276411428339) Pookulangara, S., and Shephard, A. (2013). Slow fashion movement: Understanding consumer perceptions—an exploratory study. J. Retail. consumer Serv. 20, 200–206. [doi:10.1016/j.jretconser.2012.12.002](https://doi.org/10.1016/j.jretconser.2012.12.002) Putasso, E., Ferro, E., and Osella, M. (2019). Blockchain in the fashion industry: Opportunities and challenges. Belgium: Textile and Clothing Business Labs. Raustiala, K., and Sprigman, C. (2006). The piracy and paradox: Innovation and intellectual property in fashion design. Va. L. Rev. 92, 1687. Ray, S., and Das, S. 2009. Corporate reporting framework (CRF): Benchmarking tata motors against AB volvo and exploring future challenges. Decision (0304-0941), 36. Reuter, P. (2013). Are estimates of the volume of money laundering either feasible or useful? Research handbook on money laundering. Cheltenham: Edward Elgar Publishing. Saicheua, V., Cooper, T., and Knox, A. 2011. Public understanding towards sustainable clothing and the supply chain. Sander, F., Semeijn, J., and Mahr, D. (2018). 
The acceptance of blockchain technology in [meat traceability and transparency. Br. food J. 120, 2066–2079. doi:10.1108/bfj-07-2017-0365](https://doi.org/10.1108/bfj-07-2017-0365) Saxena, S. B. (2020). Labor, global supply chains and the garment industry in south asia: Bangladesh after Rana plaza. Milton Park, Abingdon, Oxon ; New York, NY: Routledge, Taylor & Francis Group. SDG (2015). Sustainable Development Goals [Online]. Department of Economic and [Social Affairs. Available: https://sdgs.un.org/goals/goal17 (Accessed October 7, 2021).](https://sdgs.un.org/goals/goal17) Shannon, S. (2018). Fashion’s dirty secret: Millions in grey market sales [online]. [Business of fashion. Available: https://www.businessoffashion.com/articles/luxury/](https://www.businessoffashion.com/articles/luxury/fashion-dirty-little-secret-grey-market-luxury-paralleling/) [fashion-dirty-little-secret-grey-market-luxury-paralleling/ (Accessed January 3, 2022).](https://www.businessoffashion.com/articles/luxury/fashion-dirty-little-secret-grey-market-luxury-paralleling/) Smith, P. (2022). Global apparel market - statistics & facts. Statista: Statista. Stevens, G. C., and Johnson, M. (2016). Integrating the supply chain ... 25 years on 25 years [on. Int. J. Phys. distribution Logist. Manag. 46, 19–42. doi:10.1108/ijpdlm-07-2015-0175](https://doi.org/10.1108/ijpdlm-07-2015-0175) Strauss, K. (2012). Coerced, forced and unfree labour: Geographies of exploitation in [contemporary labour markets. Geogr. Compass 6, 137–148. doi:10.1111/j.1749-8198.](https://doi.org/10.1111/j.1749-8198.2011.00474.x) [2011.00474.x](https://doi.org/10.1111/j.1749-8198.2011.00474.x) Tamara, O., Emily, B., Amelia, R., Jenny, F., Kitty, T., and Ritchie Ares, D. 2014. Wearable art: Final fashions: Fashion is a form of self-expression in life but few consider how they’d like to be dressed when they die. [TEXTILEGENESIS (2022). Textile genesis [online]. Textile genesis. Available: https://](https://textilegenesis.com) [textilegenesis.com (Accessed January 1, 2022).](https://textilegenesis.com) Thakur, A. (2022). A comprehensive study of the trends and analysis of distributed ledger technology and blockchain technology in the healthcare industry. Front. [Blockchain 5, 844834. doi:10.3389/fbloc.2022.844834](https://doi.org/10.3389/fbloc.2022.844834) Thompson, C. J., Rindfleisch, A., and Arsel, Z. (2006). Emotional branding and the [strategic value of the doppelgänger brand image. J. Mark. 70, 50–64. doi:10.1509/jmkg.](https://doi.org/10.1509/jmkg.2006.70.1.50) [2006.70.1.50](https://doi.org/10.1509/jmkg.2006.70.1.50) Trautman, L. J. (2014). Virtual currencies; bitcoin & what now after liberty reserve, silk road, and Mt. Gox? Richmond J. Law Technol., 20. Treiblmaier, H., and Tumasjan, A. (2022). Editorial: Economic and business [implications of blockchain technology. Front. Blockchain 5, 857247. doi:10.3389/](https://doi.org/10.3389/fbloc.2022.857247) [fbloc.2022.857247](https://doi.org/10.3389/fbloc.2022.857247) Tripathi, G., Ahad, M. A., and Paiva, S. (2020). S2HS-A blockchain based approach for smart healthcare system. HealthcareElsevier, 100391. Tripathi, G., Ahad, M. A., and Sathiyanarayanan, M. (2019). “The role of blockchain in internet of vehicles (IoV): Issues, challenges and opportunities,” in 2019 International Conference on contemporary Computing and Informatics (IC3I) (IEEE), 26–31. Tripathi, G., Tripathi Nautiyal, V., Ahad, M. A., and Feroz, N. (2021). Blockchain technology and fashion industry-opportunities and challenges. 
Intelligent Syst. [Reference Library,Blockchain Technol. Appl. Challenges, 201–220. doi:10.1007/978-3-](https://doi.org/10.1007/978-3-030-69395-4_12) [030-69395-4_12](https://doi.org/10.1007/978-3-030-69395-4_12) Tsay, A. A., Nahmias, S., and Agrawal, N. (1999). Modeling supply chain contracts: A review. Int. Ser. Operations Res. Manag. Sci. Quantitative models supply chain Manag., [299–336. doi:10.1007/978-1-4615-4949-9_10](https://doi.org/10.1007/978-1-4615-4949-9_10) Turunen, L. L. M. (2018). Perceived authenticity. Interpretations of luxury. Springer. Tyndall, G. R. (1998). Supercharging supply chains: New ways to increase value through global operational excellence. New York: Wiley. UNICEF (2020). Children’s rights in the garment and footwear supply chain [Online]. [UNICEF. Available: https://www.unicef.org/reports/childrens-rights-in-garment-and-](https://www.unicef.org/reports/childrens-rights-in-garment-and-footwear-supply-chain-2020) [footwear-supply-chain-2020 (Accessed October 12, 2021).](https://www.unicef.org/reports/childrens-rights-in-garment-and-footwear-supply-chain-2020) UNITEDNATIONS (2015). Transforming our world: The 2030 Agenda for sustainable [development [online]. New York: United Nations. Available: https://](https://sustainabledevelopment.un.org/post2015/transformingourworld) [sustainabledevelopment.un.org/post2015/transformingourworld (Accessed January 9,](https://sustainabledevelopment.un.org/post2015/transformingourworld) 2022). UZBEKFORUM (2021). A turning point in Uzbekistan’s cotton harvest. Berlin: Uzbek Forum. Van Duyne, P. C., Maljević, A., Antonopoulos, G. A., Harvey, J., and Von Lampe, K. (2015). The relativity of wrongdoing: Corruption, organised crime, fraud and money laundering in perspective. Breda: WLP (Wolf Legal Publishers. Wall, D. S., and Large, J. (2010). Jailhouse frocks: Locating the public interest in [policing counterfeit luxury fashion goods. Br. J. Criminol. 50, 1094–1116. doi:10.1093/](https://doi.org/10.1093/bjc/azq048) [bjc/azq048](https://doi.org/10.1093/bjc/azq048) Wang, B., Luo, W., Zhang, A., Tian, Z., and Li, Z. (2020a). Blockchain-enabled circular supply chain management: A system architecture for fast fashion. Comput. [Industry 123, 103324. doi:10.1016/j.compind.2020.103324](https://doi.org/10.1016/j.compind.2020.103324) Wang, Y., Lin, J., and Choi, T.-M. (2020b). Gray market and counterfeiting in supply chains: A review of the operations literature and implications to luxury industries. [Transp. Res. Part E, Logist. Transp. Rev. 133, 101823. doi:10.1016/j.tre.2019.101823](https://doi.org/10.1016/j.tre.2019.101823) Wehmeier, S. (2018). The international encyclopedia of strategic communication. Wiley, 1–10.Transparency Wilcox, K., Kim, H. M., and Sen, S. (2009). Why do consumers buy counterfeit luxury [brands? J. Mark. Res. 46, 247–259. doi:10.1509/jmkr.46.2.247](https://doi.org/10.1509/jmkr.46.2.247) Williams, C. C. (2005). Trust diffusion: The effect of interpersonal trust on structure, [function, and organizational transparency. Bus. Soc. 44, 357–368. doi:10.1177/](https://doi.org/10.1177/0007650305275299) [0007650305275299](https://doi.org/10.1177/0007650305275299) Yang, S., Song, Y., and Tong, S. (2017). Sustainable retailing in the fashion industry: A [systematic literature review. Sustainability 9, 1266. doi:10.3390/su9071266](https://doi.org/10.3390/su9071266) -----
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3389/fbloc.2023.1044723?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3389/fbloc.2023.1044723, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.frontiersin.org/articles/10.3389/fbloc.2023.1044723/pdf" }
2023
[ "JournalArticle", "Review" ]
true
2023-02-20T00:00:00
[ { "paperId": "cdf3ed711c8118e14bdfe26e2c379d9158155556", "title": "The Fashioned Body: Fashion, Dress and Modern Social Theory, Joanne Entwistle (2023), 3rd ed." }, { "paperId": "307cb0568fe08cd831c82d13224d9a697d77a2ce", "title": "Event-Based Supply Chain Network Modeling: Blockchain for Good Coffee" }, { "paperId": "b4e55b6a86b802da5f9d6189465fae6e7af919bf", "title": "Editorial: Economic and Business Implications of Blockchain Technology" }, { "paperId": "508d4fac9865afa67c03bd659ff88638c394713a", "title": "A Comprehensive Study of the Trends and Analysis of Distributed Ledger Technology and Blockchain Technology in the Healthcare Industry" }, { "paperId": "bde6d3ed07c316f2a35a231c5cef827fddccf8d6", "title": "Toward total traceability and full transparency communication in textile industry supply chain" }, { "paperId": "fc89751113411b3f9be960d1ebda03bb5865da22", "title": "Can Fashion Be Circular? A Literature Review on Circular Economy Barriers, Drivers, and Practices in the Fashion Industry’s Productive Chain" }, { "paperId": "bf5884adb2db5c2cc693423f82c0ce5b17adcd0e", "title": "Collaborative consumption in the fashion industry: A systematic literature review and conceptual framework" }, { "paperId": "2ce40b867bbfb35c74ce8115b94f227e9380a3c3", "title": "Blockchain-Enabled Supply Chain Traceability in the Textile and Apparel Supply Chain: A Case Study of the Fiber Producer, Lenzing" }, { "paperId": "9787abbfc6ee3aa67cd9f801364229058ffe798c", "title": "Applications of Blockchain Technology in Sustainable Manufacturing and Supply Chain Management: A Systematic Review" }, { "paperId": "70b86f91b9751b4cb15d287848a30630d2133044", "title": "Blockchain adoption in the fashion sustainable supply chain: Pragmatically addressing barriers" }, { "paperId": "57cd30bf91b84a9f5ca5e7146c573fdef2fdf9fe", "title": "Blockchain, business and the fourth industrial revolution: Whence, whither, wherefore and how?" 
}, { "paperId": "92b575440db1d61436ee12190dfa0224c95f0f54", "title": "Blockchain-enabled circular supply chain management: A system architecture for fast fashion" }, { "paperId": "8fe88e05eccf0d96ab31352c92883ed49cc24229", "title": "Blockchain-enabled supply chain: analysis, challenges, and future directions" }, { "paperId": "6f52ab4f9ca1e7a85ed7c80a9077a8a7f6c99006", "title": "Platforms and the pandemic: A case study of fashion rental platforms during COVID‐19" }, { "paperId": "04b58585d0d5d707c3b3d9afd996a884ef387fde", "title": "Enabling technologies and sustainable smart cities" }, { "paperId": "a9a4bbc1843c2a154b6c6af995e8168454436b42", "title": "Blockchain and the circular economy: potential tensions and critical reflections from practice" }, { "paperId": "7fa5f147489b37b7762594ce36760f27ceb6f8a5", "title": "A Systematic Review Exploring the Current State of Fashion Criticism -A Focus on the Fashion Designer Exhibition Reviews of Fashion Theory-" }, { "paperId": "71b2b94f826d0d8ccc69dda8d3eeebeb98afc7ba", "title": "The greenwashing triangle: adapting tools from fraud to improve CSR reporting" }, { "paperId": "78f58ede8254b81e4fa18511614a22a9e7a0b94a", "title": "The environmental price of fast fashion" }, { "paperId": "ec480dfe2e70ed634393f83d9e9c60c055cdba43", "title": "Transparency of sustainability disclosures among luxury and mass-market fashion brands" }, { "paperId": "ef8e9fcadab13b5d20e03d9b56d491da754b1953", "title": "Blockchain for 5G-enabled IoT for industrial automation: A systematic review, solutions, and challenges" }, { "paperId": "3d04777aa209f0eedd9828727720207998dc1853", "title": "The Role of Blockchain in Internet of Vehicles (IoV): Issues, Challenges and Opportunities" }, { "paperId": "4023304a734516bcde57bf7c7bb4ad095c894c23", "title": "The Cage Around Internationalisation of Smes and The Role of Government" }, { "paperId": "57e041577b686903d8f8a62819fe5bb7c7e47e92", "title": "S2HS- A blockchain based approach for smart healthcare system." 
}, { "paperId": "48f0a7f6dfbbd2ff8d428c7a4195dcc412ae5a6a", "title": "Luxury fashion and sustainability: looking good together" }, { "paperId": "9d3f64861b03860d76ce12f00b4cd6df988bd249", "title": "Blockchain-technology-supported platforms for diamond authentication and certification in luxury supply chains" }, { "paperId": "0a69ea4c66e907b48cc7c63a70e26ceefac7c0ec", "title": "Labor, Global Supply Chains, and the Garment Industry in South Asia" }, { "paperId": "abdb4a5bbdcab60344497edd3e48599b64f0c00f", "title": "Blockchain technology: implications for operations and supply chain management" }, { "paperId": "699132b58a742022670e8498113c7696f5f318ba", "title": "Counterfeit goods fraud: an account of its financial management" }, { "paperId": "1455a6a1bedcf72b02cf5a4693527498e94d52bf", "title": "Traceability in Textile and Clothing Supply Chains: Classifying Implementation Factors and Information Sets via Delphi Study" }, { "paperId": "a54b9f26f8ff49be6ddf0b482f7d40be4939749f", "title": "Traceability for sustainability – literature review and conceptual framework" }, { "paperId": "cfd14903d07b582d2a17144a1016bb7f17e1be0d", "title": "Luxury brand desirability and fashion equity: The joint moderating effect on consumers’ commitment toward luxury brands" }, { "paperId": "3182f7944c3f960190b3a509020d054b8d7481d1", "title": "Transparency" }, { "paperId": "d7b60c5fcc3ec546eba8d43c4e495222e75761fb", "title": "The International Encyclopedia of Strategic Communication" }, { "paperId": "2c02c28bb34b1d35964f03634b39784e13e4b291", "title": "Franchising contracts in fashion supply chain operations: models, practices, and real case study" }, { "paperId": "d94b8f1171c9edfa01c0c82b552ebd26e63312da", "title": "The acceptance of blockchain technology in meat traceability and transparency" }, { "paperId": "e9a59fd61f347ab9cf1cc5666a37c64d086cec30", "title": "Blockchain Framework for Textile Supply Chain Management - Improving Transparency, Traceability, and Quality" }, { "paperId": "e004dc64732bb7e83994860bc651a129a08c30f1", "title": "Technology and counterfeiting in the fashion industry: Friends or foes?" }, { "paperId": "18a25645ef2eb9f599711a9ae49736542e4e7041", "title": "Sustainable chemistry: A solution to the textile industry in a developing world" }, { "paperId": "69916a9e0cf25a659a3bae53ce837420540fcadf", "title": "Engaging the fashion consumer in a transparent business model" }, { "paperId": "e8709e2906361ade9064cc605b9c7637bec474a0", "title": "Can Blockchain Strengthen the Internet of Things?" 
}, { "paperId": "612ba99a0d3ca2c7cc3ef9090d0ff08db423d36f", "title": "Sustainable Retailing in the Fashion Industry: A Systematic Literature Review" }, { "paperId": "662ca2ea8f4d9eb03e8e5192795615cf6a0ae192", "title": "Illicit economies: customary illegality, moral economies and circulation" }, { "paperId": "8aaf1246e8b7da9b73b13e2e33b82ca53dbf6b88", "title": "Contribution of traceability towards attaining sustainability in the textile sector" }, { "paperId": "daa1cd2bbae39dd6f1f33dae748fd7c13400d8f3", "title": "The technology of trust: How the Internet of Things and blockchain could usher in a new era of construction productivity" }, { "paperId": "d809b1a1c987e13a2fbc466b87d95002b6198526", "title": "Blockchain Basics: A Non-Technical Introduction in 25 Steps" }, { "paperId": "b6ad5edd66bcfe9439ad9c645179c1396af8f431", "title": "The quality of sustainability reports and impression management: A stakeholder perspective" }, { "paperId": "8f704940bceca85aca4fa582641dc89f4af5a511", "title": "A strenuous path for sustainable supply chains in the footwear industry: A business strategy issue" }, { "paperId": "69a22ec0bb3aeb424bc7d7ee2b8d1b4b59cda3cb", "title": "Trusting records: is Blockchain technology the answer?" }, { "paperId": "80be4ae917b6160bbd5076be07404453a42f086f", "title": "Analysis of gray markets in differentiated duopoly" }, { "paperId": "87c1cdb20a73f1e8bdef4364c6001c1e8bba78e9", "title": "Integrating the Supply Chain … 25 years on" }, { "paperId": "2bf44ce7af48d5e71f809dc2bf8ae3e7256170ed", "title": "‘Get real, don’t buy fakes’: Fashion fakes and flawed policy – the problem with taking a consumer-responsibility approach to reducing the ‘problem’ of counterfeiting" }, { "paperId": "f72971186ee6008ff32ba73ba8b7c6142f4aa26f", "title": "Fake it ’til we make it: regulating dangerous counterfeit goods" }, { "paperId": "6d3037175cd88e01c18d2065c4702e7b7dfc0074", "title": "The role of fashion brand authenticity in product management: : a holistic marketing approach" }, { "paperId": "4dfea6d680cf0f75e909b41796d2dfcbd244c3d0", "title": "Building Trust Between Consumers and Corporations: The Role of Consumer Perceptions of Transparency and Social Responsibility" }, { "paperId": "d7d566c4079a28708303777b487ef38349dba8fe", "title": "The fashion industry" }, { "paperId": "82504006291ac980934fc41743c4aae3c44dce77", "title": "Organizational Structure and Gray Markets" }, { "paperId": "d941e2534f7ceb16a2980a88c516da545e10cfc1", "title": "Competitive supply chain network design: An overview of classifications, models, solution techniques and applications" }, { "paperId": "5f98109e62b8aefa80eb1c3b432aadd5c5c841e1", "title": "Measuring Consumer-Based Brand Authenticity" }, { "paperId": "7245ea74cd899fe5969bc29c6acea80e922f455b", "title": "Virtual Currencies; Bitcoin & What Now after Liberty Reserve, Silk Road, and Mt. Gox?" 
}, { "paperId": "bc6a4c3a60a6584a8092e0e41e91dab4543c4285", "title": "Are estimates of the volume of money laundering either feasible or useful" }, { "paperId": "203a85a0515d565de440ab4a2986ea52f403f3c3", "title": "'Faking brands': Consumer responses to counterfeiting" }, { "paperId": "d00471b00a51a486e6978e8947b5588265bd48c0", "title": "Technology designed to combat fakes in the global supply chain" }, { "paperId": "0ce2a6d98afacd0503ad60826c5263cc53bcf076", "title": "Slow fashion movement: Understanding consumer perceptions—An exploratory study" }, { "paperId": "17dba4a055da020c7395192b1f76b5255dedeb09", "title": "How to define traceability" }, { "paperId": "61b6d53e0fd9dd93b17f14c41cf37a2547382167", "title": "Coerced, Forced and Unfree Labour: Geographies of Exploitation in Contemporary Labour Markets" }, { "paperId": "221b6934e755f87e7bc92b76f886b04c544950e6", "title": "Secrecy and Transparency" }, { "paperId": "daaad0cbf98c1a57c73aa4acc7015a24666426af", "title": "Coping with Gray Markets: The Impact of Market Conditions and Product Characteristics" }, { "paperId": "997ecc585f2f5c932b116a49b18e8119e83f0855", "title": "Public understanding towards sustainable clothing and the supply chain" }, { "paperId": "fbbdb7180e4b82951e6f3a8d26e57814e34f628c", "title": "Do Transparent Business Practices Pay? Exploration of Transparency and Consumer Purchase Intention" }, { "paperId": "9f38b8fbb716c1df4cbc5ef69407eeef4425874c", "title": "Consumer complicity with counterfeit products" }, { "paperId": "4ec3c16442a61681af47b7fde7e0a02eada49443", "title": "Intensifying Export Performance Through Planned Capacity Building: A Study of the Indian Apparel Sector" }, { "paperId": "23e1f4361a7fc5f5d35ecd6a34b875e79b4ec2d7", "title": "Jailhouse Frocks: Locating the Public Interest in Policing Counterfeit Luxury Fashion Goods" }, { "paperId": "62b5d218846b59d30abf582e83775c18c17aa1b7", "title": "Green Washing" }, { "paperId": "673460dc0814ad53dc980fe192f3ab336acd247d", "title": "Fast fashion: response to changes in the fashion industry" }, { "paperId": "245831b1ba9fa32fdb224555b37533010af903e6", "title": "Preferred reporting items for systematic reviews and meta-analyses: the PRISMA Statement" }, { "paperId": "4cd7d5e84be894ccc38b4a841f608e6d5df42d4a", "title": "The anatomy of the luxury fashion brand" }, { "paperId": "9f267b7ac41bdfe72a058f24539d5587f41c51a4", "title": "Why Do Consumers Buy Counterfeit Luxury Brands?" 
}, { "paperId": "fcbd1d626d91a97e4a4ff9542253a691ad3c0336", "title": "An investigation of determinants of counterfeit purchase consideration" }, { "paperId": "b12c0e8ca04088720f06ff185c8273f8efacb0fe", "title": "National Institute of Environmental Health Sciences" }, { "paperId": "6d6a3f126b92f07d2803b14f2f0b4951a7090eda", "title": "Projecting Authenticity Through Advertising: Consumer Judgments of Advertisers' Claims" }, { "paperId": "3fdc495553bf331ea6e00acf4ab013239335d2ba", "title": "Managing complexity in agile global fashion industry supply chains" }, { "paperId": "7d2c26490ef0c20d5de03bf00d45b769747215f0", "title": "The 'real thing': Branding authenticity in the luxury wine trade" }, { "paperId": "23dc4a894d6e6c4cae714d9a849eb96f192ec046", "title": "The Piracy Paradox: Innovation and Intellectual Property in Fashion Design" }, { "paperId": "a6f87187553ab5b1f85b30f63cecd105f222a91f", "title": "Emotional Branding and the Strategic Value of the Doppelgänger Brand Image" }, { "paperId": "fadb63870ac3c86a1f2282682384dcd67f1b1350", "title": "Trust Diffusion: The Effect of Interpersonal Trust on Structure, Function, and Organizational Transparency" }, { "paperId": "95c4f1d3d2c20a014c4983e9d4cbe20e15311129", "title": "The Ethics of Counterfeiting in the Fashion Industry: Quality, Credence and Profit Issues" }, { "paperId": "960a640bfe46e9a773722f6344cf91c7050f1375", "title": "Teaching Old Brands New Tricks: Retro Branding and the Revival of Brand Meaning" }, { "paperId": "3ce6db01cbd1889aabd1330ffce38760d2f92bb8", "title": "Five Steps to Conducting a Systematic Review" }, { "paperId": "3f445d762614df00ad74367df7ae52455c655d5c", "title": "……Industry" }, { "paperId": "bb4639a014443b286ac009b11a3623439a501a25", "title": "Executive Insights: Countering Brand Counterfeiters" }, { "paperId": "d35b2dbb357d479a9b64a9985da8e2f3e7596f25", "title": "The myth of the ethical consumer – do ethics matter in purchase behaviour?" }, { "paperId": "e3c5d8952d509b86d06ece1c83244f148a1a4337", "title": "DEFINING SUPPLY CHAIN MANAGEMENT" }, { "paperId": "5c4af0f11a04aaae790936d4f1a0264a77b98ee4", "title": "Strategic Brand Management: Building, Measuring, and Managing Brand Equity." 
}, { "paperId": "97e00b6d21c891e610c217c31cfbc602c453b17b", "title": "International Counterfeit Marketing: Success without Risk" }, { "paperId": "6be35f5b6ecf665bb7384b139fa1d628c86b66ff", "title": "Celebrity Endorsement: A Literature Review" }, { "paperId": "148f124513abac67721deb55f847424c1e0abaaa", "title": "Supercharging Supply Chains: New Ways to Increase Value Through Global Operational Excellence" }, { "paperId": "df7e4cb808890dc04707c56e514f7d75f5864b31", "title": "Emerging Logistics Strategies" }, { "paperId": "792ff5a74aac0672b52805198942a6051fd0caf1", "title": "Integrating the Supply Chain" }, { "paperId": null, "title": "Global apparel market - statistics & facts" }, { "paperId": null, "title": "Textile genesis [online" }, { "paperId": "98b3d65290b060315ae7236d496f2cb2e7bc9205", "title": "Blockchain Technology and Fashion Industry-Opportunities and Challenges" }, { "paperId": "0f08153b6906068370913c8fb7e7c4f79cbf3ba5", "title": "Blockchain Technology: Applications and Challenges" }, { "paperId": "a6def4f8a07364cef3afd8c158d1b186194f5176", "title": "An Overview of Blockchain-Based Smart Contract" }, { "paperId": "2a01d1f7943972c031c716272f4617cc8ec814e7", "title": "Waste management strategies in fashion and textiles industry: Challenges are in governance, materials culture and design-centric" }, { "paperId": null, "title": "Fibre: Unravelling the seams" }, { "paperId": null, "title": "The state of fashion 2021. The State of Fashion ed" }, { "paperId": null, "title": "A turning point in Uzbekistan’s cotton harvest" }, { "paperId": "6c06a9c822aa321cf2c786ede171651396cd1657", "title": "Gray market and counterfeiting in supply chains: A review of the operations literature and implications to luxury industries" }, { "paperId": "c3cf7ca6b0743d67d9e2f3cb087bb650b0febbd6", "title": "Overview of recent advancement in globalization and outsourcing initiatives in manufacturing systems" }, { "paperId": "b31df65aea5a4256f0c7d19e3fa518b7c40b942f", "title": "The UN Sustainable Development Goals for the Textile and Fashion Industry" }, { "paperId": null, "title": "Children ’ srightsinthegarmentandfootwearsupplychain" }, { "paperId": null, "title": "in Seven people died when the only escape from a fire at an Indian denim factory was up a ladder" }, { "paperId": null, "title": "ThesefactsshowhowsustainablethefashionindustryisWorld" }, { "paperId": null, "title": "Fast fashion, explained [online" }, { "paperId": "4ba39a959ad90d06bfae92194b064d6b0ab36952", "title": "Traceability and Transparency: A Way Forward for SDG 12 in the Textile and Clothing Industry" }, { "paperId": null, "title": "report shows customers want responsible fashion, but don’t want to pay for it" }, { "paperId": "815cb5526044da78c4c0186f228c042f876eb511", "title": "Blockchain-Based Secured Traceability System for Textile and Clothing Supply Chain" }, { "paperId": "c2d40bb5539650f9673dd6b154c4fafd88eeb802", "title": "The Missing Link Between Blockchain and Copyright: How Companies Are Using New Technology to Misinform Creators and Violate Federal Law" }, { "paperId": "f10808a21dcb5ddfec33ab3d93a72bdf3e650202", "title": "Making Fashion Transparent: What Consumers Know about the Brands They Admire" }, { "paperId": "77c42c08166ad27ef6d06167f55288fb975dd899", "title": "Blockchain technology in construction industry : Transparency and traceability in supply chain" }, { "paperId": "8ca6089df264b32ff2519b56624b653663e6fad8", "title": "Policies versus Practices: Transparency of supply chain disclosures among luxury and mass market 
fashion brands" }, { "paperId": null, "title": "COMMONOBJECTIVE" }, { "paperId": null, "title": "Fashion’s dirty secret: Millions in grey market sales [online" }, { "paperId": null, "title": "Perceived authenticity" }, { "paperId": null, "title": "Who made my clothes” movement-how it all began [online" }, { "paperId": "a6701a34e5cb4b055863ff85e1e9071b67357288", "title": "Vogue or Vague: Sustainability Performance Appraisal in Luxury Fashion Supply Chains" }, { "paperId": "4061f8342c7dec952f0ccf4bf4c4e8088ce0ee20", "title": "Anti-consumption and Governance in the Global Fashion Industry: Transparency is Key" }, { "paperId": "a3809b125d31ab5c1c29a3da22f64161a631dced", "title": "Governing Corporate Social Responsibility in the Apparel Industry after Rana Plaza" }, { "paperId": "806b53c232f3f4dfa70a77071606ad80487f1035", "title": "Digital rights management : blockchain and digital music content management" }, { "paperId": "ef421af177a513784bd0ad3b5f25f98330b5c5b1", "title": "Transforming our world : The 2030 Agenda for Sustainable Development" }, { "paperId": "636226e13977683de5de04d669e5490443c5065f", "title": "Blockchain ready manufacturing supply chain using distributed ledger" }, { "paperId": "e65feb35d5471aa80fe50c0d050569614e6fd7d4", "title": "What’s Your Strategy for Supply Chain Disclosure?" }, { "paperId": null, "title": "Despite the Sustainable Apparel Coalition, there’s a lot you don’t know about that T-shirt" }, { "paperId": "53cb534c2951f6e0736c30c664fd689b61c538c1", "title": "Organized hypocrisy, organizational façades, and sustainability reporting" }, { "paperId": "b90e5240ee2c52fd6a3ee3d37773dc54351b7787", "title": "Multi-dimensional Consumers: Fashion and Human Factors" }, { "paperId": "9cfb48c5c8f99245a48e9e73e259f2f7a42e146d", "title": "The Relativity of Wrongdoing: Corruption, Organised Crime, Fraud and Money Laundering in Perspective" }, { "paperId": "8cbd535a53ade4d965622ac5aec609ee97838bda", "title": "Estimating the counterfeit markets in Europe" }, { "paperId": null, "title": "Cotton’s forgotten children" }, { "paperId": null, "title": "Wearable art: Final fashions: Fashion is a form of self-expression in life but few consider how they" }, { "paperId": "e45c0b40953b7c45c9bb56b23dadc77c5bb8ba71", "title": "Literature Review and Conceptual Framework" }, { "paperId": null, "title": "Estimating the global economic and social impacts of counterfeiting and piracy" }, { "paperId": "8cb43ef9a558f478f824ba5cea6dc2728b69765e", "title": "Corporate Reporting Framework (CRF): Benchmarking Tata Motors against AB Volvo and Exploring Future Challenges" }, { "paperId": "ee1b2191e6de66a8c45d3cbafda96a7262780b61", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": "66d202199d41fc4a8159f5b0a1f3e79a8f105331", "title": "Waste prevention, waste policy and innovation" }, { "paperId": "ca0938290c6c8a7ea1cd1ff7ab3ad2f33edc7d50", "title": "Counterfeiting exposed : protecting your brand and customers" }, { "paperId": null, "title": "Department of Economic and Social Affairs.”" }, { "paperId": "f5e2f95c78d934e6a9306ca41185d400b5a808b5", "title": "Modeling Supply Chain Contracts: A Review" }, { "paperId": null, "title": "8402: 1994 quality management and quality assurance–vocabulary" }, { "paperId": null, "title": "How does blockchain technology work. Retrieved from coindesk" }, { "paperId": null, "title": "Welcome to FiberCoin of fi cial website" } ]
25490
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01bdb3a6035f1beb00143f618e97acc6e16efe97
[ "Computer Science" ]
0.858935
A comprehensive performance analysis of Apache Hadoop and Apache Spark for large scale data sets using HiBench
01bdb3a6035f1beb00143f618e97acc6e16efe97
Journal of Big Data
[ { "authorId": "2069485772", "name": "N. Ahmed" }, { "authorId": "3312622", "name": "A. Barczak" }, { "authorId": "2656889", "name": "Teo Sušnjak" }, { "authorId": "144818046", "name": "M. A. Rashid" } ]
{ "alternate_issns": [ "2579-0048" ], "alternate_names": [ "J Big Data", "Journal on Big Data" ], "alternate_urls": [ "http://www.springer.com/computer/database+management+&+information+retrieval/journal/40537", "http://techscience.com/JBD/index.html", "https://journalofbigdata.springeropen.com", "https://journalofbigdata.springeropen.com/" ], "id": "d60da343-ab92-4310-b3d7-2c0860287a9d", "issn": "2196-1115", "name": "Journal of Big Data", "type": "journal", "url": "http://www.journalofbigdata.com/" }
Big Data analytics for storing, processing, and analyzing large-scale datasets has become an essential tool for the industry. The advent of distributed computing frameworks such as Hadoop and Spark offers efficient solutions to analyze vast amounts of data. Due to the availability of its application programming interface (API) and its performance, Spark has become very popular, even more popular than the MapReduce framework. Both frameworks have more than 150 parameters, and the combination of these parameters has a massive impact on cluster performance. The default system parameters help system administrators deploy their applications without much effort, and they can measure their specific cluster performance with factory-set parameters. However, an open question remains: can a new parameter selection improve cluster performance for large datasets? In this regard, this study investigates the most impactful parameters, covering resource utilization, input splits, and shuffle, to compare the performance of Hadoop and Spark, using a cluster implemented in our laboratory. We used a trial-and-error approach for tuning these parameters, based on a large number of experiments. To evaluate the two frameworks comparatively, we selected two workloads: WordCount and TeraSort. The performance evaluation is carried out based on three criteria: execution time, throughput, and speedup. Our experimental results revealed that the performance of both systems depends heavily on input data size and correct parameter selection. The analysis of the results shows that Spark performs better than Hadoop when data sets are small, achieving up to a two-times speedup in WordCount workloads and up to 14 times in TeraSort workloads when default parameter values are reconfigured.
## RESEARCH ## Open Access # A comprehensive performance analysis of Apache Hadoop and Apache Spark for large scale data sets using HiBench ### N. Ahmed[1*], Andre L. C. Barczak[1], Teo Susnjak[1] and Mohammed A. Rashid[2] *Correspondence: nasim751@yahoo.com. 1 School of Natural and Computational Sciences, Massey University, Albany, Auckland 0745, New Zealand. Full list of author information is available at the end of the article.

**Abstract** Big Data analytics for storing, processing, and analyzing large-scale datasets has become an essential tool for the industry. The advent of distributed computing frameworks such as Hadoop and Spark offers efficient solutions to analyze vast amounts of data. Due to the availability of its application programming interface (API) and its performance, Spark has become very popular, even more popular than the MapReduce framework. Both frameworks have more than 150 parameters, and the combination of these parameters has a massive impact on cluster performance. The default system parameters help system administrators deploy their applications without much effort, and they can measure their specific cluster performance with factory-set parameters. However, an open question remains: can a new parameter selection improve cluster performance for large datasets? In this regard, this study investigates the most impactful parameters, covering resource utilization, input splits, and shuffle, to compare the performance of Hadoop and Spark, using a cluster implemented in our laboratory. We used a trial-and-error approach for tuning these parameters, based on a large number of experiments. To evaluate the two frameworks comparatively, we selected two workloads: WordCount and TeraSort. The performance evaluation is carried out based on three criteria: execution time, throughput, and speedup. Our experimental results revealed that the performance of both systems depends heavily on input data size and correct parameter selection. The analysis of the results shows that Spark performs better than Hadoop when data sets are small, achieving up to a two-times speedup in WordCount workloads and up to 14 times in TeraSort workloads when default parameter values are reconfigured.

**Keywords: HiBench, BigData, Hadoop, MapReduce, Benchmark, Spark**

**Introduction** Hadoop [1] has become a very popular platform in the IT industry and academia for its ability to handle large amounts of data, along with extensive processing and analysis facilities. Different users produce these large datasets, and most of the data are unstructured, increasing the requirements for memory and I/O. Besides, the advent of many new applications and technologies brought much larger volumes of complex data, including social media, e.g., Facebook, Twitter, YouTube, online shopping, machine data, system data, and browsing history [2]. Storing, processing, and analyzing this massive amount of digital data is a challenging task for management. Conventional database management tools are unable to handle this type of data [3]. Big data technologies, tools, and procedures allow organizations to capture, rapidly process, and analyze large quantities of data and extract appropriate information at a reasonable cost. Several solutions are available to handle these problems [4]. Distributed computing is one possible solution, considered the most efficient and fault-tolerant method for companies to store and process massive amounts of data. Among this new group of tools, MapReduce and Spark are the most commonly used cluster computing frameworks. They provide users with various functions through simple application programming interfaces (APIs).

MapReduce is a distributed computing framework for parallel processing, designed to write, read, and process large amounts of data [1, 5, 6]. This data processing framework comprises three stages: the Map phase, the Shuffle phase, and the Reduce phase. In this technique, large files are divided into several small blocks of equal size and distributed across the cluster for storage. MapReduce and the Hadoop Distributed File System (HDFS) are core parts of the Hadoop system, so computing and storage work together across all nodes that compose a cluster of computers [7].

Apache Spark is an open-source cluster-computing framework [8]. It is designed based on Hadoop, and its purpose is to build a programming model that "fits a wider class of applications than MapReduce while maintaining the automatic fault tolerance" [9]. It is not only an alternative to the Hadoop framework; it also provides various functions to process real-time streaming data. Apart from the map and reduce functions, Spark also supports MLlib, GraphX, and Spark Streaming for big data analysis. Hadoop MapReduce processing speed is slow because it requires accessing disks for reads and writes. Spark, on the other hand, stores data in memory, reducing the read/write cycles [1].
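To make this contrast concrete, the following is a minimal sketch of the WordCount workload written against Spark's Scala RDD API. It is illustrative only, not the HiBench implementation: the HDFS paths are placeholders, and the cache() call is included to show the in-memory reuse that lets Spark avoid the disk round trips a chained MapReduce job would incur.

```scala
import org.apache.spark.sql.SparkSession

// Minimal WordCount sketch using Spark's RDD API.
// The input and output paths below are placeholders, not HiBench's paths.
object WordCountSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("WordCountSketch")
      .getOrCreate()
    val sc = spark.sparkContext

    val lines = sc.textFile("hdfs:///user/demo/input")
    val counts = lines
      .flatMap(_.split("\\s+"))   // map side: split each line into words
      .map(word => (word, 1))     // emit (word, 1) pairs
      .reduceByKey(_ + _)         // shuffle + reduce: sum the counts per word
      .cache()                    // keep the result in memory for any reuse

    counts.saveAsTextFile("hdfs:///user/demo/wordcount-output")
    spark.stop()
  }
}
```

An equivalent MapReduce job would write its map output to local disk before the shuffle and its final output to HDFS, which is exactly the read/write cycle referred to above.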
This massive amount of digital data becomes a challenging task for management to store, process, and analyze. Conventional database management tools are unable to handle this type of data [3]. Big data technologies, tools, and procedures allow organizations to capture, process, and analyze large quantities of data speedily and to extract appropriate information at a reasonable cost. Several solutions are available to handle this problem [4]. Distributed computing is one possible solution, considered the most efficient and fault-tolerant method for companies to store and process massive amounts of data. Among this new group of tools, MapReduce and Spark are the most commonly used cluster computing tools. They provide users with various functions through simple application programming interfaces (APIs).

MapReduce is a framework for distributed computing used for parallel processing and designed purposely to write, read, and process bulky amounts of data [1, 5, 6]. This data processing framework comprises three stages: the Map phase, the Shuffle phase, and the Reduce phase. In this technique, large files are divided into several small blocks of equal size and distributed across the cluster for storage. MapReduce and the Hadoop Distributed File System (HDFS) are core parts of the Hadoop system, so computing and storage work together across all nodes that compose a cluster of computers [7].

Apache Spark is an open-source cluster-computing framework [8]. It is designed based on Hadoop, and its purpose is to build a programming model that "fits a wider class of applications than MapReduce while maintaining the automatic fault tolerance" [9]. It is not only an alternative to the Hadoop framework but also provides various functions to process real streaming data. Apart from the map and reduce functions, Spark also supports MLlib, GraphX, and Spark Streaming for big data analysis. Hadoop MapReduce processing speed is slow because it requires accessing disks for reads and writes. On the other hand, Spark uses memory to store data, reducing the read/write cycle [1].

In this paper, we address the above-mentioned critical challenges; to our knowledge, none of the previous works has addressed them. Our work will help system administrators and researchers understand system behavior when processing large scale data sets. The main contributions of this paper are as follows:

- We introduce a comprehensive empirical performance analysis of the MapReduce and Spark frameworks by correlating resource utilization, split size, and shuffle behavior parameters. To our knowledge, few previous studies have presented such information; considering this point, we focus on a comprehensive study of the impact of various parameters on a large data set instead of a large number of workloads.
- We accomplish a comprehensive comparison between Hadoop and Spark where large scale datasets (600 GB) are used for the first time. The experiments present various aspects of cluster performance overhead; we applied two HiBench workloads to test the efficiency of the system under MapReduce and Spark while the data sets are repeatedly changed.
- We selected several parameters covering different aspects of system behavior. Multiple parameters are used to tune job performance. The results of the analysis will facilitate job performance tuning and give more freedom to modify the ideal parameters to enhance job efficiency.
- We measured the scalability of the experiments by repeating each experiment three times and taking the average execution time for each job. Besides, we investigate the system execution time, maximum sustainable throughput, and speedup.
- We used a real cluster capable of handling a large scale data set (600 GB) with benchmarking tools for a comprehensive evaluation of MapReduce and Spark.

The remainder of the paper is organized as follows: the "Related work" section presents a critical review of related research works and then describes the Hadoop and Spark systems. The difference between Hadoop and Spark is explained in the "Difference between Hadoop and Spark" section. The experimental setup is presented in the "Experimental setup" section. In "The parameters of interest and tuning approach" section, we explain the chosen parameters and the tuning approach. The "Results and discussion" section presents the performance analysis of the results, and finally, we conclude in the "Conclusion" section.

**Related work**

Shi et al. [10] proposed two profiling tools to quantify the performance of the MapReduce and Spark frameworks based on a micro-benchmark experiment. The comparative study between these frameworks was conducted with batch and iterative jobs. In their work, the authors consider three components: shuffle, execution model, and caching. The workloads WordCount, k-means, Sort, Linear Regression, and PageRank were chosen to evaluate system behavior in CPU-bound, disk-bound, and network-bound settings [11]. They disabled the map and reduce functions for all workloads apart from Sort. For Sort, the job is configured with up to 60 map tasks, and the number of reduce tasks is configured to 120. The map output buffer is allocated 550 MB to avoid additional spills when sorting the map output. Spark intermediate data are stored on 8 disks, where each worker is configured with four threads. The authors claim that Spark is faster than MapReduce when WordCount runs with different data sets (1 GB, 40 GB, and 200 GB). TeraSort is implemented with the sort-by-key() function. They found that Spark is faster than MapReduce when the data set is smaller (1 GB), but MapReduce is nearly two times faster than Spark when the data set is bigger (40 GB or 100 GB). Besides, Spark is one and a half times faster than MapReduce with machine learning workloads such as k-means and Linear Regression. It is claimed that in subsequent iterations Spark is five times faster than MapReduce due to RDD caching, and that Spark GraphX is four times faster than MapReduce.

Li et al. [12] proposed a Spark benchmarking suite [13], which significantly enhances the optimization of workload configuration. This work identified the distinct features of each benchmark application regarding resource consumption, data flow, and the communication patterns that can impact job execution time. The applications are characterized based on extensive experiments using synthetic data sets.
Ten different workloads are used with different input data sizes: Logistic Regression, Support Vector Machine, Matrix Factorization, PageRank, Triangle Count, SVD++, Hive, RDD Relation, Twitter, and PageView. An eleven-node virtual cluster is used to analyze the performance of the workloads. The workload analysis is carried out with respect to CPU utilization, memory, disk, and network input/output consumption at the time of job execution. They found that most of the workloads spend more than 50% of their execution time on map-shuffle tasks, except Logistic Regression. They concluded that job execution time could be reduced by increasing task parallelism to fully leverage CPU utilization.

Thiruvathukal et al. [14] considered the importance and implications of languages such as Python and Scala, built on the Java Virtual Machine (JVM), to investigate how the individual language affects overall system performance. This work proposed a comprehensive benchmarking test for Message Passing Interface (MPI) and cloud-based applications considering typical parallel analysis. The proposed benchmark techniques are designed to emulate a typical image analysis. They presented one mid-size (Argonne Leadership Computing Facility) cluster with 126 nodes, which runs on COOLEY [14], and a large scale supercomputer (Cray XC40) cluster with a single node, which runs on THETA [14]. Significantly, they increased the values of some important Spark parameters (Spark driver memory and executor memory) according to the machine resources. They suggested that the COOLEY and THETA frameworks are beneficial for immediate research work and high-performance computing (HPC) environments.

Marcu et al. [15] present a comparative analysis between the Spark and Flink frameworks for large scale data analysis. This work proposed a new methodology for benchmarking iterative workloads (k-means and PageRank) and batch processing workloads (WordCount, Grep, and TeraSort). They considered the four most important parameters that impact scalability, resource consumption, and execution time. Grid'5000 [16] was used to deploy Spark and Flink on up to 100 nodes. They noted that Spark parameter (i.e., parallelism and partitions) configuration is sensitive and depends on the data sets, while Flink is highly memory oriented.

Samadi et al. [7] investigated the criteria for a performance comparison between the Hadoop and Spark frameworks. In their work, for an impartial comparison, the input data size and configuration remained the same. Their experiment used eight benchmarks of the HiBench suite [13]. The input data was generated automatically for every case and size, and the computation was performed several times to find the execution time and throughput. When they deployed the micro-benchmarks (Sort and TeraSort) on both systems, Spark showed higher involvement of the processor in I/O, while Hadoop mostly processed user tasks. On the other hand, Spark's performance was excellent when dealing with small input sizes, such as the micro and web search (PageRank) benchmarks. Finally, they concluded that Spark is faster and very strong for processing data in memory, while Hadoop MapReduce performs the map and reduce functions on disk. In another paper, Samadi et al. [9] proposed a virtual machine based on Hadoop and Spark to get the benefit of virtualization. The main advantage of this virtual machine is that it can perform all operations even if the hardware fails.
In this deployment, they used the CentOS operating system and built a Hadoop cluster in pseudo-distributed mode with various workloads. In their experiments, they deployed the Hadoop machine on a single workstation with all other demos on its JVM. To justify the big data framework, they presented the results of a Hadoop deployment on Amazon Elastic Compute Cloud (EC2). They concluded that Hadoop is a better choice because Spark requires more memory resources than Hadoop. Finally, they suggested that the cluster configuration is essential to reduce job execution time, and that the cluster parameter configuration must align with the Mappers and Reducers.

The computational frameworks Apache Hadoop and Apache Spark were investigated by [17]. In this investigation, an Apache web server log file was taken into consideration to evaluate the comparative performance of the two frameworks. In these experiments, they used Okeanos's virtualized computing resources based on Infrastructure as a Service (IaaS), developed by the Greek Research and Technology Network [17]. They proposed a number of applications and conducted several experiments to determine each application's execution time, using various input files and numbers of slave nodes. They found that the execution time is proportional to the input data size and concluded that the performance of Spark is much better in most cases compared to Hadoop.

Satish and Rohan [18] presented a comparative performance study of Hadoop MapReduce and Spark based on the k-means algorithm. In this study, they used a specific data set that supports this algorithm and considered both single and double nodes when gathering each experiment's execution time. They concluded that Spark's speed reaches up to three times higher than MapReduce's, though Spark performance heavily depends on sufficient memory size [19].

Lin et al. [20] proposed a unified cloud platform, including batch processing ability, over standalone log analysis tools. This investigation considered four different frameworks: Hadoop, Spark, and the warehouse data analysis tools Hive and Shark. They implemented two machine learning algorithms (k-means and PageRank) based on this framework with six nodes to validate the cloud platform, using different data sizes as inputs. In the case of k-means, as the data size increased and exceeded the memory size, the scheduling latency and overall Spark performance degraded. However, the overall performance was still six times higher than Hadoop's on average. On the other hand, Shark shows a significant performance improvement when using queries directly from disk.

Petridis et al. [21] investigated the most important Spark parameters, shown in Table 4, and gave a guideline for developers and system administrators to select correct parameter values, replacing the default parameter values based on a trial-and-error methodology. Three types of case studies with different categories, namely Shuffle Behavior, Compression and Serialization, and Memory Management parameters, were performed in this study. They highlighted the impact of memory allocation and serialization when the number of cores and the default parallelism values change. In total, 12 parameters were chosen, with three benchmarking applications: sort-by-key, shuffling, and k-means.
The sort-by-key experiments used both 1 million and 1 billion key-value pairs of lengths 10 and 90 bytes, and the optimal degree of partitioning is set to 640. The Hash shuffle performance improved to 127 s, which is 30 s faster than with the default parameter, and tuning shuffle.file.buffer improved the time by 140 s. The rest of the parameters do not play any important role in improving the performance. For another shuffling experiment, they used a 400 GB dataset: the Hash shuffle performance degraded by 200 s, and Tungsten-Sort speed increased by 90 s. By decreasing the buffer size from 32 to 15 KB, the system performance degraded by about 135 s, which is more than 10% compared to the primary selection. For k-means, they used two input data sizes (100 MB and 200 MB) and did not find a significant k-means performance improvement by changing the parameters. They concluded that, based on their methodology, the speedup achievement is tenfold. However, the main challenge of tuning Hadoop and Spark configuration parameters lies in the complicated behavior of distributed large scale systems, and parameter selection is not always trivial for system administrators. An inappropriate combination of parameter values can affect the overall system performance.

The published literature in Table 1 presents some empirical studies. None of these studies have considered larger data sizes (600 GB), more parameters, and real clusters. In our study, we chose a conventional trial-and-error approach [21], a larger data set, and 18 important parameters (listed in Tables 3 and 4) from the resource utilization, input splits, and shuffle categories.

**Table 1 Published related work**

| Authors | Date | Workloads | Data size | Parameters | Hardware |
|---|---|---|---|---|---|
| Lin et al. [20] | 2013 | K-means, PageRank, log analysis | 10,000 to 20 mil points; 1 mil to 10 mil points | | Nodes: 6, 2 CPU cores, 4 GB memory per node; Nodes: 4, 16 CPU cores, 48 GB memory per node |
| Satish and Rohan [18] | 2015 | K-means | 62–1240 MB | Default | Virtual machine; Nodes: 2, 4 GB RAM and 500 GB (HD) |
| Samadi et al. [7] | 2016 | Micro benchmarks, web search, SQL, machine learning | 18–328 MB; 5000 to 12 × 10e4 pages | | 3 virtual machines; Disk (SSD): 40 GB |
| Petridis et al. [21] | 2017 | K-means, shuffling and sort-by-key | 400 GB | 12 | Barcelona Supercomputing Center |
| Mavridis et al. [17] | 2017 | Spark SQL and Hive, log analysis | 1.1 GB, 1.5 GB and 11 GB | | 6 virtual machines; Memory: 8 GB; Master node: 8 cores; Slave nodes: 4 cores |
| Samadi et al. [9] | 2018 | Micro benchmarks, web search, SQL, machine learning | 1 GB, 5 GB and 8 GB | | 3 virtual machines; Disk (SSD): 40 GB |
| Proposed experiments | 2020 | WordCount and TeraSort | 50–600 GB | 18 | SNCC production cluster; CPU cores: 80; Total storage: 60 TB; Master node: 1; Slave nodes: 9 |

**Difference between Hadoop and Spark**

Hadoop [22] is a very popular and useful open-source software framework that enables distributed storage, including the capability of storing large big data sets across clusters. It is designed in such a way that it can scale up from a single server to thousands of nodes. Hadoop processes large data concurrently and produces fast results. With Hadoop, the core parts are the Hadoop Distributed File System (HDFS) and MapReduce. HDFS [23] splits files into small blocks and saves them on different nodes. There are two kinds of nodes in HDFS: data-nodes (workers) and name-nodes (master nodes) [24, 25].
All operations, including delete, read, and write, are based on these two types of nodes. The HDFS workflow is as follows: first, the client asks the name-node for access permission. If accepted, the name-node turns the file name into a list of HDFS block IDs, including the files and the data-nodes that store the blocks related to that file. The ID list is then sent back to the client, and the user can perform further operations based on it.

MapReduce [26] is a computing framework that includes two operations: Mappers and Reducers. The mappers process files based on the map function and transform them into new key-value pairs [27]. Next, the new key-value pairs are assigned to different partitions and sorted based on their keys. The combiner is optional and can be recognized as a local reduce operation, which counts values with the same key in advance to reduce the I/O pressure. Finally, the partitioner divides the intermediate key-value pairs into different pieces and transfers them to a reducer. MapReduce also needs to implement one more operation: shuffle. Shuffle means transferring the mapper output data to the proper reducer. After the shuffle process is finished, the reducer starts some copy threads (Fetchers) and obtains the output files of the map tasks through HTTP [28]. The next step is merging the output into different final files, which are then recognized as reducer input data. After that, the reducer processes the data based on the reduce function and writes the output back to HDFS. Figure 1 depicts the Hadoop MapReduce architecture.

**Fig. 1 Hadoop MapReduce architecture [1]**

Spark became an open-source project in 2010; Zaharia developed it at UC Berkeley's AMPLab in 2009 [4, 29]. Spark offers numerous advantages for developers building big data applications. Spark proposed two important concepts: Resilient Distributed Datasets (RDD) and the Directed Acyclic Graph (DAG). These two techniques work together and accelerate Spark up to tens of times faster than Hadoop under certain circumstances, even though it usually only achieves a performance two to three times faster than MapReduce. Spark supports multiple data sources, has a fault tolerance mechanism, can cache data, and supports parallel operations. Besides, it can represent a single dataset with multiple partitions. When Spark runs on a Hadoop cluster, RDDs are created from HDFS in the many formats supported by Hadoop, such as text and sequence files. The DAG scheduler [30] expresses the dependencies of RDDs. Each Spark job creates a DAG, the scheduler divides the graph into different stages of tasks, and the tasks are then launched to the cluster. The DAG is created in both the map and reduce stages to express the dependencies fully. Figure 2 illustrates the iterative operation on RDDs. Theoretically, limited Spark memory causes the performance to slow down.

**Fig. 2 Spark workflow [31]**
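As a concrete illustration of the map, shuffle, and reduce stages described in this section, the following minimal Python sketch simulates a WordCount-style job outside of Hadoop. It is a toy model of the programming abstraction only, not Hadoop's implementation; the function names and sample data are our own.

```python
from collections import defaultdict

def map_phase(document):
    # Map: emit a (word, 1) pair for every word in the input split.
    return [(word, 1) for word in document.split()]

def shuffle_phase(mapped_pairs):
    # Shuffle: group intermediate values by key so that all counts
    # for the same word reach the same reducer.
    groups = defaultdict(list)
    for key, value in mapped_pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # Reduce: aggregate the grouped values into the final word counts.
    return {key: sum(values) for key, values in groups.items()}

# Two input splits standing in for HDFS blocks.
splits = ["big data big cluster", "spark and hadoop process big data"]
mapped = [pair for split in splits for pair in map_phase(split)]
counts = reduce_phase(shuffle_phase(mapped))
print(counts)  # e.g., {'big': 3, 'data': 2, ...}
```

An optional combiner would simply run `reduce_phase` locally on each split's mapped pairs before the shuffle, cutting the volume of intermediate data moved over the network.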
**Experimental setup**

**Cluster architecture**

In the last couple of years, many proposals have come from different research groups about the suitability of the Hadoop and Spark frameworks when various types of data of different sizes are used as input on different clusters. Therefore, it becomes necessary to study the performance of the frameworks and understand the influence of various parameters. For the experiments, we present our cluster performance based on MapReduce and Spark using the HiBench suite [23]. In particular, we have selected two HiBench workloads out of the thirteen standard workloads to represent two types of jobs, namely WordCount (an aggregation job) [32] and TeraSort (a shuffle job) [33], with large datasets. We selected these two workloads because of their complex characteristics, to study how efficiently both workloads analyze the cluster performance by correlating the MapReduce and Spark functions with a combination of groups of parameters.

**Table 2 Experimental Hadoop cluster**

| Category | Item | Value |
|---|---|---|
| Server configuration | Processor | 2.9 GHz |
| | Main memory | 64 GB |
| | Local storage | 10 TB |
| Node configuration | CPU | Intel(R) Xeon(R) CPU E3-1231 v3 @ 3.40 GHz |
| | Main memory | 32 GB |
| | Number of nodes | 10 |
| | Local storage | 6 TB each, 60 TB total |
| | CPU cores | 8 each, 80 total |
| Software | Operating system | Ubuntu 16.04.2 (GNU/Linux 4.13.0-37-generic x86_64) |
| | JDK | 1.7.0 |
| | Hadoop | 2.4.0 |
| | Spark | 2.1.0 |
| Workload | Micro benchmarks | WordCount and TeraSort |

**Hardware and software specification**

The experiments were deployed on our own cluster. The cluster is configured with 1 master and 9 slave nodes, as presented in Fig. 3. The cluster has 80 CPU cores and 60 TB of local storage. The implemented hardware is suitable for handling various difficult situations in Spark and MapReduce. The detailed Hadoop cluster and software specifications are presented in Table 2. All our jobs run on Spark and MapReduce. We selected YARN as the resource manager, which helps us monitor each working node's situation and track the details of each job with its history. We used Apache Ambari to monitor and profile the selected workloads running on Spark and MapReduce. It supports most of the Hadoop components, including HDFS, MapReduce, Hive, Pig, HBase, ZooKeeper, Sqoop, and HCatalog [34]. Besides, Ambari supports the user in controlling the Hadoop cluster in three aspects, namely provisioning, management, and monitoring.

**Table 3 Hadoop configuration parameters**

| Category | Configuration parameter | Tuned values |
|---|---|---|
| Resource utilization | mapreduce.reduce.memory | 8 GB |
| | mapred.reduce.task | 16,384 MB, 25,600 MB |
| | mapreduce.reduce.cpu.vcores | 4 |
| Input split | mapred.min.split.size, mapred.max.split.size | 128 MB (default), 256 MB, 512 MB, 1024 MB |
| Shuffle | i/o.sort.mb | 25, 50, 75, 100 |
| | i/o.sort.factor | 512, 1024, 1536, 2047 |
| | mapreduce.reduce.shuffle.parallelcopies | 50, 100, 150, 200 |
| | mapreduce.task.io.sort.factor | 15, 30, 45, 60 |

**Table 4 Spark configuration parameters**

| Category | Configuration parameter | Tuned values |
|---|---|---|
| Resource utilization | num-executors | 50 |
| | executor-cores | 4 |
| | executor-memory | 8 GB |
| Input split | spark.hadoop.mapreduce.input.fileinputformat.split.minsize | 128 MB (default), 256 MB, 512 MB, 1024 MB |
| Shuffle | spark.shuffle.file.buffer | 16k, 32k (default), 48k, 64k |
| | spark.reducer.maxSizeInFlight | 32M, 48M (default), 64M, 96M |
| | spark.hadoop.dfs.replication | 1 |
| | spark.default.parallelism | 80, 100, 200, 300 |
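To illustrate how the Spark values in Table 4 would be applied in practice, the following hedged PySpark sketch configures a session with one combination drawn from the tuned ranges and runs a WordCount-style job. The specific combination and the HDFS paths are our own illustrative assumptions, not a recommendation from this study; the Hadoop parameters in Table 3 would analogously be set in mapred-site.xml or passed as -D options to a Hadoop job.

```python
from pyspark.sql import SparkSession

# One example combination of the Table 4 knobs (values taken from the tuned ranges).
spark = (
    SparkSession.builder
    .appName("wordcount-tuning-example")
    .config("spark.executor.instances", "50")      # num-executors
    .config("spark.executor.cores", "4")           # executor-cores
    .config("spark.executor.memory", "8g")         # executor-memory
    .config("spark.default.parallelism", "300")    # shuffle category
    .config("spark.shuffle.file.buffer", "48k")
    .config("spark.reducer.maxSizeInFlight", "96m")
    .config("spark.hadoop.dfs.replication", "1")
    .config("spark.hadoop.mapreduce.input.fileinputformat.split.minsize",
            str(256 * 1024 * 1024))                # 256 MB input splits
    .getOrCreate()
)

# A WordCount job whose runtime the parameter choices above would affect.
counts = (
    spark.sparkContext.textFile("hdfs:///data/wordcount-input")  # hypothetical path
    .flatMap(lambda line: line.split())
    .map(lambda word: (word, 1))
    .reduceByKey(lambda a, b: a + b)
)
counts.saveAsTextFile("hdfs:///data/wordcount-output")  # hypothetical path
```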
**Workloads**

As stated above, in this study we chose two workloads for the experiments [32, 33]:

_WordCount_: The WordCount workload is map-dependent, and it counts the number of occurrences of separate words in a text or sequence file. The input data is produced by RandomTextWriter. The input is split into individual words using the map function, which generates intermediate data for the reduce function as key-value pairs [35]. The intermediate results are added up, and the final word count is generated by the reduce function.

_TeraSort_: The TeraSort package was released by Hadoop in 2008 [36] to measure the capabilities of cluster performance. The input data is generated by the TeraGen function, which is implemented in Java. The TeraSort function does the sorting using MapReduce, and the TeraValidate function is used to validate the output of the sorted data.

For both workloads, we used up to 600 GB of synthetic input data generated using a string concatenation technique.

**The parameters of interest and tuning approach**

Tuning parameters in Apache Hadoop and Apache Spark is a challenging task, and we want to find out which parameters have important impacts on system performance. The configuration of the parameters needs to be investigated according to the workload, data size, and cluster architecture. We conducted a number of experiments using Apache Hadoop and Apache Spark with different parameter settings. For this experiment, we chose the core MapReduce and Spark parameter settings from the resource utilization, input splits, and shuffle groups. The selected parameters, with their respective tuned values in the MapReduce and Spark categories, are shown in Tables 3 and 4.

**Results and discussion**

In this section, the results obtained after running the jobs are evaluated. We used synthetic input data and the same parameter configuration for a realistic comparison. Each test was repeated three times, and the average runtime was plotted in each graph. For both frameworks, we show the execution time, throughput, and speedup to compare the two frameworks and visualize the effects of changing the default parameters.

**Execution time**

The execution time is affected by the input data sizes, the number of active nodes, and the application types. We fixed the same parameters for a fair comparative analysis: the number of executors to 50, executor memory to 8 GB, and executor cores to 4. Figure 4a, b show how the MapReduce and Spark execution times depend on the datasets' size and the different input split and shuffle parameters. The MapReduce WordCount workload with the default input split size (128 MB) and shuffle parameters (sort.mb 100, sort.factor 2047) obtained a better execution time across all data sizes compared to the other parameter values. The Hadoop map and reduce functions behave better because of their faster execution time and negligible container initialization overhead for specific workload types. This result suggests that the default parameters are more suitable for our cluster when using data sizes from 50 to 600 GB.

In Fig. 4c, the default input split size of Spark is 128 MB. As mentioned previously, the number of executors, executor memory, and executor cores are fixed. From Fig. 4c, we see that the execution time with an input split size of 256 MB outperforms the default setup for data sizes up to 450 GB. In fact, the default split size (128 MB) is more efficient when the data size is larger than 450 GB; notably, the default parameter shows better execution performance when the data set reaches 500 GB or above. The new parameter values can improve the processing efficiency by 2.2% over the default value (128 MB). Table 5 presents the experimental data for the WordCount workload on MapReduce and Spark while the default parameters are changed.

**Table 5 The best execution time of MapReduce and Spark with the WordCount workload**

| Configuration | Split sizes (MB) | Execution time (s) |
|---|---|---|
| MapReduce input splits (WordCount) | 128 | 2376 |
| Spark input splits (WordCount) | 256 | 1392 |
| MapReduce shuffle (WordCount) | 100 | 2371 |
| Spark shuffle (WordCount) | 300 | 1334 |

For the Spark shuffle parameters, we chose the default serializer (JavaSerializer) because of its simplicity and easy control over serialization performance [37]. In this category, the baseline parallelism (PL) value is 100 [37].
We can see from Fig. 4d that the improvement rate increases significantly when we set the PL value to 300. It is evident that the best performance is achieved for sizes larger than 400 GB; it also shows that, when tuning the PL value to 300, the system can achieve a 3% improvement for the rest of the data sizes. Consequently, we can conclude that input splits can be considered an important factor in enhancing the efficiency of Spark WordCount jobs when executing small datasets.

Figure 5a compares MapReduce TeraSort workloads based on input splits, including the default parameters. In this analysis, we fixed the (Red_Task and InSp) values with the default split size of 128 MB. We changed the parameter values and tested whether the split size affects the runtime, selecting three different sizes: 256 MB, 512 MB, and 1024 MB. We observed that with a split size of 256 MB, the execution performance increased by around 2% for datasets of up to 300 GB. On the contrary, when the data sizes are larger than 300 GB, the default size outperforms a split size of 512 MB. Moreover, we noticed that the improvement rates are similar when the data sizes are smaller than 200 GB.

Figure 5b illustrates the execution performance with the MapReduce shuffle parameters for the TeraSort workload. We observed that the average execution time behaves linearly for sizes up to 450 GB when the parameters change to (Reduce_150 and task.io_45) compared to the default configuration (Reduce_100 and task.io_30). Besides, we noticed that the default configuration outperforms all other settings when the data sizes are larger than 450 GB. So we can conclude that by changing the shuffle values, the system execution performance improves by 1%. In general, it is very unlikely that the default configuration has optimum performance for larger data sizes.

Figure 5c illustrates the execution performance analysis of the Spark input split parameter for the TeraSort workload. The Spark number of executors, executor cores, and executor memory are fixed while changing the block size to measure the execution performance. Apart from the default block size (128 MB), three other block sizes (256 MB, 512 MB, and 1024 MB) are taken into consideration. Our results revealed that block sizes of 512 MB and 1024 MB give a better runtime for data sizes up to 500 GB. We also observed a significant performance improvement achieved with the 1024 MB block size, which is 4% when the data size is larger than 500 GB. Thus, we can conclude that by increasing the input split block size for large scale data sizes, Spark performance can be increased.

Figure 5d shows the Spark shuffle behavior performance for the TeraSort workload. We have taken two important default parameters (buffer = 32 KB, spark.reducer.maxSizeInFlight = 48 MB) into our analysis.
We found that when the buffer and maxSizeInFlight are increased to 128 KB and 192 MB, the execution performance increases proportionally for data sizes up to 600 GB. Our results show that the default configuration performs equally to the tested values for data sizes up to 200 GB. The possible reason for this performance improvement is the larger split size for the different executors. Table 6 presents the experimental data of the TeraSort workload for MapReduce and Spark while the default parameters are changed.

**Table 6 The best execution time of MapReduce and Spark with the TeraSort workload**

| Configuration | Split sizes (MB) | Execution time (s) |
|---|---|---|
| MapReduce input splits (TeraSort) | 256 | 21,014 |
| Spark input splits (TeraSort) | 512 & 1024 | 3780 & 3439 |
| MapReduce shuffle (TeraSort) | 150 & 45 | 24,250 |
| Spark shuffle (TeraSort) | 128 & 192 | 6540 |

Figure 6a illustrates the comparison between Spark and MapReduce for the WordCount and TeraSort workloads after applying the different input splits. We observed that Spark shows more than two times higher execution performance when data sizes are larger than 300 GB for WordCount workloads; for the smaller data sizes, the performance improvement gap is around ten times. Figure 6b shows the TeraSort workload for MapReduce and Spark. We can see that Spark's execution performance is linear and grows proportionally as the data size increases. We also noticed that the runtimes of MapReduce jobs are not as linear in relation to the data size as those of Spark jobs. The possible reason could be unavoidable job actions on the clusters and the dataset being larger than the available RAM. So we conclude that MapReduce has slower data sharing capabilities and a longer read-write operation time than Spark [4].

**Fig. 6 The comparison of Hadoop and Spark with WordCount and TeraSort workloads with varied input splits and shuffle tasks**

**Throughput**

The throughput metrics are all in MB per second. For this analysis, we only considered the best results from each category. We observed that the MapReduce throughput performance for the TeraSort workload decreases slightly as the data size grows beyond 200 GB. Besides, for the WordCount workload, the MapReduce throughput is almost linear. For the Spark TeraSort workload, it can be observed that the throughput is not constant, but for the WordCount workload, the throughput is almost constant. In this analysis, the main focus was to present the throughput difference between the WordCount and TeraSort workloads for MapReduce and Spark. We found that the WordCount workload remains almost stable for most data sizes, while for the TeraSort workload, MapReduce remains more stable than Spark (see Fig. 7).

**Speedup**

Figure 8a-c show Spark's speedup compared to MapReduce. Figure 8a, b depict the individual workload speedups; the best results from each category are taken into consideration to compute the speedup. From these figures, we can see that as the data size increases, the WordCount workload speedup decreases with some non-linearity. Besides, we can see that the TeraSort speedup decreases when the data reaches sizes larger than 300 GB. Notably, as the data size increases beyond 500 GB for both workloads, the speedup starts to increase again. Figure 8c illustrates the speedup comparison between the workloads. It can be seen that the TeraSort workload outperforms the WordCount workload and achieves an all-time maximum speedup of around 14 times. The literature reports that Spark is up to ten times faster than Hadoop under certain circumstances, while in normal conditions it only achieves a performance two to three times faster than MapReduce [38]. However, this study found that Spark performance degrades when the input data size is big.
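As a reading aid for the throughput and speedup results above, the following minimal Python sketch recomputes both metrics from the best execution times in Tables 5 and 6. The formulas are the standard definitions (throughput = input size / execution time; speedup = MapReduce time / Spark time); the assumption that these best times correspond to the 600 GB runs is ours, and the 14-times peak cited above comes from Fig. 8 rather than from these table values.

```python
def throughput_mb_per_s(input_size_gb, execution_time_s):
    # Throughput in MB/s: total input size divided by execution time.
    return input_size_gb * 1024 / execution_time_s

def speedup(mapreduce_time_s, spark_time_s):
    # Speedup of Spark over Hadoop MapReduce for the same workload.
    return mapreduce_time_s / spark_time_s

# Best execution times (seconds) from Tables 5 and 6.
wordcount = {"mapreduce": 2371, "spark": 1334}
terasort = {"mapreduce": 21014, "spark": 3439}

for name, times in [("WordCount", wordcount), ("TeraSort", terasort)]:
    s = speedup(times["mapreduce"], times["spark"])
    t = throughput_mb_per_s(600, times["spark"])  # assumed 600 GB input
    print(f"{name}: speedup = {s:.1f}x, Spark throughput = {t:.0f} MB/s")
```

Running this gives roughly a 1.8x speedup for WordCount and 6.1x for TeraSort on the tabulated best times, consistent with the "about two times" WordCount claim in the conclusion below.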
**Conclusion**

This article presented an empirical performance analysis of Hadoop and Spark based on a large scale dataset. We executed the WordCount and TeraSort workloads with 18 different parameters, replacing their default values with new ones. To investigate the execution performance, we used a trial-and-error approach for tuning these parameters, performing a number of experiments on a nine-node cluster with a 600 GB dataset. Our experimental results confirm that the performance of both the Hadoop and Spark systems depends heavily on the input data size and the right parameter selection and tuning. We found that Spark performs better than Hadoop, by two times with the WordCount workload and 14 times with the TeraSort workload, when the default parameters are tuned with new values. Furthermore, the throughput and speedup results show that Spark is more stable and faster than Hadoop because of Spark's ability to process data in memory instead of storing it on disk for the map and reduce functions. We also found that Spark performance degrades when the input data is larger. As future work, we plan to add and investigate 15 HiBench workloads and to consider more parameters under resource utilization, parallelization, and other aspects, including practical data sets. The main focus will be to analyze job performance based on auto-tuning techniques for MapReduce and Spark, where several parameter configurations replace the default values.

**Acknowledgements**
The authors acknowledge Sibgat Bazai for his valuable suggestions.

**Authors' contributions**
NA was the main contributor to this work. He did the initial literature review, data collection, and experiments, prepared the results, and drafted the manuscript. ALCB and TS deployed and configured the physical Hadoop cluster. ALCB also worked closely with NA on reviewing, analysis, and manuscript preparation. TS and MAR helped to improve the final paper. All authors read and approved the final manuscript.

**Funding**
This work was not funded.

**Availability of data and materials**
The data that support the findings of this study are available from the corresponding author upon reasonable request.

**Ethics approval and consent to participate**
Not applicable.

**Consent for publication**
Not applicable.

**Competing interests**
The authors declare that they have no competing interests.

**Author details**
1 School of Natural and Computational Sciences, Massey University, Albany, Auckland 0745, New Zealand. 2 Department of Mechanical and Electrical Engineering, Massey University, Auckland 0745, New Zealand.

Received: 30 July 2020. Accepted: 26 November 2020.

**References**
1. Apache Hadoop Documentation 2014. http://hadoop.apache.org/. Accessed 15 July 2020.
2. Verma A, Mansuri AH, Jain N. Big data management processing with Hadoop MapReduce and Spark technology: a comparison. In: 2016 symposium on colossal data analysis and networking (CDAN). New York: IEEE; 2016. p. 1–4.
3. Management Association IR. Big Data: concepts, methodologies, tools, and applications. Hershey: IGI Global; 2016.
4. Zaharia M, Chowdhury M, Das T, Dave A, Ma J, McCauley M, Franklin M, Shenker S, Stoica I. Fast and interactive analytics over Hadoop data with Spark. USENIX Login. 2012;37:45–51.
5. Dean J, Ghemawat S. MapReduce: simplified data processing on large clusters. Commun ACM. 2008;51(1):107–13.
6. Wang G, Butt AR, Pandey P, Gupta K. Using realistic simulation for performance analysis of MapReduce setups. In: Proceedings of the 1st ACM workshop on large-scale system and application performance; 2009. p. 19–26.
7. Samadi Y, Zbakh M, Tadonki C. Comparative study between Hadoop and Spark based on HiBench benchmarks. In: 2016 2nd international conference on cloud computing technologies and applications (CloudTech). New York: IEEE; 2016. p. 267–75.
8. Ahmadvand H, Goudarzi M, Foroutan F. Gapprox: using Gallup approach for approximation in big data processing. J Big Data. 2019;6(1):20.
9. Samadi Y, Zbakh M, Tadonki C. Performance comparison between Hadoop and Spark frameworks using HiBench benchmarks. Concurr Comput Pract Exp. 2018;30(12):4367.
10. Shi J, Qiu Y, Minhas UF, Jiao L, Wang C, Reinwald B, Özcan F. Clash of the titans: MapReduce vs. Spark for large scale data analytics. Proc VLDB Endow. 2015;8(13):2110–211.
11. Veiga J, Expósito RR, Pardo XC, Taboada GL, Touriño J. Performance evaluation of big data frameworks for large-scale data analytics. In: 2016 IEEE international conference on Big Data (Big Data). New York: IEEE; 2016. p. 424–31.
12. Li M, Tan J, Wang Y, Zhang L, Salapura V. SparkBench: a comprehensive benchmarking suite for in memory data analytic platform Spark. In: Proceedings of the 12th ACM international conference on computing frontiers; 2015. p. 1–8.
13. Wang L, Zhan J, Luo C, Zhu Y, Yang Q, He Y, Gao W, Jia Z, Shi Y, Zhang S. BigDataBench: a big data benchmark suite from internet services. In: 2014 IEEE 20th international symposium on high performance computer architecture (HPCA). New York: IEEE; 2014. p. 488–99.
14. Thiruvathukal GK, Christensen C, Jin X, Tessier F, Vishwanath V. A benchmarking study to evaluate Apache Spark on large-scale supercomputers. 2019; arXiv preprint arXiv:1904.11812.
15. Marcu O-C, Costan A, Antoniu G, Pérez-Hernández MS. Spark versus Flink: understanding performance in big data analytics frameworks. In: 2016 IEEE international conference on cluster computing (CLUSTER). New York: IEEE; 2016. p. 433–42.
16. Bolze R, Cappello F, Caron E, Daydé M, Desprez F, Jeannot E, Jégou Y, Lanteri S, Leduc J, Melab N, et al. Grid'5000: a large scale and highly reconfigurable experimental grid testbed. Int J High Perform Comput Appl. 2006;20(4):481–94.
17. Mavridis I, Karatza E. Log file analysis in cloud with Apache Hadoop and Apache Spark. 2015.
18. Gopalani S, Arora R. Comparing Apache Spark and Map Reduce with performance analysis using k-means. Int J Comput Appl. 2015;113(1):8–11.
19. Gu L, Li H. Memory or time: performance evaluation for iterative operation on Hadoop and Spark. In: 2013 IEEE 10th international conference on high performance computing and communications & 2013 IEEE international conference on embedded and ubiquitous computing. New York: IEEE; 2013. p. 721–7.
20. Lin X, Wang P, Wu B. Log analysis in cloud computing environment with Hadoop and Spark. In: 2013 5th IEEE international conference on broadband network & multimedia technology. New York: IEEE; 2013. p. 273–6.
21. Petridis P, Gounaris A, Torres J. Spark parameter tuning via trial-and-error. In: INNS conference on big data. Berlin: Springer; 2016. p. 226–37.
22. Landset S, Khoshgoftaar TM, Richter AN, Hasanin T. A survey of open source tools for machine learning with big data in the Hadoop ecosystem. J Big Data. 2015;2(1):24.
23. HiBench Benchmark Suite. https://github.com/intel-hadoop/HiBench. Accessed 15 July 2020.
24. Shvachko K, Kuang H, Radia S, Chansler R. The Hadoop distributed file system. In: 2010 IEEE 26th symposium on mass storage systems and technologies (MSST). New York: IEEE; 2010. p. 1–10.
25. Luo M, Yokota H. Comparing Hadoop and fat-btree based access method for small file I/O applications. In: International conference on web-age information management. Berlin: Springer; 2010. p. 182–93.
26. Taylor RC. An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics. BMC Bioinform. 2010;11:1.
27. Vohra D. Practical Hadoop ecosystem: a definitive guide to Hadoop-related frameworks and tools. California: Apress; 2016.
28. Lee K-H, Lee Y-J, Choi H, Chung YD, Moon B. Parallel data processing with MapReduce: a survey. ACM SIGMOD Record. 2012;40(4):11–20.
29. Zaharia M, Chowdhury M, Franklin MJ, Shenker S, Stoica I. Spark: cluster computing with working sets. HotCloud. 2010;10:95.
30. Kannan P. Beyond Hadoop MapReduce: Apache Tez and Apache Spark. San Jose State University; 2015. http://www.sjsu.edu/people/robert.chun/courses/CS259Fall2013/s3/F.pdf. Accessed 15 July 2020.
31. Spark Core Programming. https://www.tutorialspoint.com/apache_spark/apache_spark_rdd.htm. Accessed 15 July 2020.
32. Huang S, Huang J, Dai J, Xie T, Huang B. The HiBench benchmark suite: characterization of the MapReduce-based data analysis. In: 2010 IEEE 26th international conference on data engineering workshops (ICDEW 2010). New York: IEEE; 2010. p. 41–51.
33. Chen C-O, Zhuo Y-Q, Yeh C-C, Lin C-M, Liao S-W. Machine learning-based configuration parameter tuning on Hadoop system. In: 2015 IEEE international congress on big data. New York: IEEE; 2015. p. 386–92.
34. Ambari. https://ambari.apache.org/. Accessed 15 July 2020.
35. Xiang L-H, Miao L, Zhang D-F, Chen F-P. Benefit of compression in Hadoop: a case study of improving IO performance on Hadoop. In: Proceedings of the 6th international Asia conference on industrial engineering and management innovation. Berlin: Springer; 2016. p. 879–90.
36. O'Malley O. Terabyte sort on Apache Hadoop. Report, Yahoo!; 2008. http://sortbenchmark.org/YahooHadoop.pdf. Accessed 15 July 2020.
37. Apache Tuning Spark 1.1.1. https://spark.apache.org/docs/1.1.1/tuning.html. Accessed 15 July 2020.
38. Rathore MM, Son H, Ahmad A, Paul A, Jeon G. Real-time big data stream processing using GPU with Spark over Hadoop ecosystem. Int J Parallel Progr. 2018;46(3):630–46.

**Publisher's Note**
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1186/s40537-020-00388-5?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1186/s40537-020-00388-5, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GREEN", "url": "https://journalofbigdata.springeropen.com/track/pdf/10.1186/s40537-020-00388-5" }
2,020
[ "JournalArticle" ]
true
2020-08-17T00:00:00
[ { "paperId": "74acb80f0b5aa4e0febb69d512bf49d69b1ddf23", "title": "A Benchmarking Study to Evaluate Apache Spark on Large-Scale Supercomputers" }, { "paperId": "19b4272193750424da6c88417ffb1478ae2498e0", "title": "Gapprox: using Gallup approach for approximation in Big Data processing" }, { "paperId": "ed0a3a7a4bc53d8801c4e9c0f96c3ed63960737c", "title": "Performance comparison between Hadoop and Spark frameworks using HiBench benchmarks" }, { "paperId": "c7755139b91c9b3100ace2a842b7fa646ecb86d6", "title": "Real-Time Big Data Stream Processing Using GPU with Spark Over Hadoop Ecosystem" }, { "paperId": "2f44ab0e52eb98fdfc0086638887f175b6f2fe4b", "title": "Benchmarking Distributed Stream Data Processing Systems" }, { "paperId": "07cbb256b63f571355ff486b7a08f258fabc9093", "title": "2016 Ieee International Conference on Big Data (big Data) Performance Evaluation of Big Data Frameworks for Large-scale Data Analytics" }, { "paperId": "cc95fa1da1adc858c0f77dc5053d767116bd1c6c", "title": "Spark Versus Flink: Understanding Performance in Big Data Analytics Frameworks" }, { "paperId": "6b56b7d482373b6fce288b95e51353f53f575c34", "title": "Spark Parameter Tuning via Trial-and-Error" }, { "paperId": "ec6e4fcf74b1af0c9f75fa6cf327c84d698cdcc4", "title": "Comparative study between Hadoop and Spark based on Hibench benchmarks" }, { "paperId": "bbc756a805a15c9d7b548f22eb989261f32b7429", "title": "Big data management processing with Hadoop MapReduce and spark technology: A comparison" }, { "paperId": "1c3b28248b9cf303ea9b6c1a24741ef6f1e100d1", "title": "Wormhole attack in mobile ad-hoc networks" }, { "paperId": "1c35bab85900cd6c09eaddc1b1a9541bbc0bbcc3", "title": "A survey of open source tools for machine learning with big data in the Hadoop ecosystem" }, { "paperId": "1a057fd874f7c1994618f1c7560c492d5f590cb1", "title": "Clash of the Titans: MapReduce vs. 
Spark for Large Scale Data Analytics" }, { "paperId": "3a8f359e6c6a76903d0d491bc2dbec821ada07ee", "title": "Machine Learning-Based Configuration Parameter Tuning on Hadoop System" }, { "paperId": "a776115d6567d38ed345c8c93fb23c7ff335cb1a", "title": "SparkBench: a comprehensive benchmarking suite for in memory data analytic platform Spark" }, { "paperId": "62e3eebdede5536807944a0e5e1fb7fd2458450a", "title": "Comparing Apache Spark and Map Reduce with Performance Analysis using K-Means" }, { "paperId": "e3642ab645a3fdcce97f854503ba67f4b503b9ea", "title": "BigDataBench: A big data benchmark suite from internet services" }, { "paperId": "c470e6d9bd8feed9486fdcb8011ffca7b0bcbdd9", "title": "Memory or Time: Performance Evaluation for Iterative Operation on Hadoop and Spark" }, { "paperId": "3f689c3e73a0cd74aac5dcb4a9d240f3697552e1", "title": "Log analysis in cloud computing environment with Hadoop and Spark" }, { "paperId": "283f1674a4e6a6a5e2dd11fafaadcd3dff2d17fb", "title": "Parallel data processing with MapReduce: a survey" }, { "paperId": "b84cc17984f6f13bfadd2b61a605dabf9d9bfa8b", "title": "An overview of the Hadoop/MapReduce/HBase framework and its current applications in bioinformatics" }, { "paperId": "742afd4548f1960d3956823f5497c2b363660b6b", "title": "Comparing Hadoop and Fat-Btree Based Access Method for Small File I/O Applications" }, { "paperId": "24281c886cd9339fe2fc5881faf5ed72b731a03e", "title": "Spark: Cluster Computing with Working Sets" }, { "paperId": "8ce4c0ee315d86f32ec7354ccdf8d8996e8ee270", "title": "The Hadoop Distributed File System" }, { "paperId": "72c2958810686a4f6efde55095b14c66720e782d", "title": "The HiBench benchmark suite: Characterization of the MapReduce-based data analysis" }, { "paperId": "3c66994ac5c16064132e3f241b0fec97092e6164", "title": "Using realistic simulation for performance analysis of mapreduce setups" }, { "paperId": "82cb49e8c26e79b1afcceb009f45dfcb283f67e9", "title": "Grid'5000: A Large Scale And Highly Reconfigurable Experimental Grid Testbed" }, { "paperId": null, "title": "HiBench Benchmark Suite" }, { "paperId": null, "title": "Apache Hadoop Documentation 2014" }, { "paperId": null, "title": "Spark Core Programming" }, { "paperId": "3048f6eac1f4ae4e98ec0a9725ead719024596a8", "title": "Practical Hadoop Ecosystem" }, { "paperId": "75956b3671b1d9ea4524c7af683a388f9deb2e45", "title": "Benefit of Compression in Hadoop: A Case Study of Improving IO Performance on Hadoop" }, { "paperId": null, "title": "Big Data: concepts, methodologies, tools, and applications" }, { "paperId": null, "title": "Log file analysis in cloud with apache hadoop and apache spark" }, { "paperId": "910cf35895c9e65273e0d0e137756f278fa60042", "title": "Beyond Hadoop MapReduce Apache Tez and Apache Spark" }, { "paperId": "27261b32bee50b199e5aa581b5a047575fecba2f", "title": "In International business" }, { "paperId": "195093fb1100c9861d409ff010a3e3899de1b4bf", "title": "Fast and Interactive Analytics over Hadoop Data with Spark" }, { "paperId": "627be67feb084f1266cfc36e5aed3c3e7e6ce5f0", "title": "MapReduce: simplified data processing on large clusters" }, { "paperId": "34b9635d7779e219e9d60e0d3d33919ca9bc123c", "title": "Publisher's Note" }, { "paperId": null, "title": "Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations" }, { "paperId": null, "title": "Terabyte sort on apache hadoop . Report , Yahoo ! ( 2008 )" } ]
11,567
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01bdf288e71aea8dfbec90d64bc41f982ce84d0f
[ "Computer Science" ]
0.856704
Scaling up Trustless DNN Inference with Zero-Knowledge Proofs
01bdf288e71aea8dfbec90d64bc41f982ce84d0f
arXiv.org
[ { "authorId": "35342489", "name": "Daniel Kang" }, { "authorId": "3056528", "name": "Tatsunori B. Hashimoto" }, { "authorId": "2295665819", "name": "Ion Stoica" }, { "authorId": "2116961690", "name": "Yi Sun" } ]
{ "alternate_issns": null, "alternate_names": [ "ArXiv" ], "alternate_urls": null, "id": "1901e811-ee72-4b20-8f7e-de08cd395a10", "issn": "2331-8422", "name": "arXiv.org", "type": null, "url": "https://arxiv.org" }
As ML models have increased in capabilities and accuracy, so has the complexity of their deployments. Increasingly, ML model consumers are turning to service providers to serve the ML models in the ML-as-a-service (MLaaS) paradigm. As MLaaS proliferates, a critical requirement emerges: how can model consumers verify that the correct predictions were served, in the face of malicious, lazy, or buggy service providers? In this work, we present the first practical ImageNet-scale method to verify ML model inference non-interactively, i.e., after the inference has been done. To do so, we leverage recent developments in ZK-SNARKs (zero-knowledge succinct non-interactive argument of knowledge), a form of zero-knowledge proofs. ZK-SNARKs allow us to verify ML model execution non-interactively and with only standard cryptographic hardness assumptions. In particular, we provide the first ZK-SNARK proof of valid inference for a full resolution ImageNet model, achieving 79% top-5 accuracy. We further use these ZK-SNARKs to design protocols to verify ML model execution in a variety of scenarios, including for verifying MLaaS predictions, verifying MLaaS model accuracy, and using ML models for trustless retrieval. Together, our results show that ZK-SNARKs have the promise to make verified ML model inference practical.
## Scaling up Trustless DNN Inference with Zero-Knowledge Proofs

Daniel Kang [1], Tatsunori Hashimoto [2], Ion Stoica [3], Yi Sun [4]

1 University of Illinois, Urbana-Champaign. 2 Stanford University. 3 University of California, Berkeley. 4 University of Chicago. Correspondence to: Daniel Kang <ddkang@illinois.edu>.

Proceedings of the 37th International Conference on Machine Learning, Online, PMLR 119, 2020. Copyright 2020 by the author(s).

### Abstract

As ML models have increased in capabilities and accuracy, so has the complexity of their deployments. Increasingly, ML model consumers are turning to service providers to serve the ML models in the ML-as-a-service (MLaaS) paradigm. As MLaaS proliferates, a critical requirement emerges: how can model consumers verify that the correct predictions were served, in the face of malicious, lazy, or buggy service providers? In this work, we present the first ImageNet-scale method to verify ML model execution non-interactively. To do so, we leverage recent developments in ZK-SNARKs (zero-knowledge succinct non-interactive arguments of knowledge), a form of zero-knowledge proofs. ZK-SNARKs allow us to verify ML model execution non-interactively and with only standard cryptographic hardness assumptions. In particular, we provide the first ZK-SNARK proof of valid inference for a full resolution ImageNet model, achieving 79% top-5 accuracy. We further use these ZK-SNARKs to design protocols to verify ML model execution in a variety of scenarios, including for verifying MLaaS predictions, verifying MLaaS model accuracy, and using ML models for trustless retrieval. Together, our results show that ZK-SNARKs have the promise to make verified ML model inference practical.

### 1. Introduction

ML models have been increasing in capability and accuracy. In tandem, the complexity of ML deployments has also been exploding. As a result, many consumers of ML models now outsource the training and inference of ML models to service providers, in a paradigm typically called "ML-as-a-service" (MLaaS). MLaaS providers are proliferating, from major cloud vendors (e.g., Amazon, Google, Microsoft, OpenAI) to startups (e.g., NLPCloud, BigML). A critical requirement emerges as MLaaS providers become more prevalent: how can the model consumer (MC) verify that the model provider (MP) has correctly served predictions? In particular, these MPs execute model inference in environments that are untrusted from the perspective of the MC. In the untrusted setting, these MPs may be lazy (i.e., serve random predictions), dishonest (i.e., serve malicious predictions), or inadvertently serve incorrect predictions (e.g., through bugs in serving code).

In this work, we propose using the cryptographic primitive of ZK-SNARKs (zero-knowledge succinct non-interactive arguments of knowledge) to address the problem of practically verifying ML model execution in untrusted settings. We present the first ZK-SNARK circuits that can verify inference for ImageNet-scale models, in contrast to prior work that is limited to toy datasets such as MNIST or CIFAR-10 (Feng et al., 2021; Weng et al., 2022; Lee et al., 2020; Liu et al., 2021). We are able to verify a proof of valid inference for MobileNet v2 achieving 79% accuracy while simultaneously being verifiable in 10 seconds on commodity hardware. Furthermore, our proving times improve by up to one to four orders of magnitude compared to prior work (Feng et al., 2021; Weng et al., 2022; Lee et al., 2020; Liu et al., 2021).
We further provide practical protocols leveraging these ZK-SNARKs to verify ML model accuracy, verify MP predictions, and use ML models for audits. These results demonstrate the feasibility of practical, verified ML model execution.

ZK-SNARKs are a cryptographic primitive in which a party can provide a certificate of the execution of a computation such that no information about the inputs or intermediate steps of the computation is revealed to other parties. ZK-SNARKs have a number of surprising properties (Section 3). Importantly for verified DNN execution, ZK-SNARKs allow portions of the input and intermediates to be kept hidden (while selectively revealing certain inputs) and are non-interactive. The non-interactivity allows third parties to trustlessly adjudicate disputes between MPs and MCs and to verify the computation without participating in the computation itself.

In the setting of verified DNN inference, the weights, the inputs, or neither can be made public while keeping the others hidden. The hidden portions can then be committed to by computing and revealing hashes of the inputs, weights, or both (respectively). In particular, an MP may be interested in keeping its proprietary weights hidden while being able to convince an MC of valid inference. The ZK-SNARK primitive allows the MP to commit to the (hidden) weights while proving execution.

To ZK-SNARK ImageNet-scale models, we leverage recent developments in ZK-SNARK proving systems (zcash, 2022). Our key insight is that off-the-shelf proving systems for generic computation are sufficient for verified ML model execution, with careful arithmetization (i.e., translation) from DNN specifications to ZK-SNARK arithmetic circuits. Our arithmetization uses two novel optimizations: lookup arguments for non-linearities and reuse of sub-circuits across layers (Section 4). Without our optimizations, the ZK-SNARK construction would require an impractically large amount of hardware resources.

Given the ability to ZK-SNARK ML models while committing to and selectively revealing chosen portions of their inputs, we propose methods of verifying MLaaS model accuracy, verifying MLaaS model predictions, and trustless retrieval of documents in the face of malicious adversaries. Our protocols combine ZK-SNARK proofs and economic incentives to create trustless systems for these tasks. We further provide cost estimates for executing these protocols.

In summary, our contributions are:
1. The first ImageNet-scale ZK-SNARK circuit that can be proved and verified on commodity hardware (Section 6).
2. Novel arithmetization optimizations for DNN inference in the form of lookup arguments for non-linearities and sub-circuit reuse to enable ImageNet-scale ZK-SNARKs (Section 4).
3. Protocols and proofs of concept for leveraging these ZK-SNARKs in methods for auditing via ML models, verifying ML model accuracy, and serving ML model predictions in the face of adversaries (Section 5).

### 2. Related Work

Secure ML. Recent work has proposed secure ML as a paradigm for executing ML models (Ghodsi et al., 2017b; Mohassel & Zhang, 2017; Knott et al., 2021). There is a wide range of security models, including verifying execution of a known model on untrusted clouds (Ghodsi et al., 2017b), input privacy-preserving inference (Knott et al., 2021), and weight privacy-preserving inference.
The most common methods of doing secure ML are multi-party computation (MPC), homomorphic encryption (HE), and interactive proofs (IPs). As we describe, these methods are either impractical, do not work in the face of malicious adversaries (Knott et al., 2021; Kumar et al., 2020; Lam et al., 2022; Mishra et al., 2020), or do not hide the weights/inputs (Ghodsi et al., 2017b). In this work, we propose practical methods of doing verified ML execution in the face of malicious adversaries.

MPC. One of the most common methods of doing secure ML is with MPC, in which the computation is shared across multiple parties (Knott et al., 2021; Kumar et al., 2020; Lam et al., 2022; Mishra et al., 2020; Jha et al., 2021). There are a variety of MPC protocols with different guarantees. However, all MPC protocols share certain properties: they require interaction (i.e., both parties must be simultaneously online) but can perform computation without revealing the computation inputs (i.e., weights and ML model inputs) across parties. There are several security assumptions for different MPC protocols. The most common security assumption is the semi-honest adversary, in which the malicious party participates in the protocol honestly but attempts to steal information. In this work, we focus on potentially malicious adversaries, who can choose to deviate from the protocol. Unfortunately, MPC that is secure against malicious adversaries is impractical: it can cost up to 550 GB of communication and 657 seconds of compute per example on toy datasets (Pentyala et al., 2021). In this work, we provide a practical, alternative method of verifying ML model inference in the face of malicious adversaries. Furthermore, our methods do not require per-example communication.

HE. Homomorphic encryption allows parties to perform computations on encrypted data without first decrypting the data (Armknecht et al., 2015). HE is deployed to preserve the privacy of the inputs, but it cannot be used to verify that ML model execution happened correctly. Furthermore, HE is extremely expensive. Since ML model inference can take gigaflops of computation, HE for ML model inference is currently impractical, only working on toy datasets such as MNIST or CIFAR-10 (Lou & Jiang, 2021; Juvekar et al., 2018).

ZK-SNARKs for secure ML. Some recent work has produced ZK-SNARK protocols for neural network inference on smaller datasets like MNIST and CIFAR-10. Some of these works, like Feng et al. (2021), use older proving systems like Groth16 (Groth, 2016). Other works (Ghodsi et al., 2017a; Lee et al., 2020; Liu et al., 2021; Weng et al., 2022) use interactive proof or ZK-SNARK protocols based on sum-check (Thaler, 2013) custom-tailored to DNN operations such as convolutions or matrix multiplications. Compared to these works, our work in the modern Halo2 proving system (zcash, 2022) allows us to use the Plonkish arithmetization to more efficiently represent DNN inference by leveraging lookup arguments and well-defined custom gates. Combined with the efficient software package halo2 and advances in automatic translation, we are able to outperform these methods.

### 3. ZK-SNARKs

Overview. Consider the task of verifying a function evaluation y = f(x; w) with public inputs x, private inputs w, and output y. For example, in the setting of public input and hidden model, x may be an image, w may be the weights of a DNN, and y may be the result of executing the DNN with weights w on x.
A ZK-SNARK (Bitansky et al., 2017) is a cryptographic protocol allowing a Prover to generate a proof π so that, with knowledge of π, y, and x alone, a Verifier can check that the Prover knows some w so that y = f(x; w). ZK-SNARK protocols satisfy several non-intuitive properties, summarized informally below:

1. Succinctness: The proof size is sub-linear (typically constant or logarithmic) in the size of the computation (i.e., the complexity of f).
2. Non-interactivity: Proof generation does not require interaction between the verifier and prover.
3. Knowledge soundness: A computationally bounded prover cannot generate proofs for incorrect executions.
4. Completeness: Proofs of correct execution verify successfully.
5. Zero-knowledge: The proof reveals no information about the private inputs beyond what is contained in the output and public inputs.

Most ZK-SNARK protocols proceed in two steps. In the first step, called arithmetization, they produce a system of polynomial equations over a large prime field (an arithmetic circuit) so that finding a solution is equivalent to computing f(x; w). Namely, for (f, y, x, w), the circuit constraints are met if and only if y = f(x; w). In the second step, a cryptographic proof system, often called a backend, is used to generate a ZK-SNARK proof.

This work uses the Halo2 ZK-SNARK protocol (zcash, 2022) implemented in the halo2 software package. In contrast to ZK-SNARK schemes custom designed for neural networks in prior work (Liu et al., 2021; Lee et al., 2020), Halo2 is designed for general-purpose computation, and halo2 has a broader developer ecosystem. This means we inherit the security, efficiency, and usability of the resulting developer tooling. In the remainder of this section, we describe the arithmetization and other properties of Halo2.

Plonkish arithmetization. Halo2 uses the Plonkish arithmetization (zcash, 2022), which allows polynomial constraints with certain restricted forms of randomness. It is a special case of a randomized AIR with preprocessing (Ben-Sasson et al., 2018; Gabizon, 2021), which unifies recent proof systems including PlonK, plookup, and PlonKup (Gabizon et al., 2019; Gabizon & Williamson, 2020; Pearson et al., 2022). Variables in the arithmetic circuit are arranged in a rectangular grid with cells valued in a 254-bit prime field. The Plonkish arithmetization allows three types of constraints, with which any computation may be expressed (although the size of the arithmetized circuit depends heavily on the nature of the computation):

Custom gates are polynomial expressions over cells in a single row which must vanish on all rows of the grid. As a simple example, consider a grid with columns labeled a, b, c, with a_i, b_i, c_i being the cells in row i. The custom multiplication gate a_i · b_i − c_i = 0 enforces that c_i = a_i · b_i for all rows i. In nearly all circuits, it is beneficial to have custom gates apply only to specific rows. To do this, we can add an extra column q (per custom gate), where each cell in q takes the value 0 or 1. Then, we can modify the custom gate to be q_i · (a_i · b_i − c_i) = 0, which applies the custom multiplication gate only to rows for which q_i ≠ 0. Column q is called a selector.

Permutation arguments allow us to constrain pairs of cells in the grid to have equal values. They are used to copy values from one cell to another. They are implemented via randomized polynomial constraints for multiset equality checks.

Lookup arguments allow us to constrain a k-tuple of cells (d_i^1, ..., d_i^k) in the same row i to agree with some row of a separate set of k columns in the grid. This constrains (d_i^1, ..., d_i^k) to lie in the lookup table defined by those k other columns.
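To make the gate-plus-selector mechanics concrete, here is a minimal, self-contained sketch. It is not halo2 code: the stand-in field size, column names, and grid values are our own illustrative choices. It checks the selector-gated multiplication gate q_i · (a_i · b_i − c_i) = 0 over a toy grid:

```python
# Stand-in prime field; Halo2 uses a 254-bit field. The grid and
# column names here are illustrative, not halo2's API.
P = 2**61 - 1

# Grid rows with advice columns a, b, c and a selector column q.
rows = [
    {"a": 3, "b": 5, "c": 15, "q": 1},  # gate active: c = a * b must hold
    {"a": 7, "b": 2, "c": 0,  "q": 0},  # gate switched off by the selector
]

def gate_holds(row: dict) -> bool:
    # Custom gate with selector: q_i * (a_i * b_i - c_i) = 0 on every row.
    return row["q"] * (row["a"] * row["b"] - row["c"]) % P == 0

assert all(gate_holds(r) for r in rows)
```

Note how the constraint is still evaluated on every row: the selector, not the verifier, decides where the gate has effect.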
We use lookup arguments in the arithmetization in two ways. First, we implement range checks on a cell c by constraining it to take values in a fixed range {0, ..., N − 1}. Second, we implement non-linearities by looking up a pair of cells (a, b) in a table defined by exhaustive evaluation of the non-linearity. Lookup arguments are also implemented by randomized polynomial constraints.

Prior work on SNARK-ing neural networks using proof systems intended for generic computations started with the more limited R1CS arithmetization (Gennaro et al., 2013) and the Groth16 proof system (Groth, 2016), in which neural network inference is less efficient to express. In Section 4, we describe how to use this more expressive Plonkish arithmetization to efficiently express DNN inference.

Measuring performance for Halo2. Halo2 is an instance of a polynomial interactive oracle proof (IOP) (Ben-Sasson et al., 2016) made non-interactive via the Fiat-Shamir heuristic. In a polynomial IOP, the ZK-SNARK is constructed from column polynomials which interpolate the values in each column. In Halo2, these polynomials are fed into the inner product argument introduced in (Bowe et al., 2019) to generate the final ZK-SNARK. Several different aspects of performance matter when evaluating a ZK-SNARK proof for a computation. First, we wish to minimize the proving time for the Prover and the verification time for the Verifier. Second, on both sides, we wish to minimize the proof size. Although a precise cost model is complex in Halo2, all of these measures generally increase with the number of rows, columns, custom gates, permutation arguments, and lookup arguments.

### 4. Constructing ZK-SNARKs for ImageNet-Scale Models

We now describe our main contribution, the implementation of a ZK-SNARK proof for MobileNet v2 inference (Sandler et al., 2018) in halo2. This requires arithmetizing the building-block operations of standard convolutional neural networks (CNNs) in the Plonkish arithmetization.

4.1. Arithmetization

Standard CNNs are composed of six distinct operations: convolutions, batch normalization, ReLUs, residual connections, fully connected layers, and softmax. We fuse the batch normalization into the convolutions and return the logits to avoid executing softmax. We now describe our ingredients for constraining the remaining four operations.

Quantization and fixed-point. Neural network inference is typically done in floating-point arithmetic, which is extremely expensive to emulate in the prime field of arithmetic circuits. To avoid this overhead, we focus on DNNs quantized in int8 and uint8. For these DNNs, weights and activations are represented as 8-bit integers, though intermediate computations may involve up to 32-bit integers.

In these quantized DNNs, each weight, activation, and output is stored as a tuple (w_quant, z, s), where w_quant and z are the 8-bit integer weight and zero point, and s is a floating-point scale factor. z and s are often shared for all weights in a layer, which reduces the number of bits necessary to represent the DNN. In this representation, the weight w_quant represents the real-number weight w = (w_quant − z) · s.

To more efficiently arithmetize the network, we replace the floating-point s by a fixed-point approximation a/b for a, b ∈ ℕ and compute w via w = ((w_quant − z) · a)/b, where the intermediate arithmetic is done in standard 32-bit integer arithmetic. Our choice of lower-precision values of a and b results in a slight accuracy drop but dramatic improvements in prover and verifier performance.
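As a concrete illustration of the (w_quant, z, s) representation and the fixed-point substitution s ≈ a/b just described, the following sketch compares the floating-point and integer-only dequantization. The constants are made up for illustration; they are not values from the paper:

```python
def dequantize_float(w_quant: int, z: int, s: float) -> float:
    # Real-valued weight encoded by the tuple (w_quant, z, s).
    return (w_quant - z) * s

def dequantize_fixed(w_quant: int, z: int, a: int, b: int) -> int:
    # Same weight with s replaced by the fixed-point ratio a/b, keeping
    # all arithmetic in integers as needed inside a prime field.
    return ((w_quant - z) * a) // b

s = 0.023          # illustrative scale factor
a, b = 3, 128      # a/b ~ 0.0234, a deliberately low-precision stand-in
print(dequantize_float(200, 128, s))     # 1.656 (float result)
print(dequantize_fixed(200, 128, a, b))  # 1 (integer fixed-point result)
```

The small discrepancy between the two outputs is exactly the accuracy-for-performance trade mentioned above.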
As an example of fixed-point arithmetic after this conversion, consider adding y = x1 + x2 with zero points and scale factors z_y, z_1, z_2 and s_y, s_1, s_2, respectively. The floating-point computation

(y − z_y) · s_y = (x1 − z_1) · s_1 + (x2 − z_2) · s_2

is replaced by the fixed-point computation

y ≈ (x1 − z_1) · (a_1 · b_y)/(b_1 · a_y) + (x2 − z_2) · (a_2 · b_y)/(b_2 · a_y) + z_y.

The addition and multiplication can be done natively in the finite field, but the division cannot. To address this, we factor the computation of each layer into dot products and create a custom gate to verify division. We further fuse the division and non-linearity gates for efficiency. We describe this process below.

Custom gates for linear layers. MobileNets contain three kinds of linear layers (layers with only linear operations): convolutions, residual connections, and fully connected layers. For these linear layers, we perform the computation per activation. To avoid expensive floating-point scaling by the scale factor and the non-linearities, we combine these operations into a single sub-circuit. To reduce the number of custom gates, we use only two custom gates for all convolutions, residual connections, and fully connected layers.

The first custom gate constrains the addition of a fixed number of inputs x_i^j in row i via

c_i = Σ_{j=1}^{N} x_i^j.

The second custom gate constrains a dot product of fixed size with zero point. For constant zero point z, inputs x_i^j, weights w_i^j, and output c_i in row i, the gate implements the polynomial constraint

c_i = Σ_{j=1}^{N} (x_i^j − z) · w_i^j

for a fixed N. To implement dot products of length k < N, we constrain w_{k+1}, ..., w_N = 0. For dot products of length k > N, we use copy constraints and the addition gate. While the addition gate can be represented using the dot product gate, we use two gates for efficiency purposes. Namely, the custom addition gate can perform an N-element addition using half as many grid cells as the dot product gate.

Lookup arguments for non-linearities. Consider the result of an unscaled, flattened convolution in row i:

c_i = Σ_j x_i^j · w_i^j,

where j indexes over the image height, width, and channels. Performing scale-factor division and (clipped) ReLU to obtain the final activation requires computing

a_i = ClipAndScale(c_i, a; b) := clip(⌊(c_i · a)/b⌋, 0, 255).

To constrain this efficiently, we apply a lookup argument and use the same value of b across layers. To do so, we first perform the division by b using a custom gate. Since b is fixed, we can use the same custom gate and lookup argument. Let d_i = (c_i · a)/b. We then precompute the possible values of the input/output pairs (d_i, a_i) to form a lookup table T = {(c, ClipAndScale(c)) | c ∈ {0, ..., N}}, where N is chosen to cover the domain, namely the possible values of c. We then use a lookup argument to enforce the constraint Lookup[(d_i, a_i) ∈ T]. We emphasize that naively using lookup arguments would result in a different lookup argument per layer, since the scale factors differ. Using different lookup arguments would add high overhead, which our approach avoids.
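The lookup table T is small enough to build exhaustively. The sketch below, with made-up values of a, b, and N, constructs T and mimics outside a circuit the membership check that the lookup argument enforces inside one:

```python
def clip_and_scale(c: int, a: int, b: int) -> int:
    # clip(floor(c * a / b), 0, 255): the fused scaling + clipped ReLU.
    return min(max((c * a) // b, 0), 255)

a, b = 3, 128      # illustrative fixed-point scale
N = 4096           # chosen to cover the possible values of c
T = {(c, clip_and_scale(c, a, b)) for c in range(N + 1)}

# Inside the circuit, a lookup argument enforces (d_i, a_i) in T;
# outside a SNARK we can mimic that membership check directly:
c_i = 1000
assert (c_i, clip_and_scale(c_i, a, b)) in T
```

Sharing one value of b across layers is what lets a single table T be reused, which is the overhead saving described above.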
Automated translation from TensorFlow Lite. We created a translation layer to compile TensorFlow Lite models into circuits in the halo2 software package. The translation layer automatically unrolls the inference computation into an arithmetic circuit in the Plonkish arithmetization using the custom gates and lookup arguments described above. Our translation layer implements two optimizations. First, to minimize the number of columns and the number of custom gates, our translation layer avoids creating new custom gates until there are no more available rows in existing ones. Second, we reduce the number of lookup arguments by sharing lookup tables between layers when the scale factors are the same. This is particularly useful for the residual layers, where the scaling factor can be normalized to be shared across layers.

4.2. Committing to weights or inputs

As described in Section 3, ZK-SNARKs allow parts of the inputs to be made public, in addition to revealing the outputs of the computation. For ML models, the input (e.g., an image), the weights, or both can be made public. Then, to commit to the hidden inputs, the hash can be computed within the ZK-SNARK and be made public. Concretely, we use the following primitives:

1. Hidden input, public weights: the input is hidden and the weights are public. The input hash is computed and made public.
2. Public input, hidden weights: the input is public and the weights are hidden. The weight hash is computed and made public.
3. Hidden input, hidden weights: the inputs and weights are hidden. The hashes of both are computed and made public.

To compute the hashes, we use an existing circuit for the SNARK-friendly Poseidon hash (Grassi et al., 2019). The hash of the inputs, weights, or both can be SNARK-ed as described.
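A rough out-of-circuit analogue of the commitment step looks as follows. We use SHA-256 as a stand-in hash for brevity (inside the circuit the paper uses the SNARK-friendly Poseidon hash), and the JSON encoding of the committed values is our own arbitrary choice:

```python
import hashlib
import json

def commit(values: list) -> str:
    # Hash commitment to a list of integers; JSON gives a deterministic
    # byte encoding for this illustration.
    return hashlib.sha256(json.dumps(values).encode()).hexdigest()

weights = [12, -3, 7, 42]      # kept hidden from the model consumer
weight_hash = commit(weights)  # published alongside each proof

# Anyone later holding the claimed weights can re-check the commitment:
assert commit([12, -3, 7, 42]) == weight_hash
```

The point of computing the hash inside the ZK-SNARK is that the proof then binds the execution to exactly the committed weights, without revealing them.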
### 5. Applications of Verified ML Model Inference

Building upon our efficient ZK-SNARK constructions, we now show that it is possible to verify ML model accuracy, verify ML model predictions for serving, and trustlessly retrieve documents matching a predicate based on an ML model.

5.1. Protocol Properties and Security Model

Protocol properties. In this section, we describe and study the properties of protocols leveraging verified ML inference. Each protocol has a different set of requirements, which we denote A. The requirements A may be probabilistic (e.g., the model has accuracy 80% with 95% probability). We are interested in the validity and viability of our protocols. Validity refers to the property that if the protocol completes, A holds. Viability refers to the property that rational agents will participate in the protocol.

Security model. In this work, we use the standard ZK-SNARK security model for the ZK-SNARKs (Bünz et al., 2020). Informally, the standard security model states that the prover and verifier only interact via the ZK-SNARKs and that the adversary is computationally bounded, which excludes the possibility of side channels. Our security model allows for malicious adversaries, in contrast to the semi-honest adversary setting. Recall that in the semi-honest adversary setting, the adversaries honestly follow the protocol but attempt to compromise privacy, which is common in the MPC setting.

Assumptions. For validity, we only assume two standard cryptographic assumptions. First, that it is hard to compute the order of random group elements (Bünz et al., 2020), which is implied by the RSA assumption (Rivest et al., 1978). Second, that finding hash collisions is difficult (Rogaway & Shrimpton, 2004). Only requiring cryptographic hardness assumptions is sometimes referred to as unconditional (Ghodsi et al., 2017a). For viability, we assume the existence of a programmatic escrow service and that all parties are economically rational. In the remainder of this section, we further assume the "no-griefing condition," which states that no party will purposefully lose money to hurt another party, and the "no-timeout condition," which states that no parties will time out. Both of these conditions can be relaxed; we describe how to relax them in the Appendix.

5.2. Verifying ML model accuracy

In this setting, a model consumer (MC) is interested in verifying a model provider's (MP) model accuracy, and MP desires to keep the weights hidden. As an example use case, MC may be interested in verifying the model accuracy to purchase the model or to use MP as an ML-as-a-service provider (i.e., to purchase predictions in the future). Since the weights are proprietary, MP desires to keep the weights hidden. The MC is interested in verifiable accuracy guarantees, to ensure that the MP is not lazy, malicious, or serving incorrect predictions.

Denote the cost of obtaining a test input and label to be E, the cost of ZK-SNARKing a single input to be Z, and P to be the cost of performing inference on a single data point. We enforce that E > Z > P. Furthermore, let N = N1 + N2 be the number of examples used in the verification protocol. These parameters are marketplace-wide and are related to the security of the protocol. The protocol requires that MP stakes 1000·N1·E per model to participate. The stake is used to prevent Sybil attacks, in which a single party fakes the identity of many MPs. Given the stake, the verification protocol is as follows for some accuracy target a:

1. MP commits to an architecture and a set of weights (by providing the ZK-SNARK keys and the weight hash, respectively). MC commits to a test set {(x1, y1), ..., (xN, yN)} by publishing the hash of the examples.
2. MP and MC each escrow 2NE + ε, where ε goes to the escrow service.
3. MC sends the test set to MP. MP can continue or abort at this point. If MP aborts, MC loses NP of the escrow.
4. MP sends ZK-SNARKs and the outputs of the model on the test set to MC.
5. If the accuracy target a is met, MC pays 2NZ. Otherwise, MP loses the full amount 2NE to MC.

The verification protocol is valid because MP must produce the outputs of the ML model as enforced by the ZK-SNARKs. MC can compute the accuracy given the outputs. Thus, if the protocol completes, the accuracy target is met. If the economic value of the transaction exceeds 1000·N1·E, the protocol is viable, since the MP will economically benefit by serving or selling the model. This follows as we have chosen the stake parameters so that malicious aborting will cost the MC or MP more in expectation than completing the protocol. We formalize our analysis and give a more detailed analysis in the Appendix.
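To get a feel for the magnitudes involved, the following back-of-the-envelope sketch instantiates the protocol's parameters with made-up per-example costs; the dollar figures are illustrative, and the paper only requires the ordering E > Z > P:

```python
# Illustrative per-example costs in dollars (assumed, not from the paper).
E, Z, P = 1.00, 0.10, 0.01
assert E > Z > P
N1, N2 = 200, 800
N = N1 + N2

stake = 1000 * N1 * E      # MP's per-model stake (Sybil resistance)
escrow = 2 * N * E         # escrowed by each party, plus a small fee to escrow
honest_payout = 2 * N * Z  # what MC pays when the accuracy target is met

print(f"stake=${stake:,.2f} escrow=${escrow:,.2f} payout=${honest_payout:,.2f}")
# stake=$200,000.00 escrow=$2,000.00 payout=$200.00
```

Under these assumed costs, the stake dwarfs any single transaction, which is what makes repeated malicious aborting unprofitable.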
5.3. Verifying ML Model Predictions

In this setting, we assume that MC has verified model accuracy and is interested in purchasing predictions in the ML-as-a-service setting. As we show, MC need not request a ZK-SNARK for every prediction to bound malicious MP behavior.

The serving verification procedure proceeds in rounds of size K (i.e., predictions are served over K inputs). MC is allowed to contest at any point during the round, but not after the round has concluded. Furthermore, let K ≥ K1 > 0. The verification procedure is as follows:

1. MC escrows 2KZ and MP escrows βKZ, where β ≥ 2 is decided between MP and MC.
2. MC provides the hashes for the K inputs (x_i) to the escrow and sends the inputs to MP. MP verifies the hashes.
3. MP provides the predictions (y_i) for the inputs (without ZK-SNARKs) to MC. MC provides the hash of Concat(x_i, y_i) to the escrow.
4. If MC believes MP is dishonest, MC can contest on any subset K1 of the predictions.
5. When contested, MP will provide the ZK-SNARKs for the K1 predictions. If MP fails to provide the ZK-SNARKs, it loses the full βKZ.
6. If the ZK-SNARKs match the hashes, then MC loses 2K1Z from the escrow and the remainder of the funds are returned. Otherwise, MP loses the full βKZ to MC.

For validity, if MP is honest, MC cannot contest successfully and the input and weight hashes are provided. Similarly, if MC is honest and contests an invalid prediction, MP will be unable to produce the ZK-SNARK.

For viability, first consider an honest MP. The honest MP is indifferent to the escrow, as it receives the funds back at the end of the round. Furthermore, all contests by MC will be unsuccessful, and MP gains K1Z per unsuccessful contest. For an honest MC to participate, they must either have a method of detecting invalid predictions with probability p or randomly contest a p fraction of the predictions. Note that for random contests, p depends on the negative utility to MC of receiving an invalid prediction. As long as βKZ is large relative to KZ/p, MC will participate.

5.4. Trustless Retrieval of Items Matching a Predicate

In this setting, a requester is interested in retrieving records that match the output of an ML model (i.e., a predicate) from a responder. These situations often occur during legal subpoenas, in which a judge requires the responder to send a set of documents matching the predicate. For example, the requester may be a journalist requesting documents under the Freedom of Information Act or a plaintiff requesting documents for legal discovery. This protocol could also be useful in other settings where the responder wishes to prove that a dataset does not contain copyrighted content. When a judge approves this request, the responder must divulge documents or images that match the request. We show that ZK-SNARKs allow requests encoded as ML algorithms to be trustlessly verified. The protocol proceeds as follows:

1. The responder commits to the dataset by producing hashes of the documents.
2. The requester sends the model to the responder.
3. The responder produces ZK-SNARKs of the model on the documents, with the inputs hashed. The responder sends the requester the documents that match the positive class of the model.

The audit protocol guarantees the following: the responder will return the documents from Stage 1 that match the model's positive class. The validity follows from the difficulty of finding hash collisions and the security of ZK-SNARKs. The responder may hash invalid documents (e.g., random or unrelated images), which the protocol makes no guarantees over. This can be mitigated based on whether the documents come from a trusted or untrusted source. For documents from a trusted source, the hashes can be verified from a signature from the trusted source. As an example, hashes for government-produced documents (in the FOIA setting) may be produced at the time of document creation. For documents from an untrusted source (e.g., the legal discovery setting), we require a commitment for the entire corpus. Given the commitment, the judge can allow the requester to randomly sample a small number (N) of the documents to verify the hashes. In this case, the requester can verify that the responder tampered with at most a fraction p = exp(−δ/N) for some confidence level δ.
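The power of random contests and random hash checks can be estimated with a standard sampling bound. The sketch below computes how many uniform samples guarantee, at confidence 1 − δ, that at most a fraction p of items were tampered with if every checked sample passes; the exact Clopper-Pearson interval used for Table 3 in Section 6 yields somewhat larger sample sizes:

```python
import math

def samples_needed(p: float, delta: float) -> int:
    # Smallest N with (1 - p)^N <= delta: if N uniform random checks all
    # pass, the tampered fraction is below p with confidence 1 - delta.
    return math.ceil(math.log(delta) / math.log(1.0 - p))

for p in (0.05, 0.025, 0.01):
    print(p, samples_needed(p, delta=0.05))
# 0.05 -> 59, 0.025 -> 119, 0.01 -> 299; the exact Clopper-Pearson
# interval behind Table 3 gives 72, 183, and 366 instead.
```

Crucially, the sample size depends only on p and δ, not on the corpus or batch size, which is what keeps verification cheap at scale.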
### 6. Evaluation

To evaluate our ZK-SNARK system, we ZK-SNARKed MobileNets with varying configurations. We evaluated the hidden-model and hidden-input setting, which is the most difficult setting for ZK-SNARKs. We measured four metrics: model accuracy, setup time, proving time, and verification time. The setup is done once per MobileNet architecture and is independent of the weights. The proving is done by the model provider, and the verification is done by the model consumer. Proving and verification must be done once per input.

To the best of our knowledge, no prior work can ZK-SNARK DNNs at ImageNet scale. As mentioned, we ZK-SNARK quantized DNNs, which avoids floating-point computations. We use the model provided by TensorFlow Slim (Silberman & Guadarrama, 2018). MobileNet v2 has two adjustable parameters: the "expansion size" and the input dimension. We vary these parameters to see the effect on the ZK-SNARKing time and accuracy of the models.

6.1. ZK-SNARKs for ImageNet-scale models

We first present results when creating ZK-SNARKs for only the DNN execution, which is what all prior work on ZK-SNARKs for DNNs does. Namely, we do not commit to the model weights in this section. We use the AWS r6i.32xlarge instance type for all experiments in this section.

| Model | Accuracy (top-5) | Setup time | Proving time | Verification time | Proof size (bytes) |
|---|---|---|---|---|---|
| MobileNet, 0.35, 96 | 59.1% | 93.9s | 163.2s | 0.74s | 6528 |
| MobileNet, 0.5, 224 | 75.7% | 937.7s | 1530.7s | 6.32s | 7552 |
| MobileNet, 0.75, 192 | 79.2% | 1341.2s | 2457.5s | 10.27s | 5952 |

Table 1. Accuracy, setup time, proving time, and verification time of various MobileNet v2 configurations. The first parameter is the "expansion size" parameter for the MobileNet and the second parameter is the image resolution. As shown, it is now possible to SNARK ImageNet models, which no prior work can achieve.

| Method | Proving time lower bound (s) |
|---|---|
| Zen | 20,000 |
| vCNN | 172,800 |
| pvCNN | 31,011* |
| zkCNN | 1,597* |

Table 2. Lower bounds on the proving time for prior work. These lower bounds were obtained by finding a DNN with strictly fewer operations compared to MobileNet v2 (0.35, 96) in the papers reporting Zen and vCNN. For pvCNN and zkCNN, we estimate the lower bound by scaling the computation.

We summarize results for various MobileNet v2 configurations in Table 1. As shown, we can achieve up to 79% accuracy on ImageNet, while simultaneously taking as little as 10s and 5952 bytes to verify. Furthermore, the ZK-SNARKs can be scaled down to take as little as 0.7s to verify at 59% accuracy. These results show the feasibility of ZK-SNARKing ImageNet-scale models.

In contrast, we show the lower bounds on the time for prior work to ZK-SNARK a model comparable to MobileNet v2 (0.35, 96). We were unable to reproduce any of the prior work, but we use the proving numbers presented in the papers.
For Zen and vCNN, we use the largest model in the respective papers as lower bounds (MNIST or CIFAR-10 models). For zkCNN and pvCNN, we estimate the proving time by scaling the largest model in the paper. As shown in Table 2, the proving time for the prior work is at least 10× higher than our method and up to 1,000× higher. We emphasize that these are lower bounds on the proving time for prior work. Finally, we note that the proof sizes of our ZK-SNARKs are orders of magnitude smaller than those of MPC methods, which can take tens to hundreds of gigabytes.

6.2. Protocol Evaluation

We present results when instantiating the protocols described in Section 5. To do so, we ZK-SNARK MobileNet v2 (0.35, 96) while committing to the weights, which no prior work does. For the DNNs we consider, the cost of committing to the weights via hashes is approximately the cost of the inference itself. This phenomenon of hashing cost being proportional to the computation cost also holds for other ZK-SNARK applications (Privacy and Scaling Explorations, 2022).

| Fraction | Sample size | Cost |
|---|---|---|
| 5% | 72 | $11.99 |
| 2.5% | 183 | $30.48 |
| 1% | 366 | $60.96 |

Table 3. Costs of performing verified prediction and trustless retrieval while bounding the fraction of predictions tampered with. Costs were estimated with the MobileNet v2 (0.35, 96) model.

For each protocol, we compute the cost using public cloud hardware for the prover and verifier for a variety of protocol parameters. We use a cost-optimized instance for these experiments (AWS r8i.8xlarge). A full deployment of ZK-SNARKs would require analyzing the assorted infrastructure costs associated with the deployment, which is outside the scope of this work.

Verifying prediction and trustless retrieval. For both verifying MP predictions and trustless retrieval, the MC (requester) can bound the probability that the MP (responder) returns incorrect results by sampling at random. In both cases, if a single incorrect example is found, the MC (requester) has recourse. In the verified predictions setting, MC will financially gain, and in the retrieval setting, the requester can ask the judge to make the responder turn over all documents. As such, the MC can choose a confidence level δ and a bound p on the fraction of tampered predictions. The MC can then choose a random sample of size N as determined by inverting a valid binomial proportion confidence interval. Namely, N is independent of the size of the batch.

We compute the number of samples required and the cost of the ZK-SNARKs (both the proving and the verifying) at various p at δ = 5%, with results in Table 3. We use the Clopper-Pearson exact interval (Clopper & Pearson, 1934) to compute the sample size.

To contextualize these results, consider the Google Cloud Vision API. Google Cloud Vision charges $1.50 per 1,000 images. Predictions over one million images would cost $1,500. If we could scale ZK-SNARKs to verify the Google API model with cost on par with MobileNet v2 (0.35, 96), verifying these predictions would add 4% overhead, which is acceptable in many circumstances.
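The 4% overhead figure can be checked directly from the numbers quoted above:

```python
# Numbers quoted in the text: Google Cloud Vision pricing and the
# Table 3 cost of bounding tampering at 1%.
images = 1_000_000
api_cost = 1.50 * images / 1000   # $1.50 per 1,000 images -> $1,500
zk_cost = 60.96                   # ZK-SNARK sampling cost at p = 1% (Table 3)
print(f"{zk_cost / api_cost:.1%} overhead")  # 4.1%
```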
Verifying model accuracy. For verifying MP model accuracy, the MC is interested in bounding the probability that the accuracy target a is not met: P(a′ < a) ≤ δ for the estimated accuracy a′ and some confidence level δ. We focus on binary accuracy in this evaluation. For binary accuracy, we can use Hoeffding's inequality to solve for the sample size:

P(a − a′ ≥ ε) ≤ exp(−2ε²N) = δ.

We show the total number of samples needed for various ε at δ = 5% and the associated costs in Table 4.

| ε | Sample size | Total cost |
|---|---|---|
| 5% | 600 | $99.93 |
| 2.5% | 2,396 | $399.08 |
| 1% | 14,979 | $2,494.90 |

Table 4. Cost of verifying the accuracy of an ML model within some ε of the desired accuracy. Costs were estimated with the MobileNet v2 (0.35, 96) model.

Although these costs are high, they are within the realm of possibility. For example, it may be critical to verify the accuracy of a financial model or a model used in healthcare settings. For reference, even moderate-size datasets can cost on the order of $85,000 (Incze, 2019), so verifying the model would add between 0.1% and 2.9% overhead compared to just the cost of obtaining training data.
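Inverting the Hoeffding bound for N reproduces the sample sizes in Table 4 (up to rounding):

```python
import math

def hoeffding_n(eps: float, delta: float = 0.05) -> int:
    # Solve exp(-2 * eps^2 * N) = delta for the sample size N.
    return math.ceil(math.log(1.0 / delta) / (2.0 * eps * eps))

for eps in (0.05, 0.025, 0.01):
    print(eps, hoeffding_n(eps))
# 0.05 -> 600, 0.025 -> 2397, 0.01 -> 14979 (Table 4 up to rounding)
```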
### 7. Conclusion

In this work, we present protocols for verifying ML model execution trustlessly for audits, testing ML model accuracy, and ML-as-a-service inference. We further present the first ZK-SNARKed ImageNet-scale model to demonstrate the feasibility of our protocols. Combined, our results show the promise of verified ML model execution in the face of malicious adversaries.

### References

Armknecht, F., Boyd, C., Carr, C., Gjøsteen, K., Jäschke, A., Reuter, C. A., and Strand, M. A guide to fully homomorphic encryption. Cryptology ePrint Archive, 2015.

Ben-Sasson, E., Chiesa, A., and Spooner, N. Interactive oracle proofs. In Hirt, M. and Smith, A. (eds.), Theory of Cryptography, pp. 31–60, Berlin, Heidelberg, 2016. Springer Berlin Heidelberg. ISBN 978-3-662-53644-5.

Ben-Sasson, E., Bentov, I., Horesh, Y., and Riabzev, M. Scalable, transparent, and post-quantum secure computational integrity. Cryptology ePrint Archive, Paper 2018/046, 2018. URL https://eprint.iacr.org/2018/046.

Bitansky, N., Canetti, R., Chiesa, A., Goldwasser, S., Lin, H., Rubinstein, A., and Tromer, E. The hunting of the SNARK. Journal of Cryptology, 30(4):989–1066, 2017.

Bowe, S., Grigg, J., and Hopwood, D. Recursive proof composition without a trusted setup. Cryptology ePrint Archive, Paper 2019/1021, 2019. URL https://eprint.iacr.org/2019/1021.

Bünz, B., Fisch, B., and Szepieniec, A. Transparent SNARKs from DARK compilers. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 677–706. Springer, 2020.

Clopper, C. J. and Pearson, E. S. The use of confidence or fiducial limits illustrated in the case of the binomial. Biometrika, 26(4):404–413, 1934.

Feng, B., Qin, L., Zhang, Z., Ding, Y., and Chu, S. ZEN: An optimizing compiler for verifiable, zero-knowledge neural network inferences. Cryptology ePrint Archive, 2021.

Gabizon, A. From AIRs to RAPs - how PLONK-style arithmetization works. 2021. URL https://hackmd.io/@aztec-network/plonk-arithmetiization-air.

Gabizon, A. and Williamson, Z. J. plookup: A simplified polynomial protocol for lookup tables. Cryptology ePrint Archive, 2020.

Gabizon, A., Williamson, Z. J., and Ciobotaru, O. PlonK: Permutations over Lagrange-bases for oecumenical noninteractive arguments of knowledge. Cryptology ePrint Archive, 2019.

Gennaro, R., Gentry, C., Parno, B., and Raykova, M. Quadratic span programs and succinct NIZKs without PCPs. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 626–645. Springer, 2013.

Ghodsi, Z., Gu, T., and Garg, S. SafetyNets: Verifiable execution of deep neural networks on an untrusted cloud. 2017a. doi: 10.48550/ARXIV.1706.10268. URL https://arxiv.org/abs/1706.10268.

Ghodsi, Z., Gu, T., and Garg, S. SafetyNets: Verifiable execution of deep neural networks on an untrusted cloud. Advances in Neural Information Processing Systems, 30, 2017b.

Grassi, L., Khovratovich, D., Rechberger, C., Roy, A., and Schofnegger, M. Poseidon: A new hash function for zero-knowledge proof systems. Cryptology ePrint Archive, Paper 2019/458, 2019. URL https://eprint.iacr.org/2019/458.

Groth, J. On the size of pairing-based non-interactive arguments. In Annual International Conference on the Theory and Applications of Cryptographic Techniques, pp. 305–326. Springer, 2016.

Incze, R. The cost of machine learning projects. 2019. URL https://medium.com/cognifeed/the-cost-of-machine-learning-projects-7ca3aea03a5c.

Jha, N. K., Ghodsi, Z., Garg, S., and Reagen, B. DeepReDuce: ReLU reduction for fast private inference. In International Conference on Machine Learning, pp. 4839–4849. PMLR, 2021.

Juvekar, C., Vaikuntanathan, V., and Chandrakasan, A. GAZELLE: A low latency framework for secure neural network inference. In 27th USENIX Security Symposium (USENIX Security 18), pp. 1651–1669, 2018.

Knott, B., Venkataraman, S., Hannun, A., Sengupta, S., Ibrahim, M., and van der Maaten, L. CrypTen: Secure multi-party computation meets machine learning. Advances in Neural Information Processing Systems, 34:4961–4973, 2021.

Kumar, N., Rathee, M., Chandran, N., Gupta, D., Rastogi, A., and Sharma, R. CrypTFlow: Secure TensorFlow inference. In 2020 IEEE Symposium on Security and Privacy (SP), pp. 336–353. IEEE, 2020.

Lam, M., Mitzenmacher, M., Reddi, V. J., Wei, G.-Y., and Brooks, D. Tabula: Efficiently computing nonlinear activation functions for secure neural network inference. arXiv preprint arXiv:2203.02833, 2022.

Lee, S., Ko, H., Kim, J., and Oh, H. vCNN: Verifiable convolutional neural network based on zk-SNARKs. Cryptology ePrint Archive, 2020.

Liu, T., Xie, X., and Zhang, Y. zkCNN: Zero knowledge proofs for convolutional neural network predictions and accuracy. In Proceedings of the 2021 ACM SIGSAC Conference on Computer and Communications Security, pp. 2968–2985, 2021.

Lou, Q. and Jiang, L. HEMET: A homomorphic-encryption-friendly privacy-preserving mobile neural network architecture. In International Conference on Machine Learning, pp. 7102–7110. PMLR, 2021.

Mishra, P., Lehmkuhl, R., Srinivasan, A., Zheng, W., and Popa, R. A. Delphi: A cryptographic inference service for neural networks. In 29th USENIX Security Symposium (USENIX Security 20), pp. 2505–2522, 2020.

Mohassel, P. and Zhang, Y. SecureML: A system for scalable privacy-preserving machine learning. In 2017 IEEE Symposium on Security and Privacy (SP), pp. 19–38. IEEE, 2017.

Pearson, L., Fitzgerald, J., Masip, H., Bellés-Muñoz, M., and Muñoz-Tapia, J. L. PlonKup: Reconciling PlonK with plookup. Cryptology ePrint Archive, 2022.

Pentyala, S., Dowsley, R., and De Cock, M. Privacy-preserving video classification with convolutional neural networks. In International Conference on Machine Learning, pp. 8487–8499. PMLR, 2021.

Privacy and Scaling Explorations. zkevm, 2022. URL https://github.com/privacy-scaling-explorations/zkevm-circuits.

Rivest, R. L., Shamir, A., and Adleman, L. A method for obtaining digital signatures and public-key cryptosystems. Communications of the ACM, 21(2):120–126, 1978.

Rogaway, P. and Shrimpton, T. Cryptographic hash function basics: Definitions, implications, and separations for preimage resistance, second-preimage resistance, and collision resistance. In International Workshop on Fast Software Encryption, pp. 371–388. Springer, 2004.

Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., and Chen, L.-C. MobileNetV2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510–4520, 2018.

Silberman, N. and Guadarrama, S. TF-Slim: A high level library to define complex models in TensorFlow, 2018.

Thaler, J. Time-optimal interactive proofs for circuit evaluation. Cryptology ePrint Archive, Paper 2013/351, 2013. URL https://eprint.iacr.org/2013/351.

Weng, J., Weng, J., Tang, G., Yang, A., Li, M., and Liu, J.-N. pvCNN: Privacy-preserving and verifiable convolutional neural network testing. arXiv preprint arXiv:2201.09186, 2022.

zcash. halo2, 2022. URL https://zcash.github.io/halo2/.
### A. Viability of Verifying Model Accuracy

In this section, we prove the viability of the simplified protocol for verifying model accuracy. As mentioned, viability further requires that the cost of the model or the price of post-verification purchased predictions is greater than 1000·N1·E. Viability requires that honest MP/MC will participate and that dishonest MP/MC will not participate.

Consider the case of an honest MP. If MC is dishonest, it can economically gain by having MP proceed beyond Stage 4 and having MP fail the accuracy target. However, as MP has access to the test set, it can determine the accuracy before proceeding beyond Stage 4, so it will not proceed if the accuracy target is not met. If MP has a valid model, it will proceed, since the profits of serving predictions or selling the model are larger than its stake.

Consider the case of an honest MC. Note that an economically rational MP is incentivized to serve the model if it has a model of high quality. Thus, we assume dishonest MPs do not have a model that achieves the accuracy target. A dishonest MP can economically gain by aborting at Stage 4 at least 1000 times (as E > P). MC can choose to participate only with MPs that have a failure rate of at most 1%. In order to fool honest MCs, MP must collude to verify invalid test sets, which costs 2ε per verification. MP must have 99 fake verifications for every one failed verification from an honest MC. Thus, by setting ε = NP/99, dishonest MPs will not participate.

From our analysis, we see that honest MP and MC are incentivized to participate and that dishonest MP and MC will not participate, showing viability.

### B. Verifying ML Model Accuracy with Griefing and Timeouts

In this section, we describe how to extend our model accuracy protocol to account for griefing and timeouts.
Griefing is when an adversarial party purposefully performs economically disadvantageous actions to harm another party. Timeouts are when either the MP or MC does not continue with the protocol (whether by choice or not) without explicitly aborting.

Denote the cost of obtaining a test input and label to be E, the cost of ZK-SNARKing a single input to be Z, and P to be the cost of performing inference on a single data point. We enforce that E > Z > P. Furthermore, let N = N1 + N2 be the number of examples used in the verification protocol. These parameters are marketplace-wide and are related to the security of the protocol. The marketplace requires MP to stake 1000·N1·E per model to participate. The stake is used to prevent Sybil attacks, in which a single party fakes the identity of many MPs. Given the stake, the verification protocol is as follows for some accuracy target a:

1. MP commits to an architecture and a set of weights (by providing the ZK-SNARK keys and the weight hash, respectively). MC commits to a test set {(x1, y1), ..., (xN, yN)} by publishing the hash of the examples.
2. MP and MC each escrow 2NE + ε, where ε goes to the escrow service.
3. MP selects a random subset of size N1 of the test set. If MC aborts at this point, MC loses the full amount in the escrow to MP. If MC continues, it sends the subset of examples to MP.
4. MP chooses to proceed or abort. If MP aborts, MC loses N1P of the escrow to MP and the remainder of the funds are returned to MC and MP.
5. MC sends the remaining N2 examples to MP. If MP aborts from here on out, MP loses the full amount in the escrow (2NE) to MC.
6. MP sends SNARKs of the N2 examples with outputs revealed. The weights and inputs are hashed.
7. If the accuracy target a is met, MC pays 2(N1P + N2Z). Otherwise, MP loses the full amount 2NE to MC.

Validity and viability (no griefing or timeouts). The verification protocol is valid because MP must produce the outputs of the ML model as enforced by the ZK-SNARKs. MC can compute the accuracy given the outputs. Thus, if the protocol completes, the accuracy target is met.

Viability further requires that the cost of the model or the price of post-verification purchased predictions is greater than 1000·N1·E. We must show that honest MP/MC will participate and that dishonest MP/MC will not participate. We first show viability without griefing or timeouts and extend our analysis below.

Consider the case of an honest MP. If MC is dishonest, it can economically gain by having MP proceed beyond Stage 4 and having MP fail the accuracy target. Since MP chooses the subsets N1 and N2, they can be drawn uniformly from the full test set. Thus, MP can choose to proceed only if P(a met | N1) > 1 − α, where α is chosen so that the expected value for MP is positive and depends on the choice of ε (we provide concrete instantiations for α and ε below). If MC is honest, MP gains in expected value by completing the protocol, as its expected gain is (1 − α)(N1·P + 2·N2·Z − ε) + α·N1·P.
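The honest-MP payoff expression is easy to evaluate for candidate parameters. The sketch below plugs in made-up values for N1, N2, P, Z, and α; the choice ε = N1·P/99 is the deterrence choice derived below:

```python
def mp_expected_gain(alpha: float, N1: int, N2: int,
                     P: float, Z: float, eps: float) -> float:
    # Honest MP's expected gain from Appendix B:
    # (1 - alpha) * (N1*P + 2*N2*Z - eps) + alpha * N1*P
    return (1 - alpha) * (N1 * P + 2 * N2 * Z - eps) + alpha * N1 * P

N1, N2, P, Z = 200, 800, 0.01, 0.10   # illustrative parameters
eps = N1 * P / 99                     # deterrence choice of epsilon
print(round(mp_expected_gain(0.05, N1, N2, P, Z, eps), 2))  # ~153.98
```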
Consider the case of an honest MC. Note that an economically rational MP is incentivized to serve the model if it has a model of high quality. Thus, we assume dishonest MPs do not have a model that achieves the accuracy target. A dishonest MP can economically gain by aborting at Stage 4 at least 1000 times (as E > P). MC can choose to participate only with MPs that have a failure rate of at most 1%. In order to fool honest MCs, MP must collude to verify invalid test sets, which costs 2ε per verification. MP must have 99 fake verifications for every one failed verification from an honest MC. Thus, by setting ε = N1·P/99, dishonest MPs will not participate. For this choice of ε, α > 49·N1·P / (49·N1·P + 99·N·E).

From our analysis, we see that honest MP and MC are incentivized to participate and that dishonest MP and MC will not participate, showing viability.

Accounting for griefing. We have shown that there exist choices of α and ε for viability with economically rational actors. However, we must also account for griefing, where an economically irrational actor harms themselves to harm another party. It is not possible to make griefing impossible. However, we can study the costs of griefing. By making these costs high, our protocol will discourage griefing. In order to make these costs high, we let ε = N1·P.

We first consider griefing attacks against MC. For this choice of ε, a dishonest MP must pay 99·N1·P per honest MC it griefs. In particular, MC loses N1·P per attack, so the cost to a griefing MP is 99× higher than the cost to MC.

We now consider griefing attacks against an MP. Since MP can sample randomly, MP can simply choose α appropriately to ensure the costs to a griefing MC are high. In particular, the MP pays 2NE per successful attack. MP's expected gain for executing the protocol is (1 − α)(2·N2·Z) + α·N1·P for the choice of ε above. Then, for

α = (NE/50 − 2·N2·Z) / (N1·P − N2·Z),

the cost of griefing is 100× higher for griefing MC than MP. By choosing N1 and N2 appropriately, MP can ensure the cost of griefing is high for griefing MCs.

Accounting for timeouts. Another factor to consider is that either MC or MP can choose not to continue the protocol without explicitly aborting. To account for this, we introduce a sub-protocol for sending the data. Once the data is sent, if MP does not continue after a period of time, MP is slashed. The sub-protocol for data transfer is as follows:

1. MC sends hashes of the encrypted inputs to the escrow and MP.
2. MC sends the encrypted inputs to MP.
3. MP signs and publishes an acknowledgement of the receipt.
4. MC publishes the decryption key.
5. MP contests that the decryption key is invalid or continues the protocol.

If MC does not respond or aborts in Stages 1, 2, or 4, it is slashed. If MP does not respond in Stages 3 or 5, it is slashed. Validity follows from standard cryptographic hardness assumptions. Without the decryption key, MP cannot access the data. With the decryption key, MP can verify that the data was sent properly.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2210.08674, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "http://arxiv.org/pdf/2210.08674" }
2022
[ "JournalArticle" ]
true
2022-10-17T00:00:00
[ { "paperId": "8744a927d68b946aa7d70afdeb25962b72c0813b", "title": "vCNN: Verifiable Convolutional Neural Network Based on zk-SNARKs" }, { "paperId": "9081970d3833d482b60997627587728d324ec2a1", "title": "Tabula: Efficiently Computing Nonlinear Activation Functions for Secure Neural Network Inference" }, { "paperId": "f2a7dc399f1d4fe9ec1feccdeca3782b764ae00d", "title": "pvCNN: Privacy-Preserving and Verifiable Convolutional Neural Network Testing" }, { "paperId": "6336149196f02d0eb6c1a89bc6b662e4422bd794", "title": "zkCNN: Zero Knowledge Proofs for Convolutional Neural Network Predictions and Accuracy" }, { "paperId": "7eb733c8ac1b3d1dd8b50e066ddae10769e3b46e", "title": "CrypTen: Secure Multi-Party Computation Meets Machine Learning" }, { "paperId": "e2937b0b2ab8700e40904a5ee639042983a76068", "title": "HEMET: A Homomorphic-Encryption-Friendly Privacy-Preserving Mobile Neural Network Architecture" }, { "paperId": "4521c2e96fcd1560ade2489316eef1805c094cda", "title": "DeepReDuce: ReLU Reduction for Fast Private Inference" }, { "paperId": "ad107cb967099ed051fad2d88e6ea26345001c75", "title": "Privacy-Preserving Video Classification with Convolutional Neural Networks" }, { "paperId": "064455ab95783c2dfd161bfa3b41475d84e5c608", "title": "Delphi: A Cryptographic Inference Service for Neural Networks" }, { "paperId": "c71edf106571a39c9cf6d2ace05b63b4c66bf72a", "title": "Transparent SNARKs from DARK Compilers" }, { "paperId": "5a2a7bcb24488a4ee938e376cc876d7c2fe115a7", "title": "CrypTFlow: Secure TensorFlow Inference" }, { "paperId": "266e0f8e62773ed894f1eb79ee75746be470aeb6", "title": "Contested" }, { "paperId": "c519be94d6a99ec80f60d7369cc2587c485c8304", "title": "Gazelle: A Low Latency Framework for Secure Neural Network Inference" }, { "paperId": "dd9cfe7124c734f5a6fc90227d541d3dbcd72ba4", "title": "MobileNetV2: Inverted Residuals and Linear Bottlenecks" }, { "paperId": "a9b35ca93c086d0f97130d4c0257d70cb1f40cae", "title": "SafetyNets: Verifiable Execution of Deep Neural Networks on an Untrusted Cloud" }, { "paperId": "2b7f9117eb6608a58be4c078ca3d69c0e5ccb875", "title": "SecureML: A System for Scalable Privacy-Preserving Machine Learning" }, { "paperId": "3cf7f1025a92524cd00e2cabc14af46a8073ee06", "title": "Interactive Oracle Proofs" }, { "paperId": "510aec2a12d43bbd6ddb85d09def188273f6c024", "title": "On the Size of Pairing-Based Non-interactive Arguments" }, { "paperId": "e8f3c22566f7dbca63b5b5828bba22a0cd72defa", "title": "Quadratic Span Programs and Succinct NIZKs without PCPs" }, { "paperId": "b296575ecaacfe7160a949764dec93e8bf1eda19", "title": "Time-Optimal Interactive Proofs for Circuit Evaluation" }, { "paperId": "1a438164c1ca074c40baa6c3279cb5e0c573313e", "title": "Cryptographic Hash-Function Basics: Definitions, Implications, and Separations for Preimage Resistance, Second-Preimage Resistance, and Collision Resistance" }, { "paperId": "b4d4a78ecc68fd8fe9235864e0b1878cb9e9f84b", "title": "A method for obtaining digital signatures and public-key cryptosystems" }, { "paperId": "166c42895882039e4252f7c943efa13d0505109f", "title": "THE USE OF CONFIDENCE OR FIDUCIAL LIMITS ILLUSTRATED IN THE CASE OF THE BINOMIAL" }, { "paperId": "f5f07d3b650ca5b3f41f091fac5ae663f7a1d7e9", "title": "PlonKup: Reconciling PlonK with plookup" }, { "paperId": "14b7bfba25af540f94922e49e7fed137ec5cd88a", "title": "ZEN: An Optimizing Compiler for Verifiable, Zero-Knowledge Neural Network Inferences" }, { "paperId": "56fb9ca04c3c6c0a1b972d7b4d825ebcda81c459", "title": "Poseidon: A New Hash Function for Zero-Knowledge 
Proof Systems" }, { "paperId": null, "title": "From airs to raps - how plonkstyle arithmetization works. 2021" }, { "paperId": "f06d005574d5435f43663acef3e56abe44a9c2f7", "title": "plookup: A simplified polynomial protocol for lookup tables" }, { "paperId": null, "title": "The cost of machine learning projects. 2019" }, { "paperId": "d928b78ea85cae93d3ca0bfabe47bf954db55e7a", "title": "PLONK: Permutations over Lagrange-bases for Oecumenical Noninteractive arguments of Knowledge" }, { "paperId": "83ace8f26e6c57c6c2b4e66e5e81aafaadd7ca38", "title": "Halo: Recursive Proof Composition without a Trusted Setup" }, { "paperId": "7fb02be911a495994bfdcab75c0f0a4315970319", "title": "Scalable, transparent, and post-quantum secure computational integrity" }, { "paperId": null, "title": "Tf-slim: A high level library to define complex models in tensorflow, 2018" }, { "paperId": "87c6557bb8c822c54f64691d8d1a6ae9c149a4bb", "title": "The Hunting of the SNARK" }, { "paperId": "7ee670d05930c034d2224a42b37db8862a566810", "title": "A Guide to Fully Homomorphic Encryption" }, { "paperId": null, "title": "Validity follows from standard cryptographic hardness assumptions" }, { "paperId": null, "title": "The responder produces ZK-SNARKs of the model on the documents, with the inputs hashed" }, { "paperId": null, "title": "MC sends the test set to MP. MP can continue or abort at this point" }, { "paperId": null, "title": "model’s positive class. The validity follows from the dif-ficulty of finding hash collisions and the security of ZK-SNARKs" }, { "paperId": null, "title": "MC sends the remainder of the N 2 examples to MP. If MP aborts from here on out, MP loses the full amount in the escrow ( 2 NE ) to MC" }, { "paperId": null, "title": "MC provides the hashes for the K inputs to the es-crow and sends the inputs to MP ( x i )" }, { "paperId": null, "title": "MP provides the predictions ( y i ) to the inputs (without ZK-SNARKs) to MC" }, { "paperId": null, "title": "column q (per custom gate), where each cell in q takes the value 0 or 1" }, { "paperId": null, "title": "If MC believes MP is dishonest, MC can contest on any subset K 1 of the predictions" }, { "paperId": null, "title": "MP sends ZK-SNARKs and the outputs of the model on the test set to MC" }, { "paperId": null, "title": "MP sends SNARKs of the N 2 examples with outputs revealed" }, { "paperId": null, "title": "Knowledge soundess : A computationally bounded prover cannot generate proofs for incorrect executions" }, { "paperId": null, "title": "MP and" }, { "paperId": null, "title": "Completeness : Proofs of correct execution verify successfully" }, { "paperId": null, "title": "which only applies the custom multiplication gate which q i 6 = 0 . Column q is called a selector" }, { "paperId": null, "title": "MP chooses to proceed or abort" }, { "paperId": null, "title": "Novel arithmetization optimizations for DNN inference in the form of lookup arguments for non-linearities and sub-circuit reuse to enable ImageNet-scale ZK-SNARKs (Section 4)" }, { "paperId": null, "title": "Non-interactivity : Proof generation does not require interaction between the verifier and prover" }, { "paperId": null, "title": "MP selects a random subset of size N 1 of the test set" } ]
14756
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Business", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01c0579debd09da21bfa3edafebff9a35c7a6e8a
[ "Computer Science" ]
0.858785
Blockchain Analysis Tool For Monitoring Coin Flow
01c0579debd09da21bfa3edafebff9a35c7a6e8a
Swiss Conference on Data Science
[ { "authorId": "2078662442", "name": "Aman Framewala" }, { "authorId": "1831193215", "name": "Sarvesh Harale" }, { "authorId": "1573631781", "name": "Shreya Khatal" }, { "authorId": "152891106", "name": "Dhiren R. Patel" }, { "authorId": "1701175", "name": "Yann Busnel" }, { "authorId": "1759654", "name": "M. Rajarajan" } ]
{ "alternate_issns": null, "alternate_names": [ "International Conference on Software Defined Systems", "Swiss Conf Data Sci", "SDS", "Int Conf Softw Defin Syst" ], "alternate_urls": null, "id": "4f02ac59-1046-4afe-981b-6f33122b2014", "issn": null, "name": "Swiss Conference on Data Science", "type": "conference", "url": null }
While cryptocurrencies like Bitcoin have the potential to break traditional financial barriers, there are growing concerns about such currencies being used to fund illegal activities. Blockchain keeps the complete history of all transactions ever performed and each node replicates it. The humongous data it contains can be analyzed to gain useful insights about user transactions as well as the blockchain as a whole. In this paper, we propose an approach to parse and visualize the data of Bitcoin blockchain in a graph structure and carry out analysis that includes tracking and tracing, address clustering and entity tagging. We also try to find patterns in the data at a macro level to provide insights about the overall system. Thus, these efforts lead to foundation work for an analysis tool for getting insights on the coin flow of any financial system including cryptocurrencies.
## **Blockchain Analysis Tool For Monitoring Coin Flow**

Aman Framewala [1], Sarvesh Harale [1], Shreya Khatal [1], Dhiren Patel [1], Yann Busnel [2], and Muttukrishnan Rajarajan [3]

1 Department of Computer Engineering, VJTI Mumbai, India. Email: amanframewala@gmail.com, sarveshharale10@gmail.com, khatalshreya@gmail.com, dhiren29p@gmail.com
2 IMT Atlantique, IRISA Rennes, France. Email: yann.busnel@imt-atlantique.fr
3 City University London, UK. Email: R.Muttukrishnan@city.ac.uk

Published in BAT 2020: Second International Workshop on Blockchain Applications and Theory, in conjunction with SDS 2020: Seventh International Conference on Software Defined Systems, Jun 2020, Paris, France, pp. 1-2. doi: 10.1109/SDS49854.2020.9143908. HAL Id: hal-02750844, https://imt-atlantique.hal.science/hal-02750844v1, submitted on 3 Jun 2020.

***Abstract*** — While cryptocurrencies like Bitcoin have the potential to break traditional financial barriers, there are growing concerns about such currencies being used to fund illegal activities. The blockchain keeps the complete history of all transactions ever performed, and each node replicates it. The humongous data it contains can be analyzed to gain useful insights about user transactions as well as the blockchain as a whole. In this paper, we propose an approach to parse and visualize the data of the Bitcoin blockchain in a graph structure and carry out analysis that includes tracking and tracing, address clustering and entity tagging. We also try to find patterns in the data at a macro level to provide insights about the overall system. Thus, these efforts lead to foundation work for an analysis tool for getting insights on the coin flow of any financial system, including cryptocurrencies.

***Keywords*** — Blockchain, bitcoin, tracking and tracing, address clustering, entity tagging

I. INTRODUCTION

Bitcoin has been favored by many people due to its decentralized and pseudo-anonymous nature. The popularity of Bitcoin has continued to rise, with over 200k transactions being recorded each day [1][2]. At the same time, Bitcoin is widely used as a means of exchange on dark markets like the Silk Road studied in [3], which was infamous for drugs and human trafficking, and also for activities such as money laundering and extortion [4]. This has led to an urgent need for law enforcement agencies to monitor the flow of Bitcoin, detect such activities and further deter them. However, the binary format of Bitcoin blockchain data makes it cumbersome for the agencies to perform analysis and obtain usable evidence from scratch [5].
Bitcoin is a pseudo-anonymous currency [6], in which all the transactions are visible and traceable, but the blockchain does not store information that allows direct mapping to real-world entities, thus providing anonymity [7][8]. One of the motives of cryptocurrencies is to provide anonymity, and this has led to the formation of new cryptocurrencies like Monero [9] and ZeroCash [10], which enhance the anonymity of users. Other mechanisms like Bitcoin mixing services have also been developed, which serve as a tool to provide anonymity by obfuscating the flow of funds [11], thus aiding money laundering activities.

In this paper, we propose a tool to parse the Bitcoin blockchain data, visualize the transactions and analyze them with ease. It integrates the features of transaction graph analysis [12], address clustering [13], entity tagging [14], tracking and tracing [15] and wallet monitoring using alerts into a single tool which is designed to suit the needs of monitoring coin flow. This can help financial institutions and law enforcement agencies in identifying criminal entities and investigating activities like money laundering and ransomware. The major contributions of this work are as follows:

- Establish a concrete methodology for analysis and monitoring of cryptocurrencies.
- Consolidate various analysis functions that can be performed on cryptocurrencies, enabling greater auditability.

The rest of the paper is organized as follows: In Section 2, background and related work are presented. Section 3 discusses our proposal with the design rationale and techniques used. Section 4 gives implementation details and discusses results visualization. We conclude the paper in Section 5, followed by the references at the end.

II. BACKGROUND AND RELATED WORK

Nakamoto [16] marks the inception of blockchain and Bitcoin in the world, proposing the Bitcoin system as a peer-to-peer value transfer system. Bitcoin is a cryptocurrency based on the UTXO (Unspent Transaction Output) model. Users can transact on the Bitcoin blockchain using Bitcoin accounts. A Bitcoin account is defined by an Elliptic Curve Cryptography key pair [5][17]. The Bitcoin account is publicly identified by its bitcoin address, obtained from its public key using a unidirectional function, as shown in Figure 1 (Fig. 1: Bitcoin Address Generation). Using this public information, a user can send bitcoins to that address. Then, the corresponding private key is needed to spend the bitcoins of the account. Table 1 shows a sample private key, its intermediate results and the corresponding Bitcoin address generated.

Table 1: Bitcoin Address Example

| Field | Value |
|---|---|
| Private Key | 9e524de478970a9621c0e52890805d5f28e3620892ba6bfa701b026c6ee10a52 |
| Public Key | 03ee3b7337eb52d1e8bd7ee271db9aa43a67750ff483870ab2753d2e13922970db |
| Public Key Hash | 5355f7bb58765e07a20f978b6e2437e99a5e923 |
| Bitcoin Address | 18be54dbyAth7CR4ymeoQBpzwinLW5Qe1K |

It is easy to understand that any user can create any number of bitcoin addresses (generating the key pairs) using standard bitcoin client software. A transaction in Bitcoin is a transfer of value that is broadcast to the network and collected into a block. A transaction typically references previous transaction outputs (UTXOs) as inputs and generates new transaction outputs (UTXOs). Figure 2 represents typical bitcoin transactions. One can note that a small amount, equivalent to the transaction fee, gets deducted and is awarded to the miner.
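As an aside, the public-key-to-address pipeline of Figure 1 is easy to sketch in code. The following is a minimal illustration, not part of the paper's tool; it assumes Python's hashlib exposes RIPEMD-160 (this depends on the local OpenSSL build), and we have not re-verified it against the possibly truncated hash value shown in Table 1.

```python
import hashlib

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def base58check(payload: bytes) -> str:
    # Base58Check: append a 4-byte double-SHA256 checksum, then Base58-encode.
    checksum = hashlib.sha256(hashlib.sha256(payload).digest()).digest()[:4]
    data = payload + checksum
    n = int.from_bytes(data, "big")
    encoded = ""
    while n > 0:
        n, r = divmod(n, 58)
        encoded = B58_ALPHABET[r] + encoded
    # Each leading zero byte is encoded as the character '1'.
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + encoded

def pubkey_to_address(pubkey_hex: str) -> str:
    # Figure 1 pipeline: public key -> SHA-256 -> RIPEMD-160 -> Base58Check.
    pubkey = bytes.fromhex(pubkey_hex)
    sha = hashlib.sha256(pubkey).digest()
    # NOTE: ripemd160 availability in hashlib depends on the OpenSSL build.
    pkh = hashlib.new("ripemd160", sha).digest()
    return base58check(b"\x00" + pkh)  # 0x00 = mainnet P2PKH version byte

# Public key from Table 1 (the private-key-to-public-key step needs an
# elliptic-curve library and is omitted here).
print(pubkey_to_address(
    "03ee3b7337eb52d1e8bd7ee271db9aa43a67750ff483870ab2753d2e13922970db"))
```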
Figure 2(b) shows how change can be returned to address 1, which provides the input to the transaction (Fig. 2: Bitcoin Transactions).

Since blockchains provide auditability, it is possible to view every transaction ever recorded. These transactions can be analyzed to provide insights into emerging trends and sentiments concerning the use of the blockchain. Spagnuolo et al. [18] propose a framework to automatically parse the blockchain, cluster addresses, classify addresses and users, and export and visualize elaborated information from the Bitcoin network. They also implement a classifier that labels the clusters in an automated or semi-automated way, by using several web scrapers that incrementally update lists of addresses belonging to known identities. Akcora et al. [19] explore aspects of blockchain analytics such as analysis models, tools and use cases in the modern world. Fleder et al. [12] annotate the public Bitcoin transaction graph by trying to link Bitcoin public keys to real people, either definitively or statistically. The graph is then put through a graph-analysis framework to find and summarize the activity of both known and unknown users. They then use web scraping to find Bitcoin addresses and try to link them to real-world entities. Akcora et al. [20] present general algorithms for tracking Bitcoin flows. Ermilov et al. [13] propose heuristic methods for grouping addresses that are probably controlled by a single entity, which is an important step for analyzing transactions. They also recommend combining off-chain information with blockchain information to further refine the results. Hong et al. [21] explore cryptocurrency mixing (laundry) services and propose a general de-mixing algorithm for common mixing services by exploiting their static and dynamic parameters. Goodell and Aste [22] present a study on electronic payment methods, majorly focusing on cryptocurrencies and comparing their offered anonymity and auditability. They then propose two schemes of using cryptocurrencies which try to provide an acceptable level of anonymity to users while also providing a good degree of auditability to regulatory authorities. Jourdan et al. [14] propose that identities of Bitcoin address holders can be leaked based on transaction features or off-network information. Balthasar and Hernandez-Castro [23] and Möser et al. [24] briefly examine some of the most relevant Bitcoin laundry services and study their main features, mainly the security and anonymity they provide. Balsakas et al. [26] provide a comprehensive study of blockchain analysis as a field of study. They explore the features of available blockchain analysis tools and categorize them based on their provided functionality. They also present the prevailing challenges in blockchain analysis.

III. ANALYSIS TOOL: OUR PROPOSAL AND DESIGN RATIONALE

We propose a system that integrates the features of transaction graph analysis, address clustering, entity tagging and tracking and tracing into a single tool which is designed to suit the needs of monitoring coin flow. Figure 3 represents an overview of the Blockchain Analysis tool (Fig. 3: Overview of the Blockchain Analysis Tool). The front end of the tool provides an interactive web-based GUI for making various queries and viewing statistics representing the current state of the bitcoin blockchain. The tool also allows the generation of alerts for transactions involving a specific wallet address or a given transaction amount.
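Concretely, once the data sits in Neo4j, a wallet-centric query behind such a GUI could look like the sketch below. The node labels, relationship types and property names (Address, Transaction, INPUT/OUTPUT, amount) are our assumptions based on the schema described in this paper, not a documented interface of the tool.

```python
# Hypothetical one-hop wallet query against the Neo4j backend described in
# this paper; schema names are assumptions, not the tool's documented API.
from neo4j import GraphDatabase

ONE_HOP_QUERY = """
MATCH (a:Address {address: $addr})-[r:INPUT|OUTPUT]-(t:Transaction)
RETURN t.hash AS tx, type(r) AS role, r.amount AS btc
"""

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))
with driver.session() as session:
    for record in session.run(ONE_HOP_QUERY,
                              addr="18be54dbyAth7CR4ymeoQBpzwinLW5Qe1K"):
        print(record["tx"], record["role"], record["btc"])
driver.close()
```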
The workflow for the backend of the tool can be seen in Figure 4 (Fig. 4: Workflow for detecting patterns in blockchain) and is described briefly in subsection C.

*A.* *Blockchain Data Migration Module*

This module is responsible for getting the data into a graph database like Neo4j [27], where graph-related queries can be processed quickly owing to the intuitive query interface. The process involves transferring the binary Bitcoin dump into the database using a parser, with auxiliary databases used to speed up the process. Figure 5 depicts our proposed method for the migration of a blockchain dump (i.e., the Bitcoin blockchain) into a graph database (i.e., Neo4j) (Fig. 5: Workflow for data cleaning and migration to database). The process for migration of any blockchain to a graph database can be broadly broken down into the following steps:

*1)* *Dump Processing:* It consists of the following steps:

*a)* *Bitcoin Parsing:* After downloading the Bitcoin dump data, it needs to be parsed to convert it into a processable format. There are readily available libraries for parsing Bitcoin data, which convert raw binary data into a structured form. The parser is used to bring the transactions from all the blocks into a readable format.

*b)* *Transaction Deserialization:* After getting the transactions, they are deserialized into objects having as fields the transaction hash, timestamp, inputs and outputs.

*c)* *Inputs and Outputs Aggregation:* In Bitcoin transactions, multiple inputs (or outputs) may relate to a single address. Therefore, the inputs (outputs) are aggregated to form a single input (output) from that address. This is done for brevity and convenience.

*d)* *Bitcoin Unit Conversion:* Bitcoin transactions contain information about the bitcoin amounts involved in the transaction in satoshis (10^-8 BTC). These values are converted into BTC. This is again for brevity and convenience, as there is no specific requirement for processing values at such granularity.

*2)* *Fields Extraction:* After converting the transactions to structured form, the essential fields used to migrate the data to the Neo4j graph database are extracted, and CSV files are created from them. Four types of CSV files are created with the following fields (Table 2):

Table 2: Fields Extraction

| Type | Fields |
|---|---|
| Transactions | Transaction Hash and Timestamp |
| Addresses | Bitcoin wallet addresses |
| Inputs | Transaction Hash, Address and Amount |
| Outputs | Transaction Hash, Address and Amount |

The UTXOs are saved to a MongoDB database, which is used for creating the Input CSV files. This is due to the structure of a Bitcoin transaction, where inputs refer to a previous transaction and its output index; the UTXOs therefore need to be kept in a database because memory (RAM) is insufficient during preprocessing.

*3)* *Data Migration:* Once all the CSVs are created, they are migrated to the graph database using the Import Tool provided by Neo4j. Since the Bitcoin blockchain is continuously appended with new transactions, a cron job needs to run on a daily basis to sync the database with the latest state of the blockchain. The tool is proposed to have a button to force-start a sync in real time to carry out analysis. We also propose a mechanism to generate alerts based on a certain wallet address or transaction amount: the tool monitors the transactions and provides a notification whenever the condition is met during the syncing of the database. A sketch of the per-transaction preprocessing is given below.
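The following is a minimal sketch of steps 1c) and 1d) above (input/output aggregation per address and satoshi-to-BTC conversion). The input dictionary format is a simplified stand-in for what a real Bitcoin dump parser would emit, not the tool's actual data structures.

```python
# Sketch of the per-transaction preprocessing in the migration module:
# aggregate inputs/outputs per address and convert satoshis to BTC.
from collections import defaultdict

SATOSHI_PER_BTC = 10**8

def preprocess(tx):
    """tx: {'hash': str, 'timestamp': int,
            'inputs': [(address, satoshis)], 'outputs': [(address, satoshis)]}"""
    def aggregate(entries):
        # Multiple entries for the same address are merged into one row.
        totals = defaultdict(int)
        for addr, sat in entries:
            totals[addr] += sat
        return {addr: sat / SATOSHI_PER_BTC for addr, sat in totals.items()}
    return {
        "hash": tx["hash"],
        "timestamp": tx["timestamp"],
        "inputs": aggregate(tx["inputs"]),    # rows for the Inputs CSV
        "outputs": aggregate(tx["outputs"]),  # rows for the Outputs CSV
    }

tx = {"hash": "ab..cd", "timestamp": 1580000000,
      "inputs": [("1A..", 30_000_000), ("1A..", 20_000_000)],
      "outputs": [("1B..", 49_990_000)]}
print(preprocess(tx))  # inputs: {'1A..': 0.5}, outputs: {'1B..': 0.4999}
```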
*B.* *Analysis of Graph*

While monitoring coin flow, it is important to obtain insights from a transaction graph [12]. This analysis has three main subsections. The first is tracking and tracing of money through the various wallet addresses [15]. The second is address clustering [13], which tries to group wallet addresses operated by a single logical entity; there are two main ways to cluster addresses, namely (i) common spend and (ii) one-time change, as given in [13]. The third is entity tagging, which involves attempts to gain information about some addresses by using techniques like web scraping [25] and usage analysis.

*1)* *Tracking and Tracing:* Tracking refers to looking for transactions that use a given transaction output, and its subsequent transactions (forward direction). Tracing refers to looking for transactions that result in a given transaction output, and its previous transactions (backward direction).

*2)* *Address Clustering:* Address clustering is the process of grouping multiple addresses controlled by a single entity using heuristic methods. The entity can be a single person, a group of individuals or an organization. Address clustering may be inaccurate, as it is based on heuristics. Figure 6 (Fig. 6: Address Clustering based on Heuristics) demonstrates the following two heuristics used for address clustering:

*a)* *One-Time Change:* Change from a transaction is returned to the user through a new address.

*b)* *Common Spending:* All the addresses in the inputs of a transaction are controlled by a single entity.

*3)* *Entity Tagging:* Entity tagging refers to labeling the address clusters with a real-world entity. This can be done by scraping open-source information, e.g., tagging a group of addresses operated by a cryptocurrency exchange.

*C.* *Pattern Detection*

In this stage, the behavior of the blockchain is analyzed against various parameters to gain insights at a macro level. The analysis involves market volume and price analysis, and the mapping of news events to activities in the blockchain, which could be measured as an increase or decrease in demand for the cryptocurrency, or a sudden rise in acceptance of a given cryptocurrency related to some event.

IV. RESULT VISUALIZATION

The various transaction graphs that are displayed include two types of nodes: transaction nodes and wallet address nodes, represented in blue and orange respectively. The former are uniquely identified by their transaction hash, the latter by their wallet address. The edges represent the relationship that a wallet address has with a transaction: an incoming edge represents an input to the transaction, while an outgoing edge represents the output from a transaction being credited to the given wallet address.

*A.* *Migration to Graph Database*

The process of migration was successfully completed, adding 470,162,363 transactions and 548,854,187 wallet addresses connected by 793,453,561 aggregated inputs and 1,179,067,970 aggregated outputs. Since there is no limit to the number of wallet addresses a user can create, several of these addresses may be used for just a couple of transactions to obfuscate the trail. The total disk space used was 400 GB, including indices. Figure 7 depicts a transaction graph obtained with a wallet address given as input. The graph consists of the given wallet address at the center, surrounded by other nodes, which are linked by the transaction hash.
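Returning to the clustering heuristics of Section III-B for a moment: the common-spend rule is essentially an incremental union-find over transaction inputs. Below is a minimal sketch with made-up addresses; production tools would layer the one-time-change heuristic and off-chain tags on top of this.

```python
# Sketch of the common-spend heuristic: addresses that co-occur as inputs
# to the same transaction are merged into one cluster via union-find.
class UnionFind:
    def __init__(self):
        self.parent = {}
    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x
    def union(self, a, b):
        self.parent[self.find(a)] = self.find(b)

def common_spend_clusters(input_lists):
    # input_lists: one list of input addresses per transaction.
    uf = UnionFind()
    for inputs in input_lists:
        uf.find(inputs[0])             # register single-input transactions too
        for addr in inputs[1:]:
            uf.union(inputs[0], addr)  # all inputs assumed to share one owner
    clusters = {}
    for addr in list(uf.parent):
        clusters.setdefault(uf.find(addr), set()).add(addr)
    return list(clusters.values())

txs = [["1A", "1B"], ["1B", "1C"], ["1D"]]
print(common_spend_clusters(txs))  # [{'1A', '1B', '1C'}, {'1D'}]
```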
Fig. 7: Bitcoin transaction graph for a given wallet.

*B.* *Tracking and Tracing*

Figures 8 and 9 (Fig. 8: Hop Tracking (length 1); Fig. 9: Hop Tracking (length 2)) represent the transaction graphs for a randomly chosen wallet address, showing one-hop and two-hop tracking respectively. The tracking operations were performed for a limited number of nodes to allow for ease of visualization. The visualizations show how coin flow enters the given wallet address across several hops, allowing us to reach the point of origination.

*C.* *Address Clustering*

Figure 10 (Fig. 10: Address Clustering Result on Data) depicts an address cluster obtained using the common-spend heuristic. All the wallet addresses enclosed in the blue box provide inputs to the same transaction; thus, according to the common-spend heuristic, they are considered to be controlled by the same entity or organization. This allows us to cluster addresses together to aid in entity tagging.

*D.* *Entity Tagging*

A group of 307,481 addresses was identified as belonging to BTC-e.com, an infamous cryptocurrency exchange. This information was obtained by scraping open-source web data.

*E.* *Pattern Detection*

Patterns at the macro level were monitored to obtain insights about the overall blockchain. We performed a sample tracking analysis in which we obtained the number of addresses in the first-, second- and third-hop transactions originating from the random seed address "1EYSiRC2nUi2xLMTuwkWhHtpTTVVZ6KNrz". The results are as follows:

- First hop: 82
- Second hop: 105
- Third hop: 140

Analyzing the blockchain at the macro level revealed the following statistics:

- Total transactions: 469,608,054 (30th October 2019)
- Total volume of money in circulation: 18,021,375 BTC (30th October 2019)
- Number of wallets added in a week: 262,972 (17th November 2019: 43,428,228; 24th November 2019: 43,691,200)

V. CONCLUSION

This paper provides a foundation for a blockchain analysis tool to monitor the coin flow in a given blockchain. Currently, the tool targets the Bitcoin blockchain; however, it can be used with any blockchain by adding an appropriate migration module to its modular organization. The tool has features including tracking, tracing, address clustering, and entity tagging. Further, it also finds patterns at the macro level to gain insights from the data. The results obtained using this tool are insightful and encouraging. Future scope of this work includes support for other cryptocurrencies like Ethereum, making it a universal tool for blockchain analysis.

VI. REFERENCES

1. Blockchain.com (as of 20/01/2020). URL: https://www.blockchain.com/en/charts
2. Patel, D., Bothra, J., Patel, V.: Blockchain exhumed. 2017 ISEA Asia Security and Privacy (ISEASP), Surat, 2017, pp. 1-12. doi: 10.1109/ISEASP.2017.7976993
3. Christin, N. (2012). Traveling the Silk Road: A measurement analysis of a large anonymous online marketplace. Proceedings of the 22nd International Conference on World Wide Web.
4. Foley, S., Karlsen, J., Putnins, T. (2019). Sex, drugs, and Bitcoin: How much illegal activity is financed through cryptocurrencies? Review of Financial Studies, 32, 1798-1853. doi: 10.1093/rfs/hhz015
5. Heaven, D.: Sitting with the cyber-sleuths who track cryptocurrency criminals. MIT Technology Review, April 2018. URL: https://www.technologyreview.com/s/610807/sitting-with-thecyber-sleuths-who-track-cryptocurrency-criminals/
6. Martins, S., Yang, Y.: Introduction to bitcoins: A pseudo-anonymous electronic currency system. Proceedings of the 2011 Conference of the Center for Advanced Studies on Collaborative Research. IBM Corp., 2011.
7. Koshy, P., Koshy, D., McDaniel, P. (2014). An analysis of anonymity in Bitcoin using P2P network traffic. LNCS 8437, pp. 469-485. doi: 10.1007/978-3-662-45472-5_30
8. Reid, F., Harrigan, M.: An analysis of anonymity in the Bitcoin system. Security and Privacy in Social Networks. Springer, New York, NY, 2013, pp. 197-223.
9. Zero to Monero: First Edition - a technical guide to a private digital currency; for beginners, amateurs, and experts (2018). URL: https://www.getmonero.org/library/Zero-to-Monero-1-0-0.pdf
10. Ben-Sasson, E., Chiesa, A., Garman, C., Green, M., Miers, I., Tromer, E., Virza, M. (2014). Zerocash: Decentralized anonymous payments from Bitcoin. 2014 IEEE Symposium on Security and Privacy, pp. 459-474.
11. Seo, J., Park, M., Oh, H., Lee, K.: Money laundering in the Bitcoin network: Perspective of mixing services. 2018 International Conference on Information and Communication Technology Convergence (ICTC), Jeju, 2018, pp. 1403-1405.
12. Fleder, M., Kester, M. S., Pillai, S.: Bitcoin transaction graph analysis. arXiv preprint arXiv:1502.01657 (2015).
13. Ermilov, D., Panov, M., Yanovich, Y.: Automatic Bitcoin address clustering. 2017 16th IEEE International Conference on Machine Learning and Applications.
14. Jourdan, M., et al.: Characterizing entities in the Bitcoin blockchain. 2018 IEEE International Conference on Data Mining Workshops (ICDMW). IEEE, 2018.
15. Cai, L., Wang, B.: Research on tracking and tracing Bitcoin fund flows. 2018 IEEE 4th Information Technology and Mechatronics Engineering Conference.
16. Nakamoto, S.: Bitcoin: A peer-to-peer electronic cash system. 2008. URL: https://bitcoin.org/bitcoin.pdf
17. Antonopoulos, A. M.: Mastering Bitcoin: Unlocking Digital Crypto-Currencies (1st ed.). O'Reilly Media, Inc., 2014.
18. Spagnuolo, M., Maggi, F., Zanero, S.: BitIodine: Extracting intelligence from the Bitcoin network. International Conference on Financial Cryptography and Data Security. Springer, Berlin, Heidelberg, 2014.
19. Akcora, C. G., Dixon, M. F., Gel, Y. R., Kantarcioglu, M.: Blockchain data analytics. Journal of IEEE Intelligent Informatics, vol. 20, January 2019.
20. Akcora, C. G., Gel, Y. R., Kantarcioglu, M.: Blockchain: A graph primer. arXiv:1708.08749 [cs.CY], August 2017.
21. Hong, Y., Kwon, H., Lee, S., Hur, J.: Poster: De-mixing Bitcoin mixing services. 2018.
22. Goodell, G., Aste, T.: Can cryptocurrencies preserve privacy and comply with regulations? SSRN Electronic Journal. doi: 10.2139/ssrn.3293910
23. Balthasar, T., Hernandez-Castro, J.: An analysis of Bitcoin laundry services. NordSec 2017, LNCS 10674, pp. 297-312, 2017.
24. Möser, M., Böhme, R., Breuker, D.: An inquiry into money laundering tools in the Bitcoin ecosystem. 2013 APWG eCrime Researchers Summit. IEEE, 2013.
25. Saurkar, A. V., Pathare, K. G., Gode, S. A.: An overview on web scraping techniques and tools. International Journal on Future Revolution in Computer Science & Communication Engineering, 2018.
26. Balsakas, A., Franqueira, V.: Analytical tools for blockchain: Review, taxonomy and open challenges. 2018 International Conference on Cyber Security and Protection of Digital Services (Cyber Security), Glasgow, 2018, pp. 1-8.
27. Jouili, S., Vansteenberghe, V.: An empirical comparison of graph databases. 2013 International Conference on Social Computing, Alexandria, VA, 2013, pp. 708-715.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/SDS49854.2020.9143908?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/SDS49854.2020.9143908, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "other-oa", "status": "GREEN", "url": "https://hal-imt-atlantique.archives-ouvertes.fr/hal-02750844/file/RC_BAT2020_14.pdf" }
2,020
[ "JournalArticle", "Conference" ]
true
2020-04-01T00:00:00
[]
6,219
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Engineering", "source": "external" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01c0c84d17eb45fe51311b7656ffea8f28fb2db7
[ "Computer Science", "Engineering" ]
0.86267
Minimax Flow Over Acyclic Networks: Distributed Algorithms and Microgrid Application
01c0c84d17eb45fe51311b7656ffea8f28fb2db7
IEEE Transactions on Control of Network Systems
[ { "authorId": "3468262", "name": "M. Coraggio" }, { "authorId": "2178100", "name": "Saber Jafarpour" }, { "authorId": "1793883", "name": "F. Bullo" }, { "authorId": "1792760", "name": "M. Bernardo" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE Trans Control Netw Syst" ], "alternate_urls": null, "id": "0d3564f0-947d-4124-b171-400399406075", "issn": "2325-5870", "name": "IEEE Transactions on Control of Network Systems", "type": "journal", "url": "http://ieeexplore.ieee.org/servlet/opac?punumber=6509490" }
Given a flow network with variable suppliers and fixed consumers, the minimax flow problem consists in minimizing the maximum flow between nodes, subject to flow conservation and capacity constraints. We solve this problem over acyclic graphs in a distributed manner by showing that it can be recast as a consensus problem between the maximum downstream flows, which we define here for the first time. In addition, we present a distributed algorithm to estimate these quantities. Finally, exploiting our theoretical results, we design an online distributed controller to prevent overcurrent in microgrids consisting of loads and droop-controlled inverters. Our results are validated numerically on the CIGRE benchmark microgrid.
## Minimax Flow over Acyclic Networks: Distributed Algorithms and Microgrid Application

_Marco Coraggio, Saber Jafarpour, Francesco Bullo*, Mario di Bernardo*_

This work was in part supported by the Research Project PRIN 2017 "Advanced Network Control of Future Smart Grids" funded by the Italian Ministry of University and Research (2020-2023), http://vectors.dieti.unina.it, and by the AFOSR grant FA9550-22-1-0059. M. Coraggio is with the Scuola Superiore Meridionale (SSM), School for Advanced Studies (marco.coraggio@unina.it). S. Jafarpour is with the Dept. of Electrical and Computer Engineering, Georgia Inst. of Technology (saber@gatech.edu). F. Bullo is with the Dept. of Mechanical Engineering, Univ. of California Santa Barbara. M. di Bernardo is with the Dept. of Information Technology and Electrical Engineering, Univ. of Naples Federico II, and with the SSM (mario.dibernardo@unina.it). *These authors contributed equally.

**Abstract.** Given a flow network with variable suppliers and fixed consumers, the minimax flow problem consists in minimizing the maximum flow between nodes, subject to flow conservation and capacity constraints. We solve this problem over acyclic graphs in a distributed manner by showing that it can be recast as a consensus problem between the maximum downstream flows, which we define here for the first time. Additionally, we present a distributed algorithm to estimate these quantities. Finally, exploiting our theoretical results, we design an online distributed controller to prevent overcurrent in microgrids consisting of loads and droop-controlled inverters. Our results are validated numerically on the CIGRE benchmark microgrid.

### 1 Introduction

_Problem description and motivation_

Flow networks are dynamical systems where a commodity of interest is provided by supplier nodes, flows over the network edges, and reaches consumer nodes. Critical infrastructure networks such as power grids, water distribution networks, and traffic networks are modeled as flow networks, with the commodity of interest being electrical power, water, and vehicles, respectively [1–3]. A fundamental problem in these networks is to cater for consumers' demands, while keeping the commodity flows over the network edges below their maximum capacities. Hence, a valuable optimization problem is to minimize the maximum flow over the network edges, thereby ensuring that no edge capacity is exceeded. Violation of capacity constraints is a safety-critical event, with the potential to cause disruptions or faults in real-world infrastructure networks. Typically, the resulting minimax flow problem is solved offline in a centralized fashion, so that the "right" flows can be assigned to the network edges. However, recent changes in infrastructure networks, due to the increase in demand, the integration of numerous smart devices and the need for higher energy efficiency, have shown the limitations of such centralized approaches.

In this paper, we propose a distributed solution to the minimax flow problem over acyclic networks consisting of supplier and consumer nodes, where the former can adjust their supply rates to satisfy fixed consumption demands in the latter. In particular, by solving a distributed consensus problem, we propose a strategy for supplier generation that minimizes the maximum flow over all edges, subject to flow conservation and safety constraints.
As a case study of relevance in applications, we apply our distributed approach to AC microgrids consisting of resistive loads and droop-controlled distributed energy units. We show that our algorithm is an effective solution to adjust the suppliers' generation rates in order to prevent overcurrents on the network edges while fulfilling the demands of the consumers.

_Literature on network optimization problems_

One of the earliest formulations of minimax optimization problems on graphs is the minimax location problem [4], where the objective function is the distance between a facility node to be placed in the network and the other nodes in the graph. Later studies on this topic include [5, 6]. In [7, 8], the time-minimizing transportation problem was studied, where source nodes and sink nodes are two disjoint sets making up a bipartite graph, and the objective is to minimize the maximum transportation time among all utilized edges. In [9], the minimax transportation problem is introduced for cyclic graphs with one source node and one sink node, with the objective of minimizing the maximum flow in the network. Later, in [10], the problem is recast as a linear program and several solution algorithms are presented. Surprisingly, to the best of our knowledge, relatively few distributed solutions of minimax problems on graphs have been presented in the existing literature (see [11] for a recent review of distributed network optimization algorithms). Examples of existing distributed approaches, although not applicable to minimax flow problems, include those presented in [12], where two networks are in competition to maximize and minimize an objective function, and [13], where agents are divided into two groups for computing two continuous decision variables in a minimax optimization. For the specific case of flow networks, a Newton-based distributed algorithm is presented in [14] for minimizing the sum of all flows, while an accelerated algorithm for a similar problem is described in [15]. Also, a distributed algorithm for minimizing the $p$-norm of flows was presented in [16], which approximates the minimax flow problem when $p$ becomes very large.

_Literature on microgrid protection_

Protection against faults (such as overcurrents) in microgrids can be ensured through three kinds of interventions: _prevention_ (before the unwanted events), _detection_ (during the events), and _management_ (right after the events). In the literature, most studies focus on detection and management (see [17–19] and references therein). However, fault prevention is one area in which the use of intelligent control strategies could prove particularly fruitful, given the many challenges with fault detection and management algorithms currently available for microgrids [20–22]. An optimization problem to find the maximum permissible loading is solved in [20] through genetic algorithms, to prevent the occurrence of cascading failures. Overvoltages are prevented in [21] via a decentralized control scheme that curtails the active power output of the generators when necessary, while a control strategy is presented in [22] to prevent overloading of distributed generators during peak demand time, employing battery storage units that can intervene smoothly. Further distributed control strategies for microgrids include [23–28], but these are not specifically aimed at solving minimax problems.
A minimax optimization problem for networks of microgrids is solved in a distributed fashion in [29], minimizing a function of the energy stored in the microgrids and the power flows between them, controlling the latter.

_Contributions_

The key contributions of this paper can be summarized as follows:

1. we establish a connection between solving the minimax flow problem over an acyclic graph and achieving consensus of the maximum downstream flows, which we define here for the first time;
2. we propose a distributed estimation strategy to evaluate the maximum downstream flows of a network of interest;
3. we exploit our theoretical results and an estimation strategy to obtain an online distributed controller to minimize the maximum power flow on the lines of a microgrid, by adjusting dynamically the power generated by the suppliers, thus preventing overcurrents in the grid.

When compared to the existing literature, our objectives and methodology are closer in flavor to those presented in [16], with the important differences that therein (i) consumers can absorb any amount of commodity and (ii) only an approximate solution of the minimax flow problem is obtained. All the other references we reviewed differ from our work in major aspects, such as the optimization problem (e.g., minisum rather than minimax, as in [15]) or the network structure (e.g., single source and single sink, with cyclic graphs, as in [30]).

### 2 Review of flow networks

_Notation_

We let $\max(\emptyset) \triangleq 0$. Letting $Q$ and $R$ be sets, $|Q|$ is the cardinality of $Q$, and $Q \rightrightarrows R$ is an application from $Q$ to all subsets of $R$. Given a matrix $\mathbf{A}$, $\ker(\mathbf{A})$ is its null space (kernel), and $\mathbf{A}^\dagger$ is its Moore-Penrose (pseudo-)inverse [31].

_Graph theory_

Letting $G = (V, E)$ be a graph, $V$ and $E$ are the set of vertices and the set of edges, respectively, with $N \triangleq |V|$ and $N_E \triangleq |E|$ being the numbers of vertices and edges. We denote an undirected edge connecting vertices $i$ and $j$ as $\{i,j\}$, and a directed edge from $i$ to $j$ as $(i,j)$. $\mathbf{A}$ and $\mathbf{L}$ are the adjacency and Laplacian matrices associated to $G$. In an undirected graph, we let $Q$ be the set of edges in $E$, after they have been enumerated and oriented in an arbitrary way, and let $\mathbf{B}$ be the incidence matrix associated to the graph $(V, Q)$. In a (directed) graph, a (directed) path is an ordered sequence of vertices such that any pair of consecutive vertices is an edge in the graph. In a directed graph $(V, \vec{E})$, the _out-tree_ of vertex $i \in V$ is the union of all directed paths starting from $i$; moreover, the _out-neighborhood_ of a vertex $i$ is the set of all vertices $j$ such that a directed edge $(i,j)$ exists in $\vec{E}$.

_Flow networks_

Consider a flow network associated to an undirected acyclic unweighted graph $G = (V, E)$. We define $V_s \subset V$ as the set of supplier vertices and $V_c \subset V$ as the set of consumer vertices, with $\{V_s, V_c\}$ being a partition of $V$. Additionally, we let $N_s \triangleq |V_s| \ge 2$ and $N_c \triangleq |V_c| \ge 1$ be the numbers of supplier and consumer vertices, respectively.

_Commodity_

We let $m_i \in \mathbb{R}$ be the amount of commodity supplied ($m_i > 0$) or consumed ($m_i \le 0$) at vertex $i$, and define $\mathbf{m} \triangleq [m_i]_{i \in V} \in \mathbb{R}^N$ and $\mathbf{m}_s \triangleq [m_i]_{i \in V_s} \in \mathbb{R}^{N_s}$.
We assume that the amounts of consumed commodity ($m_i$, $i \in V_c$) are given, whereas the amounts of supplied commodity ($m_i$, $i \in V_s$) can be controlled, provided that $\mathbf{m}_{\min} \le \mathbf{m}_s \le \mathbf{m}_{\max}$, where $\mathbf{m}_{\min}, \mathbf{m}_{\max} \in \mathbb{R}^{N_s}_{>0}$ are vectors of positive real numbers.[^1]

[^1]: If a supplier $i$ is not controllable, it is possible to set $m_{\min,i} = m_{\max,i}$.

_Flows_

For all $\{i,j\} \in E$, we let $f_{ij} \in \mathbb{R}$ denote the flow of commodity from $i$ to $j$; $f_{ij} > 0$ if commodity flows from $i$ to $j$ and vice versa, and $f_{ji} = -f_{ij}$. We also define $\mathbf{f} = [f_{ij}]_{(i,j) \in Q} \in \mathbb{R}^{N_E}$. The flows satisfy the balancing equations

$$\sum_{j : \{i,j\} \in E} f_{ij} = m_i, \quad \forall i \in V, \tag{2.1}$$

which can be written in a more compact form as

$$\mathbf{B}\mathbf{f} = \mathbf{m}. \tag{2.2}$$

Finally, we let $\bar{f}_{ij} \in \mathbb{R}_{>0}$ be the capacity (i.e., maximum flow allowed) of edge $\{i,j\}$, and define $\bar{\mathbf{f}} = [\bar{f}_{ij}]_{(i,j) \in Q} \in \mathbb{R}^{N_E}_{>0}$.

Next, we present a result characterizing flows over acyclic networks. For completeness' sake, we include a short proof.

Figure 1: (a), (b): The various edge sets used in the paper ($E$, $\vec{E}$, $\vec{E}^+$, $E_{\mathrm{cf}}$, $\vec{E}_{\mathrm{cf}}$), for a simple flow network. Upward green triangles represent suppliers, while downward blue triangles denote consumers.

**Lemma 2.1 (Flows [32]).** _In an acyclic unweighted undirected flow network with incidence matrix $\mathbf{B}$, Laplacian matrix $\mathbf{L}$, and commodity vector $\mathbf{m}$, the flows $\mathbf{f}$ are uniquely determined by commodity conservation (2.2) and are given by_

$$\mathbf{f} = \mathbf{B}^\mathsf{T}\mathbf{L}^\dagger \mathbf{m}. \tag{2.3}$$

_Proof._ From [28], we have $\mathbf{L}^\dagger\mathbf{L} = \mathbf{I} - \frac{1}{N}\mathbf{1}\mathbf{1}^\mathsf{T}$, and, as the graph is unweighted, $\mathbf{L} = \mathbf{B}\mathbf{B}^\mathsf{T}$ [33, Chapter 9]. Then, consider the following expression: $\mathbf{B}^\mathsf{T}\mathbf{L}^\dagger\mathbf{B}\mathbf{B}^\mathsf{T} = \mathbf{B}^\mathsf{T}\mathbf{L}^\dagger\mathbf{L} = \mathbf{B}^\mathsf{T}(\mathbf{I} - \frac{1}{N}\mathbf{1}\mathbf{1}^\mathsf{T}) = \mathbf{B}^\mathsf{T}$. As the graph is acyclic, $\ker(\mathbf{B}) = \{0\}$ [33], and thus $\mathbf{B}^\mathsf{T}\mathbf{L}^\dagger\mathbf{B} = \mathbf{I}$. Therefore, premultiplying (2.2) by $\mathbf{B}^\mathsf{T}\mathbf{L}^\dagger$, we get the thesis.

### 3 Problem formulation

#### 3.1 Minimax flow problem

We start by defining the flow safety margin of a network.

**Definition 3.1 (Flow safety margin).** _Given a flow network over $G = (V,E)$ with supplied commodity $\mathbf{m}_s$, flows $f_{ij}$ and capacities $\bar{f}_{ij}$, the flow safety margin $J_{E_r} : \mathbb{R}^{N_s} \to \mathbb{R}_{\ge 0}$, with respect to a given edge set $E_r \subseteq E$, is_

$$J_{E_r}(\mathbf{m}_s) \triangleq \max_{\{i,j\} \in E_r} \frac{|f_{ij}|}{\bar{f}_{ij}}. \tag{3.1}$$

$J_{E_r} \ge 1$ corresponds to a fault condition we wish to avoid. We now state the main problem under study in this paper.

**Problem 3.2 (Minimax flow problem).** _For a flow network over an acyclic graph, the minimax flow problem is_

$$\begin{aligned} \min_{\mathbf{m}_s} \quad & J_{E_r}(\mathbf{m}_s), \\ \text{s.t.} \quad & \mathbf{B}\mathbf{f} = \mathbf{m}, \quad \textstyle\sum_{i \in V} m_i = 0, \\ & |\mathbf{f}| < \bar{\mathbf{f}}, \quad \mathbf{m}_{\min} \le \mathbf{m}_s \le \mathbf{m}_{\max}. \end{aligned} \tag{3.2}$$

Following the steps in [10] and exploiting (2.3), it is straightforward to verify that the minimax flow problem is a linear program and can be solved using standard centralized iterative approaches. However, such an approach has two major drawbacks: (i) it requires receiving data from all edges and transmitting data to all the suppliers, which can be impractical; (ii) if $m_i$, $i \in V_c$, are time-varying, the optimization problem needs to be solved repeatedly, and if the re-computation is not fast enough, faults may occur from applying control inputs that are not up to date, as we will show in Section 6.3.

As explained below, it might occur that the flow can be controlled only on a subset of the edges, say $E_{\mathrm{cf}}$; therefore, in the rest of this paper, when considering Problem 3.2 and the flow safety margin function $J_{E_r}$ in Definition 3.1, we take $E_r = E_{\mathrm{cf}}$, and omit the subscript of $J_{E_{\mathrm{cf}}}$ (writing $J$), for the sake of brevity.
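Since (2.3) makes the flows an explicit linear function of the commodity vector, both the flows and the safety margin of Definition 3.1 can be evaluated in a few lines of code. The following minimal numpy sketch does this for a 4-node path graph; the topology, commodities and capacities are illustrative and are not taken from the paper.

```python
# Sketch: flows f = B^T L^dagger m (Lemma 2.1) and safety margin J
# (Definition 3.1) for a 4-node path 1-2-3-4. Illustrative data only.
import numpy as np

# Incidence matrix B (vertices x oriented edges) for edges (1,2), (2,3), (3,4).
B = np.array([[ 1,  0,  0],
              [-1,  1,  0],
              [ 0, -1,  1],
              [ 0,  0, -1]], dtype=float)
L = B @ B.T                              # unweighted Laplacian, L = B B^T
m = np.array([0.7, -0.4, -0.8, 0.5])     # suppliers at nodes 1, 4; sum(m) = 0
f_bar = np.array([1.0, 0.5, 1.0])        # edge capacities

f = B.T @ np.linalg.pinv(L) @ m          # Lemma 2.1: unique balanced flows
J = np.max(np.abs(f) / f_bar)            # Definition 3.1 with E_r = all edges
print(f, J)                              # f ~ [0.7, 0.3, -0.5]; J = 0.7
```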
Next, we give a formal definition and characterization of the subset of edges with controllable flows $E_{\mathrm{cf}}$.

#### 3.2 Edges with controllable flows

Given an undirected graph $G = (V,E)$ associated to a flow network, a set of directed edges $\vec{E}$ is obtained by orienting the edges in $E$ according to the direction of the flows on them. Namely, for each $\{i,j\} \in E$, $\vec{E}$ contains either edge $(i,j)$ if $f_{ij} > 0$, or $(j,i)$ if $f_{ij} < 0$, or no edge if $f_{ij} = 0$. We also define the extended set of directed edges $\vec{E}^+$ as the set that, for each $\{i,j\} \in E$, contains both $(i,j)$ and $(j,i)$ (independently of the value of $f_{ij}$). These sets are portrayed in Figure 1a.

**Definition 3.3 (Half-cluster).** _For an acyclic undirected graph $G = (V,E)$, the half-cluster is a function $H : \vec{E}^+ \rightrightarrows V$. In particular, $H[(i,j)] = H_{ij}$ is the set of vertices in the connected component of $G \setminus \{i\}$ that contains $j$ (Figure 2a)._

**Definition 3.4 (Supplier indicator function).** _For an acyclic flow network, the supplier indicator function $\beta : \vec{E}^+ \to \{0,1\}$ is defined as_

$$\beta[(i,j)] = \beta_{ij} \triangleq \begin{cases} 1, & \text{if } V_s \cap H_{ij} \ne \emptyset, \\ 0, & \text{otherwise}. \end{cases} \tag{3.3}$$

Figure 2: (a): Half-clusters $H_{ji}$ and $H_{ij}$ of edges $(j,i)$ (left) and $(i,j)$ (right) in an example graph $(V, \vec{E}^+)$. (b): Supplier indicator function $\beta$ for several edges, in an example graph $(V, \vec{E}^+)$; upward green triangles represent suppliers; downward blue triangles denote consumers. (c): Some downstreams $D_i$ in an example graph $(V, \vec{E})$.

In simple terms, $\beta_{ij}$ is 1 if a supplier can be found in $H_{ij}$; moreover, notice that in general $\beta_{ij}$ is unrelated to $\beta_{ji}$. A graphical example is given in Figure 2b.

As stated in the next Lemma, some flows $f_{ij}$ do not depend on the amount of commodity generated by supplier vertices, and thus we will not consider them in the optimization problem. We define the set of edges with controllable flows as

$$E_{\mathrm{cf}} \triangleq \{\{i,j\} \in E \mid \beta_{ij} = 1 \wedge \beta_{ji} = 1\}. \tag{3.4}$$

**Lemma 3.5 (Non-controllable flows).** _In an acyclic flow network, the flows $f_{ij}$ for $\{i,j\} \in E \setminus E_{\mathrm{cf}}$ are independent of the supplied commodity $m_k$, $\forall k \in V_s$._

_Proof._ Consider an edge $\{i,j\} \in E \setminus E_{\mathrm{cf}}$; by (3.4), it holds that $\beta_{ij} = 0 \vee \beta_{ji} = 0$. Without loss of generality, assume that $\beta_{ij} = 0$, which means that $H_{ij}$ contains no suppliers. Then, using (2.1) for all vertices in $H_{ij}$, we have that all edges reaching a vertex in $H_{ij}$ (including $\{i,j\}$) have their flows determined only by $\{m_q\}_{q \in H_{ij}}$. As $H_{ij} \cap V_s = \emptyset$, we conclude that these flows do not depend on any $m_k$, for $k \in V_s$.

We define $V_{\mathrm{cf}}$ as the set of vertices that are reached by at least an edge in $E_{\mathrm{cf}}$, and the graph $G_{\mathrm{cf}} = (V_{\mathrm{cf}}, E_{\mathrm{cf}})$. It is immediate to verify that this graph (i) cuts out from $G$ the branches that contain only consumers, (ii) is connected, and (iii) all of its leaf vertices are suppliers. Finally, we let $\vec{E}_{\mathrm{cf}}$ be the set of directed edges obtained by orienting the edges in $E_{\mathrm{cf}}$ according to the flows, similarly to what we did to obtain $\vec{E}$ from $E$. Examples of $E_{\mathrm{cf}}$ and $\vec{E}_{\mathrm{cf}}$ are depicted in Figure 1b.

### 4 Consensus reformulation of the minimax flow problem

Next, we introduce the notions of maximum downstream flows and consumer clusters, which will then be used to reformulate the minimax flow optimization problem (Problem 3.2) as a consensus problem.

**Definition 4.1 (Maximum downstream flows and edges).** _Consider a flow network associated to an acyclic graph $G = (V,E)$. Then,_

_(i) for $i \in V$, the downstream of vertex $i$, denoted by $D_i \subseteq \vec{E}_{\mathrm{cf}}$, is the out-tree of vertex $i$ in $(V_{\mathrm{cf}}, \vec{E}_{\mathrm{cf}})$ (Figure 2c);_

_(ii) the maximum downstream flow $\phi : V \to \mathbb{R}_{\ge 0}$ is given by_

$$\phi(i) = \phi_i \triangleq \max_{(j,k) \in D_i} \frac{f_{jk}}{\bar{f}_{jk}} \ge 0; \tag{4.1}$$

_(iii) for $i \in V$, the maximum downstream edge (MDE) of vertex $i$ is $\arg\max_{(j,k) \in D_i} f_{jk}/\bar{f}_{jk} \in \vec{E}_{\mathrm{cf}}$ (Figure 3)._

Figure 3: Representation of a maximum downstream edge (MDE). Upward green triangles represent suppliers, while downward blue triangles denote consumers; in red and in parentheses, the values $f_{ij}/\bar{f}_{ij}$ are drawn. Edge $(2,3)$ is the MDE of vertex 1. Moreover, as vertex 1 is a supplier, $(2,3)$ is a maximum downstream edge of a supplier vertex (MDES; i.e., $(2,3) \in M_s$), and, as vertex 3 is a consumer, then also $(2,3) \in M_{s \to c}$.

If $i \in V_s$, we abbreviate "maximum downstream edge of a supplier vertex" as MDES. We denote by $M_s \subseteq \vec{E}_{\mathrm{cf}}$ the set of all MDESs, and by $M_{s \to c} \subseteq M_s$ the set of MDESs that have consumers as terminal vertices (see again Figure 3).

We give next two instrumental results in Lemmas 4.2 and 4.5.

**Lemma 4.2.** _In an acyclic flow network, $\vec{E}_{\mathrm{cf}} = \bigcup_{i \in V_s} D_i$._

_Proof._ We obtain a proof by contradiction, showing that if the thesis did not hold, that would cause some consumer vertices
In an** _acyclic flow network, if 𝜙𝑖_ _> 0 for all 𝑖_ ∈Vs, then there _exists a critical consumer cluster._ **Definition 4.3 (Consumer cluster). In an acyclic flow net-** _work, a consumer cluster C ⊂Vcf is a set of vertices having_ _the following properties (see Figure 5a):_ _all vertices in C are consumers (C ⊆Vc ∩Vcf), and C_ _is a connected component in Gcf = (Vcf, Ecf);_ _(ii) there are no MDESs between the vertices in(i)_ _, i.e.,_ C Ms ∩(C × C) = ∅; _(iii) any edge_ _𝑖, 𝑗_ _or_ _𝑗, 𝑖_ _, where 𝑖_ _is a consumer not_ ( ) ( ) _belonging to_ _and 𝑗_ _is a vertex in_ _, must be an_ C C _MDES;_ _(iv) there exists at least an MDES that terminates in_ _, i.e.,_ C ∃(𝑖, 𝑗) ∈Ms→c : 𝑗 ∈C. Given a consumer cluster C, we denote by E C ⊆ E[�]cf the set of directed edges that are on the boundary of C, i.e., E C ≜ {(𝑖, 𝑗) ∈ �Ecf | (𝑖 ∈C, 𝑗 ∉ C) ∨(𝑖 ∉ C, 𝑗 ∈C)}. Moreover, we denote by [ˆ] the set of all consumer clusters and note the following C facts. Firstly, C[ˆ] is finite because the number of vertices in Vcf is finite. Secondly, any two different consumer clusters C1, C2 ∈ C[ˆ] must be disjoint, because of properties (ii)-(iii) in Definition 4.3. 2Note that [�]𝑞∈S _[𝑚]𝑞_ _[<][ 0, rather than][ �]𝑞∈S_ _[𝑚]𝑞_ [=][ 0, because otherwise the] vertices in S would not be in Vcf . _Proof. First, note that the hypothesis 𝜙𝑖_ _> 0, ∀𝑖_ ∈Vs implies that all suppliers have a MDE (that is a MDES; see Definition 4.1.(iii)). This, in conjunction with the facts that the network has an acyclic structure and that the number of vertices is finite, implies that there exists at least a MDES terminating in a consumer, i.e., Ms→c ≠ ∅, which yields C[ˆ] ≠ ∅. Next, we prove the thesis by contradiction. Negating the existence of a critical consumer cluster, we have, from (4.4), ∀C ∈ C[ˆ], ∃(𝑖, 𝑗) ∈E C : (𝑖, 𝑗) ∉ Ms→c ∨ _𝑗_ ∉ C. (4.5) Let us consider some C1 ∈ Cˆ and assume without loss of generality that the edge _𝑖, 𝑗_ referenced in (4.5) is such that ( ) _𝑖_ ∉ C1 and 𝑗 ∈C1 (i.e., (𝑖, 𝑗) ends in C1; see Figure 5c). In this case, it remains to be proved that assuming (𝑖, 𝑗) ∉ Ms→c leads to a contradiction. Indeed, in this case either 𝑖 is a supplier or it is a consumer. In this latter case, by Definition 4.3 (see in particular point (iii)), 𝑖 must belong to C1, which is against the hypothesis. If 𝑖 is a supplier instead, then it must have some MDES, say _𝑎_ ∈Ms, that cannot be (𝑖, 𝑗) or belong to C1 by Definition 4.3 (point (ii)). Then, either 𝑎 ends in a consumer or in a supplier. If it ends in a consumer, then 𝑎 must end in some consumer cluster C2 different from C1, given the property that the graph is acyclic by hypothesis. On the other hand, if 𝑎 ends in a supplier, then that supplier must have its own MDES and the argument can be repeated until an MDES ending in a consumer is found; hence, this MDES ends in a consumer cluster, which is different from any other defined earlier on in the procedure (because the graph is acyclic). As this argument can be repeated ad infinitum, we G get a contradiction (because [ˆ] must be finite) and the theorem C remains proved. A similar argument could be used to reach a contradiction if the edge (𝑖, 𝑗) is assumed to be such that 𝑖 ∈C1 and 𝑗 ∉ C1 (i.e., (𝑖, 𝑗) does not end in C1). Therefore, we conclude that (4.5) does not hold, which corresponds to the thesis. We are now ready to present our main result. **Theorem 4.6 (Consensus achieves optimization). 
In an** _acyclic flow network, if 𝜙𝑖_ = 𝜙[∗] _for all 𝑖_ ∈Vs and for _some 𝜙[∗]_ ∈ R≥0, then the cost function 𝐽 _(see Definition 3.1)_ ----- (a) C E C (b) C[∗] E C[∗] (c) C1 _𝑎_ _𝑗_ _𝑖_ E C1 |Col1|C| |---|---| E C2 V3 (lp = 3) V0 (lp = 0) |C1 𝑎 C2|C2|Col3| |---|---|---| |𝑎 𝑗 𝑖||| |||| Figure 5: (a): A consumer cluster (see Definition 4.3); upward C green triangles represent suppliers, while downward blue triangles denote consumers; heavier arrows denote MDESs; dots represents connected components of vertices. (b): A critical consumer cluster (see Lemma 4.5). (c): Situation described C[∗] in the proof of Lemma 4.5. _is minimized with respect to ms._ _Proof. From (3.1), exploiting Lemma 4.2, and using (4.1), we_ have �� _𝑓𝑖𝑗_ �� _𝑓𝑖𝑗_ _𝑓𝑖𝑗_ _𝐽_ = max = max = max = max {𝑖, 𝑗 }∈Ecf _𝑓¯𝑖𝑗_ (𝑖, 𝑗) ∈ E[�]cf _𝑓¯𝑖𝑗_ (𝑖, 𝑗) ∈[�]𝑖∈Vs [D]𝑖 _𝑓¯𝑖𝑗_ _𝑖_ ∈Vs _[𝜙][𝑖][.]_ (4.6) From (4.6), it is obvious that, if 𝜙[∗] = 0, then 𝐽 = 0, which clearly corresponds to the lowest possible value of 𝐽. We consider next the case that 𝜙[∗] _> 0._ For the sake of brevity, let 𝑥𝑖𝑗 ≜ _𝑓𝑖𝑗_ / _𝑓[¯]𝑖𝑗_ . From Lemma 4.5, there exists a critical consumer cluster, and using (4.6) and the fact that C[∗] E C[∗] ⊆ E[�]cf we have _𝐽_ ≥ _𝐽[˜]_ ≜ max (4.7) (𝑖, 𝑗) ∈EC∗ _[𝑥][𝑖𝑗]_ _[.]_ Then, from (2.1), it is straightforward to compute that ∑︁ ∑︁ _𝑓𝑖𝑗_ = − _𝑚𝑘_ _,_ (𝑖, 𝑗) ∈EC∗ _𝑘_ ∈C[∗] which, letting 𝑚 C[∗] ≜ − [�]𝑘 ∈C[∗] _[𝑚]𝑘_ _[>][ 0, can be rewritten as]_ � (𝑖, 𝑗) ∈EC∗ _[𝑥]𝑖𝑗_ _[𝑓][¯]𝑖𝑗_ [=][ 𝑚] C[∗][. Therefore, considering the problem] min _𝐽,˜_ _𝑥𝑖𝑗_ ∈R≥0, (𝑖, 𝑗) ∈EC∗ ∑︁ s.t. _𝑥𝑖𝑗_ _𝑓[¯]𝑖𝑗_ = 𝑚 C[∗], (𝑖, 𝑗) ∈EC∗ and recalling (4.7), it is clear that the minimum value of 𝐽[˜] is achieved when all 𝑥𝑖𝑗 s are equal. At this point, by hypothesis, _𝑥𝑖𝑗_ = 𝜙[∗], ∀(𝑖, 𝑗) ∈E C[∗], and thus 𝐽[˜] = 𝜙[∗] is minimal. From (4.6) and the hypothesis, it also holds that 𝐽 = 𝜙[∗]; therefore, from (4.7), 𝐽 is also minimized. Figure 6: Grouping of vertices in accordance to their values of _𝑙p, defined in the proof of Lemma 5.1, for an example graph_ _,_ . (V E)[�] Note that Theorem 4.6 offers only a sufficient condition for the solution of Problem 3.2. ### 5 Distributed estimation of maximum down- stream flows In this section, we study how the maximum downstream flows 𝜙𝑖 can be estimated by each node using a recursive process that only requires local information. Then, in Section 6, we embed such estimation process in a heuristic distributed control approach to achieve consensus of the maximum downstream flows, and hence solve Problem 3.2 via Theorem 4.6, for the case of electric microgrids. Let us denote by V𝑖[out] the out-neighborhood of vertex 𝑖 in the graph _,_ . (V E)[�] **Lemma 5.1 (Reformulation of maximum downstream flows).** _In an acyclic flow network, the maximum downstream flow_ _𝜙𝑖_ _(see Definition 4.1.(ii)) can be found by computing_ � _𝑓𝑖𝑗_ � _𝜙𝑖_ = max𝑗 ∈V𝑖[out] _𝛽𝑖𝑗_ _𝑓¯𝑖𝑗_ _, 𝜙_ _𝑗_ _._ (5.1) _Proof. For the sake of simplicity and without loss of generality,_ assume that 𝑓[¯]𝑖𝑗 = 1 and 𝛽𝑖𝑗 = 1 for all (𝑖, 𝑗) ∈ E[�] [+]. In the directed acyclic graph (V, E)[�], let us denote by 𝑙p (𝑖) the maximum length of all directed paths starting from vertex 𝑖; then V0, V1, V2, . . . are the sets of vertices that have 𝑙p = 0, 𝑙p = 1, 𝑙p = 2, . . ., respectively (see Figure 6). We show the thesis, i.e., that (5.1) is equivalent to (4.1), for the subsets V0, V1, V2, . . . one at a time. - 𝑘 ∈V0. As D𝑘 = V𝑘[out] = ∅, both (4.1) and (5.1) yield _𝜙𝑘_ = 0, _𝑘_ ∈V0. (5.2) - 𝑗 ∈V1. We have D 𝑗 = {( 𝑗, 𝑘) | 𝑘 ∈V𝑗[out]}. 
This, together with (5.2), means that both (4.1) and (5.1) give - 𝑖 ∈V2. Now, D𝑖 = {(𝑖, 𝑗) | 𝑗 ∈V𝑖[out]} ∪{( 𝑗, 𝑘) | 𝑗 ∈ � � _𝜙_ _𝑗_ = max _𝑓_ _𝑗𝑘_ _𝑘_ ∈V𝑗[out] _,_ _𝑗_ ∈V1. (5.3) ----- V𝑖[out], 𝑘 ∈V𝑗[out]}, From (4.1), we have �� � _𝜙𝑖_ = max _𝑓𝑖𝑗_ � _,_ _𝑖_ ∈V2. �� � � � _𝜙𝑖_ = max _𝑓𝑖𝑗_ _𝑗_ ∈V𝑖[out] _[,]_ _𝑓_ _𝑗𝑘_ _𝑗_ ∈V𝑖[out],𝑘 ∈V𝑗[out] _,_ _𝑖_ ∈V2. (5.4) Then, using (5.3), (5.4) can be rewritten as - ℎ ∈{V2, . . ., V𝑁 −1}. The above steps can be repeated to show the thesis for the remaining nodes. To compute the generator indicator function 𝛽 appearing in (5.5) (and defined in (3.3)), we use the following algorithm, which ideally converges arbitrarily fast. For each _𝑖, 𝑗_, ( ) ∈ E[�] [+] we define 𝛽[ˆ]𝑖𝑗, which is initialised to 1 if 𝑗 ∈Vs, or 0 otherwise. Then, it is straightforward to verify that any 𝛽[ˆ]𝑖𝑗 converges exactly to 𝛽𝑖𝑗 in at most 𝑁 − 2 steps, repeating the following Boolean assignments: �� � _𝜙𝑖_ = max _𝑓𝑖𝑗_ � � _𝑗_ ∈V𝑖[out] _[,]_ _𝜙_ _𝑗_ _𝑗_ ∈V𝑖[out] � _,_ _𝑖_ ∈V2, which corresponds to (5.1). - ℎ ∈{V3, . . ., V𝑁 −1}. The reasoning presented at the above point can be easily repeated to show that (5.1) is equivalent to (4.1) for all remaining vertices. In practice, the calculation in (5.1) can be implemented through an arbitrarily fast dynamical estimation system, as stated in the next proposition. **Proposition 5.2 (Distributed estimation of maximum down-** stream flows). In an acyclic flow network, we let _𝜙[ˆ]_ : V × R≥0 → R—denoting _𝜙[ˆ](𝑖, 𝑡) by_ _𝜙[ˆ]𝑖_ (𝑡)—be the solution to _𝜙ˆ�𝑖_ (𝑡) = −𝑘 _𝜙_ �𝜙ˆ𝑖 (𝑡) − _𝑗max∈V𝑖[out]_ �𝛽𝑖𝑗 _𝑓𝑓¯𝑖𝑗𝑖𝑗_ _,_ _𝜙[ˆ]_ _𝑗_ (𝑡)�[�] _,_ _𝜙[ˆ]𝑖_ (0) = 0, (5.5) ∀𝑖 ∈V. Assume the 𝑓𝑖𝑗 _s are constant, or 𝑘_ _𝜙_ ∈ R>0 is _large enough so that the 𝑓𝑖𝑗_ _s can be considered constant_ _with respect to the dynamics of the_ _𝜙[ˆ]𝑖s. Then,_ _𝜙[ˆ]𝑖_ _converges_ _to 𝜙𝑖, ∀𝑖_ ∈V. _Proof. As in the Proof of Lemma 5.1, for simplicity and without_ loss of generality, assume that 𝑓[¯]𝑖𝑗 = 1 and 𝛽𝑖𝑗 = 1 for all (𝑖, 𝑗) ∈ E[�] [+]; moreover, consider again the sets V0, V1, V2, . . . defined in that Proof and depicted in Figure 6. - 𝑘 ∈V0. From (5.5), we have _𝜙ˆ�𝑘_ (𝑡) = −𝑘 _𝜙_ _𝜙ˆ𝑘_ (𝑡), _𝜙ˆ𝑘_ (0) = 0, _𝑘_ ∈V0. Thus, for 𝑘 ∈V0, ∀𝑡, 𝜙[ˆ]𝑘 (𝑡) = 0 = 𝜙𝑘 (see (5.1)). - 𝑗 ∈V1. From (5.5) and what we stated at the previous point, we have _𝛽ˆ𝑖𝑗_ ← _𝛽ˆ𝑖𝑗_ ∨ [�]� � � _𝛽ˆ_ _𝑗𝑘_ [�]� _,_ ∀(𝑖, 𝑗) ∈ E[�] [+]. _𝑘_ ∈V |𝑘≠𝑖, ( 𝑗,𝑘) ∈ E[�] [+] � Next, we will show through a representative application to microgrids that the distributed approach to estimate the maximum downstream flows can be used together with Theorem 4.6 to synthesize a heuristic control strategy able to solve the minimax flow optimization problem in a distributed manner. ### 6 Application to microgrids We consider an AC microgrid [34] whose communication topology is described by an undirected, connected, acyclic, and weighted graph G = (V, E), with 𝑁 ≜ |V| and 𝑁 E ≜ |E|. We let Vs ≜ (1, . . ., 𝑁s), where 𝑁s < 𝑁, denote the set of power generators (suppliers), whereas Vc ≜ (𝑁s + 1, . . ., 𝑁) denotes loads (consumers). We let Q and B be defined as in Section 2. 
### 6 Application to microgrids

We consider an AC microgrid [34] whose communication topology is described by an undirected, connected, acyclic, and weighted graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, with $N \triangleq |\mathcal{V}|$ and $N_{\mathcal{E}} \triangleq |\mathcal{E}|$. We let $\mathcal{V}_s \triangleq (1, \ldots, N_s)$, where $N_s < N$, denote the set of power generators (suppliers), whereas $\mathcal{V}_c \triangleq (N_s + 1, \ldots, N)$ denotes loads (consumers). We let $\mathcal{Q}$ and $\mathbf{B}$ be defined as in Section 2.

Assuming (i) the generators are distributed energy resources with voltage source converters as power electronic interfaces, (ii) resistive loads, (iii) lossless lines, (iv) quasi-synchronization, and (v) constant voltages, the frequency dynamics can be described as [28, 35]:
$$D_i \dot\delta_i(t) = P_i - \sum_{j=1}^{N} A_{ij} \sin(\delta_i(t) - \delta_j(t)), \quad i \in \mathcal{V}_s, \tag{6.1a}$$
$$0 = P_i - \sum_{j=1}^{N} A_{ij} \sin(\delta_i(t) - \delta_j(t)), \quad i \in \mathcal{V}_c, \tag{6.1b}$$
where $\delta_i(t)$ is the voltage phase angle at node $i$ at time $t$; $P_i$ is the power supplied or consumed at node $i$, with $P_i > 0$ if $i \in \mathcal{V}_s$ and $P_i \leq 0$ if $i \in \mathcal{V}_c$; $A_{ij} = E_i E_j |Y_{ij}|$, where $E_i$ is the voltage magnitude at node $i$ and $Y_{ij}$ is the admittance on the line between nodes $i$ and $j$ ($Y_{ij} = Y_{ji}$); $D_i > 0$ is the droop coefficient of generator $i$; $\xi_{ij}(t) = A_{ij} \sin(\delta_i(t) - \delta_j(t))$ is the power flow from $i$ to $j$ at time $t$. Each edge $\{i,j\}$ can only bear a power flow equal (in absolute value) to $\bar f_{ij} \in \mathbb{R}_{>0}$ before breaking down or being disconnected. For compactness, we also define $\mathbf{P} \triangleq [P_1 \cdots P_N]^{\mathrm T}$, $\mathbf{P}_s \triangleq [P_1 \cdots P_{N_s}]^{\mathrm T}$, $\mathbf{D} \triangleq [D_1 \cdots D_{N_s}\ 0 \cdots 0]^{\mathrm T} \in \mathbb{R}^N$, $\boldsymbol{\xi}(t) \triangleq [\xi_{ij}(t)]^{\mathrm T}_{(i,j) \in \mathcal{Q}} \in \mathbb{R}^{N_{\mathcal{E}}}$, and $\bar{\mathbf{f}} \triangleq [\bar f_{ij}]^{\mathrm T}_{(i,j) \in \mathcal{Q}} \in \mathbb{R}^{N_{\mathcal{E}}}$.
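For concreteness, a minimal sketch of how the line flows $\xi_{ij}$ and the droop dynamics (6.1a) can be evaluated numerically is given below. The network data (three-node chain, admittance values, angles) are illustrative placeholders, not the benchmark used later.

```python
import numpy as np

# Minimal sketch: flows xi_ij = A_ij * sin(delta_i - delta_j) and the
# right-hand side of (6.1a) for the single supplier of a 3-node chain.

A = np.array([[0.0, 10.0, 0.0],
              [10.0, 0.0, 10.0],
              [0.0, 10.0, 0.0]])        # A_ij = E_i * E_j * |Y_ij|
delta = np.array([0.05, 0.0, -0.03])    # voltage phase angles [rad]
P = np.array([1.0, 0.0, -0.9])          # node 0 supplies, node 2 consumes
D0 = 5.0                                # droop coefficient of supplier 0

flows = A * np.sin(delta[:, None] - delta[None, :])  # antisymmetric matrix
delta_dot_0 = (P[0] - flows[0].sum()) / D0           # (6.1a) for i = 0
print(round(flows[0, 1], 4), round(delta_dot_0, 4))
```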
#### 6.1 Optimization problem

The asymptotic behaviour of (6.1) was characterised in [28] through the following theorem.

**Theorem 6.1 (Steady-state solution [28]).** _Let $\mathbf{f} \in \mathbb{R}^{N_{\mathcal{E}}}$ be defined implicitly by_
$$\mathbf{B}\mathbf{f} = \mathbf{P} - \omega\mathbf{D}, \tag{6.2}$$
_where $\omega \triangleq \left(\sum_{i \in \mathcal{V}} P_i\right)/\left(\sum_{i \in \mathcal{V}_s} D_i\right)$. The following statements are equivalent:_

_(i) A unique locally stable phase-locked solution $\delta_1(t), \ldots, \delta_N(t)$ of (6.1) exists such that $\lim_{t \to +\infty} \boldsymbol{\xi}(t) = \mathbf{f}$ and $\lim_{t \to +\infty} \dot\delta_i(t) = \omega$ for all $i \in \mathcal{V}$;_

_(ii) $|f_{ij}|/A_{ij} < 1$ for all $\{i,j\} \in \mathcal{E}$._

We assume that in (6.1) the terms $A_{ij}$ are large enough that (ii) in Theorem 6.1 holds. Moreover, we highlight that (6.2) is a flow network such as (2.1), where $\mathbf{m} = \mathbf{P} - \omega\mathbf{D}$, noting that $\sum_{i \in \mathcal{V}} m_i = \sum_{i \in \mathcal{V}} P_i - \omega \sum_{i \in \mathcal{V}_s} D_i = 0$. Therefore, to minimize the likelihood of line faults, we aim to regulate the power values $\mathbf{P}_s$ in a distributed fashion so as to solve
$$\min_{\mathbf{P}_s} \max_{\{i,j\} \in \mathcal{E}_{\mathrm{cf}}} \frac{|f_{ij}|}{\bar f_{ij}}, \quad \text{s.t.} \quad \mathbf{B}\mathbf{f} = \mathbf{P} - \omega\mathbf{D}, \quad |\mathbf{f}| < \bar{\mathbf{f}}, \quad \mathbf{P}_{\min} \leq \mathbf{P}_s \leq \mathbf{P}_{\max}, \tag{6.3}$$
which is a particularization of Problem 3.2, and where $\mathcal{E}_{\mathrm{cf}}$ is defined as in (3.4), and $\mathbf{P}_{\min}, \mathbf{P}_{\max} \in \mathbb{R}^{N_s}_{>0}$.

We remark that the problem in (6.3) does not aim at minimizing the economic cost of operation. Therefore, if a network operator wishes to keep costs low, they might also alternate between cost-first strategies and prevention-first strategies, depending on the criticality of the current operating conditions, e.g., when the network is becoming particularly congested, or when some of the suppliers are shut down.

#### 6.2 Heuristic distributed control approach

Recall that in a flow network, according to Theorem 4.6, Problem 3.2 is solved if the maximum downstream flows $\phi_i$, $\forall i \in \mathcal{V}_s$, achieve consensus. We observed heuristically that this happens if (i) the suppliers' commodity ($m_i$) is taken as a function of time and varied continuously with the law
$$\dot m_i(t) = -k \left( \hat\phi_i(t) - \hat\phi_{\mathrm{avg}}(t) \right), \quad \forall i \in \mathcal{V}_s, \tag{6.4}$$
where $k \in \mathbb{R}_{>0}$ and $\hat\phi_{\mathrm{avg}}(t) \triangleq \mathrm{mean}\{\hat\phi_i(t)\}_{i \in \mathcal{V}_s}$, and (ii) it holds that $\mathbf{m}_{\min} < \mathbf{m}_s(t) < \mathbf{m}_{\max}$ at all time (see § 2).

On the basis of this observation, we let $P_i$, $i \in \mathcal{V}_s$ (see (6.1)) be functions of time, and define the Boolean quantities
$$\gamma_i(t) \triangleq \begin{cases} 1, & \text{if } P_{\min,i} < P_i(t) < P_{\max,i}, \\ 0, & \text{otherwise}, \end{cases} \quad i \in \mathcal{V}_s;$$
we say that generator $i$ has saturated if $\gamma_i = 0$. We also define³
$$\hat\phi^{\text{n-sat}}_{\mathrm{avg},i}(t) \triangleq \mathrm{mean}\left( \{\hat\phi_i(t)\} \cup \{\hat\phi_j(t)\}_{j \in \mathcal{V}_s \mid j \neq i,\ \gamma_j = 1} \right), \quad i \in \mathcal{V}_s,$$
$$\hat\phi^{\text{sat}}_{\max,i}(t) \triangleq \max \{\hat\phi_j(t)\}_{j \in \mathcal{V}_s \mid j \neq i,\ \gamma_j = 0}, \quad i \in \mathcal{V}_s;$$
in practice, $\hat\phi^{\text{n-sat}}_{\mathrm{avg},i}$ is an average computed over non-saturated generators, always including $i$, whereas $\hat\phi^{\text{sat}}_{\max,i}$ is a maximum computed over saturated generators, always excluding $i$. Omitting time dependence for the sake of brevity, we propose to select $P_i$, $\forall i \in \mathcal{V}_s$, according to the law
$$\dot P_i = \begin{cases} -k_P \left( \hat\phi_i - \hat\phi_{\mathrm{avg}} \right), & \text{if } \gamma_k = 1,\ \forall k \in \mathcal{V}_s, & (6.5\mathrm{a}) \\ \tilde P_i, & \text{if } (\exists k \in \mathcal{V}_s : \gamma_k = 0) \wedge (\gamma_i = 1 \vee \zeta_i = 1), & (6.5\mathrm{b}) \\ 0, & \text{otherwise}, & (6.5\mathrm{c}) \end{cases}$$
where $k_P \in \mathbb{R}_{>0}$, and, for $i \in \mathcal{V}_s$,
$$\tilde P_i \triangleq -k_P \left( \hat\phi_i - \hat\phi^{\text{n-sat}}_{\mathrm{avg},i} \right) - k_P^\gamma \left( \hat\phi^{\text{n-sat}}_{\mathrm{avg},i} - \hat\phi^{\text{sat}}_{\max,i} \right),$$
$$\zeta_i \triangleq \begin{cases} 1, & \text{if } \left( P_i \leq P_i^{\min} \wedge \tilde P_i > 0 \right) \vee \left( P_i \geq P_i^{\max} \wedge \tilde P_i < 0 \right), \\ 0, & \text{otherwise}, \end{cases}$$
with $k_P^\gamma \in \mathbb{R}_{>0}$. Note that $\zeta_i = 1$ if $i$ has saturated, but applying control law (6.5b) would bring $P_i$ closer to its admissible region (i.e., $P_{\min,i} < P_i < P_{\max,i}$).

In (6.5), the main purpose of (6.5b) and (6.5c) is to factor in the constraint on power generation. Indeed, when no generators have saturated, (6.5a) is active, resembling (6.4), causing $\phi_i$, $\forall i \in \mathcal{V}_s$ to converge (which solves (6.3) by virtue of Theorem 4.6). Nonetheless, if at least one generator saturates, (6.5b) becomes active. In (6.5b), the term $\hat\phi_i - \hat\phi^{\text{n-sat}}_{\mathrm{avg},i}$ achieves convergence of $\phi_i$, $\forall i \in \mathcal{V}_s : \gamma_i = 1$ (non-saturated generators), whereas the term $\hat\phi^{\text{n-sat}}_{\mathrm{avg},i} - \hat\phi^{\text{sat}}_{\max,i}$ reduces the gap between the $\phi_i$s of non-saturated generators and the $\phi_i$s of saturated ones. Both effects decrease $\max_{i \in \mathcal{V}_s} \phi_i$ as much as possible, thus achieving the optimum value of $J$ (see (3.1)). To take into account more constraints or objectives, it might be required to further modify the control law.

³ The estimates $\hat\phi_i(t)$ are computed using the current values of the flows, i.e., replacing $f_{ij}$ with $\xi_{ij}(t)$ in (5.5). Moreover, in practice, $\hat\phi_{\mathrm{avg}}$, $\hat\phi^{\text{n-sat}}_{\mathrm{avg},i}$, and $\hat\phi^{\text{sat}}_{\max,i}$ can be estimated locally at the nodes through arbitrarily fast consensus protocols and simple information propagation schemes; e.g., see [33].
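A minimal sketch of the saturation-aware update (6.5) follows. The helper name `p_dot`, the fallback used when no other generator has saturated, and all numerical values are illustrative assumptions; in the actual scheme the averages and maxima would be obtained through the local consensus protocols mentioned in the footnote.

```python
import numpy as np

# Minimal sketch: evaluating the supplier-side update (6.5) at one node i.

def p_dot(i, P, P_min, P_max, phi_hat, k_P=40.0, k_P_gamma=40.0):
    gamma = (P_min < P) & (P < P_max)            # saturation indicators
    if gamma.all():                              # (6.5a): nobody saturated
        return -k_P * (phi_hat[i] - phi_hat.mean())
    non_sat = [j for j in range(len(P)) if gamma[j] and j != i] + [i]
    sat = [j for j in range(len(P)) if not gamma[j] and j != i]
    avg_ns = phi_hat[non_sat].mean()             # always includes i
    max_s = phi_hat[sat].max() if sat else avg_ns  # fallback: zero gap term
    P_tilde = -k_P * (phi_hat[i] - avg_ns) - k_P_gamma * (avg_ns - max_s)
    zeta = (P[i] <= P_min[i] and P_tilde > 0) or (P[i] >= P_max[i] and P_tilde < 0)
    if gamma[i] or zeta:                         # (6.5b)
        return P_tilde
    return 0.0                                   # (6.5c)

P = np.array([1.2, 0.9, 1.0])
print(p_dot(0, P, 0.8 * P, 1.2 * P, np.array([0.9, 0.5, 0.6])))
```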
#### 6.3 Numerical simulations

_Setup._ We tested our distributed estimation and control strategy (5.5), (6.5) on a benchmark problem and compared it to an offline centralized solution to (6.3). We used a slightly modified version of the standard CIGRE microgrid benchmark [36], as depicted in Figure 7. All computations were carried out in Matlab [37]; the centralized solution to (6.3) was found using the fminimax function; the parameters we used are $\mathbf{P}_{\min} = 0.8\,\mathbf{P}_s$, $\mathbf{P}_{\max} = 1.2\,\mathbf{P}_s$, $k_\phi = 200$, $k_P = 40$, $k_P^\gamma = 40$.

We simulated a scenario where the power values $P_i$ are initially assigned as in Figure 7; then, at time $t = 6$, $P_9$, $P_{10}$, $P_{11}$ become $-8$, $-4$, $-4$, respectively; at time $t = 12$, the original power values are restored. These rapid fluctuations may represent the effect due to the plug-in and plug-out of multiple devices at once. In Figure 8, we report the results obtained by applying periodically an offline centralized solution to (6.3). To account for the centralized and offline nature of this scheme, we consider a 1.5 s delay in the application of the control values. In Figure 9, we show the results of applying our online distributed control strategy (6.5). As a metric of performance, we consider $J^{\boldsymbol{\xi}}(t) \triangleq \max_{\{i,j\} \in \mathcal{E}_{\mathrm{cf}}} |\xi_{ij}(t)|/\bar f_{ij}$; note that at steady state, when $\boldsymbol{\xi} \to \mathbf{f}$ (see Theorem 6.1), we have $J^{\boldsymbol{\xi}}(t) \to J$ (see § 3.1).

_Results._ For $0 \leq t < 6$, at steady state, the optimal value $J = 0.584$ is obtained by both strategies. In this time window, only (6.5a) is active, and convergence among all $\phi_i$, $i \in \mathcal{V}_s$, is achieved, providing a practical demonstration of Theorem 4.6. For $6 \leq t < 12$, the distributed control strategy achieves a maximum value (over time) of $J^{\boldsymbol{\xi}}$ equal to 0.915, while the centralized scheme reaches 1.027, which would trigger a fault ($J^{\boldsymbol{\xi}} = 1$ is a fault condition). This is an effect of the delay introduced to account for the centralized scheme being offline. At steady state, both strategies yield $J = 0.906$. In this time window, several generators saturate; still, our distributed control strategy successfully achieves the optimal value of the cost function $J$, while preserving feasibility. For $12 \leq t \leq 18$, both strategies yield the same optimal value of the cost function, that is $J = 0.587$.

_Secondary controller._ We also verified that (6.3) can be solved by controlling $\mathbf{D}$ (i.e., the $D_i$s in (6.1a)), rather than $\mathbf{P}_s$: this can be useful if one also wants to use a secondary controller [28, (16)] (to control $\mathbf{P}_s$) with the aim to regulate the value of $\omega$ (defined in Theorem 6.1). In that case, (6.5) is applied to $\dot{\mathbf{D}}$, rather than to $\dot{\mathbf{P}}_s$, and the right-hand side of (6.5) is multiplied by $-1$ (because $\mathbf{D}$ appears with the minus sign in (6.2)). The results we obtain are qualitatively the same as those in Figure 9, and thus we omit them here for brevity.

Figure 7: Microgrid topology used in Section 6.3, with active power values expressed in kW. Upward green triangles are generators, while downward blue triangles are loads. Dotted edges are those in $\mathcal{E} \setminus \mathcal{E}_{\mathrm{cf}}$. The values of the power flows $\xi_{ij}$ are the optimal ones with respect to (3.2), computed with the Matlab fminimax function, and are reported on the edges. The fractions $|\xi_{ij}|/\bar f_{ij}$ are reported in brackets and the colors of the edges are a measure of proximity to failure.

### 7 Conclusion

We studied the minimax flow problem on acyclic networks showing that, by introducing the notion of maximum downstream flows, it can be reformulated as the problem of achieving their consensus. We then proposed a distributed estimation strategy to evaluate maximum downstream flows. We applied our results to the problem of preventing overcurrents in a droop-controlled AC microgrid via a distributed control strategy based on our approach. Our numerical experiments show that the distributed strategy is at least as effective as, and sometimes better than, the more traditional centralized solution strategy.
_Extension to cyclic graphs._ Future research will address the extension of the approach to solve minimax flow problems on cyclic networks. This is particularly important in applications such as transmission grids, where the network can have a meshed structure. In this paper, the assumption that the graph is acyclic (i) implies that the maximum downstream flow (MDF) of a supplier quantifies how much that node is contributing to network congestion, and (ii) is used to allow distributed computation of the MDFs. Then, leveraging (i), the minimax flow problem is solved by balancing the MDFs. The main challenge associated with extending the results presented here to cyclic graphs will be to design quantities analogous to the MDFs that satisfy these two properties.

### References

[1] A. R. Bergen and D. J. Hill, "A structure preserving model for power system stability analysis," IEEE Transactions on Power Apparatus and Systems, vol. 100, no. 1, pp. 25-35, 1981.
[2] J. Burgschweiger, B. Gnädig, and M. C. Steinbach, "Optimization models for operative planning in drinking water networks," Optimization and Engineering, vol. 10, no. 1, pp. 43-73, 2009.
[3] E. Lovisari, G. Como, and K. Savla, "Stability of monotone dynamical flow networks," in IEEE Conference on Decision and Control, Los Angeles, CA, USA, 2014, pp. 2384-2389.
[4] S. L. Hakimi, "Optimum locations of switching centers and the absolute centers and medians of a graph," Operations Research, vol. 12, no. 3, pp. 450-459, 1964.
[5] M. E. O'Kelly and H. J. Miller, "Solution strategies for the single facility minimax hub location problem," Papers in Regional Science, vol. 70, no. 4, pp. 367-380, 1991.
[6] A. M. Campbell, T. J. Lowe, and L. Zhang, "Upgrading arcs to minimize the maximum travel time in a network," Networks, vol. 47, no. 2, pp. 72-80, 2006.
[7] P. L. Hammer, "Time-minimizing transportation problems," Naval Research Logistics Quarterly, vol. 16, no. 3, pp. 345-357, 1969.
[8] R. S. Garfinkel and M. R. Rao, "The bottleneck transportation problem," Naval Research Logistics Quarterly, vol. 18, no. 4, pp. 465-472, 1971.
[9] T. Ichimori, H. Ishii, and T. Nishida, "Finding the weighted minimax flow in a polynomial time," Journal of the Operations Research Society of Japan, vol. 23, no. 3, pp. 268-272, 1980.
[10] R. K. Ahuja, "Algorithms for the minimax transportation problem," Naval Research Logistics Quarterly, vol. 33, no. 4, pp. 725-739, 1986.
[11] A. Nedić and J. Liu, "Distributed optimization for control," Annual Review of Control, Robotics, and Autonomous Systems, vol. 1, no. 1, pp. 77-103, 2018.
[12] B. Gharesifard and J. Cortés, "Distributed convergence to Nash equilibria in two-network zero-sum games," Automatica, vol. 49, no. 6, pp. 1683-1692, 2013.
[13] S. Yang, J. Wang, and Q. Liu, "Cooperative-competitive multiagent systems for distributed minimax optimization subject to bounded constraints," IEEE Transactions on Automatic Control, vol. 64, no. 4, pp. 1358-1372, 2019.
[14] A. Jadbabaie, A. Ozdaglar, and M. Zargham, "A distributed Newton method for network optimization," in Proceedings of the 48th IEEE Conference on Decision and Control (CDC) held jointly with 2009 28th Chinese Control Conference, Shanghai, China, 2009, pp. 2736-2741.
[15] M. Zargham, A. Ribeiro, A. Ozdaglar, and A. Jadbabaie, "Accelerated dual descent for network flow optimization," IEEE Transactions on Automatic Control, vol. 59, no. 4, pp. 905-920, 2014.
[16] S. Z. Anaraki and M. Kalantari, "Acceleration of distributed minimax flow optimization in networks," in Annual Conference on Information Sciences and Systems, Baltimore, MD, USA, 2011, pp. 1-5.
[17] A. A. Memon and K. Kauhaniemi, "A critical review of AC microgrid protection issues and available solutions," Electric Power Systems Research, vol. 129, pp. 23-31, 2015.
[18] A. Hooshyar and R. Iravani, "Microgrid protection," Proceedings of the IEEE, vol. 105, no. 7, pp. 1332-1353, 2017.
[19] S. A. Hosseini, H. A. Abyaneh, S. H. H. Sadeghi, F. Razavi, and A. Nasiri, "An overview of microgrid protection methods and the factors involved," Renewable and Sustainable Energy Reviews, vol. 64, pp. 174-186, 2016.
[20] M. Khederzadeh, "Identification and prevention of cascading failures in autonomous microgrid," IEEE Systems Journal, vol. 12, no. 1, p. 8, 2018.
[21] P. Nahata, S. Mastellone, and F. Dörfler, "A decentralized switched system approach to overvoltage prevention in PV residential microgrids," IFAC-PapersOnLine, vol. 50, no. 1, pp. 6630-6635, 2017.
[22] M. Goyal, A. Ghosh, and F. Shahnia, "Overload prevention in an autonomous microgrid using battery storage units," in IEEE PES General Meeting, National Harbor, MD, USA, 2014.
[23] A. H. Etemadi, E. J. Davison, and R. Iravani, "A decentralized robust control strategy for multi-DER microgrids - Part I: Fundamental concepts," IEEE Transactions on Power Delivery, vol. 27, no. 4, pp. 1843-1853, 2012.
[24] J. Shah, B. F. Wollenberg, and N. Mohan, "Decentralized power flow control for a smart micro-grid," in 2011 IEEE Power and Energy Society General Meeting, San Diego, CA, 2011, pp. 1-6.
[25] N. Cai and J. Mitra, "A decentralized control architecture for a microgrid with power electronic interfaces," in North American Power Symposium 2010, Arlington, TX, USA, 2010, pp. 1-8.
[26] S. Anand and B. G. Fernandes, "Reduced-order model and stability analysis of low-voltage DC microgrid," IEEE Transactions on Industrial Electronics, vol. 60, no. 11, pp. 5040-5049, 2013.
[27] Y. Gu, X. Xiang, W. Li, and X. He, "Mode-adaptive decentralized control for renewable DC microgrid with enhanced reliability and flexibility," IEEE Transactions on Power Electronics, vol. 29, no. 9, pp. 5072-5080, 2014.
[28] J. W. Simpson-Porco, F. Dörfler, and F. Bullo, "Synchronization and power sharing for droop-controlled inverters in islanded microgrids," Automatica, vol. 49, no. 9, pp. 2603-2611, 2013.
[29] C. Bersani, H. Dagdougui, A. Ouammi, and R. Sacile, "Distributed robust control of the power flows in a team of cooperating microgrids," IEEE Transactions on Control Systems Technology, vol. 25, no. 4, pp. 1473-1479, 2017.
[30] T. Ichimori, H. Ishii, and T. Nishida, "Weighted minimax real-valued flows," Journal of the Operations Research Society of Japan, vol. 24, no. 1, pp. 52-60, 1981.
[31] R. Penrose, "A generalized inverse for matrices," Mathematical Proceedings of the Cambridge Philosophical Society, vol. 51, no. 3, pp. 406-413, 1955.
[32] F. Dörfler, M. Chertkov, and F. Bullo, "Synchronization in complex oscillator networks and smart grids," Proceedings of the National Academy of Sciences, vol. 110, no. 6, pp. 2005-2010, 2013.
[33] F. Bullo, Lectures on Network Systems, 1.6 ed. Kindle Direct Publishing, 2022.
[34] S. Parhizi, H. Lotfi, A. Khodaei, and S. Bahramirad, "State of the art in research on microgrids: A review," IEEE Access, vol. 3, pp. 890-925, 2015.
[35] S. V. Iyer, M. N. Belur, and M. C. Chandorkar, "A generalized computational method to determine stability of a multi-inverter microgrid," IEEE Transactions on Power Electronics, vol. 25, no. 9, pp. 2420-2432, 2010.
[36] S. Papathanassiou, N. Hatziargyriou, and K. Strunz, "A benchmark low voltage microgrid network," in Proceedings of the CIGRE Symposium: Power Systems with Dispersed Generation, 2005, pp. 1-8.
[37] MATLAB, Version 9.9.0.1524771 (R2020b) Update 2. Natick, Massachusetts: The MathWorks Inc., 2021.

Figure 8: Results obtained when applying a centralized solution to (6.3). In the top panel, different colors represent $|\xi_{ij}|/\bar f_{ij}$ for different edges, with $\{i,j\} \in \mathcal{E}_{\mathrm{cf}}$. In the middle and bottom panels, different colors represent $\hat\phi_i$ and $P_i$ for different supplier nodes, i.e., $i \in \mathcal{V}_s$.

Figure 9: Results obtained when using the distributed online control strategy (6.5).
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2201.03310, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/2201.03310" }
2022
[ "JournalArticle" ]
true
2022-01-10T00:00:00
[ { "paperId": "d8131f4436adc2fc1de42dca82070ebcb3e4f832", "title": "Cooperative–Competitive Multiagent Systems for Distributed Minimax Optimization Subject to Bounded Constraints" }, { "paperId": "c0eea5c4bab0c810e4820c67d3aee8df763c672e", "title": "Distributed Optimization for Control" }, { "paperId": "14fd1b838e03ed30ed12b2cd5cdf73f2f74e4aea", "title": "Identification and Prevention of Cascading Failures in Autonomous Microgrid" }, { "paperId": "d71cb3c8caea83209bf895e0f0cf4862cd3ad3f0", "title": "Distributed Robust Control of the Power Flows in a Team of Cooperating Microgrids" }, { "paperId": "b74519698e633c1d96bd8a7b8afb2f8eec258e4a", "title": "A Decentralized Switched System Approach to Overvoltage Prevention in PV Residential Microgrids" }, { "paperId": "0cdaab478fb1afd598b00956dc0fa195cf76e5c6", "title": "An overview of microgrid protection methods and the factors involved" }, { "paperId": "9ac054e7a61154c767e025fb3a01f03ff49d992a", "title": "A critical review of AC Microgrid protection issues and available solutions" }, { "paperId": "101fea686599874023a10723f11bd744b92cadbf", "title": "State of the Art in Research on Microgrids: A Review" }, { "paperId": "d9de15bd72692b087e5fd22cb2e0548d255fc46f", "title": "Stability of monotone dynamical flow networks" }, { "paperId": "02b41d46496d3afdf8411fc0f9f6044f1d084045", "title": "Mode-Adaptive Decentralized Control for Renewable DC Microgrid With Enhanced Reliability and Flexibility" }, { "paperId": "231df35ba4d3c61210b80b675f21705df55577ec", "title": "Overload prevention in an autonomous microgrid using battery storage units" }, { "paperId": "d975a861a736c6ff186202bf0ee74429f5a3d548", "title": "Accelerated Dual Descent for Network Flow Optimization" }, { "paperId": "0b5ce6a35b0c7e19c77a4b93cd317e3d3a3e2fa4", "title": "Reduced-Order Model and Stability Analysis of Low-Voltage DC Microgrid" }, { "paperId": "511786a037059d6d296d114435523fbc8cbcf15f", "title": "Synchronization in complex oscillator networks and smart grids" }, { "paperId": "3437567d3d382c28f810886ed63f500d92ab79d3", "title": "A Decentralized Robust Control Strategy for Multi-DER Microgrids—Part I: Fundamental Concepts" }, { "paperId": "efd921de7ee8980a7240bbe52caeff270db1de1a", "title": "Synchronization and power sharing for droop-controlled inverters in islanded microgrids" }, { "paperId": "d0cf5ce3fb8477a6998c30c0fa45f29606657dac", "title": "Distributed convergence to Nash equilibria in two-network zero-sum games" }, { "paperId": "01a2fc6a09853ca9770e33aaede40d3ec581f133", "title": "Decentralized power flow control for a smart micro-grid" }, { "paperId": "e4baffe340a8e3ce6b4e39e5afccc7b8d3082b44", "title": "Acceleration of distributed minimax flow optimization in networks" }, { "paperId": "c704b099c341f98b8f0e6e3aaf89512759335885", "title": "A decentralized control architecture for a microgrid with power electronic interfaces" }, { "paperId": "7b26749d84027a5e1b541597bb4d58bdadd9f3b2", "title": "A Generalized Computational Method to Determine Stability of a Multi-inverter Microgrid" }, { "paperId": "a2ee50c951c5b8cf863d9f82ef0bcf6123806b9d", "title": "A distributed newton method for network optimization" }, { "paperId": "fe50a5f3d1ddd47dffe8b168a6874fc3a46058d0", "title": "Optimization models for operative planning in drinking water networks" }, { "paperId": "a5568b7bd75f98cef752211c207db0bf56947ff4", "title": "Microgrid Protection" }, { "paperId": "7e6eb2e6ff1900d44dd855022ec964c840f4c2d8", "title": "Upgrading arcs to minimize the maximum travel time in a network" }, { 
"paperId": "8c27c1298c609ed52d4e349a9acff4219fbf54ed", "title": "Solution strategies for the single facility minimax hub location problem" }, { "paperId": "1079aac8e8053ed9e5776e573cc184cc179c5b5c", "title": "Algorithms for the minimax transportation problem" }, { "paperId": "00b913a487308fc99fd0dbb9d0005c078ac0a560", "title": "The bottleneck transportation problem" }, { "paperId": "6d411d0592e7e8e8273e6e32838e666984f2a6f5", "title": "Optimum Locations of Switching Centers and the Absolute Centers and Medians of a Graph" }, { "paperId": "c39cf7057efe6d47fd2aea2dd3c9d1772b807841", "title": "Generalized Inverse Matrices" }, { "paperId": "f4aea3dce8e89b3221db8065c45d99dd4eff1457", "title": "A BENCHMARK LOW VOLTAGE MICROGRID NETWORK" }, { "paperId": "bab7417f4c801243197d281076ef521c6ad32603", "title": "WEIGHTED MINIMAX REAL-VALUED FLOWS" }, { "paperId": "885cd58693d2e29bdb8c7de0680b59ab51a21fbb", "title": "A Structure Preserving Model for Power System Stability Analysis" }, { "paperId": "d6fcf6a4a866a12b680f67c768b386e68a5061f6", "title": "FINDING THE WEIGHTED MINIMAX FLOW IN A POLYNOMIAL TIME" } ]
21109
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01c116d620336f65dab1fb4d393497ba83bc6709
[ "Computer Science" ]
0.864911
SVD-Based Image Watermarking Using the Fast Walsh-Hadamard Transform, Key Mapping, and Coefficient Ordering for Ownership Protection
01c116d620336f65dab1fb4d393497ba83bc6709
Symmetry
[ { "authorId": "9720405", "name": "Tahmina Khanam" }, { "authorId": "34513122", "name": "P. K. Dhar" }, { "authorId": "2094261465", "name": "Saki Kowsar" }, { "authorId": "2145451650", "name": "Jong-Myon Kim" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "https://www.mdpi.com/journal/symmetry", "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-172134" ], "id": "1620da87-4387-4b9a-9bf4-22fdf74d4dc3", "issn": "2073-8994", "name": "Symmetry", "type": "journal", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-172134" }
Proof of ownership on multimedia data exposes users to significant threats due to a myriad of transmission channel attacks over distributed computing infrastructures. In order to address this problem, in this paper, an efficient blind symmetric image watermarking method using singular value decomposition (SVD) and the fast Walsh-Hadamard transform (FWHT) is proposed for ownership protection. Initially, Gaussian mapping is used to scramble the watermark image and secure the system against unauthorized detection. Then, FWHT with coefficient ordering is applied to the cover image. To make the embedding process robust and secure against severe attacks, two unique keys are generated from the singular values of the FWHT blocks of the cover image, which are kept by the owner only. Finally, the generated keys are used to extract the watermark and verify the ownership. The simulation result demonstrates that our proposed scheme is highly robust against numerous attacks. Furthermore, comparative analysis corroborates its superiority among other state-of-the-art methods. The NC of the proposed method is numerically one, and the PSNR resides from 49.78 to 52.64. In contrast, the NC of the state-of-the-art methods varies from 0.7991 to 0.9999, while the PSNR exists in the range between 39.4428 and 54.2599.
# Symmetry - Article

### SVD-Based Image Watermarking Using the Fast Walsh-Hadamard Transform, Key Mapping, and Coefficient Ordering for Ownership Protection

**Tahmina Khanam 1, Pranab Kumar Dhar 1, Saki Kowsar 1 and Jong-Myon Kim 2,***

1 Department of Computer Science and Engineering, Chittagong University of Engineering and Technology (CUET), Chattogram-4349, Bangladesh; tahminacse0904079@gmail.com (T.K.); pranabdhar81@gmail.com (P.K.D.); sakikowsar@cuet.ac.bd (S.K.)
2 School of IT Convergence, University of Ulsan, Ulsan 44610, Korea
* Correspondence: jmkim07@ulsan.ac.kr; Tel.: +82-52259-2217

Received: 27 October 2019; Accepted: 24 December 2019; Published: 26 December 2019

**Abstract: Proof of ownership on multimedia data exposes users to significant threats due to a myriad of transmission channel attacks over distributed computing infrastructures. In order to address this problem, in this paper, an efficient blind symmetric image watermarking method using singular value decomposition (SVD) and the fast Walsh-Hadamard transform (FWHT) is proposed for ownership protection. Initially, Gaussian mapping is used to scramble the watermark image and secure the system against unauthorized detection. Then, FWHT with coefficient ordering is applied to the cover image. To make the embedding process robust and secure against severe attacks, two unique keys are generated from the singular values of the FWHT blocks of the cover image, which are kept by the owner only. Finally, the generated keys are used to extract the watermark and verify the ownership. The simulation result demonstrates that our proposed scheme is highly robust against numerous attacks. Furthermore, comparative analysis corroborates its superiority among other state-of-the-art methods. The NC of the proposed method is numerically one, and the PSNR resides from 49.78 to 52.64. In contrast, the NC of the state-of-the-art methods varies from 0.7991 to 0.9999, while the PSNR exists in the range between 39.4428 and 54.2599.**

**Keywords: fast Walsh–Hadamard transform; Gaussian mapping; singular value decomposition; coefficient ordering; key mapping**

**1. Introduction**

The flow of multimedia data increases manifold with the recent infrastructural development of computer networks. Accordingly, the proof-of-ownership issue for multimedia data has come to the surface as an impending challenge. In a bid to negotiate with this problem, the watermarking approach might be used as an indispensable tool. Since multimedia data often suffer from different types of transmission channel attacks, the technique should be immune to such maladies. Hence, the watermarking approach is used for hiding digital information during transmission. The watermark is typically used to prove the ownership of such host signals.

Several algorithms have been proposed in the literature to create robust and imperceptible watermarks. In general, watermarking methods can be divided into three main categories: (i) blind methods [1–13], (ii) semi-blind methods [14–17], and (iii) non-blind methods [18–24]. A blind watermarking framework for high dynamic range images (HDRIs) is proposed in [1]. In this method, the artificial bee colony algorithm is employed to select the best block for embedding the watermark. Then, the watermark is inserted in the first-level approximation sub-band of the discrete wavelet transform (DWT) of each selected block. This method provides good quality watermarked images, although it is not robust against geometric attacks such as rotation and scaling.
In [2], a new blind error diffusion-based halftone visual watermarking method called content-aware double-sided embedding error diffusion (CaDEED) is introduced. By adopting the problem formulation of CaDEED, the optimization problem is solved in order to achieve an optimal solution. Although it shows good results for imperceptibility and robustness, the performance of this system is highly dependent on the content of the host image and watermark. A blind integer wavelet-based watermarking scheme for inserting the compressed version of a binary watermark is presented in [3]. The peak signal-to-noise ratio (PSNR) result of this method is quite satisfactory. However, the robustness against compression attacks is not significant. The authors in [4] proposed a blind geometrically invariant image watermarking method by employing connected objects and a gravity center. This framework has proven resistant against geometrical attacks, such as rotation and scaling. However, it has low robustness against other regular noise attacks such as Gaussian or speckle noise. Furthermore, a contrast-adaptive strategy as a removal solution for visible watermarks is presented in [5], where a sub-sampling technique is adopted to propose such a blind system. The imperceptibility results of this method are very good. However, it shows low robustness against some attacks. In addition to this, a blind watermarking scheme based on singular value decomposition (SVD) is introduced in [6]. Initially, the authors analyzed the orthogonal matrix U obtained via SVD. This work utilizes the strong similarity correlation existing between the second-row first-column element and the third-row first-column element. At the final stage, the color watermark is embedded by slightly modifying the values of these two elements of the U matrix. The technique performs well against various attacks, although it demonstrates very poor performance under median filtering of the watermarked image. Furthermore, the authors in [7] proposed a robust watermarking scheme using discrete cosine transform (DCT) and SVD for lossless copyright protection. Its imperceptibility result is significantly good. However, its robustness against cropping attacks is quite low. A simple blind watermarking algorithm for image authentication using fractional wavelet packet transform (FRWPT) and SVD is presented in [8]. The proposed algorithm performs the embedding operation on singular values of the host image, and the perceptual quality of the watermarked images is exhibited to demonstrate improved fidelity. Although this method is highly secured, it shows low robustness against various attacks for some watermarked images. A blind watermarking method that uses a trained support vector regression (SVR) model for estimating the original coefficients is presented in [9]; particle swarm optimization (PSO) is further utilized to optimize the scheme. It provides high imperceptibility; however, it does not show excellent robustness against several attacks. A blind self-synchronized watermarking method in the cepstrum domain is suggested in [10]. This method does not provide a good trade-off between imperceptibility and robustness. Furthermore, a blind scheme is proposed in [11] in a bid to obtain minimal image distortion.
This method provides high-quality watermarked images, albeit low robustness against various attacks. In [12], Hamming codes are used to embed the authentication information in a cover image. The watermark extraction process of this method is blind and provides satisfactory results in imperceptibility. However, the robustness against various attacks is not reported there. The authors of [13] suggested a blind watermarking algorithm based on lower-upper (LU) decomposition. The watermark is embedded into the first-column second-row element and the first-column third-row element of the lower triangular matrix obtained from LU decomposition. It provides good quality watermarked images despite low robustness against compression attacks.

A semi-blind self-reference image watermarking method using discrete cosine transform (DCT) and singular value decomposition (SVD) is proposed in [14]. Initially, essential blocks are fetched by using a threshold on the number of edges in each block. Using these essential blocks, a reference image is created and then transformed into the DCT and SVD domain. The watermark is embedded by modifying singular values of the host image using singular values of the watermark image. This method yields good quality watermarked images. However, it shows low robustness against the scaling operation. To embed the watermark, the concepts of vector quantization (VQ) and association rules in data mining are employed in [15]. The approach is semi-blind, and hides the association rules of the watermark instead of the whole watermark. This method shows good robustness against various attacks but poor performance on imperceptibility. In addition, a semi-blind reference watermarking scheme based on DWT and SVD is proposed in [16] for copyright protection and authenticity. The method has high imperceptibility but low robustness against cropping and rotation attacks. An image watermarking method using DWT, the all phase discrete cosine bi-orthogonal transform (APDCBT), and SVD is proposed in [17]. This method shows high imperceptibility; however, it provides low robustness against combined cropping and compression attacks.

A non-blind image watermarking algorithm based on the Hadamard transform is proposed in [18]. In this method, the breadth-first search (BFS) technique is used to embed the watermark. Notably, it shows good performance in imperceptibility. However, it has the limitation of relatively poor performance against compression attacks. The authors in [19] introduced a non-blind robust watermarking technique using DCT and a normalization procedure. They used image normalization for calculating the affine transform parameters so that the watermark embedding and detection processes can be performed in the original coordinate system. However, this method shows low robustness against some attacks. In [20], a non-blind digital watermarking algorithm using the wavelet-based contourlet transform (WBCT) is presented. To select the position for inserting the watermark, the texture information of the image is used. It has good robustness against numerous attacks, albeit low robustness against filtering attacks. Moreover, the imperceptibility result of this method is not reported there. A non-blind hybrid image watermarking scheme based on DWT and SVD is proposed in [21]. In this approach, the watermark is embedded into the elements of the singular values of the DWT sub-bands of the cover image.
The imperceptibility result of this method is quite high, but its robustness against cropping attacks is low. A non-blind SVD-based digital watermarking scheme for ownership protection is proposed in [22]. In this method, a meaningful text message is used rather than a randomly generated Gaussian sequence. However, the robustness of this method against attacks is low. A non-blind image watermarking using DCT and DWT is proposed in [23]. The DCT coefficients of the watermark image are embedded into four DWT bands of the color components of the host image. The imperceptibility result of this method is quite satisfactory. However, the robustness against rotation attacks is a little low. A non-blind color image watermarking method using SVD and QR code is suggested in [24]. This method shows good results in imperceptibility; the robustness result against Poisson and speckle noise attacks is not reported.

From the above studies, we can conclude that some methods have low robustness, whereas other methods are less imperceptible or less secured. Further, some methods are non-blind or semi-blind. To overcome these limitations, an SVD-based blind symmetric image watermarking method using the fast Walsh–Hadamard transform (FWHT) with key mapping and coefficient ordering for ownership protection is proposed in this paper. In symmetric watermarking, the same keys are used for embedding and detecting the watermark. The major contributions of this research work are as follows:

- A blind image watermarking method is proposed that is highly robust and secured against numerous attacks while providing good quality watermarked images;
- To safeguard against unauthorized detection, Gaussian mapping is used to scramble the watermark;
- Authentic and errorless extraction of the watermark image is facilitated by generating the keys from the singular values of the FWHT blocks of the cover image;
- It provides a good trade-off among robustness, security, and imperceptibility.

Simulation results indicate that our proposed method is highly robust against numerous attacks. The normalized correlation (NC) of the proposed method is numerically one, whereas the NC of the recent methods [13,23,24] varies from 0.7991 to 0.9999. The peak signal-to-noise ratio (PSNR) of the proposed method varies from 49.78 to 52.64, whereas the PSNR of the recent methods [13,23,24] varies from 39.4428 to 54.2599. In other words, the proposed method outperforms state-of-the-art methods in terms of robustness, security, and imperceptibility.

The rest of the paper is organized as follows. Section 2 introduces the background information, whereas the proposed watermarking method is illustrated in Section 3. Section 4 provides the experimental results. Finally, the paper is concluded in Section 5 with future remarks.

**2. Background Information**

_2.1. Singular Value Decomposition_

For an $M \times M$ square matrix $X$ with rank $\leq M$, its SVD is represented by Equation (1):
$$X = U D V^{\mathrm T} = \begin{bmatrix} U_{1,1} & \cdots & U_{1,M} \\ U_{2,1} & \ddots & U_{2,M} \\ \vdots & \vdots & \vdots \\ U_{M,1} & \cdots & U_{M,M} \end{bmatrix} \begin{bmatrix} \lambda_1 & 0 & \cdots & 0 \\ 0 & \lambda_2 & \ddots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & \lambda_M \end{bmatrix} \begin{bmatrix} v_{1,1} & \cdots & v_{1,M} \\ v_{2,1} & \ddots & v_{2,M} \\ \vdots & \vdots & \vdots \\ v_{M,1} & \cdots & v_{M,M} \end{bmatrix}^{\mathrm T} \tag{1}$$
where $U$ and $V$ are $M \times M$ orthogonal matrices, and $D$ is a singular diagonal matrix with diagonal elements $\lambda_1, \lambda_2, \lambda_3, \ldots, \lambda_M$. These diagonal elements are unique for image data. Therefore, these values are used to generate unique keys for the errorless and authentic extraction of the watermarks.
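As a quick illustration (not part of the original paper), the decomposition (1) can be computed with NumPy; the block values below are arbitrary:

```python
import numpy as np

# Minimal sketch: SVD of a 4x4 block. The singular values on the diagonal
# of D are the quantities used later to build the secret keys.

H = np.array([[52, 55, 61, 66],
              [70, 61, 64, 73],
              [63, 59, 55, 90],
              [67, 61, 68, 104]], dtype=float)   # an arbitrary image block

U, s, Vt = np.linalg.svd(H)                 # H = U @ diag(s) @ Vt
print(np.round(s, 4))                       # singular values, descending
print(np.allclose(H, U @ np.diag(s) @ Vt))  # True: exact reconstruction
```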
_2.2. Fast Walsh-Hadamard Transform_

The general Hadamard transform is performed by a Hadamard matrix $H$ of size $4 \times 4$ defined in Equation (2). It is an orthogonal square matrix with only $+1$ and $-1$ values. Furthermore, each row has a unique sequency, which is counted on the basis of the sign changes along the row.
$$H = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & -1 & 1 & -1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \end{bmatrix} \tag{2}$$

The Hadamard transform concentrates most of the energy into the upper left corner of the transformed matrix. The direct current (DC) and alternating current (AC) coefficients of the transform matrix are arranged in zigzag order from low-frequency components to high-frequency components. In this study, the low-frequency components are used for embedding the watermark, since they are less sensitive to noise. Additionally, the Hadamard matrix has a different form called the Walsh matrix $W$, which is defined in Equation (3).
$$W = \begin{bmatrix} 1 & 1 & 1 & 1 \\ 1 & 1 & -1 & -1 \\ 1 & -1 & -1 & 1 \\ 1 & -1 & 1 & -1 \end{bmatrix} \tag{3}$$

In the proposed method, the fast Walsh–Hadamard transform (FWHT) is utilized, which computes the discrete Walsh–Hadamard transform with less computation time.

**3. Proposed Method**

Let $X = \{x(i,j),\ 1 \leq i \leq M,\ 1 \leq j \leq M\}$ be the original host image and $W = \{w(k,l),\ 1 \leq k \leq N,\ 1 \leq l \leq N\}$ be the watermark image to be embedded into the original image.

_3.1. Watermark Preprocessing_

It is essential to preprocess the watermark for enhancing its security. Preprocessing includes the scrambling of the watermark image. In this proposed method, we utilize Gaussian mapping to scramble the watermark. To implement the Gaussian mapping on the watermark, the following steps are performed:

**Step 1.** The watermark image $W$ is reshaped into a one-dimensional sequence $Q = \{q(r),\ 1 \leq r \leq N \times N\}$.

**Step 2.** Initially, a reference pattern $P = \{p(r),\ 1 \leq r \leq N \times N\}$ is generated using a Gaussian map, which is defined in Equation (4):
$$p(r+1) = \exp\left(-a \times (p(r))^2\right) + b \tag{4}$$
where $a$, $b$, and $p(1)$ are predefined constants and are used as key $k_3$, as shown in Figure 1.

**Step 3.** Then, the binary reference pattern $Z = \{z(r),\ 1 \leq r \leq N \times N\}$ is calculated using the following equation:
$$z(r) = \begin{cases} 1 & \text{if } p(r) > T \\ 0 & \text{otherwise} \end{cases} \tag{5}$$
where $T$ is a predefined threshold.

**Step 4.** Finally, the watermark sequence $q(r)$ is scrambled with $z(r)$ using Equation (6):
$$u(r) = z(r) \oplus q(r), \quad 1 \leq r \leq N \times N \tag{6}$$
where $\oplus$ denotes the bitwise XOR operation.

Figure 1. Proposed embedding algorithm.

_3.2. Watermark Embedding Process_

The proposed watermark embedding process is shown in Figure 1, and its pseudo code is presented in Algorithm 1. The embedding process is described in the following steps:

**Step 1.** The original host image $X$ is first divided into three channels $X_{\mathrm{red}}$, $X_{\mathrm{green}}$, and $X_{\mathrm{blue}}$, which represent the red, green, and blue channels of the original image, respectively. Then, the mean of the pixel values of each channel is calculated using Equation (7):
$$\mu(X_{\mathrm{red}}) = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M} X_{\mathrm{red}}}{255}, \quad \mu(X_{\mathrm{green}}) = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M} X_{\mathrm{green}}}{255}, \quad \mu(X_{\mathrm{blue}}) = \frac{\sum_{i=1}^{M}\sum_{j=1}^{M} X_{\mathrm{blue}}}{255} \tag{7}$$
where $\mu(X_{\mathrm{red}})$, $\mu(X_{\mathrm{green}})$, and $\mu(X_{\mathrm{blue}})$ indicate the mean of the pixel values of the red, green, and blue channels, respectively. After that, the channel with minimum mean $X_{\min}$ is selected, which is either $X_{\mathrm{red}}$, $X_{\mathrm{green}}$, or $X_{\mathrm{blue}}$.

**Step 2.** The selected channel $X_{\min}$ is further divided into $m \times m$ non-overlapping blocks, $H = \{H_i;\ 1 \leq i \leq n\}$, where $i$ is the block number and $m$ is the length of the row and column of each block.

**Step 3.** FWHT is applied to each block $H_i$ to obtain the transformed block $R_i$, where $R_i$ contains the FWHT coefficients.

**Step 4.** Among all the $n$ blocks, each set of four consecutive blocks $R_i$, $R_{i+1}$, $R_{i+2}$, and $R_{i+3}$ is selected to embed a watermark bit. The main idea of the embedding process is to sort the coefficients of the first row, represented by $C(R_{i:i+3})$, where $\{i : i+3\}$ indicates $\{i, i+1, i+2, i+3\}$, of each set of selected blocks, except the DC value. If the watermark bit is 1, the selected low-frequency coefficients $C(R_{i:i+3})$ are sorted in descending order; otherwise, they are sorted in ascending order. The concept of embedding the watermark bit in ascending and descending order with a block size of $4 \times 4$, where $m = 4$, is shown in Figure 2.

Figure 2. (a) Sorting in ascending order to embed a 0 bit and (b) sorting in descending order to embed a 1 bit.
In this step, two keys $k_1$ and $k_2$ are also used in order to make the watermarking method more secured. The key $k_1$ is generated from the singular values of each block $H_i$ of the selected channel of the host image. The key $k_2$ is generated from key $k_1$, and is used to authenticate key $k_1$ in the watermark extraction process. The following operation is performed for embedding a watermark bit into each selected set of blocks:
$$C(R'_{i:i+3}) = \mathrm{asc}\big(C(R_{i:i+3})\big);\ \text{mapping key } k_1 \text{ and } k_2, \quad \text{when } u(r) = 0$$
$$C(R'_{i:i+3}) = \mathrm{desc}\big(C(R_{i:i+3})\big);\ \text{mapping key } k_1 \text{ and } k_2, \quad \text{when } u(r) = 1 \tag{8}$$
where asc and desc represent sorting the data in ascending order and descending order, respectively. The process of mapping the keys $k_1$ and $k_2$ is described in the next section.

**Mapping key $k_1$ and $k_2$:** In this section, the process of mapping keys $k_1$ and $k_2$, referenced in Equation (8), is explained. This step is introduced to strengthen the proposed algorithm under severe attack. To map the keys, initially, SVD is applied to each block $H_i$ to generate the necessary information. To perform the operation, the following steps are used:

(1) Each block $H_i$ of the selected channel is decomposed into three matrices $U_i$, $D_i$, and $V_i$ using Equation (9):
$$H_i = U_i D_i V_i^{\mathrm T} \tag{9}$$
where $\lambda_{i1}, \lambda_{i2}, \ldots, \lambda_{im}$ are the singular values of the matrix $D_i$ of each block $H_i$. These singular values are unique for each block $H_i$. The keys $k_1$ and $k_2$ are calculated using these singular values. Thus, unauthorized people cannot map the keys without the host image to prove fake ownership. To do this, initially, a null key $k_1$ is defined. Then, $k_1$ is generated using these singular values as defined in Equation (10) below:
$$k_1 = \mathrm{append}\big(k_1,\ \mathrm{asc}(\lambda_{ij})\big),\ u(r) = 0; \qquad k_1 = \mathrm{append}\big(k_1,\ \mathrm{desc}(\lambda_{ij})\big),\ u(r) = 1 \tag{10}$$
where $i$ indicates the block number, $j = \{1 \leq j \leq m\}$ indexes the singular values of each block, asc and desc represent sorting the data in ascending order and descending order, respectively, and append indicates the concatenation operation. The singular values are sorted according to the watermark bit 0 or 1.

(2) Finally, $k_1$ is converted into a one-dimensional sequence of length $L = n \times m$, where $n$ is the total number of blocks and $m$ is the total number of singular values in each block.

(3) To generate key $k_2$, define a null key $k_2$ of length $S$, where $S = n/m$. Then, $k_2$ is generated from key $k_1$ using the following Equation (11):
$$k_2 = \mathrm{append}\big(k_2,\ \mu(k_{1_{h:h+t}})\big), \quad \text{where } 1 \leq h \leq L \tag{11}$$
where $\mu$ is the mean of $t$ consecutive values of key $k_1$ and the length of key $k_2$ is $(n/m) + 1$.
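A minimal sketch of this key construction follows. The block data are illustrative, and the association of watermark bits to block sets (one bit per four consecutive blocks) is our reading of the scheme above:

```python
import numpy as np

# Minimal sketch of Equations (9)-(11): k1 concatenates the (ascending or
# descending) singular values of every 4x4 block; k2 stores the mean of
# every t consecutive k1 entries (t = 16, as in the paper).

def make_keys(blocks, u_bits, t=16):
    k1 = []
    for idx, block in enumerate(blocks):
        s = np.linalg.svd(block, compute_uv=False)  # singular values of H_i
        bit = u_bits[(idx // 4) % len(u_bits)]      # bit embedded in this set
        k1.extend(sorted(s, reverse=bool(bit)))     # Eq. (10)
    k1 = np.asarray(k1)
    k2 = k1.reshape(-1, t).mean(axis=1)             # Eq. (11)
    return k1, k2

rng = np.random.default_rng(1)
blocks = [rng.integers(0, 256, (4, 4)).astype(float) for _ in range(16)]
u_bits = rng.integers(0, 2, 4)
k1, k2 = make_keys(blocks, u_bits, t=16)
print(k1.shape, k2.shape)   # (64,) (4,)
```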
Although the first key can be generated by the owner only, the second key is generated to authenticate the first key in the extraction process.

**Step 5.** Inverse FWHT is applied to each transformed block $R'_i$, and the watermarked blocks $H'_i$ are obtained.

**Step 6.** Finally, the three watermarked channels $X'_{\mathrm{red}}$, $X'_{\mathrm{green}}$, and $X'_{\mathrm{blue}}$ are combined to generate the watermarked image $X'$.

**Algorithm 1: Watermark Insertion**

Variable Declaration:
X: Host image
µ: Mean intensity value of each channel of host image (Lena)
Xmin: Channel with minimum mean
Hi: Non-overlapping blocks of Xmin (size 4 × 4)
FWHT, SVD: Transformation and decomposition used in the algorithm
Ri: FWHT transformed block of Hi
C(Ri:i+3): Three coefficients of first row (except DC value) of the consecutive transformed blocks
C(R′i:i+3): Coefficients in ascending or descending order
W: Watermark image
u: Scrambled watermark sequence

Watermark Embedding Procedure:
1. Watermark preprocess: scramble W to obtain u using Gaussian mapping
2. Read the host image and calculate µ of each channel (Red, Green, Blue)
   X.bmp (host image with size of 256 × 256)
   W.bmp (watermark image with size of 32 × 32)
3. Select channel Xmin and divide it into 4 × 4 Hi blocks
4. Apply FWHT to each block Hi to obtain Ri
5. Watermark insertion
   C(R′i:i+3) = asc(C(Ri:i+3)); mapping key k1 and k2, when u(r) = 0
   C(R′i:i+3) = desc(C(Ri:i+3)); mapping key k1 and k2, when u(r) = 1
   asc: ascending order, desc: descending order, 1 ≤ r ≤ 32 × 32
   // Use SVD to map keys k1 and k2
6. Perform inverse FWHT and combine the channels to get the watermarked image

_3.3. Watermark Detection Process_

The watermark extraction process has two main phases: (1) modify the degree of ascendant/descendant of the attacked watermarked image with key $k_1$, and (2) authenticate key $k_1$ with key $k_2$. The pseudo code of the watermark detection process is presented in Algorithm 2. The overall process is described below and shown in Figure 3:

Figure 3. Proposed extraction algorithm.

**Step 1.** The attacked watermarked image $X^*$ is first divided into three channels $\{X^*_{\mathrm{red}}, X^*_{\mathrm{green}}, X^*_{\mathrm{blue}}\}$. Then, the mean values of the pixels of the red, green, and blue channels, represented by $\mu(X^*_{\mathrm{red}})$, $\mu(X^*_{\mathrm{green}})$, and $\mu(X^*_{\mathrm{blue}})$, are calculated. After that, the channel with minimum mean $X^*_{\min}$ is selected for extracting the watermark.

**Step 2.** The selected channel $X^*_{\min}$ is further divided into $m \times m$ non-overlapping blocks $H^*_i$, where $i$ is the block number.

**Step 3.** FWHT is carried out on each block $H^*_i$. After applying this operation, the transformed blocks $R^*_i$ are found.

**Step 4.** The degree of ascendant/descendant, denoted by $dof$, is calculated for four consecutive transformed blocks $\{R^*_i, R^*_{i+1}, R^*_{i+2}, R^*_{i+3}\}$. Therefore, $dof(\mathrm{asc})$ represents the number of times that the low-frequency coefficients in the first row $C^*(R^*_{i:i+3})$ of each transformed block, except for the DC value, are in ascending order. Similarly, $dof(\mathrm{desc})$ represents the number of times that the low-frequency coefficients in the first row $C^*(R^*_{i:i+3})$ of each transformed block, except for the DC value, are in descending order.

Later, $dof$ is modified with key $k_1$. This phase helps the system resist severe noise attacks. Initially, the $dof'$ of the first $t$ values of key $k_1$ is calculated to extract the first watermark bit using Equations (12) and (13). Thus, consecutive $t$ values of the key are considered each time for extracting a one-bit watermark. We obtain another two matrices, $dof'(\mathrm{asc})$ and $dof'(\mathrm{desc})$, with $\frac{L}{t} = N^2$ values, where $L$ is the length of key $k_1$, with $1 \leq h \leq L$:
$$dof'(\mathrm{asc}) = dof'(\mathrm{asc}) + 1; \quad \text{if } k_{1_h} > k_{1_{h+1}} \tag{12}$$
$$dof'(\mathrm{desc}) = dof'(\mathrm{desc}) + 1; \quad \text{if } k_{1_h} < k_{1_{h+1}} \tag{13}$$

Finally, we modify $dof$ with $dof'$ by a simple addition operation, as shown in Equations (14) and (15), and obtain two matrices, $dof_h(\mathrm{asc})$ and $dof_h(\mathrm{desc})$:
$$dof_h(\mathrm{asc}) = dof(\mathrm{asc}) + dof'(\mathrm{asc}) \tag{14}$$
$$dof_h(\mathrm{desc}) = dof(\mathrm{desc}) + dof'(\mathrm{desc}) \tag{15}$$
**Authenticate k1 with k2:** This operation is carried out to authenticate key k1 using k2. For this purpose, the average of t consecutive values of k1 is calculated and compared with one value of k2. This operation is represented using Equations (16) and (17):

if µ(k1_{h:h+t}) = k2_h, then k1 ← k2   (16)

if µ(k1_{h:h+t}) ≠ k2_h, then !k1 ← k2   (17)

where k1 ← k2 means that k1 is authenticated by k2, and !k1 ← k2 means that k1 is not authenticated by k2. If k1 is authenticated, then the watermark is extracted accordingly.

**Step 5.** The hidden binary sequence is found using the following rule:

If dofh(asc) > dofh(desc) and k1 ← k2, then u(r) = 0;
else if dofh(desc) > dofh(asc) and k1 ← k2, then u(r) = 1.

**Step 6.** The binary watermark sequence q*(r) is extracted with key k3 using Equation (18):

q*(r) = z(r) ⊕ u(r), 1 ≤ r ≤ N × N   (18)

Finally, the watermark image W* is obtained by arranging the watermark sequence q*(r) into an N × N matrix.

**Algorithm 2: Watermark Extraction**

Variable Declaration:
X*: attacked watermarked image
µ: mean intensity value of each channel of X*
X*_min: channel with the minimum mean
H*_i: non-overlapping blocks of X*_min (size 4 × 4)
FWHT: transformation used in the algorithm
R*_i: FWHT-transformed block of H*_i
C*(R*_{i:i+3}): three coefficients of the first row (except the DC value) of four consecutive transformed blocks
W: watermark image
u: scrambled watermark sequence
dof(asc/desc): the number of times the low-frequency coefficients C*(R*_{i:i+3}) in the first row of each transformed block, except the DC value, are in ascending/descending order

Watermark Extraction Procedure:
1. Read X* and calculate µ of each channel (red, green, blue)
2. Select channel X*_min and divide it into 4 × 4 blocks H*_i
3. Apply the FWHT to each block H*_i to obtain R*_i
4. Watermark extraction:
   (a) Modify dof(asc/desc) into dof′(asc/desc) with key k1:
       dof′(asc) = dof′(asc) + 1, if k1_h > k1_{h+1}
       dof′(desc) = dof′(desc) + 1, if k1_h < k1_{h+1}
       where L is the length of key k1, with 1 ≤ h ≤ L, and then calculate
       dofh(asc) = dof(asc) + dof′(asc)
       dofh(desc) = dof(desc) + dof′(desc)
   (b) Authenticate key k1 with key k2:
       if µ(k1_{h:h+t}) = k2_h, then k1 ← k2
       if µ(k1_{h:h+t}) ≠ k2_h, then !k1 ← k2
       where k1 ← k2 means k1 is authenticated by k2, and !k1 ← k2 means k1 is not authenticated by k2
       // Consecutive t values of the key are considered each time for extracting a one-bit watermark, where L/t = N² and µ(k1_{h:h+t}) is the mean of these t values
   (c) Watermark extraction:
       if dofh(asc) > dofh(desc) and k1 ← k2, then u(r) = 0;
       else if dofh(desc) > dofh(asc) and k1 ← k2, then u(r) = 1;
       where 1 ≤ r ≤ 32 × 32
   (d) Re-scramble u to obtain W
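The following minimal Python sketch ties together the authentication test of Equations (16) and (17), the Step 5 decision rule, and the descrambling of Equation (18). It is a sketch under stated assumptions: z denotes the key-k3 scrambling sequence referenced by Equation (18) but not reproduced in this excerpt, the exact equality of Equation (16) is relaxed to a small floating-point tolerance, and the printed Step 5 lists the same condition in both branches, so the bit value 1 is taken to correspond to the complementary ordering dofh(desc) > dofh(asc).

```python
import numpy as np

def authenticate_k1(k1, k2, r, t=16, tol=1e-9):
    """Equations (16)-(17): compare the mean of t consecutive k1 values with
    the corresponding k2 entry (exact equality in the paper; a small
    floating-point tolerance is used here)."""
    segment = np.asarray(k1[r * t:(r + 1) * t], dtype=float)
    return abs(segment.mean() - k2[r]) < tol

def extract_bit(dofh_asc, dofh_desc, authenticated):
    """Step 5 decision rule: 0 when the ascending count dominates,
    otherwise 1; valid only if k1 was authenticated by k2."""
    if not authenticated:
        raise ValueError("key k1 is not authenticated by k2")
    return 0 if dofh_asc > dofh_desc else 1

def descramble(u, z):
    """Equation (18): q*(r) = z(r) XOR u(r), reshaped to the N x N watermark."""
    q = np.bitwise_xor(np.asarray(u, dtype=np.uint8),
                       np.asarray(z, dtype=np.uint8))
    n = int(np.sqrt(q.size))
    return q.reshape(n, n)
```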
**4. Experimental Results and Discussions**

In this section, the performance of the proposed method is evaluated in terms of imperceptibility and robustness. The proposed method uses various images, including Lena, Peppers, Baboon, and Fruit, of size 256 × 256, as host images, shown in Figure 4. The size of the binary watermark image is 32 × 32, as shown in Figure 5. The method performs well for all the host images in terms of imperceptibility and robustness. In this study, the selected values for m and t are 4 and 16, respectively, as the size of each block H_i is 4 × 4. Therefore, the total number of blocks is 4096, and thus the length of key k1 is 16,384. The main reason for selecting a smaller value of m to embed the watermark bits is that sorting larger blocks causes greater degradation in the quality of the watermarked image.

**Figure 4.** The host images: (a) Lena, (b) Peppers, (c) Baboon, and (d) Fruit.

**Figure 5.** Watermark images: (a) original, (b) scrambled with a = 10, b = 0.05, and y0 = 20, and (c) scrambled with a = 30, b = 0.01, and y0 = 10.

**Imperceptibility test:** The imperceptibility of the watermarked images can be evaluated in terms of the peak signal-to-noise ratio (PSNR), as given in Equation (19):

PSNR = 10 log10( 255² / [ (1/(M × M)) Σ_{i=1}^{M} Σ_{j=1}^{M} (X − X′)² ] )   (19)

where X and X′ are the original and watermarked images, respectively. Higher values of PSNR indicate better quality of the watermarked image. Figure 5 shows the original and scrambled watermarks with different values of a, b, and y0. To test the imperceptibility of the proposed framework, the PSNR values were calculated and compared with those of existing methods, as shown in Table 1.

**Table 1.** Comparison between the proposed and recent methods in terms of peak signal-to-noise ratio (PSNR). A dash indicates a value not reported.

| Watermarked Image | Proposed Method | Ahmed et al. [23] | Patvardhan et al. [24] | Su et al. [13] |
|---|---|---|---|---|
| Lena | 50.04 | – | 54.2577 | 39.4428 |
| Peppers | 49.78 | 47.1961 | – | 40.8216 |
| Baboon | 51.56 | 47.1836 | 54.3499 | – |
| Fruit | 52.64 | – | – | – |

From this table, it is observed that the PSNR of the proposed method varies from 49.78 to 52.64, whereas the PSNRs of the recent methods vary from 47.1961 to 47.1836 for [23], from 54.2577 to 54.2599 for [24], and from 39.4428 to 40.8216 for [13]. Therefore, it is evident that the PSNRs of the recent methods [23,24] are quite high, whereas the PSNRs of the recent method [13] are low compared to all other methods. In other words, the PSNR of the proposed method is higher than that of the method reported in [13]; however, it is slightly lower than that of the method reported in [24]. This comparison justifies that the suggested method outperforms the other recent techniques. Since in each 4 × 4 block only three AC values are shuffled and the DC value remains in its position, little image degradation takes place, and low image degradation results in high imperceptibility.
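Equation (19) translates directly into a few lines of Python; the sketch below assumes 8-bit, single-channel inputs and uses the conventional mean-squared-error form.

```python
import numpy as np

def psnr(original, watermarked):
    """Equation (19): PSNR = 10 log10(255^2 / MSE) for 8-bit images."""
    x = original.astype(np.float64)
    y = watermarked.astype(np.float64)
    mse = np.mean((x - y) ** 2)
    # Identical images have zero error and, formally, infinite PSNR
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)
```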
**Security analysis:** For a secure watermarking method, its performance against various attacks is very important. The proposed method utilizes a Gaussian map to enhance security. To encrypt the watermark image, some predefined constants are used, namely a, b, and p(1), which together constitute secret key k3. If the selected values for a, b, and p(1) are wrong, the watermark cannot be extracted properly. Further, to make the watermarking method more secure, the two keys k1 and k2 are used. Key k1 is generated from the singular values of each block H_i of the selected channel of the host image. These singular values are floating-point numbers, and it is not possible to recover them without the host image; therefore, it is not possible to generate key k1 without the host image. Key k2 is generated from key k1 and is used to authenticate k1 in the watermark extraction process; therefore, it is not possible to generate key k2 without k1. These keys (k1, k2, k3) are used in the watermark detection process to extract the embedded watermark. The correct watermark can be extracted only when all three keys are correct; in other words, if any one of the keys is wrong, the watermark will not be extracted correctly. This phenomenon is illustrated in Figure 6. Moreover, the size of each block H_i of the selected channel of the host image is 4 × 4; therefore, the total number of blocks in each host image is 4096. Thus, the length of key k1 is 4096 × 4 = 16,384 and the length of key k2 is (4096/4) + 1 = 1025, which are quite long, indicating that the key space is large enough. As the keys k1, k2, and k3 are floating-point numbers, their values cannot be determined by guessing; hence, the probability of extracting the right watermark without the keys is near 0. Therefore, an attacker cannot detect the correct watermark without the right keys, which enhances the security of the proposed watermarking method.
| Key | Case 1 | Case 2 | Case 3 | Case 4 |
|---|---|---|---|---|
| Key k1 | √ | × | √ | √ |
| Key k2 | √ | √ | × | √ |
| Key k3 | √ | √ | √ | × |

**Figure 6.** The extracted watermark with the right keys and with different wrong keys (Case 1: all keys correct; Cases 2–4: one key wrong).
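The scrambling step governed by key k3 can be sketched as follows. The exact chaotic update rule and the interpretation of the constants a, b, and p(1) (written y0 in Figure 5) are not reproduced in this excerpt, so this sketch assumes the standard Gaussian (Gauss/mouse) map and the common sort-based permutation construction; all function names are hypothetical.

```python
import numpy as np

def gaussian_map_sequence(length, a, b, x0):
    """Iterate the Gaussian (Gauss/mouse) map x_{n+1} = exp(-a * x_n^2) + b.
    The triple (a, b, x0) plays the role of secret key k3; the precise map
    used by the authors is an assumption here."""
    seq = np.empty(length)
    x = float(x0)
    for i in range(length):
        x = np.exp(-a * x * x) + b
        seq[i] = x
    return seq

def scramble(watermark_bits, a=10.0, b=0.05, x0=20.0):
    """Scramble a flattened 32 x 32 binary watermark by the permutation that
    sorts the chaotic sequence (a common construction; an assumption)."""
    perm = np.argsort(gaussian_map_sequence(watermark_bits.size, a, b, x0))
    return watermark_bits[perm], perm

def unscramble(scrambled, perm):
    """Invert the permutation to recover the original bit order."""
    out = np.empty_like(scrambled)
    out[perm] = scrambled
    return out
```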
**Robustness test:** To measure the robustness of the proposed algorithm, the normalized correlation (NC) is calculated between the original watermark image and the extracted watermark image. The NC value is calculated using Equation (20):

NC(W, W*) = ( Σ_{k=1}^{N} Σ_{l=1}^{N} w(k, l) · w*(k, l) ) / ( sqrt( Σ_{k=1}^{N} Σ_{l=1}^{N} w(k, l) · w(k, l) ) · sqrt( Σ_{k=1}^{N} Σ_{l=1}^{N} w*(k, l) · w*(k, l) ) )   (20)

where W and W* are the original and extracted watermarks, respectively.

The main consideration is the effect of different types of noise attacks on the watermarked image. The results are presented in such a way as to identify the effect of the keys on the NC values. Figures 7–9 show the effect of the keys pictorially, including the PSNR and NC values.
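Equation (20) can be implemented directly; the sketch below assumes binary (0/1) watermark arrays and guards against an all-zero denominator.

```python
import numpy as np

def normalized_correlation(w, w_star):
    """Equation (20): NC between the original and extracted watermarks."""
    w = w.astype(np.float64).ravel()
    w_star = w_star.astype(np.float64).ravel()
    num = np.sum(w * w_star)
    den = np.sqrt(np.sum(w * w)) * np.sqrt(np.sum(w_star * w_star))
    return num / den if den != 0 else 0.0
```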
**Figure 7.** Analysis of the proposed method under No attack, Gaussian noise (0.01), Speckle noise, and Salt and Pepper noise (0.01). NC: normalized correlation. (For each attack, the figure shows the watermarked Lena, Peppers, Baboon, and Fruit images and the watermarks extracted with and without keys; the PSNR values under no attack are 50.04, 49.78, 51.56, and 52.64, respectively, and the NC values are those summarized in Tables 2 and 3.)
**Figure 8.** Analysis of the proposed method under Adjustment, Cropping (50%), Sharpening (0.1), and Wiener filtering. (Watermarked images and extracted watermarks with and without keys; NC values as summarized in Tables 2 and 3.)

**Figure 9.** Analysis of the proposed method under Poisson noise, Median filtering, Compression (quality factor: 50%), and Rotation. (Watermarked images and extracted watermarks with and without keys; NC values as summarized in Tables 2 and 3.)

Furthermore, Tables 2 and 3 show an overview of the NC values of the proposed scheme with keys and without keys, respectively. Notably, the NC values shown in Table 2 reflect better results than those in Table 3. This is because severe noise attacks affect the degree of ascendant/descendant (dof). This dof is derived without key k1 and is vulnerable to noise attacks until it is modified with dof′, as defined in Equations (14) and (15). Since the keys make the system effectively resistant against noise, Table 2 shows better results in terms of NC.
Further, the extracted watermarks from the four different watermarked images under Gaussian noise with tolerance 0.1 are shown in Figure 7. The watermark extracted using only dof (without keys) for the "Fruit" cover image provides lower NC values than the others. Since the color variation in this host image is not very high, dof (without keys) is more vulnerable under additive noise attack. This also applies to other attacks, such as adjustment and sharpening, as shown in Figure 8. This problem is overcome in the proposed framework with the concept of key mapping. In spite of severe noise attacks, extracting the watermark using dofh (with keys) can reconstruct the watermark image successfully with unity NC values, as shown in Figures 7–9. We observed that the NC of the proposed method against various attacks is numerically one. This is because the keys (k1, k2 and k3), which contain the necessary information of the watermark, are not affected by the attacks. Hence, the proposed technique ensures high robustness.

**Table 2.** NC values after applying various noise attacks (with keys).

| No | Attack Type | Lena | Peppers | Baboon | Fruit |
|---|---|---|---|---|---|
| 1 | Gaussian (0.01) | 1.0 | 1.0 | 1.0 | 1.0 |
| 2 | Speckle (0.01) | 1.0 | 1.0 | 1.0 | 1.0 |
| 3 | Adjustment | 1.0 | 1.0 | 1.0 | 1.0 |
| 4 | Cropping (50%) | 1.0 | 1.0 | 1.0 | 1.0 |
| 5 | Sharpening (tol = 0.1) | 1.0 | 1.0 | 1.0 | 1.0 |
| 6 | Rotation (40°) | 1.0 | 1.0 | 1.0 | 1.0 |
| 7 | Wiener filtering | 1.0 | 1.0 | 1.0 | 1.0 |
| 8 | Poisson noise | 1.0 | 1.0 | 1.0 | 1.0 |
| 9 | Salt and pepper noise (0.01) | 1.0 | 1.0 | 1.0 | 1.0 |
| 10 | Median filtering | 1.0 | 1.0 | 1.0 | 1.0 |
| 11 | Compression (quality factor = 50%) | 1.0 | 1.0 | 1.0 | 1.0 |

**Table 3.** NC values after applying various noise attacks (without keys).

| No | Attack Type | Lena | Peppers | Baboon | Fruit |
|---|---|---|---|---|---|
| 1 | Gaussian (0.1) | 0.9997 | 0.9823 | 1.0 | 0.9351 |
| 2 | Speckle (0.01) | 0.8835 | 0.9292 | 0.9068 | 0.9349 |
| 3 | Adjustment | 0.9543 | 0.7544 | 0.9014 | 0.6137 |
| 4 | Cropping (50%) | 0.7919 | 0.7821 | 0.7912 | 0.7866 |
| 5 | Sharpening (tol = 0.1) | 0.9578 | 0.9335 | 0.9241 | 0.8594 |
| 6 | Rotation (40°) | 0.5160 | 0.5132 | 0.5194 | 0.5193 |
| 7 | Wiener filtering | 0.6753 | 0.6785 | 0.6884 | 0.6771 |
| 8 | Poisson noise | 0.9950 | 0.9963 | 0.9992 | 0.9990 |
| 9 | Salt and pepper noise (0.01) | 0.9945 | 0.9931 | 0.9956 | 0.9944 |
| 10 | Median filtering | 0.9762 | 0.9541 | 0.9896 | 0.9459 |
| 11 | Compression (quality factor = 50%) | 0.5775 | 0.5936 | 0.5912 | 0.5676 |
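For readers who wish to reproduce results in the spirit of Tables 2 and 3, the attack battery can be approximated with simple NumPy routines. The two attacks below are illustrative sketches only; the authors' exact attack implementations and parameter conventions are not given in this excerpt, and extract() in the commented loop stands for a hypothetical detector implementing Algorithm 2.

```python
import numpy as np

def gaussian_noise(img, var=0.01):
    """Additive zero-mean Gaussian noise with the stated variance,
    applied on a 0-1 intensity scale as is conventional."""
    x = img.astype(np.float64) / 255.0
    x += np.random.normal(0.0, np.sqrt(var), img.shape)
    return np.clip(x * 255.0, 0, 255).astype(np.uint8)

def salt_and_pepper(img, density=0.01):
    """Salt-and-pepper noise: a fraction `density` of pixels forced to 0 or 255."""
    out = img.copy()
    mask = np.random.random(img.shape)
    out[mask < density / 2.0] = 0
    out[mask > 1.0 - density / 2.0] = 255
    return out

# Hypothetical evaluation loop (extract() and the keys come from Algorithm 2):
# for name, attack in [("Gaussian (0.01)", gaussian_noise),
#                      ("Salt and pepper (0.01)", salt_and_pepper)]:
#     W_star = extract(attack(watermarked), k1, k2, k3)
#     print(name, normalized_correlation(W, W_star))
```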
Table 4 shows a comparative analysis between the proposed and several recent state-of-the-art methods [13,23,24] for NC against different attacks. From this table, it is observed that the NC of the proposed method is numerically one against various attacks using keys, in contrast to the state-of-the-art methods, whose NC varies from 0.7991 to 0.9999. It should be mentioned that Ahmed et al. [23] shows low robustness against rotation and salt and pepper noise attacks, and Su et al. [13] shows low robustness against median filtering and JPEG compression attacks. In all other cases, these two methods show good robustness. Moreover, Patvardhan et al. [24] shows good robustness against various attacks.

**Table 4.** A comparative analysis between the proposed and several recent methods in terms of NC. A dash indicates a value not reported.

| No | Attack Type | Ahmed et al. [23] | Patvardhan et al. [24] | Su et al. [13] | Proposed |
|---|---|---|---|---|---|
| 1 | Gaussian noise (0.1) | 0.9625 | 0.9885 | 0.9131 | 1.0 |
| 2 | Speckle noise (0.01) | 0.9601 | – | – | 1.0 |
| 3 | Contrast adjustment | – | 0.9491 | – | 1.0 |
| 4 | Cropping (50%) | – | 0.9947 | 0.9604 | 1.0 |
| 5 | Sharpening | 0.9388 | – | 0.9999 | 1.0 |
| 6 | Rotation (25°) | 0.7991 | 0.9989 | – | 1.0 |
| 7 | Poisson noise | 0.9884 | – | – | 1.0 |
| 8 | Salt and pepper noise (0.01) | 0.9117 | 0.9807 | 0.9902 | 1.0 |
| 9 | Median filtering | 0.9908 | 0.9989 | 0.8814 | 1.0 |
| 10 | JPEG compression (quality factor = 20%) | 0.9784 | 0.9895 | 0.8469 | 1.0 |

In other words, the proposed algorithm with its unique key approach is much more robust than the other existing methods. In addition, our method utilizes the key mapping concept with the singular values of the host image; this concept improves the performance of the proposed method against severe noise attacks and also ensures ownership with high robustness. Furthermore, Gaussian mapping enhances the security of the watermark. Finally, coefficient ordering in the smaller blocks provides high imperceptibility, while the concatenation of smaller blocks into a larger block provides high robustness against noise attacks. In a nutshell, it can be concluded that our proposed method outperforms recent state-of-the-art methods in terms of robustness, security, and imperceptibility.

**5. Conclusions**

This paper presented an image watermarking scheme using FWHT, SVD, key mapping, and coefficient ordering. FWHT is chosen because of its low computational complexity. To enhance the robustness of the proposed method against severe attacks, key mapping is introduced using SVD; it is used because unique keys are generated from the singular values of the FWHT blocks of the cover image. Furthermore, Gaussian mapping is used to scramble the watermark, which makes the system secure against unauthorized detection. Thus, the proposed method ensures high robustness as well as high security against numerous attacks. Experimental results indicated that the proposed scheme shows better results than the recent methods in terms of robustness and security. Moreover, it yielded high-quality watermarked images. The NC value of the proposed method is numerically one, while its PSNR lies between 49.78 and 52.64. In contrast, the recent state-of-the-art methods show NC varying from 0.7991 to 0.9999 and PSNR between 39.4428 and 54.2599. These results verify that the proposed method can be effectively utilized for image copyright protection and proof of ownership. We will extend the proposed method to video watermarking in future work.

**Author Contributions:** All authors contributed equally to the conception of the idea, the design of experiments, the analysis and interpretation of results, and the writing and improvement of the manuscript. All authors have read and agreed to the published version of the manuscript.

**Funding:** This work was supported by the Korea Institute of Energy Technology Evaluation and Planning (KETEP) and the Ministry of Trade, Industry and Energy (MOTIE) of the Republic of Korea (20192510102510, 20172510102130).

**Conflicts of Interest:** The authors declare no conflict of interest.

**References**

1. Bakhsh, F.Y.; Moghaddam, M.E. A robust HDR images watermarking method using artificial bee colony algorithm. J. Inf. Secur. Appl. 2018, 41, 12–27.
2. Guo, Y.; Au, O.C.; Wang, R.; Fang, L.; Cao, X. Halftone image watermarking by content aware double-sided embedding error diffusion. IEEE Trans. Image Process. 2018, 27, 3387–3402.
3. Chetan, K.R.; Nirmala, S. An efficient and secure robust watermarking scheme for document images using integer wavelets and block coding of binary watermarks. J. Inf. Secur. Appl. 2015, 24, 13–24.
4. Wang, H.; Yin, B.; Zhou, L. Geometrically invariant image watermarking using connected objects and gravity centers. KSII Trans. Internet Inf. Syst. 2013, 7, 2893–2912.
5. Lin, P.Y.; Chen, Y.H.; Chang, C.C.; Lee, J.S. Contrast adaptive removable visible watermarking (CARVW) mechanism. Image Vis. Comput. 2013, 31, 311–321.
6. Su, Q.; Niu, Y.; Zou, H.; Liu, X. A blind dual color images watermarking based on singular value decomposition. Appl. Math. Comput. 2013, 219, 8455–8466.
7. Wu, X.; Sun, W. Robust copyright protection scheme for digital images using overlapping DCT and SVD. Appl. Soft Comput. 2013, 13, 1170–1182.
8. Bhatnagar, G.; Wu, Q.M.J.; Raman, B. A new robust adjustable logo watermarking scheme. Comput. Secur. 2012, 31, 40–58.
9. Tsai, H.H.; Huang, Y.J.; Lai, Y.S. An SVD-based image watermarking in wavelet domain using SVR and PSO. Appl. Soft Comput. 2012, 12, 2442–2453.
10. Hu, H.T.; Chen, W.H. A dual cepstrum based watermarking scheme with self-synchronization. Signal Process. 2012, 92, 1109–1116.
11. Lin, C.C. An information hiding scheme with minimal image distortion. Comput. Stand. Interfaces 2011, 33, 477–484.
12. Lee, Y.; Kim, H.; Park, Y. A new data hiding scheme for binary image authentication with small image distortion. Inf. Sci. 2009, 179, 3866–3884.
13. Su, Q.; Wang, G.; Zhang, X. A new algorithm of blind color image watermarking based on LU decomposition. Multidimens. Syst. Signal Process. 2018, 29, 1055–1074.
14. Murty, P.S.; Kumar, S.D.; Kumar, P.R. A semi blind self reference image watermarking in DCT using singular value decomposition. Int. J. Comput. Appl. 2013, 62, 29–36.
15. Shen, J.J.; Ren, J.M. A robust associative watermarking technique based on vector quantization. Digit. Signal Process. 2010, 20, 1408–1423.
16. Bhatnagar, G.; Raman, B. A new robust reference watermarking scheme based on DWT-SVD. Comput. Stand. Interfaces 2009, 31, 1002–1013.
17. Zhou, X.; Zhang, H.; Wang, C. A robust image watermarking technique based on DWT, APDCBT, and SVD. Symmetry 2018, 10, 77.
18. Sarker, M.I.H.; Khan, M.I. An efficient image watermarking scheme using BFS technique based on Hadamard Transform. Smart Comput. Rev. 2013, 3, 298–308.
19. Kumar, A.; Luhach, A.K.; Pal, D. Robust digital image watermarking technique using image normalization and Discrete Cosine Transformation. Int. J. Comput. Appl. 2013, 65, 5–13.
20. Liu, J.; Liu, G.; He, W.; Li, Y. A new digital watermarking algorithm based on WBCT. Procedia Eng. 2012, 29, 1559–1564.
21. Lai, C.C.; Tsai, C.C. Digital image watermarking using Discrete Wavelet Transform and Singular Value Decomposition. IEEE Trans. Instrum. Meas. 2010, 59, 3060–3063.
22. Mohammad, A.A.; Alhaj, A.; Shaltaf, S. An improved SVD-based watermarking scheme for protecting rightful ownership. Signal Process. 2008, 88, 2158–2180.
23. Ahmed, K.A.; Ozturk, S. A novel hybrid DCT and DWT based robust watermarking algorithm for color images. Multimed. Tools Appl. 2019, 78, 17027–17049.
24. Patvardhan, C.; Kumar, P.; Lakshmi, C.V. Effective color image watermarking scheme using YCbCr color space and QR code. Multimed. Tools Appl. 2018, 77, 12655–12677.

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/sym12010052?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/sym12010052, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/2073-8994/12/1/52/pdf?version=1577363614" }
2,019
[ "JournalArticle" ]
true
2019-12-26T00:00:00
[ { "paperId": "15d98ab7fc262044b2a0284216da30e257181d0d", "title": "A novel hybrid DCT and DWT based robust watermarking algorithm for color images" }, { "paperId": "0b2db3a53d0a0cda7f1f94e3aab43b1cf8be00ef", "title": "A robust HDR images watermarking method using artificial bee colony algorithm" }, { "paperId": "2394d1ff20ca8b7c563213d71b01d414ad5d8d44", "title": "Effective Color image watermarking scheme using YCbCr color space and QR code" }, { "paperId": "7d51829b342c0032a20ab38014798a3771b154c8", "title": "A Robust Image Watermarking Technique Based on DWT, APDCBT, and SVD" }, { "paperId": "48075354e362b3622d4d46acca4ca5c02e5f9cb0", "title": "Halftone Image Watermarking by Content Aware Double-Sided Embedding Error Diffusion" }, { "paperId": "689515343634dabaee83999c38b5077aa0757dca", "title": "A new algorithm of blind color image watermarking based on LU decomposition" }, { "paperId": "ab6f13de9457be47c7e8d81936604041f3dea166", "title": "An efficient and secure robust watermarking scheme for document images using Integer wavelets and block coding of binary watermarks" }, { "paperId": "75f2adebd7b7ef2cccaf61ef682c054704f36be2", "title": "Geometrically Invariant Image Watermarking using Connected Objects and Gravity Centers" }, { "paperId": "bb4eb9e5ba4c646b476fafaabc988f01325ab144", "title": "Contrast-Adaptive Removable Visible Watermarking (CARVW) mechanism" }, { "paperId": "157a644a4c14a5bffa0b188c15b38f14e37b6305", "title": "A blind dual color images watermarking based on singular value decomposition" }, { "paperId": "04573367c553b1b1ad1aa60c31d2c0e5f1e612ad", "title": "Robust Digital Image Watermarking Technique using Image Normalization and Discrete Cosine Transformation" }, { "paperId": "ff848c2cf6bd31cf0799820349e68a92beacdb4d", "title": "Robust copyright protection scheme for digital images using overlapping DCT and SVD" }, { "paperId": "940ec677c97f32ad913e5a30d05442eb58befe00", "title": "An SVD-based image watermarking in wavelet domain using SVR and PSO" }, { "paperId": "76342b023cfa0e41a724f5b032aa14249a816c69", "title": "A dual cepstrum-based watermarking scheme with self-synchronization" }, { "paperId": "b349a638938d3e1a228010bd6aaafbd81c9cd187", "title": "A new robust adjustable logo watermarking scheme" }, { "paperId": "80af7ff40cdc4ca460afec630dbef69148466c6f", "title": "An information hiding scheme with minimal image distortion" }, { "paperId": "a1554a84db2714cec8c01d255a163898c23e8fa9", "title": "Digital Image Watermarking Using Discrete Wavelet Transform and Singular Value Decomposition" }, { "paperId": "01f42d81c2ab552010b1804cb61aadbfaf42cca3", "title": "A robust associative watermarking technique based on vector quantization" }, { "paperId": "a097390f5b242639693c3c963e156eb71bab8c55", "title": "A new data hiding scheme for binary image authentication with small image distortion" }, { "paperId": "7fa9c119aa0306489a1d5f06983639992f2b9e11", "title": "A new robust reference watermarking scheme based on DWT-SVD" }, { "paperId": "20a38dd04aea98fecc6485a5b75e0535d9c67e20", "title": "An improved SVD-based watermarking scheme for protecting rightful ownership" }, { "paperId": "e5724961c9c6378004fb59a82f8833d9c8a80c5d", "title": "An Efficient Image Watermarking Scheme Using BFS Technique Based on Hadamard Transform" }, { "paperId": "be310bceeec592a274c468bb327e11b51f93b8b6", "title": "A New Digital Watermarking Algorithm Based On WBCT" } ]
20,820
en
[ { "category": "Business", "source": "external" }, { "category": "Environmental Science", "source": "s2-fos-model" }, { "category": "Business", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01c125cb04b3186ab4dbc30740a5c8f6ea9d8eed
[ "Business" ]
0.888687
At the Nexus of Blockchain Technology, the Circular Economy, and Product Deletion
01c125cb04b3186ab4dbc30740a5c8f6ea9d8eed
Applied Sciences
[ { "authorId": "92780905", "name": "Mahtab Kouhizadeh" }, { "authorId": "2992875", "name": "Joseph Sarkis" }, { "authorId": "50736468", "name": "Qingyun Zhu" } ]
{ "alternate_issns": null, "alternate_names": [ "Appl Sci" ], "alternate_urls": [ "http://www.mathem.pub.ro/apps/", "https://www.mdpi.com/journal/applsci", "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-217814" ], "id": "136edf8d-0f88-4c2c-830f-461c6a9b842e", "issn": "2076-3417", "name": "Applied Sciences", "type": "journal", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-217814" }
The circular economy (CE) is an emergent concept to rethink and redesign how our economy works. The concept recognizes effective and efficient economic functioning at multiple scales—governments and individuals, globally and locally; for businesses, large and small. CE represents a systemic shift that builds long-term resilience at multiple levels (macro, meso and micro); generating new business and economic opportunities while providing environmental and societal benefits. Blockchain, an emergent and critical technology, is introduced to the circular economy environment as a potential enabler for many circular economic principles. Blockchain technology supported information systems can improve circular economy performance at multiple levels. Product deletion, a neglected but critical effort in product management and product portfolio management, is utilized as an illustrative business scenario as to blockchain’s application in a circular economy research context. Product deletion, unlike product proliferation, has received minimal attention from both academics and practitioners. Product deletion decisions need to be evaluated and analyzed in the circular economy context. CE helps address risk aversion issues in product deletions such as inventory, waste and information management. This paper is the first to conceptualize the relationships amongst blockchain technology, product deletion and the circular economy. Many nuances of relationships are introduced in this study. Future evaluation and critical reflections are also presented with a need for a rigorous and robust research agenda to evaluate the multiple and complex relationships and interplay amongst technology, policy, commerce and the natural environment.
# applied sciences

_Article_

## At the Nexus of Blockchain Technology, the Circular Economy, and Product Deletion

**Mahtab Kouhizadeh** [1,]*, **Joseph Sarkis** [1,2] and **Qingyun Zhu** [3]

1 Foisie Business School, Worcester Polytechnic Institute, Worcester, MA 01609, USA; jsarkis@wpi.edu
2 Hanken School of Economics, 00100 Helsinki, Finland
3 College of Business Administration, The University of Alabama in Huntsville, Huntsville, AL 35899, USA; q.zhu@uah.edu
***** Correspondence: mkouhizadeh@wpi.edu

Received: 3 April 2019; Accepted: 16 April 2019; Published: 25 April 2019

**Abstract: The circular economy (CE) is an emergent concept to rethink and redesign how our economy works. The concept recognizes effective and efficient economic functioning at multiple scales—governments and individuals, globally and locally; for businesses, large and small. CE represents a systemic shift that builds long-term resilience at multiple levels (macro, meso and micro); generating new business and economic opportunities while providing environmental and societal benefits. Blockchain, an emergent and critical technology, is introduced to the circular economy environment as a potential enabler for many circular economic principles. Blockchain technology supported information systems can improve circular economy performance at multiple levels. Product deletion, a neglected but critical effort in product management and product portfolio management, is utilized as an illustrative business scenario as to blockchain's application in a circular economy research context. Product deletion, unlike product proliferation, has received minimal attention from both academics and practitioners. Product deletion decisions need to be evaluated and analyzed in the circular economy context. CE helps address risk aversion issues in product deletions such as inventory, waste and information management. This paper is the first to conceptualize the relationships amongst blockchain technology, product deletion and the circular economy. Many nuances of relationships are introduced in this study. Future evaluation and critical reflections are also presented with a need for a rigorous and robust research agenda to evaluate the multiple and complex relationships and interplay amongst technology, policy, commerce and the natural environment.**

**Keywords: blockchain technology; circular economy; product deletion; sustainability; supply chain**

**1. Introduction**

Imagine a world without waste [1]. That is the imagery presented by circular economy (CE) proponents. To make this vision a reality, social, technological, and commercial cooperation, at the very least, is needed. It is from this three-dimensional perspective, with support from other perspectives, that we introduce our thoughts and concerns. The circular economy has taken on especial recent importance as a social innovation that helps to address economic, environmental and sometimes social concerns. The advent of new technologies and digitization has also taken on greater importance as a more interconnected world emerges. Blockchain is one such technological innovation. It has received increasing attention in both research and practice. Blockchains are emerging in a traditional economic situation where marketing and consumption of products and services are still the engines of economies.
We consider one aspect of products and commercial decisions as an illustrative business scenario—what happens when there is a decision to stop offering a given product by an organization; when a product is deleted? This business problem of product deletion in an emergent technological environment has not been studied. The circular economy, where important but limited elements exist globally, creates an additional and important nuance. For the circular economy to function, there is a dependence on product material from a product's end-of-life. There also needs to be a significant availability of these products for economies of scale to mature. Companies make decisions to stop manufacturing products for a variety of reasons, whether the products are automobiles such as the Chevy Volt or laundry detergents containing environmentally damaging chemicals. What happens to the potential circularity of these goods in a situation where the circular economy is gaining steam? Can blockchain technology play a role in monitoring the circularity of potentially deleted products, supporting decisions on which products to delete, and managing the deletion and tracing of materials through the circular economy? These are some of the basic questions we seek to investigate and critique in this paper. There will be practical and research issues related to evaluating the nexus of these three topics—product deletion, blockchains and the circular economy. Each of these concerns is necessary for advancement in multiple directions, but especially implicates the effectiveness and efficiency of a circular economic environment.

**2. Background**

Initially, we provide an overview of each topic separately. Some of the latest literature and thought in each of these subjects is presented to set the foundation for discussing and critiquing their nexus. Various use cases and analyses at various levels provide insights and exemplars into the relationships in later sections. Study directions, practical and theoretical, are also integral to their advancement.

_2.1. The Circular Economy_

The circular economy has a variety of characterizations and definitions [2]. It begins with the idea of materials cycles including recycling, remanufacturing, refurbishment, reuse, and reclamation. The circular economy also includes management practices that help to close the loop, such as reverse logistics and supply chain activities. Industrial waste minimization also occurs, with former wastes transformed into useful, revenue-generating byproducts. The sale of byproducts to other organizations for use in production has also been termed industrial symbiosis [3]. Another important aspect of the circular economy, at a minimum, is the involvement of consumers in a sharing or servicizing economy [4]. For example, in a service economy, products are not bought but are leased as services, such as the leasing of document copiers instead of purchasing them outright; after a time period, these leased products are brought back for refurbishing or recycling. A consumer, in this situation, is buying the service of making copies. The sharing portion comes in with a product that is only leased for a short time and is shared with other consumers. The product used in the service will, at its end-of-life, have its materials reused, reclaimed, refurbished, or one of the other "Re's".
This idea can be extended to almost any product that is currently purchased, where the leasing model has a retailer or manufacturer as the product steward. Circular economy practices can reduce costs and create new revenue sources for companies by reusing materials and minimizing wastes. However, in most cases, the technology that is needed for a circular economy is costly, and a lack of financial resources impedes the successful implementation of CE [5,6].

The challenges facing the circular economy are manifold. A few of these, which are core concerns in this paper, have been delineated in the literature relating to governance, economics, and organizational theory [7,8]. The circular economy involves some form of transaction and exchange. For an effective circular economy, data and knowledge of sources and markets are needed. Many times, the suppliers and users of various products that flow within circular economy supply chains may originate from very different industries and regions. In some of the more popular industrial symbiosis relationships, companies from very different industries would work together [9]. An example is a gel manufacturer that uses styrene to clean out its equipment. The manufacturer could use the styrene waste from this cleaning process for energy, or sell the waste on a materials exchange market [10]. However, information exchange across industries—such as with blockchain technology—especially with respect to wastes and byproducts, can be difficult. Companies will typically focus on traditional customers and their own industries. Additionally, if a company stops making a product due to some sustainability or environmental concern—a product deletion decision—having this information becomes critical for circular economy and byproduct management planning. The information asymmetry and information search may be significant and expensive to address.

Other major concerns are uncertainties and lack of scale [11]. The scale of waste may be large overall, but the dispersion of waste streams can make it difficult to locate and acquire circular economy materials. Small, distributed and informal waste and material flows are difficult and expensive to manage [12]. Achieving sufficient economies of scale for circular economy materials will require systems to capture materials in useful quantities. Knowing the flows—through blockchain technology—and making sure that streams exist and remain—product deletion decisions—are concerns related to supply uncertainties and risks.

A circular economy requires broader and more inclusive supply chains, not only amongst industry, but also communities, and individuals and their households. This dispersion and variety of actors cause difficulties in identifying, developing, and maintaining reliable circular economy sourcing. Various stakeholders such as industrial partners can provide material and component information; communities and municipalities can organize regional circular economy efforts and eco-industrial parks [13]; and non-governmental organizations (NGOs) can offer expertise and information and lead consortia such as Nextwave for ocean plastics and upcycling (https://www.nextwaveplastics.org/).

Overall, as we have seen, CE practices, principles, and characterizations appear at multiple levels of analysis. There are macro, meso, and micro levels of analysis [14].
Although there are some disagreements and concerns about the definitions of these levels, we essentially present them as relative concerns ranging from the broadest to more specifically focused areas. We now provide some examples, some of which will guide our framework for the evaluation of blockchain, circular economy, and product deletion relationships.

Macro levels of analysis include institutional issues that are typically global or broadly geographic and multi-governmental regions. They may include broader concepts such as full economies and principles. We will also consider major resources and markets, such as energy, that represent very broad concerns and issues.

At the meso level, we essentially identify an environment that considers multiple organizations and their networks. These can include supply chains and their flows or elements of the closed-loop supply chain—such as supply chain monitoring or reverse logistics operations. Industrial symbiosis and eco-industrial parks are additional examples of this level of analysis.

At the micro level, we focus primarily on issues facing specific organizational, intra-organizational, and individual consumer levels. That is, what types of value, knowledge, and behavior can be managed at these levels. There are many other ways to consider these issues, and the examples we provide are an initial categorization that fits well with the relationships and influences of blockchain and product deletion.

_2.2. Blockchain Technology_

A potential breakthrough for future supply chains can be adopting disruptive technological innovations such as blockchain technology. Information sharing is an urgent requirement in supply chains, especially with greater interest in digitization and Industry 4.0 developments [15,16]. Information can connect dispersed entities, facilitate better relationships in supply chains, prevent fraud and falsification, and reduce risks. However, tracing information through a complex supply chain network is a challenge. Blockchains can support information sharing in supply chains, link stand-alone systems, and provide real-time data to all stakeholders.

Blockchain technology records information through decentralized ledgers [17,18]. Ledgers are visible to all actors involved in transactions, including supply chain partners [19]. Ledger transactions have cryptographic time stamping that elevates the security of information [20]. In this way, the blockchain allows customers to inspect the uninterrupted chain of custody and transactions from the raw materials to the end sale. This information is recorded in ledgers as transactions occur on these multiple blockchain information dimensions, with verifiable updates. For example, end customers can rely on the authenticity of valuable goods by tracing them to their origin [21].

Blockchain technology can benefit supply chain provenance and sustainability. A blockchain application that is connected with radio frequency identification (RFID) [22], Internet of Things (IoT) [23,24], and global positioning system (GPS) [25] sensors can collect accurate data and address traceability issues in supply chains. High levels of transparency, verifiability, immutability, and reliability of data provided by blockchain can facilitate information flow among complex supply chain networks and stakeholders [26].
The immutability feature arises from the append-only concept of blockchain ledgers, where a recorded transaction cannot be changed or altered without blockchain network consensus. This characteristic strengthens the reliability of blockchain information. Decentralized ledgers reduce the need for trust based on third-party transaction verification, shedding intermediaries from transactions [27]. Blockchain technology effectively supports updated tracking in the supply chain. Information related to the sources of materials, the product's supply chain journey, and the participating actors in purchasing, producing and distributing products can each be presented on a blockchain platform, while maintaining visibility to supply chain network participants. Supply chain members may verify transactions and vote to maintain some trustworthiness in records.

A key element of blockchain technology is a smart contract, sometimes reflecting real-world contracts in a digital way. Smart contracts contain codes of agreements between parties, monitor conditions, and execute the embedded functions [28]. Smart contracts shift the need for traditional legal third parties to network consensus. Automatic execution of trigger points and digital records of regulations and business logic can increase efficiency and reduce transaction costs [29]. Smart contracts can also be utilized for supply chain process management and even process reengineering.

Permissionless (public) and permissioned (private) are two types of blockchain that deal with the openness of the platform. A permissionless blockchain allows anonymous users to interact with the system. Bitcoin and cryptocurrencies are examples of permissionless blockchains. Alternatively, permissioned blockchains limit information access to recognized users [30]. For example, IBM and Maersk have developed a permissioned blockchain that included a defined group of participants to trace information in the supply chain. Although the permissioned blockchain allows companies to control who can access critical information, the appropriate level of openness and information sharing is still debatable. For example, tracing individual items in a CE setting may mean the invasion of private information, raising ethical concerns. A combination of permissionless and permissioned blockchain can enable supply chains to achieve a variety of purposes. For example, authentication certificates can be linked to a public blockchain for marketing purposes to assure customers about the provenance of products [31]. This addresses the other dimension of trust of source, which in itself addresses some ethical concerns on the veracity of statements made about products.

There are some concerns in the field related to whether permissioned blockchains are truly blockchains. It is an example of an essentially contested characteristic for blockchains [32]. We will not enter this debate in this article; we bring it to the general attention of the readership, and it requires significant critical reflection for both researchers and practitioners.

Information technology has been linked to CE given the critical nature of data and information for its broad management (e.g., [33]). Blockchain technology can benefit circular economy activities through information management. Accurate information related to recycling programs, reusability of materials, green packaging, energy consumption, and carbon emissions can be made available on a blockchain [34].
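To make the append-only, hash-linked ledger idea above concrete, the following minimal Python sketch chains records by hashing each entry together with its predecessor's hash; altering any earlier record breaks verification of every later one, which is the immutability property discussed here. This is a didactic sketch only: real blockchains add distributed consensus, digital signatures, and smart-contract execution, none of which is modeled, and all names in the code are hypothetical.

```python
import hashlib
import json
import time

def make_block(data, prev_hash):
    """Minimal append-only ledger entry: the hash covers the payload, a
    timestamp, and the previous block's hash, linking the records."""
    block = {"data": data, "time": time.time(), "prev": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify(chain):
    """Recompute every hash and link; any tampering with an earlier
    record invalidates all subsequent blocks."""
    for prev, cur in zip(chain, chain[1:]):
        body = {k: cur[k] for k in ("data", "time", "prev")}
        if cur["prev"] != prev["hash"]:
            return False
        if cur["hash"] != hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()).hexdigest():
            return False
    return True

# Example: record a leased product's custody events (hypothetical data)
chain = [make_block("genesis", "0" * 64)]
chain.append(make_block({"item": "copier-123", "event": "leased"},
                        chain[-1]["hash"]))
chain.append(make_block({"item": "copier-123", "event": "returned"},
                        chain[-1]["hash"]))
print(verify(chain))  # True; edit any earlier block and this becomes False
```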
Companies can use such blockchain-based information to evaluate the circularity performance of their supply chain versus their competitors, recognize their strengths and weaknesses, and use benchmarking data to improve their circular economy practices. Although significant possibilities exist, blockchain implementation may face challenges and require preparation. Scalability is a critical barrier that stems from the immaturity of blockchain technology [35]. Another challenge is that blockchain-enabled software requires novel and specialized software development tools and techniques, many of which still require development [36]. In addition, there is significant confusion concerning blockchain applications and adaptability in the supply chain context.

_2.3. Product Deletion_

Companies invest vast monetary and time resources launching new products, leveraging product portfolios, and acquiring rivals, all in pursuit of competitive advantage. Managers are engrossed in product line extensions and proliferation, channel extensions, and supplier development while seeking to cater to their customer segments [37]. Complex and broad product portfolio strategies attract customers but do not necessarily sustain profitability [38]. Surprisingly, companies rarely examine their product portfolio and question whether they might be housing too many products. Product deletion, or killing, is perceived as a less appealing management activity when it comes to product portfolio management [39]. The inescapable fact is that, for most companies, some products are not making a profit and drain valuable resources [40]. Managing them is sometimes more challenging than developing them, and keeping them requires more effort than killing them [41]. However, discontinuing or withdrawing these lagging products from the product portfolio is not necessarily a trivial decision. For example, deleting a specific product may negatively influence the market for the associated maintenance services that may have created financial value for the company. The product deletion decision can affect strategic and operational concerns including customer satisfaction, profit margin, market building, and supply chain relationship management [41,42].

Material, information and capital involved in products are important flows within supply chains [43]. Companies are interlocked in these chains to serve a market; these chains involve suppliers, channel partners, the government, employees and consumers. Products, components, and materials with their associated transactions flow through raw material sourcing, internal manufacturing, storage, transportation delivery, and end-user consumption in the forward chain. There may also be reverse logistics activities such as reusing, recycling, reclaiming and remanufacturing [42]. Closed-loop product activities are necessary for a circular economy.

Product deletion can be defined as discontinuing a product from a product portfolio; deletion can occur at the product level (complete deletion) or product variant level (partial deletion) [41]. In this paper, product deletion mainly focuses on complete deletion: killing a product and most of its key components. This paper is one of the few papers that relates product deletion to supply chains in a CE environment, taking the perspective of original equipment manufacturers (OEM). Given this supply chain environment, product deletions have implications for circular economy operations; in turn, circular economy activities and actions can influence product deletion decisions.
The traditional linear economy presents a "make and dispose" model of product production. Within this model, when a product is deleted, its inventory will immediately become obsolete and transform into waste. It may be disposed of, sometimes to third parties for resale purposes, or disposed of in a traditional fashion into landfills. In a circular economic system, deleted product inventory and its finished components may be reclaimed as input in resource, energy and material loops through remanufacturing, refurbishing, reusing and recycling [44]. Product deletion may become, in the short term, profitable not only from more rationalized product portfolio management, but also from the utilization of freed-up resources and materials as closed-loop inputs [45].

The circular economy's focus on design thinking, systems thinking, and product life extension influences the product deletion decision. Product deletion occurs for many reasons, including customer complaints on performance issues, product defects and quality concerns. Long-lasting designs help to decrease the likelihood of occurrence of such issues, hence also decreasing the likelihood of product deletion [41,46]. Another major trigger for product deletion lies in resource concerns including capacity and efficiency aspects, especially those that closely relate to operational performance [41]. Circular economy practices help to minimize resource inputs into, and the waste and emission leakage out of, the supply chain and production system. Resources in a CE environment may arise from recycling approaches, efficiency improvements, and product use extensions.

Product deletion decisions can be affected by CE practices. Product life extension in a CE alters the product deletion decision. Traditional product deletion typically occurs in the decline stage of a product lifecycle. The phases of product lifecycles are likely to be extended in a circular economy context, potentially delaying many such decisions. A product's decline in a CE environment may result in a decision other than deletion.
Specifically, the organizational focus will be on rebooting a new life cycle for products rather than closing the current life cycle through deletion.

Having credible, transparent, traceable, and secure information and exchange systems can greatly benefit the CE and product deletion management situation. Blockchain technology can enable some of these capabilities at multiple CE levels. As an initial caveat, similar to CE, blockchain is still an 'essentially contested concept' [2].

**3. Framework and Propositions**

The product deletion, circular economy and blockchain technology nexus conceptualization is presented in Figure 1. We offer three general propositions from the previous background discussion in Section 2. Additionally, they set the stage for a more detailed analysis and evaluation in Section 4. These propositions are generic and serve the secondary function of research questions.

**Figure 1.** A Conceptual Framework of Product Deletion, Circular Economy and Blockchain Relationships.

Figure 1 shows the interrelationships of blockchain as both a direct and indirect influencer with both product deletion and the circular economy. We have shown and made this argument in a number of examples. The primary arguments made thus far have shown a number of dyadic relationships between the three subjects. The complexities involve multiple relationships, including two-way interactions and moderating relationships. Thus, we initially posit three general propositions.

**Proposition 1.** _Product deletion is interrelated with circular economy practices. Product deletion impacts circular economy practices and circular economy practices support product deletion management concerns; the relationships aid improved product deletion decision making processes and reduce product deletion risks._
**Proposition 2.** _Blockchain technology is an enabler that can moderate the interrelationships between product deletion and circular economy. Blockchain technology activates and upgrades the inter- and intra-organizational information management systems that facilitate product deletion decision making and advances circular economy development and operations._

**Proposition 3.** _Although not explicitly shown in Figure 1, we posit that these relationships can occur at multiple circular economy levels. Micro, meso, and macro level influences and relationships exist amongst the three subject areas._

Some of the practical and theoretical foundations of this framework and discussions are further elicited in Section 4.

**4. Blockchain Enabled Product Deletion Decision Making in a Circular Economy**

This section introduces how blockchain-based information management enables and facilitates the product deletion relationships within a circular economy (Table 1). The analyses are conducted at three levels: the macro (institutional) level, the meso (networks and supply chains) level and the micro (organizational and consumer) level. At each level, the discussions are organized by circular economy initiatives, followed by a short discussion on how blockchain technology can contribute to the circular economy initiative and an additional short discussion concerning blockchain, product deletion and circular economy synergies.

**Table 1. At the nexus of blockchain technology, the circular economy, and product deletion.** For each circular economy initiative, the associated product deletion analysis, evaluation and decision making considerations are listed.

**_Macro (Institutional)_**

**Sharing or servicizing economy**
- Products designed for CE
- Increase the scale of the product portfolio for CE purposes
- Focus on product durability: delete short-term components
- Complete deletion of unendurable products
- Partial deletion of products with less sharing value

**Energy**
- Accurate energy consumption data
- Energy trading
- Energy decentralization
- Delete the utilization of non-green energy
- Focus on energy usage/consumption level
- Focus on energy usage/consumption efficiency
- Delete products with poor energy consumption efficiency
- Utilize the freed-up energy from deleted products to reverse energy cycles

**Market for secondary materials**
- Focus on material innovations: reduce material waste
- Create a material cycle in the supply chain incorporating the secondary material market and product deletion waste and inventory
- Extend product candidates' lifecycles by reducing material cost and increasing material efficiency and durability
- Utilize secondary materials in new product development
- Replace material sourcing from the primary market with the secondary market, with quality and performance assurance
**_Meso (Networks and Supply Chains)_**

**Reverse logistics**
- Implement reverse infrastructures in product development
- Information on the quality of returned products

**Industrial symbiosis and eco-industrial parks**
- Intra-organizational involvement
- Benefits to stakeholders
- Waste exchange and byproduct information managed and verified by blockchains can influence product deletion decisions

**Supply chain monitoring**
- Increase product information and waste exchange between supply chain actors
- Increase the involvement of supply chain actors in product deletion decision making
- Invest in technological platforms for product development and lifecycle management monitoring

**_Micro (Organizational/Consumer)_**

**Organizational value and knowledge**
- Firm strategy
- Value and culture: i.e., sustainability/CSR; openness to change; product attachment
- Byproducts
- Product design and differentiation
- Operational capacity
- Replace product components with higher end-of-life value

**Consumer knowledge and behavior**
- Customer demand
- Customer loyalty
- Consumer involvement in product deletion
- Consumer post-purchase behaviors

_4.1. Macro: Circular Economy at the Institutional Level_

Our categorization of circular economy initiatives at the macro level includes (1) the sharing or servicizing economy; (2) general energy management and (3) secondary market management.

4.1.1. Sharing or Servicizing Economy

In a CE, a sharing or servicizing economy enables exchanging or leasing products and services. However, a lack of information about products throughout their lifecycle is a barrier to successful implementation of these and other circular economy principles [47,48]. Accurate and real-time data sharing is an urgent need for shared economy activities. Blockchain technology can provide a platform for such activities. Blockchain can support transparency in supply chain networks to trace closing-the-loop activities. Network participants can track updated transactions, understand product status, and exchange data efficiently. Blockchain technology can facilitate sharing economy activities by reducing the need for third parties in transactions [49]. Users can exchange their services and products through a blockchain platform directly, without intermediaries, and thus save money and time. This can further leverage sharing activities.

Blockchain can contribute to product deletion management by providing accurate and reliable information related to shared products and services [50]. The ability to collect accurate, updated data related to products can include the quality and circular possibility of products, their locations, and their current stage in the product life cycle. This gives companies the opportunity to trace and analyze the reusability, performance, and durability of products and identify points of failure. Those products with poor sharing value and with durability issues, e.g., those containing short-term components and raising circularity concerns, can be candidates for removal from the product portfolio. Companies can further build up their circularity capacity by designing products with maximum sharing value and circularity, expanding the scale of the product portfolio for CE purposes, and introducing new CE technology or components to products.
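As a concrete, simplified illustration of how such ledger data might feed a deletion screen in a sharing economy, the sketch below uses hypothetical products, thresholds, and record fields to flag shared products whose recorded usage histories show poor durability or heavy maintenance; a real screen would of course draw on verified on-chain records rather than a hard-coded dictionary.

```python
from statistics import mean

# Hypothetical per-product usage histories, as they might be read off a ledger:
# each "cycles" entry is the condition score retained after one sharing cycle.
ledger_records = {
    "e-bike-std":  {"cycles": [0.9, 0.85, 0.8, 0.78], "repairs": 1},
    "e-bike-lite": {"cycles": [0.7, 0.5, 0.35],       "repairs": 4},
}

def deletion_candidates(records, min_durability=0.6, max_repairs=3):
    """Flag shared products whose ledger history shows poor durability or
    heavy maintenance: candidates for removal from the sharing portfolio."""
    flagged = []
    for product, rec in records.items():
        durability = mean(rec["cycles"])   # average condition retained per cycle
        if durability < min_durability or rec["repairs"] > max_repairs:
            flagged.append(product)
    return flagged

print(deletion_candidates(ledger_records))   # ['e-bike-lite']
```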
Records of leased products can be captured, no matter the location of these products, with performance information to determine if expectations of usage were met. Given that products in this environment are shared or leased, their attachment and care by consumers may not be as high. Thus, durability is an important performance measure for their circularity capabilities. If durability or maintenance requirements are too large, then the information may support product deletion. Blockchains without intermediaries can also bring down sharing fees greatly, allowing some shared products and materials to be resourced for maintenance, delaying product obsolescence and deletion.

4.1.2. Energy

Energy is a key source of supply chain activities. CE proposes that the circularity of energy and materials improves sustainability values [51]. Specifically, minimizing energy consumption and environmental pollution, and increasing the usage of green energies, can support circular economy purposes and sustain the environment. Converting wastes to biomass energy can further leverage the circularity of energy [15]. However, waste-to-energy is generally not considered a preferred option in the waste hierarchy model, which ranks different waste management techniques [52]. Alternatives such as recycling and remanufacturing are preferred, with reduction typically the most preferred option.

Blockchain technology can facilitate energy exchange and trading by offering new developments for decentralized energy markets. Agents and network participants can share their energy usage and surplus and trade their carbon credits through a blockchain platform. This may provide information for governments, policymakers, and communities in the broad design of these systems. Cryptocurrencies and the reliability of information supported by blockchain can further boost energy market performance [53]. Governmental regulators and stakeholders can observe and evaluate energy market information and monitor its compliance with environmental goals.

Accurate information presented on blockchain ledgers can enhance real-time monitoring and assessment of the energy consumption level of materials and products. Numerous materials are extracted from rare and non-renewable resources or use non-green energy resources in their processes. Those materials and energy are not only consumed but may create wastes and damage the environment. Product manufacturing and usage information that is continuously and accurately monitored can provide energy performance. Blockchain helps identify energy problems by providing traceability of materials and products back to their origin and metrics to evaluate their energy usage, ensuring sound and effective product deletion decisions. Products with a high level of energy usage and poor energy efficiency can be candidates for deletion, which, in turn, can enhance the circular economy. Policymakers can also tax products with poor energy performance more accurately, internalizing external energy and related emission costs. The freed energy from deleted products can be used to source circular economy activities and reverse energy cycles. Although circular economy activities, such as refurbishing, remanufacturing, and recycling, require intensive resources and energy, the environmental damage is typically less than that of primary production processes.
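A simplified sketch of this kind of energy-based screening follows; the products, thresholds, and ledger fields are hypothetical, and a real analysis would aggregate verified on-chain energy records across production and use stages rather than a hard-coded list.

```python
# Hypothetical energy data per product, aggregated from ledger transactions
# (kWh recorded at each production and use stage), with units of output.
energy_ledger = [
    {"product": "pump-A", "kwh": 120.0, "units": 100, "renewable_share": 0.8},
    {"product": "pump-B", "kwh": 340.0, "units": 100, "renewable_share": 0.1},
]

KWH_PER_UNIT_LIMIT = 2.0      # hypothetical efficiency threshold
RENEWABLE_FLOOR = 0.25        # hypothetical minimum share of green energy

def energy_deletion_candidates(entries):
    """Flag products whose traced energy profile (intensity per unit, plus
    reliance on non-green energy) makes them candidates for deletion."""
    flagged = []
    for e in entries:
        intensity = e["kwh"] / e["units"]
        if intensity > KWH_PER_UNIT_LIMIT or e["renewable_share"] < RENEWABLE_FLOOR:
            flagged.append((e["product"], round(intensity, 2)))
    return flagged

print(energy_deletion_candidates(energy_ledger))   # [('pump-B', 3.4)]
```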
Using a blockchain-enabled system helps companies determine the materials and products that use non-renewable resources and remove them or invest in alternative green resources to benefit the circularity of energy. However, while blockchains are maturing, vast amounts of power and energy are needed for data validation. This process occurs through so-called mining in cryptocurrencies and can thus have negative impacts on the environment. Growing interest in blockchain technology has motivated technological advancements that shift blockchain toward green and renewable energies [54].

4.1.3. Secondary Materials Markets

Substituting primary components with materials acquired from secondary markets can effectively support circular economy principles. Blockchain technology can provide a distributed platform for trading secondhand materials and products. Improved information transparency and verifiability allow network participants to sell and buy their wastes and used materials and products in secondary markets. Amazon and eBay are examples of current secondary markets. The presence of real-time information regarding the veracity and status of used materials and products can further boost circular economy efforts. This information can also denote the feasibility of replacing materials from a primary market with those from a secondary market. Reducing material costs can provide financial resources for extending the product life cycle and increasing material efficiency and durability. The quality and performance of secondary materials are traceable on a blockchain. This information can further address the potential and opportunities for developing new products that incorporate secondary materials, meet green initiatives, and maximize circularity. Knowing this information can help delete products that do not meet the necessary circularity criteria.

Disintermediation is another advantage provided by blockchain that can cultivate secondary market activities by connecting buyers and sellers without any intermediaries and reducing the costs of transactions. Those materials or products that cannot be managed with fewer intermediaries may result in more costly and complex systems, causing the deletion of these products from further consideration. Accurate and updated information about secondary materials and markets can improve product deletion analysis and decision making. Blockchain-based information can provide more accurate reusability and recyclability information for materials and products. Transparent market pricing and costing information on secondary materials may also help determine which products are more feasible or should remain in a portfolio. Products that demonstrate poor performance in the secondary market or incorporate materials that are not replaceable by used items might be candidates for deletion. Agents and business entities can make their deletion efforts more profitable and leverage the circular economy by selling their wastes and marketing the inventory of deleted products on a blockchain platform.

_4.2. Meso: Circular Economy at the Networks and Supply Chain Level_

Circular economy initiatives at the meso level may include systems and multi-organizational and regional practices such as (1) reverse logistics; (2) industrial symbiosis and eco-industrial parks and (3) supply chain monitoring.

4.2.1. Reverse Logistics
Environmental concerns motivate supply chain networks to incorporate reverse logistics activities and networks into their classical supply chain processes and close their supply chain loops. Reverse logistics refers to collecting and transferring products from the point of consumption (end consumers) to the origination of supply chains in order to recover value [55]. Reverse logistics may contain various "Re's" activities such as recycling, recovering, remanufacturing, and refurbishing. Products at each stage of their life cycle might be subject to return and reverse logistics. Accurate information regarding the condition of products, their location, their quality, and the processes undertaken is the core of efficient reverse logistics operations. This information is difficult to acquire through complex and multi-tier supply chain networks and after use by consumers. Blockchain can address this issue by presenting reliable information on the history of materials and products. Every classical and reverse supply chain transaction can create a record on blockchain ledgers that is immutable and traceable. This historical information, visible to supply chain networks, can be used to help make sound decisions about the proper reverse logistics activity that best matches the condition of products.

Blockchains can further leverage reverse logistics using smart contracts. Smart contracts can facilitate returning, reusing, and recycling activities between supply chain parties when product deletion occurs. A smart contract may reflect the agreement about the condition and quality of a product or material. When a returned product or material is identified, the smart contract can automatically generate the payment based on the defined product conditions [34]. The product condition may be evaluated and certified in the system. It can also determine the eventual plight of a product or material, e.g., reusing, recycling, or remanufacturing it. For example, some companies have a take-back system that allows their customers to return products at the end of their lifecycle back to the stores and receive a discount for future purchases [52]. Using smart contracts, those customers who return products with high recyclability potential, which can create more revenue for the company, can receive larger discounts and credits. This approach can further incentivize customers to return products and close the supply chain loop. Payments can be made in cryptocurrencies, saving time, especially in international transactions. The process would be easier for customers, thus motivating them to return problematic products. Customers can track returned products back through the supply chains.

Furthermore, traceable information regarding the reverse logistics of products can be used to evaluate the reusability of deleted products and their associated materials. Products with minimum value creation over their entire life cycle can be deleted or replaced with products that create more value in reverse logistics activities. When products are deleted, the reverse logistics cycles will likely start to lose material from that product, or temporarily have additional obsolete non-saleable material. In each case, knowing the length of time a product or material is in a CE is important to be able to plan for reverse logistics returned resources.
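To make the take-back smart contract idea concrete, the following toy sketch computes a customer credit automatically from a certified return record, with larger credits for higher recyclability, as discussed above. The condition grades, recyclability scores, and credit schedule are all hypothetical assumptions introduced for illustration.

```python
from dataclasses import dataclass

@dataclass
class ReturnedProduct:
    product_id: str
    condition: str           # certified on inspection: "reusable", "recyclable", "scrap"
    recyclability: float     # 0..1, estimated recoverable value share

# Hypothetical credit schedule: higher-value conditions earn a larger base credit.
BASE_CREDIT = {"reusable": 25.0, "recyclable": 10.0, "scrap": 2.0}

def take_back_credit(item: ReturnedProduct) -> float:
    """Toy take-back smart contract: once a return's certified condition is
    recorded on the ledger, the customer credit is computed and paid out
    automatically (e.g., in tokens), with no manual settlement step."""
    base = BASE_CREDIT.get(item.condition, 0.0)
    return round(base * (1.0 + item.recyclability), 2)

print(take_back_credit(ReturnedProduct("SKU-7", "recyclable", 0.9)))  # 19.0
```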
Long- and short-term information can provide such forecasts of returned product and material inventory. Thus, if a system is dependent on a particular material or returned product, then there might be possibilities to keep producing a product or material due to its value for material supply in a reverse logistics network. For example, for remanufacturing, knowing the location and condition of a 'core' for a product is necessary, and the technology to trace this material is necessary [56]. If a product is deleted, the need to capture and return cores is no longer a necessity and there might be a shift in reverse logistics from remanufacturing to recycling.

4.2.2. Industrial Symbiosis and Eco-Industrial Parks

An eco-industrial park contains several companies that cooperate to share their resources and manage their wastes in an environmentally sound way. Waste management is an important part of a circular economy. Although the primary focus, based on sustainability principles, is on a no-waste strategy, in most cases complete waste elimination is not possible, and thus produced wastes need to be managed effectively. Waste exchange programs are important aspects of industrial symbiosis and require collaboration among companies to address environmental issues and support the circularity of resources. Blockchain technology can provide a platform to connect companies to exchange and trade their wastes and recreate value. Companies can interact directly to exchange their wastes without any middlemen and improve profit margins. Smart contracts can further facilitate waste exchanges by automatically executing them based on factors such as the condition of wastes, their volume and quality. In addition, electronic sensors and tracking devices can capture the location and value of wastes and make data available on blockchain ledgers. Traceability of wastes is critical, especially for hazardous wastes [34]. Stakeholders can use blockchain information to evaluate the efficiency of waste exchange programs.

Information regarding waste exchanges can be recorded on blockchain ledgers. Blockchain can present information about the number of waste exchanges in a network and the value and quality of exchanges. This accurate information can be used for product deletion management. Those products that generate wastes with a low likelihood of waste exchange can be candidates for removal from supply chains. The waste exchange information may also help supply chain participants assess how well products and materials are selling at their final stages and thus make sounder product deletion decisions. Byproduct synergies are another aspect of product management decisions. For example, if an important byproduct that is profitable is made with waste from a product targeted for deletion, the decision may be impacted by the value of the by-product. Information on this by-product and other potential verified byproducts can be managed in the blockchain.

4.2.3. Supply Chain Monitoring

A circular economy contains operations that recollect the value of materials and products. Recapturing circular economy value requires tracing material and product flows in supply chains with sometimes complex and multifaceted networks of participants. Information discrepancy and asymmetry among supply chain participants can impede the identification of opportunities and potentials for enhancing sustainability efforts and a circular economy [57].
Blockchain technology provides supply chain transparency and traceability. Supply chain members from upstream to downstream can obtain accurate and updated information about products and inventory levels. Supply chain transactions related to material and product flows can be recorded on blockchain ledgers. Some transactions may be generated automatically by smart contracts or collected by automatic electronic sensors, such as RFID or Internet of Things-enabled devices [58]. Supply chain members can monitor and audit information using blockchain ledgers and adjust their inventory, optimize resource usage, and modify their processes to generate minimum waste. Effective information sharing can promote collaboration among supply chain members and build strategically and operationally beneficial relationships [59]. Supply chain members can address sustainability issues by integrating information, evaluating the efficiency of their supply chain processes, and positing solutions to optimize the circularity of materials and products, such as replacing some materials or investing in green technological advancements.

From a CE perspective, waste exchanges within supply chains and amongst partners may exist. Blockchains can be used to find additional supply chains from existing waste streams. If a product or material has profitable byproducts that can form new supply chains, blockchain can help identify these alternative mechanisms. Blockchain information on product history can identify past material uses and byproducts that may be used to identify future supply chains. This type of additional and easily accessible information may delay deletion decisions. Alternatively, byproduct and waste exchange information found not to be valid or performing well may be cause to delete a material or supply chain branch.

As blockchain technology presents a platform for data sharing in supply chains, product information exchanges among supply chain actors can be captured on blockchain ledgers. Supply chain networks can monitor the lifecycle of products and evaluate the green performance of supply chain activities and the products flowing through them [60]. Those materials and products that degrade the environment through their sourcing, undertaken operations and processes, and that create more waste, are candidates for deletion or replacement by environmentally friendly products. A blockchain-enabled supply chain requires a high level of coordination among supply chain members [16]. This can increase the involvement of supply chain actors in product deletion decision making. Supply chain members can make joint decisions about removing products with poor circularity from supply chains. Joint decisions may decrease the conflicts and challenges of product deletion implementation, as supply chain members have already agreed on the product deletion decision.

_4.3. Micro: Circular Economy at the Organizational and Consumer Level_

We have defined CE initiatives at the micro level to include (1) organizational value and knowledge, and (2) consumer knowledge and behavior. In this situation, we consider issues at the organizational and lower levels, such as households and even individuals. We try to keep the evaluation not on specific activities or functions, although they are included somewhat, but on general characteristics such as value, knowledge, and behavior.

4.3.1. Organizational Value and Knowledge
Companies can build competitive advantages through developing their organizational resources and following a path of capabilities development [61]. Firms can improve their market power by sustaining the environment through reusing materials, minimizing environmental pollution and waste, reducing the environmental costs of products, and implementing sustainable development [62,63]. Building organizational knowledge is a central factor in a circular economy. Companies can build capabilities, e.g., a better image and reputation, by investing in circular economy initiatives and green projects and implementing green values in their manufacturing operations and processes [64].

Blockchain technology supports knowledge sharing and development. Companies can monitor real information about the life cycle of materials and products and determine initiatives to extend their life cycle. Environmental knowledge and skills development are key capabilities that can be developed through sustainability efforts, such as green circular economy supplier development programs [65]. Knowing organizational capabilities and monitoring organizational improvements can be managed through the blockchain, especially for products and information flowing in distant locations. Shared knowledge on a blockchain platform, part of capability and value building, can help firms advance their strategies, values, and cultures to integrate circular economy initiatives.

Companies can use the built knowledge and values to identify which products contain components that share less value for circular economy purposes. They can delete those products or replace product components with higher end-of-life value materials and further design materials and technologies that improve the operational capacity and durability of products. Companies can further gather accurate information from blockchain ledgers to improve their ability to repair and upgrade products and learn how to design byproducts from their wastes. Those products with higher resource usage and lower circularity potential can be considered for removal from a product portfolio. In addition, deleted products and the remaining inventory can be by-products or side-products to circular economy manufacturing processes to utilize operational capacity and maximize supply chain value.

4.3.2. Consumer Knowledge and Behavior

A large fraction of consumers expect companies to be sustainable [66]. Autonomous motivations, which are ideally embedded into humans' sense of self, contain intrinsic and extrinsic motivations that provide energy to individuals to actively pursue the goal of environmental protection or other goals of sustainability [67]. Intrinsic motivators may refer to inherent enjoyment that may drive consumers to purchase green products or adopt environmentally friendly behaviors such as returning products, repairing materials, reducing waste, and recycling. Reducing costs and meeting environmental regulations can be extrinsic factors that direct consumers to adopt circular economy and sustainability initiatives [68,69]. Similar to what we discussed for organizational value and knowledge, blockchains can play an important role in building the environmental knowledge that fuels intrinsic and extrinsic motivations in consumers. Consumers can track product life cycle information, form knowledge, and adjust their behavior based on the available real-time information.
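As a minimal illustration of such consumer-facing traceability, the sketch below shows how a consumer application might surface only verified sustainability claims retrieved from a ledger; the lifecycle events and field names are hypothetical assumptions made for this example.

```python
# Hypothetical on-ledger lifecycle events for one product instance, as a
# consumer app might retrieve them before purchase or return.
lifecycle_events = [
    {"stage": "sourcing",    "detail": "60% recycled aluminium",  "verified": True},
    {"stage": "manufacture", "detail": "renewable-powered plant", "verified": True},
    {"stage": "retail",      "detail": "sold 2024-03-01",         "verified": True},
]

def green_summary(events):
    """Summarize verified sustainability claims for a consumer: the kind of
    traceable, trustworthy information that can support green purchase and
    return behavior."""
    return [e["detail"] for e in events if e["verified"]]

print(green_summary(lifecycle_events))
# ['60% recycled aluminium', 'renewable-powered plant', 'sold 2024-03-01']
```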
Companies can also devise proper reverse logistics strategies by using blockchain information regarding consumer demands, actions, loyalty, and post-purchase behavior. Being aware and confident of certain green product characteristics can help motivate purchase behavior for sustainable products. The transparency and traceability of blockchain technology can greatly improve this confidence; circular economy characteristics such as recycled materials can increase confidence in an environmentally sustainable product.

Blockchain can be used to incentivize product returns to the supply chain. For instance, consumers who return products at the end of the life cycle can be rewarded with cryptocurrency tokens. This can stimulate the circularity of products and provide integrated information about the performance of returning programs and funding management for these programs. The incentive systems also help identify products with low return rates, which can be candidates for deletion because they do not provide value for circular economy purposes. Traceability of information allows companies to identify products and materials that are collected by poor people and informal markets for recycling purposes and secondary markets. Deleting those products may conflict with moral values that should be considered in product deletion decision making.

In each case, we provided some examples of how the nexus can work together at multiple levels of CE. These activities and characteristics are exemplary. Many additional and emergent issues, as well as broader categories, also exist.

**5. Implications and Future Research**

In this section, we briefly discuss theoretical and practical implications, both of which can lead to future research.

_5.1. Research and Theoretical Implications_

The research and theoretical implications are quite varied. Given the multiple levels of analysis, the theory involved in the design, planning, adoption and general management of this blockchain, CE and product deletion nexus can become quite complex. A theory-based issue at each of the three levels is introduced. Many theories exist for multiple levels of analysis and include economic, organizational, and even individual behavioral theories [70].

At the broadest levels, there are issues related to economic and policy theory. For example, ecological modernization theory [71] has been utilized to explain how various technologies are applicable to CE and sustainability issues. Given that blockchain technology can help build efficiencies, through product deletion in this case, the theory can help explain how economic growth can be decoupled from environmental degradation. Whether this situation holds at broad country or even supply chain levels can be investigated.

Managing a circular economy is a 'wicked problem' [71,72]. It has been found that a single theoretic perspective cannot truly address wicked problems [73]. Finding appropriate theories to help study and describe phenomena is necessary. At the supply chain and information technology level, resource dependence, relational and organizational information processing theories have been used to evaluate complex relationships with big data (e.g., [74,75]). Whether these theories can help explain when and how to eliminate products in a CE and sustainability context needs investigation.
Finally, an example of a theoretical implication at the micro level is how individual consumerism and motivation relate to CE and information on product deletion. Numerous consumer theories exist [76], and motivation theory is a core aspect of these systems. Reward systems to motivate individuals to be involved in CE practices, e.g., recycling, are a big concern. We need to ensure people are fairly rewarded, and incentive mechanisms in this context need study. Could theories like self-determination theory [77] help model situations where consumers can use the information for CE practices, even when demotivational pressures such as product deletion occur? Given the relative novelty and the essentially contested characteristics of each of the three topics within this nexus, there is ample room for further theoretical and conceptual development. Which theories are most applicable is a research concern.

_5.2. Managerial and Practical Implications_

The managerial and practical implications of the various example interactions presented occur across multiple levels: governments, communities, supply chains, organizations and individuals. Much of what we presented is primarily through a product perspective, although supporting processes and information were also incorporated. The implications provided here, again, only represent examples of the many potential interactions and relationships. Our goal is to help show some of the complexities that exist at the nexus. As mentioned earlier, these topics are all essentially contested concepts amongst academics, and this also occurs amongst practitioners. Implicit in all these practical issues and concerns is the need to draw an appropriate boundary, which may be critical. In fact, this is what we have done when looking at various levels of implementation and analysis. It is also a concern for those who are seeking to actually link all three areas or even any two of them. We provide a limited number of exemplary practical implications at the nexus for each level.

From a policy perspective, managing the information across the blockchain can prove valuable for developing the necessary CE infrastructure. In some cases where products are deleted, policymakers need to determine not only the CE implications of the deletion, but also broader environmental issues. The aggregation of information may be more easily developed in this case. The issue will arise from being able to monitor, over a given planning horizon, what inventory of materials exists for developing appropriate material flows and natural resources policies. Knowing which products exist and which products may be deleted can help ensure that materials for specific industries are available. For example, if plastic products are being phased out, then plastics recycling and availability may decrease. This may require additional petroleum investments that may not be environmentally good choices. Investing in biodegradables or other aspects may be a long-term policy issue for communities and governments. The determination and interaction of public or private blockchains are also concerns. Waste exchange information may be public, but private decisions related to product deletion may not be as easily available due to their proprietary nature. In this situation, some form of development along the lines of allowing third-party management or a smart contract arrangement that anonymizes the relationships may be required.
Example supply chain and network issues can also relate to transparency and security. Once again, the sensitivity of information is critical to whether it should be shared. Eco-industrial park and industrial symbiosis systems can be set up to help identify virtual inter-organizational relationships, instead of physically close, geographically proximate eco-industrial parks. The virtual nature, although allowing for transparency, will require constant monitoring. Whether this monitoring is automated through artificial intelligence or performed by actual personnel needs to be determined. Relatedly, the implementation of these systems does not occur in a vacuum. When novel systems and activities are introduced, the existing operations and legacy systems require careful consideration. The existing systems that supply chains use to communicate and make decisions, whether formal or informal, remain in existence. Some can easily be replaced if alternatives are less costly or the legacy systems are very difficult to use. Some may not be and may need integration with a new system. For example, Internet-based waste exchange programs are cheap and relatively quick to use; would blockchain add value? In this situation, blockchain may not add immediate direct waste exchange value due to its technological limitations. However, if product deletion decisions are something that companies value and can use strategically, then blockchain may provide a more proactive and transparent inter-organizational system.

Individual enhancements and consumer behavior incentive systems to complete transactions in recycling and other aspects can prove complex as well. The reward system may need to be reevaluated after the product deletion of modular systems that had been sold and can be returned for upgrading. Incentivizing people to make these returns would require their understanding and knowledge of blockchain incentives and cryptocurrency. These are not simple activities to complete and can present substantial behavioral barriers.

_5.3. Future Research_

Blockchain as an enabler for facilitating business activities has received significant attention. Blockchain has some unique features that elevate this technology beyond traditional supply chain integration information systems, e.g., Enterprise Resource Planning (ERP) systems. Traditional information systems mostly use centralized databases that are vulnerable to being manipulated or crashing. Decentralized structures provided by blockchain technology remove central authorities from systems and minimize the likelihood of system failure. In addition, blockchain uses a cryptographic signing structure that increases the reliability and security of records. Disintermediation, the immutability of information, a trustless environment, and smart contracts are other specific features underpinning blockchain. However, blockchain is an emergent technology that requires greater clarity. What is and what is not blockchain, and what characteristics exist, is part of the essentially contested concept of blockchain. Additionally, there is confusion about the real-world and large-scale business applications and interoperability of blockchain, especially the scalability issue. More research is needed to address the real-world application of blockchain technology, to clearly define this technology in different business contexts, and to elaborate on the governance and business models and structures for using blockchain.
As implied by many of the theoretical and practical implications, significant future research can be targeted at the study of the discussed joint topics: blockchain, CE, and product deletion. The studies can be at the dyadic level, such as blockchain and CE only, or address all three topics simultaneously. We posit several further future research directions.

The first step may be to identify some real-world studies, especially case studies, that attempt to consider all three actions. Data acquisition and empirical data are needed to further advance the application of blockchain technology in product deletion decisions in the circular economy context. Hypothesis development and testing may be possible with additional data acquisition for further relationship identification and evaluation. Simulations can be completed to address various business scenarios, including industry type, product portfolio size, life cycle maturity and product characteristics, to test the likelihood of deletion for circular economy reasons, as well as whether adopting and utilizing blockchain technology in those business scenarios is appropriate and feasible. Sensitivity and robustness analyses can help validate and evaluate blockchain technology applications in product deletion decision making processes.

Given the relative novelty of all three areas, simulation analysis may be the most appropriate approach for further investigation. In this situation, tools such as system dynamics can be developed and applied to determine what occurs in different scenarios. The complexity of relationships would need to be explicitly modeled and then executed to determine long-run implications at the various levels of analysis. Stakeholders might have diverse opinions and concerns about blockchain technology application in strategic organizational decisions. Future research could develop tools or models to quantify and incorporate their beliefs and concerns into the revitalization and evaluation processes for more rational and appealing product deletion decisions.

These future research directions mostly consider the use of various tools and techniques. The many questions identified earlier in the discussion are also open research questions. We do not repeat them here, given that this paper provides a conceptual series of issues that need to be studied. For example, each of the relationships identified in Table 1 can and should be investigated.

**6. Conclusions**

In this paper, we introduced a concern of current and emergent importance to nations, organizations, and consumers: CE, blockchain technology, and product deletion. The nuances and interactions amongst these three areas were presented. Much of the presentation was at a relatively conceptual and strategic level, although some operational concerns were also addressed. The purpose of this work and its contribution was to identify various ways that these three areas interact and the research and managerial implications of each. By using a three-level analysis and sub-analysis of macro, meso and micro concerns, a series of issues were identified. Clearly, some issues are probably more prevalent and realistic, while others are still relatively conceptual. Making sense of these interrelationships can advance CE as a development, for communities and governments, and as a competitive weapon for supply chains.
Given that the ultimate goal is to improve the economy and the environment, we provide a number of additional theoretical and managerial concerns. Any one of these topics alone is a fertile area of research and practical development; together, the ground is very fertile for significant investigation. This investigation should not only be about how the three ideas can be integrated, but also about the need to overcome some of the limitations of definition, capabilities, and feasibility of the linkages. There are many remaining concerns related to technological, organizational, cultural, and economic feasibility issues. Each concern needs attention from researchers and practitioners before the interactions and synergies can become reality. These caveats are self-evident, although we wished to make them explicit as well, recognizing that the three (the circular economy, blockchain technology, and product deletion) are not panaceas for organizations, society, and the future, but require critical reflection. We hope that this paper has set the foundation to build a further and much-needed investigation.

**Author Contributions:** The authors collaborated on all sections of the paper, participated in all writing, and revising.

**Funding:** This research was funded by the ASCM, Association for Supply Chain Management, grant number 227940.

**Conflicts of Interest:** The authors declare no conflict of interest.

**References**

1. Timko, M.T. A World Without Waste. IEEE Eng. Manag. Rev. 2019, 47, 106–109. [CrossRef](http://dx.doi.org/10.1109/EMR.2019.2900636)
2. Korhonen, J.; Nuur, C.; Feldmann, A.; Birkie, S.E. Circular economy as an essentially contested concept. J. Clean. Prod. 2018, 175, 544–552. [CrossRef](http://dx.doi.org/10.1016/j.jclepro.2017.12.111)
3. Sarkis, J.; Zhu, H. Information technology and systems in China's circular economy: Implications for sustainability. J. Syst. Inf. Technol. 2008, 10, 202–217. [CrossRef](http://dx.doi.org/10.1108/13287260810916916)
4. Spring, M.; Araujo, L. Product biographies in servitization and the circular economy. Ind. Mark. Manag. 2017, 60, 126–137. [CrossRef](http://dx.doi.org/10.1016/j.indmarman.2016.07.001)
5. De Jesus, A.; Mendonça, S. Lost in transition? Drivers and barriers in the eco-innovation road to the circular economy. Ecol. Econ. 2018, 145, 75–89. [CrossRef](http://dx.doi.org/10.1016/j.ecolecon.2017.08.001)
6. Tura, N.; Hanski, J.; Ahola, T.; Ståhle, M.; Piiparinen, S.; Valkokari, P. Unlocking circular business: A framework of barriers and drivers. J. Clean. Prod. 2019, 212, 90–98. [CrossRef](http://dx.doi.org/10.1016/j.jclepro.2018.11.202)
7. Patala, S.; Albareda, L.; Halme, M. Polycentric Governance of Privately Owned Resources in Circular Economy Systems. Acad. Manag. Proc. 2018, 2018, 16634. [CrossRef](http://dx.doi.org/10.5465/AMBPP.2018.155)
8. Geng, Y.; Sarkis, J.; Bleischwitz, R. How to Globalize the Circular Economy; Nature Publishing Group: London, UK, 2019.
9. Korhonen, J.; Honkasalo, A.; Seppälä, J. Circular economy: The concept and its limitations. Ecol. Econ. 2018, 143, 37–46. [CrossRef](http://dx.doi.org/10.1016/j.ecolecon.2017.06.041)
10. Lee, D.; Toffel, M.W.; Gordon, R.; Cook Composites and Polymers Co. Harvard Business School Technology & Operations Mgt. Unit. HBS Case No. 608-055. 2009. Available online: https://ssrn.com/abstract=1444806 (accessed on 24 April 2019).
11. Park, J.; Sarkis, J.; Wu, Z.
Creating integrated business and environmental value within the context of China's circular economy and ecological modernization. J. Clean. Prod. 2010, 18, 1494–1501. [CrossRef](http://dx.doi.org/10.1016/j.jclepro.2010.06.001)
12. Tong, X.; Wang, T.; Chen, Y.; Wang, Y. Towards an inclusive circular economy: Quantifying the spatial flows of e-waste through the informal sector in China. Resour. Conserv. Recycl. 2018, 135, 163–171. [CrossRef](http://dx.doi.org/10.1016/j.resconrec.2017.10.039)
13. Geng, Y.; Sarkis, J.; Ulgiati, S. Sustainability, well-being, and the circular economy in China and worldwide. Science 2016, 6278, 73–76.
14. Ghisellini, P.; Cialani, C.; Ulgiati, S. A review on circular economy: The expected transition to a balanced interplay of environmental and economic systems. J. Clean. Prod. 2016, 114, 11–32. [CrossRef](http://dx.doi.org/10.1016/j.jclepro.2015.09.007)
15. De Sousa Jabbour, A.B.L.; Jabbour, C.J.C.; Godinho Filho, M.; Roubaud, D. Industry 4.0 and the circular economy: A proposed research agenda and original roadmap for sustainable operations. Ann. Oper. Res. 2018, 270, 273–286. [CrossRef](http://dx.doi.org/10.1007/s10479-018-2772-8)
16. Saberi, S.; Kouhizadeh, M.; Sarkis, J.; Shen, L. Blockchain technology and its relationships to sustainable supply chain management. Int. J. Prod. Res. 2018, 57, 2117–2135. [CrossRef](http://dx.doi.org/10.1080/00207543.2018.1533261)
17. Nakamoto, S. Bitcoin: A Peer to Peer Electronic Cash System; Krypton Publisher Ltd.: Majuro, Marshall Islands, 2009.
18. Swan, M. Blockchain: Blueprint for a New Economy; O'Reilly Media, Inc.: Sevastopol, CA, USA, 2015.
19. Casado-Vara, R.; Prieto, J.; De la Prieta, F.; Corchado, J.M. How blockchain improves the supply chain: Case study alimentary supply chain. Procedia Comput. Sci. 2018, 134, 393–398. [CrossRef](http://dx.doi.org/10.1016/j.procs.2018.07.193)
20. Li, X.; Jiang, P.; Chen, T.; Luo, X.; Wen, Q. A survey on the security of blockchain systems. Future Gener. Comput. Syst. 2017. [CrossRef](http://dx.doi.org/10.1016/j.future.2017.08.020)
21. Kelley, J. Global Diamond & Jewelry Market Tracks Authenticity with IBM Blockchain: IBM. 2018. Available online: https://www.ibm.com/blogs/think/2018/04/global-jewelry-ibm-blockchain/ (accessed on 1 April 2019).
22. Tian, F. An agri-food supply chain traceability system for China based on RFID & blockchain technology. In Proceedings of the 2016 13th International Conference on Service Systems and Service Management (ICSSSM), Kunming, China, 24–26 June 2016.
23. Christidis, K.; Devetsikiotis, M. Blockchains and smart contracts for the internet of things. IEEE Access 2016, 4, 2292–2303. [CrossRef](http://dx.doi.org/10.1109/ACCESS.2016.2566339)
24. Kshetri, N. Can blockchain strengthen the internet of things? It Prof. 2017, 19, 68–72. [CrossRef](http://dx.doi.org/10.1109/MITP.2017.3051335)
25. Yuan, Y.; Wang, F.-Y. Towards blockchain-based intelligent transportation systems. In Proceedings of the 2016 IEEE 19th International Conference on Intelligent Transportation Systems (ITSC), Rio de Janeiro, Brazil, 1–4 November 2016.
26. Francisco, K.; Swanson, D. The supply chain has no clothes: Technology adoption of blockchain for supply chain transparency. Logistics 2018, 2, 2. [CrossRef](http://dx.doi.org/10.3390/logistics2010002)
27. Wüst, K.; Gervais, A. Do you need a Blockchain?
In Proceedings of the 2018 Crypto Valley Conference on Blockchain Technology (CVCBT), Zug, Switzerland, 20–22 June 2018. 28. Macrinici, D.; Cartofeanu, C.; Gao, S. Smart contract applications within blockchain technology: A systematic [mapping study. Telemat. Inform. 2018, 35, 2337–2354. [CrossRef]](http://dx.doi.org/10.1016/j.tele.2018.10.004) 29. Giancaspro, M. Is a ‘smart contract’really a smart idea? Insights from a legal perspective. Comput. Law Secur. _[Rev. 2017, 33, 825–835. [CrossRef]](http://dx.doi.org/10.1016/j.clsr.2017.05.007)_ 30. Ølnes, S.; Ubacht, J.; Janssen, M. Blockchain in government: Benefits and implications of distributed ledger [technology for information sharing. Gov. Inf. Q. 2017, 34, 355–364. [CrossRef]](http://dx.doi.org/10.1016/j.giq.2017.09.007) 31. Kshetri, N. 1 Blockchain’s roles in meeting key supply chain management objectives. Int. J. Inf. Manag. 2018, _[39, 80–89. [CrossRef]](http://dx.doi.org/10.1016/j.ijinfomgt.2017.12.005)_ 32. [Jeffries, A. ‘Blockchain’ Is Meaningless: The Verge. 2018. Available online: https://www.theverge.com/2018/](https://www.theverge.com/2018/3/7/17091766/blockchain-bitcoin-ethereum-cryptocurrency-meaning) [3/7/17091766/blockchain-bitcoin-ethereum-cryptocurrency-meaning (accessed on 1 April 2019).](https://www.theverge.com/2018/3/7/17091766/blockchain-bitcoin-ethereum-cryptocurrency-meaning) 33. Sarkis, J.; Zhu, Q. Environmental sustainability and production: Taking the road less travelled. Int. J. Prod. _[Res. 2018, 56, 743–759. [CrossRef]](http://dx.doi.org/10.1080/00207543.2017.1365182)_ 34. Kouhizadeh, M.; Sarkis, J. Blockchain Practices, Potentials, and Perspectives in Greening Supply Chains. _[Sustainability 2018, 10, 3652. [CrossRef]](http://dx.doi.org/10.3390/su10103652)_ 35. Mougayar, W. The Business Blockchain: Promise, Practice, and Application of the Next Internet Technology; John Wiley & Sons: Hoboken, NJ, USA, 2016. 36. Porru, S.; Pinna, A.; Marchesi, M.; Tonelli, R. Blockchain-oriented software engineering: Challenges and new directions. In Proceedings of the 2017 IEEE/ACM 39th International Conference on Software Engineering Companion (ICSE-C), Buenos Aires, Argentina, 20–28 May 2017; pp. 169–171. 37. Avlonitis, G.J.; Argouslidis, P.C. Tracking the evolution of theory on product elimination: Past, present, and [future. Mark. Rev. 2012, 12, 345–379. [CrossRef]](http://dx.doi.org/10.1362/146934712X13469451716592) 38. [Hart, S.J. Product deletion and the effects of strategy. Eur. J. Mark. 1989, 23, 6–17. [CrossRef]](http://dx.doi.org/10.1108/EUM0000000000591) 39. Weckles, R. Product line deletion and simplification: Tough but necessary decisions. Bus. Horiz. 1971, 14, [71–74. [CrossRef]](http://dx.doi.org/10.1016/0007-6813(71)90092-9) 40. Hart, S.J. The causes of product deletion in British manufacturing companies. J. Mark. Manag. 1988, 3, [328–343. [CrossRef]](http://dx.doi.org/10.1080/0267257X.1988.9964050) 41. Zhu, Q.; Shah, P.; Sarkis, J. Addition by subtraction: Integrating product deletion with lean and sustainable [supply chain management. Int. J. Prod. Econ. 2018, 205, 201–214. [CrossRef]](http://dx.doi.org/10.1016/j.ijpe.2018.08.035) 42. Bai, C.; Shah, P.; Zhu, Q.; Sarkis, J. Green product deletion decisions: An integrated sustainable production [and consumption approach. Ind. Manag. Data Syst. 2018, 118, 349–389. [CrossRef]](http://dx.doi.org/10.1108/IMDS-05-2017-0175) ----- _Appl. Sci. 2019, 9, 1712_ 19 of 20 43. Sarkis, J. A boundaries and flows perspective of green supply chain management. 
Supply Chain Manag. Int. J. **[2012, 17, 202–216. [CrossRef]](http://dx.doi.org/10.1108/13598541211212924)** 44. Zhu, Q.; Shah, P. Product deletion and its impact on supply chain environmental sustainability. Resour. _[Conserv. Recycl. 2018, 132, 1–2. [CrossRef]](http://dx.doi.org/10.1016/j.resconrec.2018.01.010)_ 45. Bai, C.; Satir, A.; Sarkis, J. Investing in lean manufacturing practices: An environmental and operational [perspective. Int. J. Prod. Res. 2018, 57, 1037–1051. [CrossRef]](http://dx.doi.org/10.1080/00207543.2018.1498986) 46. Pourhejazy, P.; Sarkis, J.; Zhu, Q. A fuzzy-based decision aid method for product deletion of fast moving [consumer goods. Expert Syst. Appl. 2019, 119, 272–288. [CrossRef]](http://dx.doi.org/10.1016/j.eswa.2018.11.001) 47. Su, B.; Heshmati, A.; Geng, Y.; Yu, X. A review of the circular economy in China: Moving from rhetoric to [implementation. J. Clean. Prod. 2013, 42, 215–227. [CrossRef]](http://dx.doi.org/10.1016/j.jclepro.2012.11.020) 48. Kirchherr, J.; Piscicelli, L.; Bour, R.; Kostense-Smit, E.; Muller, J.; Huibrechtse-Truijens, A.; Hekkert, M. Barriers to the circular economy: Evidence from the European Union (EU). Ecol. Econ. 2018, 150, 264–272. [[CrossRef]](http://dx.doi.org/10.1016/j.ecolecon.2018.04.028) 49. Huckle, S.; Bhattacharya, R.; White, M.; Beloff, N. Internet of things, blockchain and shared economy [applications. Procedia Comput. Sci. 2016, 98, 461–466. [CrossRef]](http://dx.doi.org/10.1016/j.procs.2016.09.074) 50. Zhu, Q.; Kouhizadeh, M. Blockchain Technology, Supply Chain Information, and Strategic Product Deletion [Management. IEEE Eng. Manag. Rev. 2019, 47, 36–44. [CrossRef]](http://dx.doi.org/10.1109/EMR.2019.2898178) 51. Geissdoerfer, M.; Savaget, P.; Bocken, N.M.; Hultink, E.J. The Circular Economy–A new sustainability [paradigm? J. Clean. Prod. 2017, 143, 757–768. [CrossRef]](http://dx.doi.org/10.1016/j.jclepro.2016.12.048) 52. Corvellec, H.; Stål, H.I. Evidencing the waste effect of product-service systems (PSSs). J. Clean. Prod. 2017, _[145, 14–24. [CrossRef]](http://dx.doi.org/10.1016/j.jclepro.2017.01.033)_ 53. Mengelkamp, E.; Notheisen, B.; Beer, C.; Dauer, D.; Weinhardt, C. A blockchain-based smart grid: Towards [sustainable local energy markets. Comput. Sci. Res. Dev. 2018, 33, 207–214. [CrossRef]](http://dx.doi.org/10.1007/s00450-017-0360-9) 54. Truby, J. Decarbonizing Bitcoin: Law and policy choices for reducing the energy consumption of Blockchain [technologies and digital currencies. Energy Res. Soc. Sci. 2018, 44, 399–410. [CrossRef]](http://dx.doi.org/10.1016/j.erss.2018.06.009) 55. Rogers, D.S.; Tibben-Lembke, R.S. Going Backwards: Reverse Logistics Trends and Practices; Reverse Logistics Executive Council: Pittsburgh, PA, USA, 1999; Volume 2. 56. Yang, S.; Aravind Raghavendra, M.R.; Kaminski, J.; Pepin, H. Opportunities for industry 4.0 to support [remanufacturing. Appl. Sci. 2018, 8, 1177. [CrossRef]](http://dx.doi.org/10.3390/app8071177) 57. Tseng, M.-L.; Tan, R.R.; Chiu, A.S.; Chien, C.-F.; Kuo, T.C. Circular economy meets industry 4.0: Can big data [drive industrial symbiosis? Resour. Conserv. Recycl. 2018, 131, 146–147. [CrossRef]](http://dx.doi.org/10.1016/j.resconrec.2017.12.028) 58. Lindström, J.; Hermanson, A.; Blomstedt, F.; Kyösti, P. A multi-usable cloud service platform: A case study [on improved development pace and efficiency. Appl. Sci. 2018, 8, 316. [CrossRef]](http://dx.doi.org/10.3390/app8020316) 59. Soosay, C.A.; Hyland, P. 
A decade of supply chain collaboration and directions for future research. Supply _[Chain Manag. Int. J. 2015, 20, 613–630. [CrossRef]](http://dx.doi.org/10.1108/SCM-06-2015-0217)_ 60. Shah, P.; Zhu, Q.; Sarkis, J. Product deletion and the supply chain: A greening perspective. In Proceedings of the 2017 IEEE Technology & Engineering Management Conference (TEMSCON), Santa Clara, CA, USA, 8–10 June 2017; pp. 324–328. 61. Dierickx, I.; Cool, K. Asset stock accumulation and sustainability of competitive advantage. Manag. Sci. **[1989, 35, 1504–1511. [CrossRef]](http://dx.doi.org/10.1287/mnsc.35.12.1504)** 62. [Hart, S.L. A natural-resource-based view of the firm. Acad. Manag. Rev. 1995, 20, 986–1014. [CrossRef]](http://dx.doi.org/10.5465/amr.1995.9512280033) 63. Yunus, E.N.; Michalisin, M.D. Sustained competitive advantage through green supply chain management practices: A natural-resource-based view approach. Int. J. Serv. Oper. Manag. 2016, 25, 135–154. 64. Sarkis, J. Convincing industry that there is value in environmentally supply chains. Probl. Sustain. Dev. 2009, _4, 61–64._ 65. Bai, C.; Sarkis, J. Green supplier development: Analytical evaluation using rough set theory. J. Clean. Prod. **[2010, 18, 1200–1210. [CrossRef]](http://dx.doi.org/10.1016/j.jclepro.2010.01.016)** 66. Ottman, J. The New Rules of Green Marketing: Strategies, Tools, and Inspiration for Sustainable Branding; Routledge: Abington, UK, 2017. 67. Thøgersen, J. How may consumer policy empower consumers for sustainable lifestyles? J. Consum. Policy **[2005, 28, 143–177. [CrossRef]](http://dx.doi.org/10.1007/s10603-005-2982-8)** 68. Koo, C.; Chung, N.; Nam, K. Assessing the impact of intrinsic and extrinsic motivators on smart green IT [device use: Reference group perspectives. Int. J. Inf. Manag. 2015, 35, 64–79. [CrossRef]](http://dx.doi.org/10.1016/j.ijinfomgt.2014.10.001) ----- _Appl. Sci. 2019, 9, 1712_ 20 of 20 69. Schösler, H.; de Boer, J.; Boersema, J.J. Fostering more sustainable food choices: Can Self-Determination [Theory help? Food Qual. Prefer. 2014, 35, 59–69. [CrossRef]](http://dx.doi.org/10.1016/j.foodqual.2014.01.008) 70. Liu, J.; Feng, Y.; Zhu, Q.; Sarkis, J. Green supply chain management and the circular economy: Reviewing [theory for advancement of both fields. Int. J. Phys. Distrib. Logist. Manag. 2018, 48, 794–817. [CrossRef]](http://dx.doi.org/10.1108/IJPDLM-01-2017-0049) 71. Bergendahl, J.A.; Sarkis, J.; Timko, M.T. Transdisciplinarity and the food energy and water nexus: Ecological modernization and supply chain sustainability perspectives. Resour. Conserv. Recycl. 2018, 133, 309–319. [[CrossRef]](http://dx.doi.org/10.1016/j.resconrec.2018.01.001) 72. Tong, X.; Tao, D. The rise and fall of a “waste city” in the construction of an “urban circular economic system”: [The changing landscape of waste in Beijing. Resour. Conserv. Recycl. 2016, 107, 10–17. [CrossRef]](http://dx.doi.org/10.1016/j.resconrec.2015.12.003) 73. Termeer, C.; Dewulf, A.; Karlsson-Vinkhuyzen, S.; Vink, M.; Van Vliet, M. Coping with the wicked problem of climate adaptation across scales: The Five R Governance Capabilities. Landsc. Urban Plan. 2016, 154, [11–19. [CrossRef]](http://dx.doi.org/10.1016/j.landurbplan.2016.01.007) 74. Hazen, B.T.; Skipper, J.B.; Ezell, J.D.; Boone, C.A. Big Data and predictive analytics for supply chain [sustainability: A theory-driven research agenda. Comput. Ind. Eng. 2016, 101, 592–598. [CrossRef]](http://dx.doi.org/10.1016/j.cie.2016.06.030) 75. Waller, M.A.; Fawcett, S.E. 
Data science, predictive analytics, and big data: A revolution that will transform [supply chain design and management. J. Bus. Logist. 2013, 34, 77–84. [CrossRef]](http://dx.doi.org/10.1111/jbl.12010) 76. Groening, C.; Sarkis, J.; Zhu, Q. Green marketing consumer-level theory review: A compendium of applied [theories and further research directions. J. Clean. Prod. 2018, 172, 1848–1866. [CrossRef]](http://dx.doi.org/10.1016/j.jclepro.2017.12.002) 77. Ryan, R.M.; Deci, E.L. Self-determination theory and the facilitation of intrinsic motivation, social [development, and well-being. Am. Psychol. 2000, 55, 68. [CrossRef] [PubMed]](http://dx.doi.org/10.1037/0003-066X.55.1.68) © 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution [(CC BY) license (http://creativecommons.org/licenses/by/4.0/).](http://creativecommons.org/licenses/by/4.0/.) -----
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/APP9081712?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/APP9081712, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/2076-3417/9/8/1712/pdf?version=1557133324" }
2019
[]
true
2019-04-25T00:00:00
[ { "paperId": "b01b7d680f6c10e15ec6f16e0ff4cfe39a78d4cd", "title": "A fuzzy-based decision aid method for product deletion of fast moving consumer goods" }, { "paperId": "a15950206cbc5418708dcbfd537d5e567a4359ec", "title": "Unlocking circular business: A framework of barriers and drivers" }, { "paperId": "85e15a0242686f182b494013c7b3278267a68875", "title": "A World Without Waste" }, { "paperId": "47a5a76956915f182552c445de8131508c89e8a3", "title": "Blockchain Technology, Supply Chain Information, and Strategic Product Deletion Management" }, { "paperId": "dcfb8fe022d268c49c98899225f545d007f8e361", "title": "How to globalize the circular economy" }, { "paperId": "f5a21fb87d88b4510dd4d42fbdc52a674a592ea6", "title": "Smart contract applications within blockchain technology: A systematic mapping study" }, { "paperId": "ae6fcf256d3fb62bbca30ba7fcd925991b1f597a", "title": "Addition by subtraction: Integrating product deletion with lean and sustainable supply chain management" }, { "paperId": "2e82b8539af92b4af1f5c1c59dcce9d31dcefccc", "title": "Blockchain technology and its relationships to sustainable supply chain management" }, { "paperId": "d0cf0cf487ea5197ecfabfabed2e2f978a49dd96", "title": "Blockchain Practices, Potentials, and Perspectives in Greening Supply Chains" }, { "paperId": "15c69c4ece24b82105fc41d270d642593ce4318f", "title": "Decarbonizing Bitcoin: Law and policy choices for reducing the energy consumption of Blockchain technologies and digital currencies" }, { "paperId": "5472282e4d9328c865e2784a12e6b995acbec627", "title": "Green supply chain management and the circular economy" }, { "paperId": "104ee2e08c8a6429def9a7164e2953e65ed86e6c", "title": "Barriers to the Circular Economy: Evidence From the European Union (EU)" }, { "paperId": "b88c6adbde065e0b51eba99af7b3e19b59564a3d", "title": "Investing in lean manufacturing practices: an environmental and operational perspective" }, { "paperId": "e525dc3be118dd73efd783b97664bab847420c63", "title": "Opportunities for Industry 4.0 to Support Remanufacturing" }, { "paperId": "35a7eb8bed4c4a897904362eac6d370f9040de25", "title": "Polycentric Governance of Privately Owned Resources in Circular Economy Systems" }, { "paperId": "ad9760ea1568263d4f670edc52e8d91875c95e42", "title": "Do you Need a Blockchain?" }, { "paperId": "572cedb7decfb64bd2a915750a98f9988dc1c5fb", "title": "Transdisciplinarity and the food energy and water nexus: Ecological modernization and supply chain sustainability perspectives" }, { "paperId": "4fd7fa52bff2ed0f181a52393a5c5c22599bd617", "title": "Product deletion and its impact on supply chain environmental sustainability" }, { "paperId": "79f4ba320215fb0586d216f6ac4ba981307af4fb", "title": "Circular economy meets industry 4.0: Can big data drive industrial symbiosis?" 
}, { "paperId": "677d276996ba7a84b9078e9c413cbb1d8820a15e", "title": "1 Blockchain's roles in meeting key supply chain management objectives" }, { "paperId": "db1057480c48637be42dbc5aa7b91a7af6038f13", "title": "Green product deletion decisions: An integrated sustainable production and consumption approach" }, { "paperId": "3ccfc42c124d569e0e582fc01598828aed21bfe5", "title": "A multi-usable cloud service platform : a case study on improved development pace and efficiency" }, { "paperId": "b4886bd7237ea38f2d3263f556960dd22aa8b1f3", "title": "Industry 4.0 and the circular economy: a proposed research agenda and original roadmap for sustainable operations" }, { "paperId": "30963fc718e1eb73fd904b5f0723618d34971561", "title": "Green marketing consumer-level theory review: A compendium of applied theories and further research directions" }, { "paperId": "240d2c94ec256bbcb3a187164e57c76df0d617d6", "title": "Environmental sustainability and production: taking the road less travelled" }, { "paperId": "b0058dba99b5f1a02d8ed29e2922b376d005f3b0", "title": "The Supply Chain Has No Clothes: Technology Adoption of Blockchain for Supply Chain Transparency" }, { "paperId": "cce36384f4707de082a1e2059c2560f40cbceda9", "title": "Is a 'smart contract' really a smart idea? Insights from a legal perspective" }, { "paperId": "d98a03b7c2f8995867bf8c7891975ecfef9cd80d", "title": "Towards an inclusive circular economy: Quantifying the spatial flows of e-waste through the informal sector in China" }, { "paperId": "647a16658edd7b45d19571e3f1d55530a196becb", "title": "Lost in Transition? Drivers and Barriers in the Eco-Innovation Road to the Circular Economy" }, { "paperId": "488ebe4db7190efe445c225aa67a10f70bc46d8d", "title": "Blockchain in government: Benefits and implications of distributed ledger technology for information sharing" }, { "paperId": "ca4c0ab7304ebbbb052887332d80dbe673ed4b7c", "title": "A Survey on the Security of Blockchain Systems" }, { "paperId": "57d6de06108aab568915c1590b2fc114947cd35c", "title": "A blockchain-based smart grid: towards sustainable local energy markets" }, { "paperId": "e8709e2906361ade9064cc605b9c7637bec474a0", "title": "Can Blockchain Strengthen the Internet of Things?" }, { "paperId": "a06b0cda1ef8bc6738a8e371543b3b430ce400c6", "title": "Product deletion and the supply chain: A greening perspective" }, { "paperId": "b7db0e51741f6fb63feb9ae81955432656881460", "title": "Evidencing the waste effect of Product-Service Systems (PSSs)" }, { "paperId": "9a9643601989088ace41382b3c1cc61e1b4d5633", "title": "Blockchain-Oriented Software Engineering: Challenges and New Directions" }, { "paperId": "eb69d11beb18b4c99a6391a5ff80ca980ba37b6d", "title": "The Circular Economy - A New Sustainability Paradigm?" 
}, { "paperId": "8b1624f053c1cd9ac2e09d6181a301333990d128", "title": "Big data and predictive analytics for supply chain sustainability: A theory-driven research agenda" }, { "paperId": "3222d1e74b171cfd84516e4652c0efafb804c95c", "title": "Towards blockchain-based intelligent transportation systems" }, { "paperId": "ac013d1d21a659da4873164c43d005416e1bce7a", "title": "Internet of Things, Blockchain and Shared Economy Applications" }, { "paperId": "64b7c30d25df99d79e649e2317c9d5b84ed6fc78", "title": "Coping with the wicked problem of climate adaptation across scales: The Five R Governance Capabilities" }, { "paperId": "0ed376014aefccae83e3df7b5656b494eea07630", "title": "Sustained competitive advantage through green supply chain management practices: a natural-resource-based view approach" }, { "paperId": "24cdeb7d7421012c2fdd362b8e2816c105b7071f", "title": "An agri-food supply chain traceability system for China based on RFID & blockchain technology" }, { "paperId": "c998aeb12b78122ec4143b608b517aef0aa2c821", "title": "Blockchains and Smart Contracts for the Internet of Things" }, { "paperId": "cb348091c12b2dc33db6c7e7b2b7722d5e6de08b", "title": "A review on circular economy: the expected transition to a balanced interplay of environmental and economic systems" }, { "paperId": "d55231763b0fbb0c6b32b5ed17fc9ad9c26e1fde", "title": "The rise and fall of a “waste city” in the construction of an “urban circular economic system”: The changing landscape of waste in Beijing" }, { "paperId": "5e16a6391360259946568412ad15f1515e73ba89", "title": "A decade of supply chain collaboration and directions for future research" }, { "paperId": "4a16d0a9453d89ce6621a92ba231e513d5edc66d", "title": "Assessing the impact of intrinsic and extrinsic motivators on smart green IT device use: Reference group perspectives" }, { "paperId": "97fddbbfd681bce9eeb8e0a013353b4d5b2ba0db", "title": "Blockchain: Blueprint for a New Economy" }, { "paperId": "aa067010a39f41596365d6a538018f7b0776c946", "title": "Fostering more sustainable food choices: Can Self-Determination Theory help?" 
}, { "paperId": "9c1b9598f82f9ed7d75ef1a9e627496759aa2387", "title": "Data Science, Predictive Analytics, and Big Data: A Revolution that Will Transform Supply Chain Design and Management" }, { "paperId": "b8fe495d39674c5a1b6c6bde225747dcdeba2f13", "title": "A review of the circular economy in China : Moving from rhetoric to implementation" }, { "paperId": "811be2b65fd6490da6f3f239b71cdb7d569ccdf6", "title": "Tracking the evolution of theory on product elimination: Past, present, and future" }, { "paperId": "3b16f2b54e230ee3850b53825438e403571387c5", "title": "A boundaries and flows perspective of green supply chain management" }, { "paperId": "246fc9d939ddbcf0a8a5b7d0be4fa5e8769afdb1", "title": "The New Rules of Green Marketing: Strategies, Tools, and Inspiration for Sustainable Branding" }, { "paperId": "c148afee97beab35e288950f2a15b9adc9737e44", "title": "Creating integrated business and environmental value within the context of China’s circular economy and ecological modernization" }, { "paperId": "ef0614e91f422b4b03b349263fa67759a07d45f2", "title": "Green supplier development: analytical evaluation using rough set theory" }, { "paperId": "601a636b2b3d583a93b9723de6c4f4530cfd457d", "title": "The verge" }, { "paperId": "71cfc40d5fd87e63629c3632d7d8933e80092812", "title": "Information Technology and Systems in China's Circular Economy: Implications for Sustainability" }, { "paperId": "4b00918f7f8bb890bb9df87ca967048ea96e34e7", "title": "How May Consumer Policy Empower Consumers for Sustainable Lifestyles?" }, { "paperId": "ca71c573ab3d8e77b8ce8bd5eb211f5eaf1234aa", "title": "Going Backwards: Reverse Logistics Trends and Practices" }, { "paperId": "5a5d2ca82798e63eb545723b9dd646ec5f965384", "title": "A Natural-Resource-Based View of the Firm" }, { "paperId": "885dad2d774a306e181d9ae3f325cf0776f53167", "title": "Asset stock accumulation and sustainability of competitive advantage" }, { "paperId": "c1d73d632594c0070ef78c891535345b01011eba", "title": "Product Deletion and the Effects of Strategy" }, { "paperId": "22dd0000c241700dc2fe76fa94ad08350ac1dbec", "title": "Product line deletion and simplification: Tough but necessary decisions" }, { "paperId": "ff383ef68eb7895fd350938f1d45a1448e5aa9f8", "title": "Circular economy as an essentially contested concept" }, { "paperId": "a8fb7cf12cbab1dce0a8d4a34a6f77efec243a8b", "title": "How blockchain improves the supply chain: case study alimentary supply chain" }, { "paperId": "2bdb5047ec8c07b390a9c50104647e37e8fc178a", "title": "Circular Economy: The Concept and its Limitations" }, { "paperId": null, "title": "Global Diamond & Jewelry Market Tracks Authenticity with , IBM Blockchain : IBM" }, { "paperId": "5548f585d06c76ebcbadd30dd5f3bed153685290", "title": "Product biographies in servitization and the circular economy" }, { "paperId": null, "title": "The Business Blockchain: Promise, Practice, and Application of the Next Internet Technology" }, { "paperId": null, "title": "Sustainability, well-being, and the circular economy in China and worldwide" }, { "paperId": "3a5229852e440ef8384d177c6d1525d015f07d1b", "title": "Convincing Industry that There is Value in Environmentally Supply Chains" }, { "paperId": null, "title": "Composites and Polymers Co. Harvard Business School Technology & Operations Mgt. 
Unit" }, { "paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": "358f092645d60e74a0d917c147a33076037cf23e", "title": "Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being." }, { "paperId": "9cf4bde7ad055af59fe9417b5bbe8633ebd1a67e", "title": "The causes of product deletion in British manufacturing companies" }, { "paperId": null, "title": "Harvard Business School Technology & Operations Mgt. Unit. HBS Case No. 608-055" } ]
19,731
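The lines above close out one record of this dump (open-access PDF metadata, year, publication types, open-access flag, publication date, linked references, and a token count), and the lines below open the next record. As a minimal, illustrative sketch of the record layout only, the Python below shows one way such a record could be modeled and loaded. The field names are inferred from the values visible in this dump, and the `PaperRecord` class and `load_record` helper are hypothetical assumptions, not a documented schema or API.

```python
import json
from dataclasses import dataclass, field, fields
from typing import Optional

@dataclass
class PaperRecord:
    # Field names are inferred from the values visible in this dump;
    # they are illustrative assumptions, not a documented schema.
    paperId: str
    title: str
    venue: str
    text: str
    year: Optional[int] = None
    abstract: Optional[str] = None
    lang: str = "en"
    isOpenAccess: bool = False
    total_tokens: int = 0
    authors: list = field(default_factory=list)      # e.g. {"authorId": ..., "name": ...}
    references: list = field(default_factory=list)   # e.g. {"paperId": ..., "title": ...}

def load_record(json_line: str) -> PaperRecord:
    """Parse one JSON-lines record, keeping only the fields modeled above."""
    raw = json.loads(json_line)
    known = {f.name for f in fields(PaperRecord)}
    return PaperRecord(**{k: v for k, v in raw.items() if k in known})
```

For example, if a full record were serialized as one JSON line, `load_record(line).title` would recover the article title and `load_record(line).total_tokens` the token count shown above.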
en
[ { "category": "Medicine", "source": "external" }, { "category": "Biology", "source": "external" }, { "category": "Medicine", "source": "s2-fos-model" }, { "category": "Biology", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01c447906a81b623ec70b7a6834f5a28586000d4
[ "Medicine", "Biology" ]
0.817865
Maternal-Fetal Transmission of Zika Virus: Routes and Signals for Infection.
01c447906a81b623ec70b7a6834f5a28586000d4
Journal of Interferon and Cytokine Research
[ { "authorId": "48437904", "name": "B. Cao" }, { "authorId": "144850886", "name": "M. Diamond" }, { "authorId": "6180820", "name": "I. Mysorekar" } ]
{ "alternate_issns": null, "alternate_names": [ "J Interferon Cytokine Res" ], "alternate_urls": null, "id": "01ac9a19-88d1-4d0f-b248-4f461fa975ad", "issn": "1079-9907", "name": "Journal of Interferon and Cytokine Research", "type": "journal", "url": "https://www.liebertpub.com/loi/jir" }
null
Volume 37, Number 7, 2017 © Mary Ann Liebert, Inc. DOI: 10.1089/jir.2017.0011

## Maternal-Fetal Transmission of Zika Virus: Routes and Signals for Infection

Bin Cao,[1] Michael S. Diamond,[2–4] and Indira U. Mysorekar[1,3]

Departments of [1]Obstetrics and Gynecology, [2]Medicine, [3]Pathology and Immunology, and [4]Molecular Microbiology, Washington University School of Medicine, St. Louis, Missouri.

#### The emerging mosquito-borne virus, Zika virus (ZIKV), has been causally associated with adverse pregnancy and neonatal outcomes, including miscarriage, microcephaly, serious brain abnormalities, and other birth defects indicative of a congenital ZIKV syndrome. In this review, we highlight work from human and animal studies on routes of infection in pregnancy that lead to adverse fetal and neonatal outcomes. A number of innate and adaptive immune mechanisms and signaling molecules that may have key roles in ZIKV infection pathogenesis are discussed along with putative viral entry pathways. A more granular understanding of pathogenesis of ZIKV infection during pregnancy is critical for developing therapeutics and vaccines and mounting a global public health response to limit ZIKV infections. We also report on new therapeutic interventions that have shown success in preclinical studies.

Keywords: trophoblast, placenta, Hofbauer, interferon

Zika Virus and Human Pregnancy Outcomes

Zika virus (ZIKV) is a flavivirus in the Flaviviridae family, which includes other globally important pathogens including dengue, West Nile, yellow fever, and Japanese encephalitis viruses. ZIKV is transmitted predominantly by Aedes aegypti mosquitoes that are common in tropical areas and also by Aedes albopictus, which is prevalent in the upper continental United States and more temperate climates. Following its initial identification in the Zika forest in Uganda in 1947, sporadic outbreaks occurred in parts of Africa and Asia, and large epidemics followed in Micronesia and French Polynesia in 2007 and 2013. Since 2015, ZIKV has emerged as a major cause of adverse fetal outcomes during pregnancy (Petersen and others 2016; Rasmussen and others 2016; van der Eijk and others 2016) and is linked epidemiologically to the Guillain–Barré syndrome in infected adults (Cao-Lormeau and others 2016). ZIKV infection can have devastating effects throughout pregnancy, with damage to the fetal brain possible even if the infection occurs in the later stages of pregnancy (Brasil and others 2016). The impact of the ZIKV epidemic on human health reflects the devastating fetal and neonatal outcomes, and the possible long-term neurodevelopmental consequences of in utero infection, even in those with no overt signs at the time of birth.

Routes and Cellular Sites of ZIKV Infection During Human Pregnancy

In humans, in addition to mosquito transmission, ZIKV can be spread through sexual contact (male to female, female to male, or male to male) (D'Ortenzio and others 2016; Turmel and others 2016). ZIKV RNA has been found in semen for over 6 months following the initial diagnosis of infection (Nicastri and others 2016) and in female vaginal secretions (Davidson and others 2016; Prisant and others 2016). Transmission via blood is also possible (Driggers and others 2016). The route of transmission that has received the most attention in humans is vertical transmission during pregnancy.
Infection at any time point during pregnancy has been associated with adverse fetal outcomes; however, first-trimester infection appears to pose the highest risk for fetal injury (Cauchemez and others 2016; Honein and others 2017), when transmission across the developing placenta and into the amniotic or yolk sacs may occur (Boeuf and others 2016). In the past 12 months, several mouse and human studies have yielded insights into the pathogenesis of this viral infection during pregnancy (Fig. 1A).

FIG. 1. ZIKV infection pathogenesis in pregnancy. (A) Summary of routes of ZIKV transmission in a pregnant woman and the cells and signals implicated in transmission. (B) Structure of the human placenta and sites of ZIKV infection. In the human placenta, there exist fetal-derived chorionic villi, which are tree-like projections lined with 2 layers of trophoblasts and bathed in maternal blood. Villous trophoblasts comprise 2 layers: the STBs and CTBs. The CTBs are highly proliferative and form a monolayer of polarized cells that eventually differentiate via cell–cell fusion into STBs. STBs form a surface covered by a dense network of branched microvilli that are bathed in maternal blood and mediate nutrient and gas exchange between mother and fetus. Fetal-derived macrophages, known as Hofbauer cells, are found in the intervillous spaces. A subset of trophoblasts, termed EVTs, migrates from the chorionic villi, invades into the uterine wall, and remodels maternal spiral arteries to facilitate blood supply of the placental unit. In addition to the EVTs, the decidual compartment also includes maternal immune cells (eg, decidual macrophages, decidual natural killer cells) and stromal cells. CTBs, cytotrophoblasts; EVTs, extravillous cytotrophoblasts; STBs, syncytiotrophoblasts; ZIKV, Zika virus.

Recent studies investigating how ZIKV reaches the intrauterine space and infects the fetus have found broad cell tropism of ZIKV in the human placenta, including infection of placental trophoblasts, endothelial cells, fibroblasts, and fetal macrophages known as Hofbauer cells in the intervillous space (El Costa and others 2016; Jurado and others 2016; Miner and others 2016a; Quicke and others 2016; Tabata and others 2016; Aagaard and others 2017). The placental syncytium comprises undifferentiated cytotrophoblasts (CTBs), which can fuse to form syncytiotrophoblasts (STBs) or migrate as extravillous cytotrophoblasts to invade into the uterine wall and remodel maternal spiral arteries to facilitate blood supply of the placental unit (Red-Horse and others 2004) (Fig. 1B). STBs are refractory to ZIKV infection in primary villous explants (Tabata and others 2016) and primary cultured STBs (Bayer and others 2016). This is consistent with previous studies showing that STBs are resistant to pathogenic infection by parasites (Toxoplasma gondii) and bacteria (Listeria monocytogenes and Escherichia coli) (Robbins and others 2012; Zeldovich and others 2013; Cao and Mysorekar 2014). However, these findings do not exclude the possibility that cellular damage of the placental syncytium caused by even limited ZIKV replication in STBs could facilitate ZIKV entry into CTBs and further into the intravillous space to infect Hofbauer cells.
A number of studies have demonstrated that primary and cultured CTBs (Miner and others 2016a; Quicke and others 2016; Aagaard and others 2017) and Hofbauer cells (Jurado and others 2016; Noronha and others 2016) support ZIKV replication. A recent study evaluated placentas from a pregnancy complicated by ZIKV infection and demonstrated that infection appeared to induce proliferation of Hofbauer macrophages (Rosenberg and others 2017). In support of this, another human study found ZIKV RNA localized in placental chorionic villi in more than three-quarters of women who were positive for ZIKV RNA during their pregnancies and/or had adverse pregnancy outcomes. ZIKV RNA was predominantly localized to the Hofbauer cells (Bhatnagar and others 2017) (Fig. 1B). Although the importance of ZIKV infection in Hofbauer cells is still unclear, it has been speculated that their infection may promote vertical transmission of ZIKV and pathogenesis of congenital ZIKV symptoms (Simoni and others 2017). It is evident that ZIKV needs to cross a number of cellular protective layers, including those formed by trophoblasts and Hofbauer cells, to reach the fetal compartment.

Generating Mouse Models of ZIKV Infection Pathogenesis During Pregnancy

As the Zika viral epidemic started to emerge, it became clear that animal models were needed to better understand the mechanisms of vertical transmission and disease. However, ZIKV did not cause consistent infection in healthy wild-type mice. ZIKV, analogous to other flaviviruses, must overcome type I interferon (IFN) signaling to multiply and cause infection in vertebrates. Activation of IFN signaling via IFN receptors (IFNAR1 and IFNAR2) and subsequent activation of the Jak/Stat pathway (Jak1, Tyk2, and STAT1/STAT2) leads to production of IFN-stimulated genes (MacMicking 2012) that restrict infection and modulate cellular and adaptive immunity. Flaviviruses, which require prolonged viremia (viral loads in blood) to maintain their vector–host cycles, efficiently antagonize IFN signaling in humans, as some of their nonstructural genes (eg, NS3 and NS5) act as viral IFN antagonists through binding, degradation, and proteasomal targeting of host defense proteins (Versteeg and Garcia-Sastre 2010). In contrast to its effect on the human STAT2 ortholog, ZIKV does not promote degradation of murine STAT2 and is thus unable to establish sustained infection and viremia in mice (Grant and others 2016; Kumar and others 2016). Given these findings, several groups have used mice with deficiencies in IFN signaling to model ZIKV pathogenesis in mice (Cugola and others 2016; Lazear and others 2016; Miner and others 2016b; Tang and others 2016), including during pregnancy (Miner and others 2016a). A contemporary strain of ZIKV from French Polynesia was inoculated subcutaneously in IFNAR-deficient mice to permit a sufficiently high level of viremia in the pregnant dam to infect the placenta. An early time point in pregnancy (embryonic day 6.5) was selected to model the first trimester in human pregnancy. ZIKV infection of pregnant dams led to severe placental and fetal injury, including damage to fetal blood vessels, which in turn led to fetal demise. ZIKV infected trophoblasts and fetal endothelial cells that line fetal capillaries [reviewed in Mysorekar and Diamond (2016)], suggesting a transplacental route of transmission for ZIKV. These observed phenotypes were akin to those noted in pregnant women infected with ZIKV (Parameswaran and others 2010; Brasil and others 2016; Sarno and others 2016).
Particularly noteworthy was that the placental tissue contained ~1,000-fold higher concentrations of ZIKV RNA than were found in maternal serum, suggesting that ZIKV preferentially replicates in cells of the placenta. Thus, maternal ZIKV infection compromises the placental barrier by infecting fetal trophoblasts, and the virus thereby enters the fetal circulation and impairs development. A second model of ZIKV infection was also developed that used a monoclonal antibody against IFNAR1 (MAR1-5A3), which transiently blocked IFNAR signaling in wild-type mice (Sheehan and others 2006). Treatment with the anti-IFNAR1 antibody a day before ZIKV infection was sufficient to permit the virus to infect the pregnant dams and result in fetal brain injury and adverse fetal outcomes.

Two additional studies using mouse models also addressed the causal relationship between maternal ZIKV infection in pregnancy and fetal outcomes. Cugola and others (2016) inoculated pregnant SJL dams intravenously with a high dose of a Brazilian strain of ZIKV and demonstrated fetal growth restriction and severe fetal brain injury to cortical neurons in the cerebral cortex; ocular abnormalities, as also noted in human neonates, were observed. A third study by Wu and others (2016) injected a contemporary Asian ZIKV strain intraperitoneally into pregnant immunocompetent dams at embryonic day 13.5, which elicited a transient viremia and placental seeding leading to infection of cortical neural progenitors of fetal mice. Together, these studies established that ZIKV infection in pregnancy led directly to fetal brain injury via a transplacental route.

More recent studies have demonstrated that a vaginal transmission route, as well as direct intrauterine inoculation, can lead to fetal infection (Yockey and others 2016). Vaginal infection of Ifnar1-/- dams with an Asian strain of ZIKV at an early pregnancy stage led to embryo reabsorption, intrauterine growth restriction, and infection in fetal brains, suggesting that ZIKV infection in the lower female reproductive tract may take a transvaginal ascending route to access the fetus during pregnancy and infect via a placental or paraplacental route (Yockey and others 2016). Most recently, Vermillion and others have established a model of intrauterine infection with ZIKV in wild-type mice. This model has the advantage of using immunocompetent and outbred mice and bypassing the need for ZIKV infection of the periphery. Using intrauterine inoculation with African, contemporary Asian, and Brazilian strains of ZIKV directly into the uterine artery of a given fetoplacental unit, they found ZIKV viral RNA localized to the infected uterine horns, placentas, and fetuses (Vermillion and others 2017). Wild-type pregnant mice infected with these strains by the intraperitoneal route did not exhibit these phenotypes. Together, these studies suggest that, similar to ascending intrauterine bacterial infections, if ZIKV reaches the intrauterine compartment via a sexual transmission route, a vaginal route, or a direct intrauterine route, it poses a risk for vertical transmission. Whether a transgenital route is implicated in human vertical transmission of ZIKV remains to be investigated.

Signals Implicated in Vertical ZIKV Transmission in Mouse and Human Pregnancy

Type I/III IFN signaling

As mentioned above, wild-type mice with intact type I IFN signaling do not get infected with ZIKV in the periphery (Lazear and others 2016).
However, animal models with compromised type I IFN signaling, including Ifnar1-deficient females crossed to WT males and pregnant WT females treated with an IFNAR-blocking antibody, are susceptible to ZIKV infection, which leads to fetal demise (Miner and others 2016a). Intrauterine delivery of ZIKV in pregnancy in immunocompetent mice also upregulates type I IFN signaling and IFN-stimulated gene expression (Vermillion and others 2017). These data strongly support an antiviral role of type I IFN in vertical transmission of ZIKV during pregnancy in mice. Similarly, type I IFN signaling has been shown to be critical for controlling infection and prolonged persistence of ZIKV in the female genital tract, as female Ifnar1-/- mice exhibit high-titer ZIKV replication in the vagina (Yockey and others 2016). A recent study also showed dampened induction of type I IFN and various IFN-stimulated genes upon ZIKV infection in the vagina of wild-type mice relative to what is elicited upon systemic administration of ZIKV (Khan and others 2016). This could explain why ZIKV may take a transgenital infection route.

Type III IFN (IFN-λ) signaling has also been identified as a possible regulator of ZIKV infection (Bayer and others 2016). Primary cultured human trophoblast cells (STBs) isolated from full-term placentas were resistant to ZIKV infection due to production of type III IFNs, especially IFN-λ1, which may protect the trophoblasts from ZIKV infection in an autocrine or paracrine manner. Type I and III IFNs have been shown to be induced in response to ZIKV infection in decidual explant cultures, in which ZIKV infection induced transcription of IFN-α/β and IFN-λ (Weisblum and others 2017). However, another study reported that type III IFN signals were not induced in CTBs infected with ZIKV (Quicke and others 2016). It remains to be determined whether the differences noted represent different antiviral responses in CTBs and STBs or the experimental conditions of the studies. Antiviral functions of IFN-λ have been shown in viral infections in a number of tissues, including the liver, skin, and respiratory, gastrointestinal, and urogenital tracts (Lazear and others 2015a, 2015b). There is some evidence suggesting a role for IFN-λ in maternal-fetal transmission of placental pathogens. For example, infection of pregnant mice with L. monocytogenes, a vertically transmitted bacterium that causes maternal-fetal listeriosis, induces transcription of IFN-λ2/3 and IFN-responsive genes (IFIT1 and Mx1/2) in their placentas. Furthermore, IFN-λ2 treatment induces a robust increase of Mx1 expression in the mouse maternal–fetal unit, including the maternal decidua, placental labyrinth, and fetal membranes (Bierne and others 2012). These studies indicate that the maternal–fetal unit responds to IFN-λ and suggest a protective function in the placenta against congenital bacterial infections. Further work is needed to provide a complete picture of possible antiviral functions for type III IFN signaling in pregnancy in vivo.

Adaptive immune signals

Deletion of recombination activating gene-2 (Rag2-/-) in mice, which prevents development of mature T and B cells, did not affect ZIKV infectivity in the female genital tract, suggesting that adaptive immune responses are not required to control early ZIKV replication (Yockey and others 2016). However, a recent study has demonstrated a protective function of CD8+ T cells in ZIKV pathogenesis (Elong Ngono and others 2017).
ZIKV infection induced CD8+ T cell expansion and activation in mice with compromised type I IFN signaling. CD8+ T cell-deficient (CD8-/-) nonpregnant C57BL/6 mice were more susceptible to ZIKV infection, and adoptive transfer of ZIKV-immune CD8+ T cells significantly decreased the ZIKV burden. Whether the protective function of CD8+ T cells holds true in pregnancy remains to be investigated, especially considering that pregnancy is a naturally immunocompromised state (PrabhuDas and others 2015).

Hormonal signals

The mammalian endocrine system can modulate susceptibility to microbial infections in females. For example, increased levels of the hormone progesterone, which occur during stages of the menstrual cycle or pregnancy, can affect susceptibility to viral infections (eg, HIV). Several studies showed that estradiol upregulates type I IFN production via the canonical estrogen receptor-mediated signaling pathway. This regulatory effect of estrogen on IFNs may explain gender differences in HIV pathogenesis and protective roles of estrogen in in vitro HIV infection. Recently, Tang and others demonstrated that mice deficient in type I or type II IFN signaling systemically (AG129) or in type I IFN signaling in myeloid cells support transgenital transmission when challenged with ZIKV in the diestrus, but not the estrus, phase (Tang and others 2016). This suggests that transgenital transmission of ZIKV may be under hormonal regulation. However, whether the different susceptibilities to ZIKV infection at different estrus cycle stages operate through estrogen-dependent regulation and involve other IFNs is still unclear. Moreover, whether the hormonal changes that occur during pregnancy play a role in ZIKV susceptibility remains to be elucidated.

Putative receptors for ZIKV entry into placental cells

The TAM receptors (Tyro3, Axl, and Mertk) are a family of receptor tyrosine kinases, activated by the soluble ligands Gas6 and Protein S, which recognize phosphatidylserine on the surface of apoptotic cells and enveloped viruses (Meertens and others 2012). TAMs can be exploited by flaviviruses, such as West Nile virus and dengue virus, to infect target cells (Meertens and others 2012). TAM receptors activated by viruses can dampen the innate immune response, for example through inhibition of type I IFN signaling (Bhattacharyya and others 2013). In particular, the TAM receptor Axl has been suggested as a key attachment factor for ZIKV in different models (Hamel and others 2015; Ma and others 2016; Nowakowski and others 2016; Savidis and others 2016). ZIKV infection has been shown to promote Axl kinase activity to enhance infection in glia (Meertens and others 2017). AXL binding, but not intracellular kinase activity, appears required for ZIKV infection in glial cells (Retallack and others 2016). Similarly, AXL was shown to mediate ZIKV entry via clathrin-mediated endocytosis in glial cells, which requires Gas6 as a bridge to link ZIKV to glial cells (Meertens and others 2017). Furthermore, blocking AXL activation in endothelial cells by targeting the extracellular domain of the protein has been shown to inhibit ZIKV entry, and viral entry has been shown to require AXL catalytic activity (Liu and others 2016). However, other studies performed in vitro and in vivo do not support the hypothesis that AXL is required for viral entry of ZIKV. For example, in human neural progenitor cells and cerebral organoids, genetic deletion of AXL did not affect ZIKV entry nor limit the cell death caused by ZIKV infection (Wells and others
2016). In addition, mice deficient in Axl, Mertk, or both did not differ from wild-type mice in terms of ZIKV replication or pathogenesis in the brain, eye, or testis, suggesting that Axl and Mertk are not required for infection of these organs in adult mice (Govero and others 2016; Miner and others 2016b). Moreover, ZIKV infection increased Axl kinase activity by promoting Axl phosphorylation and further suppression of innate immunity to enhance infection in glia (Meertens and others 2017). It is important to note that the majority of mouse models for ZIKV studies were developed by compromising the type I IFN signaling pathway. Inhibition of type I IFN pathways by ZIKV infection through AXL may not be seen in these mouse models. Thus, it is difficult to interpret the effects of TAM receptors on the innate immune response in vivo in an innate immunity-deficient background. Most recently, Vermillion and others (2017) have shown that Axl expression was increased upon intrauterine ZIKV infection in placentas from an immunocompetent outbred mouse strain. Tabata and others (2016) demonstrated that inhibiting AXL in primary human trophoblasts led to only a modest reduction of ZIKV infection. However, the human trophoblast cell line Jeg-3 has low levels of AXL expression but is highly permissive to ZIKV infection, suggesting that additional viral entry mechanisms must exist (Rausch and others 2017). The function of AXL in the context of viral entry, replication, or pathogenesis may vary substantially depending on tissue compartment, cell type, and experimental model. Together, these data indicate that AXL likely is not the only or dominant entry factor required for ZIKV infection in trophoblasts.

TIM1, a member of the T cell immunoglobulin and mucin domain protein family, has been suggested as an important factor in maternal-fetal transmission of ZIKV. TIM1, like the TAM receptors, is widely expressed in different cells at the maternal–fetal interface (Tabata and others 2016). Interestingly, a TIM1 inhibitor, duramycin, reduces ZIKV infection more significantly than an AXL inhibitor does, perhaps indicating a more important role for TIM1 in congenital ZIKV infection (Tabata and others 2016). Thus far, there has been no experimental evidence supporting in vivo roles for TAMs or TIM1 in vertical transmission of ZIKV. Cell-type-specific modulation of TAMs and/or TIM1 in trophoblasts or other cell types at the maternal–fetal interface may be a more reasonable interpretation, considering the complicated cell-type-specific roles of TAM receptors.

Development of Therapeutic Interventions to Block ZIKV Vertical Transmission

Given the lack of effective and safe vaccines against ZIKV, the introduction of immediate interventions to attenuate and stop maternal-fetal transmission of ZIKV has become an urgent challenge (Pierson and Graham 2016). Systemic administration of convalescent serum from a patient with prior ZIKV infection into the peritoneal cavity of pregnant ICR mice infected with ZIKV successfully protected the fetus from microcephaly and other neurological damage (Wang and others 2017). However, the uncertainties and limitations of convalescent plasma weaken its feasibility as a large-scale therapeutic with certain and proven safety. Remarkably rapid progress has been reported in identifying neutralizing monoclonal antibodies against ZIKV from humans (Sapparapu and others 2016; Stettler and others 2016; Wang and others 2016) and mice (Zhao and others 2016) with the capacity to block ZIKV transmission.
In vivo passive transfer of these antibodies protects adult mice from ZIKV infection, providing avenues for use as prophylaxis or treatment against ZIKV infections. However, prevention and mitigation of congenital ZIKV infections require that any ZIKV therapeutic developed be amenable to administration to pregnant women. Thus, the efficacy and safety of these treatments against ZIKV infection should be tested in pregnant animal models at the preclinical stage. To this end, a neutralizing mAb, ZIKV-117, worked as both prophylaxis and therapy in ZIKV-infected pregnant mice, as evidenced by reductions in ZIKV titers in maternal organs and the feto-placental units. ZIKV-117 treatment improved or completely rescued pregnancy complications caused by maternal-fetal transmission of ZIKV in mice, including placental insufficiency, fetal growth restriction, and fetal demise (Sapparapu and others 2016).

Summary

Since the appreciation of the congenital ZIKV syndrome in 2015, an unprecedented level of collaborative, global, rapid progress has been made to understand the routes of ZIKV maternal-fetal transmission and to develop new therapeutic interventions. Ongoing and future investigations into the impact of ZIKV infection at different stages of pregnancy and the identification of ZIKV entry mechanisms into the placenta will undoubtedly yield further insights into its unique pathogenesis.

Acknowledgments

This work was supported by a Preventing Prematurity Initiative grant from the Burroughs Wellcome Fund and a Prematurity Research Initiative Investigator award from the March of Dimes (to I.U.M.), NIH/NICHD grant R01HD091218 (to I.U.M. and M.S.D.), and R01 AI073755, R01 AI104972, and P01 AI106695 (to M.S.D.).

Author Disclosure Statement

No competing financial interests exist.

References

Aagaard KM, Lahon A, Suter MA, Arya RP, Seferovic MD, Vogt MB, Hu M, Stossi F, Mancini MA, Harris RA, Kahr M, Eppes C, Rac M, Belfort MA, Park CS, Lacorazza D, Rico-Hesse R. 2017. Primary human placental trophoblasts are permissive for Zika virus (ZIKV) replication. Sci Rep 7:41389.

Bayer A, Lennemann NJ, Ouyang Y, Bramley JC, Morosky S, Marques ET, Jr., Cherry S, Sadovsky Y, Coyne CB. 2016. Type III interferons produced by human placental trophoblasts confer protection against Zika virus infection. Cell Host Microbe 19(5):705–712.

Bhatnagar J, Rabeneck DB, Martines RB, Reagan-Steiner S, Ermias Y, Estetter LB, Suzuki T, Ritter J, Keating MK, Hale G, Gary J, Muehlenbachs A, Lambert A, Lanciotti R, Oduyebo T, Meaney-Delman D, Bolanos F, Saad EA, Shieh WJ, Zaki SR. 2017. Zika virus RNA replication and persistence in brain and placental tissue. Emerg Infect Dis 23(3):405–414.

Bhattacharyya S, Zagorska A, Lew ED, Shrestha B, Rothlin CV, Naughton J, Diamond MS, Lemke G, Young JA. 2013. Enveloped viruses disable innate immune responses in dendritic cells by direct activation of TAM receptors. Cell Host Microbe 14(2):136–147.

Bierne H, Travier L, Mahlakoiv T, Tailleux L, Subtil A, Lebreton A, Paliwal A, Gicquel B, Staeheli P, Lecuit M, Cossart P. 2012. Activation of type III interferon genes by pathogenic bacteria in infected epithelial cells and mouse placenta. PLoS One 7(6):e39080.

Boeuf P, Drummer HE, Richards JS, Scoullar MJ, Beeson JG. 2016. The global threat of Zika virus to pregnancy: epidemiology, clinical perspectives, mechanisms, and impact. BMC Med 14(1):112.
Brasil P, Pereira JP, Jr., Moreira ME, Ribeiro Nogueira RM, Damasceno L, Wakimoto M, Rabello RS, Valderramos SG, Halai UA, Salles TS, Zin AA, Horovitz D, Daltro P, Boechat M, Raja Gabaglia C, Carvalho de Sequeira P, Pilotto JH, Medialdea-Carrera R, Cotrim da Cunha D, Abreu de Carvalho LM, Pone M, Machado Siqueira A, Calvet GA, Rodrigues Baiao AE, Neves ES, Nassar de Carvalho PR, Hasue RH, Marschik PB, Einspieler C, Janzen C, Cherry JD, Bispo de Filippis AM, Nielsen-Saines K. 2016. Zika virus infection in pregnant women in Rio de Janeiro. N Engl J Med 375(24):2321–2334.

Cao B, Mysorekar IU. 2014. Intracellular bacteria in placental basal plate localize to extravillous trophoblasts. Placenta 35(2):139–142.

Cao-Lormeau VM, Blake A, Mons S, Lastere S, Roche C, Vanhomwegen J, Dub T, Baudouin L, Teissier A, Larre P, Vial AL, Decam C, Choumet V, Halstead SK, Willison HJ, Musset L, Manuguerra JC, Despres P, Fournier E, Mallet HP, Musso D, Fontanet A, Neil J, Ghawche F. 2016. Guillain-Barré syndrome outbreak associated with Zika virus infection in French Polynesia: a case-control study. Lancet 387(10027):1531–1539.

Cauchemez S, Besnard M, Bompard P, Dub T, Guillemette-Artur P, Eyrolle-Guignot D, Salje H, Van Kerkhove MD, Abadie V, Garel C, Fontanet A, Mallet HP. 2016. Association between Zika virus and microcephaly in French Polynesia, 2013–2015: a retrospective study. Lancet 387(10033):2125–2132.

Cugola FR, Fernandes IR, Russo FB, Freitas BC, Dias JL, Guimaraes KP, Benazzato C, Almeida N, Pignatari GC, Romero S, Polonio CM, Cunha I, Freitas CL, Brandao WN, Rossato C, Andrade DG, Faria Dde P, Garcez AT, Buchpigel CA, Braconi CT, Mendes E, Sall AA, Zanotto PM, Peron JP, Muotri AR, Beltrao-Braga PC. 2016. The Brazilian Zika virus strain causes birth defects in experimental models. Nature 534(7606):267–271.

D'Ortenzio E, Matheron S, Yazdanpanah Y, de Lamballerie X, Hubert B, Piorkowski G, Maquart M, Descamps D, Damond F, Leparc-Goffart I. 2016. Evidence of sexual transmission of Zika virus. N Engl J Med 374(22):2195–2198.

Davidson A, Slavinski S, Komoto K, Rakeman J, Weiss D. 2016. Suspected female-to-male sexual transmission of Zika virus—New York City, 2016. MMWR Morb Mortal Wkly Rep 65(28):716–717.

Driggers RW, Ho CY, Korhonen EM, Kuivanen S, Jaaskelainen AJ, Smura T, Rosenberg A, Hill DA, DeBiasi RL, Vezina G, Timofeev J, Rodriguez FJ, Levanov L, Razak J, Iyengar P, Hennenfent A, Kennedy R, Lanciotti R, du Plessis A, Vapalahti O. 2016. Zika virus infection with prolonged maternal viremia and fetal brain abnormalities. N Engl J Med 374(22):2142–2151.

El Costa H, Gouilly J, Mansuy JM, Chen Q, Levy C, Cartron G, Veas F, Al-Daccak R, Izopet J, Jabrane-Ferrat N. 2016. ZIKA virus reveals broad tissue and cell tropism during the first trimester of pregnancy. Sci Rep 6:35296.

Elong Ngono A, Vizcarra EA, Tang WW, Sheets N, Joo Y, Kim K, Gorman MJ, Diamond MS, Shresta S. 2017. Mapping and role of the CD8+ T cell response during primary Zika virus infection in mice. Cell Host Microbe 21(1):35–46.

Govero J, Esakky P, Scheaffer SM, Fernandez E, Drury A, Platt DJ, Gorman MJ, Richner JM, Caine EA, Salazar V, Moley KH, Diamond MS. 2016. Zika virus infection damages the testes in mice. Nature 540(7633):438–442.

Grant A, Ponia SS, Tripathi S, Balasubramaniam V, Miorin L, Sourisseau M, Schwarz MC, Sanchez-Seco MP, Evans MJ, Best SM, Garcia-Sastre A. 2016. Zika virus targets human STAT2 to inhibit type I interferon signaling. Cell Host Microbe 19(6):882–890.
Hamel R, Dejarnac O, Wichit S, Ekchariyawat P, Neyret A, Luplertlop N, Perera-Lecoin M, Surasombatpattana P, Talignani L, Thomas F, Cao-Lormeau VM, Choumet V, Briant L, Despres P, Amara A, Yssel H, Misse D. 2015. Biology of Zika virus infection in human skin cells. J Virol 89(17):8880–8896.

Honein MA, Dawson AL, Petersen EE, Jones AM, Lee EH, Yazdy MM, Ahmad N, Macdonald J, Evert N, Bingham A, Ellington SR, Shapiro-Mendoza CK, Oduyebo T, Fine AD, Brown CM, Sommer JN, Gupta J, Cavicchia P, Slavinski S, White JL, Owen SM, Petersen LR, Boyle C, Meaney-Delman D, Jamieson DJ; US Zika Pregnancy Registry Collaboration. 2017. Birth defects among fetuses and infants of US women with evidence of possible Zika virus infection during pregnancy. JAMA 317(1):59–68.

Jurado KA, Simoni MK, Tang Z, Uraki R, Hwang J, Householder S, Wu M, Lindenbach BD, Abrahams VM, Guller S, Fikrig E. 2016. Zika virus productively infects primary human placenta-specific macrophages. JCI Insight 1(13):pii e88461.

Khan S, Woodruff EM, Trapecar M, Fontaine KA, Ezaki A, Borbet TC, Ott M, Sanjabi S. 2016. Dampened antiviral immunity to intravaginal exposure to RNA viral pathogens allows enhanced viral replication. J Exp Med 213(13):2913–2929.

Kumar A, Hou S, Airo AM, Limonta D, Mancinelli V, Branton W, Power C, Hobman TC. 2016. Zika virus inhibits type-I interferon production and downstream signaling. EMBO Rep 17(12):1766–1775.

Lazear HM, Daniels BP, Pinto AK, Huang AC, Vick SC, Doyle SE, Gale M, Jr., Klein RS, Diamond MS. 2015a. Interferon-lambda restricts West Nile virus neuroinvasion by tightening the blood-brain barrier. Sci Transl Med 7(284):284ra59.

Lazear HM, Govero J, Smith AM, Platt DJ, Fernandez E, Miner JJ, Diamond MS. 2016. A mouse model of Zika virus pathogenesis. Cell Host Microbe 19(5):720–730.

Lazear HM, Nice TJ, Diamond MS. 2015b. Interferon-lambda: Immune functions at barrier surfaces and beyond. Immunity 43(1):15–28.

Liu S, DeLalio LJ, Isakson BE, Wang TT. 2016. AXL-mediated productive infection of human endothelial cells by Zika virus. Circ Res 119(11):1183–1189.

Ma W, Li S, Ma S, Jia L, Zhang F, Zhang Y, Zhang J, Wong G, Zhang S, Lu X, Liu M, Yan J, Li W, Qin C, Han D, Qin C, Wang N, Li X, Gao GF. 2016. Zika virus causes testis damage and leads to male infertility in mice. Cell 167(6):1511–1524.e10.

MacMicking JD. 2012. Interferon-inducible effector mechanisms in cell-autonomous immunity. Nat Rev Immunol 12(5):367–382.

Meertens L, Carnec X, Lecoin MP, Ramdasi R, Guivel-Benhassine F, Lew E, Lemke G, Schwartz O, Amara A. 2012. The TIM and TAM families of phosphatidylserine receptors mediate dengue virus entry. Cell Host Microbe 12(4):544–557.

Meertens L, Labeau A, Dejarnac O, Cipriani S, Sinigaglia L, Bonnet-Madin L, Le Charpentier T, Hafirassou ML, Zamborlini A, Cao-Lormeau VM, Coulpier M, Misse D, Jouvenet N, Tabibiazar R, Gressens P, Schwartz O, Amara A. 2017. Axl mediates ZIKA virus entry in human glial cells and modulates innate immune responses. Cell Rep 18(2):324–333.

Miner JJ, Cao B, Govero J, Smith AM, Fernandez E, Cabrera OH, Garber C, Noll M, Klein RS, Noguchi KK, Mysorekar IU, Diamond MS. 2016a. Zika virus infection during pregnancy in mice causes placental damage and fetal demise. Cell 165(5):1081–1091.

Miner JJ, Sene A, Richner JM, Smith AM, Santeford A, Ban N, Weger-Lucarelli J, Manzella F, Ruckert C, Govero J, Noguchi KK, Ebel GD, Diamond MS, Apte RS. 2016b. Zika virus infection in mice causes panuveitis with shedding of virus in tears. Cell Rep 16(12):3208–3218.
Mysorekar IU, Diamond MS. 2016. Modeling Zika virus infection in pregnancy. N Engl J Med 375(5):481–484. Nicastri E, Castilletti C, Liuzzi G, Iannetta M, Capobianchi MR, Ippolito G. 2016. Persistent detection of Zika virus RNA in semen for six months after symptom onset in a traveller returning from Haiti to Italy, February 2016. Euro Surveill 21(32). Noronha L, Zanluca C, Azevedo ML, Luz KG, Santos CN. 2016. Zika virus damages the human placental barrier and presents marked fetal neurotropism. Mem Inst Oswaldo Cruz 111(5):287–293. Nowakowski TJ, Pollen AA, Di Lullo E, Sandoval-Espinosa C, Bershteyn M, Kriegstein AR. 2016. Expression analysis highlights AXL as a candidate Zika virus entry receptor in neural stem cells. Cell Stem Cell 18(5):591–596. Parameswaran P, Sklan E, Wilkins C, Burgon T, Samuel MA, Lu R, Ansel KM, Heissmeyer V, Einav S, Jackson W, Doukas T, Paranjape S, Polacek C, dos Santos FB, Jalili R, Babrzadeh F, Gharizadeh B, Grimm D, Kay M, Koike S, Sarnow P, Ronaghi M, Ding SW, Harris E, Chow M, Diamond MS, Kirkegaard K, Glenn JS, Fire AZ. 2010. Six RNA viruses and forty-one hosts: viral small RNAs and modulation of small RNA repertoires in vertebrate and invertebrate systems. PLoS Pathog 6(2):e1000764. Petersen LR, Jamieson DJ, Powers AM, Honein MA. 2016. Zika virus. N Engl J Med 374(16):1552–1563. Pierson TC, Graham BS. 2016. Zika virus: immunity and vaccine development. Cell 167(3):625–631. PrabhuDas M, Bonney E, Caron K, Dey S, Erlebacher A, Fazleabas A, Fisher S, Golos T, Matzuk M, McCune JM, Mor G, Schulz L, Soares M, Spencer T, Strominger J, Way SS, Yoshinaga K. 2015. Immune mechanisms at the maternalfetal interface: perspectives and challenges. Nat Immunol 16(4):328–334. Prisant N, Bujan L, Benichou H, Hayot PH, Pavili L, Lurel S, Herrmann C, Janky E, Joguet G. 2016. Zika virus in the female genital tract. Lancet Infect Dis 16(9):1000–1001. Quicke KM, Bowen JR, Johnson EL, McDonald CE, Ma H, O’Neal JT, Rajakumar A, Wrammert J, Rimawi BH, Pulendran B, Schinazi RF, Chakraborty R, Suthar MS. 2016. Zika virus infects human placental macrophages. Cell Host Microbe 20(1):83–90. Rasmussen SA, Jamieson DJ, Honein MA, Petersen LR. 2016. Zika virus and birth defects—reviewing the evidence for causality. N Engl J Med 374(20):1981–1987. Rausch K, Hackett BA, Weinbren NL, Reeder SM, Sadovsky Y, Hunter CA, Schultz DC, Coyne CB, Cherry S. 2017. Screening bioactives reveals nanchangmycin as a broad spectrum antiviral active against Zika virus. Cell Rep 18(3):804–815. Red-Horse K, Zhou Y, Genbacev O, Prakobphol A, Foulk R, McMaster M, Fisher SJ. 2004. Trophoblast differentiation during embryo implantation and formation of the maternalfetal interface. J Clin Invest 114(6):744–754. Retallack H, Di Lullo E, Arias C, Knopp KA, Laurie MT, Sandoval-Espinosa C, Mancia Leon WR, Krencik R, Ullian EM, Spatazza J, Pollen AA, Mandel-Brehm C, Nowakowski TJ, Kriegstein AR, DeRisi JL. 2016. Zika virus cell tropism in the developing human brain and inhibition by azithromycin. Proc Natl Acad Sci U S A 113(50):14408–14413. Robbins JR, Zeldovich VB, Poukchanski A, Boothroyd JC, Bakardjiev AI. 2012. Tissue barriers of the human placenta to infection with Toxoplasma gondii. Infect Immun 80(1):418–428. Rosenberg AZ, Yu W, Hill DA, Reyes CA, Schwartz DA. 2017. Placental pathology of Zika virus: viral infection of the placenta induces villous stromal macrophage (Hofbauer cell) proliferation and hyperplasia. Arch Pathol Lab Med 141(1):43–48. 
Sapparapu G, Fernandez E, Kose N, Bin C, Fox JM, Bombardi RG, Zhao H, Nelson CA, Bryan AL, Barnes T, Davidson E, Mysorekar IU, Fremont DH, Doranz BJ, Diamond MS, Crowe JE. 2016. Neutralizing human antibodies prevent Zika virus replication and fetal disease in mice. Nature 540(7633):443–447. Sarno M, Sacramento GA, Khouri R, do Rosa´rio MS, Costa F, Archanjo G, Santos LA, Nery N Jr, Vasilakis N, Ko AI, de Almeida AR. 2016. Zika virus infection and stillbirths: a case of hydrops fetalis, hydranencephaly and fetal demise. PLoS Negl Trop Dis 10(2):e0004517. Savidis G, McDougall WM, Meraner P, Perreira JM, Portmann JM, Trincucci G, John SP, Aker AM, Renzette N, Robbins DR, Guo Z, Green S, Kowalik TF, Brass AL. 2016. Identification of Zika virus and dengue virus dependency factors using functional genomics. Cell Rep 16(1):232–246. Sheehan KC, Lai KS, Dunn GP, Bruce AT, Diamond MS, Heutel JD, Dungo-Arthur C, Carrero JA, White JM, Hertzog PJ, Schreiber RD. 2006. Blocking monoclonal antibodies specific for mouse IFN-alpha/beta receptor subunit 1 (IFNAR-1) from mice immunized by in vivo hydrodynamic transfection. J Interferon Cytokine Res 26(11):804–819. Simoni MK, Jurado KA, Abrahams VM, Fikrig E, Guller S. 2017. Zika virus infection of Hofbauer cells. Am J Reprod Immunol 77(2). [Epub ahead of print]; DOI: 10.1111/aji.12613. Stettler K, Beltramello M, Espinosa DA, Graham V, Cassotta A, Bianchi S, Vanzetta F, Minola A, Jaconi S, Mele F, Foglierini M, Pedotti M, Simonelli L, Dowall S, Atkinson B, Percivalle E, Simmons CP, Varani L, Blum J, Baldanti F, Cameroni E, Hewson R, Harris E, Lanzavecchia A, Sallusto F, Corti D. 2016. Specificity, cross-reactivity, and function of antibodies elicited by Zika virus infection. Science 353(6301):823–826. Tabata T, Petitt M, Puerta-Guardo H, Michlmayr D, Wang C, Fang-Hoover J, Harris E, Pereira L. 2016. Zika virus targets different primary human placental cells, suggesting two routes for vertical transmission. Cell Host Microbe 20(2): 155–166. Tang WW, Young MP, Mamidi A, Regla-Nava JA, Kim K, Shresta S. 2016. A mouse model of Zika virus sexual transmission and vaginal viral replication. Cell Rep 17(12):3091–3098. ----- Turmel JM, Abgueguen P, Hubert B, Vandamme YM, Maquart M, Le Guillou-Guillemette H, Leparc-Goffart I. 2016. Late sexual transmission of Zika virus related to persistence in the semen. Lancet 387(10037):2501. van der Eijk AA, van Genderen PJ, Verdijk RM, Reusken CB, Mogling R, van Kampen JJ, Widagdo W, Aron GI, GeurtsvanKessel CH, Pas SD, Raj VS, Haagmans BL, Koopmans MP. 2016. Miscarriage associated with Zika virus infection. N Engl J Med 375(10):1002–1004. Vermillion MS, Lei J, Shabi Y, Baxter VK, Crilly NP, McLane M, Griffin DE, Andrew Pekosz A, Klein SL, Burd I. 2017. Intrauterine Zika virus infection of pregnant immunocompetent mice models transplacental transmission and adverse perinatal outcomes. Nat Commun 8:14575. Versteeg GA, Garcia-Sastre A. 2010. Viral tricks to grid-lock the type I interferon system. Curr Opin Microbiol 13(4):508– 516. Wang Q, Yang H, Liu X, Dai L, Ma T, Qi J, Wong G, Peng R, Liu S, Li J, Li S, Song J, Liu J, He J, Yuan H, Xiong Y, Liao Y, Li J, Yang J, Tong Z, Griffin BD, Bi Y, Liang M, Xu X, Qin C, Cheng G, Zhang X, Wang P, Qiu X, Kobinger G, Shi Y, Yan J, Gao GF. 2016. Molecular determinants of human neutralizing antibodies isolated from a patient infected with Zika virus. Sci Transl Med 8(369):369ra179. Wang S, Hong S, Deng YQ, Ye Q, Zhao LZ, Zhang FC, Qin CF, Xu Z. 2017. 
Transfer of convalescent serum to pregnant mice prevents Zika virus infection and microcephaly in offspring. Cell Res 27(1):158–160. Weisblum Y, Oiknine-Djian E, Vorontsov OM, HaimovKochman R, Zakay-Rones Z, Meir K, Shveiky D, Elgavish S, Nevo Y, Roseman M, Bronstein M, Stockheim D, From I, Eisenberg I, Lewkowicz AA, Yagel S, Panet A, Wolf DG. 2017. Zika virus infects early- and mid-gestation human maternal-decidual tissues, inducing distinct innate tissue re sponses in the maternal-fetal interface. J Virol 91(4):pii: e01905-16. Wells MF, Salick MR, Wiskow O, Ho DJ, Worringer KA, Ihry RJ, Kommineni S, Bilican B, Klim JR, Hill EJ, Kane LT, Ye C, Kaykas A, Eggan K. 2016. Genetic ablation of AXL does not protect human neural progenitor cells and cerebral organoids from Zika virus infection. Cell Stem Cell 19(6):703–708. Wu KY, Zuo GL, Li XF, Ye Q, Deng YQ, Huang XY, Cao WC, Qin CF, Luo ZG. 2016. Vertical transmission of Zika virus targeting the radial glial cells affects cortex development of offspring mice. Cell Res 26(6):645–654. Yockey LJ, Varela L, Rakib T, Khoury-Hanold W, Fink SL, Stutz B, Szigeti-Buck K, Van den Pol A, Lindenbach BD, Horvath TL, Iwasaki A. 2016. Vaginal exposure to Zika virus during pregnancy leads to fetal brain infection. Cell 166(5):1247–1256.e4. Zeldovich VB, Clausen CH, Bradford E, Fletcher DA, Maltepe E, Robbins JR, Bakardjiev AI. 2013. Placental syncytium forms a biophysical barrier against pathogen invasion. PLoS Pathog 9(12):e1003821. Zhao H, Fernandez E, Dowd KA, Speer SD, Platt DJ, Gorman MJ, Govero J, Nelson CA, Pierson TC, Diamond MS, Fremont DH. 2016. Structural basis of Zika virus-specific antibody protection. Cell 166(4):1016–1027. Address correspondence to: Dr. Indira U. Mysorekar Department of Obstetrics and Gynecology Washington University School of Medicine 660 South Euclid Avenue St. Louis, MO 63110 E-mail: indira@wustl.edu Received 7 February 2017/Accepted 1 March 2017 -----
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1089/jir.2017.0011?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1089/jir.2017.0011, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://europepmc.org/articles/pmc5512303?pdf=render" }
2017
[ "Review", "JournalArticle" ]
true
2017-07-01T00:00:00
[]
12286
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01c49338292689ba504f84ba2980903cdcb77f1e
[ "Computer Science" ]
0.897626
Decentralized P2P Energy Trading Under Network Constraints in a Low-Voltage Network
01c49338292689ba504f84ba2980903cdcb77f1e
IEEE Transactions on Smart Grid
[ { "authorId": "35503338", "name": "Jaysson Guerrero" }, { "authorId": "1996149", "name": "Archie C. Chapman" }, { "authorId": "2448835", "name": "G. Verbič" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE Trans Smart Grid" ], "alternate_urls": null, "id": "1c2f3998-b5ca-48ca-9991-94b71c71ecb7", "issn": "1949-3053", "name": "IEEE Transactions on Smart Grid", "type": "journal", "url": "http://ieeexplore.ieee.org/servlet/opac?punumber=5165411" }
The increasing uptake of distributed energy resources in distribution systems and the rapid advance of technology have established new scenarios in the operation of low-voltage networks. In particular, recent trends in cryptocurrencies and blockchain have led to a proliferation of peer-to-peer (P2P) energy trading schemes, which allow the exchange of energy between the neighbors without any intervention of a conventional intermediary in the transactions. Nevertheless, far too little attention has been paid to the technical constraints of the network under this scenario. A major challenge to implementing P2P energy trading is ensuring network constraints are not violated during the energy exchange. This paper proposes a methodology based on sensitivity analysis to assess the impact of P2P transactions on the network and to guarantee an exchange of energy that does not violate network constraints. The proposed method is tested on a typical U.K. low-voltage network. The results show that our method ensures that energy is exchanged between users under the P2P scheme without violating the network constraints, and that users can still capture the economic benefits of the P2P architecture.
## Decentralized P2P Energy Trading under Network Constraints in a Low-Voltage Network

#### Jaysson Guerrero, Student Member, IEEE, Archie C. Chapman, Member, IEEE, and Gregor Verbič, Senior Member, IEEE

**_Abstract—The increasing uptake of distributed energy resources (DERs) in distribution systems and the rapid advance of technology have established new scenarios in the operation of low-voltage networks. In particular, recent trends in cryptocurrencies and blockchain have led to a proliferation of peer-to-peer (P2P) energy trading schemes, which allow the exchange of energy between neighbors without any intervention of a conventional intermediary in the transactions. Nevertheless, far too little attention has been paid to the technical constraints of the network under this scenario. A major challenge to implementing P2P energy trading is that of ensuring that network constraints are not violated during the energy exchange. This paper proposes a methodology based on sensitivity analysis to assess the impact of P2P transactions on the network and to guarantee an exchange of energy that does not violate network constraints. The proposed method is tested on a typical UK low-voltage network. The results show that our method ensures that energy is exchanged between users under the P2P scheme without violating the network constraints, and that users can still capture the economic benefits of the P2P architecture._**

**_Index Terms—Peer-to-peer energy trading, local market, distribution grid, smart grids, distributed energy resources, blockchain._**

NOMENCLATURE

- $\mathcal{T}$: Set of all time-slots $t$.
- $\mathcal{H}$: Set of all households.
- $\mathcal{B}$: Set of all buyers.
- $\mathcal{S}$: Set of all sellers.
- $\mathcal{N}$: Set of all nodes $i$ in the network.
- $\mathcal{E}$: Set of distribution lines connecting the nodes in the network.
- $x^{+/-}$: Electrical power flowing from/to the grid.
- $s^{+/-}$: Import and export tariffs.
- $\pi_b$: Bid price of buyer $b$.
- $\pi_s$: Ask price of seller $s$.
- $\sigma_b$: Quantity of energy to purchase by buyer $b$.
- $\sigma_s$: Quantity of energy to supply by seller $s$.
- $C_i^c$: Marginal benefit of consumer $c$.
- $C_i^p$: Marginal cost of prosumer $p$.
- $P_i^c$: Real power consumption of consumer $c$.
- $P_i^p$: Real power generation of prosumer $p$.
- $L_{\min}$: Minimum value of bidding offers.
- $L_{\max}$: Maximum value of bidding offers.
- $\Phi_{kl}^{ij}$: Power transfer distribution factor of line $(k,l)$ due to changes in nodes $i$ and $j$.
- $\Psi_{kl}$: Injection shift factor of a line connecting nodes $k$ and $l$.
- $P_{loss}$: Active power losses.
- $\mathrm{BEC}^{ij}$: Bilateral exchange coefficient due to a bilateral transaction between nodes $i$ and $j$.

The authors are with The University of Sydney, School of Electrical and Information Engineering, NSW, 2006, Australia (email: jaysson.guerrero@sydney.edu.au; gregor.verbic@sydney.edu.au; archie.chapman@sydney.edu.au).

I. INTRODUCTION

The role of distributed energy resources (DERs) characterizes the future of electrical power systems. Photovoltaic (PV) panels, battery storage systems, smart appliances and electric vehicles are some of the resources that allow traditional domestic consumers to become prosumers. In fact, end-users can already undertake control actions to manage their consumption and generation. This context has introduced new opportunities and challenges to power systems. Local energy trading between consumers and prosumers is one of the new scenarios of growing importance in the domain of distribution networks.
Local distribution markets have been proposed as a means of efficiently managing the uptake of DERs [1], [2]. This involves the creation of new roles and market platforms that allow the active participation of end-users and direct interaction between them. This scenario brings potential benefits for the grid and users, by facilitating: (i) the efficient use of demand-side resources, (ii) the local balance of supply and demand, as well as (iii) opportunities for users to receive economic benefits through sharing and using clean and local energy. Given this context, a decentralized peer-to-peer (P2P) architecture has been proposed to implement local energy trading. Unlike the traditional scheme, under a P2P scheme, prosumers can trade their energy surplus with neighboring users.

Currently, the implementation of decentralized market platforms is possible due to new advances in information and communication technology, such as blockchain and other distributed ledger technologies (DLTs), which support transparent and decentralized transactions. Many studies have already considered DLTs as the base of their P2P energy trading platforms [3], [4]. For example, [5] proposed a P2P energy trading model for electrical vehicles, showing the potential of blockchain to enhance cybersecurity in P2P transactions. Similarly, the work in [6] demonstrates the benefits of a blockchain-based microgrid energy market using smart contracts. Additionally, commercial P2P trading pilot projects have also been implemented recently. Most of these create a cryptocurrency that is used to trade energy between users.¹

However, electricity exchange is different from any other exchange of goods. Residential users are part of an electricity network, which imposes hard technical constraints on the energy exchange. Completely decentralized energy trading, without any coordination, compromises the operation of the network within its technical limits. Therefore, physical network constraints must be included in energy trading models.

Despite the importance of the technical constraints, so far they have attracted little attention. The work in [3] introduces the application of blockchain technology for energy trading as well as for technical operation. Although the variation in power losses due to the energy exchanges is evaluated, the impacts of each transaction on voltage and network capacity issues are not considered. More recently, works like those of [6] and [7] used decomposition techniques to solve an optimal power flow in a distributed fashion for P2P energy trading. In a similar context, an alternative approach to account for network constraints and attribution of network usage cost is proposed in [8]. Nevertheless, there are still some elements of debate, such as the market framework, and how external costs due to the power exchange and network coupling constraints (from the AC power flow) can be associated with the transactions.

In response to this shortcoming, in this paper we extend the existing P2P energy trading scheme by explicitly taking into account the underlying network constraints at the distribution level. All transactions have to be validated during the bidding process, based on the network condition. Moreover, each transaction will be charged with the extra costs associated with the physical energy exchanged (i.e. due to losses). To our knowledge, this is the first model that integrates decentralized P2P energy trading with network constraints.

¹Examples of DLTs in P2P energy trading include PowerLedger (https://powerledger.io), Enosi (https://enosi.io) and LO3 Energy (https://lo3energy.com).
Previous research either only focused on the DLT technologies or did not consider the network constraints. In summary, the contributions of this paper are as follows:

- We illustrate the importance of including network constraints in the models of P2P trading to prevent voltage and capacity problems in the network;
- We propose a novel methodology based on sensitivity analysis to assess the impact of the transactions on the network and to internalize the external cost associated with the energy exchange;
- We present the benefits that P2P trading under network constraints may bring to power systems and end-users, by comparing our method with other strategies proposed to prevent upcoming LV network issues;
- We demonstrate a specific implementation of our methodology for P2P energy trading, comprising consumers and prosumers, which shows that our method is feasible and thereby appropriate for P2P energy trading schemes.

The paper progresses as follows: The next section introduces pertinent concepts from the implementation of P2P energy trading, and illustrates why network constraints must be considered. This is followed by a description of the methodology in Section III. Section IV summarizes the trading mechanism scheme that the case study of this paper builds on. Section V presents the model of the case study and simulation results, and Section VI concludes the paper.

II. PRELIMINARIES

Let $\mathbb{R}$ denote the set of real numbers, and $\mathbb{C}$ complex numbers. For a scalar, vector, or matrix $A$, $A'$ denotes its transpose and $A^*$ its complex conjugate.

The P2P scheme adopted is illustrated in Fig. 1. The information flows between peers in a decentralized manner. As such, every peer can interact through financial flows with the others. It should be noted that the interaction channels (e.g. DLTs) are separate from the physical links. The P2P scheme is composed of $H$ household agents, which interact among themselves over a decision horizon $\mathcal{T} := \{\tau, \tau + \Delta\tau, \ldots, \tau + T\Delta\tau - \Delta\tau\}$ (typically one day) consisting of $T$ time-slots. Specifically, the network comprises a set of nodes $\mathcal{N} := \{0, 1, 2, \ldots, N\}$. We index the nodes in $\mathcal{N}$ by $i = 0, 1, \ldots, N$.

_A. Problem Description_

We consider a smart grid system for P2P energy trading in a low-voltage (LV) network under a decentralized scheme. This paper considers the interaction of residential users through an online platform. Users can sell and buy energy to/from their neighbors or a retailer. We consider this a realistic assumption since currently there are pilot projects based on this concept, and it does not interfere with existing institutional arrangements (retail).²

A general P2P scheme is a method by which households interact directly with other households. Users are self-interested and have complete control of the energy they use (different to centralized direct load control structures, in which some entity may have control of some appliances). Let $\mathcal{H} = \{1, 2, \ldots, H\}$ be the set of all households in the local grid. The time is divided into time slots $t \in \mathcal{T}$, where $\mathcal{T} = \{1, 2, \ldots, T\}$ and $T$ is the total number of time slots. The set of all households $\mathcal{H}$ is composed of the union of two sets: consumers $\mathcal{C}$ and prosumers $\mathcal{P}$ (i.e. $\mathcal{H} = \mathcal{P} \cup \mathcal{C}$). We assume that all households are capable of predicting their levels of demand and generation of electrical energy for a particular time slot $t$. Specifically, we assume consumers bid in the market based on their demand profiles. As such, a demand profile is not divided into tasks or device utilization patterns, so the demand levels represent the total energy consumption over time. Prosumers are classified into two types. Type 1 prosumers include those which have only PV systems; Type 2 includes prosumers which have PV systems, battery storage and home energy management systems (HEMS). Prosumers have two options to sell their energy surplus: (i) they can sell to the retailer and receive a payment for the amount of energy (e.g. feed-in tariff), or (ii) they can sell on the local market to consumers who participate in the P2P energy trading process.

²Examples of pilot projects include the Decentralized Energy Exchange (deX) project, available at https://arena.gov.au/projects/decentralised-energy-exchange-dex/; and the White Gum Valley energy sharing trial, available at https://westernpower.com.au/energy-solutions/projects-and-trials/white-gum-valley-energy-sharing-trial/.
We assume that all households are capable of _H_ _P ∪C_ predicting their levels of demand and generation for electrical energy for a particular time slot t. Specifically, we assume consumers bid in the market based on their demand profiles. As such, a demand profile is not divided into tasks or device utilization patterns, so that is the demand levels represent the total energy consumption over time. Prosumers are classified into two types. Type 1 prosumers include those which have only PV systems; Type 2 includes prosumers which have PV systems, battery storage and home energy management systems (HEMS). Prosumers have two options to sell their energy 2Examples of pilot projects include Decentralized Energy Exchange (deX) Project, available at https://arena.gov.au/projects/decentralised-energyexchange-dex/; and White Gum Valley energy sharing trial, available at https://westernpower.com.au/energy-solutions/projects-and-trials/white-gumll h i t i l/ ----- Fig. 1. Model of information flows and physical links between households under a P2P scheme. surplus: (i) they can sell to the retailer and receive a payment for the amount of energy (e.g. feed-in tariff), or (ii) they can sell on the local market to consumers who participate in the P2P energy trading process. _B. Household Agent Model_ A household h ∈H uses d[h]t [units of electrical energy in slot] _t. Likewise, a household h ∈H has wt[h]_ [units of energy surplus] in slot t. The total quantity of electrical energy purchased in a slot t is given by x[+]t [, and its price is denoted by][ s]t[+][. The] total energy consumption x[+]t [includes the amount of electrical] energy purchased from the grid and from the local market. Similarly, the quantity of electrical energy sold in a slot t is given by x[−]t [, and its price is denoted by][ s]t[−][. While the energy] surplus of Type 1 prosumers in comes entirely from the _P_ PV system, each prosumer Type 2 in uses its HEMS to _P_ optimize its self-consumption, considering their demand and energy surplus by solving the following mixed-integer linear programming (MILP) problem [9]: � minimize (s[+]t _[x]t[+]_ _t_ _[x]t[−][)]_ (1) _x∈X_ _[−]_ _[s][−]_ _t∈T_ s.t. device operation constraints, energy balance constraints, _t_ _,_ _∀_ _∈T_ where X is the set of decision variables �x[+]t _[, x]t[−]�. State_ variables in the model are s[+]k [and][ s]t[−][. The former is associated] with the price of energy in time slot t, and the latter with the incentive received for the contribution to the grid. In other words, s[+]t [and][ s]t[−] [are related to import tariffs (e.g. flat,] time-of-use) or export tariffs (e.g. feed-in-tariff). The outcome of this process provides net load profiles for users with HEMS. After their self-optimisation, prosumers can export their energy surplus to the grid. _C. Network Model_ We consider a radial distribution network ( _,_ ), consist_G_ _N_ _E_ ing of a set of nodes and a set of distribution lines (edges) _N_ _E_ connecting these nodes. Using the notation of the branch flow model [10], we index the nodes by i = 0, 1, . . ., N, where the root of our radial network (Node 0) represents the substation bus, and it is considered as the slack bus. The other nodes in represent branch nodes. _N_ Denote a line in by the pair (i, j) of nodes it connects, _E_ where j is closer to the feeder 0. We call j the parent of i, denote by ς(i) and call i the child of j Denote the child set of Fig. 2. Percentage of households with voltage problems. 
_C. Network Model_

We consider a radial distribution network $\mathcal{G}(\mathcal{N}, \mathcal{E})$, consisting of a set of nodes $\mathcal{N}$ and a set of distribution lines (edges) $\mathcal{E}$ connecting these nodes. Using the notation of the branch flow model [10], we index the nodes by $i = 0, 1, \ldots, N$, where the root of our radial network (Node 0) represents the substation bus, and it is considered as the slack bus. The other nodes in $\mathcal{N}$ represent branch nodes. Denote a line in $\mathcal{E}$ by the pair $(i, j)$ of nodes it connects, where $j$ is closer to the feeder 0. We call $j$ the parent of $i$, denoted by $\varsigma(i)$, and call $i$ the child of $j$. Denote the child set of $j$ as $\delta(j) := \{i : (i, j) \in \mathcal{E}\}$. Thus, a link $(i, j)$ can be denoted as $(i, \varsigma(i))$.

For each line $(i, \varsigma(i)) \in \mathcal{E}$, let $I_{ij}$ be the complex current flowing from node $i$ to $\varsigma(i)$, let $Z_{ij} = R_{ij} + \mathrm{i}X_{ij}$ be the impedance of the edge, and $S_{ij} = P_{ij} + \mathrm{i}Q_{ij}$ be the complex power flowing from node $i$ to $\varsigma(i)$. On each node $i \in \mathcal{N}$, let $V_i$ be the complex voltage, and $S_i = P_i + \mathrm{i}Q_i$ be the net complex power injection. Define $v_i := |V_i|^2$. We assume the complex voltage $V_0$ at the feeder root node is given and fixed. Let $V = [v_1, \ldots, v_N]'$ be the concatenation of the voltage vectors at all nodes in the network.
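As a small illustration of this notation, the sketch below builds the parent map $\varsigma(\cdot)$ and the child sets $\delta(\cdot)$ for an assumed five-node feeder; the topology and impedance values are placeholders rather than the test network of Section V.

```python
# Minimal sketch of the radial-network notation: parent map and child sets
# for an assumed 5-node feeder (node 0 is the slack/substation bus).
from collections import defaultdict

# Lines (i, j) with j the parent (closer to the feeder root 0), and impedances
# Z_ij = R_ij + jX_ij in ohms (illustrative values only).
lines = {(1, 0): 0.05 + 0.10j,
         (2, 1): 0.04 + 0.08j,
         (3, 2): 0.04 + 0.08j,
         (4, 1): 0.03 + 0.06j}

parent = {i: j for (i, j) in lines}   # varsigma(i)
children = defaultdict(set)           # delta(j)
for i, j in lines:
    children[j].add(i)

def path_to_root(i):
    """Nodes traversed from i up to the slack bus, e.g. for flow tracing."""
    path = [i]
    while path[-1] != 0:
        path.append(parent[path[-1]])
    return path

print(children[1])        # {2, 4}
print(path_to_root(3))    # [3, 2, 1, 0]
```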
In contrast, we show that local energy markets can efficiently allocate the energy surplus, and enable mutual benefits for distribution system operator and all users. Apart from network issues, technical constraints also influence market efficiency. Since there are external costs associated with power flows, those externalities could represent a barrier to efficient markets. Those extra costs could be internalized in the trading offers of agents. A principled way of addressing the problem of DER dispatch subject to network _constraints is to use distribution optimal power flow (DOPF)_ [19], which is formulated as follows: 3 2 ∆P3 0 1 4 ∆P4 Fig. 4. A simple distribution network. be done at a household level to preserve consumers prerogative and privacy [22], [23]. Finally, household consumption patterns are stochastic, so proper mechanisms are required to ensure that the customers follow the allocated power profiles [22], [24], [25]. Second, a DOPF implementation would require a complete redesign of the tariff structures, so it cannot be easily incorporated into the existing market framework. A viable alternative to the DOPF approach which obviates many of the above challenges is decentralized P2P. However, a successful P2P approach needs to obey the network constraints, as discussed next. III. METHODOLOGY In this section, we propose a methodology to implement P2P energy trading under network constraints with self-interested agents. This situation is similar to the bilateral trading in a power system. Fig. 4 illustrates the situation where a user located at Bus 4 has purchased energy from the prosumer located at Bus 3. This implies physical changes in the power flows through the lines in the network. Hence, our aim is to estimate the impact of the injection and absorption of that amount of power on the grid. The methodology proposed in this work embeds analytically derived sensitivity coefficients to guarantee bilateral transactions as well as internalizing the external costs associated with the power flows. Specifically, we incorporate three factors in the market mechanism: _Voltage sensitivity coefficients (VSCs): Through VSCs,_ _•_ we can estimate the variation in the voltages as a function of the power injections in the network; _Power transfer distribution factors (PTDFs): These reflect_ _•_ the changes in active power line flows due to an exchange of active power between two nodes; _Loss sensitivity factors (LSFs): These reflect the portion_ _•_ of system losses due to power injections in the network. _A. Voltage Sensitivity Coefficients Formulation_ The traditional approach to obtain the VSCs is to use the Jacobian matrix after solving the Newton-Raphson power flow [26]: maximize _Pi[c][,P][ p]i_ � � _Ci[c][P]i[ c]_ _[−]_ _Ci[p][P]i[ p]_ (2) _i∈N_ _i∈N /0_ s.t. power flow constraints, power balance constraints, and DER operational constraints, where Ci[c] [is the marginal benefit of consumers,][ C]i[p] [is the] marginal cost of prosumers, Pi[c] [is the real power consumption,] and Pi[p] [is the real power generation.] DOPF produces distribution locational marginal prices (DLMPs) which can be used to attribute the network cost to the market. In doing so, a central entity (e.g. DSO) solves the optimization problem across the scheduling horizon with the goal of minimizing the total cost of supplying power to the consumers subject to network constraints. As a result, real power losses, and (binding) capacity and voltage constraints result in DLMPs being different across the network. 
Fig. 4. A simple distribution network.

III. METHODOLOGY

In this section, we propose a methodology to implement P2P energy trading under network constraints with self-interested agents. This situation is similar to bilateral trading in a power system. Fig. 4 illustrates the situation where a user located at Bus 4 has purchased energy from the prosumer located at Bus 3. This implies physical changes in the power flows through the lines in the network. Hence, our aim is to estimate the impact of the injection and absorption of that amount of power on the grid. The methodology proposed in this work embeds analytically derived sensitivity coefficients to guarantee bilateral transactions as well as internalizing the external costs associated with the power flows. Specifically, we incorporate three factors in the market mechanism:

- _Voltage sensitivity coefficients (VSCs)_: Through VSCs, we can estimate the variation in the voltages as a function of the power injections in the network;
- _Power transfer distribution factors (PTDFs)_: These reflect the changes in active power line flows due to an exchange of active power between two nodes;
- _Loss sensitivity factors (LSFs)_: These reflect the portion of system losses due to power injections in the network.

_A. Voltage Sensitivity Coefficients Formulation_

The traditional approach to obtain the VSCs is to use the Jacobian matrix after solving the Newton–Raphson power flow [26]:

$$J = \begin{bmatrix} \dfrac{\partial P}{\partial \theta} & \dfrac{\partial P}{\partial |V|} \\[1ex] \dfrac{\partial Q}{\partial \theta} & \dfrac{\partial Q}{\partial |V|} \end{bmatrix}, \qquad (3)$$

where $P$ and $Q$ are the vectors of real and reactive nodal injections, and $\theta$ and $V$ are the vectors of voltage angles and magnitudes. Calculating the inverse of the Jacobian at a given operating point gives an idea of the voltage changes ($\Delta V_i$) due to changes in power injections ($\Delta P_i$, $\Delta Q_i$) as follows:

$$\Delta V_i = \left( \frac{\partial V_i}{\partial P_i} \right) \Delta P_i + \left( \frac{\partial V_i}{\partial Q_i} \right) \Delta Q_i. \qquad (4)$$

However, running a full load flow every time the state of the network changes may not be feasible or tractable. Therefore, in our study, we use the analytical derivation of VSCs proposed in [27]. In doing so, we use the so-called compound admittance matrix. The relation of the power injections and bus voltages is given by³:

$$S_i^* = V_i^* \sum_{j \in \mathcal{N}} Y_{ij} V_j, \quad i \in \mathcal{N}. \qquad (5)$$

To obtain VSCs, the partial derivatives of the voltages with respect to the active power $P_k$ of a bus $k \in \mathcal{N}/0$ are computed. The partial derivatives with respect to active power satisfy the following system of equations:

$$\mathbb{1}_{\{i=k\}} = \frac{\partial V_i^*}{\partial P_k} \sum_{j \in \mathcal{N}} Y_{ij} V_j + V_i^* \sum_{j \in \mathcal{N}/0} Y_{ij} \frac{\partial V_j}{\partial P_k}. \qquad (6)$$

Although this system is not linear over complex components, it is linear with respect to $\partial V_i / \partial P_k$ and $\partial V_i^* / \partial P_k$, therefore it is linear over the real numbers with respect to rectangular coordinates. Moreover, it has a unique solution, and can be used to compute the partial derivatives. Once they are obtained, the partial derivatives of the voltage magnitude are expressed as:

$$\frac{\partial |V_i|}{\partial P_k} = \frac{1}{|V_i|} \, \mathrm{Re} \left\{ V_i^* \frac{\partial V_i}{\partial P_k} \right\}, \qquad (7)$$

$$\Delta |V_i| = \frac{\Delta P_k}{|V_i|} \, \mathrm{Re} \left\{ V_i^* \frac{\partial V_i}{\partial P_k} \right\}. \qquad (8)$$

Voltage changes can therefore be calculated based on the power changes in specific buses of the network.

³Complex conjugates are denoted with a star ($V^*$).
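To illustrate how (6) and (7) are used in practice, the following numpy sketch assembles and solves the real-valued linear system for the voltage sensitivities of a toy 3-bus feeder. The line admittances and the "solved" bus voltages are assumed placeholders; in an actual deployment $V$ would come from a power-flow solution or state estimation, as in Fig. 5.

```python
# Minimal sketch of the analytical VSC computation in (6)-(7) for a toy
# 3-bus radial feeder 0-1-2; all electrical values are assumed placeholders.
import numpy as np

y01 = 1 / (0.02 + 0.06j)                      # line admittances (p.u., assumed)
y12 = 1 / (0.03 + 0.08j)
Y = np.array([[ y01,       -y01,       0  ],  # compound admittance matrix
              [-y01,  y01 + y12,     -y12 ],
              [ 0,         -y12,      y12 ]])
V = np.array([1.0 + 0.0j, 0.98 - 0.01j, 0.97 - 0.02j])  # assumed solved voltages
N = len(V)
k = 2                       # bus whose active-power injection P_k is perturbed
I_inj = Y @ V               # nodal injection currents, sum_j Y_ij V_j in (6)

# Unknowns u_i = dV_i/dP_k for i = 1..N-1 (slack bus 0 is fixed). Equation (6)
# is linear in u and its conjugate, so split it into real/imaginary parts.
n = N - 1
A = np.zeros((2 * n, 2 * n))
rhs = np.zeros(2 * n)
for r in range(n):
    i = r + 1
    rhs[2 * r] = 1.0 if i == k else 0.0
    for c in range(n):
        j = c + 1
        w = V[i].conjugate() * Y[i, j]        # coefficient of u_j in (6)
        A[2 * r,     2 * c]     += w.real     # Re{w u_j} = w_r a_j - w_x b_j
        A[2 * r,     2 * c + 1] -= w.imag
        A[2 * r + 1, 2 * c]     += w.imag     # Im{w u_j} = w_x a_j + w_r b_j
        A[2 * r + 1, 2 * c + 1] += w.real
    # Term conj(u_i) * I_inj_i of (6), with conj(u_i) = a_i - j b_i.
    A[2 * r,     2 * r]     += I_inj[i].real
    A[2 * r,     2 * r + 1] += I_inj[i].imag
    A[2 * r + 1, 2 * r]     += I_inj[i].imag
    A[2 * r + 1, 2 * r + 1] -= I_inj[i].real

sol = np.linalg.solve(A, rhs)
dV = sol[0::2] + 1j * sol[1::2]                        # dV_i/dP_k, i = 1..N-1
dVmag = (V[1:].conjugate() * dV).real / np.abs(V[1:])  # eq. (7)
print(dVmag)   # d|V_i|/dP_k: positive entries mean injecting at k raises |V_i|
```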
Loss Sensitivity Factors_ We derived the LSFs using a similar approach to the use above. The term for the LSF is given by [30]: � _∂Vi_ ∆ _|Vi| = [∆][P][k]_ _Vi[∗]_ _|Vi|_ [Re] _∂Pk_ � _._ (8) Voltage changes can therefore be calculated based on the power changes in specific buses of the network. _B. Power Transfer Distribution Factors_ Since the exchange of energy involves power flow through physical routes, PTDFs can give an idea of the sensitivity of the active power flow with respect to various variables. Specifically, the injection shift factor (ISF) quantifies the redistribution of power through each branch following a change in generation or load on a particular bus. It reflects the sensitivity of a flow through a branch with respect to changes in generation or load. Once we obtain the ISFs, we can calculate the PTDFs, which capture the variation in the power flows with respect to the injection in Bus i and a withdrawal of the same amount at Bus j [28], [29]. In order to calculate the ISFs, we use the reduced nodal susceptance matrix. The ISF of a branch (k, l) (assume _∈E_ positive real power flow from Bus k to l measured at Bus _k) with respect to Bus i ∈N_, which we denote by Ψ[i]kl[, is] the linear approximation of the sensitivity of the active power flow in branch (k, l) with respect to the active power injection at Bus i with the location of the slack bus specified and all other quantities constant. Suppose Pi varies by a small amount, ∆Pi, and let ∆Pkl[i] [be the change in the active power flow in] 3C l j t b d t d ith t b ( _V_ _[∗])_ The bilateral exchange coefficient (BEC) can be used to associate the losses due to a bilateral transaction [31]. An overview of the methodology is shown in Fig 5 _∂Ploss_ = 2Re �V[∗][T] _G [∂][V]_ _∂Pk_ _∂Pk_ � _,_ (13) where the partial derivatives are obtained from (6), and G is the conductance matrix. In order to assign losses associated to a changes in the power, we consider the approach to attribute losses to bilateral exchanges. For example, in the bilateral exchange in Fig. 4, there is a bilateral exchange from Bus 3 to Bus 4. The terms _[∂P]∂P[loss]i_ [and][ ∂P]∂P[loss]j [are the loss sensitivities] with respect to power injection at bus i and to power out at Bus _j respectively. Then the bilateral exchange coefficient (BEC)_ is defined as follows: BEC[ij] = _[∂P][loss]_ _−_ _[∂P][loss]_ _._ (14) _∂Pi_ _∂Pj_ ----- Fig. 5. Overview of the methodology. Modules to calculate: (i) VSCs, (ii) PTDFs, and (iii) LSFs. _D. Illustrative example_ We present a simple example case to illustrate how a bilateral transaction is associated with real power losses, congestion and voltage constraints. We consider a simple five node model shown in Fig. 4, and we apply the methodology explained in this Section. We assume that the prosumer at Node 3 wants to exchange energy with the consumer at Node 4. That is, an amount of power injected at Node 3 (∆P3) and is withdrawn at Node 4 (∆P4). From this transaction, we can obtain the following parameters: Voltage variations caused by the transaction can be esti _•_ mated using VSCs and (8). The transaction will not be allowed if it causes voltage issues in the network. _• The PTDFs values, (Φ[34]01[,][ Φ][34]12[,][ Φ][34]23[,][ Φ][34]14[), are calculated]_ to evaluate the utilization rate of the lines based on the transaction. These values can be used to assign congestion charges. As such, agents will pay a charge for using the physical network. Moreover, PTDFs can be used to estimate the congestion in the lines. 
The total system losses caused by the transaction are _•_ calculated using the VSCs and (13) and (14). Therefore, agents involved in the transaction will be responsible for paying an extra cost due to the losses caused using coefficient BEC[34]. These elements allow us to evaluate the impact of each transaction in the network, and they can be used to incorporate more properties to the model. For example, since users will have to pay the extra cost due to congestion and losses, users will tend to prefer to exchange energy with the closest ones. IV. TRADING MARKET MECHANISM The market mechanism for a P2P energy trading developed in this paper builds on our previous work [11]. There are three components to our market mechanism: (i) a continuous double _auction (CDA), (ii) the agents’ bidding strategies, and (iii) the_ network permission structure, as described below. _A. Continuous Double Auction_ A CDA matches buyers and sellers in order to allocate a commodity. It is widely used, including in major stock markets like the NYSE. A CDA is a simple market format that matches parties interested in trading, rather than holding any of the traded commodity itself. This makes it very well suited for P2P exchanges. Bids into a CDA indicate the prices that participants are willing to accept a trade, and reflect their desire to improve their welfare. As such, the CDA tends towards a highly efficient allocation of commodities [32]. In more detail, a CDA comprises: A set of buyers, where each b defines its trading _•_ _B_ _∈B_ price πb and the amount of energy to purchase σb. A set of sellers, where each s defines its trading _•_ _S_ _∈S_ price πs and the amount of energy to sell σs. _• An order book, with bids ob(b, πb, σb, t), made by buyers_ _B, and asks os(s, πs, σs, t), made by sellers S._ Pseudo-code of the matching process in a CDA is given in Algorithm 1. A CDA is run for each time slot separately. Any intertemporal couplings that arise on a customer’s side from using batteries or loads with long minimum operating times are not passed up to the market clearing entity. Once the market is open, arriving orders are queued in the order book for trades during a fixed interval td (lines 2-8), which is limited by the start time t[st]d [and the trading end time][ t]d[end] (i.e. t[end]d = t[st]d [+] _[t][d][).]_ During the trading period, orders are submitted for buying or selling units of electrical energy in time-slot t. At the end of the trading period, the market closes, thereby no more offers are received. We assume the orders arrive according to a Poisson process with mean arrival rate λ. The current best bid (ask) is the earliest bid (ask) with the highest (lowest) price. A bid and an ask are matched when the price of a new bid (ask) is higher than or equal to the price of the best ask o[∗]s[(][s][∗][, π]s[∗][, σ]s[∗][, t][∗][)] (the best bid o[∗]b [(][b][∗][, π]b[∗][, σ]b[∗][, t][∗][)][) in the order book (line 9).] However, if a new bid (ask) is not matched, then it is added to the order book, recording its arrival time and price. Note that after matching, an order may be only partially covered. If this is the case, it will remain at the top of the order book waiting for a new order. This process is executed continually during the trading period as new asks and bids arrive ----- _B. Bidding Strategies_ Conventionally, market participants (buyers and sellers) define their asks and bids based on their preferences and the associated costs. 
The HEMS act as agents for the customers, and are continually responding to new stochastic information. As such, they appear very unpredictable from the outside. Moreover, because the market is thin, this can produce large swings in available energy and prices. In this context, constructing an optimal bidding strategy is futile, but simple bidding heuristics are still valuable. In particular, in our study the agents are zero intelligence plus (ZIP) traders [11], [33]. ZIP traders use an adaptive mechanism which can give performance very similar to that of human traders in stock markets. Agents have a profit margin which determines the difference between their limit prices and their asks or bids. Under this strategy, traders adapt and update their margins based on the matching of previous orders (lines 12-23 for buyers and lines 24-35 for sellers). Indeed, the participation of ZIP traders in a CDA allows us to assess the economic benefits of the market separate from that of a particular bidding strategy. Specifically, ZIP traders are subject to a budget constraint (Lmax and Lmin are the maximum and minimum price respectively) which forbids the trader to buy or sell at a loss. Then, buyers and sellers select their bids or asks uniformly at random between these limits. _C. Network Permission Structure_ The outline of the mechanism is presented in Fig. 6. A third party entity (e.g. DSO) validates the transactions using a network permission structure based on the network’s features and sensitivity coefficients. Every time one ask and one bid are matched, voltage variation and line congestion are evaluated. All households receive a signal (φ[h]) which informs them if they can still participate in the market without causing problems in the network. For instance, one prosumer could be blocked from injecting power into the grid at a certain time due to the high risk of causing voltage problems in the network. This is achieved using the VSCs and PTDFs. If the transaction is approved, the extra cost associated with the network constraints are allocated to the users involved in the matched transaction. Importantly, power curtailment is implicitly incorporated in the trading. Thus, this method may bring extra benefits in comparison to others curtailment methods. For example, users at the worst node location still have the opportunity to participate if their order can be matched and if the mechanism allows the trade. This improves the efficiency by allowing greater participation of consumers and a better reflect of network conditions. V. SYSTEM MODEL - CASE STUDY Our study is focused on a LV network with a high DERs penetration. The group of households is constituted by consumers and prosumers (Type 1 and Type 2) defined in Section II. 
There are three components to our model: the local power _network, the customers and the market for trading energy, as_ defined above **Algorithm 1 Matching process in a CDA with ZIP traders** 1: while market is open do 2: randomly select a new ZIP trader 3: **if buyer then** 4: new ob(b, πb, σb, t) 5: **else** 6: new os(s, πs, σs, t) 7: **end if** 8: allocate a new order in the order book _▷_ Evaluate matching process with best bid and ask 9: **if πb[∗]** _[≥]_ _[π]s[∗]_ **[then]** 10: clear orders o[∗]b [and][ o]s[∗] [at a price][ π][t][ and amount][ σ][t] 11: **end if** _▷_ Update values of profit margins _▷_ Buyers 12: **if the last order was matched at price πt then** 13: all buyers for which πb ≥ _πt, raise their margins;_ 14: **if the last trader was a seller then** 15: any active buyer for which πb ≤ _πt,_ 16: lower its margin; 17: **end if** 18: **else** 19: **if the last trader was a buyer then** 20: any active buyer for which πb ≤ _πt,_ 21: lower its margin; 22: **end if** 23: **end if** _▷_ Sellers 24: **if the last order was matched at price πt then** 25: all sellers for which πs ≤ _πt, raise their margins;_ 26: **if the last trader was a buyer then** 27: any active seller for which πs ≥ _πt,_ 28: lower its margin; 29: **end if** 30: **else** 31: **if the last trader was a seller then** 32: any active seller for which πs ≥ _πt,_ 33: lower its margin; 34: **end if** 35: **end if** 36: end while Update network Yes Estimate voltage state estimation. Allocate Matched? and power extra Block high risk flow variations costs households. No Households Prosumers Received _os, ob_ continually Consumers asks & bids _φ[1], φ[2], . . ., φ[H]_ Open Fig. 6. Schematic of the P2P trading under network constraints. _A. Implementation: Test Network_ We consider a smart grid system for energy trading at a local level. The methodology is applied to the UK LV network shown in Fig. 7, comprising one feeder and 100 single phase households. The simulations are carried out with T = 24 hours, ∆τ = 15 minutes and up to 100 agents. There are 50 consumers and 50 prosumers, 40 for Type 1 (PV) and 10 for Type 2 (PV, battery and HEMS). Each household has a stochastic load consumption profile, with load profiles using the tool presented in [34]. Similarly, PV profiles are generated considering sun irradiance data, capturing the sunniest days in order to evaluate the method on the most challenging ----- 50 100 [m] 150 200 Fig. 7. Topology of the studied LV network. The black squares, the green point and the red triangle in the topology, represent the location of households, the CES, and the transformer, respectively. yet realistic scenarios. We assume that all prosumers have a PV system with installed capacity of 5.0 kWp. Each Type 2 households has a battery of 3 kW and 10 kWh. Additionally, there is one community electricity storage (CES) of 25 kW and 50 kWh operated by the retailer. In particular, the operation objective of the CES is to apply peak shaving during peak load hours. The CES strategy is to buy only the energy to charge in the P2P market to other prosumers around midday (when there are low rates and a high number of prosumers with energy surplus) and resell the energy during peak demand hours to the consumers. Like the prosumers behavior, the CES is modeled as a ZIP trader. We define the price constraints Lmax and Lmin based on the values of import and export electricity tariffs through the day. Lmax depends on the time-of-use tariff (ToU) and Lmin on the feed-in-tariff (FiT). 
These definitions are consistent in the sense that no buyer would pay more than the tariff of a retailer (ToU), and no seller would sell their units cheaper than the export tariff (FiT). In summary, the process of our model is: 1) The HEMS minimizes a prosumer’s costs by solving problem (1), using a mixed-integer linear program. 2) Prosumers state the time-slots when they have extra energy to trade. 3) The bidding strategies for the market participants are initialized, using their load and generation profiles and tariffs, and the market is opened. 4) Every time an ask and a bid are matched, the network conditions are evaluated. The market remains open as long as the network constraints are respected. 5) Agents accept the number of units to be exchanged and their prices. _B. Scenarios’ Description_ Since our interest is to evaluate our methodology and to show the benefits of P2P energy trading under network constraints, two scenarios are evaluated. _1) Scenario I: The first scenario is based on the methodol-_ ogy introduced in this paper. Users participate in P2P trading. The matching process between asks and bids in the P2P market promotes the local balance of demand and generation of endusers. In this case, a market rule allows the prosumers to supply their energy surplus until the total demand, including the energy required by the CES is covered _2) Scenario II: In this case, prosumers are allowed to inject_ more energy into the grid as long as that does not cause any voltage or capacity problems in the network. Since curtailment methods are commonly used to prevent LV network issues in a high PV penetration, we considered them as a benchmark in this scenario. As such, we compared our scheme with other curtailment methods to illustrate the benefits of the local markets and the extra benefits of power curtailment functionality. Specifically, the four schemes to compare are: _Local market P2P (P2P): The methodology introduced in_ _•_ this paper. _Reduce capacity (Red. Cap): A static active power curtail-_ _•_ ment method. All users can export only a limited power to the grid. In this case, all prosumers can export 3 _≤_ kW. This value is chosen based on an impact assessment study of this particular network. It ensures the network constraints are not violated. _Tripping: The standard approach where an inverter op-_ _•_ erates until it reaches the maximum voltage limit. Then, the inverter protection shuts it down. _Droop-based active curtailment (APC-OLP): A dynamic_ _•_ active power curtailment method. Inverters are controlled with a droop-based active power curtailment method (APC). The droop parameters of the inverters are different so that the output power losses (OPL) are shared equally among all prosumers [18]. For the three benchmark schemes, households buy energy at the ToU rate and sell at the FiT value. Each scheme is simulated using OpenDSS software. We consider a daily simulation mode using the same input data for all schemes. The operation settings of PV systems is modified depending on the features of each scheme (e.g. 3 kW is the maximum power to export to the grid in Red.Cap case). _C. Scenario I Results_ Fig. 8 shows the average transaction price (ATP) and the amount of energy purchased from the grid or in the P2P market during one day. The transaction prices remain in the range of ToU and FiT rates because of the ZIP limits Lmax and _Lmin. Hence, both prosumers and consumers obtain monetary_ benefits by participating in P2P trading. 
Most of the energy is traded during 8:00 and 14:00. During that time, there is an excess of energy due to PV generation. Notably, there is a peak of energy sold in the market around 11 am because of the charging strategy of the CES. There is some energy traded after 18:00 due to the CES and the prosumers who kept some energy in the battery. Once the peak time ends (20:00), the ZIP maximum limit (Lmax) is low. As a consequence, no prosumers submit any new asks to trade in the market. Moreover, in this case, when the total energy surplus from prosumers is greater than the total demand of consumers (e.g. around midday), some prosumers (those who do not match their asks with consumers’ bids) have to curtail their power generation. Fig. 9 presents a histogram of voltages at all users’ nodes during one day of simulation. There are no cases of overvoltage. The voltages varied between 0.945 pu and 1.022 pu. Around 55% of the voltages are between 0 99 pu and 1 pu As ----- |Col1|Col2|Col3|Col4|Col5|Col6|Col7|Col8|Col9|Col10| |---|---|---|---|---|---|---|---|---|---| |ToU FiT|||||||||| |ATP|||||||||| ||||||||||| ||||||||||| Fig. 8. Average transaction prices (top), demand and generation levels (bottom) in Scenario I. Fig. 9. Histogram of voltages at users’ nodes - number of occurrences in one day period at a certain voltage [pu] in Scenario I. TABLE I COMPARISON OF TOTAL EXPENSES AND INCOMES IN SCENARIO I **Without P2P** **With P2P** **Market** Expenses Incomes Expenses Incomes **Benefit** $241.98 $32.37 $198.50 $64.81 $75.92 such, all exchanges respect the network constraints, and the external costs were attributed among the households involved in each transaction. Finally, Table I compares the total expenses and incomes of all households during one day. Without the P2P trading, end-users buy energy at the ToU rate and sell it at the FiT value. In contrast, with P2P, the transaction prices are discovered through the market mechanism. Hence, the users’ expenses decrease and the users’ incomes increase, achieving a market benefit of $75.92, while remaining within the networks operating limits. _D. Scenario II Results_ This scenario compares our method with the benchmark curtailment schemes. The results in Fig. 10 show that in the P2P case there is more energy traded, and the revenues for the prosumers are greater in comparison with the other methods. Hence, this local market reduces the energy spilled and increases the prosumers’ incomes. Particularly, the drawback of the power curtailment methods is that they do not consider the impact on the revenues of end-users. In contrast, the P2P scheme offers greater economic benefits to all users. For example, in the Tripping case, the furthest prosumer (with respect to the location of the feeder) is regularly the first to be curtailed, and its energy spilled is 70% of its total energy surplus In contrast the energy spilled is only around 50% Fig. 10. Total energy supplied to the grid by prosumers and their incomes received in Scenario II. in the P2P case. So, the prosumer sold more energy in the P2P case, thereby its income increased by $0.7. In this way, the P2P local market provides distributed coordination, control and management of the DERs. VI. CONCLUSION In this paper, we have proposed a new methodology to deploy P2P energy trading local markets considering the network constraints in the market mechanism. We explicitly considered the impact of the injection and absorption of power in the network in a P2P exchange. 
Users exchange energy with their neighbors through a continuous double auction, and their transactions internalize the extra cost associated with the technical constraints. Simulation results showed that our proposed method reduces the energy cost of the users and achieves the local balance between generation and demand of households without violating the technical constraints. Finally, we compared the implementation of our market with other curtailment methods. Our technique captures the desirable properties of curtailment methods within the market platform. Hence, our system exploits profitable opportunities for reduced spilled energy for all stakeholders.

Due to the use of a continuous double auction (CDA), the proposed method doesn't suffer from the scalability issues of OPF and DLMP models. Specifically, stock exchanges allow for huge numbers of trades a day (e.g. NASDAQ processes 10M trades each day). This is actually a key benefit of the CDA approach, because the complexity is kept on the trading agent side of the ledger, not the clearing entity. In a standard CDA, the clearing entity has only very low-computation routines to complete. While this P2P framework has an additional bid permission overlay, the complexity of these routines is not great (i.e. no optimization) and the number of bids on a typical MV feeder is not expected to exceed that of a stock exchange.

Future work will extend the study of bidding strategies of agents with flexible loads participating in a P2P market, as well as the incorporation of a penalty policy to evaluate prediction deviations in forecast profiles and to enhance the trading among nearby users.

REFERENCES

[1] Australian Energy Market Commission (AEMC), "Distribution Market Model project," Report, 2017.
REFERENCES

[1] Australian Energy Market Commission (AEMC), "Distribution Market Model project," Report, 2017.
[2] F. Moret and P. Pinson, "Energy collectives: a community and fairness based approach to future electricity markets," IEEE Trans. Power Syst., pp. 1-1, 2018.
[3] G. Zizzo, E. R. Sanseverino, M. G. Ippolito, M. L. D. Silvestre, and P. Gallo, "A technical approach to P2P energy transactions in microgrids," IEEE Trans. Ind. Informat., pp. 1-1, 2018.
[4] A. Goranović, M. Meisel, L. Fotiadis, S. Wilker, A. Treytl, and T. Sauter, "Blockchain applications in microgrids: An overview of current projects and concepts," in IECON 2017 - 43rd Conf. IEEE Industrial Electronics Society, Oct 2017, pp. 6153-6158.
[5] J. Kang, R. Yu, X. Huang, S. Maharjan, Y. Zhang, and E. Hossain, "Enabling localized peer-to-peer electricity trading among plug-in hybrid electric vehicles using consortium blockchains," IEEE Trans. Ind. Informat., vol. 13, no. 6, pp. 3154-3164, Dec 2017.
[6] E. Münsing, J. Mather, and S. Moura, "Blockchains for decentralized optimization of energy resources in microgrid networks," in 2017 IEEE Conf. Control Technol. Appl. (CCTA), Aug 2017, pp. 2164-2171.
[7] T. Baroche, P. Pinson, R. Le Goff Latimier, and H. Ben Ahmed, "Exogenous approach to grid cost allocation in peer-to-peer electricity markets," Mar 2018, arXiv preprint, arXiv:1803.02159v1.
[8] T. Morstyn and M. McCulloch, "Multi-class energy management for peer-to-peer energy trading driven by prosumer preferences," IEEE Trans. Power Syst., pp. 1-1, 2018.
[9] C. Keerthisinghe, G. Verbič, and A. C. Chapman, "A fast technique for smart home management: ADP with temporal difference learning," IEEE Trans. Smart Grid, vol. 9, no. 4, pp. 3291-3303, July 2018.
[10] N. Li, "A market mechanism for electric distribution networks," in 2015 54th IEEE Conf. Decis. and Control (CDC), Dec 2015, pp. 2276-2282.
[11] J. Guerrero, A. Chapman, and G. Verbič, "A study of energy trading in a low-voltage network: Centralised and distributed approaches," in 2017 Australasian Universities Power Engineering Conference (AUPEC), Nov 2017, pp. 1-6.
[12] D. Ilić, P. G. D. Silva, S. Karnouskos, and M. Griesemer, "An energy market for trading electricity in smart grid neighbourhoods," in 2012 6th IEEE Int. Conf. Digital Ecosyst. Technol. (DEST), June 2012, pp. 1-6.
[13] Y. Wang, W. Saad, Z. Han, H. V. Poor, and T. Başar, "A game-theoretic approach to energy trading in the smart grid," IEEE Trans. Smart Grid, vol. 5, no. 3, pp. 1439-1450, May 2014.
[14] E. Mengelkamp, P. Staudt, J. Gärttner, and C. Weinhardt, "Trading on local energy markets: A comparison of market designs and bidding strategies," in 2017 14th Int. Conf. Eur. Energy Market (EEM), Jun. 2017, pp. 1-6.
[15] A. Navarro-Espinosa and L. F. Ochoa, "Probabilistic impact assessment of low carbon technologies in LV distribution systems," IEEE Trans. Power Syst., vol. 31, no. 3, pp. 2192-2203, May 2016.
[16] S. Hashemi and J. Østergaard, "Methods and strategies for overvoltage prevention in low voltage distribution systems with PV," IET Renewable Power Generation, vol. 11, no. 2, pp. 205-214, 2017.
[17] K. E. Antoniadou-Plytaria, I. N. Kouveliotis-Lysikatos, P. S. Georgilakis, and N. D. Hatziargyriou, "Distributed and decentralized voltage control of smart distribution networks: Models, methods, and future research," IEEE Trans. Smart Grid, vol. 8, no. 6, pp. 2999-3008, Nov 2017.
[18] R. Tonkoski, L. A. C. Lopes, and T. H. M. El-Fouly, "Coordinated active power curtailment of grid connected PV inverters for overvoltage prevention," IEEE Trans. Sustainable Energy, vol. 2, no. 2, pp. 139-147, April 2011.
[19] A. Papavasiliou, "Analysis of distribution locational marginal prices," IEEE Trans. Smart Grid, pp. 1-1, 2017.
[20] S. Mhanna, G. Verbič, and A. C. Chapman, "A component-based dual decomposition method for the OPF problem," Sustainable Energy, Grids and Networks, 2017, in press.
[21] P. Scott and S. Thiébaux, "Distributed multi-period optimal power flow for demand response in microgrids," in Proc. ACM 6th Int. Conf. Future Energy Systems, ser. e-Energy '15. ACM, 2015, pp. 17-26.
[22] A. C. Chapman, G. Verbič, and D. J. Hill, "Algorithmic and strategic aspects to integrating demand-side aggregation and energy management methods," IEEE Trans. Smart Grid, vol. 7, no. 6, pp. 2748-2760, Nov 2016.
[23] S. Mhanna, A. C. Chapman, and G. Verbič, "A fast distributed algorithm for large-scale demand response aggregation," IEEE Trans. Smart Grid, vol. 7, no. 4, pp. 2094-2107, July 2016.
[24] S. Mhanna, G. Verbič, and A. C. Chapman, "A faithful distributed mechanism for demand response aggregation," IEEE Trans. Smart Grid, vol. 7, no. 3, pp. 1743-1753, May 2016.
[25] S. Mhanna, A. C. Chapman, and G. Verbič, "A faithful and tractable distributed mechanism for residential electricity pricing," IEEE Trans. Power Syst., pp. 1-1, 2017.
[26] W. F. Tinney and C. E. Hart, "Power flow solution by Newton's method," IEEE Trans. Power Apparatus and Systems, vol. PAS-86, no. 11, pp. 1449-1460, Nov 1967.
[27] K. Christakou, J. Y. LeBoudec, M. Paolone, and D. C. Tomozei, "Efficient computation of sensitivity coefficients of node voltages and line currents in unbalanced radial electrical distribution networks," IEEE Trans. Smart Grid, vol. 4, no. 2, pp. 741-750, June 2013.
[28] Y. C. Chen, A. D. Domínguez-García, and P. W. Sauer, "Measurement-based estimation of linear sensitivity distribution factors and applications," IEEE Trans. Power Syst., vol. 29, no. 3, pp. 1372-1382, May 2014.
[29] A. J. Wood, B. F. Wollenberg, and G. B. Sheblé, Power Generation, Operation, and Control, 3rd ed. John Wiley & Sons, 2013.
[30] A. J. Conejo, F. D. Galiana, and I. Kockar, "Z-bus loss allocation," IEEE Trans. Power Syst., vol. 16, no. 1, pp. 105-110, Feb 2001.
[31] F. D. Galiana, A. J. Conejo, and H. A. Gil, "Transmission network cost allocation based on equivalent bilateral exchanges," IEEE Trans. Power Syst., vol. 18, no. 4, pp. 1425-1431, Nov 2003.
[32] D. K. Gode and S. Sunder, "Allocative efficiency of markets with zero-intelligence traders: Market as a partial substitute for individual rationality," Journal of Political Economy, vol. 101, no. 1, pp. 119-137, 1993.
[33] D. Cliff and J. Bruten, "Minimal-intelligence agents for bargaining behaviors in market-based environments," Technical Report HPL-97-91, HP Laboratories Bristol, Aug. 1997.
[34] E. McKenna and M. Thomson, "High-resolution stochastic integrated thermal electrical domestic demand model," Applied Energy, vol. 165, pp. 445-461, 2016.

**Jaysson Guerrero** (S'10) was born in Pasto, Colombia. He received the B.Sc. degree in electronics engineering, the B.Sc. degree in electrical engineering, and the M.Sc. degree in electrical engineering from the Universidad de los Andes, Bogotá, Colombia, in 2013 and 2014, respectively. He is currently pursuing the Ph.D. degree in Electrical Engineering at The University of Sydney. His research interests include the integration of renewable energy into power systems, smart grid technologies and local energy trading.

**Archie C. Chapman** (M'14) received the B.A. degree in math and political science, and the B.Econ. (Hons.) degree from the University of Queensland, Brisbane, QLD, Australia, in 2003 and 2004, respectively, and the Ph.D. degree in computer science from the University of Southampton, Southampton, U.K., in 2009. He is currently a Research Fellow in Smart Grids with the School of Electrical and Information Engineering, Centre for Future Energy Networks, University of Sydney, Sydney, NSW, Australia. His work focuses on the use of distributed energy resources, such as batteries and flexible loads, to provide power network and system services, while making best use of legacy infrastructure. His expertise is in optimization and control of large distributed systems, using methods from game theory and artificial intelligence.

**Gregor Verbič** (S'98-M'03-SM'10) received the B.Sc., M.Sc., and Ph.D. degrees in electrical engineering from the University of Ljubljana, Ljubljana, Slovenia, in 1995, 2000, and 2003, respectively. In 2005, he was a NATO-NSERC Postdoctoral Fellow with the University of Waterloo, Waterloo, ON, Canada. Since 2010, he has been with the School of Electrical and Information Engineering, The University of Sydney, Sydney, NSW, Australia. His expertise is in power system operation, stability and control, and electricity markets. His current research interests include grid and market integration of renewable energies and distributed energy resources, future grid modelling and scenario analysis, wide-area coordination of distributed energy resources, and demand response. He was a recipient of the IEEE Power and Energy Society Prize Paper Award in 2006.
He is an Associate Editor of the IEEE Transactions on Smart Grid.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1809.06976, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/1809.06976" }
2018
[ "JournalArticle" ]
true
2018-09-19T00:00:00
[ { "paperId": "b2b73c2ba3ced9a7130710a5e41aed29fd5444a9", "title": "Energy Collectives: A Community and Fairness Based Approach to Future Electricity Markets" }, { "paperId": "7747a7e83a7b44df463f5e8f81f68ecc0c038571", "title": "Multiclass Energy Management for Peer-to-Peer Energy Trading Driven by Prosumer Preferences" }, { "paperId": "bcb239973c7b50c7bea94cacae8e6cada07f148f", "title": "Analysis of Distribution Locational Marginal Prices" }, { "paperId": "000a0a77c66f978d895500fb5ce14193af4ac535", "title": "A Faithful and Tractable Distributed Mechanism for Residential Electricity Pricing" }, { "paperId": "ab1b2bdf36be57a0af9c9155e2977df3000ea036", "title": "A Fast Technique for Smart Home Management: ADP With Temporal Difference Learning" }, { "paperId": "0e6bfe8881118bf485f924c17b487d166b544525", "title": "Exogenous Approach to Grid Cost Allocation in Peer-to-Peer Electricity Markets" }, { "paperId": "b100619c0ca6f5e75dd9ad13f22aab88269c031a", "title": "A Technical Approach to the Energy Blockchain in Microgrids" }, { "paperId": "0c224a1b007418eefb5fad8cf638b1a508f9a6aa", "title": "A study of energy trading in a low-voltage network: Centralised and distributed approaches" }, { "paperId": "fe9ec12bd36cbf29e1d1733e4f752de1ac7244f6", "title": "Blockchain applications in microgrids an overview of current projects and concepts" }, { "paperId": "431c2a76c038a232513583f3dedaf1b86ad09ca6", "title": "Trading on local energy markets: A comparison of market designs and bidding strategies" }, { "paperId": "81bfea080e833fd0046b1e9b879a19429c1d08bf", "title": "Enabling Localized Peer-to-Peer Electricity Trading Among Plug-in Hybrid Electric Vehicles Using Consortium Blockchains" }, { "paperId": "ab9f52957050c3a9c5678040c787f3f5708991ca", "title": "Methods and strategies for overvoltage prevention in low voltage distribution systems with PV" }, { "paperId": "8a91d8cb15d5d0bf4ae8ba8055a7d0a29c7a24dc", "title": "A Component-Based Dual Decomposition Method for the OPF Problem" }, { "paperId": "818a654629a437932f4a959b986d82ba9d7aded2", "title": "Application of Network-Constrained Transactive Control to Electric Vehicle Charging for Secure Grid Operation" }, { "paperId": "b3427fadb79e813d3fad9f7ec815b2ca7958031e", "title": "Blockchains for decentralized optimization of energy resources in microgrid networks" }, { "paperId": "51151e4c13ef285621eb8449e4d1771f6ac2f253", "title": "Distributed and Decentralized Voltage Control of Smart Distribution Networks: Models, Methods, and Future Research" }, { "paperId": "fd09a90f2e67d3dfd8e2dfd58902b18d415f18e4", "title": "Probabilistic Impact Assessment of Low Carbon Technologies in LV Distribution Systems" }, { "paperId": "5cd9030d65400d106d1fc468a1549864866cd14e", "title": "A Faithful Distributed Mechanism for Demand Response Aggregation" }, { "paperId": "f10f683e5251d84c88b20881a7cd260fe40a3743", "title": "A Fast Distributed Algorithm for Large-Scale Demand Response Aggregation" }, { "paperId": "82b91cff57e4a404e306eb445245243a2a05e58b", "title": "High-resolution stochastic integrated thermal–electrical domestic demand model" }, { "paperId": "0207ca52e2090bd818941e29eadfc2e73d259f0d", "title": "Algorithmic and Strategic Aspects to Integrating Demand-Side Aggregation and Energy Management Methods" }, { "paperId": "e2be1618c5c83f26a207d2206209c556b5ad8a72", "title": "A market mechanism for electric distribution networks" }, { "paperId": "1151bfdf1e4abda3501ff98196e07c54ec6d36f3", "title": "Distributed Multi-Period Optimal Power Flow for Demand Response in 
Microgrids" }, { "paperId": "704cca2a964214ac716458e2ae5aa42bb20340dc", "title": "Measurement-Based Estimation of Linear Sensitivity Distribution Factors and Applications" }, { "paperId": "a7773661b9b524ca3fbb0788cb224742f7c5a0cb", "title": "A Game-Theoretic Approach to Energy Trading in the Smart Grid" }, { "paperId": "cf2ddf35ac0acc96f3a668a882443929da6aae5e", "title": "An energy market for trading electricity in smart grid neighbourhoods" }, { "paperId": "4fde6bba491b5043ac8c2b5d7b65f512bc3aaf7e", "title": "Efficient Computation of Sensitivity Coefficients of Node Voltages and Line Currents in Unbalanced Radial Electrical Distribution Networks" }, { "paperId": "032a6ee093ba4a6ac602f73c1a5cd604824368f0", "title": "Coordinated Active Power Curtailment of Grid Connected PV Inverters for Overvoltage Prevention" }, { "paperId": "13c3a6413aeeb68345eb2c4f2602ad070aeeea0b", "title": "Transmission network cost allocation based on equivalent bilateral exchanges" }, { "paperId": "22db1472aed4a6d82cdd12f5763ad95154a7a607", "title": "Z-Bus Loss Allocation" }, { "paperId": "acfa4bfd158586d9535bd78ea158a5a5824dd577", "title": "Power generation operation and control — 2nd edition" }, { "paperId": "08fa207ef2db3c88a5e5d188f721ffb0e8274518", "title": "Allocative Efficiency of Markets with Zero-Intelligence Traders: Market as a Partial Substitute for Individual Rationality" }, { "paperId": "94e39bb4e76a86303b56e7046eb2eb86cc4a8889", "title": "Power Flow Solution by Newton's Method" }, { "paperId": null, "title": "Distribution market model project" }, { "paperId": null, "title": "wide-area coordination of distributed energy resources" }, { "paperId": "30c2dc632dbb609b940a51e1806d4d5d59985b05", "title": "Minimal-Intelligence Agents for Bargaining Behaviors in Market-Based Environments" }, { "paperId": "1fa959bc71bdcb96acc5ef950958a04bbb4df84e", "title": "Power Generation, Operation, and Control" }, { "paperId": null, "title": "He received the B.Sc. degree in electronics engineering, B.Sc. degree in electrical engineering, and the M.Sc. degree in electrical engineering from the Universidad de los Andes" }, { "paperId": null, "title": "Gregor Verbiˇc (S’98-M03-SM’10) received the B.Sc., M.Sc., and Ph.D. degrees in electrical engineering from the University of Ljubljana, Ljubljana," }, { "paperId": null, "title": "The bidding strategies for the market participants are initialized, using their load and generation profiles and tariffs, and the market is opened. of" }, { "paperId": null, "title": "control, and electricity markets" } ]
15,369
en
[ { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01c50af63792def22f51c1bce18dc2a094077233
[]
0.902318
Understanding and mitigating cybersecurity risks of electric vehicle charging
01c50af63792def22f51c1bce18dc2a094077233
[ { "authorId": "2291689829", "name": "Fatima Nisar" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
In recent years, the adoption of Electric Vehicles (EVs) has increased, with global sales expected to reach 23 million. EVs act as prosumers and have transformed the transportation and energy sectors. However, challenges of cybersecurity and scalability persist. This thesis helps to understand and quantify the impact of potential coordinated attacks on EV charging. It also explores a Blockchain-based architecture for secure, transparent, and decentralized EV charging networks. While appreciating Blockchain's potential, this study aims to guide the creation of a reliable and long-lasting EV charging network.
## Understanding and Mitigating Cybersecurity Risks of Electric Vehicle Charging

Fatima Nisar

SUBMITTED IN FULFILMENT OF THE REQUIREMENT FOR THE DEGREE OF MASTER OF PHILOSOPHY

School of Computer Science
Faculty of Science
Queensland University of Technology
2024

# Abstract

In recent years, the penetration of Electric Vehicles (EVs) has increased, owing to the advancement of technology and the need for cleaner transportation. Global EV sales are expected to reach 23 million in the coming years. With the increasing growth of smart grids in conventional power systems, EVs act as a game changer in transportation and energy. The ability of EVs to act as prosumers has revolutionized the entire industry and helped to achieve an energy supply-demand balance. Despite significant advancements, there are limitations in security and scalability, and poorly implemented cybersecurity measures. For an efficient integration of EVs into the smart grid, it is imperative to comprehend the consequences of possible coordinated attacks against EV charging. Assessing and reducing any security threats before the widespread adoption of EVs assures the stability of the energy system. Therefore, this thesis first quantifies the impact of such coordinated attacks, which is an important consideration for the future of EVs as they become an essential part of the grid. To address and rectify these potential EV attacks, there is a need for an effective digital infrastructure to manage EV transactions in a secure, transparent, and decentralized manner. Therefore, the creation of a blockchain-based architecture for safe and transparent decentralized EV charging networks that does not require the participation of third parties is the subject of the second research topic. While acknowledging blockchain's promise to improve security and transparency, this research also recognizes some of its limitations. The key findings should provide guidance for the construction of robust EV charging infrastructure and promote sustainable grid integration.

# Keywords

Electric Vehicles, Smart Grid, Charging Stations, Manipulation of Actual Demand in EV (MAD EV), EV Charging, EV Scheduling, Blockchain, Hyperledger Fabric, Hyperledger Caliper.

# Acknowledgments

In my MPhil journey, I faced many ups and downs, and it brought me vast experiences that will be insightful for the rest of my life. I learned a lot during this journey, and I enjoyed even the challenges. I gained a lot of valuable and insightful experience from the brilliant people around me: my supervisory team, colleagues, academic staff, and friends at QUT. I express my deepest gratitude to these amazing people. My special thanks to Professor Raja Jurdak, my principal supervisor, who always advised me with his insightful comments, extraordinary patience, and generous support. He helped me a lot at the start of my candidature, during the hardest time of my life, and I will always owe him for this. For the rest of my career, I will always be grateful for the experience that I gained by studying under his supervision. I would also like to express my gratitude to Professor Mahinda Vilathgamuwa, my associate supervisor, for providing me with this opportunity, and for his continuous support and encouragement throughout my MPhil. I would also like to express my greatest gratitude to Dr Gowri Ramachandran, another associate supervisor, who helped me through every up and down of my research candidature.
He guided me, motivated me, listened to me, and corrected me without any second thought or hesitation. I would also like to thank the Australian Government Research Training Program Scholarship for funding my research degree and paving a path for innovation and research. My sincere thanks to my family for all their kind support and endless love. My parents, who taught me the meaning of life, and who encouraged me in all its stages: thank you both for all your effort and love in bringing me up to become a resilient individual. My lovely husband, thank you for always supporting me and giving me the passion to become a better person. I want to express my gratitude for all the support, guidance, and encouragement you had for me during this journey.

# Table of Contents

**Abstract**
**Keywords**
**Acknowledgments**
**List of Figures**
**List of Tables**
**List of Abbreviations**

**1 Introduction**
1.1 Research Background & Motivation
1.2 Research Aims
1.3 Research Contributions
1.4 Thesis Outline

**2 Literature Review**
2.1 Electric Vehicles
2.2 Types of EV charging technologies
2.3 Types of EV chargers
2.4 Vulnerabilities of EV Charging
2.5 Cyber Attacks on Smart Grid
2.6 Impact of EV charging on Smart Grid
2.7 EV Attack Studies on Smart Grid
2.8 EV Charging Management
2.8.1 Centralized Approaches
2.8.2 Decentralized Approaches
2.9 Security Challenges
2.10 Blockchain-based EV Management Systems
2.11 Survey Findings and Summary of Gaps
2.12 Research Questions
2.13 Novelty of this Work

**3 Manipulation of Actual Demand in Electric Vehicles (MAD EV): A Cyber-Security Perspective**
3.0.1 Statement of Contribution of Co-Authors
3.1 Problem Formulation
3.2 Introduction
3.3 Background
3.3.1 Smart Grids and Electric Vehicles
3.3.2 Electric Vehicles and Charging Stations
3.4 System Model
3.4.1 Smart EV charging
3.4.2 Steady-state
3.4.3 Transient Stability
3.4.4 Voltage Stability
3.4.5 Coordinated and Uncoordinated Charging
3.5 Attack Description
3.5.1 Potential Cyber Attacks
3.5.2 Manipulation of Demand in EVs
3.5.3 Attack Model
3.5.4 Attack Scenario
3.6 Simulation
3.6.1 Charging Attacks on Home Chargers
3.6.2 Charging Attacks on Fast Chargers
3.7 Discussion
3.8 Mitigation Recommendations
3.9 Conclusion

**4 Decentralized Scheduling Framework For EVs**
4.1 Introduction
4.2 Blockchain Framework
4.2.1 Assumptions
4.2.2 System Model
4.2.3 Overview
4.3 Evaluation and Analysis
4.3.1 Experimental Setup
4.3.2 Qualitative Assessment
4.3.3 Quantitative Assessment
4.4 Conclusion

**5 Conclusions**
5.1 Summary of the Research
5.1.1 Demand Manipulation Attacks
5.1.2 Decentralized EV Charging Management
5.2 Future Study

**6 Bibliography**

# List of Figures

3.1 Overview of Power Grid System, showing bi-directional and unidirectional flow
3.2 Multiple Attack Scenarios of MAD EV Attacks
3.3 IEEE 9-Bus System
3.4 Frequency Drop on IEEE 9-Bus System by Home Chargers
3.5 Voltage Drop on IEEE 9-Bus System by Home Chargers
3.6 Current Rise on IEEE 9-Bus System by Home Chargers
3.7 Frequency Rise on IEEE 9-Bus System by Home Chargers
3.8 Voltage Rise on IEEE 9-Bus System by Home Chargers
3.9 Current Drop on IEEE 9-Bus System by Home Chargers
3.10 Frequency Drop on 9-Bus System by Fast Chargers
3.11 Frequency Rise on 9-Bus System by Fast Chargers
4.1 Network Entities
4.2 Peer Nodes and Certificate Authority
4.3 Blockchain Framework
4.4 Blockchain Framework
4.5 Transaction Throughput and Latency of Blockchain

# List of Tables

2.1 Comparison of Centralized and Decentralized Approach
3.1 Comparison of Coordinated and Uncoordinated Charging
3.2 Comparison of Home Chargers and Fast Chargers

# List of Abbreviations

BC Blockchain
CS Charging Station
EV Electric Vehicles
HLF Hyperledger Fabric
SG Smart Grid

# Chapter 1 Introduction

Electric vehicles (EVs) have emerged as a promising technology for the automotive industry. They provide various environmental, financial, and technological benefits: compared to traditional vehicles, EVs have lower fuel costs, cause less air pollution, and offer greater energy efficiency. The potential for improved connectivity has also increased with these advancements. This increased connectivity has created new mobility options and transformed how we engage with transportation systems. Connectivity enables EVs, charging stations and smart grids to exchange and monitor data in real time. However, significant problems, such as traffic congestion and energy management issues, arise from the close connection of EVs with the energy grid and their growing integration into the transportation network. Vulnerability to physical and cyber hazards is one of the main problems. Due to their grid connectivity, EVs are susceptible to power outages and system failures, which might have a cascading effect on both sectors. Additionally, as digital technologies and communication networks become more prevalent, EVs face cybersecurity threats such as hacking and data breaches. In order to guarantee a safe and dependable connected ecosystem for EVs, it is essential to address these cybersecurity challenges. The management of the rising demand for power presents a significant additional problem. Charging infrastructure must be carefully developed and built to fulfill the charging needs of a growing EV fleet without exceeding the grid's capacity, as increased EV adoption can put more strain on the electricity grid. We can protect against cyber threats and ensure the integrity of EV connectivity by putting strong security measures in place; this will allow for easy and secure communication between vehicles and the infrastructure around them. These threats can range from minor inconveniences to the disabling of safety functions and critical operations, and damage to infrastructure.
Overall, while increased connectivity in EVs presents enormous potential, overcoming these significant obstacles is essential to ensuring a resilient and sustainable future for electric mobility. In this chapter, a brief background of this study and the motivations that underpin this research are given. Furthermore, the research aims are presented. The contributions of this research study and the thesis outline conclude the chapter.

### 1.1 Research Background & Motivation

Sales of EVs are predicted to increase by 35% this year, following a record-breaking trend in 2022; the EV sales forecasting scenario is presented in [1]. In the current political climate, the projected demand for EVs in major auto markets significantly impacts energy markets and climate goals [1]. The transition from fossil fuels to renewable electricity to power automobiles requires significant changes in the energy landscape. We must consider the effects of more automobiles switching to electric power and frequently recharging their batteries [2].

Over the past few years, there has been a significant movement towards automation in the power grid's monitoring and control. However, this change exposes the grid to cyberattacks, since it closely links the grid's security to the dependability and security of the underlying smart devices and communication infrastructure [3]. With their growing numbers, EVs are now also an integral part of the power grid. Cybersecurity experts are raising the alarm that EVs will be an emerging target for hackers [4]. If precautionary steps are not taken to properly safeguard EVs and charging infrastructure from cyber assaults, there will be a huge impact on both the energy and transportation sectors. These attacks include manipulation, unauthorized access, malware, and denial of service. To the best of my knowledge, no recorded cyber-attack using EVs against the smart grid has occurred so far. However, EVs have been used as a target attack vector for attacks on other large infrastructures, which are listed in the next chapter. In addition, other components of the power grid infrastructure have been the target of cyberattacks in recent years, including the attack on the Ukrainian power grid in 2015 [5] and the attack on the US power grid in 2019 [6]. These occurrences demonstrate the necessity of implementing strong cybersecurity controls to guard against future attacks on the smart grid and other power grid infrastructure, including those utilizing EVs.

In a recent report on cybersecurity issues in the automotive industry, Deloitte Canada [7] found that 84% of cyberattacks on vehicles were conducted remotely and 50% of the attacks took place in the previous two years, indicating that cybersecurity concerns in the sector are expected to worsen in the coming years. The vehicle control system, as well as any infrastructure related to it, might be impacted by an attack on an EV through a charging station, according to researchers from the University of Georgia [8]. According to Markets and Markets, the automotive cybersecurity market will be worth $5.3 billion by 2026 [9].

The EV ecosystem faces two significant challenges: comprehending the effects of cybersecurity breaches, and developing a framework for mitigating these attacks. First of all, understanding the potential consequences of cybersecurity attacks is essential for creating efficient defenses. A vulnerability known as MAD EV (Manipulation of Actual Demand for EVs) is proposed in this research work. It arises from the manipulation of EV charging demand in the EV charging ecosystem. Cybercriminals can use this vulnerability to influence the energy grid and interfere with grid balances. Therefore, it is necessary to study this vulnerability in detail.
In addition, the construction of a decentralized framework that solves these issues and guarantees the safe functioning of EVs within the changing mobility scene is equally important.

MAD EV attacks involve manipulating the real energy demand of EVs to produce disruptive effects. They utilize a coordinated charging attack strategy: coordinated attempts by malicious actors to simultaneously alter numerous EV charging patterns. The aim is to overload the infrastructure, causing instability in the power grid and creating potential operational disruptions. The impact of these attacks is significant and is presented in Chapter 3 of this thesis. For sustaining grid stability, operational effectiveness, and public safety, it is crucial to comprehend the consequences of MAD EV and comparable risks.

We must also consider the charging stations EVs require, their link to the grid, and the data these systems transport (ranging from personal data to billing accounts), in addition to the protection and cyber security of the physical cars [10]. The risk of potentially harmful outcomes, including broken vehicle control systems, stolen personal information, and infrastructure damage at charging stations, rises as the number of cyberattacks on automobiles grows. The anticipated expansion of the automotive cybersecurity market highlights the significance of creating practical solutions to reduce these risks and guarantee the safety of EVs.

In addition to this, charging an EV adds a considerable amount of load to the grid. The critical supply-demand balance inside the smart grid may be disturbed if EV charging is not properly regulated and monitored. Therefore, it is essential to carefully organize EV charging sessions. First, this planning helps to keep the smart grid stable, particularly during peak usage times, when disruptions to the grid can lead to power outages and system instability. Second, it can reduce costs by charging EVs during off-peak times. Third, it can also increase battery life. To optimally schedule EV charging, there is a need to create a secure network that manages the use of the charging resources in an efficient manner. The demand for security and trust increases simultaneously with the growing number of EVs. The scheduling and reservation of EV charging helps in keeping track of charging patterns. It also enables a secure and equal opportunity to use the charging infrastructure.

However, current practice relies on a centralized management system for resolving the issues of EV scheduling. A central authority controls all the activities during the charging session, including the rate of charging, the planning of charging sessions, the monitoring of the charging process, and the processing of payments for charging. Such a centralized system has many drawbacks and limitations. First, it is quite vulnerable to cyber attacks, as all the data and sensitive information is managed by a central authority. In addition, there can be long delays during peak hours when multiple EVs want to charge simultaneously. Most importantly, such a centralized system relies on intermediary parties to establish the connection between the smart grid and EVs, which creates a single point of failure that can be easily compromised.

Understanding the threats to EV cybersecurity is crucial. It helps to achieve the safety and reliability of the charging infrastructure, which includes EVs, charging stations, power grids, and consumers. It also helps to maintain the supply and demand balance of the grid. The potential impact of cyber-attacks via EVs, including MAD EV, can lead to economic losses; a toy sketch of how a synchronized charging step can move system frequency is given below.
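The following sketch is an illustration of the mechanism only, not the thesis's simulation (which uses an IEEE 9-bus model in Chapter 3): it applies a synchronized charging step to a textbook single-area frequency-response model, and all parameter values are generic assumptions.

```python
# Toy single-area frequency-response model of a coordinated charging
# step. Illustrative sketch with generic textbook parameters; not the
# IEEE 9-bus simulation used in Chapter 3.

f0 = 50.0        # nominal frequency [Hz]
H = 4.0          # aggregate inertia constant [s]
D = 1.0          # load damping [pu power / pu frequency]
R = 0.05         # governor droop [pu], primary control folded in statically
S_base = 1000.0  # system base [MVA]

n_evs = 20_000           # compromised home chargers switched on together
p_ev_mw = 7e-3           # 7 kW per charger, expressed in MW
dP = n_evs * p_ev_mw / S_base   # synchronized load step: 0.14 pu (140 MW)

dt, T, df = 0.01, 10.0, 0.0     # Euler integration of 2H * d(df)/dt
freq = []
for k in range(int(T / dt)):
    step = dP if k * dt >= 1.0 else 0.0      # attack starts at t = 1 s
    df += dt * (-step - (D + 1.0 / R) * df) / (2.0 * H)
    freq.append(f0 * (1.0 + df))

print(f"step = {dP:.2f} pu, frequency settles near {min(freq):.2f} Hz")
# ~49.67 Hz here: a visible dip caused by one synchronized switching event
```

Even this crude model shows that it is the size of the synchronized step, rather than the sophistication of any individual compromise, that drives the frequency excursion.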
In conclusion, it is crucial to comprehend the implications of cybersecurity risks, particularly MAD EV, to ensure the security, dependability, and wide-scale adoption of EVs. It enables stakeholders to resolve weaknesses, uphold grid stability, lessen economic risks, and foster confidence in this game-changing technology. Similarly, in the context of creating a balance between the supply and demand of grid energy, it is essential to plan the charging of EVs in an efficient and controlled way. This helps to avoid overloading the power grid, especially during peak hours. Such EV scheduling is generally managed by a central entity, either a charging station or a third-party operator. The entire EV charging system can be compromised if the central authority is compromised or suffers a technical problem; this would bring the whole EV charging infrastructure down, with disruptions to the smart grid as well. Similarly, with the growing number of EVs, it is difficult for this central entity to accommodate and control all aspects of EV charging management, such as scheduling, authentication, and other features.

Due to the above limitations, there is a need for a decentralized and secure platform for managing EV charging. To achieve the objectives of decentralization, security, and transparency, blockchain technology is essential. This is a distributed ledger technology in which all EV transactions are immutable, tamper-proof, and easily trackable. Such a system helps to increase the security, transparency, and efficiency of the EV charging ecosystem. Therefore, this thesis focuses on multi-dimensional aspects of EV cyber security, including manipulation-of-demand attacks and securing EV management in a decentralized way. This thesis gives a summary of the new EV perspective and lists some of the vulnerabilities that can be exploited by hackers to attack the power grid by hacking into the EV system. Based on the above discussion, this thesis also aims to address this emerging issue of the cyber security of EVs.

### 1.2 Research Aims

The above discussion revealed that there is a need to investigate the role of EV charging attacks on the smart grid. Such attacks may have a number of negative effects on the operation of the electric power system, including increased stress on assets and power disruptions for customers. Therefore, there is a need for an approach that enables early identification of cyber attacks on the EV charging ecosystem. This approach should be able to guide industry partners to work on the vulnerabilities before they are exploited by hackers. The major aim of the research was to develop an in-depth study that enumerates the potential vulnerabilities.

The possible hazards connected with centrally controlled EV charging systems can be identified and understood with the aid of in-depth investigations of EV vulnerabilities. These studies look at a range of attack methods, including infrastructure attacks, remote control fraud, unauthorized access, and privacy invasions.
Analysis of these flaws reveals that the security and dependability of EV charging infrastructure are seriously threatened by depending on a centralized management system, and understanding them underlines the need for robust security measures. Because they require cooperation from a single authority and approval before upgrades can be implemented, centralized systems can have trouble addressing new risks and implementing updates in a timely manner. In light of the risks and weaknesses related to centralized EV charging management identified in the previous discussion, it is clear that an entirely new approach is required to successfully solve these issues. Consequently, the objective is to provide a decentralized framework for EVs that takes into account the changing cybersecurity threats and supports a safe and reliable mobility environment. Our system intends to disperse data management and authentication through a network of interconnected nodes, lowering the vulnerability to cyberattacks and preserving the integrity of EV operations by utilizing the potential of decentralization. Through the use of blockchain technology, our aim is to deliver a decentralized framework that ensures secure authentication and scheduling for the EV charging ecosystem. Our methodology intends to improve the overall cybersecurity posture and enable the smooth integration of EVs into the larger transportation and energy grid by encouraging secure and effective energy management and coordination among stakeholders. The extensive background and motivation for this study, the goals and objectives of the study, and an outline of the thesis are provided in the following sections.

### 1.3 Research Contributions

The following summarizes the major research contributions of this thesis:

1. To present a general overview of the existing vulnerabilities in the EV charging ecosystem, and to quantify the impact of cyberattacks that make use of grid conditions to maximize the impact of the attack while compromising the fewest number of EVs possible.

2. To propose a decentralized approach for the EV charging management infrastructure that incorporates scheduling, including insights into how blockchain technology can help with the security and privacy issues of EV charging management.

### 1.4 Thesis Outline

The remaining parts of the thesis are organised as follows.

- Chapter 2: Literature Review. This chapter introduces the necessary overview of EVs for this thesis. It begins with a brief introduction to the types of EVs, the types of EV chargers, the cybersecurity aspects of EVs, and their vulnerabilities to cyber-attacks. It discusses the risk of cyber-attacks associated with the growing number of EVs. The chapter then explores the existing software and hardware measures of the EV cyber security mechanism. It provides an in-depth analysis of blockchain technology and its potential applications in EV management. The chapter also reviews the existing literature relevant to demand manipulation attacks, scheduling, authentication, and demand forecasting of EVs. The technical gap and the shortcomings of the current approaches are presented to justify the research problems.

- Chapter 3: Manipulation of Actual Demand of EV. An emerging and innovative cyber attack against the smart grid is proposed. It has the capacity to bring down the operations of the power grid by utilizing coordinated charging attacks.
A detailed discussion of the system model, simulation set-up, results, and evaluations is presented.

- Chapter 4: Decentralized Blockchain Framework. The main goal of this chapter is to propose a decentralized approach to the security issues of EV management, including scheduling. In order to investigate this in detail, the smart contracts of the Hyperledger Fabric blockchain are utilized.

- Chapter 5: Conclusion & Future Work. The main focus of this chapter is to study the consequences of the above research problems, along with suggestions for their mitigation. By stressing the practical and scholarly implications, summarising the study findings and contributions, and outlining potential research areas, we bring the thesis to an end and pave the way for future research plans in this area.

# Chapter 2 Literature Review

The literature review provides a detailed summary of the existing research that has addressed the problems of identifying cyber-attacks and their impact on infrastructure. It also highlights the contributions of researchers to enhancing security and transparency with the help of both centralized and decentralized EV charging management systems. The chapter begins by offering an overview of recent research studies that shed light on EV cybersecurity and how it might be applied to power systems. Additionally, since the thesis focuses on cyber attacks, this chapter provides information on research into smart grid cybersecurity. It will also look into the research gaps present in current contributions related to EV charging management.

### 2.1 Electric Vehicles

EVs were originally introduced in 1899; however, the popularity of internal combustion engines halted their adoption. With the transition towards a cleaner and greener environment, EV demand is increasing considerably. The global EV outlook has seen a tremendous increase in EV adoption: in 2022, there were more than 26 million electric vehicles on the road, an increase of 60% from 2021 and more than 5 times the stock in 2018 [1]. The widespread use of EVs has also contributed to the transformation of the power sector. Typically, there are five main types of EVs, as listed below [11].

1. Battery Electric Vehicles (BEVs) are electric vehicles powered exclusively by an onboard battery pack.

2. Plug-In Hybrid Electric Vehicles (PHEVs) are a type of hybrid electric vehicle that can be charged from an external power source, such as the electric grid, in addition to using an internal combustion engine.

3. Hybrid Electric Vehicles (HEVs) are vehicles that combine an internal combustion engine with an electric motor and a battery pack.

4. Fuel Cell Electric Vehicles (FCEVs) are a type of electric vehicle that uses a fuel cell to generate electricity.

5. Extended-Range Electric Vehicles (ER-EVs) are a type of electric vehicle that uses a small internal combustion engine as a range extender.

It is crucial to note that BEVs, PHEVs, and ER-EVs have a direct connection with the power grid for charging purposes. Therefore, it is essential to study their specific designs and controls to identify threats, especially MAD EV and other related cyber-attacks. Only by highlighting these innovative attacks can we guarantee the safe integration of EVs after providing proper security measures.

### 2.2 Types of EV charging technologies

EV charging technologies are examined in this section.
It is essential to discuss them, as they are helpful in understanding vulnerabilities, strengthening security, and creating mitigation strategies. These technologies offer various degrees of accessibility, convenience, and speed. The negative effects of climate change have sped up the transformation of the automotive sector and the move toward an entirely electric future. The time needed to charge EVs is one of the main barriers preventing their widespread deployment. There are a variety of issues with designing a safe charging scheme, which is related to appropriate charging converter architecture; a safe charging protocol must be established within a timeframe of 5 to 10 minutes [12].

The three primary methods of charging are battery exchange, wireless charging, and conductive charging, as described below [13].

1. Battery swapping, sometimes referred to as battery exchange, is a technique for charging EVs that entails renting a battery from a battery swap station (BSS) owner on a monthly basis.

2. Wireless Power Transfer (WPT) technology uses electromagnetic induction to charge electric vehicles, with the primary coil placed in the road and the secondary coil inside the car.

3. Conductive charging requires direct electrical contact between the car and the charging inlet. Based on the power level, there are three charging levels (Level 1, Level 2, and Level 3).

In the context of MAD EV attacks, all the above charging methods have vulnerabilities that can be compromised to manipulate the actual demand of EVs and impact the supply-demand balance of the smart grid. In battery swapping, if the swapped battery is compromised by a malicious actor, it can affect the charging behavior. WPT carries its own risks, as unauthorized access to the infrastructure can be exploited to create demand manipulation. Similarly, a direct connection can be compromised through unsupervised access to the charger in order to create false energy demand.

### 2.3 Types of EV chargers

Depending on their charging speed, portability, and power supply, EV chargers can be divided into a number of different categories. The following types of EV chargers are frequently used:

1. Level 1 Charger: This basic charger charges at a slow rate, usually adding 1 to 5 miles of range per hour, and typically uses a standard AC outlet.

2. Level 2 Charger: This quicker charger adds 10 to 25 miles of range per hour. A 240 V outlet, like the one used for a clothes dryer, is generally required for Level 2 charging.

3. Level 3 Charger: Also known as Direct Current Fast Charging (DCFC), this is the fastest form of charger. In approximately 20 to 30 minutes, Level 3 chargers can deliver 60 to 100 miles of range.

In this thesis, we mainly focus on Level 1 and Level 3 EV chargers for the attack formulation, as these are the most commonly used chargers in most regions of the world. The rough comparison below illustrates how different these levels are in the load they place on the grid.
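The following back-of-the-envelope sketch uses only the range-per-hour figures quoted above; the 200-mile target and the per-level midpoints are illustrative assumptions, not values from the thesis.

```python
# Rough comparison of the charger levels above, using the quoted
# range-per-hour figures. The 200-mile target and the chosen midpoints
# are illustrative assumptions.

target_miles = 200.0
rates_mph = {
    "Level 1 (AC outlet)": 3.0,               # midpoint of 1-5 miles/hour
    "Level 2 (240 V)":     17.5,              # midpoint of 10-25 miles/hour
    "Level 3 (DCFC)":      80.0 / (25 / 60),  # ~80 miles in ~25 minutes
}
for name, rate in rates_mph.items():
    print(f"{name}: {target_miles / rate:5.1f} h to add {target_miles:.0f} miles")
# Level 1 ~67 h, Level 2 ~11 h, Level 3 ~1 h: the two extremes studied in
# this thesis differ by roughly two orders of magnitude in charging rate,
# and hence in the instantaneous load a compromised charger can add.
```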
The next section discusses multiple attacks on the power grid and highlights how EVs can be utilized as threat vectors for breaching the security of the grid.

### 2.4 Vulnerabilities of EV Charging

The growing number of EVs has raised security and privacy problems that need to be addressed. The possibility of cyberattacks, which can seriously harm both the car and its occupants, is one of the key worries. For instance, a hacker could take over the car's braking or acceleration systems and cause an accident. Additionally, if an EV's onboard computer system is compromised, personal data and sensitive information may be at risk. The theft of valuable parts from the car is another possible security problem, as is the manipulation of charging stations to conduct unauthorized activities. Recently, in March 2022, a number of EV charging stations outside of Moscow were compromised, making them inoperable for EV owners [14]. Similarly, the Combined Charging System (CCS) is vulnerable to a novel attack known as Brokenwire, which prevents the car and charger from communicating with each other, resulting in the termination of charging sessions; using electromagnetic interference, the attack can be carried out remotely from a distance [15]. In April 2022, a security weakness in the infrastructure was brought to light when UK EV charging points in a council's parking lots were compromised and used to display an unauthorized website on their screens. Such a deed not only calls into question the efficacy of the security measures put in place for these charging stations, but also highlights the potential dangers connected with attacks aimed at public EV charging infrastructure [16].

Efforts have already been made to address the above concerns; however, more work has to be done to further improve cybersecurity. To stay ahead of the changing nature of cyber threats, it is essential to regularly review and upgrade security procedures. To develop industry-wide standards and best practices for EV cybersecurity, manufacturers, operators of charging networks, and cybersecurity researchers must cooperate.

As the use of EVs creates a substantial quantity of data that could be abused, protecting the privacy of EV users is of utmost importance. The adoption of strong security mechanisms, such as encryption and authentication protocols, as well as the usage of secure communication channels, is required to address these security and privacy challenges. To secure the data and privacy of EV users, it is also critical to implement strong privacy policies and regulations. To strengthen the resilience of EV charging infrastructure against cyber attacks, advanced technologies might be studied, including blockchain, secure communication protocols, machine learning techniques and intrusion detection systems. To sum up, in order to safeguard the integrity and safety of EV charging systems in the context of emerging cyber threats, it is crucial to strengthen security measures, develop industry standards, raise awareness, and invest in research.

### 2.5 Cyber Attacks on Smart Grid

In recent years, cybersecurity has grown to be a major issue in every sector of life, and the electric power grid is dangerously vulnerable. In the recent past, there have been multiple cyber breaches that have halted grid operations and left many sectors without power. In the context of smart grids, two major cyber attacks come to mind: Stuxnet [17] and the Ukraine attack. The first significant instance of state-level attacks on the smart grid can be seen in the 2010 Stuxnet malware strike against Iran's nuclear facilities [18]. To reduce the chance of detection, a worm that was initially introduced on a Windows PC spread to its targets (Siemens PLC S7) and then erased itself from untargeted devices.
Eventually, the malware was able to covertly change the centrifugal pressures, destroying 10% of Iran's centrifuges and significantly delaying the country's nuclear program. The first successful cyberattack against a power grid was the attack on Ukraine in 2015 [17], in which over 200,000 people lost power.

Additionally, compromised high-wattage IoT (Internet of Things) equipment, such as air conditioners and water heaters, has been considered in recent investigations [19]. Although the BlackIoT attack described in [19] does not specifically identify IoT exploits, it is important to note that attacks on IoT devices are almost unavoidable. The flaws in IoT devices were clearly demonstrated by the Mirai botnet [20], in which over 600,000 IoT devices were infiltrated and exploited to execute DDoS attacks. Weak encryption and insecure data transfer, guessable passwords, inadequate privacy protection, and a lack of secure update procedures are all examples of IoT risks. Similarly, EVs have the same potential to disrupt grid operation if their vulnerabilities are compromised: they are now a cyber-physical attack vector and pose a threat of attacks being launched against grids.

### 2.6 Impact of EV charging on Smart Grid

Dynamic charging behaviour becomes more problematic and can have a severe influence on the operation of the power grid as the number of EVs grows. According to the study done in [21], unmanaged EV charging, particularly during high-load times, can result in a loss of load of up to 6.89%. In Portugal, it is predicted that a 10% EV penetration can result in a sizable voltage reduction during peak hours [22]. In [23], a comparison was made between two optimization goals: lowering the peak load by scheduling EV charging for the evening hours, and lowering daytime peaks by using the reverse power flow from vehicles to the grid. Their research showed that it is impossible to reduce peak demand and cut operating expenses at the same time. The authors came to the conclusion that, in order to increase system load without considerably raising operating costs, it is more crucial to manage the EV charging schedule efficiently than it is to discharge the vehicles. Researchers in [24] highlighted another element of how EVs affect power system costs, concentrating on infrastructure investment costs and system losses. This study ran simulations on two residential areas: Area A, which had 6,000 customers and 3,676 cars, and Area B, which had 61,000 consumers and 28,626 cars. Different EV penetration levels, between 35% and 62% of the total number of cars, were considered. In the grid, the EV charging stations were dispersed at random locations where there were already non-EV loads. At all levels of EV penetration, the simulations for the peak and off-peak demand scenarios in each area show higher investment costs and system losses.

The research mentioned above identifies a number of drawbacks and difficulties related to the growing use of EVs and their effects on the electrical grid. Unmanaged EV charging, particularly during periods of high load, may result in a loss of load; according to the analysis, this loss is nearly equal to the house load. This means that if numerous EVs charge at once without effective management, the power system may be overloaded and may experience power outages or decreased supply reliability.
This suggests that the additional demand caused by EV charging may result in voltage reductions, which may affect the functionality of electrical equipment and appliances. Overall, the limitations identified in this research highlight the importance of managing EV charging schedules effectively, taking into account the potential impact on the power grid, voltage fluctuations, optimization difficulties, and the associated costs of infrastructure changes. To achieve the seamless integration of EVs into the current electrical infrastructure, it is important to take these variables into consideration.

### 2.7 EV Attack Studies on Smart Grid

Numerous works in the literature have considered attacks on, or through, the EV ecosystem against users and the power grid. The authors of [25] offered an EV attack formulation that could destabilize the Manhattan power grid using only data that is readily available to the public. Their approach entails modeling the power system as a feedback control system and the EV load as the system's feedback gain, in order to calculate the necessary number of EVs. According to their research, even if Manhattan does not currently have enough EVs to launch such an attack, the increase in EV sales will eventually create a surface large enough to enable it. However, their approach was based solely on the DC power flow model, which neglects other aspects of grid behavior. Similarly, [3] discussed the non-linear nature of EV load and compared it with residential loads; the quantitative comparison highlights that the same amount of EV load can destabilize the system, whereas the residential load has no such effect. Attackers who take over an EV's battery management system, using compromised web services or malware downloaded into the vehicle's systems, can seriously harm the EV itself. In fact, [26] discusses how attackers can harm EV batteries by tampering with the charging current and evading security precautions.

The above-mentioned research studies highlight certain limitations that need to be addressed in order to avoid cyber attacks and security breaches. These limitations include inadequate safety measures that could result in privacy violations and unauthorized access to sensitive and personal information contained in the EV ecosystem. Similarly, cyberattacks that target EVs have the ability to interfere with crucial infrastructure, including power grid distribution networks. Furthermore, weakened smart grids
However, these systems are vulnerable to cyber attacks, as they hold information about the charging points and the availability of charging sessions. First of all, these management systems are susceptible to a single point of failure: they rely heavily on a central database for managing the charging ecosystem. These systems may enable hackers to disrupt charging operations and steal energy and sensitive user information. An attacker can exploit these vulnerabilities to mount cyber attacks that destabilize grid operations. These charging management systems typically include EV scheduling, EV authentication, and the calculation of demand for a balanced infrastructure. Similarly, drivers' private information, including payment card details, as well as other sensitive information such as server credentials, can be accessible to hackers. Under these centralized management systems, if a charger accepts unidentified driver IDs, an attacker is able to charge their vehicle without having to pay for it. EV scheduling refers to the process of managing the charging and discharging of EVs in a coordinated manner, to match the EV demand with the available grid resources. The benefits of the scheduling mechanism include less congestion at charging stations, with a significant and long-term impact on the power grid. Additionally, EV owners can select the CS based on their preferences and comfort. To authenticate and identify EVs, the ISO 15118 protocol involves an authorized intermediary mobility operator, which maintains the private information (EV identification, location, state of charge, charging settings, availability, and payment information). Furthermore, it tracks mobile EVs to direct them to an appropriate CS for charging. This approach is helpful but can also create major issues if the mobility operator purposefully or inadvertently releases the EV's private information [27,28]. To reduce the overall cost of charging for EVs and CSs, various approaches for scheduling EVs at CSs have been proposed in the literature. The EV scheduling problem was formulated by the authors in [29] with two goals in mind: first, to reduce the number of cars required to complete all the scheduled trips, and second, to reduce the overall distance traveled. Work in [30] proposed a scheduling mechanism for EVs that maximizes the number of EVs being charged at a time while minimizing overall charging cost. The authors in [31] investigated EV charging scheduling by minimizing the waiting time at the CSs. Using simply the least journey time, the authors in [32] presented a recharging strategy for electric vehicles (EVs) to locate the nearest charging station. We conducted a detailed analysis of both centralized and decentralized frameworks for EVs, followed by a discussion of their shortcomings.

**2.8.1** **Centralized Approaches**

Multiple proposals exist in the literature for EV charging management, covering scheduling and authentication. The authors in [27,28] suggested scheduling EVs by taking into account both the EVs' and the aggregator's revenue. A scheduling system for EVs was proposed in [29] that maximizes the number of EVs being charged at once while lowering total charging costs. To incentivize both CSs and EVs, research in [33] proposed an online scheduling and pricing approach for EV charging on an auction-based platform.
Similarly, the EV scheduling problem was formulated by the authors in [30] with two goals in mind: first, to reduce the number of cars required to complete all the scheduled trips, and second, to reduce the overall distance traveled. The authors of [34] proposed a scheduling technique that enables the coordination of CSs to reduce waiting times at CSs. However, in all of this research the central aggregator and CSs make the decisions. Additionally, the choice for EVs is made by sharing information with a central aggregator and CSs, which may expose the EVs' private data. As a result, a decentralized system is required for the effective scheduling of charging slots by EVs, whereby each EV can choose a CS in a distributed way based on its needs without disclosing any of its personal information to central aggregators and CSs.

**2.8.2** **Decentralized Approaches**

Several works have proposed utilizing blockchain technology for managing the scheduling and authentication of EV charging in a decentralized manner. To efficiently assign CSs to EVs through smart contract infrastructure, a blockchain-based architecture is proposed in [35]. However, the EVs use the blockchain as a trustworthy third party to communicate with the central aggregator or CSs, and these blockchain systems also incur large overhead costs for blockchain storage and transaction fees. Similarly, the authors of [36] proposed a blockchain-based energy trading system and anonymous payment system for electric vehicles, but their work considers a consortium blockchain under the assumption of reliable third parties. In addition, work in [37] proposed a decentralized EV charging framework to address the issues of CS selection, scheduling, authentication and charging payment. They addressed these problems simultaneously with the use of Ethereum; however, their major assumption is that the physically installed Road Side Units (RSUs) are honest, whereas multiple studies confirm that RSUs can be compromised. Since the physical security of RSUs is only achieved via CCTV, they can be tampered with and are susceptible to physical damage, and attackers can take advantage of these vulnerabilities [38]. Table 2.1 below highlights the importance of utilizing decentralized frameworks for managing EV scheduling.

**Table 2.1: Comparison of Centralized and Decentralized Approaches**

| Attributes | Blockchain Approach | Centralized Approach |
|---|---|---|
| Single Point of Failure | No | Yes |
| Authority | Decentralized | Centralized |
| Architecture | Peer-to-Peer | Client-Server |
| Energy Profile Anonymity | Yes | No |
| User Registration | Permissioned | Private |
| Charge Scheduling | Private | Public |

However, in all of these works, whether centralized or decentralized, the fundamental decision maker incorporates aggregators and CSs. Additionally, the decision for EVs is made by sharing information with a central aggregator and CSs, which may result in the disclosure of private information. It should be noted that this thesis addresses the charging scheduling problem for an individual EV in a decentralized manner.
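For concreteness, the sketch below shows a deliberately simple, centralized greedy baseline for the scheduling problem discussed in this section: EVs are assigned, in arrival order, to whichever station frees up earliest. It is an illustration of the problem, not a reproduction of any cited algorithm, and it exhibits exactly the property criticized above: a central scheduler sees every EV's arrival time and demand.

```python
import heapq

def schedule_evs(evs, stations):
    """
    Greedy assignment: each EV (in arrival order) is given the station
    whose next free time is earliest; returns (ev, station, start_time).
    evs: list of (ev_id, arrival_time, charge_duration) in hours.
    stations: list of station ids.
    """
    # Priority queue of (next_free_time, station_id)
    free_at = [(0.0, cs) for cs in stations]
    heapq.heapify(free_at)
    plan = []
    for ev_id, arrival, duration in sorted(evs, key=lambda e: e[1]):
        t_free, cs = heapq.heappop(free_at)
        start = max(arrival, t_free)          # wait if the slot is still busy
        heapq.heappush(free_at, (start + duration, cs))
        plan.append((ev_id, cs, start))
    return plan

evs = [("EV1", 0.0, 1.5), ("EV2", 0.2, 0.5), ("EV3", 0.3, 1.0), ("EV4", 1.0, 0.8)]
for ev, cs, start in schedule_evs(evs, ["CS-A", "CS-B"]):
    print(f"{ev} -> {cs} at t={start:.1f} h")
```

Note that the scheduler holds every vehicle's itinerary in one place: precisely the single point of trust that the decentralized designs in this thesis aim to remove.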
In a centralized charging setup, a single point of failure is created when scheduling and authentication are assigned to a central authority. Similarly, there is a lack of transparency, as participants may not have access to the decision-making procedures and standards for scheduling and authentication. As noted in the research discussed above, both decentralized and centralized approaches follow the same trend of relying on a trusted third party for EV charging management. The trusted third party can store sensitive user data, which can be compromised through unauthorized access and data breaches.

### 2.9 Security Challenges

In addition to the above trust issues, recent studies have identified key cyber security issues and provided a list of possible cyber attacks on the EV ecosystem. The highlighted vulnerabilities not only endanger the EV ecosystem but also potentially put the wider power and transportation infrastructure at high risk. Below are the summarized attacks that have the potential to exploit the EV charging ecosystem.

1. Denial-of-Service: Such an attack makes certain charging stations unavailable and has the potential to increase the grid load by creating false demand requirements.

2. Impersonation Attacks: Such an attack involves a threat actor impersonating a trustworthy organization to obtain critical information for digital authentication.

3. Sybil Attacks: Such an attack creates multiple fake identities of an EV owner to manipulate the EV charging ecosystem.

4. Man-in-the-Middle: Such an attack captures the data exchange between the EV and the charging station and can influence it by adding harmful messages or changing the original ones.

The above-mentioned cyber-attacks highlight that the existing centralized EV charging infrastructure has various loopholes. A single entity exists to control and monitor data, and it is susceptible to a single point of failure. Similarly, the identification of these attack vectors and vulnerabilities gives rise to trust and security issues. To acquire unauthorized access or control over an EV's operations via a centrally located control system, an attacker only has to target one location. This simplifies the task for malicious actors, who can focus their efforts on finding weaknesses in that key location, with potentially serious repercussions such as granting unauthorized access to the vehicle's systems or interfering with components that are crucial for safety. Redundancy and failover methods are frequently absent from centralized systems, so the entire system may become vulnerable or unusable in the event of a breakdown or attack. Decentralized architectures, on the other hand, allow crucial functions to be distributed over numerous nodes or components, providing redundancy and enhancing system resilience.

In view of the above discussion, a decentralized EV framework is crucial. By eliminating the single point of failure, it will provide improved security, privacy and better resilience against cyber-attacks. Such a framework can greatly enhance the security of the EV charging ecosystem by distributing control across various entities, thereby reducing cybersecurity risks and failures. With the use of blockchain technology, participants can conduct transactions without the need for a trusted middleman or centralized authority.
Because blockchain is decentralized, all transactions are verified and recorded using a consensus mechanism chosen by the network's users. Blockchain's distributed ledger is transparent and impervious to manipulation: a transaction becomes practically unchangeable once it is added to the blockchain, making it challenging for any party to change or modify the transaction record without network consent.

### 2.10 Blockchain-based EV Management Systems

The potential issues mentioned above motivate the use of a decentralized management system for EV charging. Therefore, the second research objective is to devise a decentralized trusted framework for EV owners that is secure, transparent, and offers trust among participants. It requires no database or central authority to control the charging operations, and it should be able to hide irrelevant information from the participants that could lead to security and privacy breaches.

It is essential, however, to differentiate between CS selection and EV scheduling. Charging scheduling focuses on maximizing the charging capacity of the infrastructure by taking into account energy availability and user preferences. CS selection, on the other hand, involves choosing the most suitable CS based on various parameters such as location, CS slot availability, and EV charger compatibility. Although the two concepts differ in aim, both are crucial for ensuring efficient charging of EVs; our focus here is on EV scheduling. Several studies have proposed the use of smart contracts as a means of managing EV charging. Smart contracts are self-executing contracts with the terms of the agreement written directly into code, executed automatically when the specified conditions are met. They can be used to automate the EV charging process by automatically initiating and stopping charging based on predefined rules. One of the key advantages of using smart contracts for EV charging is that they can be executed on a blockchain, which provides a tamper-proof record of all transactions and can be used to ensure that all transactions are executed in a transparent and secure manner (an illustrative sketch of this logic follows at the end of this section). A scheduling system for EVs was proposed in [29] that increases the number of EVs being charged at once while lowering total charging costs. Similarly, the EV scheduling problem was formulated by the authors in [30] with two goals in mind: first, to reduce the number of cars required to complete all the scheduled trips, and second, to reduce the overall distance traveled. A scheduling system for police EVs was presented by the authors in [39] with the purpose of reducing the total cost, which is made up of the cost of the trip and the cost of delay. The authors in [40] suggested a scheduling approach to maximize CS slot utilization by reducing EV waiting time. By reducing the time spent waiting at the CSs, the authors of [41] explored scheduling for EV charging. However, in all the above-mentioned research work, the choice for EVs is made by sharing information with a central aggregator and CSs, which may expose the EVs' private data. Similarly, the authors in [31] proposed a blockchain-based EV charging system to improve security, efficiency, and key management. A blockchain-based framework is proposed to optimally allocate CSs to EVs through smart contract infrastructure [27].
The authors in [29] proposed a blockchain-based energy trading mechanism for EVs along with an anonymous payment mechanism; however, their work considers a consortium blockchain with an assumption of trusted third parties. Work in [33] proposed a blockchain-based energy trading mechanism for EVs and CSs; however, in their framework the EVs share private information with a central aggregator, which could lead to serious privacy concerns. The authors of [38] presented a blockchain-based energy trading system and anonymous payment system for electric vehicles, although their work considers a consortium blockchain with the supposition of reliable third parties. The authors of [16] presented a blockchain-based framework for data exchange with anonymous payments, although their work concentrates on private payment mechanisms and ignores the choice of CSs by EVs. To the best of our knowledge, only the works in [30] and [37] are relevant to the framework we provide. The work in [30] solves the CS selection problem in a decentralized way, but it does not include scheduling. Similarly, the work in [37] makes use of blockchain for CS selection and user authentication while allowing the EVs to communicate with road-side units (RSUs), which is another example of involving a trusted third party. Their biggest assumption is that the administration has strategically placed the RSUs throughout the city and that the RSUs conduct themselves honestly and normally; because the government sets the security requirements for the deployment of RSUs, they view RSUs as honest entities. The fact that RSUs abide by legally mandated security standards does not automatically render them impervious to deceit or bad intent. Although adhering to security standards is a crucial step in risk mitigation, it does not ensure that security vulnerabilities or breaches won't occur accidentally or on purpose. The majority of government security requirements for RSUs concentrate on particular security measures, like data protection, privacy, or connectivity standards. Although these criteria address significant issues, they might not cover all attack surfaces or vulnerabilities, and RSUs may still be exposed to newly emerging dangers that are not covered by the existing regulations. The research in [38] notes that, although network operators guarantee a high level of security in RSUs, physical damage to the hardware represents the biggest threat, since RSUs are primarily static; other dangers include malicious RSUs, DoS attacks, and unauthorized access by attackers to the software platform. EV scheduling data can therefore be tampered with by dishonest RSUs, which could result in unfair prioritization, incorrect charge allocations, or the interruption of planned charging sessions, and the system's overall effectiveness and dependability may suffer as a result. Blockchain technology has the potential to improve security and transparency in decentralized charging systems; however, it has a number of limitations as well. Wide-scale implementation will involve regulatory, technological and user-acceptance challenges, and scalability and environmental issues may limit its effectiveness.
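To make the smart-contract idea referenced in this section concrete, the following Python sketch emulates its core logic: a booking rule enforced in code, plus a hash-chained, tamper-evident ledger. It is a toy stand-in; a real deployment would implement the same rules as a contract on a blockchain such as Ethereum, with network consensus replacing the single in-memory chain. All class and field names here are hypothetical.

```python
import hashlib
import json
import time

class ChargeBookingContract:
    """Toy emulation of smart-contract booking logic with a tamper-evident log."""

    def __init__(self):
        self.chain = [{"prev": "0" * 64, "payload": "genesis"}]
        self.slots = {}  # (station, slot) -> ev pseudonym

    def _hash(self, block):
        return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

    def book(self, station, slot, ev_pseudonym):
        """Rule enforced 'in code': a slot can only be booked once."""
        if (station, slot) in self.slots:
            return False  # contract rejects double booking
        self.slots[(station, slot)] = ev_pseudonym
        block = {"prev": self._hash(self.chain[-1]),
                 "payload": {"station": station, "slot": slot,
                             "ev": ev_pseudonym, "ts": time.time()}}
        self.chain.append(block)
        return True

    def verify(self):
        """Recompute the hash chain; returns False if any block was altered."""
        return all(self.chain[i]["prev"] == self._hash(self.chain[i - 1])
                   for i in range(1, len(self.chain)))

c = ChargeBookingContract()
print(c.book("CS-A", "09:00", "ev-7f3a"))   # True
print(c.book("CS-A", "09:00", "ev-99d1"))   # False: slot already taken
print(c.verify())                            # True: ledger intact
```

The hash chain illustrates the tamper-evidence property claimed for blockchains: altering any recorded booking invalidates every later block's `prev` pointer, which `verify()` detects.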
### 2.11 Survey Findings and Summary of Gaps

The thorough analysis in this literature review has provided important facts about the state of electric vehicle (EV) charging systems, with an emphasis on security issues and the need for creative solutions. This section gives a quick rundown of the survey results and highlights the most important gaps that were found. These gaps serve as the foundation for the research questions that this thesis attempts to answer. A set of challenges arises when EVs are integrated into the grid, including load management, grid capacity and stability, peak demand calculations, and, above all, security. Past studies have pinpointed possible weak points in the infrastructure for EV charging, highlighting the possibility of coordinated attacks. Despite the critical importance of resolving these vulnerabilities, the impact of coordinated EV charging attacks has not been adequately quantified or examined. First, we need to better understand the impact of these coordinated attacks. Secondly, there is a need for an effective digital infrastructure to manage EV transactions in a secure, transparent, and decentralized manner. The literature identifies that blockchain has the potential to improve security and transparency across EV charging. Nevertheless, the existing literature lacks a thorough examination of a decentralized blockchain design framework, particularly one with no involvement from a third party. The gaps found in the literature point to the necessity for creative solutions in the construction of a blockchain-based architecture for decentralized EV charging networks that guarantees security and transparency. Therefore, this thesis is focused on addressing these gaps by defining targeted research questions, discussed in the next section.

### 2.12 Research Questions

This thesis addresses the following research questions.

1. Can we quantify and analyze the impact of coordinated EV charging attacks, which is an important consideration for the future of EVs as they become an essential part of the grid?

For an efficient integration of EVs into the electricity grid, it is imperative to comprehend the consequences of coordinated attacks against EV charging. Assessing and reducing any security threats before the widespread adoption of EVs assures the stability of the energy system. By employing current cybersecurity and grid analytics approaches, it is possible to create models that replicate and measure the effects of coordinated attacks on EV charging. This research seeks to give a complete and practical analysis by integrating simulation methodologies with real-world data.

2. Can we design a blockchain-based architecture to provide transparency and security within a decentralized EV charging ecosystem, without the involvement of a third party?

Decentralized systems are becoming more prevalent in EV charging networks to provide secure and transparent transactions, eliminating the need for centralized authorities. This research question addresses the significance of creating a blockchain-based architecture that guarantees security and transparency. With the progress made in blockchain research and development, creative design ideas for decentralized EV charging can be investigated.
Considering the ability of smart contracts and cryptography approaches to automate and protect transactions within the blockchain framework, the lack of third-party involvement is a difficult but achievable feature.

### 2.13 Novelty of this Work

In this thesis, we address concerns about the security and privacy of EVs and their related infrastructure. Our contributions involve identifying a new form of cyber attack, together with a quantitative assessment of its potential impact on the grid. Similarly, for managing the EV charging ecosystem, our proposed blockchain framework does not rely on any trusted third parties; current state-of-the-art solutions focus either on the allocation of EV charging sessions or on authorizing users for a trusted charging ecosystem. In summary, our research domain is multi-fold: it not only highlights cybersecurity issues of the EV industry but also identifies the threat vectors that can be exploited to breach the security of power systems, and it provides a detailed insight into the vulnerabilities that exist in the current centralized EV ecosystem.

# Chapter 3 Manipulation of Actual Demand in Electric Vehicles (MAD EV): A Cyber-Security Perspective

This chapter is derived from the publication Manipulation of Actual Demand in Electric Vehicles (MAD EV): A Cyber-Security Perspective.

**3.0.1** **Statement of Contribution of Co-Authors**

The authors listed below have certified that:

1. They meet the criteria for authorship and that they have participated in the conception, execution, or interpretation, of at least that part of the publication in their field of expertise;

2. They take public responsibility for their part of the publication, except for the responsible author who accepts overall responsibility for the publication;

3. There are no other authors of the publication according to these criteria;

4. Potential conflicts of interest have been disclosed to (a) granting bodies, (b) the editor or publisher of journals or other publications, and (c) the head of the responsible academic unit, and

5. They agree to the use of the publication in the student's thesis and its publication on the QUT's ePrints site consistent with any limitations set by publisher requirements.

| Contributor | Statement of contribution |
|---|---|
| Fatima Nisar | manuscript, conducted experiments and data analysis |
| Prof. Raja Jurdak | abstract and manuscript, aided experimental design |
| Prof. Mahinda Vilathgamuva | conducted data analysis and experiment |
| Gowri Ramachandran | aided experimental design |

In the case of this chapter, the publication title and date of publication or status are: Manipulation of Actual Demand in Electric Vehicles (MAD EV): A New Cyber Security Approach, published on 10 March 2023.

This chapter provides an overview of a new cyber-security perspective related to demand manipulation. It demonstrates the impact of the cyber attack by quantifying it with the help of a simulation setup and results.

### 3.1 Problem Formulation

In recent years, the penetration of Electric Vehicles (EVs) has increased owing to advancements in battery technology and the need for cleaner transportation. This trend is transforming EVs into an integral part of our power grid ecosystem, where they can act both as a provider and consumer of energy. However, the cybersecurity risks of large fleets of EVs within our power grids remain under-explored.
This chapter defines and analyses a specific cybersecurity risk for EVs, which we refer to as Manipulation of Actual Demand. This attack involves coordinated charging of a large number of EVs across multiple charging stations to disrupt the power grid. We provide a detailed analysis and quantification of the impact of this unique cyber-attack on the power grid in terms of demand-side load. The findings of our analysis guide future considerations on cybersecurity risks of coordinated EV charging and their mitigation.

### 3.2 Introduction

Electric vehicles (EVs) are a game-changer in the transportation and energy sector. Global EV sales are expected to rise significantly in the coming years. Many countries have shared realistic targets for EV adoption by phasing out petrol cars and offering price incentives with tax reductions. Norway has set an example with widespread 72% EV adoption and charging infrastructure [42]. The ability of EVs to act as prosumers is revolutionizing the entire industry and can contribute to the energy supply-demand balance. EVs can also be viewed as cyber-physical systems (CPS), since they are composed of both physical and cyber components and face the challenges of reliability and energy efficiency by relying on batteries for power supply [43]. Thus, electricity demand is expected to steadily increase, and future power grids need to be prepared well in time for this transition. The authors in [44] presented a vision of the Internet of Mobile Energy (IoME) that highlights EVs' parallel flow of energy and information for grid stability. This bidirectional energy transportation of EVs is the starting point of our research. When the load in a specific location increases, the distribution system's power quality may deteriorate. Therefore, we study the major cyber vulnerabilities and analyse the threat information in this chapter. Today, EVs include a large number of sensors and have increased connectivity with smartphones and power grid systems. This connectivity, used among other things to provide owners with the latest updates, also carries significant cyber risk. EVs communicate safety information to nearby vehicles and to the surrounding infrastructure; however, this ecosystem also comprises multiple IoT devices, including the EVs themselves, and is at high risk of cyber-attacks. Extensive research has been conducted on EV adoption, different battery technologies, charging techniques, life cycle cost, emissions, regulations, standards of energy efficiency, power system integration, and cyber security challenges. Multiple limitations remain for EVs, including battery state of health (SOH) and poorly implemented cybersecurity measures. Cybersecurity attacks tend to exploit vulnerabilities in communications or control systems to disrupt system operations or execute malicious actions [20]. Charging and discharging of EVs are important considerations for ensuring cybersecurity: the sensors and communication infrastructure of EVs are quite vulnerable to cyber attacks, and these vulnerabilities can limit the uptake of EVs because of security concerns.
The huge growth in the number of devices connected to the internet, such as IoT devices and EVs, indicates increased chances of exploitation, and cyber-attacks have been on the rise alongside these technological advances. As per Kaspersky Labs, there were 1.5 billion attacks against IoT devices during the first half of 2021. Since Electric Vehicles (EVs) and Smart Grids (SGs) are also connected to the internet, they provide access points through which the security of large infrastructure can be compromised. For instance, in 2015, the Ukraine power grid attack left 225,000 households deprived of electricity [45]. Similarly, BlackIoT [19] describes a botnet of high-wattage IoT devices that disrupted the power grid. Also, in 2016, the Mirai Botnet utilized some 600,000 IoT devices to launch Distributed Denial of Service attacks [20]. To address the potential vulnerabilities and risks in the EV ecosystem, we have quantified the impact of compromised coordinated charging. This chapter highlights the potential risks of load modulation for power system stability. We consider the threats of EVs connected across physical grids, and then discuss the percentage of compromised EVs required to create this imbalance in grid operation. Moreover, this study highlights a need for real-time detection of these issues; it is therefore essential to study their impact on the grid. In summary, the contributions of this chapter are:

1. Analysis of the vulnerabilities, to highlight the potential risks present in the EV charging ecosystem.

2. Quantifying the impact, to show the percentage of compromised EVs needed to disrupt grid operation.

3. Discussion of mitigation strategies, to propose possible solutions to overcome these attacks.

The rest of the chapter is structured as follows. Section 3.3 provides background on EV cyber-attacks. Section 3.4 discusses the system model. Section 3.5 presents the MAD EV attack description, the attack model, and the attack scenarios. Section 3.6 presents the simulation studies and their results. Section 3.7 discusses broader implications, Section 3.8 provides mitigation recommendations, and Section 3.9 concludes the chapter.

### 3.3 BACKGROUND

This section provides an overview of cybersecurity vulnerabilities among Smart Grids (SG), Electric Vehicles (EV), and Electric Vehicle Charging Stations (EVCS). The discussion highlights the gaps in the current literature in addressing the coordinated EV charging attack, achieved via Manipulation of Actual Demand, which is the focus of this chapter.

**Figure 3.1: Overview of Power Grid System, showing bi-directional and unidirectional flow**

**3.3.1** **A. Smart Grids and Electric Vehicles**

Smart Grids are quite vulnerable to cyberattacks owing to the distributed nature of their components. The work in [25] discusses a potential cyber threat that could be exploited by an attacker using publicly available data. In [47], EV attacks on the power grid are analyzed, highlighting the degraded power quality caused by cyberattacks on EV charging control systems. Similarly, the work in [48] discusses an EV cyberattack botnet used to create power outages. The authors in [49] highlight the major parameters of the botnet that could be utilized to cause frequency instability. Recently, the work in [3] offered a detailed insight into potential weaknesses of the EV charging load, compared with residential load, by launching attacks.
It also provides brief suggestions for the detection of such attacks. EVCS require extensive infrastructure to meet market demand across various locations. For small charging stations, the impact of individual or grouped chargers on the distribution system can be overlooked; however, multiple EVs charging at the same time can have a significant grid impact. The existing literature did not consider this coordinated nature of the attack. It is important to consider this integration of Electric Vehicles and Smart Grids, which highlights the need to understand the impact of a coordinated charging attack as a step towards fixing the vulnerability and protecting critical infrastructure.

**3.3.2** **B. Electric Vehicles and Charging Stations**

Extensive research has contributed towards the study of attacks on EVs and charging stations. Researchers have considered multiple cyber threats; however, the major focus revolves around Denial-of-Service attacks, where several charging stations are compromised to make them unavailable for users. For instance, [50] highlights attacks that leverage individual weaknesses of EVs and discusses the complex challenges in addressing them. Major security issues were observed in the EV chargers developed by Schneider Electric, which allow attackers to obtain authentication credentials and can disable the system by introducing malware [51]. Another example is the popular EV charging application CirCarLife, in which login credentials are stored in plain text and could easily be hacked and used to bypass authentication [52]. A report by Sandia National Laboratory [53] has gained attention for highlighting cybersecurity risks to EV supply equipment and provides actionable recommendations. The primary goal of that report is to predict potential infrastructure vulnerabilities and to provide recommendations for improving energy security; its scope, however, is limited to power systems, and it only highlights the gaps without quantifying the impact of attacks. As a result of the cyberattack on the EV charging control system studied by Rohde [47], distortions are observed due to higher currents and lower power factors. Similarly, EV charging data altering, spoofing, and stealing have been studied in [54], which highlights a major security weakness across EV charging station servers. All the above contributions revolve around charging individual EVs in isolation from what other EVs are doing. This chapter identifies and explores a novel cyber-attack in which the demand load of multiple EVs is synchronously manipulated by an adversary, with the potential to bring down the power system. This research work is helpful in securing the electrical, transportation, and vehicular infrastructure, which are becoming increasingly integrated.

### 3.4 SYSTEM MODEL

This section presents the system model. A simplified diagram showing the components of a power grid is given in Fig. 3.1, including generation, transmission, and distribution systems. The generation block includes power plants for generating power. At the plants, transformers boost the voltage to minimize losses within the lines as electricity makes its way to the desired consumer area. The transmission network then consists of high-voltage transmission lines, substations, and transformers to transmit power over longer distances.
Substations convert this voltage to a safer level with step-down transformers and have the ability to regulate the quality of electricity, while breakers help to isolate potential faults. The distribution system provides power across multiple sectors (i.e., residential, commercial, industrial) via feeder lines. From the feeders, smaller transformers step down the voltage to its final levels. The power grid uses alternating current (AC); for instance, the US power grid operates at a 60 Hz frequency, while the European grid operates at 50 Hz. The grid frequency is always tightly maintained within a narrow tolerance under all operating conditions. The frequency equilibrium of the power grid depends on supply-demand matching, and any disequilibrium, whether over-production or under-production, will lead to a disturbance in the grid system. Greater penetration of EVs only increases the risk of disturbing this supply-demand balance, which is the main focus of our research. We discuss the vulnerabilities that exist across both grid systems and EVs. In the rest of the section, we first discuss smart EV charging. Essential concepts of steady and transient states are then introduced. The section concludes with discussions of voltage instability and the MAD EV attack.

**3.4.1** **Smart EV charging**

One of the main concerns about electric vehicles is their charging times. While fuelling a combustion engine car takes only a few minutes, charging an electric vehicle battery takes much longer. When the charging of a battery starts, it typically charges at a constant current equal to or less than the nominal current of the battery. During this time, the voltage of the battery increases as it gets charged; fast charging is done in this constant-current region. When charging a battery, there is a maximum charging current and voltage for safe operation. When the voltage reaches a maximum set point, usually at a state-of-charge of 80 per cent, the charger switches to a constant-voltage region, where the voltage is maintained but the current is gradually reduced to zero. Charging in this region typically takes a long time due to the reduction in charging current, and charging stops when the current falls to zero. A sketch of this constant-current/constant-voltage (CC-CV) profile is given at the end of this subsection.

Let us now look at smart EV charging and how it differs from the conventional charging procedure explained above. Smart charging helps to overcome the disadvantage of high peaks over transformers. For instance, if many cars are connected to a charging station, smart charging helps to plan and spread the charging power over the day; the cars can be charged at the same time with lower power. The smart charging network can monitor the electrical grid and charge more cars when there is less demand on the grid. The most important use of smart charging is to enable renewable energy sources (RES) for charging EVs, thus reducing the charging cost and allowing sustainable charging of EVs from RES. Smart charging is implemented, for both AC and DC charging, where control and communication are established between the EV and EVCS using protocols like IEC 61851 and ISO 15118. In this way, the charging current can be continuously controlled and monitored in both time and magnitude. In the future, it is expected that charging stations will be smart enough to talk to the grid in order to find the best time and available speed of charging.
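The following sketch simulates the CC-CV profile described above using a crude linear open-circuit-voltage model. All parameter values (capacity, currents, internal resistance) are illustrative assumptions rather than data for any particular EV battery; with these numbers the switch to the constant-voltage region happens near 80% state of charge, matching the typical behaviour noted above.

```python
# Minimal CC-CV charging simulation for a single battery pack.
# Parameter values are illustrative, not taken from any specific EV.

def simulate_cc_cv(capacity_ah=60.0, i_cc=30.0, v_max=4.2, v_start=3.5,
                   r_internal=0.005, i_cutoff=1.0, dt_h=0.01):
    """Return (time_h, soc) samples for a constant-current / constant-voltage charge."""
    soc, t, samples = 0.2, 0.0, []
    while True:
        # Open-circuit voltage rises roughly linearly with SoC (crude model).
        v_oc = v_start + (v_max - v_start) * soc
        if v_oc + i_cc * r_internal < v_max:
            i = i_cc                                   # constant-current region
        else:
            i = max((v_max - v_oc) / r_internal, 0.0)  # constant-voltage: current tapers
        if i <= i_cutoff:                              # charge terminates
            break
        soc = min(soc + i * dt_h / capacity_ah, 1.0)
        t += dt_h
        samples.append((t, soc))
    return samples

profile = simulate_cc_cv()
print(f"charge time ~{profile[-1][0]:.2f} h, final SoC {profile[-1][1]:.0%}")
```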
A smart charging system connected at home can monitor residential usage, helping the system balance power between the charger and other appliances. If the energy usage in the building changes, the rate of charging responds to these changes; this is termed dynamic load balancing. Smart charging also monitors solar installations and increases the available power based on the weather and the energy required by the EV. This whole smart charging infrastructure promises to provide cleaner and greener transportation with a zero-emission future. However, it also provides adversaries with a much greater attack surface for exploiting the cyber security of the grid-EV ecosystem. For instance, longer charging at low power and low cost will benefit not only the consumer through cost reduction but will also give an attacker more time to implant malicious software between the station and the EVs. Having discussed smart EV charging, the next two sub-sections discuss different states of the system model.

**3.4.2** **Steady-state**

Power system stability involves the study of the dynamics of the power system under disturbances. Power system stability refers to the system's ability to return to normal or stable operation after having been subjected to some form of disturbance. Steady-state stability relates to the response of a synchronous machine to a gradually increasing load. It is basically concerned with determining the upper limit of machine loading without losing synchronism, provided the loading is increased gradually [55]. Next, we discuss the transient behaviour of this model.

**3.4.3** **Transient Stability**

Transient stability means the ability of a power system to experience a sudden change in generation, load, or system characteristics without a prolonged loss of synchronism. Power systems never operate in a steady state: the load on the system continuously changes, and the generators continuously respond to the load change to maintain the system frequency within acceptable levels. The power system is also subject to disturbances due to faults. Faults are detected by protection systems, and faulty components are removed by system operators to prevent the disturbance from spreading into the rest of the network. These disturbances result in a mismatch of power generation and consumption, which in turn disturbs the system frequency, voltages, and the speed of generators. A stable power system is capable of returning to a new steady-state operation with satisfactory voltage levels and system frequency. Having covered both steady and transient states, we next discuss the importance of voltage stability.

**3.4.4** **Voltage Stability**

Voltage stability refers to the power system's ability to maintain acceptable voltages throughout the system, both under normal operating conditions and after being subject to disturbances. Depending on the power system's operating condition, an immediate voltage collapse can activate protective devices; this activation initiates cascading tripping of parts of the network, leading to a partial or global voltage collapse. Voltage instability is a crucial phenomenon in the power grid infrastructure that impacts electrical systems [44]. Studies in [45,19] show that most of the blackouts that occurred between 1965 and 2015 were primarily caused by voltage instability. Voltage stability issues appear when a mismatch occurs between generation and demand.
This stability is measured across a bus. Each bus or node is correlated with one of four quantities: (1) magnitude of voltage, (2) phase angle of voltage, (3) active power or true power, and (4) reactive power. Voltage instability manifests as a decrease or an increase in voltage magnitude across these voltage buses. When the voltage at any bus drops fast, the affected bus reaches the critical point. Next, we analyze the voltage stability of the grid system under the influence of our proposed cyber-attacks on EVs, and how they are capable of creating an impact on the grid network.

**3.4.5** **Coordinated and Uncoordinated Charging**

In order to facilitate the proper functioning of the smart grid, production and consumption must be balanced so that frequency and voltage amplitudes stay close to their nominal values. Coordinated charging refers to scheduling and shifting the EV charging load to off-peak times to reduce energy demand on the power grid. Uncoordinated charging refers to charging an EV at any time, without any time bound. With uncoordinated charging, EV load can create voltage fluctuations and can increase harmonic distortions in the current. The contrasting impacts of both are presented in Table 3.1 below.

**Table 3.1: Comparison of Coordinated and Uncoordinated Charging**

| Coordinated Charging | Uncoordinated Charging |
|---|---|
| Optimized power demand | Unregulated energy demand |
| Less voltage distortion | More voltage deviations |
| Increased grid competency | Reduced reliability |
| Balanced daily load patterns | Increased load at peak hours |

### 3.5 ATTACK DESCRIPTION

The focus of this chapter is to identify an unexplored charging attack and to create an attack model to study the associated potential risks. The power system impact is analyzed for two different scenarios based on the percentage of compromised EVs. Attackers can exploit the EV, the charging infrastructure, and the power grid system via this type of attack. A detailed study is required to understand the vulnerabilities ahead of time in order to secure this electrical and transportation nexus.

**3.5.1** **Potential Cyber Attacks**

Cybersecurity challenges revolving around the EV ecosystem need to be predicted and solved in order to have a smooth transition in the transportation industry. The dynamic nature of the EV needs to be studied for market development. Cyberattacks that can affect EVs are discussed below.

1. Manipulation of Demand: This type of attack occurs when EV demand is manipulated in order to create a system imbalance [3].

2. Denial-of-Service (DoS): This type of attack makes a charging station unavailable to EVs.

3. Distributed Denial-of-Service (DDoS): This type of attack is an extended version of a DoS attack at a distributed level, where a number of charging stations appear unavailable to EVs, disturbing charging and traffic [65].

4. False Data Injection: This type of attack injects false data into the EV ecosystem in order to tamper with either charging prices or traffic [66].

5. Man-in-the-Middle: This type of attack captures the data exchange between the EV and the charging station and can influence it by adding harmful messages or changing the original ones [67].
This chapter focuses on the cyber-physical attack termed Manipulation of Actual Demand (MAD EV). This kind of manipulation attack disturbs normal grid operations and affects not only EV owners but also residential, commercial, and industrial users. The expected impact of this attack is that EVs will pose a coordinated cybersecurity threat to the power grid infrastructure: several EVs are compromised to affect a group of charging stations in a large geographical area. The target of the attack is to disrupt the power system network so as to have a larger impact on its services. The attacker is able to synchronize the attack so that it is executed during the charging and discharging of numerous EVs, compromising them simultaneously. This creates a larger impact and leads to voltage disruptions and frequency fluctuations in the mains grid. To model and study this type of attack, we use simulation experiments, as discussed below.

**3.5.2** **Manipulation of Demand in EVs**

Manipulation of demand is generally understood as an increase in demand. Existing literature [3] shows how an increase in demand creates an imbalance in the system and how it impacts services and normal operations. However, we make the important assumption that manipulation of demand can result in an increase or a decrease in demand. To the best of our knowledge, MAD based on an increase or decrease in demand has not been explored previously. It is therefore important to study these new attack perspectives in the present era, to become aware of the vulnerabilities in EVs, charging stations, and smart grids. We therefore define Manipulation of Demand as follows: manipulating the EV load connected across the power grid in a way that could bring the system down. This can be done in two ways:

1. Increase in demand

2. Reduction in demand

**Figure 3.2: Multiple Attack Scenarios of MAD EV Attacks**

**3.5.3** **Attack Model**

We consider an ecosystem comprising 2 million vehicles with a 10% penetration of electric vehicles, served by 51 public charging stations scattered across the Manhattan city area as described in [25]. These stations have 100 charging ports each, and 80% of the ports offer free charging for electric cars. The total grid load for the Manhattan area ranges from 2,000-2,100 MW. Attacks are launched on compromised EVs, and we analyze the scale of attacks across both Level 2 (L-2) and Level 3 (L-3) chargers. It can be clearly seen from Table 3.2 that the EV load can exceed the total system load at a penetration of only 10% to 25%. Keeping this assumption, we have simulated attack scenarios with the number of EVs ranging from 125 to 6,000 and compared their results. To depict normal grid operation and then compromised grid operation under the influence of a cyber-attack, we identified the percentage of adversaries that will bring down the system. Our analysis shows that only 6% of compromised EVs are able to break the system.

**Table 3.2: Comparison of Home Chargers and Fast Chargers**

| Type of Charger | 1% EV penetration (20,000 EVs) | 10% EV penetration (200,000 EVs) | 25% EV penetration (500,000 EVs) | 50% EV penetration (1,000,000 EVs) |
|---|---|---|---|---|
| Level-2 (7.2 kW) | 144 MW | 1440 MW | 3600 MW | 7200 MW |
| Level-3 (350 kW) | 7 GW | 70 GW | 0.17 TW | 0.35 TW |

**3.5.4** **Attack Scenario**

In order to explain the manipulation scenarios mentioned above, we simulate attacks on the demand side and quantify their impact. The topology of these attacks is as follows:

1. Level-2 Chargers/Home Chargers: A typical home charger, often termed an L-2 charger, operates at 32 A and 230 V and draws about 7.4 kW of power. For instance, out of 2 million vehicles, if 3,000 EVs get compromised as a result of the MAD cyber-attack, the attack draws 22.2 MW of potential load from the power grid. This kind of attack is responsible for creating a drop in frequency and forces the system to act to re-establish its stability. Another variation of the same attack is launched by doubling the number of EVs: around 6,000 compromised EVs in the Manhattan area, charging on home chargers, incur a total load of 44 MW on the system. Overall, 44 MW of potential load causes only minor disruption to the grid system; however, it generates localized problems. These two variations of the attack are simulated, and the results are shown in Section 3.6.

2. Level-3 Chargers/Fast Commercial Chargers: A typical fast commercial charger, often termed an L-3 charger, operates at much higher voltages and currents and delivers 240-350 kW of power. For instance, out of 2 million vehicles, if 3,000 EVs charging at such stations are compromised as a result of the MAD cyber-attack, the attack draws 1.05 GW of potential load from the power grid. Such a huge demand-side load caused by an attack is responsible for creating a blackout in the power system, as the demand load becomes comparable to the total system load. A comparison of EV penetration across home and fast chargers is presented in Table 3.2.

**Figure 3.3: IEEE 9-Bus System**

### 3.6 Simulation

To demonstrate the attack's impact, we selected the IEEE 9-bus system, which is widely used for research purposes. It consists of three generators and nine buses and is shown in Fig. 3.3. For simulation, we used Power World Simulator [56], an interactive power system simulation software package designed to simulate high-voltage power system operation on a time frame ranging from several minutes to several days. We vary the percentage of compromised EVs from 0% to 100%. The simulator is used to perform transient stability analysis on the system. The power grid transient behaviour depends on the chosen models; we therefore used the same models as in [3]:

1. Machine Model: GENSAL

2. Generator Exciter Model: IEEE T1

3. Turbine Speed Governor: IEEE G2

The simulation results reveal that these types of attacks are quite easy to execute under different conditions, and the scale of damage can be adjusted by the adversaries to achieve the desired level of mass disruption. The next section provides the simulation results after launching the cyber-attacks on both types of chargers and analyses them in detail.
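PowerWorld performs a full transient study with the generator, exciter, and governor models listed above. As a lightweight companion, the sketch below reproduces the attack-load arithmetic of Section 3.5.4 and couples it to a first-order swing-equation approximation of the frequency response. The inertia and damping constants are illustrative assumptions, so the output shows only the qualitative mechanism (a demand surge depresses frequency), not the PowerWorld results.

```python
# Attack-load arithmetic plus a crude swing-equation frequency model.
# H, D and the dt/time horizon are assumed values, not fitted to the 9-bus case.

F0 = 60.0           # nominal frequency (Hz)
H = 4.0             # aggregate inertia constant (s), assumption
D = 1.5             # load damping (p.u. power per p.u. frequency), assumption
S_BASE_MW = 2000.0  # base close to the Manhattan-area load in Section 3.5.3

def attack_load_mw(n_evs: int, charger_kw: float) -> float:
    """Demand injected by n compromised EVs, each charging at charger_kw."""
    return n_evs * charger_kw / 1000.0

def frequency_trace(step_mw: float, t_end=20.0, dt=0.01):
    """Swing-equation style response to a sudden demand step (positive = surge)."""
    dp = step_mw / S_BASE_MW            # per-unit power imbalance
    df, trace = 0.0, []
    for k in range(int(t_end / dt)):
        # 2H * d(df)/dt = -dp - D * df   (df in p.u. of F0)
        df += dt * (-dp - D * df) / (2 * H)
        trace.append((k * dt, F0 * (1 + df)))
    return trace

surge = attack_load_mw(3000, 7.4)       # 3,000 EVs on Level-2 chargers = 22.2 MW
print(f"attack load: {surge:.1f} MW")
print(f"frequency after 20 s: {frequency_trace(surge)[-1][1]:.3f} Hz")
```

Reversing the sign of the step models the load-reduction variant of MAD EV, producing a frequency rise instead of a drop, which mirrors the two scenarios simulated below.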
**3.6.1** **Charging Attacks on Home Chargers**

We consider the scenario of a coordinated charging attack on home chargers, where a number of EVs connected at home in a certain area are compromised and then used to infiltrate the ecosystem. We tested both manipulation scenarios, with the results below.

1. Increasing the Load: An increase in the demand-side load causes the power generators and turbines to slow down due to a voltage drop. Insufficient voltage means that the equipment has to draw extra current in order to meet the power requirements. This voltage drop then becomes responsible for the drop in frequency and rise in current; when a certain threshold is crossed, the grid system tries to disconnect itself. To create a surge in demand, an attack is simulated by adding a cumulative 22.2 MW load across the three load buses, representing roughly 3,000 EVs (about 1,000 per load bus) charging at Level-2 charging spots in our system model. The attack is initialized at t = 25 s, and the system frequency drops as shown in Fig. 3.4. Due to the increased power load, voltage stability is also disturbed across the three load buses, as shown in Fig. 3.5, and, in accordance with the frequency and voltage imbalance, the current varies following the pattern in Fig. 3.6.

**Figure 3.4: Frequency Drop on IEEE 9-Bus System by Home Chargers**

**Figure 3.5: Voltage Drop on IEEE 9-Bus System by Home Chargers**

**Figure 3.6: Current Rise on IEEE 9-Bus System by Home Chargers**

2. Reducing the Load: Off-loading a certain amount of demand-side load also results in instability. As a result, generators speed up with an increase in voltage. Over-voltage causes more damage to power system apparatus and is more challenging to mitigate; this case is more severe, as it increases the frequency and causes the transmission lines to trip. To create a reduction in demand, an attack is simulated by off-loading a cumulative 22.2 MW load across the three load buses, again representing roughly 3,000 EVs charging at Level-2 charging spots. The attack is initialized at t = 25 s, and the system frequency rises as shown in Fig. 3.7. Similarly, the voltage and current variations are shown in Fig. 3.8 and Fig. 3.9.

**Figure 3.7: Frequency Rise on IEEE 9-Bus System by Home Chargers**

**Figure 3.8: Voltage Rise on IEEE 9-Bus System by Home Chargers**

**Figure 3.9: Current Drop on IEEE 9-Bus System by Home Chargers**

**3.6.2** **Charging Attacks on Fast Chargers**

As described earlier, fast chargers incur the same load on the grid with only a small number of compromised EVs. Consider the case of 125 compromised EVs charging at stations via fast chargers: these 125 EVs charging at a rate of 350 kW put a cumulative load of 44 MW on the grid. This attack is launched at t = 15 s, and the high-power load of 44 MW required by these EVs alerts the system at around t = 17 s to activate load shedding, as the frequency starts to drop below the defined thresholds. A few seconds later, the system sheds load and disconnects itself as the frequency continues to drop. The frequency behaviour of this case is shown in Fig. 3.10.

**Figure 3.10: Frequency Drop on 9-Bus System by Fast Chargers**

We also simulated the case of off-loading a similar amount of load from the smart grid via fast chargers.
This attack represents a V2G scenario, in which EVs act as prosumers and provide the grid with energy. The attack is launched at t = 35 s, and the frequency starts to rise; once it exceeds 62.5 Hz, the system disconnects in order to avoid a blackout. The simulation results are shown in Fig. 3.11. These fast-charging results are simulated on the IEEE 9-bus system, which has a limited generating capacity. However, if a country's generating capacity is around 20 GW, approximately 20,000 compromised EVs would be enough to create voltage instability and frequency fluctuations.

**Figure 3.11: Frequency Rise on 9-Bus System by Fast Chargers**

### 3.7 DISCUSSION

An important assumption we make in this chapter is that manipulation of demand is also capable of creating a denial-of-service scenario at a charging station. That is, the demand instability creates an imbalance of demand and supply at the charging station, and new incoming EVs have to be routed to other nearby charging stations until the issue is resolved. This is made possible by attacking only a single EV, or a few EVs, to create a denial-of-charging scenario at a charging station. Consider a charging station with 10 charging ports and 10 EVs plugged into them, and suppose a compromised EV starts to manipulate the demand as described above. The demand-supply equilibrium of the power system will be disrupted, resulting in an imbalance. Nearby EVs will also be affected; moreover, new incoming EVs seeking to charge will be routed away, as if the charging station were unavailable due to high demand or maintenance work, via over-the-air updates over Wi-Fi or through the charging apps. Thus, incoming EVs have to look for other nearby charging stations. If attackers implement the same strategy across multiple EV stations in a city area, there will be a high influx of compromised EVs demanding to be charged at once. A distributed denial-of-charging scenario will soon be created, with the issue spread among multiple charging stations. This is expected to become a more significant issue as EV penetration increases: once rideshare fleets and government infrastructure have transitioned to EVs, such malicious attacks will be capable of damaging critical infrastructure and will create significant disruption, especially if this unique form of attack is executed during peak hours. Extensive research needs to be done before such attacks occur, in order to detect, respond to, mitigate, and provide solutions to these problems. A detailed analysis of the above-mentioned denial-of-service scenario is left for future work.

### 3.8 MITIGATION RECOMMENDATIONS

Power grid operators can undertake early attack detection to prevent demand-supply manipulation attacks on the grid. Anomalies can be detected by monitoring the status and schedules of connected EV charging stations. To automate this procedure, machine learning (ML) models can be used to create an anomaly detection system that continuously scans the charging records. These records are obtained from the data collected by smart meters at the EV charging stations and help to alert grid operators to malicious activities. As a result, operators can respond to anomalies and execute backup plans to deal with attack scenarios.
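A minimal stand-in for the ML-based detector suggested above is a rolling z-score over the smart-meter readings, sketched below. The synthetic load pattern and the threshold are illustrative assumptions; a production detector would use richer features and a trained model.

```python
import statistics

def detect_demand_anomalies(load_mw, window=24, z_thresh=4.0):
    """
    Flag time steps whose aggregate EV charging load deviates sharply from
    the recent rolling mean. load_mw: list of smart-meter readings per interval.
    Returns a list of (index, value, z-score) alerts.
    """
    alerts = []
    for t in range(window, len(load_mw)):
        hist = load_mw[t - window:t]
        mu, sigma = statistics.mean(hist), statistics.pstdev(hist) or 1e-6
        z = (load_mw[t] - mu) / sigma
        if abs(z) > z_thresh:
            alerts.append((t, load_mw[t], round(z, 1)))
    return alerts

# Normal daily pattern with a sudden coordinated surge injected at t = 40.
normal = [5 + 2 * ((t % 24) / 24) for t in range(60)]
normal[40] += 22.2  # MAD-style demand step, in MW
print(detect_demand_anomalies(normal))  # -> [(40, ..., z)]
```

A sudden coordinated step is far outside the rolling distribution and is flagged immediately, whereas ordinary daily variation is not: exactly the signal an operator would want forwarded to the response workflow described above.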
It should be emphasised that establishing a trust model between the electric grid and the EV charging station operators is crucial to the success of this anomaly-detection technique, since the two parties must share data. Most energy providers promise to schedule the charging of EVs in order to save the grid from energy loss. This means that a botnet of compromised EVs can mount the above-mentioned coordinated charging attacks at the scheduled charging times. Implementation of mutual consensus between the operator and the EV charging station is therefore required. For example, to modify an EV charging schedule, the charging station management system would require the charging station to contact the EV owner, who can accept or decline the change. In this manner, an adversary cannot alter charging schedules and configurations without the consent of the involved parties, i.e., EV owners and charging station operators.

Our recommendations for securing the energy grid and the charging system network lie in the use of blockchain technology. Blockchain offers the salient features of immutability and decentralisation, which are helpful for this charging infrastructure. As mentioned above, current research shows that load curves can be flattened by managing smart charging at feasible times. Consider the more likely scenario, in which EVs do not recharge during off-peak times. For example, if only a quarter of a country's vehicles become EVs, the burden on the electric grid would be crippling: millions of EVs charging during peak demand times may strain the grid so much that the effect resembles a massive MAD-EV attack against the smart grid.

### 3.9 CONCLUSION

This chapter highlights the concerns about the impact of electric vehicle compromise on the energy demand of the smart grid. MAD-EV is capable of bringing down the power system either way. The simulations demonstrate the following major effects: (a) voltage instability leading to load shedding in the distribution system, (b) wide-scale disruption of grid operations, and (c) significant economic loss for the EV infrastructure. We have highlighted the importance of these vulnerabilities so that grid operators can be well prepared for the future. If these issues remain unaddressed, they will jeopardize the deployment of emerging technologies in the EV industry. An interesting direction for future work is the development of detection mechanisms and mitigation methods for these types of cyber-attacks. Other directions for future work include the inclusion of discharging, solar data, and a broader range of scenarios to better quantify the impact of the MAD-EV attack.

# Chapter 4

# Decentralized Scheduling Framework For EVs

This chapter aims to address the challenges of EV scheduling by proposing a blockchain-based approach. The proposed architecture eliminates the need for a centralized trusted third party. It incorporates the use of smart contracts and provides security and privacy features such as secure charging sessions and user/data privacy protection. It helps to mitigate the aforementioned risks and supports the widespread adoption of EVs. The performance evaluation of the proposed framework is also shared, demonstrating its efficacy and potential for widespread adoption. Employing a decentralized blockchain-based solution can also help address the potential cyber attacks mentioned in Section 3.5.1.
The efficacy of the security measures is contingent upon several factors, including the implementation details, the consensus algorithms selected, and the overall design of the system. Additionally, regular updates, coordination with the cybersecurity community, and ongoing monitoring are crucial to stay ahead of growing risks in the decentralized EV charging ecosystem.

### 4.1 Introduction

The issue of EV scheduling is complicated and presents a number of difficulties that must be overcome; understanding them makes the need for a blockchain-based approach more obvious. The need for charging infrastructure grows as the number of electric vehicles rises, and coordinating the scheduling and verification of a wide variety of EVs becomes extremely difficult. The growing number of EVs and the variety of interconnections between them make scheduling exponentially more challenging. Large-scale schedule coordination calls for complex algorithms, real-time data processing, and reliable communication networks. The complexity increases with the inclusion of other stakeholders, including EV owners, CS operators, and utility providers. Furthermore, the EV ecosystem is evolving, with continually changing elements such as traffic and weather patterns. Real-time scheduling of EV charging adds a further level of complexity, demanding dynamic scheduling systems. An advanced, dynamic, and flexible solution is necessary to achieve interoperability as well as scalability while upholding security and trust.

In addition, trust is crucial in the EV ecosystem. EV owners must have confidence that their vehicles will be scheduled for charging as anticipated, and charging station owners must have confidence that only authorized users are using their stations. Traditional centralized methods can exhibit weaknesses, increasing the possibility of fraud, data breaches, and unauthorized access. Centralized scheduling techniques may put system efficiency first, but they may not offer ample flexibility to consider individual users' preferences and requirements. Users may have certain charging needs or restrictions that are not properly taken into account in a centralized method, causing them inconvenience or displeasure. These preferences include flexible charging schedules to accommodate different routines, preferences for particular charging locations, energy cost optimization during off-peak hours, battery state-of-charge requirements, preferences for charging with renewable energy sources, and prioritization of urgent charging needs. To protect the rights of all stakeholders, a secure and reliable scheduling system must be built. Furthermore, EV scheduling requires exchanges of sensitive data, including user identities, location information, and transaction specifics. To ensure consumer trust and adherence to data protection laws, the confidentiality of this data must be protected. Traditional centralized systems are quite vulnerable to unauthorized access, leading to data privacy violations that can compromise user privacy. A decentralized approach overcomes the drawbacks of traditional centralized scheduling techniques: it provides enhanced scalability, flexibility, privacy, security, and user-centricity, opening the way for better and more efficient coordination of EV schedules in a developing EV ecosystem [61,62,63].
Therefore, it is critical to design a framework that provides solutions for the above-mentioned challenges. The key contributions of this chapter are:

1. A decentralized consortium framework for managing authentication and scheduling in the EV charging ecosystem. The framework eliminates the requirement for any trusted third party and allows secure communication between EVs and CSs in a decentralized manner.

2. A detailed qualitative assessment of the proposed infrastructure.

The rest of the chapter is structured as follows. Section 4.2 discusses the blockchain framework for scheduling. Section 4.3 highlights the evaluation and analysis of the smart contract, including performance metrics. Section 4.4 concludes the chapter with a discussion of future work.

### 4.2 Blockchain Framework

In this section, we propose a consortium blockchain-based framework for managing EV charging scheduling and authenticating EVs at the CSs. This kind of blockchain architecture is intended to be managed and controlled by multiple organizations instead of a single one. Consortium blockchains balance decentralization and control, in contrast to private blockchains, which are controlled by a single organization, and public blockchains, which let anyone join the network and validate transactions. All of the charging business logic, permissions, and rules are handled by smart contracts, enabling the framework to operate without third-party entities. This approach provides a platform through which EV owners can communicate their charging demands while charging station operators offer available time slots and cost options. EV users can search for and choose suitable charging slots through the decentralized network based on their needs, and charging station operators can maximize the utilization of their stations.

For EV scheduling, a blockchain-based framework is an efficient choice for several compelling reasons, especially in terms of decentralization and avoiding trusted third parties. While other secure systems might accomplish comparable objectives, blockchain offers particular benefits, including immutability, transparency, and self-executing smart contracts, that make it especially well suited for this use. Decentralization is a core feature of blockchain: instead of depending on a centralized authority, power and decision-making are divided across various parties. Decentralization improves transparency, eliminates single points of failure, and lessens the chance of bias or manipulation. Blockchain keeps a permanent record of all transactions and actions. A high level of data integrity is provided by the fact that once data is stored on the blockchain, it cannot be changed or tampered with. For EV scheduling, this transparency and immutability are essential, since they ensure that all charging transactions are recorded on the ledger. Participants' trust is increased by this transparency because any inconsistencies or questionable activity can be quickly found and addressed.
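To make the immutability argument concrete, the following is a minimal hash-chain sketch (not the framework's actual ledger code, which Hyperledger Fabric provides): each block commits to its predecessor's hash, so editing one record invalidates every later block, and a tamperer would additionally have to repeat the rewrite on every peer's copy.

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Deterministic SHA-256 hash of a block's contents."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list, tx: dict) -> None:
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"prev": prev, "tx": tx}
    block["hash"] = block_hash({"prev": prev, "tx": tx})
    chain.append(block)

def verify(chain: list) -> bool:
    """Recompute links; any in-place edit breaks every later block."""
    prev = "0" * 64
    for b in chain:
        if b["prev"] != prev or b["hash"] != block_hash({"prev": b["prev"], "tx": b["tx"]}):
            return False
        prev = b["hash"]
    return True

chain: list = []
append_block(chain, {"ev": "EV-17", "cs": "CS-3", "slot": "09:00"})
append_block(chain, {"ev": "EV-42", "cs": "CS-1", "slot": "10:30"})
print(verify(chain))               # True
chain[0]["tx"]["slot"] = "23:00"   # attempted tampering with a reservation
print(verify(chain))               # False -- detected by every honest peer
```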
**4.2.1** **Assumptions**

In this proposed framework, we assume that EV growth will impact the Smart Grid: as the number of EVs increases, the grid has to bear more load. Therefore, in the registration and verification step, where both EVs and CSs have to provide their identities in order to register on the Hyperledger Fabric network, it is assumed that the Smart Grid issues credentials for authentication after verifying the documents. It is assumed that the management of digital certificates, including issuance, verification, and revocation, will be handled by a separate subsystem of the Smart Grid, designed by the grid authorities. To join the decentralized EV scheduling network, an EV or charging station must go through an identity verification procedure with the Smart Grid CA. Essential identification papers, licenses, or digital IDs must be submitted for verification during this process. The Smart Grid CA creates a digital certificate for the EV or CS following successful identity verification. The certificate contains the entity's public key, identity information, and possibly other attributes required for network authentication and authorization. The private key for the issued certificate is securely distributed to the CS or EV by the Smart Grid CA. This approach closely resembles AWS Key Management Service (KMS) [64]. Our strategy creates a central key management component within the Smart Grid architecture, functioning as a trusted party in charge of secure key generation, distribution, and revocation, similar to AWS KMS. We use periodic key rotation to improve security and reduce the risk of key compromise, much like the strong key rotation techniques in AWS KMS. Additionally, our strategy incorporates thorough auditing and monitoring features, documenting key management operations to guarantee openness, traceability, and compliance. We prioritize the protection and integrity of keys by implementing fine-grained access control and potentially utilizing hardware security modules. The entity uses this private key, which is kept secret, to validate its identity within the decentralized network and to sign messages. The Smart Grid CA can start a certificate revocation procedure if an EV or charging station is discovered to be malicious, compromised, or in breach of network policies; the entity's digital certificate is then rendered invalid for subsequent network transactions. The issued certificates and the related metadata are kept in a certificate database maintained by the Smart Grid CA. Using this repository, network users can confirm the legitimacy and authenticity of certificates while interacting with EVs and CSs. The Smart Grid CA guarantees the integrity and security of its infrastructure. This entails using robust encryption techniques, protecting private keys, putting secure communication protocols in place, and routinely updating hardware components and patching CA software. The Smart Grid, acting in the capacity of the certificate authority, is in charge of creating trust, overseeing the management of digital certificates, and ensuring secure communications between EVs, CSs, and the decentralized blockchain network for EV scheduling. This may seem an additional burden on the Smart Grid, but in the whole EV ecosystem the Smart Grid is the most trusted entity, as the grid is also responsible for other consumers, such as industrial, residential, and commercial ones. Therefore, it is recommended to seek the services of the smart grid for the purpose of government-issued identity verification. It is also suggested that the smart grid create a separate unit/department inside its infrastructure whose role is to look after energy consumption data and verify EVs and CSs. Another assumption is that the certificate authority (CA) will be a unit of the smart grid infrastructure that manages all identities. This assumption holds because the SG has multiple other consumers, including residential, commercial, and industrial ones, in addition to EVs. In the coming years, as the number of EVs increases, it is anticipated that SG infrastructure will incorporate new departments for the management of EV charging.
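The issue/verify/revoke lifecycle described above can be sketched compactly. The following toy CA uses Ed25519 signatures from the `cryptography` package; the certificate format, field names, and entity IDs are simplified illustrations, not the Smart Grid CA's actual design (which would additionally handle key rotation, HSM storage, and auditing).

```python
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

class SmartGridCA:
    """Toy CA: signs (entity_id, public_key) pairs and keeps a revocation list."""

    def __init__(self):
        self._key = Ed25519PrivateKey.generate()
        self.public_key = self._key.public_key()
        self.revoked = set()

    def issue(self, entity_id: str, entity_pub: bytes) -> dict:
        payload = {"id": entity_id, "pub": entity_pub.hex()}
        sig = self._key.sign(json.dumps(payload, sort_keys=True).encode())
        return {"payload": payload, "sig": sig.hex()}

    def revoke(self, entity_id: str) -> None:
        self.revoked.add(entity_id)

    def verify(self, cert: dict) -> bool:
        if cert["payload"]["id"] in self.revoked:
            return False
        try:
            self.public_key.verify(
                bytes.fromhex(cert["sig"]),
                json.dumps(cert["payload"], sort_keys=True).encode(),
            )
            return True
        except InvalidSignature:
            return False

# An EV enrols, presents its certificate to a CS, and is later revoked.
ca = SmartGridCA()
ev_key = Ed25519PrivateKey.generate()
ev_pub = ev_key.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)
cert = ca.issue("EV-17", ev_pub)
print(ca.verify(cert))   # True  -- CS accepts the EV
ca.revoke("EV-17")
print(ca.verify(cert))   # False -- compromised EV is rejected
```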
**4.2.2** **System Model**

We consider a network of X CSs from various rival providers, geographically distributed throughout a metropolis. These X CSs communicate their charging data to the decentralized framework. A number H of EVs operate in the city, either parked or on the go. Each EV has different charging needs; for example, some EV owners are prepared to charge their EV for less money at the cost of longer waiting and traveling times, while other EV users prefer short waiting periods at a more expensive charging price. These EVs must query the decentralized architecture to identify the best CS in accordance with their needs. The main participants in the proposed blockchain framework are:

1. Electric Vehicles: EVs that want to schedule their charging sessions in order to avoid waiting time and to select an available CS slot based on their own preferences.

2. Charging Stations: Charging stations comprise the charging ports where EVs connect for charging and discharging.

**Figure 4.1: Network Entities**

**Figure 4.2: Peer Nodes and Certificate Authority**

These main network entities interact with each other via transactions and smart contracts, as shown in Fig. 4.1.

**4.2.3** **Overview**

An EV that wants to charge needs to register on the blockchain network by providing information on its name, ID, EV type, and EV charger type. This overview is presented in Fig. 4.2. Similarly, CSs also need to register on the network and share information on the CS location, available CS slots, available charger types, and energy prices. The platform ensures the integrity of participant identities by using a certificate authority. An EV can view the available CSs with their configurations on the blockchain network and select a CS based on its needs. After selecting a CS with the desired charging configuration, the EV submits a scheduling request. This request includes the desired charger type, charging slot, and time. The request is received over the network and validated, with approval from the CS. The blockchain network facilitates the process by updating the CS parameters in real time. A smart contract records the scheduling information and conditions once EV owners have successfully reserved a time slot. More details on the specific actions of the smart contract are presented below. A high-level visual representation of the steps involved is illustrated in Fig. 4.3.

**Figure 4.3: Blockchain Framework**

The EV owner uploads registration information, including car details, ownership documentation, and digital identification. The Certificate Authority (CA) verifies the registration request to confirm the identity of the EV owner. The EV registration transaction is created and propagated to the peer nodes for validation. Peer nodes verify the transaction and ensure the validity and reliability of the registration information. Once the peer nodes agree, the validated EV registration transaction is included in a fresh block. The peer nodes disseminate and verify the new block. The registered EV is recorded in the validated block, which is committed to the blockchain ledger.

The CS owner uploads registration information, including the location, charging capabilities, and digital identity. The Certificate Authority (CA) validates the registration request to confirm the identity of the charging station owner. The CS registration transaction is constructed and transmitted to be verified by the peer nodes. Peer nodes verify the transaction and ensure the validity and reliability of the registration information. When the peer nodes reach agreement, the validated CS registration transaction is added to a fresh block. The peer nodes validate and verify the new block. The registered charging station is recorded in the validated block, which is then committed to the blockchain ledger.

The owner of the electric vehicle (EV) sends a charging request outlining their preferred time, location, and charging needs. This request includes:

EV ID —— Time —— Energy Demand

When a charging request is made, the CS checks whether the requested charging station slot is available. As part of this verification procedure, it also verifies the EV's identity and its digital certificate, issued by the Smart Grid's CA. This guarantees that the EV's certificate has not been tampered with and is still in effect. Upon successful authentication, the slot is reserved. A transaction is created that includes the details of the EV owner and the reserved slot and timing details. By including this verification step and checking the EV's identity before starting the charging procedure, the CS adds a further layer of security: the charging session is started only for validated and authorized EVs, and unauthorized use of the charging infrastructure is prevented. This transaction includes:

EV ID —— CS Location —— CS Slot —— Start Time —— End Time —— Charging Cost

For validation, the transaction is propagated to the network's peer nodes. Every peer node checks the transaction's legitimacy and integrity. Peer nodes take part in the consensus procedure to determine whether the transaction is genuine. The validated transaction is appended to a new block once consensus has been reached. All peer nodes in the network receive the new block containing the validated transaction. Each peer node individually confirms the block's accuracy and integrity. The block is committed to the blockchain ledger once it has received sufficient validation from the peer nodes; it then becomes an unchangeable, permanent component of the blockchain. The CS notifies the EV owner of the confirmed charging reservation and grants them access to the schedule as well as the necessary information. During the designated time slot, the EV owner arrives at the scheduled CS and starts the charging procedure.
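The authorization and booking checks just described can be sketched as follows. Hyperledger Fabric chaincode is normally written in Go or Node.js; this plain-Python sketch mirrors only the contract logic (certificate check, slot availability, reservation record), and all names, fields, and values are illustrative. The `verify_cert` parameter stands in for the Smart Grid CA check from the previous section.

```python
from dataclasses import dataclass, field

@dataclass
class ChargingStation:
    cs_id: str
    location: str
    price_per_kwh: float
    free_slots: set = field(default_factory=set)   # e.g. {"09:00", "10:30"}

def request_charging(cs: ChargingStation, ev_id: str, cert: dict,
                     slot: str, energy_kwh: float, verify_cert) -> dict:
    """Validate a charging request and return the reservation transaction."""
    if not verify_cert(cert):              # identity / revocation check
        raise PermissionError(f"{ev_id}: invalid or revoked certificate")
    if slot not in cs.free_slots:          # availability check
        raise ValueError(f"{cs.cs_id}: slot {slot} not available")
    cs.free_slots.remove(slot)             # reserve the slot
    return {                               # fields recorded on the ledger
        "ev_id": ev_id, "cs_location": cs.location, "cs_slot": slot,
        "start": slot, "cost": round(energy_kwh * cs.price_per_kwh, 2),
    }

cs = ChargingStation("CS-3", "Brisbane CBD", 0.45, {"09:00", "10:30"})
tx = request_charging(cs, "EV-17", {"id": "EV-17"}, "09:00", 30.0,
                      verify_cert=lambda c: True)   # stubbed CA check
print(tx)  # reservation transaction to be endorsed and committed by peers
```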
If the EV owner does not show up at the scheduled time and slot, a number of penalties can be applied. These can include charging a fixed no-show fee, giving the EV owner a low rating that affects future charging requests, or barring that EV owner from booking charging for a certain period.

**Figure 4.4: Blockchain Framework**

The EV owner detaches the vehicle from the CS slot once charging is finished. When the charging procedure is complete, a new transaction is issued to update the charging status. The mechanism described earlier is used to validate the new transaction and add it to a new block. The peer nodes disseminate and verify the new block. The committed validated block records the conclusion of the charging procedure and is added to the blockchain ledger.

The network designed above typically consists of peer nodes and a certificate authority (CA). Here is a brief explanation of each component. Peer nodes are the nodes that participate in Hyperledger Fabric's consensus protocol and keep a copy of the distributed ledger. They execute smart contract code (chaincode), verify transactions, and store blockchain data. In the EV charging scenario, peer nodes include nodes for charging stations and EV owners. Depending on the needs and scale of the network, different numbers of peer nodes may be used: a small number of nodes, or many nodes dispersed across numerous sites. The certificate authority is responsible for the management of digital identities and certificates within the network. To ensure secure communication and authentication, it issues and revokes certificates for network users. In the EV charging scenario, the CA is used to verify and authorize EV owners and CS operators, and is implemented within the network as a dedicated node or service. The numbers of peer nodes, orderers, and CAs in the network vary depending on the size of the deployment, the required performance, the preferred degree of decentralization, and the required fault tolerance, and can be adapted to fit the particular requirements of the EV charging scheduling and authentication scenario.

For EV authentication, multiple mechanisms exist in the literature. These include biometric authentication [57], multi-factor authentication [58], decentralized identities (DIDs) [59], and others. In our strategy, however, we use decentralized identities to manage authentication while effectively managing EV charging. With the help of cutting-edge technologies like blockchain, decentralized identities present a promising alternative for secure, immutable, and privacy-preserving authentication procedures. By using decentralized identities, we can create a trustless environment where EV owners validate their identities without depending on a centralized authority. This strategy supports user privacy, improves security, and is consistent with the decentralized structure we want to employ for EV charging scheduling. Multiple features related to energy prices can be added to the charging slot scheduling. We have used fixed energy prices; however, dynamic pricing based on grid load, or bidding and negotiation, is also feasible (a simple sketch of load-based pricing appears at the end of this section). Similarly, additional authentication features, such as a combination of passwords, multi-factor authentication, and biometrics, can also be incorporated.

The management of the suggested strategy is a team effort involving numerous entities. Each entity contributes to the general operation and governance of the system while carrying out its own duties. The implementation of the EV scheduling and authorization procedures is guaranteed to be transparent, equitable, and accountable due to the distributed nature of the management. EVs and CSs communicate with the blockchain network to submit scheduling requests, confirm transactions, and obtain necessary information. The consortium sets the regulations and norms that EV owners and charging stations must follow, as well as the defined mechanisms for scheduling and verification.
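As referenced above, the fixed-price assumption can be relaxed with load-dependent pricing. The following is one illustrative scheme; the multipliers and thresholds are assumptions for the sketch, not values from this framework. The per-kWh price scales with current grid utilization, nudging charging toward off-peak periods.

```python
BASE_PRICE = 0.45  # $/kWh -- stands in for the framework's fixed tariff

def dynamic_price(load_fraction: float, base: float = BASE_PRICE) -> float:
    """Scale the per-kWh price with grid utilization in [0.0, 1.0].

    Piecewise multipliers are illustrative: discounted when the grid is
    lightly loaded, surcharged near capacity to discourage peak charging.
    """
    if load_fraction < 0.4:
        return round(base * 0.8, 3)   # off-peak discount
    if load_fraction < 0.8:
        return round(base * 1.0, 3)   # normal tariff
    return round(base * 1.6, 3)       # peak surcharge

for load in (0.25, 0.6, 0.9):
    print(f"grid at {load:.0%}: {dynamic_price(load)} $/kWh")
```

In the chaincode, such a function would replace the fixed `price_per_kwh` at reservation time, with the grid-load figure supplied by the Smart Grid.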
### 4.3 Evaluation and Analysis

In this section, we present quantitative and qualitative results on the performance of our system for relevant benchmarks.

**4.3.1** **Experimental Setup**

The deployment of the business network and the performance tests are carried out on a GPU server with an Intel(R) Core(TM) i7-1185G7 CPU @ 3.00 GHz (8 CPUs, 1.80 GHz base clock). All nodes run on virtual machines running a Linux OS. We build a Fabric network of two organizations (EVs and CSs), each consisting of two peer nodes. To assess the performance and scalability of Fabric, we use Hyperledger Caliper [60], a blockchain benchmarking tool for comparing the performance of various blockchain technologies. The tool generates HTML reports that include metrics such as resource usage and transaction throughput/latency.

**4.3.2** **Qualitative Assessment**

In this section, we conduct a comprehensive security assessment of the proposed system, aiming to identify potential vulnerabilities, threats, and risks associated with centralized approaches. Furthermore, we evaluate the effectiveness of the Hyperledger Fabric blockchain design in combatting these threats, risks, and vulnerabilities. This assessment highlights the strengths of the proposed design and demonstrates how it addresses the identified security concerns, providing a secure foundation for the EV charging scheduling process. As discussed in Chapter 2, certain decentralized frameworks in the literature still incorporate trusted third parties such as RSUs for routing scheduling information. This section therefore also explains how our approach deviates from that standard by eliminating the need for RSUs and adopting a wholly decentralized strategy. By utilizing blockchain technology, our approach seeks to automate and optimize the entire EV charging procedure while maintaining efficiency, security, and transparency without depending on any centralized middlemen.

1. RSU Spoofing: Attackers may impersonate RSUs to gain access to the system and manipulate scheduling or authentication processes. In our proposed framework, unlike the related blockchain works mentioned above, RSUs are not involved in managing authentication and scheduling.

2. Data Manipulation: Malicious actors can tamper with data stored or processed by RSUs, leading to inaccurate scheduling, unauthorized charging, or compromised authentication. In our framework, the distributed ledger stores and records all data flows and transactions. The data is then available across all nodes, meaning no one can tamper with it, as doing so would require modifying all copies.

3. Denial-of-Service (DoS) Attacks: Cybercriminals can launch DoS attacks, overwhelming a centralized system with a high volume of requests or malicious activities and rendering it unavailable to legitimate users. In our permissioned network, all participants have defined access and control. Another protection against this attack is the decentralisation of scheduling: if one node is attacked, others may still be able to support EV scheduling.
4. Sybil Attacks: Sybil attacks involve the creation of multiple fake identities or nodes by a single malicious entity aiming to gain control or influence over the network. In our framework, participants' identities are verified, making it harder for adversaries to construct numerous false identities.

5. Unauthorized Access: Attackers may gain unauthorized access to a centralized system, compromising the integrity of scheduling and authentication processes. They can manipulate data, perform unauthorized transactions, or disrupt the system. In our proposed framework, all entities are registered and verified in the first step, so attackers cannot compromise the system through unauthorized access.

**4.3.3** **Quantitative Assessment**

We have deployed a chaincode with a function for creating an EV schedule based on the EV owner's preference (invoke) and a query function for querying a scheduled CS timeslot for EV charging. By varying the number of transactions from 10 to 100, we assess the performance of our method in terms of transaction delay and throughput. We consider the following metrics for the performance evaluation:

**Latency:** the time taken from an application sending the transaction to the time it is committed to the ledger. Fig. 4.4 shows the latency of the invoke transactions.

**Transaction Throughput:** the rate at which transactions are committed to the ledger after they have been issued.

**Figure 4.5: Transaction Throughput and Latency of Blockchain**

These parameters help to determine the effectiveness of the smart contract-based solution for scheduling, authentication, and demand forecasting in the electric vehicle charging context.
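For reference, the following minimal sketch shows how these two metrics are computed from per-transaction submit and commit timestamps of the kind Caliper records. The sample timestamps are invented for illustration and are not the measured values.

```python
def latency_and_throughput(timestamps):
    """timestamps: list of (submit_time_s, commit_time_s) per transaction."""
    latencies = [commit - submit for submit, commit in timestamps]
    avg_latency = sum(latencies) / len(latencies)
    span = max(c for _, c in timestamps) - min(s for s, _ in timestamps)
    throughput = len(timestamps) / span   # committed tx per second
    return avg_latency, throughput

# Ten invoke transactions with invented submit/commit times (seconds).
sample = [(0.00, 0.92), (0.10, 1.05), (0.21, 1.20), (0.33, 1.31),
          (0.40, 1.52), (0.55, 1.60), (0.61, 1.77), (0.72, 1.90),
          (0.80, 2.04), (0.95, 2.15)]
lat, tps = latency_and_throughput(sample)
print(f"avg latency: {lat:.2f} s, throughput: {tps:.1f} tps")
```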
Examining the current state of blockchain applications in the automotive industry is essential to determining the viability of the suggested BC-based EV scheduling system. The companies leading the ongoing projects and activities within the MOBI Alliance [68] offer a strong basis for the feasibility of blockchain solutions for EV charging and administration. By matching the suggested scheduling framework with these industry-driven efforts and drawing inspiration from real-world implementations, the research places itself at the forefront of technical breakthroughs in the automotive and energy industries. The collaborative character of these initiatives also prompts an exploration of possible interconnection points and scalability considerations, recognising the necessity of interoperability within the developing ecosystem of blockchain applications for electric vehicles. This reality check confirms that the suggested EV scheduling system is not just theoretical but purposefully crafted to complement and advance current industry initiatives, increasing its viability and applicability in the quickly changing field of electric mobility.

### 4.4 Conclusion

The widespread use of electric vehicles (EVs) signals an unprecedented change in the transportation industry, with significant implications for the ecosystem and the power grid infrastructure. The advantages of EV adoption come with a number of difficulties, including security concerns. This chapter proposes a reliable and decentralized framework for EV scheduling and EV authentication. For these purposes, we utilize blockchain-based smart contracts on Hyperledger Fabric, without the need for a trusted third party. By using this model, stakeholders in the EV charging ecosystem can have greater confidence in the security and reliability of the system.

# Chapter 5

# Conclusions

This chapter summarises the research and the contributions of this thesis; future work is also discussed. We examined the crucial problem of EV manipulation attacks and emphasized the significance of a decentralized architecture for EV charging management. By examining the flaws of centralized systems and the possible threats posed by attackers, we established the necessity for reliable and secure solutions in the EV charging arena.

### 5.1 Summary of the Research

**5.1.1** **Demand Manipulation Attacks**

In this thesis, we looked in detail at electric vehicle (EV) manipulation attacks and their potential effects on the smart grid. By researching the flaws in EV charging systems and understanding the interdependencies between EVs and the grid, we showed the viability of coordinated EV charging as a technique to mount attacks on the smart grid infrastructure. Through a series of simulations and experiments, we quantified the effects of these attacks and assessed how they affect the grid's stability, dependability, and economic effectiveness. By changing the charging patterns of a significant number of EVs, we modeled situations where the grid's capacity and stability are jeopardized, resulting in potential blackouts, voltage instability, and increased costs.

**Findings:** According to our findings, coordinated EV charging attacks can seriously harm the smart grid. Attackers can take advantage of the grid's weaknesses, overload the distribution network, and cause voltage and frequency fluctuations by carefully coordinating the charging activities of a sizable number of EVs. This compromises the grid's overall stability. The higher expenses incurred by the grid operator to offset the consequences of the modified charging patterns also allowed us to quantify the economic impact of these attacks.

**Significance:** These results underline the significance of strong security controls in EV charging systems and the requirement for proactive methods to identify and counter manipulation threats. To detect aberrant charging patterns and stop malicious activities, it is essential to strengthen the security of communication protocols, put authentication and encryption mechanisms into place, and establish anomaly detection systems. Grid operators should consider implementing advanced monitoring and control systems that can proactively alter how EVs charge, preventing these attacks and assuring grid stability and dependability. To enhance the security of EV charging systems and reduce the risks associated with manipulation attacks, cooperation between EV manufacturers, charging infrastructure suppliers, and grid operators is imperative. Further research is required to investigate and create efficient defense systems against coordinated EV charging attacks. This entails analyzing the scalability and economic feasibility of these systems, as well as developing sophisticated algorithms for real-time monitoring, anomaly identification, and response coordination.
**5.1.2** **Decentralized EV Charging Management**

To overcome these issues, we presented a novel BC-based framework for EV scheduling and authentication. By utilizing the inherent characteristics of BC technology, such as trustless transactions, immutability, and decentralized consensus, our architecture offers a number of benefits in terms of security, transparency, and efficiency.

**Findings:** This thesis demonstrated the effectiveness of our suggested methodology through a qualitative and quantitative analysis. The framework's decentralized design ensures that crucial operations, such as EV scheduling and authentication, are split among several nodes, lowering the possibility of single points of failure and unauthorized access. The integrity and transparency of EV charging transactions are guaranteed by the immutability of the BC ledger, making it challenging for attackers to interfere with the charging process or jeopardize the system's security.

**Significance:** As the sector adopts decentralized technologies, it is critical to build a blockchain-based architecture with transparency and security in mind. The importance lies in providing workable ways to build confidence in decentralised EV charging networks. The results of the research can inform the creation of best practices and standards, encouraging wider use of decentralised systems. Furthermore, the results have implications for blockchain uses beyond EV charging, adding to the current conversation about transparent and safe decentralized systems across a range of industries.

In conclusion, the answers to these two research questions have a significant impact on how EVs and the infrastructure that supports them will operate in the future. The results could shape industry practices, influence regulatory decisions, and advance research on security and transparency in developing technologies.

### 5.2 Future Study

This thesis suggests a number of avenues for future research.

**Scalability and BC Performance Optimization:** With the growing number of EVs on the road, it is critical to address scalability challenges and improve the BC framework's performance. To support more EVs and charging stations, future research can concentrate on techniques to increase transaction throughput, decrease latency, and increase the system's overall scalability.

**Techniques for Preserving Privacy:** It is crucial to protect the privacy of EV owners and their charging habits. Future work can investigate privacy-preserving methods, including homomorphic encryption or zero-knowledge proofs, to guarantee that sensitive data is kept private while still allowing secure and effective EV scheduling and authentication.

Conducting real-world deployments and incorporating the suggested framework into current EV charging infrastructures are crucial for the practical validation of the approach. Future development should concentrate on working with industry partners, EV manufacturers, and charging network operators to test and assess the framework's efficacy, usability, and compatibility in a variety of operational situations. This can include grid integration testing, reliability testing, compatibility with existing infrastructure, and user experience testing.

**Robust Security Analysis:** A thorough security evaluation of the BC-based system is essential to find and fix potential flaws or attack vectors.
To ensure the framework's resistance to sophisticated attacks, future research should concentrate on undertaking thorough security evaluations, including penetration testing and vulnerability assessments.

**Interoperability and Standardization:** For EV charging networks and BC frameworks to be widely used, interoperability standards and protocols must be developed. Future research might examine initiatives to standardize interoperability procedures, data formats, and communication protocols to enable seamless integration and interoperability among various EV charging infrastructures.

We believe that the proposed consortium blockchain framework has the potential to improve the efficiency, security, and scalability of EV integration into the grid without the need for any central entity. The use of BC technology in this scenario provides a secure and decentralized system for scheduling and authenticating charging events at CSs and grid services, while the use of smart contracts enables the efficient and accurate management of EV demand and energy consumption data. Our future work includes plans to develop the following:

1. The full implementation and testing of this consortium framework in a real-world scenario, where the smart contracts can be further evaluated.

2. The inclusion of additional electricity-price functionality in the scheduling algorithm. If the electricity price is higher during peak hours, the charging system could adjust the charging rate to a lower level or delay the charging process to a cheaper time.

3. The creation of smart contracts for the remaining forecasting methodologies, including the short-term and long-term energy forecasting categories. We will then compare the results of our smart contracts with other existing prediction models and provide a detailed analysis of them for the SG.

Our research focus will also include the creation of smart contracts where energy can be traded between EV and EV and between EV and CS without relying on any central management entity. Our thesis advances knowledge of EV manipulation attacks and highlights the importance of a decentralized framework for controlling EV charging. The suggested BC-based architecture has the potential to improve EV scheduling and authentication in terms of security, transparency, and efficiency. The suggested framework will be improved and advanced through additional research and development, paving the way for safe and dependable future EV charging systems.

# Chapter 6

# Bibliography

[1] IEA. (2023, April). Global EV Outlook 2023 – Analysis. IEA. https://www.iea.org/reports/global-ev-outlook-2023

[2] Paoli, L., & Gül, T. (2022, January 30). Electric cars fend off supply challenges to more than double global sales – Analysis. IEA. https://www.iea.org/commentaries/electric-cars-fend-off-supply-challenges-to-more-than-double-global-sales

[3] Sayed, M. A., Atallah, R., Assi, C., & Debbabi, M. (2022). Electric vehicle attack impact on power grid operation. International Journal of Electrical Power & Energy Systems, 137, 107784.

[4] Ziegler, B. (2023, February 14). Could Electric Vehicles Be Hacked? WSJ. https://www.wsj.com/articles/could-electric-vehicles-be-hacked-71a543e3

[5] Liang, G., Weller, S. R., Zhao, J., Luo, F., & Dong, Z. Y. (2016). The 2015 Ukraine blackout: Implications for false data injection attacks. IEEE Transactions on Power Systems, 32(4), 3317-3318.

[6] Barrett, B. (2019, September 7).
An Unprecedented Cyberattack Hit the US Power Grid. Wired. https://www.wired.com/story/power-grid-cyberattack-facebook-phone-numbers-security-news/

[7] Connecting Canada: Securing the vehicles of the future. (n.d.). Retrieved July 2, 2023, from https://www2.deloitte.com/content/dam/Deloitte/ca/Documents/risk/ca-en-risk-advisory-securing-the-vehicles-of-the-future-aoda.pdf

[8] Fridman, R. (n.d.). Council Post: The Importance Of Cybersecurity In Fueling The Electric Vehicle Revolution. Forbes. Retrieved July 2, 2023, from https://www.forbes.com/sites/forbestechcouncil/2022/10/19/the-importance-of-cybersecurity-in-fueling-the-electric-vehicle-revolution/?sh=224416385994

[9] Automotive Cybersecurity Market worth $5.3 billion by 2026. (n.d.). www.linkedin.com. Retrieved July 2, 2023, from https://www.linkedin.com/pulse/automotive-cybersecurity-market-worth-53-billion-2026-swaraj-bhosale-2f/

[10] Sunderland, C. (n.d.). "Protect the Plug!" The Cybersecurity of Electric Vehicles & their Charging Points. www.victanis.com. https://www.victanis.com/blog/protect-the-plug-the-cybersecurity-of-electric-vehicles-their-charging-points

[11] Sanguesa, J. A., Torres-Sanz, V., Garrido, P., Martinez, F. J., & Marquez-Barja, J. M. (2021). A review on electric vehicles: Technologies and challenges. Smart Cities, 4(1), 372-404.

[12] Shahjalal, M., Shams, T., Tasnim, M. N., Ahmed, M. R., Ahsan, M., & Haider, J. (2022). A critical review on charging technologies of electric vehicles. Energies, 15(21), 8239.

[13] Arif, S. M., Lie, T. T., Seet, B. C., Ayyadi, S., & Jensen, K. (2021). Review of electric vehicle technologies, charging methods, standards and optimization techniques. Electronics, 10(16), 1910.

[14] Russian EV Chargers Hacked, Screen Reads "Glory To Ukraine!" (n.d.). InsideEVs. Retrieved July 2, 2023, from https://insideevs.com/news/570958/russia-electric-car-chargers-hacked/

[15] Köhler, S., Baker, R., Strohmeier, M., & Martinovic, I. (2022). Brokenwire: Wireless disruption of CCS electric vehicle charging. arXiv preprint arXiv:2202.02104.

[16] Malik, A. W., & Anwar, Z. (2022). Do Charging Stations Benefit from Cryptojacking? A Novel Framework for Its Financial Impact Analysis on Electric Vehicles. Energies, 15(16), 5773.

[17] Kushner, D. (2013). The real story of Stuxnet. IEEE Spectrum, 50(3), 48-53.

[18] Albright, D., Brannan, P., & Walrond, C. (2011). Stuxnet malware and Natanz: Update of ISIS December 22, 2010 report. Institute for Science and International Security, 15, 739883-3.

[19] Soltan, S., Mittal, P., & Poor, H. V. (2018). BlackIoT: IoT botnet of high wattage devices can disrupt the power grid. In 27th USENIX Security Symposium (USENIX Security 18) (pp. 15-32).

[20] Antonakakis, M., April, T., Bailey, M., Bernhard, M., Bursztein, E., Cochran, J., ... Zhou, Y. (2017). Understanding the Mirai botnet. In 26th USENIX Security Symposium (USENIX Security 17) (pp. 1093-1110).

[21] Shafiq, S., Irshad, U. B., Al-Muhaini, M., Djokic, S. Z., & Akram, U. (2020). Reliability evaluation of composite power systems: Evaluating the impact of full and plug-in hybrid electric vehicles. IEEE Access, 8, 114305-114314.

[22] Delgado, J., Faria, R., Moura, P., & de Almeida, A. T. (2018). Impacts of plug-in electric vehicles in the Portuguese electrical grid. Transportation Research Part D: Transport and Environment, 62, 372-385.

[23] Morais, H., Sousa, T., Vale, Z., & Faria, P. (2014).
Evaluation of the electric vehicle impact in the power demand curve in a smart grid environment. Energy Conversion and Management, 82, 268-282.

[24] Fernandez, L. P., San Román, T. G., Cossent, R., Domingo, C. M., & Frias, P. (2010). Assessment of the impact of plug-in electric vehicles on distribution networks. IEEE Transactions on Power Systems, 26(1), 206-213.

[25] Acharya, S., Dvorkin, Y., & Karri, R. (2020). Public plug-in electric vehicles + grid data: Is a new cyberattack vector viable? IEEE Transactions on Smart Grid, 11(6), 5099-5113.

[26] Acharya, S., Dvorkin, Y., Pandžić, H., & Karri, R. (2020). Cybersecurity of smart electric vehicle charging: A power grid perspective. IEEE Access, 8, 214434-214453.

[27] Jin, C., Tang, J., & Ghosh, P. (2013). Optimizing electric vehicle charging: A customer's perspective. IEEE Transactions on Vehicular Technology, 62(7), 2919-2927.

[28] Goyal, P., Sharma, A., Vyas, S., & Kumar, R. (2016, December). Customer and aggregator balanced dynamic Electric Vehicle charge scheduling in a smart grid framework. In 2016 International Conference on Electrical Power and Energy Systems (ICEPES) (pp. 276-283). IEEE.

[29] Mukherjee, J. C., & Gupta, A. (2014, January). A mobility aware scheduler for low cost charging of electric vehicles in smart grid. In 2014 Sixth International Conference on Communication Systems and Networks (COMSNETS) (pp. 1-8). IEEE.

[30] Wen, M., Linde, E., Ropke, S., Mirchandani, P., & Larsen, A. (2016). An adaptive large neighborhood search heuristic for the electric vehicle scheduling problem. Computers & Operations Research, 76, 73-83.

[31] Said, D., Cherkaoui, S., & Khoukhi, L. (2013, July). Queuing model for EVs charging at public supply stations. In 2013 9th International Wireless Communications and Mobile Computing Conference (IWCMC) (pp. 65-70). IEEE.

[32] Zhou, Y., Huang, J., Shi, J., Wang, R., & Huang, K. (2021). The electric vehicle routing problem with partial recharge and vehicle recycling. Complex & Intelligent Systems, 7, 1445-1458.

[33] Nejad, M. M., Mashayekhy, L., Chinnam, R. B., & Grosu, D. (2017). Online scheduling and pricing for electric vehicle charging. IISE Transactions, 49(2), 178-193.

[34] Gusrialdi, A., Qu, Z., & Simaan, M. A. (2017). Distributed scheduling and cooperative control for charging of electric vehicles at highway service stations. IEEE Transactions on Intelligent Transportation Systems, 18(10), 2713-2727.

[35] Pustišek, M., Kos, A., & Sedlar, U. (2016, October). Blockchain based autonomous selection of electric vehicle charging station. In 2016 International Conference on Identification, Information and Knowledge in the Internet of Things (IIKI) (pp. 217-222). IEEE.

[36] Radi, E. M., Lasla, N., Bakiras, S., & Mahmoud, M. (2019, May). Privacy-preserving electric vehicle charging for peer-to-peer energy trading ecosystems. In ICC 2019 - 2019 IEEE International Conference on Communications (ICC) (pp. 1-6). IEEE.

[37] Danish, S. M., Zhang, K., Jacobsen, H. A., Ashraf, N., & Qureshi, H. K. (2020). BlockEV: Efficient and secure charging station selection for electric vehicles. IEEE Transactions on Intelligent Transportation Systems, 22(7), 4194-4211.

[38] Ahmad, F., Adnane, A., & Franqueira, V. N. (2016). A systematic approach for cyber security in vehicular networks. Journal of Computer and Communications, 4(16), 38-62.

[39] Schmidt, K., Saucke, F., & Spengler, T. S. (2018).
Scheduling of electric vehicles in the police fleet. In Operations Research Proceedings 2017: Selected Papers of the Annual International Conference of the German Operations Research Society (GOR), Freie Universität Berlin, Germany, September 6-8, 2017 (pp. 693-699). Springer International Publishing.

[40] Bodet, C., Schülke, A., Erickson, K., & Jabłonowski, R. (2012, November). Optimization of charging infrastructure usage under varying traffic and capacity conditions. In 2012 IEEE Third International Conference on Smart Grid Communications (SmartGridComm) (pp. 424-429). IEEE.

[41] Qin, H., & Zhang, W. (2011, September). Charging scheduling with minimal waiting in a network of electric vehicles and charging stations. In Proceedings of the Eighth ACM International Workshop on Vehicular Inter-networking (pp. 51-60).

[42] State of Electric Vehicles – March 2022 - Electric Vehicle Council. (n.d.). https://electricvehiclecouncil.com.au/reports/state-of-electric-vehicles-march-2022/

[43] Vatanparvar, K. (2018). Reliable and Energy Efficient Battery-Powered Cyber-Physical Systems. University of California, Irvine.

[44] Jurdak, R., Dorri, A., & Vilathgamuwa, M. (2021). A trusted and privacy-preserving internet of mobile energy. IEEE Communications Magazine, 59(6), 89-95.

[45] Case, D. U. (2016). Analysis of the cyber attack on the Ukrainian power grid. Electricity Information Sharing and Analysis Center (E-ISAC), 388(1-29), 3.

[46] Nisar, F., Ramachandran, G., Vilathgamuwa, M., & Jurdak, R. (2022, December). Manipulation of Actual Demand in Electric Vehicles (MaD EV): A Cyber-Security Perspective. In 2022 IEEE 7th Southern Power Electronics Conference (SPEC) (pp. 1-8). IEEE.

[47] Rohde, K. W. (2019). Cyber security of DC fast charging: Potential impacts to the electric grid (No. INL/CON-18-52242-Rev000). Idaho National Lab. (INL), Idaho Falls, ID (United States).

[48] Khan, O. G. M., El-Saadany, E., Youssef, A., & Shaaban, M. (2019, October). Impact of electric vehicles botnets on the power grid. In 2019 IEEE Electrical Power and Energy Conference (EPEC) (pp. 1-5). IEEE.

[49] Morrison, G. S. (2018). Threats and mitigation of DDoS cyberattacks against the US power grid via EV charging (Doctoral dissertation, Wright State University).

[50] Koscher, K., Czeskis, A., Roesner, F., Patel, S., Kohno, T., Checkoway, S., ... & Savage, S. (2010, May). Experimental security analysis of a modern automobile. In 2010 IEEE Symposium on Security and Privacy (pp. 447-462). IEEE.

[51] Schneider Electric EVLink Parking — CISA. (2019, January 31). www.cisa.gov. https://www.cisa.gov/news-events/ics-advisories/icsa-19-031-01

[52] Johnson, J., Berg, T., Anderson, B., & Wright, B. (2022). Review of electric vehicle charger cybersecurity vulnerabilities, potential impacts, and defenses. Energies, 15(11), 3931.

[53] Anderson, B. R., & Johnson, J. B. (2021). Securing Vehicle Charging Infrastructure (No. SAND2021-5745PE). Sandia National Lab. (SNL-NM), Albuquerque, NM (United States).

[54] Alcaraz, C., Lopez, J., & Wolthusen, S. (2017). OCPP protocol: Security threats and challenges. IEEE Transactions on Smart Grid, 8(5), 2452-2459.

[55] Huseinović, A., Mrdović, S., Bicakci, K., & Uludag, S. (2020). A survey of denial-of-service attacks and solutions in the smart grid. IEEE Access, 8, 177447-177470.

[56] PowerWorld: The visual approach to electric power systems. (n.d.).
https://www.powerworld.com/

[57] Breitinger, F., & Nickel, C. (2010). User survey on phone security and usage. BIOSIG 2010: Biometrics and Electronic Signatures. Proceedings of the Special Interest Group on Biometrics and Electronic Signatures.

[58] Khan, S. H., Akbar, M. A., Shahzad, F., Farooq, M., & Khan, Z. (2015). Secure biometric template generation for multi-factor authentication. Pattern Recognition, 48(2), 458-472.

[59] Iqbal, A., Rajasekaran, A. S., Nikhil, G. S., & Azees, M. (2021). A secure and decentralized blockchain based EV energy trading model using smart contract in V2G network. IEEE Access, 9, 75761-75777.

[60] Hyperledger Caliper. https://github.com/hyperledger/caliper-benchmarks

[61] Chaouachi, A., Bompard, E., Fulli, G., Masera, M., De Gennaro, M., & Paffumi, E. (2016). Assessment framework for EV and PV synergies in emerging distribution systems. Renewable and Sustainable Energy Reviews, 55, 719-728.

[62] Al-Ogaili, A. S., Hashim, T. J. T., Rahmat, N. A., Ramasamy, A. K., Marsadek, M. B., Faisal, M., & Hannan, M. A. (2019). Review on scheduling, clustering, and forecasting strategies for controlling electric vehicle charging: Challenges and recommendations. IEEE Access, 7, 128353-128371.

[63] Iqbal, A., Rajasekaran, A. S., Nikhil, G. S., & Azees, M. (2021). A secure and decentralized blockchain based EV energy trading model using smart contract in V2G network. IEEE Access, 9, 75761-75777.

[64] Mathew, S., & Varia, J. (2014). Overview of Amazon Web Services. Amazon Whitepapers, 105, 1-22.

[65] Kim, Y., Hakak, S., & Ghorbani, A. (2023, August). DDoS Attack Dataset (CICEV2023) against EV Authentication in Charging Infrastructure. In 2023 20th Annual International Conference on Privacy, Security and Trust (PST) (pp. 1-9). IEEE.

[66] Pham, T. N., Oo, A. M. T., & Trinh, H. (2021). Detecting and isolating false data injection attacks on electric vehicles of smart grids using distributed functional observers. IET Generation, Transmission & Distribution, 15(4), 762-779.

[67] Kim, M., Park, K., Yu, S., Lee, J., Park, Y., Lee, S. W., & Chung, B. (2019). A secure charging system for electric vehicles based on blockchain. Sensors, 19(13), 3028.

[68] Azzouz, I., & Fekih Hassen, W. (2023). Optimization of Electric Vehicles Charging Scheduling Based on Deep Reinforcement Learning: A Decentralized Approach. Energies, 16(24), 8102.

[69] mobiwp. (2020, March 2). Working Groups – MOBI — The New Economy of Movement. https://dlt.mobi/mobi-working-groups/
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.5204/thesis.eprints.247165?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.5204/thesis.eprints.247165, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBYNCND", "status": "HYBRID", "url": "https://eprints.qut.edu.au/247165/1/Fatima%2BNisar%2BMPhil_Thesis%283%29.pdf" }
null
[]
true
null
[]
38,046
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Medicine", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01c715d849c663b96b41018a8e842c2769768418
[ "Computer Science", "Medicine" ]
0.88402
Distributed Denial of Service Attack Detection in Network Traffic Using Deep Learning Algorithm
01c715d849c663b96b41018a8e842c2769768418
Italian National Conference on Sensors
[ { "authorId": "2261595503", "name": "Mahrukh Ramzan" }, { "authorId": "2261542302", "name": "Muhammad Shoaib" }, { "authorId": "2580281", "name": "Ayesha Altaf" }, { "authorId": "3118239", "name": "Shazia Arshad" }, { "authorId": "1491127485", "name": "Faiza Iqbal" }, { "authorId": "2243994776", "name": "Ángel Kuc Castilla" }, { "authorId": "2007377902", "name": "Imran Ashraf" } ]
{ "alternate_issns": null, "alternate_names": [ "SENSORS", "IEEE Sens", "Ital National Conf Sens", "IEEE Sensors", "Sensors" ], "alternate_urls": [ "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-142001", "http://www.mdpi.com/journal/sensors", "https://www.mdpi.com/journal/sensors" ], "id": "3dbf084c-ef47-4b74-9919-047b40704538", "issn": "1424-8220", "name": "Italian National Conference on Sensors", "type": "conference", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-142001" }
Internet security is a major concern these days due to the increasing demand for information technology (IT)-based platforms and cloud computing. With its expansion, the Internet has been facing various types of attacks. Viruses, denial of service (DoS) attacks, distributed DoS (DDoS) attacks, code injection attacks, and spoofing are the most common types of attacks in the modern era. Due to the expansion of IT, the volume and severity of network attacks have been increasing lately. DoS and DDoS are the most frequently reported network traffic attacks. Traditional solutions such as intrusion detection systems and firewalls cannot detect complex DDoS and DoS attacks. With the integration of artificial intelligence-based machine learning and deep learning methods, several novel approaches have been presented for DoS and DDoS detection. In particular, deep learning models have played a crucial role in detecting DDoS attacks due to their exceptional performance. This study adopts deep learning models including recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) to detect DDoS attacks on the most recent dataset, CICDDoS2019, and a comparative analysis is conducted with the CICIDS2017 dataset. The comparative analysis contributes to the development of a competent and accurate method for detecting DDoS attacks with reduced execution time and complexity. The experimental results demonstrate that models perform equally well on the CICDDoS2019 dataset with an accuracy score of 0.99, but there is a difference in execution time, with GRU showing less execution time than those of RNN and LSTM.
# sensors _Article_

## Distributed Denial of Service Attack Detection in Network Traffic Using Deep Learning Algorithm

**Mahrukh Ramzan [1], Muhammad Shoaib [1], Ayesha Altaf [1,]*, Shazia Arshad [1], Faiza Iqbal [1], Ángel Kuc Castilla [2,3,4] and Imran Ashraf [5,]***

1 Department of Computer Science, University of Engineering & Technology (UET), Lahore 54890, Pakistan; mahrukh312@gmail.com (M.R.); shoaib@uet.edu.pk (M.S.); shazia.shoaib@uet.edu.pk (S.A.); faiza.iqbal@uet.edu.pk (F.I.)
2 Universidad Europea del Atlántico, Isabel Torres 21, 39011 Santander, Spain; angel.kuc@unini.edu.mx
3 Universidad Internacional Iberoamericana, Campeche 24560, Mexico
4 Universidad Internacional Iberoamericana, Arecibo, PR 00613, USA
5 Department of Information and Communication Engineering, Yeungnam University, Gyeongsan 38541, Republic of Korea
* Correspondence: ayesha.altaf@uet.edu.pk (A.A.); imranashraf@ynu.ac.kr (I.A.)

**Citation:** Ramzan, M.; Shoaib, M.; Altaf, A.; Arshad, S.; Iqbal, F.; Castilla, Á.K.; Ashraf, I. Distributed Denial of Service Attack Detection in Network Traffic Using Deep Learning Algorithm. Sensors 2023, 23, 8642. https://doi.org/10.3390/s23208642

Academic Editor: Ilsun You. Received: 13 September 2023; Revised: 9 October 2023; Accepted: 19 October 2023; Published: 23 October 2023.

**Copyright:** © 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

**Abstract:** Internet security is a major concern these days due to the increasing demand for information technology (IT)-based platforms and cloud computing. With its expansion, the Internet has been facing various types of attacks. Viruses, denial of service (DoS) attacks, distributed DoS (DDoS) attacks, code injection attacks, and spoofing are the most common types of attacks in the modern era. Due to the expansion of IT, the volume and severity of network attacks have been increasing lately. DoS and DDoS are the most frequently reported network traffic attacks. Traditional solutions such as intrusion detection systems and firewalls cannot detect complex DDoS and DoS attacks. With the integration of artificial intelligence-based machine learning and deep learning methods, several novel approaches have been presented for DoS and DDoS detection. In particular, deep learning models have played a crucial role in detecting DDoS attacks due to their exceptional performance. This study adopts deep learning models including recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) to detect DDoS attacks on the most recent dataset, CICDDoS2019, and a comparative analysis is conducted with the CICIDS2017 dataset. The comparative analysis contributes to the development of a competent and accurate method for detecting DDoS attacks with reduced execution time and complexity. The experimental results demonstrate that the models perform equally well on the CICDDoS2019 dataset with an accuracy score of 0.99, but there is a difference in execution time, with GRU showing less execution time than RNN and LSTM.

**Keywords:** distributed denial of service attacks; denial of service attack detection; deep learning; network security
**1. Introduction**

The use of Internet technology is expanding rapidly, enabling hundreds of thousands of devices to perform online operations. The Internet has been widely embraced across different domains and, as it has expanded, it has become vulnerable to several types of attacks. Among these, denial of service (DoS) and distributed DoS (DDoS) are the most frequently occurring attacks. There are many methods to launch DoS attacks; the primary goal of DoS and DDoS attacks is to stop the services that applications provide to users by exhausting network resources. A DDoS attack occurs when the hosted server is targeted with a large volume of irrelevant traffic generated by zombie devices [1].

DoS and DDoS attacks are growing in strength and frequency; an average of 28.7 k attacks are launched every day. As per Neustar's Cyber Threats and Trends Report, the frequency of DDoS attacks increased by 200% in the first six months of 2019, while the volume increased by 73% in 2018. It is predicted that by the end of 2023, the total number of DDoS attacks will have doubled compared to 2018, reaching up to 15.4 million. Neustar's Cyber Threats and Trends Report 2020 indicates that a 151% increase in the number of attacks was observed in June 2020 compared to 2019 [2]. In addition, there is a 192% increase in the largest attack size and an 81% increase in the maximum attack intensity. The attack volume also increased to 12 Gbps in June 2020, compared to 11 Gbps over the same period in 2019. Therefore, there is an increased need to develop a solution that detects DDoS attacks effectively and reliably [3,4]. Well-known DDoS attacks include SYN, TCP, ICMP, UDP, HTTP, and DNS floods [5]. DDoS attack types and their sub-types are shown in Figure 1.

**Figure 1. Categorization of DDoS attacks.**

Several machine learning (ML) and deep learning (DL) models have been utilized for network attack detection. For example, decision tree (DT), logistic regression (LoR), linear regression (LR), Naive Bayes (NB), support vector machine (SVM), K nearest neighbor (KNN), random forest (RF), XGBoost, AdaBoost, ResNet, artificial neural networks (ANNs), and convolutional neural networks (CNNs) have been implemented on the CICDDoS2019 dataset to detect DDoS attacks [6]. In addition, the CICIDS2017, KDD, CAIDA 2007, IoT-NI, BoT-IoT, MQTT, MQTTset, IoT-23, IoT-DS2, and UNSW-NB15 datasets have been utilized for DDoS attack detection. The CICDDoS2019 dataset [6] is a well-known benchmark for analyzing the performance of ML and DL models on DDoS attacks. It contains real DDoS attacks captured from network traffic and covers a wide variety of attack types. Twelve attack types are available in the dataset, including 'DNS', 'SNMP', 'NTP', 'WebDDoS', 'MSSQL', 'UDP', 'LDAP', 'NetBIOS', 'SSDP', 'PortScan', 'UDP-Lag', and 'SYN'. Many researchers have used this dataset to find the best features and the best model for detecting DDoS attacks with minimum execution time and cost. DL techniques surpass ML techniques in terms of precision and accuracy, and they can process huge amounts of data [7]. Recurrent neural networks (RNNs) are well suited to large amounts of data because they combine previous computation with the current input for evaluation, preserving information with minimal loss. Long short-term memory (LSTM) and gated recurrent units (GRU) are special forms of RNN.
The primary motivation for using LSTM and GRU is their retention of prominent information for later use in the system, which can work effectively in detecting both known and unknown attacks [5,8]. This study adopts DL models for detecting DDoS attacks using the CICDDoS2019 dataset; the RNN, LSTM, and GRU models are utilized for experiments. The dataset is preprocessed through several steps, including data normalization, handling of missing and null values, transformation of categorical values, label encoding, and feature selection. Feature selection is performed to select the top 20 features and obtain better performance. Experimental results are presented as training and validation graphs, together with accuracy, recall, precision, F1 score, and execution time for binary and multi-class classification [9].

Section 2 describes the related work for this study. Section 3 presents the proposed methodology, including the implemented models, model selection, and parameter optimization. Results and discussion are given in Section 4. Finally, Section 5 concludes this study.

**2. Related Work**

This section presents previous work in the form of a comprehensive literature review. IT has gained significant popularity in the modern world, and DoS and DDoS are the most prevalent attacks compromising IT security. The primary objective of such an attack is to disable victims' devices and make them inaccessible to legitimate users. A large body of work can be found on network attack detection. For example, the research in [1] discussed the problems associated with DDoS attacks on Internet of Things (IoT) devices. The perception layer, also known as the sensing layer, uses radio frequency identification (RFID) tags, the global positioning system (GPS), wireless sensor networks (WSN), Bluetooth, and cameras. Eavesdropping and radio frequency (RF) jamming attacks occur at the perception layer; flooding and reflection attacks are well-known network layer attacks; signature wrapping and flooding attacks are well-known middleware layer attacks; and reprogramming attacks and path-based DoS attacks are well-known application layer attacks.

The studies in [10,11] used six different ML models, including NB, KNN, DT, SVM, RF, and LR, on the CICDDoS2019 dataset. Results indicate that the best accuracy of 99% is obtained using the DT and RF models; however, DT is preferable to RF due to its lower computational complexity. The authors adopt an image processing-based approach for network attack detection in [3], showing that network traffic transformed into images can be used with a CNN for attack detection. Results using the ResNet model show 99% accuracy for binary DDoS detection and 87% accuracy across eleven kinds of DDoS attacks. In [5], SVM, KNN, NB, RF, AdaBoost, and XGBoost are used for DDoS detection, evaluated by accuracy, F1 score, and training time on the CICDDoS2019 dataset; XGBoost and AdaBoost are found to predict attacks with 100% accuracy. Another study in [12] implemented RT, KNN, DT, and ANN for DDoS attack detection using the CICDDoS2019 dataset, reporting 99.95% accuracy with the ANN model. Similarly, in [13] the authors employed mathematical and ML models for attack detection using the CAIDA 2007 dataset.
The accuracy of LoR varies from 99% to 100%, while NB shows accuracy between 98% and 99%. The results showed an accuracy of 100% for the ML model and 99.75% for the mathematical models. Along the same lines, the study in [14] used eight ML models for DDoS attack detection on the CICIDS2017 dataset, with K-fold cross-validation used during training. RF was found to be the best of the eight models, detecting DDoS attacks with 99.885% precision and a 0.05% false alarm rate. In [15], the authors trained an RNN to detect DDoS attacks using gradient descent with momentum, scaled conjugate gradient, and a variable learning rate descent algorithm. The accuracy with the variable learning rate descent algorithm is 99.9%, outperforming the momentum gradient descent and scaled conjugate gradient algorithms. The study in [16] used an RF model together with a highly adaptable neural network for DDoS attack detection; results indicate that RF and the neural network achieved 95.2% and 83% accuracy, respectively. Similarly, the authors in [17] proposed a DL approach for DDoS detection using RNN in IoT networks, also employing LSTM, Bi-LSTM, and GRU. The proposed models were implemented on the NSL-KDD, IoT-NI, IoT-23, BoT-IoT, MQTT, IoT-DS2, and MQTTset datasets, with the best results reported for the RNN model. A model called LBDMIDS was proposed in [18], showing promising performance for intrusion detection; bidirectional and stacked LSTM were also evaluated on the UNSW-NB15 and BoT-IoT datasets. Stacked LSTM accuracy was 96.60% and bidirectional LSTM accuracy was 96.41% on the UNSW-NB15 dataset, while on the BoT-IoT dataset both variants reached 99.99% accuracy; the results produced by LBDMIDS are the best. The authors of [19] utilize the KDD dataset for experiments using an ANN trained with five different algorithms: Polak–Ribière conjugate gradient, resilient backpropagation, Fletcher–Powell conjugate gradient, variable learning rate gradient descent, and conjugate gradient with Powell/Beale restarts. Conjugate gradient with Powell/Beale restarts showed superior performance, with 99% accuracy. Three different neural networks were compared in [20] for DDoS attack detection: cascade, feed-forward, and fitting neural networks trained with the one-step secant and Quasi-Newton backpropagation algorithms. Shallow neural networks gave good accuracy with less computing time [20]. The study in [21] compared eight algorithms for DDoS detection, including MLP, LSTM, BiLSTM, KNN, SVM, linear discriminant analysis (LDA), DT, and RF. LSTM and BiLSTM accuracy ranges between 99.9% and 100%, while the test accuracy values of LSTM, MLP, BiLSTM, LDA, KNN, SVM, DT, and RF are 79.5%, 80%, 82.3%, 77%, 82.8%, 69%, 77.7%, and 75.4%, respectively. Among the ML models, SVM has the best DDoS detection accuracy, at 97.1%; overall, BiLSTM performs best. Similarly, the research in [22] used a hybrid model based on RNN and extreme learning machine (ELM) algorithms. Features were extracted from the dataset using linear regression with recursive feature elimination and a sequential forward selector, and the NSL-KDD dataset was used for experiments. The proposed hybrid model showed enhanced accuracy of up to 99%.
Another similar work that utilized the NSL-KDD dataset is [23], where the authors used an LSTM RNN for detecting DDoS attacks, achieving a high accuracy of 97.37%. An LSTM model is used in [24] for DDoS attack detection on the UNSW-NB15 dataset; binary classification was performed to separate attack and normal traffic, and the model detects attacks with 99% accuracy and up to 100% precision. The study in [25] combined three algorithms, RNN, LSTM, and CNN, to build a bidirectional CNN-BiLSTM DDoS detection model, evaluated on the CICIDS2017 dataset. The individual accuracy of RNN and LSTM reached 99.00%, while CNN showed 98.82%; the proposed CNN-BiLSTM model obtained 99.76%. Similarly, the research in [26] utilized the CICDDoS2019 dataset with a backpropagation neural network called Kalman backpropagation, achieving an accuracy of 94% and a precision of 91.22%.

The studies discussed above indicate that a rich variety of ML models have been applied to DDoS attack detection, including LR, LoR, DT, SVM, NB, KNN, RF, XGBoost, and AdaBoost, tested on datasets such as UNSW-NB15, CICIDS2017, KDD, NSL-KDD, CAIDA 2007, and CICDDoS2019. While several ML models have been evaluated in the existing literature, DL models are not well studied, especially on the CICDDoS2019 dataset. DL methods surpass ML methods in precision and accuracy, as they can process large amounts of data. This study aims to utilize RNNs on CICDDoS2019 to detect DDoS attacks and to perform multi-class classification.

**3. Materials and Methods**

This study proposes an approach based on the RNN model to detect DDoS attacks. In addition, RNN, LSTM, and GRU are used for binary and multi-class classification. Figure 2 illustrates the methodology adopted in the current study, comprising data normalization, feature extraction, model training, and attack detection modules. As Figure 2 shows, this study uses the CICDDoS2019 dataset for experiments [6]. The dataset must be in an appropriate form for model training to obtain the best performance. For this purpose, data preprocessing is carried out in several steps. Missing and null values are removed to reduce ambiguity in the data and improve the models' training process. Categorical values are converted to numerical values, as required by deep learning models, and the data are then normalized. During preprocessing, feature selection is carried out to select the top 20 features; this reduces computational complexity while retaining the features most useful for detecting DDoS attacks in network traffic. Afterward, the data are split into training and testing subsets to train the RNN, LSTM, and GRU models for binary and multi-class classification of attacks; the testing subset is later used to evaluate the trained models.

**Figure 2. Methodology adopted in this study.**

_3.1. Data Preprocessing_

Before training the models, the dataset needs to be preprocessed to remove noise and reduce the amount of redundant or unnecessary data. Data preprocessing improves model performance and reduces computational complexity.
3.1.1. Data Normalization

Standard scaler normalization rescales the features of the selected dataset. The CICDDoS2019 dataset contains features with very different dimensions, scales, and distributions; for example, the 'Fwd Packets/s' feature takes very large values for some records and very small values for others. Using these raw features to train DL models tends to yield poor performance. The basic purpose of feature scaling is to ensure that no single feature disproportionately impacts the results, while preserving the relationship between the minimum and maximum values of each feature. The features are therefore rescaled to a fixed scale using standard scaler normalization, which centers each feature at mean 0 with standard deviation 1. The expression in (1) is used to obtain the normalized value of a feature [27]:

$$x_n = \frac{x - \mu}{\sigma} \quad (1)$$

where $x_n$ is the normalized value, $x$ the original value, $\mu$ the mean of the data, and $\sigma$ the standard deviation of the data.

3.1.2. Dealing with Missing and Null Values

As explained above, this study uses the CICDDoS2019 dataset for DDoS attack detection in network traffic. Dealing with missing and null values is an important preprocessing step that can impact the accuracy and precision of the models. For the current study, records with missing or null values are removed from the dataset; removing such records reduces computational complexity and improves model performance [28].

3.1.3. Dealing with Categorical Values

ML and DL models work on numerical values, so categorical values and special characters must be transformed into numerical values for better model performance. To convert categorical data types into numerical ones, label encoding and one-hot encoding are used. Label encoding converts categorical data into numerical data by assigning a unique numerical label to each category; the sklearn library provides a LabelEncoder class for this transformation [29].

3.1.4. Labels to One-Hot Encoding

When dealing with the output labels in the CICDDoS2019 dataset, one-hot encoding is preferable to label encoding because the output labels are categorical, not ordinal. Label encoding assigns a unique numeric value to each class, implying an inherent ordering of the classes; however, there is no meaningful order or relationship between the distinct classes of the output labels. In the case of DDoS attacks, for example, there may be several classes, such as "DrDOS_SNMP", "TFTP", "DrDOS_SSDP", and ICMP, and assigning arbitrary numeric values to these classes can introduce unintended associations or biases into the model. One-hot encoding keeps the labels' categorical character and treats each category as an independent class, with no numerical order or connection imposed. It guarantees that the model understands the categorical nature of the output labels and prevents misinterpretation of the data. As a result, one-hot encoding is used for the output labels in the CICDDoS2019 dataset to properly reflect their categorical nature and give a suitable input representation for ML and DL models [30].
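As a concrete illustration, the preprocessing steps above can be sketched in a few lines of Python. This is a minimal sketch, not the authors' released code; the file path and the "Label" column name are assumptions made for illustration.

```python
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler, LabelEncoder
from tensorflow.keras.utils import to_categorical

# Load a CSV export of the dataset (path is illustrative).
df = pd.read_csv("cicddos2019_sample.csv")

# Section 3.1.2: drop records containing missing, null, or infinite values.
df = df.replace([np.inf, -np.inf], np.nan).dropna()

# Section 3.1.3: integer-encode the categorical attack label.
label_enc = LabelEncoder()
y_int = label_enc.fit_transform(df["Label"])

# Section 3.1.4: one-hot encode the labels for multi-class training.
y = to_categorical(y_int)

# Section 3.1.1 / Equation (1): standard-scale the numeric features.
X = StandardScaler().fit_transform(df.drop(columns=["Label"]).select_dtypes("number"))
```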
3.1.5. Feature Selection

Feature selection is also an important preprocessing step: by selecting important, highly weighted features of the CICDDoS2019 dataset, the attack-detection performance of the models can be increased. An extra trees classifier, a decision tree-based ensemble, is used to select prominent features [31]. For this study, the top 20 features selected by the extra trees classifier are 'Timestamp', 'Source Port', 'Min Packet Length', 'Fwd Packet Length Min', 'Flow ID', 'Packet Length Mean', 'Fwd Packet Length Max', 'Average Packet Size', 'ACK Flag Count', 'Avg Fwd Segment Size', 'Fwd Packet Length Mean', 'Flow Bytes/s', 'Max Packet Length', 'Protocol', 'Fwd Packets/s', 'Flow Packets/s', 'Total Length of Fwd Packets', 'Subflow Fwd Bytes', 'Destination Port', and 'act_data_pkt_fwd'; these are the features used for model training.

3.1.6. Data Splitting

Data splitting is another important preprocessing step [32]. The CICDDoS2019 dataset is split into training and testing sets using the sklearn library [33]: 70% of the data are used for training and 30% for testing.

_3.2. Classification Models_

This study uses RNN, GRU, and LSTM models to detect DDoS attacks. A brief overview of these models is provided for completeness.

3.2.1. Recurrent Neural Network

RNNs have several applications, including image processing, market prediction, handwriting recognition, and speech recognition. An RNN works well on large amounts of data, and backpropagation improves its final result; the vanishing gradient problem that arises during backpropagation in RNNs is handled by its variants, the LSTM and GRU models. The RNN is adopted in this study because the dataset is large and contains attack sequences. When selecting a model for DDoS attack detection, three important factors are considered: data availability, task complexity, and training resources. Hyperparameter optimization is also very important for obtaining optimal performance [34]. The implementation details of the RNN model are provided in Algorithm 1; a minimal code sketch follows below.

**Algorithm 1.** Implementing a two-layer RNN model.

Require: input sequence $x_1, x_2, \ldots, x_T$; initial hidden states $h1_0$ and $h2_0$ (both initialized from $h_0$); RNN parameters $(W_{xh1}, W_{h1h1}, W_{h1h2}, W_{xh2}, W_{h2h2}, W_{h2y}, b_{h1}, b_{h2}, b_y)$; ReLU activation; decision labels Attack := 1 and No Attack := 0.

1. For each time step $t = 1, \ldots, T$:
2. Compute the activation of the first RNN layer, $a1_t = W_{xh1} \cdot x_t + W_{h1h1} \cdot h1_{t-1} + b_{h1}$, and apply the ReLU activation to obtain the hidden state $h1_t = \mathrm{ReLU}(a1_t)$.
3. Compute the activation of the second RNN layer, $a2_t = W_{h1h2} \cdot h1_t + W_{xh2} \cdot x_t + W_{h2h2} \cdot h2_{t-1} + b_{h2}$, and apply the ReLU activation to obtain the hidden state $h2_t = \mathrm{ReLU}(a2_t)$.
4. Return the output at time step $t$, $y_t = W_{h2y} \cdot h2_t + b_y$, used to predict whether the incoming traffic is a DDoS attack or not.

The parameters of the RNN are represented by the weight matrices and bias vectors in the equations above; Table 1 describes the symbols used in Algorithm 1.

**Table 1. Symbols and their respective descriptions used throughout Algorithm 1.**

| Symbol | Description |
| --- | --- |
| $a1_t$, $a2_t$ | Activations of the first and second RNN layers |
| $W_{xh1}$ | Weight matrix for the first RNN layer's input $x_t$ |
| $W_{xh2}$ | Weight matrix for the second RNN layer's input $x_t$ |
| $W_{h1h1}$ | Weight matrix for the first RNN layer's previous hidden state $h1_{t-1}$ |
| $W_{h2h2}$ | Weight matrix for the second RNN layer's previous hidden state $h2_{t-1}$ |
| $b_{h1}$ | Bias vector for the first RNN layer |
| $b_{h2}$ | Bias vector for the second RNN layer |
| $h1_t$, $h2_t$ | Hidden states of the first and second RNN layers, computed by applying ReLU to $a1_t$ and $a2_t$ |
| $y_t$ | Output at time step $t$ |
| $W_{h2y}$ | Weight matrix from the hidden state $h2_t$ to the output $y_t$ |
| $b_y$ | Bias vector for the output |
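Putting Sections 3.1.5 and 3.1.6 and Algorithm 1 together, a Keras sketch of the pipeline might look as follows. It assumes the `X` and `y_int` arrays from the preprocessing sketch above, treats each flow record as a one-step sequence (a simplifying assumption for illustration), and uses 8 units per recurrent layer, the value Table 2 lists for LSTM/GRU; for the RNN this unit count is also an assumption.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from tensorflow.keras import Sequential, layers

# Section 3.1.5: rank features with an extra trees classifier, keep the top 20.
forest = ExtraTreesClassifier(n_estimators=100, random_state=42).fit(X, y_int)
top20 = np.argsort(forest.feature_importances_)[::-1][:20]

# Section 3.1.6: 70/30 train/test split. y_bin marks attack (1) vs benign (0);
# treating class 0 as the benign label is an assumption of this sketch.
y_bin = (y_int != 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(
    X[:, top20], y_bin, test_size=0.3, random_state=42)

# Algorithm 1: two ReLU RNN layers followed by a binary output layer.
# Each record is reshaped into a one-step sequence of 20 features.
model = Sequential([
    layers.Input(shape=(1, 20)),
    layers.SimpleRNN(8, activation="relu", return_sequences=True),
    layers.SimpleRNN(8, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_train.reshape(-1, 1, 20), y_train, epochs=5, batch_size=1000,
          validation_data=(X_test.reshape(-1, 1, 20), y_test))
```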
3.2.2. Long Short-Term Memory

For analyzing network traffic data, LSTM is a strong choice. The ability of the LSTM model to recall previous inputs helps it find patterns and long-lasting connections in input sequences. The CICDDoS2019 dataset contains attack details such as flow lengths, source and destination IPs, and port numbers, reflecting the sequential nature of attacks in network traffic. LSTM also overcomes the vanishing gradient issue of the RNN and is used in real-world applications where data are large and data handling is more complicated. LSTM works using input, output, and forget gates, which control the flow of information in and out of cells; attack patterns are memorized by the LSTM cell [35]. The LSTM model is trained to classify instances as normal or attack, learning the patterns of regular network traffic to detect DDoS attacks. For multi-class classification, network traffic instances are labeled 0, 1, 2, 3, and so on: label encoding converts each attack type to a specific value, and instances of each attack type from the training portion of the traffic are used to train the model for the corresponding detection class. The memory cell of the LSTM model performs the categorization of network traffic attacks successfully.

3.2.3. Gated Recurrent Unit

The GRU model is also used to detect attacks in network traffic; it takes less memory and is time-efficient. GRU captures long-term relationships in the temporal flow of network traffic. Compared to the RNN and LSTM models, the GRU model is simpler, which increases computational efficiency without compromising its ability to accurately model the temporal dynamics of the data. GRU takes less time to train because it has a simplified gate arrangement with no output gate, operating with two sigmoid gates and one hidden state [36]. The GRU model learns the patterns of regular network traffic to detect DDoS attacks in the CICDDoS2019 dataset.

3.2.4. Hyperparameter Training

For DL models, hyperparameter tuning is the process of determining the optimal combination of parameters to maximize network performance and efficacy. It entails systematically exploring various hyperparameter values or ranges, training and evaluating the network for each configuration, and selecting the set of hyperparameters that performs best on a validation set or under cross-validation. The parameter values of recurrent models vary with the requirements and the dataset; the configuration parameters used for model training are displayed in Table 2.

3.2.5. Learning Rate

The learning rate parameter defines the step size of each iteration as it moves toward the minimum of the loss function [37].
To find the best learning rate, it is necessary to experiment with multiple values. This study used the adaptive moment estimation (Adam) method for the LSTM and GRU models, which gave the best optimization at a learning rate of 0.001.

**Table 2. Parameters for the RNN, LSTM, and GRU models.**

| Parameter | LSTM | GRU |
| --- | --- | --- |
| Activation | ReLU; Softmax (multi-class); Sigmoid (binary) | ReLU; Softmax (multi-class); Sigmoid (binary) |
| Optimizer | Adam | Adam |
| Learning rate | 0.001 | 0.001 |
| Loss | Categorical cross-entropy (multi-class); binary cross-entropy (binary) | Categorical cross-entropy (multi-class); binary cross-entropy (binary) |
| LSTM/GRU layers | 2 | 2 |
| Hidden layers | 2 | 2 |
| Neurons per LSTM/GRU layer | 8 | 8 |
| Neurons per hidden layer | 16, 8 (1st layer, 2nd layer) | 16, 8 (1st layer, 2nd layer) |
| Batch size | 1000 | 1000 |
| Epochs | 100 | 100 |

3.2.6. Overfitting Prevention

Overfitting can occur during the training of neural networks. Early stopping and dropout layers are the methods used in this research to prevent it. Early stopping allows the models to run for a limited number of additional epochs without improvement before halting, to avoid overfitting the training data. Dropout layers are also used; these drop certain neurons at random during training to prevent them from dominating the learning process [38].

3.2.7. Activation Functions

This study used the rectified linear unit (ReLU) activation. By applying the ReLU function, the model learns the complicated features of the network's hidden layers. Compared to other activation functions, such as sigmoid and tanh, ReLU is more efficient [39].

3.2.8. Early Stopping

Early stopping is a technique in which training stops when the performance of the model does not improve for a fixed number of epochs. The early stopping callback tracks the validation loss with a minimum change of 0.001: training ends early if the validation loss does not decrease by at least 0.001 over five consecutive epochs [40].

3.2.9. Optimizer

The Adam optimizer is an optimization algorithm that combines the RMSprop and AdaGrad techniques, maintaining per-parameter learning rates adapted according to the first and second moments of the gradients [41]. Adam dynamically modifies the learning rate for each parameter during training to efficiently update the weights of the LSTM and GRU models.

3.2.10. Batch Size

Batch size is the number of training samples the model processes in each step of the training process. Research has found that a larger batch size leads to more stable gradients and more stable training, while a smaller batch size leads to faster training but less stable and less accurate models. Batch size typically ranges from 32 upward [42]. In the proposed work, experiments were carried out with batch sizes of 128, 1000, and 2050; a batch size of 1000 gave the best results.

3.2.11. Hidden Layers and Number of Neurons

In this work, 2 LSTM and 2 GRU layers are employed with 2 hidden layers. The models were also implemented with 8, 128, and 256 neurons per layer; the results were the same, but 128 and 256 neurons increased the computational overhead, so 8 neurons were selected. A sketch of the full configuration follows below.
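The configuration in Table 2 and Sections 3.2.5 through 3.2.11 can be assembled into a Keras model along the following lines. This is a hedged sketch rather than the authors' code: the dropout rate and the placement of the dropout layer are assumptions, and `cell` stands for either `layers.LSTM` or `layers.GRU`.

```python
from tensorflow.keras import Sequential, layers
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.optimizers import Adam

def build_model(cell, n_classes):
    """Two recurrent layers of 8 units plus hidden layers of 16 and 8 neurons,
    mirroring Table 2; `cell` is layers.LSTM or layers.GRU."""
    binary = n_classes == 2
    model = Sequential([
        layers.Input(shape=(1, 20)),   # one-step sequences of 20 selected features
        cell(8, return_sequences=True),
        cell(8),
        layers.Dense(16, activation="relu"),
        layers.Dropout(0.2),           # dropout rate is an assumption
        layers.Dense(8, activation="relu"),
        layers.Dense(1 if binary else n_classes,
                     activation="sigmoid" if binary else "softmax"),
    ])
    model.compile(optimizer=Adam(learning_rate=0.001),
                  loss="binary_crossentropy" if binary else "categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Section 3.2.8: stop if validation loss improves by < 0.001 for 5 straight epochs.
stopper = EarlyStopping(monitor="val_loss", min_delta=0.001, patience=5)

lstm = build_model(layers.LSTM, n_classes=12)
# lstm.fit(X_train.reshape(-1, 1, 20), y_train, validation_split=0.2,
#          epochs=100, batch_size=1000, callbacks=[stopper])
```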
_3.3. Evaluation Metrics_

The performance of the models is evaluated using several metrics derived from the confusion matrix, whose four entries are true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). Accuracy shows how frequently the trained models detect the targeted attacks correctly and is calculated as

$$Accuracy = \frac{TP + TN}{TP + TN + FP + FN} \quad (2)$$

Precision indicates how many of the classifier's positive predictions are true positives; it is the number of TP divided by the total number of positive predictions:

$$Precision = \frac{TP}{TP + FP} \quad (3)$$

The recall of a model is calculated using Equation (4):

$$Recall = \frac{TP}{TP + FN} \quad (4)$$

The F1 score is considered a better summary metric, as it combines both precision and recall; it is computed using Equation (5):

$$F1 = 2 \times \frac{Precision \times Recall}{Precision + Recall} \quad (5)$$
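In code, these metrics can be read directly off a trained model's predictions. A brief sketch, reusing `model`, `X_test`, and `y_test` from the earlier sketches (all assumptions of this illustration):

```python
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, confusion_matrix)

# Threshold the sigmoid outputs at 0.5 to obtain hard attack/benign labels.
y_pred = (model.predict(X_test.reshape(-1, 1, 20)) > 0.5).astype(int).ravel()

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"TP={tp} TN={tn} FP={fp} FN={fn}")
print("accuracy :", accuracy_score(y_test, y_pred))   # Equation (2)
print("precision:", precision_score(y_test, y_pred))  # Equation (3)
print("recall   :", recall_score(y_test, y_pred))     # Equation (4)
print("F1 score :", f1_score(y_test, y_pred))         # Equation (5)
```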
**4. Results and Discussion**

This section provides a comprehensive review of the major findings, using tables and textual explanations for clarity and brevity. The overall performance of the proposed methodology is visually represented through graphs, allowing patterns in accuracy and loss to be identified across different test scenarios. A comparative performance analysis between CICDDoS2019 and CICIDS2017 is also performed, and a comparative table highlights the outcomes achieved relative to existing state-of-the-art techniques.

_4.1. Model Implementation_

This study utilized the RNN, LSTM, and GRU models for DDoS attack identification using the CICDDoS2019 dataset, which is publicly available [6]. The dataset contains thousands of DDoS attacks falling into 12 classes: DNS, SNMP, NTP, WebDDoS, MSSQL, UDP, LDAP, NetBIOS, SSDP, PortScan, UDP-Lag, and SYN. This study performs both binary and multi-class classification involving all 12 classes. Twelve attack types were executed on the training day and seven on the testing day: attacks against DNS, SNMP, NTP, WebDDoS, MSSQL, UDP, LDAP, UDP-Lag, NetBIOS, SSDP, SYN, and TFTP were part of the training day, while LDAP, PortScan, MSSQL, UDP-Lag, UDP, and SYN attacks were part of the testing day.

Experimental Setup

This study implemented the models in the Python programming language, using a Jupyter Notebook to conduct the experiments. The pandas, matplotlib, scikit-learn, Keras, and scipy libraries were used to implement the DL models. A machine with a dedicated Nvidia 1080Ti GPU with 11 GB of memory was used, and model training took 2 h.

_4.2. Evaluation Using the CICDDoS2019 Dataset_

Experiments are performed using the CICDDoS2019 dataset for binary and multi-class classification.

4.2.1. Binary Classification

The CICDDoS2019 dataset [6] offers very encouraging results for binary DDoS detection using RNN, LSTM, and GRU, as shown in Table 3.

**Table 3. Performance results for binary classification using the CICDDoS2019 dataset.**

| Performance measure | RNN | LSTM | GRU |
| --- | --- | --- | --- |
| Accuracy | 99.99% | 99.99% | 99.99% |
| Precision | 99.99% | 99.0% | 99.0% |
| Recall | 99.99% | 99.0% | 100% |
| F1 score | 99.99% | 99.0% | 100% |
| Execution time | 10 min | 1 min 17 s | 47.9 s |

LSTM and GRU performed well in the intrusion detection task on the CICDDoS2019 dataset, demonstrating high precision, recall, and F1 score and confirming their usefulness in recognizing and categorizing cyber threats. In terms of execution time, GRU outperformed both, with a much lower execution time of 47.9 s compared to 1 min 17 s for LSTM and 10 min for RNN. This displays the GRU model's computational efficiency without compromising performance; its rapid execution underscores its suitability for real-time intrusion detection systems. The confusion matrices for RNN, LSTM, and GRU for binary classification are shown in Figure 3.

The confusion matrices illustrate that the RNN, LSTM, and GRU models effectively classified a significant number of instances: the large TP counts show that the models efficiently detected attack instances, and the TN counts show that they accurately detected normal instances. FN is the number of DDoS attacks that go undetected and are incorrectly classified as normal traffic, whereas FP is the number of normal traffic instances incorrectly classified as DDoS attacks; reducing both is crucial for correct DDoS detection. Figure 3a shows that RNN produced only eight FP instances (normal traffic predicted as attack) and seven FN instances (attack traffic predicted as normal). Figure 3b,c show 22 FP instances for LSTM and 15 for GRU, with 17 FN instances for LSTM and 13 for GRU. RNN therefore has the lowest false positive and false negative rates, although its execution time is higher than those of LSTM and GRU.

**Figure 3. Confusion matrices for binary classification using the CICDDoS2019 dataset: (a) RNN; (b) LSTM; (c) GRU.**

Figure 4a–c show the training and validation accuracy of RNN, LSTM, and GRU; the blue line indicates training accuracy and the orange line validation accuracy. RNN training accuracy starts at 99.45% and reaches 99.99%. LSTM training accuracy starts at 99.70% and reaches 99.9%, while its validation accuracy starts at 99.98% and reaches 99.99%. GRU training accuracy starts at 98.4% and reaches 99.99%. This shows that the models effectively learn from the training data and become more proficient at making accurate predictions; as each model reaches its highest accuracy, the training accuracy stabilizes, indicating that the underlying patterns in the data have been captured and performance is consistent.

**Figure 4. Model accuracy for binary classification using the CICDDoS2019 dataset: (a) RNN; (b) LSTM; (c) GRU.**

Figure 5a–c show the training and validation loss of RNN, LSTM, and GRU.
RNN training loss starts at 0.092 and, as training continues, reaches an impressively low value of 0.0002, indicating that the model fits the data accurately and captures the significant patterns within it. The pattern is similar for LSTM and GRU: LSTM training loss starts at 0.577 and reaches 0.00055, and GRU training loss starts at 0.08 and reaches 0.0004.

**Figure 5. Loss graphs for binary classification using the CICDDoS2019 dataset: (a) RNN; (b) LSTM; (c) GRU.**

In conclusion, for binary classification the RNN model performs better than the LSTM and GRU models, with the fewest FP and FN instances, indicating a better balance in accurately identifying positive and negative instances. Furthermore, the loss and accuracy graphs show that the RNN model does not exhibit signs of overfitting during training: its validation accuracy and loss are slightly lower than its training accuracy and loss, indicating that it generalizes well to unseen data. In contrast, the LSTM and GRU models show a slight increase in validation accuracy and loss compared to the training phase, suggesting a higher risk of overfitting; they may have a tendency to memorize the training data, leading to slightly lower performance on unseen data, although both execute faster than RNN. Overall, the results suggest that the RNN model is more robust and effective for binary classification, achieving better accuracy, lower false positive and false negative rates, and a lower risk of overfitting than the LSTM and GRU models.

4.2.2. Multi-Class Classification

The CICDDoS2019 dataset also offers very encouraging results for multi-class DDoS detection using LSTM and GRU, as shown in Table 4.

**Table 4. Experimental results for multi-class classification using the CICDDoS2019 dataset.**

| Performance measure | RNN | LSTM | GRU |
| --- | --- | --- | --- |
| Accuracy | 99.15% | 99.43% | 99.54% |
| Precision | 97% | 98% | 98% |
| Recall | 97% | 99% | 99% |
| F1 score | 97% | 98% | 98% |
| Execution time | 4 min | 16 min 30 s | 7 min 3 s |

Figure 6a shows the confusion matrix for multi-class classification with RNN. Its analysis provides valuable insights into the misclassification patterns: among all classes, "DrDoS_NTP" has the lowest number of misclassified cases, with only 75, whereas "DrDoS_NETBIOS" has the highest, with 1511 instances incorrectly labeled as "DrDoS_MSSQL". This information highlights the model's specific misclassification tendencies and can help identify areas for improvement or further investigation.

**Figure 6. Confusion matrices for multi-class classification using the CICDDoS2019 dataset: (a) RNN; (b) LSTM; (c) GRU.**

Figure 6b shows the confusion matrix for multi-class classification with LSTM, which reveals that the classes "Benign" and "DrDoS_NTP" have the lowest numbers of misclassified instances: only four Benign cases and fifty-eight DrDoS_NTP cases are misclassified. On the other hand, the classes "DrDoS_MSSQL" and "DrDoS_NETBIOS" have the highest numbers of misclassified cases.
Specifically, 841 instances of the "DrDoS_MSSQL" class are misclassified as "DrDoS_DNS", "DrDoS_LDAP", "DrDoS_NTP", and "DrDoS_NETBIOS", and a further 772 instances are misclassified as "DrDoS_LDAP". For the "DrDoS_NETBIOS" class, 662 instances are misclassified as "DrDoS_MSSQL", and 411 instances of the "DrDoS_DNS" class are misclassified as "DrDoS_LDAP". These patterns highlight the challenge of accurately distinguishing between certain classes, particularly "DrDoS_MSSQL", "DrDoS_NETBIOS", and "DrDoS_DNS", which exhibit higher rates of misclassification.

Figure 6c shows the confusion matrix for multi-class classification with GRU, which reveals interesting insights into the classification performance for different classes. The classes "Benign" and "DrDoS_NTP" have the lowest numbers of misclassified instances, with 62 Benign and 51 DrDoS_NTP instances misclassified, indicating that the GRU model is quite effective at classifying these classes. On the other hand, "DrDoS_MSSQL" and "Syn" have the highest numbers of misclassified instances. Specifically, 1227 instances of "DrDoS_MSSQL" are misclassified, mainly as "DrDoS_LDAP" (892 instances) and "DrDoS_NETBIOS" (326 instances), illustrating the difficulty the GRU model has in identifying and distinguishing instances of the "DrDoS_MSSQL" class. Similarly, 400 instances of the "Syn" class are misclassified; among these, 284 are classified as "UDPLag". In addition, 293 instances of "DrDoS_UDP" are misclassified as "DrDoS_SSDP", and 246 instances of "DrDoS_SSDP" are misclassified as "DrDoS_SNMP". These misclassifications highlight the challenges the GRU model faces in accurately differentiating between these classes.

Figure 7a–c show the accuracy of RNN, LSTM, and GRU. RNN accuracy starts at 97.5% and reaches 99.15%; LSTM accuracy starts at 88% and reaches 99.9%; GRU accuracy starts at 83.25% and reaches 99.47%. This indicates that each model is gaining knowledge and enhancing its performance over time.

**Figure 7. Models' accuracy for multi-class classification using the CICDDoS2019 dataset: (a) RNN; (b) LSTM; (c) GRU.**

Figure 8a–c show the training and validation loss of RNN, LSTM, and GRU. LSTM loss starts at 0.411 and reaches 0.0176, while GRU loss starts at 0.60 and reaches 0.0170. The convergence of both accuracy and loss on the training and validation sets demonstrates the models' effectiveness in learning the underlying patterns of the data and making accurate predictions; the decreasing loss indicates that each model is optimizing its parameters and improving its predictive performance.

**Figure 8. Loss graphs for multi-class classification using the CICDDoS2019 dataset: (a) RNN; (b) LSTM; (c) GRU.**

In terms of multi-class classification, the GRU model outperforms both the LSTM and RNN models, with the fewest misclassified instances, classifying instances into their appropriate classes more effectively, even though the RNN model executes more quickly. Examination of the loss and accuracy graphs of all three models shows that they do not overfit during the training procedure.
The validation accuracy and loss curves are relatively lower than the training accuracy and loss curves, showing that the models generalize well to new data and are not overly influenced by the training data.

_4.3. Evaluation Using the CICIDS2017 Dataset_

The results of the DL models are validated through experiments using the CICIDS2017 dataset [43]. Experimental results reveal that the RNN model also detects DDoS attacks in this older dataset with high precision.

4.3.1. Binary Classification

The CICIDS2017 dataset offers very encouraging results for the binary classification of DDoS attacks using the RNN, LSTM, and GRU models, as given in Table 5. Performance is reported in terms of accuracy, precision, recall, F1 score, and execution time. All the models adeptly distinguished between regular and attack activities, with an impressive accuracy of 98% for RNN and LSTM and 97% for GRU. High precision across the models indicates their proficiency in correctly identifying attacks while minimizing false alerts. RNN and LSTM achieved a commendable recall of 98% and GRU of 97%, underscoring their capability to limit false negatives. As for the F1 score, all models performed well: GRU recorded 97%, while LSTM and RNN achieved 98%.

**Table 5. Binary classification results for the CICIDS2017 dataset.**

| Performance measure | RNN | LSTM | GRU |
| --- | --- | --- | --- |
| Accuracy | 98.0% | 98.0% | 97.0% |
| Precision | 98.0% | 98.0% | 97.0% |
| Recall | 98.0% | 98.0% | 97.0% |
| F1 score | 98.0% | 98.0% | 97.0% |
| Execution time | 1 min 27 s | 1 min 18 s | 1 min 30 s |

The confusion matrix for RNN is shown in Figure 9a. The TN count of 731,852 illustrates the model's ability to accurately identify benign instances, and the TP count of 97,080 indicates successful identification of actual attacks. The FP count of 15,191 corresponds to benign instances mistakenly flagged as attacks, while the FN count of 4136 corresponds to genuine attacks misclassified as benign.

The confusion matrix for LSTM in Figure 9b shows similar behavior. The model excels at classifying benign instances, with a high TN count of 733,680; however, it misclassifies some actual attacks as benign (FN), and some benign instances are incorrectly classified as attacks (FP). Overall, the model is adept at identifying benign instances but has room for improvement in detecting attacks with higher precision.

The confusion matrix in Figure 9c provides insights into the GRU model's performance. It demonstrates strength in correctly classifying negative-class instances, with a high TN count of 732,413; however, it also misclassifies some positive-class instances as negative (FN) and some negative-class instances as positive (FP). While the GRU model does a good job of classifying negative-class instances, it could detect positive-class instances with higher precision.

**Figure 9. Confusion matrices for binary classification using the CICIDS2017 dataset: (a) RNN; (b) LSTM; (c) GRU.**

Figure 10a depicts the progression of the RNN model's training and validation accuracy.
According to the data, the RNN model achieves its highest accuracy of 98% during the sixth epoch. Training accuracy starts at 87% and grows gradually to a peak of 98%, showing a steady learning curve in which the model improves at producing accurate predictions; as the number of epochs increases, the training accuracy stabilizes and the model reaches its optimum. The LSTM training and validation accuracy are shown in Figure 10b: the maximum accuracy of 98% is attained at the fifth epoch, with the training accuracy rising gradually from 86% to 98% before stabilizing, while the validation accuracy rises steadily from 87% to 98%. The accuracy of the GRU model during training and validation is shown in Figure 10c: the model reaches its maximum accuracy at the fifth epoch, with training accuracy increasing continually from 86% to 97%, demonstrating the model's efficient learning and prediction ability.

**Figure 10. Models' accuracy for binary classification using the CICIDS2017 dataset: (a) RNN; (b) LSTM; (c) GRU.**

Figure 11a depicts the RNN model's training and validation loss. The training loss starts at 0.483 and consistently decreases over the training period, showing the model's effectiveness at reducing the discrepancy between predicted and actual values; it drops to a noticeably low value of 0.0682, indicating that the model accurately fits the data and recognizes its key trends. Figure 11b depicts the convergence of the LSTM loss across epochs, with the lowest loss of 0.1091 at the fifth epoch; the training loss starts at 0.4202 and steadily drops to 0.1091, denoting steady improvement in minimizing the discrepancy between expected and actual values and a more accurate representation of the data. The training and validation loss for the GRU is shown in Figure 11c: the training loss starts at 0.4274 and shrinks with each epoch, decreasing to 0.0766 over the training period and demonstrating the model's good fit to the data and its ability to recognize significant patterns.

**Figure 11. Loss graphs for binary classification using the CICIDS2017 dataset: (a) RNN; (b) LSTM; (c) GRU.**

4.3.2. Multi-Class Classification

The CICIDS2017 dataset offers very encouraging results for multi-class DDoS detection using RNN, LSTM, and GRU, as shown in Table 6. The LSTM and GRU models both displayed strong accuracy, precision, F1 score, and recall, indicating their utility in identifying attacks; Table 6 lists the performance parameters for each model. Precision, recall, and F1 score all reached 97% for the LSTM model, which also scored an accuracy of 97%. The GRU model demonstrates an even greater accuracy of 98% with high precision, indicating a lower probability of false positives; its F1 score of 97% indicates a well-maintained balance between recall and precision, and it displays a 97% recall rate.
The GRU model outperforms the LSTM model overall, with an accuracy of 98%; its recall, precision, and F1 score match those of LSTM. The execution time of the GRU model is 1 min 27 s, less than the LSTM model's 1 min 37 s. Both the LSTM and GRU models show good performance for multi-class intrusion detection using the CICIDS2017 dataset.

**Table 6. Results for multi-class classification using the CICIDS2017 dataset.**

| Performance measure | RNN | LSTM | GRU |
| --- | --- | --- | --- |
| Accuracy | 96.0% | 97.0% | 98.0% |
| Precision | 96.0% | 97.0% | 97.0% |
| Recall | 96.0% | 97.0% | 97.0% |
| F1 score | 96.0% | 97.0% | 97.0% |
| Execution time | 1 min 42 s | 1 min 37 s | 1 min 27 s |

Figure 12a–c show the confusion matrices for multi-class classification with RNN, LSTM, and GRU, respectively. Figure 12a presents the confusion matrix of the RNN model for a multi-class problem with five classes; each row and column corresponds to a specific class, and the entries show how many instances of each true class are assigned to each predicted class. A total of 513,739 instances of "Benign" traffic are accurately classified, constituting the TPs for that class, while 1434 instances of "Benign" traffic are misclassified as other classes, the largest false negative count for this class.

The confusion matrix for LSTM in Figure 12b provides insights into the misclassification patterns. The severity of misclassification for each class depends on the highest count of false negatives or false positives within the confusion matrix: for DDoS attacks, the largest misclassification corresponds to 3404 false positives, whereas for DoS "Hulk" and DoS "Slowloris" attacks, the most critical misclassification comprises 6385 false negatives each. Notably, both the DoS "GoldenEye" and DoS "Slowloris" classes have no correctly classified instances.

**Figure 12. Confusion matrices for multi-class classification using the CICIDS2017 dataset: (a) RNN; (b) LSTM; (c) GRU.**

The confusion matrix for the GRU model, given in Figure 12c, reveals a mixed performance across the classes. The model demonstrates exceptional accuracy in classifying instances of the Benign and DDoS classes, with minimal misclassifications, but it faces challenges in distinguishing instances of the DoS "GoldenEye", DoS "Hulk", and DoS "Slowloris" classes, resulting in a notable number of misclassifications in these categories. These findings underscore the strengths and limitations of the GRU model in detecting specific types of DDoS attacks: while it excels at identifying certain attack patterns, further refinement may be necessary to differentiate the more intricate attack types. These insights provide valuable guidance for fine-tuning the model and developing strategies to mitigate misclassifications, ultimately improving the accuracy of intrusion detection in diverse network scenarios.

Figure 13a–c show the accuracy of RNN, LSTM, and GRU.
RNN accuracy starts at 85% and reaches 96%; LSTM accuracy starts at 84% and reaches 97%; GRU accuracy starts at 81% and reaches 98%.

**Figure 13. Models' accuracy for multi-class classification using the CICIDS2017 dataset: (a) RNN; (b) LSTM; (c) GRU.**

Figure 14a–c show the training and validation loss of RNN, LSTM, and GRU. The RNN loss starts at 1.1673 and reaches 0.1300; the LSTM loss starts at 1.5246 and reaches 0.1293; and the GRU loss starts at 1.2427 and reaches 0.1195.

**Figure 14. Loss graphs for multi-class classification using the CICIDS2017 dataset: (a) RNN; (b) LSTM; (c) GRU.**

In terms of multi-class classification, the GRU model outperforms both the LSTM and RNN models, with the fewest misclassified instances, classifying instances into their appropriate classes more accurately, even though the RNN model is quicker in terms of execution time. Examination of the loss and accuracy graphs shows that none of the three models overfitted during training: the validation accuracy and loss metrics are lower than the training metrics, indicating that the models generalize well to new data without being significantly influenced by the training dataset.

_4.4. Comparison with State of the Art_

This study performs a comparative analysis of the models employed here against existing state-of-the-art approaches.

4.4.1. Performance Comparison Using the CICDDoS2019 Dataset

The performance of the DL models is compared with existing state-of-the-art methods on the CICDDoS2019 dataset; the employed models aim to improve on these methods by enhancing the accuracy and efficiency of DDoS detection. For this comparison, the best-performing models from this study are set against other state-of-the-art models. Table 7 shows the performance comparison of the various models that utilized the CICDDoS2019 dataset; the results indicate that the approach proposed in this study shows superior results compared to existing models.

**Table 7. Performance comparison with the state of the art using the CICDDoS2019 dataset.**

| Method | Precision | Recall | F1 score | Accuracy |
| --- | --- | --- | --- | --- |
| ResNet | 80% | 38% | 51% | 87% |
| Naïve Bayes | 51% | 49% | 49% | 57% |
| Random Forest | 78% | 70% | 73% | 86% |
| Decision Tree | 92% | 60% | 40% | 77% |
| Logistic Regression | 86% | 11% | 19% | 95% |
| Neural Network | 79% | 4% | 53% | 83% |
| Hybrid Model | 80% | 72% | 75% | 95% |
| SVM | 29% | 7% | 11% | 97% |
| MLP | 72% | 11% | 19% | 79% |
| KNN | 61% | 4% | 48% | 77% |
| LSTM (binary classification) | 99.0% | 99.0% | 99.0% | 99.99% |
| GRU (binary classification) | 99.0% | 100% | 100% | 99.99% |
| LSTM (multi-class classification) | 98% | 99% | 98% | 99.43% |
| GRU (multi-class classification) | 98% | 99% | 98% | 99.54% |
4.4.2. Performance Comparison Using the CICIDS2017 Dataset

XGBoost, RF, DT, KNN, CNN, multi-layer perceptron, and LSTM-based approaches have been employed in the existing literature using the CICIDS2017 dataset. Table 8 shows the results of the performance comparison. The results indicate that the models employed in this study show superior results on the CICIDS2017 dataset and obtain the highest values for all performance measures. This shows that the proposed approach outperforms other state-of-the-art approaches based on ML and DL models.

**Table 8. Performance comparison with state of the art using the CICIDS2017 dataset.**

| SDN Methods | Precision | Recall | F1-Score | Accuracy |
|---|---|---|---|---|
| KNN | 76% | 67% | 74% | 70% |
| Deep Neural Network | 87% | 81% | 74% | 77% |
| Decision Tree | 84% | 86% | 76% | 86% |
| Multi-Layer Perceptron | 72% | 79% | 68% | 73% |
| XGBoost | 84% | 73% | 83% | 78% |
| CNN | 89% | 86% | 79% | 86% |
| LSTM | 91% | 91% | 92% | 90% |
| Random Forest | 64% | 67% | 82% | 74% |
| LSTM (binary classification) | 98% | 98% | 98% | 98% |
| GRU (binary classification) | 97% | 97% | 97% | 97% |
| LSTM (multi-class classification) | 97% | 97% | 97% | 97% |
| GRU (multi-class classification) | 98% | 97% | 98% | 97% |

4.4.3. Scenario Explanation

The objective of this research is to identify the most suitable model for DDoS attack detection compared to previous research. This study aims to leverage models that are well suited to analyzing sequential data, as such features are crucial for identifying the patterns and characteristics of DDoS attacks; to this end, it utilizes the LSTM, RNN, and GRU models. For the experiments, this study used the CICDDoS2019 dataset and validated the employed models using the CICIDS2017 dataset, both of which contain synthetic network traffic comprising normal and malicious activities. To simulate real-world DDoS attacks, a variety of attack strategies are covered, such as SYN flood, UDP flood, and DNS amplification. Preprocessing is applied to the network traffic data to extract relevant features of network flows, including packet length, flow duration, and protocol type (a minimal sketch of these steps follows at the end of this subsection). The dataset is then divided into training and testing sets to evaluate the effectiveness of the LSTM, RNN, and GRU models. We analyzed the classification results, computational efficiency, and robustness of the RNN, LSTM, and GRU models to different forms of DDoS attacks. This study also analyzed previous research studies to see how specific model components affect overall performance [22–25].

As shown in Figure 15, the performance of the LSTM, RNN, and GRU models is analyzed in the context of detecting DDoS attacks. The classification accuracy, computational efficiency, and resilience to various forms of DDoS attacks are evaluated. Furthermore, a performance review of previous studies is carried out to examine the influence of specific model components on overall performance, in order to highlight the importance of choosing the right architecture for DDoS attack detection. The existing literature on DDoS attack detection demonstrates high false positives with low precision and low accuracy. This study implemented and analyzed the performance of RNN in particular, as it can identify sequential patterns in network traffic. The results demonstrate that RNN outperforms the other two models for binary classification, while GRU outperforms LSTM and RNN for multi-class classification in identifying different types of DDoS attacks.

**Figure 15. DDoS attack mitigation.**
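The preprocessing sketch referenced above follows: numeric feature scaling, label encoding, and a 70/30 train/test split. The column names and toy records are placeholders standing in for the CICDDoS2019/CICIDS2017 flow features, and the split ratio follows the splitting reference cited by the paper.

```python
# Minimal sketch of the preprocessing steps: scale numeric flow features,
# encode the class labels, and split 70/30. Toy records only.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelEncoder, MinMaxScaler

# Placeholder flow records standing in for the dataset CSV exports.
df = pd.DataFrame({
    "Packet Length Mean": [120.5, 1400.0, 60.0, 980.2],
    "Flow Duration": [3_000, 120_000, 45, 88_000],
    "Protocol": [6, 17, 6, 17],
    "Label": ["Benign", "DDoS", "Benign", "DoS Hulk"],
})

features = ["Packet Length Mean", "Flow Duration", "Protocol"]
X = MinMaxScaler().fit_transform(df[features])   # scale features to [0, 1]
y = LabelEncoder().fit_transform(df["Label"])    # class names -> integers

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.30, random_state=42)       # 70/30 split
```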
_4.5. Comparative Analysis between the CICDDoS2019 and CICIDS2017 Datasets_

For binary classification, the accuracy, precision, recall, and F1 score for the RNN and LSTM models improve from 98% to 99.99% when transitioning from the CICIDS2017 dataset to the CICDDoS2019 dataset. For GRU, the accuracy improves from 97% to 99.99%. Thus, for RNN and LSTM, the performance improvement on CICDDoS2019 compared to CICIDS2017 is approximately 1.99 percentage points, while for GRU it is around 2.99 percentage points. For multi-class classification, RNN, LSTM, and GRU attain 99% accuracy, precision, recall, and F1 score on CICDDoS2019, whereas RNN attains 96%, LSTM 97%, and GRU 98% on CICIDS2017. The corresponding improvements on CICDDoS2019 over CICIDS2017 are therefore approximately 3, 2, and 1 percentage points for RNN, LSTM, and GRU, respectively.

**5. Conclusions and Future Work**

The objective of this research is to detect DDoS attacks in the latest CICDDoS2019 dataset and to validate the models on the CICIDS2017 dataset by employing RNN, LSTM, and GRU. In the proposed work, the RNN, LSTM, and GRU models are evaluated using the top 20 features from the CICDDoS2019 dataset, with the same features taken from CICIDS2017. All three models achieve about 99% accuracy on CICDDoS2019 for both binary and multi-class classification, with the best results reaching 99.99% accuracy for binary classification and 99.54% for multi-class classification; the high recall values indicate that about 99% of all actual positive instances are correctly identified. Overall, the findings indicate that the RNN model is more resilient and successful in binary classification than the LSTM and GRU models, achieving high accuracy, low false positive and false negative rates, and a reduced risk of overfitting. For multi-class classification, the findings highlight the superiority of the GRU model in terms of classification performance, while the RNN model remains attractive for its computational efficiency. The results indicate that the models are able to effectively learn and capture the hidden patterns in the data without overfitting, demonstrating their robustness for the detection of different DDoS attacks. Combining rule-based or signature-based techniques with deep learning can help improve the models further: hybrid methods can combine the advantages of both methodologies, allowing for more precise and reliable DDoS attack detection.

**Author Contributions:** Conceptualization, M.R. and M.S.; Data curation, M.S. and A.A.; Formal analysis, M.R. and A.A.; Funding acquisition, Á.K.C.; Investigation, Á.K.C.; Methodology, A.A. and S.A.; Project administration, S.A. and Á.K.C.; Resources, F.I.; Software, S.A. and F.I.; Supervision, I.A.; Validation, I.A.; Visualization, F.I.; Writing—original draft, M.R. and M.S.; Writing—review and editing, I.A. All authors have read and agreed to the published version of the manuscript.

**Funding:** This study is funded by the European University of Atlantic.

**Institutional Review Board Statement:** Not applicable.

**Informed Consent Statement:** Not applicable.

**Data Availability Statement:** Not applicable.

**Conflicts of Interest:** The authors declare no conflict of interest.

**References**

1. Khader, R.; Eleyan, D. Survey of DoS/DDoS attacks in IoT. Sustain. Eng. Innov. 2021, 3, 23–28. [CrossRef](http://doi.org/10.37868/sei.v3i1.124)
2. Neustar Security. Cyber Threats & Trends: January–June 2020. Available online: https://www.cdn.neustar/resources/whitepapers/security/neustar-cyber-threats-trends-report-2020.pdf (accessed on 5 August 2020).
3. Hussain, F.; Abbas, S.G.; Husnain, M.; Fayyaz, U.U.; Shahzad, F.; Shah, G.A. IoT DoS and DDoS attack detection using ResNet. In Proceedings of the 2020 IEEE 23rd International Multitopic Conference (INMIC), Bahawalpur, Pakistan, 5–7 November 2020; pp. 1–6.
4. Alanazi, F.; Jambi, K.; Eassa, F.; Khemakhem, M.; Basuhail, A.; Alsubhi, K. Ensemble Deep Learning Models for Mitigating DDoS Attack in Software-Defined Network. Intell. Autom. Soft Comput. 2022, 33, 2. [CrossRef](http://dx.doi.org/10.32604/iasc.2022.024668)
5. Seifousadati, A.; Ghasemshirazi, S.; Fathian, M. A Machine Learning approach for DDoS detection on IoT devices. arXiv 2021, arXiv:2110.14911.
6. DDoS Evaluation Dataset (CIC-DDoS2019). Available online: https://www.unb.ca/cic/datasets/ddos-2019.html (accessed on 12 December 2022).
7. Mittal, M.; Kumar, K.; Behal, S. Deep learning approaches for detecting DDoS attacks: A systematic review. Soft Comput. 2022, 27, 3337–3349.
8. Bediako, P.K. Long Short-Term Memory Recurrent Neural Network for detecting DDoS flooding attacks within TensorFlow Implementation framework. Digit. Vetenskapliga Ark. 2017, 2017, 4.
9. Alshra'a, A.S.; Farhat, A.; Seitz, J. Deep learning algorithms for detecting denial of service attacks in software-defined networks. Procedia Comput. Sci. 2021, 191, 254–263. [CrossRef](http://dx.doi.org/10.1016/j.procs.2021.07.032)
10. Alzahrani, R.J.; Alzahrani, A. Security analysis of DDoS attacks using machine learning algorithms in networks traffic. Electronics 2021, 10, 2919. [CrossRef](http://dx.doi.org/10.3390/electronics10232919)
11. Dhamor, T.; Bhat, S.; Thenmalar, S. Dynamic approaches for detection of DDoS threats using machine learning. Ann. Rom. Soc. Cell Biol. 2021, 2021, 13663–13673.
12. Amrish, R.; Bavapriyan, K.; Gopinaath, V.; Jawahar, A.; Kumar, C.V. DDoS detection using machine learning techniques. J. IoT Soc. Mobile Anal. Cloud 2022, 4, 24–32. [CrossRef](http://dx.doi.org/10.36548/jismac.2022.1.003)
13. Kumari, K.; Mrunalini, M. Detecting Denial of Service attacks using machine learning algorithms. J. Big Data 2022, 9, 56. [CrossRef](http://dx.doi.org/10.1186/s40537-022-00616-0)
14. Nalayini, C.M.; Katiravan, J. Detection of DDoS Attack Using Machine Learning Algorithms. SSRN 2022, 9, 4173187.
15. Qamar, R.; Zardari, B.; Arain, A.; Khoso, F.; Jokhio, A. Detecting Distributed Denial of Service attacks using Recurrent Neural Network. Psychology 2022, 2022, 1.
16. Kona, S.S. Detection of DDoS Attacks Using RNN-LSTM and Hybrid Model Ensemble. Ph.D. Thesis, National College of Ireland, Dublin, Ireland, 2020.
17. Ullah, I.; Mahmoud, Q.H. Design and development of RNN anomaly detection model for IoT networks. IEEE Access 2022, 10, 62722–62750. [CrossRef](http://dx.doi.org/10.1109/ACCESS.2022.3176317)
18. Saurabh, K.; Sood, S.; Kumar, P.A.; Singh, U.; Vyas, R.; Vyas, O.; Khondoker, R. LBDMIDS: LSTM based deep learning model for intrusion detection systems for IoT networks. In Proceedings of the 2022 IEEE World AI IoT Congress (AIIoT), Seattle, WA, USA, 6–9 June 2022; pp. 753–759.
19. Qamar, R. Gradient Techniques to Predict Distributed Denial-of-Service Attack. Iraqi J. Comput. Sci. Math. 2022, 3, 55–71. [CrossRef](http://dx.doi.org/10.52866/ijcsm.2022.02.01.006)
20. Qamar, R.; Arain, A.A.; Kanwar, K.; Khoso, F.H.; Jokhio, F. Distributed Denial of Service Attack Detection Based on Neural Network: A Comparative Study. Int. J. Sci. Technol. Res. 2022, 2, 15.
21. Rahman, M.A. Detection of distributed denial of service attacks based on machine learning algorithms. Int. J. Smart Home 2020, 14, 15–24. [CrossRef](http://dx.doi.org/10.21742/IJSH.2020.14.2.02)
22. Hariprasad, S.; Deepa, T.; Bharathiraja, N. Detection of DDoS Attack in IoT Networks Using Sample Selected RNN-ELM. Intell. Autom. Soft Comput. 2022, 34, 17. [CrossRef](http://dx.doi.org/10.32604/iasc.2022.022856)
23. Rusyaidi, M.; Jaf, S.; Ibrahim, Z. Detecting distributed denial of service in network traffic with deep learning. Int. J. Adv. Comput. Sci. Appl. 2022, 13, 34–41. [CrossRef](http://dx.doi.org/10.14569/IJACSA.2022.0130105)
24. Costa, J.; Dessai, N.; Gaonkar, S.; Aswale, S.; Shetgaonkar, P. IoT-botnet detection using long short-term memory recurrent neural network. Int. J. Eng. Res. 2020, 9, 18.
25. Aswad, F.M.; Ahmed, A.M.S.; Alhammadi, N.A.M.; Khalaf, B.A.; Mostafa, S.A. Deep learning in distributed denial-of-service attacks detection method for Internet of Things networks. J. Intell. Syst. 2023, 32, 20220155. [CrossRef](http://dx.doi.org/10.1515/jisys-2022-0155)
26. Almiani, M.; AbuGhazleh, A.; Jararweh, Y.; Razaque, A. DDoS detection in 5G-enabled IoT networks using deep Kalman backpropagation neural network. Int. J. Mach. Learn. Cybern. 2021, 12, 3337–3349. [CrossRef](http://dx.doi.org/10.1007/s13042-021-01323-7)
27. Data Normalization. Available online: https://www.geeksforgeeks.org/data-normalization-with-pandas/ (accessed on 4 April 2023).
28. Normalization. Available online: https://www.digitalocean.com/community/tutorials/normalize-data-in-python (accessed on 4 April 2023).
29. Categorical Data. Available online: https://www.kdnuggets.com/2021/05/deal-with-categorical-data-machine-learning.html (accessed on 9 September 2020).
30. One-Hot Encoding. Available online: https://www.analyticsvidhya.com/blog/2020/03/one-hot-encoding-vs-label-encoding-using-scikit-learn/ (accessed on 4 April 2023).
31. Feature Extraction. Available online: https://towardsdatascience.com/feature-extraction-techniques-d619b56e31be (accessed on 4 April 2023).
32. 70% Training and 30% Testing Split Method in Machine Learning. Available online: https://www.researchgate.net/post/70_training_and_30_testing_spit_method_in_machine_learning (accessed on 12 December 2022).
33. Data Splitting. Available online: https://www.techtarget.com/searchenterpriseai/definition/data-splitting (accessed on 4 April 2023).
34. Sambangi, S.; Gondi, L. A machine learning approach for DDoS (distributed denial of service) attack detection using multiple linear regression. Proceedings 2020, 63, 51.
35. Hu, C.; Ou, T.; Chang, H.; Zhu, Y.; Zhu, L. Deep GRU neural network prediction and feedforward compensation for precision multiaxis motion control systems. IEEE/ASME Trans. Mechatron. 2020, 25, 1377–1388.
36. Tang, T.A.; Mhamdi, L.; McLernon, D.; Zaidi, S.A.R.; Ghogho, M. Deep recurrent neural network for intrusion detection in SDN-based networks. In Proceedings of the 2018 4th IEEE Conference on Network Softwarization and Workshops (NetSoft), Montreal, QC, Canada, 25–29 June 2018; pp. 202–206.
37. Learning Rate. Available online: https://machinelearningmastery.com/understand-the-dynamics-of-learning-rate-on-deep-learning-neural-networks/ (accessed on 4 April 2023).
38. Overfitting. Available online: https://www.v7labs.com/blog/overfitting (accessed on 4 April 2023).
39. Activation Function. Available online: https://machinelearningmastery.com/choose-an-activation-function-for-deep-learning/ (accessed on 4 April 2023).
40. Early Stopping. Available online: https://www.educative.io/answers/what-is-early-stopping (accessed on 4 April 2023).
41. Optimization. Available online: https://machinelearningmastery.com/adam-optimization-algorithm-for-deep-learning/ (accessed on 4 April 2023).
42. Batch and Epoch. Available online: https://machinelearningmastery.com/difference-between-a-batch-and-an-epoch/ (accessed on 9 September 2022).
43. Intrusion Detection Evaluation Dataset (CIC-IDS2017). Available online: https://www.unb.ca/cic/datasets/ids-2017.html (accessed on 12 December 2022).

**Disclaimer/Publisher's Note:** The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
{ "disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC10611275, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/1424-8220/23/20/8642/pdf?version=1698046252" }
2,023
[ "JournalArticle" ]
true
2023-10-01T00:00:00
[ { "paperId": "151dde6413387906fdd0f470031695d5bf89d103", "title": "Detection of DDoS Attack using Machine Learning Algorithms" }, { "paperId": "b4c44675896d8b267193c36ca6742b55040e4020", "title": "Deep learning in distributed denial-of-service attacks detection method for Internet of Things networks" }, { "paperId": "0939b9d835e8af2ee8f09154dd06174b9709c805", "title": "LBDMIDS: LSTM Based Deep Learning Model for Intrusion Detection Systems for IoT Networks" }, { "paperId": "f907daab9fa90990dd6652222b4c2ebf607b058a", "title": "DDoS Detection using Machine Learning Techniques" }, { "paperId": "be70238629f55456bc7acfc8c9ddaa3acc23a00d", "title": "Detecting Denial of Service attacks using machine learning algorithms" }, { "paperId": "c6f3779834ba5a740c6339cc8eaf20861aecd391", "title": "Gradient Techniques To Predict Distributed Denial-Of-Service Attack" }, { "paperId": "14a91a00725a68e12e26008b6d4aabba35b123a4", "title": "Deep learning approaches for detecting DDoS attacks: a systematic review" }, { "paperId": "30e30ef7dd4edf31d2814d0fa2aea5ecfbdb7147", "title": "Security Analysis of DDoS Attacks Using Machine Learning Algorithms in Networks Traffic" }, { "paperId": "98144f61145c82dc18c21090c74d242a5bd98c38", "title": "A Machine Learning Approach for DDoS Detection on IoT Devices" }, { "paperId": "c0172cb4fe3f92dfe3ba461fccf000818098ea77", "title": "DDoS detection in 5G-enabled IoT networks using deep Kalman backpropagation neural network" }, { "paperId": "9ac6e60db9bc57420d022dd4a76428b3567a9eba", "title": "Survey of DoS/DDoS attacks in IoT" }, { "paperId": "3f1ff9cb1bca24d8e36acfdd3397a7cf8a299c04", "title": "A Machine Learning Approach for DDoS (Distributed Denial of Service) Attack Detection Using Multiple Linear Regression" }, { "paperId": "705cbd384e3f02069aa71815848964927f6acda8", "title": "IoT DoS and DDoS Attack Detection using ResNet" }, { "paperId": "3542d5888d9efce7a5988300011268fe87f3474e", "title": "Detection of Distributed Denial of Service Attacks based on Machine Learning Algorithms" }, { "paperId": "2ccea5b32e01676ee38bdadea25a1351a6c8921f", "title": "IoT-Botnet Detection using Long Short-Term Memory Recurrent Neural Network" }, { "paperId": "11c48c8e92dacfd8bf7d46255db70d3893aa33ff", "title": "Deep GRU Neural Network Prediction and Feedforward Compensation for Precision Multiaxis Motion Control Systems" }, { "paperId": "7f27e7bf9116ebeeeab1ef010fde5a4d6544ee14", "title": "Normalization" }, { "paperId": "ca743a63470d6e372fe5de5651d368e984ea159f", "title": "Detection of DDoS attacks using RNN-LSTM and Hybrid model ensemble" }, { "paperId": "d7a78adcc93a51d0b01817cc041941aff9ca39ec", "title": "Activation function" }, { "paperId": "88f0a31c4d9e4a025fd0f617e3635eb85a6c9649", "title": "Data normalization" }, { "paperId": "997672c9c2ec5d78a90a5d2c1046e7955c9d6c0f", "title": "Deep Recurrent Neural Network for Intrusion Detection in SDN-based Networks" }, { "paperId": "f7ec4293c8946b2633fdd7d41bd7cf120290242b", "title": "Optimization" }, { "paperId": null, "title": "Early Stopping" }, { "paperId": null, "title": "Learning Rate" }, { "paperId": null, "title": "One Hote Encoding" }, { "paperId": null, "title": "(cic-ddos2019" }, { "paperId": "980d55054f12460044e0041c60c09a57e0df773e", "title": "Detecting Distributed Denial of Service in Network Traffic with Deep Learning" }, { "paperId": "f3a0ad04ddbb490941fba29356a9a74941094ba9", "title": "Design and Development of RNN-based Anomaly Detection Model for IoT Networks" }, { "paperId": "3aef575839d8f6f6c70e3a1701a55e9f72d225a9", 
"title": "Detection of DDoS Attack in IoT Networks Using Sample Selected RNN-ELM" }, { "paperId": "2efd98e6021c868643704660410f87246e8ff3b6", "title": "Ensemble Deep Learning Models for Mitigating DDoS Attack in Software-Defined Network" }, { "paperId": null, "title": "Intrusion Detection Evaluation Dataset (CIC-IDS2017)" }, { "paperId": null, "title": "Testing Split Method in Machine Learning" }, { "paperId": "7f3ce917d2e55e5e0dc2a85d6a8b2f5f7709f7ae", "title": "Deep Learning Algorithms for Detecting Denial of Service Attacks in Software-Defined Networks" }, { "paperId": null, "title": "Dynamic approaches for detection of DDoS threats using machine learning" }, { "paperId": "330aa6fa445c5a9731e861992de3badbc4b5bf4f", "title": "Feature Extraction" }, { "paperId": "6f43217be8fab9ec5669e7247ce5ca770038f734", "title": "Long Short-Term Memory Recurrent Neural Network for detecting DDoS flooding attacks within TensorFlow Implementation framework." }, { "paperId": "3804cb787031aacd41c2de320bc4ddad637238a3", "title": "Data Splitting" }, { "paperId": "7ca242a8717291f2f1572885b15e1f87f5a60846", "title": "What Are Categorical Data ?" }, { "paperId": null, "title": "Batch and Epoch" }, { "paperId": null, "title": "Neustar Security" } ]
20,152
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Medicine", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01c803711795e240c611950711210384c9887640
[ "Computer Science" ]
0.845113
Domain-Specific Pretraining for Vertical Search: Case Study on Biomedical Literature
01c803711795e240c611950711210384c9887640
Knowledge Discovery and Data Mining
[ { "authorId": "2153607780", "name": "Yu Wang" }, { "authorId": "2887412", "name": "Jinchao Li" }, { "authorId": "40466858", "name": "Tristan Naumann" }, { "authorId": "144628574", "name": "Chenyan Xiong" }, { "authorId": "47413820", "name": "Hao Cheng" }, { "authorId": "1846722967", "name": "Robert Tinn" }, { "authorId": "2109566188", "name": "Cliff Wong" }, { "authorId": "2637252", "name": "N. Usuyama" }, { "authorId": "46187984", "name": "Richard Rogahn" }, { "authorId": "3303634", "name": "Zhihong Shen" }, { "authorId": "2116247137", "name": "Yang Qin" }, { "authorId": "145479841", "name": "E. Horvitz" }, { "authorId": "144609235", "name": "Paul N. Bennett" }, { "authorId": "48441311", "name": "Jianfeng Gao" }, { "authorId": "1759772", "name": "Hoifung Poon" } ]
{ "alternate_issns": null, "alternate_names": [ "KDD", "Knowl Discov Data Min" ], "alternate_urls": null, "id": "a0edb93b-1e95-4128-a295-6b1659149cef", "issn": null, "name": "Knowledge Discovery and Data Mining", "type": "conference", "url": "http://www.acm.org/sigkdd/" }
Information overload is a prevalent challenge in many high-value domains. A prominent case in point is the explosion of the biomedical literature on COVID-19, which swelled to hundreds of thousands of papers in a matter of months. In general, biomedical literature expands by two papers every minute, totalling over a million new papers every year. Search in the biomedical realm, and many other vertical domains, is challenging due to the scarcity of direct supervision from click logs. Self-supervised learning has emerged as a promising direction to overcome the annotation bottleneck. We propose a general approach for vertical search based on domain-specific pretraining and present a case study for the biomedical domain. Despite being substantially simpler and not using any relevance labels for training or development, our method performs comparably or better than the best systems in the official TREC-COVID evaluation, a COVID-related biomedical search competition. Using distributed computing in modern cloud infrastructure, our system can scale to tens of millions of articles on PubMed and has been deployed as Microsoft Biomedical Search, a new search experience for biomedical literature: https://aka.ms/biomedsearch.
## Domain-Specific Pretraining for Vertical Search: Case Study on Biomedical Literature

### Yu Wang,* Jinchao Li,* Tristan Naumann,* Chenyan Xiong, Hao Cheng, Robert Tinn, Cliff Wong, Naoto Usuyama, Richard Rogahn, Zhihong Shen, Yang Qin, Eric Horvitz, Paul N. Bennett, Jianfeng Gao, Hoifung Poon

##### yuwan,jincli,tristan,cxiong,chehao,rotinn,clwon,naotous,rrogahn,zhihosh,yaq,horvitz,pauben,jfgao,hoifung@microsoft.com

Microsoft Research, Redmond, WA

*These authors contributed equally to this research.

#### ABSTRACT

Information overload is a prevalent challenge in many high-value domains. A prominent case in point is the explosion of the biomedical literature on COVID-19, which swelled to hundreds of thousands of papers in a matter of months. In general, biomedical literature expands by two papers every minute, totalling over a million new papers every year. Search in the biomedical realm, and many other vertical domains, is challenging due to the scarcity of direct supervision from click logs. Self-supervised learning has emerged as a promising direction to overcome the annotation bottleneck. We propose a general approach for vertical search based on domain-specific pretraining and present a case study for the biomedical domain. Despite being substantially simpler and not using any relevance labels for training or development, our method performs comparably or better than the best systems in the official TREC-COVID evaluation, a COVID-related biomedical search competition. Using distributed computing in modern cloud infrastructure, our system can scale to tens of millions of articles on PubMed and has been deployed as Microsoft Biomedical Search, a new search experience for biomedical literature: [https://aka.ms/biomedsearch](https://aka.ms/biomedsearch).

#### CCS CONCEPTS

• Information systems → **Information retrieval**; • Computing methodologies → **Natural language processing**; • Applied computing → **Bioinformatics**.

#### KEYWORDS

Domain-specific pretraining, Search, Biomedical, NLP, COVID-19

**ACM Reference Format:** Yu Wang,* Jinchao Li,* Tristan Naumann,* Chenyan Xiong, Hao Cheng, Robert Tinn, Cliff Wong, Naoto Usuyama, Richard Rogahn, Zhihong Shen, Yang Qin, Eric Horvitz, Paul N. Bennett, Jianfeng Gao, Hoifung Poon. 2021. Domain-Specific Pretraining for Vertical Search: Case Study on Biomedical Literature. In Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '21), August 14–18, 2021, Virtual Event, Singapore. ACM, New York, NY, USA, 9 pages. [https://doi.org/10.1145/3447548.3469053](https://doi.org/10.1145/3447548.3469053)

_KDD '21, August 14–18, 2021, Virtual Event, Singapore._ © 2021 Copyright held by the owner/author(s). Publication rights licensed to ACM. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive Version of Record was published in Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '21), August 14–18, 2021, Virtual Event, Singapore, [https://doi.org/10.1145/3447548.3469053](https://doi.org/10.1145/3447548.3469053).

#### 1 INTRODUCTION

Keeping up with scientific developments on COVID-19 highlights the perennial problem of information overload in a high-stakes domain. At the time of writing, hundreds of thousands of research papers have been published concerning COVID-19 and the SARS-CoV-2 virus.
For biomedicine more generally, the PubMed[1] service adds 4,000 papers every day and over a million papers every year. While progress in general search has been made using sophisticated machine learning methods, such as neural retrieval models, vertical search is often limited to comparatively simple keyword search augmented by domain-specific ontologies (e.g., entity acronyms). The PubMed search engine exemplifies this experience. Direct supervision, while available for general search in the form of relevance labels from click logs, is typically scarce in specialized domains, especially for emerging areas such as COVID-related biomedical search. Self-supervised learning has emerged as a promising direction to overcome the annotation bottleneck, based on automatically creating noisy labeled data from unlabeled text. In particular, neural language model pretraining, such as BERT [8], has demonstrated superb performance gains for general-domain information retrieval [21, 27, 45, 46] and natural language processing (NLP) [39, 40]. Additionally, for specialized domains, domain-specific pretraining has proven to be effective for in-domain applications [1, 3, 11, 12, 15, 20, 34].

We propose a general methodology for developing vertical search systems for specialized domains. As a case study, we focus on biomedical search. We find evidence that the methods have significant impact in the target domain and likely generalize to other vertical search domains. We demonstrate how advances described in earlier and related work [11, 44, 48] can be brought together to provide new capabilities. We also provide data supporting the feasibility of a large-scale deployment through detailed system analysis, stress-testing of the system, and acquisition of expert relevance evaluations.[2]

**Figure 1: General approach for vertical search: A neural ranker is initialized by domain-specific pretraining and fine-tuned on self-supervised relevance labels generated using a domain-specific lexicon from the domain ontology to filter query-passage pairs from MS MARCO.**

In section 2, we explore the key idea of initializing a neural ranking model with domain-specific pretraining and fine-tuning the model on a self-supervised domain-specific dataset generated from general query-document pairs (e.g., from MS MARCO [26]). Then, we introduce the biomedical domain as a case study. In section 3, we evaluate the method on the TREC-COVID dataset [30, 38]. We find that the method performs comparably or better than the best systems in the official TREC-COVID evaluation, despite its generality and simplicity, and despite using zero COVID-related relevance labels for direct supervision. In section 4, we discuss how our system design leverages distributed computing and modern cloud infrastructure for scalability and ease of use. This approach can be reused for other domains. In the biomedical domain, our system can scale to tens of millions of PubMed articles and attain a high query-per-second (QPS) throughput.

[1] http://pubmed.ncbi.nlm.nih.gov
[2] The system has been released, though large-scale deployment measures other than stress-testing are not yet available and we focus on the evidence from expert evaluation.
We have deployed the resulting system for preview as Microsoft Biomedical Search, which provides a new search experience over biomedical literature: [https://aka.ms/biomedsearch](https://aka.ms/biomedsearch).

#### 2 DOMAIN-SPECIFIC PRETRAINING FOR VERTICAL SEARCH

In this section, we present a general approach for vertical search based on domain-specific pretraining and self-supervised learning (Figure 1). We first review neural language models and show how domain-specific pretraining can serve as the foundation for a domain-specific document neural ranker. We then present a general method of fine-tuning the ranker by using self-supervised, domain-specific relevance labels from a broad-coverage query-document dataset using the domain ontology. Finally, we show how this approach can be applied in biomedical literature search.

#### 2.1 Domain-Specific Pretraining

Language model pretraining can be considered a form of task-agnostic self-supervision that generates training examples by hiding words from unlabeled text and tasks the model with predicting the hidden words. In our work on vertical search, we adopt the popular Bidirectional Encoder Representations from Transformers (BERT) [8], which has become a standard building block for NLP applications. Instead of predicting the next token based on the preceding tokens, as in traditional generative models, BERT employs a Masked Language Model (MLM), which randomly replaces a subset of tokens by a special token [MASK] and tries to predict them from the rest of the words. The training objective is the cross-entropy loss between the original tokens and the predicted ones.

BERT builds on the transformer model [37] with its multi-head self-attention mechanism, which has demonstrated high performance in parallel computation and modeling long-range dependencies, as compared to recurrent neural networks such as LSTM [13]. The input consists of text spans, such as sentences, separated by a special token [SEP]. To address out-of-vocabulary words, tokens are divided into subword units using Byte-Pair Encoding (BPE) [33] or its variants [18], which generates a fixed-size subword vocabulary to compactly represent the training text corpora. The input is first passed to a lexical encoder, which combines the token embedding, position embedding, and segment embedding by element-wise summation. The embedding layer is then passed to multiple layers of transformer modules to generate a contextual representation [37].

Prior pretraining efforts have frequently focused on the newswire and web domains. For example, the BERT model was trained on Wikipedia[3] and BookCorpus [49], and subsequent efforts have focused on crawling additional web text to conduct increasingly large-scale pretraining [6, 23, 29]. For domain-specific applications, pretraining on in-domain text has been shown to provide additional gains, but the prevalent assumption is that out-domain text is still helpful, and pretraining typically adopts a mixed-domain approach [12, 20]. Gu et al. [11] change this assumption and show that, for domains with ample text, a pure domain-specific pretraining approach is advantageous and leads to substantial gains in downstream in-domain applications. We adopt this approach by generating a domain-specific vocabulary and performing language model pretraining from scratch on in-domain text [11].

[3] http://wikipedia.org
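As a concrete illustration of the MLM objective described above, the following is a minimal sketch in plain PyTorch. The vocabulary size, mask rate, and the toy one-layer "encoder" are illustrative assumptions; an actual pretraining run uses a deep transformer stack and a subword tokenizer.

```python
# Minimal sketch of the MLM objective: mask ~15% of tokens and train to
# recover them with cross-entropy. Toy token IDs and a stub encoder only.
import torch
import torch.nn.functional as F

VOCAB_SIZE, MASK_ID, MASK_RATE = 30_000, 4, 0.15

token_ids = torch.randint(10, VOCAB_SIZE, (8, 128))  # toy batch of sequences

# Choose positions to mask; unmasked positions get label -100 so the
# cross-entropy loss ignores them.
mask = torch.rand(token_ids.shape) < MASK_RATE
labels = torch.where(mask, token_ids, torch.full_like(token_ids, -100))
inputs = torch.where(mask, torch.full_like(token_ids, MASK_ID), token_ids)

# Stand-in encoder: a real model would be a deep transformer stack.
embed = torch.nn.Embedding(VOCAB_SIZE, 256)
head = torch.nn.Linear(256, VOCAB_SIZE)
logits = head(embed(inputs))  # (batch, seq, vocab)

loss = F.cross_entropy(logits.view(-1, VOCAB_SIZE), labels.view(-1),
                       ignore_index=-100)
loss.backward()
```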
#### 2.2 Self-Supervised Fine-Tuning

As a first-order approximation, the search problem can be abstracted as learning a relevance function for query 𝑞 and text span 𝑡: 𝑓(𝑞, 𝑡) → {0, 1}. Here, 𝑡 may refer to a document or arbitrary text span such as a passage.

Traditional search methods adopt a sparse retrieval approach by essentially treating the query as a bag of words and matching each word against the candidate text, which can be done efficiently using an inverted index. Individual words are weighted (e.g., by TF-IDF) to downweight the effect of stop words or function words, as exemplified by BM25 and its variants [31]. Variations abound in natural language expressions, which can cause significant challenges in sparse retrieval. To address this problem, dense retrieval maps query and text each to a vector in a continuous representation space and estimates relevance by computing the similarity between the two vectors (e.g., via dot product) [16, 17, 45]. Dense retrieval can be made highly scalable by pre-computing text vectors, and can potentially replace or combine with sparse retrieval.

Neither sparse retrieval nor dense retrieval attempts to model complex interdependencies between the query and text. In contrast, sophisticated neural approaches concatenate query and text as input for a BERT model to leverage cross-attention among query and text tokens [47]. Specifically, query 𝑞 and text 𝑡 are combined into a sequence "[CLS] q [SEP] t [SEP]" as input, where [CLS] is a special token to be used for final prediction [8]. This can produce significant performance gains but requires a large amount of labeled data for fine-tuning the BERT model. Such a cross-attention neural model is not scalable enough for the retrieval step, as it must be recomputed from scratch for every candidate text whenever a new query arrives. The standard practice thus adopts a two-stage approach, using a fast L1 retrieval method to select the top 𝐾 text candidates, and applying the neural ranker on these candidates as L2 reranking.

In our proposed approach, we use BM25 for L1 retrieval, and initialize our L2 neural ranker with a domain-specific BERT model. To fine-tune the neural ranker, we use the Microsoft Machine Reading Comprehension dataset, MS MARCO [26], and a domain-specific lexicon to generate noisy relevance labels at scale using self-supervision (Figure 1). MS MARCO was created by identifying pairs of anonymized queries and relevant passages from Bing's search query logs, and crowd-sourcing potential answers from passages. The dataset contains about one million questions spanning a wide range of topics, each with corresponding relevant answer passages from Bing question answering systems. For self-supervised fine-tuning labels, we use the MS MARCO subset [24] whose queries contain at least one domain-specific term from the domain ontology.
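A minimal sketch of this label-generation step follows: filter query-passage pairs by a domain lexicon and keep the relevant pairs as positives, formatted for a cross-encoder. The lexicon terms, data layout, and helper names are illustrative placeholders, not MS MARCO's actual file format.

```python
# Minimal sketch: keep only queries containing a domain-lexicon term and
# pair each with its relevant passage as a positive training example.
DOMAIN_LEXICON = {"diabetes", "insulin", "sepsis", "melanoma"}  # toy ontology terms

def is_in_domain(query: str) -> bool:
    return any(term in query.lower().split() for term in DOMAIN_LEXICON)

def build_positive_pairs(qrels):
    """qrels: iterable of (query, relevant_passage) pairs."""
    for query, passage in qrels:
        if is_in_domain(query):
            # The cross-encoder later consumes this as
            # "[CLS] query [SEP] passage [SEP]" via the tokenizer.
            yield query, passage, 1

toy_qrels = [("what causes sepsis in adults", "Sepsis arises when ..."),
             ("best pizza near me", "Tony's Pizzeria ...")]
print(list(build_positive_pairs(toy_qrels)))  # only the sepsis pair survives
```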
#### 2.3 Application to Biomedical Literature Search

Biomedicine is a representative case study that illustrates the challenges of vertical search. It is a high-value domain with a vast and rapidly growing research literature, as evident in PubMed (30+ million articles; adding over a million a year). However, existing biomedical search tools are typically limited to sparse retrieval methods, as exemplified by PubMed. This search is primarily limited to keyword matching, though it is augmented with limited query expansion using domain ontologies (e.g., MeSH terms [22]). This method is suboptimal for long queries expressing complex intent.

We use biomedicine as a running example to illustrate our approach for vertical search. We leverage PubMed articles for domain-specific pretraining and use the publicly available PubMedBERT [11] to initialize our L2 neural ranker. For self-supervised fine-tuning, we use the Unified Medical Language System (UMLS) [5] as our domain ontology and filter MS MARCO queries using the disease or syndrome terms in UMLS, similar to MacAvaney et al. [24, 25] but focusing on the broad biomedical literature rather than COVID-19. This medical subset of MS MARCO contains about 78 thousand annotated queries. We used these queries and their relevant passages in MS MARCO as positive relevance labels. To generate negative labels, we ran BM25 for each query over all non-relevant passages in MS MARCO, and selected the top 100 results. This forces the neural ranker to work harder at separating truly relevant passages from ones with mere overlap in keywords. For balanced training, we down-sampled negative instances to equal the number of positive instances (i.e., a 1:1 ratio). This resulted in about 640 thousand (query, passage, label) examples. Based on preliminary experiments, we chose a learning rate of 2e−5 and ran fine-tuning for one epoch in all subsequent experiments. We found that the results are not sensitive to hyperparameters, as long as the learning rate is of the same order of magnitude and at least one epoch is run over all the examples. At retrieval time, we used K = 60 in the L1 ranker by default (i.e., we used BM25 to select the top 60 text candidates).
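The hard-negative mining step described above can be sketched as follows, assuming the third-party rank_bm25 package. The corpus is a toy placeholder, and only the top 2 negatives are taken here instead of the top 100 used in the paper.

```python
# Minimal sketch: rank non-relevant passages with BM25 for a query and
# take the top hits as hard negatives (label 0).
from rank_bm25 import BM25Okapi

non_relevant_passages = [
    "Insulin resistance precedes type 2 diabetes ...",
    "The Eiffel Tower was completed in 1889 ...",
    "Sepsis mortality declines with early antibiotics ...",
]
tokenized = [p.lower().split() for p in non_relevant_passages]
bm25 = BM25Okapi(tokenized)

def mine_hard_negatives(query: str, k: int = 2):
    scores = bm25.get_scores(query.lower().split())
    top = sorted(range(len(scores)), key=lambda i: -scores[i])[:k]
    # Lexically similar but non-relevant passages force the ranker to
    # learn more than keyword overlap.
    return [(query, non_relevant_passages[i], 0) for i in top]

print(mine_hard_negatives("what causes sepsis in adults"))
```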
#### 3 CASE STUDY EVALUATION ON COVID-19 SEARCH

The COVID-19 literature provides a realistic test ground for biomedical search. In a little over a year, the COVID-related biomedical literature has grown to include over 440 thousand papers that mention COVID-19 or the SARS-CoV-2 virus. This explosive growth sparked the creation of the COVID-19 Open Research Dataset (CORD-19) [43] and subsequently TREC-COVID [30, 38], an evaluation resource for pandemic information retrieval. In this section, we describe our evaluation of the biomedical search system on TREC-COVID, focusing on two key questions. First, how does our system perform compared to the best systems participating in TREC-COVID? We note that many of these systems are expected to have complex designs and/or require COVID-related relevance labels for training and development. Second, what is the impact of domain-specific pretraining compared to general-domain or mixed-domain pretraining?

#### 3.1 The TREC-COVID Dataset

To create TREC-COVID, organizers from the National Institute of Standards and Technology (NIST) used versions of CORD-19 from April 10 (Round 1), May 1 (Round 2), May 19 (Round 3), June 19 (Round 4), and July 16 (Round 5). These datasets spanned an initial set of 30 topics, with five new topics planned for each additional round; the final set thus consists of 50 topics and cumulative judgements from previous rounds generated by domain experts [30]. Relevance labels were created by annotators using a customized platform and released in rounds. Round 1 contains 8,691 relevance labels for 30 topics, and was provided to participating teams for training and development. Subsequent rounds were hosted to introduce additional topics and relevance labels as a rolling evaluation for increased participation. We use Round 2, the round we participated in, to evaluate our system development. It contains 12,037 relevance labels for 35 topics.

#### 3.2 Top Systems in TREC-COVID Leaderboard

The results of TREC-COVID Round 2 are organized into three groups: Manual, which used manual interventions, e.g., manual query rewriting, in any part of the system; Feedback, which used labels from Round 1; and Automatic, which used neither manual effort nor Round 1 labels.[4] Note that the categorization of Feedback and Automatic is not always explicit, so their grouping might be mixed. Overall, 136 systems participated in the official evaluation. NDCG@10 was used as the main evaluation metric, with Precision@5 (P@5) reported as an additional metric.

The best performing systems typically adopted a sophisticated neural ranking pipeline and performed extensive training and development on TREC-COVID labeled data from Round 1. Some systems also used very large pretrained language models. For example, covidex.t5 used T5 Large [29], a general-domain transformer-based model with 770 million parameters pretrained on the Colossal Clean Crawled (C4) web corpus (26 TB).[5]

The best performing non-manual system for Round 2 is CMT (CMU-Microsoft-Tsinghua) [44], which adopted a two-stage ranking approach. For L1, CMT used standard BM25 sparse retrieval as well as dense retrieval, fusing the top-ranking results from the two methods. The dense retrieval method computed the dot product of query and passage embeddings based on a BERT model [17]. For L2, CMT used a neural ranker with cross-attention over the query and candidate passage. For training, CMT started with the same biomedical MS MARCO data (selecting MS MARCO queries with biomedical terms) [24], but then applied additional processing to generate synthetic labeled data. Briefly, it first trained a query generation (QG) system [27] on the query-passage pairs from biomedical MS MARCO, initialized with GPT-2 [28]. Given this trained QG system, for each COVID-related document 𝑑, it generated a pseudo query 𝑞 = QG(𝑑), and then applied BM25 to retrieve a pair of documents with high and low ranking, (𝑑′₊, 𝑑′₋). Finally, it called on ContrastQG [44] to generate a query that would best differentiate the two documents: 𝑞′ = ContrastQG(𝑑′₊, 𝑑′₋). For the neural ranker, CMT started with SciBERT [3] with continual pretraining on CORD-19, and fine-tuned the model using both Med MARCO labels and synthetic labels from ContrastQG. To leverage the TREC-COVID data from Round 1, CMT incorporated data reweighting (ReinfoSelect) based on the REINFORCE algorithm [48]. It used performance on Round 1 data as a reward signal, and learned to denoise training labels by re-weighting them using policy gradient.

[4] https://castorini.github.io/TREC-COVID/round2/
[5] https://www.tensorflow.org/datasets/catalog/c4
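To summarize CMT's synthetic-label pipeline in code form, here is a minimal sketch with every model represented by a stub; the function names are illustrative and do not correspond to CMT's actual implementation.

```python
# Minimal sketch of the QG/ContrastQG idea described above, with stubs:
# generate a pseudo query per document, pull a high- and a low-ranked
# document with BM25, and emit a query that separates the pair.
def synthesize_labels(docs, qg, contrast_qg, bm25_rank):
    examples = []
    for d in docs:
        q = qg(d)                             # QG: pseudo query for d
        ranked = bm25_rank(q, docs)           # docs sorted by BM25 score
        d_pos, d_neg = ranked[0], ranked[-1]  # high- vs low-ranked pair
        q2 = contrast_qg(d_pos, d_neg)        # query that differentiates them
        examples.append((q2, d_pos, 1))
        examples.append((q2, d_neg, 0))
    return examples

# Toy stubs standing in for the trained generators and BM25.
docs = ["study of sepsis outcomes", "survey of pizza toppings"]
print(synthesize_labels(
    docs,
    qg=lambda d: d.split()[-1],
    contrast_qg=lambda a, b: a.split()[-1],
    bm25_rank=lambda q, ds: sorted(ds, key=lambda x: q in x, reverse=True)))
```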
#### 3.3 Our Approach on TREC-COVID

TREC-COVID offers an excellent benchmark for assessing the general applicability of our proposed approach for vertical search. We evaluated our systems on the test set (Round 2) and compared them with the best systems in the official TREC-COVID evaluation. We essentially took the biomedical search system from subsection 2.3 as is (PubMedBERT). Although COVID-related text may differ somewhat from general biomedical text, we expect that a biomedical model should offer strong performance for this subset of biomedical literature. To further assess the impact of domain-specific pretraining, we also conducted continual pretraining on CORD-19 for 100K BERT steps and evaluated it in our biomedical search system (PubMedBERT-COVID).

**Table 1: Comparison with the top-ranked systems in the official TREC-COVID evaluation (test results; Round 2). Our results were averaged from ten runs with different random seeds (standard deviation shown in parentheses). The best systems in the TREC-COVID evaluation (bottom panel) all used Round 1 data for training, as well as more sophisticated learning methods and/or larger models such as T5. In contrast, our systems (top panel) are much simpler and used zero TREC-COVID relevance labels, but they already perform competitively against the best systems by using domain-specific pretraining (PubMedBERT). Our systems were trained using one epoch with a fixed learning rate. By exploring longer training and multiple learning rates and using Round 1 data for development, our systems can perform even better (middle panel).**

| Model | NDCG@10 | P@5 |
|---|---|---|
| _Our approach:_ | | |
| PubMedBERT | 61.5 (±1.1) | 69.5 (±1.8) |
| PubMedBERT-COVID | 65.6 (±1.0) | 73.2 (±1.1) |
| _+ dev set:_ | | |
| PubMedBERT | 64.8 | 71.4 |
| PubMedBERT-COVID | 67.9 | 73.7 |
| _Top systems in TREC-COVID:_ | | |
| covidex.t5 (T5) | 62.5 | 73.1 |
| mpiid5 (ELECTRA) | 66.8 | 77.7 |
| CMT (SparseDenseSciBERT) | 67.7 | 76.0 |

**Table 2: Comparison of domain-specific (PubMedBERT and PubMedBERT-COVID) pretraining with out-domain (BERT, RoBERTa, UniLM) or mixed-domain pretraining (SciBERT) in TREC-COVID test results (Round 2). All results were averaged from ten runs (standard deviation in parentheses). Domain-specific pretraining is essential for attaining good performance in our general approach for vertical search.**

| Model | NDCG@10 | P@5 |
|---|---|---|
| BERT | 55.0 (±1.2) | 63.4 (±2.3) |
| RoBERTa | 53.5 (±1.6) | 61.1 (±2.3) |
| UniLM | 55.0 (±1.2) | 62.0 (±1.8) |
| SciBERT | 58.9 (±1.5) | 67.7 (±2.2) |
| PubMedBERT | 61.5 (±1.1) | 69.5 (±1.8) |
| PubMedBERT-COVID | 65.6 (±1.0) | 73.2 (±1.1) |

Table 1 shows the results. Surprisingly, without using any relevance labels, our systems (top panel) perform competitively against the best systems in the TREC-COVID evaluation. E.g., PubMedBERT-COVID outperforms covidex.t5 by over three absolute points in NDCG@10, even though the latter used a much larger language model pretrained on three orders of magnitude more data (26TB vs 21GB). Our systems were trained using one epoch with a fixed learning rate (2e-5). By exploring longer training (up to five epochs) and multiple learning rates (1e-5, 2e-5, 5e-5) and using Round 1 as the dev set, our best system (middle panel) performs on par in NDCG@10 with CMT, the top system in TREC-COVID, while requiring no additional sophisticated learning components such as dense retrieval, QG, ContrastQG, and ReinfoSelect.

The success of our systems can be attributed primarily to our in-domain language models (PubMedBERT, PubMedBERT-COVID). To further assess the impact of domain-specific pretraining, we also evaluated our system using out-domain and mixed-domain models. See Table 2 for the results. Out-domain language models all perform relatively poorly in this evaluation of biomedical search, and exhibit little difference in search relevance despite significant differences in the size of vocabulary, pretraining corpus, and model (e.g., RoBERTa [23] used a larger vocabulary, and both RoBERTa and UniLM [9] were pretrained on a much larger text corpus). Pretraining on PubMed text helps SciBERT, but its mixed-domain approach (including computer science literature) inhibits its performance compared to domain-specific pretraining. Continual pretraining on COVID-specific literature helps substantially, with PubMedBERT-COVID outperforming PubMedBERT by over four absolute points in NDCG@10. Overall, domain-specific pretraining is essential for the performance gain, with PubMedBERT-COVID outperforming general-domain BERT models by over ten absolute points in NDCG@10.

In sum, the TREC-COVID results provide strong evidence that, by leveraging domain-specific pretraining, our approach for vertical search is general and can attain high accuracy in a new domain without significant manual effort.
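Since NDCG@10 is the headline metric in these tables, a minimal sketch of one common formulation follows; the graded relevance values are toy numbers, not TREC-COVID judgements.

```python
# Minimal sketch of NDCG@k: discounted cumulative gain of a ranked list,
# normalized by the ideal ordering of the same relevance grades.
import math

def dcg(rels):
    return sum(r / math.log2(i + 2) for i, r in enumerate(rels))

def ndcg_at_k(ranked_rels, k=10):
    ideal = sorted(ranked_rels, reverse=True)
    denom = dcg(ideal[:k])
    return dcg(ranked_rels[:k]) / denom if denom > 0 else 0.0

# Graded relevance (2 = highly relevant, 1 = relevant, 0 = not) by rank.
print(ndcg_at_k([2, 0, 1, 2, 0, 0, 1, 0, 0, 0]))
```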
Only Microsoft Biomedical Search (our system) uses domain-specific** **pretraining (PubMedBERT), which outperforms general-domain language models, for neural reranking.** NDCG@10, even though the latter used a much larger language model pretrained on three orders of magnitude more data (26TB vs 21GB). Our systems were trained using one epoch with a fixed learning rate (2e-5). By exploring longer training (up to five epochs) and multiple learning rates (1e-5, 2e-5, 5e-5) and using Round 1 as dev set, our best system (middle panel) performs on par in NDCG@10 with CMT, the top system in TREC-COVID, while requiring no additional sophisticated learning components such as dense retrieval, QG, ContrastQG, and ReinfoSelect. The success of our systems can be attributed primarily to our in-domain language models (PubMedBERT, PubMedBERT-COVID). To further assess the impact of domain-specific pretraining, we also evaluated our system using out-domain and mixed-domain models. See Table 2 for the results. Out-domain language models all perform relatively poorly in this evaluation of biomedical search, and exhibit little difference in search relevance despite significant difference in the size of vocabulary, pretraining corpus, and model (e.g., RoBERTa [23] used a larger vocabulary and both RoBERTa and UniLM [9] were pretrained on much larger text corpus). Pretraining on PubMed text helps SciBERT, but its mixeddomain approach (including compute science literature) inhibits its performance compared to domain-specific pretraining. Continual pretraining on covid-specific literature helps substantially, with PubMedBERT-COVID outperforming PubMedBERT by over four absolute points in NDCG@10. Overall, domain-specific pretraining is essential for the performance gain, with PubMedBERT-COVID outperforming general-domain BERT models by over ten absolute points in NDCG@10. In sum, the TREC-COVID results provide strong evidence that, by leveraging domain-specific pretraining, our approach for vertical search is general and can attain high accuracy in a new domain without significant manual effort. #### 4 PUBMED-SCALE BIOMEDICAL SEARCH The canonical tool for biomedical search is the PubMed search itself. Recently, COVID-19 has spawned a plethora of new prototype biomedical search tools. See Table 3 for a list of representative [6https://pubmed.ncbi.nlm.nih.gov/](https://pubmed.ncbi.nlm.nih.gov/) [7https://covid19search.azurewebsites.net/home/index?q=](https://covid19search.azurewebsites.net/home/index?q=) [8https://cord-19.apps.allenai.org/](https://cord-19.apps.allenai.org/) [9https://covid19-research-explorer.appspot.com/](https://covid19-research-explorer.appspot.com/) [10https://covidex.ai/](https://covidex.ai/) [11https://sfr-med.com/search](https://sfr-med.com/search) [12https://aka.ms/biomedsearch](https://aka.ms/biomedsearch) systems. PubMed covers essentially the entire biomedical literature, but its aforementioned search engine is based on relatively simplistic sparse retrieval methods, which generally perform less well, especially in the presence of long queries with complex intent. By contrast, while some new search tools feature advanced neural ranking methods, their search scope was typically limited to CORD-19, which considers only a tiny fraction of biomedical literature. 
In this section, we describe our effort in developing and deploying Microsoft Biomedical Search, a new biomedical search engine that combines PubMed-scale coverage and state-of-the-art neural ranking, based on our general approach for vertical search, as described in subsection 2.3 and validated in section 3. Creating the system required addressing significant challenges with system design and engineering. Employing a modern cloud infrastructure helped with the fielding of the system. The fielded system can serve as a reference architecture for vertical search in general; many components are directly reusable for other high-value domains.

#### 4.1 System Challenges

The key challenge in the system design is to scale to tens of millions of biomedical articles, while enabling affordable and fast computation in sophisticated neural ranking methods, based on large language models with hundreds of millions of parameters. Specifically, the CORD-19 dataset initially covered about 29,000 documents (abstracts or full-text articles) when it was first launched in March 2020. It quickly grew to about 60,000 documents when it was adopted by TREC-COVID (Round 2, May 2020), which is the version used by many COVID-search tools. Even in its latest version (as of early Feb. 2021), CORD-19 only contains about 440,000 documents (with about 150,000 full-text articles). By contrast, PubMed covers over 30 million biomedical publications, with about 20 million abstracts and over 3 million full-text articles, which is two orders of magnitude larger than CORD-19.

Given early feedback from a range of biomedical practitioners, in addition to document-level retrieval, we decided to enable passage-level retrieval to enhance granularity and precision. This further exacerbates our scalability challenge, as the retrieval candidates now include over 216 million paragraphs (passages).

Neural ranking methods can greatly improve search relevance compared to standard keyword-based and sparse retrieval methods. However, they present additional challenges, as these methods often build upon large pretrained language models, which are computationally intensive and generally require expensive graphics processing units (GPUs).

**Figure 2: Left: Overview of the Microsoft Biomedical Search system. Right: A reference cloud architecture for servicing the L2 neural ranker and machine reading comprehension (MRC) with automatic scaling. Queries are processed by a standard two-stage architecture, where an L1 ranker based on BM25 generates the top 60 passages for each query, followed by an L2 neural ranker to produce final reranking results, which are then passed to the MRC module to generate answers from a candidate passage if applicable.**

#### 4.2 Our Solution

As described in subsection 2.3, we adopt a two-stage ranking model, with an L1 ranker based on BM25 and an L2 reranker based on PubMedBERT. As shown in Figure 2 (left), the system comprises a web front end, web back end API, cache, L1 ranking, and L2 ranking. Query requests are passed on from the web front end to the back end API, which coordinates L1 and L2 ranking. The system first consults the cache and returns results directly if the query is cached. Otherwise, it calls on L1 to retrieve the top candidates and then calls on L2 to conduct neural reranking. Finally, it combines the results and returns them to the front end for display.
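A minimal sketch of this query flow, with stubs standing in for the cache, Elastic Search, and the PubMedBERT reranker (K = 60 follows the system default):

```python
# Minimal sketch of the two-stage query flow: cache check, then L1 BM25
# retrieval, then L2 neural reranking. All components are stubs.
K = 60  # top candidates passed from L1 to L2

def handle_query(q, cache, l1_retrieve, l2_rerank):
    if q in cache:                       # cache hit: skip both rankers
        return cache[q]
    candidates = l1_retrieve(q, K)       # fast sparse retrieval (stub)
    results = l2_rerank(q, candidates)   # expensive cross-attention reranking (stub)
    cache[q] = results
    return results

# Toy stubs standing in for Elastic Search and the neural ranker.
demo = handle_query("sepsis biomarkers", {},
                    l1_retrieve=lambda q, k: [f"passage-{i}" for i in range(3)],
                    l2_rerank=lambda q, cands: list(reversed(cands)))
print(demo)
```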
To address the scalability challenges, we develop our system on top of modern cloud infrastructures to leverage their native capabilities of distributed computing, caching, and load balancing, which drastically simplifies our system design and engineering. We choose Microsoft Azure as the cloud infrastructure, but our design is general and can be easily adapted to other cloud infrastructures. In early experiments, we found that the web front end, back end, and cache components are sufficiently fast. So, in what follows, we focus on discussing how to address the scalability challenges in L1 and L2 ranking.

For L1, we use BM25, which can be supported by standard inverted index methods. We adopt Elastic Search, an open-source distributed search engine built on Apache Lucene [10]. Given our PubMed-scale coverage, the index size of Elastic Search is over 160GB and is growing as new papers arrive. The index size further multiplies with the number of replications added to ensure system availability (we use two replications). As such, we need to use machines with enough memory and processing power.

For L2, although we only run on a limited number of candidate passages from L1 (we used the top 60 in our system), the neural ranking model is based on large pretrained language models, which are computationally intensive. Currently, we use the base model of PubMedBERT with 12 layers of transformer modules, containing over 300 million parameters. We thus use a distributed GPU cluster and make careful hardware and software choices to maximize overall cost-effectiveness while minimizing L2 latency.

We use query-per-second (QPS) as our key workload metric for system design. To identify major bottlenecks and fine-tune design choices, we conducted focused experiments on the L1 and L2 rankers separately to assess their impact on run-time latency. We use Locust [14], a Python-based framework for load testing. To ensure head-to-head comparison among design choices, we adopted a fixed system setup as follows. The back end API is developed with Flask [32], using Gevent [4] with 8 workers to ensure the highest performance. To minimize variance due to network cost, the back end API and L1 or L2 rankers are deployed in the same data center, as are the machines used to send queries. All the servers are deployed in the same virtual network. We prepared a query set containing 71 thousand anonymized queries sampled from Microsoft Academic Search. We turned off the cache layer during all experiments. With this configuration, the latency of the back end API per query is around 20 ms.

We used the Locust client to simulate asynchronous requests from multiple users. Each simulated user would randomly wait for 15-60 seconds after each search request. Each experiment ran for 10 minutes. From preliminary experiments, we found that Elastic Search requires warm-up to reach maximum performance, so we ran the system with low QPS (0.5 per sec) for 10 minutes before conducting our experiments. Elastic Search might cache results to speed up repeat queries; to eliminate confounders from caching, we ensure that no query is repeated in each experiment.
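A minimal sketch of such a load test follows, assuming the Locust framework; the host and endpoint route are hypothetical placeholders, while the 15-60 s think time mirrors the setup described above.

```python
# Minimal sketch of a Locust user simulating the load test above.
# Run via the locust CLI, e.g.: locust -f this_file.py
import random
from locust import HttpUser, task, between

QUERIES = ["covid vaccine efficacy", "sepsis biomarkers"]  # placeholder sample

class SearchUser(HttpUser):
    host = "http://localhost:5000"  # hypothetical back end API host
    wait_time = between(15, 60)     # simulated user think time, per the paper

    @task
    def search(self):
        # Hypothetical endpoint; the real back end API route is not public.
        self.client.get("/api/search", params={"q": random.choice(QUERIES)})
```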
For L1, based on the performance experiments, we chose the following configuration for Elastic Search: each query is processed by a main node, which distributes the query to the data nodes and then merges the results. There are three main nodes and ten data nodes, each a premium machine (D8s v3) with a 1 TB SSD disk (P30), and the index is divided into 30 shards. For L2, we used Kubernetes to manage a GPU cluster; see Figure 2 (right) for a reference architecture. We used V100 GPUs in initial experiments. Since they are relatively expensive, we explored low-cost GPUs in subsequent experiments to maximize cost-effectiveness. For each query, reranking the top 60 candidate paragraphs from L1 takes about 0.9 seconds on a V100 GPU. A K80 GPU costs only a fraction of a V100 but requires 3 seconds per query. We therefore used machines with four K80s each, which reduce the latency to 0.75 seconds at less than a third of the V100 cost.

Table 4 and Table 5 show the simulated test results for L1 and L2 ranking, respectively. There were no failures in any of the tests.

**Table 4: Latency results in two simulated load tests on L1 ranking (plus back end API). Queries per second (QPS) is the average request load in the test. The back end API takes about 20 ms per query. Most queries can be processed within a second, even with a relatively high request load.**

| QPS  | Median (s) | 90% (s) | Mean (s) | Min (s) | Max (s) |
|------|------------|---------|----------|---------|---------|
| 13.2 | 0.51       | 0.75    | 0.59     | 0.23    | 7.07    |
| 26.8 | 0.60       | 1.50    | 0.88     | 0.22    | 31.0    |

**Table 5: Latency results in two simulated load tests on L2 ranking (plus back end API). Queries per second (QPS) is the average request load in the test. The back end API takes about 20 ms per query. Most queries can be processed within 1-2 seconds, even with a relatively high request load.**

| QPS  | Median (s) | 90% (s) | Mean (s) | Min (s) | Max (s) |
|------|------------|---------|----------|---------|---------|
| 14.8 | 1.80       | 2.80    | 2.01     | 0.37    | 31.21   |
| 15.0 | 1.70       | 2.60    | 1.85     | 0.34    | 30.66   |

For L1 ranking, our configuration can already support 10-20 QPS while keeping latency for most queries under a second. To support higher QPS, we can simply add more main and data nodes, which scale roughly linearly. For L2 ranking, our test used 32 four-K80 machines with a total of 128 K80s; this supports about 10 QPS while keeping latency for most queries around or under a second. To support higher QPS, we can simply add more K80 machines.

#### 4.3 Microsoft Biomedical Search

Our biomedical search system has been deployed as Microsoft Biomedical Search, which is publicly available. See Figure 3 for a sample screenshot. Before deployment, we conducted several user studies among the co-authors and our extended teams with a diverse set of self-constructed and sampled queries. Overall, we verified that our system performed well for long queries with complex intent, generally returning more relevant results than PubMed and other search tools. However, for overly general short queries (e.g., "breast cancer"), our system can be under-selective among articles that all mention the query terms. To improve the user experience, we augmented L1 ranking by including results from Microsoft Academic, which uses a saliency score that takes into account the temporal evolution and heterogeneity of the Microsoft Academic Graph to predict and up-weight influential papers [35, 41, 42]. Given a query, we retrieve the top 30 results from Microsoft Academic as ranked by its saliency score and combine them with the top 30 results from BM25; L2 reranking is then conducted over the combined set of results. The saliency score helps elevate important papers when the query is underspecified, which generally leads to a better user experience.
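The fusion step itself is simple, as the minimal sketch below shows: the two top-30 lists are merged and deduplicated before L2 reranking. The `paper_id` key and the assumption that the Microsoft Academic client returns hits in the same shape as the BM25 side are illustrative, not the exact production interfaces.

```python
def hybrid_l1(bm25_hits: list[dict], saliency_hits: list[dict]) -> list[dict]:
    """Merge the top-30 BM25 hits with the top-30 saliency-ranked hits.

    Papers found by both sides are kept once, so the L2 reranker sees at most
    60 unique candidates, mirroring the BM25-only top-60 budget.
    """
    merged: dict[str, dict] = {}
    for hit in bm25_hits + saliency_hits:
        merged.setdefault(hit["paper_id"], hit)  # keep the first occurrence
    return list(merged.values())                 # candidate set for L2 reranking
```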
In addition to the standard search capabilities, our system incorporates a state-of-the-art machine reading comprehension (MRC) method [7], trained on [19], as an optional component. Given a query and a top reranked candidate passage, the MRC component treats the pair as a question-answering problem and returns a text span in the passage as the answer if the answer confidence is above the score for abstaining from answering. The MRC component uses the same cloud architecture as the L2 neural ranker (Figure 2, right), with similar latency performance.

Our system can be deployed for public release at a rather affordable cost. Table 6 shows the reference configuration and cost estimate to support various expected loads (QPS).

**Table 6: Reference configuration and monthly cost estimate to support the expected QPS while keeping median latency under two seconds (based on pricing from June 2021).**

| QPS | L1 (Cost)      | L2 (Cost)      | MRC (Cost)      | Total Cost |
|-----|----------------|----------------|-----------------|------------|
| 4   | 13 D8v3 ($5K)  | 32 K80 ($10K)  | 48 K80 ($14K)   | $29K       |
| 7   | 13 D8v3 ($5K)  | 64 K80 ($20K)  | 96 K80 ($28K)   | $53K       |
| 14  | 13 D8v3 ($5K)  | 128 K80 ($40K) | 192 K80 ($55K)  | $100K      |
| 28  | 26 D8v3 ($10K) | 256 K80 ($80K) | 384 K80 ($110K) | $200K      |

#### 5 DISCUSSION

Prior work on vertical search tends to focus on domain-specific crawling (focused crawling) and the user interface [2]. We instead explore the orthogonal aspect of the underlying search algorithm, which tends to be simplistic in past systems due to the scarcity of domain-specific relevance labels, as exemplified by the PubMed search engine. While easier to implement and scale, such systems often render subpar search experiences, which is particularly concerning for high-value verticals such as biomedicine. For example, Soni and Roberts [36] studied the evaluation of commercial COVID-19 search systems and found that "commercial search engines sizably underperformed those evaluated under TREC-COVID. This has implications for trust in popular health search engines and developing biomedical search engines for future health crises."

By leveraging domain-specific pretraining and self-supervision from a broad-coverage query-passage dataset, we show that it is possible to train a sophisticated neural ranking system that attains high search relevance without requiring any manual annotation effort.

**Figure 3: Sample screenshot of Microsoft Biomedical Search. The system applies our general approach for vertical search based on domain-specific pretraining and self-supervision, and covers all abstracts and full-text articles in CORD-19, PubMed, and PubMed Central (PMC).**

Although we focus on biomedical search as a running example in this paper, our reference system comprises general and reusable components that can be directly applied to other domains. Our approach may help bridge the performance gap in conventional vertical search systems while keeping the design and engineering effort simple and affordable.

There are many exciting directions to explore. For example, we can combine our approach with other search engines that take advantage of complementary signals not used in ours; our hybrid L1 ranker combining BM25 with Microsoft Academic Search saliency scores is an example of such fusion opportunities. A particularly exciting prospect is applying our approach to help improve the PubMed search engine, which is an essential resource for millions of biomedical practitioners across the globe. In the long run, we can also envision applying our approach to other high-value domains such as finance, law, and retail.

Our approach can also be applied to enterprise search scenarios, to facilitate search across proprietary document collections, which standard search engines are not optimized for. In principle, all it takes is gathering unlabeled text in the given domain to support domain-specific pretraining. If a comprehensive index is not available (as PubMed provides for biomedicine), one could leverage the focused crawling of traditional vertical search to identify in-domain documents on the web. In practice, additional challenges may arise, e.g., in self-supervised fine-tuning. Currently, we generate the training dataset by selecting MARCO queries using a domain lexicon. If such a lexicon is not readily available (as UMLS provides for biomedicine), additional work is required to identify the words most pertinent to the given domain (e.g., by contrasting general and domain-specific language models). We also rely on MARCO having sufficient coverage for a given domain; we expect that high-value domains are generally well represented in MARCO already. For an obscure domain with little representation in open-domain query logs, we can fall back to a general query-document relevance model as a start and invest additional effort in refinement.
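A minimal sketch of this lexicon-based query selection is shown below. The file name, the simple token-overlap criterion, and the `min_hits` threshold are illustrative assumptions rather than the exact procedure used to filter MS MARCO with UMLS terms.

```python
# Select in-domain training queries from an open-domain query set using a
# domain lexicon, a sketch of the MARCO/UMLS-style filtering step.
from typing import Iterable, Iterator

def load_lexicon(path: str) -> set[str]:
    """One lexicon term per line, lowercased."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def select_domain_queries(queries: Iterable[str], lexicon: set[str],
                          min_hits: int = 1) -> Iterator[str]:
    """Keep queries whose tokens overlap the lexicon in at least `min_hits` terms."""
    for q in queries:
        if len(set(q.lower().split()) & lexicon) >= min_hits:
            yield q

# usage (file names are hypothetical):
#   biomedical = list(select_domain_queries(marco_queries, load_lexicon("umls_terms.txt")))
```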
#### 6 CONCLUSION

We described a methodology for developing vertical search capabilities and demonstrated its effectiveness in the TREC-COVID evaluation for COVID-related biomedical search. The generality and efficacy of the approach rely on domain-specific pretraining and self-supervised fine-tuning, which require no annotation effort when applying the approach to a new domain. Using biomedicine as a running example, we presented a general reference system design that can scale to tens of millions of domain-specific documents by leveraging the capabilities supplied by modern cloud infrastructure. Our system has been deployed as Microsoft Biomedical Search. Future directions include further improvement of self-supervised reranking, combining the core retrieval and ranking services with complementary search methods and resources, and validating the generality of the methodology by building search systems for other vertical domains.

#### ACKNOWLEDGMENTS

The authors thank Grace Huynh, Miah Wander, Michael Lucas, Rajesh Rao, Mu Wei, and Sam Preston for their support in assessing ranking relevance, as well as Mihaela Vorvoreanu, Dean Carignan, Xiaodong Liu, Adam Fourney, and Susan Dumais for contributing their expertise and shaping Microsoft Biomedical Search. We thank colleagues at the Cleveland Clinic Foundation for composing and sharing a sample of COVID-19-centric queries spanning a broad range of biomedical topics.

#### REFERENCES

[1] Emily Alsentzer, John Murphy, William Boag, Wei-Hung Weng, Di Jindi, Tristan Naumann, and Matthew McDermott. 2019. Publicly Available Clinical BERT Embeddings. In Proceedings of the 2nd Clinical Natural Language Processing Workshop. Association for Computational Linguistics, Minneapolis, Minnesota, USA, 72–78. https://doi.org/10.18653/v1/W19-1909

[2] Ricardo Baeza-Yates, Berthier Ribeiro-Neto, et al. 2011. Modern Information Retrieval. Addison Wesley.

[3] Iz Beltagy, Kyle Lo, and Arman Cohan. 2019. SciBERT: A Pretrained Language Model for Scientific Text. In Proc. 2019 EMNLP-IJCNLP. Association for Computational Linguistics, Hong Kong, China, 3615–3620. https://doi.org/10.18653/v1/D19-1371
[4] Denis Bilenko. [n.d.]. gevent. http://www.gevent.org/

[5] Olivier Bodenreider. 2004. The Unified Medical Language System (UMLS): integrating biomedical terminology. Nucleic Acids Research 32, suppl_1 (2004), D267–D270.

[6] Tom B Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. arXiv preprint arXiv:2005.14165 (2020).

[7] Hao Cheng, Yelong Shen, Xiaodong Liu, Pengcheng He, Weizhu Chen, and Jianfeng Gao. [n.d.]. UnitedQA: A Hybrid Approach for Open Domain Question Answering. arXiv preprint arXiv:2101.00178 ([n.d.]).

[8] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).

[9] Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, and Hsiao-Wuen Hon. 2019. Unified language model pre-training for natural language understanding and generation. arXiv preprint arXiv:1905.03197 (2019).

[10] Clinton Gormley and Zachary Tong. 2015. Elasticsearch: The Definitive Guide: A Distributed Real-Time Search and Analytics Engine. O'Reilly Media, Inc.

[11] Yu Gu, Robert Tinn, Hao Cheng, Michael Lucas, Naoto Usuyama, Xiaodong Liu, Tristan Naumann, Jianfeng Gao, and Hoifung Poon. 2020. Domain-specific language model pretraining for biomedical natural language processing. arXiv preprint arXiv:2007.15779 (2020).

[12] Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A Smith. 2020. Don't Stop Pretraining: Adapt Language Models to Domains and Tasks. arXiv preprint arXiv:2004.10964 (2020).

[13] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation 9, 8 (1997), 1735–1780.

[14] Lars Holmberg and Jonatan Heyman. [n.d.]. Locust. https://locust.io/

[15] Kexin Huang, Jaan Altosaar, and Rajesh Ranganath. 2019. ClinicalBERT: Modeling clinical notes and predicting hospital readmission. arXiv preprint arXiv:1904.05342 (2019).

[16] Po-Sen Huang, Xiaodong He, Jianfeng Gao, Li Deng, Alex Acero, and Larry Heck. 2013. Learning deep structured semantic models for web search using clickthrough data. In Proc. 22nd ACM CIKM. 2333–2338.

[17] Vladimir Karpukhin, Barlas Oğuz, Sewon Min, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense passage retrieval for open-domain question answering. arXiv preprint arXiv:2004.04906 (2020).

[18] Taku Kudo and John Richardson. 2018. SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing. In Proc. 2018 EMNLP: System Demonstrations. Association for Computational Linguistics, Brussels, Belgium, 66–71. https://doi.org/10.18653/v1/D18-2012

[19] Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural Questions: A Benchmark for Question Answering Research. Transactions of the Association for Computational Linguistics 7 (2019), 453–466. https://doi.org/10.1162/tacl_a_00276
[20] Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, and Jaewoo Kang. 2020. BioBERT: a pre-trained biomedical language representation model for biomedical text mining. Bioinformatics 36, 4 (2020), 1234–1240.

[21] Jimmy Lin, Rodrigo Nogueira, and Andrew Yates. 2020. Pretrained transformers for text ranking: BERT and beyond. arXiv preprint arXiv:2010.06467 (2020).

[22] Carolyn E Lipscomb. 2000. Medical Subject Headings (MeSH). Bulletin of the Medical Library Association 88, 3 (2000), 265.

[23] Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692 (2019).

[24] Sean MacAvaney, Arman Cohan, and Nazli Goharian. 2020. SLEDGE: A Simple Yet Effective Baseline for Coronavirus Scientific Knowledge Search. arXiv preprint arXiv:2005.02365 (2020).

[25] Sean MacAvaney, Arman Cohan, and Nazli Goharian. 2020. SLEDGE-Z: A Zero-Shot Baseline for COVID-19 Literature Search. In Proc. of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Association for Computational Linguistics, 4171–4179. https://doi.org/10.18653/v1/2020.emnlp-main.341

[26] Tri Nguyen, Mir Rosenberg, Xia Song, Jianfeng Gao, Saurabh Tiwary, Rangan Majumder, and Li Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In CoCo@NIPS.

[27] Rodrigo Nogueira and Kyunghyun Cho. 2019. Passage Re-ranking with BERT. (2019). http://arxiv.org/abs/1901.04085

[28] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog 1, 8 (2019), 9.

[29] Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2020. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. Journal of Machine Learning Research 21, 140 (2020), 1–67. http://jmlr.org/papers/v21/20-074.html

[30] Kirk Roberts, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, Kyle Lo, Ian Soboroff, Ellen Voorhees, Lucy Lu Wang, and William R Hersh. 2020. TREC-COVID: rationale and structure of an information retrieval shared task for COVID-19. J. Am. Med. Inform. Assoc. 27, 9 (2020), 1431–1436.

[31] Stephen E Robertson and K Sparck Jones. 1976. Relevance weighting of search terms. J. Assoc. Inf. Sci. Technol. 27, 3 (1976), 129–146.

[32] Armin Ronacher. [n.d.]. Flask. https://palletsprojects.com/p/flask/

[33] Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proc. 54th Annual Meeting of the ACL (Volume 1: Long Papers). Association for Computational Linguistics, Berlin, Germany, 1715–1725. https://doi.org/10.18653/v1/P16-1162

[34] Yuqi Si, Jingqi Wang, Hua Xu, and Kirk Roberts. 2019. Enhancing clinical concept extraction with contextual embeddings. J. Am. Med. Inform. Assoc. (2019).

[35] Arnab Sinha, Zhihong Shen, Yang Song, Hao Ma, Darrin Eide, Bo-June Hsu, and Kuansan Wang. 2015. An Overview of Microsoft Academic Service (MAS) and Applications. In Proc. 24th International Conference on World Wide Web. 243–246.
[36] Sarvesh Soni and Kirk Roberts. 2021. An evaluation of two commercial deep learning-based information retrieval systems for COVID-19 literature. J. Am. Med. Inform. Assoc. 28, 1 (2021), 132–137.

[37] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems. 5998–6008.

[38] Ellen Voorhees, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, William R Hersh, Kyle Lo, Kirk Roberts, Ian Soboroff, and Lucy Lu Wang. 2020. TREC-COVID: constructing a pandemic information retrieval test collection. arXiv preprint arXiv:2005.04474 (2020).

[39] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel Bowman. 2019. SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems. In Advances in Neural Information Processing Systems. 3266–3280.

[40] Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. 2019. GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding. In ICLR.

[41] Kuansan Wang, Zhihong Shen, Chiyuan Huang, Chieh-Han Wu, Yuxiao Dong, and Anshul Kanakia. 2020. Microsoft Academic Graph: When experts are not enough. Quantitative Science Studies 1, 1 (2020), 396–413.

[42] Kuansan Wang, Zhihong Shen, Chiyuan Huang, Chieh-Han Wu, Darrin Eide, Yuxiao Dong, Junjie Qian, Anshul Kanakia, Alvin Chen, and Richard Rogahn. 2019. A Review of Microsoft Academic Services for Science of Science Studies. Frontiers in Big Data 2 (2019), 45.

[43] Lucy Lu Wang, Kyle Lo, Yoganand Chandrasekhar, Russell Reas, Jiangjiang Yang, Darrin Eide, Kathryn Funk, Rodney Kinney, Ziyang Liu, William Merrill, et al. 2020. CORD-19: The COVID-19 Open Research Dataset. ArXiv (2020).

[44] Chenyan Xiong, Zhenghao Liu, Si Sun, Zhuyun Dai, Kaitao Zhang, Shi Yu, Zhiyuan Liu, Hoifung Poon, Jianfeng Gao, and Paul Bennett. 2020. CMT in TREC-COVID Round 2: Mitigating the Generalization Gaps from Web to Special Domain Search. arXiv preprint arXiv:2011.01580 (2020).

[45] Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval. In International Conference on Learning Representations. https://openreview.net/forum?id=zeFrfgyZln

[46] Wei Yang, Haotian Zhang, and Jimmy Lin. 2019. Simple applications of BERT for ad hoc document retrieval. arXiv preprint arXiv:1903.10972 (2019).

[47] Zeynep Akkalyoncu Yilmaz, Shengjin Wang, Wei Yang, Haotian Zhang, and Jimmy Lin. 2019. Applying BERT to document retrieval with Birch. In Proc. 2019 EMNLP-IJCNLP: System Demonstrations. 19–24.

[48] Kaitao Zhang, Chenyan Xiong, Zhenghao Liu, and Zhiyuan Liu. 2020. Selective weak supervision for neural information retrieval. In Proceedings of The Web Conference 2020. 474–485.

[49] Yukun Zhu, Ryan Kiros, Rich Zemel, Ruslan Salakhutdinov, Raquel Urtasun, Antonio Torralba, and Sanja Fidler. 2015. Aligning books and movies: Towards story-like visual explanations by watching movies and reading books. In ICCV.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2106.13375, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/2106.13375" }
2,021
[ "Book", "JournalArticle", "Conference" ]
true
2021-06-25T00:00:00
[ { "paperId": "375cc986ab185c161b80b4f13f4d8f9e6521eedf", "title": "UnitedQA: A Hybrid Approach for Open Domain Question Answering" }, { "paperId": "00b36c57052f9cb2e6e39ed1106fd7a51920cec0", "title": "CMT in TREC-COVID Round 2: Mitigating the Generalization Gaps from Web to Special Domain Search" }, { "paperId": "2c953a3c378b40dadf2e3fb486713c8608b8e282", "title": "Pretrained Transformers for Text Ranking: BERT and Beyond" }, { "paperId": "05598331268614305ff844cea001f5b22f3519c9", "title": "SLEDGE: A Simple Yet Effective Zero-Shot Baseline for Coronavirus Scientific Knowledge Search" }, { "paperId": "a2f38d03fd363e920494ad65a5f0ad8bd18cd60b", "title": "Domain-Specific Language Model Pretraining for Biomedical Natural Language Processing" }, { "paperId": "b7da946b428b43ef925f2b18cbe51379339fc388", "title": "An evaluation of two commercial deep learning-based information retrieval systems for COVID-19 literature" }, { "paperId": "c9b8593db099869fe7254aa1fa53f3c9073b0176", "title": "Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval" }, { "paperId": "90abbc2cf38462b954ae1b772fac9532e2ccd8b0", "title": "Language Models are Few-Shot Learners" }, { "paperId": "995ac7bbbf6d9548b3aeac502600ca99db919894", "title": "TREC-COVID" }, { "paperId": "4699fb5445e6718f9c540c196f1eee2979526a27", "title": "SLEDGE: A Simple Yet Effective Baseline for Coronavirus Scientific Knowledge Search" }, { "paperId": "85e60f8f947b4478a04dbd425cac32d2245c9b9c", "title": "TREC-COVID: rationale and structure of an information retrieval shared task for COVID-19" }, { "paperId": "e816f788767eec6a8ef0ea9eddd0e902435d4271", "title": "Don’t Stop Pretraining: Adapt Language Models to Domains and Tasks" }, { "paperId": "bc411487f305e451d7485e53202ec241fcc97d3b", "title": "CORD-19: The Covid-19 Open Research Dataset" }, { "paperId": "b26f2037f769d5ffc5f7bdcec2de8da28ec14bee", "title": "Dense Passage Retrieval for Open-Domain Question Answering" }, { "paperId": "8b780041274aebb8390ab0097015ac0887e9de0f", "title": "Selective Weak Supervision for Neural Information Retrieval" }, { "paperId": "ea9a516d5cb0b298f0df50e82b3e0400b72fcdff", "title": "Microsoft Academic Graph: When experts are not enough" }, { "paperId": "7e95c6f943b7c47af1b2ef1651b86022a001ce81", "title": "A Review of Microsoft Academic Services for Science of Science Studies" }, { "paperId": "df12d1d972708250f3769eaaa34f4b156cf695fe", "title": "Applying BERT to Document Retrieval with Birch" }, { "paperId": "6c4b76232bb72897685d19b3d264c6ee3005bc2b", "title": "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer" }, { "paperId": "17dbd7b72029181327732e4d11b52a08ed4630d0", "title": "Natural Questions: A Benchmark for Question Answering Research" }, { "paperId": "077f8329a7b6fa3b7c877a57b81eb6c18b5f87de", "title": "RoBERTa: A Robustly Optimized BERT Pretraining Approach" }, { "paperId": "1c71771c701aadfd72c5866170a9f5d71464bb88", "title": "Unified Language Model Pre-training for Natural Language Understanding and Generation" }, { "paperId": "d9f6ada77448664b71128bb19df15765336974a6", "title": "SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems" }, { "paperId": "b3c2c9f53ab130f3eb76eaaab3afa481c5a405eb", "title": "ClinicalBERT: Modeling Clinical Notes and Predicting Hospital Readmission" }, { "paperId": "2a567ebd78939d0861d788f0fedff8d40ae62bf2", "title": "Publicly Available Clinical BERT Embeddings" }, { "paperId": "ea57734824426a427f8b9139da1ae574cc929543", "title": "Simple 
Applications of BERT for Ad Hoc Document Retrieval" }, { "paperId": "156d217b0a911af97fa1b5a71dc909ccef7a8028", "title": "SciBERT: A Pretrained Language Model for Scientific Text" }, { "paperId": "06b36e744dca445863c9f9aefe76aea95ba95999", "title": "Enhancing Clinical Concept Extraction with Contextual Embedding" }, { "paperId": "1e43c7084bdcb6b3102afaf301cce10faead2702", "title": "BioBERT: a pre-trained biomedical language representation model for biomedical text mining" }, { "paperId": "85e07116316e686bf787114ba10ca60f4ea7c5b2", "title": "Passage Re-ranking with BERT" }, { "paperId": "b5246fa284f86b544a7c31f050b3bd0defd053fd", "title": "SentencePiece: A simple and language independent subword tokenizer and detokenizer for Neural Text Processing" }, { "paperId": "451d4a16e425ecbf38c4b1cca0dcf5d9bec8255c", "title": "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding" }, { "paperId": "204e3073870fae3d05bcbc2f6a8e263d9b72e776", "title": "Attention is All you Need" }, { "paperId": "dd95f96e3322dcaee9b1e3f7871ecc3ebcd51bfe", "title": "MS MARCO: A Human Generated MAchine Reading COmprehension Dataset" }, { "paperId": "1518039b5001f1836565215eb047526b3ac7f462", "title": "Neural Machine Translation of Rare Words with Subword Units" }, { "paperId": "0e6824e137847be0599bb0032e37042ed2ef5045", "title": "Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books" }, { "paperId": "8ebc4145aef6a575cbaffcfeec56b20586db573a", "title": "An Overview of Microsoft Academic Service (MAS) and Applications" }, { "paperId": "fdb813d8b927bdd21ae1858cafa6c34b66a36268", "title": "Learning deep structured semantic models for web search using clickthrough data" }, { "paperId": "3418cf24575313d9942af12d1a6f572ac0d34579", "title": "Medical Subject Headings (MeSH)." }, { "paperId": "f6e3e57567e9803718623ec088cd7fea65cfbc9d", "title": "Relevance weighting of search terms" }, { "paperId": "df2b0e26d0599ce3e70df8a9da02e51594e0e992", "title": "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding" }, { "paperId": "9405cc0d6169988371b2755e573cc28650d14dfe", "title": "Language Models are Unsupervised Multitask Learners" }, { "paperId": null, "title": "Lars Holmberg and Jonatan Heyman" }, { "paperId": null, "title": "Elasticsearch: the definitive guide: a distributed real-time search and analytics engine" }, { "paperId": null, "title": "Modern information retrieval" }, { "paperId": "1f1eaf19e38b541eec8a02f099e3090536a4c936", "title": "The Unified Medical Language System (UMLS): integrating biomedical terminology" }, { "paperId": null, "title": "Long short-termmemory" }, { "paperId": null, "title": "Denis Bilenko" } ]
14,562
en
[ { "category": "Materials Science", "source": "external" }, { "category": "Materials Science", "source": "s2-fos-model" }, { "category": "Chemistry", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01cf176234e33b73fb12e4706678079b755db868
[ "Materials Science" ]
0.854005
A Nonconjugated Radical Polymer with Stable Red Luminescence in Solid State
01cf176234e33b73fb12e4706678079b755db868
[ { "authorId": "51147505", "name": "Zhaoyu Wang" }, { "authorId": "2000599120", "name": "Xinhui Zou" }, { "authorId": "1576268172", "name": "Yinjuan Xie" }, { "authorId": "10025268", "name": "Haoke Zhang" }, { "authorId": "14483185", "name": "Lian-rui Hu" }, { "authorId": "144057375", "name": "Christopher C. S. Chan" }, { "authorId": "2110064246", "name": "Ruoyao Zhang" }, { "authorId": "2157956779", "name": "Jing Guo" }, { "authorId": "2126570450", "name": "Ryan Tsz Kin Kwok" }, { "authorId": "144153561", "name": "J. Lam" }, { "authorId": "35077097", "name": "Ian D. Williams" }, { "authorId": "145197186", "name": "Z. Zeng" }, { "authorId": "101770793", "name": "K. Wong" }, { "authorId": "144566756", "name": "C. Sherrill" }, { "authorId": "12545295", "name": "Ruquan Ye" }, { "authorId": "2065095048", "name": "B. Tang" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
Luminescent organic radicals have attracted much attention due to its distinctive open-shell structure and all-in-one properties on optoelectronics, electronics, and magnetics. However, organic radicals are usually instable and only very limited stable structures with π-radicals can exhibit luminescent property in the isolated state, most of which originate from the family of triphenylmethyl derivatives. Here, we report an unusual radical luminescence phenomenon that nonconjugated radical polymer can readily emits red luminescence at ~635 nm in the solid state. A traditional luminescence quencher, 2,2,6,6-tetramethylpiperidine 1-oxyl (TEMPO), was turned into a red chromophore when grafted onto a polymer backbone. Experimental data confirms the emission is associated with the nitroxide radicals and is also affected by the packing of polymer. As a proof of concept, a biomedical application in intracellular ascorbic acid visualization is demonstrated. This work discloses a novel class of luminescent radicals and provides a distinctive and simple pathway for stable radical luminescence.
# A nonconjugated radical polymer with stable red luminescence in the solid state

Zhaoyu Wang[1, 8], Xinhui Zou[1, 8], Yi Xie[2, 8], Haoke Zhang[1], Lianrui Hu[1], Christopher C. S. Chan[1], Ruoyao Zhang[1], Jing Guo[3], Ryan T. K. Kwok[1], Jacky W. Y. Lam[1], Ian D. Williams[1], Zebing Zeng[3], Kam Sing Wong[1], C. David Sherrill[2], Ruquan Ye[4]*, and Ben Zhong Tang[1, 5, 6, 7]*

1. Department of Chemistry, Hong Kong Branch of Chinese National Engineering Research Center for Tissue Restoration and Reconstruction and Institute for Advanced Study, and Department of Chemical and Biological Engineering, and Department of Physics, The Hong Kong University of Science and Technology (HKUST), Clear Water Bay, Kowloon, Hong Kong, China.
2. Center for Computational Molecular Science and Technology, School of Chemistry and Biochemistry, Georgia Institute of Technology, Atlanta, Georgia 30332-0400, USA.
3. State Key Laboratory of Chemo/Biosensing and Chemometrics, College of Chemistry and Chemical Engineering, Hunan University, Changsha 410082, P. R. China.
4. Department of Chemistry, City University of Hong Kong, Hong Kong, China.
5. HKUST-Shenzhen Research Institute, No. 9 Yuexing 1st RD, Nanshan District, Shenzhen 518057, China.
6. Center for Aggregation-Induced Emission, State Key Laboratory of Luminescent Materials and Devices, SCUT-HKUST Joint Research Institute, South China University of Technology, Tianhe District, Guangzhou 510640, China.
7. AIE Institute, Guangzhou Development District, Huangpu, Guangzhou 510530, China.
8. These authors contributed equally to this work: Zhaoyu Wang, Xinhui Zou, Yi Xie.

*email: ruquanye@cityu.edu.hk; tangbenz@ust.hk

**Luminescent organic radicals have attracted much attention due to their distinctive open-shell structure and all-in-one properties in optoelectronics[1], electronics[2], and magnetics[3]. However, organic radicals are usually unstable[4], and only a very limited set of stable π-radical structures exhibit luminescence in the isolated state, most of which originate from the family of triphenylmethyl derivatives[5–7]. Here, we report an unusual radical luminescence phenomenon: a nonconjugated radical polymer readily emits red luminescence at ~635 nm in the solid state. A traditional luminescence quencher, 2,2,6,6-tetramethylpiperidine 1-oxyl (TEMPO)[8], was turned into a red chromophore when grafted onto a polymer backbone. Experimental data confirm that the emission is associated with the nitroxide radicals and is also affected by the packing of the polymer. As a proof of concept, a biomedical application in intracellular ascorbic acid visualization is demonstrated. This work discloses a novel class of luminescent radicals and provides a distinctive and simple pathway to stable radical luminescence.**

**Introduction**

Synthetic organic chromophores commonly feature extended π-conjugation and a closed-shell structure, but their synthesis can be tedious and challenging[9]. Most radicals with an open-shell structure are not stable, and diligent efforts have been made to stabilize the unpaired electron by delicate structure design[10,11]. Yet they typically relax via a nonradiative decay pathway upon excitation and are therefore non-luminescent[5]. Since the first reported case in 2006[12], luminescent radicals have been widely investigated, from excited-state dynamics and mechanisms[13,14] to applications[15–17].
Nevertheless, luminescence from stable radicals remains a sporadic phenomenon, and most of the structures are limited to triphenylmethyl radical derivatives and their analogues[7,18–21]. In nature, non-covalent interactions and self-assembly play a critical role in photophysical properties[22,23]. Modern photophysics suggests that, in addition to the intrinsic energy states of chromophores, their luminescent properties can be affected or even reversed by the surroundings[24,25]. Here, we report that TEMPO, a non-luminescent radical[5], can be transformed into a red-emissive radical after polymerization. The polymer is free from any conjugation and aromatic rings, yet the narrow gap between the highest occupied molecular orbital and the singly occupied molecular orbital (HOMO-SOMO gap) of the nitroxide radical enables emission at long wavelength. Experimental and theoretical data underscore the significance of intermolecular non-covalent interactions among the TEMPO units. Our results disclose an unusual luminescence phenomenon and advance the development of luminescent radicals.

**Results and discussion**

The non-conjugated radical polymer, poly(4-glycidyloxy-2,2,6,6-tetramethylpiperidine-1-oxyl) (PGTEMPO), was synthesized via the ring-opening polymerization of stable radical monomers initiated by potassium tert-butoxide (Figure 1a). The TEMPO derivative 4-glycidyloxy-2,2,6,6-tetramethylpiperidine-1-oxyl (GTEMPO) was used as the monomer, as it is stable over a wide range of temperatures and easy to crystallize. PGTEMPO is orange, with a number-average molecular weight of 4.9 kg/mol and a narrow molecular weight distribution (Đ = 1.32). It is readily soluble in common organic solvents, such as tetrahydrofuran (THF), chloroform, dichloromethane, and dimethylsulfoxide. Electron paramagnetic resonance (EPR) spectroscopy and attenuated total reflectance Fourier transform infrared (ATR-FTIR) spectroscopy confirm the existence of the stable radical and the nitroxide functional group, respectively (Figure S3 and Figure S4). In addition, the thermal stability of PGTEMPO was characterized by thermogravimetric analysis (TGA), which gave a degradation temperature (Td) of ~246 °C (Figure S5).

**Fig. 1 | Synthesis and photophysical properties. a, Synthetic route of the PGTEMPO radical polymer. b, Normalized absorption spectra of PGTEMPO (red solid line) and GTEMPO (red dashed line) in THF solution; excitation spectrum of PGTEMPO solid (blue line) at the emission peak of 635 nm. c, PL spectra of PGTEMPO and GTEMPO solids excited at 532 nm. Insets are photos taken under 510–560 nm excitation. Scale bar: 200 μm. d, PL spectra of PGTEMPO in THF solution at various concentrations. Excitation: 532 nm.**

The absorption spectra of PGTEMPO and GTEMPO in THF solution show similar absorption maxima (λabs), located at about 459 nm and 468 nm, respectively (Figure 1b). No PL signal was detected from the GTEMPO monomer. Yet for PGTEMPO, a red emission peaking at ~635 nm, with a quantum yield of 1.3% and a lifetime of 0.198 ns, emerged in the solid state under 532 nm excitation (Figure 1c). The excitation spectrum of PGTEMPO, obtained at the fixed emission peak of 635 nm as shown in Figure 1b, does not align with the absorption spectrum. Photographs taken under various excitation channels (Figure S6) further confirm that the monomer is non-luminescent under a broad range of irradiation.
The PL spectra of PGTEMPO were then measured in THF at various concentrations from 0.1 mM to 100 mM (Figure 1d). At low concentration, PGTEMPO displayed negligible emission. However, when the concentration increased to a threshold of 0.1 M, the emission intensity was boosted 10-fold, demonstrating a typical aggregation-induced emission (AIE) property[26]. From the PL data, we hypothesize that the emission of PGTEMPO comes from intermolecular interactions: at low concentration, the population of intermolecular interactions is low, which accounts for the faint emission; when the concentration reaches the threshold, the intermolecular interactions are enhanced, which turns on the luminescence.

**Fig. 2 | The role of the nitroxide radical in the photophysical properties of PGTEMPO. a, Reaction scheme of PGTEMPO with VC to form PGTEMPOH. b, The EPR signals of PGTEMPO and PGTEMPOH in the solid state. c, UV-vis spectra of PGTEMPO and PGTEMPOH in DMSO (20 mM). d, PL spectra of PGTEMPO and PGTEMPOH in the solid state under excitation wavelengths of 532 nm and 360 nm, respectively.**

To confirm that the luminescence is associated with the radical on TEMPO, we first designed an experiment to chemically quench the radical site on PGTEMPO with ascorbic acid (vitamin C, VC)[27] and observed the subsequent luminescence change. After reacting with VC, the nitroxide group (N-O) was reduced to the hydroxylamine group (N-OH), generating poly(4-glycidyloxy-2,2,6,6-tetramethylpiperidine-1-hydroxyl) (PGTEMPOH) (Figure 2a). The suppression of the EPR signal (Figure 2b) and the emergence of the NMR spectrum of PGTEMPOH (Figure S9) indicate the successful quenching of the radicals. The orange color of the PGTEMPO solution also faded to colorless upon reduction to PGTEMPOH (Figure 2c). As expected, the red emission peak of PGTEMPO was significantly weakened along with the quenching of the radicals (Figure 2d, top). The response to VC is also very sensitive and rapid (Figure S10). On the other hand, the quenching of the radical generates PGTEMPOH, which is a classic clusteroluminogen[28]. A previous study suggests that the clustering effect from inter/intramolecular hydrogen-bond interactions can trigger the emission[29]. As expected, we observed a blue emission from PGTEMPOH under excitation at 360 nm, which is absent from PGTEMPO (Figure 2d, bottom). The luminescence quenching experiment proves that the radical plays a crucial role in the unusual red-luminescence property of PGTEMPO.

To understand the origin of the luminescence from the TEMPO units, we studied the dependence of the luminescence on polymer packing. A cycle of temperature-dependent PL was performed between -20 and 50 °C and compared with the differential scanning calorimetry (DSC) result. PGTEMPO presents a glass transition temperature (Tg) of 17.40 °C. In general, the luminescence intensity decreases as the temperature increases, because non-radiative decay is favored at higher temperature[30]. Surprisingly, there is a significant drop in PL intensity between 7 and 17 °C, where the polymer undergoes the glass transition. We hypothesize that the glass transition breaks the rigidity of the polymer, which decreases the intensity[31]. In addition, real-time monitoring of the luminescence of PGTEMPO at 80 °C under N2 was performed to probe the dynamic structural evolution of PGTEMPO. As high temperature boosts non-radiative decay, the maximum PL intensity decreased rapidly within the first 20 min.
Serendipitously, the PL intensity rebounded afterwards. Previous Monte Carlo simulations suggested that annealing forms a continuous percolation network among TEMPO units for charge transport[32]. It is therefore plausible that an increasing population of through-space interactions among the TEMPO units, stimulated by the annealing process, accounts for the escalating PL intensity. This is also supported by the observation of a red shift from 635 to 647 nm during real-time annealing (Figure 3c).

To further understand the luminescence mechanism of PGTEMPO, we combined structural information and calculations. We first investigated the properties of the monomer, GTEMPO. X-ray diffraction analysis revealed that the GTEMPO powder is orderly packed (Figure S11), and the single-crystal structure of GTEMPO was obtained as shown in Figure S12. In the side view, the nitroxide groups are sterically hindered by the surrounding methyl groups; the nearest distance between two nitroxide groups is 5.817 Å. If we term the nitroxide site the head of the molecule, from the top view one can observe that GTEMPO adopts a head-to-tail packing, and the distance between adjacent nitroxide groups is 6.140 Å. We then used time-dependent density functional theory (TDDFT) calculations with the B3LYP functional and the def2-TZVP basis set via Q-Chem to reveal the orbital states. The ground state of TEMPO is a doublet (D0) due to the existence of an unpaired electron. As depicted in Figure S13, orbital 44α is the SOMO. The calculated D1 energy of TEMPO is 2.754 eV (459 nm), which agrees with the experimental absorption data (Figure 1b, 468 nm) and is attributed to the HOMO-SOMO transition.

**Fig. 3 | Structure-dependent photophysical properties of PGTEMPO. a-b, The PL intensity of unannealed PGTEMPO at various temperatures, in combination with differential scanning calorimetry thermograms recorded under nitrogen at a rate of 10 °C/min. c, Real-time annealing of PGTEMPO at 80 °C under N2. Excitation: 532 nm. d, First excitation energy of TEMPO clusters (dimer, trimer, and tetramer) at various separation distances.**

In comparison, for PGTEMPO, the polymer backbone readily breaks the orderly packed conformation, as shown by powder X-ray diffraction (XRD): the fresh PGTEMPO sample lost its fine peaks and became completely amorphous after annealing at 80 °C (Figure S14). To understand the energy states of PGTEMPO, we first simulated the structure by aligning a TEMPO dimer to form a parallelogram[32,33]. We further added extra TEMPO units near the parallelogram to form trimer and tetramer clusters to model the effect of multi-unit clusters. For convenience, we defined the y-axis of the geometry as the line running through atoms N1 and C4, and the x-axis as the line running through atoms C3 and C5 (Figure S16a). We used the displacements between the TEMPO units along the x-axis (Δx) and y-axis (Δy) to specify the configuration of such TEMPO dimers (Figure S16b). Trimers and tetramers are constructed by placing the extra TEMPO units above and below the nitroxide-radical parallelogram plane of the dimer (Figure S16c). For all dimers in this section, we chose Δx to be 1.5 Å and Δy to be between 5.5 Å and 6.0 Å, so that the two TEMPO units can approach each other to form the parallelogram between the nitroxide radicals while avoiding direct collision between the radicals and the methyl groups on C2 and C6.
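For readers who wish to reproduce this kind of scan, the short script below generates dimer geometries at a fixed Δx = 1.5 Å over a range of Δy values by translating a copy of the monomer in the xy-plane. It is a minimal sketch, not the authors' workflow, and the two listed atoms are placeholder coordinates standing in for a full TEMPO geometry (e.g., taken from the GTEMPO crystal structure).

```python
# Generate TEMPO-dimer XYZ files for a (dx, dy) displacement scan (illustrative).
import numpy as np

MONOMER_XYZ = [                  # (element, x, y, z); hypothetical placeholders,
    ("N", 0.000, 0.000, 0.000),  # to be replaced by a real optimized TEMPO geometry
    ("O", 0.000, 1.280, 0.000),
    # ... remaining TEMPO atoms ...
]

def dimer(dx: float, dy: float):
    """Duplicate the monomer, translating the copy by (dx, dy, 0) in the xy-plane."""
    shift = np.array([dx, dy, 0.0])
    translated = [(el, *(np.array(xyz) + shift)) for el, *xyz in MONOMER_XYZ]
    return MONOMER_XYZ + translated

def write_xyz(atoms, path: str):
    """Write a standard XYZ file usable as input for a TDDFT geometry."""
    with open(path, "w") as f:
        f.write(f"{len(atoms)}\nTEMPO dimer\n")
        for el, x, y, z in atoms:
            f.write(f"{el} {x:10.6f} {y:10.6f} {z:10.6f}\n")

# Scan dy from 5.5 to 6.0 Angstrom at fixed dx = 1.5 Angstrom, as in the text.
for dy in np.arange(5.5, 6.05, 0.1):
    write_xyz(dimer(1.5, dy), f"dimer_dy_{dy:.1f}.xyz")
```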
The ground-state frontier orbitals of the TEMPO dimer, trimer, and tetramer are shown in Figure S17, Figure S18, and Figure S19, respectively. The excitation energies of the clusters at various distances were calculated and plotted in Figure 3d. The results show that the energy gap decreases as Δy decreases and as the cluster size increases. This suggests that through-space interactions form a new through-space cluster in the polymer with a narrower HOMO-SOMO gap. This agrees with the excitation spectrum, which shows that the luminescence is induced by long-wavelength excitation (Figure 1b). It also explains the red-shifted emission of annealed PGTEMPO (Figure 3c), as the Monte Carlo simulations[32] indicate that annealing favors TEMPO clustering.

**Fig. 4 | Intracellular vitamin C detection. a, Schematic preparation of PGTEMPO NPs via a nanoprecipitation method using the amphiphilic block copolymer DSPE-PEG as the encapsulation material. b-c, Confocal laser scanning microscope images of A549 cells (b) after incubation with PGTEMPO NPs (10 μg/mL) for 4 h and (c) after addition of VC medium solution (1 mg/mL) and incubation for 15 min. Excitation: 405 nm for the second column and 561 nm for the third column.**

As a proof-of-concept demonstration, we used PGTEMPO as a potential fluorescent sensor for intracellular VC detection. To render the hydrophobic PGTEMPO biocompatible and water-soluble for intracellular use, we encapsulated it with the assistance of the surfactant 1,2-distearoyl-sn-glycero-3-phosphoethanolamine-N-[methoxy(polyethylene glycol)] (DSPE-PEG) via nanoprecipitation[34], as schematically illustrated in Figure 4a. Briefly, PGTEMPO and DSPE-PEG were dissolved in THF and added dropwise into an aqueous solution. After sonication, PGTEMPO self-assembles into nanoparticles (NPs). The hydrodynamic diameters of the nanoparticles center around 95 nm, as revealed by dynamic light scattering (DLS) (Figure S20a). To demonstrate the viability of VC sensing in cells, we first tested the fluorescent response of PGTEMPO NPs to VC in water. The reaction of PGTEMPO NPs with VC is very fast at room temperature in aqueous solution, and we obtained a peak at around 450 nm after the addition of VC solution to the PGTEMPO NPs (Figure S20b-c). Afterwards, we explored the in vitro cellular uptake and VC-mapping performance of PGTEMPO NPs using A549 lung cancer cells as an example. After incubation for 4 h, we observed substantial accumulation of PGTEMPO NPs inside the A549 cells via confocal laser scanning microscopy; in the image of Figure 4b under 561 nm excitation, red luminescence was observed. Subsequently, we added VC to the medium and incubated the cells for 15 min. The fluorescent signal in the cells under 405 nm irradiation emerged in Figure 4c, confirming that PGTEMPO NPs can serve as a turn-on probe for intracellular mapping of the VC distribution. To further confirm that the PGTEMPO NPs target lysosomes, a colocalization experiment was carried out with commercial LysoTracker Green (LTG), a lysosome marker with short excitation/emission wavelengths of 488/511 nm, as the control. As shown in Figure S21, the fine lysosome structures from the PGTEMPO NPs greatly overlap with those from LTG, confirming the specific lysosome targeting of the PGTEMPO NPs. The fluorescent signal in the cells indicates that VC diffuses into cells swiftly (< 15 min) and can enter lysosomes.
In comparison with the reported VC fluorescent probes based on one-channel imaging[35], the dual fluorescent signals (under the 561 nm and 405 nm channels) from PGTEMPO NPs improve the imaging reliability.

**Conclusions**

We have synthesized a nonconjugated radical polymer showing luminescence in the solid state. To our knowledge, this is the first report that a stable radical without any conjugated or aromatic structures can emit light. The luminescence quenching experiment confirms the key role of the nitroxide radicals. Combining structural information and calculations, we propose that the intermolecular interactions of TEMPO clusters account for the long-wavelength emission. Using the redox characteristics of TEMPO, a biological application in intracellular VC visualization was demonstrated. This work expands the family of luminescent radicals. We envision that further experimental and theoretical research on this unconventional luminescence phenomenon will provide insights into the principles governing radical luminescence and find applications in broad fields.

**References**

1. Ai, X. et al. Efficient radical-based light-emitting diodes with doublet emission. Nature **563, 536–540 (2018).**
2. Guo, H. et al. High stability and luminescence efficiency in donor–acceptor neutral radicals not following the Aufbau principle. Nat. Mater. **18, 977–984 (2019).**
3. Kimura, S. et al. Magnetoluminescence in a Photostable, Brightly Luminescent Organic Radical in a Rigid Environment. Angew. Chemie **57, 12711–12715 (2018).**
4. Peng, Q., Obolda, A., Zhang, M. & Li, F. Organic light-emitting diodes using a neutral π radical as emitter: The emission from a doublet. Angew. Chemie - Int. Ed. **54, 7091–7095 (2015).**
5. Teki, Y. Excited-State Dynamics of Non-Luminescent and Luminescent π-Radicals. Chem. - A Eur. J. **26, 980–996 (2020).**
6. Abdurahman, A., Peng, Q., Ablikim, O., Ai, X. & Li, F. A radical polymer with efficient deep-red luminescence in the condensed state. Mater. Horizons **6, 1265–1270 (2019).**
7. Ai, X., Chen, Y., Feng, Y. & Li, F. A Stable Room-Temperature Luminescent Biphenylmethyl Radical. Angew. Chemie - Int. Ed. **57, 2869–2873 (2018).**
8. Rivera, S. A. & Hudson, B. S. Rapid exchange luminescence: Nitroxide quenching and implications for sensor applications. J. Am. Chem. Soc. **128, 18–19 (2006).**
9. Xu, W., Wang, D. & Tang, B. Z. NIR-II AIEgens: A Win-Win Integration towards Bioapplications. Angew. Chemie (2020).
10. Armet, O. et al. Inert carbon free radicals. 8. Polychlorotriphenylmethyl radicals. Synthesis, structure, and spin-density distribution. J. Phys. Chem. **91, 5608–5616 (1987).**
11. Ballester, M., Riera, J., Castañer, J., Badía, C. & Monsó, J. M. Inert carbon free radicals. I. Perchlorodiphenylmethyl and perchlorotriphenylmethyl radical series. J. Am. Chem. Soc. **93, 2215–2225 (1971).**
12. Gamero, V. et al. [4-(N-Carbazolyl)-2,6-dichlorophenyl]bis(2,4,6-trichlorophenyl)methyl radical: an efficient red light-emitting paramagnetic molecule. Tetrahedron Lett. **47, 2305–2309 (2006).**
13. Kato, K., Kimura, S., Kusamoto, T., Nishihara, H. & Teki, Y. Luminescent Radical-Excimer: Excited-State Dynamics of Luminescent Radicals in Doped Host Crystals. Angew. Chemie - Int. Ed. **58, 2606–2611 (2019).**
14. Ito, A. et al. Excited-State Dynamics of Pentacene Derivatives with Stable Radical Substituents. Angew. Chemie **126, 6833–6837 (2014).**
15. Badalyan, A. & Stahl, S. S. Cooperative electrocatalytic alcohol oxidation with electron-proton-transfer mediators. Nature **535, 406–410 (2016).**
16. Shimizu, A., Ito, A. & Teki, Y. Photostability enhancement of the pentacene derivative having two nitronyl nitroxide radical substituents. Chem. Commun. **52, 2889–2892 (2016).**
17. Rajca, A. et al. Organic radical contrast agents for magnetic resonance imaging. J. Am. Chem. Soc. **134, 15724–15727 (2012).**
18. Hattori, Y., Kusamoto, T. & Nishihara, H. Enhanced luminescent properties of an open-shell (3,5-dichloro-4-pyridyl)bis(2,4,6-trichlorophenyl)methyl radical by coordination to gold. Angew. Chemie - Int. Ed. **127, 3802–3805 (2015).**
19. Dong, S. et al. Multicarbazolyl substituted TTM radicals: Red-shift of fluorescence emission with enhanced luminescence efficiency. Mater. Chem. Front. **1, 2132–2135 (2017).**
20. Heckmann, A. et al. Highly fluorescent open-shell NIR dyes: The time-dependence of back electron transfer in triarylamine-perchlorotriphenylmethyl radicals. J. Phys. Chem. C **113, 20958–20966 (2009).**
21. Velasco, D. et al. Red organic light-emitting radical adducts of carbazole and tris(2,4,6-trichlorotriphenyl)methyl radical that exhibit high thermal stability and electrochemical amphotericity. J. Org. Chem. **72, 7523–7532 (2007).**
22. Shimomura, O. & Johnson, F. H. Calcium binding, quantum yield, and emitting molecule in aequorin bioluminescence. Nature **227, 1356–1357 (1970).**
23. Shimomura, O., Johnson, F. H. & Saiga, Y. Extraction, Purification and Properties of Aequorin, a Bioluminescent Protein from the Luminous Hydromedusan, Aequorea. J. Cell. Comp. Physiol. **59, 223–239 (1962).**
24. Zhao, Z., Zhang, H., Lam, J. W. Y. & Tang, B. Z. Aggregation-Induced Emission: New Vistas at the Aggregate Level. Angewandte Chemie (2020).
25. Sun, P. et al. J-Aggregate squaraine nanoparticles with bright NIR-II fluorescence for imaging guided photothermal therapy. Chem. Commun. **54, 13395–13398 (2018).**
26. Wang, Q. et al. Reevaluating Protein Photoluminescence: Remarkable Visible Luminescence upon Concentration and Insight into the Emission Mechanism. Angew. Chemie **58, 12667–12673 (2019).**
27. Tang, Y. et al. Radical scavenging mediating reversible fluorescence quenching of an anionic conjugated polymer: Highly sensitive probe for antioxidants. Chem. Mater. **18, 3605–3610 (2006).**
28. Zhang, H. et al. Clusterization-triggered emission: Uncommon luminescence from common materials. Mater. Today **32, 275–292 (2020).**
29. Ye, R. et al. Non-conventional fluorescent biogenic and synthetic polymers without aromatic rings. Polym. Chem. **8, 1722–1727 (2017).**
30. Cui, Y., Zhu, F., Chen, B. & Qian, G. Metal-organic frameworks for luminescence thermometry. Chem. Commun. **51, 7420–7431 (2015).**
31. Leung, N. L. C. et al. Restriction of intramolecular motions: The general mechanism behind aggregation-induced emission. Chem. - A Eur. J. **47, 15349–15353 (2014).**
32. Joo, Y. et al. A nonconjugated radical polymer glass with high electrical conductivity. Science **359, 1391–1395 (2018).**
33. Zhang, H. et al. In situ monitoring of molecular aggregation using circular dichroism. Nat. Commun. **9, 4961 (2018).**
34. Cai, X. et al. Multifunctional Liposome: A Bright AIEgen-Lipid Conjugate with Strong Photosensitization. Angew. Chemie **130, 16634–16638 (2018).**
35. Ishii, K., Kubo, K., Sakurada, T., Komori, K. & Sakai, Y. Phthalocyanine-based fluorescence probes for detecting ascorbic acid: Phthalocyaninatosilicon covalently linked to TEMPO radicals. Chem. Commun. **47, 4932–4934 (2011).**
**Methods**

**Synthesis of GTEMPO**

The monomer, GTEMPO, was synthesized as reported[36]. Briefly, TEMPO-OH was purified by recrystallization before use. Sodium hydroxide (NaOH) (8 g) was gradually added to deionized water (16 mL) in a 250 mL round-bottom flask under vigorous stirring. After the NaOH was completely dissolved, epichlorohydrin (10 mL, 120 mmol) and TBA (1.5 g, 4.6 mmol) were added. A solution of TEMPO-OH (4.12 g, 24 mmol) in 20 mL tetrahydrofuran (THF) was then added dropwise into the mixture. The resulting solution was stirred at room temperature overnight. The reaction mixture was poured into 200 mL of ice water and then extracted with ethyl acetate (EA). The organic layer was washed with sodium chloride (NaCl) aqueous solution and then extracted with ethyl acetate again. The combined organic layers were dried over anhydrous sodium sulfate. After filtration, the filtrate was evaporated under reduced pressure, and the crude product was purified on a silica gel column using hexane/EA (8/1, v/v) as the eluent. The oily product obtained was freeze-dried for 1 day to yield the monomer, GTEMPO, as a red crystalline solid.

**Synthesis of PGTEMPO**

The polymerization of the monomer was achieved using a procedure optimized from the literature[37]. GTEMPO was further dried under reduced pressure for one day before use and stored in a glove box. Inside the glove box, a mixture of GTEMPO (300 mg, 1.31 mmol) and potassium tert-butoxide (12 mg) was added into a 10 mL Schlenk tube with a stirring bar, which had been dried in a hot oven overnight. The tube was then sealed with a rubber stopper. The reaction mixture was heated at 80 °C for 2 hours without solvent, and then 2 mL anhydrous THF was injected for another 8 hours. The mixture was vortexed after the addition of solvent to make sure it was well dissolved in the THF. After cooling down to room temperature, NaCl aqueous solution was added to the mixture, followed by extraction with chloroform three times. The organic solvent was removed under reduced pressure, and the crude polymer was dissolved in a small volume of THF. The crude polymer solution was passed through a simple column filled with neutral Al2O3 powder and precipitated in hexane. The precipitates were collected by ultracentrifugation (7000 rpm for 3 min). This protocol was repeated three times to remove excess unreacted monomer. The polymer was dried overnight in a vacuum oven at room temperature to obtain an orange solid. Mn = 4,900; Mw = 6,500; Mw/Mn = 1.32 (GPC, polystyrene calibration).

**Data Availability**

All experimental data are available in the main text or the supplementary materials.

**Methods references**

36. Chang, C. et al. Synthesizing and characterization of comb-shaped carbazole containing copolymer via combination of ring opening polymerization and nitroxide-mediated polymerization. Polymer **51, 1947–1953 (2010).**
37. Endo, T. et al. Synthesis and polymerization of 4-(glycidyloxy)-2,2,6,6-tetramethylpiperidine-1-oxyl. Macromolecules **26, 3227–3229 (1993).**
**Acknowledgments**

The authors are grateful for financial support from the National Natural Science Foundation of China (21788102), the Research Grants Council of Hong Kong (16308016, C6009-17G, and AHKUST 605/16), the University Grants Committee of Hong Kong (AoE/P-03/08 and AoE/P02/12), the Innovation and Technology Commission (ITC-CNERC14SC01 and ITS/254/17), and the Science and Technology Plan of Shenzhen (JCYJ20160229205601482 and JCY20170818113602462). We would also like to thank Dr. Shunjie Liu, Dr. Qingqing Gao, and Zaiyu Wang for their kind assistance. We are also grateful to Dr. Herman H. Y. Sung, who conducted the single-crystal X-ray diffraction in this work.

**Author contributions**

Z. W., R. Y., and B. Z. T. conceived the idea. Z. W. synthesized the materials and completed the characterization. X. Z., Z. W., C. C. S. C., and K. S. W. performed the photophysical experiments. Y. X., L. H., and D. S. carried out the theoretical calculations and results analyses. R. Z. and Z. W. performed the biological application experiments. J. G. and Z. Z. conducted the EPR measurements. I. W. carried out the single-crystal X-ray diffraction. H. Z., R. Y. and B. Z. T. initiated and supervised the work. Z. W., R. Y., and B. Z. T. wrote the manuscript. H. Z., R. T. K. K., J. W. Y. L., and K. S. W. revised the manuscript with input from all authors.

**Conflicts of interest**

The authors declare no competing interests.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.26434/chemrxiv.12924200.v1?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.26434/chemrxiv.12924200.v1, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBYNC", "status": "GREEN", "url": "https://doi.org/10.26434/chemrxiv.12924200" }
2020
[]
true
2020-09-07T00:00:00
[ { "paperId": "9600e33c16a64063cb91d68f5096208e0e567139", "title": "NIR-II AIEgens: A Win-Win Integration towards Bioapplications." }, { "paperId": "8ad02146dee331c390c3abc84c404906d7ca57ce", "title": "Aggregation-Induced Emission: New Vistas at Aggregate Level." }, { "paperId": "63d8d160280b7a9af699782eb9419af80322e02e", "title": "Excited-State Dynamics of Non-Luminecent and Luminescent π-Radicals." }, { "paperId": "05a8320395bcf51cacc2d84fb035fbc7970c49aa", "title": "Clusterization-triggered emission: Uncommon luminescence from common materials" }, { "paperId": "b41481c918e1d1d3ad5c0f314ba5e0997e0c50f3", "title": "High stability and luminescence efficiency in donor–acceptor neutral radicals not following the Aufbau principle" }, { "paperId": "a6b046a841e96e58a3a1b1f82ffb78cef3d970fb", "title": "A radical polymer with efficient deep-red luminescence in the condensed state" }, { "paperId": "01e4449e92bf66a693ab1aff025d7f7b2cbba8c5", "title": "Reevaluating Protein Photoluminescence: Remarkable Visible Luminescence upon Concentration and Insight into the Emission Mechanism" }, { "paperId": "2f8927cbc7438144bb47b02fa6c158b6de7f8370", "title": "Luminescent Radical-Excimer: Excited-State Dynamics of Luminescent Radicals in Doped Host Crystals." }, { "paperId": "295dcd56e7f3b5158263da4e75144e2f92b635b3", "title": "J-Aggregate squaraine nanoparticles with bright NIR-II fluorescence for imaging guided photothermal therapy." }, { "paperId": "fbade998f5e82089481a4b6c14db921dd0695011", "title": "Multifunctional Liposome: A Bright AIEgen-Lipid Conjugate with Strong Photosensitization." }, { "paperId": "8322654aa87eab4dd6b8f744e720415772fbd23c", "title": "In situ monitoring of molecular aggregation using circular dichroism" }, { "paperId": "0b8f3a3ddb8ca22451ed011358998662c7397bbe", "title": "Efficient radical-based light-emitting diodes with doublet emission" }, { "paperId": "b2dd5217476c7f90a882e283c6d489f09dfc09f5", "title": "Magnetoluminescence in a Photostable, Brightly Luminescent Organic Radical in a Rigid Environment." }, { "paperId": "828842106f9bb910f7fa98180ab071493f9b4a71", "title": "A nonconjugated radical polymer glass with high electrical conductivity" }, { "paperId": "a24d05a3391a8ea4dc376f66c7b319686f33f4a0", "title": "A Stable Room-Temperature Luminescent Biphenylmethyl Radical." }, { "paperId": "96976f5ab10fa2d03dbeff61e78e21ff0be49c3a", "title": "Multicarbazolyl substituted TTM radicals: red-shift of fluorescence emission with enhanced luminescence efficiency" }, { "paperId": "2406b3b51fac646612e711cedd520ba65095ba9f", "title": "Non-conventional fluorescent biogenic and synthetic polymers without aromatic rings" }, { "paperId": "aadb2cb8038ee5ca4b24a61d67f7651e61b3f6a3", "title": "Cooperative electrocatalytic alcohol oxidation with electron-proton-transfer mediators" }, { "paperId": "65fda3d23e4af0fa91dde8cdb5243c832a9bd12d", "title": "Photostability enhancement of the pentacene derivative having two nitronyl nitroxide radical substituents." }, { "paperId": "35a45289b731032b50b29053d1308e1c6ccf2083", "title": "Organic Light-Emitting Diodes Using a Neutral π Radical as Emitter: The Emission from a Doublet." }, { "paperId": "60180bc3030bb13b16cacb48d5ac3dd5f3445321", "title": "Metal-organic frameworks for luminescence thermometry." }, { "paperId": "11a4341c8692ae6d59b3b3693ae717654ef3b0e6", "title": "Enhanced luminescent properties of an open-shell (3,5-dichloro-4-pyridyl)bis(2,4,6-trichlorophenyl)methyl radical by coordination to gold." 
}, { "paperId": "d1c49eefebd3a6891012534b8c3545910207deba", "title": "Restriction of intramolecular motions: the general mechanism behind aggregation-induced emission." }, { "paperId": "7189191a1ae3d894a961e4454487ed5d11bce4c1", "title": "Excited-state dynamics of pentacene derivatives with stable radical substituents." }, { "paperId": "2ad89742ab3b947056e04896e599e0abefbf13c5", "title": "Organic radical contrast agents for magnetic resonance imaging." }, { "paperId": "58321fddcdee6054b84998d0ffc205e837cf8b97", "title": "Phthalocyanine-based fluorescence probes for detecting ascorbic acid: phthalocyaninatosilicon covalently linked to TEMPO radicals." }, { "paperId": "9625af9691b26eadee196a8f1e9dd6b91474540d", "title": "Synthesizing and characterization of comb-shaped carbazole containing copolymer via combination of ring opening polymerization and nitroxide-mediated polymerization" }, { "paperId": "a4a68a1bd260a693f0d7dabcf860f4590186f25e", "title": "Highly Fluorescent Open-Shell NIR Dyes: The Time-Dependence of Back Electron Transfer in Triarylamine-Perchlorotriphenylmethyl Radicals" }, { "paperId": "43e400e7fd121e3673005af70464d5b2943b65e3", "title": "Red organic light-emitting radical adducts of carbazole and tris(2,4,6-trichlorotriphenyl)methyl radical that exhibit high thermal stability and electrochemical amphotericity." }, { "paperId": "2bf2699a47053b765e2c488b0e9fcf10af847ef9", "title": "Radical Scavenging Mediating Reversible Fluorescence Quenching of an Anionic Conjugated Polymer: Highly Sensitive Probe for Antioxidants" }, { "paperId": "1df967bc04112af850ce0d25d467d3dbec840f88", "title": "[4-(N-Carbazolyl)-2,6-dichlorophenyl]bis(2,4,6-trichlorophenyl)methyl radical an efficient red light-emitting paramagnetic molecule" }, { "paperId": "2d25b403f7afd79e62ac4bf032a2213fb5f95830", "title": "Rapid exchange luminescence: nitroxide quenching and implications for sensor applications." }, { "paperId": "c5c169800f9b1862de8f3c87d16b14d9f18f73bf", "title": "Synthesis and polymerization of 4-(glycidyloxy)-2,2,6,6-tetramethylpiperidine-1-oxyl" }, { "paperId": "b9a3a6f666e8cbe35e7c3b38513fef2478487faa", "title": "Inert carbon free radicals. 8. Polychlorotriphenylmethyl radicals: synthesis, structure, and spin-density distribution" }, { "paperId": "843dee20fd9ae49ec790c49d8a520305fbfa3d12", "title": "Inert carbon free radicals. I. Perchlorodiphenylmethyl and perchlorotriphenylmethyl radical series" }, { "paperId": "a3a6d83f503fd8613cd6144ccf31696764321660", "title": "Calcium Binding, Quantum Yield, and Emitting Molecule in Aequorin Bioluminescence" }, { "paperId": "5c77c7fdf5ac60fcdedb86a3c1cd5be118e1a19b", "title": "Inert carbon free radicals" }, { "paperId": "fe3a09d2a0d48a5c542f5cffdb4dd74ed42f82e1", "title": "Extraction, purification and properties of aequorin, a bioluminescent protein from the luminous hydromedusan, Aequorea." }, { "paperId": null, "title": "Chemie -Int" }, { "paperId": null, "title": "HKUST-Shenzhen Research Institute, No. 9 Yuexing 1st RD, Nanshan District, Shenzhen 518057, China" }, { "paperId": null, "title": "Clear Water Bay, Kowloon, Hong Kong, China" }, { "paperId": null, "title": "concentration reaches the threshold, the intermolecular interactions enhance, which turns on the luminescence" }, { "paperId": null, "title": "by the surrounding 24,25 . 
Here" }, { "paperId": null, "title": "Department of Chemistry, City University of Hong Kong" }, { "paperId": null, "title": "Center for Computational Molecular Science and Technology, School of Chemistry and Biochemistry, Georgia Institute of Technology, Atlanta, Georgia 30332-0400, USA" }, { "paperId": null, "title": "State Key Laboratory of Chemo/Biosensing and Chemometrics" } ]
8110
en
[ { "category": "Economics", "source": "external" }, { "category": "Economics", "source": "s2-fos-model" }, { "category": "Business", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01d015e7045f31b6fb0f0de8055691d7c9a130a3
[ "Economics" ]
0.896921
A Systematic Review of the Bubble Dynamics of Cryptocurrency Prices
01d015e7045f31b6fb0f0de8055691d7c9a130a3
Research In International Business and Finance
[ { "authorId": "2102580226", "name": "Νικόλαος Κυριαζής" }, { "authorId": "134706162", "name": "S. Corbet" }, { "authorId": "2680883", "name": "Stephanos Papadamou" } ]
{ "alternate_issns": null, "alternate_names": [ "Res Int Bus Finance", "Research in International Business and Finance" ], "alternate_urls": [ "https://www.journals.elsevier.com/research-in-international-business-and-finance", "http://www.sciencedirect.com/science/journal/02755319" ], "id": "2c288007-ffc0-4ffd-8906-b36ab6ea9655", "issn": "0275-5319", "name": "Research In International Business and Finance", "type": "journal", "url": "http://www.elsevier.com/locate/ribaf" }
Abstract This paper surveys the academic literature concerning the formation of pricing bubbles in digital currency markets. Studies indicate that several bubble phases have taken place in Bitcoin prices, mostly during the years 2013 and 2017. Other digital currencies of primary importance, such as Ethereum and Litecoin, also exhibit several bubble phases. The Augmented Dickey Fuller (ADF) as well as the Log-Periodic Power Law (LPPL) methodology are the most frequently employed techniques for bubble detection and measurement. Based on much academic research, Bitcoin appears to have been in a bubble-phase since June 2015, while Ethereum, NEM, Stellar, Ripple, Litecoin and Dash have been denoted as possessing bubble-like characteristics since September 2015. However, this latter group possess little academic evidence supporting the presence of bubbles since early 2018. An overall perspective is provided, based on a robust bibliography concerning large deviations of market quotes from fundamental values, that can serve as a guide to policymakers, academics and investors.
# A Systematic Review of the Bubble Dynamics of Cryptocurrency Prices

Nikolaos Kyriazis *a,∗*, Stephanos Papadamou *a*, Shaen Corbet *b,c*

*a* *Department of Economics, University of Thessaly, Filellinon, Volos 382 21, Greece*
*b* *DCU Business School, Dublin City University, Dublin 9, Ireland*
*c* *School of Accounting, Finance and Economics, University of Waikato, New Zealand*

**Abstract**

This paper surveys the academic literature concerning the formation of pricing bubbles in digital currency markets. Studies indicate that several bubble phases have taken place in Bitcoin prices, mostly during the years 2013 and 2017. Other digital currencies of primary importance, such as Ethereum and Litecoin, also exhibit several bubble phases. The Augmented Dickey Fuller (ADF) as well as the Log-Periodic Power Law (LPPL) methodology are the most frequently employed techniques for bubble detection and measurement. Based on much academic research, Bitcoin appears to have been in a bubble-phase since June 2015, while Ethereum, NEM, Stellar, Ripple, Litecoin and Dash have been denoted as possessing bubble-like characteristics since September 2015. However, this latter group possess little academic evidence supporting the presence of bubbles since early 2018. An overall perspective is provided, based on a robust bibliography concerning large deviations of market quotes from fundamental values, that can serve as a guide to policymakers, academics and investors.

*Keywords:* Cryptocurrencies; Bitcoin; Systematic Review; Pricing Bubbles.

**1. Introduction**

Bubbles have existed across many differing investment assets, with research developing across a number of related strands, including information source, contagion effects, the speed of development, signal processing and the role of algorithmic trading and news dissemination through social media. The reasons for this broad interest are far from difficult to understand, as extreme price fluctuations in investment forms have always attracted considerable academic debate and the interest of investors, policymakers and regulators. Moreover, sudden upheavals or abrupt decreases in the market values of assets have been of primordial interest because of their societal influence, such as the generation and escalation of both social and economic disparities. Unsurprisingly, this has spurred substantial interest in bubble formation within cryptocurrency markets (Frehen et al. [2013]; Corsi and Sornette [2014]; Vogel and Werner [2015]), especially when the asset under scrutiny constitutes a new, developing and promising tool that can be used for both liquidity and reserve management with an intriguing level of appeal to speculative investors seeking unexploited profit opportunities. Notably, a broad spectrum of alternative perspectives as regards the definition of bubbles has been brought about. The best-known among them is the asset-pricing approach, which considers assets as investment tools capable of differentiating their nominal value from their fundamental value to a large extent (West [1987]; Diba and Grossman [1988]). It should be noted that the nominal value of an asset is defined as the market value at which it can be sold or bought, whereas its fundamental value is lower and generally based on its costs of production. Continuing increases in the multiple by which nominal prices exceed fundamental values lead to explosive behaviour and the formation of bubbles.
Such deviations from fundamental prices are mainly generated through highly optimistic investor sentiment, which thereby leads to an increased level of aggregate demand for assets. This phenomenon of sharp demand elevation is reinforced if supply is stable or decreasing, as is found to be the case for the majority of digital currencies.

Digital currencies have been an axis of interest with regard to a number of specific characteristics, such as their nature and functions and whether they constitute a commodity or fiat money. Baur et al. [2018] found that Bitcoin is a hybrid of commodity money and fiat money. Digital coins employ peer-to-peer (P2P) networks and open-source software in order to prevent double spending and bypass the need for intermediation by commercial banks (Dwyer [2015]). Most cryptocurrencies are highly decentralised coins. The determinants of the value of Bitcoin are the demand for this currency in combination with its limited supply. Nadler and Guo [2020] estimated the pricing kernel with which users price factors affecting their token holdings, identifying that blockchain-specific risk factors are priced into cryptocurrencies. Ammous [2018] argued that only Bitcoin can serve as a store of value, as it is considered more credible than other virtual currencies, its supply can be predicted and it can resist manipulation due to its incumbency in the cryptocurrency market. Nevertheless, Baur et al. [2018] found that Bitcoin cannot be considered a strong safe haven during crises. A complete survey of cryptocurrencies as a financial asset has been conducted by Corbet et al. [2019]. Symitsi and Chalvatzis [2019] and Akhtaruzzaman et al. [2019] found statistically significant diversification benefits from the inclusion of Bitcoin, which are more pronounced for commodities.

This paper surveys the key relevant literature in the area of bubble price formation in digital currencies and presents, in as representative a manner as possible, the colourful nomenclature used in the relevant academic papers. A profound understanding of large deviations of nominal prices from fundamental ones allows an in-depth overview of the inflation determinants of cryptocurrency values and also casts light on the price formation of other assets of primary importance. This study aims to provide further foresight into bubble formation matters, as a better understanding of this phenomenon is useful not only for academics, market participants or individuals, but also for society as a whole. Section 2 presents the most popular definitions of asset bubbles and the most important bubble formation events in economic history. Section 3 offers a comprehensive review of the most popular methodological approaches for testing and measuring the bubble character of cryptocurrencies. Section 4 lays out a survey of the literature about bubble price formation in virtual decentralised currencies. Finally, Section 5 discusses the findings and their economic underpinnings. Tables A1 and A2 in the Appendix provide a brief overview of the studies investigated and the bubbles detected in these academic papers, respectively.

**2. Defining and presenting a brief history of asset bubbles**

Bubble formation is a term that has received a number of alternative, though not contradictory, definitions throughout the years.
A simple definition of bubbles can be presented as '*systematic deviations of the market value from the fundamental value of the asset*', where the latter is defined as the net present value of the future cash flows emanating from it. Van Horne [1985] supported this definition, stating that 'a balloon might be a better metaphor for certain financial promotions. It is blown up, to be sure, but not to the extent that it pops. The eventual deflation is less abrupt.' Garber [1990] argued that the term 'bubble is a fuzzy word filled with import but lacking any solid operational definition', documenting that one should not try to define bubbles as just financial events, as we have to date been unable to understand the exact driving forces within. The author considers that such deviations cannot be explained based on any of the fundamentals. O'Hara [2008] provided support to such a theory on bubbles, noting that they depend on combinations of the rationality, or lack thereof, of agents and markets. Brunnermeier and Oehmke [2013] identify that bubbles consist of: a) a run-up phase that leads to the formation of bubbles and imbalances; and b) a crisis phase, where accumulated risk materialises and the crisis breaks out. Moreover, Shiller et al. [1984] reveal that asset markets are directed by mercurial investors acting on the basis of short-lived enthusiasms and bubbles. Brunnermeier and Oehmke [2013] described bubbles as dramatic price increases which lead to bursting, while Kindleberger and Aliber [2011] considered bubbles to be fast increases in the market value of an asset, where the initial upward spurt triggers expectations of a series of further price increases. This is what feeds elevated interest in that particular asset and results in higher demand for investment in it. This is the so-called 'irrational exuberance' in investors' behaviour (Shiller [2015]).

A standard pricing pattern arises for new investment assets, such as digital currencies. When a new form of liquidity is developed, the first coins of this currency are sold at a very high price. One should take into consideration that there is an upper limit to the quantity of supply of a large number of cryptocurrencies; for example, Bitcoin will stop being produced when it reaches 21 million coins. This supply will continue to increase in decreasing steps until 2040 and then will remain at that level forever (Baur et al. [2018]).

Azariadis [1981] and Frehen et al. [2013] consider that the three most important historic bubbles have been: the Dutch 'tulip mania', the South Sea bubble in England and the collapse of the Mississippi Company in France. These events are considered to have been prominent landmarks in financial and economic history, as the vertical ascents in prices that took place were phenomenal. Van Horne [1985], based on a large bulk of evidence regarding financial market anomalies, takes into consideration the possibility of bubbles and manias and argues that during the Tulipmania a single bulb could be sold for many years' salary. Garber [1990] believes that the Dutch experience of Tulipmania during the period 1634-7 was characterised by amazingly high prices for single bulbs of rare and prized varieties of tulips. It should be emphasised that during the most intense phase of the Tulipmania in early 1637, just before the burst of this bubble, even common tulip varieties skyrocketed, with approximately 2,000% increases in prices within a month.
According to Johannessen [2017], rampant speculation on the stock exchanges in the various Dutch towns based on the prices of tulip bulbs became a frequent phenomenon. It is noteworthy that the price of such a bulb was between 10 and 25 guilders in 1612, whereas it reached approximately 6,650 guilders 25 years later due to collective optimism in the Dutch market. This optimism had been the product of institutional innovation (stock exchanges) and product innovation. Johannessen [2017] argued that the motivation for founding the South Sea Company was the refinancing of the massive national debts that the British and French had acquired during the War of the Spanish Succession. In no more than a decade, the share value of the South Sea Company had reached the enormous amount of £200 million. Its rally in prices was based on attracting investors from France by promising enormous profits in the French colonies in North America. It is widely accepted that the South Sea bubble (1720) was generated as many investors from the Continent had purchased South Sea Company shares in London (Brunnermeier and Oehmke [2013]). As there was in reality no prospect of significant trade and profits, the company's value decreased and fell to lower levels than before the start of the bubble. The Mississippi bubble (1719-20) was the result of the Compagnie d'Occident ('Company of the West') that John Law created in order to hold the exclusive privileges to develop the vast French territories in the Mississippi River valley of North America. This company had monopoly power over the French tobacco and African slave trades, and Law used it to sell its shares to the public in exchange for state-issued public securities. The public's mania for exchanging debt for shares of the company weakened when inflation rose too high because of the over-issuance of public debt. Thereby, the bubble collapsed and triggered a crash in equity markets in France. Frehen et al. [2013] provide evidence that all three bubbles had innovation and irrational investor exuberance as key drivers of bubble expectations. They reject clientele-based theories that attribute emphasis to bubble-riding and short-sales restrictions.

**3. Methodological Approaches for Defining, Detecting and Measuring Bubbles**

*3.1. Main existing literature on Detecting Bubbles*

Academic work on identifying bubbles in asset prices based on fundamental values has its roots in the asset pricing model of Lucas Jr [1978]. This has been the axis on which a number of important contributors have developed econometric methodologies in order to test for bubble behaviour in prices. Blanchard and Watson [1982] argue that bubbles can follow many types of processes and that certain bubbles lead to violation of the variance bounds implied by a class of rational expectations models. Shiller et al. [1984] support the view that social movements and habits in specific time periods are responsible for increases in asset prices. Investing incentives and asset price fluctuations are due to the observations of participants in the market and to human nature. Tirole [1985] reveals that there are three conditions for bubble creation: durability, scarcity and common beliefs. He argues that scarcity is based on new units having the same price as old ones and claims that limited supply may prevent bubbles. This could be very intuitive as regards Bitcoin.
Furthermore, he distinguishes between the financial bubble, which depends on market price, and the real bubble that is established by the fundamentals of this market. Notably, he supports the view that overlapping generations models should focus on speculative assets rather than money. Evans [1989] argued that in rational expectations models sunspots and other 'rational bubble' solutions present only weak or no expectational stability, and that in linear models there is at most one strongly expectationally stable solution. Diba and Grossman [1988] support the view that stock prices do not contain explosive price bubbles, moreover claiming that it is impossible for negative rational bubbles in stock prices to exist; thereby, if a bubble bursts, there is no possibility that it will ever restart. Froot and Obstfeld [1989] focused on rational intrinsic bubbles dependent only on dividends, that is, bubbles that derive all their fluctuations from exogenous economic fundamentals and not from extraneous factors. They find evidence in favour of bubbles in the US stock market that are difficult to explain with alternative models. Gurkaynak [2008] documents that asset bubble tests cannot manage to offer adequate information about the existence or not of bubbles. He finds that the inclusion of model assumptions about time-varying discount rates, risk aversion or structural breaks permits the appearance of bubbles only to a very weak extent. Furthermore, there is no way to distinguish bubbles from time-varying or regime-switching fundamentals. Overall, the author argues that when bubble detection tests indicate the existence of a bubble, we could be far from certain that this bubble exists.

*3.2. Definition of Bubbles: Intrinsic versus Extrinsic rational bubbles*

Rational bubbles appear when asset prices keep rising due to investors' beliefs that there will be a possibility to sell the overvalued asset at a higher price in the future (Flood and Hodrick [1990]). As investors are aware of the risk of the bubble bursting at some future point in time, they require compensation for bearing that risk, which grows as time passes because the risk becomes higher. The continuing requirement for higher profits leads to the overgrowth of prices, and finally the bubble bursts. Dale et al. [2005] argued that intrinsic rational bubbles are formed when investors systematically and continuously conduct wrong estimations of asset fundamentals. This is more common when it comes to advanced technology products, where it is more difficult to determine the exact fundamental value. Crashes are usually the result of informational dynamics after long periods of price increases have taken place. Extrinsic rational bubbles, also called 'sunspots', occur when rational investors have to confront large levels of uncertainty concerning the economic environment. This is what leads investors to ascribe a value, with regard to price prediction, to endogenously determined factors that have no real or significant influence on the fundamental values of assets. The main source of extrinsic rational bubbles is reliance on misinformation that results in poor management skills.

*3.3. Approaches for Detecting and Measuring Bubbles*

No consensus is apparent as regards the tracing and measurement of price bubbles. Rational bubbles could appear in the form of deterministic time trends, as explosive AR(1) processes, or as even more complex stochastic processes. Among others, there have been four principal alternative approaches used to define bubbles.
The first view about defining bubbles is more traditional and relies on the comparison between the fundamental value and the nominal value of the underlying asset. It should be noted that the fundamental value is defined as the present value of the payoffs deriving from the asset once all relevant information has been taken into consideration (Taipalus [2012]). Thereby, the asset-pricing approach considers that bubbles exist when the nominal value, which coincides with the market value, is not equal to the fundamental value of the asset. Another approach for modelling the fundamental value is provided by Foster and Wild [1999] using the sigmoid (or logistic) curve approach. This methodology is beneficial when aiming to capture the different phases in the evolution of a bubble, such as the expansion phase, the inflexion phase and the saturation phase. All three are considered typical phases during price bubble formation. The expansion phase presents positive growth, the inflexion phase is characterised by stability, whereas the saturation phase represents a fall in prices. Tracing the starting date of the saturation phase is what this approach aims to achieve. It is worth noting that the period of positive growth is in practice not equal to that of negative growth in prices. The main drawback of adopting the sigmoid curve approach is its doubtful effectiveness in measurement when there are multiple bubbles. A methodology suitable for testing for single or multiple bubbles is offered by the Markov-switching Augmented Dickey-Fuller (MSADF) unit root test, which detects explosive autoregressive roots. This procedure was proposed by Hall et al. [1999] in order to track alterations from non-bubble to bubble regimes. The main drawback of this method is the difficulty in tracing whether high volatility or explosive autoregressive behaviour exists in regimes. Among the popular methodologies for detecting price bubbles are the Phillips et al. [2014] and Phillips et al. [2015] procedures. This is a bubble test based on the assumption that bubbles follow mildly explosive behaviour, that is, an autoregressive root θ = 1 + gT^(−m), where g is positive and m lies in the interval between 0 and 1. This test abides by the theory that suggests differences in the tendencies of prices during upward phases in comparison to tendencies in downswing periods. Thereby, sub-martingale behaviour in bullish markets is considered to be different from martingale behaviour in bearish times.
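To fix ideas, the asset-pricing definition and the mildly explosive specification described above can be written out explicitly. The following is a standard textbook formulation added here purely for illustration; the notation (p_t, F_t, b_t, d_t, r) is ours and is not taken from the surveyed papers.

```latex
% Fundamental value as the discounted stream of expected payoffs,
% with the bubble component defined as the residual deviation:
\[
  F_t = \sum_{k=1}^{\infty} \frac{\mathbb{E}_t\left[d_{t+k}\right]}{(1+r)^{k}},
  \qquad b_t = p_t - F_t .
\]
% Mildly explosive autoregression underlying the PSY-type tests:
\[
  p_t = \theta\, p_{t-1} + \varepsilon_t ,
  \qquad \theta = 1 + g\,T^{-m}, \quad g > 0, \; m \in (0,1).
\]
```

A θ marginally above unity, shrinking towards one with the sample size T, generates exactly the sub-martingale dynamics that right-tailed unit root tests are designed to detect.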
**4. Literature on Cryptocurrency Bubble Price Formation**

There has been an increasing number of empirical papers that investigate bubble price dynamics in cryptocurrency markets. The majority of them investigate price formation in Bitcoin, but studies on the CRIX index, the remaining digital coins of major importance and comparisons with national currencies have also been conducted. Further issues such as the role of cybercriminality and illicit behaviour have also been analysed in substantial detail (Corbet et al. [2019]). To date, it has been identified that cryptocurrencies contain a number of pricing inefficiencies (Urquhart [2016]; Sensoy [2019]; Mensi et al. [2019]; Corbet et al. [2019]; Ma and Tanizaki [2019]) and persistence (Caporale et al. [2018]; Corbet and Katsiampa [2018]), can be correlated with or isolated from other traded assets (Gil-Alana et al. [2020]; Sifat et al. [2019]; Corbet et al. [2018]), and display news response (Aysan et al. [2019]; Flori [2019]; Nguyen et al. [2019]; Nguyen et al. [2019]; Zargar and Kumar [2019]), derivative development (Akyildirim et al. [2019]), contagion effects (Handika et al. [2019]; Omane-Adjepong and Alagidede [2019]; Beneki et al. [2019]), evidence of price clustering (Urquhart [2017]; Kallinterakis and Wang [2019]), pricing bubbles (Corbet et al. [2018]), regulatory ambiguity (Fry [2018]; Shanaev et al. [2020]), and exceptional levels of both complex and uncomplex fraud (Gandal et al. [2018]).

Much concern has been placed on the valuation of cryptocurrencies, with particular emphasis placed on pricing efficiency, market dynamics and the potential presence of a pricing bubble. Hayes [2019] found that the marginal cost of production plays an important role in explaining Bitcoin prices, while Van Vliet [2018] investigated the role that Metcalfe's Law played in the valuation of Bitcoin. Dwyer [2015] found that the use of cryptocurrency technologies and the limitation of the quantity produced can create an equilibrium in which a digital currency has a positive value. Bedi and Nashier [2020] provide insights into the sharp disparity in Bitcoin trading volumes across national currencies from a portfolio theory perspective. Panagiotidis et al. [2018] investigated, using a LASSO framework, the influence on Bitcoin returns of factors such as stock market returns, exchange rates, gold and oil returns, the Federal Reserve's and ECB's rates, and internet trends for alternate time periods. Search intensity and gold returns emerge as the most important variables for Bitcoin returns. Fry [2018] showed that liquidity risks may generate heavy tails in Bitcoin and cryptocurrency markets. There have also been investigations of interactions between cryptocurrencies themselves. Wei [2018] found, using a VAR methodology, that Tether issuances do not impact subsequent Bitcoin returns, although they do impact traded volumes, which in fact ran contrary to market expectations. While investigating ICOs, Felix and von Eije [2019] found that there exists an average level of under-pricing of 123% for USA ICOs and 97% for the other countries examined. Hendrickson and Luther [2017] went as far as to investigate the process of banning Bitcoin. The authors found that a government of sufficient size can prevent an alternative currency from circulating without relying on punishments, while it can ban the cryptocurrency as long as it disseminates sufficiently severe punishments.

The continued evolution of cryptocurrencies and the underlying exchanges on which they trade has generated tremendous urgency to develop our understanding of a product that has been identified as a potential enhancement of and replacement for traditional cash as we know it. Bitcoin has now developed to the point that it possesses a robust and liquid derivatives market when compared to a number of other traditional financial products (Corbet et al. [2018]; Fassas et al. [2020]). As our understanding of FinTech evolves (Goldstein et al. [2019]), along with the growing value of blockchain (Chen et al. [2019]), one key area of research focuses on the interactions between cryptocurrencies and other, more traditional financial markets. Regulatory bodies and policy-makers alike have observed the growth of cryptocurrencies with a certain amount of scepticism, based on this growing potential for illegality and malpractice.
Foley et al. [2019] estimate that around $76 billion of illegal activity per year involves Bitcoin (46% of Bitcoin transactions). This is estimated to be in the same region as the U.S. and European markets for illegal drugs, and is identified as 'black e-commerce'. While thorough investigation of the issues surrounding cryptocurrencies continues to develop, we continue to set out to analyse the potential mechanisms through which these new products can influence unsuspecting populations. Their potential use by companies attempting to take advantage of 'crypto-exuberance' must be considered (Akyildirim et al. [2020]). This research has raised much concern about the central rationale surrounding investment in this new asset class, but one fundamental issue has remained, namely: what exactly is the price of one unit of cryptocurrency? We set out to establish a review of the broad estimates while considering the broad use of bubble-identifying techniques.

While considering research specifically analysing the potential for bubbles in the markets for cryptocurrencies, Cheung et al. [2015] use daily Bitcoin data over the period from July 17, 2010 to February 18, 2014 and adopt the Phillips et al. [2012] methodology in order to examine whether price bubbles existed on Bitcoin's biggest exchange up to that point, Mt. Gox. Estimations using the generalised Supremum Augmented Dickey-Fuller (GSADF) statistic reveal that most of the bubbles do not last for long, as their duration does not exceed a few days. Three very large Bitcoin bubbles have been detected. The first bubble starts on April 24, 2011 and ends on July 3, 2011. The second one begins on January 27, 2013 and ends on April 15, 2013. Finally, the third Bitcoin bubble on Mt. Gox is the largest one, as it begins on November 5, 2013 and ends on February 18, 2014. It can be seen that bubble behaviour lasts for longer time periods as time passes. The burst of the last bubble is perhaps responsible for the collapse of Mt. Gox. MacDonell [2014] uses weekly data covering the period from July 18, 2010 until August 25, 2013 and employs Autoregressive Moving Average (ARMA) methodologies and the Log-Periodic Power Law (LPPL) models of Johansen-Ledoit-Sornette (JLS) in order to predict crashes. Findings from the ARMA methodologies indicate that investment sentiment, as expressed by the CBOE Volatility Index, drives Bitcoin prices. It can be noted that the LPPL model safely predicts the crash that took place in December 2013. Cheah and Fry [2015] employ daily closing prices of the Bitcoin Coindesk Index spanning the period from July 18, 2010 to July 17, 2014 in order to perform price modelling and detect the existence of bubbles. Following Johannessen [2017], they use a price model including a Wiener process and a jump process in order to control whether the intrinsic rate of return and the intrinsic level of risk are constant. They examine the bubble component as well as run a BDS test to trace bubble behaviour. Results reveal that a bubble character exists in the Bitcoin market and the random walk hypothesis is rejected. The speculative character of Bitcoin, fed by high volatility and the explosive behaviour of the currency, is reinforced by the econometric outcomes. Corbet et al. [2018] employ daily data from January 9, 2009 and from August 7, 2015 until November 9, 2017 concerning Bitcoin and Ethereum, respectively.
The authors attempt to capture intrinsic bubbles, herd behaviour and time-varying fundamentals in discount factor models using a rolling-window approach with the Supremum, the Generalised Supremum and the backward Supremum Augmented Dickey-Fuller specifications. Econometric findings provide evidence of Bitcoin bubble behaviour around the turn of the year from 2013 to 2014. Moreover, Ethereum exhibits bubble behaviour at the beginning of 2016 and in mid-2017. Overall, bubbles in the currencies investigated do not last for long. Bouri et al. [2019] use daily data on Bitcoin, Ripple, Ethereum, Litecoin, Nem, Dash and Stellar that span the period from August 7, 2015 until December 31, 2017 in order to study co-explosivity in their markets. Bitcoin's explosivity is found to lower Ripple's explosivity. Moreover, high prices in Ethereum, Litecoin, Nem and Stellar render more probable the appearance of hikes in Ripple's market values. Ethereum's explosivity is reinforced by Bitcoin, Ripple, Nem and Dash, while it receives a negative impact from Stellar. When it comes to Litecoin, there is evidence that Bitcoin, Ripple, Nem, Dash and Stellar feed its bubbling. Five digital currencies are also found to positively influence the bubble behaviour of Nem and of Stellar. It can be noted that lower-capitalisation currencies also prove to be influential towards larger ones. Holub and Johnson [2019] investigate the influence that the Bitcoin bubble exerted on Bitcoin's peer-to-peer (P2P) market during the bullish 2017 period. They employ daily data that span the period from January 2017 to June 2018. Thereby, the increasing, the skyrocketing and the bearish periods in Bitcoin's market quotes are examined. Furthermore, data on national currencies from 13 advanced and developing economies are used. Emphasis is placed on the analysis of publicly available bid-ask spreads. Results indicate that spreads decline for the US dollar, the Hong Kong dollar, the New Zealand dollar, the Swedish krona and the Singapore dollar. Nevertheless, the euro, the United Kingdom pound, the Australian dollar, the Brazilian real, the Norwegian krone, the Polish zloty, the Russian rouble and the South African rand do not present significant falls in spreads, abiding instead by the thinking that higher Bitcoin prices lead to wider spreads. This lends credence to the currency and country dependency of the bubble's effect on Bitcoin prices in the P2P market.

The SADF methodology is used for detecting bubbles by including a sequence of forward recursive ADF unit root tests in the right tail. In cases where there are numerous episodes of booms and busts due to rapid alterations in market conditions, the generalised SADF (GSADF) specification is preferable. This allows changes in the starting points and end points of the recursive schemes over flexible windows, thereby allowing a right-sided double recursive test for detecting unit roots. Moreover, the backward SADF (BSADF) enables conducting a supremum ADF test by backward expanding over a sample sequence with a fixed end point but not a fixed starting point.
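As an illustration of the recursion just described, the following is a minimal Python sketch of a right-tailed SADF statistic. It is not code from any of the surveyed papers: the minimum window fraction and fixed lag length are illustrative choices, and right-tailed critical values would in practice be obtained by Monte Carlo simulation under the random-walk null.

```python
# Minimal SADF sketch: supremum of ADF statistics over forward-expanding
# windows, in the spirit of the PSY procedure (illustrative settings only).
import numpy as np
from statsmodels.tsa.stattools import adfuller

def sadf(log_prices, min_window_frac=0.1):
    """Supremum ADF statistic over forward-expanding windows."""
    n = len(log_prices)
    r0 = max(int(min_window_frac * n), 20)  # smallest admissible window
    stats = []
    for end in range(r0, n + 1):
        window = log_prices[:end]
        # The first element returned by adfuller is the ADF t-statistic.
        stats.append(adfuller(window, maxlag=1, regression="c", autolag=None)[0])
    return max(stats)  # supremum of the recursive ADF statistics

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    prices = np.cumsum(rng.normal(size=500))  # random-walk null: no bubble
    print("SADF statistic:", sadf(prices))
```

A large positive SADF value, relative to simulated right-tail critical values, flags explosive (bubble) behaviour; the GSADF and BSADF variants additionally vary the window's starting point.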
Another strand of research on cryptocurrencies focuses on investigations based on the Log-Periodic Power Law (LPPL) framework. As noted above, MacDonell [2014] combines ARMA methodologies with the Johansen-Ledoit-Sornette (JLS) LPPL model on weekly data, with the LPPL specification safely predicting the crash that took place in December 2013. Bianchetti et al. [2018] employ daily data on Bitcoin and Ethereum covering the period from December 1, 2016 until January 16, 2018 in order to detect bubbles in their prices. The methodologies adopted are the Log-Periodic Power Law (LPPL) model of Johansen, Ledoit and Sornette (JLS), the model of Phillips, Shi and Yu (PSY) and genetic algorithms. To be more precise, the Ordinary Least Squares (OLS), generalised Least Squares (GLS) and Maximum Likelihood Estimation (MLE) specifications of the JLS model are adopted. Moreover, the two versions of the PSY methodology are employed. Estimations reveal that a Bitcoin bubble appears in mid-December 2017 and in the first half of January 2018. When it comes to Ethereum, bubble behaviour is traced in mid-June 2017 and a weaker bubble sign is detected around January 12, 2018. Wheatley et al. [2018] employ a generalised Metcalfe's law in combination with the Log-Periodic Power Law Singularity (LPPLS) model in order to predict bubbles and crashes in the markets of digital currencies. They define bubbles as deviations of the Market-to-Metcalfe value that they construct, and document that four bubbles have arisen in the Bitcoin market, with varying height and length among them. These bubbles started on: August 28, 2012; April 10, 2013; December 5, 2013; and December 28, 2017. Therefore, these results give credence to the belief that no random walk exists in cryptocurrency markets.

The Log-Periodic Power Law (LPPL) model is based on econophysics and seeks to determine whether a critical point is reached. It is supposed that bubbles or crashes obey a particular power law with log-periodic fluctuations. This model predicts the date of occurrence of a bubble or crash, as it contains a component that captures the market's excessive volatility before a crash.
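For concreteness, the LPPL log-price trajectory is commonly written as ln p(t) = A + B(t_c − t)^m + C(t_c − t)^m cos(ω ln(t_c − t) − φ), where t_c is the critical (crash) time. The short sketch below simply evaluates this trajectory in Python; the parameter values are arbitrary placeholders for demonstration, and in empirical work the parameters, especially t_c, are estimated by non-linear least squares.

```python
# Illustrative evaluation of the LPPL log-price trajectory (placeholder
# parameters; not a fitted model from any of the surveyed papers).
import numpy as np

def lppl_log_price(t, tc, m, w, A, B, C, phi):
    """Log-price implied by the LPPL specification, valid for t < tc."""
    dt = tc - t
    return A + B * dt**m + C * dt**m * np.cos(w * np.log(dt) - phi)

# A synthetic bubble path approaching a critical time tc = 500.
t = np.arange(1, 500)
path = lppl_log_price(t, tc=500.0, m=0.5, w=8.0, A=2.0, B=-0.1, C=0.01, phi=0.0)
# The estimated tc is the forecast crash / regime-change date; the oscillatory
# cosine term captures the accelerating log-periodic fluctuations before it.
```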
Findings provide evidence that apart from the speculative character of Bitcoin also the long-term fundamentals as expressed by the low-frequency components are major determinants of fluctuations in Bitcoin quotes. Cheah and Fry [2015] employ daily closing prices about the Bitcoin Coindesk Index spanning the period from July 18, 2010 to July 17, 2014 in order to perform price modelling and detect the existence of bubbles. By following Johannessen [2017] they use a price model including a Wiener process and a jump process in order to control whether the intrinsic rate of return and the intrinsic level of risk are constant. They examine the bubble component as well as run a BDS test to trace bubble behaviour. Results reveal that a bubble character exists in the Bitcoin market and the random walk hypothesis is rejected. The speculative character of Bitcoin fed by high volatility and explosive behaviour of the currency is reinforced by econometric outcomes. Fry and Cheah [2016] develop an econophysics model in order to investigate the formation of bubbles in Bitcoin and Ripple. They employ data on market capitalisation and market share as well as daily closing values of Bitcoin Coindesk Index and weekly data on Ripple covering the period from February 26, 2013 to February 24, 2015. Events of exogenous and endogenous shocks in these currencies are taken into consideration. Univariate and bivariate model representations are used to test for spillover and contagion effects. Evidence documents that Ripple is over-priced in relation to Bitcoin and that the former exerted a spillover influence to the latter that exacerbated recent price falls in Bitcoin. Holub and Johnson [2019] investigate the influence that the Bitcoin bubble exerted on Bitcoin’s peer-to-peer (P2P) market during the bullish 2017 period. They employ daily data that span the period from January 2017 to June 2018. Thereby, the increasing, the skyrocketing and the bearish periods in Bitcoin’s market quotes are examined. Furthermore, data of national currencies from 13 12 ----- advanced and developing economies are used. Emphasis is paid on analysis of publicly available bid-ask spreads. Results indicate that spreads decline for the US dollar, the Hong Kong dollar, the dollar of New Zealand, the Swedish Krone and the Singapore dollar. Nevertheless, the Euro, the United Kingdom pound, the Australian dollar, the Brazilian real, the Norwegian Krone, the Polish Zloty, the Russian Rouble and the South African Rand do not present significant falls in spreads while they abide by the thinking that higher Bitcoin prices lead to wider spreads. This gives credence to currency and country dependency of the bubble’s effect on Bitcoin prices in the P2P market. Chen and Hafner [2019] investigate whether sentiment-induced bubbles exist in markets of digital currencies by using daily data covering the period from August 8, 2014 to May 15, 2018. They test for bubbles using a transition variable and the CRIX index in a smooth transition autoregressive model (STAR) with regime switching. Moreover, volatility is expressed by a Beta-t-Exponential Generalised Autoregressive Conditional Heteroskedasticity (Beta-t-EGARCH) model. Estimations indicate that volatility has a negative nexus with the sentiment index. Multiple periods are detected in the period from May 2017 to April 2018. It is revealed that volatility is higher during bubble periods. In a more recent strand of research, Corbet et al. 
[2020] employ Generalised Autoregressive Conditional Heteroskedasticity (GARCH) and Dynamic Conditional Correlations Generalised Au toregressive Conditional Heteroskedasticity (DCC-GARCH) methodologies with 5-minute data to the nexus between Kodak returns and Dow Jones Industrial Average (DJIA) as well as Bitcoin returns. The period examined spans November 22, 2017 to February 21, 2018 divided into sub periods. They provide evidence that before the KodakCoin announcement, there was a strong link age between Kodak and the DJIA index, whereas a weal one with Bitcoin, Nevertheless, after the KodakCoin announcement, the connection between Kodak and the DJIA rendered weaker but the relation of Kodak with Bitcoin was significantly fortified. Kodak’s return volatility also reveals the closer linkage with risky digital currencies after the announcement. Chaim and Laurini [2019] investigate whether Bitcoin is a bubble by adopting the strict local martingale theory of finan cial bubbles and employing the non-parametric estimator of Florens-Zmirou and the Hamiltonian Monte Carlo simulation scheme for estimations. Examination is also conducted with the SP500 index, the euro-dollar exchange rate, the gold-dollar prices and the market value of Brent oil for comparison purposes. It is found that Bitcoin exhibits bubble behaviour only during the period from January 2013 to April 2014. Cagli [2019] investigate explosive behaviour in the market values of Bitcoin, Ethereum, Ripple, Litecoin, Stellar, Nem, Dash and Monero by employing daily data spanning from September 2015 to January 2018. The methodology adopted is based on Chen et al. [2017]. Evidence indicates that all digital currencies except for Nem present explosive behaviour 13 ----- and exhibit significant pairwise comovement linkages. More specifically, statistically significant bi lateral co-explosive relations are detected between the pairs of: Bitcoin-Dash, Ethereum-Litecoin, Ethereum-Dash, Ethereum-Monero and Ripple-Stellar. It should also be noted that recent academic work has focused interest on investigating which model would better fit the examination of cryptocurrency booms and busts. Cretarola and Figà Talamanca [2019a] employ a continuous time stochastic model for Bitcoin dynamics. They provide evidence that bubbles are connected with the correlation between the market attention factor on Bitcoin and Bitcoin returns being above a non-negative threshold. Thereby, market exuberance is found to be influential for Bitcoin bubbles. Such bubbles are evident during 2012-2013 and 2017. Moreover, Cretarola and Figà-Talamanca [2019b] extend the model employed in Cretarola and Figà Talamanca [2019a] and allow for a state-dependent correlation parameter between asset returns and market attention. It is revealed that based on the modified model the correlation between cryptocurrencies and their market attention can indicate the speed by which a bubble boosts. Both Pyo and Lee [2019] and Corbet et al. [2020] investigate the impact of FOMC announcements on Bitcoin returns by conducting regressions. They take into consideration 65 FOMC meetings related to monetary policy. Findings reveal that the Producer Price Index exerts significant effects on Bitcoin prices only one day before the FOMC announcement while no significant impacts from macroeconomic announcements are found in general. 
Eom [2020] by using Bitcoin data from Korea and the US and employing Generalised Method of Moments (GMM) estimations support that the high trading volume and price instability can explain the Kimchi premium. Higher Bitcoin bubbles lead to a clearer nexus between trading volume and premium. Bubbles are found to grow due to fundamental uncertainty and higher trading. Moreover, Shu and Zhu [2020] provide evidence that an adaptive multilevel time series detection methodology based on the LPPLS model and high frequency data can effectively detect bubbles. Moreover, it can forecast bubble crashes, even for short-term bubbles. In another vein, Xiong et al. [2019] verify that bubble estimation based on the production cost by applying VAR and LPPL models display good predictive capacities. Moreover, the price-electricity cost ratio (PECR) and the bubble coefficient (BC) are found to be effective measures. Furthermore, it is argued that the next large Bitcoin bubble is expected to take place in the second half of 2020, just after Bitcoin’s halving. **Insert Figure 1 about here** Emphasis should be paid in that academic evidence reveals a clearer bubble character in major cryptocurrencies, especially Bitcoin but also Ethereum, whereas the remaining highly-capitalised 14 ----- digital currencies present price increases in a more modest level. It should be emphasised when the CRIX index, the Bitcoin Price Index or the Mt.Gox values represent Bitcoin, bubbles are found to be more intensive. Moreover, one should underline that methodologies based on the SADF provide evidence of higher or multiple bubbles in cryptocurrency markets. While considering all of the above research, it is very important to try to define a central estimate over time as to how estimates of the size of a bubble in cryptocurrency markets vary. While this research provides a central piece that provides a broad overview of the techniques used to measure pricing bubbles, we further attempt to provide estimates both over time frequency and by type of cryptocurrency. In Figure 1, we observe eight examples of monthly cryptocurrency price behaviour when compared to that of the periods of time in which academic research had pre-defined the existence of bubble-like properties in each respective market using the techniques earlier outlined in our research. The collected data used to generate these figures are available in the attached Appendices. We can clearly observe that each example with the exception of Maidsafecoin and Monero exhibit sustained warnings with regards to the existence of bubbles far in advance of the sharp price increases that existed throughout 2016 and 2017. Interestingly, such warnings then disappeared when the price of each cryptocurrency subsequently collapsed throughout 2017 and during early 2018. Although there have existed many warnings throughout a variety of reputable academic sources, it would largely appear that such advice has been broadly ignored. Much of the research provided in this systematic review considers cryptocurrencies to be an exceptionally volatile product, exhibiting many behavioural traits that do not appear to be shared within traditional financial markets. **5. Concluding Comments** The substantial body of evidence that seeks to test for the existence and measurement of the size of bubble price formation in financial assets has accumulated substantially during the past decades. 
There already exists considerable evidence that economic sentiment and speculative mo tives combined with overconfidence, trigger significant divergences of asset market values from the corresponding fundamental values. Bubble-formation has received a wide array of alternative defini tions. The majority of these definitions agree with the view that such behaviour is generated within elevated interest of economic units due to especially favourable conditions that lead to multiple size of nominal values in relation to the fair value. The asset pricing approach considers assets as investment tools capable of proving extremely profitable for traders. The highly speculative char acteristics of cryptocurrencies and the consequentially increasing popularity of Bitcoin and other digital coins fuelled the bubble price literature with some very interesting academic debate during 15 ----- recent years. Research interest in cryptocurrency bubbles is increasing substantially due to the ensuing challenges that high and enduring price alterations bring to the surface. There are a vari ety of investigative methodologies preferred across cases where a bubble is singular or when there are multiple bubbles. Moreover, different detection approaches are preferred in the case that is mildly-explosive or explosive in nature. While investing in cryptocurrencies renders an increasingly popular option as prices elevate, substantial uncertainty remains due to the enormous levels of volatility in both returns and unpre dictability, therefore risk. Bubble formation in prices of virtual coins leads to substantial difficulty in such currencies performing efficiently as a account of unit and store of value, some of the key func tions in which much literature has observed substantial weakness within these developing products. Literature associated with digital currency bubbles indicates that Bitcoin has presented several bub ble phases, mostly during the years 2013 and 2017. Other major coins also exhibit several bubble phases. Most studies employ daily data from free sources but papers employing high-frequency data from not publicly accessible data sources have also been authored. The most popular methodologies for detecting bubbles have been the Augmented Dickey Fuller (ADF). Moreover, the Log-Periodic Power Law (LPPL) methodology is often used in relevant research. Overall, the highly speculative, volatile and unpredictable character of cryptocurrencies is verified by empirical studies. The present study contributes to relevant literature by providing an overall perspective of empirical academic studies of bubble price formation of digital currencies and a road-map for future research. This could prove a highly valuable tool for investors, speculators, regulators and supervising authorities. Finally, it is worth asking as to whether the bubble characteristics of digital currencies will perpetuate in the future without risk of key cryptocurrency assets such as Bitcoin bursting. To the extent that elevated investor optimism continues and irrational behaviour dominates investing strategies, prices will most likely remain in an upward trajectory. Virtual currencies created by monetary authorities (such as the Central Bank Digital Currency, CBDC) or coins attached to bank deposits or government securities (such as stablecoins) are identified to play a primordial role in the survival of cryptocurrencies. 
Should regulation or innovation in digital money strengthen the 'trust' of investors in digital forms of liquidity, such currencies could enjoy legal tender status, which could give owners of these products the ability to protect themselves from instability and frequent upheavals. A tendency towards centralisation of digital currencies could contribute towards cooling digital bubbles before they burst and lead to further crisis episodes.

**Bibliography**

Akhtaruzzaman, M., A. Sensoy, and S. Corbet (2019). The influence of bitcoin on portfolio diversification and design. *Finance Research Letters*, 101344.
Akyildirim, E., S. Corbet, D. Cumming, B. Lucey, and A. Sensoy (2020). Riding the wave of crypto-exuberance: The potential misusage of corporate blockchain announcements. *SSRN Working Paper*.
Akyildirim, E., S. Corbet, P. Katsiampa, N. Kellard, and A. Sensoy (2019). The development of bitcoin futures: Exploring the interactions between cryptocurrency derivatives. *Finance Research Letters*.
Ammous, S. (2018). Can cryptocurrencies fulfil the functions of money? *Quarterly Review of Economics and Finance 70*, 38–51.
Aysan, A., E. Demir, G. Gozgor, and C. Lau (2019). Effects of the geopolitical risks on bitcoin returns and volatility. *Research in International Business and Finance 47*, 511–518.
Azariadis, C. (1981). Self-fulfilling prophecies. *Journal of Economic Theory 25* (3), 380–396.
Baur, D., K. Hong, and A. Lee (2018). Bitcoin: Medium of exchange or speculative assets? *Journal of International Financial Markets, Institutions and Money 54*, 177–189.
Bedi, P. and T. Nashier (2020). On the investment credentials of bitcoin: A cross-currency perspective. *Research in International Business and Finance 51*.
Beneki, C., A. Koulis, N. Kyriazis, and S. Papadamou (2019). Investigating volatility transmission and hedging properties between bitcoin and ethereum. *Research in International Business and Finance 48*, 219–227.
Bianchetti, M., C. Ricci, and M. Scaringi (2018). Are cryptocurrencies real financial bubbles? Evidence from quantitative analyses. A version of this paper was published in *Risk 26*.
Blanchard, O. J. and M. W. Watson (1982). Bubbles, rational expectations and financial markets.
Bouoiyour, J., R. Selmi, and A. Tiwari (2014). Is bitcoin business income or speculative bubble? Unconditional vs. conditional frequency domain analysis.
Bouoiyour, J., R. Selmi, A. Tiwari, and O. Olayeni (2016). What drives bitcoin price? *Economics Bulletin 36* (2), 843–850.
Bouri, E., R. Gupta, and D. Roubaud (2019). Herding behaviour in cryptocurrencies. *Finance Research Letters 29*, 216–221.
Brunnermeier, M. and M. Oehmke (2013). Bubbles, financial crises, and systemic risk. *Handbook of the Economics of Finance 2* (PB), 1221–1288.
Cagli, E. (2019). Explosive behavior in the prices of bitcoin and altcoins. *Finance Research Letters 29*, 398–403.
Caporale, G., L. Gil-Alana, and A. Plastun (2018). Persistence in the cryptocurrency market. *Research in International Business and Finance 46*, 141–148.
Chaim, P. and M. Laurini (2019). Is bitcoin a bubble? *Physica A: Statistical Mechanics and its Applications 517*, 222–232.
Cheah, E.-T. and J. Fry (2015). Speculative bubbles in bitcoin markets? An empirical investigation into the fundamental value of bitcoin. *Economics Letters 130*, 32–36.
Chen, C. Y.-H. and C. M. Hafner (2019). Sentiment-induced bubbles in the cryptocurrency market. *Journal of Risk and Financial Management 12* (2), 53.
Chen, M., Q. Wu, and B. Yang (2019). How valuable is FinTech innovation? *Review of Financial Studies 32*, 2062–2106.
Chen, Y., P. Phillips, and J. Yu (2017). Inference in continuous systems with mildly explosive regressors. *Journal of Econometrics 201* (2), 400–416.
Cheung, A., E. Roca, and J.-J. Su (2015). Crypto-currency bubbles: An application of the Phillips–Shi–Yu (2013) methodology on Mt. Gox bitcoin prices. *Applied Economics 47* (23), 2348–2358.
Corbet, S., D. J. Cumming, B. M. Lucey, M. Peat, and S. A. Vigne (2019). The destabilising effects of cryptocurrency cybercriminality. *Economics Letters*, 108741.
Corbet, S., V. Eraslan, B. M. Lucey, and A. Sensoy (2019). The effectiveness of technical trading rules in cryptocurrency markets. *Finance Research Letters 31*, 32–37.
Corbet, S. and P. Katsiampa (2018). Asymmetric mean reversion of bitcoin price returns. *International Review of Financial Analysis*.
Corbet, S., C. Larkin, B. Lucey, A. Meegan, and L. Yarovaya (2020). Cryptocurrency reaction to FOMC announcements: Evidence of heterogeneity based on blockchain stack position. *Journal of Financial Stability 46*, 100706.
Corbet, S., C. Larkin, B. Lucey, and L. Yarovaya (2020). Kodakcoin: A blockchain revolution or exploiting a potential cryptocurrency bubble? *Applied Economics Letters 27* (7), 518–524.
Corbet, S., B. Lucey, M. Peat, and S. Vigne (2018). Bitcoin futures - what use are they? *Economics Letters 172*, 23–27.
Corbet, S., B. Lucey, A. Urquhart, and L. Yarovaya (2019). Cryptocurrencies as a financial asset: A systematic analysis. *International Review of Financial Analysis 62*, 182–199.
Corbet, S., B. Lucey, and L. Yarovaya (2018). Datestamping the Bitcoin and Ethereum bubbles. *Finance Research Letters 26*, 81–88.
Corbet, S., A. Meegan, C. Larkin, B. Lucey, and L. Yarovaya (2018). Exploring the dynamic relationships between cryptocurrencies and other financial assets. *Economics Letters 165*, 28–34.
Corsi, F. and D. Sornette (2014). Follow the money: The monetary roots of bubbles and crashes. *International Review of Financial Analysis 32*, 47–59.
Cretarola, A. and G. Figà-Talamanca (2019a). Bubble regime identification in an attention-based model for bitcoin and ethereum price dynamics. *Economics Letters*, 108831.
Cretarola, A. and G. Figà-Talamanca (2019b). Detecting bubbles in bitcoin price dynamics via market exuberance. *Annals of Operations Research*, 1–21.
Dale, R., J. Johnson, and L. Tang (2005). Financial markets can go mad: Evidence of irrational behaviour during the South Sea bubble. *Economic History Review 58* (2), 233–271.
de Sousa, H. R. and A. Pinto (2019). Blockchain based informed consent with reputation support. In *International Congress on Blockchain and Applications*, pp. 54–61. Springer.
Diba, B. T. and H. I. Grossman (1988). Explosive rational bubbles in stock prices? *The American Economic Review 78* (3), 520–530.
Dwyer, G. (2015). The economics of bitcoin and similar private digital currencies. *Journal of Financial Stability 17*, 81–91.
Eom, Y. (2020). Premium and speculative trading in bitcoin. *Finance Research Letters*, 101505.
Evans, G. (1989). The fragility of sunspots and bubbles. *Journal of Monetary Economics 23* (2), 297–317.
Fassas, A., S. Papadamou, and A. Koulis (2020). Price discovery in bitcoin futures. *Research in International Business and Finance 52*.
Felix, T. and H. von Eije (2019). Underpricing in the cryptocurrency world: Evidence from initial coin offerings. *Managerial Finance 45*, 563–578.
Flood, R. P. and R. J. Hodrick (1990). On testing for speculative bubbles. *Journal of Economic Perspectives 4* (2), 85–101.
Flori, A. (2019). News and subjective beliefs: A Bayesian approach to bitcoin investments. *Research in International Business and Finance 50*, 336–356.
Foley, S., J. R. Karlsen, and T. J. Putnins (2019). Sex, drugs, and Bitcoin: How much illegal activity is financed through cryptocurrencies? *Review of Financial Studies 32*, 1789–1853.
Foster, J. and P. Wild (1999). Econometric modelling in the presence of evolutionary change. *Cambridge Journal of Economics 23* (6), 749–770.
Frehen, R., W. Goetzmann, and K. Geert Rouwenhorst (2013). New evidence on the first financial bubble. *Journal of Financial Economics 108* (3), 585–607.
Froot, K. A. and M. Obstfeld (1989). Intrinsic bubbles: The case of stock prices. Technical report, National Bureau of Economic Research.
Fry, J. (2018). Booms, busts and heavy-tails: The story of Bitcoin and cryptocurrency markets? *Economics Letters 171*, 225–229.
Fry, J. and E.-T. Cheah (2016). Negative bubbles and shocks in cryptocurrency markets. *International Review of Financial Analysis 47*, 343–352.
Gandal, N., J. Hamrick, T. Moore, and T. Oberman (2018). Price manipulation in the Bitcoin ecosystem. *Journal of Monetary Economics 95*, 86–96.
Garber, P. M. (1990). Famous first bubbles. *Journal of Economic Perspectives 4* (2), 35–54.
Geuder, J., H. Kinateder, and N. Wagner (2019). Cryptocurrencies as financial bubbles: The case of bitcoin. *Finance Research Letters*.
Gil-Alana, L., E. Abakah, and M. Rojo (2020). Cryptocurrencies and stock market indices. Are they related? *Research in International Business and Finance 51*.
Goldstein, I., W. Jiang, and A. Karolyi (2019). To FinTech and beyond. *Review of Financial Studies 32*, 1647–1661.
Gurkaynak, R. (2008). Econometric tests of asset price bubbles: Taking stock. *Journal of Economic Surveys 22* (1), 166–186.
Hafner, C. (2018). Testing for bubbles in cryptocurrencies with time-varying volatility. *Available at SSRN 3105251*.
Hall, S., Z. Psaradakis, and M. Sola (1999). Detecting periodically collapsing bubbles: A Markov-switching unit root test. *Journal of Applied Econometrics 14* (2), 143–154.
Handika, R., G. Soepriyanto, and S. Havidz (2019). Are cryptocurrencies contagious to Asian financial markets? *Research in International Business and Finance 50*, 416–429.
Hayes, A. (2019). Bitcoin price and its marginal cost of production: Support for a fundamental value. *Applied Economics Letters 26* (7), 554–560.
Hendrickson, J. and W. Luther (2017). Banning Bitcoin. *Journal of Economic Behavior and Organization 141*, 188–195.
Holub, M. and J. Johnson (2019). The impact of the bitcoin bubble of 2017 on bitcoin's p2p market. *Finance Research Letters 29*, 357–362.
Johannessen, J.-A. (2017). The South Sea and Mississippi bubbles of 1720. In *Innovations Lead to Economic Crises*, pp. 59–87. Springer.
Kallinterakis, V. and Y. Wang (2019). Do investors herd in cryptocurrencies – and why? *Research in International Business and Finance 50*, 240–245.
Kindleberger, C. P. and R. Z. Aliber (2011). *Manias, panics and crashes: A history of financial crises*. Palgrave Macmillan.
Lucas Jr, R. E. (1978). Asset prices in an exchange economy. *Econometrica: Journal of the Econometric Society*, 1429–1445.
Ma, D. and H. Tanizaki (2019). The day-of-the-week effect on bitcoin return and volatility. *Research in International Business and Finance 49*, 127–136.
MacDonell, A. (2014). Popping the bitcoin bubble: An application of log-periodic power law modeling to digital currency. *University of Notre Dame working paper*.
Mensi, W., Y.-J. Lee, K. H. Al-Yahyaee, A. Sensoy, and S.-M. Yoon (2019). Intraday downward/upward multifractality and long memory in Bitcoin and Ethereum markets: An asymmetric multifractal detrended fluctuation analysis. *Finance Research Letters 31*, 19–25.
Nadler, P. and Y. Guo (2020). The fair value of a token: How do markets price cryptocurrencies? *Research in International Business and Finance 52*.
Nguyen, T., B. Nguyen, K. Nguyen, and H. Pham (2019). Asymmetric monetary policy effects on cryptocurrency markets. *Research in International Business and Finance 48*, 335–339.
Nguyen, T., B. Nguyen, T. Nguyen, and Q. Nguyen (2019). Bitcoin return: Impacts from the introduction of new altcoins. *Research in International Business and Finance 48*, 420–425.
O'Hara, M. (2008). Bubbles: Some perspectives (and loose talk) from history. *The Review of Financial Studies 21* (1), 11–17.
Omane-Adjepong, M. and I. Alagidede (2019). Multiresolution analysis and spillovers of major cryptocurrency markets. *Research in International Business and Finance 49*, 191–206.
Panagiotidis, T., T. Stengos, and O. Vravosinos (2018). On the determinants of Bitcoin returns: A LASSO approach. *Finance Research Letters 27*, 235–240.
Phillips, P. C., S. Shi, and J. Yu (2012). Testing for multiple bubbles.
Phillips, P. C., S. Shi, and J. Yu (2014). Specification sensitivity in right-tailed unit root testing for explosive behaviour. *Oxford Bulletin of Economics and Statistics 76* (3), 315–333.
Phillips, P. C., S. Shi, and J. Yu (2015). Testing for multiple bubbles: Historical episodes of exuberance and collapse in the S&P 500. *International Economic Review 56* (4), 1043–1078.
Phillips, R. C. and D. Gorse (2018). Cryptocurrency price drivers: Wavelet coherence analysis revisited. *PloS One 13* (4), e0195200.
Puljiz, M., S. Begušić, and Z. Kostanjčar (2018). Market microstructure and order book dynamics in cryptocurrency exchanges. In *Crypto Valley Conference on Blockchain Technology*.
Pyo, S. and J. Lee (2019). Do FOMC and macroeconomic announcements affect bitcoin prices? *Finance Research Letters*, 101386.
Sensoy, A. (2019). The inefficiency of Bitcoin revisited: A high-frequency analysis with alternative currencies. *Finance Research Letters 28*, 68–73.
Shanaev, S., S. Sharma, B. Ghimire, and A. Shuraeva (2020). Taming the blockchain beast? Regulatory implications for the cryptocurrency market. *Research in International Business and Finance 51*.
Shiller, R. J. (2015). *Irrational exuberance: Revised and expanded third edition*. Princeton University Press.
Shiller, R. J., S. Fischer, and B. M. Friedman (1984). Stock prices and social dynamics. *Brookings Papers on Economic Activity 1984* (2), 457–510.
Shu, M. and W. Zhu (2020). Real-time prediction of bitcoin bubble crashes. *Physica A: Statistical Mechanics and its Applications*, 124477.
Sifat, I., A. Mohamad, and M. Mohamed Shariff (2019). Lead-lag relationship between bitcoin and ethereum: Evidence from hourly and daily data. *Research in International Business and Finance 50*, 306–321.
Su, C.-W., Z.-Z. Li, R. Tao, and D.-K. Si (2018). Testing for multiple bubbles in bitcoin markets: A generalized sup ADF test. *Japan and the World Economy 46*, 56–63.
Symitsi, E. and K. Chalvatzis (2019). The economic value of bitcoin: A portfolio analysis of currencies, gold, oil and stocks. *Research in International Business and Finance 48*, 97–110.
Taipalus, K. (2012). *Detecting asset price bubbles with time-series methods*.
Tirole, J. (1985). Asset bubbles and overlapping generations. *Econometrica: Journal of the Econometric Society*, 1499–1528.
Urquhart, A. (2016). The inefficiency of Bitcoin. *Economics Letters 148*, 80–82.
Urquhart, A. (2017). Price clustering in Bitcoin. *Economics Letters 159*, 145–148.
Van Horne, J. (1985). Of financial innovations and excesses. *The Journal of Finance 40* (3), 621–631.
Van Vliet, B. (2018). An alternative model of Metcalfe's Law for valuing Bitcoin. *Economics Letters 165*, 70–72.
Vogel, H. and R. Werner (2015). An analytical review of volatility metrics for bubbles and crashes. *International Review of Financial Analysis 38*, 15–28.
Wei, W. (2018). The impact of Tether grants on Bitcoin. *Economics Letters 171*, 19–22.
West, K. D. (1987). A specification test for speculative bubbles. *The Quarterly Journal of Economics 102* (3), 553–580.
Wheatley, S., D. Sornette, T. Huber, M. Reppen, and R. N. Gantner (2018). Are bitcoin bubbles predictable? Combining a generalized Metcalfe's law and the LPPLS model. *Swiss Finance Institute Research Paper* (18-22).
Xiong, J., Q. Liu, and L. Zhao (2019). A new method to verify bitcoin bubbles: Based on the production cost. *The North American Journal of Economics and Finance*, 101095.
Zargar, F. and D. Kumar (2019). Informational inefficiency of bitcoin: A study based on high-frequency data. *Research in International Business and Finance 47*, 344–353.

**Figure 1: Bubbles in cryptocurrency markets as identified by academic studies.** Panels: a) Bitcoin; b) Ethereum; c) Dash; d) Litecoin; e) Maidsafecoin; f) Monero; g) Ripple; h) Stellar. Note: The above figures represent selected one hundred day dynamic correlations between a selected sub-set of companies in the above analysis and our selected cryptocurrency fund.

**Appendices**

**Table A1**: Studies about bubble price formation in cryptocurrencies

| Authors | Currencies examined | Frequency of data | Time period examined | Data source | Methodology | Conclusions |
|---|---|---|---|---|---|---|
| Puljiz et al. [2018] | Bitcoin prices on Bitfinex, BitStamp, BTC-e, Kraken, Mt.Gox | Trade-level, in frequencies from 1 minute up to 1 day | Bitfinex: March 2013–December 2016; Mt.Gox: July 2010–February 2014; Kraken: January 2014–February 2018; BTC-e: August 2011–July 2017; BitStamp: September 2011–February 2018 | Bitfinex, BitStamp, BTC-e, Kraken, Mt.Gox | Scaling exponent in tails using the Hill estimator | Volatility and heavy tails |
| Bianchetti et al. [2018] | Bitcoin; Ethereum | Daily | December 1, 2016–January 16, 2018 | Bloomberg | Log-Periodic Power Law (LPPL) by Johansen and Sornette (1999); OLS, GLS and MLE with the Johansen-Ledoit-Sornette (JLS) model; Phillips-Shi-Yu (PSY) model with Backward Supremum Augmented Dickey-Fuller (BSADF and BSADF*) | Yes |
| Bouri et al. [2019] | Bitcoin, Ripple, Ethereum, Litecoin, Nem, Dash, Stellar | Daily | August 7, 2015–November 7, 2015 | Coinmarketcap.com | Generalised Supremum Augmented Dickey-Fuller (GSADF) by Phillips et al. (2013); logistic regression | Yes |
| Bouoiyour et al. [2014] | Bitcoin Price Index | Daily | December 2010–June 2014 | www.blockchain.info; www.quandl.com; Google | Frequency domain analysis; Granger causality by Breitung and Candelon (2006) | Yes |
| Bouoiyour et al. [2016] | Bitcoin Price Index | Daily | December 2010–June 2015 | www.blockchain.info | Empirical Mode Decomposition (EMD); Kendall correlation; Pearson correlation | Yes, but also determined by long-term fundamentals |
| Cagli [2019] | Bitcoin, Ethereum, Ripple, Litecoin, Stellar, Nem, Dash and Monero | Daily | September 1, 2015–January 31, 2018 | Coinmarketcap.com | Methodology of Chen et al. (2017) | All except for Nem, and a bilateral co-explosive nexus between Bitcoin–Dash, Ethereum–Litecoin, Ethereum–Dash, Ethereum–Monero and Ripple–Stellar |
| Chaim and Laurini [2019] | Bitcoin | Daily; 5-minute frequency | January 2013–September 2018 (in sub-periods) | Blockchain.com | Non-parametric estimator of Florens-Zmirou (1993); Hamiltonian Monte Carlo simulation scheme | Yes, from January 2013 to April 2014 |
| Cheah and Fry [2015] | Bitcoin Coindesk Index | Daily | July 18, 2010–July 17, 2014; January 1, 2013–November 30, 2013 | Coinmarketcap.com | Model with Wiener process and jump process; BDS test based on Brock et al. (1996) | Yes, intense bubble character |
| Cheung et al. [2015] | Bitcoins traded on Mt.Gox | Daily | July 17, 2010–February 18, 2014 | Bitcoincharts.com | GSADF by Phillips et al. (2013) | Yes, intense |
| Chen and Hafner [2019] | CRIX index | Daily | August 8, 2014–May 15, 2018 | StockTwits Application Programming Interface (API); thecrix.de | Smooth Transition Autoregressive (STAR) model; Beta-t-GARCH model by Creal et al. (2011) and Harvey (2013) in volatility; sentiment measures by Nasekin and Chen (2018) | Yes, multiple |
| Corbet et al. [2020] | KodakCoin; Bitcoin | 5-minute frequency | November 22, 2017–February 21, 2018 | Bloomberg; CryptoCompare.com | GARCH by Bollerslev (1986); DCC-GARCH by Engle (2002) | Yes |
| Corbet et al. [2018] | Bitcoin, Ethereum | Daily | January 9, 2009–November 9, 2017 | Historical APIs (Application Programming Interfaces) | Backward/Generalised Supremum Augmented Dickey-Fuller based on Phillips et al. (2011); rolling-window ADF-style regression | Yes, clearly |
| de Sousa and Pinto [2019] | Bitcoin, Ethereum, Ripple, Litecoin, Monero, Dash, MadeSafeCoin, Nem | Daily | Since the launch of each currency until January 27, 2017 | Coinmarketcap.com | Right-tailed ADF (RtADF), Rolling ADF (RADF), SADF, GSADF | Yes |
| Geuder et al. [2019] | Bitcoin | Daily | March 19, 2016–September 19, 2018 | Coinmarketcap.com | LPPL model by Filimonov and Sornette (2013); SADF, GSADF and BSADF by Phillips et al. (2015) | Yes |
| Fry and Cheah [2016] | Bitcoin, Ripple | Daily; Weekly | February 26, 2015–February 24, 2015 | Coinmarketcap.com; Ripplecharts.com; Coindesk.com | Univariate and multivariate models for bubbles using Wiener process and jump process | Yes |
| Hafner [2018] | Bitcoin, Ripple, Ethereum, Bitcoin Cash, Cardano, Litecoin, IOTA, Nem, Dash, Stellar, Monero | Daily | Since the launch of each currency until December 31, 2017 | Coinmarketcap.com; thecrix.de; CoinGecko | Spline-GARCH model of Engle and Rangel (2008); SADF by Phillips et al. (2011); E-GARCH by Nelson (1991); T-GARCH by Glosten et al. (1993) | Yes, in Bitcoin and the CRIX |
| Hayes [2019] | Bitcoin | Daily | June 29, 2013–April 27, 2018 | Blockchain.info | OLS, VAR, marginal cost of production model | Yes |
| Holub and Johnson [2019] | Bitcoin exchange rates in relation to 11 national currencies | Daily | January 2017–June 2018 | Bitcoincharts; Datastream | Measurement of the bid-ask spread | Yes |
| MacDonell [2014] | Bitcoin | Weekly | July 18, 2010–August 25, 2013 | Mt.Gox | MLE; LPPL; ARMA | Yes |
| Phillips and Gorse [2018] | Bitcoin; Ethereum; Litecoin; Monero | Daily | April 2015–September 2016 (Ethereum: August 8, 2015–September 2016) | Reddit | Hidden Markov Model (HMM) | Yes |
| Su et al. [2018] | Bitcoin | Weekly | June 16, 2011–September 30, 2017 | Wind database | SADF; GSADF by Phillips et al. (2013) | Yes, multiple |
| Wheatley et al. [2018] | Bitcoin | Daily | – | Bitinfocharts.com | Metcalfe's Law; OLS; GLS; LPPLS model | Yes |
| Cretarola and Figà-Talamanca [2019a] | Bitcoin, Ethereum | Daily | January 1, 2012–September 30, 2019 (Bitcoin); August 2015–September 2019 (Ethereum) | Coinmarketcap.com | Extension of the model in Cretarola and Figà-Talamanca [2019b] | Correlation between cryptocurrencies and their market attention can indicate the speed by which a bubble boosts |
| Cretarola and Figà-Talamanca [2019b] | Bitcoin | Daily | January 1, 2012–January 20, 2018 | www.blockchain.info | Continuous-time stochastic model depending on a market attention factor | Bubble effects in 2012–2013 and 2017 |
| Eom [2020] | Bitcoin | Daily | January 2015–September 2018 | Bitcoincharts.com; Coinmarketcap.com; Bank of Korea | Kimchi premium estimation; Generalized Method of Moments (GMM) | Cryptocurrency bubbles are loud; fundamental uncertainty leads to high trading and speculative bubbles |
| Pyo and Lee [2019] | Bitcoin | Daily; Monthly | July 18, 2010–September 10, 2018 | CryptoCompare.com; www.federalreserve.gov; www.bls.gov | Event-driven regression model | No significant impacts from macroeconomic announcements are found in general |
| Shu and Zhu [2020] | Bitcoin | Daily | January 11, 2017–April 11, 2019 | Bitcoincharts.com | Adaptive multilevel time series detection methodology based on the LPPLS model | The LPPLS confidence indicator employed is an excellent tool for detecting bubbles and forecasting bubble crashes |
| Xiong et al. [2019] | Bitcoin | Daily | January 1, 2011–December 30, 2018 | – | Vector Autoregressive (VAR) model; LPPL | Models display good predictive capacities; the next large Bitcoin bubble is expected to take place in the second half of 2020 |

**Table A2**: Bubbles in cryptocurrency markets according to studies

| Authors | Crypto with bubble character | Period of bubble behaviour |
|---|---|---|
| Bouri et al. [2019] | Bitcoin | October 27, 2015–November 7, 2015 |
| | Ethereum | February–March 2016 |
| | Bitcoin | Early January 2017 |
| | Dash, Ethereum | February 25, 2017–March 25, 2017 |
| | Ripple, Ethereum, Litecoin, Nem | April–May 2017 |
| | Bitcoin | Late May–June 2017 |
| | Bitcoin | August–September 2017 |
| | Bitcoin, Ripple, Litecoin, Nem, Dash, Stellar | Late October 2017 |
| Cagli [2019] | Bitcoin, Ethereum, Ripple, Litecoin, Stellar, Dash | Inside the period September 2015–January 2018 |
| Chaim and Laurini [2019] | Bitcoin | January 2013–April 2014 |
| Corbet et al. [2020] | KodakCoin (launch of KodakCoin) | January 9, 2018–February 21, 2018 |
| Corbet et al. [2018] | Bitcoin | 2013–2014 turn of the year |
| | Ethereum | Early 2016 and mid-2017 |
| Bianchetti et al. [2018] | Bitcoin | Mid-December 2017; first half of January 2018; mid-January 2018 |
| | Ethereum | Mid-June 2017 |
| de Sousa and Pinto [2019] | Bitcoin | October 20, 2013–December 15, 2013; September 19, 2014–September 23, 2014; October 4, 2014–October 9, 2014; October 30, 2015–November 5, 2015; May 29, 2016–June 23, 2016; October 27, 2016–November 4, 2016; December 22, 2016–January 4, 2017 |
| | Ethereum | January 15, 2016–February 1, 2016; February 4, 2016–February 17, 2016; February 23, 2016–March 25, 2016; March 28, 2016–April 2, 2016; June 13, 2016–June 18, 2016 |
| | Ripple | November 22, 2014–January 4, 2015 |
| | Litecoin | November 18, 2013–December 1, 2013; August 12, 2014–August 21, 2014; January 3, 2015–January 24, 2015; June 16, 2015–July 10, 2015; May 27, 2016–June 7, 2016; June 11, 2016–June 21, 2016 |
| | Monero | March 4, 2016–March 11, 2016; March 20, 2016–April 8, 2016; August 20, 2016–September 29, 2016; December 27, 2016–January 10, 2017 |
| | Dash | May 10, 2014–June 5, 2014; March 22, 2015–March 27, 2015; January 17, 2016–January 23, 2016; March 23, 2016–April 9, 2016; May 20, 2016–June 6, 2016; August 7, 2016–September 1, 2016; January 4, 2017–January 8, 2017 |
| | MaidSafe | July 12, 2014–July 22, 2014; December 4, 2014–December 9, 2014; July 22, 2015–July 30, 2015; February 11, 2016–March 29, 2016 |
| | NEM | January 18, 2016–January 24, 2016; February 1, 2016–February 17, 2016; March 6, 2016–March 16, 2016; March 25, 2016–April 3, 2016; June 13, 2016–July 7, 2016 |
| Cheung et al. [2015] | Bitcoin | April 24, 2011–July 3, 2011; January 27, 2013–April 15, 2013; November 5, 2013–February 18, 2014 |
| Geuder et al. [2019] | Bitcoin | May–June 2016; end of October–start of November 2016; December 2016–January 2017; mid-May 2017 to early July 2017; early August 2017–mid-September 2017; mid-October 2017–January 2018 |
| Hafner [2018] | Bitcoin | November 7, 2013–December 18, 2013; November 27, 2017–up to the time of writing |
| | CRIX index | May 5, 2017–up to the time of writing |
| Su et al. [2018] | Bitcoin (in the US) | Short period in August 2012; November 7, 2013–December 12, 2013; early 2017; May 18, 2017–September 14, 2017 |
| | Bitcoin (in China) | February 7, 2013–April 18, 2013; November 7, 2013–December 12, 2013; early 2017; May 18, 2017–September 14, 2017 |
| Phillips and Gorse [2018] | Monero | September 2016 |
| | Ethereum | January 2016–April 2016 |
| Wheatley et al. [2018] | Bitcoin | May 25, 2012–August 18, 2012; January 3, 2013–April 11, 2013; October 7, 2013–November 23, 2013; June 8, 2015–December 18, 2016; March 31, 2017–December 18, 2017 |
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.2139/ssrn.3758498?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.2139/ssrn.3758498, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://doras.dcu.ie/25991/1/R27.pdf" }
2,020
[ "Review" ]
true
2020-12-01T00:00:00
[]
19,222
en
[ { "category": "Medicine", "source": "external" }, { "category": "Biology", "source": "external" }, { "category": "Biology", "source": "s2-fos-model" }, { "category": "Medicine", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01d02c38ec9b9fbd256f2cb4f690ccd1d40c17e0
[ "Medicine", "Biology" ]
0.809926
Gene Expression Analysis Reveals Novel Shared Gene Signatures and Candidate Molecular Mechanisms between Pemphigus and Systemic Lupus Erythematosus in CD4+ T Cells
01d02c38ec9b9fbd256f2cb4f690ccd1d40c17e0
Frontiers in Immunology
[ { "authorId": "6821125", "name": "T. Sezin" }, { "authorId": "5733722", "name": "A. Vorobyev" }, { "authorId": "5143211", "name": "C. Sadik" }, { "authorId": "5601572", "name": "D. Zillikens" }, { "authorId": "90771027", "name": "Y. Gupta" }, { "authorId": "144637689", "name": "R. Ludwig" } ]
{ "alternate_issns": null, "alternate_names": [ "Front Immunol" ], "alternate_urls": [ "https://www.frontiersin.org/journals/immunology", "http://journal.frontiersin.org/journal/immunology" ], "id": "3f7605a7-7e53-4ff7-8895-42f09a5f4355", "issn": "1664-3224", "name": "Frontiers in Immunology", "type": "journal", "url": "http://www.frontiersin.org/immunology" }
Pemphigus and systemic lupus erythematosus (SLE) are severe potentially life-threatening autoimmune diseases. They are classified as B-cell-mediated autoimmune diseases, both depending on autoreactive CD4+ T lymphocytes to modulate the autoimmune B-cell response. Despite the reported association of pemphigus and SLE, the molecular mechanisms underlying their comorbidity remain unknown. Weighted gene co-expression network analysis (WGCNA) of publicly available microarray datasets of CD4+ T cells was performed, to identify shared gene expression signatures and putative overlapping biological molecular mechanisms between pemphigus and SLE. Using WGCNA, we identified 3,280 genes co-expressed genes and 14 co-expressed gene clusters, from which one was significantly upregulated for both diseases. The pathways associated with this module include type-1 interferon gamma and defense response to viruses. Network-based meta-analysis identified RSAD2 to be the most highly ranked hub gene. By associating the modular genes with genome-wide association studies (GWASs) for pemphigus and SLE, we characterized IRF8 and STAT1 as key regulatory genes. Collectively, in this in silico study, we identify novel candidate genetic markers and pathways in CD4+ T cells that are shared between pemphigus and SLE, which in turn may facilitate the identification of novel therapeutic targets in these diseases.
Edited by: Herman Waldmann, University of Oxford, United Kingdom

Reviewed by: Huanfa Yi, Jilin University, China; Anne Fletcher, Monash University, Australia

*Correspondence: Tanya Sezin [tanya.sezin@uksh.de](mailto:tanya.sezin@uksh.de)

†These authors have contributed equally to this work.

Specialty section: This article was submitted to Immunological Tolerance and Regulation, a section of the journal Frontiers in Immunology

Received: 30 August 2017
Accepted: 22 December 2017
Published: 17 January 2018

Citation: Sezin T, Vorobyev A, Sadik CD, Zillikens D, Gupta Y and Ludwig RJ (2018) Gene Expression Analysis Reveals Novel Shared Gene Signatures and Candidate Molecular Mechanisms between Pemphigus and Systemic Lupus Erythematosus in CD4+ T Cells. Front. Immunol. 8:1992. [doi: 10.3389/fimmu.2017.01992](https://doi.org/10.3389/fimmu.2017.01992)

# Gene Expression Analysis Reveals Novel Shared Gene Signatures and Candidate Molecular Mechanisms between Pemphigus and Systemic Lupus Erythematosus in CD4+ T Cells

Tanya Sezin 1*†, Artem Vorobyev 2†, Christian D. Sadik 1, Detlef Zillikens 1,2, Yask Gupta 2† and Ralf J. Ludwig 1,2†

1 Department of Dermatology, University of Lübeck, Lübeck, Germany; 2 Lübeck Institute of Experimental Dermatology (LIED), University of Lübeck, Lübeck, Germany

Pemphigus and systemic lupus erythematosus (SLE) are severe, potentially life-threatening autoimmune diseases. They are classified as B-cell-mediated autoimmune diseases, both depending on autoreactive CD4+ T lymphocytes to modulate the autoimmune B-cell response. Despite the reported association of pemphigus and SLE, the molecular mechanisms underlying their comorbidity remain unknown. Weighted gene co-expression network analysis (WGCNA) of publicly available microarray datasets of CD4+ T cells was performed to identify shared gene expression signatures and putative overlapping biological molecular mechanisms between pemphigus and SLE. Using WGCNA, we identified 3,280 co-expressed genes and 14 co-expressed gene clusters, of which one was significantly upregulated for both diseases. The pathways associated with this module include type-I interferon signaling, interferon-gamma-mediated signaling, and defense response to viruses. Network-based meta-analysis identified RSAD2 as the most highly ranked hub gene. By associating the modular genes with genome-wide association studies (GWASs) for pemphigus and SLE, we characterized IRF8 and STAT1 as key regulatory genes. Collectively, in this in silico study, we identify novel candidate genetic markers and pathways in CD4+ T cells that are shared between pemphigus and SLE, which in turn may facilitate the identification of novel therapeutic targets in these diseases.

Keywords: autoimmunity, gene expression analysis, weighted gene co-expression analysis, pemphigus, systemic lupus erythematosus, CD4+ T cells

## INTRODUCTION

Pemphigus is a rare autoimmune bullous dermatosis, clinically characterized by intraepidermal blistering of the skin and/or mucous membranes. Immunologically, pemphigus is characterized by autoantibodies directed against desmosomal and non-desmosomal adhesion molecules expressed in the skin and mucosa. Binding of the pathogenic autoantibodies in the skin leads to dissociation of adjacent keratinocytes and formation of blisters.
Based on the clinical presentation and the specificity of the anti-desmoglein (Dsg) autoantibodies, pemphigus is classified into two main forms: pemphigus vulgaris (PV), with autoantibodies targeting Dsg3 and, in some cases, also Dsg1; and pemphigus foliaceus (PF), with autoantibodies targeting Dsg1 (1).

The association of pemphigus with connective tissue diseases such as systemic lupus erythematosus (SLE) has previously been noted on a case-report/case-series basis (2, 3). In line with this, pemphigus autoantibodies and antinuclear autoantibodies, one immunological hallmark of SLE (4), coexist in healthy blood donors (5). However, the molecular mechanism remains unknown. The co-occurrence of pemphigus and SLE may suggest a common network of multifunctional genes and pathways; alternatively, it may be altogether serendipitous. Due to the complexity of such a system, weighted gene co-expression network analysis (WGCNA) can serve as a comprehensive tool for identifying clusters of correlated and connected shared genes (6, 7). This approach has previously been applied successfully in various biological contexts to identify regulatory genes and networks associated with multiple disease phenotypes (8–11).

Systemic lupus erythematosus and pemphigus are characterized by the production of autoantibodies and are traditionally classified as B-cell-mediated autoimmune diseases. Compelling evidence has, however, shown that autoreactive helper T lymphocytes are crucial in the pathogenicity of both diseases by regulating the B-cell response and promoting autoantibody production (12–15). Thus, studying gene expression networks within the CD4+ T-cell population is essential not only for understanding the underlying pathophysiology but also for identifying predictive biomarkers and establishing novel therapeutic targets for these diseases.

Using publicly available gene expression data from the NCBI GEO database, we investigated gene co-expression networks of CD4+ T cells obtained from pemphigus (PV as well as PF) and SLE patients (16). Our analysis revealed 14 distinct modules containing 3,280 co-expressed genes between the two diseases. Two out of 14 modules were found significantly upregulated: one in PF and SLE, and the other in PV. Using the KEGG database, we further identified biological pathways such as the "type I interferon signaling pathway" and "defense response to virus" to be enriched in the disease-associated modules. To the best of our knowledge, this is the first study applying a systems biology approach to identify shared molecular mechanisms between pemphigus and SLE.

## MATERIALS AND METHODS

Data Collection

All data for the analysis were collected by searching expression databases such as NCBI GEO and ArrayExpress for CD4+ T cells in pemphigus and SLE (17, 18). Datasets from other tissues or cell types were discarded, as were datasets without raw data files. Two datasets, one for pemphigus (GSE53873) and one for SLE (GDS4185), were included in this study. The covariate information available for the patients is summarized in Table S1 in Supplementary Material. Altogether, 46 samples (4 PV, 15 PF, 13 SLE, and 14 healthy controls) were used in the analysis. To avoid a potential bias that could be introduced by obtaining two separate microarray datasets, the deposited gene expression data were directly used for batch normalization.
The expression profiles were log2 transformed, and batch normalization was done using the "sva" and "combat" functions in the SVA R package (19). The effect of normalization was investigated by principal component analysis (PCA) using the R-based "prcomp" function. Since batch normalization still produced biased results (Figure 1), the raw files were preprocessed again and an additional normalization step was performed. In detail, raw gene expression profiles were deduced from text files (Codelink array) using the Codelink R package (20). Using the same package, the background was first corrected with the "normexp" method and then normalized by the "cyclicloess" method. For Affymetrix data, raw gene expression for each sample was derived using the R Affy package (21). The background correction was performed by "backgroundCorrect (method = 'normexp')" and cyclic loess normalization was performed on log2 expression values using the limma R package (22). All probes from each of the microarray platforms were filtered for significantly low expression/variation (P < 0.05) using the "varianceBasedfilter" function from the DCGL R package (23). The remaining probes were mapped to Ensembl gene identifiers, and probe expression was collapsed to gene-level expression using the "collapseRows" function with default parameters in the WGCNA R package (24). Consequently, batch normalization and statistical analysis were performed on the overlapping genes between the two platforms using "combat" and PCA analyses, respectively (25). The data were further investigated for the presence of confounding effects, such as the clinical form of the disease (generalized vs. localized) and treatment group (prednisone vs. untreated) for the pemphigus dataset (GSE53873), using the anosim function with 999 permutations in the vegan R package (26).

**Figure 1 | PCA plot illustrating the normalization procedure.** (A) PCA plot showing clustering of the samples based on the gene expression profiling, before and (B) after batch correction on raw data. (C) PCA plot showing clustering of the samples after using identical background correction and normalization methods, before and (D) after batch correction. The X- and Y-axes represent the first and the second principal components and the associated percentage of variation.

## Co-Expression Networks

Co-expression modules were generated using the WGCNA R package. A signed weighted adjacency matrix of pair-wise connection strengths (bicor correlation) was constructed using the soft-threshold approach with a scale-independent topological power β = 6 (a minimal sketch of these quantities is given at the end of this section). For a gene, the connectivity was defined as the sum of its connection strengths with all other genes. Genes were aggregated into modules by hierarchical clustering and refined by the dynamic tree cut algorithm. Thereafter, module eigenvalues were calculated. The eigenvalue is the first principal component of the gene expression profile within a module, representing the average module expression profile (27). The statistical significance (P < 0.05) of module eigenvalues among the groups was assessed by the Kruskal–Wallis test. Modular hub gene candidates were identified by correlating the gene expression with its module eigenvalues ("chooseTopHubInEachModule" function in WGCNA). To generate the causal network within a module, the C3NET R package was used (28). The algorithm uses mutual information theory to construct gene networks from gene expression data. The final network was generated using the "c3net" function with default settings. A gene–gene interaction was considered to be significant if α < 0.05.
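The following is a minimal, language-agnostic sketch (in Python rather than the WGCNA R package actually used here) of the three quantities just described: the signed soft-thresholded adjacency, gene connectivity, and the module eigengene. Plain Pearson correlation stands in for the biweight midcorrelation (bicor) purely for brevity:

```python
# Minimal sketch of the core WGCNA quantities described above (signed
# weighted adjacency, connectivity, module eigengene). The paper's analysis
# used the WGCNA R package with bicor; Pearson correlation is a stand-in.
import numpy as np

def signed_adjacency(expr: np.ndarray, beta: int = 6) -> np.ndarray:
    """expr: genes x samples. Signed network: a_ij = ((1 + cor_ij)/2)^beta."""
    cor = np.corrcoef(expr)                # gene-gene correlation matrix
    return ((1.0 + cor) / 2.0) ** beta     # soft threshold with power beta

def connectivity(adj: np.ndarray) -> np.ndarray:
    """Connectivity of a gene = sum of connection strengths to all others."""
    return adj.sum(axis=1) - np.diag(adj)  # exclude the self-connection

def module_eigengene(module_expr: np.ndarray) -> np.ndarray:
    """First principal component of the standardized module expression,
    i.e. one eigengene value per sample."""
    z = (module_expr - module_expr.mean(axis=1, keepdims=True)) \
        / module_expr.std(axis=1, keepdims=True)
    _, _, vt = np.linalg.svd(z, full_matrices=False)
    return vt[0]
```

Module detection itself (hierarchical clustering of a topological-overlap dissimilarity, refined by dynamic tree cut) then operates on the adjacency matrix produced by the first function.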
## Functional Characterization of a Module

To investigate known gene–gene interactions, we used the INMEX web server (29). All genes within a specific module were queried, and a minimum network connecting all genes within the module was obtained. The hub gene candidates from this analysis were defined by their degree of interactions. Gene ontology terms, enriched KEGG pathways, and transcription factor binding sites for each module were obtained using the DAVID web server. Thereafter, all mapped and reported genes for the disease-associated loci were selected from the GWAS catalog. The selected genes and the modular genes were connected to each other based on known gene–gene interactions (INMEX web server). Only direct interactions between the modular genes and the GWAS genes were considered. Gene–gene interactions were visualized using Cytoscape software, and figures were generated using the R programming language. Intermediate gene conversions and data formatting were done using the Perl programming language (30).

## RESULTS

Data Selection and Normalization

Microarray data were obtained for peripheral CD4+ T-cell samples from 19 pemphigus patients (4 PV; 15 PF), 13 SLE patients, and 14 healthy controls from NCBI GEO and EBI ArrayExpress (GSE53873; GDS4185). Altogether, our dataset included 46 samples derived from Codelink and Affymetrix arrays. Only datasets comprising raw files were included in the downstream meta-analysis; therefore, we excluded samples GSE4588 and GSM260948 from our analysis. To implement the co-expression network analysis, we standardized and batch-normalized the datasets. We collected the common probes across the two chip arrays. The CodeLink Human Whole Genome Bioarray from GE Healthcare consisted of 54,359 probes, while the Affymetrix Human Genome U133A array consisted of 22,283 probes. We converted these probes to Ensembl gene identifiers using Ensembl BioMart and found that 12,980 genes were common between the two platforms. Consequently, the datasets were merged based on the expression of the common genes, and the "combat" and "sva" functions (SVA R package) were applied to remove the batch effect. Our results show that, while the Affymetrix samples were distributed uniformly among the principal components, the data generated from the CodeLink array still clustered together (Figures 1A,B), suggesting that the dataset was not properly normalized and required further optimization. To further optimize the datasets, we used the "normexp" method for background correction and "cyclicloess" on log2-transformed values. Additionally, each dataset was separately filtered for low-expressing/varying probes, and multiple probes were collapsed for each gene. Briefly, 18,038 probes representing 12,980 genes were identified in the CodeLink dataset. These probes were filtered for low variation and collapsed to generate 5,646 gene expression profiles. Similarly, the Affymetrix gene chip consisted of 20,366 probes representing 12,980 genes. These probes were filtered and collapsed, resulting in 6,073 gene expression profiles. Overall, the overlap between the two datasets consisted of 3,280 gene expression profiles, which were used in the downstream analysis. After applying the batch-effect normalization ("combat") algorithm, we observed that the samples were distributed along the first principal component, with only 8.3% of variation explained by the first component (Figures 1C,D).
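As an illustration of this check, the following is a minimal Python sketch (not the original R/"prcomp" code) of the PCA-based batch diagnostic: after successful correction, the platform centroids should roughly coincide on the leading components, and no single component should be dominated by platform membership:

```python
# Minimal sketch of the PCA check used above to judge whether batch
# correction worked. `expr` is samples x genes (log2, normalized);
# `batch` labels each sample's platform. Illustrative only.
import numpy as np
from sklearn.decomposition import PCA

def pca_batch_check(expr: np.ndarray, batch: np.ndarray) -> np.ndarray:
    pca = PCA(n_components=2)
    scores = pca.fit_transform(expr)       # PCA centers the data internally
    for b in np.unique(batch):
        centroid = scores[batch == b].mean(axis=0)
        print(f"batch {b}: centroid on PC1/PC2 = {centroid}")
    print("explained variance ratio:", pca.explained_variance_ratio_)
    return scores
```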
We also analyzed confounding effects by stratifying the dataset for different covariates. We found no significant differences for the covariates generalized vs. localized disease (P = 0.402) and prednisone-treated vs. untreated (P = 0.596) for the pemphigus samples. No covariate information was available for the SLE samples (Figure S1 in Supplementary Material).

## Detection of Co-Expression Modules Related to Pemphigus and SLE

Next, we set out to identify system-level similarity between pemphigus and SLE. Therefore, we applied WGCNA, aiming to identify gene modules that are co-expressed between pemphigus and SLE samples and that are likely to be involved in common pathways. The major advantage of using such an approach is that it alleviates the multiple-testing problem that is inherent to microarray datasets. Using WGCNA, we identified 14 modules of co-expressed genes for 3,280 highly expressed and varying gene expression profiles, represented by different color codes (Figure 2; Figure S1, Data Sheet 1 in Supplementary Material). Two out of 14 modules showed differences between control and disease samples. The module "magenta" was significantly upregulated for both PF (P = 0.005) and SLE (P = 0.016) in comparison to healthy controls, and the module "salmon" was specifically upregulated only in PV (P = 0.034) (Figure 2).

**Figure 2 | Boxplots of eigengene values across modules.** Boxplots depicting the different identified modules on the X-axis and the corresponding module eigengene values for each group of samples on the Y-axis. The significance among the groups was calculated using the Kruskal–Wallis test. *P < 0.05; **P < 0.01. PF, pemphigus foliaceus; PV, pemphigus vulgaris; SLE, systemic lupus erythematosus.
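Figure 2 summarizes exactly this comparison. A minimal sketch of the underlying per-module test, with SciPy standing in for the R workflow actually used and with illustrative group labels, is:

```python
# Minimal sketch of the module-significance step described above: compare a
# module's eigengene values across the four sample groups with a
# Kruskal-Wallis test, one test per module.
from scipy.stats import kruskal

def module_group_test(eigengene, groups):
    """eigengene: per-sample eigengene values for one module;
    groups: same-length labels, e.g. 'PV', 'PF', 'SLE', 'control'."""
    by_group = {}
    for value, label in zip(eigengene, groups):
        by_group.setdefault(label, []).append(value)
    stat, p = kruskal(*by_group.values())
    return stat, p   # modules with p < 0.05 were taken as disease-associated
```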
## Biological Pathways in the PF- and SLE-Associated Module "Magenta"

Module "magenta" consisted of 74 genes and, compared with controls, was significantly upregulated in PF and SLE. To investigate known mechanisms associated with this module, we performed gene ontology analysis using the DAVID database (31). We found that this module was, among others, enriched in biological processes such as "type I interferon signaling pathway" (P.adj = 6.4E−11), "defense response to virus" (P.adj = 2.7E−10), and "cytokine-mediated signaling pathway" (P.adj = 1.3E−7) (Table 1). This module was also enriched in KEGG pathways, including "measles" (P.adj = 2.3E−4), "influenza A" (P.adj = 2.7E−4), and "herpes simplex infection" (P.adj = 1.3E−3).

**Table 1 | Gene ontology and enriched KEGG pathways for the "magenta" and "salmon" modules.**

| Module | Category | Term | P-value | Benjamini |
|---|---|---|---|---|
| Magenta | UP_KEYWORDS | Antiviral defense | 1.18273E−16 | 1.84297E−14 |
| Magenta | UP_KEYWORDS | Immunity | 1.22704E−13 | 1.01824E−11 |
| Magenta | GOTERM_BP_DIRECT | GO:0060337~type-I interferon signaling pathway | 9.37804E−14 | 6.3981E−11 |
| Magenta | UP_KEYWORDS | Innate immunity | 3.82091E−12 | 2.11426E−10 |
| Magenta | GOTERM_BP_DIRECT | GO:0051607~defense response to virus | 7.83394E−13 | 2.6713E−10 |
| Magenta | GOTERM_BP_DIRECT | GO:0045071~negative regulation of viral genome replication | 1.21675E−10 | 2.76607E−08 |
| Magenta | GOTERM_BP_DIRECT | GO:0009615~response to virus | 2.90413E−10 | 4.95154E−08 |
| Magenta | GOTERM_BP_DIRECT | GO:0019221~cytokine-mediated signaling pathway | 9.178E−10 | 1.25188E−07 |
| Magenta | KEGG_PATHWAY | hsa05162: Measles | 4.89228E−06 | 0.00022502 |
| Magenta | KEGG_PATHWAY | hsa05164: Influenza A | 2.9062E−06 | 0.000267335 |
| Magenta | KEGG_PATHWAY | hsa05168: Herpes simplex infection | 4.20496E−05 | 0.001288717 |
| Magenta | GOTERM_MF_DIRECT | GO:0003725~double-stranded RNA binding | 6.2164E−05 | 0.009466294 |
| Magenta | GOTERM_BP_DIRECT | GO:0060333~interferon-gamma-mediated signaling pathway | 0.000216281 | 0.024286767 |
| Salmon | GOTERM_BP_DIRECT | GO:0030041~actin filament polymerization | 0.000889415 | 0.183621158 |
| Salmon | GOTERM_BP_DIRECT | GO:0007596~blood coagulation | 0.001317229 | 0.139518478 |
| Salmon | KEGG_PATHWAY | hsa04611: Platelet activation | 0.003518509 | 0.179124611 |

On the basis of statistical module membership and eigengene value, we identified the S-adenosyl methionine domain containing 2 (RSAD2) gene as the most highly ranked hub gene for this module. To identify subnetworks and statistical interactions within the modules, we used the "c3net" algorithm, which investigates direct physical interactions in gene expression data, thereby providing putative mechanisms within a module and characterizing its key regulating genes (9). We found the 2'-5'-oligoadenylate synthetase 1 (OAS1), MX dynamin-like GTPase 1 (MX1), interferon-induced protein with tetratricopeptide repeats 3 (IFIT3), and spermatogenesis-associated serine-rich 2 like (SPATS2L) genes to be master regulator genes of the module (degree ≥ 5) (Figure 3). Moreover, to further explore known gene–gene interactions among the genes in the "magenta" module, we used the INMEX web server (32). We were specifically interested in examining "minimum interaction networks," in which a minimum number of genes is required to connect all the nodes of a given gene set. Using this approach, we derived additional regulators such as junction plakoglobin (JUP), B-cell CLL/lymphoma 2 (BCL2), ISG15 ubiquitin-like modifier (ISG15), STAT1, S-phase kinase-associated protein 2 (SKP2), and eukaryotic translation initiation factor 2 alpha kinase 2 (EIF2AK2) (Figure S2 in Supplementary Material).

**Figure 3 | Gene–gene interaction network for the "magenta" module.** De novo network generated by the C3NET algorithm for the "magenta" module. The figure shows statistically significant (α < 0.05) edges predicted by the algorithm. Fully colored nodes represent the "magenta" module-associated genes. Empty nodes represent the regulatory genes (degree ≥ 5).

## Biological Pathways in the PV-Associated Module "Salmon"

Although the sample size for PV was small (n = 4), we identified a distinct module that, compared with controls, was significantly upregulated in PV, namely the "salmon" module (P = 0.034). The "salmon" module comprises 39 genes (Table 1) and was enriched in the biological process "blood coagulation" (P.adj = 1.4E−1) and the KEGG pathway "platelet activation" (P.adj = 1.8E−1). Using statistical module eigengenes, we identified platelet glycoprotein IX (GP9) as a hub gene of this module. Additionally, using the "c3net" algorithm, we identified the pro-platelet basic protein (PPBP), G protein subunit gamma 11 (GNG11), and thrombospondin 1 (THBS1) genes as key regulators of the "salmon" module (degree ≥ 4) (Figure 4). In addition, using the INMEX server, we identified protein kinase cAMP-dependent type-II regulatory subunit beta (PRKAR2B), Src homology 2 domain-containing-transforming protein 3 (SHC3), tensin 1 (TNS1), PPBP, and GNG11 as regulatory genes (Figure S3 in Supplementary Material).

**Figure 4 | Gene–gene interaction network for the "salmon" module.** De novo network generated by the C3NET algorithm for the "salmon" module. The figure shows statistically significant (α < 0.05) edges predicted by the algorithm. Fully colored nodes represent the "salmon" module-associated genes. Empty nodes represent the regulatory genes (degree ≥ 4).
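The C3NET step used above for both modules admits a compact approximation. The sketch below (Python, with scikit-learn's kNN mutual-information estimator standing in for C3NET's, and omitting C3NET's significance testing of MI values) keeps, for each gene, only its single highest-MI partner, which is the core of the algorithm:

```python
# Rough sketch of the C3NET idea: estimate mutual information (MI) between
# gene pairs and keep, for each gene, only its highest-MI partner. The real
# C3NET R package also tests MI significance; that step is omitted here.
import numpy as np
from sklearn.feature_selection import mutual_info_regression

def c3net_like_edges(expr: np.ndarray) -> dict:
    """expr: genes x samples matrix; returns {(gene_i, partner_j): mi}."""
    n_genes = expr.shape[0]
    mi = np.zeros((n_genes, n_genes))
    for j in range(n_genes):
        # MI of every gene's profile with gene j's profile
        mi[:, j] = mutual_info_regression(expr.T, expr[j])
    np.fill_diagonal(mi, 0.0)              # discard trivial self-information
    partners = mi.argmax(axis=1)           # each gene's max-MI neighbour
    return {(i, int(partners[i])): float(mi[i, partners[i]])
            for i in range(n_genes)}
```

Genes that appear as the chosen partner of many other genes acquire a high degree in the resulting network, which is how the master regulators (degree ≥ 5 for "magenta", degree ≥ 4 for "salmon") were defined above.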
Interestingly, both the PPBP and GNG11 genes coincided with the list of the aforementioned C3NET-derived key regulatory genes.

## Cross-Linking SLE and Pemphigus GWA Studies with Clusters of Co-Expressed Genes in the "Magenta" and the "Salmon" Modules

While multiple GWA studies have been undertaken in a continuous effort to identify SLE susceptibility genes, only one GWA study has previously been conducted in pemphigus, namely in PV (33, 34). In contrast to GWA studies, which normally investigate the causal genes for a disease phenotype, gene expression profiles indicate the downstream effector phase. In the present work, we investigated direct interactions between susceptibility genes previously reported in SLE and pemphigus GWA studies and the genes comprising the "magenta" and "salmon" modules identified herein. We found the SLE susceptibility gene interferon regulatory factor 8 (IRF8) to have the largest number of direct interactions with "magenta" module-associated genes (Figure 5). The IRF8 gene interacted with the genes encoding interferon-induced protein with tetratricopeptide repeats 1 (IFIT1), interferon-induced guanylate-binding protein 1 (GBP1), 2'-5'-oligoadenylate synthetase 2 (OAS2), 2'-5'-oligoadenylate synthetase-like (OASL), and signal transducer and activator of transcription 1 (STAT1). The IRF5 and STAT1 SLE GWAS genes both directly interacted with IRF8 and with four other "magenta" module-associated genes, such as interferon induced with helicase C domain 1 (IFIH1), IFIT1, GBP1, OASL, OAS2, and EIF2AK2 (Figure 5). A polymorphism in the gene ST18 has previously been found in a PV GWA study; however, we could not identify direct interactions between ST18 and genes associated with the "salmon" module. To further establish a putative association of ST18 with other genes in the "salmon" module, we performed a transcription factor binding site enrichment analysis (39 "salmon" genes and the ST18 gene). We observed that 34 out of the 40 analyzed genes are regulated by the nuclear hormone receptor peroxisome proliferator-activated receptor γ (PPAR-γ; P.adj = 8.3E−3) and 25 out of 40 genes are regulated by the growth factor independent 1 transcriptional repressor (GFI1; P.adj = 8.3E−3).

**Figure 5 | Interactions among genome-wide-associated genes and module-derived genes.** Direct curated gene–gene interactions between modular genes and genes identified from SLE GWAS. Hub genes are represented by empty blue nodes. Common genes between the SLE GWAS and the "magenta" module are denoted in blue nodes with red contour. SLE, systemic lupus erythematosus; GWAS, genome-wide association study.
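Conceptually, this cross-linking step reduces to counting direct curated interactions between GWAS candidates and modular genes and ranking the candidates by that count. A toy sketch, using an illustrative edge list rather than the INMEX-curated interactions actually used here:

```python
# Toy sketch of the cross-linking step above: rank GWAS candidates by their
# number of direct interactions with module genes (IRF8 ranked first for
# the "magenta" module in this study). Inputs are illustrative placeholders.
from collections import Counter

def rank_gwas_genes(edges, gwas_genes, module_genes):
    """edges: iterable of (gene_a, gene_b) known interactions."""
    counts = Counter()
    module = set(module_genes)
    for a, b in edges:
        if a in gwas_genes and b in module:
            counts[a] += 1
        if b in gwas_genes and a in module:
            counts[b] += 1
    return counts.most_common()

# Example with toy edges:
# rank_gwas_genes([("IRF8", "STAT1"), ("IRF8", "OASL")],
#                 {"IRF8", "IRF5", "STAT1"}, {"STAT1", "OASL", "IFIT1"})
```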
## DISCUSSION

The pathogenesis of most autoimmune disorders is still largely unknown. Environmental triggers in genetically susceptible individuals, as well as molecular mimicry mechanisms, may only partially account for this phenomenon (35). The co-occurrence of autoimmune diseases has previously been documented and has aided our understanding of autoimmunity (36). Pemphigus and SLE are well-characterized autoimmune diseases that have been reported to coexist in the same patient (37). Even though each of these two autoimmune diseases affects distinct organs and systems, their comorbidity suggests the existence of fundamental common pathophysiological mechanisms. As we were interested in system-level similarity between the diseases rather than in characterizing individual gene signatures, we used WGCNA to study pemphigus and SLE. Using this analytical approach, we identified modules across microarray datasets obtained from CD4+ T cells of pemphigus and SLE patients.

In this study, we further demonstrate that gene expression data processed by two different batch correction algorithms can remain biased and lead to false-positive estimations. Therefore, to standardize and remove batch effects from both datasets, we used the "normexp," "cyclicloess," and "combat" algorithms. Using this strategy, we could compensate for the potential bias introduced by obtaining two distinct microarray datasets (Figure 1).

Our network analysis revealed two co-expression modules (denoted "magenta" and "salmon") that were significantly associated with PF and SLE, or with PV only, respectively (Figure 2). Identification of the "magenta" module suggests common underlying mechanisms for pemphigus and SLE and identifies key regulatory genes for both diseases in CD4+ T cells. In terms of functional relevance, based on DAVID and KEGG ontology analyses, the "magenta" module is enriched in genes corresponding to type-I interferon (IFN) signaling and viral infection, including herpes simplex, measles, and influenza viruses. Although type-I interferons were initially described and named for their ability to "interfere" with viral replication, their role as immune modulators of both innate and adaptive immunity is now widely established (38). Moreover, a role for viruses in the induction of autoimmune diseases, through several potential mechanisms such as epitope spreading, molecular mimicry, cryptic antigens, and bystander activation, has also been reported previously (39). The role of viral infection in the etiopathogenesis of SLE, the so-called "viral hypothesis," has been extensively studied (40–42). SLE patients may present severe systemic viral infections, primarily associated with Epstein–Barr virus (EBV), cytomegalovirus, and herpes simplex virus (HSV). With respect to pemphigus, in 1974, Krain et al. first reported the association between HSV and PV (43); meanwhile, several additional case reports examining this association have been published (44–46). A more recent study by Kurata and colleagues demonstrated high levels of HSV DNA in the saliva of PV patients at the earliest stage of the disease without a history of herpetic infection, suggesting the existence of cases of pemphigus induced by herpesviruses (47).

In our work, on the basis of statistical module membership and its eigengene value, we identified the RSAD2 gene as the hub gene of the "magenta" module. Notably, by examining the expression levels of the RSAD2 gene in our datasets, we could demonstrate its significant upregulation in PF (P = 0.005) and SLE (P = 0.007) in comparison to healthy controls (Figure S4A in Supplementary Material). The RSAD2 gene encodes the interferon-inducible protein viperin, which inhibits viral replication and facilitates T-cell receptor-mediated GATA3 activation and optimal Th2 cytokine production through modulation of NFKB1 and JUNB activities. As a result, viperin-deficient mice show impaired Th2 cell development (48). Interestingly, transcripts for RSAD2 were found to be upregulated in SLE CD3+CD4+ cells, as well as in SLE CD19+ B cells and SLE CD33+ myeloid cells, in comparison to similar cellular subsets isolated from healthy controls (49). Although it has previously been demonstrated that Th2 cells exert broad activity in blister formation in pemphigus, the association of RSAD2 with pemphigus was unknown.

To examine the relevance of the Th2 response in pemphigus and SLE, a set of 44 genes associated with Th2 differentiation was downloaded from the PathCards Pathway Unification Database of the Weizmann Institute of Science and examined for fold-change expression in our disease datasets (PV, PF, and SLE) in comparison to healthy controls (Figure S4B in Supplementary Material). Our findings confirm that the fold-change expression of Th2-associated genes was positively correlated between SLE and PF (P = 0.01, ρ = 0.36) and between SLE and PV (P = 1.087E−05, ρ = 0.62), suggesting that the Th2 response is skewed in a similar pattern between SLE and pemphigus.
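A minimal sketch of this fold-change concordance calculation (SciPy, with illustrative inputs rather than the actual Th2 gene-set values) is:

```python
# Minimal sketch of the Th2 fold-change comparison described above:
# correlate per-gene fold changes (disease vs. control) between two
# diseases over a fixed gene set with Spearman's rho.
from scipy.stats import spearmanr

def fold_change_concordance(fc_a: dict, fc_b: dict):
    """fc_a, fc_b: illustrative dicts mapping gene -> log2 fold change."""
    shared = sorted(set(fc_a) & set(fc_b))   # e.g., the 44 Th2 genes
    rho, p = spearmanr([fc_a[g] for g in shared],
                       [fc_b[g] for g in shared])
    return rho, p
```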
To examine the relevance of the Th2 response in pemphigus and SLE, a set of 44 genes associated with Th2 differentiation was downloaded from the PathCards Pathway Unification Database of the Weizmann Institute of Science and examined for fold-change expression in our disease datasets (PV, PF, and SLE) in comparison to healthy controls (Figure S4B in Supplementary Material). Our findings confirm that the fold-change expression of Th2-associated genes was positively correlated between SLE and PF (P = 0.01, ρ = 0.36) and between SLE and PV (P = 1.087E−05, ρ = 0.62), suggesting that the Th2 response is skewed in a similar pattern in SLE and pemphigus. While investigating subnetworks within the "magenta" module (using the "c3net" algorithm), we identified the OAS1, MX1, IFIT3, and SPATS2L genes as master regulators (Figure 3). Additional regulatory genes, such as JUP, BCL2, ISG15, STAT1, SKP2, and EIF2AK2, were identified using a database of known gene–gene interactions (INMEX) (Figure S2 in Supplementary Material). Transcripts of 7 of the 11 identified genes (i.e., RSAD2, OAS1, MX1, IFIT3, ISG15, STAT1, and EIF2AK2) were previously shown to be upregulated in SLE CD3+CD4+ cells (49). Consistent with a previous study that examined possible shared signaling pathways in the pathogenesis of several systemic autoimmune diseases (SAID), such as dermatomyositis, polymyositis, rheumatoid arthritis, and SLE, a subset of five viral-related differentially expressed genes (i.e., RSAD2, IFIT3, ISG15, STAT1, and EIF2AK2) was detected in the peripheral blood of SAID probands and their unaffected twins (50). Additionally, other genes identified in our study, including BCL2, OAS1, MX1, and SKP2, have previously been associated with various autoimmune diseases (51–54). Therefore, our findings further suggest that these common IFN signature genes are shared across multiple autoimmune diseases, including pemphigus and SLE. Here, we also identified a PV-specific associated module. The "salmon" module consisted of 39 genes and was enriched in genes involved in blood coagulation and platelet activation. Based on the eigengene value, the gene GP9 was identified as the hub gene of the "salmon" module. GP9 encodes a small membrane glycoprotein that is part of the GPIb-V-IX complex, which mediates platelet adhesion to blood vessels and promotes hemostasis. Accordingly, mutations in the GP9 protein lead to a coagulation disorder, known as Bernard–Soulier syndrome, characterized by thrombocytopenia. Of note, although this is the first report suggesting a role for GP9 in PV, a previous study by Hunziker et al. identified platelet-derived factors that enhance pemphigus acantholysis in skin organ cultures (55). Moreover, another study, by Mizutani et al., found increased expression of coagulation factors on keratinocytes surrounding blisters in PV (56). In line with this observation, using the "c3net" algorithm, we identified an additional list of platelet-associated genes, i.e., PPBP, GNG11, and THBS1, as key regulators of the "salmon" module (Figure 4). Furthermore, by examining known gene–gene interactions, we could identify PPBP and GNG11, as well as another group of platelet-function-associated genes, such as PRKAR2B, SHC3, and TNS1 (Figure S3 in Supplementary Material), as additional regulators of this module. Further in our analysis, we associated the genes found in the "magenta" and "salmon" modules with known susceptibility markers of PV and SLE that had previously been identified by GWASs.
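As a minimal sketch of the fold-change correlation analysis described above (Spearman ρ between diseases across the 44 Th2-associated genes), the code below correlates two hypothetical log2 fold-change vectors. The vectors are simulated stand-ins, not the study's values; only the correlation step mirrors the described analysis.

```python
# Simulated stand-ins for per-gene log2 fold changes (disease vs. healthy
# controls) across 44 Th2-associated genes. The shared component is an
# assumption used to generate correlated hypothetical data.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
shared = rng.normal(size=44)                        # assumed shared signal
fc_sle = shared + rng.normal(scale=1.0, size=44)    # SLE vs. HC (hypothetical)
fc_pf = shared + rng.normal(scale=1.0, size=44)     # PF vs. HC (hypothetical)

rho, p = spearmanr(fc_sle, fc_pf)
print(f"rho = {rho:.2f}, P = {p:.3g}")
```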
GWASs are applied to identify genetic variants associated with a disease trait. However, identification of the loci harboring the susceptibility genes does not fully reveal the molecular mechanisms at play that yield the observed phenotype. Therefore, linking these susceptibility genes with the module-associated genes may identify pathways that control the disease phenotype and provide potential therapeutic targets for intervention. By cross-linking susceptibility genes derived from SLE GWAS with the cluster of co-expressed genes in the "magenta" module, we found IRF8 to interact directly with the largest number of interferon-induced genes present in the "magenta" module, including IFIT1, GBP1, OAS2, OASL, and STAT1 (Figure 5). Interestingly, STAT1 was identified both as an SLE susceptibility gene and as a key regulatory gene of the "magenta" module. Therefore, based on our analysis, we predict IRF8 to have pharmacological relevance, as previously described (57). With regard to PV, we did not identify direct interactions between the known GWAS gene, ST18, and the 39 "salmon" module-associated genes. To address this, we additionally performed a transcription factor binding site enrichment analysis for the 40 genes. We found that the majority of the genes are regulated by the transcription factors PPAR-γ and GFI1, which have previously been described for their roles in Th2 cell development (58, 59). Moreover, PPAR-γ has been suggested as a pharmacological target for PV (60). Altogether, our work reveals conserved molecular mechanisms and pathways between pemphigus and SLE and identifies novel gene candidates that could be used as biomarkers or as potential targets for therapeutic intervention.

## AUTHOR CONTRIBUTIONS

TS, AV, YG, and RL designed the study, interpreted the data, and wrote the manuscript. All authors contributed equally to this work. YG downloaded and analyzed the data. CS and DZ discussed the results and contributed to the writing of the manuscript.

## ACKNOWLEDGMENTS

We thank Prof. SM Ibrahim (Lübecker Institut für Experimentelle Dermatologie (LIED), Lübeck, Germany) for critical discussion and assistance in the preparation of the manuscript.

## FUNDING

This study was supported by the Deutsche Forschungsgemeinschaft through the training programs "Modulation of Autoimmunity" (grant number GRK 1727/1) and "Genes, Environment and Inflammation" (grant number GRK 1743/1).

## SUPPLEMENTARY MATERIAL

The Supplementary Material for this article can be found online at http://www.frontiersin.org/articles/10.3389/fimmu.2017.01992/full#supplementary-material.

## REFERENCES

1. Hammers CM, Stanley JR. Mechanisms of disease: pemphigus and bullous pemphigoid. Annu Rev Pathol (2016) 11:175–97. doi:10.1146/annurev-pathol-012615-044313
2. Malik M, Ahmed AR. Concurrence of systemic lupus erythematosus and pemphigus: coincidence or correlation? Dermatol Basel Switz (2007) 214:231–9. doi:10.1159/000099588
3. Calebotta A, Cirocco A, Giansante E, Reyes O. Systemic lupus erythematosus and pemphigus vulgaris: association or coincidence. Lupus (2004) 13:951–3. doi:10.1191/0961203304lu1073cr
4. Rahman A, Isenberg DA. Systemic lupus erythematosus. N Engl J Med (2008) 358:929–39. doi:10.1056/NEJMra071297
5. Prüßmann J, Prüßmann W, Recke A, Rentzsch K, Juhl D, Henschler R, et al. Co-occurrence of autoantibodies in healthy blood donors. Exp Dermatol (2014) 23:519–21. doi:10.1111/exd.12445
6. Holtman IR, Raj DD, Miller JA, Schaafsma W, Yin Z, Brouwer N, et al. Induction of a common microglia gene expression signature by aging and neurodegenerative conditions: a co-expression meta-analysis. Acta Neuropathol Commun (2015) 3:31. doi:10.1186/s40478-015-0203-5
7. Granlund Av, Flatberg A, Østvik AE, Drozdov I, Gustafsson BI, Kidd M, et al. Whole genome gene expression meta-analysis of inflammatory bowel disease colon mucosa demonstrates lack of major differences between Crohn's disease and ulcerative colitis. PLoS One (2013) 8(2):e56818. doi:10.1371/journal.pone.0056818
8. Cárdenas-Roldán J, Rojas-Villarraga A, Anaya J-M. How do autoimmune diseases cluster in families? A systematic review and meta-analysis. BMC Med (2013) 11:73. doi:10.1186/1741-7015-11-73
9. Troy NM, Hollams EM, Holt PG, Bosco A. Differential gene network analysis for the identification of asthma-associated therapeutic targets in allergen-specific T-helper memory responses. BMC Med Genomics (2016) 9:9. doi:10.1186/s12920-016-0171-z
10. Zhao H, Cai W, Su S, Zhi D, Lu J, Liu S. Screening genes crucial for pediatric pilocytic astrocytoma using weighted gene coexpression network analysis combined with methylation data analysis. Cancer Gene Ther (2014) 21:448–55. doi:10.1038/cgt.2014.49
11. Ring KL, An MC, Zhang N, O'Brien RN, Ramos EM, Gao F, et al. Genomic analysis reveals disruption of striatal neuronal development and therapeutic targets in human Huntington's disease neural stem cells. Stem Cell Reports (2015) 5:1023–38. doi:10.1016/j.stemcr.2015.11.005
12. Nishifuji K, Amagai M, Kuwana M, Iwasaki T, Nishikawa T. Detection of antigen-specific B cells in patients with pemphigus vulgaris by enzyme-linked immunospot assay: requirement of T cell collaboration for autoantibody production. J Invest Dermatol (2000) 114:88–94. doi:10.1046/j.1523-1747.2000.00840.x
13. Takahashi H, Kouno M, Nagao K, Wada N, Hata T, Nishimoto S, et al. Desmoglein 3-specific CD4+ T cells induce pemphigus vulgaris and interface dermatitis in mice. J Clin Invest (2011) 121:3677–88. doi:10.1172/JCI57379
14. Mak A, Kow NY. The pathology of T cells in systemic lupus erythematosus. J Immunol Res (2014) 2014:e419029. doi:10.1155/2014/419029
15. Konya C, Paz Z, Tsokos GC. The role of T cells in systemic lupus erythematosus: an update. Curr Opin Rheumatol (2014) 26:493–501. doi:10.1097/BOR.0000000000000082
16. Barrett T, Wilhite SE, Ledoux P, Evangelista C, Kim IF, Tomashevsky M, et al. NCBI GEO: archive for functional genomics data sets – update. Nucleic Acids Res (2013) 41:D991–5. doi:10.1093/nar/gks1193
17. Jeffries MA, Dozmorov M, Tang Y, Merrill JT, Wren JD, Sawalha AH. Genome-wide DNA methylation patterns in CD4+ T cells from patients with systemic lupus erythematosus. Epigenetics (2011) 6:593–601. doi:10.4161/epi.6.5.15374
18. Malheiros D, Panepucci RA, Roselino AM, Araújo AG, Zago MA, Petzl-Erler ML. Genome-wide gene expression profiling reveals unsuspected molecular alterations in pemphigus foliaceus. Immunology (2014) 143:381–95. doi:10.1111/imm.12315
19. Leek JT, Johnson WE, Parker HS, Jaffe AE, Storey JD. The sva package for removing batch effects and other unwanted variation in high-throughput experiments. Bioinformatics (2012) 28:882–3. doi:10.1093/bioinformatics/bts034
20. Diez D, Alvarez R, Dopazo A. Codelink: an R package for analysis of GE healthcare gene expression bioarrays. Bioinforma Oxf Engl (2007) 23:1168–9. doi:10.1093/bioinformatics/btm072
21. Gautier L, Cope L, Bolstad BM, Irizarry RA. affy – analysis of Affymetrix GeneChip data at the probe level. Bioinforma Oxf Engl (2004) 20:307–15. doi:10.1093/bioinformatics/btg405
22. Ritchie ME, Phipson B, Wu D, Hu Y, Law CW, Shi W, et al. limma powers differential expression analyses for RNA-sequencing and microarray studies. Nucleic Acids Res (2015) 43(7):e47. doi:10.1093/nar/gkv007
23. Liu BH, Yu H, Tu K, Li C, Li YX, Li YY. DCGL: an R package for identifying differentially coexpressed genes and links from gene expression microarray data. Bioinformatics (2010) 26:2637–8. doi:10.1093/bioinformatics/btq471
24. Miller JA, Cai C, Langfelder P, Geschwind DH, Kurian SM, Salomon DR, et al. Strategies for aggregating gene expression data: the collapseRows R function. BMC Bioinformatics (2011) 12:322. doi:10.1186/1471-2105-12-322
25. Johnson WE, Li C, Rabinovic A. Adjusting batch effects in microarray expression data using empirical Bayes methods. Biostat Oxf Engl (2007) 8:118–27. doi:10.1093/biostatistics/kxj037
26. Dixon P. VEGAN, a package of R functions for community ecology. J Veg Sci (2003) 14:927–30. doi:10.1111/j.1654-1103.2003.tb02228.x
27. Langfelder P, Horvath S. Eigengene networks for studying the relationships between co-expression modules. BMC Syst Biol (2007) 1:54. doi:10.1186/1752-0509-1-54
28. Altay G, Emmert-Streib F. Inferring the conservative causal core of gene regulatory networks. BMC Syst Biol (2010) 4:132. doi:10.1186/1752-0509-4-132
29. Xia J, Fjell CD, Mayer ML, Pena OM, Wishart DS, Hancock REW. INMEX – a web-based tool for integrative meta-analysis of expression data. Nucleic Acids Res (2013) 41:W63–70. doi:10.1093/nar/gkt338
30. Thornton-Wells TA, Johnson KB. Perl programming for biologists. J Am Med Inform Assoc (2004) 11:173. doi:10.1197/jamia.M1457
31. Dennis G Jr, Sherman BT, Hosack DA, Yang J, Gao W, Lane HC, et al. DAVID: database for annotation, visualization, and integrated discovery. Genome Biol (2003) 4:3. doi:10.1186/gb-2003-4-5-p3
32. Xia J, Benner MJ, Hancock REW. NetworkAnalyst – integrative approaches for protein-protein interaction network analysis and visual exploration. Nucleic Acids Res (2014) 42:W167–74. doi:10.1093/nar/gku443
33. Sarig O, Bercovici S, Zoller L, Goldberg I, Indelman M, Nahum S, et al. Population-specific association between a polymorphic variant in ST18, encoding a pro-apoptotic molecule, and pemphigus vulgaris. J Invest Dermatol (2012) 132:1798–805. doi:10.1038/jid.2012.46
34. Vodo D, Sarig O, Geller S, Ben-Asher E, Olender T, Bochner R, et al. Identification of a functional risk variant for pemphigus vulgaris in the ST18 gene. PLoS Genet (2016) 12:e1006008. doi:10.1371/journal.pgen.1006008
35. Manolio TA, Collins FS, Cox NJ, Goldstein DB, Hindorff LA, Hunter DJ, et al. Finding the missing heritability of complex diseases. Nature (2009) 461:747–53. doi:10.1038/nature08494
36. Cojocaru M, Cojocaru IM, Silosi I. Multiple autoimmune syndrome. Maedica (Buchar) (2010) 5:132–4.
37. Sawamura S, Kajihara I, Makino K, Makino T, Fukushima S, Jinnin M, et al. Systemic lupus erythematosus associated with myasthenia gravis, pemphigus foliaceus and chronic thyroiditis after thymectomy. Australas J Dermatol (2016) 58(3):e120–2. doi:10.1111/ajd.12510
38. Theofilopoulos AN, Baccala R, Beutler B, Kono DH. Type I interferons (alpha/beta) in immunity and autoimmunity. Annu Rev Immunol (2005) 23:307–36. doi:10.1146/annurev.immunol.23.021704.115843
39. Olson JK, Croxford JL, Miller SD. Virus-induced autoimmunity: potential role of viruses in initiation, perpetuation, and progression of T-cell-mediated autoimmune disease. Viral Immunol (2001) 14:227–50. doi:10.1089/088282401753266756
40. Phillips PE. The virus hypothesis in systemic lupus erythematosus. Ann Intern Med (1975) 83:709–15. doi:10.7326/0003-4819-83-5-709
41. Denman AM. Systemic lupus erythematosus – is a viral aetiology a credible hypothesis? J Infect (2000) 40:229–33. doi:10.1053/jinf.2000.0670
42. Ramos-Casals M. Viruses and lupus: the viral hypothesis. Lupus (2008) 17:163–5. doi:10.1177/0961203307086268
43. Krain LS. Pemphigus. Epidemiologic and survival characteristics of 59 patients, 1955-1973. Arch Dermatol (1974) 110:862–5. doi:10.1001/archderm.1974.01630120012002
44. Marzano AV, Tourlaki A, Merlo V, Spinelli D, Venegoni L, Crosti C. Herpes simplex virus infection and pemphigus. Int J Immunopathol Pharmacol (2009) 22:781–6. doi:10.1177/039463200902200324
45. Senger P, Sinha AA. Exploring the link between herpes viruses and pemphigus vulgaris: literature review and commentary. Eur J Dermatol (2012) 22:728–35. doi:10.1684/ejd.2012.1836
46. Takahashi I, Kobayashi TK, Suzuki H, Nakamura S, Tezuka F. Coexistence of pemphigus vulgaris and herpes simplex virus infection in oral mucosa diagnosed by cytology, immunohistochemistry, and polymerase chain reaction. Diagn Cytopathol (1998) 19:446–50. doi:10.1002/(SICI)1097-0339(199812)19:6<446::AID-DC8>3.0.CO;2-2
47. Kurata M, Mizukawa Y, Aoyama Y, Shiohara T. Herpes simplex virus reactivation as a trigger of mucous lesions in pemphigus vulgaris. Br J Dermatol (2014) 171:554–60. doi:10.1111/bjd.12961
48. Seo J-Y, Yaneva R, Cresswell P. Viperin: a multifunctional, interferon-inducible protein that regulates virus replication. Cell Host Microbe (2011) 10:534–9. doi:10.1016/j.chom.2011.11.004
49. Becker AM, Dao KH, Han BK, Kornu R, Lakhanpal S, Mobley AB, et al. SLE peripheral blood B cell, T cell and myeloid cell transcriptomes display unique profiles and each subset contributes to the interferon signature. PLoS One (2013) 8:e67003. doi:10.1371/journal.pone.0067003
50. Gan L, O'Hanlon TP, Lai Z, Fannin R, Weller ML, Rider LG, et al. Gene expression profiles from disease discordant twins suggest shared antiviral pathways and viral exposures among multiple systemic autoimmune diseases. PLoS One (2015) 10(11):e0142486. doi:10.1371/journal.pone.0142486
51. Tischner D, Woess C, Ottina E, Villunger A. Bcl-2-regulated cell death signalling in the prevention of autoimmunity. Cell Death Dis (2010) 1:e48. doi:10.1038/cddis.2010.27
52. Choi UY, Kang J-S, Hwang YS, Kim Y-J. Oligoadenylate synthase-like (OASL) proteins: dual functions and associations with diseases. Exp Mol Med (2015) 47:e144. doi:10.1038/emm.2014.110
53. Ferreira RC, Guo H, Coulson RM, Smyth DJ, Pekalski ML, Burren OS, et al. A type I interferon transcriptional signature precedes autoimmunity in children genetically at risk for type 1 diabetes. Diabetes (2014) 63:2538–50. doi:10.2337/db13-1777
54. Wang D, Qin H, Du W, Shen YW, Lee WH, Riggs AD, et al. Inhibition of S-phase kinase-associated protein 2 (Skp2) reprograms and converts diabetogenic T cells to Foxp3+ regulatory T cells. Proc Natl Acad Sci U S A (2012) 109:9493–8. doi:10.1073/pnas.1207293109
55. Hunziker T, Nydegger UE, Lerch PG, Vassalli JD. Platelet-derived factors enhance pemphigus acantholysis in skin organ cultures. Clin Exp Immunol (1986) 64:442–9.
56. Mizutani H, Ohyanagi S, Nouchi N, Inachi S, Shimizu M. Tissue factor and thrombomodulin expression on keratinocytes as coagulation/anti-coagulation cofactor and differentiation marker. Australas J Dermatol (1996) 37(Suppl):1. doi:10.1111/j.1440-0960.1996.tb01085.x
57. Chrabot BS, Kariuki SN, Zervou MI, Feng X, Arrington J, Jolly M, et al. Genetic variation near IRF8 is associated with serologic and cytokine profiles in systemic lupus erythematosus and multiple sclerosis. Genes Immun (2013) 14:471–8. doi:10.1038/gene.2013.42
58. Choi J-M, Bothwell ALM. The nuclear receptor PPARs as important regulators of T-cell functions and autoimmune diseases. Mol Cells (2012) 33:217–22. doi:10.1007/s10059-012-2297-y
59. Zhu J, Jankovic D, Grinberg A, Guo L, Paul WE. Gfi-1 plays an important role in IL-2-mediated Th2 cell expansion. Proc Natl Acad Sci U S A (2006) 103:18214–9. doi:10.1073/pnas.0608981103
60. McCarthy FP, Delany AC, Kenny LC, Walsh SK. PPAR-γ – a possible drug target for complicated pregnancies. Br J Pharmacol (2013) 168:1074–85. doi:10.1111/bph.12069

**Conflict of Interest Statement:** The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Copyright © 2018 Sezin, Vorobyev, Sadik, Zillikens, Gupta and Ludwig. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
{ "disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC5776326, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.frontiersin.org/articles/10.3389/fimmu.2017.01992/pdf" }
2018
[ "JournalArticle" ]
true
2018-01-17T00:00:00
[ { "paperId": "e40a9f5c16b7da2750458ebd5ff36442805607d5", "title": "Systemic lupus erythematosus associated with myasthenia gravis, pemphigus foliaceus and chronic thyroiditis after thymectomy" }, { "paperId": "67682d92dbad4993ad2b093ec554acf3c6274f19", "title": "Mechanisms of Disease: Pemphigus and Bullous Pemphigoid." }, { "paperId": "455d0672900c0b96bc57d5c3dfa971b5cac8caab", "title": "Identification of a Functional Risk Variant for Pemphigus Vulgaris in the ST18 Gene" }, { "paperId": "9bfc6549936e70c312f99f44b20c86212f80ace4", "title": "Differential gene network analysis for the identification of asthma-associated therapeutic targets in allergen-specific T-helper memory responses" }, { "paperId": "8e29407eb5ad59d3a1ae35eb09f6482485206e21", "title": "Genomic Analysis Reveals Disruption of Striatal Neuronal Development and Therapeutic Targets in Human Huntington’s Disease Neural Stem Cells" }, { "paperId": "b992d83632eafe0892824684dd1eee8389907b5f", "title": "Gene Expression Profiles from Disease Discordant Twins Suggest Shared Antiviral Pathways and Viral Exposures among Multiple Systemic Autoimmune Diseases" }, { "paperId": "7efd86f2a60222370c3a1e15013a2fa9fa0d9f27", "title": "Induction of a common microglia gene expression signature by aging and neurodegenerative conditions: a co-expression meta-analysis" }, { "paperId": "4bd993ed684d4c751619da3349d569b99c861e22", "title": "Oligoadenylate synthase-like (OASL) proteins: dual functions and associations with diseases" }, { "paperId": "54566d9fdfad503f98c25a394ec50e1bb625b532", "title": "limma powers differential expression analyses for RNA-sequencing and microarray studies" }, { "paperId": "c7917704ef7bedae86c6f84c49a3bae75c5a18b9", "title": "Genome‐wide gene expression profiling reveals unsuspected molecular alterations in pemphigus foliaceus" }, { "paperId": "b12b6b15c5050a4faf0c112ad0c2de82242d6136", "title": "Screening genes crucial for pediatric pilocytic astrocytoma using weighted gene coexpression network analysis combined with methylation data analysis" }, { "paperId": "43599b1eda2c5a68c0635e4eb0c04603fe11a9f1", "title": "Herpes simplex virus reactivation as a trigger of mucous lesions in pemphigus vulgaris" }, { "paperId": "636775c8cb6b40c0dec310b6d6d0985cf0e75b78", "title": "The role of T cells in systemic lupus erythematosus: an update" }, { "paperId": "ac37ff8c7c0bf05f2616526c47718199825913ba", "title": "Co‐occurrence of autoantibodies in healthy blood donors" }, { "paperId": "be6b7fb3c8eb7b8610497042ff6de0a5b8d770bb", "title": "NetworkAnalyst - integrative approaches for protein–protein interaction network analysis and visual exploration" }, { "paperId": "a599b38f5577de7294ab7b2e93e6a52daf3c5a71", "title": "The Pathology of T Cells in Systemic Lupus Erythematosus" }, { "paperId": "5c2f396ba2cd80853aa1507b1a95e01e8ee5aee3", "title": "A Type I Interferon Transcriptional Signature Precedes Autoimmunity in Children Genetically at Risk for Type 1 Diabetes" }, { "paperId": "1d3fca22e2a5f8f6f20a91b48c4b4c1dfa3bf168", "title": "Genetic Variation near IRF8 is Associated with Serologic and Cytokine Profiles in Systemic Lupus Erythematosus and Multiple Sclerosis" }, { "paperId": "d741c37113b52bad3e44cbc2108b552c0747f11c", "title": "SLE Peripheral Blood B Cell, T Cell and Myeloid Cell Transcriptomes Display Unique Profiles and Each Subset Contributes to the Interferon Signature" }, { "paperId": "9812d92d92dadaa5f827f9311552a6283b1302a9", "title": "INMEX—a web-based tool for integrative meta-analysis of expression data" }, { "paperId": 
"0e933d329978425eba6ce3dd9e76b05516f46191", "title": "PPAR‐γ – a possible drug target for complicated pregnancies" }, { "paperId": "a1fcdacefdcda0dc66eba852e8d89472f2f766fb", "title": "How do autoimmune diseases cluster in families? A systematic review and meta-analysis" }, { "paperId": "13ffa1cfc9b77162969be10e1788637df587d49f", "title": "Whole Genome Gene Expression Meta-Analysis of Inflammatory Bowel Disease Colon Mucosa Demonstrates Lack of Major Differences between Crohn's Disease and Ulcerative Colitis" }, { "paperId": "9add875c382973c043772347c21ba7fc2b4ceed3", "title": "Exploring the link between herpes viruses and pemphigus vulgaris: literature review and commentary." }, { "paperId": "1823026160626ed2cd47de840d3829a7bb2ebb38", "title": "NCBI GEO: archive for functional genomics data sets—update" }, { "paperId": "8cf6d8729ffdea65fabc462ce1e3604f66fc15de", "title": "Population-specific association between a polymorphic variant in ST18, encoding a pro-apoptotic molecule, and pemphigus vulgaris." }, { "paperId": "d35a456ee45baeb879a633aeb2c38e453d7785b5", "title": "Inhibition of S-phase kinase-associated protein 2 (Skp2) reprograms and converts diabetogenic T cells to Foxp3+ regulatory T cells" }, { "paperId": "d94a92fdb7cf7613809dbfbeffa10e95598f2d28", "title": "The sva package for removing batch effects and other unwanted variation in high-throughput experiments" }, { "paperId": "68615a87a8e30800780725fd8b7e6d65551c1fec", "title": "The nuclear receptor PPARs as important regulators of T-cell functions and autoimmune diseases" }, { "paperId": "5e8a5d40231681aaec56c8bd181eac6a33d77480", "title": "Viperin: a multifunctional, interferon-inducible protein that regulates virus replication." }, { "paperId": "0588dbf9002f59650d1f6e7bfcacb6bb3c4fb8de", "title": "Desmoglein 3-specific CD4+ T cells induce pemphigus vulgaris and interface dermatitis in mice." }, { "paperId": "56fe1ce14961800208a16c0f6ecd4278cf3066ae", "title": "Strategies for aggregating gene expression data: The collapseRows R function" }, { "paperId": "cb47c5216bee27aa14b59ee4de7f6925f7f8234d", "title": "Genome-wide DNA methylation patterns in CD4+ T cells from patients with systemic lupus erythematosus" }, { "paperId": "faaaad7409bbd9936fb04f7689c94a0872deed6f", "title": "Inferring the conservative causal core of gene regulatory networks" }, { "paperId": "62c0c881e23d33e2b2dbd390f85eb3896699f80f", "title": "DCGL: an R package for identifying differentially coexpressed genes and links from gene expression microarray data" }, { "paperId": "3208b7657a13bdb73956b44d20bb43393203d92f", "title": "Bcl-2-regulated cell death signalling in the prevention of autoimmunity" }, { "paperId": "000551662c3902ed66709be879053d95dafc0211", "title": "Finding the missing heritability of complex diseases" }, { "paperId": "4228aba58a3cb259a532b6ceb3e274a86796c7ca", "title": "Herpes Simplex Virus Infection and Pemphigus" }, { "paperId": "b012e29e95b27dae0184f4baa80ac070d02d4524", "title": "Viruses and lupus: the viral hypothesis" }, { "paperId": "d9e370465967352eca00debdf5527e12f4b587d7", "title": "Eigengene networks for studying the relationships between co-expression modules" }, { "paperId": "d6ec5472aa8bb05d93b62d5f91c8d8ab242529fd", "title": "Codelink: an R package for analysis of GE healthcare gene expression bioarrays" }, { "paperId": "433ca39482e4d30953b903180be82dba46ab641e", "title": "Concurrence of Systemic Lupus Erythematosus and Pemphigus: Coincidence or Correlation?" 
}, { "paperId": "259fd152a913b00d82eef4185874a7393f8e27b5", "title": "Gfi-1 plays an important role in IL-2-mediated Th2 cell expansion" }, { "paperId": "aadf62dc69b6268d28169520caefe2621c14df49", "title": "Systemic lupus erythematosus and pemphigus vulgaris: association or coincidence" }, { "paperId": "87e8746a04c4ceecefc1f1ded5b770fb74b45a5b", "title": "Perl Programming for Biologists" }, { "paperId": "13e6d58e6006bed58b0b67263ef94b3ac4c13293", "title": "affy - analysis of Affymetrix GeneChip data at the probe level" }, { "paperId": "30a9644bdaac6f0ca78e8f0cd6b5a23d37a4d006", "title": "VEGAN, a package of R functions for community ecology" }, { "paperId": "80e394ee3e1834091596e8b55c9ad9bf11456e09", "title": "DAVID: Database for Annotation, Visualization, and Integrated Discovery" }, { "paperId": "bff166dfe756c70000a831d9e323ea7b7f9a6053", "title": "Systemic lupus erythematosus--is a viral aetiology a credible hypothesis?" }, { "paperId": "732d52d1d3d4127db7c077e7f911f0221121fc49", "title": "Coexistence of Pemphigus vulgaris and herpes simplex virus infection in oral mucosa diagnosed by cytology, immunohistochemistry, and polymerase chain reaction" }, { "paperId": "cb72a51b0a36c324cd4f688d0960ee58658e43a1", "title": "Tissue factor and thrombomodulin expression on keratinocytes as coagulation/anti‐coagulation cofactor and differentiation marker" }, { "paperId": "a5a8e23703a93be12aae41d4b8049b6ecb8b2f3a", "title": "Hair shaft abnormalities: Part I" }, { "paperId": "2a94cec5f9b1968ae5f73680231aeb9bb558b37a", "title": "Platelet-derived factors enhance pemphigus acantholysis in skin organ cultures." }, { "paperId": "15462580499bdf591751bae758ed844a5d44d087", "title": "The virus hypothesis in systemic lupus erythematosus." }, { "paperId": "52cd8e4e045e163af1f07ec6daaadba9bda557f0", "title": "Pemphigus. Epidemiologic and survival characteristics of 59 patients, 1955-1973." }, { "paperId": "065588752af5c1ca67d7af9b576b81dd901a198a", "title": "Multiple autoimmune syndrome." }, { "paperId": "427eaafcd47b3abc620b953ff021f25925ff0e62", "title": "Adjusting batch effects in microarray expression data using empirical Bayes methods." }, { "paperId": "03b381818a59d97768c423514080bed01c4ae1d5", "title": "Orphanet Journal of Rare Diseases BioMed Central Review" }, { "paperId": "1d9f821bdd97d9f2172b6d2bd8c52c079719ca41", "title": "Type I interferons (alpha/beta) in immunity and autoimmunity." }, { "paperId": "af0c5195b8a3ad1f78433b3c9f18e9820565f637", "title": "TYPE I INTERFERONS (/) IN IMMUNITY AND AUTOIMMUNITY" }, { "paperId": "05fba9e230ef113b68ae4e99d686d698d73e91a4", "title": "Virus-induced autoimmunity: potential role of viruses in initiation, perpetuation, and progression of T-cell-mediated autoimmune disease." }, { "paperId": "2c288b6afc407d1b528e4f5926871e0af1d5bf55", "title": "Detection of antigen-specific B cells in patients with pemphigus vulgaris by enzyme-linked immunospot assay: requirement of T cell collaboration for autoantibody production." } ]
14636
en
[ { "category": "Environmental Science", "source": "s2-fos-model" }, { "category": "Business", "source": "s2-fos-model" }, { "category": "Economics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01d06e30ac0505f2559392e5ff1fad97ba8a55a4
[]
0.800535
Blockchain Teknolojisi ve Sürdürülebilir Lojistik: Döngüsel Ekonomi Entegrasyonu
01d06e30ac0505f2559392e5ff1fad97ba8a55a4
Toros Üniversitesi İİSBF Sosyal Bilimler Dergisi
[ { "authorId": "2083330743", "name": "Emel Yontar" } ]
{ "alternate_issns": null, "alternate_names": [ "Toros Üniversitesi İİSBF Sos Bilim Derg" ], "alternate_urls": null, "id": "d516bda8-4e0d-45dd-81f9-4db0a4439f00", "issn": "2147-8414", "name": "Toros Üniversitesi İİSBF Sosyal Bilimler Dergisi", "type": "journal", "url": null }
Recycling, reuse and reduction, which are among the “3R” actions of the circular economy, have an important place in ensuring resource efficiency. Minimizing the use of resources, ensuring their reuse and obtaining gains by recycling them at high standards can contribute to the sustainability studies of the logistics sector. This study covers associating the circular economy with blockchain technology, taking into account sustainable logistics studies. From the circular economy perspective, the features of blockchain technology that are thought to affect sustainable logistics; carbon emission reduction, logistics cost reduction, ease of communication, hacking, increased performance, data immutability, effective information sharing, transparency, uncertain legal situation, new technology and trust. From this point of view, the place of blockchain technology on the road to circular economy has been examined in the current study.
**RESEARCH ARTICLE / ARAŞTIRMA MAKALESİ** | Toros Üniversitesi İİSBF Sosyal Bilimler Dergisi, Special Issue on the 2nd International Symposium of Sustainable Logistics "Circular Economy" | ISSN: 2147-8414

# Blockchain Technology and Sustainable Logistics: Integration in the Circular Economy

## Emel YONTAR [1]

### ABSTRACT

Recycling, reuse and reduction, which are among the "3R" actions of the circular economy, have an important place in ensuring resource efficiency. Minimizing the use of resources, ensuring their reuse and obtaining gains by recycling them at high standards can contribute to the sustainability studies of the logistics sector. This study covers associating the circular economy with blockchain technology, taking into account sustainable logistics studies. From the circular economy perspective, the features of blockchain technology that are thought to affect sustainable logistics are: carbon emission reduction, logistics cost reduction, ease of communication, hacking, increased performance, data immutability, effective information sharing, transparency, uncertain legal situation, new technology, and trust. From this point of view, the place of blockchain technology on the road to the circular economy is examined in the current study.

**Keywords: Blockchain technology, Circular Economy, Sustainable Logistics, Logistics, Entropy Method**

**_To cite (Atıf): Yontar, E. (2022). Blockchain Technology and Sustainable Logistics: Integration in the Circular Economy. Toros University FEASS Journal of Social Sciences, 9(Special Issue): 1-9. doi:10.54709/iisbf.1161463_**

Received Date (Makale Geliş Tarihi): 12.08.2022 | Accepted Date (Makale Kabul Tarihi): 27.09.2022

1 Asst. Prof., Department of Industrial Engineering, Tarsus University, 33400 Mersin, Turkey. eyontar@tarsus.edu.tr, ORCID: 0000-0001-7800-2960.

### 1. INTRODUCTION

Logistics, the final link of supply chain management, has become even more important as it is associated with sustainability.
Logistics activities not only make a significant contribution to economic performance but also involve elements that must be taken into account in environmental and social terms. On the environmental side, the sector consumes significant energy resources, generates greenhouse gas emissions, and causes air and noise pollution. Moreover, as industrialization advances, the growing waste generated by resource use brings various problems. Resource use is expected to triple globally by 2050 due to increasing consumption (Jaeger and Upadhyay, 2020); this signal of resource depletion, together with growing environmental pollution, makes the circular economy, a sustainability-oriented model, a promising idea for the sector at this stage. The circular economy model keeps resources in the loop, ensuring that they are used for as long as possible, that energy is saved, and that waste is reduced; it is grounded in sustainability and arose in opposition to the familiar linear economy model. Recycle, reuse and reduce, the "3R" actions of the circular economy, have an important place in ensuring resource efficiency: reduce refers to cutting raw material use; reuse to the most efficient reuse of products and components; and recycle to the high-quality recovery of raw materials. Minimizing the use of resources, ensuring their reuse and obtaining gains by recycling them at high standards can contribute to the sustainability efforts of the logistics sector. In the circular economy, waste is minimized by properly designing products and industrial processes so that resources and materials are constantly flowing and in use; the unavoidable wastes and residues are recycled or recovered (EMF, 2014). On the other hand, technological developments provide various benefits to businesses on the way to sustainability. Blockchain technology, one such development that has featured frequently in recent studies, can be the subject of sustainability work within the scope of the circular economy. Blockchain is recognized as a cost-effective technology (using smart contracts) to control communication between multiple participants in a reliable, efficient and decentralized manner (Nesarani et al., 2020). Blockchain technology comprises three core technologies: asymmetric encryption algorithms, distributed data storage, and consensus algorithms. It can be defined as a system that allows information to flow reliably and without outside interference. While blockchain technology benefits the supply chain in many ways, it is considered capable of solving many problems, especially where logistics activities are concerned. Studies in the literature that use blockchain technology within the scope of logistics activities and associate it with sustainability are summarized in Table 1.
Table 1. Literature reviews contributing to the study

| Authors | Scope | Methodology | Sector |
|---|---|---|---|
| Tektaş and Kırbaç (2020) | A case study on the use of blockchain technology in logistics and the supply chain, applied in a logistics company using appropriate methods. | Case study | Logistics |
| Orji et al. (2020) | Proposes a technology-organization-environment (TOE) theoretical framework of critical factors affecting the successful adoption of blockchain technologies in the transportation logistics industry and prioritizes them using ANP. | ANP | Logistics |
| Tijan et al. (2019) | Explores the decentralized data storage represented by blockchain technology and the possibility of its development in sustainable logistics and supply chain management. | Case study | Logistics |
| Sundarakani et al. (2021) | Explores the need for blockchain in the Industry 4.0 environment from the perspective of Big Data in supply chain management. | Case study | Logistics |
| Andreou et al. (2018) | Presents a blockchain-based smart contract mechanism and its advantages in logistics. | Case study | Logistics |
| Yi (2019) | Offers techniques to leverage blockchain to secure logistics. | Case study | Logistics |
| Sunmola and Apeji (2020) | Focuses on blockchain technology and explores sustainable supply chain visibility and the features of blockchains. | Literature review | General |
| Upadhyay et al. (2021) | Discusses the current and potential compatibility of blockchain with the circular economy. | Case study | General |
| Rejeb and Rejeb (2020) | Explores the blockchain literature and its relevance to supply chain sustainability. | Literature review | General |
| Kouhizadeh et al. (2021) | Provides a comprehensive overview of the barriers to adopting blockchain technology to manage sustainable supply chains. | DEMATEL | General |
| Esmaeilian et al. (2020) | Provides an overview of blockchain technology and Industry 4.0 to drive supply chains towards sustainability. | Literature review | General |
| Yadav and Singh (2020) | Explores the use of blockchain technology to develop efficient sustainable supply chain management. | Fuzzy DEMATEL | General |
| Tsolakis et al. (2021) | Examines the design of blockchain-based food supply chains that support the Sustainable Development Goals. | Case study | Food |
| Nandi et al. (2021) | Using blockchain technology and circular economy principles, offers a potential solution by addressing localization, agility and digitization (LAD) features. | Case study | General |

In the circular economy integration of blockchain technology, considered within the scope of sustainability, its fit with the supply chain has had a positive effect in many studies. Some of the benefits can be listed as follows:

− Faster and error-free process management
− Accelerating the physical flow of goods thanks to its transparency feature
− Efficient process operations
− Preventing fraud in resource management and tracking
− Increased trust as a result of effective information sharing among supply chain stakeholders
− Avoiding delivery delays
− While doing all this, reducing carbon emissions with optimum planning

The current study aims to contribute to the literature by examining the place of blockchain technology on the road to the circular economy. When logistics activities are taken into account in designing the circular economy, blockchain technology allows the monitoring of all workflows, from the material selection point of the products to distribution. Many parameters, such as the material of the purchased raw product, whether fossil fuels are used during production, the carbon emissions generated in logistics processes, and the amount of product and waste suitable for recycling, can be tracked with blockchain, as sketched below.
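The following is a minimal sketch, not part of the original paper, of the append-only, hash-chained record keeping that the paragraph above attributes to blockchain-based traceability. The record fields (material, fossil_fuel_used, co2_kg, recyclable_share) are hypothetical, and a real deployment would add distribution across nodes, digital signatures, and a consensus mechanism.

```python
# Hash-chained traceability ledger sketch; field names are hypothetical.
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash a block's contents, excluding its own stored hash."""
    payload = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def make_block(record: dict, prev_hash: str) -> dict:
    """Bundle one traceability record with a hash link to its predecessor."""
    block = {"record": record, "prev_hash": prev_hash, "ts": time.time()}
    block["hash"] = block_hash(block)
    return block

chain = [make_block({"event": "genesis"}, prev_hash="0" * 64)]
chain.append(make_block(
    {"event": "raw_material_received", "material": "recycled PET",
     "fossil_fuel_used": False, "co2_kg": 1.8, "recyclable_share": 0.95},
    prev_hash=chain[-1]["hash"]))

def is_valid(chain: list) -> bool:
    """Recompute every hash and check each link to the previous block."""
    return all(
        block_hash(chain[i]) == chain[i]["hash"]
        and chain[i]["prev_hash"] == chain[i - 1]["hash"]
        for i in range(1, len(chain)))

print(is_valid(chain))  # True; editing any earlier record makes this False
```

Tampering with an earlier record invalidates every later hash link, which is the data-immutability property (criterion BC6 below) that the paper relies on.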
Such traceability is a positive development that will contribute to the circular economy. The aim of this study is therefore to evaluate the compatibility of blockchain technology with the circular economy in sustainable logistics activities, without being indifferent to technological developments. Considering the circular economy on the road to sustainability, the criteria drawn from the features of blockchain technology are evaluated in this context.

### 2. METHODOLOGY

In this section, the criteria determined by considering the concept of the circular economy and its compatibility with the sustainable logistics sector, with blockchain technology in mind, are explained. At this stage, the Entropy Method, one of the Multi-Criteria Decision Making methods, is used.

2.1. Entropy Method

The entropy method is used to measure the amount of useful information provided by existing data (Wu et al., 2011). In the entropy method, the data in the decision matrix are used to calculate the weights of the criteria in the decision problem. The method's applicability is strengthened by the fact that no additional subjective evaluation is needed. The entropy method consists of five stages (Wang and Lee, 2009).

Stage 1. Creation of the decision matrix. The decision matrix consists of the values $x_{ij}$ (the value of the $i$-th alternative on the $j$-th evaluation criterion), as in Equation (1):

$$D = \begin{bmatrix} x_{11} & \cdots & x_{1n} \\ \vdots & \ddots & \vdots \\ x_{m1} & \cdots & x_{mn} \end{bmatrix} \tag{1}$$

Stage 2. Normalization of the decision matrix. The values are standardized with the help of Equation (2):

$$p_{ij} = \frac{x_{ij}}{\sum_{i=1}^{m} x_{ij}} \tag{2}$$

Stage 3. Finding the entropy values of the criteria. The entropy value $e_j$ of each evaluation criterion is calculated by Equation (3), where $k = 1/\ln m$ so that $0 \le e_j \le 1$:

$$e_j = -k \sum_{i=1}^{m} p_{ij} \ln p_{ij}, \quad j = 1, 2, \ldots, n \tag{3}$$

Stage 4. Finding the degrees of differentiation. Using the $e_j$ values found in Stage 3, the $d_j$ values are obtained from Equation (4); a high $d_j$ value indicates that the distance, or differentiation, between alternative scores on criterion $j$ is large:

$$d_j = 1 - e_j, \quad j = 1, 2, \ldots, n \tag{4}$$

Stage 5. Calculation of the entropy criterion weights. The weight values of the criteria are calculated with the help of Equation (5):

$$w_j = \frac{d_j}{\sum_{j=1}^{n} d_j} \tag{5}$$
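The following is a minimal sketch, not the author's code, of the five-stage procedure formalized in Equations (1)-(5), assuming a strictly positive decision matrix. The 3x3 example is the upper-left block of Table 3 and is for illustration only; the study weights the full 11x11 matrix over criteria BC1-BC11.

```python
# Entropy weighting sketch for a positive m x n decision matrix.
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """Return entropy weights w_j for a positive decision matrix X."""
    m, _ = X.shape
    P = X / X.sum(axis=0, keepdims=True)    # Stage 2: p_ij, Eq. (2)
    k = 1.0 / np.log(m)                     # scaling so that 0 <= e_j <= 1
    e = -k * (P * np.log(P)).sum(axis=0)    # Stage 3: e_j, Eq. (3)
    d = 1.0 - e                             # Stage 4: d_j, Eq. (4)
    return d / d.sum()                      # Stage 5: w_j, Eq. (5)

# Upper-left 3x3 block of Table 3 (criteria BC1-BC3 compared pairwise):
X = np.array([[1.00, 7.00, 2.00],
              [0.14, 1.00, 2.00],
              [0.50, 0.50, 1.00]])
print(entropy_weights(X).round(3))
```

Applying the same function to the full Table 3 matrix reproduces the $e_j$, $d_j$, and $w_j$ rows reported in Table 6 below.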
### 3. RESULTS AND FINDINGS

Considering the "3R" headings of the circular economy, the criteria considered appropriate for the logistics sector and supported by the literature are brought together in Table 2.

Table 2. Definitions of the criteria that are the subject of the study

| Criteria | Authors | Code | Description |
|---|---|---|---|
| Reducing carbon emissions | Green, 2018 | BC1 | Blockchain technology can promote clean energy trade by improving carbon emissions with optimum transport management. |
| Reducing logistics costs | Tijan et al., 2019; Chang et al., 2019 | BC2 | It can significantly reduce logistics costs, additional costs and transportation costs. |
| Ease of communication | Author* | BC3 | It provides accurate and reliable communication between the end-to-end stakeholders of the supply chain process. |
| Hacking | Min, 2019 | BC4 | It can prevent hacking and vulnerability disputes by increasing transaction security. |
| Increased performance | Author* | BC5 | It increases the end-to-end speed of the supply chain process and provides a performance increase. |
| Data immutability | Dutta et al., 2020 | BC6 | Data is immutable due to the need for verification by other nodes and the traceability of changes. |
| Effective information sharing | Litke et al., 2019; Min, 2019 | BC7 | It can contribute effectively to information sharing among supply chain stakeholders. |
| Transparency | Wang et al., 2019; Saberi et al., 2019 | BC8 | It helps to keep track of the status of an item during a transaction. |
| Uncertain legal status | Niranjanamurthy et al., 2019 | BC9 | The uncertain legal situation can be confusing and prohibitive. |
| New technology | Hughes et al., 2019; Johansson and Nilsson, 2018 | BC10 | The fact that it is a new technology may cause it to not be understood yet. |
| Trust | Saberi et al., 2019; Tijan et al., 2019 | BC11 | Trust among stakeholders can increase as data becomes more transparent. |

(*) Created by the author.

As set out in Table 2, when the recycle, reuse and reduce activities of the circular economy are considered, the sustainability-related criteria at these stages are gathered into 11 items. These are the blockchain features obtained from the literature by considering every stage of the logistics process (reducing carbon emissions, reducing logistics costs, ease of communication, hacking, increased performance, data immutability, effective information sharing, transparency, uncertain legal status, new technology, trust). These parameters of an advanced technology are of a nature to benefit the circular economy and explain its compatibility with sustainability. Accordingly, a decision matrix is first created (Table 3) and then normalized for the evaluation between criteria (Table 4); the normalization of one column is checked in the sketch after Table 3.

Table 3. Decision matrix of the Entropy method

| | BC1 | BC2 | BC3 | BC4 | BC5 | BC6 | BC7 | BC8 | BC9 | BC10 | BC11 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| BC1 | 1.00 | 7.00 | 2.00 | 7.00 | 0.20 | 7.00 | 0.17 | 6.00 | 8.00 | 7.00 | 0.25 |
| BC2 | 0.14 | 1.00 | 2.00 | 2.00 | 0.33 | 3.00 | 2.00 | 5.00 | 6.00 | 6.00 | 3.00 |
| BC3 | 0.50 | 0.50 | 1.00 | 2.00 | 0.20 | 0.25 | 0.20 | 0.33 | 6.00 | 6.00 | 0.33 |
| BC4 | 0.14 | 0.50 | 0.50 | 1.00 | 0.14 | 0.17 | 0.17 | 0.20 | 0.50 | 0.33 | 0.20 |
| BC5 | 5.00 | 3.00 | 5.00 | 7.00 | 1.00 | 3.00 | 2.00 | 4.00 | 6.00 | 6.00 | 2.00 |
| BC6 | 0.14 | 0.33 | 4.00 | 6.00 | 0.33 | 1.00 | 0.33 | 3.00 | 6.00 | 5.00 | 0.33 |
| BC7 | 6.00 | 0.50 | 5.00 | 6.00 | 0.50 | 3.00 | 1.00 | 6.00 | 7.00 | 7.00 | 3.00 |
| BC8 | 0.17 | 0.20 | 3.00 | 5.00 | 0.25 | 0.33 | 0.17 | 1.00 | 5.00 | 5.00 | 1.00 |
| BC9 | 0.13 | 0.17 | 0.17 | 2.00 | 0.17 | 0.17 | 0.14 | 0.20 | 1.00 | 2.00 | 0.25 |
| BC10 | 0.14 | 0.17 | 0.17 | 3.00 | 0.17 | 0.20 | 0.14 | 0.20 | 0.50 | 1.00 | 0.25 |
| BC11 | 4.00 | 0.33 | 3.00 | 5.00 | 0.50 | 3.00 | 0.33 | 1.00 | 4.00 | 4.00 | 1.00 |
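As a quick sanity check on Stage 2 (a sketch added here, not part of the original paper), dividing the BC1 column of Table 3 by its column sum reproduces the BC1 column of Table 4 after rounding:

```python
# Stage 2 check: the BC1 column of Table 3 divided by its sum (17.36)
# matches the BC1 column of Table 4 to two decimal places.
col_bc1 = [1.00, 0.14, 0.50, 0.14, 5.00, 0.14, 6.00, 0.17, 0.13, 0.14, 4.00]
total = sum(col_bc1)
print([round(x / total, 2) for x in col_bc1])
# -> [0.06, 0.01, 0.03, 0.01, 0.29, 0.01, 0.35, 0.01, 0.01, 0.01, 0.23]
```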
Table 4. Normalized decision matrix of the Entropy method

| | BC1 | BC2 | BC3 | BC4 | BC5 | BC6 | BC7 | BC8 | BC9 | BC10 | BC11 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| BC1 | 0.06 | 0.51 | 0.08 | 0.15 | 0.05 | 0.33 | 0.03 | 0.22 | 0.16 | 0.14 | 0.02 |
| BC2 | 0.01 | 0.07 | 0.08 | 0.04 | 0.09 | 0.14 | 0.30 | 0.19 | 0.12 | 0.12 | 0.26 |
| BC3 | 0.03 | 0.04 | 0.04 | 0.04 | 0.05 | 0.01 | 0.03 | 0.01 | 0.12 | 0.12 | 0.03 |
| BC4 | 0.01 | 0.04 | 0.02 | 0.02 | 0.04 | 0.01 | 0.03 | 0.01 | 0.01 | 0.01 | 0.02 |
| BC5 | 0.29 | 0.22 | 0.19 | 0.15 | 0.26 | 0.14 | 0.30 | 0.15 | 0.12 | 0.12 | 0.17 |
| BC6 | 0.01 | 0.02 | 0.15 | 0.13 | 0.09 | 0.05 | 0.05 | 0.11 | 0.12 | 0.10 | 0.03 |
| BC7 | 0.35 | 0.04 | 0.19 | 0.13 | 0.13 | 0.14 | 0.15 | 0.22 | 0.14 | 0.14 | 0.26 |
| BC8 | 0.01 | 0.01 | 0.12 | 0.11 | 0.07 | 0.02 | 0.03 | 0.04 | 0.10 | 0.10 | 0.09 |
| BC9 | 0.01 | 0.01 | 0.01 | 0.04 | 0.04 | 0.01 | 0.02 | 0.01 | 0.02 | 0.04 | 0.02 |
| BC10 | 0.01 | 0.01 | 0.01 | 0.07 | 0.04 | 0.01 | 0.02 | 0.01 | 0.01 | 0.02 | 0.02 |
| BC11 | 0.23 | 0.02 | 0.12 | 0.11 | 0.13 | 0.14 | 0.05 | 0.04 | 0.08 | 0.08 | 0.09 |

Following normalization, the entropy values ($e_j$) of the criteria are obtained (Table 5); the cell entries shown are the $p_{ij} \ln p_{ij}$ terms that are summed in Equation (3).

Table 5. Entropy values for criteria

| | BC1 | BC2 | BC3 | BC4 | BC5 | BC6 | BC7 | BC8 | BC9 | BC10 | BC11 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| BC1 | -0.16 | -0.34 | -0.20 | -0.29 | -0.16 | -0.37 | -0.09 | -0.33 | -0.29 | -0.28 | -0.08 |
| BC2 | -0.04 | -0.19 | -0.20 | -0.14 | -0.21 | -0.28 | -0.36 | -0.31 | -0.25 | -0.26 | -0.35 |
| BC3 | -0.10 | -0.12 | -0.13 | -0.14 | -0.16 | -0.05 | -0.11 | -0.05 | -0.25 | -0.26 | -0.10 |
| BC4 | -0.04 | -0.12 | -0.08 | -0.08 | -0.12 | -0.04 | -0.09 | -0.04 | -0.05 | -0.03 | -0.07 |
| BC5 | -0.36 | -0.33 | -0.32 | -0.29 | -0.35 | -0.28 | -0.36 | -0.28 | -0.25 | -0.26 | -0.30 |
| BC6 | -0.04 | -0.09 | -0.29 | -0.27 | -0.21 | -0.14 | -0.15 | -0.24 | -0.25 | -0.23 | -0.10 |
| BC7 | -0.37 | -0.12 | -0.32 | -0.27 | -0.27 | -0.28 | -0.28 | -0.33 | -0.28 | -0.28 | -0.35 |
| BC8 | -0.04 | -0.06 | -0.25 | -0.24 | -0.18 | -0.07 | -0.09 | -0.12 | -0.23 | -0.23 | -0.21 |
| BC9 | -0.04 | -0.05 | -0.03 | -0.14 | -0.14 | -0.04 | -0.08 | -0.04 | -0.08 | -0.13 | -0.08 |
| BC10 | -0.04 | -0.05 | -0.03 | -0.18 | -0.14 | -0.04 | -0.08 | -0.04 | -0.05 | -0.08 | -0.08 |
| BC11 | -0.34 | -0.09 | -0.25 | -0.24 | -0.27 | -0.28 | -0.15 | -0.12 | -0.20 | -0.20 | -0.21 |

Then, the weightings of each criterion are determined (Table 6).
Table 6. Determination of weights

| | BC1 | BC2 | BC3 | BC4 | BC5 | BC6 | BC7 | BC8 | BC9 | BC10 | BC11 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| ej | 0.654 | 0.658 | 0.870 | 0.941 | 0.917 | 0.774 | 0.773 | 0.799 | 0.912 | 0.931 | 0.811 |
| dj | 0.345 | 0.341 | 0.129 | 0.058 | 0.082 | 0.225 | 0.226 | 0.200 | 0.087 | 0.068 | 0.188 |
| wj | 0.1770 | 0.1748 | 0.0661 | 0.0301 | 0.0421 | 0.1152 | 0.1159 | 0.1025 | 0.0446 | 0.0351 | 0.0965 |

Accordingly, the reduction of carbon emissions targeted by the circular economy benefits from the use of blockchain technology in the logistics sector. In the ranking of the criteria (Table 6, Figure 1), the criterion coded BC1, "Reducing carbon emissions," receives the highest weight, demonstrating this benefit.

Figure 1. Ranking of criteria (bar chart of the criterion weights from Table 6).

Similarly, BC2, "Reducing logistics costs," ranks second: the circular economy, which unlike the linear economy calls for keeping resources at the heart of the economy, will help reduce these costs when blockchain technology is used. These criteria are followed by the BC7 "Effective information sharing" and BC6 "Data immutability" parameters. With effective information sharing and data immutability, stakeholders along the supply chain will be able to assume more effective roles. The BC8 "Transparency" criterion that follows will have a high-level impact on resource management through visibility and tracking, supporting decisions on how resources are valued within the economy. The BC11 "Trust" criterion ensures trust between stakeholders. The BC3 "Ease of communication" criterion, which comes next, facilitates communication between stakeholders and provides the opportunity to produce fast solutions to problems that may arise. The BC9 "Uncertain legal status" criterion is currently considered a negative for the cross-country adoption and enforcement of blockchain technology, which is why it is near the bottom of the list. BC5 "Increased performance" will inevitably contribute to performance gains in logistics processes within the circular economy. The BC10 "New technology" criterion reflects that the technology is new and awareness among stakeholders is low, which affects it negatively. The BC4 "Hacking" criterion, which is in last place, concerns protection against hacking and damage that may occur in operations using blockchain technology.

### 4. CONCLUSION

The circular economy model, which keeps resources in the loop, ensures the use of resources for as long as possible, enables energy savings and reduces waste, is a concept developed against the known linear model. At the same time, developing technologies that contribute to businesses support this economic model. Every business aiming at sustainable logistics also contributes to the circular economy model. This model, which makes resource management effective, reduces carbon emissions, and ensures the recycling and recovery of waste, is achievable with blockchain technology. In this study, the integration of the circular economy and blockchain technology, discussed in the light of these parameters, is shown through criteria. At this stage, the Entropy Method, one of the Multi-Criteria Decision Making methods, was used. From the blockchain technology literature examined within the scope of sustainability, 11 criteria (reducing carbon emissions, reducing logistics costs, ease of communication, hacking, increased performance, data immutability, effective information sharing, transparency, uncertain legal status, new technology, trust) were selected and evaluated. As a result of the evaluation, the most important criteria were reducing carbon emissions and reducing logistics costs. It can be said that these criteria will contribute significantly to the "3R" rule of the circular economy. Considering sustainable logistics studies for businesses, this study has sought to convey the importance of blockchain technology, which has been shown to facilitate the transition to the circular economy. The importance of blockchain technology will increase gradually as uncertainty disappears. In future studies, additional criteria can be developed and solutions evaluated with new methods; at the same time, the logistics activities of different sectors can be examined in detail, contributing to the literature.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.54709/iisbf.1161463?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.54709/iisbf.1161463, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://dergipark.org.tr/en/download/article-file/2594215" }
2022
[]
true
2022-09-27T00:00:00
[]
8,678
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Mathematics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01d090ac9cd0ddc7da673e12acb157431260afbc
[ "Computer Science" ]
0.879053
Structure and Intractability of Optimal Multi-Robot Path Planning on Graphs
01d090ac9cd0ddc7da673e12acb157431260afbc
AAAI Conference on Artificial Intelligence
[ { "authorId": "144018368", "name": "Jingjin Yu" }, { "authorId": "1683060", "name": "S. LaValle" } ]
{ "alternate_issns": null, "alternate_names": [ "National Conference on Artificial Intelligence", "National Conf Artif Intell", "AAAI Conf Artif Intell", "AAAI" ], "alternate_urls": null, "id": "bdc2e585-4e48-4e36-8af1-6d859763d405", "issn": null, "name": "AAAI Conference on Artificial Intelligence", "type": "conference", "url": "http://www.aaai.org/" }
In this paper, we study the structure and computational complexity of optimal multi-robot path planning problems on graphs. Our results encompass three formulations of the discrete multi-robot path planning problem, including a variant that allows synchronous rotations of robots along fully occupied, disjoint cycles on the graph. Allowing rotation of robots provides a more natural model for multi-robot path planning because robots can communicate. Our optimality objectives are to minimize the total arrival time, the makespan (last arrival time), and the total distance. On the structure side, we show that, in general, these objectives demonstrate a pairwise Pareto optimal structure and cannot be simultaneously optimized. On the computational complexity side, we extend previous work and show that, regardless of the underlying multi-robot path planning problem, these objectives are all intractable to compute. In particular, our NP-hardness proof for the time optimal versions, based on a minimal and direct reduction from the 3-satisfiability problem, shows that these problems remain NP-hard even when there are only two groups of robots (i.e. robots within each group are interchangeable).
# Structure and Intractability of Optimal Multi-Robot Path Planning on Graphs∗

## Jingjin Yu
Department of Electrical and Computer Engineering, University of Illinois, Urbana, IL 61801
jyu18@uiuc.edu

## Steven M. LaValle
Department of Computer Science, University of Illinois, Urbana, IL 61801
lavalle@uiuc.edu

∗This work was supported in part by NSF grant 0904501 (IIS Robotics), NSF grant 1035345 (Cyberphysical Systems), and MURI/ONR grant N00014-09-1-1052. We thank the anonymous reviewers for their helpful suggestions. Copyright © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

**Abstract**

In this paper, we study the structure and computational complexity of optimal multi-robot path planning problems on graphs. Our results encompass three formulations of the discrete multi-robot path planning problem, including a variant that allows synchronous rotations of robots along fully occupied, disjoint cycles on the graph. Allowing rotation of robots provides a more natural model for multi-robot path planning because robots can communicate. Our optimality objectives are to minimize the total arrival time, the makespan (last arrival time), and the total distance. On the structure side, we show that, in general, these objectives demonstrate a pairwise Pareto optimal structure and cannot be simultaneously optimized. On the computational complexity side, we extend previous work and show that, regardless of the underlying multi-robot path planning problem, these objectives are all intractable to compute. In particular, our NP-hardness proof for the time optimal versions, based on a minimal and direct reduction from the 3-satisfiability problem, shows that these problems remain NP-hard even when there are only two groups of robots (i.e. robots within each group are interchangeable).

## Introduction

Discrete multi-robot path planning problems seem to have originated from the study of Sam Loyd's 15-puzzle (Loyd 1959; Story 1879), a well-known board-based puzzle game. The 15-puzzle can be viewed as moving 15 robots on a 16-vertex grid graph, which readily generalizes to the multi-robot path planning problem on an N-vertex graph with n < N robots. In the most basic formulation, only one pebble may move in a time step to an adjacent unoccupied vertex; we call this problem pebble motion on graphs or PMG. Since robots can act autonomously and communicate, multiple robots are capable of moving in the same time step. A parallel move of robots is a synchronous move of a (non-self-intersecting) chain of robots, as long as the first robot moves into a vertex that is unoccupied at the beginning of the time step. If multiple disjoint parallel moves per time step are allowed, we call this problem variant multi-robot path planning on graphs with parallel moves, or MPPp, which was studied in (Ryan 2008; Surynek 2010), among others. Feasible moves require unoccupied vertices in the PMG and MPPp formulations. More recently, in a variant of the problem (Yu and LaValle 2012; 2013), robots are allowed to rotate synchronously along fully occupied cycles. It was pointed out in (Yu 2012) that instances having N robots (on an N-vertex graph) can often be feasible. We call this problem multi-robot path planning on graphs with parallel moves and rotations, or MPPpr for short. The rotation primitive was also mentioned in a grid setting (Standley and Korf 2011).
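To make the move models concrete, here is a small illustrative sketch (our own encoding, not from the paper): a basic PMG step into an unoccupied neighbor, and the MPPpr rotation primitive along a fully occupied cycle:

```python
def pmg_move(config, robot, target, adj):
    """PMG primitive: one robot steps to an adjacent, unoccupied vertex.
    config maps robots to vertices; adj maps a vertex to its neighbors."""
    v = config[robot]
    assert target in adj[v], "target must be adjacent to the robot's vertex"
    assert target not in config.values(), "target must be unoccupied"
    config[robot] = target

def rotate_cycle(config, cycle):
    """MPPpr primitive: all robots on a fully occupied cycle shift one
    position synchronously; cycle lists the vertices in cyclic order."""
    occupant = {v: r for r, v in config.items()}
    assert all(v in occupant for v in cycle), "cycle must be fully occupied"
    for i, v in enumerate(cycle):
        config[occupant[v]] = cycle[(i + 1) % len(cycle)]

# Three robots on a fully occupied 3-cycle can still move, by rotating:
config = {"r1": 0, "r2": 1, "r3": 2}
rotate_cycle(config, [0, 1, 2])
assert config == {"r1": 1, "r2": 2, "r3": 0}
```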
Arguably, MPPpr provides a better model for the multi-robot path planning problem than MPPp does, for two reasons: (1) when parallel moves are allowed, it is natural to include rotations, and (2) allowing rotations can only reduce the best plan's size, given some optimality criterion. It is well known that PMG (therefore, MPPp) is solvable in polynomial time (Kornhauser, Miller, and Spirakis 1984). Moreover, feasibility tests for PMG can be performed in linear time (Goraly and Hassin 2010). These algorithms were generalized to include MPPpr in (Yu 2012). Since feasible solutions can be found efficiently, one might be motivated to seek polynomial time optimal solutions to these formulations. For PMG, a distance optimal solution is NP-hard to compute (Goldreich 1984; Ratner and Warmuth 1990). Finding a plan with minimum makespan (i.e., last arrival time) for MPPp was also shown to be NP-hard (Surynek 2010). However, not much is known about the computational complexity of optimal MPPpr formulations, or of optimal MPPp formulations other than minimum makespan. Moreover, there is a lack of understanding of the structures of, and relationships between, different optimal multi-robot path planning formulations (e.g., whether there is a Pareto front for two different optimality criteria).

In this paper, we address these issues and systematically study three optimality objectives: minimizing the total arrival time, minimizing the makespan, and minimizing the total distance. First, we show that these objectives have a Pareto optimal structure for MPPp and MPPpr. That is, any pair of these three objectives cannot be simultaneously optimized for MPPp or MPPpr. These objectives are equivalent for the PMG problem. Continuing on to the subject of computational complexity, we show that computing an optimal solution for any of the three objectives is NP-hard for PMG, MPPp, and MPPpr. We point out that the NP-hardness results without rotations do not carry over to the case that allows rotations, because rotations may introduce better optimal solutions that can be computed efficiently.

## Problem Formulation

### Multi-robot path planning on graphs with parallel moves and rotations¹

[¹ We only provide a full description of MPPpr here. For complete formulations of PMG and MPPp, see (Kornhauser, Miller, and Spirakis 1984; Surynek 2010).]

Let G = (V, E) be a connected, undirected, simple graph with vertex set V = {vi} and edge set E = {(vi, vj)}. Let R = {r1, . . ., rn} be a set of robots that move with unit speed along the edges of G, with initial and goal locations on G given by the injective maps xI, xG : R → V, respectively. A path is a map pi : Z+ → V. A path pi is feasible for a robot ri if it satisfies the following properties: (1) pi(0) = xI(ri); (2) there exists a smallest ti ∈ Z+ such that pi(ti) = xG(ri); (3) for any t ≥ ti, pi(t) ≡ xG(ri); and (4) for any 0 ≤ t < ti, (pi(t), pi(t + 1)) ∈ E or pi(t) = pi(t + 1) (if pi(t) = pi(t + 1), robot ri stays at vertex pi(t) between the time steps t and t + 1). We say that two paths pi, pj are in collision if there exists t ∈ Z+ such that pi(t) = pj(t) or (pi(t), pi(t + 1)) = (pj(t + 1), pj(t)).

**Problem (MPPpr).** Given (G, R, xI, xG), find a set of paths P = {p1, . . ., pn} such that the pi's are feasible paths for the respective robots ri and no two paths pi, pj are in collision.

Synchronized rotations of robots along fully occupied cycles distinguish MPPpr from the majority of previously studied multi-robot path planning problems. In an MPPpr instance, even when the number of robots equals the number of vertices, robots may still be able to move.
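As a minimal executable restatement of these definitions (the encoding is ours: a path is given as a finite vertex list that is implicitly constant after its last entry, and adjacency as a dict of neighbor sets), the following sketch checks path feasibility and pairwise collision:

```python
def arrival_time(path, goal):
    """Smallest t with path[t'] == goal for every t' >= t, or None."""
    for t in range(len(path)):
        if all(v == goal for v in path[t:]):
            return t
    return None

def is_feasible(path, start, goal, adj):
    """Conditions (1)-(4) of the path definition above."""
    if path[0] != start or arrival_time(path, goal) is None:
        return False
    # Each step stays put or crosses an edge of G.
    return all(b == a or b in adj[a] for a, b in zip(path, path[1:]))

def in_collision(p, q):
    """p(t) == q(t) (meet), or a head-on swap across a shared edge."""
    at = lambda r, t: r[min(t, len(r) - 1)]   # paths are constant after arrival
    for t in range(max(len(p), len(q))):
        if at(p, t) == at(q, t):
            return True
        if (at(p, t), at(p, t + 1)) == (at(q, t + 1), at(q, t)):
            return True
    return False
```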
A simple feasible example here is n robots on an n-cycle, with each robot having the left (assuming an orientation of the cycle in the plane) adjacent vertex as its goal.

### Optimality

We examine three common objectives in optimal multi-robot path planning: minimizing the makespan (last arrival time), minimizing the total arrival time, and minimizing the total distance. Formally, let P = {p1, . . ., pn} be an arbitrary solution to a fixed MPPpr instance. For a path pi ∈ P, len(pi) denotes the length of the path pi, which is incremented by one each time the robot ri passes an edge. A robot, following pi, may visit the same edge multiple times. Recall that ti denotes the arrival time of robot ri.

**Objective 1 (Minimum Total Arrival Time).** Compute a path set P that minimizes ∑_{i=1}^{n} ti.

**Objective 2 (Minimum Makespan).** Compute a path set P that minimizes max_{1≤i≤n} ti.

**Objective 3 (Minimum Total Distance).** Compute a path set P that minimizes ∑_{i=1}^{n} len(pi).

For a PMG problem with a single unoccupied vertex, these objectives are all equivalent because only one robot can move in each time step. Therefore, the NP-hardness result from (Goldreich 1984) implies the following.

**Lemma 1.** Computing a minimum total arrival time, minimum makespan, or minimum total distance solution for a PMG problem is NP-hard.

The decision versions of the optimal MPPpr problems are defined as follows.

**MTATMPP (Minimum Total Arrival Time MPPpr)**
INSTANCE: An instance of MPPpr, and k ∈ Z.
QUESTION: Is there a solution path set P with a total arrival time no more than k?

**M3PP (Minimum Makespan MPPpr)**
INSTANCE: An instance of MPPpr, and k ∈ Z.
QUESTION: Is there a solution path set P with a makespan no more than k?

**MTDMPP (Minimum Total Distance MPPpr)**
INSTANCE: An instance of MPPpr, and k ∈ Z.
QUESTION: Is there a solution path set P with a total path distance no more than k?
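Given a candidate path set, the three objective values are direct to evaluate. A small sketch under the same list-based path encoding as above, returning the cost vector (total arrival time, makespan, total distance) that the Pareto discussion below compares:

```python
def cost_vector(paths, goals):
    """(total arrival time, makespan, total distance) of a path set;
    paths[i] is robot i's vertex sequence, assumed feasible and
    constant after arrival at goals[i]."""
    arrivals = [next(t for t in range(len(p))
                     if all(v == g for v in p[t:]))           # t_i
                for p, g in zip(paths, goals)]
    total_dist = sum(sum(a != b for a, b in zip(p, p[1:]))    # len(p_i)
                     for p in paths)
    return sum(arrivals), max(arrivals), total_dist
```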
## The Pareto Optimal Structure

In this section, we show that, in general, it is impossible to simultaneously optimize multiple objectives for MPPp and MPPpr. This is true for every pair from Objectives 1-3. Since the incompatibility proof for Objectives 2 and 3 was given in (Yu and LaValle 2013), we show Pareto optimal structures for the other two pairings of the three objectives. For each pair, we provide an infinite family of instances on which the two objectives are optimized by different solutions.

**Proposition 2.** For MPPp and MPPpr, optimality cannot always be simultaneously achieved for minimum makespan and minimum total arrival time.

Figure 1: An instance in which the graph is a single cycle. Discs with solid borders are the start locations of robots 1-3 (as numbered) and discs with dotted borders are the goal locations of robots 1-3.

PROOF. In Fig. 1, the start and goal vertices of robots 1-3 are as marked. Let the distance between the consecutive numbered discs on the left side of the oval be one each, and let the distance of the right path (between robot 3's vertex and 2's goal) be x ≥ 1. Clearly, optimal solutions require that all robots move in the same (clockwise or counterclockwise) direction until they reach their goals. If all robots move in the clockwise direction, the cost vector for makespan and total arrival time is (x+1, 2x+3). The cost vector is (x+4, x+12) if the robots move in the counterclockwise direction. Thus, a clockwise move always yields the solution with minimum makespan. However, when x > 9, the solution corresponding to counterclockwise movements has a smaller total arrival time.

**Proposition 3.** For MPPp and MPPpr, optimality cannot always be simultaneously achieved for minimum total arrival time and minimum total distance.

Figure 2: The start locations of robots 1-4 are marked with discs having solid borders (as numbered). Their goals are the numbered discs with dotted borders.

PROOF. In Fig. 2, the start and goal locations of robots 1-4 are as marked. The distance between any adjacent pair of nodes (discs, black dots) is one. The solution with minimum total arrival time sends robots 1-3 through the solid paths on the left and robot 4 through the dotted path on the right. This yields a total arrival time of 3 + 4 + 5 + 4 = 16 and a total distance of 3 + 3 + 3 + 4 = 13. On the other hand, the solution with minimum total distance sends all robots through the left path, which yields a total arrival time of 18 and a total distance of 12. By extending the lengths of the two vertical edges in the middle, we get an infinite family of examples.

## Intractability of MTATMPP and M3PP

Unlike finding feasible solutions, solving optimal versions of MPPpr appears to be intractable in general. In this section, we provide evidence for this claim by showing that MTATMPP and M3PP are NP-hard. We give a minimal and direct reduction from 3SAT (Garey and Johnson 1979) that works for both problems.

**Theorem 4.** MTATMPP is NP-hard.

PROOF. We reduce 3SAT to MTATMPP. Let (X, C) be an arbitrary instance of 3SAT with |X| = n variables x1, . . ., xn and |C| = m clauses c1, . . ., cm, in which cj = yj¹ ∨ yj² ∨ yj³. Without loss of generality, we may assume that the set of all literals, the yjᵏ's, contains both the unnegated and the negated form of each variable xi.

From the 3SAT instance, an MTATMPP instance is constructed as follows. For each variable xi, two paths of length m + 2 each, joined at the ends, are added (e.g., the four horizontal strips in the middle of Fig. 3). At the left end of the joined paths, at vertex vxi, sits a robot rxi, with its goal vertex, v′xi, at the right end. The robot can travel along either of the two paths to reach its goal in m + 2 steps. Call these two paths the i-th upper and lower paths. Then, for each clause cj = yj¹ ∨ yj² ∨ yj³, add a robot rcj, sitting at a vertex vcj. The vertex vcj is connected to three paths associated with the three variables corresponding to cj's three literals. If a literal is the unnegated (resp., negated) form of variable xi, then vcj is connected to the i-th upper (resp., lower) path at a vertex of distance j from vxi. For example, if c1 = x1 ∨ ¬x3 ∨ x4, then vc1 is connected to the first upper, third lower, and fourth upper paths, all at vertices of distance 1 from the left end of the "strips" (see Fig. 3).

Figure 3: An MPPpr instance constructed from the 3SAT instance ({x1, x2, x3, x4}, {x1 ∨ ¬x3 ∨ x4, ¬x1 ∨ x2 ∨ ¬x4, ¬x2 ∨ x3 ∨ x4}). The red vertices are the start vertices and the blue ones the goals.

After the clause structures are created, the goals for the rcj's are added. For this purpose, a path of length m is added (e.g., the leftmost path with blue vertices in Fig. 3), with the left vertex being the goal for rc1 and the right vertex the goal for rcm.
The goal vertex for rcm, v′cm, is connected to all vxi's, the start vertices of the robots rxi. Having constructed an MPPpr instance, setting k = (n + m)(m + 2) fully describes an instance of MTATMPP. Fig. 3 gives the complete graph for the MTATMPP instance constructed from the 3SAT instance ({x1, x2, x3, x4}, {x1 ∨ ¬x3 ∨ x4, ¬x1 ∨ x2 ∨ ¬x4, ¬x2 ∨ x3 ∨ x4}).

If the 3SAT instance is satisfiable, let x̂1, . . ., x̂n be an assignment of truth values to the variables. For each variable xi, if x̂i is true (resp., false), then let robot rxi take the lower (resp., upper) path on its strip. The upper (resp., lower) path is then free to use for transporting the robots corresponding to the clauses, the rcj's. All m + n robots can start moving at time step zero and arrive at their desired goals at time step m + 2. The total time is then (m + n)(m + 2).

On the other hand, if the MPPpr instance has a solution with total arrival time (n + m)(m + 2), then every robot must start moving at time step zero, follow a shortest path, and never stop until it reaches its goal. This forces every robot rxi to take either the upper or lower path on its own strip, which prevents any robot rcj from using the same path in the opposite direction. If robot rxi uses the upper (resp., lower) path, let x̂i = true (resp., false). The resulting assignment x̂1, . . ., x̂n satisfies the 3SAT instance.

**Corollary 5.** M3PP is NP-hard.

PROOF. In the proof of Theorem 4, after the MPPpr instance is created, setting k = m + 2 as the minimum makespan produces an M3PP instance from the 3SAT instance. The rest of the proof remains essentially the same.

In our many-one reduction, it is clear that rotations of robots along cycles do not contribute to better paths. Therefore, the reduction works for the time optimal MPPp problems as well. In particular, our proof greatly simplifies the NP-hardness proof of minimum makespan MPPp from (Surynek 2010).

**Corollary 6.** Finding a minimum total arrival time or a minimum makespan solution for MPPp is NP-hard.

The reduction illustrates one reason that makes finding time optimal solutions hard: when multiple robots want to travel in opposite directions on a few shared paths, it is critical that the right paths are picked if time optimality is sought. Moreover, our proof shows an even stronger intractability result: computing a time optimal solution is NP-hard even when there are only two groups of robots (i.e., the robots within each group are interchangeable).

**Theorem 7.** MTATMPP and M3PP remain NP-hard, even when there are only two groups of robots.

PROOF. In the reduction from 3SAT, let the variable robots belong to one group and the clause robots belong to another group.
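The Theorem 4 gadget is also easy to generate mechanically. The following sketch builds its adjacency structure from a 3SAT instance; the encoding is our own (vertex labels and the exact length convention of the clause-goal path are illustrative choices, not the paper's):

```python
from collections import defaultdict

def build_reduction(n_vars, clauses):
    """clauses is a list of triples of signed literals, e.g. (1, -3, 4)
    encodes x1 OR (NOT x3) OR x4. Returns an undirected adjacency dict."""
    m = len(clauses)
    adj = defaultdict(set)

    def edge(a, b):
        adj[a].add(b)
        adj[b].add(a)

    for i in range(1, n_vars + 1):
        # Two length-(m+2) paths from v_xi to v'_xi (the i-th upper and
        # lower strips); ('strip', i, side, k) lies at distance k from v_xi.
        for side in ("up", "low"):
            prev = ("vx", i)
            for k in range(1, m + 2):
                edge(prev, ("strip", i, side, k))
                prev = ("strip", i, side, k)
            edge(prev, ("vx'", i))

    for j, clause in enumerate(clauses, start=1):
        # v_cj attaches at distance j on the strip matching each literal:
        # the upper strip if unnegated, the lower strip if negated.
        for lit in clause:
            side = "up" if lit > 0 else "low"
            edge(("vc", j), ("strip", abs(lit), side, j))

    # Goal path for the clause robots (one goal vertex per clause here),
    # with v'_cm linked back to every v_xi as in the text.
    for j in range(1, m):
        edge(("vc'", j), ("vc'", j + 1))
    for i in range(1, n_vars + 1):
        edge(("vc'", m), ("vx", i))
    return adj

# The Figure 3 instance: ({x1..x4}, {x1 v -x3 v x4, -x1 v x2 v -x4, -x2 v x3 v x4})
graph = build_reduction(4, [(1, -3, 4), (-1, 2, -4), (-2, 3, 4)])
```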
## Intractability of MTDMPP

Unfortunately, the simple structure from Fig. 3 is not as useful in proving the NP-hardness of MTDMPP, because there is no need for the robots to synchronize their movements unless they are forced to. It is possible, however, to force such a synchronization, as shown in (Ratner and Warmuth 1990), in which 2/2/4 SAT is reduced to the distance optimal (n²−1)-puzzle. 2/2/4 SAT is a specialized version of the boolean satisfiability problem.

**2/2/4 SAT**
INSTANCE: An instance of the boolean satisfiability problem with m boolean variables and m clauses. Each clause has exactly four literals and each variable appears four times in the clauses, twice negated and twice unnegated.
QUESTION: Does the instance have a satisfying assignment?

2/2/4 SAT is NP-hard and has the property that, given a satisfying assignment, each clause has exactly two true literals and two false literals (Ratner and Warmuth 1990). Once rotation is allowed, the proof from (Ratner and Warmuth 1990) (or (Goldreich 1984)) no longer works, because its synchronization scheme depends on the fact that only robots near the only unoccupied vertex may move.

**Proof outline.** To show that MTDMPP is NP-hard, we adapt the construction from (Ratner and Warmuth 1990) with some significant changes. To reduce proof complexity, we will build an MPPpr instance such that all vertices are occupied by robots. The essential idea behind the main construct of the reduction (Fig. 6, to be introduced in detail shortly) is to force the robots to go through a predetermined "path" along the construct. If the robots deviate from this path, significant extra distance cost will be incurred. On the other hand, the construct ensures that the robots, following the predetermined "path", can reach the desired goals if and only if the associated 2/2/4 SAT instance is solvable.

To build a new scheme for synchronizing robots' movements in an optimal solution, we need several gadgets. The first gadget (see, e.g., Fig. 4) allows distance optimal transportation of three robots. In the structure, there are 2ℓ + 2 robots, and r1, r2, r3 are the robots to be transported.

Figure 4: A gadget for optimally transporting three robots in the middle path. [top] Initial configuration of robots. [bottom] The final configuration.

The starts and goals for these three robots may be temporary; the starts and goals for all other robots are final. We call such a gadget a forward path. In a forward path, each robot must move at least a distance of ℓ to reach its goal. The gadget can only be joined to other structures at the two short sides, in such a way that all shortest paths between any robot ri ∈ {r4, . . ., r2ℓ+2} and its goal are within the forward path. Furthermore, any path connecting ri and its goal without using a long side of the forward path must have a distance of at least 2ℓ. It is clear that the optimal total distance for all robots, including r1-r3, is 2ℓ² + 2ℓ.

**Proposition 8.** Transporting multiple groups (one group must reach the right end before another group can be transported) of robots through a forward path incurs an extra distance of Ω(ℓ) for robots r4, . . ., r2ℓ+2.

PROOF SKETCH. Each group of robots to be transported must use a long side of a forward path and pushes all other robots on the long side through with them. It can be checked (simple but tedious case analysis and counting are involved) that intermediate configurations between transporting different groups of robots will require robots r4, . . ., r2ℓ+2 to deviate from optimal paths by at least Θ(ℓ) in total.

**Proposition 9.** Transporting a single group of more than three robots through a forward path incurs an extra distance of Ω(ℓ) for robots r4, . . ., r2ℓ+2.

PROOF SKETCH. When more than three robots are in a forward path at the same time, some robot(s) in r4, . . ., r2ℓ+2 cannot stay on the forward path and must travel extra distance. This induces an extra cost of at least Θ(ℓ).
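The 2ℓ² + 2ℓ optimum quoted above for a forward path is just a direct count: each of the 2ℓ + 2 robots in the gadget must travel a distance of at least ℓ, and this bound is achievable:

```latex
(2\ell + 2)\cdot\ell = 2\ell^{2} + 2\ell
```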
Figure 5: A gadget for synchronizing the movements of robots. [top] Initial configuration. [bottom] Final configuration.

The second gadget, given in Fig. 5, consists of a single cycle (formed by two paths joined at the ends) that will be connected to other gadgets at the two end vertices on the left and right. The length of the cycle is 8ℓ. Denote such a gadget a backward path. The function of a backward path is to push r1 into the cycle and r2 out of the cycle as a synchronization mechanism. Every robot in the middle of the cycle has its goal one vertex to its left or right, as indicated by the arrows. An optimal solution for a backward path is to rotate all robots in the direction of the arrows, which yields a total distance of 8ℓ. Before the rotation, r1 may come from elsewhere to its start location, and after the rotation, r2 may move elsewhere. All other start and goal locations are final. The optimal cost for transporting all robots, including r1, r2, is 8ℓ. It is clear that if r1, r2 are not moved at the same time, then an extra cost of at least 4ℓ is incurred. Note that moving a robot through a backward path incurs an Ω(ℓ²) cost to the robots on the path.

Figure 6: Reduction of 2/2/4 SAT to MTDMPP. [The figure shows m variable diamonds x1, . . ., xm on top, the squares TC and FC at the sides, and clause nodes c1, . . ., cm at the bottom.]

The third and main construct (similar to that used in (Ratner and Warmuth 1990)) is given in Fig. 6, constructed from a 2/2/4 SAT instance with m variables. In the construct, each solid edge (pi, qj, p̄i, q̄j) represents a forward path and each dotted edge (p′i, q′i, p̄′i, q̄′i) a backward path. On the top half (above the squares marked TC and FC) there are m diamond structures. We call these the variable diamonds. The details of a variable diamond are given in Fig. 7. The start locations for robots ai-fi and xi1, xi2, x̄i1, x̄i2 (the robots representing the unnegated and negated literals) are given in the figure. The goal locations of bi, ci are the start locations of xi1, xi2, respectively. The same goes for ei, fi. The literals will be moved out of the variable diamond. The goals of ai, di are in the next variable diamond (the goals of ai−1, di−1 are marked as dotted circles in Fig. 7).

Figure 7: The structure of a variable diamond. The top of the variable diamond for x1 is slightly different and is shown in the bottom right corner.

The squares on the sides, TC and FC, each contain a strip of 3m vertices and robots. TC has the structure given in Fig. 8; the structure of FC is similar.

Figure 8: The gadget for temporarily hosting the true literals.

On the bottom half of Fig. 6 there are m clause nodes; the structure of the j-th node is given in Fig. 9. These clause nodes host the goal locations for the 4m literals (these are yj1-yj4). The start locations of gj, hj and the goal locations of gj−1, hj−1 are as marked. The goal location of gm−1 will be given shortly; the goal of hm−1 is an arbitrary unused location in the last (m-th) clause node.

Figure 9: The structure of the clause node j.

Finally, the backward path connecting the last clause node and the first variable diamond is given in Fig. 10, which specifies the start location of d1 and the goal location of gm−1. So far the start and goal locations of almost all robots are specified, with the exception of some robots in the 3 × 3 grids, TC, FC, and near the ends of backward paths. The goals for these robots can be set arbitrarily as long as they remain local with respect to their start locations (i.e., within a constant distance) and consistent. At this point, a full MPPpr problem has been constructed from the 2/2/4 SAT instance. Recall that we require a forward path to be joined to the rest of the graph such that, for an arbitrary robot ri ∈ {r4, . . ., r2ℓ+2} on the forward path, a path connecting ri and its goal must have a distance of 2ℓ or more if it does not pass through the forward path itself. It can be checked that this is satisfied by the MPPpr instance. We set ℓ = m⁴.
The goals for these robots can be set arbitrarily as long as they remain local with respect to their start location (i.e. within a constant distance) and consistent. So far, a full MPPpr problem has been constructed from the 2/2/4 SAT instance. Recall that we require a forward path to be joined to the rest of the graph such that for an arbitrary robot ri ∈{r4, . . ., r2ℓ+2} on the forward path, a path connecting ri and its goal must have a distance of 2ℓ or more if it does not pass through the forward path itself. It can be checked that this is satisfied by the MPPpr instance. We set ℓ = m[4]. ----- d1 g m-1 Figure 10: The backward path connecting the last clause node and the first variable diamond. **Lemma 10. If an instance of 2/2/4 SAT is satisfiable,** _then the corresponding MPPpr problem has a solution with_ _a total distance of 16m[9]_ + 48m[5] 24m[4] + O(m[2]). _−_ PROOF. Suppose that the 2/2/4 SAT instance is satisfiable. Let _x1, . . .,_ _xm be a satisfying assignment to the vari-_ � � ables x1, . . ., xm. The paths for taking the robots to their goals are described below. The first moves take a1 to TC. If _x1 is true, a1, b1, c1_ � can be transported through the top left forward path in the first variable diamond. If _x1 is false, using a constant num-_ � ber of moves (see Proposition 5 in (Yu and LaValle 2013)), _a1 can be exchanged with the robot at the top right corner_ of the top 3 3 grid of the first variable diamond (i.e., on _×_ top of e1, f1). Such local rearrangements will be assumed from now on without explicitly stating so. Then a1, e1, f1 will take the top right forward path. Without loss of generality, assume that the right path is taken. Once a1, e1, f1 get to the right 3 × 3 grid in the first variable diamond, e1, f1 stay and a1, x11, x12 can be moved to the bottom 3 × 3 grid of the variable diamond. They can then be moved to TC using the left forward path. Once a1 is in TC, it can be used to free a2 in p[′]2[. In-] ductively, all the 2m literals that are set to true can be collected to TC along with am. These literals can then be distributed to the clause nodes, two at a time (including _am, g1, . . ., gm−1, three robots will actually be transported_ at a time). Since _x1, . . .,_ _xm is a satisfying assignment, TC_ � � contains the robots such that two of which have goals in each clause node. Once gm−1 gets to the last clause node, it can then free d1 on the top, and the right half of the paths can be “traversed” so that all robots can reach their desired goals. There are 8m forward paths and 4m 3 backward paths. _−_ These induce a total distance cost of 8m(2ℓ[2] + 2ℓ) + (4m _−_ 3)(8ℓ) = 16m[9] +48m[5] 24m[4], plus some local rearrange_−_ ments. These local rearrangements can be performed with a total distance cost of O(m[2]) (again, see Proposition 5 in (Yu and LaValle 2013)). **Lemma 11. If the MPPpr problem reduced from an** **2/2/4 SAT instance has a solution with a total distance** _of 16m[9]_ + 48m[5] 24m[4] + O(m[2]), the 2/2/4 SAT in_−_ _stance is satisfiable._ PROOF. Through straightforward counting, it can be checked that the least amount of distance connecting the start and goal locations of the robots is 16m[9] + 48m[5] _−_ 24m[4] + O(m[2]). For such a total distance to be achievable, the forward and backward paths must be followed in a pattern similar to that from the proof of Lemma 10 because if not, an extra cost of Ω(ℓ) = Ω(m[4]) is incurred by Propositions 8, 9 and properties of backward paths. 
**Lemma 11.** If the MPPpr problem reduced from a 2/2/4 SAT instance has a solution with a total distance of 16m⁹ + 48m⁵ − 24m⁴ + O(m²), the 2/2/4 SAT instance is satisfiable.

PROOF. Through straightforward counting, it can be checked that the least amount of distance connecting the start and goal locations of the robots is 16m⁹ + 48m⁵ − 24m⁴ + O(m²). For such a total distance to be achievable, the forward and backward paths must be followed in a pattern similar to that from the proof of Lemma 10, because if not, an extra cost of Ω(ℓ) = Ω(m⁴) is incurred by Propositions 8 and 9 and the properties of backward paths. This means that a forward path can only be used to transport a single group of no more than three robots, and robots on a backward path can only move once (excluding robots at the two end vertices). Otherwise, the Ω(m⁴) extra cost will take the total distance cost to 16m⁹ + 48m⁵ − 24m⁴ + Ω(m⁴), which is strictly larger than 16m⁹ + 48m⁵ − 24m⁴ + O(m²) for large enough m.

Suppose that a feasible solution path set with a total distance of 16m⁹ + 48m⁵ − 24m⁴ + O(m²) exists. At the beginning, no backward path can be taken due to the synchronization/locking mechanism. For example, the backward path connecting the last clause node and the first variable diamond cannot be used because gm−1 is not in the last clause node. This suggests that the only possible first move (without incurring an Ω(m⁴) extra cost) is to move three robots, a1 with c1, d1 or e1, f1, along the top left or top right forward path in the first variable diamond (since a1-f1 must all travel down, and no forward path can transport more than three robots at a time or multiple groups, each forward path must take three robots). Following this argument, 2m of the 4m literal robots must go to TC and the other 2m must go to FC. The robots going to FC must include one pair of literals (either unnegated or negated, but not both) per variable. These robots then must end in the clause nodes, with each clause node getting two literals from TC and two from FC. Setting the literals corresponding to the literal robots passing through TC yields a satisfying assignment.

Lemmas 10 and 11 prove the following theorem.

**Theorem 12.** MTDMPP is NP-hard, even if all vertices are occupied by robots.

PROOF. After the MPPpr instance is constructed, setting k = 16m⁹ + 48m⁵ − 24m⁴ + m³ proves the claim for large enough m.

## General Intractability of Optimal Multi-robot Path Planning Problems on Graphs

We conclude this paper with the following main result.

**Theorem 13.** Computing a minimum total arrival time, a minimum makespan, or a minimum total distance solution is NP-complete for PMG, MPPp, and MPPpr.

PROOF. Lemma 1 covers the PMG part. Theorem 4, Corollary 5, and Theorem 12 cover MPPpr. It is also clear that Theorem 4 and Corollary 5 generalize to MPPp without modification, because the optimal paths do not use synchronous rotations of robots. We are left to show that computing a distance optimal solution for MPPp is NP-hard. This is again covered by the distance optimal result from (Ratner and Warmuth 1990), because parallel moves do not shorten the total distance traveled. These problems are NP-complete because PMG, MPPp, and MPPpr are in NP (Kornhauser, Miller, and Spirakis 1984; Yu 2012).

## References

Garey, M. R., and Johnson, D. S. 1979. Computers and Intractability: A Guide to the Theory of NP-Completeness. W. H. Freeman.
Goldreich, O. 1984. Finding the shortest move-sequence in the graph-generalized 15-puzzle is NP-hard. Laboratory for Computer Science, Massachusetts Institute of Technology, unpublished manuscript.
Goraly, G., and Hassin, R. 2010. Multi-color pebble motion on graph. Algorithmica 58:610–636.
Kornhauser, D.; Miller, G.; and Spirakis, P. 1984. Coordinating pebble motion on graphs, the diameter of permutation groups, and applications. In Proceedings of the 25th Annual Symposium on Foundations of Computer Science (FOCS '84), 241–250.
Loyd, S. 1959. Mathematical Puzzles of Sam Loyd. New York: Dover.
Ratner, D., and Warmuth, M. 1990. The (n²−1)-puzzle and related relocation problems. Journal of Symbolic Computation 10:111–137.
Ryan, M. R. K. 2008. Exploiting subgraph structure in multi-robot path planning. Journal of Artificial Intelligence Research 31:497–542.
Standley, T., and Korf, R. 2011. Complete algorithms for cooperative pathfinding problems. In Twenty-Second International Joint Conference on Artificial Intelligence, 668–673.
Story, E. W. 1879. Note on the '15' puzzle. American Journal of Mathematics 2:399–404.
Surynek, P. 2010. An optimization variant of multi-robot path planning is intractable. In The Twenty-Fourth AAAI Conference on Artificial Intelligence (AAAI-10), 1261–1263.
Yu, J., and LaValle, S. M. 2012. Multi-agent path planning and network flow. In The Tenth International Workshop on Algorithmic Foundations of Robotics.
Yu, J., and LaValle, S. M. 2013. Planning optimal paths for multiple robots on graphs. In Proceedings IEEE International Conference on Robotics & Automation. To appear.
Yu, J. 2012. Diameters of permutation groups on graphs and linear time feasibility test of pebble motion problems. arXiv:1205.5263.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1609/aaai.v27i1.8541?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1609/aaai.v27i1.8541, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://ojs.aaai.org/index.php/AAAI/article/download/8541/8400" }
2013
[ "JournalArticle", "Conference" ]
true
2013-06-29T00:00:00
[]
9,426
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01d1b66bd789dde4e73417976a1128c6b89107b4
[ "Computer Science" ]
0.864707
A Tool for Choreography Analysis Using Collaboration Diagrams
01d1b66bd789dde4e73417976a1128c6b89107b4
IEEE International Conference on Web Services
[ { "authorId": "1706329", "name": "T. Bultan" }, { "authorId": "144984705", "name": "Chris Ferguson" }, { "authorId": "144478993", "name": "Xiang Fu" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE ICWS", "IEEE Int Conf Web Serv" ], "alternate_urls": null, "id": "00e31f26-8a90-465e-8962-9ff3fa9d6f0c", "issn": null, "name": "IEEE International Conference on Web Services", "type": "conference", "url": "https://conferences.computer.org/icws/" }
null
# A Tool for Choreography Analysis Using Collaboration Diagrams

## Tevfik Bultan, Chris Ferguson
University of California Santa Barbara
{bultan,fergy}@cs.ucsb.edu

## Xiang Fu
Hofstra University
Xiang.Fu@hofstra.edu

**Abstract**

Analyzing interactions among peers that interact via messages is a crucial problem due to the increasingly distributed nature of current software systems, especially the ones built using the service oriented computing paradigm. In service oriented computing, interactions among peers participating in a composite service involve message exchanges across organizational boundaries in a distributed computing environment. In order to build such systems in a reliable manner, it is necessary to develop techniques for the analysis and verification of interactions among services. Collaboration diagrams provide a convenient visual model for modeling service interactions. In this paper, we present a tool that 1) checks the realizability of interactions specified by a given collaboration diagram, 2) verifies the LTL properties of the interactions specified by the given collaboration diagram by automatically converting it to a state machine model, and 3) synthesizes peer state machines that realize the set of interactions specified by the given collaboration diagram.

## 1 Introduction

Service oriented computing provides technologies that enable multiple organizations to integrate their businesses over the Internet. Typical execution behavior in such a distributed system involves a set of autonomous peers interacting with each other through messages. Choreography specification languages, such as the Web Services Choreography Description Language (WS-CDL), are used for the specification of such interactions. A choreography specification identifies the global ordering of the messages exchanged among the peers participating in a composite service. We call such message sequences conversations, i.e., a choreography specification identifies the set of allowable conversations for a composite web service. Collaboration diagrams (called communication diagrams in [20]) provide a convenient visual formalism for specifying the choreography among the services (peers) participating in a composite service [6].

Characterization of interactions using a global view, as collaboration diagrams allow us to do, can lead to specifications of choreographies that may not be implementable. Hence, using collaboration diagrams for choreography specification leads to the following realizability problem: given a choreography specification, is it possible to find a set of distributed peers which interact exactly according to the choreography specification? If a collaboration diagram is realizable, then we can check the properties of the interactions among the peers by investigating the possible message orderings allowed by the collaboration diagram.

In this paper we present a toolset for the verification and analysis of choreographies specified using collaboration diagrams. As shown in Figure 1, our tool consists of six components: The first component constructs a dependency graph for the events in the input collaboration diagram. The second component checks the realizability of the input collaboration diagram by checking a set of conditions on this dependency graph. The third component converts the collaboration diagram to a finite state automaton such that the language accepted by the automaton is equal to the set of interactions specified by the input collaboration diagram. The fourth component converts the collaboration diagram automaton to the input language of the Web Service Analysis Tool (WSAT) [11] (a tool developed for checking the realizability of web service choreography specifications) in order to check a different set of realizability conditions. The fifth component converts the collaboration diagram automaton to a Promela specification in order to check LTL properties using the Spin model checker [13]. Finally, the sixth component synthesizes a set of state machines that generate exactly the set of interactions specified by the collaboration diagram automaton.
The fourth components converts the collaboration diagram automaton to the input language of the Web Service Analysis Tool (WSAT) [11] (a tool developed for checking realizability web service choreography specifications) to check a different set of realizability conditions. The fifth component converts the collaboration diagram automaton to a Promela specification in order to check LTL properties using the Spin model checker [13]. Finally, the sixth compo 1 ----- nent synthesizes a set of state machines that generate exactly the set of interactions specified by the collaboration diagram automaton. We collected a set of collaboration diagrams from the literature and analyzed them using this toolset. Our experiments indicate that realizability analysis, LTL model checking and synthesis for collaboration diagrams is very efficient and can easily be used in practice. Our contributions in this paper can be summarized as follows: 1) Extending the semantics for a single collaboration diagram given in [6] to collaboration diagram sets and graphs, with increasing expressive power. 2) An algorithm for converting collaboration diagrams/sets/graphs to an automaton that accepts the same set of conversations. 3) A translator for converting the collaboration diagram automaton to a Promela model, enabling LTL model checking using the Spin model checker [13]. 4) Implementing the realizability check for single collaboration diagrams from [6]. 5) A translator for converting the collaboration diagram automaton to a Conversation Protocol, enabling realizability check for collaboration diagram sets/graphs using the realizability analysis for conversation protocols implemented in Web Service Analysis Tool [11]. 6) A peer synthesis algorithm for generating state machine implementations for peers for realizable collaboration diagrams/sets/graphs by projecting the collaboration diagram automaton to each peer participating to the collaboration. 7) Experiments with several collaboration diagrams from the literature. **Related Work** Message Sequence Charts (MSCs) provide another visual model for specification of interactions in distributed systems. MSC model has also been used in modeling and verification of web services [8]. However, collaboration diagrams provide a global view of interactions where as MSCs provide a local view. The realizability problem for MSCs [2] have been studied before. However as we mentioned above, the type of interactions specified by collaboration diagrams and MSCs are different. There has been work on formalizing choreography specifications using process algebras [7, 16]. Our work is complementary to work on formalizing semantics of choreography specification languages. Our focus in this paper is formal visual representations that can be used by service developers to visualize their designs. There has been earlier work on using various UML diagrams in modeling different aspects of service compositions (for example [3, 18]). Specification and analysis of web service interactions using conversation protocols has been investigated [10, 12]. In this paper, we investigate the relationship between the collaboration diagrams and the conversation protocols using the collaboration diagram semantics from [6]. A complementary approach to the one presented here is discussed in [17], where realizability of collaboration diagrams is analyzed using process algebra encodings. 
However, compared to these earlier works, in this paper we |scheduler: FactoryScheduler 1: start A2,B2/2:completed manager: FactoryJobManager rtOven A2:completedOven 1/B1:startRobot B2:com oven:Oven robot:Robot|Col2| |---|---| |oven:Oven|robot:Robot| **Figure 2. A collaboration diagram (top) and its depen-** dency relation (bottom) Figure 2 shows an example collaboration diagram from the UML 1.3 specification.The diagram consists of four peers Scheduler, Manager, Oven, Robot. The edges that connect the boxes shows the links between the peers. A link between two peers indicate that they can send each other messages. In collaboration diagrams, message send events are shown as arrows drawn over the links. The direction of the arrow indicates the sender and the receiver (the arrow points to the receiver). Each send event is marked with a sequence label. The sequence labels specify the ordering of the message send events. Formally, a _collaboration_ _diagram_ C = (P, L, M, E, D) consists of a set of peers P, a set of links L ∈ P × P, a set of messages M, a set of message extend the collaboration diagram semantics to collaboration diagrams sets and collaboration diagram graphs which have more expressive power. ## 2 Formal Model We assume that a choreography specification consists of a finite set of peers P, and a finite set of messages M . Each message m ∈ M has a unique sender and a unique receiver denoted by send (m) ∈ P and recv (m) ∈ P, respectively. Note that, messages can always be converted to this form by concatenating each message with tags its sender and its receiver. A conversation σ is a sequence of messages exchanged among the peers that participate to a composite web service, i.e., σ ∈ M [∗]. A choreography C is a set of conversations, i.e., C ⊆ M [∗]. **1/A1:startOven** **A2:completedOven** **1/B1:startRobot** **B2:completedRobot** 2 ----- send events E and a dependency relation D ⊆ E × E among the message send events [6]. For each message m ∈ M, the sender and the receiver of m must be linked, i.e., (send (m), recv (m)) ∈ L. In a collaboration diagram, each message send event has a unique sequence label. Each sequence label consists of a possibly empty prefix followed by a sequence number. The numeric ordering of the sequence numbers defines an implicit total ordering among the message send events with the same prefix. Each prefix represents a message thread where each message thread refers to a set of messages that have a total ordering and that can be interleaved arbitrarily with other messages. For example, event A2 can occur only after the event A1, but B1 and A2 do not have any implicit ordering. In addition to the implicit ordering defined by the sequence numbers, it is possible to explicitly state the events that should precede an event e by listing their sequence labels (followed by the symbol “/”) before the sequence label of the event e. For example if an event e is marked with “B2,C3/A2” then A2 is the sequence label of the event e, and the events with sequence labels B2, C3 and A1 must precede e. Also, message send events can be marked to be conditional, denoted as a suffix “[condition]”, or iterative, denoted as a suffix “*[condition]”, where condition is written in some pseudocode. Formally, the set of send events E is a set of tuples of the form (l, m, r) where l is the label of the event, m ∈ M is a message, and r ∈{1, ?, ∗} is the recurrence type. 
We denote the size of the set E with |E| and for each event e ∈ E, e.l, e.m, and e.r denote the unique sequence label, the message and the recurrence type for event e, respectively. Each event e ∈ E denotes a message send event where the peer send(e.m) sends a message e.m to the peer recv (e.m). The recurrence type r ∈{1, ?, ∗} determines if the send event corresponds to a single message send event (r = 1), a conditional message send event (r =?), or an iterative message send event (r = ∗). The dependency relation D ⊆ E × E denotes the ordering among the message send events where (e1, e2) ∈ D means that e1 has to occur before e2. The bottom of the Figure 2 shows the dependency graph for the the collaboration diagram shown at the top. We assume that there are no circular dependencies, i.e., the dependency graph (E, D), where the send events in E form the vertices and the dependencies in D form the edges, should be a directed acyclic graph (dag). Given a dependency relation D ⊆ E × E let pred (e) denote the predecessors of the event e where e[′] ∈ pred (e) if there exists a set of events e1, e2, . . ., ek where k > 1, e[′] = e1, e = ek, and for all i ∈ [1..k − 1], (ei, ei+1) ∈ D. We assume that there are no redundant dependencies in D (i.e., it is the transitive reduction). We call e[′] an immediate predecessor of e if (e[′], e) ∈ D. We call an event eI with pred (eI ) = ∅ an initial event of D and an event eF where for all e ∈ E eF ̸∈ pred (e) a final event of D. Given a collaboration diagram D = (P, L, M, E, D) we denote the choreography defined by D as C(D) where C(D) ⊆ M [∗]. A conversation σ = m1m2 . . . mn is in C(D), i.e., σ ∈C(D), if and only if σ ∈ M [∗] and there exists a corresponding matching sequence of message send events γ = e1e2 . . . en such that: 1) for all i ∈ [1..n] mi = ei.m and ei ∈ E; 2) for all i, j ∈ [1..n] (ei, ej) ∈ D ⇒ i < j; 3) for all e ∈ E (for all i ∈ [1..n] ei ̸= e) ⇒ (e.r = ∗∨ e.r =?); and 4) for all e ∈ E if there exists i, j ∈ [1..n] such that i ̸= j ∧ ei = ej then ei.r = ∗. The first condition above ensures that each message in the conversation σ is equal to the message of the matching send event in the event sequence γ. The second condition ensures that the ordering of the events in the event sequence γ does not violate the dependencies in D. The third condition ensures that if an event does not appear in the event sequence γ then it must be either a conditional event or an iterative event. Finally, the fourth condition states that only iterative events can be repeated in the event sequence γ. **Collaboration Diagram Sets** Without the conditional or iterative events, a single collaboration diagram with a single message thread specifies a single conversation. The conditional and iterative events and message threads introduce nondeterminism to collaboration diagrams, enabling specification of multiple conversations with a single collaboration diagram. However, the level of nondeterminism in a single collaboration diagram is still quite limited. For example, assume that we have three messages m1,m2 and m3 sent from one peer to another peer and we would like to specify the following choreography {m1m2m3, m3m1m2}. It is not possible to specify this simple choreography using a single collaboration diagram. However, it is possible to specify each conversation in this choreography using a separate collaboration diagram. So, the choreography we want to describe is the union of the choreographies of two different collaboration diagrams. 
We define a collaboration diagram set as S = {D1, D2, . . ., Dn} where n is the number of collaboration diagrams in S and each Di is in the form Di = (P, L, M, Ei, Di), i.e., the collaboration diagrams in a collaboration diagram set only differ in their event sets and dependencies. (we can always convert a set of collaboration diagrams to this form without changing their interaction sets by replacing the individual peer, link and message sets by their unions.) We define the set of interactions defined by a collaboration diagram set as C(S) = [�]D∈S [C][(][D][)][.] **Collaboration Diagram Graphs** Although collaboration diagrams sets increase the expressiveness of collaboration diagrams, they still have an important limitation. It is not possible to specify looping behaviors using collaboration 3 ----- diagram sets. The only looping construct in collaboration diagrams/sets is the iterative event that specifies the repetition of a single event. Assume that we have two messages m1 and m2 exchanged among two peers and we would like to specify the following choreography (m1m2)[∗], i.e., zero or more repetitions of the message sequence m1m2. This could be a typical request/acknowledgement sequence for example, which can be repeated arbitrary many times. It is not possible to specify this choreography using collaboration diagram sets, however by allowing the concatenation of choreographies specified by different collaboration diagrams, we can specify such choreographies. A collaboration diagram graph G = (vs, Z, V, O) is a directed graph which consists of a set of vertices V, a set of directed edges O ⊆ V × V, an initial vertex vs ∈ V, a set of final vertices Z ⊆ V, where each vertex in v ∈ V is a collaboration diagram v = (P, L, M, Ev, Dv). As with the collaboration diagram sets, to simplify our presentation, we assume that the collaboration diagrams in a collaboration diagram graph only differ in their event sets and dependency relations. Given a collaboration diagram graph G = (vs, Z, V, O) we define the set of interactions defined by G as C(G). The interactions of a collaboration diagram graph is defined as the concatenation of the interactions of its vertices on a path that starts from the initial vertex and ends at a final vertex. Formally, an interaction σ ∈ M [∗], is in the interaction set of G, i.e., σ ∈G, if and only if σ = σ1σ2 . . . σn where for all i ∈ [1..n] σi ∈ M [∗] and there exists a path v1, v2, . . ., vn in G such that v1 = vs, vn ∈ Z, for all i ∈ [1..n − 1] (vi, vi+1) ∈ O and for all i ∈ [1..n] σi ∈C(vi). As the two simple examples we discussed above demonstrate, collaboration diagram sets are strictly more powerful than single collaboration diagrams, and collaboration diagram graphs are strictly more powerful than collaboration diagram sets. ## 3 Automata Construction Figure 3 shows an automaton automatically constructed from the collaboration diagram shown in Figure 2. The language accepted by this automaton is exactly the choreography specified by the collaboration diagram in Figure 2. Given a collaboration diagram D = (P, L, M, E, D), the corresponding collaboration diagram automaton AD = (M, T, s, F, δ) is a nondeterministic FSA where M is a set of messages such that for each m ∈ M recv (m) ∈ P and send (m) ∈ P, T is the finite set of states, s ∈ T is the initial state, F ⊆ T is the set of final states, and δ ⊆ T × (M ∪{ǫ}) × T is the transition relation. 
A collaboration diagram automaton has two types of transitions: (1) (t1, m, t2) denotes a message transmission where message m is sent from peer send(m) to peer recv(m), and (2) (t1, ε, t2) denotes an ε-transition.

**Figure 3. Automata construction**

We define the choreography C(A) of the collaboration diagram automaton A as the language accepted by A, i.e., C(A) ⊆ M∗ and σ ∈ C(A) if and only if σ = m1 m2 . . . mn where for all i ∈ [1..n], mi ∈ M, and there exists a path t1, t2, . . ., tn, tn+1 in A such that t1 = s, tn+1 ∈ F, and for all i ∈ [1..n], (ti, mi, ti+1) ∈ δ.

**Collaboration Diagram Automaton Construction**

Given a collaboration diagram D = (P, L, M, E, D), we want to automatically construct a collaboration diagram automaton A_D = (M, T, s, F, δ) such that C(D) = C(A_D). We define the set of states of A_D as T = 2^E, i.e., the set of states of A_D is the power set of the event set of the collaboration diagram D. The initial state is defined as s = E. The set of final states is defined as F = {∅}. We define the transition relation δ as follows: for each state S ⊆ E, if there exists an event e ∈ S such that e′ ∉ S for all (e′, e) ∈ D, then

- e = (l, m, 1) ⇒ (S, m, S \ {e}) ∈ δ,
- e = (l, m, ?) ⇒ {(S, m, S \ {e}), (S, ε, S \ {e})} ⊆ δ,
- e = (l, m, ∗) ⇒ {(S, m, S), (S, ε, S \ {e})} ⊆ δ.

Each state in the automaton represents a set of events that still need to be executed. Given a state S, if there is an event e ∈ S which does not have any of its predecessors in S, then we add a transition from S to S \ {e} to represent the execution of the send event e. If e is an iterative event, then we add a self-loop on S to represent an arbitrary number of sends. For iterative and conditional events, we also generate ε-transitions. Figure 3 shows the collaboration diagram automaton automatically constructed from the collaboration diagram shown in Figure 2 based on the above construction. The initial state corresponds to the whole event set E = {1, 2, A1, A2, B1, B2}, meaning that initially all the events have to be executed, and the final state corresponds to the empty set, meaning that there are no more events to be executed. In the initial state, only event 1 is enabled since event 1 has no predecessors in the dependency graph shown in Figure 2 (i.e., it is an initial event). Hence, there is one transition from the initial state to the state {2, A1, A2, B1, B2}, labeled with the message start, corresponding to the execution of event 1. Note that, in state {2, A1, A2, B1, B2}, events A1 and B1 are both enabled since their only predecessor in the dependency graph is event 1 and event 1 is not in {2, A1, A2, B1, B2}, meaning that it has already been executed. Hence, there are two transitions from {2, A1, A2, B1, B2}, one for event A1 and one for event B1.

Based on the above construction, the number of states generated for a collaboration diagram C with the event set E could be 2^|E| in the worst case. This worst case is realized only if C has |E| threads, i.e., the number of states is exponential in the number of threads.
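The construction is direct to implement; here is a compact sketch (ours) that explores only the reachable states rather than the full power set, using the same string encoding of events and dependencies as the earlier sketch:

```python
def build_automaton(events, deps):
    """Power-set construction of A_D = (M, T, s, F, delta): a state is the
    frozenset of events still to execute; "eps" labels epsilon-transitions."""
    start = frozenset(events)
    states, delta, stack = {start}, set(), [start]
    while stack:
        S = stack.pop()
        for e in sorted(S):
            if any(a in S for (a, b) in deps if b == e):
                continue                           # a predecessor of e is still pending
            msg, rec = events[e]
            if rec == "1":
                moves = [(msg, S - {e})]
            elif rec == "?":
                moves = [(msg, S - {e}), ("eps", S - {e})]
            else:                                  # rec == "*": self-loop plus removal
                moves = [(msg, S), ("eps", S - {e})]
            for label, S2 in moves:
                delta.add((S, label, S2))
                if S2 not in states:
                    states.add(S2)
                    stack.append(S2)
    return states, start, {frozenset()}, delta    # (T, s, F, delta)
```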
**Automaton Construction for Collaboration Diagram Sets**

The above construction algorithm can be extended to collaboration diagram sets as follows. Given a collaboration diagram set S = {D1, D2, . . ., Dn}, where n is the number of collaboration diagrams in S and each Di is of the form Di = (P, L, M, Ei, Di), we want to construct an automaton A_S = (M, T, s, F, δ) such that C(A_S) = C(S). For each Di ∈ S, construct the corresponding collaboration diagram automaton A_Di = (M, Ti, si, Fi, δi) with C(Di) = C(A_Di) using the construction defined above. Let A_S = (M, T, s, F, δ). We define the set of states of A_S as T = {s} ∪ ∪_{Di∈S} Ti, i.e., the set of states of A_S consists of a fresh start state s together with the power sets of the event sets of the collaboration diagrams in S. Each state in the automaton after the start state represents a set of events that still need to be executed. If there exists an Ei such that Ei = ∅, then F = {s, ∅}; otherwise F = {∅}. We define the transition relation as δ = (∪_{Di∈S} {(s, ε, Ei)}) ∪ (∪_{Di∈S} δi). The automaton A_S first nondeterministically chooses one of the collaboration diagrams in the collaboration diagram set and then transitions to the initial state of the corresponding collaboration diagram automaton. Recall that the number of states in a collaboration diagram automaton A_Di generated from a collaboration diagram Di is exponential in the number of threads in Di. If we determinize the automaton A_S, then the number of states will also be exponential in |S|, i.e., in the number of collaboration diagrams in the collaboration diagram set.

**Automaton Construction for Collaboration Diagram Graphs**

Next, we show that given a collaboration diagram graph G = (vs, Z, V, O), where each v ∈ V is a collaboration diagram v = (P, L, M, Ev, Dv), we can construct an automaton A_G = (M, T, s, F, δ) such that C(G) = C(A_G). First, for each vertex v ∈ V of G, construct an automaton A_v = (M, Tv, sv, Fv, δv) using the construction given above for translating collaboration diagram sets to automata (each vertex v corresponds to a singleton collaboration diagram set) such that C(v) = C(A_v). Then, for A_G = (M, T, s, F, δ), we have T = ∪_{v∈V} Tv, i.e., the set of states of A_G is the union of the states of the automata constructed for the vertices of G. We define the initial state of A_G as the initial state of the automaton constructed for the initial vertex vs, i.e., s = s_vs. The final states of A_G are the union of the final states of the automata constructed for the vertices v ∈ Z, i.e., F = ∪_{v∈Z} Fv. The transitions of A_G include all the transitions of the automata constructed for all the vertices, i.e., δ ⊇ ∪_{v∈V} δv. Additionally, we add ε-transitions to δ as follows: for each edge (v, v′) ∈ O, where A_v = (M, Tv, sv, Fv, δv) and A_v′ = (M, Tv′, sv′, Fv′, δv′) are the automata constructed for v and v′, respectively, δ includes an ε-transition from each final state of A_v to the initial state of A_v′, i.e., δ ⊇ ∪_{(v,v′)∈O, t∈Fv} {(t, ε, sv′)}.
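Reusing build_automaton from the sketch above, the set and graph constructions amount to wiring ε-transitions between the component automata. The following sketch (ours) tags states with their vertex in the graph case to keep the component state spaces disjoint, a small deviation from the paper's presentation, which assumes shared alphabets:

```python
def set_automaton(diagrams):
    """A_S: a fresh start state with epsilon-moves into each diagram's
    initial state; diagrams is a list of (events, deps) pairs."""
    START = "s"                          # fresh start state, distinct from event sets
    states, delta, finals = {START}, set(), {frozenset()}
    for events, deps in diagrams:
        T, s0, F, d = build_automaton(events, deps)
        states |= T
        delta |= d
        delta.add((START, "eps", s0))
        if s0 == frozenset():            # an empty diagram makes the start state final
            finals.add(START)
    return states, START, finals, delta

def graph_automaton(vertices, edges, vs, Z):
    """A_G: one automaton per vertex plus epsilon-moves from the final states
    of v to the initial state of v' for each edge (v, v')."""
    autos = {v: set_automaton([d]) for v, d in vertices.items()}
    states, delta = set(), set()
    for v, (T, s0, F, d) in autos.items():
        states |= {(v, t) for t in T}    # tag states with their vertex
        delta |= {((v, a), lab, (v, b)) for (a, lab, b) in d}
    for (v, w) in edges:
        for f in autos[v][2]:
            delta.add(((v, f), "eps", (w, autos[w][1])))
    start = (vs, autos[vs][1])
    finals = {(v, f) for v in Z for f in autos[v][2]}
    return states, start, finals, delta
```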
## 4 Synthesizing Peer Implementations

We model the behaviors of peers that participate in a composite web service as concurrently executing finite state machines that interact via messages [10, 12]. We assume that the machines interact via asynchronous messages, where each finite state machine has a single FIFO input queue, and that messages are delivered reliably, i.e., there is no message loss or reordering during transmission. Formally, given a set of peers P = {p1, . . ., pn} that participate in a collaboration, the peer state machine for the peer pi ∈ P is a nondeterministic FSA Ai = (Mi, Ti, si, Fi, δi) where Mi is the set of messages that are either received or sent by pi, Ti is the finite set of states, si ∈ Ti is the initial state, Fi ⊆ Ti is the set of final states, and δi ⊆ Ti × ({!, ?} × Mi ∪ {ε}) × Ti is the transition relation.

A transition τ ∈ δi can be one of the following three types: (1) a send-transition of the form (t1, !m, t2), by which peer pi = send(m) sends out a message m ∈ Mi to peer recv(m), appending the message to the end of the input queue of the receiver recv(m); (2) a receive-transition of the form (t1, ?m, t2), by which peer pi = recv(m) receives a message m ∈ Mi from peer send(m), removing the message at the head of its input queue; and (3) an ε-transition of the form (t1, ε, t2).

A run of a set of peers is a sequence of transitions executed by the peers. A complete run is one such that at the end of the run each peer is in a final state and each FIFO queue is empty. The corresponding sequence of messages induced from the send transitions of a complete run is called a conversation (see [12] for the detailed formal definition). The choreography C(A1, . . ., An) of a set of peer state machines A1, . . ., An is the set of conversations generated by all the complete runs of A1, . . ., An. We call a set of peer state machines A1, . . ., An well-behaved if each partial run is a prefix of a complete run. If a set of peer state machines is well-behaved, then the peers never get stuck (i.e., each peer can always consume all the incoming messages in its input queue and reach a final state). Let C be a choreography. We say that the peer state machines A1, . . ., An realize C if C(A1, . . ., An) = C and A1, . . ., An are well-behaved.

**Figure 4. Peer synthesis**

Given a choreography specification in the form of a collaboration diagram, it would be helpful to synthesize peer implementations that realize the interactions defined by the choreography specification. Since we already showed that collaboration diagrams can be converted to automata, we can use the collaboration diagram automaton to synthesize the peer state machines. In fact, one can obtain the peer state machines by projecting the transitions of the collaboration diagram automaton to the peers. Consider a transition in the collaboration diagram automaton for a message send event from peer pi to peer pj. This transition should be projected to the peer state machine of peer pi as a send transition, and to the peer state machine of peer pj as a receive transition. Given a peer pk different from pi and pj, the same transition should be projected to the peer state machine of pk as an ε-transition. We formalize this projection operation below.

Given a collaboration diagram automaton A = (M, T, s, F, δ), we denote the projection of A to peer pi ∈ P as πi(A), defined as follows: πi(A) = (Mi, T, s, F, δi) where Mi ⊆ M contains all the messages m such that send(m) = pi or recv(m) = pi. The set of states, the initial state and the final states of A and πi(A) are the same. We define δi as follows:

- For each m ∈ M such that m ∉ Mi, for each transition (t1, m, t2) ∈ δ, we add the transition (t1, ε, t2) to δi.
- For each m ∈ Mi such that send(m) = pi, for each transition (t1, m, t2) ∈ δ, we add the transition (t1, !m, t2) to δi.
- For each m ∈ Mi such that recv(m) = pi, for each transition (t1, m, t2) ∈ δ, we add the transition (t1, ?m, t2) to δi.
- For each transition (t1, ε, t2) ∈ δ, we add the transition (t1, ε, t2) to δi.

Using the standard automata algorithms, we can remove ε-transitions in a projection using determinization and then minimize it. We call the resulting automaton the determinized peer projection to pi.
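The projection itself is a one-pass relabeling of the automaton's transitions, as in this sketch (ours), where send and recv map each message to its sending and receiving peer:

```python
def project(automaton, peer, send, recv):
    """pi_i(A): keep states, start and final states; turn each message
    transition into !m, ?m, or eps depending on whether the peer sends
    the message, receives it, or is not involved."""
    states, start, finals, delta = automaton
    delta_i = set()
    for (t1, label, t2) in delta:
        if label == "eps" or (send[label] != peer and recv[label] != peer):
            delta_i.add((t1, "eps", t2))
        elif send[label] == peer:
            delta_i.add((t1, "!" + label, t2))
        else:                              # recv[label] == peer
            delta_i.add((t1, "?" + label, t2))
    return states, start, finals, delta_i
```

Removing the ε-transitions of the result with the standard subset construction and minimizing then yields the determinized peer projection described above.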
Figure 4 shows the determinized peer projection of the collaboration diagram automaton shown in Figure 3 to the peers Manager, Scheduler, Oven and Robot. The set of conversations generated by the peer state machines shown in Figure 4 is exactly the choreography specified by the collaboration diagram automaton in Figure 3 and the collaboration diagram in Figure 2. In the next section we show that this is not the case for some collaboration diagrams.

## 5 Realizability

**Figure 5. An unrealizable example**

Figure 5 shows a collaboration diagram taken from a book on UML [9]. This collaboration diagram is not realizable since it is not possible to guarantee that the newDelivery message will be sent after the newReorder message, as required by this collaboration diagram. Based on the ordering of the send events in this collaboration diagram, there is no way for the OrderLine process to know that the StockItem process has already sent the newReorder message. Hence, in any implementation of this collaboration diagram, the newDelivery message may be sent before the newReorder message. The realizability analysis techniques we implement in our toolset will identify that this collaboration diagram is not realizable. It is possible to fix this collaboration diagram by adding an extra message from StockItem to OrderLine and changing the event labels so that this new message is sent after the newReorder message and before the newDelivery message. After this modification, our tool identifies the modified collaboration diagram as realizable.

We formalize the realizability problem as follows. Let D be a collaboration diagram. We say that a set of peer state machines A1, . . ., An realizes D if the set of conversations generated by the peer state machines A1, . . ., An is the same as the choreography defined by D, i.e., C(A1, . . ., An) = C(D). A collaboration diagram D is realizable if there exists a set of well-behaved peer state machines which realizes D. In [6] a sufficient condition for the realizability of collaboration diagrams was given. This realizability condition can be checked on the dependency relation of the collaboration diagram. We implemented this realizability condition in our toolset. However, the realizability condition in [6] can only be used in determining realizability of a single collaboration diagram, and results on realizability of single collaboration diagrams are not directly applicable to collaboration diagram sets. A collaboration diagram set that consists of realizable collaboration diagrams may not be realizable, and it is also possible to have a realizable collaboration diagram set which consists of unrealizable collaboration diagrams [5]. Hence, determining realizability of a single collaboration diagram is not sufficient for checking realizability of a collaboration diagram set.
However, our results in this paper show that the realizability of collaboration diagram sets can be reduced to the realizability of conversation protocols [10]. A conversation protocol is a finite state automaton that specifies a choreography. In fact, the collaboration diagram automata we discussed in Section 3 are conversation protocols. For example, the collaboration diagram automaton shown in Figure 3 is a conversation protocol. Hence, the collaboration diagram to finite state automata translation we presented in Section 3 is equivalent to a translation from a collaboration diagram to a conversation protocol. Furthermore, as we discussed in Section 3, the translation can be extended to collaboration diagram sets and graphs. In [10, 12] sufficient conditions for the realizability of conversation protocols were presented. Given a collaboration diagram set S, let A_S be the conversation protocol with the same choreography set. If A_S satisfies the realizability conditions presented in [10, 12], then we conclude that S is realizable. Moreover, if the realizability condition holds, S will be realized by the determinized projections of its collaboration diagram automaton A_S [10, 12], which means that the peers synthesized using the algorithm given in Section 4 will realize S. These results also apply to collaboration diagram graphs.

## 6 Implementation and Experiments

We implemented the techniques described above in our collaboration diagram analysis and verification tool. We chose the Sparx Systems Enterprise Architect UML Editor [19] as the front end to our tool because of its comprehensive support for UML diagrams and its ability to add custom modules. The Add-In we built translates collaboration diagrams defined by the user into our implementation of a collaboration diagram consisting of Peers, Links, Messages, and Events, based on the formal model defined in Section 2. From there, we construct the dependency graph based on the event orderings defined in the event labels, as defined in Section 2. Using the dependency graph, we create the collaboration diagram automaton based on the construction given in Section 3. Using the collaboration diagram automaton, we generate the peer state machines using the peer synthesis algorithm described in Section 4.

We implement two types of realizability checks. The first is an implementation of the realizability condition described in [6]. This realizability check is implemented by checking a set of conditions on the dependency graph. However, this realizability check cannot be used for checking realizability of collaboration diagram sets and graphs. So we also implemented a translator that converts collaboration diagrams/sets/graphs to conversation protocols and uses the Web Service Analysis Tool (WSAT) [11] to check the realizability condition from [10, 12]. Finally, we convert the collaboration diagram automaton to Promela and use the model checker Spin [13] to check LTL properties of the choreography defined by a given collaboration diagram, collaboration diagram set, or collaboration diagram graph. In addition, the Add-In creates visual representations of the dependency graphs, the collaboration diagram automaton, and the peer state machines.

Using our collaboration diagram analysis and verification tool, we experimented with several examples we found in the literature on collaboration diagrams. For each example, we checked realizability first. If the example was not realizable, we manually added new events to make it realizable.
We then used our tool to generate a Promela specification and wrote temporal logic properties for each example collaboration diagram. These specifications were then verified using the Spin model checker. In Table 1, we summarize each example and our experimental results. All of the examples in Table 1 are single collaboration diagrams, so we were able to use the realizability condition from [6] for all of them. In Table 1, R1 corresponds to the realizability condition from [6], and R2 corresponds to the realizability condition from [10, 12]. Note that both of these conditions are sufficient conditions, so the fact that they are not satisfied does not mean that the collaboration diagram is not realizable. However, if they are satisfied, we are sure that the collaboration diagram is realizable.

**Table 1. Realizability analysis and verification results**

Two of the collaboration diagrams we analyzed (Order Item and Voting Booth) violated both of the realizability conditions, and after manual inspection we concluded that they were not realizable. The Order Item example is shown in Figure 5. The realizability condition from [6] identified the remaining five collaboration diagrams as realizable. Three of these five violated the realizability condition from [10, 12]. All three examples that violate the realizability condition from [10, 12] have multiple message threads and violate this condition due to nondeterminism between message send and receive events. Our results show that it is beneficial to use the realizability condition from [6] whenever it is applicable, rather than the more general realizability condition from [10, 12]. Finally, the verification of LTL properties of these examples with the Spin model checker took less than 15 milliseconds each and used 2.5 MBytes of memory. In Table 1 we show the number of states visited during verification. Note that, as expected, the three examples with larger state spaces are the ones with multiple message threads. Spin is able to handle much larger state spaces than any of these examples, so it is safe to say that verification of collaboration diagrams with a model checker is feasible.

The unrealizable examples we discussed above are unrealizable under the concurrent execution semantics we defined in Section 4. We believe that in some of these cases the intention of the developers was to specify a sequential execution rather than a concurrent execution, and under the concurrent execution semantics these specifications become unrealizable. Even for such specifications, the realizability analysis we implement in our tool is useful since it can help in identifying specifications for which concurrent execution can create problems.

## 7 Conclusions

In this paper we discussed choreography specification with collaboration diagrams. We defined three classes of collaboration diagrams with increasing expressive power: single collaboration diagrams, collaboration diagram sets, and collaboration diagram graphs. We presented techniques for realizability analysis, synthesis, and verification, and we implemented these techniques in a toolset. Our experimental results indicate that realizability analysis, synthesis, and verification of choreographies specified using collaboration diagrams can be done efficiently.

## References

[1] A. Abdurazik and A. J. Offutt. Using UML collaboration diagrams for static checking and test generation. In Proc. 3rd Int. Conf. on the Unified Modeling Language (UML'00), pages 383–395, 2000.

[2] R. Alur, K. Etessami, and M.
Yannakakis. Inference of message sequence charts. In Proc. 22nd Int. Conf. on Software Engineering, pages 304–313, 2000.

[3] B. Benatallah, Q. Z. Sheng, and M. Dumas. The Self-Serv environment for web services composition. IEEE Internet Computing, 7(1):40–48, Jan 2003.

[4] Business process execution language for web services (BPEL), version 1.1. http://www.ibm.com/developerworks/library/ws-bpel.

[5] T. Bultan and X. Fu. Realizability of interactions in collaboration diagrams. Technical Report 2006-11, Computer Science Department, University of California, Santa Barbara, September 2006.

[6] T. Bultan and X. Fu. Specification of realizable service conversations using collaboration diagrams. In Proc. IEEE Int. Conf. on Service-Oriented Computing and Applications (SOCA'07), pages 122–132, 2007.

[7] M. Carbone, K. Honda, N. Yoshida, R. Milner, G. Brown, and S. Ross-Talbot. A theoretical basis of communication-centred concurrent programming. WCD-Working Note, 2006.

[8] H. Foster, S. Uchitel, J. Magee, and J. Kramer. Model-based verification of web service compositions. In Proc. 18th IEEE Int. Conf. on Automated Software Engineering, pages 152–163, 2003.

[9] M. Fowler. UML Distilled. Addison-Wesley, 2004.

[10] X. Fu, T. Bultan, and J. Su. Conversation protocols: A formalism for specification and analysis of reactive electronic services. Theoretical Computer Science, 328(1-2):19–37, November 2004.

[11] X. Fu, T. Bultan, and J. Su. WSAT: A tool for formal analysis of web services. In Proc. 16th Int. Conf. on Computer Aided Verification (CAV'04), pages 510–514, 2004.

[12] X. Fu, T. Bultan, and J. Su. Synchronizability of conversations among web services. IEEE Transactions on Software Engineering, 31(12):1042–1055, December 2005.

[13] G. J. Holzmann. The SPIN Model Checker: Primer and Reference Manual. Addison-Wesley, Boston, Massachusetts, 2003.

[14] J. Pu, Z. Zhang, Y. Xu, and H. Yang. Reusing legacy COBOL code with UML collaboration diagrams via a wide spectrum language. In Proceedings of the 2005 IEEE International Conference on Information Reuse and Integration (IRI'05), pages 78–83, 2005.

[15] H. C. Purchase, L. Colpoys, M. McGill, and D. A. Carrington. UML collaboration diagram syntax: An empirical study of comprehension. In Proc. 1st Int. Workshop on Visualizing Software for Understanding and Analysis (VISSOFT'02), pages 13–22, 2002.

[16] Z. Qiu, X. Zhao, C. Cai, and H. Yang. Towards the theoretical foundation of choreography. In Proceedings of WWW 2007, 2007.

[17] G. Salaün and T. Bultan. Realizability of choreographies using process algebra encodings. In Proc. 7th Int. Conf. on Integrated Formal Methods (IFM'09), pages 167–182, 2009.

[18] D. Skogan, R. Gronmo, and I. Solheim. Web service composition in UML. In Proc. of 8th Int. IEEE Enterprise Distributed Object Computing Conference, 2004.

[19] Sparx Systems Enterprise Architect UML editor. https://www.sparxsystems.com.au/.

[20] OMG Unified Modeling Language superstructure, version 2.1.2. http://www.uml.org/, October 2007.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/ICWS.2009.100?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/ICWS.2009.100, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "http://www.cs.ucsb.edu/~bultan/publications/icws09.pdf" }
2009
[ "JournalArticle", "Conference" ]
true
2009-07-06T00:00:00
[]
10781
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01d317affb6b1d57d25d4f6b39b493e03226afc4
[ "Computer Science" ]
0.825971
Robust Encryption
01d317affb6b1d57d25d4f6b39b493e03226afc4
Journal of Cryptology
[ { "authorId": "145355903", "name": "Michel Abdalla" }, { "authorId": "1703441", "name": "M. Bellare" }, { "authorId": "1788020", "name": "G. Neven" } ]
{ "alternate_issns": null, "alternate_names": [ "J Cryptol" ], "alternate_urls": [ "https://www.iacr.org/jofc/jofc.html", "http://www.springeronline.com/sgw/cda/frontpage/0,11855,5-0-70-1009426-detailsPage=journal|description|description,00.html?referer=www.springeronline.com/journal/00145/about" ], "id": "de5467ac-3f75-47f8-8397-1c10f6f9fc09", "issn": "0933-2790", "name": "Journal of Cryptology", "type": "journal", "url": "https://link.springer.com/journal/145" }
null
# Robust Encryption

Michel Abdalla[1], Mihir Bellare[2], and Gregory Neven[3,4]

[1] Département d'Informatique, École normale supérieure, Paris, France. Michel.Abdalla@ens.fr, http://www.di.ens.fr/users/mabdalla
[2] Department of Computer Science & Engineering, University of California San Diego, USA. mihir@cs.ucsd.edu, http://www.cs.ucsd.edu/users/mihir
[3] Department of Electrical Engineering, Katholieke Universiteit Leuven, Belgium
[4] IBM Research – Zurich, Switzerland. nev@zurich.ibm.com, http://www.neven.org

**Abstract.** We provide a provable-security treatment of "robust" encryption. Robustness means it is hard to produce a ciphertext that is valid for two different users. Robustness makes explicit a property that has been implicitly assumed in the past. We argue that it is an essential conjunct of anonymous encryption. We show that natural anonymity-preserving ways to achieve it, such as adding recipient identification information before encrypting, fail. We provide transforms that do achieve it, efficiently and provably. We assess the robustness of specific encryption schemes in the literature, providing simple patches for some that lack the property. We present various applications. Our work enables safer and simpler use of encryption.

## 1 Introduction

This paper provides a provable-security treatment of encryption "robustness." Robustness reflects the difficulty of producing a ciphertext valid under two different encryption keys. The value of robustness is conceptual, "naming" something that has been undefined yet at times implicitly (and incorrectly) assumed. Robustness helps make encryption more mis-use resistant. We provide formal definitions of several variants of the goal; consider and dismiss natural approaches to achieve it; provide two general robustness-adding transforms; test the robustness of existing schemes and patch the ones that fail; and discuss some applications.

**The definitions.** Both the PKE and the IBE settings are of interest, and the explication is simplified by unifying them as follows. Associate to each identity an encryption key, defined as the identity itself in the IBE case and its (honestly generated) public key in the PKE case. The adversary outputs a pair id₀, id₁ of distinct identities. For strong robustness it also outputs a ciphertext C*; for weak, it outputs a message M*, and C* is defined as the encryption of M* under the encryption key ek₁ of id₁. The adversary wins if the decryptions of C* under the decryption keys dk₀, dk₁ corresponding to ek₀, ek₁ are both non-⊥. Both weak and strong robustness can be considered under chosen plaintext or chosen ciphertext attacks, resulting in four notions (for each of PKE and IBE) that we denote WROB-CPA, WROB-CCA, SROB-CPA, SROB-CCA.

**Why robustness?** The primary security requirement for encryption is data privacy, as captured by the notions IND-CPA and IND-CCA [18,21,16,5,11]. Increasingly, we are also seeing a market for anonymity, as captured by the notions ANO-CPA and ANO-CCA [4,1]. Anonymity asks that a ciphertext not reveal the encryption key under which it was created. Where you need anonymity, there is a good chance you need robustness too. Indeed, we would go so far as to say that robustness is an essential companion of anonymous encryption.
The reason is that without it we would have security without basic communication correctness, likely upsetting our application. This is best illustrated by the following canonical application of anonymous encryption, but it shows up also, in less direct but no less important ways, in other applications. A sender wants to send a message to a particular target recipient, but, to hide the identity of this target recipient, anonymously encrypts it under her key and broadcasts the ciphertext to a larger group. But as a member of this group I need, upon receiving a ciphertext, to know whether or not I am the target recipient. (The latter typically needs to act on the message.) Of course I can't tell whether the ciphertext is for me just by looking at it, since the encryption is anonymous, but decryption should divulge this information. It does, unambiguously, if the encryption is robust (the ciphertext is for me iff my decryption of it is not ⊥), but otherwise I might accept a ciphertext (and some resulting message) of which I am not the target, creating mis-communication. Natural "solutions," such as including the encryption key or identity of the target recipient in the plaintext before encryption and checking it upon decryption, are, in hindsight, just attempts to add robustness without violating anonymity and, as we will see, don't work.

We were led to formulate robustness upon revisiting Public-key Encryption with Keyword Search (PEKS) [9]. In a clever usage of anonymity, Boneh, Di Crescenzo, Ostrovsky and Persiano (BDOP) [9] showed how this property in an IBE scheme allowed it to be turned into a privacy-respecting communications filter. But Abdalla et al. [1] noted that the BDOP filter could lack consistency, meaning turn up false positives. Their solution was to modify the construction. What we observed instead was that consistency would in fact be provided by the original construct if the IBE scheme was robust. PEKS consistency turns out to correspond exactly to communication correctness of the anonymous IBE scheme in the sense discussed above. (Because the PEKS messages in the BDOP scheme are the recipient identities from the IBE perspective.) Besides resurrecting the BDOP construct, the robustness approach allows us to obtain the first consistent IND-CCA secure PEKS without random oracles.

Sako's auction protocol [23] is important because it was the first truly practical one to hide the bids of losers. It makes clever use of anonymous encryption for privacy. But we present an attack on fairness whose cause is ultimately a lack of robustness in the anonymous encryption scheme (cf. [2]).

All this underscores a number of the claims we are making about robustness: that it is of conceptual value; that it makes encryption more resistant to mis-use; that it has been implicitly (and incorrectly) assumed; and that there is value to making it explicit, formally defining and provably achieving it.

**Weak versus strong.** The above-mentioned auction protocol fails because an adversary can create a ciphertext that decrypts correctly under any decryption key. Strong robustness is needed to prevent this. Weak robustness (of the underlying IBE) will yield PEKS consistency for honestly-encrypted messages but may allow spammers to bypass all filters with a single ciphertext, something prevented by strong robustness. Strong robustness trumps weak for applications and goes farther towards making encryption mis-use resistant.
We have defined and considered the weaker version because it can be more efficiently achieved, because some existing schemes achieve it, and because attaining it is a crucial first step in our method for attaining strong robustness.

**Achieving robustness.** As the reader has surely already noted, robustness (even strong) is trivially achieved by appending the encryption key to the ciphertext and checking for it upon decryption. The problem is that the resulting scheme is not anonymous and, as we have seen above, it is exactly for anonymous schemes that robustness is important. Of course, data privacy is important too. Letting AI-ATK = ANO-ATK + IND-ATK for ATK ∈ {CPA, CCA}, our goal is to achieve AI-ATK + XROB-ATK, ideally for both ATK ∈ {CPA, CCA} and X ∈ {W, S}. This is harder.

**Transforms.** It is natural to begin by seeking a general transform that takes an arbitrary AI-ATK scheme and returns an AI-ATK + XROB-ATK one. This allows us to exploit known constructions of AI-ATK schemes, supports modular protocol design, and also helps understand robustness divorced from the algebra of specific schemes. Furthermore, there is a natural and promising transform to consider. Namely, before encrypting, append to the message some redundancy, such as the recipient encryption key, a constant, or even a hash of the message, and check for its presence upon decryption. (Adding the redundancy before encrypting rather than after preserves AI-ATK.) Intuitively this should provide robustness because decryption with the "wrong" key will result, if not in rejection, then in recovery of a garbled plaintext, unlikely to possess the correct redundancy.

The truth is more complex. We consider two versions of the paradigm and summarize our findings in Fig. 1. In encryption with unkeyed redundancy, the redundancy is a function RC of the message and encryption key alone. In this case we show that the method fails spectacularly, not providing even weak robustness regardless of the choice of the function RC. In encryption with keyed redundancy, we allow RC to depend on a key K that is placed in the public parameters of the transformed scheme, out of direct reach of the algorithms of the original scheme. In this form, the method can easily provide weak robustness, and that too with a very simple redundancy function, namely the one that simply returns K. But we show that even encryption with keyed redundancy fails to provide strong robustness. To achieve the latter we have to step outside the encryption-with-redundancy paradigm. We present a strong-robustness-conferring transform that uses a (non-interactive) commitment scheme. For subtle reasons, for this transform to work the starting scheme needs to already be weakly robust. If it isn't already, we can make it so via our weak robustness transform.

In summary, on the positive side we provide a transform conferring weak robustness and another conferring strong robustness. Given any AI-ATK scheme, the first transform returns a WROB-ATK + AI-ATK one. Given any AI-ATK + WROB-ATK scheme, the second transform returns a SROB-ATK + AI-ATK one. In both cases this holds for both ATK = CPA and ATK = CCA, and in both cases the transform applies to what we call general encryption schemes, of which both PKE and IBE are special cases, so both are covered.

**Robustness of specific schemes.** The robustness of existing schemes is important because they might be in use.
We ask which specific existing schemes are robust, and, for those that are not, whether they can be made so at a cost lower than that of applying one of our general transforms. There is no reason to expect schemes that are only AI-CPA to be robust, since the decryption algorithm may never reject, so we focus on schemes that are known to be AI-CCA. This narrows the field quite a bit. Our findings and results are summarized in Fig. 1.

Canonical AI-CCA schemes in the PKE setting are Cramer–Shoup (CS) in the standard model [15,4] and DHIES in the random-oracle (RO) model [3,4]. We show that both are WROB-CCA but neither is SROB-CCA, the latter because encryption with 0 randomness yields a ciphertext valid under any encryption key. We present modified versions CS*, DHIES* of the schemes that we show are SROB-CCA. Our proof that CS* is SROB-CCA builds on the information-theoretic part of the proof of [15]. The result does not need to assume hardness of DDH. It relies instead on pre-image security of the underlying hash function for random range points, something not implied by collision resistance but seemingly possessed by candidate functions. In the IBE setting, the CCA version BF of the RO-model Boneh–Franklin scheme is AI-CCA [10,1], and we show it is SROB-CCA. The standard-model Boyen–Waters scheme BW is AI-CCA [13], and we show it is neither WROB-CCA nor SROB-CCA. It can be made either via our transforms, but we don't know of any more direct way to do this. BF is obtained via the Fujisaki–Okamoto (FO) transform [17] and BW via the Canetti–Halevi–Katz (CHK) transform [14,8]. We can show that neither transform generically provides strong robustness. This doesn't say whether they do or not when applied to specific schemes, and indeed the first does for BF and the second does not for BW.

**Fig. 1. Achieving Robustness.** The first table summarizes our findings on the encryption-with-redundancy transform. "No" means the method fails to achieve the indicated robustness for all redundancy functions, while "yes" means there exists a redundancy function for which it works. The second table summarizes robustness results about some specific AI-CCA schemes.

**Summary.** Protocol design suggests that designers have the intuition that robustness is naturally present. This seems to be more often right than wrong when considering weak robustness of specific AI-CCA schemes. Prevailing intuition about generic ways to add even weak robustness is wrong, yet we show it can be done by an appropriate tweak of these ideas. Strong robustness is more likely to be absent than present in specific schemes, but important schemes can be patched. Strong robustness can also be added generically, but with more work.

**Related work.** There is growing recognition that robustness is important in applications and worth defining explicitly, supporting our own claims to this end. In particular, the correctness requirement for predicate encryption [20] includes a form of weak robustness and, in recent work concurrent to, and independent of, ours, Hofheinz and Weinreb [19] introduced a notion of well-addressedness of IBE schemes that is just like weak robustness except that the adversary gets the IBE master secret key. Neither work considers or achieves strong robustness, and neither treats PKE.

## 2 Definitions

**Notation and conventions.** If x is a string then |x| denotes its length, and if S is a set then |S| denotes its size. The empty string is denoted ε. By a₁∥ . . .
∥aₙ, we denote a string encoding of a₁, . . ., aₙ from which a₁, . . ., aₙ are uniquely recoverable. (Usually, concatenation suffices.) By a₁∥ . . . ∥aₙ ← a, we mean that a is parsed into its constituents a₁, . . ., aₙ. Similarly, if a = (a₁, . . ., aₙ) then (a₁, . . ., aₙ) ← a means we parse a as shown. Unless otherwise indicated, an algorithm may be randomized. By y ←$ A(x₁, x₂, . . .) we denote the operation of running A on inputs x₁, x₂, . . . and fresh coins and letting y denote the output. We denote by [A(x₁, x₂, . . .)] the set of all possible outputs of A on inputs x₁, x₂, . . .. We assume that an algorithm returns ⊥ if any of its inputs is ⊥.

**Fig. 2. Game AI_GE defining AI-ATK security of general encryption scheme GE = (PG, KG, Enc, Dec):**

  proc Initialize:
    (pars, msk) ←$ PG ; b ←$ {0,1} ; S, T, U, V ← ∅
    Return pars

  proc GetEK(id):
    U ← U ∪ {id}
    (EK[id], DK[id]) ←$ KG(pars, msk, id)
    Return EK[id]

  proc GetDK(id):
    If id ∉ U then return ⊥
    If id ∈ S then return ⊥
    V ← V ∪ {id}
    Return DK[id]

  proc Dec(C, id):
    If id ∉ U then return ⊥
    If (id, C) ∈ T then return ⊥
    M ← Dec(pars, EK[id], DK[id], C)
    Return M

  proc LR(id₀*, id₁*, M₀*, M₁*):
    If (id₀* ∉ U) ∨ (id₁* ∉ U) then return ⊥
    If (id₀* ∈ V) ∨ (id₁* ∈ V) then return ⊥
    If |M₀*| ≠ |M₁*| then return ⊥
    C* ←$ Enc(pars, EK[id_b*], M_b*)
    S ← S ∪ {id₀*, id₁*} ; T ← T ∪ {(id₀*, C*), (id₁*, C*)}
    Return C*

  proc Finalize(b′):
    Return (b′ = b)

**Games.** Our definitions and proofs use code-based game-playing [6]. Recall that a game (look at Fig. 2 for an example) has an Initialize procedure, procedures to respond to adversary oracle queries, and a Finalize procedure. A game G is executed with an adversary A as follows. First, Initialize executes and its outputs are the inputs to A. Then A executes, its oracle queries being answered by the corresponding procedures of G. When A terminates, its output becomes the input to the Finalize procedure. The output of the latter, denoted G^A, is called the output of the game, and we let "G^A" denote the event that this game output takes value true. Boolean flags are assumed initialized to false. Games G_i, G_j are identical until bad if their code differs only in statements that follow the setting of bad to true. Our proofs will use the following.

**Lemma 1 [6].** Let G_i, G_j be identical-until-bad games, and A an adversary. Then

| Pr[G_i^A] − Pr[G_j^A] | ≤ Pr[G_j^A sets bad].

The running time of an adversary is the worst-case time of the execution of the adversary with the game defining its security, so that the execution time of the called game procedures is included.

**General encryption.** We introduce and use general encryption schemes, of which both PKE and IBE are special cases. This allows us to avoid repeating similar definitions and proofs. A general encryption (GE) scheme is a tuple GE = (PG, KG, Enc, Dec) of algorithms. The parameter generation algorithm PG takes no input and returns common parameters pars and a master secret key msk. On input pars, msk, id, the key generation algorithm KG produces an encryption key ek and decryption key dk. On inputs pars, ek, M, the encryption algorithm Enc produces a ciphertext C encrypting plaintext M. On input pars, ek, dk, C, the deterministic decryption algorithm Dec returns either a plaintext message M or ⊥ to indicate that it rejects.
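To fix ideas, the following is a minimal interface sketch of this syntax (ours, not the paper's; names are illustrative), with None standing in for ⊥:

```python
class GeneralEncryption:
    """Syntax of a general encryption scheme GE = (PG, KG, Enc, Dec)."""

    def PG(self):
        """Return (pars, msk): common parameters and master secret key."""
        raise NotImplementedError

    def KG(self, pars, msk, ident):
        """Return (ek, dk): encryption and decryption keys for identity ident."""
        raise NotImplementedError

    def Enc(self, pars, ek, M):
        """Return a ciphertext C for plaintext M (may be randomized)."""
        raise NotImplementedError

    def Dec(self, pars, ek, dk, C):
        """Deterministic: return the plaintext M, or None (i.e., ⊥) to reject."""
        raise NotImplementedError

# PKE as a special case: msk is the empty string and KG ignores ident.
# IBE as a special case: KG always returns ek == ident.
```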
**Fig. 3. Games WROB_GE and SROB_GE defining WROB-ATK and SROB-ATK security (respectively) of general encryption scheme GE = (PG, KG, Enc, Dec).** The first four procedures are common to both games, which differ only in their Finalize procedures:

  proc Initialize:
    (pars, msk) ←$ PG ; U, V ← ∅
    Return pars

  proc GetEK(id):
    U ← U ∪ {id}
    (EK[id], DK[id]) ←$ KG(pars, msk, id)
    Return EK[id]

  proc GetDK(id):
    If id ∉ U then return ⊥
    V ← V ∪ {id}
    Return DK[id]

  proc Dec(C, id):
    If id ∉ U then return ⊥
    M ← Dec(pars, EK[id], DK[id], C)
    Return M

  proc Finalize(M, id₀, id₁):   // WROB_GE
    If (id₀ ∉ U) ∨ (id₁ ∉ U) then return false
    If (id₀ ∈ V) ∨ (id₁ ∈ V) then return false
    If (id₀ = id₁) then return false
    M₀ ← M ; C ←$ Enc(pars, EK[id₀], M₀)
    M₁ ← Dec(pars, EK[id₁], DK[id₁], C)
    Return (M₀ ≠ ⊥) ∧ (M₁ ≠ ⊥)

  proc Finalize(C, id₀, id₁):   // SROB_GE
    If (id₀ ∉ U) ∨ (id₁ ∉ U) then return false
    If (id₀ ∈ V) ∨ (id₁ ∈ V) then return false
    If (id₀ = id₁) then return false
    M₀ ← Dec(pars, EK[id₀], DK[id₀], C)
    M₁ ← Dec(pars, EK[id₁], DK[id₁], C)
    Return (M₀ ≠ ⊥) ∧ (M₁ ≠ ⊥)

We say that GE is a public-key encryption (PKE) scheme if msk = ε and KG ignores its id input. To recover the usual syntax we may in this case write the output of PG as pars rather than (pars, msk) and omit msk, id as inputs to KG. We say that GE is an identity-based encryption (IBE) scheme if ek = id, meaning the encryption key created by KG on inputs pars, msk, id always equals id. To recover the usual syntax we may in this case write the output of KG as dk rather than (ek, dk). It is easy to see that in this way we have recovered the usual primitives. But there are general encryption schemes that are neither PKE nor IBE schemes, meaning the primitive is indeed more general.

**Correctness.** Correctness of a general encryption scheme GE = (PG, KG, Enc, Dec) requires that, for all (pars, msk) ∈ [PG], all plaintexts M in the underlying message space associated to pars, all identities id, and all (ek, dk) ∈ [KG(pars, msk, id)], we have Dec(pars, ek, dk, Enc(pars, ek, M)) = M with probability one, where the probability is taken over the coins of Enc.

**AI-ATK security.** Historically, definitions of data privacy (IND) [18,21,16,5,11] and anonymity (ANON) [4,1] have been separate. We are interested in schemes that achieve both, so rather than use separate definitions we follow [12] and capture both simultaneously via game AI_GE of Fig. 2. A cpa adversary is one that makes no Dec queries, and a cca adversary is one that might make such queries. The ai-advantage of such an adversary, in either case, is

Adv^ai_GE(A) = 2 · Pr[AI_GE^A] − 1.
Associated to general encryption scheme GE = (PG, KG, Enc, Dec) are games WROB, SROB of Fig. 3. As before, a cpa adversary is one that makes no Dec queries, and a cca adversary is one that might make such queries. The wrob and srob advantages of an adversary, in either case, are and **Adv[srob]GE** [(][A][) = P][r] _._ WROB[A]GE SROB[A]GE **Adv[wrob]GE** [(][A][) = P][r] � � � � The difference between WROB and SROB is that in the former the adversary produces a message M, and C is its encryption under the encryption key of one of the given identities, while in the latter it produces C directly, and may not obtain it as an honest encryption. It is worth clarifying that in the PKE case the adversary does not get to choose the encryption (public) keys of the identities it is targeting. These are honestly and independently chosen, in real life by the identities themselves and in our formalization by the games. ## 3 Robustness Failures of Encryption with Redundancy A natural privacy-and-anonymity-preserving approach to add robustness to an encryption scheme is to add redundancy before encrypting, and upon decryption reject if the redundancy is absent. Here we investigate the effectiveness of this encryption with redundancy approach, justifying the negative results discussed in Section 1 and summarized in the first table of Fig. 1. Redundancy codes and the transform. A redundancy code RED = (RKG, RC, RV) is a triple of algorithms. The redundancy key generation algorithm RKG generates a key K. On input K and data x the redundancy computation algorithm RC returns redundancy r. Given K, x, and claimed redundancy r, the deterministic redundancy verification algorithm RV returns 0 or 1. We say that ### RED is unkeyed if the key K output by RKG is always equal to ε, and keyed otherwise. The correctness condition is that for all x we have RV(K, x, RC(K, x)) = 1 with probability one, where the probability is taken over the coins of RKG and RC. (We stress that the latter is allowed to be randomized.) Given a general encryption scheme GE = (PG, KG, Enc, Dec) and a redun dancy code RED = (RKG, RC, RV), the encryption with redundancy transform associates to them the general encryption scheme GE = (PG, KG, Enc, Dec) whose algorithms are shown on the left side of Fig. 5. Note that the transform has the first of our desired properties, namely that it preserves AI-ATK. ----- RKG RC(K, ek _∥M_ ) RV(K, ek _∥M, r)_ Return K ← _ε_ Return 0[k] Return (r = 0[k]) Return K ← _ε_ Return ek Return (r = ek ) Return K ← _ε_ ReturnL _←{$_ 0 L, 1∥}Hk ;(L, ek _∥M_ ) _LReturn (∥h ←_ _r ;h = H(L, ek_ _∥M_ )) Return K _←{$_ 0, 1}k Return K Return (r = K) Return K _←{$_ 0, 1}k Return H(K, ek _∥M_ ) Return (r = H(K, ek _∥M_ )) **Fig. 4. Examples of redundancy codes, where the data x is of the form ek** _∥M_ . The first four are unkeyed and the last two are keyed. Also if GE is a PKE scheme then so is GE, and if GE is an IBE scheme then so is GE, which means the results we obtain here apply to both settings. Fig. 4 shows example redundancy codes for the transform. With the first, GE is identical to GE, so that the counterexample below shows that AI-CCA does not imply WROB-CPA. The second and third rows show redundancy equal to a constant or the encryption key as examples of (unkeyed) redundancy codes. The fourth row shows a code that is randomized but still unkeyed. The hash function H could be a MAC or a collision resistant function. 
The last two are keyed redundancy codes, the first the simple one that just always returns the key, and the second using a hash function. Obviously, there are many other examples. SROB failure. We show that encryption with redundancy fails to provide strong robustness for all redundancy codes, whether keyed or not. More precisely, we show that for any redundancy code RED and both ATK ∈{CPA, CCA}, there is an AI-ATK encryption scheme GE such that the scheme GE resulting from the encryption-with-redundancy transform applied to GE _,_ _RED is not_ SROB-CPA. We build GE by modifying a given AI-ATK encryption scheme ### GE [∗] = (PG, KG, Enc[∗], Dec[∗]). Let l be the number of coins used by RC, and let RC(x; ω) denote the result of executing RC on input x with coins ω 0, 1 . Let _∈{_ _}[l]_ _M_ _[∗]_ be a function that given pars returns a point in the message space associated to pars in GE _[∗]. Then GE = (PG, KG, Enc, Dec) where the new algorithms are_ shown on the bottom right side of Fig. 5. The reason we used 0[l] as coins for RC here is that Dec is required to be deterministic. Our first claim is that the assumption that GE _[∗]_ is AI-ATK implies that ### GE is too. Our second claim, that GE is not SROB-CPA, is demonstrated by the following attack. For a pair id 0, id 1 of distinct identities of its choice, the adversary A, on input (pars, K), begins with queries ek 0 _←$_ **GetEK(id 0)** and ek 1 _←$_ **GetEK(id** 1). It then creates ciphertext C ← 0 ∥ _K and returns_ (id 0, id 1, C). We claim that Adv[srob]GE [(][A][) = 1. L][ett][in][g][ dk][ 0][,][ dk][ 1][ de][no][te the de][-] cryption keys corresponding to ek 0, ek 1 respectively, the reason is the following. For both b ∈{0, 1}, the output of Dec(pars, ek b, dk b, C ) is M _[∗](pars)∥rb(pars)_ where rb(pars) = RC(K, ek b∥M _[∗](pars); 0[l]). But the correctness of RED implies_ ----- **Algorithm PG** (pars, msk ) _←$_ PG ; K _←$_ RKG Return ((pars, K), msk ) **Algorithm KG((pars, K), msk** _, id_ ) (ek _, dk_ ) _←$_ KG(pars, msk _, id_ ) Return ek **Algorithm Enc((pars, K), ek** _, M )_ _r_ _←$_ RC(K, ek _∥M )_ _C_ _←$_ Enc(pars, ek _, M ∥r)_ Return C **Algorithm Dec((pars, K), ek** _, dk_ _, C_ ) _M ∥r ←_ Dec(pars, ek _, dk_ _, C_ ) If RV(K, ek _∥M, r) = 1 then return M_ Else return ⊥ **Algorithm Dec(pars, ek** _, dk_ _, C_ ) _b∥C_ _[∗]_ _←_ _C_ If b = 1 then return Dec[∗](pars, ek _, dk_ _, C_ _[∗])_ Else return M _[∗](pars)∥RC(C_ _[∗], ek_ _∥M_ _[∗](pars); 0[l])_ **Fig. 5. Left: Transformed scheme for the encryption with redundancy paradigm. Top** **Right: Counterexample for WROB. Bottom Right: Counterexample for SROB.** that RV(K, ek b∥M _[∗](pars), rb(pars)) = 1 and hence Dec((pars, K), ek b, dk b, C_ ) returns M _[∗](pars) rather than ⊥._ WROB failure. We show that encryption with redundancy fails to provide even weak robustness for all unkeyed redundancy codes. This is still a powerful negative result because many forms of redundancy that might intuitively work, such the first four of Fig. 4, are included. More precisely, we claim that for any unkeyed redundancy code RED and both ATK ∈{CPA, CCA}, there is an AI-ATK encryption scheme GE such that the scheme GE resulting from the encryption-with-redundancy transform applied to GE _,_ _RED is not WROB-CPA._ We build GE by modifying a given AI-ATK + WROB-CPA encryption scheme ### GE [∗] = (PG, KG, Enc[∗], Dec[∗]). With notation as above, the new algorithms for the scheme GE = (PG, KG, Enc, Dec) are shown on the top right side of Fig. 5. 
Our first claim is that the assumption that GE∗ is AI-ATK implies that GE is too. Our second claim, that GE′ is not WROB-CPA, is demonstrated by the following attack. For a pair id₀, id₁ of distinct identities of its choice, the adversary A, on input (pars, ε), makes queries ek₀ ←$ GetEK(id₀) and ek₁ ←$ GetEK(id₁) and returns (id₀, id₁, M∗(pars)). We claim that Adv^wrob_{GE′}(A) is high. Letting dk₁ denote the decryption key for ek₁, the reason is the following. Let r₀ ←$ RC(ε, ek₀∥M∗(pars)) and C ←$ Enc(pars, ek₀, M∗(pars)∥r₀). The assumed WROB-CPA security of GE∗ implies that Dec(pars, ek₁, dk₁, C) is most probably M∗(pars)∥r₁(pars), where r₁(pars) = RC(ε, ek₁∥M∗(pars); 0^l). But the correctness of RED implies that RV(ε, ek₁∥M∗(pars), r₁(pars)) = 1, and hence Dec′((pars, ε), ek₁, dk₁, C) returns M∗(pars) rather than ⊥.

## 4 Transforms That Work

We present a transform that confers weak robustness and another that confers strong robustness. They preserve privacy and anonymity, work for PKE as well as IBE, and for CPA as well as CCA. In both cases the security proofs surface some delicate issues. Besides being useful in its own right, the weak robustness transform is a crucial step in obtaining strong robustness, so we begin there.

**Weak robustness transform.** We saw that encryption with redundancy fails to provide even weak robustness if the redundancy code is unkeyed. Here we show that if the redundancy code is keyed, even in the simplest possible way where the redundancy is just the key itself, the transform does provide weak robustness, turning any AI-ATK secure general encryption scheme into an AI-ATK + WROB-ATK one, for both ATK ∈ {CPA, CCA}.

The transformed scheme encrypts with the message a key K placed in the public parameters. In more detail, the weak robustness transform associates to a given general encryption scheme GE = (PG, KG, Enc, Dec) and integer parameter k, representing the length of K, the general encryption scheme GE′ = (PG′, KG′, Enc′, Dec′) whose algorithms are depicted in Fig. 6. Note that if GE is a PKE scheme then so is GE′, and if GE is an IBE scheme then so is GE′, so that our results, captured by Theorem 2 below, cover both settings.

**Fig. 6. General encryption scheme GE′ = (PG′, KG′, Enc′, Dec′) resulting from applying our weak-robustness transform to general encryption scheme GE = (PG, KG, Enc, Dec) and integer parameter k:**

  Algorithm PG′:
    (pars, msk) ←$ PG ; K ←$ {0,1}^k
    Return ((pars, K), msk)

  Algorithm KG′((pars, K), msk, id):
    (ek, dk) ←$ KG(pars, msk, id)
    Return (ek, dk)

  Algorithm Enc′((pars, K), ek, M):
    C ←$ Enc(pars, ek, M∥K)
    Return C

  Algorithm Dec′((pars, K), ek, dk, C):
    M′ ← Dec(pars, ek, dk, C)
    If M′ = ⊥ then return ⊥
    M∥K∗ ← M′
    If (K = K∗) then return M
    Else return ⊥

The intuition for the weak robustness of GE′ is that the GE-decryption under one key of an encryption of M∥K created under another key cannot, by the assumed AI-ATK security of GE, reveal K, and hence the check will fail. This is pretty much right for PKE, but the delicate issue is that for IBE, information about K can enter via the identities, which in this case are the encryption keys and are chosen by the adversary as a function of K. The AI-ATK security of GE is no protection against this. We show, however, that this can be dealt with by making K sufficiently longer than the identities.

**Theorem 2.** Let GE = (PG, KG, Enc, Dec) be a general encryption scheme with identity space {0,1}^n, and let GE′ = (PG′, KG′, Enc′, Dec′) be the general encryption scheme resulting from applying the weak robustness transform to GE and integer parameter k. Then:

1. AI-ATK: Let A be an ai-adversary against GE′. Then there is an ai-adversary B against GE such that Adv^ai_{GE′}(A) = Adv^ai_GE(B). Adversary B inherits the query profile of A and has the same running time as A. If A is a cpa adversary then so is B.

2. WROB-ATK: Let A be a wrob adversary against GE′ with running time t, and let ℓ = 2n + ⌈log₂(t)⌉. Then there is an ai-adversary B against GE such that Adv^wrob_{GE′}(A) ≤ Adv^ai_GE(B) + 2^{ℓ−k}. Adversary B inherits the query profile of A and has the same running time as A. If A is a cpa adversary then so is B.

The first part of the theorem implies that if GE is AI-ATK then GE′ is AI-ATK as well. The second part of the theorem implies that if GE is AI-ATK and k is chosen sufficiently larger than 2n + ⌈log₂(t)⌉ then GE′ is WROB-ATK. In both cases this is for both ATK ∈ {CPA, CCA}. The theorem says it directly for CCA, and for CPA by the fact that if A is a cpa adversary then so is B. When we say that B inherits the query profile of A, we mean that for every oracle that B has, if A has an oracle of the same name and makes q queries to it, then this is also the number B makes. The proof of the first part of the theorem is straightforward and is omitted. The proof of the second part is given in [2]. It is well known that collision-resistant hashing of identities preserves AI-ATK and serves to make them of fixed length [7], so the assumption that the identity space is {0,1}^n rather than {0,1}^∗ is not really a restriction. In practice we might hash with SHA256 so that n = 256, and, assuming t ≤ 2^128, setting k = 768 would make 2^{ℓ−k} = 2^{−128}.
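A sketch of this transform in code (ours; K is modeled as bytes and the encoding M∥K as a pair) makes the decryption-side check explicit:

```python
import os

def weak_robustness_transform(ge, k):
    """GE' from Fig. 6: put a random k-bit key K in the parameters, encrypt
    M together with K, and reject on decryption unless K comes back intact.
    Assumes k is a multiple of 8 for simplicity."""

    def PG():
        pars, msk = ge.PG()
        K = os.urandom(k // 8)               # K <-$ {0,1}^k
        return (pars, K), msk

    def KG(pars_bar, msk, ident):
        pars, _K = pars_bar
        return ge.KG(pars, msk, ident)

    def Enc(pars_bar, ek, M):
        pars, K = pars_bar
        return ge.Enc(pars, ek, (M, K))      # models Enc(pars, ek, M∥K)

    def Dec(pars_bar, ek, dk, C):
        pars, K = pars_bar
        out = ge.Dec(pars, ek, dk, C)
        if out is None:
            return None
        M, K_star = out                      # models M∥K* <- M'
        return M if K_star == K else None    # the weak-robustness check

    return PG, KG, Enc, Dec
```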
Then there is an ai-adversary B against GE such that $\mathbf{Adv}^{\mathrm{wrob}}_{\overline{GE}}(A) \le \mathbf{Adv}^{\mathrm{ai}}_{GE}(B) + 2^{\ell-k}$. Adversary B inherits the query profile of A and has the same running time as A. If A is a cpa adversary then so is B.

The first part of the theorem implies that if GE is AI-ATK then $\overline{GE}$ is AI-ATK as well. The second part of the theorem implies that if GE is AI-ATK and k is chosen sufficiently larger than 2n + ⌈log₂(t)⌉ then $\overline{GE}$ is WROB-ATK. In both cases this is for both ATK ∈ {CPA, CCA}. The theorem says it directly for CCA, and for CPA by the fact that if A is a cpa adversary then so is B. When we say that B inherits the query profile of A we mean that, for every oracle that B has, if A has an oracle of the same name and makes q queries to it, then this is also the number B makes. The proof of the first part of the theorem is straightforward and is omitted. The proof of the second part is given in [2].

Algorithm $\overline{PG}$: (pars, msk) ←$ PG ; K ←$ {0, 1}^k ; return ((pars, K), msk)
Algorithm $\overline{KG}$((pars, K), msk, id): (ek, dk) ←$ KG(pars, msk, id) ; return (ek, dk)
Algorithm $\overline{Enc}$((pars, K), ek, M): C ←$ Enc(pars, ek, M ∥ K) ; return C
Algorithm $\overline{Dec}$((pars, K), ek, dk, C): M ← Dec(pars, ek, dk, C) ; if M = ⊥ then return ⊥ ; M ∥ K* ← M ; if K = K* then return M, else return ⊥

**Fig. 6.** General encryption scheme $\overline{GE}$ = ($\overline{PG}$, $\overline{KG}$, $\overline{Enc}$, $\overline{Dec}$) resulting from applying our weak-robustness transform to general encryption scheme GE = (PG, KG, Enc, Dec) and integer parameter k.

It is well known that collision-resistant hashing of identities preserves AI-ATK and serves to make them of fixed length [7], so the assumption that the identity space is {0, 1}^n rather than {0, 1}^* is not really a restriction. In practice we might hash with SHA256 so that n = 256, and, assuming t ≤ 2^128, setting k = 768 would make $2^{\ell-k} = 2^{-128}$.

**Commitment schemes.** Our strong robustness transform will use commitments. A commitment scheme is a 3-tuple CMT = (CPG, Com, Ver). The parameter generation algorithm CPG returns public parameters cpars. The committal algorithm Com takes cpars and data x as input and returns a commitment com to x along with a decommittal key dec. The deterministic verification algorithm Ver takes cpars, x, com, dec as input and returns 1 to indicate that it accepts or 0 to indicate that it rejects. Correctness requires that, for any x ∈ {0, 1}^*, any cpars ∈ [CPG], and any (com, dec) ∈ [Com(cpars, x)], we have that Ver(cpars, x, com, dec) = 1 with probability one, where the probability is taken over the coins of Com. We require the scheme to have the uniqueness property, which means that for any x ∈ {0, 1}^*, any cpars ∈ [CPG], and any (com, dec) ∈ [Com(cpars, x)] it is the case that Ver(cpars, x, com*, dec) = 0 for all com* ≠ com. In most schemes the decommittal key is the randomness used by the committal algorithm and verification is by re-applying the committal function, which ensures uniqueness. The advantage measures $\mathbf{Adv}^{\mathrm{hide}}_{CMT}(A)$ and $\mathbf{Adv}^{\mathrm{bind}}_{CMT}(A)$, referring to the standard hiding and binding properties, are recalled in [2]. We refer to the corresponding notions as HIDE and BIND.
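As a concrete illustration only (the paper treats commitments abstractly and recalls HIDE and BIND in [2]), the following folklore hash-based commitment matches the syntax above. It is our own example, not a construction from the paper: hiding holds if SHA-256 is modeled as a random oracle and the 32-byte dec is fresh per commitment, binding follows from collision resistance, and uniqueness holds because verification re-applies the deterministic committal core.

```python
# A folklore hash-based instantiation of CMT = (CPG, Com, Ver);
# illustrative only, not a construction from the paper.
import hashlib
import os

def cpg() -> bytes:
    return b""  # this toy scheme needs no public parameters

def com(cpars: bytes, x: bytes):
    dec = os.urandom(32)  # decommittal key = the committal coins
    commitment = hashlib.sha256(cpars + dec + x).digest()
    return commitment, dec

def ver(cpars: bytes, x: bytes, commitment: bytes, dec: bytes) -> int:
    # Verification re-applies Com's deterministic core, so for fixed (x, dec)
    # exactly one commitment verifies: the uniqueness property.
    return 1 if hashlib.sha256(cpars + dec + x).digest() == commitment else 0
```

Since dec here is exactly the coins of Com, this instantiation also has the "easy to find com given (x, dec)" property used in the counterexample discussion below.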
**The strong robustness transform.** The idea is for the ciphertext to include a commitment to the encryption key. The commitment is not encrypted, but the decommittal key is. In detail, given a general encryption scheme GE = (PG, KG, Enc, Dec) and a commitment scheme CMT = (CPG, Com, Ver), the strong robustness transform associates to them the general encryption scheme $\overline{GE}$ = ($\overline{PG}$, $\overline{KG}$, $\overline{Enc}$, $\overline{Dec}$) whose algorithms are depicted in Fig. 7. Note that if GE is a PKE scheme then so is $\overline{GE}$, and if GE is an IBE scheme then so is $\overline{GE}$, so that our results, captured by Theorem 3, cover both settings.

Algorithm $\overline{PG}$: (pars, msk) ←$ PG ; cpars ←$ CPG ; return ((pars, cpars), msk)
Algorithm $\overline{KG}$((pars, cpars), msk, id): (ek, dk) ←$ KG(pars, msk, id) ; return (ek, dk)
Algorithm $\overline{Enc}$((pars, cpars), ek, M): (com, dec) ←$ Com(cpars, ek) ; C ←$ Enc(pars, ek, M ∥ dec) ; return (C, com)
Algorithm $\overline{Dec}$((pars, cpars), ek, dk, (C, com)): M ← Dec(pars, ek, dk, C) ; if M = ⊥ then return ⊥ ; M ∥ dec ← M ; if Ver(cpars, ek, com, dec) = 1 then return M, else return ⊥

**Fig. 7.** General encryption scheme $\overline{GE}$ = ($\overline{PG}$, $\overline{KG}$, $\overline{Enc}$, $\overline{Dec}$) resulting from applying our strong robustness transform to general encryption scheme GE = (PG, KG, Enc, Dec) and commitment scheme CMT = (CPG, Com, Ver).

In this case the delicate issue is not the robustness but the AI-ATK security of $\overline{GE}$ in the CCA case. Intuitively, the hiding security of the commitment scheme means that a $\overline{GE}$ ciphertext does not reveal the encryption key. As a result, we would expect AI-ATK security of $\overline{GE}$ to follow from the commitment hiding security and the assumed AI-ATK security of GE. This turns out not to be true, and demonstrably so, meaning there is a counterexample to this claim. (See below.) What we show is that the claim is true if GE is additionally WROB-ATK. This property, if not already present, can be conferred by first applying our weak robustness transform.

**Theorem 3.** Let GE = (PG, KG, Enc, Dec) be a general encryption scheme, and let $\overline{GE}$ = ($\overline{PG}$, $\overline{KG}$, $\overline{Enc}$, $\overline{Dec}$) be the general encryption scheme resulting from applying the strong robustness transform to GE and commitment scheme CMT = (CPG, Com, Ver). Then

1. AI-ATK: Let A be an ai-adversary against $\overline{GE}$. Then there is a wrob adversary W against GE, a hiding adversary H against CMT and an ai-adversary B against GE such that

$$\mathbf{Adv}^{\mathrm{ai}}_{\overline{GE}}(A) \le 2 \cdot \mathbf{Adv}^{\mathrm{wrob}}_{GE}(W) + 2 \cdot \mathbf{Adv}^{\mathrm{hide}}_{CMT}(H) + 3 \cdot \mathbf{Adv}^{\mathrm{ai}}_{GE}(B)\,.$$

Adversaries W, B inherit the query profile of A, and adversaries W, H, B have the same running time as A. If A is a cpa adversary then so are W, B.

2. SROB-ATK: Let A be a srob adversary against $\overline{GE}$ making q **GetEK** queries. Then there is a binding adversary B against CMT such that

$$\mathbf{Adv}^{\mathrm{srob}}_{\overline{GE}}(A) \le \mathbf{Adv}^{\mathrm{bind}}_{CMT}(B) + \binom{q}{2} \cdot \mathbf{Coll}_{GE}\,.$$

Adversary B has the same running time as A.

The first part of the theorem implies that if GE is AI-ATK and WROB-ATK and CMT is HIDE then $\overline{GE}$ is AI-ATK, and the second part of the theorem implies that if CMT is BIND secure and GE has low encryption key collision probability then $\overline{GE}$ is SROB-ATK. In both cases this is for both ATK ∈ {CPA, CCA}. We remark that the proof shows that in the CPA case the WROB-ATK assumption on GE in the first part is actually not needed.
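To make Figs. 6 and 7 concrete, here is a minimal Python sketch of both transforms, again abstracting the underlying general encryption scheme as callables enc(pars, ek, m) and dec(pars, ek, dk, c) that return None for ⊥, and reusing the toy com/ver commitment sketched above. The helper names and the fixed lengths KLEN and DLEN are our illustrative choices, not the paper's.

```python
# Sketches of the weak robustness transform (Fig. 6) and the strong
# robustness transform (Fig. 7) over an abstract base scheme; `com`/`ver`
# are the toy hash commitment sketched earlier.
import os

KLEN = 96   # |K| in bytes, i.e. k = 768 bits as in the parameter discussion
DLEN = 32   # decommittal-key length of the toy commitment

# --- Weak robustness transform: encrypt M || K, check K on decryption. ---
def weak_pg(pg):
    pars, msk = pg()
    return (pars, os.urandom(KLEN)), msk   # K lives in the public parameters

def weak_enc(enc, pars_bar, ek, m: bytes):
    pars, K = pars_bar
    return enc(pars, ek, m + K)

def weak_dec(dec, pars_bar, ek, dk, c):
    pars, K = pars_bar
    mk = dec(pars, ek, dk, c)
    if mk is None or len(mk) < KLEN:
        return None                         # ⊥
    m, k_star = mk[:-KLEN], mk[-KLEN:]
    return m if k_star == K else None

# --- Strong robustness transform: commit to ek, encrypt M || dec-key. ---
def strong_enc(enc, pars_bar, ek, m: bytes):
    pars, cpars = pars_bar
    commitment, dkey = com(cpars, ek)        # commitment travels in the clear
    return enc(pars, ek, m + dkey), commitment  # decommittal key is encrypted

def strong_dec(dec, pars_bar, ek, dk, c_bar):
    pars, cpars = pars_bar
    c, commitment = c_bar
    md = dec(pars, ek, dk, c)
    if md is None or len(md) < DLEN:
        return None
    m, dkey = md[:-DLEN], md[-DLEN:]
    # Accept only if the recovered key opens the commitment to *this* ek.
    return m if ver(cpars, ek, commitment, dkey) == 1 else None
```

In the weak transform, the $2^{\ell-k}$ slack of Theorem 2 corresponds to the chance that a wrong-key decryption happens to end in K; choosing KLEN = 96 bytes matches the k = 768 parameter choice discussed after Theorem 2.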
The encryption key collision probability $\mathbf{Coll}_{GE}$ of GE is defined as the maximum probability that ek₀ = ek₁ in the experiment where we let (pars, msk) ←$ PG and then let (ek₀, dk₀) ←$ KG(pars, msk, id₀) and (ek₁, dk₁) ←$ KG(pars, msk, id₁), where the maximum is over all distinct identities id₀, id₁. The collision probability is zero in the IBE case since ek₀ = id₀ ≠ id₁ = ek₁. It is easy to see that GE being AI implies $\mathbf{Coll}_{GE}$ is negligible, so asking for low encryption key collision probability is in fact not an extra assumption. (For a general encryption scheme the adversary needs to have hardwired the identities that achieve the maximum, but this is not necessary for PKE, because here the probability being maximized is the same for all pairs of distinct identities.) The reason we made the encryption key collision probability explicit is that for most schemes it is unconditionally low. For example, when GE is the ElGamal PKE scheme, it is 1/|G| where G is the group being used. Proofs of both parts of the theorem are in [2].

**The need for weak-robustness.** As we said above, the AI-ATK security of $\overline{GE}$ won't be implied merely by that of GE. (We had to additionally assume that GE is WROB-ATK.) Here we justify this somewhat counter-intuitive claim. This discussion is informal but can be turned into a formal counterexample. Imagine that the decryption algorithm of GE returns a fixed string of the form $\hat{M} \| \hat{dec}$ whenever the wrong key is used to decrypt. Moreover, imagine CMT is such that it is easy, given cpars, x, dec, to find com so that Ver(cpars, x, com, dec) = 1. (This is true for any commitment scheme where dec is the coins used by the Com algorithm.) Consider then the AI-ATK adversary A against the transformed scheme that receives a challenge ciphertext (C*, com*) where C* ← Enc(pars, EK[id_b], M* ∥ dec*) for hidden bit b ∈ {0, 1}. It then creates a commitment $\hat{com}$ of EK[id₀] with opening information $\hat{dec}$, and queries (C*, $\hat{com}$) to be decrypted under DK[id₀]. If b = 0 this query will probably return ⊥, because Ver(cpars, EK[id₀], $\hat{com}$, dec*) is unlikely to be 1, but if b = 1 it returns $\hat{M}$, allowing A to determine the value of b. The weak robustness of GE rules out such anomalies.

## 5 A SROB-CCA Version of Cramer-Shoup

Let G be a group of prime order p, and H: Keys(H) × G³ → G a family of functions.

Algorithm PG: K ←$ Keys(H) ; g₁ ←$ G* ; w ←$ Z_p^* ; g₂ ← g₁^w ; return (g₁, g₂, K)
Algorithm KG(g₁, g₂, K): x₁, x₂, y₁, y₂, z₁, z₂ ←$ Z_p ; e ← g₁^{x₁} g₂^{x₂} ; f ← g₁^{y₁} g₂^{y₂} ; h ← g₁^{z₁} g₂^{z₂} ; return ((e, f, h), (x₁, x₂, y₁, y₂, z₁, z₂))
Algorithm Enc((g₁, g₂, K), (e, f, h), M): u ←$ Z_p (in CS*: u ←$ Z_p^*) ; a₁ ← g₁^u ; a₂ ← g₂^u ; b ← h^u ; c ← b · M ; v ← H(K, (a₁, a₂, c)) ; d ← e^u f^{uv} ; return (a₁, a₂, c, d)
Algorithm Dec((g₁, g₂, K), (e, f, h), (x₁, x₂, y₁, y₂, z₁, z₂), C): (a₁, a₂, c, d) ← C ; v ← H(K, (a₁, a₂, c)) ; M ← c · a₁^{−z₁} a₂^{−z₂} ; if d ≠ a₁^{x₁+y₁v} a₂^{x₂+y₂v} then M ← ⊥ ; (in CS* only:) if a₁ = 1 then M ← ⊥ ; return M

**Fig. 8.** The original CS scheme [15] does not contain the boxed code (marked "in CS*" above) while the variant CS* does. Although not shown above, the decryption algorithm in both versions always checks to ensure that the ciphertext C ∈ G⁴. The message space is G.
We assume G, p, H are fixed and known to all parties. Fig. 8 shows the Cramer-Shoup (CS) scheme and the variant CS*, where 1 denotes the identity element of G. The differences are boxed. Recall that the CS scheme was shown to be IND-CCA in [15] and ANO-CCA in [4]. However, for any message M ∈ G the ciphertext (1, 1, M, 1) in the CS scheme decrypts to M under any pars, pk, and sk, meaning in particular that the scheme is not even SROB-CPA. The modified scheme CS*, which continues to be IND-CCA and ANO-CCA, removes this pathological case by having Enc choose the randomness u to be non-zero (Enc draws u from Z_p^* while the CS scheme draws it from Z_p) and then having Dec reject (a₁, a₂, c, d) if a₁ = 1. This thwarts the attack, but is there any other attack? We show that there is not by proving that CS* is actually SROB-CCA. Our proof of robustness relies only on the security, specifically the pre-image resistance, of the hash family H: it does not make the DDH assumption. Our proof uses ideas from the information-theoretic part of the proof of [15].

We say that a family H: Keys(H) × Dom(H) → Rng(H) of functions is pre-image resistant if, given a key K and a random range element v*, it is computationally infeasible to find a pre-image of v* under H(K, ·). The notion is captured formally by the following advantage measure for an adversary I:

$$\mathbf{Adv}^{\mathrm{pre\text{-}img}}_{H}(I) = \Pr\big[\, H(K, x) = v^* \;:\; K \leftarrow_{\$} \mathrm{Keys}(H)\,;\ v^* \leftarrow_{\$} \mathrm{Rng}(H)\,;\ x \leftarrow_{\$} I(K, v^*) \,\big]\,.$$

Pre-image resistance is not implied by the standard notion of one-wayness, since in the latter the target v* is the image under H(K, ·) of a random domain point, which may not be a random range point. However, it seems like a fairly mild assumption on a practical cryptographic hash function, and it is implied by the notion of "everywhere pre-image resistance" of [22], the difference being that, for the latter, the advantage is the maximum probability over all v* ∈ Rng(H). We now claim the following.

**Theorem 4.** Let B be an adversary making two **GetEK** queries, no **GetDK** queries and at most q − 1 **Dec** queries, and having running time t. Then we can construct an adversary I such that

$$\mathbf{Adv}^{\mathrm{srob}}_{CS^*}(B) \le \mathbf{Adv}^{\mathrm{pre\text{-}img}}_{H}(I) + \frac{2q+1}{p}\,. \qquad (1)$$

Furthermore, the running time of I is t + q · O(t_exp), where t_exp denotes the time for one exponentiation in G.

Since CS* is a PKE scheme, the above automatically implies security even in the presence of multiple **GetEK** and **GetDK** queries, as required by game SROB_{CS*}. Thus the theorem implies that CS* is SROB-CCA if H is pre-image resistant. A detailed proof of Theorem 4 is in [2]. Here we sketch some intuition.

We begin by conveniently modifying the game interface. We replace B with an adversary A that gets input (g₁, g₂, K), (e₀, f₀, h₀), (e₁, f₁, h₁), representing the parameters that would be input to B and the public keys returned in response to B's two **GetEK** queries. Let (x₀₁, x₀₂, y₀₁, y₀₂, z₀₁, z₀₂) and (x₁₁, x₁₂, y₁₁, y₁₂, z₁₁, z₁₂) be the corresponding secret keys. The decryption oracle takes (only) a ciphertext and returns its decryption under both secret keys, setting a Win flag if these are both non-⊥. Adversary A no longer needs an output, since it can win via a **Dec** query.

Suppose A makes a **Dec** query (a₁, a₂, c, d). Then the code of the decryption algorithm Dec from Fig.
8 tells us that, for this to be a winning query, it must be that

$$d = a_1^{x_{01}+y_{01}v} a_2^{x_{02}+y_{02}v} = a_1^{x_{11}+y_{11}v} a_2^{x_{12}+y_{12}v}$$

where v = H(K, (a₁, a₂, c)). Letting $u_1 = \log_{g_1}(a_1)$, $u_2 = \log_{g_2}(a_2)$ and $s = \log_{g_1}(d)$, we have

$$s = u_1(x_{01} + y_{01}v) + w u_2(x_{02} + y_{02}v) = u_1(x_{11} + y_{11}v) + w u_2(x_{12} + y_{12}v) \qquad (2)$$

However, even acknowledging that A knows little about $x_{b1}, x_{b2}, y_{b1}, y_{b2}$ (b ∈ {0, 1}) through its **Dec** queries, it is unclear why Equation (2) is prevented by pre-image resistance (or in fact any property short of being a random oracle) of the hash function H. In particular, there seems no way to "plant" a target v* as the value v of Equation (2), since the adversary controls u₁ and u₂. However, suppose now that $a_2 = a_1^w$. (We will discuss later why we can assume this.) This implies $w u_2 = w u_1$, and hence $u_2 = u_1$, since w ≠ 0. Now from Equation (2) we have

$$u_1(x_{01} + y_{01}v) + w u_1(x_{02} + y_{02}v) - u_1(x_{11} + y_{11}v) - w u_1(x_{12} + y_{12}v) = 0\,.$$

We now see the value of enforcing a₁ ≠ 1, since this implies u₁ ≠ 0. After canceling u₁ and re-arranging terms, we have

$$v(y_{01} + w y_{02} - y_{11} - w y_{12}) + (x_{01} + w x_{02} - x_{11} - w x_{12}) = 0\,. \qquad (3)$$

Given that $x_{b1}, x_{b2}, y_{b1}, y_{b2}$ (b ∈ {0, 1}) and w are chosen by the game, there is at most one solution v (modulo p) to Equation (3). We would like now to design I so that, on input K, v*, it chooses $x_{b1}, x_{b2}, y_{b1}, y_{b2}$ (b ∈ {0, 1}) so that the solution v to Equation (3) is v*. Then (a₁, a₂, c) will be a pre-image of v*, which I can output. To make all this work, we need to resolve two problems. The first is why we may assume $a_2 = a_1^w$ (which is what enables Equation (3)) given that a₁, a₂ are chosen by A. The second is to properly design I and show that it can simulate A correctly with high probability. To solve these problems, we consider, as in [15], a modified check under which decryption, rather than rejecting when $d \ne a_1^{x_1+y_1 v} a_2^{x_2+y_2 v}$, rejects when $a_2 \ne a_1^w$ or $d \ne a_1^{x+yv}$, where $x = x_1 + w x_2$, $y = y_1 + w y_2$, v = H(K, (a₁, a₂, c)) and (a₁, a₂, c, d) is the ciphertext being decrypted. See [2].

## Acknowledgments

First and third authors were supported in part by the European Commission through the ICT Program under Contract ICT-2007-216646 ECRYPT II. First author was supported in part by the French ANR-07-SESU-008-01 PAMPA Project. Second author was supported in part by NSF grants CNS-0627779 and CCF-0915675. Third author was supported in part by a Postdoctoral Fellowship from the Research Foundation – Flanders (FWO – Vlaanderen) and by the European Community's Seventh Framework Programme project PrimeLife (grant agreement no. 216483). We thank Chanathip Namprempre, who declined our invitation to be a co-author, for her participation and contributions in the early stage of this work.

## References

1. Abdalla, M., Bellare, M., Catalano, D., Kiltz, E., Kohno, T., Lange, T., Malone-Lee, J., Neven, G., Paillier, P., Shi, H.: Searchable encryption revisited: Consistency properties, relation to anonymous IBE, and extensions. Journal of Cryptology 21(3), 350–391 (2008)
2. Abdalla, M., Bellare, M., Neven, G.: Robust encryption. Cryptology ePrint Archive (2009), full version of this paper: http://eprint.iacr.org/
3. Abdalla, M., Bellare, M., Rogaway, P.: The oracle Diffie-Hellman assumptions and an analysis of DHIES. In: Naccache, D. (ed.) CT-RSA 2001. LNCS, vol. 2020, pp.
143–158. Springer, Heidelberg (2001)
4. Bellare, M., Boldyreva, A., Desai, A., Pointcheval, D.: Key-privacy in public-key encryption. In: Boyd, C. (ed.) ASIACRYPT 2001. LNCS, vol. 2248, pp. 566–582. Springer, Heidelberg (2001)
5. Bellare, M., Desai, A., Pointcheval, D., Rogaway, P.: Relations among notions of security for public-key encryption schemes. In: Krawczyk, H. (ed.) CRYPTO 1998. LNCS, vol. 1462, pp. 26–45. Springer, Heidelberg (1998)
6. Bellare, M., Rogaway, P.: The security of triple encryption and a framework for code-based game-playing proofs. In: Vaudenay, S. (ed.) EUROCRYPT 2006. LNCS, vol. 4004, pp. 409–426. Springer, Heidelberg (2006)
7. Boneh, D., Boyen, X.: Efficient selective-ID secure identity based encryption without random oracles. In: Cachin, C., Camenisch, J.L. (eds.) EUROCRYPT 2004. LNCS, vol. 3027, pp. 223–238. Springer, Heidelberg (2004)
8. Boneh, D., Canetti, R., Halevi, S., Katz, J.: Chosen-ciphertext security from identity-based encryption. SIAM Journal on Computing 36(5), 915–942 (2006)
9. Boneh, D., Di Crescenzo, G., Ostrovsky, R., Persiano, G.: Public key encryption with keyword search. In: Cachin, C., Camenisch, J.L. (eds.) EUROCRYPT 2004. LNCS, vol. 3027, pp. 506–522. Springer, Heidelberg (2004)
10. Boneh, D., Franklin, M.K.: Identity-based encryption from the Weil pairing. In: Kilian, J. (ed.) CRYPTO 2001. LNCS, vol. 2139, pp. 213–229. Springer, Heidelberg (2001)
11. Boneh, D., Franklin, M.K.: Identity based encryption from the Weil pairing. SIAM Journal on Computing 32(3), 586–615 (2003)
12. Boneh, D., Gentry, C., Hamburg, M.: Space-efficient identity based encryption without pairings. In: 48th FOCS, pp. 647–657. IEEE Computer Society Press, Los Alamitos (2007)
13. Boyen, X., Waters, B.: Anonymous hierarchical identity-based encryption (without random oracles). In: Dwork, C. (ed.) CRYPTO 2006. LNCS, vol. 4117, pp. 290–307. Springer, Heidelberg (2006)
14. Canetti, R., Halevi, S., Katz, J.: Chosen-ciphertext security from identity-based encryption. In: Cachin, C., Camenisch, J.L. (eds.) EUROCRYPT 2004. LNCS, vol. 3027, pp. 207–222. Springer, Heidelberg (2004)
15. Cramer, R., Shoup, V.: Design and analysis of practical public-key encryption schemes secure against adaptive chosen ciphertext attack. SIAM Journal on Computing 33(1), 167–226 (2003)
16. Dolev, D., Dwork, C., Naor, M.: Nonmalleable cryptography. SIAM Journal on Computing 30(2), 391–437 (2000)
17. Fujisaki, E., Okamoto, T.: Secure integration of asymmetric and symmetric encryption schemes. In: Wiener, M. (ed.) CRYPTO 1999. LNCS, vol. 1666, pp. 537–554. Springer, Heidelberg (1999)
18. Goldwasser, S., Micali, S.: Probabilistic encryption. Journal of Computer and System Sciences 28(2), 270–299 (1984)
19. Hofheinz, D., Weinreb, E.: Searchable encryption with decryption in the standard model. Cryptology ePrint Archive, Report 2008/423 (2008), http://eprint.iacr.org/
20. Katz, J., Sahai, A., Waters, B.: Predicate encryption supporting disjunctions, polynomial equations, and inner products. In: Smart, N.P. (ed.) EUROCRYPT 2008. LNCS, vol. 4965, pp. 146–162. Springer, Heidelberg (2008)
21. Rackoff, C., Simon, D.R.: Non-interactive zero-knowledge proof of knowledge and chosen ciphertext attack. In: Feigenbaum, J. (ed.) CRYPTO 1991. LNCS, vol. 576, pp. 433–444. Springer, Heidelberg (1992)
22.
Rogaway, P., Shrimpton, T.: Cryptographic hash-function basics: Definitions, implications, and separations for preimage resistance, second-preimage resistance, and collision resistance. In: Roy, B., Meier, W. (eds.) FSE 2004. LNCS, vol. 3017, pp. 371–388. Springer, Heidelberg (2004)
23. Sako, K.: An auction protocol which hides bids of losers. In: Imai, H., Zheng, Y. (eds.) PKC 2000. LNCS, vol. 1751, pp. 422–432. Springer, Heidelberg (2000)
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/s00145-017-9258-8?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/s00145-017-9258-8, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "CLOSED", "url": "https://link.springer.com/content/pdf/10.1007%2F978-3-642-11799-2_28.pdf" }
2010
[ "JournalArticle" ]
true
2010-02-09T00:00:00
[]
16,777
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01d3fd7ba9b5b759197008801bbc415a17b6dcf5
[ "Computer Science" ]
0.83162
GARUDA: Pan-Indian distributed e-infrastructure for compute-data intensive collaborative science
01d3fd7ba9b5b759197008801bbc415a17b6dcf5
CSI Transactions on ICT
[ { "authorId": "144281857", "name": "N. Mangala" }, { "authorId": "9220004", "name": "Prahlada Rao B.B." }, { "authorId": "48986585", "name": "S. Chattopadhyay" }, { "authorId": "143903718", "name": "R. Sridharan" }, { "authorId": "1397494925", "name": "N. Sarat Chandra Babu" } ]
{ "alternate_issns": null, "alternate_names": [ "CSI Trans ICT" ], "alternate_urls": [ "https://link.springer.com/journal/40012" ], "id": "33a6df3c-41ae-42db-942e-907f0079caa8", "issn": "2277-9078", "name": "CSI Transactions on ICT", "type": "journal", "url": "http://www.springer.com/" }
null
DOI 10.1007/s40012-013-0016-2

ORIGINAL RESEARCH

# GARUDA: Pan-Indian distributed e-infrastructure for compute-data intensive collaborative science

N. Mangala · B. B. Prahlada Rao · Subrata Chattopadhyay · R. Sridharan · N. Sarat Chandra Babu

Received: 8 November 2012 / Accepted: 28 April 2013 / Published online: 7 June 2013
© CSI Publications 2013

Abstract GARUDA is a nation-wide grid of computational nodes, mass storage and scientific instruments with an aim to provide the technological advancements required to enable compute-data intensive, collaborative applications for the twenty-first century. From a Proof-of-Concept, GARUDA has evolved to an operational grid, aggregating nearly 70TF-15TB compute–storage power via the high-speed National Knowledge Network, and hosts a stack of middleware and tools to enable hundreds of users from diverse communities like life science, earth science, computer aided engineering, material science, etc. The evolution and confluence of research and technologies has led to the maturity of the GARUDA grid: the addition of several hundred CPUs and large data stores, standardization of grid middleware, research on interoperability between grids, and participation from varied application communities have made a significant impact on GARUDA. The GARUDA partner institutes are using this e-infrastructure to grid enable applications of societal and national importance. The authors in this paper present the manner of building a nation-wide operational grid and its evolution, its deliverables, architecture and applications.

Keywords Grid computing · e-Infrastructure · e-Science · Virtual communities · Networking · Grid enable

N. Mangala (corresponding author) · B. B. Prahlada Rao · S. Chattopadhyay · R. Sridharan · N. Sarat Chandra Babu
Center for Development of Advanced Computing (C-DAC), #1, Old Madras Road, Byappanhalli, Bangalore 560038, Karnataka, India
e-mail: mangala@cdac.in
B. B. Prahlada Rao e-mail: prahladab@cdac.in
S. Chattopadhyay e-mail: subratac@cdac.in
R. Sridharan e-mail: rsridharan@cdac.in
N. Sarat Chandra Babu e-mail: sarat@cdac.in

1 Introduction

The revolutionary changes in technologies brought the scientific and engineering communities to embrace grid technology [1] and new e-infrastructures. The new scientific applications have challenging demands of data, computing power, and instrumentation-intensive science, and place importance on collaborations. Analysis of multi-petabyte archives is required in fields as diverse as astronomy, biology, medicine, environment engineering and high-energy physics [2] to gain insights into the nature of matter, life or other aspects of the physical world. The Large Hadron Collider [2] is the world's largest high-energy particle accelerator and collider, located at CERN, to search for the key to the generation of matter. Similar challenges beckon environment and earth observation, disaster management [3, 4], astronomy, bioinformatics [5], the Human Genome project [6], and human health care monitoring.

Grid is a type of parallel and distributed system that enables the sharing, selection, and aggregation of heterogeneous geographically distributed autonomous resources dynamically, depending on their availability, capability, performance, cost, and users' quality-of-service (QoS) requirements. Major initiatives are underway worldwide, aimed variously at supporting major science and grid research projects, and at developing and facilitating grid technologies.
Japan's 'Grid Consortium Japan' [7], China's 'CNGrid' [8], Korea's 'K*Grid' [9], and Italy's 'INFN Grid' [10] are indicative of the global awakening to the potential of grid computing.

2 GARUDA grid

National governments are realizing the importance of new e-infrastructures to enable scientific progress and enhance research competitiveness. Making e-infrastructures available to the research community is crucial and is important to the researchers and the development teams in India. Similar to the international scenario, the Indian Government also realized the strategic importance of grid computing. The Department of Electronics and Information Technology [11], Government of India, supported C-DAC [12] to develop and deploy a nation-wide computational grid, GARUDA [13, 14]. GARUDA is a Pan-Indian grid connecting 71 research/academic institutions spread over 30 cities of the country via the high speed (multi-gigabit), highly reliable and available National Knowledge Network (NKN) [15]. The GARUDA grid aggregates over 70TF-15TB compute-storage power of heterogeneous HPC resources from 16 resource provider sites. All the participating institutes of GARUDA are connected on the NKN.

The authors in this paper try to highlight the architecture of the Indian national grid, GARUDA; readers can get more details from [3–5, 13, 14, 16–21].

2.1 GARUDA architecture

GARUDA has a hybrid architecture, as it supports both centralized and peer-to-peer modes: for grid administration it is centralized, while from the end users' perspective GARUDA works in peer-to-peer mode. It offers a Service Oriented Architecture based on the Globus 4.0.7 middleware [22]. The participating institutes may be partners contributing resources or partners without resources. Figure 1 depicts the overall architecture of GARUDA built on the NKN backbone.

Fig. 1 GARUDA architecture

2.2 GARUDA network

The NKN [15, 23] is the Indian national initiative of a state-of-the-art multi-gigabit Pan-India network for providing a unified high-speed network backbone for all knowledge related institutions in the country. The NKN enables scientists, researchers and academicians from different backgrounds and diverse geographies to work closely in critical and emerging areas. The design of NKN comprises an ultra-high-speed CORE (multiples of 10 Gbps), complemented with a distribution layer at appropriate speeds; participating institutions at the edge connect to the NKN seamlessly at speeds of 1 Gbps or higher. Design and performance details can be found in [24] (International Journal of Computer Applications (0975-888), Volume 48, No. 13, June 2012). The network is designed to support overlay, dedicated, and virtual networks. It is highly reliable, scalable and highly available by design, and provides strict QoS and security. Figure 2 shows the NKN backbone connecting the GARUDA resources and partners. Users can access the grid either from NKN or the Internet.

Fig. 2 GARUDA partners on NKN backbone

The GARUDA partner institutes are connected on a Layer 2/3 Multi-Protocol Label Switching Virtual Private Network (VPN) [25] on the NKN backbone. The backbone of the GARUDA VPN is NKN, with a bandwidth of 1 Gbps and provisions for QoS and security. NKN had a Point-of-Presence in most cities where GARUDA partner institutes were located. The NKN network end point was made available to these institute premises. The last mile cable extension to the Laboratory/Computer Centre was done in cooperation with the partner institute.
The GARUDA team, jointly with NKN and the administrators of the partner institutes, configured the Customer Premise Equipment (routers) as shown in Fig. 3, thereby connecting the institutes over NKN.

Fig. 3 Partners connectivity to NKN

2.3 GARUDA resources

GARUDA has a pragmatic approach of augmenting resources through resource initiatives and collaborations: the PARAM Yuva [26] supercomputer at C-DAC Pune, and GSAT-3 satellite terminals [27] for satellite communication of the Space Application Centre (SAC) [28], ISRO. With the growing popularity of grid computing, several partner institutes also volunteered to contribute compute resources to the project. Host certificates [29] were issued for these clusters and the grid middleware (described in the next section) was deployed on them. A utility (GARUDA Sigma) for automated installation and configuration of the core middleware components was developed; the GARUDA administrators were able to remotely login, download and configure the newly added resources.

2.4 GARUDA stack

The GARUDA grid hosts a stack of middleware and tools for secure access, program development, problem solving environments, scientific visualization, storage grid solutions, metascheduling, and grid monitoring and management, to help developers and application users utilize the system effectively. Fig. 4 shows the components of the software stack: the GARUDA Access Portal [16], the monitoring tool Paryavekshanam [17], the GARUDA integrated development environment [18], the automatic grid service generator [19], the PSE for protein structure prediction [5], Kepler and Galaxy [30] workflows, and the GARUDA visualization gateway.

Fig. 4 GARUDA software stack

The GARUDA Access Portal (Fig. 5) is a web portal for submitting and monitoring jobs and for accounting. Secure access to GARUDA is enabled through Indian Grid Certification Authority (IGCA) [31] certificates and authentication using MyProxy [32]. Grouping users into virtual communities is supported through the Virtual Organization Membership Service (VOMS) [33]. The GARUDA portal allows reservation of computational nodes for guaranteed availability of resources. Jobs are scheduled on grid resources by Gridway [34] and GT4 web services. The portal supports data management through the GARUDA storage resource manager (G-SRM) [35].

Fig. 5 GARUDA access portal

Globus 4.0.7 is the core of the GARUDA middleware. Several middleware-level services, like the login service, compiler service, accounting service, and reservation service, have been developed in-house to facilitate the building of applications and tools. The Gridway metascheduler deployed on the grid headnode talks to the local resource managers (such as PBS/Torque [36, 37] and LoadLeveler [38]) to manage job submission and execution. Gridway (v5.6.1) has been customized to survive failures of computing resources and information systems in GARUDA, with a failover module in the Middleware Access Driver for information management. Gridway is well integrated with the GARUDA Reservation System to take care of jobs with prior resource reservation. We have also introduced custom stage-in, pre- and post-processing mechanisms to serve jobs with large numbers of input and output files. Gridway was also further customized to schedule and run OpenMP parallel jobs considering the specific job requirements.
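To illustrate how a job might reach Gridway, the sketch below submits a minimal job template from Python. This is our illustrative sketch only, not GARUDA portal code: it assumes a host with a stock GridWay installation whose command line tools (gwsubmit, gwps) are on the PATH and a valid user proxy certificate; the template keywords follow standard GridWay job-template syntax, and the executable and file names are placeholders.

```python
# Illustrative sketch only, not GARUDA portal code: it assumes a stock
# GridWay install (gwsubmit/gwps on PATH) and a valid user proxy.
import subprocess
import tempfile

JOB_TEMPLATE = """\
EXECUTABLE  = /bin/hostname
STDOUT_FILE = stdout.${JOB_ID}
STDERR_FILE = stderr.${JOB_ID}
"""

def submit_job(template_text: str) -> None:
    # Write the template to disk and hand it to the metascheduler, which
    # matches the job to a cluster and dispatches it to that cluster's
    # local resource manager (PBS/Torque, LoadLeveler, ...).
    with tempfile.NamedTemporaryFile("w", suffix=".jt", delete=False) as f:
        f.write(template_text)
        template_path = f.name
    subprocess.run(["gwsubmit", "-t", template_path], check=True)
    subprocess.run(["gwps"], check=True)  # list jobs and their current states

if __name__ == "__main__":
    submit_job(JOB_TEMPLATE)
```

In GARUDA's setup the same submission path is fronted by the Access Portal, which handles authentication (MyProxy/VOMS) before jobs reach Gridway.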
Gridway is integrated with the GARUDA Storage Grid to service the data staging requirements of jobs, and a provision has been introduced for adding user-specific job wall time for long running jobs. The G-SRM has been engineered based on the open source Disk Pool Manager [39]; it supports the Grid Security Infrastructure (GSI) [40] and VOMS security mechanisms and dynamic space management, provides direct data transfer from a compute cluster to G-SRM storage, and has interoperability with other SRM implementations like BeStMan [41] and StoRM [42]. Figure 6 reveals the integration of the compute and data grids.

Fig. 6 GARUDA storage resource manager

In the GARUDA architecture the grid headnode is the centralized point of access to GARUDA, hosting the portal and Gridway. This creates concerns of load and bottleneck for multiple simultaneous user accesses, in addition to making the headnode a critical point of failure. To alleviate this concern, failover for the headnode and the hybrid architecture were planned. The software deployment architecture was well thought out to prevent load issues. For example, probes (information providers) are run on the different clusters for monitoring the resources; the time interval for running these scripts and the data transferred have been carefully selected to avoid overloading the system.

2.5 GARUDA security

For enhanced security and international compliance, GARUDA established the IGCA, in addition to the basic GSI and VOMS. It is the first Indian Certificate Authority established to address the security issues of grids and interoperability between international grids. The IGCA received accreditation from the Asia Pacific Grid Policy Management Authority [43] to provide access to worldwide grids for Indian researchers. The IGCA helps scientists, users and the collaborative community in India and neighboring countries to obtain an internationally recognized passport to interoperate with worldwide grids. Details of IGCA can be obtained at http://ca.garudaindia.in/.

2.6 GARUDA communities

Scientists collaborating on a common problem felt the need for secure and controlled sharing of resources. For this purpose, the VOMS [33], developed by the European DataGrid project and publicly available from the Globus Alliance, was customized and deployed on GARUDA. Domain specific virtual communities have been created under GARUDA, such as atmospheric science, life science, computer aided engineering, material science, geophysics, health informatics, etc., and a special virtual organization (VO) for the Open Source Drug Discovery (OSDD) community.

3 GARUDA applications

Application enablement involves understanding the nature of an application and executing it suitably in the grid environment to take advantage of the virtualized grid infrastructure to improve processing speed and/or increase collaboration.

Fig. 7 Grid enabled flood assessment system

Table 1 DMSAR execution time on GARUDA resources

| Data set size (GB) | Serial processing (h) | With 272 procs on GG-BLR (min) | With 368 processes on PARAM Yuva (min) |
|---|---|---|---|
| 9 | 30 | 64 | 26 |

Fig. 8 GARUDA for OSDD

Fig. 9 OSDD–GARUDA interface
GARUDA has successfully demonstrated compute- and collaboration-intensive applications such as synthetic aperture radar (SAR) raw data processing for flood assessment [3, 4], the PSE, molecular docking, OSDD [20, 44], Collaborative Classroom [45], etc.

3.1 Flood assessment

For assessing the extent of inundation during a flood, SAR is employed to capture the raw data of the flood affected region [3, 4]. This raw data is voluminous, and processing it is a compute intensive and time consuming task involving processing several blocks of data using FFTs, data compression, mosaicing, etc. This program has been effectively parallelized at two levels. First, the code has inherent iterative constructs performing large matrix manipulations, which are parallelized using MPI and OpenMP to work on a cluster of SMPs (CLUMPs). Secondly, the voluminous data (typically ~35 GB) itself can be split and processed on separate CLUMPs in almost the same time. The partial results obtained by processing each partition are merged to obtain the complete resultant image. The application flow is depicted in Fig. 7 and the execution time for a typical data set is shown in Table 1.

The benefits of grid enabling the SAR based flood assessment application are evident in both improved processing speed and increased collaboration. Collaborative analysis of the resultant image by experts at different geographic locations is possible by using visualization software called Leica Virtual Explorer, which enables remote sharing of visualization. This project was done in partnership with SAC, ISRO [28].

3.2 OSDD

GARUDA is facilitating the OSDD [44], a CSIR initiative funded by the Government of India, to develop drugs for tropical infectious diseases like malaria, tuberculosis, etc. Drug discovery involves characterizing a disease on a molecular level: identifying the target and its structure, identifying potential ligand binding sites and applying docking methods, identifying the lead, and lead optimization. Considering the number of possibilities of drug-to-target interactions, it is evident that this process is exhaustive, requiring the enormous data and compute power offered by grid computing.

GARUDA is facilitating the OSDD applications [20, 21], which have enormous data and complex algorithms demanding tremendous computational cycles beyond those available at any single location. Running drug design on
Genetic Algorithms using cross-breeding and local mutations run iteratively on large population size to yield potential winglet designs. A winglet optimization application by Zeus Numerix Private Limited, simulating 6,000 winglets taking nearly 30 days sequential computing time was able to complete in about 3 hours by running concurrently over large computing resource of GARUDA grid. 4 GARUDA usage and operation 4.1 Awareness/dissemination The agreed mode of communication from GARUDA to the partnering institutes was at the management level. As a result, information about GARUDA, had not percolated to the researchers level in some organizations. This issue came to light during interactions at different levels. To overcome this shortfall, GARUDA organized thematic workshops, partner meets [46] and periodic telephonic interactions with scientists in the partner institutes. The project website [47] was populated with technical reports, publications and news letters to serve as a mechanism for the GARUDA community to exchange and disseminate information easily. 4.2 Grid enablement Domain researchers working on compute-data intensive scientific problems were eager to use the new grid computing infrastructure but one of the problems in grid ## 1 3 enabling applications was the complexity involved in knowing about the GARUDA tools and their usage. As a result application developers find it difficult to grid enable their applications. This issue was solved by handholding the application developers. An in-house grid application enablement team was formed with a mandate to interact with the application/ domain experts to find out problems faced by the application developers and provide on-site support to them. Issues faced by application developers ranged from understanding grid computing, understanding their application characteristics, parallelizing codes, managing configuration settings, libraries, and use of third party software, etc. The main objective of application developers was to get significant improvement in speed/execution time by exploiting the vast grid resources. However, the problem in most cases was that the application itself was not parallelized. The application enablement team studied the codes and parallelized them into hybrid MPI ? OpenMP code which could run multiple threads on GARUDA’s CLUMPS. Further, in applications such as the flood assessment by processing SAR raw data, it was observed that the voluminous data need to divided thoughtfully and sent for processing on different clusters of the grid, to concurrently process the vast data thereby improving the processing time. Scalability and benchmarking was carried out for several applications (like flood assessment, PSE, winglet design). 4.3 Help desk/customer support (GGOA and RT) Complexity of managing the grid increased as the number of grid users increased, and as the number of resources and software tools were added. A well trained group called Fig. 
GARUDA Grid Operations and Administration (GGOA) was formed to front-end the customer support to GARUDA affiliates. Any problem or query was recorded and tracked with a unique identification number using the Request Tracker (RT) [48] software. The RT has a mechanism to report and resolve a problem/request; if the problem remains unsolved for a long time, the RT automatically escalates the case to the reporting officers. The GGOA team conducts weekly tele-meetings with local system administrators to effectively resolve issues at the different locations of the GARUDA grid. Table 2 shows the utilization of various GARUDA resources by different users.

Table 2 GARUDA resource usage

| Location | 2010 jobs submitted | 2010 CPU hours utilized | 2011 jobs submitted | 2011 CPU hours utilized | Mid 2012 jobs submitted | Mid 2012 CPU hours utilized |
|---|---|---|---|---|---|---|
| C-DAC, Bangalore | 8648 | 48699 | 16430 | 112018 | 7168 | 194852 |
| C-DAC, Chennai | 6087 | 29484 | 9307 | 58091 | 2538 | 101380 |
| C-DAC, Hyderabad | 5357 | 15717 | 10075 | 72419 | 4843 | 68424 |
| C-DAC, Pune | 12565 | 144119 | 5905 | 74254 | 2136 | 54612 |
| IISc | 9 | 0 | 2002 | 4 | 1602 | 56 |
| IIT, Delhi | 554 | 1 | 554 | 239 | 624 | 2 |
| IIT, Guwahati | 1104 | 8113 | 1879 | 9463 | 1186 | 7780 |
| IMSC | 50 | 2486 | 0 | 0 | 0 | 0 |
| JNU | 0 | 0 | 2096 | 1 | 614 | 0 |
| MIT, Chennai | 0 | 0 | 125 | 0 | 168 | 28 |
| PRL | 0 | 0 | 361 | 0 | 916 | 14 |
| Total | 34374 | 248619 | 48734 | 326489 | 21795 | 427148 |

5 Interoperability with international grids

Interoperability between grids is an important research issue. As each nation has its own grid infrastructure, it is essential to work out mechanisms for collaboration between these grids. The EU–India Grid project [49] was set up in early 2007 to identify mechanisms for interoperability between grids. It supports and links the grid communities in Europe and India and promotes research in both regions. The GARUDA grid runs GT4 and the Gridway metascheduler, while EGI (the European Grid Initiative) runs CREAM CE [50, 51] and Gridway. To facilitate interoperability between the grids, Gridway has been tweaked to recognize the target grid and perform job submission and management accordingly, as shown in Fig. 10. Presently the interface between the two grids has been completed, and job submission from either grid to any resource belonging to EGI or GARUDA has been successfully demonstrated. In fact, the key part of interoperability between grids is unified job submission (resource usage) [52].

Fig. 10 GARUDA-EGI interoperability

6 SATGrid–GARUDA interface

Combining satellite technology and grid computing concepts, SAC, ISRO designed a satellite grid (SATGrid). Based on the security, authentication, monitoring and discovery, and data transfer (GridFTP) components of Globus Toolkit 2.4.3 and the SAC-developed scheduler GANESH [53], a prototype satellite based grid was established. It was desired to submit compute intensive jobs from this grid to GARUDA's unprecedented resources. For this, a SATGrid–GARUDA interface was developed using certificate chaining for authentication of SATGrid users to GARUDA, and job execution was supported through the GARUDA portal APIs, as shown in Fig. 11.

Fig. 11 Satellite grid–GARUDA interface

Table 3 Components evolution in GARUDA project phases

| Features | GARUDA PoC (2004–2008) | GARUDA Foundation (2008–2009) | GARUDA Operational (2009–2013) |
|---|---|---|---|
| Architecture | Centralized | Centralized | Hybrid: centralized and P2P |
| Network | Private | Private | NKN |
| Resource | 5TF-2TB | 16TF-8TB | 70TF-15TB |
| Middleware | Globus 2.4.3 (stable release) | Globus 4.0.7 (stable release) | Globus + clouds |
| Web compliance | Pre WS | Web service based | Web service based |
| SOA support | Not supported | Service oriented grid | Supported |
| Grid metascheduler | Moab | Gridway | Gridway-tuned |
| QoS compliance | Rudimentary | Advanced reservation | Support for resources and services |
| Storage solutions | SRB (commercial) | SRM (open source S/W) | SRM, Gridway integrated for seamless job submission |
| Virtual community support | Virtual community groups formed | Enabling virtual communities thru VOMS | Fully supported |
7 GARUDA evolution

In 2004, GARUDA started with an ambitious plan for a Pan-Indian computing grid, with an aim to provide the technology required to enable data and compute intensive science for the twenty-first century. Research and development organizations having data- and compute-intensive problems were approached as collaborating partners. The key objectives of the Proof-of-Concept (PoC) GARUDA were resource aggregation, establishing a nation-wide communication network, provisioning grid tools and services, and grid enablement and deployment of select applications. Considering the enormity of the task, PoC GARUDA adopted a pragmatic approach to setting up the grid by using a judicious mix of open source, in-house developed, and industry components.

In 2006–2007 it was observed that grid technology was fast converging with the web services architecture, following the invention of the Web Services Resource Framework (WSRF) [54]. SOA [55], a combination of the principles of object orientation and web services, led to the formulation of the Open Grid Services Infrastructure (OGSI) [56]. In compliance with OGSI, Globus released GT 4.x in 2008. Also, the successful demonstration of the PoC prompted us to think about turning the research investment into tangible commercial opportunities. Hence GARUDA evolved into a service oriented grid with the stable GT 4.x middleware during the Foundation Phase in 2008–2009. Table 3 gives the details of the Proof-of-Concept (PoC), Foundation and Operational phases of GARUDA.

Many grand projects like the TeraGrid [57], NAREGI [58], and the Distributed European Infrastructure for Supercomputing Applications (DEISA) [59] have had their quota of achievements as well as learnings. As mentioned by Peter H. Beckman in the article 'Building the TeraGrid', one of the most important learnings is to have a precise definition of the word 'grid', specifying the architecture, applications and policies, and differentiating it from general distributed computing. DEISA had to cope with dynamic, heterogeneous and geographically distributed resources, as well as manpower and operational issues. NAREGI had to cope with fine-tuning the middleware for the coexistence of multiple job types, production level loads, and interoperability with other grids. Operational issues were common to all grids. GARUDA also encountered several technical, administrative and managerial issues, in terms of interfaces for diverse users, parallelizing and optimizing users' applications, interoperation, standards [60] and security, usage policies, etc.

8 Conclusion

GARUDA has become a successful Indian nation-wide operational grid and has helped to build the grid community in the country. With various research and academic institutes actively participating in GARUDA, it has created awareness of parallel and distributed computing in the country, starting at the graduate engineering level. Many of the research and academic institutes are participating in collaborative projects using the aggregated resources of the grid.
GARUDA teams are working on next generation technologies such as interoperability of grid and cloud, workflows and PSEs for various application areas, and mobile interfaces for GARUDA that will significantly improve the ease of access to this critical e-infrastructure.

Acknowledgments We are thankful to the Department of Electronics and Information Technology (DeitY), Government of India, for the financial and technical support to GARUDA, the National Grid Computing Initiative.

References

1. Hey AJG, Trefethen A (2003) Grid2: the blueprint for new computing infrastructure. The data deluge: an e-science perspective. In: Berman F, Fox GC, Hey AJG (eds) Grid computing: making the global infrastructure a reality. Wiley, New York
2. CERN Accelerating Science. Welcome to the Worldwide LHC Computing Grid. http://wlcg.web.cern.ch/. Accessed May 2013
3. Ojha P, Mangala N, Prahlada Rao BB, Manavalan R, Mishra T, Manavala Ramanujam V, Bhat H (2008) Disaster management and assessment system using interfaced satellite and terrestrial grids. In: 15th international conference on high performance computing (HiPC 2008), WUGC workshop, Bangalore, 17–20 Dec 2008
4. Manavalan R, Manavala Ramanujam V, Mishra T, Rana SS, Chattopadhyay S, Prahlada Rao BB, Mangala N, Ojha P, Gupta K (2008) Garuda flood assessment system (G-FAS) version 1.0. In: ISRS national symposium on advances in remote sensing technology and applications with special emphasis on microwave remote sensing, Indian Society of Remote Sensing (ISRS), Ahmedabad, India, 18–20 Dec 2008
5. Janaki C, Swapna G, Mangala N, Prahlada Rao BB, Sundararajan V (2008) Distributed genetic algorithms on grid for protein structure prediction. In: 15th international conference on high performance computing (HiPC 2008), WUGC workshop, Bangalore, 17–20 Dec 2008
6. Lister Hill National Center for Biomedical Communications, U.S. National Library of Medicine. The Human Genome Project. Reprinted from Genetics Home Reference
7. Grid Consortium Japan. http://www.jpgrid.org/english/. Accessed May 2013
8. Zha L, Li W, Yu H, Xie X, Xiao N, Xu Z (2005) System software for China national grid. In: Network and parallel computing, pp 14–21
9. Park H. Korea national grid projects: K*Grid, KoCED Grid and Korea e-Science. http://www.ucalgary.ca/iccs/plenary_slides/Hyoungwoo_Park.PDF. Accessed May 2013
10. Italian Grid Infrastructure. http://www.italiangrid.it/. Accessed May 2013
11. Government of India, Department of Electronics and Information Technology, Ministry of Communication and Information Technology. www.deity.gov.in. Accessed May 2013
12. Centre for Development of Advanced Computing (C-DAC). www.cdac.in. Accessed May 2013
13. Ram N, Ramakrishnan S (2006) GARUDA: India's national grid computing initiative. CTWatch Quarterly 2(1), February 2006. http://www.ctwatch.org/quarterly/articles/2006/02/garuda-indias-national-grid-computing-initiative/
14.
Prahlada Rao BB, Ramakrishnan S, RajaGopalan MR, Chattopadhyay S, Mangala N, Sridharan R (2009) E-infrastructures in IT: a case study on the Indian National Grid Computing Initiative, GARUDA. In: International Supercomputing Conference (ISC'09), 23–26 June 2009, Hamburg, Germany. Published in a special edition of Springer's J Comput Sci Res Dev 23(3–4):283–290, June 2009 (Glesner S (ed), ISSN 1865-2042, Journal no. 450, Springer)
15. Raghavan SV. National Knowledge Network (NKN): concept, design and realization. National Knowledge Network brief article. http://ebookbrowse.com/national-knowledge-network-brief-article-pdf-d88360252. Accessed May 2013
16. Arackal VS, Arunachalam B, Bijoy MB, Prahlada Rao BB, Kalasagar B, Sridharan R, Chattopadhyay S (2009) An access mechanism for grid GARUDA. In: IEEE IMSAA 2009, Bangalore, India, Dec 2009
17. Karuna, Deepika HV, Mangala N, Prahlada Rao BB, Mohan Ram N (2008) Paryavekshanam: a status monitoring tool for India grid GARUDA. In: 24th NORDUnet conference, Espoo, Finland, 9–11 April 2008
18. Sukeshini, Kalaiselvan K, Vallinayagam P, VijayaNagamani MS, Mangala N, Prahlada Rao BB, Mohan Ram N (2007) Integrated development environment for GARUDA grid (G-IDE). In: Proceedings of the 3rd IEEE international conference on e-Science and grid computing, Bangalore, India, 10–13 Dec 2007, pp 499–506
19. Mangala N, Singh M, Maan A, Janaki Ch, Chattopadhyay S (2009) Seamless grid service generator for applications on a service oriented grid. In: 2009 World conference on services II, Bangalore, India
20. Bhardwaj A, Janaki Ch (2011) Customized Galaxy with applications as web services and on the grid for open source drug discovery. In: Galaxy community conference, Lunteren, The Netherlands, 25–26 May 2011
21. Karuna, Harikrishna M, Mangala N, Janaki Ch, Shashi S, Chattopadhyay S (2010) Python based Galaxy workflow integration on GARUDA grid. In: International conference on scientific computing with Python, SciPy.in 2010, Hyderabad, 13–18 Dec 2010
22. Welcome to the Globus toolkit. www.globus.org/toolkit. Accessed May 2013
23. National Knowledge Network: connecting knowledge institutions. www.nkn.in. Accessed May 2013
24. Saxena V, Mishra N (2012) Performance evaluation of national knowledge network connectivity. Int J Comput Appl (0975-888) 48(13), June 2012
25. Difference between VPN and Internet. http://www.differencebetween.net/technology/difference-between-vpn-and-internet/. Accessed May 2013
26. PARAM Yuva is the latest in the series of C-DAC's supercomputers. Frontline 26(03), 31 Jan–13 Feb 2009
27. GSAT-3 (EduSat). http://space.skyrocket.de/doc_sdat/gsat-3.htm. Accessed May 2013
28. Space Application Centre. www.sac.gov.in/. Accessed May 2013
29. GSI key concepts, glossary: host certificate. http://globus.org/toolkit/docs/3.2/gsi/key/glossary.html. Accessed May 2013
30. Galaxy: data intensive biology for everyone. http://galaxyproject.org.
Accessed May 2013](http://galaxyproject.org) [31. IGCA—Indian Grid Certification Authority. http://ca.garudaindia.](http://ca.garudaindia.in/) [in/. Accessed May 2013](http://ca.garudaindia.in/) 32. MyProxy—Credential Management Service. [http://grid.ncsa.](http://grid.ncsa.illinois.edu/myproxy/) [illinois.edu/myproxy/. Accessed May 2013](http://grid.ncsa.illinois.edu/myproxy/) [33. Virtual Organization Membership Service (VOMS). www.globus.](http://www.globus.org/grid_software/security/voms.php) [org/grid_software/security/voms.php. Accessed May 2013](http://www.globus.org/grid_software/security/voms.php) 34. Ferna´ndez-Quiruelas C, Cofin˜o V (2011) Aggregating HPC and grid resources using GridWay metascheduler. The 2011 International Conference on Computational Science and Its Applications, June 2011 35. Saluja P, Prahalada Rao BB, Shashidhar V, Paventhan A, Sharma N (2010) An interoperable and optimal data grid solution for heterogeneous service oriented grid—GARUDA. In: High-performance grid computing workshop (HPGC), international parallel and distributed processing symposium (IPDPS), Atlanta, USA, 19–23 April 2010 [36. Portable Batch System. http://en.wikipedia.org/wiki/Portable_](http://en.wikipedia.org/wiki/Portable_Batch_System) [Batch_System. Accessed May 2013](http://en.wikipedia.org/wiki/Portable_Batch_System) 37. Garrick Staples, TORQUE resource manager. In: SC’06, proceedings of the 2006 ACM/IEEE conference on supercomputing. ISBN 0-7695-2700-0 [38. IBM Tivoli workload scheduler loadleveler. http://www.ibm.](http://www.ibm.com/systems/software/loadleveler/) [com/systems/software/loadleveler/. Accessed May 2013](http://www.ibm.com/systems/software/loadleveler/) [39. Disk Pool Manager. http://www.gridpp.ac.uk/wiki/Disk_Pool_Manager.](http://www.gridpp.ac.uk/wiki/Disk_Pool_Manager) Accessed May 2013 40. Overview of the grid security infrastructure. [http://www.](http://www.globus.org/security/overview.html) [globus.org/security/overview.html. Accessed May 2013](http://www.globus.org/security/overview.html) ## 1 3 ----- [41. Berkeley Storage Manager (BeStMan). https://sdm.lbl.gov/bestman/.](https://sdm.lbl.gov/bestman/) Accessed May 2013 42. Magnoni L, Zappi R, Ghiselli A (2008) StoRM: a flexible solution for storage resource manager in grid. In: 2008 IEEE nuclear science symposium conference record 43. Asia Pacific Grid Policy Management Authority (APGrid PMA). [www.apgridpma.org/. Accessed May 2013](http://www.apgridpma.org/) [44. Open Source Drug Discovery (OSDD) www.osdd.net/. Accessed](http://www.osdd.net/) May 2013 45. Satyanarayana N, Jyothi N, Ramu, Sarat Chandra Babu N (2009) A service oriented overlay network architecture for collaborative class room. In: WWW/Internet 2009 conference, Rome, Italy, 17–22 Nov 2009 [46. Garuda events and conferences. http://www.garudaindia.in/html/](http://www.garudaindia.in/html/events.aspx) [events.aspx. Accessed May 2013](http://www.garudaindia.in/html/events.aspx) [47. GARUDA India. www.garudaindia.in. Accessed May 2013](http://www.garudaindia.in) [48. Request Tracker (RT). http://bestpractical.com/rt/. Accessed May](http://bestpractical.com/rt/) 2013 [49. EGEE Portal: Enabling grids for E-sciencE. http://www.eu-egee.](http://www.eu-egee.org/) [org/. Accessed May 2013](http://www.eu-egee.org/) 50. Cream—Main—Homepage—Infn. [http://grid.pd.infn.it/cream/.](http://grid.pd.infn.it/cream/) Accessed May 2013 [51. GLite—Cream CE. 
https://twiki.cern.ch/twiki/bin/view/EGEE/](https://twiki.cern.ch/twiki/bin/view/EGEE/GLiteCREAMCE) [GLiteCREAMCE. Accessed May 2013](https://twiki.cern.ch/twiki/bin/view/EGEE/GLiteCREAMCE) 52. Shamjith KV, Asvija B, Sridharan R, Prahlada Rao BB, Mohan Ram N (2008) Realizing interoperability among grids: a case study with GARUDA grid and the EGEE grid. In: Presented in ## 1 3 international symposium on grid computing 2008, Taipei, Taiwan, 7–11 April 2008 53. Bhatt HS, Patel RM, Kotecha HJ, Patel VH, Dasgupta A (2007) GANESH: grid application management and enhanced scheduling. In: IJHPCA, vol 21, Nr. 4, 2007, pp S419–S428 54. The WS—resource framework. [http://www.globus.org/wsrf/.](http://www.globus.org/wsrf/) Accessed May 2013 55. Srinivasan L, Treadwell J (2005) An overview of service-oriented architecture, web services and grid computing. HP 56. Open grid services infrastructure (OGSI)—The Globus Project. [http://www.globus.org/toolkit/draft-ggf-ogsi-gridservice-33_2003-](http://www.globus.org/toolkit/draft-ggf-ogsi-gridservice-33_2003-06-27.pdf) [06-27.pdf. Accessed May 2013](http://www.globus.org/toolkit/draft-ggf-ogsi-gridservice-33_2003-06-27.pdf) [57. Beckman PH Building the TeraGrid. ftp://mcs.anl.gov/pub/](ftp://mcs.anl.gov/pub/tech_reports/reports/P1206.pdf) [tech_reports/reports/P1206.pdf](ftp://mcs.anl.gov/pub/tech_reports/reports/P1206.pdf) 58. Sakane E, Higashida M, Shimojo S (2009) An Application of the NAREGI grid middleware to a nationwide joint-use environment for computing. High performance computing on vector systems 2008, 55–64 59. Distributed European infrastructure for supercomputing applica[tions. http://www.deisa.eu. Accessed May 2013](http://www.deisa.eu) 60. Chattopadhyay S Challenges of Garuda: The National Grid Computing Initiative. In: ATIP 1st workshop on HPC in India @ [SC-09. http://www.serc.iisc.ernet.in/hpit/proceed/sc/garuda-hpc-](http://www.serc.iisc.ernet.in/hpit/proceed/sc/garuda-hpc-sc09-v2-2.pdf) [sc09-v2-2.pdf](http://www.serc.iisc.ernet.in/hpit/proceed/sc/garuda-hpc-sc09-v2-2.pdf) -----
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/s40012-013-0016-2?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/s40012-013-0016-2, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://link.springer.com/content/pdf/10.1007/s40012-013-0016-2.pdf" }
2013
[]
true
2013-06-07T00:00:00
[]
10,778
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Mathematics", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01d594e846ad2805a208be36c4d45456d45f5fa3
[ "Computer Science", "Mathematics" ]
0.900792
Partial Consensus and Conservative Fusion of Gaussian Mixtures for Distributed PHD Fusion
01d594e846ad2805a208be36c4d45456d45f5fa3
IEEE Transactions on Aerospace and Electronic Systems
[ { "authorId": "47268366", "name": "Tiancheng Li" }, { "authorId": "1729096", "name": "J. Corchado" }, { "authorId": "4247284", "name": "Shudong Sun" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE Trans Aerosp Electron Syst" ], "alternate_urls": [ "https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=7", "https://ieeexplore.ieee.org/xpl/mostRecentIssue.jsp?punumber=7" ], "id": "aa923ff7-b740-49bd-ab6f-4abcd124c6a0", "issn": "0018-9251", "name": "IEEE Transactions on Aerospace and Electronic Systems", "type": "journal", "url": "http://ieeexplore.ieee.org/servlet/opac?punumber=7" }
We propose a novel consensus notion, called “partial consensus,” for distributed Gaussian mixture probability hypothesis density fusion based on a decentralized sensor network, in which only highly weighted Gaussian components (GCs) are exchanged and fused across neighbor sensors. It is shown that this not only gains high efficiency in both network communication and fusion computation, but also significantly compensates the effects of clutter and missed detections. Two “conservative” mixture reduction schemes are devised for refining the combined GCs. One is given by pairwise averaging GCs between sensors based on Hungarian assignment and the other merges close GCs for trace minimal, yet, conservative covariance. The close connection of the result to the two approaches, known as covariance union and arithmetic averaging, is unveiled. Simulations based on a sensor network consisting of both linear and nonlinear sensors, have demonstrated the advantage of our approaches over the generalized covariance intersection approach.
## Partial Consensus and Conservative Fusion of Gaussian Mixtures for Distributed PHD Fusion

#### Tiancheng Li, Juan M. Corchado, and Shudong Sun

Abstract—We propose a novel consensus notion, called "partial consensus", for distributed GM-PHD (Gaussian mixture probability hypothesis density) fusion based on a peer-to-peer (P2P) sensor network, in which only highly-weighted posterior Gaussian components (GCs) are disseminated in the P2P communication for fusion while the insignificant GCs are not involved. The partial consensus not only enjoys high efficiency in both network communication and local fusion computation, but also significantly reduces the effect of potential false data (clutter) on the filter, leading to an increased signal-to-noise ratio at local sensors. Two "conservative" mixture reduction schemes are advocated for fusing the shared GCs in a fully distributed manner. One is given by pairwise averaging GCs between sensors based on Hungarian assignment, and the other merges close GCs based on a new GM merging scheme. The proposed approaches have a close connection to the conservative fusion approaches known as covariance union and arithmetic mean density. In parallel, average consensus is sought on the cardinality distribution (namely the GM weight sum) among sensors. Simulations for tracking either a single target or multiple simultaneously appearing targets are presented based on a sensor network where each sensor operates a GM-PHD filter, in order to compare our approaches with the benchmark generalized covariance intersection approach. The results demonstrate that the partial, arithmetic average, consensus outperforms the complete, geometric average, consensus.

Index Terms—distributed tracking, average consensus, covariance union, PHD filter, Gaussian mixture, arithmetic mean.

I. INTRODUCTION

The rapid development of wireless sensor networks (WSNs) in the last decade is in large part responsible for the recent upsurge in interest in WSN-based distributed tracking. A typical decentralized WSN consists of a number of spatially distributed, interconnected sensors that have independent sensing and calculation capabilities and (only) communicate with their neighbors for information sharing, namely peer-to-peer (P2P) communication. In particular, due to their appealing fault tolerance and scalability to large networks, consensus-based distributed algorithms have gained immense popularity in the sensor networks community.

In the consensus-oriented distributed filtering setup, each sensor operates an independent filter while sharing information with its neighbors iteratively to ameliorate each other's estimation, with the goal of converging to the same estimate over the entire network.

[Footnote: Manuscript received xxxx. T. Li and J.M. Corchado are with the School of Sciences, University of Salamanca, 37007 Salamanca, Spain, e-mail: {t.c.li, corchado}@usal.es; during this work, T. Li has undertaken a secondment at the Institute of Telecommunications, Vienna University of Technology, 1040 Wien, Austria. S. Sun is with the School of Mechanical Engineering, Northwestern Polytechnical University, Xi'an 710072, China, e-mail: sdsun@nwpu.edu.cn. This work is in part supported by the Marie Skłodowska-Curie Individual Fellowship (H2020-MSCA-IF-2015) with Grant 709267.]
As a result, the local estimates, made based on both local observations and the information disseminated from neighbors, are similar to each other (in other words, the sensors asymptotically reach a consensus) and are better than the independent estimates obtained without network cooperation [1]–[3]. In this paper, we consider the scenario with a time-varying, unknown number of targets, which are synchronously observed by all sensors in the presence of false and missing observations.

Great interest has been seen in extending the theory of average consensus [4], [5], for which the item being estimated may be the arithmetic average (considered the default manner [5], akin to the linear opinion pool [6]) or the geometric average [7] (akin to the logarithmic opinion pool [6]) of the initial values. With regard to the type of information disseminated, three main categories of protocols exist; we note that there are protocols such as diffusion [8], [9] that may belong to more than one category. Our approach falls into the last category:

1) Measurement/Likelihood. Disseminating raw measurements can be practically prohibitive in communication. Instead, the likelihood function, as a compact representation of the measurement information, is a promising alternative [10]–[12]. However, in multi-sensor multi-target cases, computationally cumbersome measurement-to-target association or enumeration [13] is typically required. Moreover, it is nontrivial to fuse raw measurements reported by sensors of different profiles, including detection probabilities, clutter rates, etc. To date, measurement/likelihood consensus is mainly limited to the single-target case.

2) Estimate/Track. This involves running tracking algorithms on each sensor separately, yielding a set of tracks to be associated between sensors based on their proximities and then fused, namely track-to-track fusion [14], [15]. When tracks are distant in the state space this may work well, e.g., [16], [17]; otherwise it suffers from fragility and the high computational cost of maintaining a large number of hypotheses.

3) Posterior/Intensity. This involves disseminating and fusing the multi-target posterior [18], [19] or the density of its statistical moments between sensors. In particular, the probability hypothesis density (PHD), which is the first-order moment of the random target-state set, has been developed as a powerful alternative to the full posterior for time-series recursion [20], [21]. In this case, the key is to disseminate and fuse PHDs.

As the state of the art, the geometric average for PHDs/multi-target densities is referred to as the Kullback-Leibler average (KLA) [12], [22], [23], also known as the geometric mean density (GMD) or the exponential mixture density (EMD) [24]–[26]. The fusion approach is known as generalized covariance intersection (GCI) [27]–[29], which extends the Chernoff fusion/covariance intersection [30], [31] to multi-target densities. In spite of its success in certain scenarios, deficiencies of GCI, most of which have already been recognized in the literature, are noted in this paper. However, it is not our intention to revise or improve these geometric average consensus approaches. Instead, we propose novel arithmetic average consensus approaches for PHD fusion, which are closely connected to the so-called covariance union (CU) [32]–[35] and arithmetic mean density (AMD) [26].
In short, there are two distinguishing features of our approaches:

1) Only the significant components of the local PHDs, which are more likely target signals rather than false alarms, are disseminated between neighbors, while the insignificant components that are more suspected to be false alarms are not involved in either the P2P communication or the consensus fusion. As such, the signal-to-noise ratio (SNR) can be positively enhanced at local sensors. This significantly differs from existing consensus approaches, where the (complete) consensus is conditioned on all the information available in the network. The consensus in our approach is made based on a part of the information of the local posteriors, termed partial consensus.

2) Only closely distributed components, which more likely correspond to the same target, are fused, in either of two conservative manners: averaging and merging, based on union calculation rather than intersection. The resulting consensus remains defined in the default arithmetic average sense rather than in the KLA sense, which demonstrates particular advantages in dealing with the false and missing observations that are inevitably involved in realistic tracking.

A preliminary part of the merging-for-fusion protocol has been presented in our conference paper [36], in which, however, we did not analyze its connection to AMD/CU, nor did we provide any conservativeness analysis. The merging scheme adopted initially is a standard one [37], which, as found in this paper, can be optimized in the covariance-fusion part. These have now been completed in this article. In addition, we present much more technical extension and new results, including a new, communicatively much cheaper, partial consensus protocol. Therefore, this paper serves as a significant revision and extension of [36].

The remainder of this paper is organized as follows. The basics of GM-PHD, conservative fusion and GCI are given in Section II, followed by the motivation and key idea of our approach in Section III. The proposed distributed GM-PHD fusion protocol is detailed in Section IV. Simulations are given in Section V for comparing our approaches with the GCI; in particular, the weaknesses of the GCI are noted. We conclude in Section VI.

II. BACKGROUND AND PRELIMINARY

A. RFS and GM-PHD

We consider an unknown and time-varying number $M_k$ of targets with random states $\mathbf{x}_k^{(n)}$ in the state space $\chi \subseteq \mathbb{R}^d$, $n = 1, 2, \dots, M_k$. The collection of target states at time $k$ can be modeled by a random finite set (RFS) $X_k = \{\mathbf{x}_{k,1}, \dots, \mathbf{x}_{k,M_k}\}$ with random cardinality $M_k = |X_k|$. The cardinality distribution $\rho(n)$ of $X_k$ is a discrete probability mass function of $M_k$, i.e., $\rho(n) \triangleq \Pr[M_k = n]$. An RFS variable $X$ is a random variable that takes values as unordered finite sets and is uniquely specified by its cardinality distribution $\rho(n)$ and a family of symmetric joint distributions $p_n(\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n)$ that characterize the distribution of its elements over the state space, conditioned on the set cardinality $n$. Here, a joint distribution function $p_n(\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n)$ is said to be symmetric if its value remains unchanged for all of the $n!$ possible permutations of its variables. The probability density function (PDF) $f(X)$ of an RFS variable $X$ is given as $f(\{\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n\}) = n!\,\rho(n)\,p_n(\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n)$. Instead of propagating the full multi-target density, which has been considered computationally intractable, the PHD filter propagates its first-order statistical moment [20].
For a multi-target RFS variable $X$ with the PDF $f(X)$, its first-order moment PHD $D_S(\mathbf{x})$ in a region $S \subseteq \chi$ is given as:

$$D_S(\mathbf{x}) = \int_S \delta_X(\mathbf{x})\, f(X)\, \delta X, \quad (1)$$

where $\delta_X(\mathbf{x}) \triangleq \sum_{\mathbf{w} \in X} \delta_{\mathbf{w}}(\mathbf{x})$ is used to convert the finite set $X = \{\mathbf{x}_1, \mathbf{x}_2, \dots\}$ into vectors, since the first-order moment is defined in the single-target vector space, $\delta_{\mathbf{y}}(\mathbf{x})$ denotes the generalized Kronecker delta function, and the RFS integral in the region $S \subseteq \chi$ is defined as:

$$\int_S f(X)\, \delta X \triangleq f(\emptyset) + \sum_{n=1}^{\infty} \frac{1}{n!} \int_{S^n} f(\{\mathbf{x}_1, \mathbf{x}_2, \dots, \mathbf{x}_n\})\, d\mathbf{x}_1 d\mathbf{x}_2 \cdots d\mathbf{x}_n. \quad (2)$$

Straightforwardly, the GM approximation of the whole PHD at filtering time $k$ can be written as:

$$D_k(\mathbf{x}) = \sum_{i=1}^{J_k} w_k^{(i)} \mathcal{N}\big(\mathbf{x};\, \mathbf{m}_k^{(i)}, \mathbf{P}_k^{(i)}\big), \quad (3)$$

where $\mathcal{N}(\mathbf{x}; \mathbf{m}, \mathbf{P})$ denotes a Gaussian component (GC) with mean $\mathbf{m}$ and covariance $\mathbf{P}$, $J_k$ is the total number of GCs, and $w_k^{(i)}$ is the weight of the $i$th GC. The PHD is uniquely defined by the property that its integral in any region gives the expected number of targets in that region. Therefore, the expected number of targets can be approximated by the weight sum $W_k$ of all GCs, i.e.,

$$W_k = \sum_{i=1}^{J_k} w_k^{(i)}. \quad (4)$$

In this paper, we assume each local sensor runs a GM-PHD filter, e.g., as given in [21], and that the sensors are synchronous. Our work focuses on the posterior GM dissemination and fusion between neighboring sensors.
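To make the GM representation above concrete, here is a minimal sketch (our own illustration, not the authors' code; all function and variable names are hypothetical) that stores a GM-PHD as arrays, evaluates the intensity (3) at a point, and reads off the expected target number (4) from the weight sum.

```python
import numpy as np
from scipy.stats import multivariate_normal

def phd_at(x, w, m, P):
    """Evaluate D_k(x) = sum_i w_i N(x; m_i, P_i), eq. (3), at one point x."""
    return sum(w_i * multivariate_normal.pdf(x, mean=m_i, cov=P_i)
               for w_i, m_i, P_i in zip(w, m, P))

def expected_cardinality(w):
    """Eq. (4): the weight sum W_k approximates the expected target number."""
    return float(np.sum(w))

# Toy example: two well-separated components in a 1-D state space.
w = np.array([0.9, 0.8])
m = np.array([[0.0], [10.0]])
P = np.array([[[1.0]], [[2.0]]])
print(phd_at(np.array([0.0]), w, m, P))  # intensity near the first component
print(expected_cardinality(w))           # ~1.7 expected targets
```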
B. Conservative Fusion and Mixture Reduction

We now consider a sensor network where all the sensors observe the same set of targets but their measurements are conditionally independent. The errors of the estimates yielded at local sensors are correlated with each other, where the correlation is due to the common prior/noise in the models, common information shared between sensors, etc. Absent any a priori information on the cross-covariance [38]–[40], it is typically intractable, if not impossible, to quantify the correlation between sensors, which is at the least time-varying due to the P2P communication and prevents optimal fusion (e.g., in the sense of minimum mean square error); see also [41]. Therefore, a pseudo-optimal, "conservative" fusion rule is resorted to in order to avoid underestimating the actual squared estimate errors. The benefits of doing so include better fault tolerance and robustness [32], [33]. To be more precise, we have the following definition of the notion "conservative", as used in [30], [32], [34], [42]:

Definition 1 (conservativeness). An estimate pair $(\hat{\mathbf{x}}, \mathbf{P})$ of the real state $\mathbf{x}$ (a random vector), consisting of a vector estimate mean $\hat{\mathbf{x}}$ with an associated error covariance matrix $\mathbf{P}$, is called conservative when $\mathbf{P}$ is no less than the actual error covariance of the estimate, i.e., $\mathbf{P} - E[(\mathbf{x} - \hat{\mathbf{x}})(\mathbf{x} - \hat{\mathbf{x}})^{\mathrm{T}}]$ is positive semi-definite.

With the associated covariance matrix being "conservative", the estimate pair is also called "consistent" [32], [35]. However, a consistent estimator is traditionally an estimator that converges in probability to the quantity being estimated as the sample size grows. To avoid confusion, we shall only use the terminologies "conservative" or "conservativeness".

Lemma 1. A sufficient condition for the fused estimate pair $(\hat{\mathbf{x}}, \mathbf{P})$, due to fusing a set of estimate pairs $(\hat{\mathbf{x}}_i, \mathbf{P}_i)$, $i \in \mathcal{I} = \{1, 2, \dots\}$, in which at least one pair is unbiased and conservative, to be conservative is that

$$\mathbf{P} \geq \mathbf{P}_i + (\hat{\mathbf{x}} - \hat{\mathbf{x}}_i)(\hat{\mathbf{x}} - \hat{\mathbf{x}}_i)^{\mathrm{T}}, \quad \forall i \in \mathcal{I}. \quad (5)$$

Proof. Without loss of generality, supposing $(\hat{\mathbf{x}}_i, \mathbf{P}_i)$ is unbiased and conservative, we have

$$E[(\mathbf{x} - \hat{\mathbf{x}}_i)(\hat{\mathbf{x}}_i - \hat{\mathbf{x}})^{\mathrm{T}}] = 0, \quad (6)$$

$$\mathbf{P}_i \geq E[(\mathbf{x} - \hat{\mathbf{x}}_i)(\mathbf{x} - \hat{\mathbf{x}}_i)^{\mathrm{T}}], \quad (7)$$

due to the unbiasedness and conservativeness, respectively. By decomposing $\mathbf{x} - \hat{\mathbf{x}}$ as $(\mathbf{x} - \hat{\mathbf{x}}_i) + (\hat{\mathbf{x}}_i - \hat{\mathbf{x}})$, we obtain $E[(\mathbf{x} - \hat{\mathbf{x}})(\mathbf{x} - \hat{\mathbf{x}})^{\mathrm{T}}] \leq \mathbf{P}_i + (\hat{\mathbf{x}} - \hat{\mathbf{x}}_i)(\hat{\mathbf{x}} - \hat{\mathbf{x}}_i)^{\mathrm{T}}$, which finishes the proof.

Lemma 2. Given a set of conservative estimate pairs $(\hat{\mathbf{x}}, \tilde{\mathbf{P}}_i)$, $i \in \mathcal{I} = \{1, 2, \dots\}$, with the same unbiased estimate mean associated with possibly different error-covariance matrices, a sufficient condition for the fused estimate pair $(\hat{\mathbf{x}}, \mathbf{P})$ to be conservative is given by

$$\mathbf{P} \geq \sum_{i \in \mathcal{I}} \omega_i \tilde{\mathbf{P}}_i, \quad (8)$$

where the non-negative scaling parameters $\omega_i \geq 0$, $\sum_{i \in \mathcal{I}} \omega_i = 1$, are called fusing weights hereafter.

Proof. The conservativeness of the fusing estimate pairs reads

$$\tilde{\mathbf{P}}_i \geq E[(\mathbf{x} - \hat{\mathbf{x}})(\mathbf{x} - \hat{\mathbf{x}})^{\mathrm{T}}], \quad \forall i \in \mathcal{I}. \quad (9)$$

The proof is simply done by multiplying both sides of (9) by $\omega_i$ and summing up over all $i \in \mathcal{I}$, which leads to

$$\sum_{i \in \mathcal{I}} \omega_i \tilde{\mathbf{P}}_i \geq E[(\mathbf{x} - \hat{\mathbf{x}})(\mathbf{x} - \hat{\mathbf{x}})^{\mathrm{T}}]. \quad (10)$$

Definition 2 (Standard Mixture Reduction, SMR). Given a set of estimate pairs $(\hat{\mathbf{x}}_i, \mathbf{P}_i)$ weighted as $w_i$, $i \in \mathcal{I}$, respectively, the SMR scheme [37] fuses them into a single estimate pair $(\hat{\mathbf{x}}_{\mathrm{SMR}}, \mathbf{P}_{\mathrm{SMR}})$ with weight $w_{\mathrm{SMR}}$, given by

$$w_{\mathrm{SMR}} = \sum_{i \in \mathcal{I}} w_i, \quad (11)$$

$$\hat{\mathbf{x}}_{\mathrm{SMR}} = \frac{\sum_{i \in \mathcal{I}} w_i \hat{\mathbf{x}}_i}{\sum_{i \in \mathcal{I}} w_i}, \quad (12)$$

$$\mathbf{P}_{\mathrm{SMR}} = \frac{\sum_{i \in \mathcal{I}} w_i \tilde{\mathbf{P}}_i}{\sum_{i \in \mathcal{I}} w_i}, \quad (13)$$

where the adjusted covariance matrix is given by (cf. (5))

$$\tilde{\mathbf{P}}_i = \mathbf{P}_i + (\hat{\mathbf{x}}_{\mathrm{SMR}} - \hat{\mathbf{x}}_i)(\hat{\mathbf{x}}_{\mathrm{SMR}} - \hat{\mathbf{x}}_i)^{\mathrm{T}}. \quad (14)$$

Lemma 3. Given that all the fusing estimate pairs are unbiased and conservative, the resulting estimate pair given by the SMR scheme as in (12)–(13) is unbiased and conservative.

Proof. The proof follows straightforwardly from Lemmas 1 and 2. First, unbiasedness is due to the convex combination. Second, given that each $(\hat{\mathbf{x}}_i, \mathbf{P}_i)$, $\forall i \in \mathcal{I} = \{1, 2, \dots\}$, is unbiased and conservative, $(\hat{\mathbf{x}}_{\mathrm{SMR}}, \tilde{\mathbf{P}}_i)$ is conservative according to Lemma 1, and so their convex combination $(\hat{\mathbf{x}}_{\mathrm{SMR}}, \mathbf{P}_{\mathrm{SMR}})$ is conservative according to Lemma 2.

It is important to note that, considering "conservativeness" only, the fusing weights used to get a conservative fused covariance matrix as in (13) do not have to be the same as those used to get the fused state as in (12). Instead, it is typically of interest to use different fusing weights to get a fused covariance that is optimal in some sense, while retaining conservativeness. For this, we have the following proposition, akin to the CI-based optimization [43], [44].

Proposition 1. The covariance-fusing weights for (8) can be determined such that the trace of the resulting covariance matrix is minimized, i.e., $\{\omega_i\}_{i \in \mathcal{I}} = \operatorname{argmin}_{\omega_i, i \in \mathcal{I}} \operatorname{tr}\big(\sum_{i \in \mathcal{I}} \omega_i \tilde{\mathbf{P}}_i\big)$. Thanks to the convex combination and the positive trace of the matrices, the solution is simply given by $\omega_i = 1$, $\omega_j = 0$, $\forall j \neq i$, $j \in \mathcal{I}$, where $i = \operatorname{argmin}_{i \in \mathcal{I}} \operatorname{tr}(\tilde{\mathbf{P}}_i)$. That is, the trace-minimal yet conservative fused covariance is given by

$$\mathbf{P}_{\mathrm{OMR}} = \operatorname*{argmin}_{\tilde{\mathbf{P}}_i} \operatorname{tr}\big(\tilde{\mathbf{P}}_i\big). \quad (15)$$

Hereafter, we refer to the MR scheme based on (11), (12), (15) and (14) as the optimal mixture reduction (OMR), which differs from the SMR only in the covariance-fusion part. It is a type of fusion seeking conservativeness, given that all fusing estimate pairs are unbiased and conservative.
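As a concrete reading of (11)–(15), the following sketch (ours; names are illustrative and not from the paper) merges a group of weighted Gaussian pairs, returning either the SMR covariance (13) or the trace-minimal conservative covariance of the OMR variant (15).

```python
import numpy as np

def smr_omr_merge(w, m, P, optimal=True):
    """Merge weighted Gaussian pairs (w_i, m_i, P_i) per eqs (11)-(15).

    SMR: weight sum (11), weighted mean (12), weighted spread covariance (13)-(14).
    OMR: same weight and mean, but the covariance is the trace-minimal
    adjusted covariance, eq. (15), which remains conservative.
    """
    w = np.asarray(w, dtype=float)
    w_merged = w.sum()                                   # (11)
    x_merged = (w[:, None] * m).sum(axis=0) / w_merged   # (12)
    # Adjusted covariances P~_i = P_i + (x - x_i)(x - x_i)^T, eq. (14)
    diffs = x_merged - m
    P_tilde = P + np.einsum('ij,ik->ijk', diffs, diffs)
    if optimal:
        P_merged = P_tilde[np.argmin(np.trace(P_tilde, axis1=1, axis2=2))]  # (15)
    else:
        P_merged = (w[:, None, None] * P_tilde).sum(axis=0) / w_merged      # (13)
    return w_merged, x_merged, P_merged
```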
C. GCI Fusion and KLA

Given a set of posteriors $f_i \in \Psi$, $i \in \mathcal{I}$, to be fused with the fusing weights $\omega_i \geq 0$, where $\Psi$ denotes the set of PDFs over the state space $\chi$ and $\mathcal{I} = \{1, 2, \dots\}$ denotes the fusing sensor set, the GCI/Chernoff fusion [27], which resembles the logarithmic opinion pool [6] and the belief consensus [7], reads

$$f_{\mathrm{GCI}} \triangleq C^{-1} \prod_{i \in \mathcal{I}} f_i^{\omega_i}, \quad (16)$$

where $C$ is a normalization constant. The GCI fusing result is also known as the GMD [26] or EMD [24], [25], [45], which actually minimizes the weighted sum of its KLD with respect to all posteriors $f_i$, $\forall i \in \mathcal{I}$, and is, therefore, also referred to as the weighted Kullback-Leibler average (KLA) [12], [22], i.e.,

$$f_{\mathrm{KLA}} = \operatorname*{arg\,inf}_{f \in \Psi} \sum_i \omega_i\, d_{\mathrm{KL}}(f \,\|\, f_i), \quad (17)$$

where $d_{\mathrm{KL}}(f_a \| f_b) \triangleq \int f_a(X) \log \frac{f_a(X)}{f_b(X)}\, \delta X$ is the set-theoretical KLD of the intensity $f_a$ from $f_b$.

Three challenges arise due to the exponentiation and product calculation when the posterior $f_i$ in (16) is given by a mixture, such as the GM:

1) The fractional-order exponential power of a GM does not yield a GM. Existing solutions are based on either analytical approximation that only applies to special mixtures (e.g., whose components are well distant) [12], [22], [23], [46] or numerical approximation via importance sampling [47], [48] or the sigma point method [49].

2) The product rule is prone to mis-detection. Mis-detection at one sensor will remarkably degrade the detection at the other sensors, since any signal multiplied by a weak signal of almost zero energy will be greatly weakened. See also the illustration given in [50], [51].

3) The GCI/KLA fusion will typically result in a multiplying number of fused GCs [22], which is costly in both communication and computation.

These problems can lead to disappointing results in certain cases, which will be discussed within our simulation study in Section V.C.
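For intuition on (16): when each $f_i$ is a single Gaussian and $|\mathcal{I}| = 2$, the GCI/Chernoff rule reduces, up to normalization, to classical covariance intersection, which the mixture challenges above generalize. A minimal sketch of that base case (ours, with hypothetical names):

```python
import numpy as np

def gci_two_gaussians(m1, P1, m2, P2, omega=0.5):
    """GCI/Chernoff fusion (16) of two Gaussians = covariance intersection:
    P^-1 = w P1^-1 + (1-w) P2^-1,  P^-1 m = w P1^-1 m1 + (1-w) P2^-1 m2."""
    I1, I2 = np.linalg.inv(P1), np.linalg.inv(P2)
    P = np.linalg.inv(omega * I1 + (1.0 - omega) * I2)
    m = P @ (omega * I1 @ m1 + (1.0 - omega) * I2 @ m2)
    return m, P
```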
To overcome these deficiencies, we propose alternatives without the intrinsic need for exponentiation and product calculation of mixtures, while being "conservative" not only in fusion but also in communication. In the following distributed formulation, we will use subscripts $a$ and $b$ to distinguish between two neighboring sensors. Since all calculations regard the same filtering time $k$, we drop the subscript $k$ for notational simplicity.

III. KEY IDEA AND PROPERTIES OF OUR PROPOSAL

This section presents two "conservative" principles for distributed fusion algorithm design, which constitute the key idea of our approaches:

- P.1 Conservative communication. Consensus should only be sought on the information of targets. To have this maximally respected, only the GCs of significance (those that are highly likely to correspond to a target) should be disseminated, while the insignificant GCs (those that are more like false alarms) should be the least involved, for conservative consideration. We refer to this as the "conservative communication" principle. Here the "conservativeness" is not the same as the estimate "conservativeness" given in Definition 1; we assume that the reader will not be confused.

- P.2 Conservative fusion. Only highly relevant information, namely that which very likely corresponds to the same target, should be fused, and the fused results should retain "conservativeness" in order to deal with the unknown correlation between sensors. We refer to this as the "conservative fusion" principle.

A. Conservative Communication for Partial Consensus

First, mixture reduction is carried out in the local GM filters as usual at each filtering step, before network communication, to control the GM size. Second, only highly-weighted GCs, which more likely correspond to real targets rather than false alarms, are disseminated between neighbors. To this end, we propose two alternative rules to identify these target-likely GCs, referred to as the rank rule and the threshold rule, respectively (both rules, together with the AMD combination introduced below, are illustrated in the sketch at the end of this section):

- P.1.1 Rank rule. Specify the number of GCs to be disseminated as equal to the intermediately estimated number of targets at each sensor, using the closest integer to $W_k$ as in (4), or, more straightforwardly, specify a fixed number of GCs when a priori information (e.g., a maximum) about the number of targets is known. Then, only the corresponding number of GCs with the greatest weights are transmitted to the neighbors.

- P.1.2 Threshold rule. Specify a weight threshold $w_s$; only the components weighted greater than that threshold will be transmitted.

It is also possible to use a hybrid, arguably more conservative, criterion such that only the GCs that fulfill both rules are chosen, or a hybrid, less conservative, criterion such that any GC that fulfills either rule can be chosen. In any case, we refer to the chosen components as a separate GM, hereafter called the target-likely GM (T-GM), and denote the T-GM size at sensor $a$ as $n_a$, i.e.,

$$D_{a,\mathrm{T}}(\mathbf{x}) \triangleq \sum_{i=1}^{n_a} w_a^{(i)} \mathcal{N}\big(\mathbf{x};\, \mathbf{m}_a^{(i)}, \mathbf{P}_a^{(i)}\big), \quad (18)$$

of which the total weight ($\leq W_a$) is given as

$$W_{a,\mathrm{T}} = \sum_{i=1}^{n_a} w_a^{(i)}. \quad (19)$$

Correspondingly, the remaining GCs at each sensor are called false-alarm-suspicious GCs (FA-GCs), which will not be involved in the neighborhood communication.

Definition 3 (partial consensus). The consensus yielded by disseminating among sensors an incomplete part of the information they own, i.e., only target-likely GCs in our approaches, is called partial consensus.

B. Conservative AMD Fusion

Different from the KLA optimality of the GMD as in (17), the AMD [26] calculates the average of the posterior multi-target densities in the arithmetic sense [5] rather than in the geometric average sense [7], or, equivalently speaking, based on the linear opinion pool rather than the logarithmic opinion pool.

Definition 4 (AMD). The AMD of multiple posteriors $f_i$, $i \in \mathcal{I}$, is given as follows:

$$f_{\mathrm{AMD}} \triangleq \sum_{i \in \mathcal{I}} \omega_i f_i, \quad (20)$$

where the fusing weights $\omega_i \geq 0$, $\sum_{i \in \mathcal{I}} \omega_i = 1$. As addressed, $f_i$ is only a partial PHD obtained at sensor $i$ in our work.

As shown, the AMD is given by a convex union of multi-sensor posteriors, which does not double count information [26] and is provably conservative (cf. Lemma 2). It was further compared with the GMD in [26]: "the GMD is potentially inconsistent if a single component is inconsistent while the AMD is conservative if even a single component is consistent" [cf. Lemma 1]. Indeed, the union-type AMD fusion is less prone to the problem of mis-detection, as it does not involve product calculation. More importantly, it does not fuse the information of different targets and of clutter unless they lie too close to each other. The AMD of GMs can easily be realized through re-weighting (by using the fusing weights) and combining GMs in the neighborhood. A similar idea has actually been applied for pairwise gossip-based fusion [50] and for averaging the "generalized likelihood" [52]. Basically, two key issues need to be solved. First, we need a proper mechanism to design the fusing weights. The most straightforward solution is given by uniform fusing weights, which may not guarantee efficient consensus convergence and appeals primarily to the case when only a few P2P communication iterations are allowed. For faster convergence, the popular Metropolis weights [4], [53] approach is readily competent, given a large number of communication iterations.
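A minimal sketch (ours; the function names and the default threshold are assumptions) of the two ingredients just introduced: T-GM selection by the rank rule P.1.1 or the threshold rule P.1.2, and the AMD combination (20) of shared T-GMs by re-weighting and concatenation.

```python
import numpy as np

def select_tgm(w, rule='rank', w_s=0.5):
    """Indices of the target-likely GCs, i.e., the T-GM of eqs (18)-(19)."""
    w = np.asarray(w, dtype=float)
    if rule == 'rank':
        # P.1.1: keep the round(W_a) highest-weighted components.
        return np.argsort(w)[::-1][:int(round(w.sum()))]
    # P.1.2: keep every component weighted above the threshold w_s.
    return np.flatnonzero(w > w_s)

def amd_combine(omegas, gms):
    """Eq. (20) for GMs: scale each shared T-GM's weights by its fusing
    weight and concatenate; gms is a list of (w, m, P) array triples."""
    w = np.concatenate([om * wi for om, (wi, _, _) in zip(omegas, gms)])
    m = np.concatenate([mi for (_, mi, _) in gms])
    P = np.concatenate([Pi for (_, _, Pi) in gms])
    return w, m, P

w = np.array([0.85, 0.6, 0.05, 0.02])
print(select_tgm(w, 'rank'))        # the two highest-weighted GCs (W_a ~ 1.5)
print(select_tgm(w, 'threshold'))   # the GCs with weight > 0.5
```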
The Metropolis approach determines the fusing weights for the information from sensor $b$ at the host sensor $a$ as follows:

$$\omega_{b \to a} = \begin{cases} \dfrac{1}{1 + \max(|\mathcal{N}_a|, |\mathcal{N}_b|)} & \text{if } b \in \mathcal{N}_a,\ b \neq a, \\[4pt] 1 - \sum_{l \in \mathcal{N}_a} \omega_{l \to a} & \text{if } b = a, \\[4pt] 0 & \text{if } b \notin \mathcal{N}_a, \end{cases} \quad (21)$$

where $\mathcal{N}_a$ denotes the set of neighbors of sensor $a$ (excluding $a$).

Second, the AMD of $N$ GCs and $M$ GCs as in (20) will have $N + M$ GCs, which is in general much smaller than the $N \times M$ yielded by GCI. Still, the local GM size grows linearly with the number of fusing sensors. In order to reduce the number of GCs to be transmitted and to maintain a stable overall GM size, we next present two conservative MR schemes for fusing the gathered T-GMs in a fully distributed fashion.

IV. CONSERVATIVE MR SCHEMES

We use $t \in \mathbb{N} = \{0, 1, 2, \dots\}$ to denote the P2P communication iteration; $t = 0$ refers to the original status of the local sensor without any communication. This section presents two MR approaches in line with the conservative fusion principle, based on either OMR or pairwise GM averaging, which need to be executed at each P2P communication iteration.

A. Conservative Fusion P.2.1: GM Merging

The first MR protocol for T-GM fusion is given by combining the newly received and the local T-GMs into one set and merging the close T-GCs based on the proposed OMR. Before this, the GC weights should be scaled by the fusing weights, as addressed, according to their origination sensor. However, as shown in our simulation, this protocol typically bears a high communication cost (which increases with $t$), and more iterations ($t > 2$) do not yield significantly more benefit, so we do not suggest a larger number of communication iterations. Therefore, uniform fusing weights are preferable (especially for $t \leq 2$).

A key issue of MR/OMR is to determine the size of the gate for selecting the GCs to be merged. In our approach, the distance between two T-GCs, e.g., $\mathcal{N}(\mathbf{x}; \mathbf{m}_a, \mathbf{P}_a)$ and $\mathcal{N}(\mathbf{x}; \mathbf{m}_b, \mathbf{P}_b)$, is measured by the Mahalanobis-type distance

$$C_{a,b} \triangleq (\mathbf{m}_a - \mathbf{m}_b)^{\mathrm{T}} \mathbf{P}^{-1} (\mathbf{m}_a - \mathbf{m}_b), \quad (22)$$

where $\mathbf{P}$ is chosen as the covariance of the GC of higher weight. A gate threshold $\tau$ is needed to control the GC grouping such that only T-GCs at a distance smaller than $\tau$ will be merged, as a trade-off between the resultant GM size and the merging error. The gate has a clear physical meaning, as it indicates that the real state lies no further than $\tau$ standard deviations from the state estimate with a certain probability, or at least a lower bound on that probability. When the estimate is unbiased and inferred from Gaussian random variables, the probability that the real state $\mathbf{x}$ lies within $\tau$ standard deviations of the state estimate $\hat{\mathbf{x}}$ is given by [54]

$$\Pr\big[(\mathbf{x} - \hat{\mathbf{x}})^{\mathrm{T}} \mathbf{P}^{-1} (\mathbf{x} - \hat{\mathbf{x}}) \leq \tau^2\big] = \gamma\Big(\frac{d}{2}, \frac{\tau^2}{2}\Big), \quad (23)$$

where $\mathbf{P}$ is the error-covariance matrix of the estimate $\hat{\mathbf{x}}$, $\gamma$ is the lower incomplete Gamma function and $d$ is the cardinality of the state vector.

Due to the uniform fusing weights, the T-GM combination and merging will certainly raise the weight sum at sensor $a$ to

$$\tilde{W}_a(t) = W_a(t-1) + \sum_{j \in \mathcal{N}_a} W_{j,\mathrm{T}}(t-1), \quad (24)$$

where $W_a(t-1)$ and $W_{j,\mathrm{T}}(t-1)$ are the weight sums of the whole GM at local sensor $a$ and of the T-GM at neighbor $j$, respectively, after $t-1$ iterations of P2P communication, and $W_{j,\mathrm{T}}(0)$ is defined as in (19). As such, the weights of all GCs $w_a^{(i)}$, $i = 1, 2, \dots, J_a(t)$, after merging at each iteration $t$ need to be re-scaled for correct cardinality estimation, where $J_a(t)$ is the GM size at iteration $t$ and $\tilde{W}_a(t) = \sum_{i=1}^{J_a(t)} w_a^{(i)}$.
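Two of the quantities above are easy to compute directly; the sketch below (ours, with illustrative names) evaluates the Metropolis weights (21) from neighbor counts and the gate probability (23) via the regularized lower incomplete gamma function.

```python
from scipy.special import gammainc

def metropolis_weights(a, neighbors):
    """Eq. (21): neighbors maps each sensor id to its neighbor set."""
    w = {b: 1.0 / (1 + max(len(neighbors[a]), len(neighbors[b])))
         for b in neighbors[a]}
    w[a] = 1.0 - sum(w.values())   # self-weight closes the convex combination
    return w

def gate_probability(tau, d):
    """Eq. (23): P[(x - xh)' P^{-1} (x - xh) <= tau^2] for a d-dim Gaussian."""
    return gammainc(d / 2.0, tau ** 2 / 2.0)

# Toy network: sensor 0 talks to 1 and 2; sensors 1 and 2 talk only to 0.
nbrs = {0: {1, 2}, 1: {0}, 2: {0}}
print(metropolis_weights(0, nbrs))     # {1: 1/3, 2: 1/3, 0: 1/3}
print(gate_probability(tau=5.0, d=4))  # gate coverage for a 4-D state
```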
To this end, we may apply average consensus to the cardinality estimates, namely "cardinality consensus", which will be carried out simultaneously with the proposed T-GM consensus. This is feasible because the cardinality estimates yielded by the PHD filter (4) are scalar-valued parameters, for which the standard average consensus based on Metropolis weights [4], [53] is straightforwardly applicable. To this end, the local GM weight sums will also be disseminated in the neighborhood along with the T-GCs for consensus, and we have the following proposition.

Proposition 2 (Cardinality Consensus). Metropolis-weights-based average consensus is applied to update the local weight sum at each communication iteration as follows:

$$W_a(t) = \sum_{l \in \{a\} \cup \mathcal{N}_a} \omega_{l \to a}\, W_l(t-1), \quad (25)$$

which will be used for re-scaling the weights of all GCs at each communication iteration $t$, i.e.,

$$w_a^{(i)} \leftarrow \beta_a w_a^{(i)}, \quad \forall i = 1, 2, \dots, J_a(t), \quad (26)$$

where $\beta_a = \frac{W_a(t)}{\tilde{W}_a(t)}$.

In order to analyze the change of the weight of the FA-GCs due to (26), we make two approximate albeit reasonable assumptions: $W_a(t-1) \approx W_b(t-1)$ and $W_{b,\mathrm{T}}(t-1) \approx \alpha W_b(t-1)$, for all $b \in \mathcal{N}_a$. Clearly, $\alpha < 1$. As such, (24) and (25) reduce to $\tilde{W}_a(t) \approx (1 + \alpha|\mathcal{N}_a|) W_a(t-1)$ and $W_a(t) \approx W_a(t-1)$, respectively. These read

$$\beta_a \approx \frac{1}{1 + \alpha|\mathcal{N}_a|}. \quad (27)$$

In most cases, the T-GCs take the majority of the weight sum, namely $\alpha > 0.5$. For example, when sensor $a$ has two neighbors, namely $|\mathcal{N}_a| = 2$, we get $\beta_a < 0.5$, which indicates that the weight of the FA-GCs at local sensor $a$ will be approximately reduced to less than a half by (26). Comparably, the T-GCs merge with many others from the neighbors, which will counteract such a reduction; instead, their weight will likely be increased slightly. That is, the target-likely signal will be enhanced while the FA-suspicious signal will be weakened or even ultimately removed by pruning. This raises the SNR at the local sensors, reducing the possibility of causing false alarms and facilitating more accurate estimation. We refer to this fusion protocol as conservative GM merging (CGMM).

For illustrative purposes, the CGMM operations, including GC selection, transmission, merging and re-weighting, are shown in Fig. 1 for a 1-dimensional state-space model using two sensors. In the top row, the original PHDs at the local sensors are given as GMs, each having two significant GCs that likely correspond to targets. The sensors share them with each other, and then GC merging (and pruning to remove very insignificant FA-GCs) and re-weighting are performed, as shown in the middle and bottom rows, respectively. The resulting GMs are re-weighted such that they have the same weight sum for cardinality consensus. At the end, the T-GCs become more significant due to merging while the FA-GCs are weakened, leading to an enhanced SNR.
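One cardinality-consensus step of Proposition 2 can be written directly from (25)–(26); a sketch (ours, with illustrative argument names):

```python
import numpy as np

def cardinality_consensus_step(w_gc, W_local, W_neighbors, omega):
    """One iteration of eqs (25)-(26).

    w_gc: local GC weights after combination/merging (their sum is W~_a(t)).
    W_local, W_neighbors: previous-iteration weight sums W_l(t-1).
    omega: Metropolis weights, self-weight first, then one per neighbor.
    """
    W_consensus = omega[0] * W_local + np.dot(omega[1:], W_neighbors)  # (25)
    beta = W_consensus / np.sum(w_gc)                                  # beta_a
    return beta * w_gc, W_consensus                                    # (26)
```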
Fig. 1. Illustration of the proposed CGMM fusion between two sensors. The GCs originally formed by sensor a are given in solid lines, while those formed by sensor b are given in dashed lines. Significant components of the GM are given in color while the insignificant ones are in black.

B. Conservative Fusion P.2.2: Pairwise GM Averaging

CGMM cannot guarantee that all received GCs will be merged into the local T-GM unless a sufficiently large merging threshold $\tau$ is used, which in turn causes greater merging errors. As a result, the local T-GM size will likely grow with the P2P communication iterations. As an alternative, we integrate the received T-GM into the local GM in such a way that each received T-GC is immediately fused to the nearest host GC if it is close enough, and otherwise abandoned. This will retain a promisingly constant local GM size during networking. To this end, we associate the received T-GMs from neighbors with the host T-GM based on Hungarian assignment (also called the Munkres algorithm [55]) and gating. Then, only the associated GCs will be fused, in the manner of "averaging". For clarity, denote the host sensor as $a$, one of its neighbors as $b \in \mathcal{N}_a$, and the numbers of original T-GCs as $n_a$ and $n_b$, respectively. To carry out the Hungarian assignment, an $n_a \times n_b$ cost matrix needs to be constructed as follows, assuming $n_a \leq n_b$ (otherwise transpose the matrix):

$$\begin{bmatrix} C_{1,1} & \cdots & C_{1,n_b} \\ \vdots & \ddots & \vdots \\ C_{n_a,1} & \cdots & C_{n_a,n_b} \end{bmatrix}, \quad (28)$$

where $C_{i,j}$ is the Mahalanobis-type distance as in (22) between GC $i$ from sensor $a$ and GC $j$ from sensor $b$. The optimal assignment is given by choosing one entry in each row of the cost matrix (28), all entries belonging to different columns, with a minimal sum. That is, the optimization cost function is given by

$$\operatorname*{argmin}_{\pi \in \Pi_{n_b}} \sum_{i=1}^{n_a} C_{i,\pi(i)}, \quad (29)$$

where $\Pi_{n_b}$ is the set of permutations of $n_b$ entries and $\pi(i)$ indicates the $i$th entry in the permutation $\pi$. The Hungarian algorithm has proven to be efficient in solving the above assignment problem in polynomial time [55]. As a result, all the GCs in the smaller GM set will be assigned to one and only one GC from the larger GM set, while the GCs from the latter will be assigned to one or no GC from the former. We call this a one to one-or-zero assignment, in which the unassigned components remain unfused.
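A sketch (ours) of the one to one-or-zero association: build the Mahalanobis cost matrix (28), solve (29) with a Hungarian solver, and keep only the assigned pairs that pass the gate, cf. (23); the double-checking step described in the next paragraph is folded in here. Using the host covariance for every row of (22), rather than that of the higher-weighted GC, is a simplifying assumption of this sketch.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate_tgcs(m_a, P_a, m_b, tau=5.0):
    """Pair each received GC (from b) with at most one host GC (at a)."""
    cost = np.empty((len(m_a), len(m_b)))
    for i, (mi, Pi) in enumerate(zip(m_a, P_a)):
        d = m_b - mi                                                 # differences
        cost[i] = np.einsum('nj,jk,nk->n', d, np.linalg.inv(Pi), d)  # eq. (22)
    rows, cols = linear_sum_assignment(cost)     # Hungarian solve of (29)
    # Gate double-check: discard assigned pairs beyond tau^2, cf. eq. (23).
    keep = cost[rows, cols] <= tau ** 2
    return list(zip(rows[keep], cols[keep]))     # associated (i, j) pairs
```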
SIMULATIONS In this section, the proposed CGMM and CGMA approaches for distributed GM-PHD fusion are evaluated for tracking either a single target or simultaneous multiple targets, with comparison to the benchmark GCI/KLA fusion [22] and the pure cardinality consensus based on either flooding (CCF) [56] or Metropolis weights-based averaging (CCA). These different distributed filters will be evaluated on the same ground truths, sensor data series and sensor network setting up. For MR in all filters: GCs with a weight lower than 10[−][4] will be truncated, any two GCs closer than Mahalanobisdistance τ = 5 will be merged, and the maximum number of GCs is 50 in the case for tracking a single target and 100 in the case of multiple targets. The proposed partial consensus is carried out based on the rank rule P.1.1 for selecting the TGCs. To save communication in GCI, we suggest a threshold wc = 0.005 such that only the GC with a weight larger than wc will be disseminated to neighbors and then be considered in the subsequent fusion. The optimal sub-pattern assignment (OSPA) metric [60] is used to evaluate the estimation accuracy of the filter, with cut-off parameter c = 1000 and order parameter p = 2; for the meaning of these two parameters, please refer to [60]. We refer to the average of OSPAs obtained by all sensors in the network at each sampling step as Network OSPA. The average of the Network OSPAs over all filtering steps is called Time-average Network OSPA. To evaluate the communication cost, we record a GC that consists of a weight parameter (1 tuple), a 4-dimensional vector mean (4 tuples), and a 4×4 dimension matrix covariance (16 tuples) as data size 21 tuples and the scale-valued cardinality parameter as 1 tuple. Given that the covariance matrix is symmetric, only 10 tuples are needed here and then a GC only needs 15 tuples for data storing. Furthermore, to measure the efficiency of different consensus protocols, we define a consensus efficiency (CE) measure regarding the average OSPA reduction gained by sharing each tuple of network data as follows: OSPA reduction due to communication CE ≜ (33) Network communication cost (no. tuples) [.] The simulations are set up in a scenario over the planar region [ 1000 1000]m × [ 1000 1000]m which is monitored wa[[][ℓ][]][(][t][) =] �l∈Sa[[][ℓ][]] [ω][l][→][a][w]l[[][ℓ][]][(][t][ −] [1)] , (30) �l∈Sa[[][ℓ][]] [ω][l][→][a] l∈Sa[[][ℓ][]] [ω][l][→][a][w]l[[][ℓ][]][(][t][ −] [1)][m]l[[][ℓ][]][(][t][ −] [1)] , (31) �l∈Sa[[][ℓ][]] [ω][l][→][a][w]l[[][ℓ][]][(][t][ −] [1)] m[[]a[ℓ][]][(][t][) =] � P[[]a[ℓ][]][(][t][) =][ P][OMR] [,] (32) where POMR is given as in (15) by substituting I = Sa[[][ℓ][]] [and] P˜ i = P[[]i[ℓ][]][(][t][ −] [1) +] �m[[]i[ℓ][]][(][t][ −] [1)][ −] [m]a[[][ℓ][]][(][t][)]��m[[]i[ℓ][]][(][t][ −] [1)][ −] m[[]a[ℓ][]][(][t][)]�T for all i ∈ Sa[ℓ][.] As shown, the calculation of the fused state and covariance is akin to that of CGMM but they are different at the fused weight, which is an average in (30) rather than a sum in (11) in the CGMM. Therefore, (24) does not hold here but instead roughly W[˜] a(t) ≈ Wa(t − 1) and βa ≈ 1 instead of (27). Comparably speaking, the FA-GCs will not be so significantly weakened in CGMA as in CGMM. However, the cardinality average consensus scheme as given in Proposition 2 can still be applied at each communication iteration to re-weight all GCs including the averaged one given in (30) and the insignificant GC that is not involved in fusion. 
Overall, we refer to this consensus protocol as conservative GM averaging (CGMA). C. Potential Extensions The proposed union-type, conservative fusion and partialconsensus-based distributed GM-PHD fusion can be extended in terms of both the communication protocol and the local filter. In the former, other consensus protocols other than averaging schemes (e.g., diffusion [8], [9], flooding [56]) can be applied, while in the latter, multi-Bernoulli filters [57]–[59] and even particle filter-based RFS filters can be employed based on novel mixture reduction or particleresampling schemes ----- by a randomly generated sensor network (with total 12 sensors and diameter 6) as shown in Fig.2. We assume two different ground truths for the target trajectories, to be presented in the following two subsections respectively. To capture the average performance, we perform each simulation 100 MC runs with independently generated observation series for each run. Different numbers t of P2P communication iterations from 0 (without applying any information disseminating) to 12 (twice the network diameter) are applied to all consensus schemes. To set up the local filter, the ground truth is simulated as follows: The target birth process follows a Poisson RFS with intensity function γk(x) = [�]i[4]=1 [λ][i][N] [(][.][;][ m][i][,][ Q][r][)][, where the] Poisson rate parameters λ1 = λ2 = λ3 = λ4, the Gaussian parameters m1 = [0, 0, 0, 0][T], m2 = [−500, 0, −500, 0][T], m3 = [0, 0, 500, 0][T], m4 = [500, 0, −500, 0][T], Qr = diag([400, 100, 400, 100][T]), and diag(a) represents a diagonal matrix with diagonal a. In addition, the target intensity function spawn from target u is given as bk(x|u) = 0.05N (.; u, Qb), where Qb = diag([100, 400, 100, 400][T]). Each target has a time-constant survival probability pS(xk) = 0.99 and the survival target follows a nearly constant velocity motion as given  1 ∆ 0 0   ∆[2]/2 0  0 1 0 0 ∆ 0 xk = xk−1 + uk, 0 0 1 ∆ 0 ∆[2]/2     0 0 0 1 0 ∆ (34) where xk = [px,k, ˙px,k, py,k, ˙py,k][T] with the position [px,k, py,k][T] and the velocity [ ˙px,k, ˙py,k][T], the sampling interval ∆= 1s, and the process noise uk ∼N (02, 25I2). Without loss of generality, we employ a hybrid sensor network that consists of both linear and nonlinear observation sensors which run linear GM-PHD filter and unscented transform based nonlinear GM-PHD filter [21], respectively. The sensors are ordered from 1 to 12, where the sensors no.1-6 generate linear observation (which are referred to as linear sensors, marked by square in Fig.2) while the rest (no. 7-12) generate nonlinear observation (referred to as nonlinear sensors, marked by circles in Fig.2). The linear sensors have the same timeconstant target detect probability pD(xk) = 0.95 and the linear position observation model given as follows Clutter is uniformly distributed over each sensor’s FOV with an average rate of r points per scan. For the nonlinear sensors, we set r = 5 in both scenarios indicating a clutter intensity κk = 5/3000/2π while for th linear sensors we set r = 5 for the first single target scenario and r = 10 for the second multiple target scenario, indicating clutter intensities κk = 5/2000[2] and κk = 10/2000[2], respectively. A. Single Target Scenario First, we limit the maximal number of targets that simultaneously exist in the scenario to one for generating the ground truth as that new target which can only appear after the existing target disappears. Also, there is no target spawning. 
A. Single Target Scenario

First, we limit the maximal number of targets that simultaneously exist in the scenario to one for generating the ground truth, such that a new target can only appear after the existing target disappears. Also, there is no target spawning. That is to say, the tracking at any time actually involves at most one target, which is favorable for CI/GCI. The network and the ground truth of the target trajectories are given in Fig. 2. When a total of $t = 6$ P2P communication iterations is applied, the Network OSPA, the online estimated number of targets, and the computing time of the different consensus protocols for each filtering step are given in Fig. 3. For different numbers of communication iterations, the time-averaged Network OSPA, the time-averaged network communication cost, and the CE of the different consensus protocols are given in Fig. 4. We have the following key findings:

1) All consensus schemes converge with an increasing number of P2P communication iterations; meanwhile, the more iterations, the higher the communication and computing cost and the lower the OSPA. In particular, when $t = 1$ (each sensor only shares information with its immediate neighbors), GCI yields the best performance, providing the lowest Network OSPA and time-averaged OSPA overall. When $t \geq 2$, CGMM yields the lowest OSPA overall, which is even better than the GCI.

2) When $t = 6$, the computing time required by GCI is the largest, much higher than that of the others, while CGMA and CGMM come second and third, respectively.

3) On communication cost, CGMM costs slightly more than CGMA, both smaller than GCI, especially when few P2P communication iterations ($t < 6$) are applied.

4) Cardinality consensus improves the cardinality estimation in all consensus schemes. However, the benefit of pure cardinality consensus is limited, whether CCF by flooding or CCA by averaging; it converges to a level that is significantly inferior to that of the others, including GCI, CGMM and CGMA. However, this is achieved at the price of significantly less computation and communication.

5) The CE decreases with increasing $t$ in all consensus schemes. Overall, CCA yields the highest CE and CCF comes second; comparably, CCF converges faster at the expense of more communication cost than CCA. When $t \geq 6$, CCF achieves complete consensus/convergence [56] and so no longer reduces the OSPA, leading to a zero CE. In regard to CE, CGMA slightly outperforms GCI and CGMM, while the latter two perform similarly.

Fig. 3. Network OSPA, online estimated number of targets and computing time of different consensus protocols for each filtering step when six iterations of P2P communication are applied. (Plots omitted; the three panels compare CCF, CCA, CGMM, CGMA and GCI.)

Fig. 4. Time-averaged Network OSPA, network communication cost and CE against the number of P2P communication iterations, from 0 to 12. (Plots omitted.)
6) The local GM size favorably remains constant during network communication at each filtering step using CCF, CCA and CGMA, but varies (mainly increasing with $t$) under CGMM and GCI. This is one advantage that CGMA has over CGMM and GCI.

To summarize the results in the single-target case, CGMM is a fair alternative to GCI in favor of smaller OSPA and less fusion computation and communication, while CGMA is a better choice than GCI in favor of less computation and communication and higher CE. More discussion will be given in Section V.C.

B. Multiple Target Scenario

In this case, we extend the maximal number of targets that simultaneously exist in the scenario to three for generating a new ground truth. The trajectories of four targets in total are given in Fig. 5, with the starting and ending times of each trajectory noted.

Fig. 5. Trajectories of simultaneously appearing multiple targets: Trajectory 1: k ∈ [1, 60]; Trajectory 2: k ∈ [7, 60]; Trajectory 3: k ∈ [8, 44]; Trajectory 4: k ∈ [58, 60]. (Plot omitted; it also marks the linear and nonlinear sensors.)

To show the simulation results, the contents of Figs. 6 and 7 correspond to those of Figs. 3 and 4, respectively. While some of them give similar indications, e.g., the relative communication and computation cost of the different protocols, the key new findings are summarized as follows:

1) On filtering accuracy, CGMM achieves the minimum Network OSPA, significantly outperforming the others, and CGMA comes second. In particular, when $t = 1$, CGMM, which consumes the same communication as CGMA and less than GCI, yields the largest OSPA reduction, even more significant than what the others achieve by performing multiple iterations of communication. This simply indicates that T-GM information sharing only in the immediate neighborhood, for partial consensus, outperforms sophisticated averaging fusion like GCI based on multiple iterations for complete consensus. We refer to this as "many could be better than all".
But, their achievements in reducing OSPA is to a large degree more significant than that of GCI and others. To summarize this multiple-target case, both CGMA and CGMM (in particular) afford better alternative to GCI in favor of smaller OSPA, less fusion computation and even less communication for the same OSPA reduction gain C. Further Discussion Experimental findings reported in the literature are notable. The performance of GCI is greatest for few sensors and distant targets [23], [25] or only a single target [12], [22], [57]. Closely-distributed targets in dense clutter environment have not been particularly considered except few works such as [46], [58], which just showed that GCI made worse result when targets are close. For example, the cardinality estimation is worse at around time k = 800s when more iterations of GCI fusion are applied, as shown in Fig.5-7 of [46]. Delay has also been observed in estimating the number of targets when new targets appear in the scenario in [49]. More specifically, the simulation given in Section V.A of [58] has explicitly demonstrated that GCI will degrade the local PHD filter in the case of close targets whose distance is under a specific threshold and/or in the case of low SNR. Deficiency of GCI for handling misdetection has been particularly noticed in [50], [51]. Relatively, the findings given in [61] suggested that the arithmetic average method is most robust to incorrect information than the geometric average. It has also been demonstrated that the CI provides estimation error covariance that is not honest but pessimistic for track fusion with feedback, inferior to the minimum variance rule [14]. In summary of our findings and those given in the literature, the problems that a distributed multi-target filter may potentially suffer from due to GCI include: ----- 1) Weakness to deal with closely distributed targets and/or low SNR background; 2) Prone to mis-detection or local sensor failure; 3) Delay in detecting new appearing targets; 4) High communication and computation cost for complete consensus. It seems still unclear how to fix these problems on the basis of GCI, even some of the causes have been noted, nor was that our intension in this paper. We leave here direct simulation demonstration about the failures of GCI in complicated multitarget scenarios (e.g., new targets appear frequently, targets move closely or there is a high rate of mis-detection or clutter) in which our proposed approaches demonstrate more significant advantage. In particular, the straightforward arithmetic average based CGMM that can be easily implemented on different filter beds yields significant accuracy benefit with only one or two iterations of P2P diffusion. The merit of the presented partial consensus and conservative arithmetic average fusion is not only on reliable and significant consensus benefit, but also on inexpensive communication and computation for complying with the need of real time filtering. It is very crucial to note that, a key challenge in many large-scale WSN scenarios comes exactly from limitations imposed on the communication bandwidth/power allowance and the sensor computing capability because the nodes are low powered wireless devices. VI. 
VI. CONCLUSION

For distributed GM-PHD fusion, this paper has proposed a notion of “partial consensus”, which abandons the ultimate goal that the estimate of each sensor converge to the estimate conditioned on all the information over the entire network; instead, neighboring sensors share only highly weighted GCs with each other and, in the end, the network achieves partial consensus. In addition to saving communication and computation, the local SNR at each sensor can be increased because of partial consensus, reducing the possibility of generating false alarms and facilitating more accurate estimation. To further reduce the communication cost, the disseminated significant GCs can be either pairwise averaged or locally merged in a fully distributed and conservative manner. In parallel, arithmetic average consensus is sought on the GM weight sum at each communication iteration.

Simulations based on both a single-target scenario and a multiple-target scenario have been provided to demonstrate the effectiveness and reliability of our approach in comparison with GCI, which is the state-of-the-art approach for distributed RFS filter fusion. Although GCI works well in the single-target scenario in the presence of low misdetection and clutter rates, it exhibits severe problems in complicated multi-target scenarios, such as delay in detecting newly appearing targets and an inability to handle closely distributed targets, intensive clutter, and misdetection, in addition to its high communication and computation cost. For multi-target density fusion in the presence of significant clutter and misdetection, our final remarks are:

- Many could be better than all: the concept of “partial consensus” is important, as it can not only save communication and computation but also benefit the accuracy more than complete consensus.
- Union outperforms intersection: union-format arithmetic average fusion, as the original average consensus is, is computationally easier and provably more reliable than intersection-format geometric average fusion, while the former is also more conservative in general.

REFERENCES

[1] J. Liu, M. Chu, and J. E. Reich, “Multitarget tracking in distributed sensor networks,” IEEE Signal Processing Magazine, vol. 24, no. 3, pp. 36–46, May 2007.
[2] D. Akselrod, A. Sinha, and T. Kirubarajan, “Information flow control for collaborative distributed data fusion and multisensor multitarget tracking,” IEEE Transactions on Systems, Man, and Cybernetics, Part C (Applications and Reviews), vol. 42, no. 4, pp. 501–517, July 2012.
[3] A. Mohammadi and A. Asif, “Distributed consensus + innovation particle filtering for bearing/range tracking with communication constraints,” IEEE Transactions on Signal Processing, vol. 63, no. 3, pp. 620–635, Feb 2015.
[4] L. Xiao and S. Boyd, “Fast linear iterations for distributed averaging,” Systems & Control Letters, vol. 53, no. 1, pp. 65–78, 2004.
[5] R. Olfati-Saber, J. A. Fax, and R. M. Murray, “Consensus and cooperation in networked multi-agent systems,” Proceedings of the IEEE, vol. 95, no. 1, pp. 215–233, Jan 2007.
[6] T. Heskes, “Selecting weighting factors in logarithmic opinion pools,” in Advances in Neural Information Processing Systems. The MIT Press, 1998, pp. 266–272.
[7] R. Olfati-Saber, E. Franco, E. Frazzoli, and J. S. Shamma, Belief Consensus and Distributed Hypothesis Testing in Sensor Networks. Berlin, Heidelberg: Springer Berlin Heidelberg, 2006, pp. 169–182.
[8] A. H. Sayed, “Diffusion adaptation over networks,” in Academic Press Library in Signal Processing, vol. 3. Academic Press, 2014, pp. 323–453.
[9] K. Dedecius and P. M. Djuric, “Sequential estimation and diffusion of information over networks: A Bayesian approach with exponential family of distributions,” IEEE Transactions on Signal Processing, vol. 65, no. 7, pp. 1795–1809, April 2017.
[10] O. Hlinka, O. Sluciak, F. Hlawatsch, P. M. Djuric, and M. Rupp, “Likelihood consensus and its application to distributed particle filtering,” IEEE Transactions on Signal Processing, vol. 60, no. 8, pp. 4334–4349, Aug 2012.
[11] O. Hlinka, F. Hlawatsch, and P. M. Djuric, “Distributed particle filtering in agent networks: A survey, classification, and comparison,” IEEE Signal Processing Magazine, vol. 30, no. 1, pp. 61–81, Jan 2013.
[12] G. Battistelli, L. Chisci, and C. Fantacci, “Parallel consensus on likelihoods and priors for networked nonlinear filtering,” IEEE Signal Processing Letters, vol. 21, no. 7, pp. 787–791, July 2014.
[13] R. Mahler, “Approximate multisensor CPHD and PHD filters,” in 2010 13th International Conference on Information Fusion, July 2010, pp. 1–8.
[14] S. Mori, K. C. Chang, and C. Y. Chong, “Comparison of track fusion rules and track association metrics,” in 15th International Conference on Information Fusion, July 2012, pp. 1996–2003.
[15] B. Noack, M. Reinhardt, and U. D. Hanebeck, “On nonlinear track-to-track fusion with Gaussian mixtures,” in 17th International Conference on Information Fusion (FUSION), July 2014, pp. 1–8.
[16] J. P. Beaudeau, M. F. Bugallo, and P. M. Djuric, “RSSI-based multitarget tracking by cooperative agents using fusion of cross-target information,” IEEE Transactions on Signal Processing, vol. 63, no. 19, pp. 5033–5044, Oct 2015.
[17] H. Zhu, M. Wang, K. V. Yuen, and H. Leung, “Track-to-track association by coherent point drift,” IEEE Signal Processing Letters, vol. 24, no. 5, pp. 643–647, May 2017.
[18] F. Meyer, O. Hlinka, and F. Hlawatsch, “Sigma point belief propagation,” IEEE Signal Processing Letters, vol. 21, no. 2, pp. 145–149, Feb 2014.
[19] F. Meyer, P. Braca, P. Willett, and F. Hlawatsch, “A scalable algorithm for tracking an unknown number of targets using multiple sensors,” IEEE Transactions on Signal Processing, vol. 65, no. 13, pp. 3478–3493, July 2017.
[20] R. P. S. Mahler, “Multitarget Bayes filtering via first-order multitarget moments,” IEEE Transactions on Aerospace and Electronic Systems, vol. 39, no. 4, pp. 1152–1178, Oct 2003.
[21] B. N. Vo and W. K. Ma, “The Gaussian mixture probability hypothesis density filter,” IEEE Transactions on Signal Processing, vol. 54, no. 11, pp. 4091–4104, Nov 2006.
[22] G. Battistelli, L. Chisci, C. Fantacci, A. Farina, and A. Graziano, “Consensus CPHD filter for distributed multitarget tracking,” IEEE Journal of Selected Topics in Signal Processing, vol. 7, no. 3, pp. 508–520, June 2013.
[23] G. Battistelli and L. Chisci, “Kullback-Leibler average, consensus on probability densities, and distributed state estimation with guaranteed stability,” Automatica, vol. 50, no. 3, pp. 707–718, 2014.
[24] M. Uney, D. E. Clark, and S. J. Julier, “Information measures in distributed multitarget tracking,” in 14th International Conference on Information Fusion, July 2011, pp. 1–8.
[25] ——, “Distributed fusion of PHD filters via exponential mixture densities,” IEEE Journal of Selected Topics in Signal Processing, vol. 7, no. 3, pp. 521–531, June 2013.
[26] T. Bailey, S. Julier, and G. Agamennoni, “On conservative fusion of information with unknown non-Gaussian dependence,” in 15th International Conference on Information Fusion, July 2012, pp. 1876–1883.
[27] R. P. S. Mahler, “Optimal/robust distributed data fusion: a unified approach,” in Proc. SPIE, vol. 4052, 2000, pp. 128–138.
[28] D. Clark, S. Julier, R. Mahler, and B. Ristic, “Robust multi-object sensor fusion with unknown correlations,” in Sensor Signal Processing for Defence (SSPD 2010), Sept 2010, pp. 1–5.
[29] R. P. S. Mahler, “Toward a theoretical foundation for distributed fusion,” in Distributed Data Fusion for Network-Centric Operations, D. Hall, C.-Y. Chong, J. Llinas, and M. Liggins II, Eds. CRC Press, 2012, pp. 199–224.
[30] J. K. Uhlmann, “Dynamic map building and localization: new theoretical foundations,” Ph.D. dissertation, University of Oxford, 1995.
[31] S. J. Julier and J. K. Uhlmann, “A non-divergent estimation algorithm in the presence of unknown correlations,” in Proceedings of the 1997 American Control Conference, Jun 1997, pp. 2369–2373.
[32] J. K. Uhlmann, “Covariance consistency methods for fault-tolerant distributed data fusion,” Information Fusion, vol. 4, no. 3, pp. 201–215, 2003.
[33] S. J. Julier and J. K. Uhlmann, “Fusion of time delayed measurements with uncertain time delays,” in Proceedings of the 2005 American Control Conference, June 2005, pp. 4028–4033.
[34] O. Bochardt, R. Calhoun, J. K. Uhlmann, and S. J. Julier, “Generalized information representation and compression using covariance union,” in 9th International Conference on Information Fusion, July 2006, pp. 1–7.
[35] S. Reece and S. Roberts, “Generalised covariance union: A unified approach to hypothesis merging in tracking,” IEEE Transactions on Aerospace and Electronic Systems, vol. 46, no. 1, pp. 207–221, Jan 2010.
[36] T. Li, J. Corchado, and S. Sun, “On generalized covariance intersection for distributed PHD filtering and a simple but better alternative,” in Proc. 20th Int. Conf. Inf. Fusion, July 2017, pp. 808–815.
[37] D. J. Salmond, “Mixture reduction algorithms for target tracking in clutter,” pp. 434–445, 1990.
[38] K. C. Chang, R. K. Saha, and Y. Bar-Shalom, “On optimal track-to-track fusion,” IEEE Transactions on Aerospace and Electronic Systems, vol. 33, no. 4, pp. 1271–1276, Oct 1997.
[39] C. Chong, S. Mori, and K. Chang, “Graphical models for nonlinear distributed estimation,” in 2004 7th International Conference on Information Fusion, June 2004, pp. 1–8.
[40] Y. Gao, X. R. Li, and E. Song, “Robust linear estimation fusion with allowable unknown cross-covariance,” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 46, no. 9, pp. 1314–1325, Sept 2016.
[41] M. A. Bakr and S. Lee, “Distributed multisensor data fusion under unknown correlation and data inconsistency,” Sensors, vol. 17, no. 11, 2017.
[42] J. Ajgl and M. Šimandl, “Conservativeness of estimates given by probability density functions: Formulation and aspects,” Information Fusion, vol. 20, pp. 117–128, 2014.
[43] L. Chen, P. O. Arambel, and R. K. Mehra, “Estimation under unknown correlation: covariance intersection revisited,” IEEE Transactions on Automatic Control, vol. 47, no. 11, pp. 1879–1882, Nov 2002.
[44] W. Niehsen, “Information fusion based on fast covariance intersection filtering,” in Proceedings of the 5th International Conference on Information Fusion, July 2002, pp. 901–904.
[45] S. J. Julier, “An empirical study into the use of Chernoff information for robust, distributed fusion of Gaussian mixture models,” in 9th International Conference on Information Fusion, July 2006, pp. 1–8.
[46] G. Battistelli, L. Chisci, C. Fantacci, A. Farina, and R. P. S. Mahler, “Distributed fusion of multitarget densities and consensus PHD/CPHD filters,” in Proc. SPIE, vol. 9474, 2015, pp. 94740E-1–94740E-15.
[47] N. T. nee Mariam, “Conservative non-Gaussian data fusion for decentralized networks,” Master’s thesis, The University of Sydney, Sydney, Australia, Aug 2007.
[48] J. Li and A. Nehorai, “Distributed particle filtering via optimal fusion of Gaussian mixtures,” IEEE Transactions on Signal and Information Processing over Networks, vol. PP, no. 99, pp. 1–1, 2017.
[49] M. Gunay, U. Orguner, and M. Demirekler, “Chernoff fusion of Gaussian mixtures based on sigma-point approximation,” IEEE Transactions on Aerospace and Electronic Systems, vol. 52, no. 6, pp. 2732–2746, December 2016.
[50] J. Y. Yu, M. Coates, and M. Rabbat, “Distributed multi-sensor CPHD filter using pairwise gossiping,” in 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), March 2016, pp. 3176–3180.
[51] W. Yi, M. Jiang, S. Li, and B. Wang, “Distributed sensor fusion for RFS density with consideration of limited sensing ability,” in 20th International Conference on Information Fusion, July 2017, pp. 1–6.
[52] R. L. Streit, “Multisensor multitarget intensity filter,” in Proc. 11th Int. Conf. Inf. Fusion, June 2008, pp. 1–8.
[53] L. Xiao, S. Boyd, and S. Lall, “A scheme for robust distributed sensor fusion based on average consensus,” in IPSN 2005. Fourth International Symposium on Information Processing in Sensor Networks, April 2005, pp. 63–70.
[54] J. C. Ye, Y. Bresler, and P. Moulin, “Asymptotic global confidence regions in parametric shape estimation problems,” IEEE Transactions on Information Theory, vol. 46, no. 5, pp. 1881–1895, Aug 2000.
[55] J. Munkres, “Algorithms for the assignment and transportation problems,” Journal of the Society for Industrial and Applied Mathematics, vol. 5, no. 1, pp. 32–38, 1957.
[56] T. Li, J. Corchado, and J. Prieto, “Convergence of distributed flooding and its application for distributed Bayesian filtering,” IEEE Transactions on Signal and Information Processing over Networks, vol. 3, no. 3, pp. 580–591, 2017.
[57] M. B. Guldogan, “Consensus Bernoulli filter for distributed detection and tracking using multi-static doppler shifts,” IEEE Signal Processing Letters, vol. 21, no. 6, pp. 672–676, June 2014.
[58] B. Wang, W. Yi, R. Hoseinnezhad, S. Li, L. Kong, and X. Yang, “Distributed fusion with multi-Bernoulli filter based on generalized covariance intersection,” IEEE Transactions on Signal Processing, vol. 65, no. 1, pp. 242–255, Jan 2017.
[59] S. Li, W. Yi, R. Hoseinnezhad, G. Battistelli, B. Wang, and L. Kong, “Robust distributed fusion with labeled random finite sets,” arXiv:1710.00501 [cs.SY], 2017.
[60] D. Schuhmacher, B. T. Vo, and B. N. Vo, “A consistent metric for performance evaluation of multi-object filters,” IEEE Transactions on Signal Processing, vol. 56, no. 8, pp. 3447–3457, Aug 2008.
[61] I. Hwang, K. Roy, H. Balakrishnan, and C. Tomlin, “A distributed multiple-target identity management algorithm in sensor networks,” in 43rd IEEE Conference on Decision and Control (CDC), vol. 1, Dec 2004, pp. 728–734.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1711.10783, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/1711.10783" }
2,017
[ "JournalArticle" ]
true
2017-11-29T00:00:00
[ { "paperId": "7a6833dd4b1067702c8f4752cf4f37bb18e7b451", "title": "Local-Diffusion-Based Distributed SMC-PHD Filtering Using Sensors With Limited Sensing Range" }, { "paperId": "1bea18320ccbf40b2d7a0564a33b0cad87550702", "title": "Cardinality-Consensus-Based PHD Filtering for Distributed Multitarget Tracking" }, { "paperId": "dcaa9d4f58433aa0a80dda39591bfe8ebe043ef8", "title": "Information Fusion Using Particles Intersection" }, { "paperId": "b253c11cc052fbdd4ff61525eb2e16c15ca9e914", "title": "Fusion of Finite-Set Distributions: Pointwise Consistency and Global Cardinality" }, { "paperId": "ceea73e1afdfca12cc3f84ad3a8cfce4c14c98b1", "title": "Toward a Theoretical Foundation for Distributed Fusion" }, { "paperId": "59af635aa9a5c72854d87f55461312e5c62022e6", "title": "Distributed SMC-PHD Fusion for Partial, Arithmetic Average Consensus" }, { "paperId": "4f30c3a5cfa758a6b08e8f23a35f23298049324d", "title": "Distributed Multisensor Data Fusion under Unknown Correlation and Data Inconsistency" }, { "paperId": "a498ea9a3cf8710fe8bf266312f4848efcd049ac", "title": "Robust Distributed Fusion With Labeled Random Finite Sets" }, { "paperId": "a49cae85b73e1b67c67721a54361fb1cf3040e3b", "title": "Convergence of Distributed Flooding and Its Application for Distributed Bayesian Filtering" }, { "paperId": "61c67194122bdd925c8b314c6fe7fe2b3066a2db", "title": "On generalized covariance intersection for distributed PHD filtering and a simple but better alternative" }, { "paperId": "c8cd970594f0215d41de9f7ed5c905f871f53d46", "title": "Distributed sensor fusion for RFS density with consideration of limited sensing ability" }, { "paperId": "a61d975067aac38eebd9eeaf2868fbd5d8549993", "title": "Sequential Estimation and Diffusion of Information Over Networks: A Bayesian Approach With Exponential Family of Distributions" }, { "paperId": "6005f7878e7711bfa5bb22f0c1e45039792f19e8", "title": "Track-to-Track Association by Coherent Point Drift" }, { "paperId": "85826f40bf57ce01d5801c4912062ca4de56d8ed", "title": "Chernoff fusion of Gaussian mixtures based on sigma-point approximation" }, { "paperId": "293e3c6e73cc3a316d0ea83c27252bb7183aaf1a", "title": "A Scalable Algorithm for Tracking an Unknown Number of Targets Using Multiple Sensors" }, { "paperId": "02ba8212ecdb4b362dbb95c4207ff202e0fb3c62", "title": "Distributed Fusion With Multi-Bernoulli Filter Based on Generalized Covariance Intersection" }, { "paperId": "33be742a44e7e9347c12d1aae226113cc8e22849", "title": "Distributed multi-sensor CPHD filter using pairwise gossiping" }, { "paperId": "9565e57585ee119340d936c55e45fc2e9e843833", "title": "Distributed particle filtering via optimal fusion of Gaussian mixtures" }, { "paperId": "7aec3a41e1ee5ddb063f5e770538cb5d28a7e11c", "title": "RSSI-Based Multi-Target Tracking by Cooperative Agents Using Fusion of Cross-Target Information" }, { "paperId": "b1243728aa7a2cb89912a09f7e42ce401847e468", "title": "Distributed fusion of multitarget densities and consensus PHD/CPHD filters" }, { "paperId": "1bb7a62a8ed11a81ae34fbb0731a5d3a25a17ea1", "title": "Distributed Consensus $+$ Innovation Particle Filtering for Bearing/Range Tracking With Communication Constraints" }, { "paperId": "5fc49a8539b8cde809b3e9cb4239e47e71af1948", "title": "Conservativeness of estimates given by probability density functions: Formulation and aspects" }, { "paperId": "5e68641b6edeb5bafc6b402666316fcec5b67f02", "title": "Robust linear estimation fusion with allowable unknown cross-covariance" }, { "paperId": 
"d9a0ddedb8ac185181331c16ae80822c86182cee", "title": "On nonlinear track-to-track fusion with Gaussian mixtures" }, { "paperId": "bf9a532a1d28a80469addad9fda973ecbb5019b2", "title": "Parallel Consensus on Likelihoods and Priors for Networked Nonlinear Filtering" }, { "paperId": "9e7b94b6513ca0719c0405dafc267245663b1574", "title": "Consensus Bernoulli Filter for Distributed Detection and Tracking using Multi-Static Doppler Shifts" }, { "paperId": "7f07bd0f2af6882bdb24c763e9fc0e9d762c4f55", "title": "Kullback-Leibler average, consensus on probability densities, and distributed state estimation with guaranteed stability" }, { "paperId": "2d59ab7401cd8f4089b3aa71c7c1e41df333647b", "title": "Sigma Point Belief Propagation" }, { "paperId": "9c957b29c1709773cf7e0e8526e94f03faade376", "title": "Bayesian multi-hypothesis scan matching" }, { "paperId": "e72ff007e4b7a5b6d1ddaa3b80b078edecfd0c66", "title": "Distributed Fusion of PHD Filters Via Exponential Mixture Densities" }, { "paperId": "e965bb26c29c0ed9f8dd2c5d5d45102b55e22803", "title": "Consensus CPHD Filter for Distributed Multitarget Tracking" }, { "paperId": "8a160ae323e03d1d9c9b5e7c1173b5a60864d534", "title": "Distributed Data Fusion for Network-Centric Operations" }, { "paperId": "eba8d20b903e8fcf5f3bdb48e3dab2db8216997f", "title": "On conservative fusion of information with unknown non-Gaussian dependence" }, { "paperId": "1f2284507fc1b931befc8c1c3e4b7ec0f688ad3d", "title": "Comparison of track fusion rules and track association metrics" }, { "paperId": "db87347d3a83d07b6c3cfce4604d0e9ffe731cf8", "title": "Information Flow Control for Collaborative Distributed Data Fusion and Multisensor Multitarget Tracking" }, { "paperId": "513af7b62378f20a61c14a733b37acc562a78561", "title": "Diffusion Adaptation over Networks" }, { "paperId": "21f3a2faa021b57ebeea4e2693b65dcc6b0a32ec", "title": "Likelihood Consensus and Its Application to Distributed Particle Filtering" }, { "paperId": "b0555f4e66410c75c5b563418364d1899c660ceb", "title": "Information measures in distributed multitarget tracking" }, { "paperId": "5c887108cf19629b26c762e9f878fc7a055b6c53", "title": "Approximate multisensor CPHD and PHD filters" }, { "paperId": "7616360019fe4f39a773ad4b164308f9278c917f", "title": "Generalised Covariance Union: A Unified Approach to Hypothesis Merging in Tracking" }, { "paperId": "233bb8183e3163363f2cc12ebda9d20e73d4e860", "title": "Estimating and exploiting the degree of independent information in distributed data fusion" }, { "paperId": "f3cd5e46580ec7eca97bd7814afe30c14ef6142d", "title": "The multisensor PHD filter: II. 
Erroneous solution via Poisson magic" }, { "paperId": "1f9ef15abb45482f399635393bc000117de2cd3c", "title": "Multisensor multitarget intensity filter" }, { "paperId": "22b26297e0cc5df3efdba54a45714e4e27b59e17", "title": "A Consistent Metric for Performance Evaluation of Multi-Object Filters" }, { "paperId": "7330b7a9ba71b4645abf64794d825e02420c3620", "title": "Multitarget Tracking in Distributed Sensor Networks" }, { "paperId": "aa6be519b394b44ab24c6ad964f8a2c6a9b23571", "title": "Consensus and Cooperation in Networked Multi-Agent Systems" }, { "paperId": "b6427febbcc396a2a88ecccda59a23a6aece7149", "title": "Statistical Multisource-Multitarget Information Fusion" }, { "paperId": "ba6b51be3d9eb6b3e09a160e63f9eadd703a6fd2", "title": "The Gaussian Mixture Probability Hypothesis Density Filter" }, { "paperId": "d04d150e9198e7bc09a09dd8826b43de832df033", "title": "An Empirical Study into the Use of Chernoff Information for Robust, Distributed Fusion of Gaussian Mixture Models" }, { "paperId": "50e76ff4747c4010a3143ef33df28f41dfa37c4f", "title": "Generalized Information Representation and Compression Using Covariance Union" }, { "paperId": "2fe363a54bd69f789e0f9cfef72efcb26acd966f", "title": "Mobile positioning using wireless networks: possibilities and fundamental limitations based on available wireless network measurements" }, { "paperId": "3b154f6b0ab98f8e2c2956ede29b1d499e42bbd5", "title": "Fusion of time delayed measurements with uncertain time delays" }, { "paperId": "59697e0aea25057adf743265888b3a4f5a607f82", "title": "A scheme for robust distributed sensor fusion based on average consensus" }, { "paperId": "48372b9fdbe64ec8d619babaf7f7ee734b00127c", "title": "Fast linear iterations for distributed averaging" }, { "paperId": "5efa73676159b7d32de7cc8d1cd94bf8aec8345c", "title": "Multitarget Bayes filtering via first-order multitarget moments" }, { "paperId": "24a262c2c888e030d5fd925bae60f70d2afffccf", "title": "Covariance consistency methods for fault-tolerant distributed data fusion" }, { "paperId": "e7533b973c457b96e81801eb94f9b001bd5d308c", "title": "Sensor networks: evolution, opportunities, and challenges" }, { "paperId": "628bf7b0af173de590e435b13f950d28f3ea0fc0", "title": "Estimation under unknown correlation: covariance intersection revisited" }, { "paperId": "422b80195d24c410e9f516026d7cf6df75781a7d", "title": "Information fusion based on fast covariance intersection filtering" }, { "paperId": "a1fa3bec5d131f693bf2e09a8f2276f184d3f9d6", "title": "Optimal/robust distributed data fusion: a unified approach" }, { "paperId": "54ce33d238538530de057b489f4c19555f528159", "title": "Asymptotic global confidence regions in parametric shape estimation problems" }, { "paperId": "d919e2bf6e87607ea7de4cde1da6c77f0ea46fa3", "title": "Selecting Weighting Factors in Logarithmic Opinion Pools" }, { "paperId": "fee61f2d8aaf5120e565bd84c725ccffdab6d2a5", "title": "On optimal track-to-track fusion" }, { "paperId": "0ff82aaa06e578968e35da6700d24096ec49953b", "title": "A non-divergent estimation algorithm in the presence of unknown correlations" }, { "paperId": "3e1651164c5a47890771d9f4193155d1960e53a4", "title": "Censoring sensors: a low-communication-rate scheme for distributed detection" }, { "paperId": "1c0f03b080708e07f043032d64e0da9fed732ba8", "title": "Mixture reduction algorithms for target tracking in clutter" }, { "paperId": "f2963642d0e3d9d9f5b015b89f7da084a9e492a0", "title": "The Effect of the Common Process Noise on the Two-Sensor Fused-Track Covariance" }, { "paperId": 
"b8c94c13c08d9ca76d8f51838ebc298b29c4a9ec", "title": "Dynamic Map Building and Localization : New Theoretical Foundations" }, { "paperId": "8872801e78048fbeb0ef9a784d1d3fa358b22f30", "title": "On Linear Estimation Fusion under Unknown Correlations of Estimator Errors" }, { "paperId": "d1533a732f1db1dd2a2ce4601ac369d30272efb3", "title": "Distributed Least Mean-Square Estimation With Partial Diffusion" }, { "paperId": "b89fb68af5494f05fafe80719b28aa2d3c650e05", "title": "Distributed particle filtering in agent networks: A survey, classification, and comparison" }, { "paperId": "3d2218b17e7898a222e5fc2079a3f1531990708f", "title": "I and J" }, { "paperId": null, "title": "Multisensor traffic mapping filters In Proc. Workshop Sensor Data Fusion: Trends, Solutions" }, { "paperId": "5752a49cad61d1c627b805b9760df3fc65ed79b4", "title": "Robust multi-object sensor fusion with unknown correlations" }, { "paperId": "8c0cf27a5e6606b494fab40c7138e4d996c65fb6", "title": "Conservative Non-Gaussian Data Fusion for Decentralized Networks" }, { "paperId": "ced092b90d8706cf2bd700af14c15a2b85c979c9", "title": "Belief consensus and distributed hypothesis testing in sensor networks" }, { "paperId": null, "title": "Hero Partial update LMS algorithms" }, { "paperId": "0b3d0b3390f183a3a96d5313e39dc65b4e384e42", "title": "Graphical Models for Nonlinear Distributed Estimation" }, { "paperId": "e749eabcd449a6a38f394412ad50fd88d20fbb26", "title": "A distributed multiple-target identity management algorithm in sensor networks" }, { "paperId": null, "title": "Degroot Reaching a consensus" }, { "paperId": "848c717ba51e48afef714dfef4bd6ab1cc050dab", "title": "ALGORITHMS FOR THE ASSIGNMENT AND TRANSIORTATION tROBLEMS*" }, { "paperId": null, "title": "On communication, CGMM costs less than GCI if t ≤ 3 but more if otherwise. CGMA communicates always less than CGMM and GCI. targets [23], [25" }, { "paperId": null, "title": "Approximate Gaussian conjugacy : Recursive parametric filtering under nonlinearity , multimodality , uncertainty , and constraint , and beyond" }, { "paperId": null, "title": "On computation, GCI costs the most again while CGMM and CGMA perform similar except when the true number of targets is one for which CGMM may be more computing costly than GCI" }, { "paperId": null, "title": "On communication cost, CGMM costs slightly more than CGMA, both smaller than GCI, especially when few P2P communication iterations ( t < 6 ) are applied" }, { "paperId": null, "title": "Cardinality consensus has improved the cardinality estimation in all consensus schemes" }, { "paperId": null, "title": "GCI shows delay at detecting new born targets as it will even increase OSPA compared with the centralized filter with no sensor communication at time k ∈ [7 , 11] and k ∈ [58 , 60]" }, { "paperId": null, "title": "When t = 6 , the computing time required" } ]
18,356
en
[ { "category": "Medicine", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01d5a28a5a6d9d1274cc62924768e70111da4d52
[]
0.893077
An Architecture and Management Platform for Blockchain-Based Personal Health Record Exchange: Development and Usability Study (Preprint)
01d5a28a5a6d9d1274cc62924768e70111da4d52
[ { "authorId": "9503333", "name": "Hsiu‐An Lee" }, { "authorId": "117292220", "name": "Hsin-Hua Kung" }, { "authorId": "51905760", "name": "Jai Ganesh Udayasankaran" }, { "authorId": "2364178", "name": "Boonchai Kijsanayotin" }, { "authorId": "2008742015", "name": "Alvin B Marcelo" }, { "authorId": "2097934", "name": "L. R. Chao" }, { "authorId": "3061732", "name": "Chien-Yeh Hsu" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
BACKGROUND Personal health record (PHR) security, correctness, and protection are essential for health and medical services. Blockchain architecture can provide efficient data retrieval and security requirements. Exchangeable PHRs and the self-management of patient health can offer many benefits to traditional medical services by allowing people to manage their own health records for disease prevention, prediction, and control while reducing resource burdens on the health care infrastructure and improving population health and quality of life. OBJECTIVE This study aimed to build a blockchain-based architecture for an international health record exchange platform to ensure health record confidentiality, integrity, and availability for health management and used Health Level 7 Fast Healthcare Interoperability Resource international standards as the data format that could allow international, cross-institutional, and patient/doctor exchanges of PHRs. METHODS The PHR architecture in this study comprised 2 main components. The first component was the PHR management platform, on which users could upload PHRs, view their record content, authorize PHR exchanges with doctors or other medical health care providers, and check their block information. When a PHR was uploaded, the hash value of the PHR would be calculated by the SHA-256 algorithm and the PHR would be encrypted by the Rivest-Shamir-Adleman encryption mechanism before being transferred to a secure database. The second component was the blockchain exchange architecture, which was based on Ethereum to create a private chain. Proof of authority, which delivers transactions through a consensus mechanism based on identity, was used for consensus. The hash value was calculated based on the previous hash value, block content, and timestamp by a hash function. RESULTS The PHR blockchain architecture constructed in this study is an effective method for the management and utilization of PHRs. The platform has been deployed in Southeast Asian countries via the Asia eHealth Information Network (AeHIN) and has become the first PHR management platform for cross-region medical data exchange. CONCLUSIONS Some systems have shown that blockchain technology has great potential for electronic health record applications. This study combined different types of data storage modes to effectively solve the problems of PHR data security, storage, and transmission and proposed a hybrid blockchain and data security approach to enable effective international PHR exchange. By partnering with the AeHIN and making use of the network’s regional reach and expert pool, the platform could be deployed and promoted successfully. In the future, the PHR platform could be utilized for the purpose of precision and individual medicine in a cross-country manner because of the platform’s provision of a secure and efficient PHR sharing and management architecture, making it a reasonable base for future data collection sources and the data analytics needed for precision medicine.
JOURNAL OF MEDICAL INTERNET RESEARCH Lee et al ##### Original Paper # An Architecture and Management Platform for Blockchain-Based Personal Health Record Exchange: Development and Usability Study ##### Hsiu-An Lee[1,2,3,4,5], MS; Hsin-Hua Kung[2,3,4,5], BS; Jai Ganesh Udayasankaran[3,4,6], MSc, MBA; Boonchai Kijsanayotin[3,4,7], MSc, MD, PhD; Alvin B Marcelo[3,4,8], MD; Louis R Chao[1], PhD; Chien-Yeh Hsu[2,3,4,5,9], PhD 1Department of Computer Science and Information Engineering, Tamkang University, New Taipei City, Taiwan 2Taiwan e-Health Association, Taipei, Taiwan 3Asia eHealth Information Network, Hong Kong, Hong Kong 4Standards and Interoperability Lab, Smart Healthcare Center of Excellence, Taipei, Taiwan 5Department of Information Management, National Taipei University of Nursing and Health Sciences, Taipei, Taiwan 6Sri Sathya Sai Central Trust, Prasanthi Nilayam, Puttaparthi, India 7Thai Health Information Standards Development Center, Health System Research Institute, Ministry of Public Health, Bangkok, Thailand 8University of the Philippines, Manila, Philippines 9Taipei Medical University Master Program in Global Health and Development, Taipei, Taiwan **Corresponding Author:** Chien-Yeh Hsu, PhD Department of Information Management National Taipei University of Nursing and Health Sciences No 365, Ming-te Road, Peitou District, Taipei City Taipei, 112 Taiwan Phone: 886 939193212 [Email: cyhsu@ntunhs.edu.tw](mailto:cyhsu@ntunhs.edu.tw) ### Abstract **Background:** Personal health record (PHR) security, correctness, and protection are essential for health and medical services. Blockchain architecture can provide efficient data retrieval and security requirements. Exchangeable PHRs and the self-management of patient health can offer many benefits to traditional medical services by allowing people to manage their own health records for disease prevention, prediction, and control while reducing resource burdens on the health care infrastructure and improving population health and quality of life. **Objective:** This study aimed to build a blockchain-based architecture for an international health record exchange platform to ensure health record confidentiality, integrity, and availability for health management and used Health Level 7 Fast Healthcare Interoperability Resource international standards as the data format that could allow international, cross-institutional, and patient/doctor exchanges of PHRs. **Methods:** The PHR architecture in this study comprised 2 main components. The first component was the PHR management platform, on which users could upload PHRs, view their record content, authorize PHR exchanges with doctors or other medical health care providers, and check their block information. When a PHR was uploaded, the hash value of the PHR would be calculated by the SHA-256 algorithm and the PHR would be encrypted by the Rivest-Shamir-Adleman encryption mechanism before being transferred to a secure database. The second component was the blockchain exchange architecture, which was based on Ethereum to create a private chain. Proof of authority, which delivers transactions through a consensus mechanism based on identity, was used for consensus. The hash value was calculated based on the previous hash value, block content, and timestamp by a hash function. **Results:** The PHR blockchain architecture constructed in this study is an effective method for the management and utilization of PHRs. 
The platform has been deployed in Southeast Asian countries via the Asia eHealth Information Network (AeHIN) and has become the first PHR management platform for cross-region medical data exchange.

**Conclusions:** Some systems have shown that blockchain technology has great potential for electronic health record applications. This study combined different types of data storage modes to effectively solve the problems of PHR data security, storage, and transmission and proposed a hybrid blockchain and data security approach to enable effective international PHR exchange. By partnering with the AeHIN and making use of the network’s regional reach and expert pool, the platform could be deployed and promoted successfully. In the future, the PHR platform could be utilized for the purpose of precision and individual medicine in a cross-country manner because of the platform’s provision of a secure and efficient PHR sharing and management architecture, making it a reasonable base for future data collection sources and the data analytics needed for precision medicine.

**_(J Med Internet Res 2020;22(6):e16748)_** [doi: 10.2196/16748](http://dx.doi.org/10.2196/16748)

**KEYWORDS**

blockchain; personal health records; health information interoperability; precision health care; health information management

### Introduction

##### Background

Traditionally, standard clinics have offered medical services focused on disease treatment. However, with the world’s current aging populations, there is a growing gap between the services clinics offer and patients’ actual needs. This means that clinics may not be equipped to offer the complete range of care required by patients, resulting in preventable medical harm. The National Institute for Health and Care Excellence’s 2016 Multimorbidity Clinical Assessment and Management Guidelines Report [1] emphasized the importance of integrating patient-centered decision-making methods for multiple problems, with a focus on precision medicine.

Precision medicine is a disease treatment and prevention strategy formulated with reference to individual variability in terms of genes, environment, and lifestyle, which is used to determine necessary dynamic changes and personalized treatment for preventative health care and clinical care. The core elements of precision medicine are historical disease data, daily vital signs data, personal health management, and medical record exchange, and it aims to stop potentially harmful or unnecessary medical behavior, integrate care, reduce treatment burden, and help patients select meaningful treatment and care goals through accurate assessment.

With the requirements of precision medicine mentioned earlier, there is a need not only to maintain patients’ electronic medical records (EMRs) in hospitals but also to establish personal health records (PHRs), combining medical records from different health institutes with functions of precision medicine, which patients can use to save, manage, use, and exchange with health care practitioners. PHRs are highly private data, and this sensitivity means that there are significant security challenges involved in their management and exchange. Any system that seeks to manage and exchange such records must ensure that health records are exchanged appropriately, that they are not leaked, and that protected data are not tampered with. A good way to achieve the secure exchange of health records is by using blockchain architecture.
A decentralized storage management architecture based on blockchain would be able to meet these security requirements.

In a 2016 study, Ford [2] predicted that 75% of adults worldwide would be using PHRs by 2020 without any external incentives. The importance of a PHR is that it allows a health care provider to examine a patient’s history of illnesses and medications, and it provides a basis for medical decision making. More importantly, PHRs offer a basis for personal health management. PHRs include various health information such as medical information, vital signs (heartbeat, blood pressure, blood sugar, and body temperature), family disease history, and blood test reports [3-5]. Most countries today, however, still use the EMR system. In 2013 in Taiwan, a total of 502 hospitals had a comprehensive EMR system for accessing medical records, inspection reports, medical images, medication information, and so on. However, these data only exist in hospitals and are exchanged with other hospitals or clinics via the EMR exchange center.

To achieve the goals of precision medicine and health care, a patient-centered approach to record management and exchange is required; the traditional centralized PHR repository in hospitals does not meet the requirements to achieve this. A patient-centered approach would involve PHRs being managed by the patients themselves, while providing those records to various health care providers as needed. This kind of system would require a very secure architecture to protect PHR data. According to the National Health Insurance (NHI) Administration of the Ministry of Health and Welfare in Taiwan, the average number of outpatient visits, not including Chinese medicine or dentists, is 13 per year for people in Taiwan. Most of these people visit different hospitals for treatment of the same condition over a short period of time. With the PHR system, people can manage their own health records and conditions, and doctors can also view their past medical records and medication status.

Blockchain technology was proposed by Nakamoto in 2008 [6] in a white paper titled “Bitcoin: A Peer-to-Peer Electronic Cash System.” A blockchain has the characteristics of decentralization, and its encryption mechanism can be designed to verify the data content to ensure that the data have not been tampered with. In that paper, the blockchain concept was used to solve the problem of data security and third-party authentication in the transaction process. A blockchain is a decentralized public ledger that records all money transactions and how much money everyone owns. John et al [7] proposed that the use of blockchain technology in electronic health care records can avoid the need to add another organization between the patient and the records. It is not a new repository for data but rather implies a decentralized control mechanism in which all users have an interest, but no one exclusively owns the data. This technology can improve data safety and remove privacy issues. Pouyan et al [8] stated that, regarding trust in health information exchange competency and exchange integrity, the blockchain architecture is more trustworthy than other exchange mechanisms for exchanging highly sensitive information.
This design differed from previous work on blockchain infrastructures and associated consensus mechanisms in that, while those operate in a manner decoupled from other blockchain frameworks, Fast Healthcare Interoperability Resource (FHIR) Chain [9] focuses on designing smart contract decisions to be compatible with any existing blockchain architecture that supports the execution of smart contracts. However, this architecture remains vulnerable to 51% attacks and does not provide complete data security.

##### Objectives

This study proposed a blockchain-based architecture for storing, sharing, and protecting sensitive personal information. In the proposed architecture, the blockchain manages the authorization of data exchanges between patients, health care providers, and other users. The blockchain does not physically replace the electronic health record system, as most hospital information systems store detailed EMRs in a secure database on site or on a duplicate site located outside the hospital. Therefore, the blockchain architecture simply helps to ensure the security, confidentiality, integrity, and availability of the data. Combined with FHIR’s data format standards, stakeholders can read and write data into their own electronic health record systems, and these data can be exchanged securely with other systems using the blockchain. The computational strength of the encryption built into the blockchain ensures that the data are correctly and safely transferred during PHR exchange transactions.

However, a blockchain is not a data repository; rather, it is a ledger of data integrity. This technology can be used to exchange records, verify data, and protect sensitive data. It can ensure that medical records will not be modified by unauthorized third parties. The time at which data are uploaded to the blockchain can also be recorded. Thus, enabling the collection of a patient’s more complete longitudinal data, together with the ability to share it remotely with professionals, can allow for better decision making and reduce medical errors and medical malpractice.

### Methods

The blockchain-based exchange architecture for PHR management proposed in this study comprises 2 main components. The first component is the PHR management platform, on which users can upload PHRs, view their record content, authorize PHR exchange with doctors or other medical health care providers, and check their block information. When a PHR is uploaded, the hash value of the PHR is calculated by the SHA-256 algorithm, and the PHR is encrypted by the RSA (Rivest-Shamir-Adleman) encryption mechanism before being transferred to a secure database. The second component of the architecture is the blockchain exchange architecture, which is based on Ethereum to create a private chain. Proof of authority (PoA), which delivers transactions through a consensus mechanism based on identity, is used for consensus. The hash value is calculated from the previous hash value, the block content, and the timestamp by a hash function. The architecture of the platform is shown in Figure 1.

The PHR management platform consists of the transfer module, the security module, and the view PHR module. The transfer module allows users to connect to the blockchain exchange architecture to create or search for blocks. The security module is used to encrypt and confirm the PHR content. The view PHR module displays the PHR content for personal health management or for doctors to view the record. The blockchain architecture in this study is designed based on Ethereum, including the elliptic curve digital signature, PoA, and the new block creation function. The blockchain architecture ensures that the PHR content remains secure and confirms that the PHR content is correct.

**Figure 1.** Personal health record management platform and blockchain architecture. PHR: personal health record.

##### Personal Health Record Management Platform

The major goal of this study was to build a cross-area health information exchange platform that could fulfill the needs of international medical services. This study used My Health Bank (MHB) as an initial example of PHRs. In Taiwan, MHB is issued by the NHI and contains a majority of the clinical data collected from different health care services.
The blockchain architecture in this study is designed based on Ethereum, including elliptic curve digital signature, PoA, and the new block creation function. The blockchain architecture ensures that the PHR content remains secure and confirms that the PHR content is correct. **Figure 1.** Personal health record management platform and blockchain architecture. PHR: personal health record. ##### Personal Health Record Management Platform The major goal of this study was to build a cross-area health information exchange platform that could fulfill the needs of international medical services. This study used My Health Bank (MHB) as an initial example of PHRs. In Taiwan, MHB is issued by the NHI and contains a majority of the clinical data collected from different health care services. MHB not only includes the ----- JOURNAL OF MEDICAL INTERNET RESEARCH Lee et al necessary clinical data chronically arranged by time for a single patient but also contains the information entered by the patient, such as blood pressure measured at home. Therefore, there was a good reason for this study to choose MHB for the PHRs in the Asia eHealth Information Network (AeHIN). Detailed items of the MHB are provided in this manuscript. Basically, PHRs refer to individual-centric personal health data from different medical service providers or devices, while EMRs represent the data of a patient in a single hospital. Multiple simulated computers are used as blockchain nodes in this study to emulate the encryption and secure storage of a PHR in this study. As health records are private data, the blockchain must be built in a secure environment as a private chain, increasing the efficiency and stability of data transmission and sharing. MHB was used as a PHR example in this study. MHB was launched by the Ministry of Health and Welfare of Taiwan in 2015. It allows Taiwan’s NHI members to download their own health records from its website. The MHB data contain all the necessary clinical information because they are generated by the hospital when it applies for health insurance payments. The entire PHR of any single patient was uploaded in our platform. For authority management and confidentiality, we used a variety of tags in the contents to specify the function levels to different uses through a carefully designed user interface, through which patients could assign which data would not be revealed to others as well as assign tags to the data. Our design to keep the whole data is for the purpose of future use of the data, as the PHR platform could also become a clinical data repository and the data could be used for further analysis of precision medicine in the future. The MHB data include (1) outpatient information for Western medicine, traditional Chinese medicine, and dentistry; (2) hospitalization information; (3) allergy information; (4) images and information for pathological exams and tests; (5) patients’ discharge record abstract; (6) patients’ intention for organ donation and palliative care; (7) preventive health data; (8) preventive vaccination information; (9) patients’ health insurance card information; (10) premium and charging specific information; and (11) insurance premium payment specific information. The MHB file format can be selected as either XML or JSON. This study used the XML format. ##### Hash Value for Data Integrity Confirmation To ensure that PHRs are not modified when they are transferred between platforms, this study designed a hash function to confirm the integrity of PHR data. 
SHA-256 was used to create a hash value for each PHR. SHA-256 is a cryptographic hash function that takes an input and produces a 256-bit (32-byte) hash value known as a message digest, typically rendered as a 64-digit hexadecimal number. It was designed by the United States National Security Agency and is a US Federal Information Processing Standard [10,11]. If the PHR data have not been altered during transfer, the SHA-256 hash value will remain the same. Unlike encryption, which converts text into reversible cipher texts of different lengths, the hash function converts text into irreversible hash strings (or message digests) of the same length.

When users upload their PHRs to the platform, the PHR hash value is created and transferred to the blockchain architecture as block content. Then, when the PHRs are viewed by the owner or exchanged with other users, the platform obtains the hash value from the block and calculates the PHR hash value by SHA-256 again. If the hash value from the PHR is equal to the hash value from the block, the PHR data have not been modified. The procedure of PHR management is shown in Figure 2.

**Figure 2.** Personal health record creation, uploading, and verification procedure. DB: database; PHR: personal health record.

##### Data Encryption for Personal Health Record Security

In this study, PHRs were encrypted by RSA before being uploaded to the secure database. RSA is a public-key cryptosystem used for secure data transmission [12]. The encryption key is public and differs from the decryption key, which is private. The platform automatically creates the RSA public and private keys for users. When users upload their PHRs, the public key is used to encrypt the record. Thus, even if a malicious attacker were to overcome the firewall and all other security mechanisms, they would only be able to obtain the encrypted PHR and would have no means of decrypting it. The user’s private key is used to decrypt the PHR when it is exchanged.

##### Viewing Personal Health Record and Block Information for Personal Health Management

This study designed a PHR exchange architecture in which PHR contents are not read when users upload their PHRs; the platform only uploads encrypted PHRs to the secure database, thus ensuring the security of personal data. Moreover, this study developed a user interface for personal health management that shows PHR contents when users want to access them. Using MHB as an example, when users use the application to read their PHRs, it means that the platform has the authority to read the PHRs. The PHRs are then decrypted by the user’s RSA private key, and the platform reads the PHR data without storing them. This means that the platform cannot simply access PHRs without explicit user consent and action.

##### Blockchain Exchange Architecture

As the blocks in a blockchain cannot be tampered with or maliciously altered, this study stored PHR hash values in a blockchain to protect the PHR data and confirm the integrity of the PHR contents. Ethereum’s private chain was used as the blockchain architecture, and the Geth (Go Ethereum) application, which implements the Ethereum protocol, was used to transfer transactions from the proposed platform to the blockchain exchange architecture, create new blocks, and connect to the blockchain. The block creation process is shown in Figure 3.

**Figure 3.** Block creation process. PHR: personal health record.
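To make the flow just described concrete, here is a minimal Python sketch of the upload and verification steps, assuming the widely used `cryptography` package; the function names and key handling are our own illustrative assumptions, not the platform’s published code. Note also that textbook RSA can only encrypt payloads smaller than the key size, so a production system would encrypt a symmetric key rather than the whole record (hybrid encryption).

```python
import hashlib
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# Hypothetical RSA key pair created by the platform for one user.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

def upload_phr(phr_bytes: bytes):
    """Return the digest for the block and the ciphertext for the database."""
    phr_hash = hashlib.sha256(phr_bytes).hexdigest()  # 64 hex digits
    # Payload must be small under plain RSA-OAEP; real systems wrap a
    # symmetric key here instead of the whole record.
    ciphertext = public_key.encrypt(phr_bytes, OAEP)
    return phr_hash, ciphertext

def verify_phr(ciphertext: bytes, phr_hash_from_block: str) -> bytes:
    """Decrypt with the private key and confirm integrity against the block."""
    phr_bytes = private_key.decrypt(ciphertext, OAEP)
    assert hashlib.sha256(phr_bytes).hexdigest() == phr_hash_from_block
    return phr_bytes

stored_hash, stored_ct = upload_phr(b"<PHR>example record</PHR>")
print(verify_phr(stored_ct, stored_hash))
```

The point of keeping only the digest on the chain is that the large, private record can live in an ordinary database, while any later tampering is detectable simply by recomputing the digest.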
To secure against private data being leaked during transmission on the network, the data are encrypted during the data transmission process. The health record uploaded to the secure database by the platform is also encrypted to ensure the privacy of the user. The block content includes the PHR hash and a timestamp, where the PHR hash is used to check whether the PHR in the database has been tampered with. If a malicious attacker attempts to obtain the block content, they will only get a collection of random numbers.

The protection method combines hashing and asymmetric encryption. The block content is protected by a hash function that uses SHA-256 to scramble data into a set of hexadecimal strings. Asymmetric encryption uses the elliptic curve digital signature algorithm to sign the PHR transfer information, ensuring the integrity and nonrepudiation of transaction data, and then the PoA consensus mechanism is used for validation by a qualified verifier established by an audited authority to confirm the correctness and validity of the PHR and create the verified blocks of the blockchain.

Elliptic curve cryptography (ECC) is public-key (asymmetric) cryptography based on elliptic curve mathematics. The elliptic curve digital signature algorithm (ECDSA) is based on ECC for digital signatures. The working principle is similar to that of most digital signature algorithms: messages are signed with a private key and verified with a public key, thus offering nonrepudiation. Compared with traditional digital signature algorithms (such as RSA), ECC is faster, offers stronger security per key bit, and requires shorter signatures.

In the proposed platform, each user has one password for a user account and a private key for the blockchain and PHR decryption. To improve platform efficiency, users can choose to store their personal blockchain private key in the platform’s security database (or store it themselves). When data are uploaded to the platform, the system will retrieve the key from the database to complete the transaction process.
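As a concrete illustration of the signing step described above, the following minimal sketch signs hypothetical PHR transfer information with an ECDSA private key and verifies it with the corresponding public key; the curve choice (SECP256K1, the curve Ethereum itself uses) and the payload layout are our own assumptions, not the platform’s code.

```python
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# Hypothetical per-user ECDSA key pair on Ethereum's curve.
private_key = ec.generate_private_key(ec.SECP256K1())
public_key = private_key.public_key()

# Illustrative PHR transfer information to be recorded on chain.
transfer_info = b'{"phr_hash": "...", "phr_index": 42, "timestamp": 1592000000}'

# Sign with the private key; the signature travels with the transaction.
signature = private_key.sign(transfer_info, ec.ECDSA(hashes.SHA256()))

# Any node can verify with the public key, providing nonrepudiation:
# only the private-key holder could have produced a valid signature.
try:
    public_key.verify(signature, transfer_info, ec.ECDSA(hashes.SHA256()))
    print("signature valid: transfer information is authentic")
except InvalidSignature:
    print("signature invalid: transfer information was altered")
```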
##### Proof of Authority for Block Creation

PoA is a technology that achieves consensus in a private chain. In its operation, an authorized node has the authority to generate the next block in the blockchain network. The blockchain information reaches full consensus across all nodes, which guarantees that the latest blocks are accurately chained to the blockchain and that the blockchain information stored by the nodes is consistent, indivisible, and resistant to malicious attacks. In this study, the private chain consensus mechanism is established, and the verifiers are set up on multiple simulated computers. The initial verifier nodes are set up on the simulated computers. In the future, possible nodes may represent cooperating institutions, medical institutions, research centers, and so on; a verifier uses this identity to obtain the right to verify. Compared with other proof mechanisms, the key elements of the PoA network in this study include the following:

1. Improved efficiency: Block creation is accelerated, and the waiting time for data exchange is reduced.
2. Verifier setup: A mutual supervision relationship with partner institutions is established to allow self-supervision and supervision of others, preventing the blockchain from being controlled by the node manager; the verifiers can vote for a new verifier or remove an unqualified verifier at any time.
3. Highly scalable and highly compatible: It is also possible to complete intelligent collaborative construction and optimize it.

##### Hash Value for Block Corrected, Confirmed, and Connected

The cryptographic hash function is an important part of the blockchain. It is essentially a function that gives security capabilities to the created block, based on processed transactions, making them immutable. In Ethereum’s function, SHA-256 is used to create new blocks. The hash of a block is created based on the block content, the previous hash value, and the timestamp. The block content and architecture are shown in Figure 4. Block content includes the following (an illustrative sketch of this block layout is given after the workflow description below):

- Block number: current block number
- Pre-Hash: the hash value of the previous block
- Hash: the hash value of this block
- Timestamp: current time
- PHR hash: the hash value of the PHR created by the platform
- PHR index: the index position of the health record in the secure database

**Figure 4.** Block content on the blockchain architecture. PHR: personal health record.

##### Overall System Workflow

Personal Health Record Exchange Authority Mechanism

Users can manage the authority for PHR exchange once they have uploaded their PHRs. When users want to make their PHRs available to a doctor, the authority assignment procedure is as shown in Figure 5. The workflow of the system comprises 3 components: upload, exchange, and view. To begin, a user uploads their PHR to the platform (Figure 5). In the uploading process, the PHR is assigned a hash value by SHA-256. Then, the PHR is transferred to the secure database after encryption by RSA. Once the data are stored in the database, the blockchain is used to ensure data security and integrity. SHA-256 and ECC are then used to create a block, and the Ethereum architecture is used as the blockchain architecture in this study. The PHR hash value and the PHR index in the database are transmitted to the Ethereum block by the user’s blockchain account (public key) and using the user’s private key signature. To create a block, the block content must be verified and the block hash value must be calculated by the verifier node; it is then broadcast to each node.

The workflow of users sharing their PHRs with a doctor is shown in Figure 6. First, the platform sends the transaction to the blockchain architecture. The blockchain architecture will then select the user’s block and read its content. The user’s PHRs will be obtained from the secure database based on the database index of the PHRs and decrypted using the user’s private key, and the hash value will be created again. The PHR will then be transferred to the doctor after being encrypted by the doctor’s RSA public key, provided the recomputed hash value is equal to the hash in the block content.

The workflow of viewing the PHR content is shown in Figure 7. When users want to view their PHRs or share their PHRs with a doctor, the platform will send the transaction to the blockchain architecture. The blockchain architecture will confirm that the PHR content has not been modified and that the user or doctor has the authority to view the PHR. The PHR will then be transferred to the user or doctor after being encrypted by their RSA public key. The user or doctor will then use their own private key to decrypt it. They will then be able to view the PHR content. MHB is used as an example in this study.

**Figure 5.** Workflow of a user uploading their personal health record. PHR: personal health record.
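The following is the illustrative sketch of the block layout referred to above. The field names and JSON serialization are our own assumptions (a real Ethereum block carries many more fields), but it shows the key property: each block’s hash covers the block content, the previous hash, and the timestamp, so altering any stored PHR hash invalidates every subsequent block.

```python
import hashlib
import json
import time

def create_block(block_number: int, pre_hash: str,
                 phr_hash: str, phr_index: int) -> dict:
    """Build a simplified block and chain it to its predecessor."""
    block = {
        "block_number": block_number,
        "pre_hash": pre_hash,           # hash value of the previous block
        "timestamp": int(time.time()),  # current time
        "phr_hash": phr_hash,           # SHA-256 digest of the PHR
        "phr_index": phr_index,         # index of the record in the database
    }
    # The block's own hash is computed over the previous hash, the block
    # content, and the timestamp, so tampering anywhere breaks the chain.
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

genesis = create_block(0, "0" * 64, phr_hash="...", phr_index=0)
nxt = create_block(1, genesis["hash"], phr_hash="...", phr_index=1)
print(nxt["pre_hash"] == genesis["hash"])  # True: blocks are chained
```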
**Figure 6.** Workflow of a user sharing their personal health record with a doctor. PHR: personal health record; RSA: Rivest-Shamir-Adleman.

**Figure 7.** Workflow of a user viewing their own personal health record. PHR: personal health record; RSA: Rivest-Shamir-Adleman.

##### International Personal Health Record Exchange Implementation Process

The platform can be used anywhere an internet connection is available. This study used the data format designed in Taiwan's MHB as an example to test the system in Asia. MHB contains all the necessary clinical health care data, and in Taiwan, 99% of residents can access MHB. Therefore, we chose MHB as an example of PHRs for the AeHIN. The Philippines and Thailand were used as test cases for this study, and 2 of the physician representatives in this study were Dr Alvin in the Philippines and Dr Boonchai in Thailand.

A testing scenario was designed in which a patient from Taiwan travels to Bangkok and the Philippines and suddenly requires medical services. Both the patient and the doctors in the different countries were registered on this platform. Before the patient could see a doctor in a specific country, authorization to view the PHR would need to be given to the doctor by the patient. For this testing scenario, a patient's PHR with diagnoses of type 2 diabetes mellitus, epilepsy, brain stem stroke, and proteinuria NOS (not otherwise specified), together with medication data, was designed. The data for the testing scenario are described in Table 1. The scenario consisted of the following scenes:

1. A patient from Taiwan travels to the Philippines.
2. The patient develops a headache and dizziness.
3. The patient goes to see a doctor who has been registered in our platform.
4. Authorization to view the PHR is given to the doctor.
5. The doctor retrieves the patient's PHR from the platform.
6. By viewing the previous PHRs of the patient, the doctor obtains the health profile of the patient and then completes a new diagnosis, treatment, or medication order according to the current status of the patient.
7. A new block is created and the new PHR is stored in the PHR database, if the doctor is willing to upload the new record.

**Table 1.** The data of the testing scenario for international personal health record exchange.

| Num | Date | Diagnosis | Medication |
|-----|------|-----------|------------|
| 1 | October 10, 2017 | Type 2 diabetes mellitus | Iunaidon Tablets Yu Sheng |
| 2 | July 16, 2017 | Epilepsy | Neurtrol F.C. Tablets 300 mg |
| 3 | May 25, 2017 | Brain stem stroke | Cofarin Tab 1 mg |
| 4 | May 20, 2017 | Brain stem stroke | Cofarin Tab 1 mg |
| 5 | January 20, 2017 | Proteinuria not otherwise specified | Kaluril Tablets 5 mg |

### Results

##### Study Design

This study designed a blockchain-based PHR exchange architecture and management platform for the secure management, transfer, and sharing of PHR data between patients and medical health care providers. In the PHR management component, the user interface was established; its functions include viewing PHRs for personal health management, sharing PHRs with a doctor, and checking the blockchain content for security. The PHR viewer user interface is shown in Figure 8. MHB was used as an example in this study.

**Figure 8.** The user interface of a personal health record viewer.

##### The Personal Health Record Viewer User Interface of the Platform

In Figure 8, the uploaded PHR is displayed. Records are sorted in time sequence from the latest to the oldest. The display shows the record of each visit and the patient's medication history. A doctor can view the latest related health records and recent medication status to give the patient the most appropriate diagnosis, while avoiding adverse reactions from duplicate or conflicting medications.
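The sharing and viewing workflows above both rely on RSA public-key encryption of the PHR. The sketch below, using Python's `cryptography` package, shows this pattern for a short record; in practice RSA would typically wrap a symmetric key rather than encrypt a full PHR directly (RSA-2048 with OAEP can only encrypt a small payload), and all names here are our illustrative assumptions.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

# OAEP padding with SHA-256, a common modern choice for RSA encryption.
OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# Each participant holds an RSA key pair; the platform needs only public keys.
doctor_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

phr_snippet = b'{"dx": "epilepsy", "rx": "Neurtrol F.C. Tablets 300 mg"}'

# Platform side: encrypt the (already hash-verified) PHR with the doctor's public key.
ciphertext = doctor_key.public_key().encrypt(phr_snippet, OAEP)

# Doctor side: decrypt with the matching private key to view the record.
assert doctor_key.decrypt(ciphertext, OAEP) == phr_snippet
```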
##### Blockchain Information in the Platform

The block content is shown in Figure 9 and includes the time at which the PHR was uploaded, the PHR owner, the PHR hash value, a timestamp, a block hash value, and a pre-hash value. Each block records the hash of the previous block and is thereby chained to it.

**Figure 9.** Block content.

When users upload their personal MHB file, the system automatically converts the file to the FHIR format and transfers it to the secure database. The data are then encrypted and uploaded to the blockchain. Users can view the uploaded data records and the contents of the block through the upload module and obtain a health record for download in the FHIR format. A hospital can then upload those data to their system, as long as the system supports the FHIR format.

The blockchain architecture allows users to set their own PHR read permissions, using the PHR management platform to control who can view their records. The blockchain is used to confirm that the PHR content is correct. The authority control user interface is shown in Figure 10. The simple user interface design ensures that the platform and its functions are easy to navigate and operate. The design uses 2 columns to display the permission lists: one is a list of trusted participants, and the other is a list of participants to whom the user wishes to grant permission to view their current PHR. When the user wants to grant a doctor permission to view their PHR, they select the doctor from the left-hand column and update the identity.

**Figure 10.** Authority control user interface.

The blockchain architecture in this study is built on Ethereum, and the blocks are connected by the hash value of each block. The connection diagram is shown in Figure 11.

**Figure 11.** Blockchain connection diagram.

##### Testing Feedback From Physicians

This study used the MHB data as an example to demonstrate the functioning of the PHR exchange platform and the PHR exchange mechanism based on the blockchain architecture and encryption mechanism, which can ensure the security and tamper-proof nature of PHR storage. The system can manage self-health records more effectively and provide physicians with PHRs as a decision-making reference. The results of this study were cross-nationally tested in Southeast Asian countries by exchanging PHRs via the AeHIN; physicians from Southeast Asian countries were invited as international participant doctors so that users could exchange PHRs internationally and receive appropriate treatment.

The proposed platform was designed to share and exchange PHR information electronically with ease. The contents of the PHRs were protected and kept unchanged by the blockchain architecture. The international standard Health Level 7 FHIR format was adopted in this platform to ensure interoperability. Doctors could use the platform to upload and download PHR data from different places at any time, thereby allowing PHRs to be exchanged efficiently. Therefore, this platform could increase the accessibility, interoperability, timeliness, and usability of PHRs.
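Because each block stores the previous block's hash (Figure 11), tampering anywhere in the chain can be detected by recomputing hashes along it. The sketch below illustrates this check, reusing the illustrative block layout from the earlier block-creation sketch; it is our simplification, not the platform's actual code.

```python
import hashlib
import json

def block_digest(block: dict) -> str:
    """Recompute a block's hash over everything except the stored hash itself."""
    content = {k: v for k, v in block.items() if k != "hash"}
    return hashlib.sha256(json.dumps(content, sort_keys=True).encode()).hexdigest()

def verify_chain(chain: list) -> bool:
    for i, block in enumerate(chain):
        if block_digest(block) != block["hash"]:
            return False                      # block content was tampered with
        if i > 0 and block["pre_hash"] != chain[i - 1]["hash"]:
            return False                      # link to the previous block is broken
    return True

# With chain = [genesis, block_1, ...], verify_chain(chain) stays True
# until any stored field (e.g., a PHR hash) is modified after the fact.
```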
The platform is currently in its testing stage, and there is a small number of users on the network. The users' comments can be summarized as follows:

1. PHRs that are in a standardized format on this platform are a benefit for clinical service.
2. By using the platform, the exchange of PHRs is easy and efficient.
3. The protection offered by the blockchain technology can convince users that the system is secure.
4. Even if the role of the user is that of the platform manager, PHRs still cannot be read without the authorization given by the patient to view the PHR.
5. Personal health management functions can be designed in future work.

### Discussion

##### Potential

Blockchain technology has great potential for electronic health records [13]. The core of the blockchain model ensures that any information involved has nonrepudiation, maintaining the correctness of the historical process records [14]. Leeming et al [15] reviewed the current PHR definitions and multiple blockchain architectures for PHR management and found that blockchain technology is a key requirement for the management of consent to use private health data. Many studies have proposed health applications based on blockchain technology that can be used in the medical domain to achieve medical record sharing.

In 2016, Ekblaw et al [16] created a decentralized medical record management platform built on a private Ethereum network. The platform could only be accessed by authorized users, and blockchain was used to manage authentication, data sharing, and other security functions in the medical field. In that study, when any information was updated on the hospital side, it was uploaded to the blockchain; the platform was synchronized with the patient's database, and the patient would be reminded to update the block. However, patients were unable to upload data themselves, as the data were all still stored in the centralized hospital database.

Omar et al [17] used Ethereum's smart contracts and a decentralized application to build a cloud-based PHR system. This system was used to store the PHR of each user and to ensure the security and integrity of the uploaded data. Private accessible units (PAU) were responsible for all encryption, decryption, uploading, searching, and verification of data. Users could encrypt data with an encryption key and upload the data to the blockchain through a smart contract, which then returned a block-id to the uploading user. The user would be responsible for remembering the block-id. To view the data, the user would provide the PAU with the block-id, and the system would automatically return the corresponding block content and decrypt it with the decryption key. This system, however, did not offer the capability of sharing personal medical records or system interoperability.

Peterson et al [18] presented a blockchain-based approach to sharing patient medical data that relies on a single centralized source of trust rather than network consensus to translate data and provides consensus on proof of structural and semantic interoperability. Zhang et al [9] presented a blockchain-based framework, FHIRChain, designed to fit the technical requirements defined by the Office of the National Coordinator for Health Information Technology interoperability roadmap. Precision medicine requires the accurate collection and management of all kinds of clinical data.
To this end, this study constructed an innovative data storage mechanism, used blockchain technology to ensure the correctness and safety of the PHR data, and combined a secure database storage structure with a data verification mechanism to complete data management. A Korean team implemented a blockchain-based PHR management platform; however, the data transaction time in their study was too long. To allow queries from a large number of patients to be managed, transaction and propagation times must improve [19]. Rajput et al [20] proposed a blockchain-based emergency access control management system that can protect PHRs using a smart-contract design; however, the system manager can still retrieve real patient data, making privacy a concern.

The platform designed in this study offers patient-centered clinical record exchange and decision-making support and allows patients to view and share their own PHRs, as well as to manage their health status and apply for medical data through other functions effectively. The platform and architecture could enable the meaningful use of PHRs and promote self-health management. Feasibility was demonstrated by an application test with international users in this study.

An important element of precision medicine is the exchange and management of PHRs and the subsequent provision of personalized medical treatment based on those data during clinical diagnosis and treatment. This study therefore combined a blockchain architecture and data verification methods to effectively solve the problems of data security, storage, and transmission, and proposed a hybrid blockchain and data security approach that could enable effective international PHR exchange. Using the AeHIN's cross-national network environment, PHRs were successfully exchanged, and an international network of medical and health care providers was established to improve the quality of health care and precision medicine internationally.

##### Principal Findings

The principal findings are as follows:

- A cross-country platform for PHRs was developed in this study. By using this platform, PHRs could be exchanged and shared between different organizations and individuals (doctors, patients, etc) in an efficient manner.
- A PHR platform was built using a blockchain architecture to ensure the security and privacy of health data. Few PHR systems based on blockchain technology have been developed for cross-country data exchange purposes.
- The platform has been tested by several users in different countries in the AeHIN and has been shown to be a suitable platform for PHR sharing and exchange.
- In our design, health data that can be used for precision medicine can be stored and modeled in the architecture.

##### Limitations

Currently, our PHR platform is at the prototype stage, and only limited groups of users are participating in its testing. The hardware architecture will need to be expanded to ensure good performance of the platform when a large number of users wish to access the system. Furthermore, as the contents of the PHRs will be exchanged and shared by different countries and regions, an international data standard, such as HL7 FHIR, will be required to ensure smooth implementation.

##### Future Directions

Important points regarding the comparison with prior work are as follows:

- Precision medicine is the future trend of health care and must be based on PHRs. Our PHR platform not only enables PHRs to be shared between countries but also creates space for future precision medicine functions.
- Blockchain technology ensures data security and privacy and has been successfully used in financial data management systems.
- A cross-country medical care architecture must be developed to support today's busy international activity.

##### Conclusions

On the basis of blockchain technology, it is possible to remove all limitations on patients' ability to copy and transfer their own health records to other health service providers [21]. After data are uploaded to the blockchain, the block can guarantee that the records cannot be modified by anyone [22]. The PHRs are stored in a decentralized network; therefore, stealing PHR data or illegally hacking the system becomes extremely difficult [21]. In addition to improved health record sharing and analysis, data sharing will be secured and privacy will be protected [23].

In addition, blockchain technology is essential for future precision medicine applications. Through the blockchain architecture, the data required by precision medicine can be integrated from different sources. Beyond using blockchains as a ledger for patient care data, they can also be used to store various types of health care-related data, such as precision medicine and genomic data [24], health care plan data, patient-centered data [25], clinical trial data [26], medication supply chain data, and biomarker data [27-29].

In this study, we implemented a cross-country platform for PHRs. By using this platform, PHRs can be exchanged and shared between different organizations in an efficient manner. The platform has been tested by several users in different countries in the AeHIN and has been shown to be a suitable platform for PHR sharing and exchange. With our design, the health data that can be used for precision medicine can be stored and further modeled in the architecture. The security and privacy of PHRs can also be ensured by the features of blockchain technology, such as distributed node consensus algorithms, data transmission cryptography, and a decentralized network of smart contracts. However, an international standard, such as FHIR, will be required to ensure the PHR contents are internationally compatible.

##### Acknowledgments

This project has received funding from the Ministry of Science and Technology, Taiwan, under project no. 108-3011-F-075-001 and the Ministry of Education, Taiwan, under project no. 107EH12-22.

##### Authors' Contributions

The work presented in this paper was carried out in collaboration among all authors. HL and CH conceptualized the study and study design and also designed the architecture of the system. HL, HK, and JU carried out the literature review and system analysis. HK contributed substantially to the implementation of the system. HL drafted the manuscript, and CH made significant revisions. JU, BK, and AM remotely tested the system. CH, JU, BK, AM, and LC supervised the methods of the implementation on a cross-country platform and suggested valuable improvements. All authors approved the final version of the manuscript.

##### Conflicts of Interest

None declared.

##### References

1. Farmer C, Fenu E, O'Flynn N, Guthrie B. Clinical assessment and management of multimorbidity: summary of NICE guidance. Br Med J 2016;354:i4843. doi: 10.1136/bmj.i4843
2. Ford EW, Hesse BW, Huerta TR. Personal health record use in the United States: forecasting future adoption levels. J Med Internet Res 2016;18(3):e73. doi: 10.2196/jmir.4973
3. Kaelber DC, Jha AK, Johnston D, Middleton B, Bates DW. A research agenda for personal health records (PHRs). J Am Med Inform Assoc 2008;15(6):729-736. doi: 10.1197/jamia.M2547
4. AHIMA e-HIM Personal Health Record Work Group. Role of the personal health record in the EHR (2010 update; retired). American Health Information Management Association; 2005. URL: https://library.ahima.org/doc?oid=103209
5. Connecting for Health: a public-private collaborative (report). New York: Markle Foundation; 2003. URL: http://bok.ahima.org/PdfView?oid=76138
6. Nakamoto S. Bitcoin: a peer-to-peer electronic cash system. 2008. URL: https://bitcoin.org/bitcoin.pdf
7. Halamka JD, Lippman A, Ekblaw A. The potential for blockchain to transform electronic health records. Harvard Business Review. 2017 Mar 3. URL: https://hbr.org/2017/03/the-potential-for-blockchain-to-transform-electronic-health-records
8. Esmaeilzadeh P, Mirzaei T. The potential of blockchain technology for health information exchange: experimental study from patients' perspectives. J Med Internet Res 2019;21(6):e14184. doi: 10.2196/14184
9. Zhang P, White J, Schmidt DC, Lenz G, Rosenbloom ST. FHIRChain: applying blockchain to securely and scalably share clinical data. Comput Struct Biotechnol J 2018;16:267-278. doi: 10.1016/j.csbj.2018.07.004
10. NIST. Secure hash standard (FIPS 180-2). Gaithersburg: NIST; 2002. URL: https://csrc.nist.gov/csrc/media/publications/fips/180/2/archive/2002-08-01/documents/fips180-2withchangenotice.pdf
11. Dang Q. Recommendation for applications using approved hash algorithms. Gaithersburg: US Department of Commerce, National Institute of Standards and Technology; 2012. URL: https://nvlpubs.nist.gov/nistpubs/Legacy/SP/nistspecialpublication800-107r1.pdf
12. Rivest RL, Shamir A, Adleman L. A method for obtaining digital signatures and public-key cryptosystems. Commun ACM 1978;21(2):120-126. doi: 10.1145/359340.359342
13. Dimitrov DV. Blockchain applications for healthcare data management. Healthc Inform Res 2019;25(1):51-56. doi: 10.4258/hir.2019.25.1.51
14. Mendes D, Rodrigues I, Fonseca C, Lopes M, García-Alonso JM, Berrocal J. Anonymized distributed PHR using blockchain for openness and non-repudiation guarantee. In: Méndez E, Crestani F, Ribeiro C, David G, Lopes J, editors. Digital Libraries for Open Knowledge. Cham: Springer; 2018:381-385.
15. Leeming G, Cunningham J, Ainsworth J. A ledger of me: personalizing healthcare using blockchain technology. Front Med (Lausanne) 2019;6:171. doi: 10.3389/fmed.2019.00171
16. Ekblaw A, Azaria A, Halamka JD, Lippman A. A case study for blockchain in healthcare: "MedRec" prototype for electronic health records and medical research data. In: Proceedings of the 2016 IEEE Open & Big Data Conference; August 22-24, 2016; Washington, DC. URL: https://www.healthit.gov/sites/default/files/5-56-onc_blockchainchallenge_mitwhitepaper.pdf
17. Omar AA, Bhuiyan MZ, Basu A, Kiyomoto S, Rahman MS. Privacy-friendly platform for healthcare data in cloud based on blockchain environment. Future Gener Comput Syst 2019;95:511-521. doi: 10.1016/j.future.2018.12.044
18. Peterson K, Deeduvanu R, Kanjamala P, Boles K. A blockchain-based approach to health information exchange networks. In: Proceedings of the 2016 NIST Blockchain & Healthcare Workshop; September 26-27, 2016; Gaithersburg, MD. URL: https://www.healthit.gov/sites/default/files/12-55-blockchain-based-approach-final.pdf
19. Park YR, Lee E, Na W, Park S, Lee Y, Lee J. Is blockchain technology suitable for managing personal health records? Mixed-methods study to test feasibility. J Med Internet Res 2019;21(2):e12533. doi: 10.2196/12533
20. Rajput AR, Li Q, Ahvanooey MT, Masood I. EACMS: emergency access control management system for personal health record based on blockchain. IEEE Access 2019;7:84304-84317. doi: 10.1109/access.2019.2917976
21. Ivan D. Moving toward a blockchain-based method for the secure storage of patient records. 2016. URL: https://www.healthit.gov/sites/default/files/9-16-drew_ivan_20160804_blockchain_for_healthcare_final.pdf
22. Yue X, Wang H, Jin D, Li M, Jiang W. Healthcare data gateways: found healthcare intelligence on blockchain with novel privacy risk control. J Med Syst 2016;40(10):218. doi: 10.1007/s10916-016-0574-6
23. Linn LA, Koo MB. Blockchain for health data and its potential use in health IT and health care related research. URL: https://www.healthit.gov/sites/default/files/11-74-ablockchainforhealthcare.pdf
24. McKernan KJ. The chloroplast genome hidden in plain sight, open access publishing and anti-fragile distributed data sources. Mitochondrial DNA A DNA Mapp Seq Anal 2016;27(6):4518-4519. doi: 10.3109/19401736.2015.1101541
25. Goldwater JC. The use of a blockchain to foster the development of patient-reported outcome measures. Bethesda: National Quality Forum; 2016. URL: https://www.healthit.gov/sites/default/files/6-42-use_of_blockchain_to_develop_proms.pdf
26. Nugent T, Upton D, Cimpoesu M. Improving data transparency in clinical trials using blockchain smart contracts. F1000Res 2016;5:2541. doi: 10.12688/f1000research.9756.1
27. Taylor P. Applying blockchain technology to medicine traceability. Securing Industry. 2016 Apr 27. URL: https://www.securingindustry.com/pharmaceuticals/applying-blockchain-technology-to-medicine-traceability/s40/a2766/
28. Jenkins J, Kopf J, Tran BQ, Frenchi C, Szu H. Bio-mining for biomarkers with a multi-resolution block chain. In: Proceedings of the 2015 SPIE Sensing Technology + Applications Conference; April 20-24, 2015; Baltimore. doi: 10.1117/12.2180648
29. IBM Global Business Services Public Sector Team. Blockchain: the chain of trust and its potential to transform healthcare - our point of view. Bethesda: IBM; 2016. URL: https://www.healthit.gov/sites/default/files/8-31-blockchain-ibm_ideation-challenge_aug8.pdf
##### Abbreviations

**AeHIN:** Asia eHealth Information Network
**ECC:** elliptic curve cryptography
**EMR:** electronic medical record
**FHIR:** Fast Healthcare Interoperability Resources
**MHB:** My Health Bank
**NHI:** National Health Insurance
**PAU:** private accessible units
**PHR:** personal health record
**PoA:** proof of authority
**RSA:** Rivest-Shamir-Adleman

_Edited by G Eysenbach; submitted 21.10.19; peer-reviewed by JT te Gussinklo, B Vaes; comments to author 23.12.19; revised version received 14.02.20; accepted 22.02.20; published 09.06.20_

_Please cite as:_
_Lee HA, Kung HH, Udayasankaran JG, Kijsanayotin B, B Marcelo A, Chao LR, Hsu CY. An Architecture and Management Platform for Blockchain-Based Personal Health Record Exchange: Development and Usability Study. J Med Internet Res 2020;22(6):e16748_
_URL: https://www.jmir.org/2020/6/e16748_
_doi: 10.2196/16748_
_PMID: 32515743_

©Hsiu-An Lee, Hsin-Hua Kung, Jai Ganesh Udayasankaran, Boonchai Kijsanayotin, Alvin B Marcelo, Louis R Chao, Chien-Yeh Hsu. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 09.06.2020. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in the Journal of Medical Internet Research, is properly cited. The complete bibliographic information, a link to the original publication on http://www.jmir.org/, as well as this copyright and license information must be included.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.2196/preprints.16748?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.2196/preprints.16748, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GREEN", "url": "https://www.jmir.org/2020/6/e16748/PDF" }
2019
[]
true
2019-10-21T00:00:00
[ { "paperId": "8057d34fbfa126970ba9e31c6bb26b5bc1a77e1b", "title": "A Ledger of Me: Personalizing Healthcare Using Blockchain Technology" }, { "paperId": "eaa6cbab5bd50cee86ca038d6d036f11f7d6df9b", "title": "Privacy-friendly platform for healthcare data in cloud based on blockchain environment" }, { "paperId": "47712878bbf5245b2ff0a3befa098dcdedfd6bd8", "title": "EACMS: Emergency Access Control Management System for Personal Health Record Based on Blockchain" }, { "paperId": "6c6b4d43df1529b969910f7e112e831360f12132", "title": "The Potential of Blockchain Technology for Health Information Exchange: Experimental Study From Patients’ Perspectives" }, { "paperId": "b32a19c5d3f8c31e9329090affa806d19ccf4d16", "title": "Is Blockchain Technology Suitable for Managing Personal Health Records? Mixed-Methods Study to Test Feasibility" }, { "paperId": "a830083704284c8c5ddaf04f676c6ce23d583942", "title": "Blockchain Applications for Healthcare Data Management" }, { "paperId": "eb1f719817e30d474d0ade448dc65718c3efd1cb", "title": "Anonymized Distributed PHR Using Blockchain for Openness and Non-repudiation Guarantee" }, { "paperId": "493897a42c53209994787eea20c34f700e1bba63", "title": "FHIRChain: Applying Blockchain to Securely and Scalably Share Clinical Data" }, { "paperId": "1379ab7ae189fbd9abbe96707ec9f11452d834ac", "title": "The chloroplast genome hidden in plain sight, open access publishing and anti-fragile distributed data sources" }, { "paperId": "6e20b49c64d68d50929e8968237a1eb0227d013e", "title": "Improving data transparency in clinical trials using blockchain smart contracts" }, { "paperId": "208735a6c437b8ae3efba01693c3e8a06289c3dd", "title": "Healthcare Data Gateways: Found Healthcare Intelligence on Blockchain with Novel Privacy Risk Control" }, { "paperId": "3a7e7fd9a17c93601e0d95aa431bc650c18eda39", "title": "Clinical assessment and management of multimorbidity: summary of NICE guidance" }, { "paperId": "2f3a786a229b895f2b6795965a04fff9d77e1d43", "title": "Personal Health Record Use in the United States: Forecasting Future Adoption Levels" }, { "paperId": "727cfc8ca59e5ab2494fa55fe302a165b959a844", "title": "Recommendation for Applications Using Approved Hash Algorithms" }, { "paperId": "80b388f07313e609a0fbd7dadfbadc69ae3b653e", "title": "Protection" }, { "paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": "362aa1e6773b514e33879011e1a610cee9bad3e0", "title": "Viewpoint Paper: A Research Agenda for Personal Health Records (PHRs)" } ]
14,116
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01d8a5fffbb6b5e0ec3490d1dbdc32c4b3f4af9d
[ "Computer Science" ]
0.831159
Blockchain-Based Application Security Risks: A Systematic Literature Review
01d8a5fffbb6b5e0ec3490d1dbdc32c4b3f4af9d
CAiSE Workshops
[ { "authorId": "37390344", "name": "Mubashar Iqbal" }, { "authorId": "2112115", "name": "Raimundas Matulevičius" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
Although the blockchain-based applications are considered to be less vulnerable due to the nature of the distributed ledger, they did not become the silver bullet with respect to securing the information against different security risks. In this paper, we present a literature review on the security risks that can be mitigated by introducing the blockchain technology, and on the security risks that are identified in the blockchain-based applications. In addition, we highlight the application and technology domains where these security risks are observed. The results of this study could be seen as a preliminary checklist of security risks when implementing blockchain-based applications.
### Blockchain-based Application Security Risks: A Systematic Literature Review

Mubashar Iqbal and Raimundas Matulevičius

Institute of Computer Science, University of Tartu, Tartu, Estonia
{mubashar.iqbal,raimundas.matulevicius}@ut.ee

Abstract. Although blockchain-based applications are considered to be less vulnerable due to the nature of the distributed ledger, they have not become the silver bullet with respect to securing information against different security risks. In this paper, we present a literature review on the security risks that can be mitigated by introducing the blockchain technology, and on the security risks that are identified in the blockchain-based applications. In addition, we highlight the application and technology domains where these security risks are observed. The results of this study could be seen as a preliminary checklist of security risks when implementing blockchain-based applications.

Keywords: Blockchain · Blockchain-based applications · Decentralized applications · Security risks

#### 1 Introduction

Blockchain is a distributed immutable ledger technology [34]. It gives participants the ability to share a ledger through peer-to-peer replication, which is updated every time a transaction occurs. A ledger contains a certain and verifiable record of every single transaction ever made [22]. Security engineering is concerned with lowering the risk of intentional unauthorized harm to valuable assets to a level that is acceptable to the system's stakeholders by preventing and reacting to malicious harm, misuse, threats, and security risks [14].

Security plays an important role in blockchain-based applications. Those applications are acknowledged to be less vulnerable because they use a decentralized consensus paradigm to validate transactional information. They are also backed by cryptographic technology. However, blockchain technology is continuously penetrating various fields, and the involvement of monetary assets has raised security concerns, particularly when attackers steal monetary assets or damage the system. For example, in the reentrancy attack on the Ethereum-based decentralized autonomous organization (DAO) smart contracts, an adversary gained control of $60 million worth of Ether [4,26].

Blockchain technology promises to overcome security challenges, enhance data integrity, and transform the transacting process into a decentralized, transparent, and immutable manner. The recent progression of blockchain technology has captured the interest of various sectors seeking to transform their business processes using blockchain-based applications. Hence, the security challenges are debatable, and there is no comprehensive (or standardized) overview of the security risks that can potentially damage blockchain-based applications. A few studies report on security challenges in the blockchain platforms [4,24], but there is still a lack of focus on the security of blockchain-based applications.

In this paper, we present a systematic literature review (SLR) following the guidelines of [20]. Our research objectives are twofold. Firstly, we explain what security risks of centralized applications are mitigated by introducing blockchain-based applications. Secondly, we report the security risks of blockchain-based applications which appear after introducing the blockchain technology.
The main contributions of our study are: (1) a list of security risks that blockchain-based applications mitigate or inherit by incorporating the blockchain technology/platform, (2) an aggregated list of possible countermeasures, and (3) an overview of the prominent research domains which are being nourished by the blockchain. The results of this study could be seen as a preliminary checklist of security risks when implementing blockchain-based applications.

The rest of the paper is structured as follows: Section 2 provides an overview of the blockchain and related work. Section 3 presents the contributions which explain the SLR process, and Section 4 discusses its results. In Section 5, the conclusion and future research directions are presented.

#### 2 Background

In this section, first, we introduce the blockchain technology. Second, we present an overview of related work.

2.1 Overview of Blockchain Technology

Blockchain forms a chain from a sequence of blocks that is replicated over a peer-to-peer (P2P) network. In the blockchain, each block is attached to the previous block by a cryptographic hash; a block contains a block header and a list of transactions organized as a Merkle tree. Blockchains are classified as permissionless or permissioned [31]. In a permissionless blockchain, anyone can join or leave the network and transactions are publicly available. In a permissioned blockchain, only predefined, verified nodes can join the network, and transaction visibility is restricted [2,31].

In the blockchain, a smart contract (SC) is a computer program [4,7] which constitutes a digital contract to store data and to execute functions [28] when certain conditions are met. On the Ethereum platform, developers use the Solidity programming language to write smart contracts and to build decentralized applications [7]. In Hyperledger Fabric, a smart contract is called chaincode. Similarly, other blockchain platforms introduce smart contracts to perform contractual agreements in a digital realm. Smart contracts are programs written in high-level programming languages and can be error-prone, allowing security flaws to be introduced (e.g., the reentrancy bug [26]).

Blockchain eliminates the trusted intermediary and follows a decentralized consensus mechanism to validate transactional information. Different blockchains use various consensus mechanisms. Proof of Work (PoW) is a widely used, computationally intensive and energy-wasteful consensus strategy where special nodes called miners validate transactions by solving a crypto puzzle. Proof of Stake (PoS) is an energy-efficient consensus strategy [42] where miners become validators [12] and lock a certain amount of cryptocurrency to show ownership in order to participate in the consensus process. There are other consensus mechanisms, for example, Delegated Proof of Stake (DPoS), Proof of Authority (PoA), Proof of Reputation (PoR), and Proof of Spacetime (PoSt).

The number of blockchain platforms is rapidly growing, and thus security becomes an important factor in successful blockchain-based applications. In this paper, we focus on three frequently used blockchain platforms (Bitcoin, Ethereum, Hyperledger Fabric). In addition, we also look at customised permissioned and permissionless platforms (see Table 3). Our goal is to learn which security risks and threats are considered in the applications of these platforms.
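As a toy illustration of the PoW "crypto puzzle" mentioned above: a miner searches for a nonce that makes the block hash fall below a difficulty target, here simplified to a fixed number of leading zero hex digits. This is our own simplified sketch in Python, not any specific platform's implementation.

```python
import hashlib

def mine(block_data: str, difficulty: int = 4):
    """Search for a nonce whose SHA-256 hash has `difficulty` leading zero hex digits."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}|{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce, digest          # puzzle solved; the block can be proposed
        nonce += 1

nonce, digest = mine("prev_hash|tx_merkle_root|timestamp")
# Finding the nonce is expensive, but verification is cheap: any node recomputes
# a single hash and checks the prefix, which is what makes PoW useful for consensus.
```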
2.2 Related Work

There exist a few surveys which consider blockchain platform security risks. For instance, Li et al. [24] overview the security attacks on blockchain platforms and summarise the security enhancements. In our work, we consider the security risks in blockchain-based applications and their countermeasures. Another related study [4] examines Ethereum smart contract security. It reports on the major security attacks and presents a taxonomy of common programming pitfalls which could result in different vulnerabilities. That study focuses on the security risks in Ethereum smart contracts; further investigation is required to explore possible security risks in smart-contract-based decentralized applications and their viable countermeasures.

The main attributes of blockchain are integrity, reliability, and security [21], which are also important in IoT systems. The conventional approaches and reference frameworks for IoT network implementation are still unable to fulfil the requirements of security [19]. Khan and Salah [19] survey major security issues of IoT and discuss different countermeasures along with the blockchain solution. This study, however, does not detail the security challenges in blockchain-based IoT applications. Our study reviews different blockchain-based IoT applications and discusses their security risks and potential countermeasures.

#### 3 Survey Settings

In [20], a comprehensive approach is presented to perform a SLR. In this section, we apply it to conduct a SLR on the security risks in blockchain-based applications.
Finally, following the quality guidelines of [20] and research scope of our study we have assessed the quality of studies using the following questions: – Are the goals and purpose of a study is clearly stated? – Is the study describes security risks on the blockchain-based applications? – Is the study provide the countermeasures to mitigate security risks? – Is the study answered the defined research questions? – How well the research results are presented? The answers to the above questions are scored as follows: 1=Fully satisfy, 0.5=Partially satisfy, 0=Not satisfy. The studies with 2.5 or more points are included. ----- Blockchain-based Application Security Risks 5 3.2 Screening Results Table 2. Literature studies. Database Total Excl. Incl. ACM 21 11 10 Table 2 presents the screening results. Initially, IEEE 31 9 22 a total of 141 studies was collected. Later ScienceDirect 22 15 7 SpringerLink 23 12 11 73 studies were excluded by applying inclu- Scopus 44 26 18 sion/exclusion and quality assessment criteria. Total 141 73 68 Finally, 68 studies remained[1]. The extracted information outlines the study identification, research problem, security risks and countermeasures. #### 4 Results and discussion In this section, we present Table 3. Statistics of literature studies as per year. the SLR results. Table 3 Permissionless Permissioned Bitcoin Ethereum CPL HLF CP Generic Total shows how the field of 2016 2 0 0 0 0 0 2 blockchain-based applications 2017 7 3 8 1 2 1 22 2018 9 15 3 8 8 1 44 is emerging every year. We Total 18 18 11 9 10 2 68 observe that Ethereum-based applications are gaining popularity among others. Also, permissioned blockchain platforms (Hyperledger Fabric (HLF) & Customised Permissioned (CP)) are arising because of those support various industry-based use cases beyond cryptocurrencies. Practitioners also presented various Customised Permissionless (CPL) platforms to achieve customised tasks and to overcome the limitations of other platforms. The term Generic refers to studies where the blockchain type and platform is not mentioned. 4.1 Applications Domains Table 4 presents the quantity of applications domains & technology solutions based on the different blockchain platforms. It shows Healthcare is mostly studied Table 4. Research areas based on different blockchain platforms. Permissionless Permissioned Bitcoin Ethereum CPL HLF CP Generic Total Applications domains where blockchain is used. Healthcare 0 3 1 2 4 1 11 Resource monitoring & Dig- 1 3 2 0 2 1 9 ital rights management Financial 2 1 1 1 0 0 5 Smart vehicles 1 0 1 1 2 0 5 Voting 1 1 0 2 0 0 4 Technology solutions where blockchain is used. Security layer 6 7 1 0 1 0 15 IoT 2 2 1 2 2 0 9 Total 13 17 7 8 11 2 58 application domain and security layer as a technology solution. Also, it indicates that Ethereum is widely used blockchain platform for building the decentralized applications. [1 Here is a list of these SLR studies: http://datadoi.ut.ee/handle/33/89](http://datadoi.ut.ee/handle/33/89) ----- 6 M. Iqbal & R. Matuleviˇcius 4.2 Security Risks Security risks result in harm to the system and its components [18]. In our study, the identified security risks are classified into two categories. (i) Security risks which are mitigated by introducing the blockchain-based applications (see Table 5), and (ii) Security risks which appear within the blockchain-based applications (see Table 6). 
Table 5 presents the most common security risks mitigated in this way, showing that researchers are utilizing blockchain-based applications to overcome the limitations of centralized applications. For example, the data tampering attack is mitigated in Healthcare applications, and the DDoS attack/single point of failure is resisted by the decentralized, distributed property of the blockchain.

Table 5. Security risks which are mitigated by introducing blockchain applications.

| Security risk | Bitcoin | Ethereum | CPL | HLF | CP | Generic | Total |
|---|---|---|---|---|---|---|---|
| Data tampering attack | 7 | 8 | 4 | 7 | 5 | 1 | 32 |
| DoS/DDoS attack | 7 | 7 | 5 | 3 | 2 | 1 | 25 |
| MitM attack | 3 | 6 | 2 | 2 | 0 | 1 | 14 |
| Identity theft/Hijacking | 1 | 0 | 3 | 0 | 0 | 1 | 5 |
| Spoofing attack | 2 | 0 | 1 | 0 | 1 | 0 | 4 |
| Other risks/threats | 6 | 4 | 2 | 1 | 2 | 2 | 17 |
| Total | 26 | 25 | 17 | 13 | 10 | 6 | 97 |

In addition to the risks in Table 5, other risks (found once or twice in the studies) are: Side-channel attack, Impersonation attack, Phishing attack, Password attack, Cache poisoning, Arbitrary attack, Dropping attack, Appending attack, Authentication attack, Signature forgery attack, Keyword guess attack, Chosen message attack, Audit server attack, Inference attack, Binding attack, and Bleichenbacher-style attack.

Table 6 presents the most common security risks which appear within blockchain-based applications after introducing the blockchain technology. The table indicates the security risks that have a high probability of making blockchain-based applications vulnerable to attack.

Table 6. Security risks which appear within the blockchain applications.

| Security risk | Bitcoin | Ethereum | CPL | HLF | CP | Generic | Total |
|---|---|---|---|---|---|---|---|
| Sybil attack | 5 | 1 | 1 | 4 | 1 | 1 | 13 |
| Double spending attack | 4 | 1 | 2 | 2 | 0 | 1 | 10 |
| 51% attack | 3 | 3 | 1 | 0 | 0 | 1 | 8 |
| Deanonymization attack | 2 | 1 | 3 | 0 | 0 | 1 | 7 |
| Replay attack | 2 | 4 | 1 | 0 | 0 | 0 | 7 |
| Quantum computing threat | 0 | 1 | 1 | 2 | 0 | 1 | 5 |
| Selfish mining attack | 1 | 0 | 2 | 1 | 0 | 0 | 4 |
| SC reentrancy attack | 0 | 2 | 0 | 0 | 0 | 1 | 3 |
| Other risks/threats | 6 | 1 | 6 | 3 | 1 | 3 | 20 |
| Total | 23 | 14 | 17 | 12 | 2 | 9 | 77 |

Hence, the Sybil attack, the Double spending attack, and the 51% attack are the security risks that appear most often after incorporating the blockchain technology.
By the results, the most common security risks in IoT based applications are mitigated by implementing the blockchain-based solution and only 3 different security risks are inherited after introducing the blockchain solution. The other column represents the generic blockchain-based applications and blockchain technology solutions where no specific domain is studied. Table 7. Security risks based on the research areas. Security risks which are mitigated by introducing blockchain applications. Applications Technology Healthcare Resource Financial Smart Voting Security IoT other Total monit. vehicles layer Data tampering attack 6 5 1 4 3 2 5 6 32 DoS/DDoS attack 0 5 1 3 1 7 3 5 25 MitM attack 1 4 1 1 1 2 2 2 14 Identity theft/Hijacking 1 2 0 0 0 0 1 1 5 Spoofing attack 0 0 0 0 1 0 1 2 4 Other risks/threats 2 0 1 0 1 5 5 3 17 Security risks which appear within the blockchain applications. Sybil attack 1 1 1 1 2 1 1 5 13 Double spending attack 0 4 2 0 0 2 0 2 10 51% attack 0 4 0 0 1 1 0 2 8 Deanonymization attack 0 2 1 1 1 1 1 0 7 Replay attack 0 2 1 0 0 4 0 0 7 Quantum comp. threat 1 0 0 0 0 2 0 2 5 Selfish mining attack 0 1 1 0 0 2 0 0 4 SC reentrancy attack 0 0 0 0 0 3 0 0 3 Other risks/threats 0 11 5 0 0 2 1 1 20 Total 12 41 15 10 11 34 20 31 174 4.3 Countermeasures In this section, we overview countermeasures to mitigate the security risks listed in Table 5 and 6. ----- 8 M. Iqbal & R. Matuleviˇcius Countermeasures introduced with blockchain solution. The security risks presented in Table 5 are mitigated by implementing the blockchain-based applications together with the techniques to mitigate these risks. For instance, Data tampering attack poses a threat to data-sensitive applications. In [40,41] authors implement the smart contract to mitigate votes tampering. In [35,40] authors encrypt information and associate a unique hash. Lei et al. [9] propose a random oracle model with strong RSA. And Li et al. [23] introduce an elliptic curve digital signature algorithm (ECDSA) based signature scheme for anonymous data transmission along Merkle hash tree based selective disclosure mechanism. Han et al. [16] propose to use permissioned blockchain where only the authorized nodes are able to access the data as well as generate a cypher-text by using digital signatures. DoS/DDoS attack is another exploitable cyber-attack, it is resisted by a distribution of service on different nodes [40]. The [25,11] authors implement an access control scheme to prevent unauthorized requests. Androulaki et al. [3] propose a block-list to track suspicious requesting nodes and the authors of [3,32] incorporate the transaction fee to resist it. In order to resist the MitM attack, authors suggest to encrypt an information [10,40] and publish on the blockchain [40]. In [25,38] research studies, an authentication scheme is introduced to verify each communication node. Identity theft/Hijacking based risks are mitigated by information authentication and message generation time-stamping [13]. Mylrea et al. [30] suggest a permission-based solutions (e.g. KSI). Spoofing attack is mitigated by introducing an anonymous communication among nodes [8] and Keyless Signature Infrastructure (KSI) based distributed & witnesses trust anchor [30]. Countermeasures to mitigate security risks of blockchain solutions. 
The blockchain solution comes with a few trade-offs and inherits several security risks (see Table 6) of blockchain technology which are mitigated by implementing the various techniques, those techniques are listed below as countermeasures. In order to mitigate the Sybil attack, in [15,41] authors suggest the permissioned blockchain-based application. Bartolucci et al. [5] incorporate the transaction fee & identification system to allow only authorized users to perform different operations. In [32], authors use the PoR scheme and Liu et al. [27] implement the customised blockchain to control the computing power. Double spending attack is mitigated by the transaction verification based on unspent transaction state [3]. In [1] authors resisted this attack by PoA scheme and in [6] by PoW complexity. Also, the Muzammal et al. [29] append the nonce with each transaction. Another frequent security risk on the blockchain-based applications is 51% attack which is resisted by implementing trusted authorities control [43] and Hjalmarsson et al. [17] customised the Ethereum blockchain to permissioned blockchain. In order to mitigate Deanonymization attack, in [25] authors propose a solution to obtain identity information only after authorization. Bartolucci et al. [5] propose the mixer for mixing the position of output addresses. In [33,37] authors propose another solution to mitigate this attack by using the fresh key for each transaction. Selfish mining attack is mitigated by PoR scheme [32] and by raising the threshold [37]. No countermeasure is found for Replay attack. In ----- Blockchain-based Application Security Risks 9 order to overcome the Quantum computing threats, Yin et al. [39] implement the lattice cryptography and in [6] authors suggest an additional digital signature or a hard fork in the post-quantum era. Decusatis et al. [11] propose a need of quantum blockchain. To eliminate the chances of Smart contract reentrancy attack, authors of [26] present the automation tool to detect smart contract bugs via run-time trace analysis and in [36] authors built a static analysis tool that detects reentrancy bugs in a smart contract and translates solidity source code into an XML-based intermediate representation and checks it against XPath patterns. #### 5 Conclusion and Future Work In this paper, we present a systematic literature review on the blockchain-based applications security risks to explain what security risks are mitigated by introducing the blockchain-based applications, and what security risks are reported in the blockchain-based applications. Our result is a preliminary checklist to support developers’ decisions while developing blockchain-based applications. Our current study has a few limitations: (i) Applications which are built on the blockchain platforms are mostly in the prototype phase. Thus the research studies present only the conceptual illustrations of different security risks and their countermeasures but not the real-life applications. (ii) The field of decentralized applications is relatively new but continuously evolving. Not all the possible security risks are researched in the blockchain-based applications which show the possibility that a wide range of security risks will emerge in upcoming years. (iii) This study found that a lot of security risks and their countermeasures are either obscure or the practical implementation is still not available. 
#### 5 Conclusion and Future Work

In this paper, we presented a systematic literature review on the security risks of blockchain-based applications, explaining which security risks are mitigated by introducing blockchain-based applications and which security risks are reported within the blockchain-based applications themselves. Our result is a preliminary checklist to support developers' decisions while developing blockchain-based applications. Our current study has a few limitations: (i) applications built on blockchain platforms are mostly in the prototype phase, so the research studies present only conceptual illustrations of the different security risks and their countermeasures, not real-life applications; (ii) the field of decentralized applications is relatively new but continuously evolving, and not all possible security risks have yet been researched in blockchain-based applications, which suggests that a wide range of security risks will emerge in the coming years; (iii) this study found that many security risks and their countermeasures are either obscure or lack a practical implementation.

Overcoming these limitations could yield interesting insights and contribute to explaining blockchain-based application security risks, their vulnerabilities, and their countermeasures in more depth. As part of future work, our aim is to build a comprehensive reference model for security risk management to systematically evaluate security needs. This model would describe the protected assets of blockchain-based applications and the countermeasures to mitigate their risks.

Acknowledgement. This research has been supported by the Estonian Research Council (grant IUT20-55).

#### References

1. Alcarria, R., Bordel, B., Robles, T., Martín, D., Manso-Callejo, M.A.: A blockchain-based authorization system for trustworthy resource monitoring and trading in smart communities. Journal of Sensors (Switzerland) 18(10) (2018)
2. Ali, S., Wang, G., White, B., Cottrell, R.L.: A blockchain-based decentralized data storage and access framework for PingER. In: Proceedings of Trustcom/BigDataSE 2018, pp. 1303–1308 (2018)
3. Androulaki, E., Barger, A., Bortnikov, V., Cachin, C., Christidis, K., De Caro, A., Enyeart, D., Ferris, C., Laventman, G., Manevich, Y., Muralidharan, S., Murthy, C., Nguyen, B., Sethi, M., Singh, G., Smith, K., Sorniotti, A., Stathakopoulou, C., Vukolić, M., Cocco, S.W., Yellick, J.: Hyperledger Fabric: a distributed operating system for permissioned blockchains. In: Proceedings of the Thirteenth EuroSys Conference (EuroSys '18), Article No. 30 (2018)
4. Atzei, N., Bartoletti, M., Cimoli, T.: A survey of attacks on Ethereum smart contracts (SoK). In: Proceedings of the 6th International Conference on Principles of Security and Trust, pp. 164–186 (2017)
5. Bartolucci, S., Bernat, P., Joseph, D.: SHARVOT: secret SHARe-based VOTing on the blockchain. In: Proceedings of the ACM/IEEE 1st International Workshop on Emerging Trends in Software Engineering for Blockchain, pp. 30–34 (2018)
6. Buchmann, N., Rathgeb, C., Baier, H., Busch, C., Margraf, M.: Enhancing breeder document long-term security using blockchain technology. In: Proceedings of the International Computer Software and Applications Conference, vol. 2, pp. 744–748 (2017)
7. Buterin, V.: A next-generation smart contract and decentralized application platform (2014), https://github.com/ethereum/wiki/wiki/White-Paper
8. Cebe, M., Erdin, E., Akkaya, K., Aksu, H., Uluagac, S.: Block4Forensic: an integrated lightweight blockchain framework for forensics applications of connected vehicles. IEEE Communications Magazine 56(10), 50–57 (2018)
9. Chen, L.: EPBC: Efficient Public Blockchain Client for lightweight users. In: Proceedings of the 1st Workshop on Scalable and Resilient Infrastructures for Distributed Ledgers (SERIAL '17), Article No. 1 (2017)
10. Dagher, G.G., Mohler, J., Milojkovic, M., Marella, P.B.: Ancile: privacy-preserving framework for access control and interoperability of electronic health records using blockchain technology. Sustainable Cities and Society 39, 283–297 (2018)
11. Decusatis, C., Lotay, K.: Secure, decentralized energy resource management using the Ethereum blockchain.
In: Proceedings of Trustcom/BigDataSE 2018, pp. 1907–1913 (2018)
12. Vogelsteller, F., Buterin, V.: Proof of Stake FAQs (2018), https://github.com/ethereum/wiki/wiki/Proof-of-Stake-FAQs
13. Fan, K., Wang, S., Ren, Y., Yang, K., Yan, Z., Li, H., Yang, Y.: Blockchain-based secure time protection scheme in IoT. IEEE Internet of Things Journal (2018)
14. Firesmith, D.: Engineering security requirements. Journal of Object Technology 2(1), 53–68 (2003)
15. Gallo, P., Pongnumkul, S., Quoc Nguyen, U.: BlockSee: blockchain for IoT video surveillance in smart cities. In: Proceedings of the IEEE International Conference on Environment and Electrical Engineering and 2018 IEEE Industrial and Commercial Power Systems Europe (EEEIC / I&CPS Europe), pp. 1–6 (2018)
16. Han, H., Huang, M., Zhang, Y.: An architecture of secure health information storage system based on blockchain technology. In: Proceedings of the International Conference on Computer and Communication Systems (ICCCS) 2018, LNCS 11064, pp. 578–588 (2018)
17. Hjalmarsson, F.P., Hreiðarsson, G.K., Hamdaqa, M., Hjalmtysson, G.: Blockchain-based e-voting system. In: Proceedings of the IEEE 11th International Conference on Cloud Computing (CLOUD), pp. 983–986 (2018), https://ieeexplore.ieee.org/document/8457919/
18. Jouini, M., Rabai, L.B.A., Aissa, A.B.: Classification of security threats in information systems. Procedia Computer Science 32, 489–496 (2014)
19. Khan, M.A., Salah, K.: IoT security: review, blockchain solutions, and open challenges. Future Generation Computer Systems 82 (2018)
20. Kitchenham, B., Charters, S.: Guidelines for performing systematic literature reviews in software engineering, Version 2.3 (2007)
21. Koteska, B., Mishev, A.: Blockchain implementation quality challenges: a literature review. In: Proceedings of SQAMIA 2017: 6th Workshop on Software Quality, Analysis, Monitoring, Improvement, and Applications, pp. 11–13 (2017)
22. Lewis, A.: Blockchain technology explained (2015), http://www.blockchaintechnologies.com/blockchain-definition
23. Li, H., Lu, R., Misic, J., Mahmoud, M.: Security and privacy of connected vehicular cloud computing. IEEE Network 32(3), 4–6 (2018)
24. Li, X., Jiang, P., Chen, T., Luo, X., Wen, Q.: A survey on the security of blockchain systems. Future Generation Computer Systems (2017)
25. Lin, C., He, D., Huang, X., Choo, K.K.R., Vasilakos, A.V.: BSeIn: a blockchain-based secure mutual authentication with fine-grained access control system for Industry 4.0. Journal of Network and Computer Applications 116, 42–52 (2018)
26. Liu, C., Liu, H., Cao, Z., Chen, Z., Chen, B., Roscoe, B.: ReGuard: finding reentrancy bugs in smart contracts. In: Proceedings of the International Conference on Software Engineering, pp. 65–68 (2018)
27. Liu, M., Shang, J., Liu, P.: VideoChain: trusted video surveillance based on blockchain for campus. In: Proceedings of ICCCS 2018: Cloud Computing and Security, pp. 48–58 (2018)
28. Macrinici, D., Cartofeanu, C., Gao, S.: Smart contract applications within blockchain technology: a systematic mapping study. Telematics and Informatics (2018), https://linkinghub.elsevier.com/retrieve/pii/S0736585318308013
29. Muzammal, M., Qu, Q., Nasrulin, B.: Renovating blockchain with distributed databases: an open source system. Future Generation Computer Systems 90, 105–117 (2018)
30. Mylrea, M., Gourisetti, S.N.G.: Blockchain for smart grid resilience: exchanging distributed energy at speed, scale and security. In: Proceedings of the 2017 Resilience Week (RWS), pp. 18–23 (2017)
31. Pradeepkumar, D.S., Singi, K., Kaulgud, V., Podder, S.: Evaluating complexity and digitizability of regulations and contracts for a blockchain application design. In: Proceedings of the 2018 ACM/IEEE 1st International Workshop on Emerging Trends in Software Engineering for Blockchain, pp. 25–29 (2018)
32. Qin, D., Wang, C., Jiang, Y.: RPchain: a blockchain-based academic social networking service for credible reputation building. In: Proceedings of ICBC 2018: Blockchain, LNCS 10974, pp. 183–198 (2018), http://link.springer.com/10.1007/978-3-319-94478-4
33. Saritekin, R.A., Karabacak, E., Duray, Z., Karaarslan, E.: Blockchain based secure communication application proposal: Cryptouch. In: Proceedings of the 6th International Symposium on Digital Forensic and Security (ISDFS 2018), pp. 1–4 (2018)
34. Sato, T., Himura, Y.: Smart-contract based system operations for permissioned blockchain. In: Proceedings of the 2018 9th IFIP International Conference on New Technologies, Mobility and Security (NTMS), pp. 1–6 (2018)
35. Sylim, P., Liu, F., Marcelo, A., Fontelo, P.: Blockchain technology for detecting falsified and substandard drugs in distribution: pharmaceutical supply chain intervention. Journal of Medical Internet Research 20(9) (2018)
36. Tikhomirov, S., Voskresenskaya, E., Ivanitskiy, I., Takhaviev, R., Marchenko, E., Alexandrov, Y.: SmartCheck: static analysis of Ethereum smart contracts. In: Proceedings of the 1st International Workshop on Emerging Trends in Software Engineering for Blockchain (WETSEB '18), pp. 9–16 (2018)
37. Tosh, D.K., Shetty, S., Liang, X., Kamhoua, C.A., Kwiat, K.A., Njilla, L.: Security implications of blockchain cloud with analysis of block withholding attack. In: Proceedings of the 17th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID 2017), pp. 458–467 (2017)
38. Yao, H., Wang, C.: A novel blockchain-based authenticated key exchange protocol and its applications. In: Proceedings of the IEEE 3rd International Conference on Data Science in Cyberspace (DSC 2018), pp. 609–614 (2018)
39. Yin, W., Wen, Q., Li, W., Zhang, H., Jin, Z.: An anti-quantum transaction authentication approach in blockchain. IEEE Access 6 (2018)
40. Yu, B., Liu, J.K., Sakzad, A., Steinfeld, R., Rimba, P., Au, M.H.: Platform-independent secure blockchain-based voting system. In: Proceedings of ISC 2018: Information Security, pp. 369–386 (2018)
41. Zhang, W.: A privacy-preserving voting protocol on blockchain.
In: Proceedings of the IEEE 11th International Conference on Cloud Computing, pp. 401–408 (2018)
42. Zheng, Z., Xie, S., Dai, H.N., Chen, X., Wang, H.: Blockchain challenges and opportunities: a survey. International Journal of Web and Grid Services 14(4), 1–24 (2016), http://inpluslab.sysu.edu.cn/files/blockchain/blockchain.pdf
43. Zhu, L., Wu, Y., Gai, K., Choo, K.K.R.: Controllable and trustworthy blockchain-based cloud data management. Future Generation Computer Systems 91, 527–535 (2018), https://linkinghub.elsevier.com/retrieve/pii/S0167739X18311993

The figures "Review_phases.png", "Review_process.png", and "blockchain.png" are available in PNG format from http://arxiv.org/ps/1912.09556v1.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1912.09556, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/1912.09556" }
2,019
[ "JournalArticle", "Review" ]
true
2019-06-03T00:00:00
[ { "paperId": "0fe3ccf3a006b4b82560a617036c4c971bf11f95", "title": "Blockchain-Based Secure Time Protection Scheme in IoT" }, { "paperId": "d9b5194f3f959eda2e95df6a340254f52ced46f4", "title": "Controllable and trustworthy blockchain-based cloud data management" }, { "paperId": "19c88b6c30199b30ee9f4143bb0c16007b66c2ea", "title": "Renovating blockchain with distributed databases: An open source system" }, { "paperId": "f5a21fb87d88b4510dd4d42fbdc52a674a592ea6", "title": "Smart contract applications within blockchain technology: A systematic mapping study" }, { "paperId": "305edd92f237f8e0c583a809504dcec7e204d632", "title": "Blockchain challenges and opportunities: a survey" }, { "paperId": "31527b9484f48f97cf78c4e30edcaf5f17abed51", "title": "A Blockchain-Based Authorization System for Trustworthy Resource Monitoring and Trading in Smart Communities" }, { "paperId": "438c1955912d1f8839723255b3747ca764ca9bb3", "title": "Platform-independent Secure Blockchain-Based Voting System" }, { "paperId": "b61e12f03500ed3740df2b43210a6e6de2db816e", "title": "Blockchain Technology for Detecting Falsified and Substandard Drugs in Distribution: Pharmaceutical Supply Chain Intervention" }, { "paperId": "3208df1dfdfcea4c2bdfe12669e32f242be78b06", "title": "Secure, Decentralized Energy Resource Management Using the Ethereum Blockchain" }, { "paperId": "50357836e1519b36d1efcd06728998562367c17d", "title": "A Blockchain-Based Decentralized Data Storage and Access Framework for PingER" }, { "paperId": "f67a89eaa58a2b2b4552b22279305ebfb6e8a0fc", "title": "BSeIn: A blockchain-based secure mutual authentication with fine-grained access control system for industry 4.0" }, { "paperId": "2e5c41811c420e7e0f755accd6b60784464737fe", "title": "A Privacy-Preserving Voting Protocol on Blockchain" }, { "paperId": "54d50269928dafc6a0744e46044c17d973fdb01c", "title": "Blockchain-Based E-Voting System" }, { "paperId": "0ce5379436e7bb41c3b7cc2bc708c6053f169072", "title": "RPchain: A Blockchain-Based Academic Social Networking Service for Credible Reputation Building" }, { "paperId": "3a9afa08ab44e44f5466540a07251771cba9355b", "title": "An Architecture of Secure Health Information Storage System Based on Blockchain Technology" }, { "paperId": "8f84f6aba25d675e4d82c880c59fcc556903bbc8", "title": "VideoChain: Trusted Video Surveillance Based on Blockchain for Campus" }, { "paperId": "87d856df763fe733af3456b23d62712f3ced82a4", "title": "Security and Privacy of Connected Vehicular Cloud Computing" }, { "paperId": "8a149e4a8523e48b699d3614c992da30c19b3296", "title": "BlockSee: Blockchain for IoT Video Surveillance in Smart Cities" }, { "paperId": "005f45b34a60afe19af4ac7d15b0573381133115", "title": "A Novel Blockchain-Based Authenticated Key Exchange Protocol and Its Applications" }, { "paperId": "0b0381cfd895fd5aafe373f8a263614ec9cf031f", "title": "ReGuard: Finding Reentrancy Bugs in Smart Contracts" }, { "paperId": "c260938b7bd504a70344ca0d6d8848a3840607f8", "title": "Evaluating Complexity and Digitizability of Regulations and Contracts for a Blockchain Application Design" }, { "paperId": "8f22bf55536b50145bb117c97e13ea4b32a5e8fa", "title": "SmartCheck: Static Analysis of Ethereum Smart Contracts" }, { "paperId": "863dff6fea7811e6c2b76b3eb64eee84ee280b33", "title": "Ancile: Privacy-Preserving Framework for Access Control and Interoperability of Electronic Health Records Using Blockchain Technology" }, { "paperId": "caf968e4122e397ad80892f793c1da9eb9011b8f", "title": "SHARVOT: Secret SHARe-Based VOTing on the Blockchain" }, { 
"paperId": "0e79345aef430b6b0335fdbf912866895610c15b", "title": "Blockchain based secure communication application proposal: Cryptouch" }, { "paperId": "1ce725fdabe981f000419fde78156b56add29e66", "title": "Block4Forensic: An Integrated Lightweight Blockchain Framework for Forensics Applications of Connected Vehicles" }, { "paperId": "69e70b8101da9f794c33ee35740344461f262e8e", "title": "Smart-Contract Based System Operations for Permissioned Blockchain" }, { "paperId": "1b4c39ba447e8098291e47fd5da32ca650f41181", "title": "Hyperledger fabric: a distributed operating system for permissioned blockchains" }, { "paperId": "7865900b3a2bd99d76ee0c3bb5e3763b8947f76a", "title": "Efficient Public Blockchain Client for Lightweight Users" }, { "paperId": "11dbb38b7ff8a54ac8387264abd36b017f50f202", "title": "EPBC: Efficient Public Blockchain Client for lightweight users" }, { "paperId": "81f6442e50890b990598e637a44b2d8d10329710", "title": "IoT security: Review, blockchain solutions, and open challenges" }, { "paperId": "5d2eaf88f653dbf6157e80d7bda993bbfe79dc26", "title": "Blockchain for smart grid resilience: Exchanging distributed energy at speed, scale and security" }, { "paperId": "ca4c0ab7304ebbbb052887332d80dbe673ed4b7c", "title": "A Survey on the Security of Blockchain Systems" }, { "paperId": "6bd9004291c55b27617722efadd53f2543f5b5fe", "title": "Enhancing Breeder Document Long-Term Security Using Blockchain Technology" }, { "paperId": "3e61e17a81a2f076e99a6bec28431e88636028a7", "title": "Security Implications of Blockchain Cloud with Analysis of Block Withholding Attack" }, { "paperId": "aec843c0f38aff6c7901391a75ec10114a3d60f8", "title": "A Survey of Attacks on Ethereum Smart Contracts (SoK)" }, { "paperId": "42ef55606ab7763302744acf1e8707492e4d4f2c", "title": "An Anti-Quantum Transaction Authentication Approach in Blockchain" }, { "paperId": null, "title": "Proof of Stake FAQs" }, { "paperId": "5ddb2f7670abcc1cc83f2545fcb78e3538acbd4f", "title": "Blockchain Implementation Quality Challenges: A Literature Review" }, { "paperId": "0dbb8a54ca5066b82fa086bbf5db4c54b947719a", "title": "A NEXT GENERATION SMART CONTRACT & DECENTRALIZED APPLICATION PLATFORM" }, { "paperId": null, "title": "Blockchain Technology Explained" }, { "paperId": "3118ff1cea52b13985051b3c7caa14f26463dd44", "title": "Ambient Systems , Networks and Technologies ( ANT-2014 ) Classification of security threats in information systems" }, { "paperId": null, "title": "Guidelines for performing Systematic Literature reviews in Software Engineering Version 2.3. Engineering 45(4ve" }, { "paperId": "4b584a46aaf2cd601c9cda720c4b02f94babd826", "title": "Engineering Security Requirements" }, { "paperId": "0e0125c7c2c77567fa453501bbf51b05b01adfa2", "title": "Security Use Cases" } ]
8,806
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01d917a1c96792a99d446bf0287acabdffb63627
[ "Computer Science" ]
0.894435
Distributed Classification of Multiple Observation Sets by Consensus
01d917a1c96792a99d446bf0287acabdffb63627
IEEE Transactions on Signal Processing
[ { "authorId": "1788875", "name": "E. Kokiopoulou" }, { "authorId": "1703189", "name": "P. Frossard" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE Trans Signal Process" ], "alternate_urls": [ "http://www.signalprocessingsociety.org/publications/periodicals/tsp/", "https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=78" ], "id": "1f6f3f05-6a23-42f0-8d31-98ab8089c1f2", "issn": "1053-587X", "name": "IEEE Transactions on Signal Processing", "type": "journal", "url": "http://ieeexplore.ieee.org/servlet/opac?punumber=78" }
null
# Distributed classification of multiple observation sets by consensus

### Effrosyni Kokiopoulou and Pascal Frossard

**_Abstract—We consider the problem of distributed classification of multiple observations of the same object that are collected in an ad-hoc network of vision sensors. Assuming that each sensor captures a different observation of the same object, the problem is to classify this object by distributed processing in the network. We present a graph-based problem formulation whose objective function captures the smoothness of candidate labels on the data manifold formed by the observations of the object. We design a distributed average consensus algorithm for estimating the unknown object class by computing the value of the above smoothness objective function for different class hypotheses. It initially estimates the objective function locally, based on the observation of each sensor. As the distributed consensus algorithm progresses, all observations are progressively taken into account in the estimation of the objective function. We illustrate the performance of the distributed classification algorithm for multi-view face recognition in an ad-hoc network of vision sensors. When the training set is sufficiently large, the simulation results show that the consensus classification decision is equivalent to the decision of a centralized system with access to all observations._**

I. INTRODUCTION

Over the past few years, novel multimedia architectures such as vision sensor networks have rapidly emerged. Typically, these networks have an ad-hoc organization, i.e., there is no central coordinator node and the topology can be arbitrary and dynamic (e.g., due to sensor motion). Moreover, the visual sensor nodes in such networks have limited computation and communication capabilities. Rinner et al. [1], [2] and Akyildiz et al. [3] provide an overview of platforms that have recently been developed for visual sensor networks, which lend themselves as off-the-shelf computing infrastructures for conducting various scene analysis tasks in smart environments.

The emergence of such distributed multimedia architectures poses new challenges to the analysis of multimedia information, which now has to be done distributively. We quote from [1]: *"Existing computer vision algorithms often are not designed with collaboration of distributed nodes in mind. For pervasive smart cameras, however, this aspect is highly important. Hence, ways have to be found how algorithms can be adopted for such environments."* Therefore, the relevant algorithms have to be (re-)designed such that they accommodate collaborative processing, while at the same time respecting the computation and communication constraints of the underlying network (see, e.g., [4]).

(E. Kokiopoulou is with the Seminar for Applied Mathematics, Department of Mathematics, ETH Zurich, CH-8092 Zurich; email: effrosyni.kokiopoulou@sam.math.ethz.ch. Part of this work was conducted while E. Kokiopoulou was with LTS4, EPFL. P. Frossard is with the Signal Processing Laboratory (LTS4), Institute of Electrical Engineering, Ecole Polytechnique Fédérale de Lausanne (EPFL), CH-1015 Lausanne, Switzerland; email: pascal.frossard@epfl.ch.)

Fig. 1. Ad-hoc network of vision sensors.

In this paper, we consider the problem of classifying an object whose multiple observations are collected in a distributed fashion in a vision sensor network with ad-hoc topology (see, e.g., [5], [6], [7]).
Fig. 1 illustrates the scenario of interest, where each vision sensor captures an observation of the *same* object, in the context of (distributed) scene analysis, for example. The problem consists in the distributed classification of the observed object at all sensors such that a consensus decision is reached by aggregating the partial information provided by each local observation. It is important to note that this problem is different from the well-studied problem of distributed classification in the presence of a fusion center (see, e.g., [8], [9], [10]), where the information from all sensors is gathered in order to reach the final classification decision. On the contrary, the ad-hoc sensor networks considered in this paper are purely distributed, and there is no possibility of transmitting information directly from the sensors to a central coordinator node.

We first present a graph-based problem formulation that defines a smoothness criterion of candidate labels on the data manifold. This criterion reflects the so-called smoothness assumption that is commonly used in semi-supervised learning [11]; namely, two close-by data samples on the manifold are likely to share the same class label. It permits us to define the objective function of the distributed classification problem, whose solution should satisfy the smoothness assumption.

Our distributed classification algorithm further capitalizes on the fact that the multiple observations belong to the same class. In particular, each sensor captures an observation of the same object (see also Fig. 1) and computes its nearest neighbors among the labelled examples. Under a certain class hypothesis, those neighbors contribute to the local computation of a portion of the objective function value. Those portions are summed distributively by means of average consensus [12], [13], so that all observations are progressively taken into account and the total value of the objective function is computed at all sensors. This process is repeated for all class hypotheses. The sensors eventually reach a consensus classification decision by picking the class resulting in the smoothest label assignment. We illustrate the performance of the proposed distributed algorithm in multi-view face recognition in a simulated ad-hoc network of vision sensors. When the training set is sufficiently large, the simulation results show that the consensus classification decision is equivalent to the decision of a centralized system that would have access to all observations.

The rest of the paper is organized as follows. We formally define the problem of distributed classification in sensor networks with ad-hoc topology in Section II, and in Section III we present our graph-based problem formulation. In Section IV we introduce our distributed classification algorithm, which is solely based on consensus-based distributed averaging. In Section V, we show the feasibility of our algorithm in the context of distributed multi-view face recognition. Finally, we discuss the related work in Section VI.

II. DISTRIBUTED CLASSIFICATION OF MULTIPLE OBSERVATIONS

Let us formally define the problem of distributed classification of multiple observations in an ad-hoc sensor network. We consider a network of $m$ sensors and we model the network topology as an undirected graph $G_s = (V_s, E_s)$ with nodes $V_s = \{1, \dots, m\}$ corresponding to sensors. An edge $(i, j) \in E_s$ is drawn if and only if sensor $i$ can communicate with sensor $j$.
Then, we associate a weight $W(i,j)$ with each edge $(i,j) \in E_s$, and we call the matrix $W$ that gathers the edge weights $W(i,j)$ the weight matrix. Note that $W$ is a sparse matrix whose sparsity pattern is driven by the network topology. We denote the set of neighbors of node $i$ as $N_i = \{j \,|\, (i,j) \in E_s\}$.

We assume that each sensor $j$ captures a single (unlabelled) observation $x_j^{(u)}$ of an object $f$. Each observation is different from its peers and has the following form:

$$x_j^{(u)} \triangleq U(\eta_j)\, f, \quad j = 1, \dots, m. \quad (1)$$

In the above, $U(\eta_j)$ denotes a transformation applied on the object $f$ with parameters $\eta_j$. For instance, the transformation could be an (in-plane or out-of-plane) rotation, and $\eta_j$ could denote the rotation angle. Hence, there are $m$ observations of the object $f$ recorded over the sensor network, and there is a one-to-one correspondence between sensors and observations.

Fig. 2. Conceptual distinction between the two graphs of the problem. $G_s$ (resp. $G_d$) denotes the graph of the sensor network topology (resp. the data graph). In $G_d$, the filled (resp. empty) circles correspond to labelled (resp. unlabelled) examples.

Assume further that the data set is organized in two parts $X = \{X^{(l)}, X^{(u)}\}$, where $X^{(l)} = \{x_1, \dots, x_l\} = \{x_1^{(l)}, \dots, x_l^{(l)}\} \subset \mathbb{R}^d$ and $X^{(u)} = \{x_{l+1}, \dots, x_n\} = \{x_1^{(u)}, \dots, x_m^{(u)}\} \subset \mathbb{R}^d$, with $n = l + m$. Let also $\mathcal{L} = \{1, \dots, c\}$ denote the label set. The $l$ examples in $X^{(l)}$ carry labels $Y^{(l)} := \{y_1, \dots, y_l\}$, $y_i \in \mathcal{L}$, and are common to all sensors, while the $m$ examples in $X^{(u)}$ are unlabelled and distributed. Each of these examples corresponds to an observation made at a sensor, which is not available to the other sensors. The problem of distributed classification can be formally defined as follows.

**Problem 1.** *Assume that each sensor $j$ has a copy of the labelled set $\{X^{(l)}, Y^{(l)}\}$, in addition to its single observation $x_j^{(u)}$ defined in (1). Assume also that each sensor knows its neighbors and the weights of its links to them. The problem is to reach a consensus classification decision where each sensor predicts the correct class $c^*$ of the object of interest $f$, by aggregating, via local communication, information from all available observations over the network.*
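As an aside, the observation model (1) is easy to simulate; the sketch below uses an in-plane rotation as the transformation $U(\eta_j)$, with NumPy and SciPy assumed available. The image size, angle range, and seed are arbitrary choices made for illustration.

```python
import numpy as np
from scipy.ndimage import rotate  # rotation as one example of U(eta)

rng = np.random.default_rng(0)
f = rng.random((32, 32))             # the object f, here a toy image
m = 6                                # number of sensors
etas = rng.uniform(-30, 30, size=m)  # per-sensor view angles (unknown)

# x_j = U(eta_j) f: each sensor holds exactly one transformed view
observations = [rotate(f, angle, reshape=False, mode="nearest")
                for angle in etas]
X_u = np.stack([x.ravel() for x in observations])  # m x d with d = 1024
```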
Denote by the set of M matrices with nonnegative entries of size n c. Notice that × any matrix M provides a labelling of the data set by ∈M applying the following rule: yi = maxj=1,...,c Mij. We denote the initial label matrix as Y ∈M where Yij = 1 if xi belongs to class j and 0 otherwise. We further form the k nearest neighbor (k-NN) graph denoted as Gd = (Vd, Ed), where the vertices Vd correspond to the data samples X. Typically, an edge eij d is drawn if ∈E and only if xj is among the k nearest neighbors of xi. Hence, the k-NN graph captures the affinity of the data samples in the ambient space. It is common practice to assign weights on the edge set of Gd, gathered in a weight matrix H ∈ R[n][×][n]. The (normalized) similarity matrix S R[n][×][n] is further defined as ∈ S = D[−][1][/][2]HD[−][1][/][2], (2) where D is a diagonal matrix with entries Dii = [�]j[n]=1 [H][ij][.] It is important to distinguish between the two graph models involved in our problem: the sensor graph and the data graph. Figure 2 illustrates the conceptual distinction between the two. In the sequel, we first review briefly the basics of Label Propagation. Then we present our problem formulation first in centralized settings, which serve as performance benchmark, and then in distributed settings. _A. Label Propagation._ The algorithm computes a real valued M [∗] based on ∈M which the final classification is performed using the rule yi = maxj=1,...,c Mij[∗] [. This is done via a regularization framework] with a cost function defined as _B. Problem formulation in centralized settings_ We now exploit the special structure of the problem, namely that the multiple observations belong to the same class. If we define a binary class label vector λ = [λ1, . . ., λc] ∈ R[c], the optimal classification of Problem 1 should have only one nonzero entry, with the form λ = [0, . . ., 1, . . ., 0]. Intuitively, ����c[∗] we seek for one of the c vectors λ with only one non-zero entry, which best reflects the manifold smoothness assumption. This optimal vector results in similar class label assignments for pairs that are similar. The label smoothness criterion is alternatively captured by the following objective function n Qc(M ) = � Sij∥Mi − Mj∥[2], (5) i,j=1 where Mi (resp. Mj) denotes the ith (resp. jth) row of M . The objective function above becomes equivalent to the smoothness term of eq. (3) when S is row-stochastic i.e., the sum of each row is equal to one. Since all multiple observations belong to the same class, M can be defined as c M = � λpZp, (6) p=1 where λp ∈{0, 1}, [�]p[c]=1 [λ][p][ = 1][ and][ Z][p][ is defined as]  Y [(][l][)] R[l][×][c]  ∈ Zp =   ∈ R[n][×][c]. (7) **1e[⊤]p** [∈] [R][m][×][c] In the above, Y [(][l][)] denotes the submatrix of Y associated with the labeled data X [(][l][)], and ep is the canonical basis vector whose pth element is one and the rest is zero. With the above definition of M, it can be shown [15] that the objective function (5) can be written in the following form, � � Qc(λ) = C + Sij∥Yi − λ∥[2] + Sij∥Yj − λ∥[2], i≤l,j>l i>l,j≤l 1 Φ(M ) = 2 n � i,j�=1 Hij ∥ √D1 ii Mi − �D1 jj Mj∥[2] (3) +µ n � ∥Mi − Yi∥[2][�], i=1 where Mi denotes the ith row of M . The computation of M [∗] is done by solving the quadratic optimization problem M [∗] = arg minM∈M Φ(M ). Intuitively, we are seeking for an M [∗] that is smooth along the edges of similar pairs (xi, xj ) and at the same time close to Y when evaluated on the labelled data X [(][l][)]. 
_B. Problem formulation in centralized settings_

We now exploit the special structure of the problem, namely that the multiple observations belong to the same class. If we define a binary class label vector $\lambda = [\lambda_1, \dots, \lambda_c] \in \mathbb{R}^c$, the optimal classification of Problem 1 should have only one nonzero entry, of the form $\lambda = [0, \dots, 1, \dots, 0]$ with the 1 in position $c^*$. Intuitively, we seek the one of the $c$ such vectors $\lambda$ that best reflects the manifold smoothness assumption; this optimal vector results in similar class label assignments for pairs that are similar. The label smoothness criterion is alternatively captured by the following objective function:

$$Q_c(M) = \sum_{i,j=1}^{n} S_{ij}\, \| M_i - M_j \|^2, \quad (5)$$

where $M_i$ (resp. $M_j$) denotes the $i$th (resp. $j$th) row of $M$. The objective function above becomes equivalent to the smoothness term of eq. (3) when $S$ is row-stochastic, i.e., when the sum of each row is equal to one. Since all multiple observations belong to the same class, $M$ can be defined as

$$M = \sum_{p=1}^{c} \lambda_p Z_p, \quad (6)$$

where $\lambda_p \in \{0, 1\}$, $\sum_{p=1}^{c} \lambda_p = 1$, and $Z_p$ is defined as

$$Z_p = \begin{bmatrix} Y^{(l)} \\ \mathbf{1}\, e_p^\top \end{bmatrix} \in \mathbb{R}^{n \times c}, \quad \text{with } Y^{(l)} \in \mathbb{R}^{l \times c} \text{ and } \mathbf{1}\, e_p^\top \in \mathbb{R}^{m \times c}. \quad (7)$$

In the above, $Y^{(l)}$ denotes the submatrix of $Y$ associated with the labeled data $X^{(l)}$, and $e_p$ is the canonical basis vector whose $p$th element is one and the rest are zero. With the above definition of $M$, it can be shown [15] that the objective function (5) can be written in the following form:

$$Q_c(\lambda) = C + \sum_{i \le l,\, j > l} S_{ij}\, \| Y_i - \lambda \|^2 + \sum_{i > l,\, j \le l} S_{ij}\, \| Y_j - \lambda \|^2,$$

where $C = \sum_{i \le l,\, j \le l} S_{ij}\, \| Y_i - Y_j \|^2$ is a constant term that does not depend on $\lambda$.

_C. Problem formulation in distributed settings_

Observe that the evaluation of the cost function $Q_c(\lambda)$ defined above is not feasible in distributed settings. In this case, the nearest neighbors of each example can be chosen only among the labelled ones, as each sensor does not have access to any unlabelled example apart from its own observation. For this reason, we adopt a slightly modified cost function in distributed settings, discussed below. For each candidate vector $\lambda$, each sensor $j$ locally computes a smoothness criterion as a weighted summation over the labelled examples:

$$r(j) = \sum_{i=1}^{l} S_{ji}\, \| Y_i - \lambda \|^2, \quad (8)$$

where $Y_i$ denotes the $i$th row of the label matrix $Y$, and the weight $S_{ji}$ denotes the similarity of the unlabeled observation $x_j$ (collected at sensor $j$) with the labeled data sample $x_i$. The global smoothness function $Q_d$ then aggregates the local criteria as

$$Q_d(\lambda) = \sum_{j=l+1}^{n} r(j), \quad (9)$$

where the index $j$ runs over the unlabelled examples (observations). Notice that when an unlabelled example $x_j$ ($j > l$) is similar to a labelled example $x_i$ (i.e., the weight $S_{ji}$ is large), minimizing the above objective function results in labels that are smooth across similar examples. Hence, we need to solve the following optimization problem:

**OPT:** $\min_{[\lambda_1, \dots, \lambda_c]} Q_d([\lambda_1, \dots, \lambda_c])$ subject to $\lambda_p \in \{0, 1\}$, $p = 1, \dots, c$, and $\sum_{p=1}^{c} \lambda_p = 1$.
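The per-sensor quantity (8) reduces to a weighted sum over at most $k$ labelled neighbors, evaluated once per class hypothesis. A sketch of this local step, with illustrative names: `Y_l` is the $l \times c$ label matrix, and `s` holds the observation's similarities to the $l$ labelled examples, i.e., the last row of the local matrix $\tilde{S}$ introduced in Algorithm 1 below.

```python
import numpy as np

def local_scores(s, Y_l, c):
    """r(j) of eq. (8) for every class hypothesis p = 1, ..., c."""
    r = np.empty(c)
    for p in range(c):
        lam = np.zeros(c)
        lam[p] = 1.0                          # hypothesis: class p
        r[p] = (s * ((Y_l - lam) ** 2).sum(1)).sum()
    return r
```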
For notational ease, we drop the subscript j from x[(]j[u][)] when it is clear from the context that we refer to sensor j. The main steps are shown in Algorithm 1, where we have used a slightly different notation: we have attached a tilde to those quantities that are different from Section III-C due to the partial information of each sensor. For example, the local similarity matrix S˜ R[(][l][+1)][×][(][l][+1)], which gathers the similarity weights of the ∈ local data set X [(][l][)], x[(][u][)] at each sensor, is not to be confused { } with the global similarity matrix S R[n][×][n] associated with the ∈ whole dataset X [(][l][)], X [(][u][)] . We discuss below the proposed { } distributed algorithm in details. First, each sensor computes the k-NN graph of its own data set X [(][l][)], x[(][u][)] and forms the corresponding S[˜] matrix of size { } (l+1) (l+1) (Lines 4-7). Next, each class hypothesis is tested × (loop 8 12) For each class h pothesis each sensor j first Qd(λ) = n � r(j) (9) j=l+1 where the index j runs over the unlabelled examples (observations). Notice that when an unlabelled example xj (j > l) is similar to a labelled example xi (i.e., the weight Sji is large), then minimizing the above objective function will result in labels that are smooth across similar examples. Hence, we need to solve the following optimization problem. Optimization problem: OPT min[λ1,...,λc] Qd([λ1, . . ., λc]) subject to λ�p ∈{c 0, 1}, p = 1, . . ., c, p=1 [λ][p][ = 1][.] IV. THE DISTRIBUTED CLASSIFICATION ALGORITHM In what follows, we discuss first how one can compute distributively the sum of local functions with consensus algorithms. Then we introduce our proposed distributed algorithm for solving the classification problem OPT. _A. Distributed consensus_ Distributed consensus [12], [13] has recently become an important computational tool for various aggregation tasks in ad-hoc sensor networks. We consider distributed linear iterations of the following form zt+1(i) = W (i, i)zt(i) + � W (i, j)zt(j), (10) j∈Ni for i = 1, . . ., m, where zt(j) represents the value computed by sensor j at iteration t. The above iteration can be compactly written in the following form zt+1 = Wzt. (11) Consensus can be employed for the problem of distributed averaging, as we explain below. Assume that initially each sensor i reports a scalar value z0(i) ∈ R. We denote by z0 = [z0(1), . . ., z0(m)][⊤] ∈ R[n] the vector of initial values on the network. Denote by z¯0 = [1] m m � z0(i) (12) i=1 the average of the initial values of the sensors. The problem of distributed averaging therefore becomes typically to compute z¯0 at each sensor by distributed linear iterations of the form of (11). Iteration (11) converges to the average for every z0 if and only if lim (13) t→∞ [W][ t][ =][ 11]m [⊤] [,] where 1 is the vector of ones [13]. Indeed, notice that in this case z[∗] = lim zt = lim W [t]z0 = **[11][⊤]** t→ t→ m [z][0][ = ¯][z][0][1][.] ----- similarity matrix similarity matrix similarity matrix at sensor 1 at sensor 2 at sensor m 0 0 ... 0 - * - - - distributed averaging Fig. 3. Flow of computation, which is repeated for each hypothesis p, p = 1, . . ., c. The stars in the last row of each similarity matrix correspond to the nearest neighbors of the observation x[(][u][)] among the labelled examples. The computation of r(j) in the first row is local, i.e., no communication among the sensors is required. 
|Col1|Col2|Col3| |---|---|---| |||| |||| |Col1|Col2|Col3| |---|---|---| |||| |||| |Col1|Col2|Col3| |---|---|---| |||| |||| |Col1|Col2|0| |---|---|---| |||| |* *||| |Col1|Col2|0| |---|---|---| |||| |* *||| |Col1|Col2|0| |---|---|---| |||| |* *||| computes a scalar number r(j) that involves local computation only; namely a weighted sum of the nonzero entries of the last row of S[˜] (i.e., (l + 1)th row). This corresponds to a portion of the value of the objective function, which captures the smoothness of the label assignment under the current class hypothesis. In order to compute the value of the objective function q(p), the partial sums r(j) need to be summed together and this involves distributed computation. This step is performed by distributed average consensus (Line 11), where the summation of all r’s is computed at each sensor. Note that this will result in a scaled version of q(p), due to presence of 1/m in the average. However, this has no influence on the classification decision, which is taken in Line 13 by all sensors, after all hypotheses have been tested. At the end of the algorithm, all sensors reach a consensus decision. Figure 3 shows schematically the flow of the distributed computation in Line 11 of Algorithm 1 for a single hypothesis p. We show the general structure of the similarity matrix S[˜] formed at each sensor j, j = 1, . . ., m (assuming that the labelled data samples are ordered according to their class labels). Observe that the upper left block of S corresponding to the labelled set is common to all similarity matrices of the sensors, as they all have a copy of X [(][l][)]. The only difference is in their last row, whose non-zero entries correspond to the nearest neighbors of their own observation x[(][u][)] among the labelled examples (indicated by asterisks in Figure 3). Notice that those entries contribute to the computation of the partial sums r(j) in Line 10, which involves only local computation. Then, the sum of all values r(j), j = 1, . . ., m is computed distributively by average consensus, which yields the value of the objective function q(p) for the current class hypothesis p. All observations contribute to the final classification decision, thanks to the emplo ment of a erage consens s _a) Computational cost analysis: Let us discuss the com-_ putational cost of distributed MASC. In what follows, denote by T the number of required consensus iterations and k¯ = E{|Nj|} the average number of neighbors of a node in the sensor network. The main computational steps that each sensor has to perform consists of (see also Algorithm 1): - The construction of k-NN graph among the labelled examples that scales as O(l[2]), where l denotes the number of labelled examples. However, this can be performed offline (e.g., before the deployment of the sensor network). - Local computation of the nearest neighbors of x[(]j[u][)] among the labeled data X [(][l][)]. This requires computing the distance of x[(]j[u][)] to all labelled examples and scales as O(l). - Local computation of r( ) in Line 10. It scales as O(kc), because it involves only the last row of S[˜] that contains only k non-zero entries (see also Fig. 3), where k is the set of nearest neighbors of each data sample in the data graph. - Distributed computation of the objective function via distributed averaging in Line 11. This scales as O(kT c[¯] ), which corresponds to the cost of linear iteration (10), repeated T times until convergence, for each class hypothesis. 
_a) Computational cost analysis:_ Let us discuss the computational cost of distributed MASC. In what follows, denote by $T$ the number of required consensus iterations and by $\bar{k} = E\{|N_j|\}$ the average number of neighbors of a node in the sensor network. The main computational steps that each sensor has to perform are the following (see also Algorithm 1):

- The construction of the k-NN graph among the labelled examples, which scales as $O(l^2)$, where $l$ denotes the number of labelled examples. However, this can be performed offline (e.g., before the deployment of the sensor network).
- Local computation of the nearest neighbors of $x_j^{(u)}$ among the labeled data $X^{(l)}$. This requires computing the distance of $x_j^{(u)}$ to all labelled examples and scales as $O(l)$.
- Local computation of $r(\cdot)$ in Line 10. This scales as $O(kc)$, because it involves only the last row of $\tilde{S}$, which contains only $k$ non-zero entries (see also Fig. 3), where $k$ is the number of nearest neighbors of each data sample in the data graph.
- Distributed computation of the objective function via distributed averaging in Line 11. This scales as $O(\bar{k}Tc)$, which corresponds to the cost of the linear iteration (10), repeated $T$ times until convergence, for each class hypothesis.

If we omit the offline cost of forming the graph among the labelled samples, we conclude that the total average computational cost per sensor is $O(l + (k + \bar{k}T)c)$. Given that the number of consensus iterations $T$ increases when more sensors are added to the network, one would expect the cost per sensor to also increase with the network size. In practice, however, one can overcome this problem by resorting to accelerated consensus methods, such as polynomial filtering [16], which admit an almost negligible increase of $T$ with respect to the network size by means of an increased convergence rate (see [16, Sec. V-B] for more details).
Each facial image corresponds to the observation of a sensor. The problem is to estimate the unknown class in a distributed fashion. Note that our goal is not to present a new method for multiview face recognition, but rather to use this application as a showcase in order to illustrate the feasibility and the behavior of our distributed classification algorithm. In the construction of the sensor networks, we use the random geographic graph model [17]. According to this model, we randomly distribute m sensor nodes on a 2-dimensional unit area. Two nodes are adjacent if their Euclidean distance is smaller than ǫ = � logm m [, which ensures connectedness with] high probability. We also assign weights on the edges of the sensor network graph. We provide more information about the weights in the sequel in Section V-C. In all algorithms we use Gaussian weights defined as Hij = � exp(− [∥][x][i]2[−]σ[x][2][j] [∥][2] ) when (i, j) ∈E, (14) 0 otherwise, where each xi corresponds to a raw facial image represented as a high-dimensional vector in R[d]. The parameter σ in the above equation is set equal to half of the median of pairwise distances obtained from a large (random) sample of points. Finally, we set the number of nearest neighbors k to 3 in all methods. We consider the case of a vision sensor network, such as the one shown in Fig. 1, where the face of a subject is captured by different cameras organized in an ad-hoc network. Each observation in this case represents a facial image captured under different viewing angles. Observe again that all observations belong to the same class and the problem resides in estimating the unknown class i.e., recognizing the subject. We used the UMIST database [18] in our simulations. The UMIST database contains 20 people nder different poses The ----- (a) m = 4 **5.5** **6** **number of training samples per class** (c) m = 8 (b) m = 6 **5.5** **6** **number of training samples per class** (d) m = 10 Fig. 7. Difference in performance between MASC and its distributed version versus the number of training samples (per class). number of different views per subject varies from 19 to 48. Fig. 5 illustrates a sample subject from the UMIST database along with its first 20 views. Fig. 6 illustrates a snapshot of the simulated network. The facial image next to each sensor corresponds to its own observation. In order to simulate a generic scenario, we assign randomly the different face poses among the sensors. _B. Classification Performance_ In the first experiment we will investigate the classification performances of all methods: distributed MASC, distributed k-NN + majority voting, centralized MASC and centralized Label Propagation (LP). We assume that the distributed average consensus in Line 11 of Algorithm 1 has converged to the asymptotic solution. In other words, we assume that the distributed summation is exact. The purpose of this experiment is to investigate whether the distributed algorithm suffers any loss in performance due to the partial information and what are the factors that influence this phenomenon. We set µ = 0.1 in LP that worked best in this data set. We investigate the behavior of all methods, when the number of multiple observations m varies from 4 to 10 with step 2. For each partic lar al e of e meas re the classification error rate for different sizes of training set. In particular, we increase gradually the number of training examples per class and measure the average classification error rate over 100 random experiments. 
Each random experiment corresponds to a random split of the data set into training (labelled) and test (unlabelled) sets. We do many random experiments in order to avoid any bias in the measured classification performances, due to a particular realization of the labeled and unlabeled data sets. Figs 7(a)-7(d) show the obtained results for different number m of multiple observations, when the number of training examples per class increases from 4 to 8 with step 1. First, we see that distributed MASC outperforms the distributed baseline scheme of k-NN followed by majority voting as well as the (centralized) LP, which does not exploit the fact that all observations belong to the same class. Second, we observe that there is a small loss in performance of distributed MASC with respect to its centralized counterpart. To see why this happens, it is important to realize that the k-NN graph in the distributed case is different than the graph in the centralized case. This is due to the fact that the multiple observations are collected distributively. Hence, the neighbors of an obser ation (u) can be selected onl among the labelled ----- (a) MASC (b) distMASC Fig. 8. Classification performance versus number of multiple observations, for both methods. Each curve corresponds to different number of training samples per class. **40** **35** **28** **26** **30** **25** **24** **22** **20** **15** **0** **20** **40** **60** **80** **100** **number of iterations** **20** **18** **16** **0** **20** **40** **60** **80** **100** **number of iterations** |MASC distMASC|MASC distMASC| |---|---| ||| |MASC distMASC|MASC distMASC| |---|---| ||| (a) Maximum degree weights Fig. 9. Average classification error rate vs consensus iterations, for different weight matrices. (b) Metropolis weights examples, whereas in the centralized case they may be selected among all (labelled and unlabelled) examples. This is the main reason for the difference in performance in Fig. 7, which is more pronounced when the training set is small. However, it is exactly this difference in the construction of the k-NN graph that allows for the distributed MASC algorithm to have much lower computational cost than that of centralized MASC. Essentially, this is the main characteristic that makes it efficient and feasible in distributed settings. However, this comes at the cost of a small performance loss, which however reduces when the training set is sufficiently large. Fig. 8 illustrates the same results as Fig. 7 in a different way. In particular, it illustrates the behavior of classification performances of both MASC methods with respect to the number of multiple observations, when the size of the training set is fixed. The number of multiple observations m varies from 4 to 10 with step 2. Each curve corresponds to a fixed number of training samples per class, denoted by p. Unsurprisingly, we observe that an increase in the number of observations tends to improve the classification performance in both algorithms _C. Consensus Performance_ In the previous experiment, we assumed that the distributed summation in Line 11 of Algorithm 1 is exact. In this experiment we drop this assumption and we investigate the effect of employing distributed consensus for the computation of this sum. Note that our goal in this particular experiment is to study the effect of consensus on the classification performances. For this reason, we use the same k-NN graph of distributed MASC in its centralized counterpart. 
This way, the performance difference of the two algorithms is only due to the summation part. First, we split randomly the data set into training and test sets, by including two examples per class in the labelled set X [(][l][)] and the rest is assigned to the test set. We form m = 10 multiple observations, which are drawn randomly from the test set, and we use k = 1 in the construction of the k-NN graph. Fig. 9 shows the average classification error rate (over 500 random experiments) measured on a certain sensor, say the first one, when the number of iterations in distributed consens s aries from 1 to 100 ith step 5 Each random e periment ----- in this case corresponds to a random realization of the labelled and unlabelled data sets, as well as random generation of the underlying sensor network. We use two different weights from the literature[13], namely the Maximum-degree weights: W (i, j) =    n1 [,] (i, j) ∈Es 1 − [d]n[(][i][)] [,] i = j (15) 0 otherwise, and the Metropolis weights: object pose estimation [25] as well as distributed face pose estimation [6]. A different approach is proposed in [26] for object pose averaging in distributed camera networks. It mainly differs from the approach above in that it includes a rigidity penalty term to distributed consensus, which penalizes the estimates that deviate from the model. Therefore, it bypasses the need for special handling of rotations. _B. Distributed classification_ The authors in [9] propose a distributed multi-target classification algorithm for sensor networks. The authors formulate the classification problem as a multiple hypothesis testing problem and propose a decision fusion methodology by aggregating local classifier decisions to a fusion center. Since the number of hypothesis grows exponentially with the number of targets, the authors propose a sub-optimal approach of partitioning the hypothesis space. A parallel active-set algorithm was proposed in [27] for distributed Support Vector Machines (SVM) training. The authors propose a relaxation to the dual of the SVM training optimization problem, which further permits the partition of the (relaxed) problem into subproblems that can be solved by Lagrangian decomposition and gradient projection. Despite the general scope of the proposed algorithm, the main focus has been on its computational efficiency, rather on its feasibility and implementation aspects in the context of wireless sensor networks. The overview article [28] discusses the problem of distributed classification with non-parametric kernel methods [29], where the goal is to learn a global classification function from distributed data in wireless sensor networks. The method proposed in this work is fundamentally different from the methods discussed in [28] in that it tries to predict directly the single unknown class label based on the multiple observations, rather than trying to learn the classification function itself. The reader is referred to [28] and references therein, for more details on the related methods for nonparametric distributed learning. Finally, we mention that there are approaches that address the problem by distributed feature extraction followed by (centralized) classification at the fusion center. For instance, Yang et. al. in [30] propose a distributed scheme for segmentation and classification of human actions using a network of wearable motion sensors. 
It is assumed that sensors are able to transmit local feature vectors to a central computer, where the global classification is performed. _C. Consensus-based distributed classification_ Consensus-based methods for distributed classification in ad-hoc sensor networks have recently started to emerge. The authors in [5] propose two consensus algorithms for distributed SVM training for binary classification. The main idea of the first algorithm is to exchange support vectors between adjacent sensor nodes until consensus on the separating hyperplane has been reached. However, it was shown that it results in a suboptimal sol tion The second proposed algorithm comp tes W (i, j) =  1+max{d1(i),d(j)} [,] (i, j) ∈Es  1 − [�](i,j)∈E [W] [(][i, k][)][,] i = j 0 otherwise, (16) where d(i) denotes the degree of the ith node. The weights above are known to satisfy condition (13) and therefore lead the iteration zt+1 = Wzt to asymptotic convergence to the average ¯z0 = m[1] �mi=1 [z][0][(][i][)][. Observe that fairly few iterations,] namely between 30 and 40, provide sufficient accuracy in the computation of the distributed sum, in order to offer similar performance as the centralized MASC algorithm. VI. RELATED WORK In this section, we provide a more detailed exposition of the related work in the field. We start with consensus algorithms for various distributed problems in vision sensor networks and then we discuss distributed classification, first in general settings and then in relation to distributed consensus. _A. Consensus algorithms for vision sensor network problems_ The methods that we are going to discuss below are not directly related to the algorithm proposed in this paper as they address different problems. However, we believe that it is advantageous to mention them as they are all based on distributed consensus, which further emphasizes the importance of the latter as a powerful tool for distributed information processing in vision sensor networks. Distributed consensus [12], [13], [19], [20], [21] has recently become an important computational tool for multimedia data analysis and various aggregation tasks in ad-hoc sensor networks. In general, the main goal of distributed consensus is to reach a global solution iteratively in ad-hoc networks using only local computation and communication, while staying robust to changes in the network topology. The authors in [22] propose a message-passing version of the Kalman-Consensus Filter (KCF) [23] for target tracking in sensor networks with a limited sensing range. The proposed algorithm reaches a consensus on estimates obtained by local Kalman filters in a hybrid architecture formed by a fusion center and a peer-to-peer network. Recently, this distributed tracking algorithm has been applied in [24] for tracking multiple targets in a self-configuring camera network. The authors in [25], [6] have generalized the Euclidean distributed consensus algorithm to non-Euclidean manifolds. In particular, they have considered SE(3), which is the group of rigid-body transformations consisting of rotations in SO(3) and translations The ha e applied their algorithm to distrib ted ----- the optimal solution, at the price of increased communication though. Another distributed SVM algorithm has been recently proposed in [31] that avoids the communication of support vectors between adjacent sensor nodes. 
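As a concrete illustration of the Metropolis weights (16) and the iteration z_{t+1} = W z_t, here is a small self-contained sketch (our code, not the paper's; the graph model and iteration count are illustrative). It builds W for a random connected graph and runs the linear iteration; after a few tens of steps every node holds approximately the network average, which is exactly what the distributed summation in Line 11 of Algorithm 1 requires.

```python
import numpy as np

rng = np.random.default_rng(1)
m = 10                                    # number of sensors
# Random symmetric connectivity, plus a ring to guarantee connectedness.
A = (rng.random((m, m)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T
for i in range(m):
    A[i, (i + 1) % m] = A[(i + 1) % m, i] = 1.0

deg = A.sum(axis=1)
W = np.zeros((m, m))
for i in range(m):
    for j in range(m):
        if A[i, j]:
            W[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))  # Metropolis, eq. (16)
    W[i, i] = 1.0 - W[i].sum()            # diagonal term of eq. (16)

z = rng.normal(size=m)                    # initial local values z_0(i)
target = z.mean()
for t in range(40):                       # z_{t+1} = W z_t
    z = W @ z
print(np.max(np.abs(z - target)))         # every node is close to the average
```

The matrix W is symmetric and doubly stochastic by construction, so repeated multiplication converges to the average on any connected graph, consistent with the 30-40 iterations reported above.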
VI. RELATED WORK

In this section, we provide a more detailed exposition of the related work in the field. We start with consensus algorithms for various distributed problems in vision sensor networks, and then we discuss distributed classification, first in general settings and then in relation to distributed consensus.

_A. Consensus algorithms for vision sensor network problems_

The methods that we are going to discuss below are not directly related to the algorithm proposed in this paper, as they address different problems. However, we believe that it is advantageous to mention them, as they are all based on distributed consensus, which further emphasizes the importance of the latter as a powerful tool for distributed information processing in vision sensor networks. Distributed consensus [12], [13], [19], [20], [21] has recently become an important computational tool for multimedia data analysis and various aggregation tasks in ad-hoc sensor networks. In general, the main goal of distributed consensus is to reach a global solution iteratively in ad-hoc networks using only local computation and communication, while staying robust to changes in the network topology. The authors in [22] propose a message-passing version of the Kalman-Consensus Filter (KCF) [23] for target tracking in sensor networks with a limited sensing range. The proposed algorithm reaches a consensus on estimates obtained by local Kalman filters in a hybrid architecture formed by a fusion center and a peer-to-peer network. Recently, this distributed tracking algorithm has been applied in [24] for tracking multiple targets in a self-configuring camera network. The authors in [25], [6] have generalized the Euclidean distributed consensus algorithm to non-Euclidean manifolds. In particular, they have considered SE(3), which is the group of rigid-body transformations consisting of rotations in SO(3) and translations. They have applied their algorithm to distributed object pose estimation [25] as well as distributed face pose estimation [6]. A different approach is proposed in [26] for object pose averaging in distributed camera networks. It mainly differs from the approach above in that it includes a rigidity penalty term in distributed consensus, which penalizes the estimates that deviate from the model. Therefore, it bypasses the need for special handling of rotations.

_B. Distributed classification_

The authors in [9] propose a distributed multi-target classification algorithm for sensor networks. The authors formulate the classification problem as a multiple hypothesis testing problem and propose a decision fusion methodology by aggregating local classifier decisions at a fusion center. Since the number of hypotheses grows exponentially with the number of targets, the authors propose a sub-optimal approach of partitioning the hypothesis space. A parallel active-set algorithm was proposed in [27] for distributed Support Vector Machine (SVM) training. The authors propose a relaxation of the dual of the SVM training optimization problem, which further permits the partition of the (relaxed) problem into subproblems that can be solved by Lagrangian decomposition and gradient projection. Despite the general scope of the proposed algorithm, the main focus has been on its computational efficiency, rather than on its feasibility and implementation aspects in the context of wireless sensor networks. The overview article [28] discusses the problem of distributed classification with non-parametric kernel methods [29], where the goal is to learn a global classification function from distributed data in wireless sensor networks. The method proposed in this work is fundamentally different from the methods discussed in [28] in that it tries to predict directly the single unknown class label based on the multiple observations, rather than trying to learn the classification function itself. The reader is referred to [28] and references therein for more details on the related methods for nonparametric distributed learning. Finally, we mention that there are approaches that address the problem by distributed feature extraction followed by (centralized) classification at the fusion center. For instance, Yang et al. in [30] propose a distributed scheme for segmentation and classification of human actions using a network of wearable motion sensors. It is assumed that sensors are able to transmit local feature vectors to a central computer, where the global classification is performed.

_C. Consensus-based distributed classification_

Consensus-based methods for distributed classification in ad-hoc sensor networks have recently started to emerge. The authors in [5] propose two consensus algorithms for distributed SVM training for binary classification. The main idea of the first algorithm is to exchange support vectors between adjacent sensor nodes until consensus on the separating hyperplane has been reached. However, it was shown that it results in a suboptimal solution. The second proposed algorithm computes the optimal solution, at the price of increased communication though. Another distributed SVM algorithm has been recently proposed in [31] that avoids the communication of support vectors between adjacent sensor nodes. The main idea is to cast the SVM optimization problem as the solution of several local convex optimization subproblems solved at each sensor, which are coupled by consensus constraints imposed on the classifier parameters (i.e., hyperplane and bias). The resulting problem is solved using the alternating direction method of multipliers [12], involving only node-to-node message exchanges. The generalization of the distributed algorithm to nonlinear SVMs is discussed in [32]. The above approaches are conceptually the closest to the method proposed in this work, under the same perspective of being consensus-based. However, a few things should be kept in mind. First, SVMs are binary classifiers and, to the best of our knowledge, their multi-class extension to distributed settings has not been studied yet. On the contrary, our method inherently operates on multi-class problems. Second, the above methods, unlike our algorithm, have not been explicitly designed for the problem of multiple observations classification considered in this paper. Applying such methods directly on multiple observations will most likely result in several different estimated class labels available at each sensor, and one is then confronted with the problem of fusing them in order to reach a single consensus decision. This is due to the fact that consensus is imposed on the classifier parameters and _not_ on the estimated class label, as done by our method.

VII. CONCLUSIONS

We studied the problem of classification of multiple observations in the scenario where the observations are collected distributively. We showed that distributed classification in ad-hoc sensor networks can be effectively performed using distributed consensus. In particular, we proposed a distributed graph-based algorithm that aggregates information from all observations across the network and leads to a consensus classification decision among the sensors. We have illustrated its performance in the context of distributed multi-view face recognition. The simulation results have shown that, when the training set is sufficiently large, the classification decision of the distributed algorithm is equivalent to that of the centralized algorithm. Furthermore, the convergence of the distributed classification algorithm is very fast thanks to the effective consensus strategy.

REFERENCES

[1] B. Rinner, T. Winkler, W. Schriebl, M. Quaritsch, and W. Wolf. The evolution from single to pervasive smart cameras. 2nd ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC), 2008.
[2] B. Rinner and W. Wolf. An introduction to distributed smart cameras. Proceedings of the IEEE, 96(10):1565-1575, October 2008.
[3] I. F. Akyildiz, T. Melodia, and K. R. Chowdhury. Wireless multimedia sensor networks: Applications and testbeds. Proceedings of the IEEE, 2008.
[4] G. Srivastava, H. Iwaki, J. Park, A. Kosaka, and A. Kak. Distributed and lightweight multi-camera human activity classification. ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC-09), September 2009.
[5] K. Flouri, B. Beferull-Lozano, and P. Tsakalides. Distributed consensus algorithms for SVM training in wireless sensor networks. 16th European Signal Processing Conference (EUSIPCO), 2008.
[6] R. Tron and R. Vidal. Distributed face recognition via consensus on SE(3). 8th Workshop on Omnidirectional Vision, Camera Networks and Non-classical Cameras (OMNIVIS), 2008.
[7] W. Schriebl, T. Winkler, A. Starzacher, and B. Rinner.
A pervasive smart camera network architecture applied for multi-camera object classification. ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC-09), September 2009.
[8] J. B. Predd, S. R. Kulkarni, and H. V. Poor. Distributed learning in wireless sensor networks. In Proceedings of the 42nd Annual Allerton Conference on Communication, Control, and Computing, Monticello, IL, Sept 29-Oct 1, 2004.
[9] J. H. Kotecha, V. Ramachandran, and A. M. Sayeed. Distributed multitarget classification in wireless sensor networks. IEEE Journal on Selected Areas in Communications, 23(4):703-713, April 2005.
[10] D. Li, K. Wong, Y. Hen Hu, and A. Sayeed. Detection, classification and tracking of targets in distributed sensor networks. IEEE Signal Processing Magazine, 19(2), March 2002.
[11] O. Chapelle, B. Scholkopf, and A. Zien. Semi-Supervised Learning. MIT Press, 2006.
[12] D. Bertsekas and J. N. Tsitsiklis. Parallel and Distributed Computation: Numerical Methods. Prentice Hall, 1989.
[13] L. Xiao and S. Boyd. Fast linear iterations for distributed averaging. Systems and Control Letters, (53):65-78, February 2004.
[14] D. Zhou, O. Bousquet, T. Navin Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. Advances in Neural Information Processing Systems (NIPS), 2003.
[15] E. Kokiopoulou and P. Frossard. Graph-based classification of multiple observation sets. November 2009. http://arxiv.org/pdf/0810.4617, submitted.
[16] E. Kokiopoulou and P. Frossard. Polynomial filtering for fast convergence in distributed consensus. IEEE Transactions on Signal Processing, 57(1):342-354, 2009.
[17] P. Gupta and P. R. Kumar. The capacity of wireless networks. IEEE Trans. on Information Theory, 46(2):388-404, March 2000.
[18] D. B. Graham and N. M. Allinson. Characterizing virtual eigensignatures for general purpose face recognition. Face Recognition: From Theory to Applications, 163:446-456, 1998.
[19] L. Xiao, S. Boyd, and S. Lall. A scheme for robust distributed sensor fusion based on average consensus. Int. Conf. on Information Processing in Sensor Networks, pages 63-70, April 2005. Los Angeles.
[20] L. Xiao, S. Boyd, and S. Lall. Distributed average consensus with time-varying Metropolis weights. Automatica, June 2006. Submitted.
[21] R. Olfati-Saber, J. A. Fax, and R. M. Murray. Consensus and cooperation in networked multi-agent systems. Proceedings of the IEEE, 95(1):215-233, January 2007.
[22] R. Olfati-Saber and N. F. Sandell. Distributed tracking in sensor networks with limited sensing range. Proceedings of the American Control Conference, June 2008.
[23] R. Olfati-Saber. Distributed Kalman filtering algorithms for sensor networks. IEEE Conference on Decision and Control, December 2007.
[24] C. Soto, B. Song, and A. K. Roy-Chowdhury. Distributed multi-target tracking in a self-configuring camera network. IEEE Int. Conf. on Computer Vision and Pattern Recognition (CVPR), 2009.
[25] R. Tron, R. Vidal, and A. Terzis. Distributed pose averaging in camera networks via consensus on SE(3). ACM/IEEE International Conference on Distributed Smart Cameras (ICDSC-08), September 2008.
[26] A. Jorstad, D. DeMenthon, I-J. Wang, and P. Burlina. Distributed consensus on camera pose. IEEE Transactions on Image Processing, 19(9):2396-2407, September 2010.
[27] T. Alpcan and C. Bauckhage. A discrete-time parallel update algorithm for distributed learning. IEEE Int. Conf. on Pattern Recognition (ICPR), December 2008.
[28] J. B. Predd, S. R. Kulkarni, and H. V. Poor. Distributed learning in wireless sensor networks. IEEE Signal Processing Magazine, pages 56-69, July 2006.
[29] X. L. Nguyen. Learning in Decentralized Systems: A Nonparametric Approach. PhD thesis, University of California, Berkeley, 2007.
[30] A. Y. Yang, S. Iyengar, S. Sastry, R. Bajcsy, P. Kuryloski, and R. Jafari. Distributed segmentation and classification of human actions using a wearable motion sensor network. IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, pages 1-8, June 2008.
[31] P. Forero, A. Cano, and G. B. Giannakis. Consensus-based distributed linear support vector machines. In 9th ACM/IEEE Intl. Conf. on Information Processing in Sensor Networks (IPSN), Stockholm, Sweden, April 2010.
[32] P. Forero, A. Cano, and G. B. Giannakis. Consensus-based distributed support vector machines. Journal of Machine Learning Research, 11:1663-1707, May 2010.

**Effrosyni Kokiopoulou** (S'05, M'09) received her Diploma in Engineering in June 2002 from the Computer Engineering and Informatics Department of the University of Patras, Greece. In June 2005, she received an M.Sc. degree in Computer Science from the Computer Science and Engineering Department of the University of Minnesota, USA, under the supervision of Prof. Yousef Saad. In September 2005, she joined the Signal Processing Laboratory (LTS4) at EPFL, Lausanne, Switzerland, as a PhD student, and completed her PhD studies in December 2008. Since 2009, she has been a postdoctoral researcher with the Seminar for Applied Mathematics, ETH, Zurich, Switzerland. Her research interests include multimedia data mining, pattern recognition, computer vision and numerical linear algebra. Dr. Kokiopoulou is the 2010 winner of the ACM Special Interest Group on Multimedia (SIGMM) award for Outstanding PhD Thesis in Multimedia Computing, Communications and Applications. She has been elected to receive the EPFL doctorate award in 2010.

**Pascal Frossard** (S'96, M'01, SM'04) received the M.S. and Ph.D. degrees, both in electrical engineering, from the Swiss Federal Institute of Technology (EPFL), Lausanne, Switzerland, in 1997 and 2000, respectively. Between 2001 and 2003, he was a member of the research staff at the IBM T. J. Watson Research Center, Yorktown Heights, NY, where he worked on media coding and streaming technologies. Since 2003, he has been a professor at EPFL, where he heads the Signal Processing Laboratory (LTS4). His research interests include image representation and coding, visual information analysis, distributed image processing and communications, and media streaming systems. Dr. Frossard has been the General Chair of IEEE ICME 2002 and Packet Video 2007. He has been the Technical Program Chair of EUSIPCO 2008, and a member of the organizing or technical program committees of numerous conferences. He has been an Associate Editor of the IEEE TRANSACTIONS ON MULTIMEDIA (2004-2010), the IEEE TRANSACTIONS ON IMAGE PROCESSING (2010-) and the IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY (2006-). He is an elected member of the IEEE Image and Multidimensional Signal Processing Technical Committee (2007-), the IEEE Visual Signal Processing and Communications Technical Committee (2006-), and the IEEE Multimedia Systems and Applications Technical Committee (2005-). He has served as Vice-Chair of the IEEE Multimedia Communications Technical Committee (2004-2006) and as a member of the IEEE Multimedia Signal Processing Technical Committee (2004-2007).
He received the Swiss NSF Professorship Award in 2003, the IBM Faculty Award in 2005 and the IBM Exploratory Stream Analytics Innovation Award in 2008.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1109/TSP.2010.2086450?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/TSP.2010.2086450, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "other-oa", "status": "GREEN", "url": "https://infoscience.epfl.ch/bitstreams/00638506-a66f-4219-b412-c4008182ecfe/download" }
2011
[ "JournalArticle" ]
true
null
[]
13,831
en
[ { "category": "Physics", "source": "external" }, { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01dc87e26fa1f9b8686679608f06e621182ec4a1
[ "Physics", "Computer Science" ]
0.906535
Interacting Neural Networks and Cryptography
01dc87e26fa1f9b8686679608f06e621182ec4a1
[ { "authorId": "144345775", "name": "W. Kinzel" }, { "authorId": "145556695", "name": "I. Kanter" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
Two neural networks which are trained on their mutual output bits are analysed using methods of statistical physics. The exact solution of the dynamics of the two weight vectors shows a novel phenomenon: The networks synchronize to a state with identical time dependent weights. Extending the models to multilayer networks with discrete weights, it is shown how synchronization by mutual learning can be applied to secret key exchange over a public channel.
## Interacting neural networks and cryptography

Wolfgang Kinzel[1] and Ido Kanter[2]

1 Institute for Theoretical Physics and Astrophysics, Universität Würzburg, Am Hubland, 97074 Würzburg, Germany
2 Minerva Center and Department of Physics, Bar-Ilan University, 52100 Ramat-Gan, Israel

Abstract. Two neural networks which are trained on their mutual output bits are analysed using methods of statistical physics. The exact solution of the dynamics of the two weight vectors shows a novel phenomenon: the networks synchronize to a state with identical time dependent weights. Extending the models to multilayer networks with discrete weights, it is shown how synchronization by mutual learning can be applied to secret key exchange over a public channel.

### 1 Introduction

Neural networks learn from examples. This concept has extensively been investigated using models and methods of statistical mechanics [1,2]. A "teacher" network presents input/output pairs of high dimensional data, and a "student" network is trained on these data. Training means that the synaptic weights adapt by simple rules to the input/output pairs. When the networks, teacher as well as student, have N weights, the training process needs on the order of N examples to obtain generalization abilities. This means that after the training phase the student has achieved some overlap with the teacher; their weight vectors are correlated. As a consequence, the student can classify an input pattern which does not belong to the training set. The average classification error decreases with the number of training examples. Training can be performed in two different modes: batch and on-line training. In the first case all examples are stored and used to minimize the total training error. In the second case only one new example is used per time step and then destroyed. Therefore on-line training may be considered as a dynamic process: at each time step the teacher creates a new example which the student uses to change its weights by a tiny amount. In fact, for random input vectors and in the limit N → ∞, learning and generalization can be described by ordinary differential equations for a few order parameters [3]. On-line training is a dynamic process where the examples are generated by a static network - the teacher. The student tries to move towards the teacher. However, the student network itself can generate examples on which it is trained. When the output bit is moved to the shifted input sequence, the network generates a complex time series [4]. Such networks are called bit (for binary) or sequence (for continuous numbers) generators and have recently been studied in the context of time series prediction [5]. This work on the dynamics of neural networks - learning from a static teacher or generating time series by self interaction - has motivated us to study the following problem: what happens if two neural networks learn from each other? In the following section an analytic solution is presented [6], which shows a novel phenomenon: synchronization by mutual learning. The biological consequences of this phenomenon are not explored yet, but we have found an interesting application in cryptography: secure generation of a secret key over a public channel. In the field of cryptography, one is interested in methods to transmit secret messages between two partners A and B. An opponent E who is able to listen to the communication should not be able to recover the secret message.
Before 1976, all cryptographic methods had to rely on secret keys for encryption which were transmitted between A and B over a secret channel not accessible to any opponent. Such a common secret key can be used, for example, as a seed for a random bit generator by which the bit sequence of the message is added (modulo 2). In 1976, however, Diffie and Hellman found that a common secret key could be created over a public channel accessible to any opponent. This method is based on number theory: given limited computer power, it is not possible to calculate the discrete logarithm of sufficiently large numbers [7]. Here we show how neural networks can produce a common secret key by exchanging bits over a public channel and by learning from each other.

### 2 Dynamic transition to synchronization

Here we study mutual learning of neural networks for a simple model system: two perceptrons receive a common random input vector x and change their weights w according to their mutual bit σ, as sketched in Fig. 1. The output bit σ of a single perceptron is given by the equation

$$\sigma = \operatorname{sign}(\mathbf{w} \cdot \mathbf{x}) \qquad (1)$$

x is an N-dimensional input vector with components which are drawn from a Gaussian with mean 0 and variance 1. w is an N-dimensional weight vector with continuous components which are normalized,

$$\mathbf{w} \cdot \mathbf{w} = 1 \qquad (2)$$

The initial state is a random choice of the components w_i^{A/B}, i = 1, ..., N for the two weight vectors w^A and w^B. At each training step a common random input vector is presented to the two networks, which generate two output bits σ^A and σ^B according to (1). Now the weight vectors are updated by the perceptron learning rule [3]:

Fig. 1. Two perceptrons receive an identical input x and learn their mutual output bits σ.

$$
\begin{aligned}
\mathbf{w}^A(t+1) &= \mathbf{w}^A(t) + \frac{\eta}{N}\,\mathbf{x}\,\sigma^B\,\Theta(-\sigma^A\sigma^B) \\
\mathbf{w}^B(t+1) &= \mathbf{w}^B(t) + \frac{\eta}{N}\,\mathbf{x}\,\sigma^A\,\Theta(-\sigma^A\sigma^B)
\end{aligned}
\qquad (3)
$$

Θ(x) is the step function. Hence, only if the two perceptrons disagree is a training step performed, with a learning rate η. After each step (3), the two weight vectors have to be normalized. In the limit N → ∞, the overlap

$$R(t) = \mathbf{w}^A(t) \cdot \mathbf{w}^B(t) \qquad (4)$$

has been calculated analytically [6]. The number of training steps t is scaled as α = t/N, and R(α) follows the equation

$$
\frac{dR}{d\alpha} = (R+1)\left[\sqrt{\frac{2}{\pi}}\,\eta\,(1-R) - \eta^2\,\frac{\varphi}{\pi}\right] \qquad (5)
$$

where ϕ is the angle between the two weight vectors w^A and w^B, i.e. R = cos ϕ. This equation has fixed points R = 1, R = −1, and

$$
\frac{\eta}{\sqrt{2\pi}} = \frac{1-\cos\varphi}{\varphi} \qquad (6)
$$

(the latter follows from setting dR/dα = 0 in (5) with R = cos ϕ). Fig. 2 shows the attractive fixed point of (5) as a function of the learning rate η. For small values of η the two networks relax to a state of mutual agreement, R → 1 for η → 0. With increasing learning rate η the angle between the two weight vectors increases, up to ϕ = 133° for η → ηc, with

$$\eta_c \simeq 1.816 \qquad (7)$$

Above the critical rate ηc the networks relax to a state of complete disagreement, ϕ = 180°, R = −1. The two weight vectors are antiparallel to each other, w^A = −w^B.

Fig. 2. Final overlap R between two perceptrons as a function of learning rate η. Above a critical rate ηc the time dependent networks are synchronized. From Ref. [6].

As a consequence, the analytic solution shows, well supported by numerical simulations for N = 100, that two neural networks can synchronize to each other by mutual learning. Both of the networks are trained on the examples generated by their partner and finally obtain an antiparallel alignment. Even after synchronization the networks keep moving; the motion is a kind of random walk on an N-dimensional hypersphere, producing a rather complex bit sequence of output bits σ^A = −σ^B [8].
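The transition described by (5)-(7) is straightforward to reproduce numerically. The following is a minimal simulation of rule (3) at finite N (our sketch, not the original code; N, the step count, and the tested values of η are illustrative): below ηc ≈ 1.816 the final overlap R is positive, while above ηc the networks end up nearly antiparallel.

```python
import numpy as np

def mutual_overlap(eta, N=500, steps=20000, seed=0):
    """Simulate rule (3): update on disagreement, renormalize, return R."""
    rng = np.random.default_rng(seed)
    wA = rng.normal(size=N); wA /= np.linalg.norm(wA)
    wB = rng.normal(size=N); wB /= np.linalg.norm(wB)
    for _ in range(steps):
        x = rng.normal(size=N)                 # Gaussian inputs, mean 0, var 1
        sA, sB = np.sign(wA @ x), np.sign(wB @ x)
        if sA != sB:                           # Θ(−σAσB) = 1 only on disagreement
            wA = wA + (eta / N) * x * sB
            wB = wB + (eta / N) * x * sA
            wA /= np.linalg.norm(wA)           # keep w·w = 1 as in (2)
            wB /= np.linalg.norm(wB)
    return wA @ wB                             # overlap R = cos(phi)

for eta in (0.5, 1.0, 2.0):
    # Partial agreement (R > 0) below eta_c, near-antiparallel (R -> -1) above.
    print(eta, mutual_overlap(eta))
```

With steps = 20000 and N = 500 the scaled time is α = 40, comfortably beyond the relaxation time of the order-parameter equation (5).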
### 3 Random walk in weight space

We want to apply synchronization of neural networks to cryptography. In the previous section we have seen that the weight vectors of two perceptrons learning from each other can synchronize. The new idea is to use the common weights w^A = −w^B as a key for encryption [9]. But two problems have to be solved yet: (i) can an external observer, recording the exchange of bits, calculate the final w^A(t)? (ii) does this phenomenon exist for discrete weights? Point (i) is essential for cryptography; it will be discussed in the following section. Point (ii) is important for practical solutions, since communication is usually based on bit sequences. It will be investigated in the following. Synchronization occurs for normalized weights; unnormalized ones do not synchronize [6]. Therefore, for discrete weights, we introduce a restriction in the space of possible vectors and limit the components w_i^{A/B} to 2L + 1 different values,

$$w_i^{A/B} \in \{-L, -L+1, \ldots, L-1, L\} \qquad (8)$$

In order to obtain synchronization to a parallel - instead of an antiparallel - state w^A = w^B, we modify the learning rule (3) to:

$$
\begin{aligned}
\mathbf{w}^A(t+1) &= \mathbf{w}^A(t) - \mathbf{x}\,\sigma^A\,\Theta(\sigma^A\sigma^B) \\
\mathbf{w}^B(t+1) &= \mathbf{w}^B(t) - \mathbf{x}\,\sigma^B\,\Theta(\sigma^A\sigma^B)
\end{aligned}
\qquad (9)
$$

Now the components of the random input vector x are binary, x_i ∈ {+1, −1}. If the two networks produce an identical output bit σ^A = σ^B, then their weights move one step in the direction of −x_i σ^A. But the weights should remain in the interval (8); therefore, if any component moves out of this interval, |w_i| = L + 1, it is set back to the boundary w_i = ±L. Each component of the weight vectors performs a kind of random walk with reflecting boundary. Two corresponding components w_i^A and w_i^B receive the same random number ±1. After each hit at the boundary the distance |w_i^A − w_i^B| is reduced, until it has reached zero. For two perceptrons with an N-dimensional weight space we have two ensembles of N random walks on the interval {−L, ..., L}. If we neglect the global signal σ^A = σ^B as well as the bias σ^A, we expect that after some characteristic time scale τ = O(L^2) the probability of two random walks being in different states decreases as

$$P(t) \sim P(0)\, e^{-t/\tau} \qquad (10)$$

Hence the total synchronization time should be given by N · P(t) ≃ 1, which gives

$$t_{\mathrm{sync}} \sim \tau \ln N \qquad (11)$$

In fact, our simulations for N = 100 show that two perceptrons with L = 3 synchronize in about 100 time steps, and the synchronization time increases logarithmically with N. However, our simulations also showed that an opponent, recording the sequence of (σ^A, σ^B, x)_t, is able to synchronize too. Therefore, a single perceptron does not allow the generation of a secret key.
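A compact simulation of the discrete rule (9) with reflecting boundaries (again our sketch under our own conventions; the tie-break sign(0) = +1 and the tested sizes are assumptions) makes the scaling (11) visible:

```python
import numpy as np

def sync_time(N=100, L=3, seed=0, max_steps=10**6):
    """Steps until w_A == w_B under rule (9), weights confined to [-L, L]."""
    rng = np.random.default_rng(seed)
    wA = rng.integers(-L, L + 1, size=N)
    wB = rng.integers(-L, L + 1, size=N)
    for t in range(1, max_steps):
        x = rng.choice((-1, 1), size=N)        # binary input components
        sA = 1 if wA @ x >= 0 else -1          # sign(w.x), ties mapped to +1
        sB = 1 if wB @ x >= 0 else -1
        if sA == sB:                           # Θ(σAσB): move only on agreement
            wA = np.clip(wA - x * sA, -L, L)   # reflecting boundary at ±L
            wB = np.clip(wB - x * sB, -L, L)
        if np.array_equal(wA, wB):
            return t
    return max_steps

print([sync_time(N=n, seed=1) for n in (50, 100, 200)])  # grows roughly ~ ln N
```

Note that when σ^A = σ^B both vectors receive the identical increment, so the distance |w_i^A − w_i^B| shrinks only through the clipping at ±L, exactly the boundary mechanism described above.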
### 4 Secret key generation

Obviously, a single perceptron transmits too much information. An opponent who knows the set of input/output pairs can derive the weights of the two partners after synchronization. Therefore, one has to hide so much information that the opponent cannot calculate the weights, but on the other side one has to transmit enough information that the two partners can synchronize. In fact, we found that multilayer networks with hidden units may be candidates for such a task [9]. More precisely, we consider parity machines with three hidden units, as shown in Fig. 3. Each hidden unit is a perceptron (1) with discrete weights (8). The output bit τ of the total network is the product of the three bits of the hidden units:

$$
\tau^A = \sigma_1^A\,\sigma_2^A\,\sigma_3^A, \qquad \tau^B = \sigma_1^B\,\sigma_2^B\,\sigma_3^B \qquad (12)
$$

Fig. 3. Parity machine with three hidden units.

At each training step the two machines A and B receive identical input vectors x_1, x_2, x_3. The training algorithm is the following: only if the two output bits are identical, τ^A = τ^B, can the weights be changed. In this case, only the hidden unit σ_i which is identical to τ changes its weights, using the Hebbian rule

$$w_i^A(t+1) = w_i^A(t) - x_i\,\tau^A \qquad (13)$$

For example, if τ^A = τ^B = 1 there are four possible configurations of the hidden units in each network: (+1, +1, +1), (+1, −1, −1), (−1, +1, −1), (−1, −1, +1). In the first case, all three weight vectors w_1, w_2, w_3 are changed; in all other three cases only one weight vector is changed. The partner as well as any opponent does not know which one of the weight vectors is updated. The partners A and B react to their mutual stop and move signals τ^A and τ^B, whereas an opponent can only receive these signals but not influence the partners with its own output bit. This is the essential mechanism which allows synchronization but prohibits learning. Numerical [9] as well as analytical [10] calculations of the dynamic process show that the partners can synchronize in a short time, whereas an opponent needs a much longer time to lock into the partners. This observation holds for an observer who uses the same algorithm (13) as the two partners A and B. Note that the observer knows 1. the algorithm of A and B, 2. the input vectors x_1, x_2, x_3 at each time step and 3. the output bits τ^A and τ^B at each time step. Nevertheless, he does not succeed in synchronizing with A and B within the communication period. Since for each run the two partners draw random initial weights and since the input vectors are random, one obtains a distribution of synchronization times, as shown in Fig. 4 for N = 100 and L = 3. The average value of this distribution is shown as a function of system size N in Fig. 5. Even an infinitely large network needs only a finite number of exchanged bits - about 400 in this case - to synchronize, in agreement with the analytical calculation for N → ∞.

Fig. 4. Distribution of synchronization time for N = 100, L = 3.
Fig. 5. Average synchronization time as a function of inverse system size.

If the communication continues after synchronization, an opponent has a chance to lock into the moving weights of A and B. Fig. 6 shows the distribution of the ratio between the synchronization time of A and B and the learning time of the opponent. In our simulations, for N = 100, this ratio never exceeded the value r = 0.1, and the average learning time is about 50,000 time steps, much larger than the synchronization time. Hence, the two partners can take their weights w_i^A(t) = w_i^B(t), at a time step t where synchronization most probably occurred, as a common secret key. Synchronization of neural networks can thus be used as a key exchange protocol over a public channel.

Fig. 6. Distribution of the ratio of synchronization time between networks A and B to the learning time of an attacker E.
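For completeness, here is a toy version of the parity-machine protocol of (12)-(13) (our sketch, not the authors' code; parameter values are illustrative and no claim of cryptographic strength is made): three hidden perceptrons with bounded discrete weights, output τ given by the product of the hidden bits, and an update of only those hidden units that agree with τ, performed only when the public bits τ^A and τ^B coincide.

```python
import numpy as np

K, N, L = 3, 100, 3          # hidden units, inputs per unit, weight bound

def machine_output(w, x):
    """Parity-machine output (12): product of the hidden-unit signs."""
    sigma = np.where(np.sum(w * x, axis=1) >= 0, 1, -1)  # ties mapped to +1
    return sigma, sigma.prod()

rng = np.random.default_rng(7)
wA = rng.integers(-L, L + 1, size=(K, N))   # secret initial weights of A
wB = rng.integers(-L, L + 1, size=(K, N))   # secret initial weights of B

for steps in range(1, 10**5):
    x = rng.choice((-1, 1), size=(K, N))    # public common input vectors
    sA, tA = machine_output(wA, x)
    sB, tB = machine_output(wB, x)
    if tA == tB:                            # public bits agree: apply (13)
        for i in range(K):
            if sA[i] == tA:                 # only units that agree with tau move
                wA[i] = np.clip(wA[i] - x[i] * tA, -L, L)
            if sB[i] == tB:
                wB[i] = np.clip(wB[i] - x[i] * tB, -L, L)
    if np.array_equal(wA, wB):
        break

print(steps)                 # identical weights wA == wB serve as the shared key
```

An eavesdropper running the same update rule sees τ^A and τ^B but cannot feed its own output bit back to A and B, which is precisely the asymmetry described above.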
### 5 Conclusions

Interacting neural networks have been calculated analytically. At each training step two networks receive a common random input vector and learn their mutual output bits. A new phenomenon has been observed: synchronization by mutual learning. If the learning rate η is large enough, and if the weight vectors keep normalized, then the two networks relax to an antiparallel orientation. Their weight vectors still move like a random walk on a hypersphere, but each network has complete knowledge about its partner. It has been shown how this phenomenon can be used for cryptography. The two partners can agree on a common secret key over a public channel. An opponent who is recording the public exchange of training examples cannot obtain full information about the secret key used for encryption. This works if the two partners use multilayer networks, parity machines. The opponent has all the information (except the initial weight vectors) of the two partners and uses the same algorithms. Nevertheless he does not synchronize. This phenomenon may be used as a key exchange protocol. The two partners select secret initial weight vectors, agree on a public sequence of input vectors and exchange public bits. After a few steps they have identical weight vectors which are used for a secret encryption key. For each communication they agree on a new secret key, without having stored any secret information before. In contrast to number theoretical methods the networks are very fast; essentially they are linear filters, and the complexity to generate a key of length N scales with N (for sequential update of the weights). Of course, one cannot rule out that algorithms for the opponent may be constructed which find the key in much shorter time. In fact, ensembles of opponents have a better chance to synchronize. In addition, one can show that, given the information of the opponent, the key is uniquely determined, and, given the sequence of inputs, the number of keys is huge but finite, even in the limit N → ∞ [11]. These may be good news for a possible attacker. However, recently we have found advanced algorithms for synchronization, too. Such variations are subjects of active research, and the future will show whether the security of neural network cryptography can compete with number theoretical methods.

Acknowledgments: This work profited from enjoyable collaborations with Richard Metzler and Michal Rosen-Zvi. We thank the German Israel Science Foundation (GIF) and the Minerva Center of the Bar-Ilan University for support.

### References

1. J. Hertz, A. Krogh, and R. G. Palmer: Introduction to the Theory of Neural Computation (Addison Wesley, Redwood City, 1991)
2. A. Engel and C. Van den Broeck: Statistical Mechanics of Learning (Cambridge University Press, 2001)
3. M. Biehl and N. Caticha: Statistical Mechanics of On-line Learning and Generalization, in The Handbook of Brain Theory and Neural Networks, ed. by M. A. Arbib (MIT Press, Berlin 2001)
4. E. Eisenstein, I. Kanter, D. A. Kessler and W. Kinzel, Phys. Rev. Lett. 74, 6-9 (1995)
5. I. Kanter, D. A. Kessler, A. Priel and E. Eisenstein, Phys. Rev. Lett. 75, 2614-2617 (1995); L. Ein-Dor and I. Kanter, Phys. Rev. E 57, 6564 (1998); M. Schröder and W. Kinzel, J. Phys. A 31, 9131-9147 (1998); A. Priel and I. Kanter, Europhys. Lett. (2000)
6. R. Metzler, W. Kinzel and I. Kanter, Phys. Rev. E 62, 2555 (2000)
7. D. R. Stinson, Cryptography: Theory and Practice (CRC Press 1995)
8. R. Metzler, W. Kinzel, L. Ein-Dor and I. Kanter, Phys. Rev. E 63, 056126 (2001)
9. I. Kanter, W. Kinzel and E. Kanter, Europhys. Lett. 57, 141-147 (2002)
10. M. Rosen-Zvi, I. Kanter and W. Kinzel, [cond-mat/0202350](http://arxiv.org/abs/cond-mat/0202350) (2002)
11. R. Urbanczik, private communication
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/cond-mat/0203011, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "CLOSED", "url": "" }
2002
[]
false
2002-03-01T00:00:00
[ { "paperId": "340a72e0293f9cf4fb76ef5d427ae4bb3b23cecc", "title": "Secure exchange of information by synchronization of neural networks" }, { "paperId": "1384d42b8df88bab0ce38c21144e154d1afed238", "title": "Statistical Mechanics of Learning" }, { "paperId": "419d602ef49b7425f870d06ff1ad9a695271daed", "title": "Interacting neural networks." }, { "paperId": "d00d6108400ce0cf6a46b08fddedc88b45413643", "title": "Analytical study of time series generation by feed-forward networks." }, { "paperId": "d00145b7045ba0a5c417dbc3ce83dbb452b19e5c", "title": "Generation and prediction of time series by a neural network." }, { "paperId": "6c0cbbd275bb43e09f0527a31ddd61824eca295b", "title": "Introduction to the theory of neural computation" }, { "paperId": "872f24d5f4398df4948768968d2f550697dda67e", "title": "Statistical Mechanics of On{line Learning and Generalization the Handbook of Brain Theory and Neural Networks" }, { "paperId": "bb3256d7ee4d5113349d7501d6e7667a8c4799cf", "title": "Statistical mechanics of on-line learning and generalization" }, { "paperId": null, "title": "Phys. Rev. E" }, { "paperId": "268aeb15cd2834aa09b8b193c6528429e1c3843b", "title": "Cryptography: Theory and Practice" }, { "paperId": null, "title": "Phys. Rev. Lett. Phys. Rev. E J. Phys. A Europhys. Lett" }, { "paperId": "9aaf6bc1405b5aec17c48140337c63f2a8382b91", "title": "INTRODUCTION TO THEORY" } ]
4,921
en
[ { "category": "Medicine", "source": "external" }, { "category": "Medicine", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01dd8ec5e499f9da0c81b6bfbcf13a7caf537a2b
[ "Medicine" ]
0.882879
A Privacy-Preserving Distributed Medical Data Integration Security System for Accuracy Assessment of Cancer Screening: Development Study of Novel Data Integration System
01dd8ec5e499f9da0c81b6bfbcf13a7caf537a2b
JMIR Medical Informatics
[ { "authorId": "1808544", "name": "A. Miyaji" }, { "authorId": "21853039", "name": "Kaname Watanabe" }, { "authorId": "2270956", "name": "Yuuki Takano" }, { "authorId": "40260401", "name": "Kazuhisa Nakasho" }, { "authorId": "2111249045", "name": "Sho Nakamura" }, { "authorId": "2271351628", "name": "Yuntao Wang" }, { "authorId": "6611741", "name": "H. Narimatsu" } ]
{ "alternate_issns": null, "alternate_names": [ "JMIR med informatics", "JMIR medical informatics", "JMIR Med Informatics" ], "alternate_urls": null, "id": "dc6419f3-98b3-4718-bba0-6b6d6c3d192b", "issn": "2291-9694", "name": "JMIR Medical Informatics", "type": "journal", "url": "https://medinform.jmir.org/" }
Background Big data useful for epidemiological research can be obtained by integrating data corresponding to individuals between databases managed by different institutions. Privacy information must be protected while performing efficient, high-level data matching. Objective Privacy-preserving distributed data integration (PDDI) enables data matching between multiple databases without moving privacy information; however, its actual implementation requires matching security, accuracy, and performance. Moreover, identifying the optimal data item in the absence of a unique matching key is necessary. We aimed to conduct a basic matching experiment using a model to assess the accuracy of cancer screening. Methods To experiment with actual data, we created a data set mimicking the cancer screening and registration data in Japan and conducted a matching experiment using a PDDI system between geographically distant institutions. Errors similar to those found empirically in data sets recorded in Japanese were artificially introduced into the data set. The matching-key error rate of the data common to both data sets was set sufficiently higher than expected in the actual database: 85.0% and 59.0% for the data simulating colorectal and breast cancers, respectively. Various combinations of name, gender, date of birth, and address were used for the matching key. To evaluate the matching accuracy, the matching sensitivity and specificity were calculated based on the number of cancer-screening data points, and the effect of matching accuracy on the sensitivity and specificity of cancer screening was estimated based on the obtained values. To evaluate the performance, we measured central processing unit use, memory use, and network traffic. Results For combinations with a specificity ≥99% and high sensitivity, the date of birth and first name were used in the data simulating colorectal cancer, and the matching sensitivity and specificity were 55.00% and 99.85%, respectively. In the data simulating breast cancer, the date of birth and family name were used, and the matching sensitivity and specificity were 88.71% and 99.98%, respectively. Assuming the sensitivity and specificity of cancer screening at 90%, the apparent values decreased to 74.90% and 89.93%, respectively. A trial calculation was performed using a combination with the same data set and 100% specificity. When the matching sensitivity was 82.26%, the apparent screening sensitivity was maintained at 90%, and the screening specificity decreased to 89.89%. For 214 data points, the execution time was 82 minutes and 26 seconds without parallelization and 11 minutes and 38 seconds with parallelization; 19.33% of the calculation time was for the data-holding institutions. Memory use was 3.4 GB for the PDDI server and 2.7 GB for the data-holding institutions. Conclusions We demonstrated the rudimentary feasibility of introducing a PDDI system for cancer-screening accuracy assessment. We plan to conduct matching experiments based on actual data and compare them with the existing methods.
##### Original Paper

# A Privacy-Preserving Distributed Medical Data Integration Security System for Accuracy Assessment of Cancer Screening: Development Study of Novel Data Integration System

##### Atsuko Miyaji[1,2*], PhD; Kaname Watanabe[3,4*], MD, PhD; Yuuki Takano[1], PhD; Kazuhisa Nakasho[5], PhD; Sho Nakamura[3,6], MD, PhD; Yuntao Wang[1], PhD; Hiroto Narimatsu[3,4,6], MD, PhD

1Graduate School of Engineering, Osaka University, Suita, Japan
2Japan Advanced Institute of Science and Technology, Nomi, Japan
3Cancer Prevention and Control Division, Kanagawa Cancer Center Research Institute, Yokohama, Japan
4Department of Genetic Medicine, Kanagawa Cancer Center, Yokohama, Japan
5Graduate School of Science and Technology for Innovation, Yamaguchi University, Ube, Japan
6Graduate School of Health Innovation, Kanagawa University of Human Services, Kawasaki, Japan
*these authors contributed equally

**Corresponding Author:**
Kaname Watanabe, MD, PhD
Cancer Prevention and Control Division
Kanagawa Cancer Center Research Institute
2-3-2 Nakao, Asahi-ku
Yokohama, 241-8515
Japan
Phone: 81 45 520 2222 ext 4020
Fax: 81 45 520 2216
[Email: ka-watanabe@gancen.asahi.yokohama.jp](mailto:ka-watanabe@gancen.asahi.yokohama.jp)

### Abstract

**Background:** Big data useful for epidemiological research can be obtained by integrating data corresponding to individuals between databases managed by different institutions. Privacy information must be protected while performing efficient, high-level data matching.

**Objective:** Privacy-preserving distributed data integration (PDDI) enables data matching between multiple databases without moving privacy information; however, its actual implementation requires matching security, accuracy, and performance. Moreover, identifying the optimal data item in the absence of a unique matching key is necessary. We aimed to conduct a basic matching experiment using a model to assess the accuracy of cancer screening.

**Methods:** To experiment with actual data, we created a data set mimicking the cancer screening and registration data in Japan and conducted a matching experiment using a PDDI system between geographically distant institutions. Errors similar to those found empirically in data sets recorded in Japanese were artificially introduced into the data set. The matching-key error rate of the data common to both data sets was set sufficiently higher than expected in the actual database: 85.0% and 59.0% for the data simulating colorectal and breast cancers, respectively. Various combinations of name, gender, date of birth, and address were used for the matching key. To evaluate the matching accuracy, the matching sensitivity and specificity were calculated based on the number of cancer-screening data points, and the effect of matching accuracy on the sensitivity and specificity of cancer screening was estimated based on the obtained values. To evaluate the performance, we measured central processing unit use, memory use, and network traffic.

**Results:** For combinations with a specificity ≥99% and high sensitivity, the date of birth and first name were used in the data simulating colorectal cancer, and the matching sensitivity and specificity were 55.00% and 99.85%, respectively. In the data simulating breast cancer, the date of birth and family name were used, and the matching sensitivity and specificity were 88.71% and 99.98%, respectively.
Assuming the sensitivity and specificity of cancer screening at 90%, the apparent values decreased to 74.90% and 89.93%, respectively. A trial calculation was performed using a combination with the same data set and 100% specificity. When the matching sensitivity was 82.26%, the apparent screening sensitivity was maintained at 90%, and the screening specificity decreased to 89.89%. For 214 data points, the execution time was 82 minutes and 26 seconds without parallelization and 11 minutes and 38 seconds with parallelization; 19.33% of the calculation time was for the data-holding institutions. Memory use was 3.4 GB for the PDDI server and 2.7 GB for the data-holding institutions.

**Conclusions:** We demonstrated the rudimentary feasibility of introducing a PDDI system for cancer-screening accuracy assessment. We plan to conduct matching experiments based on actual data and compare them with the existing methods.

**_(JMIR Med Inform 2022;10(12):e38922)_** [doi: 10.2196/38922](http://dx.doi.org/10.2196/38922)

**KEYWORDS**

data linkage; data security; secure data integration; privacy-preserving linkage; secure matching; privacy-preserving linkage; private set intersection; PSI; privacy-preserving distributed data integration; PDDI; big data; medical informatics; cancer prevention; cancer epidemiology; epidemiological survey

### Introduction

##### Distributed Data Integration in Epidemiological Studies

With advances in information technology and enhanced data-collection systems, health databases are becoming increasingly abundant. Similar to other countries, the government and academic societies in Japan collect and manage disease databases. In addition, there are patient-based disease databases and population-based cohort study databases that are collected and managed mainly by research institutes [1-5]. Integrating health information held in these independent databases benefits epidemiological studies and public health practices; for example, it is possible to determine important correlations and causal relationships, such as between the onset of disease and the health status of an individual, which cannot be determined using a single database. Therefore, it is important to link databases managed by different institutions [6-8].

There are challenges associated with linking independent databases. The first is the guarantee of information privacy, including the handling of personally identifiable information. Concerns and considerations regarding privacy and data security are paramount; policies and regulations on the collection, use, and movement of personally identifiable information are becoming more stringent [9]. Therefore, in data linkage, sufficient measures to prevent the leakage of personal information are required, which has led to an increase in attendant costs, including labor. The second challenge is the construction of an efficient data linkage system. In countries where a unique identification key, such as the national identification number, is given to each individual and multiple medical or welfare-related data systems are linked, more efficient matching is possible compared with countries where such unique identifiers are not provided to every citizen. Nordic countries are representative of those using such unique identifiers. However, owing to privacy concerns, many issues need to be resolved before linking the databases; therefore, only a few countries have introduced such identifiers so far [10,11].
In countries where the unique identification key system has not been put into practical use, it is even more difficult to build a system that meets information privacy requirements and linkage efficiency. Consequently, it has been impossible to link databases managed by different institutions at a practical level in Japan.

##### Secure Data Integration

To safely and effectively collate the data held by each institution in a decentralized state and use them, it is desirable to exchange only the necessary information, as far as possible, without leaking personal information to the outside. However, without a unique identification key, it is common to use personal information, such as name and date of birth, as the key to perform matching [9,12]. The methods that are widely practiced today include one in which a data provider or user performs a matching operation, and one in which a data set containing personal information is passed to a third party (data depository) to perform the matching. Both methods require the movement of the personal information that serves as the key to carry out the match. Although some studies [13,14] related to the linkage between 2 databases have been conducted, they are still vulnerable in terms of security and privacy. In fact, in a report by Kho et al [13], a hash value of names was used to match names, so that a dictionary attack can determine which hospital a patient is in. A dictionary attack is a method in which the hash values of a precreated patient list are matched with the hash values stored in a system database. As the hash values of a limited range of data, such as patient lists, are vulnerable to a dictionary attack, the use of simple hash tables should be avoided. Furthermore, the proposal by Kho et al assumes that the database is owned by a single institution. In a report by Godlove et al [14], the system and other details were not described; therefore, the method of matching is a black box. Therefore, strict countermeasures against information leakage and the costs involved are obstacles to conducting large-scale epidemiological studies.

There are technical efforts to more securely approach a solution to this issue. Under the private set intersection protocol, which has been attracting attention in recent years, data other than those commonly included in data sets, distributed and managed by multiple data-holding institutions, are kept secret from other institutions; hence, only commonly included data are accessible [15-18]. The technology discussed in a previous report [18], which is an extension of private set intersection, focuses on the fact that a data set of medical-related information is generally composed of multiple attributes. After specifying an attribute as the matching key, the data associated with the same key attribute commonly included in each institution are integrated. It is called privacy-preserving distributed data integration (PDDI) because it integrates distributed data while ensuring privacy. Notably, unlike the proposal by Kho et al [13], PDDI does not simply match the hash values of matching keys; therefore, information on whether a given patient is included in an institution is not available, and unlike Godlove et al [14], the specification is not a black box but is obvious. Studies on the application of newly developed PDDI systems to medical data are ongoing [19].
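To make the dictionary-attack risk mentioned above concrete, consider the following toy sketch (ours; all names and keys are made up): an attacker who holds a plausible candidate list does not need to invert the hash function at all, but simply hashes every candidate and tests membership against the leaked values.

```python
import hashlib

def h(key: str) -> str:
    # Unsalted hash of a matching key, as in the naive scheme criticized above.
    return hashlib.sha256(key.encode("utf-8")).hexdigest()

# What an institution might expose under a simple hash-table linkage scheme.
leaked_hashes = {h("YAMADA TARO|1970-01-02"), h("SATO HANAKO|1985-11-23")}

# The attacker enumerates a precreated candidate list (fictitious entries)
# and checks each hash: the small key space makes this trivially feasible.
candidates = ["YAMADA TARO|1970-01-02", "SUZUKI ICHIRO|1960-05-05"]
for c in candidates:
    print(c, "-> in database?", h(c) in leaked_hashes)
```

Because name-and-birthdate key spaces are small and highly structured, this attack succeeds even against a strong hash function, which is why PDDI layers probabilistic encryption on top of the hashed keys.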
The PDDI system is expected to enable the secure integration of health information held in databases managed by different institutions and to enable epidemiological studies to be conducted with high security.

##### Challenges in Implementing the Technology

PDDI is an established technology, but several additional steps must be taken before its implementation. The most important is to show that the system can maintain sufficient matching accuracy and performance for operational purposes while keeping personal information secure, even when using actual data. The matching keys that are commonly used when a national identification number or similar identifier is not available, such as name and date of birth, include various errors, such as typing errors at the time of input and orthographic variants owing to differences in the input format. In Japan especially, the lack of a standardized identification format also contributes to this effect. Therefore, the identification of identical persons tends to be associated with a certain rate of failure, lowering the matching accuracy [20]. Low matching accuracy affects outcome detection and narrows the research designs and research themes to which the system can be applied. Matching accuracy is determined by the quantity and nature of such errors and by the matching method [21,22]. The errors that can be found in the data types used as matching keys are also affected by the language and characters used in the description. The optimal method for addressing these errors must be considered separately for different countries, regions, and databases. Various strategies have been developed to increase the reliability of matching. These include prior data cleaning, standardizing formats, combining personal information that serves as matching keys, and taking various measures such as probabilistic approaches [9,12,23,24]. However, it is unclear, especially in Japan, which data items can be used as matching keys to maximize the matching accuracy where a unique matching key cannot be used.

The other aspect is system performance. PDDI systems do not consolidate the data of each institution at 1 depository institution. The information held by each institution is encrypted within that institution, and the data are collected and distributed. However, the specifications of the computer terminals of data-holding institutions and users vary considerably. Therefore, it is necessary to evaluate the performance of a linkage system for its stable use in a general-purpose environment.

The purpose of this project was to demonstrate that the security of personal information can be maintained in matching using actual data, that the matching is operationally accurate and performs well enough for PDDI implementation, and to identify which data items can be effective matching keys to perform data matching with high accuracy in situations where there is no unique matching key. However, because the use of personal information as a matching key is strictly controlled in Japan, a preliminary experiment using dummy data was required before an experiment using actual data. In this study, we evaluated the protection of personal information and the matching accuracy in a cancer-screening accuracy assessment, assuming a large-scale epidemiological study and using artificially created data that simulate cancer-screening and cancer-registration data. If feasibility is confirmed in this study, we plan to carry out a verification study using actual data. The results of these studies are expected to be applied to large-scale population-based genomic cohort studies and large-scale studies using patient databases, thus contributing to the further activation and development of database-based epidemiological research.
The results of these studies are expected to be applied to large-scale population-based genomic cohort studies and large-scale studies using patient databases, thus contributing to further activation and development of database-based epidemiological research. ### Methods ##### PDDI System Overview The features of PDDI used in this study are presented in our previous study [19], in which it is shown that PDDI consists of a secure computation server, data-holding institutions, and client. In PDDI systems, when there are multiple attributes per data sample, the database is divided into 3 types: key information, analysis target data, and others. The data to be analyzed, which are linked to the key commonly included in the database of each institution, are concealed and integrated. The key information and data to be analyzed may match. Important characteristics of PDDI systems are as follows: 1. No institution that uses the system, including those that own databases and those that receive data, can obtain any information other than the key information that is commonly shared between databases. Unlike the query-based method, the fact that 1 institution holds some information about the individual is not divulged to any other institution. 2. Key information used to match the data will not be divulged to any institution, including the PDDI secure computation server. In this paper, the PDDI secure computation server is denoted as PDDI server. 3. The processing time of each institution does not depend on the number of institutions involved in the system. There is no limit to the data available to each institution through the system. 4. No third-party institution collects or aggregates data to carry out matching. We have described the PDDI algorithm in subsequent sections. Figure 1 shows the entire algorithmic process. ----- JMIR MEDICAL INFORMATICS Miyaji et al **Figure 1.** Schematic of the privacy-preserving distributed data integration (PDDI) system algorithm. Steps 1 to 4 represent each step of the merging process using the PDDI system described in the main text. The data held by each institution are encrypted and matched by the PDDI server using the data as the matching key. The analysis target data, which are related to the matching key without distinction between institutions, are decrypted only when they are provided to the client, and the matching-key information is never provided to the client. ##### Step 1: Irreversible Compression and Encryption Each institution compresses the key used for collating the data set with a hash function, converts it into unique and irreversible information, and sends the data encrypted by homomorphic and probabilistic encryption to the PDDI server. ##### Step 2: Creation of Matching Keys The PDDI server calculates the sum of the encrypted data obtained from each institution (called an encrypted matching key) and sends these to each institution. Note that the PDDI server does not have the decryption key; therefore, it cannot decrypt the encrypted matching key. ##### Step 3: Analysis of Target Data for Set Intersection Computation Each institution decrypts the received encrypted matching key and obtains the matching key used for extracting the key that is commonly included in all institutions. Next, the analysis target data related to the commonly included key are encrypted and sent to the PDDI server. 
##### Experiment Model: Accuracy Assessment of Cancer Screening

Overview

In this study, we adopted the accuracy assessment of cancer screening as a model for the matching experiment. Cancer screening is a general term for cancer-screening programs for the general population, which are conducted to reduce mortality through the early detection of cancer (secondary prevention). It is implemented around the world, centered on programs that have been scientifically recognized to reduce mortality, such as those for breast, cervical, and colorectal cancers [25-27]. The examinee is evaluated for the risk of having cancer based on the test results of each program. Patients who are determined to be at high risk, that is, those who are highly suspected of having cancer, are encouraged to visit a medical institution. Assessing the accuracy of cancer risk detection and controlling the quality of screening, so that the number of overlooked cancers and useless tests is kept to a minimum, constitute the major roles of cancer-screening accuracy control. Data on whether a patient who was judged to be at high risk in a program had cancer within a certain period (often 1-2 years) are required to assess the accuracy of cancer screening.

The biggest challenge in assessing cancer-screening accuracy is the collection and matching of distributed data. In many cases, cancer incidence, which represents the outcome of screening, needs to be obtained by matching with another source independent of the cancer-screening database, for example, a cancer registration database. In Japan, cancer-screening data are managed in a distributed state by the municipalities that are the implementing bodies. Moreover, cancer registration data are managed in a distributed manner by prefectures. Therefore, collecting and collating these data on a national or large regional scale is difficult.
The data size to be handled is large, and when there are many target municipalities, many cumbersome procedures, which are not always standardized across municipalities, are required to obtain the data. The greater the number of municipalities involved, the greater the movement of private information and the higher the risk of leakage. Therefore, in Japan, such studies have been conducted only sporadically, using limited data from a small number of municipalities [28,29]. This system is characterized by no restrictions on the number of participating institutions or the amount of data held by the institutions and is considered an effective means of solving this problem. This system makes it easy to match the risk assessment information of distributed cancer screening with the cancer incidence information of cancer registration, which is expected to enable large-scale cancer-screening accuracy assessment of a kind that has not been possible to date. Therefore, we surmised that applying a PDDI system to the assessment of cancer-screening accuracy is possible and devised an experimental plan using this model.

In cancer-screening accuracy assessment, indicators such as sensitivity, specificity, and positive predictive value are mainly used. If cancer screening indicates a strong suspicion of cancer (high risk), the result is considered positive. In Japan, a visit to a medical institution is then recommended, so this result is often called "requiring detailed examination." The other judgments are negative. Whether the patient has cancer is evaluated by comparing the cancer incidence information in the cancer registration data for 1 to 2 years from the date of consultation with the screening result. In other words, a result is a true positive if the cancer screen is positive (there is a strong suspicion that the patient has cancer) and cancer is subsequently diagnosed. The sensitivity, specificity, and positive predictive value in the context of assessing the accuracy of cancer screening are defined as in Textbox 1.

**Textbox 1. Definition of items related to the accuracy of cancer screening**

- Screening sensitivity = proportion of patients with cancer who screen positive
- Screening specificity = proportion of patients without cancer who screen negative
- Positive predictive value for screening = proportion of cases with positive screen results who actually have cancer

The accuracy of cancer screening is indicated by adding "screening" to distinguish it from the accuracy of matching, which is described in the "Study Design" section.

##### Background of Practical Data-Matching Failures

In countries that do not have a national identification number, such as Japan, data are generally collated using personal information. In such an environment, the accuracy of matching is reduced owing to various errors that may appear in the data points used as matching keys. The sources of errors in matching keys are careless mistakes, orthographic variants owing to differences in culture and institutions, and differences in notation. The matching-key information may also change: a change of address because of moving, or a change of name because of marriage. The prevalence of errors varies depending on the format adopted by the data holder and the skill of the person entering the data. Errors are also heavily influenced by the language in which the data are written. Japanese is the de facto official language in Japan, where we live, and it is adopted as the default language in most systems and services in Japan. Many errors in Japanese registry data are due to language-specific problems. Details of the errors originating from Japanese language features are described in Multimedia Appendix 2.

##### Study Design

As mentioned in the Introduction section, the purpose of this project is to demonstrate the safety, accuracy, and performance of data matching using the PDDI system and to identify effective data items as matching keys. This study is the first step of the project. We used the PDDI system to perform a data set matching experiment between simulated cancer-screening and cancer registration data sets, in which the PDDI system was tasked with matching data belonging to the same individuals between the sets. Feasibility was evaluated based on data security, matching accuracy (sensitivity and specificity), and system performance. In this experiment, we performed matching under multiple conditions using personal information, such as first and last names, phonetic spelling, date of birth, and address, and evaluated how much matching accuracy could be obtained by combining matching keys. Various matching algorithms have been devised to prevent a decrease in sensitivity while maintaining specificity [9,12,23]. However, the purpose of this study was to evaluate the PDDI system, not a novel matching method for improving matching accuracy; therefore, these advanced matching algorithms were not considered. Methods for more accurate and practical matching will be considered in the next steps of this project. Instead, we estimated how much the matching accuracy would affect the estimation of cancer-screening accuracy, and on this basis, the feasibility of applying the model in this study was evaluated.

Unlike conventional systems that use a simple hash function to compress privacy information or that require a single server to collect and process all data, our system uses the latest security techniques. For example, all data passing through the network are encrypted, and decryption cannot be performed by a single institution but only through the cooperation of all distributed institutions, without centralizing the data. Therefore, it is important to verify that the system can be implemented on a general-purpose computer rather than on a special server. We evaluated the performance of the system: the total data processing time, memory use, and network traffic required by PDDI. The PDDI server was introduced to reduce the processing time and the amount of communication between data-holding institutions. In practice, the data processing time of the data-holding institutions and the total data processing time required to collect the commonly included information are of critical importance.

##### Setting of the Matching Experiment

Four data sets were created to simulate cancer-screening and cancer registration data for 2 types of cancer: colorectal and breast. First, using web-based test-data generation services that are open to the public in Japan, we created pseudodata that included name, gender, date of birth, and address to serve as matching-key information [30-32]. These services automatically create personal information, such as name, date of birth, address, and telephone number, from random combinations that are common in Japan.
By selecting the required information items and the desired amount of generated data, the user can obtain data that simulate nonexistent personal information. To account for the possibility that data generated by any particular service may contain certain tendencies or biases, we generated one-third of all the data points with each of the 3 separate services. Next, from the created pseudodata, 60 cases of colorectal cancer and 62 cases of breast cancer were selected as common data that can be matched. These were included in both the cancer-screening and cancer registration data sets. To make the simulated data resemble actual data, we consulted staff with abundant experience in registry management and a physician who is an expert in epidemiological research, and the data were modified to include errors and orthographic variants that are often empirically recognized. From experience, the proportion of errors in such data sets is expected to be <10%. Previous studies have reported that the proportion of errors and omissions in the data available for matching keys in disease registries and medical and administrative databases is approximately 15% or less [33-35]. However, the actual prevalence of errors is unknown, as changes in culture and society are expected to affect their occurrence rates. Therefore, to create data that would be more difficult to match, the data were rewritten to increase the number of errors, to the extent that a data point could have errors in multiple items. Errors were made more prevalent in the colorectal cancer data set than in the breast cancer data set, such that the colorectal cancer data set would be more difficult to match. Subsequently, the remaining pseudodata were added, and finally, a pseudo–data set of 2000 colorectal cancer screenings, 17,866 colorectal cancers, 1048 breast cancer screenings, and 29,949 breast cancers was created.

Pseudodata items other than the matching keys included serial numbers and pseudoidentification numbers for each database in all data sets. The following pseudodata were randomly added to the colorectal cancer-screening data set: test date, test results, and risk assessment of the fecal occult blood test, which is commonly used in Japan. The diagnosis name; International Classification of Diseases, Tenth Revision code; and date of diagnosis were added to the cancer registration data sets. Pseudodata items other than the matching keys were only decorative and did not affect the matching experiment. Table 1 lists the errors and orthographic variants added to the data sets. Examples of errors specific to Japanese in the data sets used in the experiments in this study are shown in Figure S1 in Multimedia Appendix 2.
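As a loose illustration of this error-injection step, the short Python sketch below perturbs matching keys at a configurable rate. The error types, field names, and rates are assumptions for the example, not the exact procedure or data used in the study, which also covered Japanese-specific errors such as kanji variants.

```python
# Illustrative only: inject matching-key errors into pseudodata, in the spirit
# of Table 1 (typing errors, omissions, and so on). Not the study's procedure.
import random

def inject_typo(s: str) -> str:
    """Replace one random character to mimic a data-entry typing error."""
    if not s:
        return s
    i = random.randrange(len(s))
    return s[:i] + random.choice("abcdefghijklmnopqrstuvwxyz") + s[i + 1:]

def perturb_record(rec: dict, rate: float = 0.10) -> dict:
    """Corrupt each matching key independently with probability `rate`."""
    out = dict(rec)
    for field in ("family_name", "first_name", "birth_date", "address"):
        if field in out and random.random() < rate:
            out[field] = inject_typo(out[field])
    return out

record = {"family_name": "sato", "first_name": "taro",
          "birth_date": "19970911", "address": "yokohama"}
print(perturb_record(record, rate=0.25))
```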
**Table 1.** Errors and orthographic variants included in the data set.

| Class | Error type | Matching key | Colorectal cancer (n=60), n (%) | Breast cancer (n=62), n (%) |
|---|---|---|---|---|
| Data entry errors | Typing errors | Name | 3 (5) | 1 (2) |
| Data entry errors | Typing errors | Birth date | 15 (25) | 0 (0) |
| Data entry errors | Typing errors | Address | 6 (10) | 2 (3) |
| Data entry errors | Typing errors | Sex | 5 (8) | 0 (0) |
| Data entry errors | Kanji conversion errors | Name | 5 (8) | 6 (10) |
| Data entry errors | Kanji conversion errors | Address | 2 (3) | 0 (0) |
| Data entry errors | Misreading | Name | 10 (17) | 8 (13) |
| Data entry errors | Missing letters | Name | 2 (3) | 1 (2) |
| Data entry errors | Omission | Address | 4 (7) | 0 (0) |
| Data entry errors | Omission | Name | 10 (17) | 1 (2) |
| Orthographic variants | Variant kanji | Name | 7 (12) | 4 (6) |
| Orthographic variants | Format | Address | 5 (8) | 15 (24) |
| Data change | Name change | Name | 2 (3) | 1 (2) |
| Data change | Alias | Name | 2 (3) | 0 (0) |
| Data change | Moving | Address | 2 (3) | 8 (13) |
| Unmatched on multiple keys | | | 25 (42) | 14 (23) |
| Total | | | 51 (85) | 36 (59) |

In the experiment, 6 items of information were used as matching keys: family name (kanji), family name (kana), first name (kanji), first name (kana), date of birth, and sex. Matching was performed by combining ≥2 of these items. In the case of colorectal cancer, 57 combinations were possible: 6C2 + 6C3 + 6C4 + 6C5 + 6C6 = 57. For breast cancer, outside of a small number of exceptional cases, all screening targets were female, so sex was not used as a key and only 26 combinations were possible: 5C2 + 5C3 + 5C4 + 5C5 = 26.

In the PDDI protocol, a data array called a Bloom filter is encrypted element by element. More than 90% of the total execution time is spent on this encryption process. The encryption of one element of the data array is independent of that of the other elements, so parallelization is easy. The multiprocessing module in the Python Standard Library (version 3.9; Python Software Foundation) was used for this parallelization. The PC environment used in the experiment was as follows: central processing unit (CPU), Intel Xeon CPU E5-2690 v4 @ 2.60 GHz (28 cores); memory, 48 GB. The programs of all the institutions were executed on 1 PC.

##### Evaluation

Items related to matching accuracy are referred to below with "matching" to distinguish them from the accuracy of cancer screening. To calculate the matching accuracy, the pseudo–cancer-screening data were used as the reference point, and when a record matched the specified matching-key conditions in the pseudo–cancer registration data, the match was considered positive. The case in which no matching data were present was defined as negative. This matching experiment was conducted between data sets in which records for the same persons had been placed in both data sets in advance. Therefore, the trueness and falseness of matching were determined as follows: cases in which the matching result correctly matched data belonging to the same person were considered true, and those in which it did not were considered false. In other words, a false positive means that data originally registered under separate individuals were erroneously matched, and a false negative means that data that should have been matched (because they belong to the same person) were not matched. In an environment in which matching keys that uniquely identify an individual are completely error-free, matching is perfectly accurate. In this experiment, as an evaluation of matching accuracy, the correspondence between positive and negative matches and their trueness or falseness was cross-tabulated to calculate the matching sensitivity and matching specificity. On this basis, combinations of matching keys with high matching sensitivity and matching specificity, that is, good matching accuracy, were extracted.
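The cross-tabulation logic can be sketched in a few lines of Python. The following assumes record-level evaluation, a toy data set, and a simple all-keys-must-agree rule; the field names and example values are illustrative, not the study's data.

```python
# Minimal sketch of matching sensitivity/specificity from a cross-tabulation.
# A screening record is "positive" if any registry record satisfies the
# matching-key condition; the result is "true" if the linked records belong
# to the same person. Toy data and exact-equality matching are assumptions.
screening = [
    {"id": "s1", "person": "p1", "birth": "19970911", "name": "sato"},
    {"id": "s2", "person": "p2", "birth": "19850102", "name": "suzuki"},
    {"id": "s3", "person": "p3", "birth": "19901230", "name": "tanaka"},
]
registry = [
    {"person": "p1", "birth": "19970911", "name": "sato"},
    {"person": "p3", "birth": "19901230", "name": "tanka"},  # typo -> missed match
]

tp = fp = fn = tn = 0
registry_people = {r["person"] for r in registry}
for s in screening:
    hits = [r for r in registry
            if r["birth"] == s["birth"] and r["name"] == s["name"]]  # all keys must agree
    if hits:
        if any(r["person"] == s["person"] for r in hits):
            tp += 1          # correctly linked to the same person
        else:
            fp += 1          # linked, but to a different person's record
    elif s["person"] in registry_people:
        fn += 1              # the same person exists in the registry but was not linked
    else:
        tn += 1              # correctly left unmatched

sensitivity = tp / (tp + fn) if tp + fn else float("nan")
specificity = tn / (tn + fp) if tn + fp else float("nan")
print(f"matching sensitivity = {sensitivity:.1%}, matching specificity = {specificity:.1%}")
```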
For the estimation of the effect of matching accuracy on the assessment of cancer-screening accuracy, we referred to past studies and assumed 2 scenarios: one in which the true accuracy of cancer screening involved a sensitivity of 90% and a specificity of 90% and another with a sensitivity of 60% and a specificity of 90% [36-38]. Errors between the true and estimated values were calculated for screening sensitivity, screening specificity, and screening positive predictive value. For matching accuracy, simulations were carried out under 3 settings: the matching sensitivity was held at 100% while the matching specificity was varied stepwise, the matching specificity was held at 100% while the matching sensitivity was varied stepwise, and each parameter was set to the corresponding value observed in the matching experiment. The estimation assumed a group that underwent cancer screening in a certain year. The incidence of new cancer was set at 775.7 per 100,000 person-years based on the national average in Japan. The data size did not affect the estimation, but at the time of calculation, it was set to 1000 people according to the parameters of this experiment.

In the performance evaluation experiment, we attempted to simulate a scenario in which the system is used by institutions that are geographically distant from one another. Therefore, we used 6 computers installed at Osaka University and Yamaguchi University (4 of which simulated data-holding institutions). In the experiment, we measured CPU use, memory use, and network traffic for 3 data sizes: 2^10, 2^12, and 2^14. We also implemented multiprocess parallelization and measured its speedup ratio.

##### Ethics Approval

This study was approved by the institutional review board of the Kanagawa Cancer Center (2021 epidemiology-135).

### Results

##### Data Protection

In our experiments, 2 distributed institutions independently held the cancer-screening and cancer registration data, and each data set included the items birth date, first name, family name, and sex, which were used as matching keys. In our system, in addition to the use of probabilistic encryption, all matching keys and all information passing through the network outside an institution are encrypted, and no server handles raw data, which remain stored in the separate distributed institutions. Furthermore, no single institution holds the decryption key, so no single institution can reveal the information. This implies that our system does not move any private information out of any institution and thus avoids privacy risk.

##### Matching Accuracy

The results of matching using PDDI are shown below. The preliminary experiments showed that when only 1 matching key is used, the number of false positives for matching increases and the specificity decreases significantly (Table S2 in Multimedia Appendix 3). Figure 2 shows the numbers of false positives and false negatives when the pseudodata for colorectal cancer and breast cancer were matched using various combinations of information. In the case of the colorectal cancer data, the minimum number of false negatives for matching was 27, and the minimum number of false positives for matching was 0. Ideally, all 60 common records would be output; however, at most 33 (60 − 27) cases were output correctly. For the breast cancer data, the minimum number of false negatives for matching was 7, and the minimum number of false positives for matching was 0. Similarly, it is desirable that all 62 common records be output, but at most 55 (62 − 7) cases were output correctly.
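The estimation described above can be reproduced with a few lines of arithmetic. The sketch below assumes, as a simplification, that matching errors occur independently of the screening result: true cases are recorded as cancer with probability equal to the matching sensitivity, and non-cases are falsely recorded as cancer with probability 1 minus the matching specificity. Under these assumptions it reproduces the figures reported in Table 3 (for example, an apparent screening sensitivity of 80.93% at a matching sensitivity of 100% and a matching specificity of 99.90%).

```python
# Sketch of the estimation behind Table 3, assuming matching errors act
# independently of the screening result. ms/msp = matching sensitivity and
# specificity; se/sp = true screening sensitivity and specificity.
def apparent_accuracy(ms, msp, se=0.90, sp=0.90, n=1000, prev=775.7 / 100_000):
    cancer, healthy = n * prev, n * (1 - prev)
    # expected counts by (recorded cancer status, screening result)
    rec_pos_scr_pos = cancer * se * ms + healthy * (1 - sp) * (1 - msp)
    rec_pos_scr_neg = cancer * (1 - se) * ms + healthy * sp * (1 - msp)
    rec_neg_scr_pos = cancer * se * (1 - ms) + healthy * (1 - sp) * msp
    rec_neg_scr_neg = cancer * (1 - se) * (1 - ms) + healthy * sp * msp
    sens = rec_pos_scr_pos / (rec_pos_scr_pos + rec_pos_scr_neg)
    spec = rec_neg_scr_neg / (rec_neg_scr_neg + rec_neg_scr_pos)
    ppv = rec_pos_scr_pos / (rec_pos_scr_pos + rec_neg_scr_pos)
    return sens, spec, ppv

for ms, msp in [(1.00, 0.9990), (0.85, 1.00), (0.8871, 0.9980)]:
    s, p, v = apparent_accuracy(ms, msp)
    print(f"matching {ms:.2%}/{msp:.2%} -> screening sens {s:.2%}, spec {p:.2%}, PPV {v:.2%}")
```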
**Figure 2.** Number of false positives and false negatives. The points are placed according to the number of false positives and false negatives for each experimental setting. Part A shows the results for the data simulating colorectal cancer, and part B shows the results for the data simulating breast cancer.

Table 2 presents an excerpt of the matching results; only combinations with a matching specificity of ≥99% are shown. In this pseudo–data set, it can be inferred that combinations of matching keys that include the date of birth are particularly effective. In the colorectal cancer pseudodata, among the combinations with a matching specificity of ≥99%, the highest matching sensitivity was obtained when the date of birth and first name (kana) were used as keys: the matching sensitivity was 55.00%, and the matching specificity was 99.85%. For the breast cancer pseudodata, the highest matching sensitivity was obtained when the date of birth and family name (kana or kanji) were used as keys: the matching sensitivity was 88.71%, and the matching specificity was 99.80%. Among the combinations with 100% matching specificity, the highest matching sensitivity was 48.33% for the data simulating colorectal cancer and 82.26% for the data simulating breast cancer.

**Table 2.** Matching results between the cancer-screening and cancer registration data (excerpt)[a].

| Class and matching key | False positive, n | False negative, n | Sensitivity (%) | Specificity (%) |
|---|---|---|---|---|
| **Colorectal cancer** | | | | |
| Birth date, first name (kana) | 3 | 27 | 55.00 | 99.85 |
| Birth date, first name (kana), family name (kana) | 0 | 31 | 48.33 | 100 |
| Birth date, sex, first name (kana) | 2 | 28 | 53.33 | 99.90 |
| Birth date, sex, family name (kana) | 1 | 29 | 51.67 | 99.95 |
| **Breast cancer** | | | | |
| Birth date, family name (kana) | 2 | 7 | 88.71 | 99.80 |
| Birth date, family name (kanji) | 2 | 7 | 88.71 | 99.80 |
| Birth date, first name (kanji) | 1 | 9 | 85.48 | 99.90 |
| Birth date, first name (kana), family name (kanji) | 0 | 11 | 82.26 | 100 |

aResults of the matching experiment between the cancer-screening and cancer registration data for each matching-key combination used. Cases in which all key data shown in the matching-key column successfully corresponded were considered positive matches.

Table 3 shows the effect of matching accuracy on the estimation of the sensitivity and specificity of cancer screening, based on the model used in this experiment, an assessment of the accuracy of cancer screening. The matching sensitivities examined were 90%, 85%, and 50%, and the matching specificities were 99.99%, 99.90%, and 99.80%, in addition to the values obtained experimentally. Assuming that the true values of both screening sensitivity and screening specificity are 90%, if the matching specificity is held at 100% and the matching sensitivity is reduced to 90%, 85%, and 50%, the apparent screening specificity becomes 89.94% (−0.06%), 89.91% (−0.10%), and 89.69% (−0.34%), respectively (relative changes from the true value in parentheses). Thus, as the matching sensitivity decreases, the screening specificity is increasingly underestimated. Conversely, if the matching specificity decreases, the screening sensitivity is underestimated. On the basis of the experimental results for the data set simulating breast cancer, when calculated with a matching sensitivity of 88.71% and a matching specificity of 99.80%, the apparent value of the screening sensitivity was 72.09% (−19.9%) and that of the screening specificity was 89.93% (−0.08%); that is, the rate of change in the apparent value of the screening sensitivity was large.
However, when using the results of another combination and calculating with a matching sensitivity of 82.26% and a matching specificity of 100%, the apparent value of the screening sensitivity is 90% (no decrease), and the apparent value of the screening specificity is 89.89% (−0.12%). In other words, when the matching specificity is sufficiently high, even if the matching sensitivity is somewhat low, the deviation from the true value remains small for both screening sensitivity and screening specificity. As shown in Table 3, this tendency was maintained even in the estimation assuming a true screening sensitivity of 60%. Regarding the positive predictive value of screening, a decrease in matching sensitivity makes it appear smaller than the true value, whereas a decrease in matching specificity makes it appear larger; here, too, the effect of matching specificity is the greater one.

**Table 3.** Estimation of the impact of matching accuracy on the screening accuracy[a].

| Matching sensitivity (%) | Matching specificity (%) | Screening sensitivity, true (%) | Screening sensitivity, estimate (%) | Screening specificity, true (%) | Screening specificity, estimate (%) | Positive predictive value, true (%) | Positive predictive value, estimate (%) |
|---|---|---|---|---|---|---|---|
| 90 | 100 | 90 | NA[b] | 90 | 89.94 | 6.6 | 5.92 |
| 85 | 100 | 90 | NA | 90 | 89.91 | 6.6 | 5.59 |
| 50 | 100 | 90 | NA | 90 | 89.69 | 6.6 | 3.29 |
| 100 | 99.99 | 90 | 88.99 | 90 | NA | 6.6 | 6.58 |
| 100 | 99.90 | 90 | 80.93 | 90 | NA | 6.6 | 6.67 |
| 100 | 99.80 | 90 | 73.70 | 90 | NA | 6.6 | 6.76 |
| *88.71* | *99.80* | *90* | *72.09* | *90* | *89.93* | *6.6* | *6.02* |
| *82.26* | *100* | *90* | *90.00* | *90* | *89.89* | *6.6* | *5.41* |
| 90 | 100 | 60 | NA | 90 | 89.96 | 4.5 | 4.03 |
| 85 | 100 | 60 | NA | 90 | 89.94 | 4.5 | 3.81 |
| 50 | 100 | 60 | NA | 90 | 89.81 | 4.5 | 2.24 |
| 100 | 99.99 | 60 | 59.37 | 90 | NA | 4.5 | 4.49 |
| 100 | 99.90 | 60 | 54.33 | 90 | NA | 4.5 | 4.58 |
| 100 | 99.80 | 60 | 49.81 | 90 | NA | 4.5 | 4.67 |
| *88.71* | *99.80* | *60* | *48.81* | *90* | *89.96* | *4.5* | *4.17* |
| *82.26* | *100* | *60* | *60.00* | *90* | *89.68* | *4.5* | *3.18* |

aThe table shows the impact of matching accuracy on the cancer-screening accuracy estimates when the true sensitivity of cancer screening is set at 90% and 60% and the true specificity is set at 90%. The cancer incidence rate is approximately 775.7 per 100,000 person-years, the national average in Japan. The italicized rows show the estimates obtained using the matching accuracy measured experimentally.

bNA: not affected, that is, no change occurred between the true and estimated values.

In principle, when the matching sensitivity is 100%, even if the matching specificity is reduced, people without cancer are misidentified as having cancer at the same rate whether their screening result was negative (true negatives) or positive (false positives); therefore, the apparent specificity of cancer screening does not change. Similarly, when the matching specificity is 100%, even if the matching sensitivity decreases, people with cancer are misidentified as having "no cancer" at the same rate whether their screening result was positive (true positives) or negative (false negatives); therefore, the apparent sensitivity of cancer screening does not change. These values are accordingly depicted as not affected, except where the matching sensitivity and matching specificity obtained from the matching experiment are used.

##### Performance

The results of the performance evaluation experiment are described below. The specifications of the computers used in the experiment are listed in Table S1 in Multimedia Appendix 1. Figure 3 shows the relationship between the amount of data and the execution time.
**Figure 3.** Execution time. The graph shows the relationship between the amount of data and the execution time. The solid line shows the execution time without parallelization, and the dashed line shows the execution time with parallelization.

As shown in Figure 3, the amount of data and the execution time are almost proportional. Furthermore, with 2^14 (16,384) data points, the nonparallelized execution time was 82 minutes and 26 seconds, whereas with parallelization, the execution time was 11 minutes and 38 seconds; hence, a 7.1-fold speedup was observed with parallelization. Figure 4 shows the changes in the CPU use of the PDDI server and the data-holding institutions when the process is executed on 2^14 data points without parallelization. As can be observed in this graph, 80.67% of the execution time is spent on processing by the PDDI server, and the calculation time of the data-holding institutions accounts for only 19.33%.

**Figure 4.** Changes in central processing unit (CPU) usage. The graphs show the changes in the CPU usage of the privacy-preserving distributed data integration (PDDI) server and the data-holding institutions when the process is executed on 2^14 data points without parallelization. Part A represents the results for the PDDI server, and part B represents the results for the data-holding institution.

Figure 5 shows the relationship between the amount of data and the memory use of the PDDI server and the data-holding institutions. Memory use increases linearly with the amount of data. However, even during parallelization for 2^14 data points, which uses a large amount of memory, the PDDI server required no more than 3.4 GB of memory, and the data-holding institutions required no more than 2.7 GB of memory.

**Figure 5.** Memory usage. The graphs show the relationship between the amount of data and the memory usage of the privacy-preserving distributed data integration (PDDI) server and the data-holding institutions. Part A represents the results for the PDDI server, and part B represents the results for the data-holding institution.
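The parallelization measured above exploits the fact that element-wise encryption of a Bloom filter is embarrassingly parallel. The sketch below shows the general pattern with the standard-library multiprocessing module; `toy_encrypt` is a stand-in workload (a modular exponentiation of roughly the right shape), not the actual PDDI encryption routine, and the pool and chunk sizes are illustrative.

```python
# Minimal sketch of element-wise parallel encryption with multiprocessing.
# `toy_encrypt` is a stand-in workload, not the actual PDDI encryption code.
from multiprocessing import Pool

N2 = (1000003 * 1000033) ** 2   # toy modulus squared, far too small for real use

def toy_encrypt(bit: int) -> int:
    # modular exponentiation of roughly the same shape as one encryption step
    return pow(3 + bit, 65537, N2)

if __name__ == "__main__":
    bloom_bits = [0, 1] * (2 ** 13)          # a 2^14-element Bloom filter
    with Pool(processes=8) as pool:
        # each worker process encrypts a chunk of the array independently
        ciphertexts = pool.map(toy_encrypt, bloom_bits, chunksize=1024)
    print(f"{len(ciphertexts)} elements encrypted in parallel")
```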
### Discussion

##### Evaluation of the Matching Experiment

In this study, we conducted a matching experiment using the accuracy assessment of cancer screening as a model, matching cancer-screening data with cancer registration data. In the experiment, all matching information is transformed into Bloom filters, encrypted within each institution, and then sent to the PDDI server. Probabilistic encryption was used; this implies that the same matching key is compressed and randomly encrypted into different ciphertexts. For example, if the birth date of patients A and B in the cancer registration data set is 19970911 in both cases, the compressed and randomly encrypted values are still not equal to each other. Unlike simple matching using a hash value [13], our scheme is secure against dictionary attacks because the same value is encrypted into different values owing to the probabilistic encryption.

The matching keys appearing in the combinations that performed particularly well, with few false positives and false negatives, are all registered in most databases in Japan. It is therefore highly likely that these keys can be applied to existing databases. The matching sensitivity remained in the 50% range for the simulated colorectal cancer data, in which 85% of the common records contained matching-key errors, but for the simulated breast cancer data, in which 59% of the common records contained matching-key errors, the matching sensitivity was approximately 85%. This experiment intentionally used data sets that were difficult to match owing to a high prevalence of errors and a large amount of data containing errors in multiple matching keys. The errors contained in the 2 data sets differ, as shown in Table 1, and these results cannot be compared directly, but, in general, the fewer the errors in the matching keys, the better the matching accuracy. Although cultural backgrounds and eras vary, previous studies have shown that the proportion of errors and omissions in disease registries and medical and government databases is <15% for matching-key data such as name, zip code, and date of birth [33-35]. On the basis of the opinions of staff with abundant experience in registry management, we predicted that up to approximately 10% of the actual data used for cancer-screening accuracy assessment in Japan include an error in the matching key. In principle, the false-negative rate cannot be greater than the percentage of data with errors contained in the data set; therefore, it is estimated that a matching sensitivity of ≥90% can be obtained in verification experiments using actual data. When we re-estimated with the same error distributions as in the 2 experimental data sets but with the error prevalence set at 10%, the matching sensitivity in the colorectal cancer data was 94.70% when the date of birth and first name (kana) were used as the matching key, and in the breast cancer data, it was 98.09% when the date of birth and family name (kana or kanji) were used as the matching key. Regarding the specificity of matching, the combinations of keys shown in Table 2 maintained a high specificity of ≥99% in this estimation.

In practical use, the influence of matching on the outcomes and evaluation indices to be derived is more important than the numerical value of the matching accuracy itself. As shown in Table 3, when assessing test accuracy for infrequent events, such as cancer, changes in the matching specificity have a significant effect on the apparent value of the test accuracy. In our model, a slight decrease in matching sensitivity had a relatively small effect on the screening sensitivity and screening specificity. In other words, it is highly important to keep the matching specificity as high as possible to prevent underestimation of the screening sensitivity and screening specificity. The estimation shows that a combination of matching keys with 100% matching specificity has a small effect on the sensitivity and specificity of cancer screening, even if the matching sensitivity is low. Assuming that the true screening sensitivity and screening specificity are both 90%, even when the matching specificity is not 100%, as long as it is ≥99.97%, the error in the estimated screening sensitivity remains within 5% even if the matching sensitivity is 85%. Therefore, when aiming for accurate estimates of the sensitivity of cancer screening, it is desirable to select a matching key or matching algorithm that improves the matching sensitivity as much as possible without reducing the matching specificity. Matching specificity has a greater effect than matching sensitivity on the positive predictive value of screening; however, the positive predictive value is more susceptible to matching sensitivity than the screening sensitivity or screening specificity are.
Therefore, when focusing on the positive predictive value of screening as the index, it is necessary to select the matching key in consideration of not only the matching specificity but also the decrease in matching sensitivity.

Matching specificity in this experiment is defined as the number of people who are determined not to have cancer as a result of matching divided by the number of people who do not have cancer among those included in the cancer-screening data set. Therefore, the matching specificity is affected by the ratio of the size of the cancer registration data set to that of the cancer-screening data set and by the percentage of true patients with cancer included in the cancer-screening data set. The cancer-screening and cancer registration data sets used in this experiment contained approximately 1000 to 2000 and approximately 17,000 to 30,000 records, respectively. In Japan, where the cancer-screening rate is low, this is roughly equivalent to the number of cancer screenings in a small municipality and the number of cancers in a large prefecture; cancer-screening data are managed by each municipality, which is the implementing body, and cancer registration data are managed by each prefecture. Epidemiological studies may have to deal with even larger cancer-screening data sets. In that case, the difference in data size from the cancer registration data set is smaller than in this experiment; therefore, the matching specificity is expected to be higher. As the errors in the data sets of this experiment do not necessarily reflect the actual prevalence, the sensitivity and specificity obtained here are only reference values. Even so, it is expected that the PDDI system can be used for the assessment of cancer-screening accuracy through matching with cancer registration data by appropriately adjusting the matching conditions.

The performance evaluation experiments verified that the execution time of the PDDI system was almost proportional to the amount of data and that the execution time in parallel execution was 43 seconds per 1000 data samples. With the pseudodatabases used, the execution was completed in approximately 21 minutes, which is sufficient performance for epidemiological studies. The effect of the performance of the computer installed at a data-holding organization on the execution time is relatively small, approximately 20% of the total, and its memory use is <1 GB. Therefore, it was shown that the processing speed is acceptable even with the performance of an ordinary laptop PC. The maximum network traffic of the PDDI system in this experiment was 858 Mbps. Even so, the execution time consumed by communication is small, and if the communication speed of the data-holding organization is ≥10 Mbps, we do not believe that there will be any problems in using this system.

##### Challenges for Next Experiments Using Practical Data

On the basis of this study, we plan to conduct a verification experiment using actual cancer-screening and cancer registration data. In this experiment, the number of errors in actual data was unknown; therefore, the experiment was conducted using data sets with a large number of errors. In the next matching experiment, using actual data, we plan to determine the degree of matching accuracy that can be obtained in comparison with a method that partly relies on matching based on human judgment. On this basis, it is possible to realistically estimate the extent to which matching can introduce errors into examination accuracy.
Therefore, it will be possible to perform higher-quality evaluations for practical use. Regarding the performance evaluation, as shown in the results of this experiment, the calculation time and memory consumption of the terminals depend on the amount of data. The main purpose of this experiment was to evaluate feasibility, and the data set used had fewer items than actual data contain. Therefore, in the next stage, we will confirm the performance using data on the scale of the municipalities and prefectures in which the system may actually be used. On the basis of those results, a trial calculation will be needed to determine the size of the data sets that can be matched.

##### Implementation for Practical Epidemiological Studies

Through this experiment and estimation, we demonstrated that the use of matching with the PDDI system for cancer-screening accuracy assessment deserves consideration. This system is expected to be applicable to other types of epidemiological research because it assists in data matching between databases managed by different institutions. We considered its applicability, based on matching sensitivity and specificity, using cohort studies and case-control studies, which are typical epidemiological study designs, as examples.

Consider a cohort study examining the association between a factor and cancer incidence, in which the risk ratio of cancer incidence in people who have the factor is determined relative to those who do not; each person's data in the cohort are matched with cancer registration data to record cancer incidence. The estimation for this setting is presented in Table S3 in Multimedia Appendix 4. The risk ratio does not deviate from the true value when only the matching sensitivity decreases. If the matching specificity decreases, the risk ratio is underestimated. However, the estimation shows that, at a matching sensitivity and matching specificity equivalent to those of this matching experiment, the decrease in the risk ratio is only approximately 10%, even when the prevalence of the factor is 75%.
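The cohort-study behavior just described can be seen in a few lines of arithmetic. The sketch below uses the same simplifying assumption as before (matching errors act independently of exposure status); the true risk ratio and baseline incidence are illustrative values, and the exact figures in Table S3 in Multimedia Appendix 4 come from the study's own parameters.

```python
# Sketch of how matching errors propagate to a cohort-study risk ratio,
# assuming matching errors are independent of exposure status. The true
# risk ratio and baseline incidence below are illustrative values only.
def apparent_risk_ratio(rr_true, ms, msp, baseline=775.7 / 100_000):
    inc_unexposed = baseline
    inc_exposed = baseline * rr_true

    def recorded_incidence(inc):
        # true cases are found with probability ms; non-cases are falsely
        # linked to a registry record with probability (1 - msp)
        return inc * ms + (1 - inc) * (1 - msp)

    return recorded_incidence(inc_exposed) / recorded_incidence(inc_unexposed)

print(apparent_risk_ratio(2.0, ms=0.8871, msp=1.0))     # 2.0: sensitivity loss alone cancels out
print(apparent_risk_ratio(2.0, ms=0.8871, msp=0.998))   # ~1.77: specificity loss biases toward 1
```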
Next, let us assume a case-control study using a data set that links the factors to be examined with data on the presence or absence of a disease by matching. Table S4 in Multimedia Appendix 4 shows a trial calculation for a common disease with a high prevalence, here diabetes, and Table S5 in Multimedia Appendix 4 shows a trial calculation for ulcerative colitis as an example of a disease with a low prevalence. Poor matching accuracy causes systematic errors in factor exposure in the case and control populations, which tends to underestimate the odds ratio estimates. This effect can be greater for the odds ratios of diseases with a low prevalence. Therefore, when assuming the use of the PDDI system in cohort and case-control studies, care must be taken in selecting the target disease and in allowing for possible underestimation of the odds ratio. However, if appropriate calculations are made, a large variety of applications can be fully examined.

The advantage of the PDDI system is that it can provide data to users in an already-matched state, even across ≥3 databases. Currently, in research that integrates data managed by different institutions without a unique identification key, a step-by-step process is necessary, such as collecting data from all target institutions and then performing a match, or narrowing down the target population and repeating the match. In the PDDI system, by contrast, although the data are distributed and stored in different institutions, it is possible to retrieve matched data that satisfy the specified conditions. As in other methods [39], it does not assume prior linkage. Therefore, the PDDI system is particularly useful when data obtained from the databases of ≥3 institutions are combined and analyzed. Owing to this characteristic, the system enables the safe and efficient integration of data even in an environment such as Japan's, where cancer-screening data are distributed and stored across many municipalities and integration would otherwise require multiple movements of private information.

##### Limitations

This study has several limitations. It was conducted as a preliminary step before experiments using real-life data. The data set used in this experiment is a pseudo–data set created using software that is open to the public; it does not reflect the amount or ratio of errors mixed into actual data, nor does it cover all types of errors contained in real-world data. As the types and number of errors contained in actual data depend on the input style of each database and the ability of the person entering the data, subsequent verification experiments using actual data are required. In this study, we dealt only with matching under the condition that all the selected matching keys matched and did not use complicated algorithms for partial matches. We did not examine the extent to which the matching sensitivity and matching specificity shown in this study could be improved by further refinement of the matching methods. The experiment used local databases in Japan as the environment, and, as noted, the error patterns are also influenced by language, culture, and institutions. Therefore, it is unlikely that these results can be applied directly to other countries and regions.

##### Conclusions

As a first step toward implementing PDDI in epidemiological studies, we evaluated its feasibility in a model of cancer-screening accuracy assessment in terms of safety, matching accuracy, and performance through a matching experiment using dummy data. This system makes it possible to collate only the information related to the shared data, without disclosing the data distributed and managed by multiple institutions and without using a third party. The matching experiment, together with the estimation of the effect on the cancer-screening accuracy indices using the matching sensitivity and matching specificity obtained in the experiment, showed that screening sensitivity and screening specificity can be assessed with minimal error by keeping the matching specificity high. Because of these characteristics, this system reduces the labor and costs required for personal information management and collation work, for both researchers and data providers, in many epidemiological studies and is expected to further improve the efficiency and speed of research activities. In the future, we will carry out further verification for practical use by using existing data and comparing the system with existing methods.

##### Acknowledgments

This research was supported in part by the Ministry of Education, Culture, Sports, Science and Technology's 2018 "Society 5.0 Realization Research Center Support Project" and by the Japan Society for the Promotion of Science's Grant-in-Aid for Scientific Research (JP21H034438). Editage provided English language editing and translation.
AM, YT, and KN are the developers of the privacy-preserving distributed data integration system discussed in this study. Osaka University has patent rights related to the technology. ##### Authors' Contributions AM, YT, and KN were responsible for the development of the privacy-preserving distributed data integration (PDDI) system and environment. AM, YT, KN, and HN designed the study. KW and HN provided the simulated data used in the experiments, and YT and KN conducted buttress experiments using these data. The results were analyzed and interpreted by all authors. In writing the manuscript, YT was responsible for the PDDI system and matching experiments; KN for performance evaluation; AM for the PDDI system and engineering considerations; and KW for the epidemiological background, simulations, and epidemiological considerations. SN and YW provided a critical review and advice on the manuscript from epidemiological and engineering perspectives, respectively. AM was responsible for the overall supervision and oversight of the study in the engineering field, and HN, in the epidemiological field. AM and KW contributed equally to the preparation of this paper. ##### Conflicts of Interest None declared. ##### Multimedia Appendix 1 Privacy-preserving distributed data integration (PDDI) implementation environment, environment construction and usability. [[DOCX File, 23 KB-Multimedia Appendix 1]](https://jmir.org/api/download?alt_name=medinform_v10i12e38922_app1.docx&filename=16cc5acf368661d7228c5fe18d8c1bdc.docx) ----- JMIR MEDICAL INFORMATICS Miyaji et al ##### Multimedia Appendix 2 Cultural background of practical data-matching failures and examples of the errors specific to Japanese in the dataset of the experiment. [[DOCX File, 185 KB-Multimedia Appendix 2]](https://jmir.org/api/download?alt_name=medinform_v10i12e38922_app2.docx&filename=0a120385465bd1aed61f5cefa2b9be6a.docx) ##### Multimedia Appendix 3 Matching-key combinations and the matching results that were not described in the text. [[DOCX File, 21 KB-Multimedia Appendix 3]](https://jmir.org/api/download?alt_name=medinform_v10i12e38922_app3.docx&filename=afb223ef8e144cd9a820c10225136a1f.docx) ##### Multimedia Appendix 4 Estimating the impact of matching accuracy on outcome evaluation in epidemiological studies. [[DOCX File, 33 KB-Multimedia Appendix 4]](https://jmir.org/api/download?alt_name=medinform_v10i12e38922_app4.docx&filename=f5f6132b616ac47caa41b0b8d3ee6753.docx) ##### References 1. Matsuda T, Sobue T. Recent trends in population-based cancer registries in Japan: the Act on Promotion of Cancer Registries [and drastic changes in the historical registry. Int J Clin Oncol 2015 Feb;20(1):11-20. [doi: 10.1007/s10147-014-0765-4]](http://dx.doi.org/10.1007/s10147-014-0765-4) [[Medline: 25351534]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=25351534&dopt=Abstract) 2. Anazawa T, Miyata H, Gotoh M. Cancer registries in Japan: national clinical database and site-specific cancer registries. [Int J Clin Oncol 2015 Feb;20(1):5-10. [doi: 10.1007/s10147-014-0757-4] [Medline: 25376769]](http://dx.doi.org/10.1007/s10147-014-0757-4) 3. [Rare Disease Data Registry of Japan (in Japanese). Japan Agency for Medical Research and Development. URL: https:/](https://www.raddarj.org) [/www.raddarj.org [accessed 2022-03-03]](https://www.raddarj.org) 4. Tsugane S, Sawada N. The JPHC study: design and some findings on the typical Japanese diet. Jpn J Clin Oncol 2014 Sep [07;44(9):777-782. 
[doi: 10.1093/jjco/hyu096] [Medline: 25104790]](http://dx.doi.org/10.1093/jjco/hyu096) 5. Takeuchi K, Naito M, Kawai S, Tsukamoto M, Kadomatsu Y, Kubo Y, et al. Study profile of the Japan multi-institutional [collaborative cohort (J-MICC) study. J Epidemiol 2021 Dec 05;31(12):660-668 [FREE Full text] [doi:](https://dx.doi.org/10.2188/jea.JE20200147) [10.2188/jea.JE20200147] [Medline: 32963210]](http://dx.doi.org/10.2188/jea.JE20200147) 6. [Emery J, Boyle D. Data linkage. Aust Fam Physician 2017;46(8):615-619. [Medline: 28787562]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=28787562&dopt=Abstract) 7. Pratt NL, Mack CD, Meyer AM, Davis KJ, Hammill BG, Hampp C, et al. Data linkage in pharmacoepidemiology: a call [for rigorous evaluation and reporting. Pharmacoepidemiol Drug Saf 2020 Jan;29(1):9-17. [doi: 10.1002/pds.4924] [Medline:](http://dx.doi.org/10.1002/pds.4924) [31736248]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=31736248&dopt=Abstract) 8. Hagger-Johnson G. Opportunities for longitudinal data linkage in Scotland. Scott Med J 2016 Aug;61(3):136-145. [doi: [10.1177/0036933015575214] [Medline: 25886907]](http://dx.doi.org/10.1177/0036933015575214) 9. An overview of record linkage methods. In: Linking Data for Health Services Research: A Framework and Instructional Guide. Rockville, MD: Agency for Healthcare Research and Quality (US); 2014. 10. Ludvigsson JF, Almqvist C, Bonamy AE, Ljung R, Michaëlsson K, Neovius M, et al. Registers of the Swedish total [population and their use in medical research. Eur J Epidemiol 2016 Feb;31(2):125-136. [doi: 10.1007/s10654-016-0117-y]](http://dx.doi.org/10.1007/s10654-016-0117-y) [[Medline: 26769609]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=26769609&dopt=Abstract) 11. Laugesen K, Ludvigsson JF, Schmidt M, Gissler M, Valdimarsdottir UA, Lunde A, et al. Nordic health registry-based [research: a review of health care systems and key registries. Clin Epidemiol 2021;13:533-554 [FREE Full text] [doi:](https://europepmc.org/abstract/MED/34321928) [10.2147/CLEP.S314959] [Medline: 34321928]](http://dx.doi.org/10.2147/CLEP.S314959) 12. Christen P. Data Matching Concepts and Techniques for Record Linkage, Entity Resolution, and Duplicate Detection. Berlin, Heidelberg: Springer; 2012. 13. Kho AN, Cashy JP, Jackson KL, Pah AR, Goel S, Boehnke J, et al. Design and implementation of a privacy preserving [electronic health record linkage tool in Chicago. J Am Med Inform Assoc 2015 Sep;22(5):1072-1080 [FREE Full text]](https://europepmc.org/abstract/MED/26104741) [[doi: 10.1093/jamia/ocv038] [Medline: 26104741]](http://dx.doi.org/10.1093/jamia/ocv038) 14. Godlove T, Ball AW. Patient matching within a health information exchange. Perspect Health Inf Manag 2015;12(Spring):1g [[FREE Full text] [Medline: 26755901]](https://europepmc.org/abstract/MED/26755901) 15. Kissner L, Song D. Privacy-preserving set operations. In: Proceedings of the 25th annual international conference on Advances in Cryptology. 2005 Presented at: CRYPTO'05: Proceedings of the 25th annual international conference on [Advances in Cryptology; Aug 14 - 18, 2005; Santa Barbara California. [doi: 10.21236/ada457144]](http://dx.doi.org/10.21236/ada457144) 16. [Many D, Burkhart M, Dimitropoulos X. Fast private set operations with SEPIA. TIK Report. 2012 Mar. 
URL: https://www.](https://www.research-collection.ethz.ch/handle/20.500.11850/58312) [research-collection.ethz.ch/handle/20.500.11850/58312 [accessed 2022-04-04]](https://www.research-collection.ethz.ch/handle/20.500.11850/58312) 17. Ion M, Kreuter B, Nergiz A, Patel S, Raykova M, Saxena S, et al. On deploying secure computing: private intersection-sum-with-cardinality. In: Proceedings of the 2020 IEEE European Symposium on Security and Privacy (EuroS&P). 2020 Presented at: 2020 IEEE European Symposium on Security and Privacy (EuroS&P); Sep 07-11, 2020; [Genoa, Italy. [doi: 10.1109/eurosp48549.2020.00031]](http://dx.doi.org/10.1109/eurosp48549.2020.00031) ----- JMIR MEDICAL INFORMATICS Miyaji et al 18. Miyaji A, Nakasho K, Nishida S. Privacy-preserving integration of medical data : a practical multiparty private set intersection. [J Med Syst 2017 Mar 16;41(3):37 [FREE Full text] [doi: 10.1007/s10916-016-0657-4] [Medline: 28093660]](https://europepmc.org/abstract/MED/28093660) 19. Miyaji A, Mimoto T. Security Infrastructure Technology for Integrated Utilization of Big Data Applied to the Living Safety and Medical Fields. Cham: Springer; 2020. 20. [Winkler W. Matching and record linkage. WIREs Comp Stat 2014 Jul 02;6(5):313-325. [doi: 10.1002/wics.1317]](http://dx.doi.org/10.1002/wics.1317) 21. Sorensen HT, Sabroe S, Olsen J. A framework for evaluation of secondary data sources for epidemiological research. Int [J Epidemiol 1996 Apr;25(2):435-442. [doi: 10.1093/ije/25.2.435] [Medline: 9119571]](http://dx.doi.org/10.1093/ije/25.2.435) 22. Tromp M, Ravelli AC, Bonsel GJ, Hasman A, Reitsma JB. Results from simulated data sets: probabilistic record linkage [outperforms deterministic record linkage. J Clin Epidemiol 2011 May;64(5):565-572. [doi: 10.1016/j.jclinepi.2010.05.008]](http://dx.doi.org/10.1016/j.jclinepi.2010.05.008) [[Medline: 20952162]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=20952162&dopt=Abstract) 23. [Sayers A, Ben-Shlomo Y, Blom AW, Steele F. Probabilistic record linkage. Int J Epidemiol 2016 Jun;45(3):954-964 [FREE](https://europepmc.org/abstract/MED/26686842) [Full text] [doi: 10.1093/ije/dyv322] [Medline: 26686842]](https://europepmc.org/abstract/MED/26686842) 24. Jaro MA. Probabilistic linkage of large public health data files. Stat Med 1995;14(5-7):491-498. [doi: [10.1002/sim.4780140510] [Medline: 7792443]](http://dx.doi.org/10.1002/sim.4780140510) 25. Page on promoting cancer screening based on scientific evidence (in Japanese). National Cancer Center Institute for Cancer [Control. URL: http://canscreen.ncc.go.jp [accessed 2022-03-03]](http://canscreen.ncc.go.jp) 26. [Screening and earlier diagnosis. NHS England. URL: https://www.england.nhs.uk/cancer/early-diagnosis/](https://www.england.nhs.uk/cancer/early-diagnosis/screening-and-earlier-diagnosis/) [screening-and-earlier-diagnosis/ [accessed 2022-03-03]](https://www.england.nhs.uk/cancer/early-diagnosis/screening-and-earlier-diagnosis/) 27. [American cancer society guidelines for the early detection of cancer. American Cancer Society. URL: https://www.cancer.org/](https://www.cancer.org/healthy/find-cancer-early/american-cancer-society-guidelines-for-the-early-detection-of-cancer.html) [healthy/find-cancer-early/american-cancer-society-guidelines-for-the-early-detection-of-cancer.html [accessed 2022-03-03]](https://www.cancer.org/healthy/find-cancer-early/american-cancer-society-guidelines-for-the-early-detection-of-cancer.html) 28. Tanaka R, Matsukata M. 
Report on the model project for the accurate management of cancer screening by utilizing cancer [registry data in FY2017 - Aomori Prefecture Commissioned Project (in Japanese). Aomori prefecture 2018 Mar [FREE](https://www.pref.aomori.lg.jp/soshiki/kenko/ganseikatsu/files/H29gsmhokokusyo.pdf) [Full text]](https://www.pref.aomori.lg.jp/soshiki/kenko/ganseikatsu/files/H29gsmhokokusyo.pdf) 29. 2017 by utilizing cancer registry data Accuracy control project report for cancer screening. Ministry of Health, Labor and [Welfare research group. 2018. URL: https://www.pref.wakayama.lg.jp/prefg/041200/h_sippei/gannet/04/05_d/fil/houkokusyo.](https://www.pref.wakayama.lg.jp/prefg/041200/h_sippei/gannet/04/05_d/fil/houkokusyo.pdf) [pdf [accessed 2022-12-19]](https://www.pref.wakayama.lg.jp/prefg/041200/h_sippei/gannet/04/05_d/fil/houkokusyo.pdf) 30. [Pseudo personal information data generation service. hogehoge.tk. URL: http://hogehoge.tk/personal/ [accessed 2021-05-29]](http://hogehoge.tk/personal/) 31. [Personal information. Kazina. URL: http://kazina.com/dummy/ [accessed 2021-05-29]](http://kazina.com/dummy/) 32. [Test Data Generator (in Japanese). Yamagata. URL: http://yamagata.int21h.jp/tool/testdata/ [accessed 2021-05-29]](http://yamagata.int21h.jp/tool/testdata/) 33. Muse AG, Mikl J, Smith PF. Evaluating the quality of anonymous record linkage using deterministic procedures with the [New York State AIDS registry and a hospital discharge file. Stat Med 1995;14(5-7):499-509. [doi: 10.1002/sim.4780140511]](http://dx.doi.org/10.1002/sim.4780140511) [[Medline: 7792444]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=7792444&dopt=Abstract) 34. Howe GR. Use of computerized record linkage in cohort studies. Epidemiol Rev 1998;20(1):112-121. [doi: [10.1093/oxfordjournals.epirev.a017966] [Medline: 9762514]](http://dx.doi.org/10.1093/oxfordjournals.epirev.a017966) 35. Setoguchi S, Zhu Y, Jalbert JJ, Williams LA, Chen C. Validity of deterministic record linkage using multiple indirect [personal identifiers. Circ Cardiovasc Qual Outcomes 2014 May;7(3):475-480. [doi: 10.1161/circoutcomes.113.000294]](http://dx.doi.org/10.1161/circoutcomes.113.000294) 36. Ladabaum U, Dominitz JA, Kahi C, Schoen RE. Strategies for colorectal cancer screening. Gastroenterology 2020 [Jan;158(2):418-432. [doi: 10.1053/j.gastro.2019.06.043] [Medline: 31394083]](http://dx.doi.org/10.1053/j.gastro.2019.06.043) 37. Koliopoulos G, Nyaga VN, Santesso N, Bryant A, Martin-Hirsch PP, Mustafa RA, et al. Cytology versus HPV testing for [cervical cancer screening in the general population. Cochrane Database Syst Rev 2017 Aug 10;8(8):CD008587 [FREE Full](https://europepmc.org/abstract/MED/28796882) [text] [doi: 10.1002/14651858.CD008587.pub2] [Medline: 28796882]](https://europepmc.org/abstract/MED/28796882) 38. Hamashima C, Ohta K, Kasahara Y, Katayama T, Nakayama T, Honjo S, et al. A meta-analysis of mammographic screening [with and without clinical breast examination. Cancer Sci 2015 Jul;106(7):812-818 [FREE Full text] [doi: 10.1111/cas.12693]](https://europepmc.org/abstract/MED/25959787) [[Medline: 25959787]](http://www.ncbi.nlm.nih.gov/entrez/query.fcgi?cmd=Retrieve&db=PubMed&list_uids=25959787&dopt=Abstract) 39. Kawamoto Y, Shirai T, Kamio K, Tanaka Y, Sakumoto K. Information processing apparatus, information processing [method, program, and information processing system. Google Patents. 2014. 
##### Abbreviations

**CPU:** central processing unit
**PDDI:** privacy-preserving distributed data integration

_Edited by C Lovis; submitted 17.05.22; peer-reviewed by C Sun, SY Shin; comments to author 07.10.22; revised version received 04.11.22; accepted 29.11.22; published 30.12.22_

_Please cite as:_
_Miyaji A, Watanabe K, Takano Y, Nakasho K, Nakamura S, Wang Y, Narimatsu H_
_A Privacy-Preserving Distributed Medical Data Integration Security System for Accuracy Assessment of Cancer Screening: Development Study of Novel Data Integration System_
_JMIR Med Inform 2022;10(12):e38922_
_URL: https://medinform.jmir.org/2022/12/e38922_
_doi: 10.2196/38922_

©Atsuko Miyaji, Kaname Watanabe, Yuuki Takano, Kazuhisa Nakasho, Sho Nakamura, Yuntao Wang, Hiroto Narimatsu. Originally published in JMIR Medical Informatics (https://medinform.jmir.org), 30.12.2022. This is an open-access article distributed under the terms of the Creative Commons Attribution License (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work, first published in JMIR Medical Informatics, is properly cited. The complete bibliographic information, a link to the original publication on https://medinform.jmir.org/, as well as this copyright and license information must be included.
{ "disclaimer": "Notice: Paper or abstract available at https://pmc.ncbi.nlm.nih.gov/articles/PMC9840098, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://medinform.jmir.org/2022/12/e38922/PDF" }
2022
[ "JournalArticle" ]
true
2022-05-17T00:00:00
[ { "paperId": "de5b107e4b0e0674c335efea5b793880396130b0", "title": "Nordic Health Registry-Based Research: A Review of Health Care Systems and Key Registries" }, { "paperId": "247691eb70b6b1925dd0d1879e7c76c1e7581357", "title": "Study Profile of the Japan Multi-institutional Collaborative Cohort (J-MICC) Study" }, { "paperId": "738c0ea4c93db68f01128396b8fd632b4b086e88", "title": "On Deploying Secure Computing: Private Intersection-Sum-with-Cardinality" }, { "paperId": "ce856a241e64ed4426baf25289be8553d0e95f1c", "title": "Data linkage in pharmacoepidemiology: A call for rigorous evaluation and reporting" }, { "paperId": "28e34a51857fb142db17e672d1ce6cb78335e230", "title": "Cytology versus HPV testing for cervical cancer screening in the general population." }, { "paperId": "b9b97921734274e8a780ecccb6d925a57c9e9eec", "title": "Privacy-Preserving Integration of Medical Data" }, { "paperId": "ddc80a1b55d0305c008db49ed53eca1ec3b9c631", "title": "Opportunities for longitudinal data linkage in Scotland" }, { "paperId": "d7633027510a36d2b736df921d62f45bd3b32224", "title": "Registers of the Swedish total population and their use in medical research" }, { "paperId": "5bc6c4c564ca2aede670480567bb1dd5de3a071a", "title": "Probabilistic record linkage" }, { "paperId": "18a7fd68a394f55c73f812a38091a5fc504b0ae2", "title": "Design and implementation of a privacy preserving electronic health record linkage tool in Chicago" }, { "paperId": "42a24908d41432d519c147bf5471d4b5e3530167", "title": "A meta-analysis of mammographic screening with and without clinical breast examination" }, { "paperId": "d97edf1f95d052342f97f5c5acedd5c33c06ba6b", "title": "Patient Matching within a Health Information Exchange." }, { "paperId": "9084b0e9a42442d2545d27174ab1052b18e7a1e0", "title": "Cancer registries in Japan: National Clinical Database and site-specific cancer registries" }, { "paperId": "72a59f18b93a4be02a6e36f869da9e404cad9c5b", "title": "Recent trends in population-based cancer registries in Japan: the Act on Promotion of Cancer Registries and drastic changes in the historical registry\n" }, { "paperId": "65b9d8fa0da7c57e3a620d5347ff30a6199563f3", "title": "An Overview of Record Linkage Methods" }, { "paperId": "b3063e8e823092535dd8fdb442946a258b9d31d4", "title": "The JPHC study: design and some findings on the typical Japanese diet." }, { "paperId": "8cee7635683f22278ea373709c7a6f4f6c92f668", "title": "Validity of Deterministic Record Linkage Using Multiple Indirect Personal Identifiers: Linking a Large Registry to Claims Data" }, { "paperId": "4c31f58bf05b51a46bf99603a0127aa8112fb2a3", "title": "Data Matching" }, { "paperId": "0252bf42fd7982c253c4cadb18bd5a156066b08c", "title": "Matching and record linkage" }, { "paperId": "c9990a4fd6257dd2bd4275c62918f3ef564c841e", "title": "Results from simulated data sets: probabilistic record linkage outperforms deterministic record linkage." }, { "paperId": "30081cdd9fcca690744dd71faf1d4ebe7dc60387", "title": "Record linkage" }, { "paperId": "3ef985c0786a3ce3b8681ecfe1a06ed8d432f2de", "title": "Privacy-Preserving Set Operations" }, { "paperId": "639273ef348daa939bad7436eeddef8bf243b6f0", "title": "Personal Information" }, { "paperId": "29418b345bda353b845359e0dd1c98f8f9e1bce8", "title": "A framework for evaluation of secondary data sources for epidemiological research." 
}, { "paperId": "7b8fb43eab6e37c063298a1e97080af068c172ac", "title": "Evaluating the quality of anonymous record linkage using deterministic procedures with the New York State AIDS registry and a hospital discharge file." }, { "paperId": "c4f3681557b86e2be25b74e60c323dd9d5e95449", "title": "Probabilistic linkage of large public health data files." }, { "paperId": "18444a6b9f2e3e666c4fc35853c299112d40401e", "title": "Security Infrastructure Technology for Integrated Utilization of Big Data: Applied to the Living Safety and Medical Fields" }, { "paperId": "93094c721860658e656de704da5fd5fe4cffd0ac", "title": "Strategies for Colorectal Cancer Screening." }, { "paperId": null, "title": "Report on the model project for the accurate management of cancer screening by utilizing cancer registry data in FY2017 - Aomori Prefecture Commissioned Project (in Japanese)" }, { "paperId": null, "title": "2017 by utilizing cancer registry data Accuracy control project report for cancer screening" }, { "paperId": "362413bf9f2f5cfcb3dc4f4a78147c1315ccfe6b", "title": "Data Linkage" }, { "paperId": "c79ac8b0ff2f7377fc12d0d8f23ba74aecd8db3b", "title": "Fast Private Set Operations with SEPIA" }, { "paperId": "c30e9feda4fbee2ae4e3395c295db5cf0a4fed5e", "title": "American Cancer Society guidelines for the early detection of cancer" }, { "paperId": "495549c1a2d5a4c30c183a08a47ff8b83b25655e", "title": "Use of computerized record linkage in cohort studies." }, { "paperId": null, "title": "Screening and earlier diagnosis" }, { "paperId": null, "title": "on promoting cancer screening based on scientific evidence (in Japanese)" }, { "paperId": null, "title": "The processing time of each institution does not depend on the number of institutions involved in the system" }, { "paperId": null, "title": "Key information used to match the data will not be divulged to any institution, including the PDDI secure computation server" }, { "paperId": null, "title": "Pseudo personal information data generation service" }, { "paperId": null, "title": "third-party institution collects or aggregates data to carry out matching" }, { "paperId": null, "title": "described the PDDI algorithm in subsequent sections" }, { "paperId": null, "title": "Test Data Generator (in Japanese)" }, { "paperId": null, "title": "Rare Disease Data Registry of Japan (in Japanese)" } ]
18,976
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Physics", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Physics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01dd94486bfde27808ba194cd285d2055dcf3494
[ "Computer Science", "Physics" ]
0.841336
A Scalable, Fast and Programmable Neural Decoder for Fault-Tolerant Quantum Computation Using Surface Codes
01dd94486bfde27808ba194cd285d2055dcf3494
arXiv.org
[ { "authorId": "2153204889", "name": "Mengyu Zhang" }, { "authorId": "2218742438", "name": "Xiangyu Ren" }, { "authorId": "2056772101", "name": "Guanglei Xi" }, { "authorId": "2109506822", "name": "Zhenxing Zhang" }, { "authorId": "2153795415", "name": "Qiaonian Yu" }, { "authorId": "2118901489", "name": "Fuming Liu" }, { "authorId": "2143622628", "name": "Hualiang Zhang" }, { "authorId": "38654394", "name": "Shenmin Zhang" }, { "authorId": "103429534", "name": "Yicong Zheng" } ]
{ "alternate_issns": null, "alternate_names": [ "ArXiv" ], "alternate_urls": null, "id": "1901e811-ee72-4b20-8f7e-de08cd395a10", "issn": "2331-8422", "name": "arXiv.org", "type": null, "url": "https://arxiv.org" }
Quantum error-correcting codes (QECCs) can eliminate the negative effects of quantum noise, the major obstacle to the execution of quantum algorithms. However, realizing practical quantum error correction (QEC) requires resolving many challenges to implement a high-performance real-time decoding system. Many decoding algorithms have been proposed and optimized in the past few decades, of which neural network (NNs) based solutions have drawn an increasing amount of attention due to their high efficiency. Unfortunately, previous works on neural decoders are still at an early stage and have only relatively simple architectures, which makes them unsuitable for practical QEC. In this work, we propose a scalable, fast, and programmable neural decoding system to meet the requirements of FTQEC for rotated surface codes (RSC). Firstly, we propose a hardware-efficient NN decoding algorithm with relatively low complexity and high accuracy. Secondly, we develop a customized hardware decoder with architectural optimizations to reduce latency. Thirdly, our proposed programmable architecture boosts the scalability and flexibility of the decoder by maximizing parallelism. Fourthly, we build an FPGA-based decoding system with integrated control hardware for evaluation. Our $L=5$ ($L$ is the code distance) decoder achieves an extremely low decoding latency of 197 ns, and the $L=7$ configuration also requires only 1.136 $\mu$s, both taking $2L$ rounds of syndrome measurements. The accuracy results of our system are close to minimum weight perfect matching (MWPM). Furthermore, our programmable architecture reduces hardware resource consumption by up to $3.0\times$ with only a small latency loss. We validated our approach in real-world scenarios by conducting a proof-of-concept benchmark with practical noise models, including one derived from experimental data gathered from physical hardware.
### A Scalable, Fast and Programmable Neural Decoder for Fault-Tolerant Quantum Computation Using Surface Codes

Mengyu Zhang*, Xiangyu Ren*, Guanglei Xi, Zhenxing Zhang, Qiaonian Yu, Fuming Liu, Hualiang Zhang, Shengyu Zhang†, and Yi-Cong Zheng†

Tencent Quantum Laboratory, Tencent, Shenzhen, Guangdong 518507, China
*Mengyu Zhang and Xiangyu Ren are joint first authors.
†Corresponding authors: shengyzhang@tencent.com, yicongzheng@tencent.com

##### ABSTRACT

Quantum error-correcting codes (QECCs) can eliminate the negative effects of quantum noise, the major obstacle to the execution of quantum algorithms. However, realizing practical quantum error correction (QEC) requires resolving many challenges to implement a high-performance real-time decoding system. Many decoding algorithms have been proposed and optimized in the past few decades, of which neural network (NN) based solutions have drawn an increasing amount of attention due to their effectiveness and high efficiency. Unfortunately, previous works on neural decoders are still at an early stage and have only relatively simple architectures, which makes them unsuitable for practical fault-tolerant quantum error correction (FTQEC). In this work, we propose a scalable, low-latency and programmable neural decoding system to meet the requirements of FTQEC for rotated surface codes (RSC). Firstly, we propose a hardware-efficient NN decoding algorithm with relatively low complexity and high accuracy. Secondly, we develop a customized decoder architecture for our algorithm and carry out architectural optimizations to reduce decoding latency. Thirdly, our proposed programmable architecture boosts the scalability and flexibility of the decoder by maximizing parallelism. Fourthly, we build an FPGA-based decoding system with integrated control hardware to comprehensively evaluate our design. Our L = 5 (L is the code distance) decoder achieves an extremely low decoding latency of 197 ns, and the L = 7 configuration also requires only 1.136 µs, both taking 2L rounds of syndrome measurements as input. The accuracy results of our system are close to minimum weight perfect matching (MWPM). Furthermore, our programmable architecture reduces hardware resource consumption by up to 3.0× with only a small latency loss. We validated our approach in real-world scenarios by conducting a proof-of-concept benchmark with practical noise models, including one derived from experimental data gathered from physical hardware.

**Figure 1: Steps required for QEC after logical qubit encoding.** (The figure shows data and ancilla qubits connected to a control system: (1) apply syndrome measurement, (2) readout signal, (3) syndrome bits to the real-time decoder, (4) error information back to the control logic, and (5) apply error correction.)

##### 1. INTRODUCTION

Quantum computers offer a tremendous computational advantage on numerous important problems, but qubits are fragile and easily affected by noises that deteriorate computation fidelity quickly. Quantum error-correcting codes (QECCs) and the theory of fault-tolerant quantum computation (FTQC) are backbones for large-scale quantum computation. FTQC can perform operations at any scale and obtain reliable results on error-prone quantum hardware, as long as the noise strength is under a certain threshold [3, 4, 42, 44, 56]. The number of qubits on a single chip has been rapidly increasing [1, 9], but the realization of fault-tolerant quantum error correction (FTQEC) schemes is still challenging and has not yet been surmounted.
FTQEC introduces redundant resources to encode information into code space and decode it after computation. Among the various QECCs proposed in the past two to three decades, surface codes [10, 20, 25, 42] are considered the most promising scheme for solid-state platforms, as they require only nearest-neighbor operations. The process of FTQEC based on the surface code is shown in Figure 1. A logical qubit is encoded on multiple data qubits, interspersed (also see later Figure 2) with ancilla qubits, which are used for performing multiple rounds of syndrome measurements (SM) to collect sufficient error information without destroying the state of the data qubits. A control system consisting of control and readout logic applies syndrome measurement signals and discriminates the returned results. The collected syndrome bits are then transferred to the real-time decoder and analyzed to determine the exact locations and types of the errors in situ. Finally, the control logic applies corresponding error correction signals to the data qubits to complete a QECC cycle.

Many challenges arise in designing and implementing good decoders. The most prominent ones are believed to be:

(1) **High performance.** The decoding algorithm should reduce the logical error rate as much as possible. Since QECCs cost many extra qubits, their error correction capacity should be fully explored to pay off.

(2) **Scalability.** The decoding algorithms should be intrinsically parallelizable so that their hardware implementation can scale up with the code distance more efficiently by fully utilizing computational resources. On this basis, it is also necessary to perform hardware architectural optimizations to alleviate the high resource consumption caused by the growing size of the FTQC.

(3) **Low latency.** The decoding algorithms need to be executed fast enough to avoid error accumulation. More specifically, the latency of the whole FTQEC process should be short enough to catch up with syndrome generation, so that one can physically correct and control data qubits before non-Clifford gates [52, 71]. Failure to achieve this constraint leads to the backlog problem [12, 36, 58, 59], which causes exponential computation overhead and kills any quantum advantage. For state-of-the-art superconducting qubits with lifetimes of 150-300 µs [51], FTQEC within 1.5 µs is highly preferred.

(4) **Flexibility.** Decoders need to work in many different scenarios with various noise levels, code distances, code deformations [25, 26], and lattice surgery [37, 63, 64] suitable for FT operations. Decoders that can be programmed to switch between different scenarios would significantly broaden their applicability.

In addition to these challenges, the implementation of FTQEC is a system-level task: the decoder has to be seamlessly integrated into the control system to be fully functional. A recent review [8] discusses a range of candidates for real-time error decoding. Among them are minimum weight perfect matching (MWPM) [22, 28, 68] and Union-Find (UF) [18, 19, 38]. MWPM is the most well-known and advanced, but suffers from being too complicated. Indeed, its complexity scales as O(L^9) (L is the distance of the code). Even after tremendous optimization [24, 27, 28, 35], it has yet to demonstrate low-latency decoding on real devices, even for small L.
UF has reasonably good decoding performance, with complexity almost proportional to L^3. Both algorithms can be deployed directly through a look-up table (LUT) solution [15], but this is difficult to scale up since the number of entries grows exponentially with L^3 in both cases. UF hardware decoders have been proposed [16, 45], but their actual performance has only been evaluated under the phenomenological noise model, while incorporating the complete noise model would significantly slow the decoder.

Recently, neural network (NN) based solutions have attracted an increasing amount of attention [7, 12, 13, 17, 30, 46, 47, 61, 63, 65, 66, 67] due to their high accuracy and computational efficiency. Previous works [12, 13, 48] designed various neural decoders and analyzed their cost and performance for different hardware platforms. Despite their effectiveness in the reported settings, the algorithms and microarchitectures there are relatively primitive and may fail to fit real experimental environments due to their high latency or incomplete noise models. Moreover, to our knowledge, no solution regarding flexibility has been proposed in these prior works. Consequently, the actual performance and latency of an entire decoding system that can comprehensively address the above challenges has yet to be demonstrated.

To address these challenges, we propose a scalable, low-latency and programmable neural decoding system. The proposed neural network-based decoding algorithm has high performance and is customized for hardware-efficient deployment. Additionally, we present a decoder microarchitecture design that optimizes resource allocation and exploits parallelism in multiple rounds of SMs for low latency. To comprehensively evaluate the performance of the proposed system, we implement a field-programmable gate array (FPGA) based decoding system, including the decoder as well as other control hardware. To demonstrate the effectiveness of our solution, we use a circuit-level noise model, where noises due to imperfect qubits, gates, and measurements are all considered. The assessment indicates that our decoder's accuracy at L = 5, amassing ten rounds of SM results, approximates MWPM, while the decoding latency is experimentally measured at 197 ns, substantially quicker than MWPM on CPUs [24, 34, 35]. Furthermore, we employed a noise model derived from experimental data obtained from the Google QEC study to train and test our decoder [2, 31], proving our solution is practical in real-world environments.

In contrast to conventional NN accelerators, which emphasize average throughput and avoid using resources simultaneously for single-task latency reduction, quantum error decoding needs to maximize resource utilization within a specific time. We therefore propose a programmable architecture to exploit this feature. This design reuses general-purpose arithmetic units for diverse decoding configurations, efficiently employing computational resources to minimize latency, enhancing scalability, and addressing the flexibility challenge.

Overall, our contributions in this work are:

1. We present an innovative, efficient fault-tolerant neural decoding algorithm based on stepper 3D CNN [40] and multi-task learning [11]. It exhibits competitive accuracy compared to MWPM, while significantly reducing latency. Its NN layer count scales as O(log L), rendering it scalable for future applications requiring large L and minimal latency.
Moreover, the computational complexity scales as O(L^3), which is comparable to UF and more conducive to hardware implementation.

2. We introduce a decoder microarchitecture optimized for achieving low latency while preserving high accuracy. Our FPGA-based implementations for L = 5 and L = 7 attain decoding latencies of 197 ns and 1.136 µs, respectively. Both configurations incorporate 2L rounds of syndrome measurements.

3. We build a complete decoding system that integrates our decoder and customized control hardware, achieving an overall system latency of 540 ns. This system is the fastest real-time fault-tolerant decoding system ever built and tested for a surface code of dozens of qubits.

4. We develop a programmable architecture to accommodate diverse decoding configurations with flexibility. In comparison to traditional approaches, our design maximizes hardware resource utilization and diminishes resource overhead by up to 3.0×, incurring only a minimal latency expense. Additionally, an ASIC implementation of our programmable architecture is compatible with diverse decoder configurations, encompassing distinct network structures and code distances.

**Figure 2: (left) RSC with L = 5, with 25 data qubits (red dots) encoding one logical qubit characterized by a particular choice of the logical operators X_L and Z_L (dashed lines). Z_p and X_v are indicated as cyan and yellow plaquettes, respectively. Ancilla qubits (crosses) for Z_p and X_v measurements are located at the plaquettes and vertices. Several data qubits are affected by Pauli errors. Measuring the Z_p's and X_v's yields 1-valued syndrome bits of certain X_v (dark blue) and Z_p operators (red). (right) A single round of SM circuits for Z_p and X_v.**

##### 2. PRELIMINARIES AND MOTIVATION

##### 2.1 Rotated surface code

Surface codes are a family of stabilizer codes defined on a 2D square lattice. The smallest version of planar surface codes, which requires the least number of physical qubits, is known as the rotated surface code (RSC). In this paper, we focus on the RSC consisting of L × L data qubits, as shown in Figure 2 for L = 5. The stabilizer generators of surface codes are two kinds of operators,

$$X_v = \prod_{i \in v} X_i \quad \text{and} \quad Z_p = \prod_{i \in p} Z_i,$$

which represent vertices (X_v, or X type) and plaquettes (Z_p, or Z type) on the square lattice. For each v (ancillary qubit in a yellow plaquette), X_v is the tensor product of X operators on the four red qubits around the yellow plaquette; similarly for each Z_p in a cyan plaquette. The operators X_v and Z_p generate the stabilizer group S. If no error of any kind occurs, the syndrome bits are all 0. If X or Z errors occur, the syndrome bits of the stabilizer generators that anticommute with the errors are flipped to 1. Each X_v or Z_p needs an extra ancillary qubit to interact with the data qubits around it in a specific order for syndrome measurements (SM). See Figure 2 for an example of errors as well as the SM circuits used to extract the syndrome bits. All equivalent logical operators form a topology class, called the homology class, which is also the logical class for the surface code. For each homology class L, we choose a representative L_c of minimum weight in L; this minimum weight is defined as the distance L of the RSC. It is known that arbitrary errors on any ⌊(L−1)/2⌋ qubits can be corrected. If too many errors occur, the decoding algorithm fails to correct them, which causes failure of the computation.
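To make the syndrome extraction concrete, here is a minimal sketch (Python/NumPy; the stabilizer supports and error pattern are toy placeholders, not the actual L = 5 layout) of how a syndrome bit is simply the parity of the overlap between a check and the error:

```python
import numpy as np

def syndrome_bits(stabilizer_supports, error_mask):
    """Syndrome of one error type: bit k is 1 iff check k overlaps the error
    on an odd number of qubits (i.e., the stabilizer anticommutes with it).

    stabilizer_supports: list of index arrays, one per X_v (or Z_p) generator.
    error_mask: 0/1 vector over data qubits marking Z (or X) errors.
    """
    return np.array([int(error_mask[s].sum() % 2) for s in stabilizer_supports])

# Toy example: 4 data qubits, two weight-2 checks, one error on qubit 0.
supports = [np.array([0, 1]), np.array([2, 3])]
print(syndrome_bits(supports, np.array([1, 0, 0, 0])))  # -> [1 0]
```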
RSCs are greatly favored on solid-state platforms due to their low requirements on the number of physical qubits and the connections between them. Recent experimental progress on superconducting platforms has enabled the realization of RSC-encoded states using offline decoding based on multiple rounds of SM [2, 5, 43, 55, 70].

##### 2.2 FTQC and real-time decoding

Quantum noise occurs everywhere during the computation. One needs to apply SM circuits periodically to extract syndrome bits during the whole procedure of computation. The SM circuits need to be executed for all X_v and Z_p operators simultaneously. Note that the SM circuits themselves also suffer from gate and measurement noises, and the CNOT gates in SMs may propagate a single-qubit error to two data qubits. To mitigate the effect of such propagation, the order of the CNOTs acting on the data qubits around an ancilla should respect the distribution of logical operators [60]: it keeps the last two qubits involved in the SM circuits aligned perpendicular to the direction of the corresponding logical operators. Such alignment can reduce the effect of error propagation caused by SMs.

**Figure 3: An illustration of repeated real-time FTQEC every 4 rounds of SMs. The effective data and measurement errors caused by a realization of circuit-level noise are shown in space-time. The red (blue) lines are the syndrome history of the X_v's (Z_p's). The green line represents the history of measurement errors. The FTQEC is applied every T rounds of SMs, and the correction is applied to the data qubits right after the decoding.**

In general, measuring syndromes once cannot distinguish errors on data qubits from measurement errors, which will quickly cause logical errors. Fortunately, with a sufficiently large number T of rounds of SM, one can establish reliable syndrome information for FTQEC. Non-Clifford gates (like the logical T gate) bring more challenges. If only Clifford gates exist, the decoding can be postponed to the end of storage by post-processing all the syndrome bits in the space-time history following the Pauli frame change. However, quantum computational advantage does need non-Clifford gates [32], and when they exist, the SMs after them introduce random Pauli frames and destroy the historical error information. To resolve this, all errors must be corrected before non-Clifford gates. This brings a real-time constraint for the decoding and error correction: after every T ∼ O(L) [20] rounds of SMs, the FT decoder takes these T slices of syndrome bits as input to infer the most likely errors on the data qubits; these errors then need to be corrected before the next rounds of gate operations. Such a procedure needs to be finished at a speed faster than the SMs to avoid the backlog problem, which causes exponential computation time overhead [12, 36, 59]. The illustration of repeated real-time FTQEC is shown in Figure 3 for T = 4.

##### 2.3 Motivation: FTQEC for Near-term and Large Scale

Previous work has recently shown successful execution of real-time FTQEC based on a 3-qubit repetition code [53], but only X (or Z) errors can be corrected.
To that end, building real-time decoding systems for _L = 5 and beyond based on off-the-shelf devices such as_ FPGAs is a major goal in the near term. In the long term, problems like integer factorization or quantum simulation with FTQC require hundreds or thousands of logical qubits and millions of circuit layers. To achieve this, it is essential to minimize the hardware resource costs in designing large-scale high-performance decoders, especially when considering the future use of emerging technologies such as cryo-electronics. ##### 3. EVALUATION METHODOLOGY 3.1 Noise Model We use circuit-level Pauli noise for our evaluation: assume that during each SM, each data qubit undergoes an X, Y, or _Z error each with probability ps/3, called the storage noise._ For CNOTs, noises are modeled as perfect gates followed by one of the 15 possible two-qubit Pauli operators, with equal probability pg/15, which is called the gate noise. The measurement of a single physical qubit suffers a classical bit-flip error with probability pm, called measurement noise. Recent experiments [6, 55] show that it can catch the essence of practical noises process to a great extent. The phenomenological noise model, employed extensively in prior research, does not account for gate noise. It is crucial to acknowledge that incorporating CNOT errors results in a considerably more computationally demanding decoding process, increased latency, and diminished accuracy. To illustrate the difference, we collected the probability distribution of Hamming weights (HW) of syndrome bits under these two noise models. We generated one million samples and the results are shown in Table 1. HW (L = 5, T = 10, Probability HW (L = 7, T = 14, Probability circuit-level) circuit-level) 23 1.62e-4 50 1.12e-4 24 8.4e-5 51 6.8e-5 HW (L = 5, T = 10, Probability HW (L = 7, T = 14, Probability phenomenological) phenomenological) 15 4.9e-5 27 4.4e-5 16 2.4e-5 28 2.6e-5 **Table 1: Hamming weights sampled at p = 0.006 for different configu-** **rations when the probability decays to 0.** It is clear that the Hamming weight of the syndromes array undergoes a marked reduction when moving from the circuitlevel noise model to the phenomenological model. Consequently, we contend that employing a more comprehensive noise model is essential, as it aids in assessing the applicability of the decoder design for real-world experiments, while simultaneously introducing more challenges in decoding. Moreover, we also test our decoder based on an effective circuit-level noise model extracted from Google’s experiments on 72-qubit Sycamore device [2, 31]. This model can be employed to generate training data for our NN algorithm, so that we can test the practicality of our solution in realistic environments. ##### 3.2 Evaluation Framework We used Monte Carlo simulation for system verification and built an hardware platform (including decoder and other control hardware) to evaluate the actual performance of the decoding system following the procedure of Figure 1. The error is assigned for SMs according to the noise model in software to sample syndrome bits. These bits are then translated into waveform data using a set of demodulation and thresholding parameters, which is also configured in the readout module. This procedure mimics the readout and signal processing in actual experiments. Finally, they are transmitted to the decoder for error correction. 
##### 3.2 Evaluation Framework

We used Monte Carlo simulation for system verification and built a hardware platform (including the decoder and other control hardware) to evaluate the actual performance of the decoding system following the procedure of Figure 1. Errors are injected into the SMs according to the noise model in software to sample syndrome bits. These bits are then translated into waveform data using a set of demodulation and thresholding parameters, which is also configured in the readout module. This procedure mimics the readout and signal processing in actual experiments. Finally, the bits are transmitted to the decoder for error correction. The process repeats for each trial trajectory until a decoding failure occurs, and the average time duration τ̄ is recorded. The logical error rate is defined as 1/(T·τ̄). At least 400 such trajectories are carried out for each physical error rate to calculate the logical error rate. With this platform, we evaluate the entire decoding process on classical hardware. The implementation of this framework is introduced in Section 7.

##### 3.3 Target Hardware Platform

Regarding the near-term goal, we focus on FPGAs, which can easily be integrated into existing centralized control systems [29, 69] and accommodate the frequent updates of early-stage experimental set-ups. The use of ASICs becomes a natural choice as the system size grows toward future large-scale FTQC. Emerging technologies such as cryo-CMOS put forward higher requirements for the power budget and other metrics. Although these limitations are not discussed in detail in this work, the resource efficiency and higher scalability of our decoder can help alleviate these issues. In this work, we demonstrate the performance of our decoding system with a complete FPGA-based implementation. FPGAs are also used to evaluate the scalability and flexibility of our decoder in large-scale FTQEC scenarios. Our solution can easily be extended to ASICs when required. FTQC requires RSCs with at least L ≥ 3 to correct both X and Z errors. The smallest case of L = 3 can be implemented directly through LUTs because of the small number of syndrome bits. Therefore, we focus on the cases of L = 5 and L = 7 when studying near-term error decoding, and on L > 7 for future large-scale FTQEC.

##### 3.4 Syndrome Measurement Rounds

To ensure fault-tolerance validity, it is theoretically required that the number of syndrome measurement rounds (T) be equal to or greater than the code distance (L) [21, 23], which is common practice in previous error-decoding research. At the same time, for T larger than 2L, the decoding complexity increases but has minimal effect on further lowering the logical error rate. Therefore, the number of SM rounds we choose in the evaluation is between L and 2L.

##### 4. FT NEURAL DECODING ALGORITHM

##### 4.1 Elementary Neural Network

An NN is a directed graph consisting of multiple layers of nodes called neurons. Each node v is assigned a value y_v and a bias parameter b_v, and each edge (p, v) is assigned a weight parameter W_vp. The value y_v is obtained by applying an activation function A to the sum of the bias b_v and the W_vp-weighted values y_p of the incoming neighbor nodes p:

$$y_v = A\Big(\sum_{p \to v} W_{vp}\, y_p + b_v\Big). \qquad (1)$$

It should be easy to compute the derivative of the activation function A. Common choices of A include sigmoid, Tanh, the rectified linear unit (ReLU), and LeakyReLU, the latter two of which are used in this work. One can also apply an extra Softmax function to the values of the output neurons to generate a normalized output that can represent a distribution. The elementary NNs used in this paper are restricted to fully connected networks (FCN) and 3D convolutional NNs (3D CNN) [40]. These modules are chosen because of their good representational power to extract the important local features, as well as their simplicity to implement with digital circuits.
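A minimal sketch of Equation (1) for one fully connected layer (Python/NumPy; the LeakyReLU slope and the layer sizes are illustrative choices, with the 120-dimensional input echoing the L = 5, T = 10 syndrome size mentioned later):

```python
import numpy as np

def leaky_relu(z, slope=0.01):
    """LeakyReLU activation: identity for z >= 0, small slope otherwise."""
    return np.where(z >= 0, z, slope * z)

def fc_layer(y_prev, W, b, activation=leaky_relu):
    """Equation (1) for a whole layer: y_v = A(sum_p W_vp * y_p + b_v)."""
    return activation(W @ y_prev + b)

rng = np.random.default_rng(1)
y0 = rng.standard_normal(120)            # e.g., flattened syndrome input
W1, b1 = rng.standard_normal((64, 120)), rng.standard_normal(64)
print(fc_layer(y0, W1, b1).shape)        # (64,)
```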
**Figure 4: A structure of the FT neural decoding algorithm for RSC.** (A shared frontend feeds multiple backend heads.)

##### 4.2 Decoding on marginal posterior distribution

The decoding algorithm can be viewed as a process of mapping the collected syndromes to L²-fold Pauli operators. The L²-fold Pauli group can be divided into 2^(L²+1) classes:

$$C_{L_c,\,\mathbf{s}} = \{\, g L_c T(\mathbf{s}) \mid g \in S \,\}, \quad \mathbf{s} \in \mathbb{Z}_2^{L^2-1}, \qquad (2)$$

where the elements in each class are equivalent with respect to the RSC, and their representatives are L_c T(s). Here T(s) is the pure error given s, which can be directly calculated through an LUT [49]. In this setting, the optimal way to infer the error on the data qubits after T rounds of SM from a measured T × (L + 1)² syndrome array S is

$$\tilde{C} = \operatorname{argmax}_{L_c,\,\mathbf{s}} \Pr(C_{L_c,\,\mathbf{s}} \mid S) = \operatorname{argmax}_{L_c,\,\mathbf{s}} \sum_{g \in S} \Pr(g L_c T(\mathbf{s}) \mid S), \qquad (3)$$

which can be recognized as a Maximum a Posteriori (MAP) estimation. The distribution is over 2^(L²+1) possible entries, which is intractable in general. To solve this, we decompose the binary string s into m pieces, s = s_1 ⊔ s_2 ⊔ ··· ⊔ s_m, with ⊔ being concatenation and |s_j| ∼ O(1) for all j. We approximate Equation (3) by the marginal posterior distributions:

$$\tilde{E} = \Big(\operatorname{argmax}_{L_c} \sum_{g \in S} \Pr(g L_c \mid S)\Big)\, T\Big(\bigsqcup_{j=1}^{m} \operatorname{argmax}_{\mathbf{s}_j} \Pr(\mathbf{s}_j \mid S)\Big).$$

Such simplification neglects the correlation between the different s_j of the optimal solution, which is a reasonable assumption since T(s_i) and T(s'_i) are typically highly different operators even when the weight of (s_i ⊕ s'_i) is small.

##### 4.3 Multi-task learning neural decoder

We first introduce an end-to-end NN (see Figure 4) to simultaneously learn multiple marginal posterior distributions [11]. We separate the NN into frontend and backend parts. The frontend consists of multiple layers of 3D CNNs followed by one layer of FCN to extract common features. The input and output layers of the 3D CNNs are two groups of 3D neuron arrays carrying feature information. Due to the space-time locality of S, we assume that for each 3D neuron array, the correlation of the values of different neurons decays quickly with their distance. Hence, we implemented the 3D CNNs in a stepper manner: their strides are roughly the same as the kernel sizes, which are bounded by some constant K, and the mappings focus on extracting local features. Since the sizes of the 3D neuron arrays of the i-th layer shrink exponentially with i, both the training and inference time of the NNs do not increase much with the depth of the 3D CNN part. The backend consists of m + 1 multi-layer FCNs to approximate the marginal posterior distributions for L_c and {s_1, ..., s_m}. These multi-layer FCNs share the same input from the frontend, which is trained to extract sufficient features to calculate all the marginal posterior distributions. We use the sum of the cross-entropies of the output distributions as the loss function, and SGD/ADAM [41] for training. This multi-task learning neural decoder (MTLND) is split into two NNs, to infer X (Z) errors based solely on Z (X) syndrome bits.

##### 4.4 Complexity analysis

The computation elements for the NNs here are exclusively multiplication and addition. With a stepper-manner implementation of all 3D CNNs, the total number of layers in the frontend is around O(log L). The sizes of all FCNs are chosen to be independent of L, with depth O(1). Hence, the depth of the NNs is O(log L), which puts a small lower bound on the computation latency if all layers can be sufficiently parallelized to finish in O(1) steps. Suppose the kernel size is lower bounded by k.
The total number of multiplication operations, which dominates the computation, is bounded from above by

$$\sum_{i=1}^{\lceil \log_k L \rceil} C^2 K^3 \frac{L^3}{k^{3i}} + D\left(\frac{L^2}{\min_j\{|\mathbf{s}_j|\}} + 2\right) \sim O(L^3), \qquad (4)$$

where C and D are the maximum number of input/output channels of the 3D CNNs and of edges of each multi-layer FCN, respectively. Such complexity is competitive with UF. The total number of parameters for each NN can be bounded by

$$C^2 K^3 \lceil \log_k(L) \rceil + D\left(\frac{L^2}{\min_j\{|\mathbf{s}_j|\}} + 2\right) \sim O(L^2). \qquad (5)$$

This relatively slow scaling makes the hardware implementation feasible for loading all the parameters into on-chip memories, whose sizes are often limited.

##### 4.5 Training and Quantization

**Training.** The training data set is generated by simulating circuit-level noise at ps = pg = pm ∼ 0.006; each sampled 3D syndrome array S is paired with its label (L_c, s). For X (Z) errors, one may utilize either Z (X) type syndromes or a combination of both X and Z syndromes as input for the MTLND. The latter approach offers superior accuracy but requires a significantly more intricate neural network structure. The training is carried out through ADAM in PyTorch 1.5 with batch sizes of 700-1000 for 8 to 10 epochs on two NVIDIA V100 GPUs.

**Quantization.** We choose the non-saturating quantization scheme for all weights and biases [39]. The outputs of each layer are re-scaled so that the input data of the subsequent layer is maintained as signed 8-bit integers. As we will see, this simplifies the implementation of the arithmetic modules and data files, while incurring only a small loss of accuracy.

##### 5. DECODER OVERVIEW

##### 5.1 Decoder Microarchitecture: A Big Picture

**Figure 5: Decoder overview.**

Figure 5 shows the microarchitecture of our proposed decoder. We describe and explain the main components and functions of the decoder as follows:

**Syndrome Bits.** Syndrome bits are measurement results obtained from the classical readout logic. For RSCs with distance L, T ∼ O(L) rounds of measurements are required to guarantee fault tolerance. Better decoding accuracy requires larger T. These T slices of syndrome bits are combined into a 3D array and fed into either the X-type or the Z-type decoding logic, depending on the ancilla type.

**Network Parameter File.** NN parameters are obtained offline through the training phase and loaded into the network parameter file before a quantum computation starts. Different sets of NN parameters need to be fetched during the decoding, demanding fast switching between various sets of parameters during real-time decoding. Therefore, we need to use on-chip memory to implement this module to avoid extensive memory loading delays. The entire storage is divided into two parts according to the different data structures, one for storing weight matrices and the other for bias vectors. These parameters are originally floating-point numbers, which lead to complicated multiplications and large storage space. To improve the storage and computational efficiency, the parameters are quantized to 8-bit signed fixed-point numbers.
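A minimal sketch of per-layer INT8 handling in this spirit (illustrative only; the actual scheme follows the non-saturating quantization of [39], and the scale selection here is a simplification):

```python
import numpy as np

def quantize_int8(w):
    """Non-saturating symmetric quantization: map max |w| onto 127."""
    scale = np.abs(w).max() / 127.0
    return np.round(w / scale).astype(np.int8), scale

def int8_fc_layer(x_q, w_q, b_q, rescale):
    """INT8 fully connected layer: widen to int32 for the accumulation,
    apply ReLU, then rescale so the next layer again sees signed 8-bit input."""
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32).T + b_q.astype(np.int32)
    acc = np.maximum(acc, 0)  # ReLU
    return np.clip(np.round(acc * rescale), -128, 127).astype(np.int8)
```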
**Neural Processing Engine (NPE).** This engine consists of the arithmetic units (AUs) for NN computation. The operators allowed include 3D CNNs and FCNs, both of which involve repeated computation of vector inner products as in Equation (1). The multiplication-addition operations in Equation (1) take up the majority of the computing resources in the NPE. Since the bias vectors are accessed only once per iteration, they can also be stored in a series of simple registers.

**LUT for Error Combination.** The error locations are identified and combined in this module. For either X-type or Z-type error decoding, the NPE generates one logical operator L̃_c^{X|Z} and (L²−1)/2 estimated bits ⊔_j s̃_j^{X|Z}. They are then translated to

$$\tilde{E}^{X|Z} = \tilde{L}_c^{X|Z}\, T\Big(\bigsqcup_j \tilde{\mathbf{s}}_j^{X|Z}\Big) = \tilde{L}_c^{X|Z} \prod_j T\big(\tilde{\mathbf{s}}_j^{X|Z}\big) \qquad (6)$$

through an LUT with (L²−1)/2 entries recording L_c^{X|Z} and {T(h_k^{X|Z})}, where h_k is an L²-length binary string with all zeros except for the k-th bit. Equation (6) corresponds to a linear combination of these entries, which is a series of pairwise exclusive-OR (XOR) operations. Afterwards, the error information is transmitted to the control module to generate error correction signals. The total memory consumption for the LUTs is 2 × ((L²−1)/2) × L² = L⁴ − L² bits. Such memory requirements are relatively small and can easily be implemented using LUTs for foreseeable code distances (e.g., only 3.5 KB for L = 13). Therefore, the main memory consumption of our NN decoder is determined by the number of network parameters.

##### 5.2 Network-Specific Architecture

Our network-specific architecture divides the AUs in the NPE into several groups for different network layers. Connections between adjacent network layers are hard-wired, and each network layer uses a separate portion of the computation resources.

**Resource Constraints.** The NPE contributes a significant part of the decoding latency. If sufficient AUs exist, the computation of each layer in the NPE can be carried out in a single step and executed fully in parallel, resulting in a very low latency. However, this approach comes at the price of considerable computational resource consumption. Although many algorithmic efforts have been made to reduce the arithmetic cost, this level of hardware overhead still makes the overall architecture impractical. The later evaluation shows that even cutting-edge FPGAs are incapable of realizing a fully parallelized L = 5 NN decoder (see Section 8).

**Resource Allocation Model.** Therefore, the resource allocation of each network layer needs to be carefully customized for optimal performance. To resolve this issue, we use an allocation model to determine the resource partitioning. Suppose there are C AUs, n_l different NN layers, and M_j multiplication operations for layer j. The problem reduces to a constrained optimization to choose a partition {C_j}:

$$\min_{\{C_j\}} \sum_{j}^{n_l} \alpha_j \frac{M_j}{C_j}, \quad \text{subject to} \quad \sum_j \alpha_j C_j = C. \qquad (7)$$

Here, α_j is the number of independent parts for layer j, which equals 1 for the frontend and is greater than 1 for the backend. This problem can be solved through the Lagrange multiplier, obtaining a real-valued solution {C_j}, which can be rounded to integers with the equality constraint satisfied. It turns out that this simple heuristic is efficient and exhibits excellent performance in our experiments.
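A minimal sketch of this allocation (Python/NumPy; the closed form C_j ∝ √M_j follows from the Lagrange stationarity just described, while the greedy rounding is a naive stand-in, not necessarily the paper's exact procedure):

```python
import numpy as np

def allocate_aus(M, alpha, C):
    """Approximately minimize sum_j alpha_j*M_j/C_j s.t. sum_j alpha_j*C_j = C.

    Stationarity of the Lagrangian gives C_j = sqrt(M_j / lambda), i.e.
    C_j proportional to sqrt(M_j); we round down and then spend any leftover
    budget greedily on the layer whose latency improves the most.
    """
    M = np.asarray(M, dtype=float)
    alpha = np.asarray(alpha, dtype=float)
    c = C * np.sqrt(M) / np.sum(alpha * np.sqrt(M))   # real-valued optimum
    c_int = np.maximum(np.floor(c).astype(int), 1)
    budget = C - int(np.sum(alpha * c_int))
    while budget > 0:
        gain = M * (1.0 / c_int - 1.0 / (c_int + 1))  # latency saved per AU
        j = int(np.argmax(gain))
        if alpha[j] > budget:
            break
        c_int[j] += 1
        budget -= int(alpha[j])
    return c_int

# Example: frontend (alpha = 1) plus a backend split into 3 heads (alpha = 3).
print(allocate_aus(M=[9000, 1200, 300], alpha=[1, 1, 3], C=512))
```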
##### 5.3 Multi-core NPE for Large Distance

**Figure 6: A multi-core NPE for large distance L.**

Note that the computational complexity grows as O(L³) (Equation (4)), which puts a hard limit on the code distance L for which the decoding algorithm can be efficiently executed on a single processing core with constrained computational resources. The intrinsic parallelism inside the MTLND can be exploited to distribute the computation of the NN to a multi-core NPE. A simplified illustration of such an approach is shown in Figure 6. The cores form a tree structure, with each core responsible for a part of the computation in the 3D CNNs/FCNs. In the context of 3D CNNs with a stepper structure, the inputs for different cores are approximately independent, necessitating minimal core-to-core communication. It should be noted that this approach is infinitely parallelizable: by fully utilizing each core, the computational scale can be expanded by adding more cores, maintaining a decoding latency of O(log L). For large-scale FTQEC involving multiple logical qubits decoded using this microarchitecture, syndrome compression as described in [16] can also be employed to conserve bandwidth.

##### 5.4 Exploiting Parallelism in Multi-round Measurements

The decoupled frontend of the MTLND allows independent execution of multiple partitioned input information blocks. The syndrome bits collected from T rounds of measurements form a 3D array input to the NPE, which can be divided into multiple information blocks. The results of each SM round are independent and arrive at the decoder sequentially at intervals of one SM period. Such features provide a certain degree of parallelism that can be exploited: instead of waiting for all syndrome bits to arrive, we prefetch information blocks that are ready ahead of the other blocks, so that different blocks can be processed in a pipeline. An example of such sliding window decoding is shown in Figure 7.

**Figure 7: Timeline of sliding window decoding.** (Syndrome blocks B1 and B2 enter the CNN calculation as soon as their rounds have arrived, overlapping the decoding time T_D with the remaining syndrome measurements.)

##### 6. PROGRAMMABLE DECODER

In this section, we present an architectural design to support a programmable decoder. This programmable architecture presents better scalability and flexibility compared to the network-specific architecture.

##### 6.1 Limitations of Network-Specific Architecture

The network-specific architecture provides good latency performance for small-sized networks due to the customized computational units of each network layer. Although many algorithmic efforts have been made and comparably low computational complexity is achieved, the resource constraint of this approach is still stringent for large NNs. Therefore, this architecture suffers from limited scalability when scaling to large code distances. Furthermore, the implemented decoder is restricted to work for specific NNs, resulting in poor flexibility for different decoder configurations. This problem becomes severe when switching to ASICs in the future, which provide better optimized performance but lack the programmability of FPGAs. Finding a solution providing flexibility while alleviating resource constraints is challenging. Meeting latency requirements further complicates the design, as additional latency overhead is often required to provide flexibility.
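To see why pooling resources helps a single inference, here is a toy comparison anticipating the insight of Section 6.2 (made-up layer sizes, only meant to illustrate the argument, not measured numbers):

```python
# Toy single-inference latency, counted in multiply steps per AU.
layers = [9000, 1200, 300]        # multiplications per layer (illustrative)
dedicated = [300, 40, 10]         # AUs hard-wired per layer (network-specific)
pooled = sum(dedicated)           # same total budget, shared by every layer

t_specific = sum(m / c for m, c in zip(layers, dedicated))  # others sit idle
t_pooled = sum(m / pooled for m in layers)                  # all AUs per layer

print(f"network-specific: {t_specific:.0f} steps, pooled: {t_pooled:.0f} steps")
```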
##### 6.2 Insight: Maximizing Resource Utilization within a Given Time Frame

A single instance of syndrome-array decoding necessitates resource optimization within the decoding duration, which is distinct from the emphasis on high average throughput in conventional NN accelerators. Given that a decoder's different network layers do not operate simultaneously, we employ a generalized NPE design adaptable to various NN structures, maximizing resource utilization by allocating all available AUs to each layer, and enhancing scalability for larger code distances with a moderate latency impact. Moreover, the generalized NPE enables the development of programmable decoders.

##### 6.3 Proposal: Programmable Architecture

We propose a programmable architecture to achieve flexibility and better scalability. The basic idea is to decompose the execution of each NN layer into a generalized three-stage process and describe it using assembly-level instructions. The decoder microarchitecture is also restructured to accommodate the instruction-based execution. Designing a dedicated architecture for neural decoders is non-trivial because, unlike previously proposed machine learning accelerators [14, 33],
Multiple parallel MAUs can help us flexibly choose how the mathematical operations of the network layers are constructed. This stage completes the primary workload of each layer. The next stage consists mainly of an adder tree (AT), which has a depth of log2 c when there is c MAUs in the MA stage. We can directly connect the output of the MA-stage to the input of the adder tree. A series of multiplexers are used to pre-fetch internal results at different depths within the adder tree, allowing flexible configuration of the MAU operations. Most importantly, this scheme helps reduce decoding latency when only part of the AT is needed for certain layer. The output of the adder tree is sent to the subsequent special function (SF)-stage, where it is summed with the bias and applied to a scaling factor for activation. The final result is then quantized and written to the data register file, waiting to be fetched as input for the next layer operations. **Single layer divided into multiple chunks. A single matrix-** vector calculation can be too large to be finished in a single parallel NPE process. Therefore, the input data of this layer is divided into multiple chunks and calculated sequentially based on the scheduling of control instructions. Hence, an accumulator is implemented in the SF-stage to complete the accumulation of the execution results of different chunks. This stage can also be bypassed according to the NPE scheduler. There are also many occasions where multiple layers can be processed in parallel, and prefetching in the AT-stage can help achieve this parallelism. **Control Instructions. Compared to classical processors, the** error decoding is a static process and the number of NPE execution rounds can be pre-determined based on the network size. Therefore, we can choose the Very Long Instruction Word (VLIW) approach to minimize the instruction execution latency. The control instructions for our programmable decoder can be divided into two groups: computation and _memory transfer. These two groups of instructions are used_ to command the NPE scheduler and register file manager respectively. Hence, the design of control instructions basically represents the method to operate the configurable FSM in the control unit. The reason for dispatching instructions based on different groups is that we can overlap the latency of reading memory with the time spent on NPE execution, thereby reducing overall latency. ##### 7. SYSTEM IMPLEMENTATION In order to give a comprehensive evaluation of our design, we built an FPGA-based system consisting of the decoder itself and control hardware for readout and error correction. ##### 7.1 Decoder Implementation We use Intel Stratix 10 family FPGAs to implement our decoder. We mainly completed two types of implementations: (1) We first implemented L = 5 and L = 7 decoders whose NPE is realized using the network-specific architecture as we discussed in 5.2. These implementations are integrated into the evaluation platform to test the performance of near-term error decoding process. (2) On this basis, we also implemented the microarchitecture of the programmable decoder (see 6.3) to further evaluate the flexibility and scalability of our design. For all implementations, we focus on implementing NPE with single core. We use two FPGAs to process the decoding for X and Z errors separately. The subsequent descriptions are given based on one FPGA. 
**Network-Specific Implementation (NSI): We use T = 10** 8 ----- |Col1|Error correctio| |---|---| ||| **Figure 9: Hardware structure of the implemented decoding system. For** **evaluation purposes, we connected the measurement signal output of** **the control module directly to the readout module** syndrome measurement rounds for L = 5 and T = 14 rounds for L = 7. These quantities of measurement rounds enable us to assess our architecture’s ability to manage large syndrome inputs. Our decoder can readily transition to a smaller number of measurement rounds when practical circumstances permit. Therefore, The input syndrome results for each error type consist of 120 bits and 336bits, respectively. We trained different configurations for this design, which determines the memory consumption of the parameter file. Other minor memory consumption includes registers and flip-flops implemented to store the inputs and outputs during calculations. These are all implemented using the embedded memory of FPGAs. The main resource overhead comes from the NPE. We prioritize the use of digital signal processing (DSP) units to implement NPE for faster processing. All logical Operations are tailored to the constraints of the DSP to fully exploit the limited resource on the FPGAs. Each round of computation begins by reading new weights into the multiplexer, and the data flow is already hard-wired between different layers. **Programmable Architecture: In this implementation, the** NPE is structured as a three-stage unit and can be reused by all network layers in the NSI, as well as other different network structures and code distances. Instead of maximizing the utilization of FPGA resources, we take the the largest layer in the NSI, max(C _j) in Equation (7), as the resource_ constraint for this implementation. This helps us evaluate the effectiveness of our programmable decoder and gain a better understanding of its latency performance. ##### 7.2 Integrating With Control Hardware The control hardware of the decoding system is also implemented using custom hardware. The schematic of the entire system is shown in Figure 9. Each analog-digital interface and its counterparts contain sixteen analog-to-digital converters (ADCs) and digital-to-analog converters (DACs), respectively, for digitizing and generating analog signals. The decoder takes digitized measurement results as the input syndrome bits, and informs the control module to correct errors. All control and readout modules are connected to the decoding module, and a backplane is implemented to provide wiring of these connections. ##### 8. EVALUATION RESULTS 8.1 Near-term Decoders: L = 5 and L = 7 With Network-Specific Architecture We first use the evaluation platform to test the performance of Network-Specific decoder, which implements FTQEC for both L = 5 and L = 7. **NN structure: Our L = 5 decoder has one 3D CNN layer and** one FCN layer in the frontend and the backend is composed of 3 two-layer FCNs. The NN structure of L = 7 decoder is larger: three 3D CNN layers and one FCN layer in the frontend, and 3 two-layer FCNs for the backend. For evaluation, we choose two regimes for the number N of parameters for _L = 5: N_ 90K and N 330K. L = 7 decoder has N 960K _≈_ _≈_ _≈_ parameters. **Hardware complexity: The resource utilization of each** FPGA in the implemented decoding module is shown in Table 2. We used two FPGAs to achieve complete error decoding functionality. 
**Hardware complexity:** The resource utilization of each FPGA in the implemented decoding module is shown in Table 2. We used two FPGAs to achieve complete error-decoding functionality. Regarding the logic resources, DSP blocks and Adaptive Logic Modules (ALMs) are used for implementing the NPE. We utilized these computing resources as much as possible, as discussed in the resource allocation model in Section 5.2. In the implementation of L = 7, N ≈ 960K, the resource utilization of DSP blocks and ALMs is 82% and 76%, respectively. A higher level of resource utilization would hamper FPGA routing and can make synthesis fail. The resource consumption of L = 5, N ≈ 90K is much lower, and all network layers are maximally parallelized. The memory consumption of the decoder primarily comes from the parameter file. As shown in Table 2, this level of memory consumption is moderate considering that modern FPGAs provide 10-20 MB of embedded memory.

| Configuration | Memory Bits | DSP Block Utilization | ALM Utilization |
|---|---|---|---|
| L = 5, T = 10, N ≈ 90K | 114 KB | 21% | 24% |
| L = 5, T = 10, N ≈ 330K | 532 KB | 81% | 67% |
| L = 7, T = 14, N ≈ 960K | 1.43 MB | 82% | 76% |

**Table 2: Hardware complexity**

**Latency:** The measured latency results of the different configurations are shown in Table 3. The fully-pipelined architecture of the NSI takes 67 cycles to obtain the error positions for the L = 5, N ≈ 90K configuration, resulting in a decoding latency of 197 ns. The latency of our L = 7, N ≈ 960K configuration is 1.136 µs, which is good performance considering the resource constraints of current FPGAs. Note that this decoding latency is independent of the physical error rate p.

| Implementation and Configuration | Frequency | Decoding Latency | Total Latency |
|---|---|---|---|
| NSI, L = 5, T = 10, N ≈ 90K | 330 MHz | 197 ns | 540 ns |
| NSI, L = 5, T = 10, N ≈ 330K | 300 MHz | 267 ns | 610 ns |
| NSI, L = 7, T = 14, N ≈ 960K | 250 MHz | 1.136 µs | 1.48 µs |

**Table 3: Latency of different configurations**

The total latency of our system is obtained by measuring the time interval between receiving measurement signals and issuing correction signals. We connect these two channels to an oscilloscope for testing, as shown in Figure 10. The total latency is measured to be 540 ns, which is fast enough for near-term FTQEC. Our solution supports synchronization and data transmission between dozens of modules, and is the fastest real-time FT decoding system ever built for a surface code of approximately 100 qubits.

**Figure 10: Experimental setup and method for measuring latency.** (The annotated trace marks the 197 ns and 540 ns intervals between the measurement and error-correction signals.)

**Accuracy:** Figure 11 shows the logical error rate obtained from performing Monte Carlo experiments using our evaluation platform. Our system, across different parameter numbers and quantization choices, exhibits accuracy close to that of MWPM, and the quantization of the NNs has only a small effect on the accuracy. This shows that our solution, while achieving very low latency, does not sacrifice much accuracy. We also notice that our system behaves closer to MWPM as the physical error rate gets smaller, which means that our decoder can become more effective as the quantum hardware progresses.

**Figure 11: Real-time decoding performance of L = 5 and L = 7.** (The plotted curves compare INT8/INT10-quantized and FP32 networks against MWPM for L = 5, T = 10 and L = 7, T = 14.)
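As a reference point for how such Monte Carlo accuracy numbers are typically produced, the sketch below estimates a logical error rate by sampling noisy shots and counting decoder failures. The `sample_shot` and `decode` callables are placeholders for a circuit-level noise simulator and the decoder under test, and treating the decoder's task as predicting the logical flip from the syndrome is our simplification of the full workflow.

```python
def logical_error_rate(decode, sample_shot, trials=100_000, p=1e-3):
    """Monte Carlo estimate: draw (syndrome, true logical flip) pairs under
    physical error rate p, decode each syndrome, and count disagreements."""
    failures = 0
    for _ in range(trials):
        syndrome, logical_flip = sample_shot(p)
        if decode(syndrome) != logical_flip:   # residual logical error
            failures += 1
    return failures / trials
```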
##### 8.2 Accuracy of Various Code Distances

Based on our NSI, we further estimated the accuracy of our MTLND for various code distances. The accuracy results of L = 3, L = 7, and L = 9 (with T = 3, T = 14, and T = 12) are also obtained using software simulation. The specifications of these configurations are shown in Table 4, which shows a moderate scaling, suitable for large-scale FTQEC. It should be noted that for L = 11, the MTLND employs both X and Z syndromes with a sufficiently complex NN to showcase its ability to achieve accuracy close to MWPM at larger scales.

| Configuration | #. Layers | #. Params | #. Mults | #. Train. Data |
|---|---|---|---|---|
| L = 3, T = 3 | 3 | ∼60K | ∼2M | ∼2M |
| L = 5, T = 10 | 4 | ∼330K | ∼400K | ∼10M |
| L = 7, T = 14 | 6 | ∼960K | ∼3.17M | ∼100M |
| L = 9, T = 12 | 8 | ∼2.3M | ∼10M | ∼240M |
| L = 11, T = 11 | 10 | ∼17M | ∼87M | ∼300M |

**Table 4: NN specs and resources for the MTLND.**

The logical error rates of these configurations are shown in Figure 12; they are all close to their MWPM counterparts while achieving a high accuracy threshold of around 0.8%.

**Figure 12: Logical error rate for different code distances.** (The plot compares MWPM and MTLND for L = 3, 5, 7, 9, and 11 at their respective T, over physical error rates from 5e-4 to 0.013.)

In actual QEC experiments, one cannot access the exact noise model, which typically differs from the error model used to train the MTLND. Here, we consider the error model with ps : pg : pm = 1 : 3 : 5, which fits the reality that gate and measurement error rates are much larger than the single-qubit memory error rate for superconducting qubits. Figure 13 shows the logical error rate for the same network trained on the standard training set (standard MTLND) and on one generated with ps = 0.0024, pg = 0.0072, and pm = 0.012 (reweighted MTLND). This demonstrates that the standard MTLND can still operate effectively with a slight performance tradeoff, while the reweighted version maintains a level of accuracy similar to MWPM.

**Figure 13: Logical error rate for standard and reweighted MTLND in the case ps : pg : pm = 1 : 3 : 5.**
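A minimal sketch of how training data can be reweighted toward the ps : pg : pm = 1 : 3 : 5 regime is given below: sample independent faults per location type with the three rates. The location counts are placeholders, and a real circuit-level simulator would additionally propagate each fault to a syndrome.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_fault_locations(n_idle, n_gate, n_meas, ps, pg, pm):
    """Sample which circuit locations fail in one shot, with independent
    per-type fault rates: idle/memory (ps), gate (pg), measurement (pm)."""
    return (rng.random(n_idle) < ps,
            rng.random(n_gate) < pg,
            rng.random(n_meas) < pm)

# The reweighted regime above: ps : pg : pm = 1 : 3 : 5
faults = sample_fault_locations(100, 200, 120, ps=0.0024, pg=0.0072, pm=0.012)
```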
##### 8.3 Compared to Prior Decoders

Figure 14 compares the MTLND with various previously proposed decoders. The MTLND with T = 10 clearly outperforms both LU-DND [13] and LILLIPUT [15] and is comparable with the weighted UF decoder [38].

**Figure 14: Decoding performance of different decoders for L = 5.** (The plot compares LU-DND (T = 6), LILLIPUT (T = 2), MWPM (T = 5 and T = 10), weighted UF (T = 10), and MTLND (T = 10).)

##### 8.4 Programmable Architecture

**Hardware complexity:** The FPGA resource utilization comparison of our programmable decoder and the NSI is shown in Figure 15. Note that the same set of arithmetic units in the programmable decoder is applied to all network layers. As a result, it achieves a 2.4× reduction in DSP blocks and 3.0× in ALMs. This result shows that our programmable architecture effectively reduces resource consumption and offers better scalability.

| Component | DSP | ALM |
|---|---|---|
| NSI, L=7, 1st layer | 0 (0%) | 59k (8.4%) |
| NSI, L=7, 2nd layer | 630 (15.9%) | 113k (16.1%) |
| NSI, L=7, 3rd layer | 1176 (29.7%) | 173k (24.6%) |
| NSI, L=7, 4th layer | 676 (17.1%) | 63k (9.0%) |
| NSI, L=7, 5th layer | 544 (13.7%) | 82k (11.7%) |
| NSI, L=7, 6th layer | 220 (5.5%) | 40k (5.7%) |
| NSI, L=7, total | 3246 (82%) | 534k (76%) |
| Programmable | 1340 (34%) | 179k (26%) |

**Figure 15: FPGA resource utilization of our NSI (L = 7, N ≈ 960K) and programmable decoder.**
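The 2.4× and 3.0× figures can be checked directly from the table above; note that the per-layer ALM values are rounded to thousands, which is why the re-summed ratio lands slightly below 3.0.

```python
# Per-layer NSI usage from the Figure 15 table vs. the reused NPE of the
# programmable decoder (ALM values in thousands, as printed in the table).
nsi_dsp = [0, 630, 1176, 676, 544, 220]
nsi_alm = [59, 113, 173, 63, 82, 40]
prog_dsp, prog_alm = 1340, 179

print(sum(nsi_dsp) / prog_dsp)  # ~2.42x fewer DSP blocks
print(sum(nsi_alm) / prog_alm)  # ~2.96x (the reported 534k total gives ~2.98x)
```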
**Reconfigurability and decoding latency:** We tested various configurations on the programmable decoder. All of these configurations work correctly and have been verified using the evaluation platform. Their decoding latency results are shown in Table 5. Compared to the NSI, our programmable architecture incurs only a small latency loss in exchange for substantially reduced resource overhead. Note that this programmable decoder is implemented with only a small portion of the FPGA's computational resources; a fully-utilized programmable decoder could potentially achieve better latency than the corresponding NSI. Furthermore, we have also tested an L = 9 configuration, demonstrating that our programmable decoder is capable of handling decoders with large code distances.

| Implementation and Configuration | Frequency | Decoding Latency |
|---|---|---|
| Programmable, L = 5, T = 10, N ≈ 90K | 260 MHz | 373 ns |
| Programmable, L = 5, T = 10, N ≈ 330K | 260 MHz | 454 ns |
| Programmable, L = 7, T = 14, N ≈ 960K | 260 MHz | 2.13 µs |
| Programmable, L = 9, T = 12, N ≈ 2.4M | 260 MHz | 4.827 µs |

**Table 5: Latency of processing different configurations on the programmable decoder**

**Estimated performance on ASIC:** By transitioning to an ASIC platform, our system's performance can be further enhanced due to an increased clock frequency (assuming 2.5 GHz) and the elimination of the FPGA-induced extra cycles for loading NN parameters. We assess the L = 7 and L = 9 configurations on the FPGA implementation and then estimate the corresponding ASIC latency results, displayed in Table 6.

| Configuration | Platform and Assumed Frequency | Estimated Decoding Latency |
|---|---|---|
| L = 7, T = 14, N ≈ 960K | ASIC, 2.5 GHz | 170 ns |
| L = 9, T = 12, N ≈ 2.3M | ASIC, 2.5 GHz | 394 ns |

**Table 6: Estimated latency of larger code distances on the programmable decoder**
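The Table 6 numbers can be reproduced with simple frequency scaling. The parameter-loading overhead below is reverse-engineered so that the L = 7 case lands on 170 ns; the text does not state this cycle count, so treat it purely as an assumption (the L = 9 row follows the same arithmetic with its own overhead).

```python
# Frequency-scaling estimate for the L = 7 row of Table 6.
f_fpga, f_asic = 260e6, 2.5e9
fpga_latency = 2.13e-6                   # programmable decoder on the FPGA
total_cycles = fpga_latency * f_fpga     # ~554 cycles at 260 MHz
param_load_cycles = 129                  # assumed FPGA-only overhead (a guess)
print((total_cycles - param_load_cycles) / f_asic)   # ~1.7e-7 s = 170 ns
```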
##### 8.5 Test on Google's Experiment Setting

We additionally refined our noise model to integrate an effective circuit-level noise representation, informed by Google's experimental data on the surface code [2, 31], with pg ∼ 0.005, ps ∼ 0.004, and pm ∼ 0.018. The MTLND was trained and assessed under these conditions. Figure 16 illustrates the accuracy results upon extrapolation to lower noise rates.

**Figure 16: Evaluation of accuracy for the MTLND approach utilizing an error model extracted from experiments conducted by Google.** (The plot shows the logical error rate of the L = 5, T = 10 MTLND under the Google error model as a function of the memory error rate.)

##### 9. RELATED WORK

The challenges and prospects of real-time decoder research were recently reviewed [8]. The review highlights that the goal of recent research is to provide concrete evidence that real-time decoding is achievable in practice. Our work aims to accomplish this by employing realistic noise models and implementing a comprehensive system.

**LUT Decoders.** The decoder in [15] employs an LUT indexed by syndrome bits for the error-correction search, providing inherent programmability and low latency because only memory access time is required. However, this LUT method is not scalable, as the number of entries grows exponentially.

**Union-Find Decoders [16, 45].** The UF algorithm potentially offers hardware implementation simplicity, yet parallelizing this graph-based approach for low latency remains challenging. Moreover, in [16, 45], only the phenomenological noise model is considered, while incorporating circuit-level noise would considerably impede the decoder's speed.

**Other Neural Decoders.** In [48], the networks are restricted to FCNs, limiting their ability to manage large code distances and realistic error models. Chamberland et al. [12, 13] investigated CNNs and estimated hardware performance; however, their decoders either exhibited high latency (over 2000 µs) or unsatisfactory accuracy. To the best of our knowledge, reconfigurable neural decoders have not been previously explored. Furthermore, our programmable solution's architectural benefits enable improved scalability compared to prior work.

**SFQ-based Decoders.** Superconducting Single Flux Quantum (SFQ) technology offers high clock speeds and qubit integration capabilities. However, current SFQ-based decoders [36, 50, 62, 63, 64] are hindered by limited computational power, resulting in poor accuracy. Scaling up this approach presents a considerable challenge, barring near-term advancements in superconducting logic device densities.

**Real-time QEC Experiments.** Experiments on real-time QEC have emerged in the past years, including those using the repetition code [53], the Gottesman-Kitaev-Preskill (GKP) code [57], and the distance-3 color code [54]. Such simple codes are inadequate for handling general or complex noise; consequently, these demonstrations are restricted to small-sized QECCs.

##### 10. CONCLUSIONS

Developing scalable and accurate real-time decoders for FTQEC has been an active area of research. In this work, we propose a neural decoding system that suits both near-term and large-scale FTQCs. We carry out both algorithmic and architectural optimizations for accuracy, scalability, and low latency. Furthermore, our programmable architecture provides the flexibility to explore different decoding configurations and adapt to a variety of FTQEC scenarios. Finally, we built a comprehensive decoding system using off-the-shelf FPGAs to evaluate our design. A demonstration of the L = 5, T = 10 decoder costs 197 ns on the real device while approaching accuracy comparable to MWPM under circuit-level noise. The evaluation shows the capability of our system for near-term and large-scale real-time FTQEC.

##### ACKNOWLEDGMENTS

We thank all members of Tencent Quantum Laboratory who contributed to the experimental set-up. This work is funded in part by the Key-Area Research and Development Program of Guangdong Province, under grant 2020B0303030002.

##### REFERENCES

[1] "Our new 2022 development roadmap," https://www.ibm.com/quantum/roadmap, accessed: 2022-11-15.
[2] R. Acharya et al., "Suppressing quantum errors by scaling a surface code logical qubit," Nature, vol. 614, no. 7949, pp. 676-681, 2023.
[3] D. Aharonov, A. Kitaev, and J. Preskill, "Fault-tolerant quantum computation with long-range correlated noise," Phys. Rev. Lett., vol. 96, no. 5, p. 050504, 2006.
[4] P. Aliferis, D. Gottesman, and J. Preskill, "Quantum accuracy threshold for concatenated distance-3 codes," Quantum Inf. Comput., vol. 6, p. 97, 2006.
[5] C. K. Andersen et al., "Repeated quantum error detection in a surface code," Nature Physics, vol. 16, no. 8, pp. 875-880, 2020.
[6] F. Arute et al., "Quantum supremacy using a programmable superconducting processor," Nature, vol. 574, no. 7779, pp. 505-510, 2019.
[7] P. Baireuther, M. Caio, B. Criger, C. W. Beenakker, and T. E. O'Brien, "Neural network decoder for topological color codes with circuit level noise," New J. Phys., vol. 21, no. 1, p. 013003, 2019.
[8] F. Battistel et al., "Real-time decoding for fault-tolerant quantum computing: Progress, challenges and outlook," arXiv preprint arXiv:2303.00054, 2023.
[9] S. Bravyi, O. Dial, J. M. Gambetta, D. Gil, and Z. Nazario, "The future of quantum computing with superconducting qubits," J. Appl. Phys., vol. 132, no. 16, p. 160902, 2022.
[10] S. B. Bravyi and A. Y. Kitaev, "Quantum codes on a lattice with boundary," arXiv:quant-ph/9811052, 1998.
[11] R. Caruana, "Multitask learning: A knowledge-based source of inductive bias," in Machine Learning: Proceedings of the Tenth International Conference, 1993, pp. 41-48.
[12] C. Chamberland, L. Goncalves, P. Sivarajah, E. Peterson, and S. Grimberg, "Techniques for combining fast local decoders with global decoders under circuit-level noise," arXiv preprint arXiv:2208.01178, 2022.
[13] C. Chamberland and P. Ronagh, "Deep neural decoders for near term fault-tolerant experiments," Quantum Sci. Tech., vol. 3, no. 4, p. 044002, 2018.
[14] T. Chen et al., "DianNao: A small-footprint high-throughput accelerator for ubiquitous machine-learning," ACM SIGARCH Computer Architecture News, vol. 42, no. 1, pp. 269-284, 2014.
[15] P. Das, A. Locharla, and C. Jones, "LILLIPUT: A lightweight low-latency lookup-table based decoder for near-term quantum error correction," arXiv preprint arXiv:2108.06569, 2021.
[16] P. Das et al., "AFS: Accurate, fast, and scalable error-decoding for fault-tolerant quantum computers," in 2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA). IEEE, 2022, pp. 259-273.
[17] A. Davaasuren, Y. Suzuki, K. Fujii, and M. Koashi, "General framework for constructing fast and near-optimal machine-learning-based decoder of the topological stabilizer codes," Phys. Rev. Res., vol. 2, no. 3, p. 033399, 2020.
[18] N. Delfosse, "Hierarchical decoding to reduce hardware requirements for quantum computing," arXiv preprint arXiv:2001.11427, 2020.
[19] N. Delfosse and N. H. Nickerson, "Almost-linear time decoding algorithm for topological codes," Quantum, vol. 5, p. 595, Dec. 2021.
[20] E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, "Topological quantum memory," J. of Math. Phys., vol. 43, p. 4452, 2002.
[21] E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, "Topological quantum memory," Journal of Mathematical Physics, vol. 43, no. 9, pp. 4452-4505, 2002.
[22] J. Edmonds, "Paths, trees, and flowers," Can. J. Math., vol. 17, p. 449, 1965.
[23] A. G. Fowler, "Proof of finite surface code threshold for matching," Physical Review Letters, vol. 109, no. 18, p. 180502, 2012.
[24] A. G. Fowler, "Minimum weight perfect matching of fault-tolerant topological quantum error correction in average O(1) parallel time," Quantum Inf. Comput., vol. 15, no. 1-2, pp. 145-158, 2015.
[25] A. G. Fowler, M. Mariantoni, J. M. Martinis, and A. N. Cleland, "Surface codes: Towards practical large-scale quantum computation," Phys. Rev. A, vol. 86, p. 032324, 2012.
[26] A. G. Fowler, A. M. Stephens, and P. Groszkowski, "High-threshold universal quantum computation on the surface code," Physical Review A, vol. 80, no. 5, p. 052312, 2009.
[27] A. G. Fowler, A. C. Whiteside, and L. C. Hollenberg, "Towards practical classical processing for the surface code," Phys. Rev. Lett., vol. 108, no. 18, p. 180501, 2012.
[28] A. G. Fowler, A. C. Whiteside, and L. C. Hollenberg, "Towards practical classical processing for the surface code: Timing analysis," Phys. Rev. A, vol. 86, no. 4, p. 042313, 2012.
[29] X. Fu et al., "An experimental microarchitecture for a superconducting quantum processor," in Proceedings of the 50th Annual IEEE/ACM International Symposium on Microarchitecture, 2017, pp. 813-825.
[30] S. Gicev, L. C. Hollenberg, and M. Usman, "A scalable and fast artificial neural network syndrome decoder for surface codes," arXiv preprint arXiv:2110.05854, 2021.
[31] Google Quantum AI Team, "Data for 'Suppressing quantum errors by scaling a surface code logical qubit'," https://zenodo.org/record/6804040#.ZEndcuxBya2.
[32] D. Gottesman, "Stabilizer codes and quantum error correction," PhD thesis, 1997.
[33] T. J. Ham et al., "ELSA: Hardware-software co-design for efficient, lightweight self-attention mechanism in neural networks," in 2021 ACM/IEEE 48th Annual International Symposium on Computer Architecture (ISCA). IEEE, 2021, pp. 692-705.
[34] O. Higgott, "PyMatching: A Python package for decoding quantum codes with minimum-weight perfect matching," ACM Transactions on Quantum Computing, vol. 3, no. 3, pp. 1-16, 2022.
[35] O. Higgott and C. Gidney, "Sparse blossom: correcting a million errors per core second with minimum-weight matching," arXiv preprint arXiv:2303.15933, 2023.
[36] A. Holmes et al., "NISQ+: Boosting quantum computing power by approximating quantum error correction," in 2020 ACM/IEEE 47th Annual International Symposium on Computer Architecture (ISCA). IEEE, 2020, pp. 556-569.
[37] C. Horsman, A. G. Fowler, S. Devitt, and R. Van Meter, "Surface code quantum computing by lattice surgery," New J. Phys., vol. 14, no. 12, p. 123011, 2012.
[38] S. Huang, M. Newman, and K. R. Brown, "Fault-tolerant weighted union-find decoding on the toric code," Phys. Rev. A, vol. 102, no. 1, p. 012419, 2020.
[39] B. Jacob et al., "Quantization and training of neural networks for efficient integer-arithmetic-only inference," in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2018, pp. 2704-2713.
[40] S. Ji, W. Xu, M. Yang, and K. Yu, "3D convolutional neural networks for human action recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, no. 1, pp. 221-231, 2012.
[41] D. P. Kingma and J. Ba, "Adam: A method for stochastic optimization," CoRR, vol. abs/1412.6980, 2015.
[42] A. Kitaev, "Fault-tolerant quantum computation by anyons," Ann. of Phys., vol. 303, p. 2, 2003.
[43] S. Krinner et al., "Realizing repeated quantum error correction in a distance-three surface code," Nature, vol. 605, no. 7911, pp. 669-674, 2022.
[44] D. Lidar and T. Brun, Quantum Error Correction. Cambridge University Press, Cambridge, September 2013.
[45] N. Liyanage, Y. Wu, A. Deters, and L. Zhong, "Scalable quantum error correction for surface codes using FPGA," arXiv preprint arXiv:2301.08419, 2023.
[46] K. Meinerz, C.-Y. Park, and S. Trebst, "Scalable neural decoder for topological surface codes," Phys. Rev. Lett., vol. 128, no. 8, p. 080505, 2022.
[47] X. Ni, "Neural network decoders for large-distance 2D toric codes," Quantum, vol. 4, p. 310, 2020.
[48] R. W. Overwater, M. Babaie, and F. Sebastiano, "Neural-network decoders for quantum error correction using surface codes: A space exploration of the hardware cost-performance tradeoffs," IEEE Transactions on Quantum Engineering, vol. 3, pp. 1-19, 2022.
[49] D. Poulin, "Optimal and efficient decoding of concatenated quantum block codes," Phys. Rev. A, vol. 74, p. 052333, 2006.
[50] G. S. Ravi et al., "Better than worst-case decoding for quantum error correction," arXiv preprint arXiv:2208.08547, 2022.
[51] W. Ren et al., "Experimental quantum adversarial learning with programmable superconducting qubits," arXiv preprint arXiv:2204.01738, 2022.
[52] L. Riesebos, X. Fu, S. Varsamopoulos, C. G. Almudever, and K. Bertels, "Pauli frames for quantum computer architectures," in Proceedings of the 54th Annual Design Automation Conference 2017, 2017, pp. 1-6.
[53] D. Ristè et al., "Real-time processing of stabilizer measurements in a bit-flip code," npj Quantum Inf., vol. 6, no. 1, pp. 1-6, 2020.
[54] C. Ryan-Anderson et al., "Realization of real-time fault-tolerant quantum error correction," Phys. Rev. X, vol. 11, no. 4, p. 041058, 2021.
[55] K. Satzinger et al., "Realizing topologically ordered states on a quantum processor," Science, vol. 374, pp. 1237-1241, 2021.
[56] P. Shor, "Fault-tolerant quantum computation," in Proc. 37th Annual Symposium on Foundations of Computer Science. Los Alamitos, CA: IEEE Computer Society Press, 1996, p. 56.
[57] V. V. Sivak et al., "Real-time quantum error correction beyond break-even," arXiv preprint arXiv:2211.09116, 2022.
[58] L. Skoric, D. E. Browne, K. M. Barnes, N. I. Gillespie, and E. T. Campbell, "Parallel window decoding enables scalable fault tolerant quantum computation," arXiv preprint arXiv:2209.08552, 2022.
[59] B. M. Terhal, "Quantum error correction for quantum memories," Rev. Mod. Phys., vol. 87, no. 2, p. 307, 2015.
[60] Y. Tomita and K. M. Svore, "Low-distance surface codes under realistic quantum noise," Phys. Rev. A, vol. 90, no. 6, p. 062320, 2014.
[61] G. Torlai and R. G. Melko, "Neural decoder for topological codes," Phys. Rev. Lett., vol. 119, no. 3, p. 030501, 2017.
[62] Y. Ueno, M. Kondo, M. Tanaka, Y. Suzuki, and Y. Tabuchi, "QECOOL: On-line quantum error correction with a superconducting decoder for surface code," in 2021 58th ACM/IEEE Design Automation Conference (DAC). IEEE, 2021, pp. 451-456.
[63] Y. Ueno, M. Kondo, M. Tanaka, Y. Suzuki, and Y. Tabuchi, "NEO-QEC: Neural network enhanced online superconducting decoder for surface codes," arXiv preprint arXiv:2208.05758, 2022.
[64] Y. Ueno, M. Kondo, M. Tanaka, Y. Suzuki, and Y. Tabuchi, "QULATIS: A quantum error correction methodology toward lattice surgery," in 2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA). IEEE, 2022, pp. 274-287.
[65] S. Varsamopoulos, K. Bertels, and C. G. Almudever, "Decoding surface code with a distributed neural network-based decoder," Quantum Mach. Intel., vol. 2, no. 1, pp. 1-12, 2020.
[66] S. Varsamopoulos, K. Bertels, and C. G. Almudever, "Comparing neural network based decoders for the surface code," IEEE Trans. Comput., 2019.
[67] S. Varsamopoulos, B. Criger, and K. Bertels, "Decoding small surface codes with feedforward neural networks," Quantum Sci. Tech., vol. 3, no. 1, p. 015004, 2017.
[68] D. S. Wang, A. G. Fowler, and L. C. Hollenberg, "Surface code quantum computing with error rates over 1%," Phys. Rev. A, vol. 83, no. 2, p. 020302, 2011.
[69] M. Zhang et al., "Exploiting different levels of parallelism in the quantum control microarchitecture for superconducting qubits," in MICRO-54: 54th Annual IEEE/ACM International Symposium on Microarchitecture (MICRO), 2021, pp. 898-911.
[70] Y. Zhao et al., "Realization of an error-correcting surface code with superconducting qubits," Phys. Rev. Lett., vol. 129, p. 030501, Jul 2022.
[71] Y.-C. Zheng, C.-Y. Lai, T. A. Brun, and L.-C. Kwek, "Constant depth fault-tolerant Clifford circuits for multi-qubit large block codes," Quantum Sci. Tech., vol. 5, no. 4, p. 045007, 2020.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2305.15767, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "CLOSED", "url": "http://arxiv.org/pdf/2305.15767" }
2,023
[ "JournalArticle" ]
true
2023-05-25T00:00:00
[ { "paperId": "eb57a5cc2ea992a9f1469752d56c61de84798c6f", "title": "Sparse Blossom: correcting a million errors per core second with minimum-weight matching" }, { "paperId": "c782e2c5456a352207982e084c0986adc5e5b822", "title": "Real-time decoding for fault-tolerant quantum computing: progress, challenges and outlook" }, { "paperId": "01fa4b33dbd61a640c18adcddb778c405cc21fcf", "title": "Scalable Quantum Error Correction for Surface Codes Using FPGA" }, { "paperId": "aa1bad7cc4deec0b79f1678d7639dc841c36912d", "title": "Real-time quantum error correction beyond break-even" }, { "paperId": "80ec53a4d56b8fae66ef66678b8098e4583d3e72", "title": "Parallel window decoding enables scalable fault tolerant quantum computation" }, { "paperId": "89a30b5dab02c9c390a632acad481fa602859272", "title": "The future of quantum computing with superconducting qubits" }, { "paperId": "6c30179e4c8e205eca432502abb0757ef98c77b2", "title": "Better Than Worst-Case Decoding for Quantum Error Correction" }, { "paperId": "621576fda1a07e3362151eef72b204333bc7efe9", "title": "NEO-QEC: Neural Network Enhanced Online Superconducting Decoder for Surface Codes" }, { "paperId": "f25745251b390cd5f62479bddc111f7d0dffa227", "title": "Techniques for combining fast local decoders with global decoders under circuit-level noise" }, { "paperId": "fa08c638941ee4f8aa4da8c33bb9720eb3d5aef3", "title": "Suppressing quantum errors by scaling a surface code logical qubit" }, { "paperId": "5e5ae2df258e7c838acc6d925c7b20fff63ac3e0", "title": "Experimental quantum adversarial learning with programmable superconducting qubits" }, { "paperId": "1795cacfcd2259b465eb34149405f5cc3753e7a0", "title": "AFS: Accurate, Fast, and Scalable Error-Decoding for Fault-Tolerant Quantum Computers" }, { "paperId": "7bb429b90d6e7604329a106df670128b1856b9d1", "title": "QULATIS: A Quantum Error Correction Methodology toward Lattice Surgery" }, { "paperId": "9c3fd56a520e85872d593a91b826193b3ea8e27d", "title": "Neural-Network Decoders for Quantum Error Correction Using Surface Codes: A Space Exploration of the Hardware Cost-Performance Tradeoffs" }, { "paperId": "4a0b2fc9017ac8a594544fb77823c6439f7fb3b7", "title": "Realization of an Error-Correcting Surface Code with Superconducting Qubits." 
}, { "paperId": "b2a57179e23d49b7c2ffe1ffb7fbb0713249d68f", "title": "Realizing repeated quantum error correction in a distance-three surface code" }, { "paperId": "1ac398fbc5dcae7988d351dcbc95c7f36fda8140", "title": "A scalable and fast artificial neural network syndrome decoder for surface codes" }, { "paperId": "ee9d80eb706bf23a7470d470b0dac718a16c5e3a", "title": "Exploiting Different Levels of Parallelism in the Quantum Control Microarchitecture for Superconducting Qubits" }, { "paperId": "150d19647466359e4ba03859b57cdf79cb89561d", "title": "LILLIPUT: A Lightweight Low-Latency Lookup-Table Based Decoder for Near-term Quantum Error Correction" }, { "paperId": "c938b8abc50711ba2a2b1ad09ad228c1c4ec02c6", "title": "Realization of Real-Time Fault-Tolerant Quantum Error Correction" }, { "paperId": "5af69480a7ae3b571df6782a11ec4437b386a7d9", "title": "ELSA: Hardware-Software Co-design for Efficient, Lightweight Self-Attention Mechanism in Neural Networks" }, { "paperId": "fcd2cdbe447ad67afda1c34dedd6d5a957c0db32", "title": "PyMatching: A Python Package for Decoding Quantum Codes with Minimum-Weight Perfect Matching" }, { "paperId": "3e6629cb18e464c4548e2e464b9d360de13c81d7", "title": "Realizing topologically ordered states on a quantum processor" }, { "paperId": "e0781ec41cf088040e40c77517d3104052ede304", "title": "QECOOL: On-Line Quantum Error Correction with a Superconducting Decoder for Surface Code" }, { "paperId": "7672e2712415db6db46995d40c790b23148f5460", "title": "Scalable Neural Decoder for Topological Surface Codes." }, { "paperId": "2e34a20471b4d2b2455d1085868c9feb9eacdc29", "title": "Real-time processing of stabilizer measurements in a bit-flip code" }, { "paperId": "5325d8194a04e425622b519447b709675dc8436b", "title": "NISQ+: Boosting quantum computing power by approximating quantum error correction" }, { "paperId": "f58df7e89d5287b5086d7a3f3a6e81bbd2e6cda8", "title": "Fault-tolerant weighted union-find decoding on the toric code" }, { "paperId": "095972788bfff774f575ebd9abb76cf7705d09a2", "title": "Constant depth fault-tolerant Clifford circuits for multi-qubit large block codes" }, { "paperId": "0dfec5dd5152e64bbe9fe612afeeaf666a2ebb74", "title": "Hierarchical decoding to reduce hardware requirements for quantum computing" }, { "paperId": "527db080e1f4715dfa04e083f386ec897ddca53f", "title": "Repeated quantum error detection in a surface code" }, { "paperId": "0b51d0e3f828d9277c93842cb9600a003f396717", "title": "Quantum supremacy using a programmable superconducting processor" }, { "paperId": "1647087126a115364219c064624541e4fd772335", "title": "Decoding surface code with a distributed neural network–based decoder" }, { "paperId": "10453b851e2348e4d74f888c6884de29d28ef6e4", "title": "Comparing Neural Network Based Decoders for the Surface Code" }, { "paperId": "0b99c2e5b27fb66eea11c9a62e8cd86710784369", "title": "Neural Network Decoders for Large-Distance 2D Toric Codes" }, { "paperId": "a76d7dbe1043c08cde3b33b9ef24f1251d40997b", "title": "Fault-tolerant quantum error correction for Steane’s seven-qubit color code with few or no extra qubits" }, { "paperId": "f1a6d49f81ef010e9c0afd1bafb37271139001b8", "title": "Neural network decoder for topological color codes with circuit level noise" }, { "paperId": "9564cf1bffd59b80f737bf9e9848947897be69a7", "title": "Deep neural decoders for near term fault-tolerant experiments" }, { "paperId": "269412b545f6bea8cae29fb31cb8c8a62615a1a0", "title": "General framework for constructing fast and near-optimal machine-learning-based decoder of the 
topological stabilizer codes" }, { "paperId": "59d0d7ccec2db66cad20cac5721ce54a8a058294", "title": "Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference" }, { "paperId": "150c8897eafe69278ab54b869b41e108eaddce76", "title": "Almost-linear time decoding algorithm for topological codes" }, { "paperId": "3f117def2c5f9fc780d2a5f314dc412a627ff64f", "title": "An Experimental Microarchitecture for a Superconducting Quantum Processor" }, { "paperId": "54f270b552f784171a41c19ba8a3bc6e92e0667c", "title": "Pauli frames for quantum computer architectures" }, { "paperId": "715a8895a4b8ab8b27294b0d475a1e1e1f0db305", "title": "Decoding small surface codes with feedforward neural networks" }, { "paperId": "031548b9beb9e9411da1da0dfdff0ed4ffa447d2", "title": "Neural Decoder for Topological Codes." }, { "paperId": "a6cb366736791bcccc5c8639de5a8f9636bf87e8", "title": "Adam: A Method for Stochastic Optimization" }, { "paperId": "f6fc40a3ef1763a86b71a6b9c60e755c94314918", "title": "Low-distance Surface Codes under Realistic Quantum Noise" }, { "paperId": "22e477a9fdde86ab1f8f4dafdb4d88ea37e31fbd", "title": "DianNao: a small-footprint high-throughput accelerator for ubiquitous machine-learning" }, { "paperId": "ddd6258a1781179fabeca3d81ad645ab883d303a", "title": "Minimum weight perfect matching of fault-tolerant topological quantum error correction in average O(1) parallel time" }, { "paperId": "3a0f4231bf931d1e3eb97b2ced99d0be0ca5c1c3", "title": "Quantum error correction for quantum memories" }, { "paperId": "f9db7ae0a333ef8a21317d1a3126d75da9d43ff4", "title": "Surface codes: Towards practical large-scale quantum computation" }, { "paperId": "9d7e1c729889a87b21b9e3e6666e48c7494cd80b", "title": "Proof of finite surface code threshold for matching." }, { "paperId": "c375e28f31864920ca0b867664e3819a577c87f7", "title": "Surface code quantum computing by lattice surgery" }, { "paperId": "2836093637f05a977df55e208ba147a2be0b6459", "title": "Towards practical classical processing for the surface code." }, { "paperId": "f2be031978dd8b4baca39461c7d4411e888c9bbb", "title": "High-threshold universal quantum computation on the surface code" }, { "paperId": "ff1013271682d6de05e1abceb77d5ef6b1ff77bd", "title": "Optimal and efficient decoding of concatenated quantum block codes" }, { "paperId": "529493d79d5c3cac383cb2345b52d4875778004e", "title": "Fault-tolerant quantum computation with long-range correlated noise." 
}, { "paperId": "fe6af917bcff5807c7574831b726b49fb20b7283", "title": "Quantum accuracy threshold for concatenated distance-3 codes" }, { "paperId": "8ba3a176211e3e9959c36cbb2e22dbdee84d3b00", "title": "Topological quantum memory" }, { "paperId": "94a4d044f04d37a8c615838ec8c0571e0760b5a6", "title": "Quantum error correction" }, { "paperId": "ede0aafcd74539724fbb3e6ab92b3d24e3425b78", "title": "Quantum codes on a lattice with boundary" }, { "paperId": "2a421807060ce02d670f50a7e6403a1cdb43e414", "title": "Fault tolerant quantum computation by anyons" }, { "paperId": "5685c6189cfc06bd474339957e3adf08e995eee1", "title": "Stabilizer Codes and Quantum Error Correction" }, { "paperId": "9464d15f4f8d578f93332db4aa1c9c182fd51735", "title": "Multitask Learning: A Knowledge-Based Source of Inductive Bias" }, { "paperId": "764ace9519283e45664e490a6df581cb68b5250b", "title": "Paths, Trees, and Flowers" }, { "paperId": "52dfa20f6fdfcda8c11034e3d819f4bd47e6207d", "title": "Ieee Transactions on Pattern Analysis and Machine Intelligence 1 3d Convolutional Neural Networks for Human Action Recognition" }, { "paperId": null, "title": "Our new 2022 development roadmap" } ]
23,862
en
[ { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Business", "source": "s2-fos-model" }, { "category": "Economics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01df7ef4de45cedd9b813d0051ad98a210e32cdd
[]
0.887772
Crowdfunding using Blockchain
01df7ef4de45cedd9b813d0051ad98a210e32cdd
International Journal for Research in Applied Science and Engineering Technology
[ { "authorId": "2297071944", "name": "Adwaith Viju" } ]
{ "alternate_issns": null, "alternate_names": [ "Int J Res Appl Sci Eng Technol" ], "alternate_urls": null, "id": "100ffd12-b334-4f02-826b-8ac408e7df49", "issn": "2321-9653", "name": "International Journal for Research in Applied Science and Engineering Technology", "type": "journal", "url": "https://www.ijraset.com/" }
Abstract: Existing crowdfunding consists of reviewing the crowdfunding field and addressing four specific issues: security, collaboration, ignorance, and support. Crowdfunding effectively raises money within and across networks. The idea behind the project is to use smart contracts to make payments and distribute rewards, making the process safe and efficient. Our Crowdfunding platform uses smart contracts to solve these limitations by offering crowdfunding using blockchain technology. The platform is designed to enable individuals and organizations to create various campaigns and finance their projects easily and efficiently.
# Crowdfunding using Blockchain

#### Adwaith Viju[1], Aarushe Reddy[2], Thejas Nair[3], Levin Viji[4], Rohit Sharma[5]

_Dept. of Computer Engineering, Pillai College of Engineering, Panvel, 410206, Maharashtra, India_

**_Abstract: Existing crowdfunding consists of reviewing the crowdfunding field and addressing four specific issues: security, collaboration, ignorance, and support. Crowdfunding effectively raises money within and across networks. The idea behind the project is to use smart contracts to make payments and distribute rewards, making the process safe and efficient. Our crowdfunding platform uses smart contracts to solve these limitations by offering crowdfunding using blockchain technology. The platform is designed to enable individuals and organizations to create various campaigns and finance their projects easily and efficiently._**

**_Index Terms: Crowdfunding, Raising money, Smart contracts, Blockchain._**

**I.** **INTRODUCTION**

Crowdfunding is a revolutionary financial model that leverages the collective support of a diverse group of individuals, often referred to as "the crowd," to fund an array of projects, ventures, or creative ideas initiated by creators, entrepreneurs, artists, or individuals with innovative visions. Unlike traditional financing methods reliant on a single institutional investor or a limited group of stakeholders, crowdfunding taps into the power of mass collaboration. It allows countless people to contribute modest sums of money, cumulatively providing the necessary capital for projects spanning from groundbreaking technological innovations and artistic creations like films and music albums to charitable initiatives and personal aspirations. Crowdfunding manifests in various forms, including reward-based crowdfunding, where backers receive non-monetary incentives in exchange for support; equity crowdfunding, where investors receive shares in a company; and donation-based crowdfunding, where individuals contribute to causes or charities. This democratized approach to financing is a necessity in today's diverse and dynamic landscape, addressing the limitations of traditional funding avenues and fostering innovation, inclusivity, and community-driven support on a global scale, redefining how we bring ideas and dreams to life.

Integrating blockchain technology and smart contracts into crowdfunding holds the potential to revolutionize the execution process. By leveraging the transparency, security, and efficiency of blockchain, crowdfunding platforms can ensure that funds are used as intended, enhancing trust among backers and creators. Smart contracts, self-executing agreements with predefined rules, can automate project milestones and fund disbursements, eliminating the need for intermediaries. This automation streamlines project execution, reduces administrative overhead, and safeguards against fraud, offering a more transparent and frictionless crowdfunding experience. Additionally, blockchain's immutable ledger ensures that project progress and financial transactions are permanently recorded, providing a verifiable and auditable record for all stakeholders. Ultimately, this integration enhances the accountability, efficiency, and integrity of crowdfunding, fostering a more robust ecosystem for creators and backers alike.
**II.** **MOTIVATION**

Crowdfunding can be one of the fastest ways to raise finance for different causes, with no upfront fees. As crowdfunding becomes an increasingly common source of financing for a diverse range of entrepreneurs, this idea motivated us to develop a project on the topic. It is a great way of raising finance and covering costs for businesses and causes that do not have access to traditional forms of bank lending, or that operate in a difficult economy.

**III.** **PROBLEM STATEMENT**

Trust and transparency are probably the biggest issues when it comes to crowdfunding. Most traditional crowdfunding platforms do not keep a record. Another common problem faced by users is that these platforms charge high transaction fees. Interest building is also a very common fail point in the crowdfunding experience.

**IV.** **OBJECTIVE**

To keep track of a campaign's progress as well as its fundraising, and to create a secure system that is user friendly and trustworthy.

**V.** **LITERATURE SURVEY**

Ivanov and Knyazeva [1] note in their publication that the crowdfunding market has seen gradual adoption by issuers and intermediaries, but that insider threats can come from employees or contractors; platforms therefore need to monitor employee activity and limit access to sensitive information and systems. Zribi [2] notes that the COVID-19 pandemic increased the use of social media and other digital platforms, which may positively affect the crowdfunding environment: positive social influence, such as endorsements, can increase the likelihood of a project being funded, while negative social influence, such as criticism, can decrease funding. Zhu and Zhou [3] observe that equity crowdfunding via the Internet is a new channel for raising money for startups: it features low barriers to entry, low cost, and high speed, which encourages innovation. In recent years, equity crowdfunding in China has seen notable development, yet some problems remain unsolved in practice. Baber and Fanea-Ivanovici [4] find that financial backers may be motivated by a desire to support independent creators and to help bring unique projects to fruition.

**VI.** **PROPOSED SYSTEM**

_A._ _Introduction_

Fig. 1.

_1)_ _System Overview: The proposed crowdfunding system leverages blockchain technology and smart contracts to address the issues of security, collaboration, ignorance, and support in the existing crowdfunding landscape. This platform allows individuals and organizations to create and manage crowdfunding campaigns efficiently and securely._

_2)_ _Key Features_

_a)_ _Smart Contract Integration: Smart contracts will be the backbone of the system, automating payment processing and reward distribution. - Ensure transparency and trust in transactions, as all actions are recorded on the blockchain._

_b)_ _Campaign Creation: Users can easily create and customize crowdfunding campaigns with detailed project descriptions, goals, and deadlines. - Specify the type of campaign (e.g., donation-based, equity-based, reward-based)._

_c)_ _Fundraising: Users can contribute to campaigns using cryptocurrencies. - Real-time tracking of campaign progress and contributions._

_d)_ _Security Measures: Enhanced security protocols to protect user data and transactions. - Wallet authentication for account access._
_e)_ _Dispute Resolution: Smart contract-based dispute resolution mechanism to handle conflicts. - Escrow services for funds in dispute._

_B._ _Details of Hardware and Software_

Software requirements (minimum): Windows 8 or above; Google Chrome or any other browser. Hardware requirements (minimum): Intel i3 processor; 4 GB RAM; stable internet connection.

_C. Methodology used_

For the design, we will be using multiple frameworks and tools such as Solidity, Web3.js, React.js, and Node.js. The languages we will be using are HTML, CSS, and JavaScript. A sketch of the conditional fund-release rule at the core of the smart contract is given below.

Fig. 2.
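The following is a minimal sketch, written in Python for readability even though the platform itself implements this logic as a Solidity smart contract, of the conditional fund-release rule described in Section VI: contributions stay in escrow until the goal is met by the deadline, otherwise backers can reclaim them. All names and the single-rule design are illustrative assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Campaign:
    """Toy model of the escrow logic a crowdfunding smart contract encodes."""
    creator: str
    goal: int
    deadline: int                      # a block timestamp in a real contract
    pledges: dict = field(default_factory=dict)

    def contribute(self, backer: str, amount: int, now: int) -> None:
        assert now < self.deadline, "campaign closed"
        self.pledges[backer] = self.pledges.get(backer, 0) + amount

    def release(self, now: int) -> int:
        """Funds go to the creator only when the conditions are met."""
        total = sum(self.pledges.values())
        assert now >= self.deadline and total >= self.goal, "conditions not met"
        self.pledges.clear()
        return total

    def refund(self, backer: str, now: int) -> int:
        """If the goal was not reached by the deadline, backers reclaim funds."""
        total = sum(self.pledges.values())
        assert now >= self.deadline and total < self.goal, "refund unavailable"
        return self.pledges.pop(backer, 0)
```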
**VII.** **CONCLUSION**

The current research paper has undertaken an extensive exploration of the multifaceted domain of blockchain-based crowdfunding. A blockchain-based crowdfunding application is a platform that enables users to create, manage, and fund projects using cryptocurrency. The platform is built on blockchain technology, which provides security, transparency, and immutability to the crowdfunding process. Users can create campaigns for various purposes, such as funding startups, charities, or personal projects. The platform uses smart contracts to automate the crowdfunding process, which ensures that funds are released to the project creators only when certain conditions are met. This eliminates the need for intermediaries, such as banks or crowdfunding platforms, reducing the costs associated with traditional crowdfunding methods. With the use of cryptocurrency, a blockchain-based crowdfunding application allows for global participation in a crowdfunding campaign, regardless of the users' location or currency. It also provides a more secure and efficient way to transact, eliminating the risks of fraud and chargebacks. The platform provides transparency to all participants, allowing them to track the progress of a campaign, view the distribution of funds, and monitor the project's milestones. The use of blockchain technology also ensures that the crowdfunding process is decentralized, removing the need for a central authority to manage the campaign. Overall, a blockchain-based crowdfunding application provides a secure, efficient, and accessible way for creators to fund their projects and for investors to support innovative ideas. It offers more flexibility, transparency, and autonomy to all participants while eliminating the costs and risks associated with traditional crowdfunding methods.

**VIII.** **ACKNOWLEDGMENT**

We are grateful to our project guide Mr. Rohit Sharma and the Head of the Department for their invaluable support and guidance throughout the completion of our major project in Blockchain. Their contributions have been instrumental to its academic success, and we are indebted to them for their mentorship and tireless efforts.

**REFERENCES**

[1] Ivanov, Vladimir, and Anzhela Knyazeva. "US securities-based crowdfunding under Title III of the JOBS Act." DERA White Paper (2017).
[2] Zribi, Sirine. "Effects of social influence on crowdfunding performance: Implications of the COVID-19 pandemic." Humanities and Social Sciences Communications 9, no. 1 (2022): 1-8.
[3] Zhu, Huasheng, and Zach Zhizhong Zhou. "Analysis and outlook of applications of blockchain technology to equity crowdfunding in China." (2016).
[4] Baber, Hasnan, and Mina Fanea-Ivanovici. "Motivations behind backers' contributions in reward-based crowdfunding for movies and web series." International Journal of Emerging Markets 18, no. 3 (2023): 666-684.
[5] Mazzocchini, Francesco James, and Caterina Lucarelli. "Success or failure in equity crowdfunding? A systematic literature review and research perspectives." Management Research Review ahead-of-print (2022).
[6] Cai, Wanxiang, Friedemann Polzin, and Erik Stam. "Crowdfunding and social capital: A systematic review using a dynamic perspective." Technological Forecasting and Social Change 162 (2021): 120412.
[7] Khatter, Harsh, Hritik Chauhan, Ishan Trivedi, and Jatin Agarwal. "Secure and transparent crowdfunding using blockchain." (October 2021).
[8] Bouhsine, Taha. "Design and full stack development of a crowdfunding platform." (2020).
[9] Gururaj, H. L., V. Janhavi, Abhishek M. Holla, Ashwin A. Kumar, R. Bhumika, and Sam Goundar. "Decentralised application for crowdfunding using blockchain technology." (September 2021).
[10] Yadav, Nikhil, and Sarasvathi V. "Venturing crowdfunding using smart contracts in blockchain." (October 2020).
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.22214/ijraset.2024.59996?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.22214/ijraset.2024.59996, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://doi.org/10.22214/ijraset.2024.59996" }
2,024
[ "JournalArticle", "Review" ]
true
2024-04-30T00:00:00
[]
2,756
en
[ { "category": "Engineering", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Art", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01dff7072a8651d46f012e6b9f2a817170866f9f
[ "Engineering" ]
0.854399
X3DOM AS CARRIER OF THE VIRTUAL HERITAGE
01dff7072a8651d46f012e6b9f2a817170866f9f
[ { "authorId": "1680577", "name": "Yvonne Jung" }, { "authorId": "29709688", "name": "J. Behr" }, { "authorId": "4689419", "name": "H. Graf" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
Virtual Museums (VM) are a new model of communication that aims at creating a personalized, immersive, and interactive way to enhance our understanding of the world around us. The term "VM" is shorthand that encompasses various types of digital creations. One of the carriers for the communication of the virtual heritage at future internet level, as a de-facto standard, is browser front-ends presenting the content and assets of museums. A major driving technology for the documentation and presentation of heritage-driven media is real-time 3D content, thus imposing new strategies for web inclusion. 3D content must become a first-class web media that can be created, modified, and shared in the same way as text, images, audio, and video are handled on the web right now. A new integration model based on a DOM integration into the web browsers' architecture opens up new possibilities for declarative 3D content on the web and paves the way for new application scenarios for the virtual heritage at future internet level. With special regard to the X3DOM project as enabling technology for declarative 3D in HTML, this paper describes application scenarios and analyses its technological requirements for an efficient presentation and manipulation of virtual heritage assets on the web.
ISPRS Trento 2011 Workshop, 2-4 March 2011, Trento, Italy

# X3DOM AS CARRIER OF THE VIRTUAL HERITAGE

## Yvonne Jung, Johannes Behr, Holger Graf

Fraunhofer Institut für Graphische Datenverarbeitung, Darmstadt, Germany. {yjung, jbehr, hgraf}@igd.fhg.de

_Figure 1. Model of an old statue, exported with MeshLab and visualized via X3DOM in the same HTML page on three different platforms: iPhone app using WebKit extensions; Internet Explorer C++-based X3D plugin; WebGL-based implementation on a Nokia N900._

**KEY WORDS: 3D Internet, Declarative 3D in Web-Browser, X3DOM, Virtual Heritage, Cultural Heritage, WebGL**

**ABSTRACT:**

Virtual Museums (VM) are a new model of communication that aims at creating a personalized, immersive, and interactive way to enhance our understanding of the world around us. The term “VM” is shorthand that encompasses various types of digital creations. One of the carriers for the communication of the virtual heritage at future internet level, as a de-facto standard, is browser front-ends presenting the content and assets of museums. A major driving technology for the documentation and presentation of heritage-driven media is real-time 3D content, thus imposing new strategies for web inclusion. 3D content must become a first-class web media that can be created, modified, and shared in the same way as text, images, audio, and video are handled on the web right now. A new integration model based on a DOM integration into the web browsers’ architecture opens up new possibilities for declarative 3D content on the web and paves the way for new application scenarios for the virtual heritage at future internet level. With special regard to the X3DOM project as enabling technology for declarative 3D in HTML, this paper describes application scenarios and analyses its technological requirements for an efficient presentation and manipulation of virtual heritage assets on the web.

**1.** **INTRODUCTION**

The trend towards using more multimedia technologies in our everyday life also has an impact on digital heritage and its overall value chain, from digitisation through processing to presentation within VM platforms. Moreover, 3D interactive content, the information carrier of the future, still requires dedicated research efforts to enable a seamless process chain of integration, composition, and deployment. In recent years, 3D-enhanced environments and 3D content have increasingly been seen as a vehicle for the understanding of complex causalities, advanced visual cognitive stimulus, and easy interaction. Hence, complex causalities within the VH (Virtual Heritage) information space have to be adapted to the visitors’ or users’ cognitive capabilities, allowing them personalised access to the heritage. The combination of multiple media and diverse ICT platforms has to actively support users in the understanding of 3D topics, providing new motivation for engaging users in either individual or collaborative digital culture experiences. Thus, museums can make complex causalities immersively available and raise visitors’ and VH consumers’ quality of experience of CH. Coming along with 3D interactive content, we are facing a shift in interaction and presentation paradigms for access to the Virtual Heritage. This still requires tremendous research activity, and we face several great challenges in information pre-processing, concatenation, and presentation, adaptively supported by (de-facto) standard ICT solutions within the CH (Cultural Heritage) domain.
This encompasses hardware, e.g. displays and their scalability, but also adaptive software solutions to support a context change for interactive presentations, with the ultimate vision to “bridge the gap between heritage-driven multi-media technologies and our natural environment”.

On the other side, the internet can be seen as one carrier of future (learning) worlds in which socialising aspects, combined with motivating, easy-to-use, exhaustive, and understandable information, can be accessed, retrieved, and refined. By connecting modern rich-media 3D technology with traditional web-based environments, interesting new possibilities for self-regulated and collaborative knowledge dissemination emerge (Jung, 2008). Here we need, besides the acquisition and preparation of heritage-driven 3D content, new methodologies and tools that comply with the requirements of highly dynamic knowledge and information processing within its presentation. This is required for the several stakeholders involved, e.g. future digital curators or non-professional visitors of the museum at any age. New workflows for rich-media content creation have to be elaborated to enable e.g. digital curators to easily prepare and provide 3D heritage-driven media on the web. Research activities should therefore focus on how to produce and elaborate sustainable and standardised solutions covering the overall content preparation pipeline for 3D content on the web. Building on the lessons learned in web technology and its applications, we reflect on how to embed heritage-driven multiple-media content into browser front-ends. Major attention in the conceptual design has been devoted to:

- re-usable application environments allowing the integration of standardised media archiving formats,
- extensibility with respect to the web browser as major interoperable deployment platform,
- declarative heritage-driven 3D content for easy authoring and content concatenation.

Thus, in this paper, we first review suitable techniques for the web-based visualisation of heritage-driven objects before presenting our solution. Nowadays, most 3D rendering systems for web-based applications follow the traditional browser-plugin-based approach, which has two major drawbacks. On the one hand, plugins are not installed by default on most systems and the user has to deal with security and incompatibility issues. On the other hand, such systems define an application and event model inside the plugin that is decoupled from the HTML page's DOM content, thereby making the development of dynamic web-based 3D content difficult.

**2.** **RELATED WORK**

Besides the aforementioned browser plugins, Java3D (Sun, 2007) – a scene-graph system that incorporates the VRML/X3D (Web3D, 2008) design – was one of the first means for 3D in the browser. However, it never really was utilized for the web, and today Java3D is no longer supported by Sun at all. The open ISO standard X3D, in contrast, provides a portable format and runtime for developing interactive 3D applications. X3D evolved from the old VRML standard, describes an abstract functional behaviour of time-based, interactive 3D multimedia information, and provides lightweight components for storage, retrieval, and playback of real-time 3D graphics content that can be embedded into any application (Web3D, 2008). The geometric and graphical properties of a scene as well as its behaviour are described by a scene-graph (Akenine-Möller et al., 2008).
Since X3D is based on a declarative, document-based design, it allows defining the scene description and runtime behaviour by simply editing XML, without the need to deal with low-level C/C++ graphics APIs. This is not only of great importance for efficient application development, but also directly allows its integration into a standard web page. Further, using X3D means that all data are easily distributable and sharable. Unlike proprietary rendering systems, which all implement their own runtime behaviour, X3D allows developing portable 3D applications. The X3D specification (Web3D, 2008) includes various internal and external APIs and has a web-browser integration model that allows running plugins inside a browser. Hence, several X3D players are available, as standalone software or as browser plugins. The web browser holds the X3D scene internally, and the application developer can update and control the content using the Scene Access Interface (SAI), which is part of the standard and already defines an integration model for DOM nodes as part of SAI (Web3D, 2009), though there is currently no update or synchronization mechanism. To alleviate these issues, the X3DOM framework (Behr et al., 2009) introduced a DOM-based integration model for X3D and HTML5 that allows for a seamless integration of interactive 3D content into HTML pages. The current implementation is mainly based on WebGL (Khronos, 2010), but the architecture also proposes a fallback model to allow for more powerful rendering backends, too (Behr et al., 2010), which will be explained in the next section. More information can be found online at [http://www.x3dom.org/](http://www.x3dom.org/).

To overcome the old plugin model, Khronos promotes WebGL as one solution for hardware-accelerated 3D rendering in the web. The imperative WebGL API (WebGL, 2010) is a JavaScript (Crockford, 2008) binding for OpenGL ES 2.0 (Munshi et al., 2009) that runs inside a web browser, thereby allowing for native 3D in the web. The very first WebGL implementation was available in late September 2009 with a Mozilla Firefox 3.7 pre-alpha build. Since then, most other browsers like Apple WebKit, Google Chrome, and Opera (except Microsoft's IE) followed with WebGL-enabled developer (and now beta) builds. By utilizing OpenGL ES 2.0 as basis, it was possible to define the WebGL specification in a platform-independent manner: on the one hand, OpenGL 2.1 (the current standard for desktop machines) is a superset of ES 2.0, and on the other hand, most recent smartphones, like the iPhone or the Nokia N900, already have chips conformant to that standard. Even more, since the latest firmware update in early June 2010, the built-in web browser of the Nokia N900 also natively supports WebGL (and thereby X3DOM – compare Figure 1). WebGL (WebGL, 2010) describes an additional 3D rendering context for the HTML5 _<canvas>_ element (W3C, 2009a) by exposing the rendering API via new JavaScript objects and methods acting on the canvas object. The 3D rendering context is then acquired via _gl = canvas.getContext('webgl')_. If the returned _gl_ object is defined and not null, the web browser supports WebGL; in this case the _gl_ object provides all API calls.
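The feature test just described takes only a few lines of JavaScript. The following is a minimal sketch; the canvas id and the 'experimental-webgl' fallback name used by early builds are illustrative conventions, not taken from the paper:

```javascript
// Minimal WebGL feature test following the acquisition pattern described above.
// The canvas id ("view") is illustrative.
var canvas = document.getElementById('view');
var gl = null;
try {
  // Early builds exposed the context as "experimental-webgl",
  // so probing both names was a common fallback at the time.
  gl = canvas.getContext('webgl') || canvas.getContext('experimental-webgl');
} catch (e) { /* getContext may throw on some browsers */ }

if (gl) {
  // WebGL is available: all GL API calls are methods of the gl object.
  gl.clearColor(0.0, 0.0, 0.0, 1.0);
  gl.clear(gl.COLOR_BUFFER_BIT);
} else {
  // No WebGL: fall back to another rendering backend (cf. Section 3.4).
  console.log('WebGL is not supported in this browser.');
}
```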
As mentioned, WebGL is based on the OpenGL ES 2.0 standard (Munshi et al., 2009), an OpenGL dialect developed for embedded and portable devices, such as mobile phones, with less powerful graphics chips. In contrast to standard desktop OpenGL (Shreiner et al., 2006), it has no support for the old fixed-function pipeline (i.e., no matrix stack etc.) but is instead completely based on GLSL shaders (Rost, 2006). Thereby it is comparable to the OpenGL 3.x/4.x standards, with the exception that more advanced features like transform feedback or geometry shaders, which require rather recent GPUs, are not supported. Another drawback is that the web developer has to deal with low-level graphics concepts (maths, GLSL shaders, attribute binding, and so on). Moreover, JavaScript scene housekeeping can soon lead to performance issues, and there is still no uniform notion of metadata or semantics for the content.

During the last year, WebGL-based libraries such as WebGLU (DeLillo, 2009), which mimics the old OpenGL fixed-function pipeline by providing appropriate concepts, emerged, as well as rendering frameworks building on top of WebGL and providing a JavaScript-based API. For instance, GLGE (Brunt, 2010) is a scene-graph system that masks the low-level graphics API calls of WebGL by providing a procedural programming interface. Likewise, SpiderGL (Di Benedetto et al., 2010) provides algorithms for 3D graphics, but on a lower level of abstraction and without special structures like the scene-graph. These libraries are comparable to typical graphics engines as well as to other JavaScript libraries like jQuery (cp. [http://jquery.com/](http://jquery.com/)), but none of them seamlessly integrates the 3D content into the web page in a declarative way, nor do they connect the HTML DOM tree to the 3D content. In this regard, the aforementioned jQuery aims at simplifying HTML document traversal, event handling, and Ajax interactions, thereby easing the development of interactive web applications in general. However, using libraries like SpiderGL forces the web developer to learn new APIs as well as graphics concepts. But considering that the Document Object Model (DOM) of a web page already is a declarative 2D scene-graph of the web page, it seems natural to directly utilize and extend the well-known DOM as scene-graph and API for 3D content, too.

**3.** **GETTING DECLARATIVE (X)3D INTO HTML5**

Generally speaking, the open-source X3DOM framework and runtime was built to support the ongoing discussion in both the Web3D and W3C communities of how an integration of HTML5 and declarative 3D content could look, and it allows including X3D (Web3D, 2008) elements directly as part of an HTML5 DOM tree (Behr et al., 2009; Behr et al., 2010). The proposed model thereby follows the original W3C suggestion to use X3D for declarative 3D content in HTML5 (W3C, 2009b): _“Embedding 3D imagery into XHTML documents is the domain of X3D, or technologies based on X3D that are namespace-aware”._ Figure 2 relates the concepts of X3DOM to SVG, Canvas, and WebGL.

_Figure 2. SVG, Canvas, WebGL and X3DOM relation._

**3.1** **DOM Integration**

In contrast to other approaches, X3DOM integrates 3D content into the browser without the need to forge new concepts; it utilizes today's web standards and techniques, namely HTML, CSS, Ajax, JavaScript, and DOM scripting. Figure 3 shows a simple example, where a 3D box is embedded into the 2D DOM tree using X3DOM. Though HTML has allowed declarative content description for years, this is currently only possible for textual and 2D multimedia information.
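To give an impression of what Figure 3 describes, a page embedding a 3D box with X3DOM might look roughly like the sketch below. It is assembled from the element names used in the paper's own fragments (shape, appearance, material, box); the x3d and scene wrapper elements and the runtime script URL are assumptions to be checked against the X3DOM documentation:

```html
<!-- Illustrative sketch of declarative 3D in an HTML page with X3DOM. -->
<html>
  <head>
    <!-- Placeholder URL for the X3DOM runtime script. -->
    <script src="x3dom.js"></script>
  </head>
  <body>
    <h1>A 3D box in the DOM</h1>
    <x3d width="400px" height="300px">
      <scene>
        <shape>
          <appearance>
            <material diffuseColor="red"></material>
          </appearance>
          <box></box>
        </shape>
      </scene>
    </x3d>
  </body>
</html>
```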
Hence, the goal is to have a declarative, open, and human-readable 3D scene-graph embedded in the HTML DOM, which extends the well-known DOM interfaces only where necessary, and which thereby allows the application developer to access and manipulate the 3D content by only adding, removing, or changing DOM elements via standard DOM scripting – just as it is done nowadays with standard HTML elements like _<div>_, _<span>_, _<img>_ or _<canvas>_ and their corresponding CSS styles. Thus, no specific plugins or plugin interfaces like the SAI (Web3D, 2009) are needed, since the well-known and excellently documented JavaScript and DOM infrastructure is utilized for declarative content design. Obviously, this seamless integration of 3D content in the web browser works well with common web techniques such as DHTML and Ajax. Furthermore, semantics integration can be achieved with the help of the X3D metadata concept, for creating mash-ups (i.e. a recombination of existing contents) and the like, or for being able to index and search 3D content.

_Figure 3. Simple example showing how the 3D content is declaratively embedded into an HTML page using X3DOM._

**3.2** **Interaction and Events**

Most visible HTML tags can react to mouse events once an event handler is registered. The latter is implemented either by adding a handler function via _element.addEventListener()_ or by directly assigning it to the attribute that denotes the event type, e.g. _onclick_. Standard HTML mouse events like “onclick”, “onmouseover”, or “onmousemove” are likewise supported for 3D objects. Within the X3DOM system we also propose a new 3DPickEvent type, which extends the W3C MouseEvent IDL interface (W3C, 2000) to better support 3D interaction. The new interface is defined as follows:

```
interface 3DPickEvent : MouseEvent {
  readonly attribute float worldX;
  readonly attribute float worldY;
  readonly attribute float worldZ;
  readonly attribute float localX;
  readonly attribute float localY;
  readonly attribute float localZ;
  readonly attribute float normalX;
  readonly attribute float normalY;
  readonly attribute float normalZ;
  readonly attribute float colorRed;
  readonly attribute float colorGreen;
  readonly attribute float colorBlue;
  readonly attribute float colorAlpha;
  readonly attribute float texCoordS;
  readonly attribute float texCoordT;
  readonly attribute float texCoordR;
  object getMeshPickData (in DOMString vertexProp);
};
```

_Figure 4. Three examples of on-site mobile Augmented Reality (AR) Cultural Heritage applications._

This allows the developer to use the 2D attributes (e.g. screenX) and/or the 3D attributes (e.g. worldX or localX) if the vertex semantics are given appropriately (in this case the positions). The _getMeshPickData()_ method can additionally be used to access generic vertex data. This way, the 2D/3D event bubbles through the DOM tree, as expected from standard HTML events, and can be combined with e.g. a typical 2D event on the X3D element, as shown in the following code fragment:

```
<shape>
  <appearance>
    <material id="mat" diffuseColor="red"></material>
  </appearance>
  <box onclick="document.getElementById('mat').
                setAttribute('diffuseColor', 'green');">
  </box>
</shape>
```

**3.3** **Animations**

There are several possibilities to animate virtual objects (e.g.
for showing an ancient device in action etc.), ranging from updating attributes in a script every frame, over standard X3D interpolator nodes, up to using CSS 3D Transforms and CSS Animations, which are currently given as a W3C working draft and only implemented in WebKit-based web browsers such as Apple Safari and Google Chrome. While X3D interpolators are supported by current Digital Content Creation (DCC) tools – an important point when processing the raw data and exporting to other formats – and are also able to animate vertex data (e.g. coordinates or colors), CSS animations are easily accessible using standard web techniques. The following code fragment shows an example of how to use CSS 3D Transforms to update _Transform_ nodes for animating their child nodes.

```
<style type="text/css">
  #trans {
    -webkit-animation: spin 8s infinite linear;
  }
  @-webkit-keyframes spin {
    from { -webkit-transform: rotateY(0); }
    to   { -webkit-transform: rotateY(360deg); }
  }
</style>
...
<transform id="trans">
  <transform style="-webkit-transform: rotateY(45deg);">
    ...
  </transform>
</transform>
...
```

**3.4** **HTML Profile and Render Backend**

As mentioned, X3DOM is based upon the concepts of X3D, which defines several profiles, such as the interchange profile, which can be used as a 3D data format, and the immersive profile, which also defines means for runtime and behaviour control (Web3D, 2008). However, these profiles are not suitable for integration into the HTML DOM for several reasons, which are discussed in more detail in (Behr et al., 2009; Behr et al., 2010). Thus, we propose an additional “HTML” profile that basically reduces X3D to a 3D visualization component for HTML5, just like SVG for 2D (cf. Figure 2), while all interaction concepts are taken from standard DOM scripting. As also mentioned in (Behr et al., 2010), the general goal here is to utilize HTML, JavaScript, and CSS for scripting and interaction in order to reduce complexity and implementation effort. The proposed “HTML” profile extends the X3D “Interchange” profile and consists of a full runtime with animations, navigation, and asynchronous data fetching. On the one hand, the latter is used for media data like _<img>_ and _<video>_, which can directly be used to e.g. parameterize Texture nodes. On the other hand, it is used for partitioning the scene data via an XMLHttpRequest (XHR) within _Inline_ nodes, since 3D data can quickly get very large, especially in the Virtual Heritage domain, as shown in Figure 7 (bottom row). However, X3D _Script_ nodes, Protos, and high-level pointing sensor nodes are not supported, whereas both explicit (GLSL) shader materials and declarative materials – e.g. via the new CommonSurfaceShader node presented in (Schwenk et al., 2010) – are supported. While the concept targets native browser support, the system design now supports different rendering and synchronization backends through a powerful fallback model that matches existing backends and content profiles (compare Figure 5). The flexible open-source implementation of X3DOM already provides various runtime/rendering backends today. These intermediate solutions are implemented through a WebGL layer (Behr et al., 2010), which supports WebGL, X3D/SAI plugins, and native implementations, since using WebGL is slower due to JavaScript and is not yet supported by all browsers.
The current release of X3DOM supports a native implementation (which is closed source and only for the iOS platform right now), WebGL, and partially X3D/SAI plugins (like the InstantPlayer ActiveX plugin that can be downloaded from [http://www.instantreality.org/](http://www.instantreality.org/)). A comparison of these backends is shown in Figure 1. Flash as an additional backend (see Figure 5) will be supported as soon as its 3D API layer (codename “Molehill”) is available.

_Figure 5. Fallback model: depending on the X3DOM profile and current browser environment, the system automatically chooses the appropriate backend rendering system._

**4.** **WORKFLOW FOR THE HERITAGE ON THE WEB**

Besides presentation, i.e. the rendering and user interface part, workflow issues must be considered, too, including tools and tool-chains as well as content and media authoring. While declarative representations help reduce application development and maintenance efforts, the content first needs to be generated somehow. In general, X3DOM is extremely helpful for application- or domain-specific production pipelines. First of all, the utilized format, namely X3D (Web3D, 2008), is an open ISO standard that is a superset of the older VRML ISO standard and is supported by a large and growing number of Digital Content Creation (DCC) tools. Second, the X3DOM project itself provides a bundle of online and offline tools (e.g. plugins and re-coders, see Figure 6) to ease the production and processing of content items. Besides all these techniques, the project also provides software components, tutorials, and examples on its web page, which explore and explain how to get the data from a specific DCC tool, e.g. Maya or 3ds Max (Autodesk, 2011), into a 3D web application. In virtual heritage, MeshLab ([http://meshlab.sourceforge.net/](http://meshlab.sourceforge.net/)) is an important tool to process and manipulate mesh datasets, and it can already export 3D data into the X3D format, including textures, vertex colors, etc. However, when dealing with 3D scans, the vast amount of data is an issue for several reasons. WebGL only supports 64k indices per mesh, and therefore large models have to be split. X3DOM splits this automatically if necessary, but besides the memory footprint, loading the data, especially over the web, still takes time. Hence, data reduction should be considered as well. While progressive meshes and similar level-of-detail techniques are applicable here, the original set of normals and colors of the high-resolution mesh must be preserved for appropriate visual quality, wherefore normal and color maps can be used. Another issue in the content pipeline one needs to think of is annotations and metadata processing. A possible scenario here is 3D content that shall be annotated with metadata to allow for interlinking and concatenation with further information and additional content like HTML sites, multimedia, etc.

_Figure 6. Interactive tools to export and recode data for X3DOM. MeshLab, as one major VH tool, can export X3D data directly, which can be used without further manual adaptation._

**5.** **APPLICATION SCENARIOS AND RESULTS**

There already exist several applications that demonstrate the capabilities of X3DOM. Some examples are discussed next in the context of typical scenarios and use cases.
**5.1** **Primitive Exploration**

One of the most basic use cases one can think of here is the examination of individual objects of the virtual heritage. In a typical scenario, the 3D object is presented to the user such that he or she can examine it from all directions by simply moving and rotating it (or the virtual camera, respectively) with the mouse or a similar device. Concerning visualization, this is a rather simple scenario in that the 3D scene itself stays static. Here, Figure 7 shows some screenshots of the web-based visualization of Cultural Heritage objects provided by the V-MusT consortium. As can be seen, all geometric 3D objects are visualized in the web browser by simply utilizing our open-source X3DOM framework for rendering the 3D content in real time. This is especially notable in that this is still almost raw data stemming from 3D laser scans, which is neither reduced nor otherwise prepared for real-time rendering. Additionally, by extending the web page with some standard JavaScript code for DOM scripting – where appropriate – the user can also interactively manipulate the data using standard 2D GUI elements (e.g. buttons and sliders), as for instance provided by the aforementioned JavaScript library jQuery. This can be useful to vertically or horizontally translate a clipping plane in order to cut away stratigraphic sequences and the like. Furthermore, it is also possible to allow the user to directly interact with an object by clicking on a certain point of interest etc., which then for instance triggers a popup HTML element containing some additional information. More concepts, though in the context of e-learning, are presented in (Jung, 2008).

_Figure 7. Virtual Heritage objects visualized with X3DOM. Top row: a reconstructed 3D capital of an abbey that can be freely examined from all directions. Bottom row: the statue to the left is a 63 MB 3D scan, and the front of the church shown to the right has 32 MB of vertex data._

Another, a bit more intricate application shows a line-up of 3D objects, as is done with images or videos today. Here, 3D is used as just another medium. The 3D Gallery developed within the 3D-COFORM project ([http://www.3dcoform.eu/](http://www.3dcoform.eu/)) shows a line-up of over 30 virtual objects. Historic vases and statues were scanned with a 3D scanner. This not only allows a digital conservation of ancient artefacts but also offers the possibility of convenient comparison. The results have been exported into the X3D file format. The application framework consists of an HTML page with a table grid of 36 cells, each filled with a thumbnail image of a virtual historical object. As soon as the user clicks on a thumbnail, a second layer pops up inside the HTML file showing the reconstructed object in 3D. The user can now examine it more closely, or close the layer to return to the grid. Technically, we are opening a subpage with the declared X3D content, which is rendered by X3DOM. The subpage is loaded inside an HTML iFrame within each layer inside the main page. Figure 8 shows a screenshot.

_Figure 8. Coform3D – a line-up of multiple scanned 3D objects integrated with X3DOM and JavaScript into HTML._
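As a rough illustration of the DOM scripting mentioned earlier in this section, the fragment below wires a standard HTML slider and a click handler to a declarative X3DOM scene. All element ids and the translated transform are hypothetical, not taken from the applications shown here:

```javascript
// Illustrative only: plain DOM scripting against a declarative X3DOM scene.
// Element ids ('cutSlider', 'clipTrans', 'statue', 'infoBox') are hypothetical.

// Vertically translate a clipping-plane transform from a standard 2D slider,
// e.g. to cut away stratigraphic sequences as described above.
document.getElementById('cutSlider').addEventListener('input', function (e) {
  document.getElementById('clipTrans')
          .setAttribute('translation', '0 ' + e.target.value + ' 0');
});

// Clicking a point of interest on the 3D object triggers an ordinary
// HTML popup element with additional information.
document.getElementById('statue').addEventListener('click', function (evt) {
  var info = document.getElementById('infoBox');
  info.style.display = 'block';
  // 3D pick events also expose the picked world position (cf. Section 3.2).
  info.textContent = 'Picked at: ' + evt.worldX + ', ' +
                     evt.worldY + ', ' + evt.worldZ;
});
```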
**5.2** **Dynamic (Walkthrough) Scenarios**

Other possible scenarios in CH embrace walkthrough worlds and the inspection of larger models, like ancient city models and similar territories in virtual archaeology. With the Cathedral of Siena (cp. Figure 9), a classical guided walkthrough scenario is described in (Behr et al., 2001). Generally, floor plans (Figure 9, right) are a commonly used metaphor for navigation, for two reasons: for one thing, the plan allows the user to build a mental model of the environment, and for another, it prevents him from getting lost in 3D space. In this regard, camera paths with predefined animations are another frequently used means for guided navigation. In X3DOM, camera animations can easily be accomplished by using one of the aforementioned animation methods, like for instance X3D interpolators.

_Figure 9. The famous Digital Cathedral of Siena (cf. Behr et al., 2001): the left image shows the rendered 3D view of the cathedral's interior and a virtual guide, and the right image shows the 2D user interface._

Alternatively, the scene author can define only some interesting views and let the system interpolate between them. The resulting smooth camera animations are implemented following (Alexa, 2002). These animations are automatically generated if one binds the camera, e.g. when switching between different Viewpoint nodes (or cameras) that are part of the content. The same method is also used to calculate the animation path if the current view is reset or if the current camera view shall be moved to the “show all” position.

As explained previously, it is furthermore possible to freely navigate within the 3D scene in order to closely examine all geometric objects. This is done using the “examine” navigation mode. Besides this, the user can also walk or fly through e.g. a reconstructed city model or an old building, as shown in Figure 9. Like every X3D runtime, the current WebGL-/JS-based implementation of X3DOM also provides some generic interaction and navigation methods. As already outlined, interactive objects are handled by HTML-like events, while navigation can either be user-defined or controlled using specific predefined modes. Therefore, we added all standard X3D navigation modes, i.e. “examine”, “walk”, “fly” and “lookAt”. The content creator is free to activate them, for instance directly in the X3D(OM) code with _<navigationInfo type='walk'>_, or to alternatively write his own application-specific navigation code. In the WebGL-based implementation, the modes use the fast picking code (required for checking front and floor collisions) based on rendering the required information into a helper buffer, as described in (Behr et al., 2010), which performs well even for larger worlds.

**5.3** **(Mobile) On-Site AR Scenarios**

Figure 4 shows some examples of on-site mobile Augmented Reality CH applications. AR, as a rapidly emerging technology, combined with the ubiquitous computing power of modern mobile devices, means having the desired information in one's pocket. With the help of the video-see-through effect, the information – such as 2D images from former times, as shown in Figure 4 (right), or the 3D reconstruction of an old temple, as shown in Figure 10, which presents some results from Archeoguide (cf. e.g. Vlahakis et al., 2002) – can be superimposed onto the video image, or the real world respectively, using computer-vision-based tracking techniques. Archeoguide is an example of an outdoor AR system that utilizes X3D for content description and runtime behaviour, whereas the whole application logic is written in JavaScript.
The X3D scene consists of three different layers: the video in the background, the 3D reconstruction of a temple that no longer exists, and the user interface. To realize both the tracking and the rendering part, the aforementioned Mixed Reality framework Instant Reality is used as basis.

_Figure 10. Archeoguide – example of an outdoor AR system (note the virtual temples and additional information rendered on top of the real scene, where only ruins are left)._

In this context, the term Mixed Reality means being able to bring together (web) content and location-based information directly on site. Especially when producing content for (mobile) MR applications, the unification of 2D and 3D media development is an essential aspect. Other important factors for authoring and rapid application development are declarative content description, flexible content in general (not only for the cultural heritage domain, but also for industry etc.), and interoperability – i.e., write once, run anywhere (web/desktop/mobile). In X3DOM this is achieved by utilizing the well-known JavaScript and DOM infrastructure also for 3D, in order to bring together open architectures and declarative content design known from web design as well as “old-school” imperative approaches known from game-engine development. The app-independent visualization furthermore enables context-sensitive and on-demand information retrieval, which is even more of interest for distributed content development using available web standards. But when limiting oneself to the pure WebGL-based JS layer of X3DOM, special apps for handling the tracking part are still needed at the moment (e.g. by using Flash or the InstantPlayer plugin), because access to the camera image data is required but not yet supported in HTML5. However, with the recently proposed _<device>_ tag, even this might change in the near future.

**6.** **CONCLUSIONS**

We have presented a scalable framework for the HTML/X3D integration, which on the one hand provides a single declarative developer interface based on current web standards, and which on the other supports various backends through a powerful fallback model for runtime and rendering modules. This includes native browser implementations and plugins for X3D as well as a purely WebGL-based scene-graph, hence easing the deployment of 3D content and bringing it back to the user's desktop or mobile device. The benefit of our proposed model is the tight integration of declarative (X)3D content directly into the HTML DOM tree without the need to forge new concepts, but by using today's (web) standards. Similar to images or videos today, 3D objects become just another medium. As a thin layer between HTML and X3D, we deliver a connecting architecture that employs well-known standards on both sides, such as the CSS integration, thereby easing users' access. Even more, by building upon appropriate standards, we also give a perspective towards more sustainable 3D content.

**7.** **ACKNOWLEDGEMENTS**

Thanks to Daniel Pletinckx and VisualDimension for providing some of the 3D assets and models.

**8.** **REFERENCES**

Akenine-Möller, T., Haines, E., Hoffmann, N., 2008. Real-Time Rendering. AK Peters, Wellesley, MA, 3rd edition.

Alexa, M., 2002. Linear combination of transformations. In Proc. SIGGRAPH '02, ACM, New York, USA, pp. 380-387.

Autodesk, 2011. Autodesk 3ds Max 2011. http://area.autodesk.com/3dsmax2011/features.
Behr, J., Fröhlich, T., Knöpfle, C., Kresse, W., Lutz, B., Reiners, D., Schöffel, F., 2001. The Digital Cathedral of Siena - Innovative Concepts for Interactive and Immersive Presentation of Cultural Heritage Sites. In Bearman, D. (Ed.): Intl. CH Informatics Meeting. Proceedings: CH and Technologies in the 3rd Millennium. Milan, pp. 57-71.

Behr, J., Eschler, P., Jung, Y., Zöllner, M., 2009. X3DOM - a DOM-based HTML5/X3D integration model. In Stephen Spencer, editor, Proceedings Web3D 2009: 14th Intl. Conf. on 3D Web Technology, New York, USA, ACM, pp. 127-135.

Behr, J., Jung, Y., Keil, J., Drevensek, T., Eschler, P., Zöllner, M., Fellner, D., 2010. A scalable architecture for the HTML5/X3D integration model X3DOM. In Stephen Spencer, editor, Proceedings Web3D 2010: 15th Intl. Conference on 3D Web Technology, New York, USA, ACM Press, pp. 185-194.

Di Benedetto, M., Ponchio, F., Ganovelli, F., Scopigno, R., 2010. SpiderGL: a JavaScript 3D graphics library for next-generation WWW. Web3D 2010, New York, USA, ACM Press, pp. 165-174.

Jung, Y., 2008. Building Blocks for Virtual Learning Environments. In Cunningham, S. (Ed.) et al.; Eurographics: WSCG 2008, Communications Papers. Plzen, University of West Bohemia, pp. 137-143.

Brunt, P., 2010. GLGE. http://www.glge.org/.

Crockford, D., 2008. JavaScript: The Good Parts. O'Reilly, Sebastopol, CA.

DeLillo, B., 2009. WebGLU JavaScript library. http://github.com/OneGeek/WebGLU.

Khronos, 2011. WebGL specification, working draft. https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/doc/spec/WebGL-spec.html.

Munshi, A., Ginsburg, D., Shreiner, D., 2009. OpenGL ES 2.0 Programming Guide. Addison-Wesley, Boston.

Rost, R., 2006. OpenGL Shading Language. Addison-Wesley, Boston, 2nd edition.

Schwenk, K., Jung, Y., Behr, J., Fellner, D., 2010. A Modern Declarative Surface Shader for X3D. In ACM SIGGRAPH: Proceedings Web3D 2010: 15th Intl. Conference on 3D Web Technology. New York, ACM Press, pp. 7-15.

Shreiner, D., Woo, M., Neider, J., Davis, T., 2006. OpenGL Programming Guide. Addison-Wesley, Boston, 5th edition.

Sun, 2007. Java3D. https://java3d.dev.java.net/.

Vlahakis, V., Ioannidis, N., Karigiannis, J., Tsotros, M., Gounaris, M., Stricker, D., Gleue, T., Dähne, P., Almeida, L., 2002. Archeoguide: An Augmented Reality Guide for Archaeological Sites. IEEE Computer Graphics and Applications 22 (5), pp. 52-60.

W3C, 2009a. HTML5 specification, canvas section. http://dev.w3.org/html5/spec/Overview.html#the-canvas-element.

W3C, 2009b. HTML5 specification draft, declarative 3D scenes section. http://www.w3.org/TR/2009/WD-html5-20090212/no.html#declarative-3d-scenes.

W3C, 2000. Document Object Model Events. http://www.w3.org/TR/DOM-Level-2-Events/events.html#Events-MouseEvent.

Web3D, 2008. X3D. http://www.web3d.org/x3d/specifications/.

Web3D, 2009. Scene Access Interface (SAI). http://www.web3d.org/x3d/specifications/ISO-IEC-FDIS-19775-2.2-X3D-SceneAccessInterface/.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.5194/ISPRSARCHIVES-XXXVIII-5-W16-475-2011?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.5194/ISPRSARCHIVES-XXXVIII-5-W16-475-2011, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://isprs-archives.copernicus.org/articles/XXXVIII-5-W16/475/2011/isprsarchives-XXXVIII-5-W16-475-2011.pdf" }
2012
[]
true
2012-09-10T00:00:00
[ { "paperId": "19d279b4147209058346dd8cdb5ef7b23bdc43d7", "title": "Real-Time Rendering" }, { "paperId": "6ac7c40898812cde4fac75aea25e38ed59533ede", "title": "A scalable architecture for the HTML5/X3D integration model X3DOM" }, { "paperId": "e7775d93a1503e7496ece380ed93ae5bf2485dcf", "title": "A modern declarative surface shader for X3D" }, { "paperId": "e9050975b0da50a49adf910e6ce34e6b4d644a9e", "title": "SpiderGL: a JavaScript 3D graphics library for next-generation WWW" }, { "paperId": "c60b8b6f6459eabff6ab26fa80a7b16a128bdb79", "title": "X3DOM: a DOM-based HTML5/X3D integration model" }, { "paperId": "17e5b13ed6dc50833cbbe57cf7a0acebf5de0fe4", "title": "The OpenGL ES 2.0 programming guide" }, { "paperId": "1d7aa0bfa64bb3a20b6f93a0a03571e06f8180fc", "title": "JavaScript: The Good Parts" }, { "paperId": "9cb72726ec0f5781f0c7443fa53acda3c41ee233", "title": "Archeoguide: An Augmented Reality Guide for Archaeological Sites" }, { "paperId": "4a38a76dc91a9b371a8630dc67887af7578e8c4c", "title": "Linear combination of transformations" }, { "paperId": "08b59313243bdb730ec78163ab086eaf2b1b2231", "title": "OpenGL programming guide" }, { "paperId": null, "title": "Autodesk 3 ds Max 2011" }, { "paperId": null, "title": "Scene access interface (SAI)" }, { "paperId": null, "title": "Proceedings: CH and Technologies in the 3 Millennium" }, { "paperId": "1b04e69630dc3767a34b4e1f167f48ef3aacd994", "title": "OpenGL Shading Language" }, { "paperId": null, "title": "WebGL specification, working draft. https://cvs.khronos.org/svn/repos/registry/trunk/public/webgl/do c/spec/WebGL-spec.html" }, { "paperId": null, "title": "WebGLU JavaScript library WebGL specification , working draft" }, { "paperId": "f04d5db60aa1228bceafa191d0b55a2f89456c04", "title": "Building Blocks for Virtual Learning Environments" }, { "paperId": "f7e22753856da199ec0f7e3bf434a6fc8ae6afa3", "title": "The Digital Cathedral of Siena - Innovative Concepts for Interactive and Immersive Presentation of Cultural Heritage Sites" }, { "paperId": null, "title": "Document Object Model Events" }, { "paperId": null, "title": "Html 5 specification, canvas section. http://dev.w3.org/html5/spec/Overview.html#the-canvas- element" }, { "paperId": null, "title": "Html 5 specification draft, declarative 3D scenes section" }, { "paperId": null, "title": "International Archives of the Photogrammetry" } ]
9,530
en
[ { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01dffdbcdc970eb510b8c83390990b80bc745df9
[]
0.861442
ENHANCING MOBILE CRYPTOCURRENCY WALLETS: A COMPREHENSIVE ANALYSIS OF USER EXPERIENCE, SECURITY, AND FEATURE DEVELOPMENT
01dffdbcdc970eb510b8c83390990b80bc745df9
JITK (Jurnal Ilmu Pengetahuan dan Teknologi Komputer)
[ { "authorId": "2314471392", "name": "Richard Richard" }, { "authorId": "2266846906", "name": "Muhammad Ammar Marsuki" }, { "authorId": "2266846989", "name": "Gading Aryo Pamungkas" }, { "authorId": "2314470902", "name": "Felix Irwanto" } ]
{ "alternate_issns": null, "alternate_names": [ "JITK (jurnal Ilmu Pengetah dan Teknol Kompʹût" ], "alternate_urls": null, "id": "bcae1bd7-4712-4486-99c5-175b0cca6c1d", "issn": "2527-4864", "name": "JITK (Jurnal Ilmu Pengetahuan dan Teknologi Komputer)", "type": "journal", "url": null }
The surge in cryptocurrency usage has increased reliance on cryptocurrency wallet applications. However, the usability, security, and feature richness of crypto wallets require significant enhancements. This research aims to identify critical factors that should guide the future design of mobile cryptocurrency wallets. The first step was to collect user reviews on several popular crypto wallets as the dataset. A total of 5,466 mobile wallet-related reviews from mobile application stores were filtered and analyzed. A machine-learning approach was used to cluster the user reviews. The analysis shows that customer issues are divided into four main themes: domain-specific challenges, security and privacy concerns, misconceptions, and trust issues. A software process assessment was also conducted to examine the current state of crypto wallets in terms of security, usability, and feature richness. Around 21 crypto wallet platforms were explored and assessed. Based on the thematic analysis and software process assessment, feature recommendations are proposed to address these shortcomings and enhance the credibility of mobile cryptocurrency wallets.
### VOL. 10. NO. 1 AUGUST 2024. DOI: 10.33480/jitk.v10i1.5157.

# ENHANCING MOBILE CRYPTOCURRENCY WALLETS: A COMPREHENSIVE ANALYSIS OF USER EXPERIENCE, SECURITY, AND FEATURE DEVELOPMENT

**Richard[1*]; Muhammad Ammar Marsuki[2]; Gading Aryo Pamungkas[3]; Felix Irwanto[4]**

Information Systems Department[1,2,3,4], Bina Nusantara University, Indonesia[1,2,3,4], https://binus.ac.id/[1,2,3,4]

richard-slc@binus.edu[1*], muhammad.marsuki@binus.ac.id[2], gading.pamungkas@binus.ac.id[3], felix.irwanto@binus.ac.id[4]

(*) Corresponding Author (Responsible for the Quality of Paper Content)

The creation is distributed under the Creative Commons Attribution-NonCommercial 4.0 International License.

**_Abstract—_**_The surge in cryptocurrency usage has increased reliance on cryptocurrency wallet applications. However, the usability, security, and feature richness of crypto wallets require significant enhancements. This research aims to identify critical factors that should guide the future design of mobile cryptocurrency wallets. The first step was to collect user reviews on several popular crypto wallets as the dataset. A total of 5,466 mobile wallet-related reviews from mobile application stores were filtered and analyzed. A machine-learning approach was used to cluster the user reviews. The analysis shows that customer issues are divided into four main themes: domain-specific challenges, security and privacy concerns, misconceptions, and trust issues. A software process assessment was also conducted to examine the current state of crypto wallets in terms of security, usability, and feature richness. Around 21 crypto wallet platforms were explored and assessed. Based on the thematic analysis and software process assessment, feature recommendations are proposed to address these shortcomings and enhance the credibility of mobile cryptocurrency wallets._

**_Keywords: crypto wallet, software process assessment, thematic analysis, user experience._**

**Intisari—**_The increase in cryptocurrency usage has heightened reliance on crypto wallet applications. Despite existing improvements, the usability, security, and feature richness of crypto wallets require significant enhancement. This research aims to identify the critical factors that can form the basis of future mobile wallet designs. The first step was to collect user reviews of several popular crypto wallets as the dataset. A total of 5,466 mobile-wallet-related reviews from mobile application stores were filtered and analyzed. A machine learning approach was used to cluster the user reviews. The analysis shows that customer issues fall into four main themes: specific challenges, security and privacy, misconceptions, and trust issues. A software process assessment was also conducted to examine the current state of crypto wallets in terms of security, usability, and feature richness. Around 21 crypto wallet platforms were explored and assessed._
_Based on the thematic analysis and software process assessment, feature recommendations are proposed to address these shortcomings and improve the credibility of mobile cryptocurrency wallets._

**_Kata Kunci: crypto wallet, software process assessment, thematic analysis, user experience._**

**INTRODUCTION**

A cryptocurrency (crypto) wallet is often defined as a software application allowing users to store, manage, and transact crypto assets such as Bitcoin, Ethereum, etc. Crypto wallets establish a unique field as they combine features of password managers, banking applications, and the need for anonymity [1]. A crypto wallet is considered the primary interface for interacting with assets on the blockchain. Unlike traditional wallets holding physical currency, crypto wallets do not store crypto assets [2]. Instead, they provide the means to access and interact with the digital assets on blockchain networks. The modern-day crypto wallet allows users to connect to various blockchain networks and move assets across networks [3]. Technically, a crypto wallet operates by keeping track of the private keys used to access cryptocurrency addresses and execute transactions [4]. Based on internet accessibility, crypto wallets can be categorized as online (hot) and offline (cold) wallets. A cold wallet is considered the more secure counterpart of a hot wallet, as it is not exposed to an online connection. However, hot wallets are more convenient as they connect directly to the blockchain network.

Developing a crypto wallet is tricky, as the developer must understand and consider security, usability, and feature richness. Security is crucial to protect sensitive data (private keys) and transaction authorizations (signing) in a crypto wallet [5]. Usability can ensure widespread crypto wallet adoption by designing a wallet that understands the needs of new and experienced users. Feature richness drives the crypto wallet beyond its essential functionalities, transforming it into a dynamic tool that enhances the user experience. Balancing these three elements is crucial for creating a wallet that is both secure and user-friendly.

Recent scholarly investigations reveal that user experience (UX) research in blockchain-related technologies, including cryptocurrency, lags behind the current advancements in blockchain [6]. The prevalence of financial losses attributed to user misconceptions about the functionalities of crypto wallets serves as substantial evidence for this observation [7], [8]. In their study, Krombholz et al. conducted a survey focusing on UX within the Bitcoin network, revealing a widespread lack of user understanding of available features, particularly regarding security and privacy, which frequently compromises their anonymity. This gap in understanding may stem from inadequate usability in desktop- and mobile-based crypto wallets, especially in executing basic operations. Users often encounter instructions written in overly technical language, which is challenging to comprehend, and they lack clear guidance on troubleshooting steps and problem-solving methods. Since trust is a fundamental motivator among crypto wallet users, these usability issues have a direct and adverse effect on the perceived reliability of these wallets, leading to a disproportionately low usage rate despite high adoption figures [7], [9].
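As background for the key-custody role described above, the following generic sketch (using Node.js's built-in crypto module, not any particular wallet's code) illustrates how a wallet's core job reduces to guarding a private key and signing transaction payloads; the curve, payload shape, and address string are illustrative.

```javascript
// Generic illustration of the key custody + signing role of a wallet.
// Real cryptocurrencies mandate specific curves, serialization formats,
// and address derivations; this sketch only conveys the concept.
const crypto = require('crypto');

// The private key is the secret the wallet must guard; the public key
// is what an address is ultimately derived from.
const { publicKey, privateKey } = crypto.generateKeyPairSync('ec', {
  namedCurve: 'secp256k1',
});

// "Sending funds" means signing a transaction payload with the private key.
const payload = Buffer.from(JSON.stringify({ to: 'addr...', amount: 0.1 }));
const signature = crypto.sign('sha256', payload, privateKey);

// Anyone can verify the signature with the public key alone.
console.log(crypto.verify('sha256', payload, publicKey, signature)); // true
```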
To address general and domain-specific challenges, future wallet designs should incorporate user interfaces that offer comprehensive, user-centered information and implement systems to mitigate financial losses [10], [11]. The urgency of having a crypto wallet is underscored by the imperative need for secure transaction confirmation and safeguarding of private key addresses. While maintaining security, the crypto wallet should provide an excellent experience for its users. In principle, improving the user experience of crypto wallets increases crypto adoption. Crypto wallet developers should consider users' needs when developing the platform.

This research aims to deepen the understanding of user perceptions regarding crypto wallets, with the user's perception poised to become the main driver for the future of crypto wallet design. The research employs two primary methods: a thematic analysis of user reviews on mobile application stores and a software process assessment of crypto wallets. A detailed examination of various crypto wallet features, usability, and security aspects is conducted. Through these methodologies, the research intends to provide a comprehensive overview of the current state of crypto wallets. Afterward, future feature requirements for the crypto wallet are proposed as a guideline for further development.

**MATERIALS AND METHODS**

Our research utilizes two approaches to garner comprehensive insights into the crypto wallet. The first approach involves data mining and analysis, leveraging advanced techniques to extract meaningful reviews from the application store. This process allows us to uncover correlations, identify potential challenges, and reveal valuable information that may not be apparent through traditional methods. Concurrently, we employ a software process assessment approach that examines the crypto wallet to evaluate its effectiveness and adherence to security, usability, and feature richness. The synthesis of findings from these two approaches is presented in the results section, providing a cohesive overview of the data-driven insights derived from the analysis and the actionable recommendations from the software process assessment. This comprehensive synthesis offers a nuanced perspective on the interplay between quantitative data and qualitative process evaluations, enriching our understanding of the studied context and facilitating a well-rounded interpretation of the research outcomes.

**Data Mining and Analysis**

This research devised a methodical framework for examining the data essential for exploring users' preferences regarding cryptocurrency wallet features. Data acquisition was conducted through a web scraping technique targeting user reviews on app stores. After data collection, a data cleansing phase was initiated, employing various pre-processing methods to distill the data to only that pertinent to the study. To ascertain the reliability and validity of the sampled dataset, the K-Fold validation technique was implemented, a crucial factor influencing the precision of the analysis. The final stage of the methodology involved the application of statistical thematic analysis. The primary objective of this phase was to discern prevalent trends, patterns, and potential challenges specific to mobile cryptocurrency wallets.

Source: (Research Results, 2024)
Figure 2. Research Methodology
**Data Collection**

_Data Source_ - For this study, the top five mobile crypto wallets were chosen based on their popularity, user ratings, and the volume of reviews on both the Google Play Store and Apple App Store. The selected wallets were Blockchain Wallet, MetaMask, Trust, Coinbase, and Coinomi. User reviews were collected from both app stores to compare user opinions across different platforms and operating systems.

_Review Crawler_ - A custom-built crawler collected 35,806 reviews from the data source. The collected data includes the text of the reviews, their rating scores, the dates they were posted, and the versions of the applications at the time of the reviews.

_Data Exclusion_ - Reviews comprising fewer than four words were omitted to maintain the integrity of the dataset. This exclusion criterion was applied on the assumption that such brief reviews may lack substantive content, potentially compromising the overall dataset's accuracy. After eliminating this noisy data, the refined review collection retained approximately 27,934 entries.

**Data Selection**

This study employs a machine learning methodology to categorize review content. Initial processing techniques are applied to remove extraneous information and to homogenize the textual data. Subsequently, machine learning algorithms convert this processed text into numerical vectors, rendering it interpretable. The pre-processed dataset is then divided, with 80% allocated for training the model and 20% reserved for testing.

_Pre-Processing_ - The raw text data was converted into a format suitable for analysis through several steps. First, all text was standardized to the same case (case folding) for consistency. Next, the text was divided into individual words (tokenizing), which helps eliminate stopwords, i.e. words that do not contribute to the analysis. Lastly, each word was reduced to its root form (stemming) to enhance accuracy by minimizing variations in the text.

_Feature Extraction_ - Four methods were employed to analyze the extensive user reviews of mobile crypto wallets. First, a count vectorizer was used to tally the frequency of specific words and phrases. The significance of each word and phrase was then assessed using the term frequency-inverse document frequency (TF-IDF) technique. Sentiment analysis was performed, assigning scores to reviews from -1 (extremely negative) to 1 (extremely positive) based on the occurrence of positive, negative, and neutral words. Finally, the data was divided into training and testing subsets to evaluate the accuracy of the feature extraction models.

Table 1. Classified reviews

| Classification | Review Text | Explanation |
| --- | --- | --- |
| Related to cryptocurrency | I was caught off guard by the fees. I deposited 100 USD but ended up with just about 75 in my account. | The high transaction fees caused a bit of dissatisfaction for the user. |
| UX in general | With the latest version 1.10.2, crashes are nearly eliminated, but the app still occasionally freezes on startup. | Focus on how the application behaves; nothing to do with cryptocurrency. |
| Irrelevant to UX | I'm hoping this will be my ticket to the moon! | Unrelated to the application or the cryptocurrency. |

Source: (Research Results, 2024)

_Training Set_ - Out of the 27,934 reviews, a subset of 1,000 reviews was randomly selected for categorization based on their relevance to user experience (UX). In this context, relevance is defined as the review's pertinence to specific features of mobile cryptocurrency wallets and insights derived from previous research. This categorization process sorts the reviews into three groups: those relevant to cryptocurrency, those about UX in general, and those deemed irrelevant to UX. Table 1 presents examples of each review type and explanations for their respective classifications.

_Machine Learning Model_ - After finalizing the training dataset, we employed K-Fold validation to evaluate our machine learning model. Combining our pre-processing techniques, sentiment scoring, and random sampling resulted in an F1 score of 0.74 for reviews related to user experience (UX). The F1 score is a metric in machine learning used to assess a model's precision. A random binary classifier would have an Area Under the Receiver Operating Characteristic Curve (AUC-ROC) value of 0.5, while a perfect classifier would score 1. Through 10-fold cross-validation, our classifier achieved an average AUC value of 0.84.
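As an illustration of the pipeline described above (case folding, tokenizing, stopword removal, stemming, and TF-IDF weighting), here is a minimal self-contained sketch. The stopword list is truncated and the stemmer is reduced to a toy suffix rule, so this is a conceptual aid rather than the authors' implementation:

```javascript
// Toy version of the review pre-processing and TF-IDF steps described above.
const STOPWORDS = new Set(['the', 'a', 'an', 'is', 'to', 'and', 'of', 'in']);

// Case folding, tokenizing, stopword removal, and a crude "stemmer"
// (real pipelines would use a proper stemmer such as Porter's).
function preprocess(review) {
  return review
    .toLowerCase()                           // case folding
    .split(/[^a-z0-9]+/)                     // tokenizing
    .filter(t => t && !STOPWORDS.has(t))     // stopword removal
    .map(t => t.replace(/(ing|ed|s)$/, '')); // crude stemming
}

// TF-IDF: term frequency within a review, weighted by how rare the
// term is across the whole review corpus.
function tfidf(docsTokens) {
  const df = new Map();
  for (const tokens of docsTokens) {
    for (const t of new Set(tokens)) df.set(t, (df.get(t) || 0) + 1);
  }
  const n = docsTokens.length;
  return docsTokens.map(tokens => {
    const tf = new Map();
    for (const t of tokens) tf.set(t, (tf.get(t) || 0) + 1);
    const vec = {};
    for (const [t, f] of tf) {
      vec[t] = (f / tokens.length) * Math.log(n / df.get(t));
    }
    return vec;
  });
}

// Example usage on two toy reviews:
const docs = ['The fees are too high!', 'Love the app, fees are fair.'];
console.log(tfidf(docs.map(preprocess)));
```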
Table 1. Classified reviews

| Classification | Review Text | Explanation |
|---|---|---|
| Related to cryptocurrency | I was caught off guard by the fees. I deposited 100 USD but ended up with just about 75 in my account. | The high transaction fees caused a bit of dissatisfaction for the user. |
| UX in general | With the latest version 1.10.2, crashes are nearly eliminated, but the app still occasionally freezes on startup. | Focuses on how the application behaves; nothing to do with cryptocurrency. |
| Irrelevant to UX | I'm hoping this will be my ticket to the moon! | Unrelated to the application or the cryptocurrency. |

Source: (Research Results, 2024)

_Training Set_ - Out of the 27,934 reviews, a subset of 1,000 reviews was randomly selected for categorization based on their relevance to user experience (UX). In this context, relevance is defined as the review's pertinence to specific features of mobile cryptocurrency wallets and insights derived from previous research. This categorization process sorts the reviews into three groups: those relevant to cryptocurrency, those about UX in general, and those deemed irrelevant to UX. Table 1 presents examples of each review type and explanations for their respective classifications.

_Machine Learning Model_ - After finalizing the training dataset, we employed K-Fold validation to evaluate our machine learning model. Combining our pre-processing techniques, sentiment scoring, and random sampling resulted in an F1 score of 0.74 for reviews related to user experience (UX). The F1 score is a machine learning metric that combines a model's precision and recall. A random binary classifier would have an Area Under the Receiver Operating Characteristic Curve (AUC-ROC) value of 0.5, while a perfect classifier would score 1. Through 10-fold cross-validation, our classifier achieved an average AUC value of 0.84.

**Data Analysis**

_Thematic Analysis_ - This analysis was selected for its proficiency in detecting and isolating data, facilitating the interpretation and formation of patterns [12]. In the subsequent phase, the reviews underwent a batch coding process. This process involved identifying themes within the coded data, each being defined and labeled to represent its essence accurately. As the analysis advanced, these themes were meticulously refined to ensure they precisely mirrored the data's content. This analytical process culminated in identifying four primary themes: domain-specific issues, security and privacy concerns, misconceptions, and trust aspects. The scope of the analysis was then concentrated on the most pertinent reviews for each theme, culminating in 5,466 reviews. Table 2 details the specific number of reviews selected for each theme from the different wallets.

Table 2. The count of classified and analyzed reviews for each wallet and platform

| Wallet | Found Reviews | Classified Reviews | Analyzed Reviews |
|---|---|---|---|
| Metamask | 1,794 | 1,498 | 613 |
| Coinbase | 2,360 | 2,581 | 1,401 |
| Coinomi | 1,692 | 850 | 405 |
| Trust Wallet | 16,130 | 4,016 | 1,884 |
| BlockChain | 3,958 | 2,761 | 1,163 |
| Total | 27,934 | 11,706 | 5,466 |

Source: (Research Results, 2024)

**Software Process Assessment**

Our software process assessment begins with setting review indicators that focus on security, feature richness, and usability. The next step is choosing a sample of wallets, delving into their features, and exploring the functionalities offered by the wallets against user needs and assessment standards. Usability considerations encompass examining user interfaces, intuitiveness, and overall user experience. Wallet apps are observed in real-time usage scenarios to ensure a holistic assessment. Comprehensive testing is conducted, involving the download and installation of the selected wallet apps. This testing phase examines feature richness, ease of use, and any security issues that might impact user satisfaction. The results of these evaluations are then summarized to contribute valuable insights into the strengths and areas for improvement in non-custodial hot wallet applications.

**RESULTS AND DISCUSSION**

This section provides detailed outcomes from the research methodology using thematic analysis and software process assessment. The thematic analysis allowed for the distillation and categorization of critical themes and patterns embedded within the qualitative data, providing a view of the intrinsic connections within the dataset.
Integration with the software process assessment helps gain insight into real-case testing scenarios based on a wallet's usability, feature richness, and security. The results could help build the future architecture of a non-custodial hot crypto wallet.

**Thematic Analysis Result**

Source: (Research Results, 2024) Figure 3. Identified Theme

The thematic analysis revealed four distinct themes, with domain-specific issues emerging as the most prevalent. This was followed by themes related to security and privacy, misconceptions, and trust.

_Domain-specific_ - This theme focuses on issues unique to mobile crypto wallets.

Table 3. Findings in the domain-specific theme

| Review Text | Insight |
|---|---|
| Supports nearly all coins and allows multiple coins of the same wallet, a feature I've had issues with in other apps. | The reviewer prefers wallets that support multiple cryptocurrencies over those that do not. |
| The interface of [mobile wallet name] is poorly designed and easily targeted by phishing scams. My account was hacked, resulting in a loss of $450! | The reviewer experienced financial loss due to the poorly designed user interface. |

Source: (Research Results, 2024)

Our findings indicate a strong user preference for mobile wallets capable of storing multiple currencies. Wallets featuring this functionality typically receive numerous positive reviews. Conversely, poor user interface design is a common critique in the reviews, noted to diminish the overall user experience and, in extreme instances, result in financial loss.

_Security and Privacy_ - This theme was obtained from reviews addressing issues regarding mobile wallets' security and privacy.

Table 4. Findings in the security and privacy theme

| Review Text | Insight |
|---|---|
| The wallet feels very secure to me, thanks to features like password protection, biometrics, BIP39 passphrase, and the ability to combine these options. | The variety of security options offered by the wallet enhances users' sense of security. |
| Despite never sharing my password, an unknown party accessed my wallet. Customer support responded with a bot, leaving me no choice but to delete my account. | Inadequate security measures and poor customer support can lead users to abandon the wallet. |

Source: (Research Results, 2024)

The reviews highlight the necessity of multiple security measures, particularly emphasizing the importance of second-factor authentication. Additionally, the reviews underline the critical role of customer support in assisting users with issues related to sensitive personal information.

_Misconception_ - This theme highlights the drawbacks resulting from user misunderstandings.

Table 5. Findings in the misconception theme

| Review Text | Insight |
|---|---|
| I've generally had no problems, except my balance seems really buggy and inaccurate after transfers. Not sure why that happens. | The reviewer mentioned issues with the balance being displayed inaccurately after transactions, without understanding the cause. |
| The abundance of negative comments suggests that many people are unfamiliar with how crypto works. The balance takes time to sync with the blockchain. Regarding the high ETH transaction fees, they are not the wallet's fault; refer to this article [url to article about transaction fee]. It appears that no one is willing to take the time to understand how this technology functions. | The reviewer mentioned that the majority of users still lack a basic understanding of cryptocurrency. |
Source: (Research Results, 2024)

While certain issues arising from misconceptions could be attributed to developer shortcomings, our analysis suggests that the primary cause often lies in the users' limited understanding of how cryptocurrency functions.

_Trust_ - This theme emerges from reviews that reflect users' confidence in the mobile crypto wallet.

Table 6. Findings in the trust theme

| Review Text | Insight |
|---|---|
| A wallet you can trust that gives you full control over your earnings. | Users prefer having more control over their financial assets. |
| It's really frustrating to trust this wallet when there are so many scams involving people pretending to be customer support! | The presence of proper customer support greatly impacts user trust. |

Source: (Research Results, 2024)

Presently, certain mobile wallets exert a degree of indirect control over how customers administer their wallets. Our findings underscore the importance of providing users with maximal autonomy as a key factor in earning their trust. Additionally, the significance of robust customer support is reiterated within this theme.

**Software Process Assessment Result**

The foundational aspects of crypto wallets are anchored in four key features. Platform availability ensures accessibility across various devices and operating systems, promoting inclusivity and user adoption. Customizability, another vital factor, empowers users to personalize their wallet interfaces and functionalities, enhancing the overall user experience. On-ramp support is integral for facilitating the seamless conversion of traditional fiat currencies into cryptocurrencies, streamlining the entry process for newcomers. Incorporating a built-in crypto exchange within the wallet simplifies the trading experience and consolidates various financial activities into a single, user-friendly platform. Figure 4 shows the curated leading crypto wallets' primary features; only six crypto wallets have all of the basic features defined in the manual survey.

Source: (Research Results, 2024) Figure 4. Leading crypto wallet basic features

User experience is a cornerstone for crypto wallet adoption. The survey revealed significant elements contributing to a positive user experience. Diverse login methods, such as email authentication, ensure accessibility and strengthen security measures. Multi-protocol connection capability is essential for users managing diverse cryptocurrencies, enabling compatibility across different blockchain networks [13]. Integrating crypto name services through the Ethereum Name Service (ENS) or an internal naming service enhances user-friendliness by replacing complex wallet addresses with human-readable names, reducing transaction friction. Figure 5 shows the curated leading crypto wallets with a great user experience; only four crypto wallets meet all the user experience criteria in the manual survey.

Source: (Research Results, 2024) Figure 5. Leading crypto wallet user experience

Security is paramount in the crypto space, and our survey unveiled several vital features enhancing wallet security. Multi-Party Computation (MPC) [14] and multi-signature (multi-sig) [15] functionalities employ advanced cryptographic techniques to fortify the security posture of wallets. Protection related to maximal extractable value (MEV) [16] safeguards users against potential financial losses, and limiting withdrawal amounts further mitigates risks. Anonymity features prioritize user privacy, addressing concerns within the decentralized landscape.
Furthermore, the integration of hardware wallets adds an extra layer of security by keeping private keys offline, reducing susceptibility to online attacks and bolstering overall confidence in the security of digital assets. Figure 6 shows the curated leading crypto wallets with better security; currently, no crypto wallet meets all the security criteria in the manual survey.

Source: (Research Results, 2024) Figure 6. Leading crypto wallet security

**Mandatory Features Recommended for Future Design**

Based on insights derived from the thematic analysis, the incorporation of various features is suggested. Anticipated outcomes from implementing these features include a notable surge in positive reviews relative to negative ones for the application. This shift is expected to assist current and potential users in making informed decisions about adopting the mobile wallet.

Source: (Research Results, 2024) Figure 7. The crypto wallet's required features

**CONCLUSIONS**

The data classification identified four themes: domain-specific, security and privacy, misconceptions, and trust. The thematic analysis indicates that several features should be included in future mobile crypto wallets: a well-designed user interface, 24/7 customer support, multi-cryptocurrency support, and two-factor authentication. Since the domain-specific theme was the most frequently identified (see Figure 3), it suggests that a well-designed user interface and multi-cryptocurrency support are the most crucial features for future mobile crypto wallets. Additionally, the other two features mentioned in Figure 7 are highly recommended to enhance trust between users and developers, thereby increasing a wallet's credibility compared to competitors.

The software process assessment has provided valuable insights into the strengths and areas for improvement within the landscape of crypto wallet development. Examining security measures, features, and usability has illuminated the current state of non-custodial hot wallets and laid the groundwork for enhancing overall effectiveness and user experience. The findings from our assessment underscore the importance of continuous improvement in crypto wallet usability and security while emphasizing user-centric features to ensure the long-term viability of crypto wallets. Future research could contribute to designing a crypto wallet architecture that pays attention to improving user experience while enhancing the digital financial ecosystem.

**REFERENCE**

[1] S. Houy, P. Schmid, and A. Bartel, "Security Aspects of Cryptocurrency Wallets—A Systematic Literature Review," ACM Comput Surv, vol. 56, no. 1, pp. 1–31, Jan. 2024, doi: 10.1145/3596906.
[2] H. Kumar, S. Basak, S. KD, and A. H. Nalband, "Enabling Secured and Seamless Crypto Wallets: A Blockchain Solution," in 2023 2nd International Conference on Vision Towards Emerging Trends in Communication and Networking Technologies (ViTECoN), IEEE, May 2023, pp. 1–8, doi: 10.1109/ViTECoN58111.2023.10157044.
[3] S. Suratkar, M. Shirole, and S. Bhirud, "Cryptocurrency Wallet: A Review," in 2020 4th International Conference on Computer, Communication and Signal Processing (ICCCSP), IEEE, Sep. 2020, pp. 1–7, doi: 10.1109/ICCCSP49186.2020.9315193.
[4] S. Barakat, Q. Hammouri, and K. Yaghi, "Comparison of Hardware and Digital Crypto Wallets," Journal of Southwest Jiaotong University, vol. 57, no. 6, pp. 380–386, Dec.
2022, doi: 10.35741/issn.0258-2724.57.6.36.
[5] P. Ji, "The Advance of Cryptocurrency Wallet with Digital Signature," Highlights in Science, Engineering and Technology, vol. 39, pp. 1098–1103, Apr. 2023, doi: 10.54097/hset.v39i.6714.
[6] R. G. Barresi and F. Zatti, "The Importance of Where Central Bank Digital Currencies Are Custodied: Exploring the Need of a Universal Access Device," SSRN Electronic Journal, pp. 1–12, Nov. 2020, doi: 10.2139/ssrn.3691263.
[7] H. Albayati, S. K. Kim, and J. J. Rho, "A study on the use of cryptocurrency wallets from a user experience perspective," Hum Behav Emerg Technol, vol. 3, no. 5, pp. 720–738, Dec. 2021, doi: 10.1002/hbe2.313.
[8] A. Voskobojnikov, O. Wiese, M. Mehrabi Koushki, V. Roth, and K. Beznosov, "The U in Crypto Stands for Usable: An Empirical Study of User Experience with Mobile Cryptocurrency Wallets," in Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, New York, NY, USA: ACM, May 2021, pp. 1–14, doi: 10.1145/3411764.3445407.
[9] A. Tripathi, A. Choudhary, S. K. Arora, G. Arora, G. Shakya, and B. Rajwanshi, "Crypto Bank: Cryptocurrency Wallet Based on Blockchain," 2024, pp. 223–236, doi: 10.1007/978-3-031-53085-2_19.
[10] H. Jang, S. H. Han, and J. H. Kim, "User Perspectives on Blockchain Technology: User-Centered Evaluation and Design Strategies for DApps," IEEE Access, vol. 8, pp. 226213–226223, 2020, doi: 10.1109/ACCESS.2020.3042822.
[11] Vaibhav and D. Arora, "Web 3.0-Based Crypto Wallet for Securing Assets and Blockchain Transactions," 2024, pp. 583–591, doi: 10.1007/978-981-99-9811-1_46.
[12] V. Braun and V. Clarke, "Toward good practice in thematic analysis: Avoiding common problems and be(com)ing a knowing researcher," Int J Transgend Health, vol. 24, no. 1, pp. 1–6, Jan. 2023, doi: 10.1080/26895269.2022.2129597.
[13] Y. Pang, "A New Consensus Protocol for Blockchain Interoperability Architecture," IEEE Access, vol. 8, pp. 153719–153730, 2020, doi: 10.1109/ACCESS.2020.3017549.
[14] H. Zhong, Y. Sang, Y. Zhang, and Z. Xi, "Secure Multi-Party Computation on Blockchain: An Overview," 2020, pp. 452–460, doi: 10.1007/978-981-15-2767-8_40.
[15] S. Jiang, D. Alhadidi, and H. F. Khojir, "Key-and-Signature Compact Multi-Signatures for Blockchain: A Compiler with Realizations," IEEE Transactions on Dependable and Secure Computing, pp. 1–18, Jun. 2024, doi: 10.1109/TDSC.2024.3410695.
[16] K. Kulkarni, T. Diamandis, and T. Chitra, "Towards a Theory of Maximal Extractable Value I: Constant Function Market Makers," arXiv preprint arXiv:2207.11835, Jul. 2022, doi: 10.48550/arXiv.2207.11835.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.33480/jitk.v10i1.5157?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.33480/jitk.v10i1.5157, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBYNC", "status": "GOLD", "url": "https://ejournal.nusamandiri.ac.id/index.php/jitk/article/download/5157/1207" }
2,024
[ "JournalArticle", "Review" ]
true
2024-07-30T00:00:00
[ { "paperId": "d3518654060df7728e744affea88befbd20ac584", "title": "Security Aspects of Cryptocurrency Wallets—A Systematic Literature Review" }, { "paperId": "9f64c81ecc1b07067f51206244c3f09d14e28f3b", "title": "Enabling Secured and Seamless Crypto Wallets: A Blockchain Solution" }, { "paperId": "af72837430b57b5d55917e48975e1effd14a0e72", "title": "The Advance of Cryptocurrency Wallet with Digital Signature" }, { "paperId": "70a62bd18a6f2f2a0b151fad1eef4cb0d248dc33", "title": "Key-and-Signature Compact Multi-Signatures for Blockchain: A Compiler With Realizations" }, { "paperId": "d089ed89c9d69e6a5dbf00cc8b8c1b444ef0eef1", "title": "COMPARISON OF HARDWARE AND DIGITAL CRYPTO WALLETS" }, { "paperId": "e28e41b67cfbb817d70b90cfab5436d012bb58ea", "title": "Toward good practice in thematic analysis: Avoiding common problems and be(com)ing a knowing researcher." }, { "paperId": "6d79b194e78d09090eaef8a3709995629697ad30", "title": "Towards a Theory of Maximal Extractable Value I: Constant Function Market Makers" }, { "paperId": "952ac5e82f20dd7ea3574c0e79542b4bc900e47d", "title": "A study on the use of cryptocurrency wallets from a user experience perspective" }, { "paperId": "ce68b0eeb744d407b8e44d59e7d26b5486edc55a", "title": "Cryptocurrency Wallet: A Review" }, { "paperId": "f7fc0c9660bb548126dc6a17f55c31813922fd80", "title": "The Importance of Where Central Bank Digital Currencies Are Custodied: Exploring the Need of a Universal Access Device" }, { "paperId": "f864a51095eda485a9e369fb639d49bdf0609f7f", "title": "A New Consensus Protocol for Blockchain Interoperability Architecture" }, { "paperId": "61c88c9cb3db9e759b50bb7fce86c846c3e4ff0b", "title": "Secure Multi-Party Computation on Blockchain: An Overview" }, { "paperId": "65574a7e7d1768b67993fd26e90f0f68c69c0fe9", "title": "Crypto Bank: Cryptocurrency Wallet Based on Blockchain" }, { "paperId": "6e25bdbb42b80905d6f1fd835e86c5d04ccf5e87", "title": "User Perspectives on Blockchain Technology: User-Centered Evaluation and Design Strategies for DApps" }, { "paperId": null, "title": "“Web 3.0-Based Crypto Wallet for Securing Assets and Blockchain Transactions,”" } ]
7,913
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01e09015aa1814c140223e0da0773694be6c11f6
[ "Computer Science" ]
0.884424
xFuzz: Machine Learning Guided Cross-Contract Fuzzing
01e09015aa1814c140223e0da0773694be6c11f6
IEEE Transactions on Dependable and Secure Computing
[ { "authorId": "2367687", "name": "Yinxing Xue" }, { "authorId": "147179436", "name": "Jiaming Ye" }, { "authorId": "2155467453", "name": "Wei Zhang" }, { "authorId": "39838927", "name": "Jun Sun" }, { "authorId": "2109704789", "name": "Lei Ma" }, { "authorId": "48017112", "name": "Haijun Wang" }, { "authorId": "145777691", "name": "Jianjun Zhao" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE Trans Dependable Secur Comput" ], "alternate_urls": null, "id": "d286fdd0-3b6c-433c-afee-87228d8e9f93", "issn": "1545-5971", "name": "IEEE Transactions on Dependable and Secure Computing", "type": "journal", "url": "http://ieeexplore.ieee.org/servlet/opac?punumber=8858" }
Smart contract transactions are increasingly interleaved by cross-contract calls. While many tools have been developed to identify a common set of vulnerabilities, the cross-contract vulnerability is overlooked by existing tools. Cross-contract vulnerabilities are exploitable bugs that manifest in the presence of more than two interacting contracts. Existing methods are however limited to analyze a maximum of two contracts at the same time. Detecting cross-contract vulnerabilities is highly non-trivial. With multiple interacting contracts, the search space is much larger than that of a single contract. To address this problem, we present xFuzz, a machine learning guided smart contract fuzzing framework. The machine learning models are trained with novel features (e.g., word vectors and instructions) and are used to filter likely benign program paths. Comparing with existing static tools, machine learning model is proven to be more robust, avoiding directly adopting manually-defined rules in specific tools. We compare xFuzz with three state-of-the-art tools on 7,391 contracts. xFuzz detects 18 exploitable cross-contract vulnerabilities, of which 15 vulnerabilities are exposed for the first time. Furthermore, our approach is shown to be efficient in detecting non-cross-contract vulnerabilities as well—using less than 20% time as that of other fuzzing tools, xFuzz detects twice as many vulnerabilities.
## xFuzz: Machine Learning Guided Cross-Contract Fuzzing

#### Yinxing Xue, Jiaming Ye, Wei Zhang, Jun Sun, Lei Ma, Haijun Wang, and Jianjun Zhao

**Abstract—Smart contract transactions are increasingly interleaved by cross-contract calls. While many tools have been developed to identify a common set of vulnerabilities, cross-contract vulnerabilities are overlooked by existing tools. Cross-contract vulnerabilities are exploitable bugs that manifest in the presence of more than two interacting contracts. Existing methods are, however, limited to analyzing a maximum of two contracts at the same time. Detecting cross-contract vulnerabilities is highly non-trivial. With multiple interacting contracts, the search space is much larger than that of a single contract. To address this problem, we present XFUZZ, a machine learning guided smart contract fuzzing framework. The machine learning models are trained with novel features (e.g., word vectors and instructions) and are used to filter likely benign program paths. Compared with existing static tools, the machine learning model is proven to be more robust, avoiding directly adopting the manually-defined rules of specific tools. We compare XFUZZ with three state-of-the-art tools on 7,391 contracts. XFUZZ detects 18 exploitable cross-contract vulnerabilities, of which 15 vulnerabilities are exposed for the first time. Furthermore, our approach is shown to be efficient in detecting non-cross-contract vulnerabilities as well: using less than 20% of the time of other fuzzing tools, XFUZZ detects twice as many vulnerabilities.**

**Index Terms—Smart Contract, Fuzzing, Cross-contract Vulnerability, Machine Learning**

This paper is accepted by IEEE Transactions on Dependable and Secure Computing.

#### 1 INTRODUCTION

ETHEREUM has been at the forefront of most rankings of blockchain platforms in recent years [1]. It enables the execution of programs, called smart contracts, written in Turing-complete languages such as Solidity. Smart contracts are increasingly receiving more attention, e.g., with over 1 million transactions per day since 2018 [2]. At the same time, smart contract related security attacks are on the rise as well. According to [3], [4], [5], vulnerabilities in smart contracts have already led to devastating financial losses over the past few years. In 2016, the notorious DAO attack resulted in the loss of 150 million dollars [6]. Additionally, as pointed out by Zou et al. [7], over 75% of developers agree that smart contract software has a much higher security requirement than traditional software. Considering the close connection between smart contracts and financial activities, the security of smart contracts largely affects the stability of society.

Many methods and tools have since been developed to analyze smart contracts. Existing tools can roughly be categorized into two groups: static analyzers and dynamic analyzers. Static analyzers (e.g., [8], [9], [10], [11], [12], [13]) often leverage static program analysis techniques (e.g., symbolic execution and abstract interpretation) to identify suspicious program traces. Due to the well-known limitations of static analysis, there are often many false alarms. On the other side, dynamic analyzers (including fuzzing engines such as [14], [15], [16], [17], [18]) avoid false alarms by dynamically executing the traces. Their limitation is that there can often be a huge number of program traces to execute, and thus smart strategies must be developed to selectively test the program traces in order to identify as many vulnerabilities as possible. Besides, static and dynamic tools also have a common drawback: the detection rules are usually built-in and predefined by developers, and sometimes the rules of different tools can be contradictory (e.g., the reentrancy detection rules in SLITHER and OYENTE [19]).

While existing efforts have identified an impressive list of vulnerabilities, one important category of vulnerabilities, i.e., cross-contract vulnerabilities, has been largely overlooked so far. Cross-contract vulnerabilities are exploitable bugs that manifest only in the presence of more than two interacting contracts. For instance, the reentrancy vulnerability shown in Figure 4 occurs only if three contracts interact in a particular order.

_Yinxing Xue and Wei Zhang are with the University of Science and Technology of China. E-mail: yxxue@ustc.edu.cn, sa190@mail.ustc.edu.cn. Jiaming Ye and Jianjun Zhao are with Kyushu University. E-mail: ye.jiaming.852@s.kyushu-u.ac.jp, zhao@ait.kyushu-u.ac.jp. Jun Sun is with Singapore Management University. E-mail: junsun@smu.edu.sg. Lei Ma is with the University of Alberta. E-mail: ma.lei@acm.org. Haijun Wang is with Nanyang Technological University. E-mail: hjwang.china@gmail.com. Manuscript received December 22, 2021; revised April 14, 2022; accepted June 2, 2022. Date of publication July 2, 2022. This work was supported in part by the National Natural Science Foundation of China under Grant 61972373, in part by the Basic Research Program of Jiangsu Province under Grant BK20201192, and in part by the National Research Foundation Singapore under its NSoE Programme (Award Number: NSOE-TSS2019-03). The research of Dr. Xue is also supported by the CAS Pioneer Hundred Talents Program of China. (Yinxing Xue and Jiaming Ye are co-first authors; Yinxing Xue is the corresponding author.)_
In our preliminary experiment, the two well-known fuzzing engines for smart contracts, i.e., CONTRACTFUZZER [15] (version 1.0) and SFUZZ [14] (version 1.0), both missed this vulnerability because they are limited to analyzing at most two contracts at the same time. Given the large number of cross-contract transactions in practice [20], there is an urgent need for developing systematic approaches to identify cross-contract vulnerabilities.

Detecting cross-contract vulnerabilities, however, is non-trivial. With multiple contracts involved, the search space is much larger than that of a single contract, i.e., we must consider all sequences and interleavings of function calls from multiple contracts. As fuzzing techniques actually run programs and barely produce false positive reports [15], [21], adopting fuzzing for cross-contract vulnerability detection is preferred. However, due to efficiency concerns, we need other techniques to guide fuzzers to practically detect cross-contract vulnerabilities. Previous works (e.g., [22], [23]) have evidenced the advantages of applying machine learning methods to improve the efficiency of vulnerability fuzzing in C/C++ programs. Compared with static rule-based methods, the ML model based method requires no prior domain knowledge about known vulnerabilities, and can effectively reduce the large search space for covering more vulnerable functions. For smart contracts, existing works (e.g., ILF [24]) focus on exploring the state space in the intra-contract scope.
They are unable to address cross-contract vulnerabilities. With a large search space of combinations of numerous function calls, it is desirable to guide the fuzzing process with the aid of machine learning models.

In this work, we propose XFUZZ, a machine learning (ML) guided fuzzing engine designed for detecting cross-contract vulnerabilities. Ideally, according to the Pareto principle in testing [25] (i.e., roughly 80% of errors come from 20% of the code), we want to rapidly identify the error-prone code before applying the fuzzing technique. As reported by previous works [26], [27], existing analysis tools suffer from high false positive rates (e.g., SLITHER [10] and SMARTCHECK [13] have false positive rates of more than 70%). Therefore, adopting only one static tool in our approach may produce biased results. To alleviate this, we use three tools to vote on the reported vulnerabilities in contracts, and we further train an ML model to learn common patterns from the voting results. It is known that ML models can automatically learn patterns from inputs with less bias [28]. Based on this, the overall bias due to using any single tool to identify potentially vulnerable functions in contracts can be reduced.

Specifically, XFUZZ provides multiple ways of reducing the enormous search space. First, XFUZZ is designed to leverage an ML model for identifying the most probably vulnerable functions. That is, an ML model is trained to filter out most of the benign functions whilst preserving most of the vulnerable functions. During the training phase, the ML models are trained on a dataset of program code labeled using three well-known static analysis tools (i.e., the labels are their majority voting result). Furthermore, the program code is vectorized based on word2vec [29]. In addition, manually designed features, such as can_send_eth, has_call and callee_external, are supplied to improve training effectiveness as well. In the guided fuzzing phase, the model is used to predict whether a function is potentially vulnerable or not. In our evaluation of the ML models, the models allow us to filter out 80.1% of the non-vulnerable contracts. Second, to further reduce the effort required to expose cross-contract vulnerabilities, the filtered contracts and functions are further prioritized based on a suspiciousness score, which is defined based on an efficient measurement of the likelihood of covering the program paths.

To validate the usefulness of XFUZZ, we performed comprehensive experiments, comparing it with a static cross-contract detector, CLAIRVOYANCE [19], and two state-of-the-art dynamic analyzers, i.e., CONTRACTFUZZER [15] and SFUZZ, on widely-used open datasets ([30], [31]) and an additional 7,391 contracts. The results confirm the effectiveness of XFUZZ in detecting cross-contract vulnerabilities: 18 cross-contract vulnerabilities have been identified, 15 of which are missed by all the tested state-of-the-art tools. We also show that our search space reduction and prioritization techniques achieve high precision and recall. Furthermore, our techniques can be applied to improve the efficiency of detecting intra-contract vulnerabilities, e.g., XFUZZ detects twice as many vulnerabilities as SFUZZ while using less than 20% of the time. The contributions of this work are summarized as follows.

• To the best of our knowledge, we make the first attempt to formulate and detect three common cross-contract vulnerabilities, i.e., reentrancy, delegatecall and tx-origin.
• We propose a novel ML based approach to significantly reduce the search space of exploitable paths, achieving well-trained ML models with a recall of 95% on a testing dataset of 100K contracts. We also find that the trained model can cover a majority of the reports of other tools.
• We perform a large-scale evaluation and comparative studies with state-of-the-art tools. Leveraging the ML models, XFUZZ outperforms the state-of-the-art tools by at least 42.8% in terms of recall while keeping a satisfactory precision of 96.1%.
• XFUZZ also finds 18 cross-contract vulnerabilities. All of them have been verified by security experts from our industry partner. We have published the exploit code for these vulnerabilities on our anonymized website [32] for public access.

#### 2 MOTIVATION

In this section, we first introduce three common types of cross-contract vulnerabilities. Then, we discuss the challenges in detecting these vulnerabilities with state-of-the-art fuzzing engines to motivate our work.

**2.1 Problem Formulation and Definition**

In general, smart contracts are compiled into opcodes [33] so that they can run on the EVM. We say that a smart contract is vulnerable if there exists a program trace that allows an attacker to gain certain benefits (typically financial) illegitimately. Formally, a vulnerability occurs when there exist dependencies from certain critical instructions (e.g., ORIGIN and DELEGATECALL) to a set of specific instructions (e.g., ADD, SUB and SSTORE). Therefore, to formulate the problem, we adopt the definitions of vulnerabilities from [9], [34], based on which we define (control and data) dependency and then define the cross-contract vulnerabilities.

**Definition 1 (Control Dependency).** An opcode opj is said to be control-dependent on opi if there exists an execution from opi to opj such that opj post-dominates all opk in the path from opi to opj (excluding opi) but does not post-dominate opi. An opcode opj is said to post-dominate an opcode opi if all traces starting from opi must go through opj.
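To make the post-domination relation in Definition 1 concrete, the following is a small sketch of the standard iterative computation. It is a simplification under stated assumptions: real analyses operate on EVM basic blocks, not on a toy graph of single opcodes.

```python
def post_dominators(succ, exit_node):
    """Iteratively compute post-dominator sets on a small CFG."""
    nodes = set(succ) | {exit_node}
    pdom = {n: set(nodes) for n in nodes}   # start from "everything post-dominates"
    pdom[exit_node] = {exit_node}
    changed = True
    while changed:
        changed = False
        for n in nodes - {exit_node}:
            succs = succ.get(n, [])
            meet = set.intersection(*(pdom[s] for s in succs)) if succs else set()
            new = {n} | meet                # a node post-dominates itself
            if new != pdom[n]:
                pdom[n], changed = new, True
    return pdom

# A diamond-shaped CFG: node 0 branches to 1 or 2, both rejoin at node 3.
succ = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(post_dominators(succ, 3))  # node 3 post-dominates 0, 1 and 2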
Although our method can be generalized to support more types of vulnerabilities, in this paper, we focus on the above three vulnerabilities since they are among the most dangerous ones with urgent testing demands. Specifically, the reentrancy and delegatecall vulnerabilities are highlighted as top risky vulnerabilities in previous works [9], [10]. The tx-origin vulnerability is broadly warned in previous research [35], [10]. We define C as a set of critical opcodes, which contains CALL, CALLCODE, DELEGATECALL, i.e., the set of all opcode associated with external calls. These opcodes associated with external calls could be the causes of vulnerabilities (since then the code is under the control of external attackers). **_Definition 3 (Reentrancy Vulnerability). A trace suffers from_** reentrancy vulnerability if it executes an opcode opc ∈ _C_ and subsequently executes an opcode ops in the same function such that ops is SSTORE, and opc depends on _op_ A smart contract suffers from reentrancy vulnerability if and only if at least one of its traces suffers from reentrancy vulnerability. This vulnerability results from the incorrect use of external calls, which are exploited to construct a callchain. When an attacker A calls a user U to withdraw money, the fallback function in contract A is invoked. Then, the malicious fallback function calls back to U to recursively steal money. In Figure 1, the attacker can construct an end-toend call-chain by calling withdrawBalance in the fallback function of the attacker’s contract then steals money. **_Definition 4 (Dangerous Delegatecall Vulnerability). A trace_** suffers from dangerous delegatecall vulnerability if it executes an opcode opc ∈ _C that depends on an opcode_ DELEGATECALL. A smart contract suffers from delegatecall vulnerability if and only if at least one of its traces suffers from delegatecall vulnerability. This vulnerability is due to the abuse of dangerous opcode DELEGATECALL. When a malicious attacker _B calls contract A by using delegatecall, contract A’s_ function is executed in the context of attacker, and thus causes damages. In Figure 2, malicious attacker B sends ethers to contract Delegation to invoke the fallback function at line 10. The fallback function calls contract Delegate and executes the malicious call data msg.data. Since the call data is executed in the context of Delegate, the attacker can change the owner to an arbitrary user by executing pwn at line 3. **_Definition 5 (Tx-origin Misuse Vulnerability). A trace suffers_** from tx-origin misuse vulnerability if it executes an opcode opc ∈ _C that depends on an opcode ORIGIN._ A smart contract suffers from tx-origin vulnerability if and only if at least one of its traces suffers from txorigin vulnerability. This vulnerability is due to the misuse of tx.origin to verify access. An example of such vulnerability is shown in Figure 3. When a user U calls a malicious contract A, who intends to forward call to contract B. Contract B relies on vulnerable identity check (i.e., require(tx.origin == owner) at line 2 to filter malicious access. Since tx.orign returns the address of U (i.e., the address of owner), malicious contract A successfully poses as U. **_Definition 6 (Cross-contract Vulnerability). A group of_** contracts suffer from cross-contract vulnerability if there is a vulnerable trace (that suffers from reentrancy, delegatecall, tx-origin) due to opcode from more than two contracts. 
A smart contract suffers from cross-contract vulnerability if and only if at least one of its traces suffers from cross-contract vulnerability. For example, a cross-contract reentrancy vulnerability is shown in Figure 4. An attack requires the participation of three contracts: malicious contract Logging deployed at addr_m, logic contract Logic deployed at addr_l and wallet contract Wallet deployed at addr_w. First, the attack function log calls function logging at Logic contract then sends ethers to the attacker contract by calling function withdraw at contract Wallet. Next, the wallet contract sends ethers to attacker contract and calls function log An end-to-end call chain ----- Fig. 4: An example of cross-contract reentrancy vulnerability which is missed by the state-of-art fuzzer, namely SFUZZ. _∗Note: The solid boxes represent functions and the dashed containers denote contracts. Specifically, function call is denoted by_ solid line. The cross-contract calls are highlighted by red arrows. The blue arrow represents cross-contract call missed by sFuzz and ContractFuzzer. 1 2 3 4 1 _... is formed and the attacker can_ _⃝→_ _⃝→_ _⃝→_ _⃝→_ _⃝_ recursively steal money without any limitations. **2.2** **State-of-the-arts and Their Limitations** First, we perform an investigation on the capability in detecting vulnerabilities by the state-of-the-art methods, including [10], [8], [9], [19], [14], [15]. In general, cross-contract testing and analysis are not supported by most of these tools except CLAIRVOYANCE. The reason is existing approaches merely focus on one or two contracts, and thus, the sequences and interleavings of function calls from multiple contracts are often ignored. For example, the vulnerability in Figure 4 is a false negative case of static analyzer SLITHER, OYENTE and SECURIFY. Note that although this vulnerability is found by CLAIRVOYANCE, this tool however generates many false alarms, making the confirmation of which rather difficult. This could be a common problem for many static analyzers. Although high false positive rate could be well addressed by fuzzing tools by running contracts with generated inputs, existing techniques are limited to maximum two contracts (i.e., input contract and tested contract). In our investigation of two currently representative fuzzing tools SFUZZ and CONTRACTFUZZER, cross-contract calls are largely overlooked, and thus leads to missed vulnerabilities. To sum up, most of the existing methods and tools are still limited to handle non-cross-contract vulnerabilities, which motivates this work to bridge such a gap towards solving the currently urgent demands. #### 3 OVERVIEW Detecting cross-contract vulnerability often requires examining a large number of sequence transactions and thus can be quite computationally expensive some even infeasible. In this section, we give an overall high-level description of our method, e.g., focusing on fuzzing suspicious transactions based on the guideline of a machine learning (ML) model. Technically, there are three challenges of leveraging ML to guide the effective fuzzing cross-contracts for vulnerability detection: **C1 How to train the machine learning model and achieve** _satisfactory precision and recall._ **C2 How to combine trained model with fuzzer to reduce** search space towards efficient fuzzing. 
**C3** How to equip the guided fuzzer with support for effective cross-contract vulnerability detection.

In the rest of this section, we provide an overview of XFUZZ, which aims at addressing the above challenges, as shown in Figure 5. Generally, the framework can be separated into two phases: the machine learning model training phase and the guided fuzzing phase.

Fig. 5: The overview of the XFUZZ framework.

**3.1 Machine Learning Model Training Phase**

In previous works [36], [37], fuzzers are limited by prior knowledge of vulnerabilities and do not generalize well to vulnerable variants. In this work, we propose to leverage ML predictions to guide fuzzers. The benefit of using ML instead of a particular static tool is that an ML model can reduce the bias introduced by manually defined detection rules. In this phase, we collect training data, engineer features, and evaluate models. First, we employ the state-of-the-art tools SLITHER, SECURIFY and SOLHINT to detect vulnerabilities on the dataset. Next, we collect their reports to label the contracts: a contract that gains at least two votes is labeled as vulnerable. After that, we engineer features. The input contracts are compiled into bytecode and then vectorized by Word2Vec [29]. To address C1, the vectors are enriched by combining them with static features (e.g., can_send_eth, has_call and callee_external), which are extracted from ASTs and CFGs. Eventually, the features are used as inputs to train the ML models. In particular, the precision and recall of the models are evaluated to choose three candidate models (e.g., XGBoost [38], EasyEnsembleClassifier [39] and Decision Tree), among which we select the best one.

**3.2 Guided Testing Phase**

In the guided testing phase, contracts are input to the pretrained models to obtain predictions. After that, the vulnerable contracts are analyzed and the suspicious functions are pinpointed. To address challenge C2, we focus on the functions that are predicted as suspicious. We then use call-graph analysis and control-flow-graph analysis to construct cross-contract call paths. After we collect all available paths, we use the path prioritization algorithm to prioritize them; the prioritization becomes the guidance of the fuzzer. This guidance by model predictions significantly reduces the search space, because the benign functions wait until the vulnerable ones finish: the fuzzer can focus on vulnerable functions and report more vulnerabilities. To address C3, we extract static information (e.g., function parameters and conditional paths) from the contracts to enrich the model predictions. The predictions and the static information are combined to compute path priority scores. Based on these, the most exploitable paths, where vulnerabilities are more likely to be found, are prioritized. Here, the search space of exploitable paths is further reduced, and cross-contract fuzzing therefore becomes feasible by invoking vulnerabilities through the available paths.

#### 4 MACHINE LEARNING GUIDANCE PREPARATION

In this section, we elaborate on the training of our ML model for fuzzing guidance. We discuss data collection in Section 4.1 and introduce feature engineering in Section 4.2, followed by candidate model evaluation in Section 4.3.

**4.1 Data Collection**

SMARTBUGS [31] and SWCREGISTRY [40] are two representatives of existing smart contract vulnerability benchmarks. However, their labeled data is scarce, and the amount currently available is insufficient to train a good model.
Therefore, we choose to download and collect contracts from Etherscan (https://etherscan.io/), a prominent Ethereum service platform. Overall, to be representative, we collect a large set of 100,139 contracts in total for further processing. The collected dataset is then labeled based on the voting results of three well-rated static analyzers (i.e., SOLHINT [11] v2.3.1, SLITHER [10] v0.6.9 and SECURIFY [9] v1.0). The three tools are chosen because they are (1) state-of-the-art static analyzers and (2) well maintained and frequently updated. The detection capability varies among these tools (as shown in Table 1).

TABLE 1: Vulnerability detection capability of the voting static tools.

| | Slither | Solhint | Securify |
|---|---|---|---|
| Reentrancy | ✓ | ✓ | ✓ |
| Tx-origin | ✓ | ✓ | |
| Delegatecall | ✓ | | |

We then vote to label the dataset, aiming at eliminating the bias of each individual tool. Note that two of the vulnerabilities (i.e., delegatecall and tx-origin) are hardly supported by existing tools. Therefore, we only apply majority voting to vulnerabilities supported by at least two tools. That is, for reentrancy, a function that gains at least two votes is deemed vulnerable; for tx-origin, a function is deemed vulnerable when it gains at least one vote. As for the delegatecall vulnerability, we label all reported functions as vulnerable ones. As a result, we collect 788 reentrancy, 40 delegatecall and 334 tx-origin vulnerabilities, respectively. All of the above vulnerabilities are manually confirmed, to remove false alarms, by two authors of this paper, both of whom have more than 3 years of smart contract development experience.

**4.2 Feature Engineering**

Both vulnerable and benign functions are then preprocessed by SLITHER to extract their runtime bytecode. After that, Word2Vec [29] is leveraged to transform the bytecode into a 20-dimensional vector. However, as reported in [41], the vectors alone are insufficient for training a high-performance model. To address this, we enrich the vectors with 7 additional static features extracted from CFGs. In short, the features have 27 dimensions in total, of which 20 are yielded by Word2Vec and the other 7 are summarized in Table 2.

TABLE 2: The 7 static features.

| Feature Name | Type | Description |
|---|---|---|
| has_modifier | bool | whether it has a modifier |
| has_call | bool | whether it contains a call operation |
| has_delegate | bool | whether it contains a delegatecall |
| has_tx_origin | bool | whether it contains a tx-origin operation |
| has_balance | bool | whether it has a balance check operation |
| can_send_eth | bool | whether it supports sending ethers |
| callee_external | bool | whether it contains external callees |

Among the 7 features, has_modifier, has_call, has_balance, callee_external and can_send_eth are collected by utilizing static analysis techniques. The feature has_modifier is designed to identify existing program guards. In smart contract programs, a function modifier is often used to guard a function from arbitrary access; that is, a function with a modifier is less likely to be a vulnerable one. Therefore, we make the modifier a counter-feature to avoid false alarms. Feature has_call and feature has_balance are designed to identify external calls and balance check operations. These two features are closely connected with transfer operations; we prepare them to better locate transfer behavior and narrow the search space. Feature callee_external provides important information on whether the function has external callees. This feature is used to capture risky calls.
In smart contracts, cross-contract calls are prone to be exploited by attackers. Feature can_send_eth extracts static information (e.g., whether the function has a transfer operation) to figure out whether the function is able to send ethers to others. Considering that vulnerable functions often contain risky transfer operations, this feature helps filter out benign functions and reduce false positive reports. The remaining two features, i.e., has_delegate and has_tx_origin, correspond to particular key opcodes used in the vulnerabilities. Specifically, feature has_delegate corresponds to the opcode DELEGATECALL in delegatecall vulnerabilities, and feature has_tx_origin corresponds to the opcode ORIGIN in tx-origin vulnerabilities. These two features are specifically designed for the two vulnerabilities, as their names suggest. Note that the features can easily be updated to support the detection of new vulnerabilities. If a new vulnerability shares a similar mechanism with the above three vulnerabilities or is closely related to them, the existing features can be directly adopted; otherwise, one or two new features highly correlated with the new type of vulnerability should be added. The 7 static features are combined with the word vectors, which together form the input to our ML models for further training.

**4.3 Model Selection**

In this section, we train and evaluate diverse candidate models, based on which we select the best one to guide fuzzers. To achieve this, one challenge we have to address first is dataset imbalance. In particular, there are 1,162 vulnerabilities and 98,977 benign contracts. This is not rare in ML-based vulnerability detection tasks [42], [43]. In fact, our dataset endures imbalance at a rate of 1:126 for reentrancy, 1:2,502 for delegatecall and 1:298 for tx-origin. Such an imbalanced dataset can hardly be used for training. To address the challenge, we first eliminate duplicated data. In fact, we found that 73,666 word vectors are exactly the same as others. These samples differ in source code, but after they are compiled, extracted and transformed into vectors, they share the same values, because most of them are syntactically identical clones [44] at the source code level. After this remedy, the data imbalance comes to 1:31 for reentrancy, 1:189 for delegatecall and 1:141 for tx-origin. Still, the dataset is highly imbalanced. As studied in [45], the imbalance can be alleviated by data sampling strategies. However, we find that sampling strategies like oversampling [46] can hardly improve the precision and recall of the models, because such strategies introduce too much polluted data instead of real vulnerabilities. We then attempt to evaluate models to select one that fits the imbalanced data well. Note that, to counteract the impact of different ML models, we try to cover as many candidate ML methods as possible, among which we select the best one. The models we evaluated include the tree-based models XGBT [38], EEC [39] and Decision Tree (DT), as well as other representative ML models like Logistic Regression, Bayes models, SVMs and LSTM [47]. The performance of the models can be found in Table 3.

TABLE 3: The performance of the evaluated ML models.

| Model Name | Precision | Recall |
|---|---|---|
| EasyEnsembleClassifier | 26% | 95% |
| XGBoost | 66% | 48% |
| DecisionTree | 70% | 43% |
| SupportVectorMachine | 60% | 14% |
| KNeighbors | 50% | 43% |
| NaiveBayes | 50% | 59% |
| LogisticRegression | 53% | 38% |
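As a minimal sketch of how such candidate models can be trained on the 27-dimensional features, the snippet below combines 20-dimensional Word2Vec opcode embeddings with the 7 static flags and fits an EasyEnsembleClassifier. The library choices (gensim, imbalanced-learn) and the inputs `opcode_seqs`, `static` and `y` are assumed stand-ins; the paper does not publish its training code.

```python
import numpy as np
from gensim.models import Word2Vec
from imblearn.ensemble import EasyEnsembleClassifier
from sklearn.model_selection import cross_validate

# opcode_seqs: one opcode-mnemonic sequence per function; static: the 7 flags
# per function; y: the voted labels. All three are stand-in inputs.
w2v = Word2Vec(opcode_seqs, vector_size=20, window=5, min_count=1)

def embed(seq):
    # average the 20-dim opcode embeddings of one function
    return np.mean([w2v.wv[op] for op in seq], axis=0)

X = np.hstack([np.array([embed(s) for s in opcode_seqs]),
               np.array(static, dtype=float)])          # 20 + 7 = 27 dims

# EEC undersamples the majority class per base estimator, which suits the
# heavy class imbalance described above.
eec = EasyEnsembleClassifier(n_estimators=10, random_state=0)
res = cross_validate(eec, X, y, scoring=("precision", "recall"))
print(res["test_precision"].mean(), res["test_recall"].mean())
```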
We find that the tree-based models achieve better precision and recall than the others. The non-tree-based models are biased towards the majority class and hence show very poor classification rates on the minority classes. Therefore, we select XGBT, EEC and DT as the candidate models. The precision-recall curves of the three models on positive cases are shown in Figure 6. In this figure, the dashed lines denote models fitted on the validation set and the solid lines denote fitting on the testing set. Intuitively, model XGBT and model EEC achieve better performance, with similar P-R curves. However, EEC performs much better than XGBT in recall. In fact, model XGBT holds a precision rate of 66% and a recall rate of 48%; comparatively, model EEC achieves a precision rate of 26% and a recall rate of 95%. We remark that our goal is not to train a model that is very accurate, but rather a model that allows us to filter out as many benign contracts as possible without missing real vulnerabilities. Therefore, we select the EEC model for further guiding the fuzzing process.

Fig. 6: The P-R curves of the models. The dashed lines represent performance on the training set, while the solid lines represent performance on the validation set.

**4.4 Model Robustness Evaluation**

To further evaluate the robustness of our selected model, and to assess to what extent our model can represent existing analyzers, we compare the vulnerability detection of our model against other state-of-the-art static analyzers on an unknown dataset. The evaluation dataset is downloaded from a prominent third-party blockchain security team (https://github.com/tintinweb/smart-contract-sanctuary). We select smart contracts released for compiler versions 0.4.24 and 0.4.25 (i.e., the majority versions among existing smart contract applications [48]) and remove the contracts that were used in our earlier model training and model selection. In total, we obtain 78,499 contracts for evaluation.

**Definition 7 (Coverage Rate of ML Model on Another Tool).** Given the true positive reports Rm of the ML model and the true positive reports Rt of another tool, the coverage rate CR(t) of the ML model on the tool is calculated as:

CR(t) = (Rm ∩ Rt)/Rt    (1)

The results are listed in Table 4. Here, we use the coverage rate (CR) to evaluate the representativeness of our model regarding the three vulnerabilities. Specifically, the coverage rate measures how much of the ML model's reports intersect with those of the static analyzer tools; it is calculated as listed in Definition 7. N.A. in the table denotes that detection of this vulnerability is not supported by the analyzer.

TABLE 4: Coverage rate of the ML model on the voting static tools.

| | CR(Slither) | CR(Securify) | CR(Solhint) |
|---|---|---|---|
| Reentrancy | 83.6% | 81.1% | 86.3% |
| Tx-origin | 91.9% | N.A. | 75.1% |
| Delegatecall | 90.6% | N.A. | N.A. |

Our evaluation results show that the reports of our tool cover a majority of the reports of other tools. Specifically, the trained ML model can well approximate the capability of each static tool used in vulnerability labeling and model training. For example, 81.1% of the true positive reports of SECURIFY on reentrancy are also contained in our ML model's reports. Besides, 75.1% of the true positive reports of SOLHINT on tx-origin and 90.6% of the true positive reports of SLITHER on delegatecall are also covered.

#### 5 GUIDED CROSS-CONTRACT FUZZING

**5.1 Guidance Algorithm**

The pretrained models are applied to guide fuzzers in such a way that the predictions are utilized to (1) locate suspicious functions and (2) combine with static information for path prioritization.
Our guidance is based on both the model predictions and the priority scores computed from static features.

1 contract Wallet{
2     function withdraw(address addr, uint value){
3         addr.transfer(value);
4     }
5     function changeOwner(address[] addrArray, uint idx) public{
6         require(msg.sender == owner);
7         owner = addrArray[idx];
8         withdraw(owner, this.balance);
9 } }
10 contract Logic{
11     function logTrans(address addr_w, address _exec, uint _value, bytes infor) public{
12         Wallet(addr_w).withdraw(_exec, _value);
13 } }

Fig. 7: An example of prioritizing paths.

The reason is that, even with the machine learning model filtering, the search space is still rather large, which is evidenced by the large number of paths explored by SFUZZ (e.g., the 2,596 suspicious functions have 873 possibly vulnerable paths); thus we propose to first prioritize the paths. The overall process of our guided fuzzing can be found in Algorithm 1. In this algorithm, we first retrieve the function list of an input source at line 1. Next, from line 3 to line 8, we calculate the path priority based on two scores (i.e., function priority scores and caller priority scores) for each path. Both scores are designed for prioritizing suspicious functions. After the calculation, the results are saved together with the function itself. In line 10, we prioritize the suspicious function paths; the prioritization algorithm can be found in Algorithm 2. The trace with higher priority is tested first by the fuzzer. Finally, from line 14 to line 21, we pop a candidate trace from the prioritized list and employ fuzzers to conduct focused fuzzing. The fuzzing process does not end until it reaches the timeout limit, and the found vulnerabilities are returned as the final result. The details of our prioritization algorithm are shown in Algorithm 2: the input of the algorithm is the functions and their corresponding priority scores, which are calculated in Algorithm 1.

**Algorithm 1: Machine learning guided fuzzing**
input: IS, all the input smart contract source code
input: M, the suspicious-function detection ML model
input: TRs ← ∅, the set of potentially vulnerable function execution paths
output: V ← ∅, the set of vulnerable paths
1  Fs ← IS.getFunctionList()
2  // get the functions in a contract
3  foreach function f ∈ Fs do
4      if ifIsSuspiciousFunction(f, M) is True then
5          // employ ML models to predict whether the function is suspicious
6          Sfunc ← getFuncPriorityScore(f)
7          Scaller ← getCallerPriorityScore(f)
8          TRs ← TRs ∪ {f, Sfunc, Scaller}
9  // get scores for each function
10 PTR ← PrioritizationAlgorithm(TRs)
11 // prioritized paths
12 V ← ∅
13 // the output vulnerability list
14 while not timeout do
15     T ← PTR.pop()
16     // pop the trace with higher priority
17     FuzzingResult ← Fuzzing(T)
18     if FuzzingResult is Vulnerable then
19         V ← V ∪ {T}
20     else
21         continue
22 return V

**Algorithm 2: Prioritization Algorithm**
input: M, the trained machine learning model
input: TRs, functions and their priority scores
output: PTR, the set of prioritized vulnerable paths
1 while isNotEmpty(TRs) do
2     TRs ← sortByFunctionPriority(TRs)
3     function f ← TRs.pop()
4     paths Ps ← getAllPaths(f)
5     while isNotEmpty(Ps) do
6         Ps ← sortByCallerPriority(Ps)
7         P ← Ps.pop()
8         PTR ← PTR ∪ P
9 return PTR
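For readers who prefer code to pseudocode, a compact Python rendering of the two algorithms follows. This is a sketch, not xFuzz's released implementation: `model`, `get_all_paths` and `fuzz_one` are assumed stand-ins, and the priority helpers implement Definitions 8 and 9 given in the next subsection.

```python
import time

def guided_fuzzing(functions, model, get_all_paths, fuzz_one, budget_s):
    # Algorithm 1, lines 1-8: keep only ML-flagged functions with their scores
    trs = [(f, func_priority(f)) for f in functions
           if model.predict([f.features])[0] == 1]
    # Algorithm 2: lower function score first, then lower caller score per path
    ptr = []
    for f, _ in sorted(trs, key=lambda t: t[1]):
        ptr += sorted(get_all_paths(f), key=caller_priority)
    # Algorithm 1, lines 14-22: fuzz the prioritized paths until the timeout
    vulnerable, deadline = [], time.time() + budget_s
    for trace in ptr:
        if time.time() > deadline:
            break
        if fuzz_one(trace):          # run the fuzzer on this call path
            vulnerable.append(trace)
    return vulnerable
```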
**5.2** **Priority Score**

Generally, the path priority consists of two parts: function priority and caller priority. The function priority evaluates the complexity of a function, and the caller priority is designed to measure the cost of traversing a path.

**Function Priority.** We collect static features of functions to compute the function priority, from which a priority score can be obtained; a lower score denotes a higher priority. We first mark the suspicious functions by model predictions. A suspicious function is likely to contain vulnerabilities, so it is given higher priority. We implement this as a factor fs which equals 0.5 for suspicious functions and 1 for benign functions. For example, in Figure 7, the function withdraw is predicted as suspicious, so the factor fs equals 0.5. Next, we compute the caller dimensionality SC. The dimensionality is the number of callers of a function. In cross-contract fuzzing, a function with multiple callers requires more testing time to traverse all paths. For example, in Figure 7, function withdraw in contract Wallet has an internal caller changeOwner and an external caller logTrans, thus the dimensionality of this function is 2. The parameter dimensionality SP is set to measure the complexity of the parameters. Functions with complex parameters (i.e., array, bytes and address parameters) are assigned lower priority, because these parameters often increase the difficulty of penetrating a function. Specifically, each parameter has dimensionality 1, except for the complex parameters, which have dimensionality 2. The parameter dimensionality of a function is the sum of its parameters’ dimensionalities. For example, in Figure 7, functions withdraw and changeOwner both have an address and an integer parameter, thus their dimensionality is 3. Function logTrans has two address, one bytes and one integer parameter, so its dimensionality is 7.

**_Definition 8 (Function Priority Score)._** Given the suspicious factor fs, the caller dimensionality score SC and the parameter dimensionality score SP, the function priority score Sfunc is calculated as:

Sfunc = fs × (SC + 1) × (SP + 1)    (2)

In this formula, we add 1 to the caller dimensionality and the parameter dimensionality to avoid an overall score of 0. The priority scores in Figure 7 are: function withdraw = 6, function changeOwner = 4, function logTrans = 8. The results show that function changeOwner has the highest priority, because function withdraw has two callers to traverse, while function logTrans is more difficult to penetrate than changeOwner.
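As a quick check of Definition 8, a small Python sketch (ours; parameter types are simplified to plain strings) reproduces the three Figure 7 scores:

```python
COMPLEX_TYPES = {"array", "bytes", "address"}  # parameter types that count double

def param_dimensionality(param_types):
    # each parameter contributes 1; complex parameters contribute 2
    return sum(2 if t in COMPLEX_TYPES else 1 for t in param_types)

def function_priority(suspicious, n_callers, param_types):
    """Definition 8: Sfunc = fs * (SC + 1) * (SP + 1); lower means higher priority."""
    fs = 0.5 if suspicious else 1.0
    return fs * (n_callers + 1) * (param_dimensionality(param_types) + 1)

print(function_priority(True, 2, ["address", "uint"]))                       # withdraw    -> 6.0
print(function_priority(False, 0, ["array", "uint"]))                        # changeOwner -> 4.0
print(function_priority(False, 0, ["address", "address", "bytes", "uint"]))  # logTrans    -> 8.0
```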
**Caller Priority.** We traverse every caller of a function and collect their static features, based on which we compute the priority score to decide which caller to test first. First, the number of branch statements (e.g., if, for and while) and assertions (e.g., require and assert) is counted to measure the condition complexity Comp, which describes the difficulty of bypassing the conditions. A path with more conditions has lower priority. For example, in Figure 7, function withdraw has two callers. One caller, changeOwner, has an assertion at line 6, so its complexity is 1; the other caller, logTrans, contains no conditions, thus its complexity is 0. Next, we count the condition distance. SFUZZ selects seeds according to branch distance only, which is not ideal for identifying the three particular kinds of cross-contract vulnerabilities that we focus on in this work. Thus, we propose to consider not only the branch distance but also a condition distance CondDis. This distance is intuitively the number of statements from the entry to the condition; if a function has more than one condition, the distance is the number of statements between the entry and the first condition. For example, in Figure 7, the condition distance of changeOwner is 1 and the condition distance of logTrans is 0.

**_Definition 9 (Caller Priority Score)._** Given the condition distance CondDis and the path condition complexity Comp, the caller priority score Scaller is calculated as:

Scaller = (CondDis + 1) × (Comp + 1)    (3)

Finally, the caller priority score is computed based on the condition complexity and the condition distance, as shown in Definition 9. We add 1 to the complexity and the distance so that the overall score is not 0. The caller priority scores in Figure 7 are: logTrans → withdraw = 1, changeOwner → withdraw = 4. Function changeOwner has an identity check at line 6, which increases the difficulty of penetration; thus, the other path, from logTrans to withdraw, is prioritized.
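Analogously, Definition 9 can be sketched as follows (again our own illustration, with the condition counts and distances supplied by hand rather than parsed from the source):

```python
def caller_priority(cond_distance, cond_complexity):
    """Definition 9: Scaller = (CondDis + 1) * (Comp + 1); lower is tested first."""
    return (cond_distance + 1) * (cond_complexity + 1)

# Figure 7: the require() at line 6 gives changeOwner one condition at distance 1.
print(caller_priority(0, 0))  # logTrans    -> withdraw: score 1, tested first
print(caller_priority(1, 1))  # changeOwner -> withdraw: score 4
```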
**5.3** **Cross-contract Fuzzing**

Given the prioritized paths, we utilize cross-contract fuzzing to improve fuzzing efficiency. We implement this fuzzing technique with the following steps:
1) The contracts under test are deployed on the EVM. As shown in Figure 8, the fuzzer first deploys all contracts on a local private chain to facilitate cross-contract calls among contracts.
2) The path-unrelated functions are called. Here, path-unrelated functions denote functions that do not appear in the input prioritized paths; we run them first to initialize the state variables of a contract.
3) We store the function selectors appearing in all contracts. The function selector is the unique identity recognizer of a function; it is usually encoded as a 4-byte hex code [49].
4) The fuzzer checks whether there is a cross-contract call. If not, the following step 5 and step 6 are skipped.
5) The fuzzer automatically searches the local states to find the correct function selectors.
6) The fuzzer then directly triggers a cross-contract call to the target function.
7) The fuzzer compares the execution results against the detection rules and outputs reports.

Fig. 8: The cross-contract fuzzing process.

#### 6 EVALUATION

XFUZZ is implemented in Python and C with 3,298 lines of code. All experiments are run on a computer running Ubuntu 18.04 LTS and equipped with an Intel Xeon E5-2620v4, 32 GB of memory and a 2 TB HDD. For the baseline comparison, XFUZZ is compared with the state-of-the-art fuzzer SFUZZ [14], a previously published testing engine CONTRACTFUZZER [15] and a static cross-contract analysis tool CLAIRVOYANCE [19]. The recently published tool ECHIDNA [16] relies on manually written testing oracles, which may lead to different testing results depending on the developer’s expertise; thus, it is not compared. Other tools (like HARVEY [21]) are not publicly available for evaluation, and thus are not included in our evaluations. We systematically run all four tools on the contract datasets. Notably, to verify the authenticity of the vulnerability reports, we invite senior technical experts from the security department of our industry partner to check the vulnerable code. Our evaluation aims at investigating the following research questions (RQs):

**RQ1.** How effective is XFUZZ in detecting cross-contract vulnerabilities?
**RQ2.** To what extent do the machine learning models and the path prioritization contribute to reducing the search space?
**RQ3.** What is the overhead of XFUZZ, compared to the vanilla SFUZZ?
**RQ4.** Can XFUZZ discover real-world unknown cross-contract vulnerabilities, and what are the reasons for false negatives?

**6.1** **Dataset Preparation**

Our evaluation dataset includes smart contracts from three sources: 1) datasets from previously published works (e.g., [30] and [31]); 2) smart contract vulnerability websites with good reputation (e.g., [40]); 3) smart contracts downloaded from Etherscan. The dataset is carefully checked to remove contracts duplicated with the dataset used in our machine learning training. Specifically, Dataset1 includes contracts from previous works and well-known websites. After we remove duplicate contracts and toy contracts (i.e., those which are not deployed on real-world chains), we collect 18 labeled reentrancy vulnerabilities. To enrich the evaluation dataset, our Dataset2 includes contracts downloaded from Etherscan. We remove contracts without external calls (they are irrelevant to cross-contract vulnerabilities) and contracts that are not developed using Solidity 0.4.24 or 0.4.25 (i.e., the two most popular versions of Solidity [48]). In the end, 7,391 contracts are collected in Dataset2. The source code of the above datasets is publicly available on our website [32] so that the evaluations are reproducible, benefiting further research.

**6.2** **RQ1: Vulnerability Detection Effectiveness**

We first conduct evaluations on Dataset1 by comparing the three tools CONTRACTFUZZER, SFUZZ and XFUZZ. CLAIRVOYANCE is not included because it is a static analysis tool. For the sake of page space, we present a part of the results in Table 5 with an overall summary and leave the whole list available online[1]. In this evaluation, CONTRACTFUZZER fails to find any vulnerability among the contracts. SFUZZ missed 3 vulnerabilities and outputted 9 incorrect reports. Comparatively, XFUZZ missed 2 vulnerabilities and outputted 6 incorrect reports. The reason for the missed vulnerabilities and incorrect reports lies in the difficult branch conditions (e.g., an if statement with 3 conditions), which block the fuzzer from traversing vulnerable branches. Note that XFUZZ is equipped with model guidance so that it can focus on fuzzing suspicious functions and find more vulnerabilities than SFUZZ.
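For reference, the P% and R% figures reported throughout this section follow the standard report-level definitions; a minimal sketch (ours), instantiated with the Dataset1 numbers just reported for XFUZZ and treating the 16 correctly reported bugs (18 labeled minus 2 missed) as true positives:

```python
def precision_recall(tp, fp, fn):
    """Precision = TP/(TP+FP); recall = TP/(TP+FN)."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

p, r = precision_recall(tp=16, fp=6, fn=2)  # 6 incorrect reports, 2 missed bugs
print(f"precision = {p:.1%}, recall = {r:.1%}")  # 72.7% and 88.9%
```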
While we compared our tool with existing works on the publicly available Dataset1, that dataset only provides non-cross-contract labels and thus cannot be used to verify our detection ability on cross-contract vulnerabilities. To complete the picture, we further evaluate the effectiveness of cross-contract and non-cross-contract fuzzing on Dataset2. To reduce the effect of randomness, we repeat each setting 20 times and report the averaged results.

1. https://anonymous.4open.science/r/xFuzzforReview-ICSE/Evaluation%20on%20Open-dataset.pdf

TABLE 5: Evaluation results on Dataset1 (the per-contract check marks for addresses such as 0x7a8721a9, 0x4e73b32e, 0xb5e1b1ee, 0xaae1f51c and 0x7541b76c are omitted here). Summary: ContractFuzzer 0/18, xFuzz 9/18, sFuzz 5/18.

_6.2.1 Cross-contract Vulnerability._
The results are summarized in Table 6. Note that “P%” and “R%” represent the precision rate and recall rate, “#N” is the number of vulnerability reports, “C.V.” means CLAIRVOYANCE and “C.F.” means CONTRACTFUZZER. Cross-contract vulnerabilities are currently not supported by CONTRACTFUZZER and SFUZZ, and thus they report no vulnerabilities detected.

TABLE 6: Performance of XFUZZ, CLAIRVOYANCE (C.V.), CONTRACTFUZZER (C.F.) and SFUZZ on cross-contract vulnerabilities.
        reentrancy          delegatecall        tx-origin
        P%    R%    #N      P%    R%    #N      P%    R%    #N
C.F.    0     0     0       0     0     0       0     0     0
SFUZZ   0     0     0       0     0     0       0     0     0
C.V.    43.7  43.7  16      0     0     0       0     0     0
XFUZZ   100   81.2  13      100   100   3       100   100   2

**Precision.** CLAIRVOYANCE managed to find 7 true cross-contract reentrancy vulnerabilities. In comparison, XFUZZ found 9 cross-contract reentrancy, 3 cross-contract delegatecall and 2 cross-contract tx-origin vulnerabilities. The two tools found 21 cross-contract vulnerabilities in total. CLAIRVOYANCE reported 16 vulnerabilities, but only 43.7% of them are true positives. In contrast, XFUZZ generates 18 (13+3+2) reports of the three types of cross-contract vulnerabilities, and all of them are true positives. The high false positive rate of CLAIRVOYANCE is mainly due to its static-analysis-based approach, which lacks runtime validation. We further checked the 18 vulnerabilities on some third-party security exposure websites [50], [40], [31] and found that 15 of them are not flagged.

**Recall.** The 9 vulnerabilities missed by CLAIRVOYANCE all result from the abuse of detection rules, i.e., the vulnerable contracts are filtered out by unsound rules. In total, 3 cross-contract vulnerabilities are missed by XFUZZ. A close investigation shows that they are missed due to complex path conditions, which block the input from penetrating the function. We also carefully checked the false negatives missed by XFUZZ, and found they are not reported by CONTRACTFUZZER and SFUZZ either. While existing works all fail to penetrate the complex path conditions, we believe this limitation can be addressed by future work.

TABLE 7: Performance of XFUZZ, CLAIRVOYANCE (C.V.), CONTRACTFUZZER (C.F.) and SFUZZ on non-cross-contract evaluations.
        reentrancy          delegatecall        tx-origin
        P%    R%    #N      P%    R%    #N      P%    R%    #N
C.F.    100   1.7   3       0     0     0       0     0     0
SFUZZ   84.2  33.5  70      100   54.3  19      0     0     0
C.V.    48.3  40.4  145     0     0     0       0     0     0
XFUZZ   95.5  84.6  156     100   100   35      100   100   25

Fig. 9: Comparison of reported vulnerabilities between XFUZZ and SFUZZ regarding reentrancy.
_6.2.2 Non-Cross-contract Vulnerability._
The experimental results show that XFUZZ improves the detection of non-cross-contract vulnerabilities as well (see Table 7). For reentrancy, CONTRACTFUZZER achieves the best precision rate of 100% but the worst recall rate of 1.7%. SFUZZ and CLAIRVOYANCE identified 33.5% and 40.4% of the vulnerabilities, respectively. XFUZZ has a precision rate of 95.5%, which is slightly lower than that of CONTRACTFUZZER, and, more importantly, the best recall rate of 84.6%. XFUZZ exhibits a strong capability in detecting vulnerabilities, finding a total of 209 (149+35+25) vulnerabilities.

**Precision.** For reentrancy, CLAIRVOYANCE reports 75 false positives, because of (1) the abuse of detection rules and (2) unexpected jumps to unreachable paths due to program errors. The 11 false positives of SFUZZ are due to misconceived ether transfers: SFUZZ captures ether transfers to locate dangerous calls, but ethers sent from the attacker to the victim are also falsely captured. The 7 false alarms of XFUZZ are due to mistakes of contract programmers who call nonexistent functions; these calls are misconceived as vulnerabilities by XFUZZ.

**Recall.** CLAIRVOYANCE missed 59.6% of the true positives; the root cause is the adoption of unsound rules during static analysis. SFUZZ missed 117 reentrancy vulnerabilities and 16 delegatecall vulnerabilities due to (1) timeouts and (2) the incapability to find feasible paths to the vulnerability. XFUZZ missed 27 vulnerabilities due to complex path conditions.

**Answer to RQ1:** Our tool XFUZZ achieves a precision of 95.5% and a recall of 84.6%. Among the four evaluated methods, XFUZZ achieves the best recall. Besides, XFUZZ successfully finds 209 real-world non-cross-contract vulnerabilities as well as 18 real-world cross-contract vulnerabilities.

Fig. 10: Comparison of reported vulnerabilities between XFUZZ and SFUZZ regarding delegatecall.

**6.3** **RQ2: The Effectiveness of Guided Testing**

This RQ investigates the usefulness of the ML model and of path prioritization for the guidance of fuzzing. To answer it, we compare SFUZZ with a customized version of XFUZZ which differs from SFUZZ only by adopting the ML model (without focusing on cross-contract vulnerabilities). The intuition is to check whether the ML model enables us to reduce the time spent on benign contracts and thus reveal vulnerabilities more efficiently. That is, we implement XFUZZ such that each contract is only allowed to be fuzzed for tl seconds if the ML model considers the contract benign, or otherwise for 180 seconds, which is also the time limit adopted in SFUZZ. Note that if tl is 0, the contract is skipped entirely when it is predicted to be benign by the ML model. The goal is to see whether we can set tl to a value smaller than 180 safely (i.e., without missing vulnerabilities). We thus systematically vary the value of tl and observe the number of identified vulnerabilities.
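The per-contract time budget used in this customized version can be sketched as follows (our paraphrase of the setup described above, not the actual implementation):

```python
FULL_BUDGET = 180  # seconds per contract, the time limit adopted in sFuzz

def fuzz_budget(predicted_benign, tl):
    """Reduced budget tl for ML-predicted-benign contracts; tl == 0 skips them."""
    return tl if predicted_benign else FULL_BUDGET

print(fuzz_budget(True, 30))   # likely benign contract -> 30 s
print(fuzz_budget(False, 30))  # suspicious contract    -> 180 s
```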
The results are summarized in Figure 9 and Figure 10. Note that the tx-origin vulnerability is not included, since it is not supported by SFUZZ. The red line represents vulnerabilities only found by XFUZZ, the green line represents vulnerabilities only reported by SFUZZ, and the blue line denotes the reports shared by both tools. We can see that the curves climb/drop sharply at the beginning and then saturate/flatten after 30 s, indicating that most vulnerabilities are found in the first 30 s. We observe that when tl is set to 0 s (i.e., contracts predicted as benign are skipped entirely), XFUZZ still detects 82.8% of the reentrancy vulnerabilities (i.e., 111 out of 134, or equivalently 166% of that of SFUZZ) as well as 65.0% of the delegatecall vulnerabilities (13 out of 20). The result further improves if we set tl to 30 seconds: almost all vulnerabilities are identified (all except 2 out of 174 reentrancy vulnerabilities, and none of the delegatecall vulnerabilities are missed). Based on this result, we conclude that the ML model indeed enables us to significantly reduce the fuzzing time on likely benign contracts (i.e., from 180 seconds to 30 seconds) while missing almost no vulnerabilities.

**The Effectiveness of Path Prioritization.** To evaluate the relevance of path prioritization, we further analyze the results of the customized version of XFUZZ discussed above. Recall that path prioritization allows us to explore likely vulnerable paths before the remaining ones. Thus, if path prioritization works, we would expect the vulnerabilities to be found mostly in the paths which XFUZZ explores first. We thus systematically count the number of vulnerabilities found in the first 10 paths explored by XFUZZ. The results are summarized in Table 8, where column “Top10” shows the number of vulnerabilities detected in the first 10 paths explored. The results show that XFUZZ finds a total of 152 (out of 172) reentrancy vulnerabilities in the first 10 explored paths. In particular, the number of vulnerabilities found in the first 10 explored paths by XFUZZ is almost three times as many as that by SFUZZ. Similarly, XFUZZ also finds 32 (out of 33) delegatecall vulnerabilities in the first 10 explored paths. The results thus clearly suggest that path prioritization allows us to focus on relevant paths effectively, which has practical consequences for fuzzing large contracts.

TABLE 8: The number of vulnerabilities found in the top prioritized paths; the vulnerable paths found by the two tools are counted respectively.
Found by  Vul           Total  Top10  Other
xFuzz     Reentrancy    172    152    20
sFuzz     Reentrancy    59     57     2
xFuzz     Delegatecall  33     32     1
sFuzz     Delegatecall  19     19     0

**Answer to RQ2:** The ML model enables us to significantly reduce the fuzzing time on likely benign contracts while missing almost no vulnerabilities. Furthermore, most vulnerabilities are detected efficiently through our path prioritization. Overall, XFUZZ finds twice as many reentrancy or delegatecall vulnerabilities as SFUZZ.
**6.4** **RQ3: Detection Efficiency**

Next, we evaluate the efficiency of our approach. We record the time taken for each step during fuzzing and summarize the results in Table 9. To eliminate randomness during fuzzing, we replay our experiments five times and report the averaged results. In this table, “MPT” means model prediction time; “ST” means search time for vulnerable paths during fuzzing; “DT” means detection time for CLAIRVOYANCE and fuzzing time for the fuzzers; “N.A.” means that the tool has no such step in fuzzing or the vulnerability is currently not supported by it, and thus the time is not recorded.

TABLE 9: The time cost of each step in the fuzzing procedures.
                          sFuzz      C.V.    xFuzz
MPT (min)   Reentrancy    N.A.       N.A.    630.6
            Delegatecall  N.A.       N.A.    630.6
            Tx-origin     N.A.       N.A.    630.6
ST (min)    Reentrancy    21,930.0   N.A.    3,621.0
            Delegatecall  22,131.0   N.A.    3,678.0
            Tx-origin     N.A.       N.A.    3,683.0
DT (min)    Reentrancy    54.1       246.2   86.6
            Delegatecall  2.8        N.A.    4.2
            Tx-origin     N.A.       N.A.    2.9
Total (min) Reentrancy    21,984.1   246.2   4,338.2
            Delegatecall  22,133.8   N.A.    4,312.8
            Tx-origin     N.A.       N.A.    4,316.5

The efficiency of our method (i.e., reducing the search space) is evidenced by the results, which show that XFUZZ is considerably faster than SFUZZ, saving 80% of the time. The main reason for the saving is the reduced search time (i.e., an 80% reduction). We also observe that XFUZZ is slightly slower than SFUZZ in terms of the effective fuzzing time, i.e., an additional 32.5 (86.6-54.1) minutes is used for fuzzing cross-contract vulnerabilities. This is expected, as the number of paths is much larger in the presence of more than 2 interacting contracts (even after the reduction thanks to the ML model and path prioritization). Note that CLAIRVOYANCE is faster than all other tools because it is a static detector that does not perform runtime execution of contracts.

**Answer to RQ3:** Owing to the reduced search space of suspicious functions, the guided fuzzer XFUZZ saves over 80% of the search time and reports more vulnerabilities than SFUZZ while using less than 20% of the time.

**6.5** **RQ4: Real-world Case Studies**

In this section, we present 2 typical vulnerabilities reported by XFUZZ to qualitatively show why XFUZZ works. In general, the ML model and path prioritization help XFUZZ find vulnerabilities in three ways, i.e., (1) locating vulnerable functions, (2) identifying paths from internal calls and (3) identifying feasible paths from external calls.

**Real-world Case 1:** XFUZZ is enhanced with path prioritization, which enables it to focus on vulnerabilities related to internal calls. In Figure 11[2], the modifier internal limits access to internal member functions only. The attacker can, however, steal ethers via the path buyOne → buyInternal. By applying XFUZZ, the vulnerability is identified in 0.05 seconds and the vulnerable path is also efficiently exposed.

1  function buyOne(address _exchange, uint256 _value, bytes _data) payable public
2  {
3    ...
4    buyInternal(_exchange, _value, _data);
5  }
6  function buyInternal(address _exc, uint256 _value, bytes _data) internal
7  {
8    ...
9    require(_exc.call.value(_value)(_data));
10   balances[msg.sender] = balances[msg.sender].sub(_value);
11 }
Fig. 11: A real-world reentrancy vulnerability found by XFUZZ, in which the vulnerable path relies on internal calls.

**Real-world Case 2:** The path prioritization also enables XFUZZ to find cross-contract vulnerabilities efficiently. For example, a real-world cross-contract vulnerability[3] is shown in Figure 12. This contract is used for auditing transactions in the real world and involves over 2,000 dollars.

1  contract SolidStamp{
2    function audContract(address _auditor) public onlyRegister
3    {
4      ...
5      _auditor.transfer(reward.sub(commissionKept));
6    }
7  }
8  contract SolidStampRegister{
9    address public CSolidStamp;
10   function registerAudit(bytes32 _codeHash) public
11   {
12     ...
13     SolidStamp(CSolidStamp).audContract(msg.sender);
14   }
15 }
Fig. 12: A cross-contract vulnerability found by XFUZZ. This contract is used in auditing transactions in the real world.

2. deployed at 0x0695B9EA62C647E7621C84D12EFC9F2E0CDF5F72
3. deployed at 0x165CFB9CCF8B185E03205AB4118EA6AFBDBA9203
In this example, function registerAudit has a cross-contract call to a public address CSolidStamp at line 13, which intends to forward the call to function audContract. While audContract is only allowed to be accessed by registered callers, as limited by the modifier onlyRegister, this restriction can be bypassed by the cross-contract call registerAudit → audContract. Eventually, an attacker would be able to steal the ethers in seconds.

**Real-world Case 3:** During our investigation of the experiment results, we gained the insight that XFUZZ can be further improved in terms of handling complex path conditions. Complex path conditions often lead to prolonged fuzzing time or block penetration altogether. We identified a total of 3 cross-contract and 24 non-cross-contract vulnerabilities that are missed for this reason. Two such complex condition examples (from two real-world false negatives of XFUZZ) are shown in Figure 13. Function calls, values, variables and arrays are involved in the conditions. These conditions are difficult to satisfy for XFUZZ, and for fuzzers in general (e.g., SFUZZ failed to penetrate these paths too). This problem can potentially be addressed by integrating XFUZZ with a theorem prover such as Z3 [51], which would be tasked with solving these path conditions. That is, a hybrid fuzzing approach that integrates symbolic execution in a lightweight manner is likely to further improve XFUZZ.

1  if ((random()%2==1) && (msg.value == 1 ether) && (!locked))
2  // at 0x11F4306f9812B80E75C1411C1cf296b04917b2f0
3
4  require(msg.value == 0 || (_amount == msg.value && etherTokens[fromToken]));
5  // at 0x1a5f170802824e44181b6727e5447950880187ab
Fig. 13: Complex path conditions involving multiple variables and values.

**Answer to RQ4:** With the help of model predictions and path prioritization, XFUZZ is capable of rapidly locating vulnerabilities in real-world contracts. The main reason for false negatives is complex path conditions, which could potentially be addressed by integrating hybrid fuzzing into XFUZZ.

#### 7 RELATED WORK

In this section, we discuss the works that are most relevant to ours.

**Program analysis.** We draw valuable development experience and domain-specific knowledge from existing work [8], [10], [3], [4], [5]. Among them, SLITHER [10], OYENTE [8] and Atzei et al. [5] provide a transparent overview of smart contract detection and enhance our understanding of vulnerabilities. Chen et al. [3] and Durieux et al. [4] offer evaluations of the state of the art, which helps us find the limitations of existing tools.

**Cross-contract vulnerability.** Our study is closely related to previous works focusing on interactions between multiple contracts. Zhou et al. [52] present work to analyze the relevance between smart contract files, which inspires us to focus on cross-contract interactions. He et al. [24] report that existing tools fail to exercise functions that can only execute at deeper states. Xue et al. [19] studied the cross-contract reentrancy vulnerability; they propose to construct an ICFG (combining CFGs with call graphs) and then track vulnerabilities by taint analysis.

**Smart contract testing.** Our study is also relevant to previous fuzzing work on smart contracts. Testing plays an important role in smart contract security: Zou et al. [7] report that over 85% of developers intend to do heavy testing when programming. The work of Jiang et al. [15] makes an early attempt to fuzz smart contracts; CONTRACTFUZZER instruments the Ethereum virtual machine and then collects execution logs for further analysis. Wüstholz et al. [21] present a guided fuzzer to better mutate inputs. A similar method is implemented by He et al. [24], who propose to learn fuzzing strategies from the inputs generated by a symbolic expert. The above two methods inspire us to leverage a guide to reduce the search space.
Nguyen et al. [14] implement a user-friendly AFL fuzzing tool for smart contracts, based on which we build our fuzzing framework. Different from these existing works, our work places a special focus on proposing a novel ML-guided method for fuzzing cross-contract vulnerabilities, which is highly important but largely untouched by existing work. Additionally, our comprehensive evaluation demonstrates that our proposed technique indeed outperforms the state of the art in detecting cross-contract vulnerabilities.

**Machine learning practice.** This work is also inspired by previous work [53], [54], [55], which proposes learning behavior automata to facilitate vulnerability detection. Zhuang et al. [56] propose to build graph networks on smart contracts to extend the understanding of malicious attacks; their work inspires us to introduce machine learning methods for detection. We also improve our model selection, inspired by the work of Liu et al. [39], whose algorithm helps us select the best models with satisfactory recall and precision on a highly imbalanced dataset. Yan et al. [55] have proposed a method to mimic the cognitive process of human experts; their work inspires us to find the consensus of vulnerability evaluators to better train the machine learning models.

**Smart contract security to society.** Smart contracts have drawn a number of security concerns since they came into being. As pointed out by Zou et al. [7], over 75% of developers agree that smart contract software has a much higher security requirement than traditional software. According to [7], the reasons behind this requirement are: 1) the frequent operations on sensitive information (e.g., digital currencies, tokens); 2) transactions are irreversible; 3) the deployed code cannot be modified. Considering the close connection between smart contracts and financial activities, the security of smart contracts largely affects the stability of society.

#### 8 CONCLUSION

In this paper, we propose XFUZZ, a novel machine learning guided fuzzing framework for smart contracts, with a special focus on cross-contract vulnerabilities. We address two key challenges in its development: reducing the search space of fuzzing, and realizing cross-contract fuzzing. The experiments demonstrate that XFUZZ is much faster and more effective than existing fuzzers and detectors. In the future, we will extend our framework with more static approaches to support more vulnerabilities.

#### REFERENCES

[1] V. K. DAS, “Top blockchain platforms of 2020,” https://www.blockchain-council.org/blockchain/topblockchainplatformsof2020thateveryblockchainenthusiastmustknow/, 2020, online; accessed September 2020.
[2] Ethereum, “Ethereum daily transaction chart,” https://etherscan.io/chart/tx, 2017, online; accessed 29 January 2017.
[3] H. Chen, M. Pendleton, L. Njilla, and S. Xu, “A survey on ethereum systems security: Vulnerabilities, attacks, and defenses,” _ACM Computing Surveys (CSUR)_, 2020.
[4] T. Durieux, J. F. Ferreira, R. Abreu, and P.
Cruz, “Empirical review of automated analysis tools on 47,587 ethereum smart contracts,” in _Proceedings of the ACM/IEEE 42nd ICSE_, 2020, pp. 530–541.
[5] N. Atzei, M. Bartoletti, and T. Cimoli, “A survey of attacks on ethereum smart contracts (sok),” in _International Conference on Principles of Security and Trust_. Springer, 2017, pp. 164–186.
[6] O. G. Güçlütürk, “The dao hack explained: Unfortunate take-off of smart contracts,” https://medium.com/@ogucluturk/the-dao-hack-explained-unfortunate-take-off-of-smart-contracts-2bd8c8db3562, 2018, online; accessed 22 January 2018.
[7] W. Zou, D. Lo, P. S. Kochhar, X.-B. D. Le, X. Xia, Y. Feng, Z. Chen, and B. Xu, “Smart contract development: Challenges and opportunities,” _IEEE Transactions on Software Engineering_, vol. 47, no. 10, pp. 2084–2106, 2019.
[8] L. Luu, D.-H. Chu, H. Olickel, P. Saxena, and A. Hobor, “Making smart contracts smarter,” in _Proceedings of the 2016 ACM SIGSAC CCS_, 2016, pp. 254–269.
[9] P. Tsankov, A. Dan, D. Drachsler-Cohen, A. Gervais, F. Buenzli, and M. Vechev, “Securify: Practical security analysis of smart contracts,” in _Proceedings of the 2018 ACM SIGSAC Conference on Computer and Communications Security_, 2018, pp. 67–82.
[10] J. Feist, G. Grieco, and A. Groce, “Slither: a static analysis framework for smart contracts,” in _2019 IEEE/ACM 2nd International Workshop on Emerging Trends in Software Engineering for Blockchain (WETSEB)_, 2019, pp. 8–15.
[11] Protofire, “Solhint,” https://github.com/protofire/solhint, 2018, online; accessed September 2018.
[12] S. Kalra, S. Goel, M. Dhawan, and S. Sharma, “Zeus: Analyzing safety of smart contracts,” in _NDSS_, 2018.
[13] S. Tikhomirov, E. Voskresenskaya, I. Ivanitskiy, R. Takhaviev, E. Marchenko, and Y. Alexandrov, “Smartcheck: Static analysis of ethereum smart contracts,” in _WETSEB_, 2018, pp. 9–16.
[14] T. D. Nguyen, L. H. Pham, J. Sun, Y. Lin, and Q. T. Minh, “Sfuzz: An efficient adaptive fuzzer for solidity smart contracts,” in _Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering_, ser. ICSE ’20. New York, NY, USA, 2020, pp. 778–788.
[15] B. Jiang, Y. Liu, and W. Chan, “Contractfuzzer: Fuzzing smart contracts for vulnerability detection,” in _2018 33rd IEEE/ACM International Conference on Automated Software Engineering (ASE)_. IEEE, 2018, pp. 259–269.
[16] G. Grieco, W. Song, A. Cygan, J. Feist, and A. Groce, “Echidna: effective, usable, and fast fuzzing for smart contracts,” in _Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis_, 2020, pp. 557–560.
[17] Q. Zhang, Y. Wang, J. Li, and S. Ma, “Ethploit: From fuzzing to efficient exploit generation against smart contracts,” in _2020 IEEE 27th SANER_. IEEE, 2020, pp. 116–126.
[18] J. Gao, H. Liu, Y. Li, C. Liu, Z. Yang, Q. Li, Z. Guan, and Z. Chen, “Towards automated testing of blockchain-based decentralized applications,” in _IEEE/ACM 27th ICPC_, 2019, pp. 294–299.
[19] X. Yinxing, M. Mingliang, L. Yun, S. Yulei, Y. Jiaming, and P. Tianyong, “Cross-contract static analysis for detecting practical reentrancy vulnerabilities in smart contracts,” in _2020 35th IEEE/ACM International Conference on Automated Software Engineering (ASE)_, 2020.
[20] G. A. Oliva, A. E. Hassan, and Z. M. J.
Jiang, “An exploratory study of smart contracts in the ethereum blockchain platform,” _Empirical Software Engineering_, pp. 1–41, 2020.
[21] V. Wüstholz and M. Christakis, “Harvey: A greybox fuzzer for smart contracts,” in _Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering_, 2020, pp. 1398–1409.
[22] X. Du, B. Chen, Y. Li, J. Guo, Y. Zhou, Y. Liu, and Y. Jiang, “Leopard: Identifying vulnerable code for vulnerability assessment through program metrics,” in _2019 IEEE/ACM 41st International Conference on Software Engineering (ICSE)_. IEEE, 2019, pp. 60–71.
[23] P. Godefroid, H. Peleg, and R. Singh, “Learn&fuzz: Machine learning for input fuzzing,” in _2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE)_. IEEE, 2017, pp. 50–59.
[24] J. He, M. Balunović, N. Ambroladze, P. Tsankov, and M. Vechev, “Learning to fuzz from symbolic execution with application to smart contracts,” in _Proceedings of the 2019 ACM SIGSAC Conference on Computer and Communications Security_, 2019, pp. 531–548.
[25] S. T. Help, “7 principles of software testing: Defect clustering and pareto principle,” https://www.softwaretestinghelp.com/7-principles-of-software-testing/, accessed March, 2021.
[26] A. Ghaleb and K. Pattabiraman, “How effective are smart contract analysis tools? evaluating smart contract static analysis tools using bug injection,” in _Proceedings of the 29th ACM SIGSOFT International Symposium on Software Testing and Analysis_, 2020, pp. 415–427.
[27] M. Ren, Z. Yin, F. Ma, Z. Xu, Y. Jiang, C. Sun, H. Li, and Y. Cai, “Empirical evaluation of smart contract testing: what is the best choice?” in _Proceedings of the 30th ACM SIGSOFT International Symposium on Software Testing and Analysis_, 2021, pp. 566–579.
[28] Y. Zhuang, Z. Liu, P. Qian, Q. Liu, X. Wang, and Q. He, “Smart contract vulnerability detection using graph neural network,” in _IJCAI_, 2020, pp. 3283–3290.
[29] T. Mikolov, K. Chen, G. Corrado, and J. Dean, “Efficient estimation of word representations in vector space,” arXiv preprint arXiv:1301.3781, 2013.
[30] M. Ren, Z. Yin, F. Ma, Z. Xu, Y. Jiang, C. Sun, H. Li, and Y. Cai, “Empirical evaluation of smart contract testing: What is the best choice?” 2021.
[31] J. F. Ferreira, P. Cruz, T. Durieux, and R. Abreu, “Smartbugs: A framework to analyze solidity smart contracts,” arXiv preprint arXiv:2007.04771, 2020.
[32] xFuzz, “Machine learning guided cross-contract fuzzing,” https://anonymous.4open.science/r/xFuzzforReview-ICSE, 2020, online; accessed September 2020.
[33] ethervm, “Ethereum virtual machine opcodes,” https://ethervm.io/, 2019, online; accessed September 2019.
[34] T. D. Nguyen, L. H. Pham, and J. Sun, “sguard: Towards fixing vulnerable smart contracts automatically,” arXiv preprint arXiv:2101.01917, 2021.
[35] Protofire, “Decentralized application security project,” https://dasp.co/, accessed September, 2018.
[36] N. Stephens, J. Grosen, C. Salls, A. Dutcher, R. Wang, J. Corbetta, Y. Shoshitaishvili, C. Kruegel, and G. Vigna, “Driller: Augmenting fuzzing through selective symbolic execution,” in _NDSS_, vol.
16, 2016.
[37] W. Drewry and T. Ormandy, “Flayer: Exposing application internals,” 2007.
[38] T. Chen and C. Guestrin, “Xgboost: A scalable tree boosting system,” in _Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining_, 2016, pp. 785–794.
[39] X.-Y. Liu, J. Wu, and Z.-H. Zhou, “Exploratory undersampling for class-imbalance learning,” _IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics)_, pp. 539–550, 2008.
[40] S. C. Security, “Smart contract weakness classification registry,” https://github.com/SmartContractSecurity/SWC-registry, 2019, online; accessed September 2019.
[41] U. Alon, M. Zilberstein, O. Levy, and E. Yahav, “code2vec: Learning distributed representations of code,” _Proceedings of the ACM on Programming Languages_, 2019.
[42] Z. Li, D. Zou, J. Tang, Z. Zhang, M. Sun, and H. Jin, “A comparative study of deep learning-based vulnerability detection system,” _IEEE Access_, pp. 103184–103197, 2019.
[43] G. Grieco, G. L. Grinblat, L. Uzal, S. Rawat, J. Feist, and L. Mounier, “Toward large-scale vulnerability discovery using machine learning,” in _Proceedings of the 6th ACM Conference on Data and Application Security and Privacy_, 2016, pp. 85–96.
[44] T. Kamiya, S. Kusumoto, and K. Inoue, “Ccfinder: a multilinguistic token-based code clone detection system for large scale source code,” _IEEE Transactions on Software Engineering_, pp. 654–670, 2002.
[45] J. L. Leevy, T. M. Khoshgoftaar, R. A. Bauder, and N. Seliya, “A survey on addressing high-class imbalance in big data,” _Journal of Big Data_, p. 42, 2018.
[46] N. V. Chawla, K. W. Bowyer, L. O. Hall, and W. P. Kegelmeyer, “Smote: synthetic minority over-sampling technique,” _Journal of Artificial Intelligence Research_, vol. 16, pp. 321–357, 2002.
[47] Z. Huang, W. Xu, and K. Yu, “Bidirectional lstm-crf models for sequence tagging,” arXiv preprint arXiv:1508.01991, 2015.
[48] Z. Tian, J. Tian, Z. Wang, Y. Chen, H. Xia, and L. Chen, “Landscape estimation of solidity version usage on ethereum via version identification,” _International Journal of Intelligent Systems_, vol. 37, no. 1, pp. 450–477, 2022.
[49] S. Contract, “Function selector,” https://solidity-by-example.org/function-selector/, accessed March, 2021.
[50] Dedaub, “Security technology for smart contracts,” https://contract-library.com/, 2020, online; accessed 29 January 2020.
[51] L. De Moura and N. Bjørner, “Z3: An efficient smt solver,” in _International Conference on Tools and Algorithms for the Construction and Analysis of Systems_. Springer, 2008, pp. 337–340.
[52] E. Zhou, S. Hua, B. Pi, J. Sun, Y. Nomura, K. Yamashita, and H. Kurihara, “Security assurance for smart contract,” in _2018 9th IFIP International Conference on New Technologies, Mobility and Security (NTMS)_. IEEE, 2018, pp. 1–5.
[53] H. Xiao, J. Sun, Y. Liu, S.-W. Lin, and C. Sun, “Tzuyu: Learning stateful typestates,” in _2013 28th IEEE/ACM ASE_. IEEE, 2013, pp. 432–442.
[54] Y. Xue, J. Wang, Y. Liu, H. Xiao, J. Sun, and M. Chandramohan, “Detection and classification of malicious javascript via attack behavior modelling,” in _Proceedings of the 2015 ISSTA_, 2015, pp. 48–59.
[55] G. Yan, J. Lu, Z. Shu, and Y.
Kucuk, “Exploitmeter: Combining fuzzing with machine learning for automated evaluation of software exploitability,” in 2017 IEEE Symposium on Privacy-Aware _Computing (PAC)._ IEEE, 2017, pp. 164–175. [56] Y. Zhuang, Z. Liu, P. Qian, Q. Liu, X. Wang, and Q. He, “Smart contract vulnerability detection using graph neural network.” International Joint Conferences on Artificial Intelligence Organization, 2020, pp. 3283–3290. -----
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2111.12423, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/2111.12423" }
2,021
[ "JournalArticle" ]
true
2021-11-24T00:00:00
[ { "paperId": "63aa26c2dff6234e4e57da79484f5cca355c1ae3", "title": "Smart Contract Development: Challenges and Opportunities" }, { "paperId": "5a534cee9dec41690a212fd6c5263ac325cb4cd7", "title": "Landscape estimation of solidity version usage on Ethereum via version identification" }, { "paperId": "f6517a7ff05c418998a5967cb65e2014b6fb5686", "title": "Empirical evaluation of smart contract testing: what is the best choice?" }, { "paperId": "c46195a238ab2d890f327264daab5dde5254a194", "title": "SGUARD: Towards Fixing Vulnerable Smart Contracts Automatically" }, { "paperId": "c97ca434efab51fdf1855ee719ad29d020f81b86", "title": "Cross-Contract Static Analysis for Detecting Practical Reentrancy Vulnerabilities in Smart Contracts" }, { "paperId": "4dafc98d6bc5dd9249b5ef64f2671470e9c8b053", "title": "Echidna: effective, usable, and fast fuzzing for smart contracts" }, { "paperId": "5d2ba840ab0b3f282134d0820e7a8015af3c6612", "title": "SmartBugs: A Framework to Analyze Solidity Smart Contracts" }, { "paperId": "81861b6615015a8af45b4119a1144d9dd5bef4a7", "title": "Smart Contract Vulnerability Detection using Graph Neural Network" }, { "paperId": "f94605eeea3c37882e84a471c8133157ed4da49c", "title": "How effective are smart contract analysis tools? evaluating smart contract static analysis tools using bug injection" }, { "paperId": "f9233720b7a26b7324a1cc1ecbadf83c529efd73", "title": "sFuzz: An Efficient Adaptive Fuzzer for Solidity Smart Contracts" }, { "paperId": "62b2f8ac13c6880fb6fa45898a346d2051026f7b", "title": "An exploratory study of smart contracts in the Ethereum blockchain platform" }, { "paperId": "ceb4dff06c400c119827f3234bac811e228cb008", "title": "EthPloit: From Fuzzing to Efficient Exploit Generation against Smart Contracts" }, { "paperId": "bdef2c6323ba6e02ac8ff265d3652bc5027b3d97", "title": "Learning to Fuzz from Symbolic Execution with Application to Smart Contracts" }, { "paperId": "07e2c2eae0a754430e6d0d9d69ef0e54b54bb22c", "title": "Empirical Review of Automated Analysis Tools on 47,587 Ethereum Smart Contracts" }, { "paperId": "0d33257fd1481a92380396c7882ccac87c294e78", "title": "A Survey on Ethereum Systems Security" }, { "paperId": "8e8cc2d59c8ce7b1f63c0801c945670f9555bdb8", "title": "Harvey: a greybox fuzzer for smart contracts" }, { "paperId": "81aec3c1c0eb0b842216fe4d22077d688f8c64e6", "title": "Slither: A Static Analysis Framework for Smart Contracts" }, { "paperId": "87acd0a8623ee52080c7c0065b10db6b0512ff2f", "title": "Towards Automated Testing of Blockchain-Based Decentralized Applications" }, { "paperId": "b9c86aebba3b0542dafe51db7398ae5a9bfa1c5b", "title": "LEOPARD: Identifying Vulnerable Code for Vulnerability Assessment Through Program Metrics" }, { "paperId": "d99e88d3c1821857ca6945470698351925f9737f", "title": "A survey on addressing high-class imbalance in big data" }, { "paperId": "3785839cb695da8a94602a5e1c067ce1fa3123ec", "title": "ContractFuzzer: Fuzzing Smart Contracts for Vulnerability Detection" }, { "paperId": "272850f06a2aa8c0831f68cf832412852aab5dc8", "title": "Securify: Practical Security Analysis of Smart Contracts" }, { "paperId": "8f22bf55536b50145bb117c97e13ea4b32a5e8fa", "title": "SmartCheck: Static Analysis of Ethereum Smart Contracts" }, { "paperId": "2403c68b7805342fc2c7dc6815bc29e189fb495a", "title": "code2vec: learning distributed representations of code" }, { "paperId": "4b99fbe18fe4a8cd1d797ed073fb92fb71bd2dcf", "title": "Security Assurance for Smart Contract" }, { "paperId": "c54566412a7cd13b55fc43c748f6eece66ed7721", "title": 
"ExploitMeter: Combining Fuzzing with Machine Learning for Automated Evaluation of Software Exploitability" }, { "paperId": "aec843c0f38aff6c7901391a75ec10114a3d60f8", "title": "A Survey of Attacks on Ethereum Smart Contracts (SoK)" }, { "paperId": "5bd13e6313008e1555e530dda6d84c5004aa09ed", "title": "Learn&Fuzz: Machine learning for input fuzzing" }, { "paperId": "7968129a609364598baefbc35249400959406252", "title": "Making Smart Contracts Smarter" }, { "paperId": "26bc9195c6343e4d7f434dd65b4ad67efe2be27a", "title": "XGBoost: A Scalable Tree Boosting System" }, { "paperId": "77d02ff7f3b8897e58663af52ccdbd48e81b068b", "title": "Toward Large-Scale Vulnerability Discovery using Machine Learning" }, { "paperId": "af88ce6116c2cd2927a4198745e99e5465173783", "title": "Bidirectional LSTM-CRF Models for Sequence Tagging" }, { "paperId": "ae54bcdb0b18e392b77356308b13b71deaa2440d", "title": "Detection and classification of malicious JavaScript via attack behavior modelling" }, { "paperId": "b02ba8c4b8522a34c974b2bae1c2b435fa7b8a46", "title": "TzuYu: Learning stateful typestates" }, { "paperId": "f6b51c8753a871dc94ff32152c00c01e94f90f09", "title": "Efficient Estimation of Word Representations in Vector Space" }, { "paperId": "3960dda299e0f8615a7db675b8e6905b375ecf8a", "title": "Z3: An Efficient SMT Solver" }, { "paperId": "37a8f6570e0c475f790fd08c3ca3e7cc3027c5fd", "title": "Flayer: Exposing Application Internals" }, { "paperId": "444eb20f2fed03cb50b58855c7b30bc33a5036da", "title": "Exploratory Under-Sampling for Class-Imbalance Learning" }, { "paperId": "98e810ed098a651e0ba8cbb63d2d926d4eebdf9b", "title": "CCFinder: A Multilinguistic Token-Based Code Clone Detection System for Large Scale Source Code" }, { "paperId": null, "title": "“7 principles of software testing: Defect clustering and pareto principle,”" }, { "paperId": null, "title": "Machine learning guided cross-contract fuzzing" }, { "paperId": null, "title": "“Top blockchain platforms of 2020,”" }, { "paperId": "c66a052f3849d6f613dbf31571b2f59b03dea49e", "title": "A Comparative Study of Deep Learning-Based Vulnerability Detection System" }, { "paperId": null, "title": "Ethereum virtual machine opcodes" }, { "paperId": null, "title": "“Solhint,”" }, { "paperId": "f3f927adf4aac1146c9587fa646864a040c94fa6", "title": "ZEUS: Analyzing Safety of Smart Contracts" }, { "paperId": null, "title": "“Decentralized application security project,”" }, { "paperId": null, "title": "“The DAO hack explained: Unfortunate take-off of smart contracts,”" }, { "paperId": null, "title": "“Ethereum daily transaction chart,”" }, { "paperId": "f049751103f13d1ce6080418813e2a26820713e1", "title": "Driller: Augmenting Fuzzing Through Selective Symbolic Execution" }, { "paperId": "8cb44f06586f609a29d9b496cc752ec01475dffe", "title": "SMOTE: Synthetic Minority Over-sampling Technique" }, { "paperId": null, "title": "RQ2. To what extent the machine learning models and the path prioritization contribute to reducing the search space?" }, { "paperId": null, "title": "“Function selector,”" }, { "paperId": null, "title": "RQ1. How effective is X F UZZ in detecting cross-contract vulnerabilities?" }, { "paperId": null, "title": "RQ3. What are the overhead of X F UZZ , compared to the vanilla S F UZZ ?" 
}, { "paperId": null, "title": "How to train the machine learning model and achieve satisfactory precision and recall" }, { "paperId": null, "title": "How to combine trained model with fuzzer to reduce search space towards efficient fuzzing" }, { "paperId": null, "title": "Software Testing Help" }, { "paperId": null, "title": "Smart Contract Weakness Classification Registry" } ]
21,397
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01e0ad31ba9b327e7a16cae133ddc194814ea430
[ "Computer Science" ]
0.907951
LDV: A Lightweight DAG-Based Blockchain for Vehicular Social Networks
01e0ad31ba9b327e7a16cae133ddc194814ea430
IEEE Transactions on Vehicular Technology
[ { "authorId": "1642917412", "name": "Wenhui Yang" }, { "authorId": "19208996", "name": "Xiaohai Dai" }, { "authorId": "2051268320", "name": "Jiang Xiao" }, { "authorId": "145914256", "name": "Hai Jin" } ]
{ "alternate_issns": null, "alternate_names": [ "IEEE Trans Veh Technol" ], "alternate_urls": [ "https://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=25" ], "id": "983b0731-eddf-4f05-9c9b-81059a9f9c51", "issn": "0018-9545", "name": "IEEE Transactions on Vehicular Technology", "type": "journal", "url": "http://ieeexplore.ieee.org/servlet/opac?punumber=25" }
As social networks are integrated into Vehicular Ad Hoc Networks (VANETs), the emerging Vehicular Social Networks (VSNs) have gained massive interest. However, the security and privacy of the data generated by various applications in VSNs is a great challenge, which blocks the further development of VSNs. The emerging blockchain technology seems to be a good catalyst for the development of VSNs with its high security and irreversibility, and it can also serve as a tamper-proof data management tool for the rapidly generated data of VSNs. However, full duplicates of the blockchain data need to be stored in each node to ensure security, which is unacceptable for vehicles with limited resources. In this paper, to address the above storage challenge, a lightweight Directed Acyclic Graph (DAG) based blockchain (LDV) is proposed for resource-constrained VSNs. Specifically, based on an in-depth analysis of VSNs, we propose a social-based data reduction approach, in which each node only stores the data of interest within its topic groups and ignores the irrelevant data. To avoid the huge storage cost within large-scale groups with large amounts of data, we further present a historical data pruning method within a group, which meets the storage requirement by reducing the number of duplicates stored in each node. Experimental results show that LDV can save 97.13% of storage space and has good scalability.
## LDV: A Lightweight DAG-Based Blockchain for Vehicular Social Networks

### Wenhui Yang, Student Member, IEEE, Xiaohai Dai, Student Member, IEEE, Jiang Xiao, Member, IEEE, and Hai Jin, Fellow, IEEE

**_Abstract—As social networks are integrated into Vehicular Ad Hoc Networks (VANETs), the emerging Vehicular Social Networks (VSNs) have gained massive interest. However, the security and privacy of the data generated by various applications in VSNs is a great challenge, which blocks the further development of VSNs. The emerging blockchain technology seems to be a good catalyst for the development of VSNs with its high security and irreversibility, and it can also serve as a tamper-proof data management tool for the rapidly generated data of VSNs. However, full duplicates of the blockchain data need to be stored in each node to ensure security, which is unacceptable for vehicles with limited resources. In this paper, to address the above storage challenge, a lightweight Directed Acyclic Graph (DAG) based blockchain (LDV) is proposed for resource-constrained VSNs. Specifically, based on an in-depth analysis of VSNs, we propose a social-based data reduction approach, in which each node only stores the data of interest within its topic groups and ignores the irrelevant data. To avoid the huge storage cost within large-scale groups with large amounts of data, we further present a historical data pruning method within a group, which meets the storage requirement by reducing the number of duplicates stored in each node. Experimental results show that LDV can save 97.13% of storage space and has good scalability._**

**_Index Terms—Vehicular social networks, blockchain, data reduction._**

I. INTRODUCTION

TODAY Vehicular Social Networks (VSNs) have attracted massive interest from both academia and industry thanks to the promise of advancing Vehicular Ad Hoc Networks (VANETs) with social networks. In particular, the distributed commuters (e.g., drivers, passengers, Road Side Units (RSUs), and vehicles) in VSNs with similar routines or social behaviours can group into virtual communities and transmit socially-aware data on roadways. By aggregating the social characteristics among the commuters of mutual interests, VSNs have fostered a myriad of prospective applications.

Manuscript received September 1, 2019; revised November 24, 2019; accepted December 18, 2019. Date of publication January 8, 2020; date of current version June 18, 2020. This work was supported by the Technology Innovation Project of Hubei Province of China under Grant 2019AEA171, in part by the National Science Foundation of China under Grants 2018YFB1004805 and 61702203, and in part by Hubei Provincial Natural Science Foundations under Grant 2018CFB133. The review of this article was coordinated by Prof. H. Li. (Corresponding author: Jiang Xiao.)

The authors are with the National Engineering Research Center for Big Data Technology and System, Services Computing Technology and System Laboratory and the Cluster and Grid Computing Laboratory, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan 430074, China (e-mail: ywh@hust.edu.cn; daixh@hust.edu.cn; jiangxiao@hust.edu.cn; hjin@hust.edu.cn).

Digital Object Identifier 10.1109/TVT.2020.2963906
For example, drivers can incorporate the passengers' utility to develop real-time demand-supply recommendation systems [1], passengers can socialize with physically close users via music, photos, and video [2], RSUs can broadcast shared traffic conditions to vehicles in range for road safety and emergency warning [3], and the mobility patterns of vehicles along the same road segments can facilitate intelligent traffic control [4]. In return, these applications will generate huge amounts of data, including traffic information, social information, and private information such as routine locations or user preferences. The ever-growing volume and high variety of data require a novel data storage method for VSNs, which have limited capacity by nature [5].

Furthermore, there exist malicious commuters who disseminate false information to others. These attacks can manipulate and violate the VSNs data in a holistic environment, and the lack of secure data storage will result in misbehaving and discrepancies among vehicular commuters. For instance, a selfish driver may post false parking information in order to win a parking space for himself/herself [6], and multiple identities can be forged by malicious commuters to post false information misleading others into congested routes, namely a Sybil attack [7]. As a result, a critical design aspect of VSNs is to provide a scalable data storage scheme without compromising security. Unfortunately, recent work in VSNs primarily attempts to process the data and investigate the social characteristics, e.g., the small-world features investigated in [8] and the user behavior studied in [9]. All the aforementioned literature we examined has ignored the fundamental storage issue, thus failing to meet the desired data storage requirements in VSNs.

In this paper, we remedy these deficiencies by empowering VSNs with blockchain technology as the basis of data storage. Blockchain has shown its merits of distributed consensus-enabled irreversibility and cryptographic hashing algorithms since it originated from the well-known digital currency Bitcoin [10] in the financial industry. Inspired by this, the built-in tamper-resistant traits of blockchain can enable secure data storage in the distributed, holistic VSNs environment. Nevertheless, the security of blockchain relies on its highly redundant distributed ledger, i.e., each node ensures secure storage at the cost of maintaining a complete history of transactions linked by blocks. Taking Bitcoin as an example, the current ledger of blockchain data has exceeded 210 GB, and each full node in the Bitcoin network is required to store a full copy; the storage cost of the entire Bitcoin network becomes exorbitant with large amounts of data. A similar situation can be witnessed in VSNs. According to the statistics,[1] the number of vehicles in use globally had already reached around 1.2 billion in 2015, and the scale is likely to reach 2 billion or more by 2035.[2] The total amount of data generated by billions of vehicles is exceedingly enormous, which becomes worse when full copies of the data need to be stored on each vehicle. More seriously, the total amount of data will become even larger as the data generated by the various devices in VSNs grows rapidly. Such rapidly increasing data will lead to significant storage overhead for the resource-constrained commuters in VSNs.
It is non-trivial to apply a conventional blockchain to store socially-aware VSN data with a low storage overhead guarantee. It is therefore important and urgent to design a lightweight blockchain system for VSNs under strict security requirements. To this end, we present LDV, a Directed Acyclic Graph (DAG) based blockchain system that enables lightweight and secure data storage for resource-constrained VSNs. LDV adopts a DAG as the underlying VSN data structure; a DAG offers higher efficiency and scalability than conventional block-based designs by organizing the data directly in the form of transactions [11]. The key insight of LDV is that the storage burden can be relieved by storing data only within groups of commuters sharing common interests. To further reduce the storage overhead in large-scale groups, we decrease the number of duplicates and prune historical data of little usefulness to make room for useful real-time information. In more detail, the design of LDV is based on an in-depth analysis of the social characteristics of VSNs:

- In VSNs, commuters usually care about information of interest and pay little attention to irrelevant information that is useless to them.
- Expired historical data is of little value to real-time decision-making in a rapidly changing transportation scenario.

Therefore, a normal vehicular node only needs to store relevant and recent information, which greatly reduces the storage requirement. To evaluate the effect of the data reduction approach, we have implemented a prototype system and conducted several experiments. The experimental results show that 97.13% of storage space can be saved.

In summary, this paper makes three contributions:

- We conduct an in-depth analysis of the social relationships in VSNs. To the best of our knowledge, this is the first attempt to deeply combine social relationships in the design of a lightweight blockchain system for VSNs.
- We design a social-based data reduction approach that reduces the storage cost of the blockchain based on the social relationships of VSNs. To further reduce the storage cost within a single group, we propose a pruning method for historical data that exploits the real-time nature of VSNs.
- We implement the data reduction approaches in our prototype system and conduct experiments to evaluate their effect.

The remainder of this paper is structured as follows: we introduce the background and related work in Section II. The design of the lightweight DAG-based blockchain system is described in Section III. The evaluation and discussion of LDV are presented in Sections IV and V, respectively. Finally, the paper is concluded in Section VI.

II. BACKGROUND AND RELATED WORK

_A. Social Relationship in VSNs_

With the help of VANETs, data transmission between mobile vehicles is feasible. As shown in the lower layer of Fig. 1, VANETs comprise vehicles, RSUs, and the communication links between them; data exchange in VANETs relies on multiple hops across these components. Although data transmission through VANETs is convenient, the value and semantics of the transmitted data are extremely limited, bringing little improvement to the transportation network. Social networks, in contrast, enable information exchange with rich semantics in VSNs rather than simple data transmission.
In Online Social Networks (OSNs), people with common interests share information with each other. Vehicular networks gain a similar characteristic when integrated with OSNs: vehicles can share information relevant to their interests with others through VSNs, such as social information or traffic information about a particular road.

Fig. 1 depicts a common social scenario of VSNs integrated with OSNs. In more detail, the lower layer describes the physical communication links among vehicles through the ad hoc network. The virtual social relationships of vehicles are shown in the upper layer, in which several topic groups are formed according to the interests of different vehicles. In this scenario, people can subscribe to any topic they are interested in and join the topic group freely; drivers then easily receive information from their subscribed topics. For example, once a driver has subscribed to a topic about the traffic conditions on lane A, information about lane A is delivered to this driver in time unless he/she leaves the topic.

Although vehicular networks integrated with social characteristics can achieve rich semantics and valuable information exchange, the introduction of social features is also likely to deteriorate the security and privacy of VSNs, since the exposed social information can be analyzed; this needs to be tackled carefully.

_B. Blockchain Technology_

With the prosperity of Bitcoin [10] and other blockchain systems [12], [13], blockchain is believed to provide data storage with high security and good privacy protection in a distributed environment. By adopting a distributed consensus algorithm, blockchain nodes reach agreement on the stored data, so every node eventually stores the same data.

Fig. 1. Social relationship in VSNs.

Thanks to the consensus algorithm, secure cryptographic hash functions, and the Merkle tree, an attacker must control more than 50% of the computational power to tamper with data in the blockchain, which makes it resistant to attacks from malicious nodes. Moreover, all nodes interact with each other through addresses composed of alphanumeric characters, which provides good anonymity: the sender of a piece of data is represented only by an address that is not known to others, so data stored in the blockchain enjoys good privacy protection. As a result, blockchain can bring secure data storage to VSNs and alleviate the problems described above.

It seems that blockchain can solve the aforementioned problems of VSNs very well. However, current blockchain systems are extremely resource-intensive; the power used in mining each year amounts to tens of terawatt-hours [14]–[16], which is unacceptable for resource-constrained VSNs. In addition, popular blockchain systems such as Bitcoin and Ethereum mainly adopt a chain-based structure. As shown in Fig. 2(a), the chain-based design processes transactions and blocks sequentially, which results in poor throughput. Both Bitcoin and Ethereum have low throughput compared to Visa; for example, the average throughput of Bitcoin is estimated at 7 _Transactions Per Second (TPS)_ [17]. Obviously, the low throughput and high resource consumption of chain-based blockchains are unsuitable for VSNs with rapid data generation and constrained resources.

Recently, novel DAG-based blockchains have been proposed to improve the scalability of blockchain, such as IOTA [18], Byteball [19], and Nano [20].
Owing to their graph structure, transactions can be processed in parallel, unlike the sequential processing of chain-based blockchains, as shown in Fig. 2(b). In other words, a chain-based blockchain processes only one block at a time, while a DAG-based blockchain handles multiple blocks simultaneously. Moreover, because DAG-based blockchains avoid the massive useless computation of mining and thus have low resource requirements, they are better suited to resource-constrained VSNs. We therefore adopt a DAG-based blockchain in the following design to compensate for the shortcomings of VSNs. Unfortunately, the high throughput of such a blockchain further increases the storage cost [21], which conflicts with the resource-constrained devices in VSNs. As a result, a lightweight, high-throughput blockchain system needs to be designed for VSNs.

Fig. 2. The comparison of chain-based blockchain and DAG-based blockchain. (a) The structure of chain-based blockchain. (b) The structure of DAG-based blockchain.

_C. Related Work_

Various studies have focused on vehicular social networks and blockchain. We introduce these works from the perspectives of data management in VSNs and data reduction in blockchain, respectively.

_1) Data Management in VSNs:_ Most research on data management in VSNs focuses on data analysis, data processing, and data security, including privacy protection. Wang et al. [1] propose a real-time recommendation system for drivers and passengers that tries to satisfy their requirements and profit at the same time by analyzing the data generated by taxis. Meanwhile, some studies analyze the social characteristics of VSNs: the small-world features are studied in [8], and the user behavior of publishing information under the influence of the external environment is investigated in [9]. Data processing has also been studied in prior work. Yang et al. [22] propose a keyword extraction metric to improve the query performance of information in VSNs. Kong et al. [4] propose an approach for generating private-car mobility data from taxi datasets to make up for the lack of private-car data. Efficient range queries on encrypted data and secure queries with privacy protection are studied in [23] and [24], respectively. In terms of data privacy protection in VSNs, a dynamic group division algorithm [25] is presented to protect the location and trajectory privacy of vehicles in 5G-based VSNs. In [26], the authors address the location privacy issue in VSNs by obscuring the location of the original sender of information. Jiang et al. [27] propose an authentication scheme to protect the privacy of thin clients in a blockchain-based _Public Key Infrastructure (PKI)_. However, little of this literature considers the storage cost of the data generated in VSNs. As a complement, we design a lightweight blockchain system that stores VSN data with low storage overhead.

_2) Data Reduction in Blockchain:_ Many existing solutions to the security and privacy of VSNs require data encryption [22], which brings extra overhead. Blockchain instead takes advantage of cryptographic hashes to ensure data security and, with the assistance of anonymity, provides strong privacy protection for VSNs. However, due to the high storage overhead of blockchain, it is urgent to address its storage challenge.
Recently, several works have tried to alleviate the storage requirement of blockchain systems. Based on the observation that blocks of different ages require different security levels, Jia et al. [28] propose a duplicate-ratio mechanism that stores blocks at different replication ratios to achieve low storage cost. The authors argue that older blocks can be kept in fewer copies than newer blocks, because modifying an old block requires more computation. To avoid data loss for blocks with fewer duplicates, they present a node reliability verification method to ensure that old blocks are stored on reliable nodes. However, the approach introduces an extra chain to store reliability information, which increases the storage overhead, and it needs a master node to calculate reliability, which is impracticable in a P2P network. Xu et al. [29] address the storage problem by organizing several nodes into a _Consensus Unit_, but their approach relies on strong trust assumptions between the nodes in a _Consensus Unit_, which are difficult to satisfy in a hostile VSN environment. In [21], the authors present a jigsaw-like data reduction approach in which each node stores only the data relevant to itself and uses Merkle paths to verify the authenticity of transactions. Although this approach achieves low storage overhead by storing only a small amount of relevant data, it incurs additional communication cost when requesting extra data. Moreover, it only applies to blockchain systems with Merkle trees, such as Bitcoin, whose poor throughput scalability is unsuitable for VSNs with rapid data generation.

In summary, to the best of our knowledge, this is the first work to study the problem of high storage cost in DAG-based blockchain systems.

III. LDV DESIGN

In this section, we present the design of LDV. We first analyze the situation of VSNs in depth and derive several insights. Based on these insights, we give the design of the social-based data reduction approach. Then, to further reduce the storage overhead, we enhance the basic design by pruning the historical data within a topic group. Several challenges arise during the design, and we address each of them.

_A. In-Depth Analysis of VSNs_

In VSNs, thanks to the introduction of social networks, people with common interests can share data with each other and form virtual social relationships. For example, on the road, commuters can share information about traffic or entertainment with others through VSNs during their trips. By utilizing the information obtained, drivers learn the current traffic conditions on specific roads and make decisions about their optimal routes. To obtain information from commuters with common interests, drivers can participate in the specific topics they like, such as the traffic conditions of a specific road, while paying little attention to the traffic conditions of roads they do not pass. After joining a topic group, the members of the group can publish useful information to the group for the convenience of others, and meanwhile receive information from other members. In fact, drivers are more likely to communicate frequently with people of similar interests [5], which means they usually pay more attention to topics of interest and care less about irrelevant topics.
More generally, commuters are usually interested in topics about the roads between their location and destination, and are unlikely to receive or publish information about traffic on other roads. Besides, the trip routes of drivers are usually fixed, so the topics they focus on are regular, which means the social relationships are usually stable compared to the dynamic network topology.

_Insight 1: In terms of social relationships in VSNs, people usually focus on information about topics of interest and have little need for other data that is of no interest to them._

Social features in VSNs allow people to get real-time news on relevant topics in order to plan their next trip accurately. However, with the rapid generation of data in VSNs, it is hard to identify useful data among vast quantities of data. Due to the timeliness of traffic data, people are more inclined to choose the latest data, because old historical data often fails to provide useful information. For instance, in Waze (https://www.waze.com/), a reported incident is only valid for a while [30]. Besides, the limited resources of vehicles make it difficult to assist drivers in decision-making by analyzing historical data. Therefore, the older the data, the less value it can provide. For example, suppose congestion occurred on a certain road two hours ago due to a traffic accident; the value of this information may now be limited, as the traffic jam may have cleared. As a result, people tend to pay more attention to real-time traffic data.

_Insight 2: Old historical data usually contributes little to real-time decision-making compared to real-time data. Real-time information is usually more significant for drivers on the road._

_B. System Overview_

Fig. 3. Overview of the LDV design.

_1) Social-Based Data Reduction:_ Based on Insight 1, we propose a data reduction approach that reduces the storage cost of the DAG-based blockchain used in VSNs, as shown in Fig. 3. We adopt a DAG-based blockchain for VSNs because its high throughput suits the rapid generation of data. Unlike the block structure of conventional blockchains, a DAG-based blockchain uses transactions as the vertices of the graph without packing transactions into blocks. This fine-grained transaction structure improves the efficiency of the blockchain and facilitates the data management of VSNs. Each piece of data is included in a transaction; for simplicity, the terms transaction and data are used interchangeably in this paper. Since blockchain nodes in VSNs mainly care about data of interest and have no interest in irrelevant data, each node only needs to store relevant data (i.e., data on the topics it follows) to save storage space. Taking topic group I in Fig. 3 as an example, assume that the transactions numbered 1, 2, 4, 5, 7, 11, and 12 contain data for topic I; a node in topic group I then only needs to store these relevant transactions, while the other, irrelevant transactions are ignored to reduce the storage requirement. Meanwhile, each node can join multiple topic groups to receive information from different groups of interest, such as node c.
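To make the filtering rule concrete, the following minimal Python sketch shows a node that persists only transactions belonging to its subscribed topic groups. The `Transaction` and `Node` layouts, and the one-topic-per-transaction simplification, are illustrative assumptions of ours rather than the authors' implementation.

```python
# Minimal sketch of the social-based data reduction rule (Section III-B1):
# a node stores a transaction only if it belongs to a subscribed topic group.

from dataclasses import dataclass, field

@dataclass(frozen=True)
class Transaction:
    tx_id: int
    topic: str          # assumption: each transaction carries data of one topic
    payload: bytes

@dataclass
class Node:
    subscribed: set
    store: dict = field(default_factory=dict)  # local ledger: tx_id -> Transaction

    def on_receive(self, tx: Transaction) -> None:
        # Persist only transactions from topic groups of interest;
        # irrelevant transactions may still be verified and relayed.
        if tx.topic in self.subscribed:
            self.store[tx.tx_id] = tx

node_c = Node(subscribed={"topic-I", "topic-III"})  # a node may join several groups
for tx in (Transaction(1, "topic-I", b"jam on lane A"),
           Transaction(2, "topic-II", b"parking info"),
           Transaction(3, "topic-III", b"accident cleared")):
    node_c.on_receive(tx)

assert sorted(node_c.store) == [1, 3]   # the topic-II transaction was ignored
```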
_2) Generation and Broadcast of New Transactions:_ When a vehicle discovers valuable information about a certain topic, the driver can issue a transaction containing that information to the blockchain for the convenience of others. In LDV, the generation of a new transaction must satisfy a _Proof of Work (POW)_, which is effective in avoiding spam and resisting Sybil attacks. Specifically, the following cryptographic puzzle (Formula 1) must be fulfilled in the calculation of the POW:

$\mathrm{Hash}\langle \mathit{transaction}, \mathit{nonce} \rangle < \mathit{target}$ (1)

The nonce field is a random number that satisfies the puzzle, and the transaction field represents the remaining components of the transaction, including the hashes of previous transactions, the data, the signature, etc. The difficulty of the POW is set small enough to be acceptable for the resource-constrained vehicles in our system, and it can be adjusted dynamically by setting a different target value.
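As a rough illustration of Formula 1, the sketch below searches for a nonce such that the hash of the transaction and nonce falls below the target. The choice of SHA-256, the byte layout, and the concrete target value are assumptions made for the example; the paper only specifies that the difficulty is kept small and tuned via the target.

```python
# Hedged sketch of the POW puzzle Hash<transaction, nonce> < target.

import hashlib

def pow_hash(tx_bytes: bytes, nonce: int) -> int:
    digest = hashlib.sha256(tx_bytes + nonce.to_bytes(8, "big")).digest()
    return int.from_bytes(digest, "big")

def solve_pow(tx_bytes: bytes, target: int) -> int:
    """Search nonces until the puzzle of Formula 1 is satisfied."""
    nonce = 0
    while pow_hash(tx_bytes, nonce) >= target:
        nonce += 1
    return nonce

# A large target means low difficulty, acceptable for resource-constrained
# vehicles: here roughly one valid nonce per 256 candidates on average.
target = 1 << 248
tx = b"parents:...|topic:lane-A|data:congestion|sig:..."
nonce = solve_pow(tx, target)
assert pow_hash(tx, nonce) < target
```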
After the POW of a new transaction is completed, the new transaction carrying the valuable information can be issued and broadcast. Note that a newly issued transaction is not considered valid until it achieves consensus among the vehicles; the consensus process is discussed in Section III-B4. The data inside a valid transaction can provide drivers with useful information to plan their journeys. As the vehicular nodes receive new transactions, vehicles can selectively store the related transactions locally according to their interests. Compared to storing the entire blockchain, this storage mechanism that keeps only relevant data greatly reduces the storage overhead.

_3) The Roles in LDV:_ Before discussing the consensus of LDV, we first introduce the roles in the LDV design. According to their different functions, LDV includes two categories of nodes.

- Normal node: In addition to broadcasting and verifying new transactions, a normal node is responsible for data management, such as providing storage service for VSN data. Normal nodes can be further divided into two subclasses depending on the type of device.
  – Vehicular node: Vehicular nodes are general vehicles (e.g., cars, buses). These nodes are usually highly mobile, and their locations change dynamically.
  – Road side unit node (RSU node): Unlike the mobile vehicular nodes, the location of an RSU node is fixed and relatively stable (e.g., traffic lights).
- Monitoring node: Apart from the duties of normal nodes, a monitoring node is the regulator of the blockchain in each topic group and plays an important role in the consensus process. These nodes can be operated by transportation departments owing to their authority; moreover, transportation departments can access accurate traffic information in time through surveillance cameras.

_4) Verification and Consensus of New Transactions:_ After a node receives a new transaction from the network, it verifies the validity of this transaction. Verification includes checking the signature and validating the data inside the transaction. Once verification is complete, the new transaction can be stored locally and broadcast further so that other nodes can verify it as well.

For the sake of simplicity, we first discuss the consensus within each topic group. When the validity of a new transaction has been verified by a node, that node can issue its own transactions referencing these valid transactions. The reference relationship indicates that other nodes agree with the information in the referenced transaction. For example, in topic group C of Fig. 4, the reference relationship between transactions 5 and 8 indicates that transaction 8 agrees with the validity of transaction 5.

Fig. 4. The older history with fewer duplicates.

The final validity of a new transaction is determined by its _cumulative weight_ [18], which is proportional to the number of nodes that agree with the transaction. The cumulative weight of transaction $i$ is defined as follows:

$CW_i = \sum_{\tau \in \Gamma_i} \omega_\tau$, where $\Gamma_i = \{tx \mid tx \in \mathrm{Citation}_i\}$ (2)

The meaning of the symbols in Formula 2 is described in Table I.

TABLE I. NOTATIONS

As shown in Formula 2, the cumulative weight of transaction $i$ is defined as the sum of the weights of the transactions citing transaction $i$. The more computational power a transaction consumes in its POW, the higher its weight. The greater the cumulative weight of a transaction, the more likely the transaction is valid and final, because the computational power consumed is larger. When the cumulative weight of a transaction reaches a certain level, the transaction is believed to be valid. Additionally, the cumulative weight is one of the determining factors in distinguishing honest transactions from the illegal transactions issued by malicious nodes.
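The following sketch computes the cumulative weight of Formula 2 over a citation DAG. Following the tangle model [18], we take Citation_i to be all direct and indirect approvers of transaction i and give every transaction a unit weight; both choices are our interpretation of the formula, not fixed by the paper.

```python
# Sketch of Formula 2: CW_i sums the weights of the transactions citing i.

def cumulative_weight(cites, i, weight=None):
    """`cites` maps a transaction id to the ids of transactions citing it."""
    weight = weight or {}
    approvers, frontier = set(), list(cites.get(i, ()))
    while frontier:                          # transitive closure over citations
        tx = frontier.pop()
        if tx not in approvers:
            approvers.add(tx)
            frontier.extend(cites.get(tx, ()))
    return sum(weight.get(tx, 1) for tx in approvers)

# Topic group C of Fig. 4: transaction 8 cites transaction 5, and is itself
# cited by (hypothetical) transactions 9 and 10, so all three back transaction 5.
cites = {"tx5": ["tx8"], "tx8": ["tx9", "tx10"]}
print(cumulative_weight(cites, "tx5"))       # -> 3
```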
In the normal case, a transaction issued by honest nodes will be verified and cited by other honest nodes, so its cumulative weight keeps increasing and grows larger than that of illegal transactions. Therefore, a transaction issued by honest nodes will eventually become valid and provide useful information for other nodes.

To prevent malicious nodes from destroying the entire system in case the total computational power of malicious nodes exceeds that of honest nodes, we introduce the monitoring nodes mentioned above; in general, we assume the total computational power of malicious nodes is less than 50%. The transactions cited by a transaction issued by a monitoring node are valid, and the calculation of the cumulative weight is unnecessary for them, because the monitoring node is honest and can verify the authenticity of the information inside transactions through surveillance cameras. If the validity of two conflicting transactions cannot be determined by either the cumulative weight or the monitoring nodes within a topic group, the consensus on these conflicting transactions is carried out by the entire network, combining the data and nodes of other topic groups; however, the probability of this situation is so low that it can be ignored. In the normal case, the consensus on transactions is carried out within each topic group in order to reduce the communication overhead and the broadcast time in VSNs, whose communication capabilities are poor.

_5) Storage Cost of a Large-Scale Topic Group:_ The design described so far deals well with the storage overhead of blockchain in VSNs when the scale of the topic groups is uniform. However, as the number of transactions in a topic group increases, especially for hot topics, the storage overhead again becomes unacceptable once the volume of transactions in the group reaches a high level.

_Challenge 1: How to deal with the storage cost problem in a hot topic group with a large number of transactions?_

_C. Data Reduction Within a Group_

To address Challenge 1, we further present a data reduction approach within a topic group based on Insight 2. Through the in-depth analysis of VSNs, we learned that drivers are more likely to choose the latest information when deciding on a travel route. Fresh data often provides more valuable information than historical data in a fast-changing traffic scenario. In addition, older transactions have high cumulative weights and are thus difficult to tamper with. Moreover, the storage cost of the large volume of historical data is too expensive for vehicles with limited resources.

_1) Overview of the Data Reduction Approach:_ As a result, inspired by [28], which focuses on data reduction in chain-based blockchains, we reduce the number of duplicates of historical data in the DAG-based blockchain instead of directly deleting the historical data of all nodes. The remaining copies are significant for the data integrity and traceability of the blockchain, and the historical data can also be used for data analysis to discover potential value in VSNs. The enhanced design is depicted in Fig. 4: the older the historical data, the fewer duplicates it has in the blockchain network. Taking topic group II of Fig. 3 as an example, as shown in Fig. 4, the oldest historical transactions, numbered 1 and 3, have only one copy across the nodes of this group, while transaction 5 has two copies. Correspondingly, the latest transactions, numbered 8, 12, and 13, are stored on every node. This further reduces the storage overhead by reducing the number of duplicates of unnecessary historical data. Although reducing the duplicates of historical data saves storage space, keeping only a few replicas seriously affects the data integrity and security of the blockchain. A good data allocation strategy is therefore not only conducive to data reduction but also helpful for data integrity.

_Challenge 2: How to allocate the number and the storage locations of historical replicas reasonably?_

_2) Allocation of Duplicates:_ To guarantee the data integrity and security of the blockchain, we give the allocation strategy of replicas in this section. In a normal blockchain system, each full node needs to store full copies of the data to ensure integrity and security. In LDV, to save storage space, only the latest data needs to be stored on each node, owing to its low cumulative weight and high value; the historical data can be pruned. As a result, only a subset of nodes needs to store the full data, and each node can adjust the range of historical data it stores according to its demands and available resources. To prevent data loss caused by machine failures, full duplicates, including the historical data, are stored on the monitoring nodes of each topic group. The monitoring nodes are usually server machines with large storage capacity belonging to the traffic control department. Besides, the data stored on monitoring nodes benefits data security, as it effectively prevents the small number of duplicates of historical data from being controlled by malicious nodes.
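A minimal sketch of the two storage policies in this section follows: a normal node retains only a sliding window of recent transactions (here 9,000, mirroring the 3-day setting used later in Section IV-B), while a monitoring node keeps full duplicates. The class layout is illustrative, not the LDV code.

```python
# Sketch of the pruning rule: normal nodes keep a bounded window of recent
# transactions; monitoring nodes keep the full history of the group.

from collections import deque

class NormalNode:
    def __init__(self, window: int = 9_000):
        self.ledger = deque(maxlen=window)   # old entries fall out automatically

    def append(self, tx):
        self.ledger.append(tx)

class MonitoringNode:
    def __init__(self):
        self.ledger = []                     # full duplicates, never pruned

    def append(self, tx):
        self.ledger.append(tx)

normal, monitor = NormalNode(), MonitoringNode()
for tx_id in range(27_000):                  # ~9 days of traffic at 3,000 tx/day
    normal.append(tx_id)
    monitor.append(tx_id)

print(len(normal.ledger), len(monitor.ledger))   # -> 9000 27000
```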
_D. Complements of the Design_

The design presented so far provides a lightweight blockchain system for VSNs. However, some challenges remain, such as cross-group data queries. We therefore refine the design of LDV in terms of data integrity and query in this subsection.

_1) Data Integrity of a Single Group:_ As discussed in Section III-C, more nodes and transactions further aggravate the storage problem, especially in large-scale groups. Conversely, a decrease in the number of nodes affects data integrity, because each node only stores relevant data in our design. More seriously, when no one pays attention to a topic, the data on that topic is at risk of being lost.

_Challenge 3: How to ensure the data integrity of topic groups with a small number of nodes?_

Fortunately, the RSU nodes of VSNs are very useful for ensuring the data integrity of groups with a small number of nodes. RSUs are highly common on roads and are themselves part of the road infrastructure, such as traffic lights; naturally, an RSU node is a member of the topic group about its road. As a result, we can use the stability and ubiquity of RSU nodes to store data for sparsely followed topic groups. For example, suppose a topic about Road A attracts little attention; the RSUs along this road then join the topic group automatically and store the information about that topic in order to avoid data loss. Furthermore, to avoid data loss caused by the failure of the RSU nodes on one road, the RSU nodes near that road automatically join the road's topic group when the number of its RSU nodes drops below a certain threshold. The threshold can be set flexibly according to the situation.

_2) Data Query of Cross-Group:_ In addition to acquiring information on topics of interest, it is sometimes also important to get information on other topics. Apart from joining a topic group to retrieve its data, querying the data of a topic directly is an option for those who do not want to join the group and only need the data temporarily. As stated in Section III-A, because the trips of commuters are usually stable, they only need to join the topics about their trips; when commuters require data from other topic groups, they can query that data directly from the nodes in those groups.

_Challenge 4: How to query data from other topic groups?_

As mentioned above, only relevant data is stored locally. If commuters need the data of other groups, they can issue a transaction containing a data request for the relevant groups and broadcast it, waiting for the data in response. When a node in the target group receives the request, it returns the requested data to the commuter. Nevertheless, the correctness of data obtained from other groups is questionable, so ensuring the validity of the data is a challenge for cross-group queries. As described in Section III-B4, the validity of a transaction is determined by its cumulative weight; we therefore use the cumulative weight to ensure the validity of data received from other groups. Since the cumulative weight is attached to the requested data, commuters can easily verify the validity of the returned data through its cumulative weight. To prevent the requested data and its cumulative weight from being tampered with by malicious nodes, the data query request of a topic group is answered by its monitoring nodes, which guarantees the correctness and security of the requested data.
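The cross-group query flow can be sketched as follows: the requester asks the monitoring nodes of a foreign topic group for data and accepts only entries whose attached cumulative weight clears a validity threshold. The message shapes and the threshold value are assumptions for illustration, not part of the paper's specification.

```python
# Hedged sketch of the cross-group query of Section III-D2.

VALIDITY_THRESHOLD = 3   # assumed minimum cumulative weight for finality

def answer_query(group_ledger: dict, topic: str) -> list:
    """Run by a monitoring node: return (data, cumulative_weight) pairs."""
    return [(tx["data"], tx["cw"]) for tx in group_ledger.get(topic, [])]

def cross_group_query(topic: str, monitoring_nodes: list) -> list:
    for ledger in monitoring_nodes:          # any responsive monitor suffices
        reply = answer_query(ledger, topic)
        if reply:
            return [data for data, cw in reply if cw >= VALIDITY_THRESHOLD]
    return []

road_b_monitor = {"road-B": [{"data": "clear", "cw": 5},
                             {"data": "jam",   "cw": 1}]}   # low-weight entry
print(cross_group_query("road-B", [road_b_monitor]))        # -> ['clear']
```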
IV. EVALUATION

We have implemented a prototype DAG-based blockchain system called DAGChain for VSNs, with LevelDB (https://github.com/syndtr/goleveldb) as the underlying database. DAGChain adopts a DAG structure instead of a chain structure for efficient parallelism, and takes transactions as the vertices of the graph to process the data inside transactions more efficiently. Based on DAGChain, we implement LDV to evaluate the effect of the proposed data reduction approaches and conduct several experiments on servers that simulate the situation of VSNs: the servers play the role of vehicular nodes that send, broadcast, and store data. Each machine has two 24-core Intel Xeon 8260 2.4 GHz CPUs, 128 GB DRAM, and a 7.2 TB HDD, running the CentOS 7.6 operating system. To ensure the uniformity of storage across nodes with different topics of interest, the size of the data inside all transactions is set to be the same.

TABLE II. The number of transactions in different topics and their followers.

_A. Effects of the Social-Based Data Reduction Approach_

We first evaluate the social-based data reduction approach described in Section III-B1, in which each node only needs to store the data of the topics it is interested in. A node can join multiple topic groups freely to get the information it wants, and leave freely if it is no longer interested. We first study the storage cost of nodes with one topic of interest; the cost of nodes with multiple topics of interest is analyzed in Section IV-C. The storage space used by different nodes is measured by generating different numbers of transactions in different topic groups. For the sake of simplicity, each transaction only contains information that belongs to one topic. The number of transactions in the different topic groups and the members of the groups are listed in Table II. Taking the last row as an example, topic5 has 6,000 transactions containing data about this topic, and nodes E and F are interested in it.

Fig. 5. The data size of different nodes using the social-based data reduction approach.

Fig. 5 shows the storage cost of different nodes with different topics of interest. In particular, node F does not adopt the data reduction approach, i.e., it follows all topics and can be considered a full node in a normal blockchain system. As the experimental results show, compared to the full node F, the other nodes incur less storage cost when joining only the topic groups of interest. Node A consumes the least storage space because the topic it follows has the fewest transactions; specifically, it saves 97.13% of storage space compared to node F, which is beneficial to resource-constrained VSNs.
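Under the stated assumptions of this experiment (equal-sized transactions, so storage is proportional to the number of stored transactions), the per-node storage can be reconstructed as below. Only the topic5 row (6,000 transactions, followed by nodes E and F) is given in the text; the other topic counts and subscriptions are placeholders, so the exact 97.13% figure is not reproduced here.

```python
# Back-of-the-envelope reconstruction of the Section IV-A measurement.

tx_per_topic = {"topic1": 500, "topic2": 2_000, "topic3": 4_000,
                "topic4": 5_000, "topic5": 6_000}            # topic1-4 assumed
follows = {"A": ["topic1"], "E": ["topic4", "topic5"],
           "F": list(tx_per_topic)}                          # F acts as a full node

def storage(node: str) -> int:
    # Equal-sized transactions: storage ~ number of stored transactions.
    return sum(tx_per_topic[t] for t in follows[node])

full = storage("F")
for node in ("A", "E"):
    saved = 100 * (1 - storage(node) / full)
    print(f"node {node}: {storage(node)} txs, saves {saved:.2f}% vs full node F")
```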
_B. Effects of Data Reduction Within a Single Group_

To evaluate the effect of the storage reduction described in Section III-C, we reset the experimental setting. Specifically, the blockchain in this experiment contains only one topic, in order to measure the storage space consumed within a topic group when the historical data pruning method is used. In detail, each normal node only needs to store recent data, and the old historical data can be pruned to save storage space. Consecutive transactions containing fixed-size data about this topic are generated and stored on six nodes, numbered A to F, to measure their average storage cost. Two of the nodes (E and F) serve as monitoring nodes and keep full duplicates of the historical data for the integrity guarantees introduced in Section III-C2. The historical data reduction strategy prunes data older than a certain number of days, which can be adjusted according to the requirements of each node; in this experiment, for simplicity, we set it to 3 days for all four normal nodes. Assuming that 3,000 transactions are issued every day, we prune the transactions that are more than 9,000 transactions away from the latest one and preserve the most recent 9,000 transactions, simulating a 3-day retention window.

Fig. 6. The data size within a topic group. (a) The average data size. (b) The comparison of the data size of a normal node and a monitoring node.

Fig. 6 demonstrates the storage cost of the pruning method within the topic group. The average data size of all six nodes is shown in Fig. 6(a). The storage cost without pruning is similar to that with pruning while the number of transactions is low, but it grows larger and faster as the scale of transactions increases. The storage cost with pruning also keeps increasing, because the data size of the two monitoring nodes grows all the time. Fig. 6(b) compares the data sizes of a normal node and a monitoring node: the size of the monitoring node keeps increasing, while the size of a normal node remains unchanged, as it only persists recent data. This confirms the efficiency of the historical data pruning approach in large-scale groups.

_C. Scalability of LDV_

To analyze the scalability of LDV, we conduct several experiments to evaluate the storage space under different numbers of transactions and topics of interest. Specifically, to evaluate the storage cost of nodes with multiple topics of interest, the total number of transactions is kept uniform in each group of experiments; the specific numbers are listed in Table III.

TABLE III. The number of interested topics and contained transactions.

Six nodes are deployed to test the storage space consumed for different numbers of topics. Additionally, in another experiment that varies the number of transactions, the number of transactions in each topic group is kept the same for fairness, and the number of topics is set to 3. Three nodes run respectively without data reduction, with the social-based reduction, and with the social-based reduction combined with pruning, to measure the storage cost of the different methods. As before, the amount of historical data retained in the two experiments is set to 9,000 transactions.

Fig. 7. Consumed storage space affected by the number of topics.

Fig. 7 depicts the storage space used for varying numbers of topics of interest. Since the total number of transactions is the same, the storage overhead for different numbers of topics is similar. In contrast, the total data size with pruning increases with the number of topics of interest: the pruning method runs within each topic group, so although the storage cost with pruning is constant within each group, it grows as the number of groups increases.

Fig. 8. The storage cost of different methods.

Fig. 8 shows the storage space consumed for increasing numbers of transactions. As the results show, LDV with the combination of the social-based and pruning approaches performs best. The data size increases sharply when neither data reduction approach is adopted. Meanwhile, the storage space consumed by LDV stays the same after 27,000 transactions have been issued, which demonstrates good scalability.

V. DISCUSSION AND FUTURE WORK

In this section, we discuss the robustness and communication efficiency of LDV, which were not covered in the design, and point out several directions for future work.

_A. Robustness_

The robustness of a system is significant for ensuring the availability of service, especially online services for VSNs. Although RSU nodes can dedicate storage to topic groups with few nodes to avoid data loss, the potential failure of RSU nodes (e.g., machine downtime) may still result in data loss. With the prosperity of cloud services, cloud servers can be used to back up VSN data to prevent such loss.
To be specific, the wired communication module, which is common on RSUs, can be utilized to upload data to cloud servers periodically to avoid losing data. A better fault tolerance mechanism is left to our future work to achieve good robustness.

_B. Communication_

As mentioned before, the data transmission of VSNs relies on the underlying ad hoc network. However, the poor communication capability of the ad hoc network may become a bottleneck for development. Specifically, in our design, the social relationship within topic groups is a virtual link that relies on physical links to communicate. The physical distance between two nodes with a direct social relationship can be very long, in which case data exchange between these nodes is difficult. The emergence of fifth-generation mobile networks may alleviate this problem when applied to VSNs thanks to their high speed. An efficient routing algorithm that takes the social relationships in VSNs and the beneficial features of blockchain into consideration may be another possible solution. Since the main focus of this paper is reducing the storage cost of blockchain used in VSNs, we leave the above challenges to our future work.

VI. CONCLUSION

In this paper, we design a lightweight blockchain system for VSNs based on the DAG structure. Specifically, a social-based data reduction approach over the whole network and a pruning method within a single group are proposed to reduce the storage cost of vehicular nodes. To ensure data integrity and cross-group query ability, we further present the corresponding mechanisms in our design. A prototype of LDV has been implemented to evaluate the effect of data reduction. The experimental results demonstrate that LDV can save 97.13% of storage space and is scalable.

REFERENCES

[1] X. Wang, H. Zhang, L. Wang, and Z. Ning, "A demand-supply oriented taxi recommendation system for vehicular social networks," IEEE Access, vol. 6, pp. 41529–41538, 2018.
[2] S. Smaldone, L. Han, P. Shankar, and L. Iftode, "RoadSpeak: Enabling voice chat on roadways using vehicular social networks," in Proc. 1st Workshop Social Netw. Syst., 2008, pp. 43–48.
[3] D. Camara, C. Bonnet, and F. Filali, "Propagation of public safety warning messages: A delay tolerant network approach," in Proc. IEEE Wireless Commun. Netw. Conf., 2010, pp. 1–6.
[4] X. Kong et al., "Mobility dataset generation for vehicular social networks based on floating car data," IEEE Trans. Veh. Technol., vol. 67, no. 5, pp. 3874–3886, May 2018.
[5] A. Rahim et al., "Vehicular social networks: A survey," Pervasive Mobile Comput., vol. 43, pp. 96–113, 2018.
[6] Q. Yang and H. Wang, "Towards trustworthy vehicular social network," IEEE Commun. Mag., vol. 53, no. 8, pp. 42–47, Aug. 2015.
[7] D. Kushwaha, P. K. Shukla, and R. Baraskar, "A survey on Sybil attack in vehicular ad-hoc network," Int. J. Comput. Appl., vol. 98, no. 15, pp. 31–36, 2014.
[8] A. M. Vegni, V. Loscrí, and P. Manzoni, "Analysis of small-world features in vehicular social networks," in Proc. 16th IEEE Annu. Consum. Commun. Netw. Conf., 2019, pp. 1–2.
[9] V. R. Neto, D. S. Medeiros, and M. E. M. Campista, "Analysis of mobile user behavior in vehicular social networks," in Proc. 7th Int. Conf. Netw. Future, 2016, pp. 1–5.
[10] S. Nakamoto, "Bitcoin: A peer-to-peer electronic cash system," 2008. [Online]. Available: https://bitcoin.org/bitcoin.pdf
[11] X. Xu et al., "A taxonomy of blockchain-based systems for architecture design," in Proc. IEEE Int. Conf. Softw. Architecture, 2017, pp. 243–252.
[12] G. Wood, "Ethereum: A secure decentralised generalised transaction ledger," Ethereum Project Yellow Paper, vol. 151, no. 2014, pp. 1–32, 2014.
[13] E. Androulaki et al., "Hyperledger Fabric: A distributed operating system for permissioned blockchains," in Proc. 13th EuroSys Conf., 2018, pp. 30:1–30:15.
[14] K. Karlsson et al., "Vegvisir: A partition-tolerant blockchain for the Internet-of-Things," in Proc. IEEE 38th Int. Conf. Distrib. Comput. Syst., 2018, pp. 1150–1158.
[15] K. J. O'Dwyer and D. Malone, "Bitcoin mining and its energy footprint," in Proc. 25th IET Irish Signals Syst. Conf. China-Ireland Int. Conf. Inf. Commun. Technologies, 2014, pp. 280–285.
[16] A. Beall, "Bitcoin mining uses more energy than Ecuador – but there's a fix," New Scientist, 2017. [Online]. Available: https://www.newscientist.com/article/2151823-bitcoin-mining-uses-more-energy-than-ecuador-but-theres-a-fix/
[17] E. K. Kogias, P. Jovanovic, N. Gailly, I. Khoffi, L. Gasser, and B. Ford, "Enhancing Bitcoin security and performance with strong consistency via collective signing," in Proc. 25th USENIX Secur. Symp., USENIX Association, 2016, pp. 279–296.
[18] S. Popov, "The tangle," 2018. [Online]. Available: https://iota.org/IOTA_Whitepaper.pdf
[19] A. Churyumov, "Byteball: A decentralized system for storage and transfer of value," 2016. [Online]. Available: https://byteball.org/Byteball.pdf
[20] C. LeMahieu, "Nano: A feeless distributed cryptocurrency network," 2018. [Online]. Available: https://nano.org/en/whitepaper
[21] X. Dai, J. Xiao, W. Yang, C. Wang, and H. Jin, "Jidar: A jigsaw-like data reduction approach without trust assumptions for Bitcoin system," in Proc. IEEE 39th Int. Conf. Distrib. Comput. Syst., 2019, pp. 1317–1326.
[22] Z. Yang, H. Yu, J. Tang, and H. Liu, "Toward keyword extraction in constrained information retrieval in vehicle social network," IEEE Trans. Veh. Technol., vol. 68, no. 5, pp. 4285–4294, May 2019.
[23] G. Xu, H. Li, Y. Dai, K. Yang, and X. Lin, "Enabling efficient and geometric range query with access control over encrypted spatial data," IEEE Trans. Inf. Forensics Secur., vol. 14, no. 4, pp. 870–885, Apr. 2019.
[24] H. Ren, H. Li, Y. Dai, K. Yang, and X. Lin, "Querying in Internet of Things with privacy preserving: Challenges, solutions and opportunities," IEEE Netw., vol. 32, no. 6, pp. 144–151, Nov./Dec. 2018.
[25] D. Liao, H. Li, G. Sun, M. Zhang, and V. Chang, "Location and trajectory privacy preservation in 5G-enabled vehicle social network services," J. Netw. Comput. Appl., vol. 110, pp. 108–118, 2018.
[26] B. Ying and A. Nayak, "A distributed social-aware location protection method in un-trusted vehicular social networks," IEEE Trans. Veh. Technol., vol. 68, no. 6, pp. 6114–6124, Jun. 2019.
[27] W. Jiang, H. Li, G. Xu, M. Wen, G. Dong, and X. Lin, "PTAS: Privacy-preserving thin-client authentication scheme in blockchain-based PKI," Future Gener. Comput. Syst., vol. 96, pp. 185–195, 2019.
[28] D. Jia, J. Xin, Z. Wang, W. Guo, and G. Wang, "ElasticChain: Support very large blockchain by reducing data redundancy," in Proc. Asia-Pacific Web Web-Age Inf. Manage. Joint Int. Conf. Web Big Data, 2018, pp. 440–454.
[29] Z. Xu, S. Han, and L. Chen, "CUB, a consensus unit-based storage scheme for blockchain system," in Proc. IEEE 34th Int. Conf. Data Eng., 2018, pp. 173–184.
[30] I. Lequerica, M. G. Longaron, and P. M. Ruiz, "Drive and share: Efficient provisioning of social networks in vehicular scenarios," IEEE Commun. Mag., vol. 48, no. 11, pp. 90–97, Nov. 2010.

**Wenhui Yang (Student Member, IEEE)** received the B.S. degree from the School of Computer Science and Engineering, Northeastern University, Shenyang, China, in 2018. He is currently working toward the M.S. degree with the School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China. His current research interests include blockchain systems and distributed systems.

**Xiaohai Dai (Student Member, IEEE)** received the M.S. degree from the School of Computer Science and Technology, Huazhong University of Science and Technology (HUST), Wuhan, China, in 2017. He is currently working toward the Ph.D. degree with the School of Computer Science and Technology, HUST. His current research interests include blockchain and distributed systems.

**Jiang Xiao (Member, IEEE)** received the B.Sc. degree from the Huazhong University of Science and Technology (HUST), Wuhan, China, in 2009 and the Ph.D. degree from the Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, in 2014. She is currently an Associate Professor with the School of Computer Science and Technology, HUST, Wuhan, China. She has been engaged in research on blockchain, distributed computing, big data analysis and management, and wireless indoor localization. Her awards include the Hubei Dawnlight Program 2018, the CCF-Intel Young Faculty Research Program 2017, and best paper awards from IEEE ICPADS/GLOBECOM 2012.

**Hai Jin (Fellow, IEEE)** received the Ph.D. degree in computer engineering from the Huazhong University of Science and Technology, Wuhan, China, in 1994. He received a German Academic Exchange Service fellowship to visit the Technical University of Chemnitz in Germany in 1996. He was with the University of Hong Kong between 1998 and 2000, and was a Visiting Scholar with the University of Southern California between 1999 and 2000. He is a Cheung Kung Scholars Chair Professor of computer science and engineering with the Huazhong University of Science and Technology, the Chief Scientist of ChinaGrid, the largest grid computing project in China, and the Chief Scientist of the National 973 Basic Research Program Project on Virtualization Technology of Computing Systems and Cloud Security. He has coauthored 22 books and published more than 800 research papers. His research interests include computer architecture, virtualization technology, cluster computing and cloud computing, peer-to-peer computing, network storage, and network security. He received the Excellent Youth Award from the National Science Foundation of China in 2001. He is a fellow of the CCF and a member of the ACM.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/TVT.2020.2963906?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/TVT.2020.2963906, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "HYBRID", "url": "https://ieeexplore.ieee.org/ielx7/25/9119895/08952775.pdf" }
2,020
[ "JournalArticle" ]
true
2020-01-08T00:00:00
[ { "paperId": "9934803184a2076317fd9b6930fd1cb10b06931b", "title": "PTAS: Privacy-preserving Thin-client Authentication Scheme in blockchain-based PKI" }, { "paperId": "8481ca7e039e637588567e8d9ef1d31bb8a4cdb3", "title": "Jidar: A Jigsaw-like Data Reduction Approach Without Trust Assumptions for Bitcoin System" }, { "paperId": "df70f2abe351c7fe54c5cb2ca2e436fd9db0a029", "title": "Enabling Efficient and Geometric Range Query With Access Control Over Encrypted Spatial Data" }, { "paperId": "4dcbc23e00f872c93cc04ff9ad3b81b36bbcb1cd", "title": "Toward Keyword Extraction in Constrained Information Retrieval in Vehicle Social Network" }, { "paperId": "fe35fb83739a887763e9b6ed3682b1c037394965", "title": "A Distributed Social-Aware Location Protection Method in Untrusted Vehicular Social Networks" }, { "paperId": "3bf890ce3d70db33a990ba089783942b25590b57", "title": "Analysis of Small-World Features in Vehicular Social Networks" }, { "paperId": "73132e4625236bd71c6422e27452ae63131ea724", "title": "ElasticChain: Support Very Large Blockchain by Reducing Data Redundancy" }, { "paperId": "7768155d89eea86e3ffd1be1e0d4a4fd76fa5649", "title": "A Demand-Supply Oriented Taxi Recommendation System for Vehicular Social Networks" }, { "paperId": "3cb4ae87b7ed895a6563ad459329c7f8b8a19d47", "title": "Vegvisir: A Partition-Tolerant Blockchain for the Internet-of-Things" }, { "paperId": "f7de4d2d6132480915d3fa96223bdeac3f47efc2", "title": "Location and trajectory privacy preservation in 5G-Enabled vehicle social network services" }, { "paperId": "c5967549d4f8d3aefe958074d4e316c548972c00", "title": "CUB, a Consensus Unit-Based Storage Scheme for Blockchain System" }, { "paperId": "15d1450a8797e2feaa4c0ca4ebcd43c8cbd8b61d", "title": "Querying in Internet of Things with Privacy Preserving: Challenges, Solutions and Opportunities" }, { "paperId": "1b4c39ba447e8098291e47fd5da32ca650f41181", "title": "Hyperledger fabric: a distributed operating system for permissioned blockchains" }, { "paperId": "0499ec3b1af9a1bed50c58cc953f5c6830ad8264", "title": "A Taxonomy of Blockchain-Based Systems for Architecture Design" }, { "paperId": "2b2e2b19f56a7dc8104c890e8fea892997f61f5d", "title": "Analysis of mobile user behavior in vehicular social networks" }, { "paperId": "efd99fe3b5b620d89aa03201199c45988c688670", "title": "Enhancing Bitcoin Security and Performance with Strong Consistency via Collective Signing" }, { "paperId": "0646bfbc3e70749667106d6d4e288159d4c60007", "title": "A Survey on Sybil Attack in Vehicular Ad-hoc Network" }, { "paperId": "7359eff7806178f0b88e57cd397a941677ae0732", "title": "Bitcoin mining and its energy footprint" }, { "paperId": "3f708653a6930708341871925f3baf6984bccb5c", "title": "Drive and share: efficient provisioning of social networks in vehicular scenarios" }, { "paperId": "914da7df30f5c4084e0181d472529686ab446906", "title": "Propagation of Public Safety Warning Messages: A Delay Tolerant Network Approach" }, { "paperId": "d5482b6c9650d7e588ba10765fa891329a86df68", "title": "RoadSpeak: enabling voice chat on roadways using vehicular social networks" }, { "paperId": "48d2f6971e1b3b1a4f05d4b01e6656ff50f3d72e", "title": "Vehicular Social Networks: A survey" }, { "paperId": "17d3a97e78250b249f2f0492fd700952f4fd3677", "title": "Mobility Dataset Generation for Vehicular Social Networks Based on Floating Car Data" }, { "paperId": "600c574adfbd0a6895934ec8d3dbfcb56fb2bd68", "title": "Nano : A Feeless Distributed Cryptocurrency Network" }, { "paperId": null, "title": "“Bitcoin mining uses more energy 
than ecuador–but there’s a fix,”" }, { "paperId": null, "title": "“Byteball: A decentralized system for storage and transfer of value,”" }, { "paperId": "2185e06bc5e03aa023490473a72a2bc6462b908c", "title": "Towards Trustworthy Vehicular Social Network" }, { "paperId": "43586b34b054b48891d478407d4e7435702653e0", "title": "The Tangle" }, { "paperId": "3c50bb6cc3f5417c3325a36ee190e24f0dc87257", "title": "ETHEREUM: A SECURE DECENTRALISED GENERALISED TRANSACTION LEDGER" }, { "paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": null, "title": "Technology (HUST), Wuhan, China" } ]
13,378
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01e1ec86b0ae4ab38383b0efbe7a44847776e80e
[ "Computer Science" ]
0.863165
A distributed polygon retrieval algorithm using MapReduce
01e1ec86b0ae4ab38383b0efbe7a44847776e80e
10th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing
[ { "authorId": "1913889", "name": "Qiulei Guo" }, { "authorId": "1724566", "name": "Balaji Palanisamy" }, { "authorId": "2312816", "name": "H. Karimi" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
The proliferation of data acquisition devices like 3D laser scanners has led to a burst of large-scale spatial terrain data, which imposes many challenges on spatial data analysis and computation. With the advent of several emerging collaborative cloud technologies, a natural and cost-effective approach to managing such large-scale data is to store and share such datasets in a publicly hosted cloud service and process the data within the cloud itself using modern distributed computing paradigms such as MapReduce. For several key spatial data analysis and computation problems, polygon retrieval is a fundamental operation which is often computed under real-time constraints. However, existing sequential algorithms fail to meet this demand effectively given that terrain data in recent years have witnessed an unprecedented growth in both volume and rate. In this work, we develop a MapReduce-based parallel polygon retrieval algorithm which aims at minimizing the IO and CPU loads of the map and reduce tasks during spatial data processing. The results of the preliminary experiments on a Hadoop cluster demonstrate that the proposed techniques are scalable and lead to more than 35% reduction in execution time of the polygon retrieval operation over existing distributed algorithms.
# A Distributed Polygon Retrieval Algorithm using MapReduce

Q. Guo, B. Palanisamy, H. A. Karimi* Geoinformatics Laboratory, School of Information Sciences, University of Pittsburgh - (qiulei, bpalan, hkarimi)@pitt.edu (*Corresponding author)

**KEY WORDS: Hadoop, Polygon Retrieval, Distributed Algorithm, GIS**

**ABSTRACT:**

The burst of large-scale spatial terrain data due to the proliferation of data acquisition devices like 3D laser scanners poses challenges to spatial data analysis and computation. Among many spatial analyses and computations, polygon retrieval is a fundamental operation that is often performed under real-time constraints. However, existing sequential algorithms fail to meet this demand for larger sizes of terrain data. Motivated by the MapReduce programming model, a well-adopted large-scale parallel data processing technique, we present a MapReduce-based polygon retrieval algorithm designed with the objective of reducing the IO and CPU loads of spatial data processing. By indexing the data based on a quad-tree approach, a significant amount of unneeded data is filtered out in the filtering stage, which reduces the IO overhead. The indexed data also facilitate querying the relationship between the terrain data and the query area in a shorter time. The results of the experiments performed on our Hadoop cluster demonstrate that our algorithm performs significantly better than existing distributed algorithms.

**1. INTRODUCTION**

Cloud computing is continually being improved for computational geometry, such as the operations commonly used in GIS. Of particular interest, and in high demand, are spatial analyses and computations that typically involve processing large volumes of spatial data. Example applications include urban environment visualization, shadow analysis, visibility computation, and flood simulation. For these GIS applications, polygon retrieval, in which very large terrain data within a given polygon's boundary is retrieved for further analysis, is a common operation (Mark de Berg, 2008; Willard, 1982). Willard (Willard, 1982) proposed the polygon retrieval problem and devised an algorithm with sub-linear worst-case time complexity. Several efficient algorithms have since been proposed to improve this bound; (Mark de Berg, 2008; Paterson and Frances Yao, 1986; Sioutas et al., 2008; Tung and King, 2000) are among the most notable. However, with advanced large-scale spatial data acquisition techniques and devices like 3D laser scanners and satellites, terrain datasets of tens or even hundreds of gigabytes are now available. Efficient processing of such large terrain datasets is beyond the capability of current algorithms that run on single machines, so a distributed solution is highly desired. Efficient polygon retrieval is crucial since it is a CPU-intensive operation, especially for very large spatial datasets.

In this paper, we present a distributed polygon retrieval algorithm based on MapReduce. The challenges of processing polygon retrieval over a large terrain dataset include how to organize, partition, and distribute very large spatial datasets across tens or hundreds of nodes in a cloud datacenter so that applications can query and analyze the data quickly and cost-effectively. To address these challenges, we first index the data based on a quad-tree, which is simpler than the R-tree index (Eldawy and Mokbel, 2013). This allows us to efficiently filter out the spatial data that is not relevant to a query, thereby improving query performance and efficiency.
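As a primer on the quad-tree indexing detailed later in Section 3, the following Python sketch derives a quad-tree cell index for a point by descending one quadrant per level, so that each cell's index is a prefix of all of its sub-cells' indices. The digit convention is an assumption, since the paper does not fix a concrete encoding.

```python
# Sketch of a quad-tree cell index (one quadrant digit per level).

def quadtree_index(x, y, bbox, depth):
    """Return the quad-tree cell index (string of 0-3 digits) of point (x, y)."""
    minx, miny, maxx, maxy = bbox
    digits = []
    for _ in range(depth):
        midx, midy = (minx + maxx) / 2, (miny + maxy) / 2
        # Quadrant digit: bit 1 for the upper half, bit 0 for the right half.
        digits.append(str((2 if y >= midy else 0) + (1 if x >= midx else 0)))
        minx, maxx = (midx, maxx) if x >= midx else (minx, midx)
        miny, maxy = (midy, maxy) if y >= midy else (miny, midy)
    return "".join(digits)

print(quadtree_index(0.7, 0.2, (0, 0, 1, 1), depth=4))   # -> '1033'
```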
This allows us to efficiently filter out the spatial data that are not relevant to the query, thereby improving query performance and efficiency. We conduct two experiments on our cluster consisting of 20 nodes to validate the efficiency of our algorithm, and the results show that our algorithm is efficient and reduces the job execution time significantly.

The rest of the paper is organized as follows. Section 2 reviews the related work. Section 3 describes our MapReduce-based polygon retrieval algorithm. The experimental results are presented in Section 4. Section 5 concludes our work.

**2.** **RELATED WORK**

Polygon retrieval is a common operation needed in a diverse number of GIS applications. Willard (Willard, 1982) was the first to define the polygon retrieval problem formally and proposed a polygon retrieval algorithm with sub-linear worst-case time complexity. To improve this performance, several efficient algorithms have been proposed (Mark de Berg, 2008; Paterson and Frances Yao, 1986; Sioutas et al., 2008; Tung and King, 2000). These sequential algorithms work well under certain conditions; however, as terrain datasets are becoming very large, these algorithms fail to meet the demand for real-time response. As cloud computing has emerged as an effective and promising solution for both compute- and data-intensive geo-computation, the work in (Karimi et al., 2011) explored the feasibility of using Google App Engine, the cloud computing technology by Google, to process terrain data, usually in triangulated irregular network (TIN) form. Considering that Hadoop has become the de facto standard for large-scale distributed computation, some recent works have developed MapReduce-based algorithms for geo-computation. Puri et al. (Puri et al., 2013) proposed and implemented a MapReduce algorithm for distributed polygon overlay computation in Hadoop. Ji et al. (Ji et al., 2012) presented MapReduce-based approaches that construct an inverted grid index and process kNN queries over large spatial datasets. Akdogan et al. (Akdogan et al., 2010) created a unique spatial index, the Voronoi diagram, for given points in 2D space, which enables efficient processing of a wide range of geospatial queries such as RNN, MaxRNN and kNN with the MapReduce programming model. Hadoop-GIS (Wang et al., 2011) and SpatialHadoop (Eldawy et al., 2013) are two scalable and high-performance spatial data processing systems for running large-scale spatial queries in Hadoop. These systems provide support for some fundamental spatial queries like minimal bounding box queries, but they do not directly support the polygon retrieval operation addressed in this work.

**3.** **MAPREDUCE-BASED POLYGON RETRIEVAL ALGORITHM**

In this section, we discuss our proposed MapReduce-based distributed polygon retrieval algorithm. Our algorithm is composed of two parts: (1) using a quad-tree to index the terrain data and (2) organizing the terrain datasets based on the quad-tree prefix to minimize the I/O load. To accelerate the processing of terrain data, we first divide the entire space based on a complete quad-tree. Compared with other spatial indexing techniques, the quad-tree has several advantages for polygon retrieval. One such advantage is that we can directly partition the space into four subspaces recursively. In addition, with quad-tree indexing, the topological relation between the terrain data and the query area can be inferred directly from the indices' prefixes.
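The following minimal Python sketch (ours, not the authors' implementation) illustrates this style of quad-tree indexing: each level of the recursion appends one digit in {0,1,2,3} naming the quadrant containing the point, so points in the same cell share an index prefix. The quadrant numbering and fixed recursion depth are illustrative assumptions.

```
# Illustrative sketch of quad-tree indexing: each level appends one
# digit 0-3 naming the quadrant that contains the point, so cells that
# share a prefix share an ancestor cell in the quad-tree.

def quad_index(x, y, x0, y0, x1, y1, depth):
    """Return the quad-tree index string of point (x, y) in [x0,x1)x[y0,y1)."""
    index = ""
    for _ in range(depth):
        xm, ym = (x0 + x1) / 2, (y0 + y1) / 2
        quadrant = (2 if y >= ym else 0) + (1 if x >= xm else 0)
        index += str(quadrant)
        x0, x1 = (xm, x1) if x >= xm else (x0, xm)  # shrink to the chosen
        y0, y1 = (ym, y1) if y >= ym else (y0, ym)  # quadrant and recurse
    return index

# Nearby points share a prefix; distant points do not.
print(quad_index(1.0, 1.0, 0.0, 0.0, 8.0, 8.0, 3))  # "003"
print(quad_index(7.9, 7.9, 0.0, 0.0, 8.0, 8.0, 3))  # "333"
```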
The key idea here is that if a grid cell is within a query area, then all its sub-grids are also guaranteed to be within the query area. In other words, if a prefix of a spatial object's quad-tree index exists in the intersecting set, then that object is guaranteed to be within the query area. This property helps avoid the time-consuming point-in-polygon computation in the map phase, enabling the MapReduce jobs to complete significantly faster. To further increase query efficiency, we use a prefix tree to organize the prefixes of all the grid entries that intersect the query area, so that the query time is reduced to O(k), where k is the length of the index prefix. A prefix tree, also called a radix tree or trie, is an ordered tree data structure that is used to store a dynamic set or associative array where the keys are usually strings (Wikipedia). The idea behind a prefix tree is that all strings that share a common prefix inherit a common node. Thus, with our prefix tree optimization, testing a prefix of a quad-tree index against a given set can be accomplished in just O(k) time.

For implementation, in the pre-processing stage, we first consider the coarse-grained grid cells and recursively test whether they overlap with the query area. Once a grid cell intersects the query area, we test the corresponding sub-grid cells unless we are at the deepest level of the quad-tree. If the grid cell is within the query area, we stop subdividing the grid cell and insert its index into the prefix tree. If the grid cell is outside the query area, we simply ignore it. From the perspective of the prefix tree, if a prefix of a quad-tree index (but not the whole index) ends in a leaf node, it means that the corresponding spatial elements are within the query area. After the prefix tree is created in the pre-processing stage, it is used in the map function: when each mapper receives a spatial element record, the relation between the spatial record and the query area is inferred based on the prefix tree created in the pre-processing phase.

Finally, our quad-tree prefix-based spatial file filtering strategy tries to read in only the necessary spatial data rather than scanning the whole dataset stored in HDFS. Similar to the idea of using the prefix tree to organize the quad-tree indices, we separate the spatial data into smaller files such that each file shares the same prefix. After we organize the terrain files in this manner, we use them in the file filtering stage, which scans only the required records and filters out those files that are outside the query area, resulting in the minimum amount of spatial data that needs to be processed.
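The prefix-tree test described above can be sketched in a few lines of Python; this is our illustration, not the paper's code. Inserting the index of each grid cell that lies inside the query area takes O(k) time, and deciding whether a spatial object's index has an inserted index as a prefix also takes O(k) time for an index of length k.

```
# Minimal sketch of the prefix-tree (trie) test: quad-tree indices are
# strings over "0123"; a prefix lookup costs O(k) for length-k input.

class PrefixTree:
    def __init__(self):
        self.root = {}

    def insert(self, index):
        """Insert the quad-tree index of a grid cell inside the query area."""
        node = self.root
        for ch in index:
            node = node.setdefault(ch, {})
        node["$"] = True  # marks the end of an inserted index

    def covers(self, index):
        """True if some inserted index is a prefix of `index`, i.e. the
        spatial object with this index lies inside the query area."""
        node = self.root
        for ch in index:
            if "$" in node:
                return True
            if ch not in node:
                return False
            node = node[ch]
        return "$" in node

tree = PrefixTree()
tree.insert("031")          # grid cell 031 is inside the query polygon
print(tree.covers("0312"))  # True: inserted index 031 is a prefix
print(tree.covers("0320"))  # False
```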
**4.** **EXPERIMENT**

In this section, we present the experimental evaluation of our distributed polygon retrieval algorithm. We first introduce the dataset and the computing environment used in the experiments. We then evaluate and compare the proposed approach with existing solutions.

**4.1** **Dataset and Experiment Environment**

There are several data structures to represent terrain surfaces; two common examples are the digital elevation model (DEM) and TIN. The latter (TIN), which is based on a vector model, is widely used in many applications. It consists of irregularly distributed nodes and lines arranged in a network of non-overlapping triangles. In our experiments we used TIN datasets. TIN requires a considerably large storage capacity, as it can be used to represent surfaces with much higher resolution and detail.

For our experiments, we used the TIN data of Pittsburgh, which is originally divided into 5*5 equally sized grid cells, each representing a terrain of 10000 meters * 10000 meters. There are 3 million points and 6 million triangles in each grid cell, and the size of each grid's TIN file is approximately 500 MB. We conducted our experiments on a cluster of 20 virtual machines created by OpenStack, hosted on a 5-node experimental cluster. Each server in the cluster has an Intel Xeon 2.2GHz 4-core CPU with 16 GB RAM and a 1 TB hard drive at 7200 rpm. Each virtual machine in our setup has 1 VCPU with 2 GB RAM and a 20 GB hard drive running Ubuntu Server 12.04 (32 bit).

**4.2** **Algorithm Efficiency**

To demonstrate the time performance of the polygon retrieval algorithm in relation to the query area size, we generated a polygon area for each query randomly. We compared our results against Spatial-Hadoop (Eldawy et al., 2013) as the benchmark. Since Spatial-Hadoop does not directly support polygon retrieval in the TIN data format, we modified its interfaces and executed the polygon retrieval operation as suggested in the SpatialHadoop tutorial (SpatialHadoop). Table 1 shows the relationship between the time performance of the algorithm and the polygon query area on our cluster. From the table, it can be seen that as the query area becomes larger, the execution time generally increases. This is due to the increased amount of TIN data that needs to be processed in the map and reduce phases, but the trend is not a strictly increasing function, since the query shape is irregular and the spatial data are processed in predefined units of grid cells. From the results, we also infer that our algorithm on average runs 25% faster than the existing technique. This is partly due to the fact that our algorithm largely avoids geometric floating-point computation in the map phase, especially when the query area is not very large. When the query area becomes larger, the I/O time dominates the CPU time and hence the CPU time savings become less significant.

| Query Area (m²) | Time (ms) – Proposed Algorithm | Time (ms) – Spatial-Hadoop (Benchmark) |
|---|---|---|
| 6.78e+5 | 14659 | 40996 |
| 3.45e+6 | 34127 | 44302 |
| 5.26e+6 | 37608 | 50487 |
| 9.88e+6 | 37995 | 51276 |
| 1.19e+7 | 38217 | 50569 |
| 2.16e+7 | 39773 | 53906 |
| 2.48e+7 | 37469 | 54612 |

Table 1. The query time vs. query area

**4.3** **Scalability**

We next evaluate the effectiveness of our polygon retrieval algorithm by varying the size of the Hadoop cluster in terms of the number of VMs (5, 10, and 20). For this experiment, we used the random query shapes generated previously and ran queries on different cluster sizes. The results are shown in Table 2. From Table 2 we find that, overall, our proposed technique scales well and shows a significant reduction in job execution time as the number of nodes in the Hadoop cluster increases.

| Query Area (m²) | Time (ms) – VM Size 5 | Time (ms) – VM Size 10 | Time (ms) – VM Size 20 |
|---|---|---|---|
| 6.78e+5 | 19956 | 18552 | 14659 |
| 3.45e+6 | 39776 | 37893 | 34127 |
| 5.26e+6 | 44526 | 39248 | 37608 |
| 9.88e+6 | 43099 | 40543 | 37995 |
| 1.19e+7 | 44447 | 41854 | 38217 |
| 2.16e+7 | 59872 | 43893 | 39773 |
| 2.48e+7 | 58205 | 42098 | 37469 |

Table 2. The query time under different query areas and cluster sizes

**5.** **CONCLUSION**

In this paper we presented a distributed polygon retrieval algorithm based on MapReduce.
We apply two optimization strategies to reduce the CPU and I/O loads of polygon retrieval: using a quad-tree to index the terrain data and organizing the terrain data into small files based on the quad-tree prefix. The experimental results show that our approach achieves high efficiency and outperforms existing solutions.

**REFERENCES**

Akdogan, A., Demiryurek, U., Banaei-Kashani, F., Shahabi, C., 2010. Voronoi-based geospatial query processing with MapReduce. Cloud Computing Technology and Science (CloudCom), 2010 IEEE Second International Conference on. IEEE, pp. 9-16.

Eldawy, A., Li, Y., Mokbel, M.F., Janardan, R., 2013. CG_Hadoop: computational geometry in MapReduce. Proceedings of the 21st ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems. ACM, pp. 284-293.

Eldawy, A., Mokbel, M.F., 2013. A demonstration of SpatialHadoop: an efficient MapReduce framework for spatial data. Proceedings of the VLDB Endowment 6, 1230-1233.

Ji, C., Dong, T., Li, Y., Shen, Y., Li, K., Qiu, W., Qu, W., Guo, M., 2012. Inverted grid-based kNN query processing with MapReduce. ChinaGrid Annual Conference (ChinaGrid), 2012 Seventh. IEEE, pp. 25-32.

Karimi, H.A., Roongpiboonsopit, D., Wang, H., 2011. Exploring Real-Time Geoprocessing in Cloud Computing: Navigation Services Case Study. Transactions in GIS 15, 613-633.

Mark de Berg, O.C., Marc van Kreveld, Mark Overmars, 2008. Simplex Range Searching. Computational Geometry, 3rd ed. Springer Berlin Heidelberg, pp. 335-353.

Paterson, M.S., Frances Yao, F., 1986. Point retrieval for polygons. Journal of Algorithms 7, 441-447.

Puri, S., Agarwal, D., He, X., Prasad, S.K., 2013. MapReduce algorithms for GIS polygonal overlay processing. Parallel and Distributed Processing Symposium Workshops & PhD Forum (IPDPSW), 2013 IEEE 27th International. IEEE, pp. 1009-1016.

Sioutas, S., Sofotassios, D., Tsichlas, K., Sotiropoulos, D., Vlamos, P., 2008. Canonical polygon queries on the plane: A new approach. arXiv preprint arXiv:0805.2681.

SpatialHadoop. SpatialHadoop, Extensible operations.

Tung, L.H., King, I., 2000. A two-stage framework for polygon retrieval. Multimedia Tools and Applications 11, 235-255.

Wang, F., Lee, R., Liu, Q., Aji, A., Zhang, X., Saltz, J., 2011. Hadoop-GIS: A high performance query system for analytical medical imaging with MapReduce. Technical report, Emory University.

Wikipedia. Trie.

Willard, D.E., 1982. Polygon retrieval. SIAM Journal on Computing 11, 149-165.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.5194/ISPRSANNALS-II-4-W2-51-2015?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.5194/ISPRSANNALS-II-4-W2-51-2015, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://isprs-annals.copernicus.org/articles/II-4-W2/51/2015/isprsannals-II-4-W2-51-2015.pdf" }
2,014
[ "JournalArticle", "Conference" ]
true
2014-11-11T00:00:00
[ { "paperId": "b83457fb463bc3f04d32833300988ef530cb364b", "title": "CG_Hadoop: computational geometry in MapReduce" }, { "paperId": "1cec3e3652c0f05122968dce8da00313270d34e7", "title": "A Demonstration of SpatialHadoop: An Efficient MapReduce Framework for Spatial Data" }, { "paperId": "3d5a2b2c15bd54048135256d4403b48a9d316e9f", "title": "MapReduce Algorithms for GIS Polygonal Overlay Processing" }, { "paperId": "93804e1eaa377e6930a9425ae4343232b14dcef3", "title": "Inverted Grid-Based kNN Query Processing with MapReduce" }, { "paperId": "6406c5f12cb3c406b1e6f34bafd65818571bc7ad", "title": "Exploring Real‐Time Geoprocessing in Cloud Computing: Navigation Services Case Study" }, { "paperId": "3b0438e85d79f7425a8b7f5f8ce6bd6c6092bafd", "title": "Voronoi-Based Geospatial Query Processing with MapReduce" }, { "paperId": "dac9daa990c8286db17dac9c510564e8e4740cc3", "title": "Canonical Polygon Queries on the Plane: A New Approach" }, { "paperId": "c57b876ab974b36d669268ed1ac09214d6295f14", "title": "A New Approach on the Canonical k-vertex Polygon Retrieval Problem on the Plane" }, { "paperId": "a1ba0c1afd2eaa452733f880985cad38d3a1533f", "title": "Pro Oracle Spatial for Oracle Database 11g (Expert's Voice in Oracle)" }, { "paperId": "ebef35542f80515f4cb2fa46ab5274b68f53913c", "title": "A Two-Stage Framework for Polygon Retrieval" }, { "paperId": "4c3f964ddb233f30fbe71102e2f41098a0ed233c", "title": "Point Retrieval for Polygons" }, { "paperId": null, "title": "GIS polygonal overlay processing, Parallel and Distributed Processing" }, { "paperId": null, "title": "Hadoop-gis: A high performance query system for analytical medical imaging with mapreduce" }, { "paperId": "e07f55623e1744e62bb75df0ccd00fc40e5a99d7", "title": "International institute for Geo - information Science and Earth Observation : ITC" }, { "paperId": null, "title": "“ TIN support in an open source spatial database ”" }, { "paperId": "627be67feb084f1266cfc36e5aed3c3e7e6ce5f0", "title": "MapReduce: simplified data processing on large clusters" }, { "paperId": "6e8b6e1d551637f1f3e9c4f6024ad455de0c3ffd", "title": "Pro Oracle Spatial for Oracle Database 11g" }, { "paperId": "18648407e048086fad9bce8a6fb23c68229fe0d1", "title": "Remote Sensing and Spatial Information Sciences" }, { "paperId": "2498bd315c86d135bd3eed4c969cd608d9396238", "title": "Simplex Range Searching" }, { "paperId": "066bacb8f763a53ff670959590322fe747b0a9c2", "title": "Polygon Retrieval" } ]
4,384
en
[ { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01e9c5c0c4af261f5828a170cd8890c41bdd4157
[]
0.91843
HOW NFTs CAN REVOLUTIONIZE THE BOOK INDUSTRY
01e9c5c0c4af261f5828a170cd8890c41bdd4157
International Journal of Engineering Applied Sciences and Technology
[ { "authorId": "2226290870", "name": "Abhik Bhattacharya" }, { "authorId": "1808277", "name": "S. Bhattacharjee" } ]
{ "alternate_issns": null, "alternate_names": [ "Int J Eng Appl Sci Technol" ], "alternate_urls": null, "id": "3160206c-9131-4319-995b-582bc038f4e6", "issn": "2455-2143", "name": "International Journal of Engineering Applied Sciences and Technology", "type": "journal", "url": "http://ijeast.com/" }
As times change, physical books are aging and much of the younger generation prefers digital versions of physical books. The people most affected by these changes are the authors of those physical books. Authors can therefore adopt NFTs, which are unique tokens, or digital representations of ownership, of their book: they essentially turn their book into an NFT. This way the book can be distributed digitally, and the author gains considerable control over the distribution and sales of the book. This also attracts the attention of the young generation. NFTs are our future, and there are many applications of NFTs ahead.
## Vol. 7, Issue 10, ISSN No. 2455-2143, Pages 92-95 Published Online February 2023 in IJEAST (http://www.ijeast.com)

# HOW NFTs CAN REVOLUTIONIZE THE BOOK INDUSTRY

Abhik Bhattacharya, Student, Computer Applications, Narula Institute of Technology, Kolkata, India

Subhasree Bhattacharjee, Professor, Computer Applications, Narula Institute of Technology, Kolkata, India

_Abstract_ - **As times change, physical books are aging and much of the younger generation prefers digital versions of physical books. The people most affected by these changes are the authors of those physical books. Authors can therefore adopt NFTs, which are unique tokens, or digital representations of ownership, of their book: they essentially turn their book into an NFT. This way the book can be distributed digitally, and the author gains considerable control over the distribution and sales of the book. This also attracts the attention of the young generation. NFTs are our future, and there are many applications of NFTs ahead.**

_Keywords:_ **Blockchain, Smart Contracts, NFTs, Non-Fungible Tokens, ERC Tokens, ERC721 Tokens, Authors, Books, IPFS, Inter Planetary File System**

I. INTRODUCTION

Currently, authors of physical books face several problems that hinder their progress. If we observe closely, we see that the craze for books and book fairs is decreasing due to the rise of digital media. These problems are:

- **Less money:** Publishers don't pay well. Authors sometimes get about 10% from sales, and books tend to sell fewer than 10000 copies. So an author probably makes less than two weeks' salary for a book that took 52+ weeks to write.
- **Very little marketing support:** Many authors don't know how to take their books to the intended audience. Of course, limited money is one reason for this.
- **Long gestation period:** Writing takes a very long time. Most authors start seeing signs of success only after the third book or so.
- Most publishers look for well-established writers, so new writers find it difficult to break into print.
- In an effort to save royalty costs, publishers frequently conceal the true figures. Those who do pay royalties often do not reveal sales statistics.

II. TURNING DIGITAL BOOKS INTO NON-FUNGIBLE TOKENS

Non-fungible means unique and not replaceable by something else [1], and a token is a tradable, digital representation of ownership of an asset [1]. So, essentially, the author creates a digital book in PDF or a similar file format and then uploads it to a platform that creates unique digital ownership certificates. Those who buy these digital ownership certificates gain access to the digital book itself. All the transactions stay online forever because they are stored on the blockchain [2]. The NFT assets, i.e., the book and its cover image, are stored in IPFS; this way the books also stay online forever, and only the ownership changes. NFTs could be revolutionary for authors: an author can sell his/her NFT book to the digital audience directly, without any secondary medium. He can even mint 100 NFTs, which means 100 copies of the same book with different, unique token IDs. Just as we buy a copy of a book in the physical world, readers buy a copy of the book as an NFT. However, the book's copyright is retained by the author. Thus those who own the copies can resell those copies to others.
However, now every time an NFT is resold, the author receives a small royalty on each resale. This is also very appealing to a digital audience. NFTs and digital books appeal strongly to young audiences because they prefer digital books to physical books. The growth of the millennial and Gen-Z population in the NFT space has played a great role in the development of the community. This gives authors much more control over their output and its pricing. It also offers a direct revenue stream between themselves and readers that is not reliant on third parties. The author can reach a large audience who are actually interested in those books, cutting out the middlemen.

III. OVERVIEW OF NFTS & IPFS

Non-Fungible Tokens (NFTs) [3] are ownership certificates of cryptographic assets on a blockchain [4] with unique identification codes and metadata that distinguish them from each other. There are two parts to an NFT [3]:

- **NFT item** - The digital item associated with an NFT is described in the NFT's metadata (see next bullet). These items are typically stored off-chain, which means the item is not directly stored on a blockchain.
- **NFT metadata (called a token)** - NFT metadata is stored on a blockchain and typically includes information identifying the underlying NFT item, its location online, its ownership, and transaction information.

Unlike cryptocurrencies [5], NFTs cannot be traded or exchanged at equivalency. The difference between fungible and non-fungible goods is that NFTs can represent real-world items like artwork and real estate. Tokenizing these real-world tangible assets makes buying, selling, and trading these assets more efficient. This also reduces the probability of fraud in a transparent, hard-to-hack way [6]. The magic of NFTs is in their ability to execute and exchange contracts between people. Some of these contracts can be executed with code, like rent agreements, official documents, and concert tickets, for example. But you can get creative, because code is very dynamic. So let's say you are a comic artist/author and you want to give value to your readers. You could create an NFT which would allow the NFT owner to come to Comic-Con for free for 5 years and talk to you backstage. This person could sell this NFT, and every time someone resells it, you get 15% in royalties, or whatever you put in the contract.

IPFS, or InterPlanetary File System, is a distributed file storage system that stores and accesses files, websites, applications, and data. It is a peer-to-peer hypermedia protocol that is designed to preserve and grow information by making the web upgradeable and resilient. Normally, file downloads over HTTP happen from one server at a time; IPFS, however, being peer-to-peer, retrieves pieces of a file from multiple nodes at once, which yields substantial bandwidth savings. IPFS makes distributing high volumes of data very efficient, without any duplication of that data. IPFS provides an open, flat web. [7]

IV. HOW AN AUTHOR MINTS AN NFT FROM HIS BOOK

Most users create and buy NFTs on various NFT marketplaces. The user uploads a digital file of the item, and through the use of smart contracts [8], the NFT is "minted", or recorded, on a blockchain. The uploaded image is the cover of the book, and the uploaded file is the PDF version of the book itself. Then a JSON file is created to hold the metadata.
Finally, the token is minted, and the contract address as well as the token ID are returned to the author. The author can then use that address and token ID to list his NFT for sale in any NFT marketplace [3]. An NFT marketplace is a platform where NFTs are sold and exchanged, similar to exchanges dedicated to cryptocurrencies. Some NFT marketplaces accept payments in government-issued currency, such as the U.S. dollar, but most strictly accept cryptocurrency. Some NFT marketplace operators pay royalties to creators after each sale, enabling continued income for artists and other content creators as NFTs of their content are transferred and resold. [3] Popular NFT marketplaces are OpenSea, Axie Infinity, CryptoPunks, Atomic Market, etc. [9]

Figure 1: Author mints an NFT from his book (flowchart; figure not reproduced here)

V. A BASIC NFT SMART CONTRACT

A basic NFT smart contract means a smart contract that inherits from an ERC721 token. Each token collection has a name and a symbol; the contract in Figure 2 (not reproduced here) uses "Abhikb" as the name and "AB" as the symbol. It takes an IPFS URI to mint a token: the token is minted, the IPFS URI is set as the token URI for that particular token ID, and the token ID is then incremented by 1. A smart contract is a software program that lives in a decentralized environment, i.e., a blockchain. This type of code is immutable (it cannot be changed), transparent, and automated, meaning everyone can see it but no one can change or update it, and it can execute by itself without needing any third-party intervention [10].

ERC721 is a standard for representing ownership of non-fungible tokens, where each token is unique. It provides functionalities like transferring tokens from one account to another, retrieving the current token balance of an account, getting the owner of a specific token, and getting the total supply of the token available on the network. Besides these, it also allows approving a third-party account to move a token from an account [11]. When an individual purchases an NFT, the NFT item, such as the image file, appears in the user's digital wallet [3] through an application programming interface (API), which allows software applications to communicate and share data.
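To make the minting flow concrete, here is a hypothetical Python sketch using the web3.py library (v6-style API). The node URL, contract address, ABI file, and the mintToken(uri) function name are illustrative assumptions, not taken from this paper; a real author would substitute the details of their own deployed ERC721 contract or use a marketplace's minting interface.

```
# Hypothetical sketch of minting a book NFT with web3.py (v6-style API).
# The node URL, contract address, abi.json, and mintToken(uri) are all
# illustrative assumptions; substitute your own deployed ERC721 contract.

import json
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # assumed local node
author = w3.eth.accounts[0]                            # the author's account

with open("abi.json") as f:        # ABI of the (assumed) deployed contract
    abi = json.load(f)
contract = w3.eth.contract(address="0xYourContractAddress", abi=abi)

# Metadata pointing at the IPFS-hosted cover image and book PDF; this JSON
# is uploaded to IPFS first, and its CID becomes the token URI.
metadata = {
    "name": "My Book, Copy #1",
    "image": "ipfs://<cover-image-CID>",
    "book": "ipfs://<book-pdf-CID>",
}
token_uri = "ipfs://<metadata-CID>"

# mintToken is a hypothetical function name; use your contract's own.
tx_hash = contract.functions.mintToken(token_uri).transact({"from": author})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
print("minted in block", receipt.blockNumber)
```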
VI. FUTURE OF NFTS

Any context where we need to reliably track and verify authenticity or ownership is a potential application for NFTs [1]. NFTs are going to be big in 2023 and have seen new applications such as loyalty programs, ticketing, and metaverse applications, apart from improvements in incentive-based gaming, PFP collections, and financial applications in fashion and art [5]. NFTs can be used to verify documents such as certificates, diplomas, medical records, and passports, as well as collectibles, artwork, gaming, and other markets [12]. For example, hiring managers can quickly check a job candidate's certifications and degrees when reviewing academic credentials. This is a significant step forward in preventing fraud and making the verification process more precise. Today's art JPEG is tomorrow's marriage contract, mortgage, home purchase, vehicle purchase, or concert ticket.

Example: Let's say someone buys a Nike NFT which contractually promises exclusive limited sneaker drops only to holders of the token. They are creating value through scarcity and authenticity while building community and branding. When someone buys 4 Nike NFTs and gets 4 exclusive drops delivered to their door every 10 weeks, they can then auction those off to other sneakerheads. Not only does it bring long-term value to its holders (in good projects), but it allows a source of crowdfunding without sacrificing equity to big investors. NBA Top Shot offers collections of videos and pictures of top NBA moments, and much more, for fans. Sports organizations are looking for innovative ways to enhance fan engagement through NFTs [1], such as tickets, fractionalized team ownership, etc. Tickets as NFTs solve multiple concerns with traditional tickets, including verifying authenticity, reducing barriers to resale when a ticket holder cannot attend a game, and allowing markets to price tickets dynamically. NFTs allow athletes to monetize their brand, which includes NIL, i.e., their name, image, and likeness. Athletes' brands are often connected to their league and team. With NFTs, athletes are encouraged to engage with their personal brand and popularity by creating unique images and special fan experiences that eliminate intermediaries. More and more musicians are adopting NFTs to connect with their fans. A few marketplaces have already started selling partial-ownership music NFTs [13] in partnership with famous music producers. In the future, colleges can transfer students' degree certificates to them as NFTs, with the students as owners. No student can fake a degree if degrees are NFTs. Your diploma will come as an NFT because we'll know it was Harvard that minted it.

VII. CONCLUSION

This paper explores one of the limitless applications of NFTs and reviews existing research papers on NFTs. Through this paper, the authors try to show how an ordinary author can mint his/her own NFT from a book. Authors can go to an NFT marketplace and mint their book as an NFT. They can then sell that NFT to potential buyers at the desired price. The concept of ownership of authentic purchased digital assets like images or artworks, videos, and music excited a lot of collectors, and this helped the sudden growth of the NFT market. Authors can leverage this market to solve the problems they currently face. It is important to state the limitations of this paper. The main limitation of this research paper is that only seven papers were reviewed while preparing it. Secondly, there are very few marketplaces that allow minting metadata along with the image for an NFT; it is this metadata that will carry the link to the PDF or digital book. The study can be further extended by including more literature from this area as well as some other areas.

VIII. REFERENCES

[1]. Baker, B., Pizzo, A., & Su, Y. (2022). Non-Fungible Tokens: A Research Primer and Implications for Sport Management. Sports Innovation Journal, 3, 1-15.

[2]. What is blockchain? A beginner's guide for 2021. (2021). Columbia Engineering. https://bootcamp.cvn.columbia.edu/blog/what-isblockchain-beginners-guide/

[3]. Busch, K. E. (2022). Congressional Research Service.
Retrieved January 25, 2023, from https://crsreports.congress.gov/product/pdf/R/R47189

[4]. Hayes, A. (2022, December 19). Blockchain facts: What is it, how it works, and how it can be used. Investopedia. Retrieved January 25, 2023, from https://www.investopedia.com/terms/b/blockchain.asp

[5]. Frankenfield, J. (2023, January 24). Cryptocurrency explained with pros and cons for investment. Investopedia. Retrieved January 25, 2023, from https://www.investopedia.com/terms/c/cryptocurrency.asp

[6]. Sharma, R. (2023, January 24). Non-fungible token (NFT): What it means and how it works. Investopedia. Retrieved January 25, 2023, from https://www.investopedia.com/non-fungible-tokens-nft-5115211

[7]. IPFS powers the distributed web. (n.d.). Retrieved January 25, 2023, from https://ipfs.tech/

[8]. Buterin, V. (2014). A next-generation smart contract and decentralized application platform. White paper, 3(37), 2-1.

[9]. Rehman, W., e Zainab, H., Imran, J., & Bawany, N. Z. (2021, December). NFTs: Applications and challenges. In 2021 22nd International Arab Conference on Information Technology (ACIT) (pp. 1-7). IEEE.

[10]. Bhattacharya, A., & Bhattacharjee, S. A review on applications of blockchain in banking sectors.

[11]. Ethereum Developer Docs (2023). ERC-721 non-fungible token standard. ethereum.org. Retrieved January 25, 2023, from https://ethereum.org/en/developers/docs/standards/tokens/erc-721/

[12]. Bao, H., & Roubaud, D. (2022). Non-Fungible Token: A Systematic Review and Research Agenda. Journal of Risk and Financial Management, 15(5), 215.

[13]. Folgieri, R., Arnold, P., & Buda, A. G. (2022). NFTs in Music Industry: Potentiality and Challenge. Proceedings of EVA London 2022, 63-64.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.33564/ijeast.2023.v07i10.011?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.33564/ijeast.2023.v07i10.011, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://doi.org/10.33564/ijeast.2023.v07i10.011" }
2,023
[ "JournalArticle" ]
true
2023-02-01T00:00:00
[]
4,448
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Mathematics", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Mathematics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01eade9aa5eab9f7f75fa35a831fbc98d2441618
[ "Computer Science", "Mathematics" ]
0.839999
Bad directions in cryptographic hash functions
01eade9aa5eab9f7f75fa35a831fbc98d2441618
IACR Cryptology ePrint Archive
[ { "authorId": "2175374", "name": "D. Bernstein" }, { "authorId": "2065868", "name": "Andreas Hülsing" }, { "authorId": "144337513", "name": "T. Lange" }, { "authorId": "1687116", "name": "R. Niederhagen" } ]
{ "alternate_issns": null, "alternate_names": [ "IACR Cryptol eprint Arch" ], "alternate_urls": null, "id": "166fd2b5-a928-4a98-a449-3b90935cc101", "issn": null, "name": "IACR Cryptology ePrint Archive", "type": "journal", "url": "http://eprint.iacr.org/" }
null
# Bad directions in cryptographic hash functions

Daniel J. Bernstein^{1,2}, Andreas Hülsing^2, Tanja Lange^2, and Ruben Niederhagen^2

1 Department of Computer Science, University of Illinois at Chicago, Chicago, IL 60607–7045, USA, djb@cr.yp.to

2 Department of Mathematics and Computer Science, Technische Universiteit Eindhoven, P.O. Box 513, 5600 MB Eindhoven, The Netherlands, andreas.huelsing@googlemail.com, tanja@hyperelliptic.org, ruben@polycephaly.org

**Abstract. A 25-gigabyte "point obfuscation" challenge "using security parameter 60" was announced at the Crypto 2014 rump session; "point obfuscation" is another name for password hashing. This paper shows that the particular matrix-multiplication hash function used in the challenge is much less secure than previous password-hashing functions are believed to be. This paper's attack algorithm broke the challenge in just 19 minutes using a cluster of 21 PCs.**

**Keywords: symmetric cryptography, hash functions, password hashing, point obfuscation, matrix multiplication, meet-in-the-middle attacks, meet-in-many-middles attacks**

## 1 Introduction

_Under normal circumstances, the system protected the passwords so that they could be accessed only by privileged users and operating system utilities. But through accident, programming error, or deliberate act, the contents of the password file could occasionally become available to unprivileged users. . . . For example, if the password file is saved on backup tapes, then those backups must be kept in a physically secure place. If a backup tape is stolen, then everybody's password needs to be changed. Unix avoids this problem by not keeping actual passwords anywhere on the system._ —"Practical UNIX & Internet Security" [23, p. 84], 2003

This work was supported by the National Science Foundation under grant 1018836, by the Netherlands Organisation for Scientific Research (NWO) under grant 639.073.005, and by the European Commission through the ICT program under contract INFSO-ICT-284833 (PUFFIN). Permanent ID of this document: 7c4f480d7f090d69c58b96437b6011b1. Date: 2015.02.23.

Consider a server that knows a secret password 11000101100100. The server could check an input password against this secret password using the following checkpassword algorithm (expressed in the Python language):

```
def checkpassword(input):
    return int(input == "11000101100100")
```

But it is much better for the server to use the following checkpassword_hashed algorithm (see Appendix A for the definition of sha256hex):

```
def checkpassword_hashed(input):
    return int(sha256hex(input) == (
        "ba0ab099c882de48c4156fc19c55762e"
        "83119f44b1d8401dba3745946a403a4f"
    ))
```

It is easy for the server to write down this checkpassword_hashed algorithm in the first place: apply SHA-256 to the secret password to obtain the string ba0...a4f, and then insert that string into a standard checkpassword_hashed template. (Real servers normally store hashed passwords in a separate database, but in this paper we are not concerned with superficial distinctions between code and data.)

There is no reason to believe that these two algorithms compute identical functions. Presumably SHA-256 has a second (and third and so on) preimage of SHA-256(11000101100100), i.e., a string for which checkpassword_hashed returns 1 while checkpassword returns 0.
However, finding any such string would be a huge advance in SHA-256 cryptanalysis. The checkpassword_hashed algorithm outputs 1 for input 11000101100100, just like checkpassword, and outputs 0 for all other inputs that have been tried, just like checkpassword.

The core advantage of checkpassword_hashed over checkpassword is that it is obfuscated. If the checkpassword algorithm is leaked to an attacker then the attacker immediately sees the secret password and seizes control of all resources protected by that password. If checkpassword_hashed is leaked to an attacker then the attacker still does not see the secret password without solving a SHA-256 preimage problem: the loss of confidentiality does not immediately create a loss of integrity.

Obfuscation is a broad concept. There are many aspects of programs that one might wish to obfuscate and that are not obfuscated in checkpassword_hashed: for example, one can immediately see that the program is carrying out a SHA-256 computation, and that (unless SHA-256 is weak) there are very few short inputs for which the program prints 1. In the terminology of some recent papers (see Section 2), what is obfuscated here is the key in a particular family of "keyed functions", but not the choice of family. Further comments on general obfuscation appear below. We emphasize password obfuscation because it is an important special case: a widely deployed application using widely studied symmetric techniques.

**1.1. State-of-the-art password hashing.** Of course, some preimage problems can be efficiently solved. If the attacker knows (or correctly guesses) that the secret password is a string of 14 digits, each 0 or 1, then the attacker can simply try hashing all 2^14 possibilities for that string. Even worse, if the attacker sees many checkpassword_hashed algorithms from many users' secret passwords, the attacker can efficiently compare all of them to this database of 2^14 hashes: the cost of multiple-target preimage attacks is essentially linear in the sum of the number of targets and the number of guesses, rather than the product.

There are three standard responses to these problems. First, to eliminate the multiple-target problem, the server randomizes the hashing. For example, the server might store the same secret password 11000101100100 as the following checkpassword_hashed_salted algorithm, where prefix was chosen randomly by the server for storing this password:

```
def checkpassword_hashed_salted(input):
    prefix = "b1884428881e20fe61c7629a0f71fcda"
    return int(sha256hex(prefix + input) == (
        "5f5616075f77375f1e36e2b707e55744"
        "91a308c39653afe689b7a958455e65d2"
    ))
```

The attacker sees the prefix and can still find this password using at most 2^14 guesses, but the attacker can no longer share work across multiple targets. (This benefit does not rely on randomness: any non-repeating prefix is adequate. For example, the prefix can be chosen as a counter; on the other hand, this requires maintaining state and raises questions of what information is leaked by the counter.)

Second, the server chooses a hash function that is much more expensive than SHA-256, multiplying the server's cost by some factor F but also multiplying the attack cost by almost exactly F, if the hash function is designed well.
The ongoing "Password Hashing Competition" [9] has received dozens of submissions of "memory-hard" hash functions that are designed to be expensive to compute even for an attacker manufacturing special-purpose chips to attack those particular functions.

Third, users are encouraged to choose passwords from a much larger space. A password having only 14 bits of entropy is highly substandard: for example, the recent paper [14] reports techniques for users to memorize passwords with four times as much entropy.

**1.2. Matrix-multiplication password hashing: the "point obfuscation" challenge.** A "point obfuscation" challenge was announced by Apon, Huang, Katz, and Malozemoff [7] at the Crypto 2014 rump session. "Point obfuscation" is the same concept as password hashing: see, e.g., [33] (a hashed password is a "provably secure obfuscation of a 'point function' under the random oracle model").

The challenge consists of "an obfuscated 14-bit point function on Dropbox": a 25-gigabyte program with the promise that the program returns 1 for one secret 14-bit input and 0 for all other 14-bit inputs. The goal of the challenge is to determine the secret 14-bit input: "learn the point and you win!" An accompanying October 2014 paper [5] described the challenge as having "security parameter 60", where "security parameter λ is designed to bound the probability of successful attacks by 2^{−λ}".

We tried the 25-gigabyte program on a PC with the following relevant resources: an 8-core 125-watt AMD FX-8350 "Piledriver" CPU (about $200), 32 gigabytes of RAM (about $400), and a 2-terabyte hard drive (about $100). The program took slightly over 4 hours for a single input. A brute-force attack using this program would obviously have been feasible but would have taken over 65536 hours worst-case and over 32768 hours on average, i.e., an average of nearly 4 years on the same PC, consuming 500 watt-years of electricity.

**1.3. Attacking matrix-multiplication password hashing.** In this paper we explain how we solved the same challenge in just 19 minutes using a cluster of 21 such PCs. The solution is 11000101100100; we reused this string above as our example of a secret password. Of course, knowing this solution allowed us to compress the original program to a much faster checkpassword algorithm. The time for our attack algorithm against a worst-case input point would have been just 34 minutes, about 5000 times faster than the original brute-force attack, using under 0.2 watt-years of electricity. Our current software is slightly faster: it uses just 29.5 minutes on 22 PCs, or 35.7 minutes on 16 PCs.

More generally, for an n-bit point function obfuscated in the same way, our attack algorithm is asymptotically n^4/2 times faster than a brute-force search using the original program. This quartic speedup combines four linear speedups explained in this paper, taking advantage of the matrix-multiplication structure of the obfuscated program. Two of the four speedups (Section 3) are applicable to individual inputs, and could have been integrated into the original program, preserving the ratio between attack time and evaluation time; but the other two speedups (Section 4) share work between separate inputs, making the attack much faster than a simple brute-force attack. See Section 1.6 for generalizations to more functions.

**1.4. Matrix-multiplication password hashing vs. state-of-the-art password hashing.**
It is well known that a 2^n-guess preimage attack against a hash function, cipher, etc. does not cost exactly 2^n times as much as a single function evaluation: there are always ways to merge small amounts of initial work across multiple inputs, and to skip small amounts of final work. See, for example, [34] ("Reduce the DES encryption from 16 rounds to the equivalent of ≈ 9.5 rounds, by shortcircuit evaluation and early aborts"), [29] ("biclique" attacks against various hash functions), and [13] ("biclique" attacks against AES). However, one expects these speedups to become less and less noticeable for functions that have more and more rounds. For any state-of-the-art cost-C password-hashing function, the cost of a 2^n-guess preimage attack is very close to 2^n · C.

The matrix-multiplication function is much weaker: the cost of our attacks is far below 2^n times the cost of the best method known to evaluate the function. Even worse, the matrix-multiplication approach has severe performance problems that end up limiting the number n of input bits. The "obfuscated point function" includes 2n matrices, each matrix having n+2 rows and n+2 columns, each entry having approximately 4((λ + 1)(n + 4) + 2)^2 log_2 λ bits; recall that λ is the target "security parameter". If λ is just 60 and n is above 36 then a single obfuscated password does not fit on a 2-terabyte hard drive, never mind the time and memory required to print and evaluate the function.

Earlier password-hashing functions handle a practically unlimited number of input bits with negligible slowdowns; fit obfuscated passwords into far fewer bits (a small constant times the target security level); allow the user far more flexibility to select the amount of time and memory used to check a password; and do not have the worrisome matrix structure exploited by our attacks.

**1.5. Context: obfuscating other functions.** Why, given the extensive hashing literature, would anyone introduce a new password-obfuscation method with unnecessary mathematical structure, obvious performance problems, and no obvious advantages? To answer this question, we now explain the context that led to the Apon–Huang–Katz–Malozemoff point-obfuscation challenge; we start by emphasizing that their goal was not to introduce a new point-obfuscation method.

Point functions are not the only functions that cryptographers obfuscate. Consider, for example, the following fast algorithm to compute the pq-th power of an input mod pq, where p and q are particular prime numbers shown in the algorithm:

```
def rsa_encrypt_unobfuscated(x):
    p = 37975227936943673922808872755445627854565536638199
    q = 40094690950920881030683735292761468389214899724061
    pinv = 23636949109494599360568667562368545559934804514793
    qinv = 15587761943858646484534622935500804086684608227153
    return (qinv*q*pow(x,q,p) + pinv*p*pow(x,p,q)) % (p*q)
```

The following algorithm is not as fast but uses only the product pq:

```
def rsa_encrypt(x):
    pq = int("15226050279225333605356183781326374297180681149613"
             "80688657908494580122963258952897654000350692006139")
    return pow(x,pq,pq)
```

These algorithms compute exactly the same function x ↦ x^pq mod pq, but the primes p and q are exposed in rsa_encrypt_unobfuscated while they are obfuscated in rsa_encrypt. This obfuscation is exactly the reason that rsa_encrypt is safe to publish. In other words, RSA public-key encryption is an obfuscation of a secret-key encryption scheme.
(Note that this size of pq is too small for serious security. The particular pq shown here was introduced many years ago as the "RSA-100" challenge and was factored in 1991. See [3]. One should take larger primes p and q.)

In a FOCS 2013 paper [25], Garg, Gentry, Halevi, Raykova, Sahai, and Waters proposed an obfuscation method that takes any fast algorithm A as input and "efficiently" produces an obfuscated algorithm Obf(A). The security goal for Obf is to be an "indistinguishability obfuscator": this means that Obf(A) is indistinguishable from Obf(A′) if A and A′ are fast algorithms computing the _same function_.

For example, if Obf is an indistinguishability obfuscator, and if an attacker can extract p and q from Obf(rsa_encrypt_unobfuscated), then the attacker can also extract p and q from Obf(rsa_encrypt), since the two obfuscations are indistinguishable; so the attacker can "efficiently" extract p and q from pq, by first computing Obf(rsa_encrypt). Contrapositive: if Obf is an indistinguishability obfuscator and the attacker cannot "efficiently" extract p and q from pq, then the attacker cannot extract p and q from Obf(rsa_encrypt_unobfuscated); i.e., Obf(rsa_encrypt_unobfuscated) hides p and q at least as effectively as rsa_encrypt does.

Another example, returning to symmetric cryptography: It is reasonable to assume that checkpassword and checkpassword_hashed compute the same function if the input length is restricted to, e.g., ≤ 200 bits. This assumption, together with the assumption that Obf is an indistinguishability obfuscator, implies that Obf(checkpassword) hides a 200-bit secret password at least as effectively as checkpassword_hashed does.

These examples illustrate the generality of indistinguishability obfuscation. In the words of Goldwasser and Rothblum [27], efficient indistinguishability obfuscation is "best-possible obfuscation", hiding everything that ad-hoc techniques would be able to hide. There are, however, two critical caveats.

First, it is not at all clear that the Obf proposal from [25] (or any newer proposal) will survive cryptanalysis. There are actually two alternative proposals in [25]: the first relies on multilinear maps [24] from Garg, Gentry, and Halevi, and the second relies on multilinear maps [22] from Coron, Lepoint, and Tibouchi. In a paper [19] posted early November 2014 (a week after we announced our solution to the "point obfuscation" challenge), Cheon, Han, Lee, Ryu, and Stehlé announced a complete break of the main security assumption in [22], undermining a remarkable number of papers built on top of [22]. The attack from [19] does not seem to break the application of [22] to point obfuscation (since "encodings of zero" are not provided in this context), but it illustrates the importance of leaving adequate time for cryptanalysis. A followup work by Gentry, Halevi, Maji, and Sahai [26] extends the attack from [19] to some settings where no "encodings of zero" below the "maximal level" are available, although the authors of [26] state that "so far we do not have a working attack on current obfuscation candidates".

Second, the literature already contains much simpler, much faster, much more thoroughly studied techniques for important examples of obfuscation, such as password hashing and public-key encryption.
Even if the new proposals in fact provide indistinguishability obfuscation for more general functions, there is no reason to believe that they can provide competitive security and performance for functions where the previous techniques apply. We would expect the generality of these proposals to damage the security-performance curve in a broad range of real applications covered by the previous techniques, implying that these proposals should be used only for applications outside that range.

The goal of Apon, Huang, Katz, and Malozemoff was to investigate "the practicality of cryptographic program obfuscation". Their obfuscator is not limited to point functions; it takes more general circuits as input. However, after performance evaluation, they concluded that "program obfuscation is still far from being deployable, with the most complex functionality we are able to obfuscate being a 16-bit point function"; see [5, page 2]. They chose a 14-bit point function as a challenge.

**1.6. Attacking matrix-multiplication-based obfuscation of any function.** The real-world importance of password hashing justifies focusing on point functions, but we have also adapted our attack algorithm to arbitrary n-bit-to-1-bit functions. Specifically, we have considered the method explained in [5] to obfuscate an arbitrary n-bit-to-1-bit function, and adapted our attack algorithm to this level of generality. For the general case, with u pairs of w × w matrices using n input bits, we save a factor of roughly uw/2 in evaluating each input, and a further factor of approximately n/log_2 w in evaluating all inputs. The n/log_2 w increases to n/2 for the standard input-bit order described in [5], but for an arbitrary input-bit order our attack is still considerably faster than a simple brute-force attack. See Section 8.

We comment that standard cryptographic hashing can be used to obfuscate general functions. We suggest the following trivial obfuscation technique as a baseline for future obfuscation challenges: precompute a table of hashes of the inputs that produce 1; add fake random hashes to pad the table to size 2^n (or a smaller size T, if it is acceptable to reveal that at most T inputs produce 1); and sort the table for fast lookups. This does not take polynomial time as n → ∞ (for T = 2^n), but it nevertheless appears to be smaller, faster, and stronger than all of the recently proposed matrix-multiplication-based obfuscation techniques for every feasible value of n.
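For concreteness, here is a minimal Python sketch (ours, using only the standard library) of this baseline technique; the table size T = 2^14 matches the challenge's input length and is otherwise an arbitrary choice.

```
# Minimal sketch of the trivial hash-table obfuscation baseline: hash the
# accepting inputs, pad with fake random "hashes" to size T, sort, look up.

import bisect
import hashlib
import os

def obfuscate(accepting_inputs, T):
    """Hash the inputs that produce 1, pad with fakes to size T, sort."""
    table = [hashlib.sha256(x.encode()).digest() for x in accepting_inputs]
    while len(table) < T:
        table.append(os.urandom(32))  # fake random 32-byte entry
    table.sort()
    return table

def evaluate(table, x):
    """Return 1 if the hash of x is in the table, else 0."""
    h = hashlib.sha256(x.encode()).digest()
    i = bisect.bisect_left(table, h)
    return int(i < len(table) and table[i] == h)

table = obfuscate(["11000101100100"], T=2**14)
print(evaluate(table, "11000101100100"))  # 1
print(evaluate(table, "00000000000000"))  # 0 (except with tiny probability
                                          # of a fake entry colliding)
```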
## 2 Review of the obfuscation scheme

Since the initial Obf proposal by Garg, Gentry, Halevi, Raykova, Sahai, and Waters [25] a lot of research was spent on finding applications and improving the proposed scheme. The challenge from [5] which we broke uses the relaxed-matrix-branching-program method by Ananth, Gupta, Ishai, and Sahai [4] to generate a size-reduced obfuscated program and combines it with the integer-based multilinear map (CLT) due to Coron, Lepoint, and Tibouchi [22]. As mentioned in Section 1, the recent CLT attack by Cheon, Han, Lee, Ryu, and Stehlé [19] relies on "encodings of zero" and therefore does not apply to this point-obfuscation scheme. Our attack will also work for other matrix-multiplication-type obfuscation schemes with a similar structure, and in particular we see no obstacle to applying the same attack strategy with the Garg–Gentry–Halevi [24] multilinear map in place of CLT.

Most of the Obf literature does not state concrete parameters and does not present computer-verified examples. The first implementations, first examples, and first challenge were from Apon, Huang, Katz, and Malozemoff in [5], [6], and [7], providing an important foundation for quantifying and verifying attack performance. The challenge given in [5] is an obfuscation of a point function, so we first give a self-contained description of these obfuscated point-function programs from the attacker's perspective; we then comment briefly on more general functions. For details on how the matrices below are constructed, we refer the reader to [4], [22], and of course [5]; but these details are not relevant to our attack.

**2.1. Obfuscated point functions.** A point function is a function on {0, 1}^n that returns 1 for exactly one secret vector of length n and 0 otherwise. The obfuscation scheme starts with this secret vector and an additional security parameter λ related to the security of the multilinear map.

The obfuscated version of the point function is given by a list of 2n public (n + 2) × (n + 2) matrices B_{b,k} for 1 ≤ b ≤ n and k ∈ {0, 1} with integer entries; a row vector s of length n + 2 with integer entries; a column vector t of length n + 2 with integer entries; an integer p_zt (a "zero test" value, not to be confused with an "encoding of zero"); and a positive integer q. All of the entries and p_zt are between 0 and q − 1 and appear random. The number of bits of q has an essentially linear impact upon our attack cost; [5] chooses the number of bits of q to be approximately 4((λ + 1)(n + 4) + 2)^2 log_2 λ for multilinear-map security reasons.

The obfuscated program works as follows:

- Take as input an n-bit vector x = (x_1, x_2, . . ., x_n).
- Compute the integer matrix A = B_{1,x_1} B_{2,x_2} · · · B_{n,x_n} by successive matrix multiplications.
- Compute the integer y(x) = sAt by a vector-matrix multiplication and a dot product.
- Compute y(x) p_zt and reduce mod q to the range [−(q − 1)/2, (q − 1)/2].
- Multiply the remainder by 2^{2λ+11}, divide by q, and round to the nearest integer. This result is by definition the matrix-multiplication hash of x.
- Output 0 if this hash is 0; output 1 otherwise.

We have confirmed these steps against the software in [6].

The matrix-multiplication hash here is reminiscent of "Fast VSH" from [20]. Fast VSH hashes a block of input as follows: use input bits to select precomputed primes from a table, multiply those primes, and reduce mod something. The matrix-multiplication hash hashes a block of input as follows: use input bits to select precomputed matrices from a table, multiply those matrices, and reduce mod something. The matrices are secretly chosen with additional structure, but we do not use that structure in our attack.
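The steps above translate directly into the following Python sketch. This is our illustration, not the software of [6]; the data layout (B[b][k] holding the matrix selected by bit k at position b+1, with s, t, pzt, q and lam parsed from the challenge files) is an assumption.

```
# Direct transcription of the evaluation steps above (ours, not [6]).
# B[b][k] is the matrix B_{b+1,k}; s, t, pzt, q, lam come from the challenge.

def mat_mul(X, Y):
    """Schoolbook product of integer matrices X and Y."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def evaluate(x, B, s, t, pzt, q, lam):
    """Evaluate the obfuscated point function on the bit vector x."""
    A = B[0][x[0]]
    for b in range(1, len(x)):                 # A = B_{1,x_1} ... B_{n,x_n}
        A = mat_mul(A, B[b][x[b]])
    sA = [sum(s[i] * A[i][j] for i in range(len(s))) for j in range(len(t))]
    y = sum(sA[j] * t[j] for j in range(len(t)))   # y = s A t
    r = (y * pzt) % q                          # reduce to [-(q-1)/2,(q-1)/2]
    if r > (q - 1) // 2:
        r -= q
    num = r << (2 * lam + 11)                  # multiply by 2^(2*lam+11)
    h = (2 * num + q) // (2 * q)               # divide by q, round (half up)
    return 0 if h == 0 else 1                  # hash 0 means "not the point"
```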
As the code automatically used all 8 cores of the CPU, this leads to a total of 2[48][.][74] cycles per evaluation. A brute-force computation using this software would take 2[14] 2[48][.][74] = 2[62][.][74] _·_ cycles worst-case, and would take more than 2[60] cycles for 85% of all inputs. For comparison, recall that the CLT parameters were designed to just barely provide 2[λ] = 2[60] security, although the time scale for the 2[60] here is not clear. If the time scale of the security parameter is close to one cycle then the cost of these two attacks is balanced. In their Crypto 2014 rump-session announcement [8], the authors declared this brute-force attack to be infeasible: “The great part is, it’s only 14 bits, so you think you can try all 2 to the 14 points, but it takes so long to evaluate that it’s not feasible.” The authors concluded in [5, Section 5] that they were “able to obfuscate some ‘meaningful’ programs” and that “it is important to note that the fact that we can produce any ‘useful’ obfuscations at all is surprising”. We agree that a 500-watt-year computation is a nonnegligible investment of computer time (although we would not characterize it as “infeasible”). However, in Section 3 we show how to make evaluation two orders of magnitude faster, bringing a brute-force attack within reach of a small computer cluster in a matter of days. Furthermore, in Section 4 we present a meet-in-the-middle attack that is another two orders of magnitude faster. **2.3. Obfuscation of general functions and keyed functions. The obfusca-** tion scheme in [4] transforms any function into a sequence of matrix multiplications. At every multiplication the matrix is selected based on a bit of the input _x but usually the bits of x are used multiple times. For general circuits of length_ _ℓ_ the paper constructs an oblivious relaxed matrix branching program of length _nℓ_ which cycles ℓ times through the n entries of x in sequence to select from 2nℓ matrices. In that case most of the matrices are obfuscated identity matrices but the regular access pattern stops the attacker from learning anything about the function. Sometimes (as in the password-hashing example) the structure of the circuit is already public, and all that one wants to obfuscate is a secret key. In other words, the circuit computes fz(x) = φ(z, x) for some secret key z, where φ is a publicly known branching program; the obfuscation needs to protect only the secret key z, and does not need to hide the function φ. This is called “obfuscation of keyed functions” in [4]. For this class of functions the length of the obfuscated program equals the length of the circuit for φ; the bits of x are used (and reused as often as necessary) in a public order determined by φ. The designer can drive up the cost of brute-force attacks by including additional matrices as in the general case, but this also increases the obfuscation time, obfuscated-program size, and evaluation time. ----- 10 Daniel J. Bernstein, Andreas H¨ulsing, Tanja Lange, and Ruben Niederhagen ## 3 Faster algorithms for one input This section describes two speedups to the obfuscated programs described in Section 2. These speedups are important for constructive as well as destructive applications. Combining these two ideas reduced our time to evaluate the obfuscated point function for a single input from 245 minutes to under 5 minutes (4 minutes 51 seconds), both measured on the same 8-core CPU. 
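In terms of the evaluation sketch above, the two ideas (detailed in Sections 3.2 and 3.3 below) amount to keeping a row vector instead of a full matrix and reducing mod q after every dot product. A sketch of ours, with the same hypothetical names as before:
```
# Sped-up evaluation: y(x) = (...((s B_{1,x[1]}) B_{2,x[2]}) ...) B_{n,x[n]} t,
# reducing mod q throughout so that every intermediate entry stays below q.
def evaluate_fast(x, B, s, t, q):
    n = len(x)
    w = n + 2
    L = s
    for k in range(n):
        C = B[k][x[k]]
        L = [sum(L[i] * C[i][j] for i in range(w)) % q for j in range(w)]
    return sum(L[j] * t[j] for j in range(w)) % q  # then apply the zero test
```
The same inner loop reappears in Figure 4.1 below.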
The authors of [6] have recently included these speedups in their software, with credit to us. **3.1. Cost analysis for the original algorithm. Schoolbook multiplication of** the two (n +2) _×_ (n +2) matrices B1,x[1] and B2,x[2] uses (n +2)[3] multiplications of matrix entries. Similar comments apply to all n 1 matrix multiplications, _−_ for a total of (n 1)(n + 2)[3] multiplications of matrix entries. _−_ This quartic operation count understates the asymptotic complexity of the algorithm for two reasons, even when the security parameter λ is treated as a constant. The first reason is that the number of bits of q grows quadratically with n. The second reason is that the entries in B1,x[1]B2,x[2] have about twice as many bits as the entries in the original matrices, the entries in B1,x[1]B2,x[2]B3,x[3] have about three times as many bits, etc. The paper [5] reports timings for point functions with n 8, 12, 16 for security parameter 52, and in particular reports _∈{_ _}_ microbenchmarks of the time taken for each of the matrix products, starting with the first; these microbenchmarks clearly show the slowdown from one product to the next, and the paper explains that “each multiplication increases the multilinearity level of the underlying graded encoding scheme and thus the size of the resulting encoding”. We now account for the size of the matrix entries. Recall that state-of-the-art multiplication techniques (see, e.g., [11]) take time essentially linear in b, i.e., _b[1+][o][(1)], to multiply b-bit integers. The original entries have size quadratic in n,_ and the products quickly grow to size cubic in n. More precisely, the final product _A = B1,x[1] · · · Bn,x[n] has entries bounded by (n + 2)[n][−][1](q −_ 1)[n] and typically larger than (q 1)[n]; similar bounds apply to intermediate products. More than _−_ _n/2 of the products have typical entries above (q_ 1)[n/][2], so the multiplication _−_ time is dominated by integers having size cubic in n. The total time to compute A is n[7+][o][(1)] for constant λ, equivalent to n[5+][o][(1)] multiplications of integers on the scale of q. This time dominates the total time for the algorithm. **3.2. Intermediate reductions mod q. We do better by limiting the growth** of the elements in the computation. The final result y(x)pzt is in Z/q, the ring of integers mod q, and is obtained by a sequence of multiplications and additions, so we are free to reduce mod q at any moment in the computation. Any of the initial integer multiplications has inputs at most q 1; we allow the temporary _−_ values to grow to at most (n + 2)(q 1)[2] by computing the sum of the products _−_ for one entry and then reduce mod q. Thus any future multiplication also has its inputs at most q 1. _−_ ----- Bad directions in cryptographic hash functions 11 State-of-the-art division techniques take time within a constant factor of stateof-the-art multiplication techniques, so (n + 2)[2] reductions mod q take asymptotically negligible time compared to (n + 2)[3] multiplications. The number of bits in each intermediate integer drops from cubic in n to quadratic in n. More precisely, the asymptotic speedup factor is n/2, since the original multiplication inputs had on average about n/2 times as many bits as q. We observe a smaller speedup factor for concrete values of n, mainly because of the overhead for the extra divisions. 
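The growth being suppressed here is easy to observe on toy parameters. The following illustration is ours; the modulus and dimensions are arbitrary and far smaller than the challenge parameters.
```
# Toy illustration of Sections 3.1/3.2: without reduction, entry sizes grow
# by roughly log2(q) bits per factor; with reduction they stay below log2(q).
import random
q = 2 ** 64 - 59          # arbitrary toy modulus, nothing like the real q
w, n = 4, 10
Bs = [[[random.randrange(q) for _ in range(w)] for _ in range(w)]
      for _ in range(n)]

def mat_mul(X, Y, mod=None):
    Z = [[sum(X[i][k] * Y[k][j] for k in range(w)) for j in range(w)]
         for i in range(w)]
    return [[z % mod for z in row] for row in Z] if mod else Z

A_plain = A_red = Bs[0]
for Bk in Bs[1:]:
    A_plain, A_red = mat_mul(A_plain, Bk), mat_mul(A_red, Bk, q)
print(max(e.bit_length() for row in A_plain for e in row))  # roughly 64 * n
print(max(e.bit_length() for row in A_red for e in row))    # at most 64
```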
The total time to compute A mod q is n[6+][o][(1)] for constant λ, dominated by (n 1)(n + 2)[3] = n[4] + 5n[3] + 6n[2] 4n 8 multiplications of integers bounded _−_ _−_ _−_ by q, inside (n 1)(n + 2)[2] = n[3] + 3n[2] 4 dot products mod q. _−_ _−_ **3.3. Matrix-vector multiplications. We further improve the computation by** reordering the operations used to compute y(x): specifically, instead of computing A, we compute � � � � _y(x) =_ _· · ·_ (sB1,x[1])B2,x[2] _· · · Bn,x[n]_ _t._ This sequence of operations requires n vector-matrix products and a final vectorvector multiplication. This combines straightforwardly with intermediate reductions mod q as above. The total time to compute y(x) mod q is n[5+][o][(1)], dominated by n(n + 2) + 1 = (n + 1)[2] dot products mod q. ## 4 Faster algorithms for many inputs A brute-force attack iterates through the whole input range and computes the evaluation for each possible input until the result of the evaluation is 1 and thus the correct input has been found. In terms of complexity our improvements from Section 3 reduced the cost of brute-forcing an n-bit point function from time n[7+][o][(1)]2[n] to time n[5+][o][(1)]2[n] for constant λ, dominated by (n + 1)[2]2[n] dot products mod q. This algorithm is displayed in Figure 4.1. This section presents further reductions to the complexity of the attack. These share computations between evaluations of many inputs and have no matching speedups on the constructive side (which usually only evaluates at a single point at once and in any case cannot be expected to have related inputs). **4.2. Reusing intermediate products. Recall that Section 3 computes y(x) =** _sB1,x[1] · · · Bn,x[n]t mod q by multiplying from left to right: the last two steps_ are to multiply the vector sB1,x[1] · · · Bn−1,x[n−1] by Bn,x[n] and then by t. Notice that this vector does not depend on the choice of x[n]. By computing this vector, multiplying the vector by Bn,0 and then by t, and multiplying the same vector by Bn,1 and then by t, we obtain both y(x[1], . . ., x[n − 1], 0) and _y(x[1], . . ., x[n_ 1], 1). This saves almost half of the cost of the computation. _−_ Similarly, we need only two computations of sB1,x[1] for the two choices of x[1]; four computations of sB1,x[1]B2,x[2] for the four choices of (x[1], x[2]); etc. Overall there are 2+4+8+ +2[n] = 2[n][+1] 2 vector-matrix multiplications here, plus 2[n] _· · ·_ _−_ ----- 12 Daniel J. Bernstein, Andreas H¨ulsing, Tanja Lange, and Ruben Niederhagen ``` execfile(’subroutines.py’) import itertools def bruteforce(): for x in itertools.product([0,1],repeat=n): L = s for b in range(n): L = [dot(L,[B[b][x[b]][i * w + j] for j in range(w)]) for i in range(w)] result = solution(x,dot(L,t)) if result: return result print bruteforce() ``` **Fig. 4.1. Brute-force attack algorithm, separately evaluating y(x) mod q for each x,** including the speedups of Section 3: reducing intermediate matrix products mod q (inside dot) and replacing matrix-matrix products with vector-matrix products. See Appendix A for definitions of subroutines. final multiplications by t, for a total of (n+2)(2[n][+1] 2)+2[n] = (2n+5)2[n] 2(n+2) _−_ _−_ dot products mod q. To minimize memory requirements, we enumerate x in lexicographic order, maintaining a stack of intermediate products. We reuse products on the stack to the extent allowed by the common prefix between x and the previous x. In most cases this common prefix is almost the entire stack. 
On average slightly fewer than two matrix-vector products need to be recomputed for each x. See Figure 4.3 for a recursive version of this algorithm. **4.4. A meet-in-the-middle attack. To do better we change the order of** matrix multiplication yet again, separating ℓ “left” bits from n _ℓ_ “right” bits: _−_ _y(x) = (sB1,x[1] · · · Bℓ,x[ℓ])(Bℓ+1,x[ℓ+1] · · · Bn,x[n]t)._ We exploit this separation to store and reuse some computations. Specifically, we precompute a table of “left” products _L[x[1], . . ., x[ℓ]] = sB1,x[1] · · · Bℓ,x[ℓ]_ for all 2[ℓ] choices of (x[1], . . ., x[ℓ]). The main computation of all y(x) works as follows: for each choice of (x[ℓ + 1], . . ., x[n]), compute the “right” product _R[x[ℓ_ + 1], . . ., x[n]] = Bℓ+1,x[ℓ+1] · · · Bn,x[n]t, and then multiply each element of the L table by this vector. Computing a single left product sB1,x[1] _Bℓ,x[ℓ] from left to right, as in_ _· · ·_ Section 3, takes ℓ vector-matrix products, i.e., ℓ(n + 2) dot products mod q. Overall the precomputation uses ℓ(n + 2)2[ℓ] dot products mod q. ----- Bad directions in cryptographic hash functions 13 ``` execfile(’subroutines.py’) def reuseproducts(xleft,L): b = len(xleft) if b == n: return solution(xleft,dot(L,t)) for xb in [0,1]: newL = [dot(L,[B[b][xb][i * w + j] for j in range(w)]) for i in range(w)] result = reuseproducts(xleft + [xb],newL) if result: return result print reuseproducts([],s) ``` **Fig. 4.3. Attack algorithm sharing computations of intermediate products across many** inputs x. Computing a single right product Bℓ+1,x[ℓ+1] · · · Bn,x[n]t from right to left (starting from t) takes n _ℓ_ matrix-vector products, for a total of (n _ℓ)(n + 2)_ _−_ _−_ dot products mod q. The outer loop in the main computation therefore uses (n _ℓ)(n + 2)2[n][−][ℓ]_ dot products mod q in the worst case. The inner loop in the _−_ main computation, computing all y(x), uses just 2[n] dot products mod q in total in the worst case. The total number of dot products mod q in this algorithm, including precomputation, is ℓ(n+2)2[ℓ] +(n _ℓ)(n+2)2[n][−][ℓ]_ +2[n]. In particular, for ℓ = n/2 (assum_−_ ing n is even), the number of dot products mod q simplifies to n(n +2)2[n/][2] +2[n]. For a traditional meet-in-the-middle attack, the outer loop of the main computation simply looks up each result in a precomputed sorted table. Our notion of “meet” is more complicated, and requires inspecting each element of the table, but this is still a considerable speedup: each inspection is simply a dot product, much faster than the vector-matrix multiplications used before. We comment that taking ℓ logarithmic in n produces almost the same speedup with polynomial memory consumption. More precisely, taking ℓ close to 2 log2 n means that 2[n][−][ℓ] is smaller than 2[n] by a factor roughly n[2], so the term (n _−_ _ℓ)(n + 2)2[n][−][ℓ]_ is on the same scale as 2[n]. The table then contains roughly n[2] vectors, similar size to the original 2n matrices. Taking slightly larger ℓ reduces the term (n _ℓ)(n + 2)2[n][−][ℓ]_ to a smaller scale. A similar choice of ℓ becomes _−_ important for speed in Section 8.2. **4.5. Combining the ideas. One can easily reuse intermediate products in the** meet-in-the-middle attack. See Figure 4.6. This reduces the precomputation to 2[ℓ][+1] 2 vector-matrix multiplications, i.e., (n+2)(2[ℓ][+1] 2) dot products mod q. _−_ _−_ It similarly reduces the outer loop of the main computation to (n+2)(2[n][−][ℓ][+1] 2) _−_ dot products mod q. 
The total number of dot products mod q in the entire algorithm is now (n + 2)(2[ℓ][+1] +2[n][−][ℓ][+1] 4)+2[n]. For example, for ℓ = n/2, the number of dot products _−_ mod q simplifies to 4(n + 2)(2[n/][2] 1) + 2[n]. _−_ ----- 14 Daniel J. Bernstein, Andreas H¨ulsing, Tanja Lange, and Ruben Niederhagen ``` execfile(’subroutines.py’) l = n // 2 def precompute(xleft,L): b = len(xleft) if b == l: return [(xleft,L)] result = [] for xb in [0,1]: newL = [dot(L,[B[b][xb][i * w + j] for j in range(w)]) for i in range(w)] result += precompute(xleft + [xb],newL) return result table = precompute([],s) def mainloop(xright,R): b = len(xright) if b == n - l: for xleft,L in table: result = solution(xleft + xright,dot(L,R)) if result: return result return for xb in [0,1]: newR = [dot(R,[B[n - 1 - b][xb][j * w + i] for j in range(w)]) for i in range(w)] result = mainloop([xb] + xright,newR) if result: return result print mainloop([],t) ``` **Fig. 4.6. Meet-in-the-middle attack algorithm, including reuse of intermediate prod-** ucts, using ℓ = _n/2_ bits on the left and n _ℓ_ bits on the right. _⌊_ _⌋_ _−_ This is not much smaller than the meet-in-the-middle attack without reuse: the dominant term is the same 2[n]. However, as above one can take much smaller _ℓ_ to reduce memory consumption. The reuse now allows ℓ to be taken almost as small as log2 n without significantly compromising speed, so the precomputed table is now much smaller than the original 2n matrices. If memory consumption is not a concern then one should compute both an L table and an R table, interleaving the computations of the tables and obtaining each LR product as soon as both L and R are known. For equal-size tables this means computing L0, R0, L0R0, L1, L1R0, R1, L0R1, L1R1, etc. This order of operations does not improve worst-case performance, but it does improve average-case performance. The same improvement has been previously applied to other meet-in-the-middle attacks: for example, Pollard applied this improvement ----- Bad directions in cryptographic hash functions 15 to Shanks’s “baby-step giant-step” discrete-logarithm method. Compare [37, pages 419–420] to [35, page 439, top]. ## 5 Parallelization We implemented our attack for shared-memory systems using OpenMP and for cluster systems using MPI. In general, brute-force attacks are embarrassingly parallel, i.e., the search space can be distributed over the computing nodes without any need for communication, resulting in a perfectly scalable parallelization. However, for this attack, some computations are shared between consecutive iterations. Therefore, some cooperation and communication are required between computing nodes. **5.1. Precomputation. Recall that the precomputation step computes all 2[ℓ]** possible cases for the “left” ℓ bits of the whole input space. A non-parallel implementation first computes ℓ vector-matrix multiplications for sB1,0 · · · Bℓ,0 and stores the first ℓ 1 intermediate products on a stack. As many intermediate _−_ products as possible are reused for each subsequent case. For a shared-memory system, all data can be shared between the threads. Furthermore, the vector-matrix multiplications expose a sufficient amount of parallelism such that the threads can cooperate on the computation of each multiplication. There is some loss in parallel efficiency due to the need for synchronization and work-share imbalance. For a cluster system, communication and synchronization of such a workload distribution would be too expensive. 
Therefore, we split the input range for the precomputation between the cluster nodes, compute each section of the precomputed table independently, and finally broadcast the table entries to all cluster nodes. For simplicity, we split the input range evenly which results in some workload imbalance. (On each node, the workload is distributed as described above over several threads to use all CPU cores on each node.) This procedure has some loss in parallel efficiency due to the fact that each cluster node separately performs k vector-matrix multiplications for the first precomputation in its range, due to some workload imbalance, and due to the final all-to-all communication. **5.2. Main computation. For simplicity, we start the main computation once** the whole precomputed table L is available. Recall that a non-parallel implementation of the main computation first computes the vector R[0, . . ., 0] = _Bℓ+1,0 · · · Bn,0t using n −_ _ℓ_ matrix-vector multiplications, and multiplies this vector by all 2[ℓ] table entries. It then moves to other possibilities for the “right” _n_ _ℓ_ bits, reusing intermediate products in a similar way to the precomputation _−_ and multiplying each resulting vector R[. . .] by all 2[ℓ] table entries. For a shared-memory system, the computations of R[. . .] are distributed between the threads the same way as for the precomputation. However, vectorvector multiplication does not expose as much parallelism as vector-matrix multiplication. Therefore, we distribute over the threads the 2[ℓ] independent vectorvector multiplications of each of the 2[ℓ] table entries with R[0, . . ., 0]. As in the ----- 16 Daniel J. Bernstein, Andreas H¨ulsing, Tanja Lange, and Ruben Niederhagen parallelization of precomputation, there is some loss of parallel efficiency due to synchronization and work-share imbalance for the vector-matrix multiplications and some loss due to work-share imbalance for the vector-vector multiplications. For a cluster system we again cannot efficiently distribute the workload of one vector-matrix multiplication over several cluster nodes. Therefore, we distribute the search space evenly over the cluster nodes and let each cluster node compute its share of the workload independently. This approach creates some redundant work because each cluster node computes its own initial R[. . .] using n _ℓ_ matrix_−_ vector multiplications. ## 6 Performance measurements We used 22 PCs in the Saber cluster [12] for the attack. Each PC is of the type described earlier, including an 8-core CPU. The PCs are connected by a gigabit Ethernet network. Each PC also has two GK110 GPUs but we did not use these GPUs. **6.1. First break of the challenge. We implemented the single-input optimiza-** tions described in Section 3 and used 20 PCs to compute 2[14] point evaluations for all possible inputs. This revealed the secret point 11000101100100 after about 23 hours. The worst-case runtime for this approach on these 20 PCs is about 52 hours for checking all 2[14] possible input points. On 18 October 2014 we sent the authors of [5] the solution to the challenge, and a few hours later they confirmed that the solution was correct. **6.2. Second break of the challenge. We implemented the multiple-input op-** timizations described in Section 4 and the parallelization described in Section 5. Our optimized attack implementation found the input point in under 19 minutes on 21 PCs; this includes the time to precompute a table L of size 2[7]. 
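These wall-clock improvements track the dot-product counts derived in Sections 3 and 4. As a cross-check, the following small script of ours tabulates the predicted counts for the challenge parameters n = 14 and ℓ = 7, using the formulas stated earlier:
```
# Predicted dot products mod q for the n = 14 challenge, with l = n/2 = 7.
n, l = 14, 7
variants = [
    ("separate evaluations (Section 3)", (n + 1) ** 2 * 2 ** n),
    ("reused products (Section 4.2)", (2 * n + 5) * 2 ** n - 2 * (n + 2)),
    ("meet-in-the-middle (Section 4.4)",
     l * (n + 2) * 2 ** l + (n - l) * (n + 2) * 2 ** (n - l) + 2 ** n),
    ("meet-in-the-middle with reuse (Section 4.5)",
     (n + 2) * (2 ** (l + 1) + 2 ** (n - l + 1) - 4) + 2 ** n),
]
for name, count in variants:
    print(name, count)
```
The counts shrink from about 3.7 million dot products down to about 25 thousand, which is the source of the successive speedups reported in this section.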
The worst-case runtime of the attack for checking all 2^14 possible input points is under 34 minutes on 21 PCs.

**6.3. Additional latency.** Obviously "19 minutes" understates the real time that elapsed between the announcement of the challenge (19 August 2014) and our solution of the challenge with our second program (25 October 2014). See Table 6.4 for a broader perspective. The largest deterrent was the difficulty of downloading 25 gigabytes. Whenever a connection broke, the server would insist on starting from the beginning ("HTTP server doesn't seem to support byte ranges"), presumably because the server stores all files in a compressed format that does not support random access. The same restriction also meant that we could not download different portions of the file in parallel. To truly minimize latency we would have had to overlap the download of the challenge, the broadcast of the challenge to the cluster, and the computation, and of course our optimizations and software would have had to be ready first. In this context, the precompute-L-table algorithm in Section 4 has a latency advantage compared to a bit-reversed algorithm that precomputes an R table instead of an L table: the portion of the input relevant to L is available sooner than the portion of the input relevant to R.

Attack component | Real time
Initial procrastination | a few days
First attempt to download challenge (failed) | 82 minutes
Subsequent procrastination | 40 days and 40 nights
Fourth attempt to download challenge (succeeded) | about an hour
Original program [6] evaluating one input | 245 minutes
Original program evaluating all inputs on one computer (extrapolated) | 7.6 years
Copying challenge to cluster (without UDP broadcasts) | about an hour
Reading challenge from disk into RAM | 2.5 minutes
Our faster program evaluating one input | 4.85 minutes
First successful break of challenge on 20 PCs | 23 hours
Further procrastination ("this is fast enough") | about half a week
Our faster program evaluating all inputs on 21 PCs | 34 minutes
Second successful break of challenge on 21 PCs | 19 minutes
Our current program evaluating all inputs on 1 PC | 444.2 minutes
Our current program evaluating all inputs on 22 PCs | 29.5 minutes
Time for an average input point on 22 PCs | 19.9 minutes
Successful break of challenge on 22 PCs | 17.5 minutes

**Table 6.4.** Measurements of real time actually consumed by various components of the complete attack, starting from the announcement of the challenge.

**6.5. Timings of various software components.** We have put the latest version of our software online at http://obviouscation.cr.yp.to. We applied this software to the same challenge on 22 PCs. The software took a total time of 1769 seconds (29.5 minutes) to check all 2^14 input points. An average input point was checked within 1191 seconds (19.9 minutes). The secret challenge point was found within 1048 seconds (17.5 minutes). The rest of this section describes the time taken by various components of this computation.

Each vector-matrix multiplication took 15.577 s on average (15.091 minimum, 16.421 maximum), using all eight cores jointly. For comparison, on a single core, a vector-matrix multiplication requires about 115 s. Therefore, we achieve a parallel efficiency of (115 s / 8) / 15.577 s ≈ 92% for parallel vector-matrix multiplication. Each y computation took 8.986 s on average (7.975 minimum, 9.820 maximum), using a single core.
Each y computation consists of one vector-vector multiplication, one multiplication by pzt (which we could absorb into the precomputed table, producing a small speedup), and one reduction mod q.

On a single machine (no MPI parallelization), after a reboot to flush the challenge from RAM, the timing breaks down as follows:

1. Loading the matrices for "left" bit positions: 83.999 s.
2. Total precomputation of 2^7 = 128 table entries: 4055.408 s.
   (a) Computing the first ℓ = 7 vector-matrix products: 107.623 s.
4. Loading the matrices for "right" bit positions: 78.490 s.
5. Total computation of all 2^14 evaluations: 22518.900 s.
   (a) Computing the first n − ℓ = 7 matrix-vector products: 109.731 s.

Overall total runtime: 26654 s (444.2 minutes). From these computations, steps 1, 2a, 4, and 5a are not parallelized for cluster computation. (Step 3, the all-to-all communication, does not occur on a single machine.)

The total timing breakdown on 22 PCs, after a reboot of all PCs, is as follows:

1. Loading the matrices for "left" bit positions: 89.449 s average (75.786 on the fastest node, 104.696 on the slowest node). With more effort we could have overlapped most of this loading (and the subsequent loading) with computation, or skipped all disk copies by keeping the matrices in RAM.
2. Total precomputation of 2^7 = 128 table entries: 253.346 s average (217.893 minimum, 295.999 maximum).
   (a) Computing the first ℓ = 7 vector-matrix products: 107.951 s average (107.173 minimum, 109.297 maximum).
3. All-to-all communication: 153.591 s average (100.848 minimum, 199.200 maximum); i.e., about 53 s average idle time for the busier nodes to catch up, followed by about 101 s of communication. With more effort we could have overlapped most of this communication with computation.
4. Loading the matrices for "right" bit positions: 85.412 s average (73.710 minimum, 97.526 maximum).
5. Total computation of all 2^14 evaluations: 1097.680 s average (942.981 minimum, 1169.520 maximum).
   (a) Computing the first n − ℓ = 7 matrix-vector products: 108.878 s average (107.713 minimum, 110.001 maximum).
6. Final idle time waiting for all other nodes to finish computation: 80.277 s average (0.076 minimum, 80.277 maximum).

Overall total runtime, including MPI startup overhead: 1769 s (29.5 minutes). The overall parallel efficiency of the cluster parallelization thus is (26654 s / 22) / 1769 s ≈ 68%. Steps 1, 2a, 3, 4, and 5a, totaling 545.281 s, are those parts of the computation that contain parallelization overhead (in particular the communication time in step 3 is added compared to the single-machine case). Removing these steps from the efficiency calculation results in a parallel efficiency of ((26654 s − 380 s) / 22) / (1769 s − 545 s) ≈ 98%, which shows that those steps are responsible for almost all of the loss in parallel efficiency.

## 7 Further speedups

In this section we briefly discuss two ideas for further accelerating the attack. We considered further implementation work to evaluate the concrete impact of these ideas, but decided that this work was unjustified, given that solving the existing challenge on our cluster took only 19 minutes.

**7.1. Reusing transforms.**
One fast way to compute an m-coefficient product** of two univariate polynomials is to evaluate each polynomial at the mth roots of 1 (assuming that there is a primitive mth root of 1 in the coefficient ring), multiply the values, and interpolate the product polynomial from the products of values. The evaluation and interpolation take only Θ(m log2 m) arithmetic operations using a standard radix-2 FFT (assuming that m is a power of 2), and multiplying values takes only m arithmetic operations. More generally, to multiply two w _w matrices of polynomials where each_ _×_ entry of the output is known to fit into m coefficients, one can evaluate each polynomial at the mth roots of 1, multiply the matrices of values, and interpolate the product matrix. Note that intermediate values are computed in the evaluation domain; interpolation is postponed until the end of the matrix multiplication. The evaluation takes only Θ(w[2]m log2 m) arithmetic operations; schoolbook multiplication of the resulting matrices of values takes only Θ(w[3]m) arithmetic operations; and interpolation takes only Θ(w[2]m log2 m) arithmetic operations. The total is smaller, by a factor Θ(min{w, log2 m}), than the Θ(w[3]m log2 m) that would be used by schoolbook multiplication of the original matrices. Smaller exponents than 3 are known for matrix multiplication, but there is still a clear benefit to reusing the evaluations (called “FFT caching” in [11]) and merging the interpolations (called “FFT addition” in [11]). Similar, somewhat more complicated, speedups apply to multiplication of integer matrices; see, e.g., [38, Table 17]. Obviously FFT caching and FFT addition can also be applied to matrixvector multiplication, dot products, etc. For example, in the polynomial case, multiplying a w _w matrix by a length-w vector takes only Θ(w[2]m) arithmetic_ _×_ operations on values and Θ(wm log2 m) arithmetic operations for interpolation, if the FFTs of matrix entries have already been cached. Similarly, computing the dot product of two length-w vectors takes only Θ(wm) arithmetic operations on values and Θ(m log2 m) arithmetic operations for interpolation, if the FFTs of vector entries have already been cached. The speedup here is applicable to both the constructive as well as the destructive algorithms in this paper. We would expect the speedup factor to be noticeable in practice, as in [38]. We would also expect an additional benefit for the attack: a high degree of parallelization is supported by the heavy use of arithmetic on values at independent evaluation points. **7.2. Asymptotically fast rectangular matrix multiplication. The compu-** tation of many dot products between all combinations of left vectors and right vectors in our point-obfuscation attack can be viewed as a rectangular matrixmatrix multiplication. An algorithm of Coppersmith [21] multiplies an N _N matrix by an N_ _×_ _×_ _N_ [1][/β] matrix using just N [2+][o][(1)] multiplications of matrix entries, where β = _⌊_ _⌋_ (5 log 5)/(2 log 2) < 6. With the same number of multiplications one can multiply an N _N_ [1][/β] matrix by a _N_ [1][/β] _N matrix. See [31] for context, and for_ _× ⌊_ _⌋_ _⌊_ _⌋×_ techniques to achieve smaller β. ----- 20 Daniel J. Bernstein, Andreas H¨ulsing, Tanja Lange, and Ruben Niederhagen Substitute N = _w[β]_, and note that _N_ [1][/β] = w, to see that one can multiply _⌈_ _⌉_ _⌊_ _⌋_ a _w[β]_ _w matrix by a w_ _w[β]_ matrix, obtaining _w[β]_ results, using w[2][β][+][o][(1)] _⌈_ _⌉×_ _×⌈_ _⌉_ _⌈_ _⌉[2]_ multiplications. 
Note that this is w[1+][o][(1)] times faster than computing separate dot products between each of the _w[β]_ vectors in the first matrix and each of _⌈_ _⌉_ the _w[β]_ vectors in the second matrix. _⌈_ _⌉_ Our attack has 2[ℓ] left vectors and 2[n][−][ℓ] right vectors, each of length w = _n+2. Asymptotically Coppersmith’s algorithm applies to any choice of ℓ_ between _β log2 w and n/2, allowing all of the dot products to be computed using just_ _w[o][(1)]2[n]_ multiplications, rather than w2[n]. Fast matrix multiplication has a reputation for hiding large constant factors in the w[o][(1)], and we do not claim a speedup here for any particular w, but asymptotically w[o][(1)] is much faster than w. Our operation count also ignores the cost of additions, but we speculate that a more detailed analysis would show a similar improvement in the total number of bit operations. ## 8 Generalizing the attack beyond point functions This section looks beyond point functions: it considers the general obfuscation method explained in [5] for any program. Recall from Section 2 that for general programs the number of pairs of matrices, say u, is no longer tied to the number n of input bits: usually each input bit is used multiple times. Furthermore, each matrix is w _w and each vector has_ _×_ length w for some w > n, where the choice of w depends on the function and is no longer required to be n + 2. The speedups from Section 3 rely only on the general matrix-multiplication structure, not on the pattern of accessing input bits. Reducing intermediate results mod q saves a factor approximately u/2. Using vector-matrix multiplication rather than matrix-matrix multiplication saves a factor w. However, the attacks from Section 4 rely on having each input bit used exactly once. We cannot simply reorder the matrices to bring together the uses of an input bit: matrix multiplication is not commutative. Usually many of the matrices are obfuscated identity matrices, but the way the matrices are randomized prevents these matrices from being removed or reordered; see [5] for details. This section explains two attacks that apply in more generality. The first attack allows cycling through the input bits any number of times, and saves a factor approximately n/2 compared to brute force. The second attack allows using and reusing input bits any number of times in any pattern, and saves a factor approximately n/(2 log2 w) compared to brute force. The first attack is what one might call a “meet-in-many-middles” attack; the second attack does not involve precomputations. Both attacks exploit the idea of reusing intermediate products, sharing computations between adjacent inputs; both attacks can be parallelized by ideas similar to Section 5. **8.1. Speedup n/2 for cycling through input bits. Our first attack applies** to any circuit obfuscated as explained in [5, Section 2.2.1]. The obfuscated circuit ----- Bad directions in cryptographic hash functions 21 is constructed to “cycle through each of the input bits x1, x2, . . ., xn in order, m times”, using u = mn pairs of matrices. In other words, y(x) is defined as _s(B1,x[1] · · · Bn,x[n])(Bn+1,x[1] · · · B2n,x[n]) · · · (B(m−1)n+1,x[1] · · · Bmn,x[n])t._ Evaluating y(x) for one x from left to right takes mn vector-matrix multiplications and 1 vector-vector multiplication, i.e., uw + 1 dot products mod q. A straightforward brute-force attack thus takes (uw + 1)2[n] dot products mod q. 
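To make the setup concrete, a direct evaluation loop for this cycling construction looks as follows (a sketch of ours, in the same hypothetical representation as the earlier sketches; position i, counted from 0, is controlled by input bit i mod n):
```
# Evaluating the cycling construction: u = m*n matrix pairs, cycling through
# the n input bits m times; note that w is no longer tied to n here.
def evaluate_cycling(x, B, s, t, q, m):
    n = len(x)
    w = len(s)
    L = s
    for i in range(m * n):
        C = B[i][x[i % n]]
        L = [sum(L[a] * C[a][j] for a in range(w)) % q for j in range(w)]
    return sum(L[j] * t[j] for j in range(w)) % q
```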
One can split the sequence of mn matrices at some position ℓ, and carry out a meet-in-the-middle attack as in Section 4. However, this produces at most a constant-factor speedup once m 2: either the precomputation has to compute _≥_ products at most of the positions for all 2[n] inputs, or the main computation has to compute products at most of the positions for all 2[n] inputs, or both, depending on ℓ. We do better by splitting the sequence of input bits at some position ℓ. This means grouping the matrix positions into two disjoint “left” and “right” sets as follows, splitting each input cycle: � �� � _y(x) =_ _sB1,x[1] · · · Bℓ,x[ℓ]_ _Bℓ+1,x[ℓ+1] · · · Bn,x[n]_ � �� � _Bn+1,x[1] · · · Bn+ℓ,x[ℓ]_ _Bn+ℓ+1,x[ℓ+1] · · · B2n,x[n]_ ... � �� � _B(m−1)n+1,x[1] · · · B(m−1)n+ℓ,x[ℓ]_ _B(m−1)n+ℓ+1,x[ℓ+1] · · · Bmn,x[n]t_ = L1[x[1], . . ., x[ℓ]]R1[x[ℓ + 1], . . ., x[n]] _L2[x[1], . . ., x[ℓ]]R2[x[ℓ_ + 1], . . ., x[n]] ... _Lm[x[1], . . ., x[ℓ]]Rm[x[ℓ_ + 1], . . ., x[n]] where _L1[x[1], . . ., x[ℓ]] = sB1,x[1] · · · Bℓ,x[ℓ],_ _Li[x[1], . . ., x[ℓ]] = B(i−1)n+1,x[1] · · · B(i−1)n+ℓ,x[ℓ]_ for 2 ≤ _i ≤_ _m,_ _Ri[x[ℓ_ + 1], . . ., x[n]] = B(i−1)n+ℓ+1,x[ℓ+1] · · · Bin,x[n] for 1 ≤ _i ≤_ _m −_ 1, _Rm[x[ℓ_ + 1], . . ., x[n]] = B(m−1)n+ℓ+1,x[ℓ+1] · · · Bmn,x[n]t. We exploit this grouping as follows. We use 2[ℓ][+1] 2 vector-matrix multiplica_−_ tions to precompute a table of the vectors L1[x[1], . . ., x[ℓ]] for all 2[ℓ] choices of x[1], . . ., x[ℓ], as in Section 4. Similarly, for each i 2, . . ., m, we use _∈{_ _}_ 2[ℓ][+1] 4 matrix-matrix multiplications to precompute a table of the matri_−_ ces Li[x[1], . . ., x[ℓ]] for all 2[ℓ] choices of x[1], . . ., x[ℓ]. The tables use space for (w + (m 1)w[2])2[ℓ] integers mod q. _−_ After this precomputation, the outer loop of the main computation runs through each choice of x[ℓ + 1], . . ., x[n], computing the corresponding matrices _R1[. . . ], . . ., Rm−1[. . . ] and vector Rm[. . . ]. The inner loop runs through each_ ----- 22 Daniel J. Bernstein, Andreas H¨ulsing, Tanja Lange, and Ruben Niederhagen choice of x[1], . . ., x[ℓ], computing each y(x) by multiplying L1, R1, . . ., Lm, Rm; each x here takes 2m 2 vector-matrix multiplications and 1 vector-vector mul_−_ tiplication. Overall the precomputation costs ((m 1)w[2] + w)(2[ℓ][+1] 2) 2(m 1)w[2] _−_ _−_ _−_ _−_ dot products mod q; the outer loop of the main computation costs ((m 1)w[2] + _−_ _w)(2[n][−][ℓ][+1]_ 2) 2(m 1)w[2] dot products mod q; and the inner loop costs _−_ _−_ _−_ ((2m 2)w + 1)2[n] dot products mod q. _−_ In particular, taking ℓ = n/2 (assuming as before that n is even) simplifies the total cost to 4w(2[n/][2] 1) + 2[n] for m = 1, exactly as in Section 4, and _−_ 4w((m 1)w + 1)(2[n/][2] 1) + ((2m 2)w + 1)2[n] 4(m 1)w[2] for general m. _−_ _−_ _−_ _−_ _−_ Recall that brute force costs (uw+1)2[n] = (mnw+1)2[n]. For large n, large w, and _m_ 2, the asymptotically dominant term has dropped from mnw2[n] to 2mw2[n], _≥_ saving a factor of n/2. The same asymptotic savings appears with much smaller ℓ, almost as small as log2 w. Beware that this does not make the tables asymptotically smaller than the original 2mn matrices for m 2: most of the table space here is consumed _≥_ by matrices rather than vectors. **8.2. Speedup n/ log2 w for any order of input bits. One can try to spoil** the above attack by changing the order of input bits. 
A slightly different order of input bits, rotating positions in each round, is already stated in [4, Section 3, Claim 2, final formula], but it is easy to adapt the attack to this order. It is more difficult to adapt the attack to an order chosen randomly, or an order that combinatorially avoids keeping bits together. Varying the input order is not a new idea: see, e.g., the compression functions inside MD5 [36] and BLAKE [10]. Many other orders of input bits also arise naturally in “keyed” functions; see Section 2. The general picture is that y(x) is defined by the formula _y(x) = sB1,x[inp(1)]B2,x[inp(2)] · · · Bu,x[inp(u)]t_ for some constants inp(1), inp(2), . . ., inp(u) 1, 2, . . ., n . As a first unification _∈{_ _}_ we multiply s into B1,0 and into B1,1, and then multiply t into Bu,0 and into Bu,1. Now B1,0, B1,1, Bu,0, Bu,1 are vectors, except that they are integers if u = 1; and _y(x) is defined by_ _y(x) = B1,x[inp(1)]B2,x[inp(2)] · · · Bu,x[inp(u)]._ We now explain a general recursive strategy to evaluate this formula for all inputs without exploiting any particular pattern in inp(1), inp(2), . . ., inp(u). The strategy is reducing the number of variable bits in x by one in each iteration. Assume that not all of inp(1), inp(2), . . ., inp(u) are equal to n. Substitute _x[n] = 0 into the formula for y(x). This means, for each i with inp(i) = n in_ turn, eliminating the expression “Bi,x[n]” as follows: _• multiply Bi,0 into Bi+1,0 and into Bi+1,1 if i < u;_ _• multiply Bi,0 into Bi−1,0 and into Bi−1,1 if i = u;_ _• set Bi ←_ _Bi+1, then Bi+1 ←_ _Bi+2, . . ., then Bu−1 ←_ _Bu;_ ----- Bad directions in cryptographic hash functions 23 reduce u to u 1. _•_ _−_ Recursively evaluate the resulting formula for all choices of x[1], . . ., x[n 1]. _−_ Then do all the same steps again with x[n] = 1 instead of x[n] = 0. More generally, one can recurse on the two choices of x[b] for any b. It is most efficient to recurse on the most frequently used index b (or one of the most frequent indices b if there are several), since this minimizes the length of the formula to handle recursively. This is equivalent to first relabeling the indices so that they are in nondecreasing order of frequency, and then always recursing on the last bit. Once n is sufficiently small (see below), stop the recursion. This means separately enumerating all possibilities for (x[1], . . ., x[n]) and, for each possibility, evaluating the given formula _y(x) = B1,x[inp(1)]B2,x[inp(2)] · · · Bu,x[inp(u)]_ by multiplication from left to right. Recall that B1,x[inp(1)] is actually a vector (or an integer if u = 1). Each computation takes u 1 vector-matrix multiplica_−_ tions, i.e., (u 1)w dot products mod q. (Here we ignore the extra speed of the _−_ final vector-vector multiplication.) The total across all inputs is (u 1)w2[n] dot _−_ products mod q. To see that the recursion reduces this complexity, consider the impact of using exactly one level of recursion, from n down to n − 1. If index n is used un times then eliminating each Bi,x[n] costs 2un matrix multiplications, and produces a formula of length u−un instead of u, so each recursive call uses (u−un _−1)w2[n][−][1]_ dot products mod q. The bound on the total number of dot products mod q drops from (u−1)w2[n] to 4unw[2] +(u−un _−1)w2[n], saving unw2[n]_ _−4unw[2]. This analysis_ suggests stopping the recursion when 2[n] drops below 4w, i.e., at n = ⌈log2 w⌉+1. 
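The elimination procedure just described is short enough to sketch in full. The code below is our own illustration, not part of [6]; it keeps s and t separate (as 1 × w and w × 1 matrices) instead of pre-multiplying them into the end positions, omits the final pzt zero test, and for brevity recurses all the way down instead of stopping at the cutoff just derived.
```
def mat_mul(X, Y, q):
    # Multiply an a x b by a b x c integer matrix, reducing every entry mod q.
    return [[sum(r * c for r, c in zip(row, col)) % q for col in zip(*Y)]
            for row in X]

def eval_all(formula, s, t, free_bits, assignment, q, report):
    # formula: list of (input index, {0: M0, 1: M1}) pairs; s: 1 x w; t: w x 1;
    # free_bits: set of input indices not yet assigned.
    if not free_bits:
        report(dict(assignment), mat_mul(s, t, q)[0][0])  # y(x) = s t mod q
        return
    # Recurse on a most frequently used remaining input bit, as in the text.
    b = max(free_bits, key=lambda i: sum(1 for inp, _ in formula if inp == i))
    for xb in (0, 1):
        f, s2 = list(formula), s
        while True:
            hits = [k for k, (inp, _) in enumerate(f) if inp == b]
            if not hits:
                break
            i = hits[0]
            M = f[i][1][xb]
            if i + 1 < len(f):      # multiply into the right-hand neighbour
                f[i + 1] = (f[i + 1][0],
                            {c: mat_mul(M, f[i + 1][1][c], q) for c in (0, 1)})
            elif i > 0:             # last position: into the left-hand neighbour
                f[i - 1] = (f[i - 1][0],
                            {c: mat_mul(f[i - 1][1][c], M, q) for c in (0, 1)})
            else:                   # the only remaining pair: absorb into s
                s2 = mat_mul(s2, M, q)
            del f[i]
        next_assignment = dict(assignment)
        next_assignment[b] = xb
        eval_all(f, s2, t, free_bits - {b}, next_assignment, q, report)
```
A call such as eval_all(formula, [s_row], t_col, set(range(n)), {}, q, report) then visits all 2^n assignments, sharing the elimination work among inputs that agree on the most frequently used bits.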
More generally, the algorithm costs a total of 4unw[2] + 8un−1w[2] + 16un−2w[2] + · · · + 2[n][−][ℓ][+1]uℓ+1w[2] + 2[n](uℓ + · · · + u1 − 1)w dot products mod q if the recursion stops at level ℓ. We relabel as explained above so that un ≥ _un−1 ≥· · · ≥_ _u1, and assume n > ℓ. The sum uℓ_ +· · ·+u1 is at most _ℓu/n, and the sum un+2un−1+4un−2+· · ·+2[n][−][ℓ][−][1]uℓ+1 is at most 2[n][−][ℓ]u/(n−ℓ),_ for a total of less than (4w2[−][ℓ]/(n−ℓ)+ℓ/n)uw2[n]. Taking ℓ = ⌈log2 w⌉+1 reduces this total to at most (4/(n −⌈log2 w⌉− 1) + (⌈log2 w⌉ + 1)/n)uw2[n]. For comparison, a brute-force attack against the original problem (separately evaluating y(x) for each x) costs (u 1)w2[n]. We have thus saved a factor of _−_ approximately n/ log2 w. ## References [1] — (no editor), 53rd annual IEEE symposium on foundations of computer science, _FOCS 2012, New Brunswick, New Jersey, 20–23 October 2012, IEEE Computer_ Society, 2012. See [31]. ----- 24 Daniel J. Bernstein, Andreas H¨ulsing, Tanja Lange, and Ruben Niederhagen [2] — (no editor), 54th annual IEEE symposium on foundations of computer science, _FOCS 2013, 26–29 October, 2013, Berkeley, CA, USA, IEEE Computer Society,_ 2013. See [25]. [[3] — (no editor), RSA numbers, Wikipedia page (2014). URL: https://en.](https://en.wikipedia.org/wiki/RSA_numbers) `wikipedia.org/wiki/RSA_numbers. Citations in this document:` 1.5. _§_ [4] Prabhanjan Ananth, Divya Gupta, Yuval Ishai, Amit Sahai, Optimizing obfusca_[tion: avoiding Barrington’s theorem, in ACM-CCS 2014 (2014). URL: https://](https://eprint.iacr.org/2014/222)_ `eprint.iacr.org/2014/222. Citations in this document:` 2, 2, 2.3, 2.3, 8.2. _§_ _§_ _§_ _§_ _§_ [5] Daniel Apon, Yan Huang, Jonathan Katz, Alex J. Malozemoff, Implementing _[cryptographic program obfuscation, version 20141005 (2014). URL: https://](https://eprint.iacr.org/2014/779)_ `eprint.iacr.org/2014/779. Citations in this document:` 1.2, 1.5, 1.6, 1.6, _§_ _§_ _§_ _§_ 2, 2, 2, 2, 2.1, 2.2, 2.2, 3.1, 6.1, 8, 8, 8.1. _§_ _§_ _§_ _§_ _§_ _§_ _§_ _§_ _§_ _§_ _§_ _§_ [6] Daniel Apon, Yan Huang, Jonathan Katz, Alex J. Malozemoff, Implementing _[cryptographic program obfuscation (software) (2014). URL: https://github.com/](https://github.com/amaloz/obfuscation)_ `amaloz/obfuscation. Citations in this document:` 2, 2.1, 2.2, 3, 6.3, A, A, _§_ _§_ _§_ _§_ _§_ _§_ _§_ A. _§_ [7] Daniel Apon, Yan Huang, Jonathan Katz, Alex J. Malozemoff, _Im-_ _plementing_ _cryptographic_ _program_ _obfuscation_ _(slides),_ Crypto 2014 rump session (2014). URL: `http://crypto.2014.rump.cr.yp.to/` `bca480a4e7fcdaf5bfa9dec75ff890c8.pdf. Citations in this document:` 1.2, 2, _§_ _§_ 2.2, A. _§_ _§_ [8] Daniel Apon, Yan Huang, Jonathan Katz, Alex J. Malozemoff, Implementing _cryptographic program obfuscation (video), Crypto 2014 rump session, starting at_ [3:56:25 (2014). URL: https://gauchocast.ucsb.edu/Panopto/Pages/Viewer.](https://gauchocast.ucsb.edu/Panopto/Pages/Viewer.aspx?id=d34af80d-bdb5-464b-a8ac-2c3adefc5194) ``` aspx?id=d34af80d-bdb5-464b-a8ac-2c3adefc5194. Citations in this document: ``` 2.2. _§_ [[9] Jean-Philippe Aumasson, Password Hashing Competition (2013). URL: https://](https://password-hashing.net/) `password-hashing.net/. Citations in this document:` 1.1. _§_ [10] Jean-Philippe Aumasson, Luca Henzen, Willi Meier, Raphael C.-W. Phan, SHA_[3 proposal BLAKE (version 1.3) (2010). URL: https://www.131002.net/blake/](https://www.131002.net/blake/blake.pdf)_ `blake.pdf. Citations in this document:` 8.2. _§_ [11] Daniel J. 
Bernstein, Fast multiplication and its applications, in [15] (2008), [325–384. URL: http://cr.yp.to/papers.html#multapps. Citations in this doc-](http://cr.yp.to/papers.html#multapps) ument: 3.1, 7.1, 7.1. _§_ _§_ _§_ [[12] Daniel J. Bernstein, The Saber cluster (2014). URL: http://blog.cr.yp.to/](http://blog.cr.yp.to/20140602-saber.html) `20140602-saber.html. Citations in this document:` 6. _§_ [13] Andrey Bogdanov, Dmitry Khovratovich, Christian Rechberger, Biclique crypt_[analysis of the full AES, in Asiacrypt 2011 [30] (2011), 344–371. URL: https://](https://eprint.iacr.org/2011/449)_ `eprint.iacr.org/2011/449. Citations in this document:` 1.4. _§_ [14] Joseph Bonneau, Stuart E. Schechter, Towards reliable storage of 56-bit _secrets in human memory, in USENIX Security Symposium 2014 (2014),_ 607–623. URL: `https://www.usenix.org/conference/usenixsecurity14/` `technical-sessions/presentation/bonneau. Citations in this document:` 1.1. _§_ [15] Joe P. Buhler, Peter Stevenhagen (editors), Surveys in algorithmic number theory, Mathematical Sciences Research Institute Publications, 44, Cambridge University Press, 2008. See [11]. [16] Christian Cachin, Jan Camenisch (editors), _Advances_ _in_ _cryptology—_ _EUROCRYPT 2004, international conference on the theory and applications of_ _cryptographic techniques, Interlaken, Switzerland, May 2–6, 2004, proceedings,_ Lecture Notes in Computer Science, 3027, Springer, 2004. ISBN ISBN 3-54021935-8. See [33]. ----- Bad directions in cryptographic hash functions 25 [17] Ran Canetti, Juan A. Garay (editors), Advances in cryptology—CRYPTO 2013— _33rd annual cryptology conference, Santa Barbara, CA, USA, August 18–22, 2013,_ _proceedings, part I, Lecture Notes in Computer Science, 8042, Springer, 2013. See_ [22]. [18] Anne Canteaut (editor), Fast software encryption—19th international workshop, _FSE 2012, Washington, DC, USA, March 19–21, 2012, revised selected papers,_ Lecture Notes in Computer Science, 7549, Springer, 2012. ISBN 978-3-642-340468. See [29]. [19] Jung Hee Cheon, Kyoohyung Han, Changmin Lee, Hansol Ryu, Damien Stehl´e, _[Cryptanalysis of the multilinear map over the integers (2014). URL: https://](https://eprint.iacr.org/2014/906)_ `eprint.iacr.org/2014/906. Citations in this document:` 1.5, 1.5, 1.5, 2. _§_ _§_ _§_ _§_ [20] Scott Contini, Arjen K. Lenstra, Ron Steinfeld, VSH, an efficient and provable _collision-resistant hash function, in Eurocrypt 2006 [39] (2006), 165–182. URL:_ `https://eprint.iacr.org/2005/193. Citations in this document:` 2.1. _§_ [21] Don Coppersmith, Rapid multiplication of rectangular matrices, SIAM Journal on Computing 11 (1982), 467–471. Citations in this document: 7.2. _§_ [22] Jean-Sebastien Coron, Tancrede Lepoint, Mehdi Tibouchi, Practical multilinear _[maps over the integers, in Crypto 2013 [17] (2013), 476–493. URL: https://](https://eprint.iacr.org/2013/183)_ `eprint.iacr.org/2013/183. Citations in this document:` 1.5, 1.5, 1.5, 1.5, _§_ _§_ _§_ _§_ 2, 2. _§_ _§_ [23] Simson Garfinkel, Gene Spafford, Alan Schwartz, Practical UNIX & Internet se_curity, 3rd edition, O’Reilly, 2003. Citations in this document:_ 1. _§_ [24] Sanjam Garg, Craig Gentry, Shai Halevi, Candidate multilinear maps from ideal _[lattices, in Eurocrypt 2013 [28] (2012), 40–49. URL: https://eprint.iacr.org/](https://eprint.iacr.org/2012/610)_ `2012/610. Citations in this document:` 1.5, 2. 
_§_ _§_ [25] Sanjam Garg, Craig Gentry, Shai Halevi, Mariana Raykova, Amit Sahai, Brent Waters, Candidate indistinguishability obfuscation and functional encryption for _[all circuits, in FOCS 2013 [2] (2013), 40–49. URL: https://eprint.iacr.org/](https://eprint.iacr.org/2013/451)_ `2013/451. Citations in this document:` 1.5, 1.5, 1.5, 2. _§_ _§_ _§_ _§_ [26] Craig Gentry, Shai Halevi, Hemanta K. Maji, Amit Sahai, Zeroizing without ze_roes: Cryptanalyzing multilinear maps without encodings of zero (2014). URL:_ `https://eprint.iacr.org/2014/929. Citations in this document:` 1.5, 1.5. _§_ _§_ [27] Shafi Goldwasser, Guy N. Rothblum, On best-possible obfuscation, Journal of Cryptology 27 (2014), 480–505. Citations in this document: 1.5. _§_ [28] Thomas Johansson, Phong Q. Nguyen (editors), Advances in cryptology— _EUROCRYPT 2013, 32nd annual international conference on the theory and_ _applications of cryptographic techniques, Athens, Greece, May 26–30, 2013, pro-_ _ceedings, Lecture Notes in Computer Science, 7881, Springer, 2013. ISBN 978-3-_ 642-38347-2. See [24]. [29] Dmitry Khovratovich, Christian Rechberger, Alexandra Savelieva, Bicliques for _preimages: attacks on Skein-512 and the SHA-2 family, in FSE 2012 [18] (2011),_ [244–263. URL: https://eprint.iacr.org/2011/286. Citations in this document:](https://eprint.iacr.org/2011/286) 1.4. _§_ [30] Dong Hoon Lee, Xiaoyun Wang (editors), Advances in cryptology—ASIACRYPT _2011, 17th international conference on the theory and application of cryptology_ _and information security, Seoul, South Korea, December 4–8, 2011, proceedings,_ Lecture Notes in Computer Science, 7073, Springer, 2011. ISBN 978-3-642-253843. See [13]. [31] Fran¸cois Le Gall, Faster algorithms for rectangular matrix multiplication, in FOCS [2012 [1] (2012), 514–523. URL: https://arxiv.org/abs/1204.1111. Citations in](https://arxiv.org/abs/1204.1111) this document: 7.2. _§_ ----- 26 Daniel J. Bernstein, Andreas H¨ulsing, Tanja Lange, and Ruben Niederhagen [32] Donald J. Lewis (editor), 1969 Number Theory Institute: proceedings of the 1969 _summer institute on number theory: analytic number theory, Diophantine prob-_ _lems, and algebraic number theory; held at the State University of New York at_ _Stony Brook, Stony Brook, Long Island, New York, July 7–August 1, 1969, Pro-_ ceedings of Symposia in Pure Mathematics, 20, American Mathematical Society, 1971. ISBN 0-8218-1420-6. MR 47:3286. See [37]. [33] Benjamin Lynn, Manoj Prabhakaran, Amit Sahai, Positive results and techniques _for obfuscation, in Eurocrypt 2004 [16] (2004), 20–39. Citations in this document:_ 1.2. _§_ [34] Dag Arne Osvik, Eran Tromer, Cryptologic applications of the PlayStation 3: Cell _SPEED, Workshop record of “SPEED—Software Performance Enhancement for_ [Encryption and Decryption” (2007). URL: https://hyperelliptic.org/SPEED/](https://hyperelliptic.org/SPEED/slides/Osvik_cell-speed.pdf) `slides/Osvik_cell-speed.pdf. Citations in this document:` 1.4. _§_ [35] John M. Pollard, Kangaroos, Monopoly and discrete logarithms, Journal of Cryptology 13 (2000), 437–447. Citations in this document: 4.5. _§_ [36] Ronald L. Rivest, The MD5 message-digest algorithm, RFC 1321 (1992). URL: `https://tools.ietf.org/html/rfc1321. Citations in this document:` 8.2. _§_ [37] Daniel Shanks, Class number, a theory of factorization, and genera, in [32] (1971), 415–440. MR 47:4932. Citations in this document: 4.5. _§_ [38] Joris van der Hoeven, Gr´egoire Lecerf, Guillaume Quintin, Modular SIMD arith_[metic in Mathemagix (2014). 
URL: https://arxiv.org/abs/1407.3383. Cita-](https://arxiv.org/abs/1407.3383)_ tions in this document: 7.1, 7.1. _§_ _§_ [39] Serge Vaudenay (editor), Advances in cryptology—EUROCRYPT 2006, 25th an_nual international conference on the theory and applications of cryptographic tech-_ _niques, St. Petersburg, Russia, May 28–June 1, 2006, proceedings, Lecture Notes_ in Computer Science, 4004, Springer, 2006. ISBN 3-540-34546-9. See [20]. ## A Subroutines The sha256hex function is defined as the following wrapper around Python’s ``` hashlib: import hashlib def sha256hex(input): return hashlib.sha256(input).hexdigest() ``` In other words, sha256hex returns the hexadecimal representation of the SHA256 hash of its input. The software from [6] stores nonnegative integers on disk in a self-delimiting format defined by GMP’s mpz_out_raw function (for integers that fit into 2[32] 1 _−_ bytes): a 4-byte big-endian length b precedes a b-byte big-endian integer. The following load_mpz and load_mpzarray functions parse the same format and return gmpy2 integers: ``` import struct import gmpy2 ``` ----- Bad directions in cryptographic hash functions 27 ``` def mpz_inp_raw(f): bytes = struct.unpack(’>i’,f.read(4))[0] if bytes == 0: return 0 return gmpy2.from_binary(’\x01\x01’ + f.read(bytes)[::-1]) def load_mpzarray(fn,n): f = open(fn,’rb’) result = [mpz_inp_raw(f) for i in range(n)] f.close() return result def load_mpz(fn): return load_mpzarray(fn,1)[0] ``` Integers such as w, q, the s entries, etc. are then read from files as gmpy2 integers: ``` w = load_mpz(’size’) pzt = load_mpz(’pzt’) q = load_mpz(’q’) nu = load_mpz(’nu’) s = load_mpzarray(’s_enc’,w) t = load_mpzarray(’t_enc’,w) n = w - 2 B = [[load_mpzarray(’%d.%s’ % (b,xb),w * w) for xb in [’zero’,’one’]] for b in range(n)] ``` The file names are specified by the software from [6]. The challenge announced in [7] used an older version of the software from [6], using file name x0 instead of q, so we copied x0 to q. Note that the B array is indexed 0, 1, . . ., n 1 rather _−_ than 1, 2, . . ., n. The dot function computes a dot product of two length-w vectors and reduces the result mod q: ``` def dot(L,R): return sum([L[i]*R[i] for i in range(w)]) % q ``` The solution function takes x and y(x) as input, and returns x as a string of ASCII digits if the output of the corresponding obfuscated program is 1: ``` def solution(x,y): y *= pzt y %= q if y > q - y: y -= q if y.bit_length() > q.bit_length() - nu: return ’’.join([str(xb) for xb in x]) ``` -----
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-3-319-19962-7_28?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-3-319-19962-7_28, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "CLOSED", "url": "" }
2015
[ "JournalArticle" ]
false
2015-06-29T00:00:00
[]
23712
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01ecd03808c3dbefa51e35e507ab4ccf14d7acb1
[ "Computer Science" ]
0.911738
Multipath Routing over Star Overlays for Quality of Service Enhancement in Hybrid Content Distribution Peer-to-Peer Networks
01ecd03808c3dbefa51e35e507ab4ccf14d7acb1
IEEE Access
[ { "authorId": "1701784", "name": "M. Karaata" }, { "authorId": "2148530067", "name": "Anwar Nais AlMutairi" }, { "authorId": "2532222", "name": "Shouq Alsubaihi" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://ieeexplore.ieee.org/servlet/opac?punumber=6287639" ], "id": "2633f5b2-c15c-49fe-80f5-07523e770c26", "issn": "2169-3536", "name": "IEEE Access", "type": "journal", "url": "http://www.ieee.org/publications_standards/publications/ieee_access.html" }
Content Delivery Networks (CDN’s) have emerged as a flexible and decentralized solution to maintain and transfer large volumes of data. CDN’s are distributed systems that maintain a distributed storage on a large number of servers at various locations distributed all over the world and a service network system for dissemination of content such as videos and software with high content dissemination efficiency, enhanced QoS metrics, and reduced network load. In the wake of enormous growth in live video streaming traffic on the Internet, CDN’s face challenges in meeting video traffic demands of users. As a remedy, hybrid CDN-P2P networks are being deployed to allow P2P networks to share the content delivery load of CDN’s providing the reliability and the performance of the CDN’s, and the scalability and the low cost of P2P networks. In this paper, by simulation under a realistic model, we show that multipath routing in star overlay networks achieves a high degree of load balancing, scalability, throughput enhancement, and reduces buffer requirements and network bottlenecks. As these algorithmic properties are highly desirable for hybrid CDN-P2P networks, we establish the viability of the star overlay networks as an edge network for hybrid CDN-P2P networks to meet their content delivery quality of service requirements.
Received November 15, 2021, accepted December 10, 2021, date of publication January 3, 2022, date of current version January 20, 2022.

_Digital Object Identifier 10.1109/ACCESS.2021.3139936_

# Multipath Routing Over Star Overlays for Quality of Service Enhancement in Hybrid Content Distribution Peer-to-Peer Networks

MEHMET KARAATA, ANWAR AL-MUTAIRI, AND SHOUQ ALSUBAIHI
Department of Computer Engineering, Kuwait University, Safat 13060, Kuwait
Corresponding author: Mehmet Karaata (mehmet.karaata@ku.edu.kw)
The associate editor coordinating the review of this manuscript and approving it for publication was Eyuphan Bulut.

**ABSTRACT** Content Delivery Networks (CDN’s) have emerged as a flexible and decentralized solution to maintain and transfer large volumes of data. CDN’s are distributed systems that maintain a distributed storage on a large number of servers at various locations distributed all over the world and a service network system for dissemination of content such as videos and software with high content dissemination efficiency, enhanced QoS metrics, and reduced network load. In the wake of enormous growth in live video streaming traffic on the Internet, CDN’s face challenges in meeting video traffic demands of users. As a remedy, _hybrid CDN-P2P networks_ are being deployed to allow P2P networks to share the content delivery load of CDN’s, providing the reliability and the performance of the CDN’s, and the scalability and the low cost of P2P networks. In this paper, by simulation under a realistic model, we show that multipath routing in star overlay networks achieves a high degree of load balancing, scalability, throughput enhancement, and reduces buffer requirements and network bottlenecks. As these algorithmic properties are highly desirable for hybrid CDN-P2P networks, we establish the viability of the star overlay networks as an edge network for hybrid CDN-P2P networks to meet their content delivery quality of service requirements.

**INDEX TERMS** Edge networks, hybrid CDN-P2P networks, multipath routing, overlays, star networks.

**I. INTRODUCTION**

_Content Delivery Networks (CDN’s)_ have emerged as a flexible and decentralized solution to transfer large volumes of data, primarily for video-on-demand, personal live streaming, software download and DDOS protection [1], [2]. CDN’s are distributed systems that maintain a distributed storage on a large number of servers at various locations distributed all over the world and a service network system for dissemination of content such as videos and software with high content dissemination efficiency, enhanced QoS metrics for end-users, and reduced network load. CDN’s have been proposed by the Internet Engineering Task Force (IETF) [3] as a content network to cope with the enormously growing demand for video and content distribution. CDN’s benefit not only the end users, but also the content providers and the Internet service providers (ISP’s) who deploy CDN servers in their networks [4]. With CDN’s, the end users experience higher QoS as the download latency and the bandwidth are improved, where the bandwidth refers to the maximum rate or amount of data transfer between two endpoints in a given amount of time. In addition, with CDN’s, the content providers can offer larger volumes of reliable services, and the ISP’s enjoy reduced traffic on their backbone servers. The professionally managed and geographically distributed infrastructure of CDN’s is highly reliable, available and provides high quality service.
However, CDN’s require considerable investments for deployment, scaling up, and management of geographically distributed servers [5]. In the wake of enormous growth in live video streaming traffic on the Internet, CDN’s face challenges in meeting video traffic demands of users. As a remedy, _hybrid CDN-P2P networks_ are being deployed to allow P2P networks to share the content delivery load of CDN’s, providing the reliability and the performance of the CDN’s, and the scalability and the low cost of P2P networks [6]. In such a network, each peer may select one of the closest CDN edge servers to receive content available in the CDN, and this edge server is considered as a peer in the P2P network. In a hybrid CDN-P2P network, whenever there is sufficient network and storage capacity in the P2P network component, peers distribute shares of content among themselves using techniques such as centrally managed swarming [7]. Upon a content request by a user, if there are peers near the user with free upload capacity to deliver the content while maintaining the expected quality, the user is served by the peers; otherwise, users are served directly from the CDN servers.

Huang and Zhang [8] present a feasibility study of a novel peer-to-peer architecture for live video streaming. The proposed architecture manages a P2P overlay to deliver audio/video streams through the use of online social networks to retrieve user information and relationships between them in order to improve overlay and stream management. However, their proposal does not use a hybrid CDN-P2P architecture, a specific overlay topology, or multipaths. Commercial hybrid peer-to-peer video delivery systems such as _CDN Mesh Delivery_ and _Peer5_ exist, providing media delivery with improved performance, increased reliability and expanded reach for broadcasters, while delivering a more reliable and more scalable service to end users by intelligently multi-sourcing video delivery from both the CDN and a P2P network of end users [9]–[11].

Hybrid CDN-P2P networks have recently emerged as an economically viable alternative to traditional content delivery networks. The feasibility studies conducted by several large content providers suggested a remarkable potential for hybrid CDN-P2P networks to reduce the burden of user requests on content delivery servers [12]. Subsequently, several commercial hybrid CDN-P2P network deployments have been introduced [12]. However, there are numerous commercial and technical challenges that negatively affect the prospects of industrial hybrid CDN-P2P solutions. In order to enhance the content distribution services, approaches such as hybrid CDN-P2P networks have been designed and studied to allow content distribution to scale or adapt to the bandwidth of data transfer. A hybrid CDN-P2P network requires all potential parallel paths in its P2P component to be discovered and utilized depending on demand and load-related parameters. Additional challenges include the reliability, availability and scalability related issues of peer-to-peer edge networks, the lack of incentive mechanisms for peer participation, and copyright issues.

The reliability issues related to the P2P edge network stem from insufficient bandwidth, lack of the required degree of _network throughput_, _load balancing_, buffering issues, and the presence of _network bottlenecks_. Network throughput refers to the amount of data transferred in the network during an interval, while load balancing refers to the even distribution of messages among the peers in the routing process.
Network bottlenecks refer to the limitations of some network resources, such as buffers at peers and channel capacities, that limit the network capacity to transfer content in a timely manner. In addition, a hybrid CDN-P2P network cannot cope with flash crowd content and heavy content demand. Research has shown that viewers are not patient enough to wait if the start-up delay is longer than a few seconds [13]. Measurements given in [14] also confirm that users very often suffer from video re-buffering or more than five seconds of start-up delay. As a result, users tend to drop videos if they frequently stop, freeze, or experience quality changes during the service period [15].

The massive volume of content traffic due to the growth in mobile Internet, computer networks, ultra-high definition videos, and user generated content presents unsurpassed challenges to CDN’s. To cope with this enormous content demand, network service and content providers take advantage of CDN’s, as they are widely regarded as a viable approach to successfully and efficiently manage content traffic. Nevertheless, the efficiency and other quality metrics of the major available methods for content routing are insufficient to meet the current demand. For that purpose, sophisticated content access and dissemination approaches, particularly multimedia streaming, utilize multipaths to provide the content with the expected quality by increasing network bandwidth and reducing network congestion and latency.

It is known that most of the challenges related to service quality can be met through the appropriate selection of an _overlay structure_ providing a sufficient number of multipaths between communication endpoints. A peer-to-peer _overlay network_ is a virtual or logical network of overlay peers connected by virtual or logical links and constructed on top of a physical network called the underlay. An ideal overlay network with an appropriate number of multipaths between communication endpoints increases the network bandwidth while evenly balancing the network load among peers and links of the network, reduces network bottlenecks, increases system throughput, and provides fair service to users. For instance, star overlay networks [16] and their variations [17] provide a large number of parallel paths, a small graph diameter, a scalable lookup service for the peers participating in Peer-to-Peer (P2P) networks, and a small degree compared to conventional hypercubic DHTs such as Chord and Kademlia.

Existing implementations of hybrid CDN-P2P networks have the following shortcomings that limit their reliability, availability and quality of service. First, default best-effort Internet routing results in the absence of end-to-end QoS. Second, existing routing algorithms primarily focus on router and link factors on a single path and thus do not effectively utilize the available network through the use of multipaths. Third, routing policies primarily focus on local knowledge in an individual autonomous system, lacking a network-wide view of topology or traffic to optimize routing with respect to load balancing, throughput, bandwidth, and delay requirements. Fourth, existing hybrid CDN-P2P overlay topologies provide no multipaths or only a limited number of multiple disjoint paths between endpoints that can be readily utilized for bandwidth enhancement or load balancing in P2P networks.
Fifth, high-definition video streaming and other forms of content delivery do not scale well to support a large number of end-users, and achieving scalability is very hard since the communication cost and the load of some servers may be extremely high when the number of users is large [18]. In addition, the growing demand for media streaming and other content distribution applications has led to more stringent quality of service requirements, including high bandwidth and highly reliable and scalable service, many of which depend on the load balancing and multipath routing ability of the routing algorithms [19]. Hybrid CDN-P2P networks have been claimed to meet some of these challenges, where the P2P component can facilitate the scalability, bandwidth enlargement and the low cost, distributes the system’s load to all participants, handles flash crowds and reduces the load on CDN servers, while the CDN component ensures reliable and high-quality service. However, the usage of multipath routing algorithms and of overlay networks with a large number of multipaths, such as star networks, for hybrid CDN-P2P networks in real world applications to address the above shortcomings has been neither proposed nor evaluated.

As a remedy, in this paper, by simulation under a realistic model, we show that multipath routing in star overlay networks achieves a high degree of load balancing, scalability, throughput enhancement, and reduces buffer requirements and network bottlenecks. As these algorithmic properties are highly desirable for hybrid CDN-P2P networks, we establish the viability of the star overlay networks as an edge network for hybrid CDN-P2P networks to meet their content delivery quality of service requirements. In particular, we simulated the multipath routing algorithm of Karaata and Alsulaiman [16] under a realistic model including the essential aspects that are not considered in [16] for a practical implementation of the algorithm, such as buffer requirements of peers, limiting channel capacities, concurrent transmission of multiple content from multiple sources, pipelining/interleaving of multiple messages over the same set of multipaths between two endpoints, and message drops.

Through our simulation, we established the following. First, our simulation results demonstrate that the star overlay with multipath routing balances network load irrespective of the network size and demand. Second, we show that as the content delivery demand increases, network throughput linearly increases. This demonstrates that the star overlay networks have sufficiently many multipaths between all pairs of endpoints whose utilization allows network throughput to increase significantly. Third, the experiments show that the star overlay networks do not require larger buffer sizes as the throughput increases, for small and large size networks. This demonstrates the high degree of scalability of the overlay network with the multipath routing for hybrid CDN-P2P networks. The same experiments also show that the overlay with the multipath routing does not lead to network bottlenecks. Fourth, our simulation results show that the star overlay networks with the multipath routing algorithm of [16] deliver content from a source to a destination peer over multiple overlay paths in at most D(Sn) + 4 cycles/rounds.

The rest of the paper is organized as follows. P2P overlay networks and multipath routing are presented in Section II, providing the required background and terminology.
Section III presents a brief overview of the inherently-stabilizing routing algorithm for star P2P overlay networks [16] that is simulated and evaluated in this paper. The network simulation model is described in Section IV. Section V presents the simulation results related to the message propagation delay, network throughput, buffer requirements, load balancing and scalability of the inherently self-stabilizing multipath routing protocol for hybrid CDN-P2P networks. Section VI concludes the paper and features some future research directions.

**II. PRELIMINARIES**

_A. P2P OVERLAY NETWORKS_

CDN architectures often rely on virtual overlay networks constructed on the generic IP protocol to solve performance problems related to network congestion and to improve web content accessibility in a cost-effective manner [4], [20]. The primary purpose of a P2P component in a hybrid CDN-P2P network is collaboration among peers to facilitate sharing resources and services to enhance the combined network. The quality of sharing of services and resources heavily relies on the available network and the routing protocols that facilitate peer-to-peer communication. Routing protocols often enhance a peer-to-peer network by increasing the network bandwidth, eliminating network bottlenecks through load balancing, and reducing message propagation delays.

Peer-to-peer (P2P) overlay networks were initially devised for file sharing; however, they have since become popular for content sharing, media streaming, and telephony applications such as the P2PTV and PDTP protocols. Numerous other widely used P2P applications also exist. For instance, some proprietary multimedia applications use a peer-to-peer network along with streaming servers to stream audio and video to their clients. Bitcoin and alternatives such as Ether, Nxt and Peercoin are all peer-to-peer-based digital cryptocurrencies [21]–[23]. Dalesa is a peer-to-peer web cache for LANs based on IP multicasting [24]. P2P-based search engines such as FAROO also exist [25]. Filecoin is a P2P-based open source, public cryptocurrency and digital payment system intended to be a blockchain-based cooperative digital storage and data retrieval method [26]. I2P is another P2P-based application built over an overlay network to browse the Internet anonymously [27].

_B. MULTIPATH ROUTING_

There are two types of routing protocols used for the collaboration among peers, namely _single path routing_ and _multipath routing_. In single path routing, throughout the session for sharing resources between peers, a single path is used between the sender and the receiver peers. When a single path is used by the routing algorithms, other potential paths between the communicating peers are neither constructed nor utilized to enhance communication. This does not allow single path routing to significantly widen network bandwidth, avoid network bottlenecks, balance the network load, or reduce propagation delays. In multipath routing, by contrast, the same message is split into multiple shares and sent simultaneously over multiple paths established between a pair of peers.
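As a concrete illustration of this share-splitting idea (a minimal hypothetical sketch, not code from any of the cited systems), the following Python fragment splits a message into n − 1 interleaved shares, one per disjoint path, and reassembles them at the destination:

```
def split_message(msg: bytes, n: int) -> list:
    # one share per disjoint path in an n-dimensional star: k = n - 1 shares
    k = n - 1
    return [msg[i::k] for i in range(k)]

def reassemble(shares: list) -> bytes:
    # interleave the shares back into the original byte order
    out = bytearray()
    for i in range(max(len(s) for s in shares)):
        for s in shares:
            if i < len(s):
                out.append(s[i])
    return bytes(out)

msg = b"a block of content"
assert reassemble(split_message(msg, 5)) == msg
```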
Usage of multipath routing clearly enhances the communication bandwidth between the peers by using bandwidth facilitated by the available multipaths, reduces the message propagation delays of large size messages as message shares sent simultaneously over multiple paths requires less propagation delay compared to those sent in sequence over a single path, becomes more tolerant to network failures than traditional single path approaches and improves the security of message transmission, balances network load and reduces network bottlenecks caused by heavy usage of limited network bandwidth provisioned by a single path routing. Load balancing is a very desirable feature since it promotes availability, scalability and reduces the occurrence of bottlenecks in the overlay. Availability means that the network is available as it is operating correctly at any given time while scalability means being able to handle the growth in size and the increase in future load. Multipath routing is already used in various networks. For example, Named Data Networks (NDN’s) inherently provide a flexible forwarding plane for multi-source and multipath communications [28]. In NDN’s, hosts utilize multipaths to obtain data from multiple content providers via multiple paths, which is different from IP multipath routing [29]. In VANET’s, multihop and multipath routing exploiting several paths is proposed to achieve faster content retrieval [30]. Content delivery networks also utilize multipaths in multipath pre-caching mechanisms in which the edge server would parse the requested content and then distribute requests to other edge servers to download content from the origin server simultaneously for accelerating the download speed [31]. In [32], authors propose a video delivery system involving CDN’s that use bandwidth aggregation of multiple ISP’s simultaneously via multipath content delivery. The paper suggests that the multipath approach increases the average quality of service at the expense of ISP’s that experience disproportional congestion increases under heavy load because multipath approach is able to scrounge the last bits of available bandwidth on every ISP reducing the number of served requests. **III. INHERENTLY-STABILIZING MULTIPATH ROUTING** **ALGORITHM FOR STAR P2P OVERLAY NETWORKS** In this section, we present a brief overview of the inherentlystabilizing routing algorithm for star P2P overlay networks [16] that is simulated and evaluated in this paper. The algorithm proposed by Karaata et al. is for routing messages over all disjoints paths between two peers in a star P2P overlay network. In an n-dimensional star network, the algorithm is capable of routing up to n 1 message shares simulta− neously. The algorithm is optimal in terms of the length of the disjoint paths. Due to being inherently-stabilizing, the algorithm can autonomously start in any state and can always recover from transient faults. A transient fault refers to a fault that perturbs the state of a process but not its program. In addition, as the algorithm is inherently self-stabilizing, faults perturbing variables of the system are masked and thus the execution of the algorithm is not affected by arbitrary initialization and transient faults. The simulation model of the inherently-stabilizing routing algorithm is built for an undirected n-dimensional star graph network Sn = (V _, E) where V is the set of n! vertices each_ of which corresponds to a peer in the peer to peer network such that each permutation of symbols 1, 2, 3, . . 
makes up the id of a distinct vertex, while E is the set of symmetric edges. Each vertex has n − 1 neighbors connected through distinct edges. Two nodes are connected by an edge iff the id of one can be obtained from the other by interchanging its first symbol with any other symbol. For example, the i-th neighbor v of s refers to the neighbor of peer s whose id is obtained by swapping the symbols at positions 1 and i of s. Thus, the number of edges in Sn is given by L = (n − 1) · n!/2 [33].

_A. THE ROUTING PROTOCOL MODEL AND INTERFACE_

Since an n-dimensional star graph is used, where there exist n − 1 disjoint paths between any pair of vertices, a message can be transferred between a source peer and a destination peer using n − 1 disjoint paths; hence, each message M to be transferred is split into n − 1 message shares, i.e., M = m0, m1, m2, . . ., m(n−2). A protocol called the _application protocol_ is assumed to exist at each peer that sends messages from a source peer to a destination peer using the node-disjoint paths algorithm over all node-disjoint paths.

To implement the interface between the application protocol and the node-disjoint algorithm at each peer, the algorithm maintains two implicit buffers for each peer, namely, the _implicit input buffer_ and the _implicit output buffer_. When the application protocol at peer s wants to send message M to destination peer d, it places both the message and the destination id d in the implicit input buffer of the peer. Subsequently, upon discovering message M in its input buffer, the routing algorithm at peer s receives message M by removing the input from the input buffer of s. The routing algorithm later uses action output(m) to place each message share m in the output buffer of d to make it available to the application protocol at destination d. It is assumed that between the execution of two output actions, the application protocol removes the content of the output buffer. As each peer contains both the input and output buffers, the algorithm allows each peer to act as a source peer or a destination peer. At any point in time, the input buffer contains at most a single sequence of n − 1 message shares and a single destination id, while the output buffer contains at most a single message share.

Each peer also contains an implicit routing buffer that is used in routing the input message share by the peers on the path from the source peer to the destination peer. This buffer holds at most a single share of each input message with the destination id, and the distinguishing position lsp (last swap position) that holds the first symbol of the destination process to ensure node-disjointness. The algorithm assumes an asynchronous message passing model where a message share moves between neighboring peer buffers after an arbitrary but finite propagation delay.
The message shares are then routed between pairwise mapped neighboring peers of s and d over node-disjoint paths. When a message share m is received by a neighbor of peer d, it is sent to destination d. To ensure that all the paths between pairwise mapped neighbors of s and d are disjoint, the algorithm employs the method given next. In the routing process, to rout message share m from peer _v to a neighboring peer, the first symbol v[1] is swapped_ with another symbol v[j], where 2 < j ≤ _n, to determine_ the id of the neighboring peer to send m. Recall that the id of peer v ∈ _Vn is a permutation over 1,2,3,...,n where v[i]_ denotes the i[th] symbol of v and 1 _i_ _n. The schemes for_ ≤ ≤ determining the value of the swap position j by the source peer s and the other peers differ. Source peer s first splits the input message into n 1 message shares to sent to n 1 − − neighbors. Subsequently, for each message share mi, s swaps _s[1] with distinct symbol s[i] to determine the neighbor to_ send message share mi, where 1 < i ≤ _n. For each message_ share mi, peer s also determines a distinct position lsp (last swap position) to send along with mi to the i[th] neighbor, where 1 < lsp ≤ _n. Once the i[th]_ neighbor of s receives the message share mi, it places d[1] in position lsp, if not already there, and maintain it there until the last swap. This serves two purposes. First, as d[1] is placed and kept in distinct position lsp for each path, process id’s on each path are distinct from those on other paths leading to the construction of n 1 node− disjoint paths. Second, for the same reason, neighbor w of d can be reached such that d is obtained by swapping w[1] and _w[lsp]. Therefore, lsp of mi determines the neighbor w of d_ that will receive mi. In order to place d[1] in position lsp, peer _v that receives message share first places d[1] in position v[1]_ by swapping v[1] with the position in v that holds the value of d[1]. Then, d[1], which is now stored in v[1], is swapped with symbol v[lsp]. Once d[1] is placed in position lsp on all paths, each peer v on the constructed paths determines the id of the next peer by swapping symbols in position v[1] and v[k] where k denotes the position of v[1] in d, that is, v[1] _d[k]._ = Note that this swapping is only done when v[1] _d[lsp],_ ̸= otherwise, v[1] is swapped with an unsorted position instead of lsp to keep d[1] in position lsp. The i[th] symbol of v is said to be sorted if v[i] _d[i]; unsorted, otherwise. This_ = swap is repeated until reaching a neighbor w of d which completes the routing peer by swapping position w[1] with w[lsp] to reach d. The proposed inherently-stabilizing routing algorithm in [16] merely provides a distributed algorithm for routing a single message over multipaths in star overlay networks with desirable features such as inherent stabilization and stabilization. In [16], an abstract model is assumed where essential details for a practical implementation of the algorithm under a realistic model such as buffer requirements of peers limiting channel capacities, concurrent transmission of multiple messages from multiple sources, pipelining/interleaving of multiple messages over the same set of multipaths, and message _drops are not considered. In addition, each peer is assumed_ to have a single input and a single routing buffer which are relaxed in our paper. A message drop refers to an event in which the message share arrives at a peer whose buffer is full. 
In addition, the experimental work to show that the algorithm is correct and it improves throughput, increases bandwidth, and achieves load balancing in P2P networks are not included in the scope of the paper. Instead, through theoretical proofs of the algorithm, its desirable features and its time complexity bound are given. Furthermore, the appropriateness of the algorithm for hybrid CDN-P2P networks is not considered. In the rest of the paper, we consider all these practical aspects of the algorithm and show its viability for hybrid CDN-P2P networks. **IV. NETWORK SIMULATION MODEL** In Section 3, we presented the system model assumed in [38]. In this section, we present a variation of the above model to make it practical for hybrid CDN-P2P networks. [16] merely ----- **TABLE 1. Simulation model parameters.** provided an algorithm for routing between a single pair of peers over disjoint paths and some theoretical analysis along with the correctness proof of the algorithm. In contrast, the simulation in this paper carried out the presence of multiple sources and destinations allowing multiple concurrent message routing over all node-disjoint paths in a pipelined manner using PeerSim simulator for varying sizes and dimensions of star overlay graphs, demand, and buffer sizes per peer. PeerSim [34] is an open source P2P systems simulator developed in Java at the Department of Computer Science, University of Bologna. It is designed as a scalable and dynamic simulator for large P2P networks as it aims to cope with P2P system properties and allows the user to replace its predefined entities by the user-entities. It supports two models of simulation: cycle -based and event -based, and can simulate both structured and unstructured overlays. In the cycle-based model, in each cycle, a peer is randomly selected and its protocol is executed. Whereas in the event -based model, nodal protocols are executed according to the message delivery time order [35]. Due to its scalability, support for cycle-driven simulation and star networks, accuracy, provisions for construction, execution, and data collection aspects of the simulation, we selected the PeerSim simulator. The simulation proposed in this paper uses the star graph topology as in the inherently-stabilizing multipath routing algorithm for star P2P overlay networks introduced in [16]. We consider a star network consisting of a collection of peers that communicate through message exchange. Each peer is uniquely identified by an id, connected with its neighbours by bidirectional communication channels corresponding to edges in the star network, and runs the inherently-stabilizing multipath routing algorithm. The network is static; new peers cannot join a network, and existing peers may not leave or crash. Byzantine behaviour is not considered. Table 1 presents the model parameters related to the routing algorithm used in our simulation. For the purpose of simulation, a cycle-driven model is assumed for the message routing, i.e., the simulation executes its steps in regular time intervals in which each step performed to complete the execution is referred to as a cycle. In each cycle of the simulation, each peer carries out the following two actions. First, if the peer’s input buffer contains a message, the message is split into n-1 message shares where each message share is sent to a distinct neighbour. Second, each message share sent to a particular peer in the previous cycle is made available in the routing buffer of the recipient peer. 
Each peer maintains a routing buffer of a fixed size to store the received messages. In case a buffer element is not available, upon receipt of a message, message drop occurs. Then, each message share in the routing buffer is sent to the neighbour decided as per the routing parameters as descried in Table 1. Communication may incur unit time delays as a result of using the cycle-based simulation, and is not subject to any form of failures. No message shares may be lost; links between pairs of peers are always operational; and the integrity of messages is always maintained. Each system channel between two peers is assumed to have unit capacity and in the current cycle of the simulation, it can deliver a message share sent in the previous cycle. In the beginning of each simulation cycle, a new set of input messages is randomly assigned to source peers to be sent to randomly selected destination peers, where each set of input messages consists of messages. For each message, _T_ the destination peer is distinct from the source peer however a destination peer may be common for more than one source and a source peer may receive more than one message. In the first cycle of the simulation, each input message assigned to a source is split into n-1 message shares and each share is placed in the routing buffers of the source’s neighbours as described by the algorithm. Subsequently, in the second cycle, while message shares are forwarded to other peers by the neighbours of the sources, a new set of input messages are assigned to new set of randomly selected source peers then distributed to their sources’ neighbours, and so on. The rest of the steps for routing the messages is performed as described in Section III In our simulation, this process is repeated in the first 21 cycles of the simulation where one new set of input messages are sent in each cycle. Therefore, a total of 21* input messages are fed to the simulator. In 21 _T_ + (D (Sn) + 4) cycles, the routing of all the input messages is completed since the last set of input messages is added in the 21[th] cycle of the simulation. Figure 2 summarizes the simulation process. In our simulation, the largest diameter D(Sn) of the networks we consider is 9. Therefore, it takes at most 13 cycles for each message to reach its destination. Recall that each message takes at most D(Sn) + 4 rounds to be routed. ----- **FIGURE 2. Simulation process.** Observe that in the first 13 rounds after the simulation starts, messages sent in a pipelined manner do not occupy all the multipath channels/processes provided that sufficiently many messages are sent in each cycle. On the other hand after the 13[th] cycle, all/most channels and peers can be occupied by messages which show the real throughput capacity of the network. Therefore, we had to choose more than 13 cycles of simulation. We chose 21 cycles to experiment the network, to observe the network where peers and channels on parallel paths are fully or mostly occupied for sufficiently many cycles, 8 in this case. Also observe that if sufficiently many, one or nearly one, messages is not sent in each cycle from each source, all channels and parallel paths cannot be kept busy to show the real throughput of the network. Therefore, we experimented with number of sources between 2000 and 5000 where _T_ the maximum network size is 5040 which provide sufficient number of message to keep nearly all network channels busy. 
Each performance evaluation experiment is simulated after repeating the simulation 20 times with dynamically and randomly selected source peers and destination peers. The average values of these repetitions are computed and shown, and individual simulation results for each experiment are shown whenever possible. **V. SIMULATION RESULTS** In this section, we present our simulation results related to message propagation delay, network throughput, buffer requirements, load balancing and scalability of the inherently-stabilizing multipath routing protocol for hybrid CDN-P2P networks. A cycle-based PeerSim simulator was used to evaluate these properties. To the best of our knowledge, no papers have used a cycle-based PeerSim simulation. PeerSim [34] is an open source P2P systems simulator developed in Java. To build a simulator, the user has to construct a network of peers; write protocols that represent the actions each peer will perform; choose a control to monitor the ----- properties and modify the parameters of the network; run the simulation; then collect data. _A. NETWORK THROUGHPUT_ In this section, we present the experimental results related to network throughput. Throughput is a fundamental service quality measure of CDN’s due to being an important indicator of the quality of the network performance. The throughput of a network increases as the network load increases provided that channel capacities available across the network are exploited, network load is evenly distributed across network channels and peers, and network bottlenecks are eliminated. In our simulation, we examined the effect of the network size and the number of source peers on the throughput. Therefore, we considered these two factors independently in two separated experiments. First, we observed the change in throughput as the network size is increased while the number of source peers is kept fixed for both single and multipath usage. Second, we varied the number of source peers and examined the change in throughput while the network size is kept fixed for both single and multipath usage. The throughput in the simulation is measured in bits per cycle and the buffer size for peers is of unlimited size for simulation purposes to avoid any message drop. To measure system throughput for various network sizes, we used 5000 source peers and ran the simulation for networks of dimensions n = {4, 5, 6, 7}, where for each network size the simulation was repeated 20 times and average results were collected. Figure 3 shows the result of the simulation where the x-axis represents network size, and the y-axis represents the throughput measured for the associated network size for both single and multipath usage. In the figure, it can be seen that the throughput gradually increases as the network size increases. As seen in Figure 3, the throughput is increased by 314% for the single path routing and 331% for the multipath routing when changing the network form size 24 (n 4) to size 120 and increased by 410% for the single = path routing and 481% for the multipath routing from network size 120 (n 5) to network size 720 (n 6). As mentioned in = = Section III, the size of an n-dimensional network is given by n . For example, if n 5, the network size is given ! = by 5 120. On the other hand, the ratio of increase in ! = throughput between network size 720 to network size 5040 (n 7) is only 323% for the single path routing and 434% = for the multipath routing. 
The significant improvement in throughput when the network size is increased is attributed to the following reasons. As the network size increases, the number of disjoint paths between peers in a star graph significantly increases which in turn improves throughput dramatically. On the other hand, as the network size increases, the number of available multipaths also increases which in turn increases the system throughput since additional paths can enlarge the communication bandwidth between communicating end points using the additional available disjoint paths. The throughput increase appears to be exponential with respect to the network dimension. This can be attributed to exponential increase in number of available disjoint paths between endpoints in star graphs. Observe that the throughput increase is slightly less for single path routing compared to that of multipath routing. This is attributed to the following. First, single path routing does not utilize all the available paths. Second, multipath routing leads to better load balancing and more congestion in some peers. Also observe that the throughput difference between multipath and single path routing widens as the network size grows. This is attributed to the avaliability of significantly more multipaths in larger networks that can not be exploited by single path routing. Hence, the star overlay networks have sufficiently many multipaths between all pairs of endpoints, utilizing the multipaths improves network throughput significantly. To measure the system throughput for varying number of source peers, we used the network size of 5040 (dimension of n 7) and run the simulation for varying number of source = peers of 500, 1000, 1500, ..., 4000 as shown in Figure 4, where for each number of source peers the simulation was repeated 20 times and average results were collected. The x-axis in Figure 4 represents the number of source peers while the y-axis represents the throughput. It can be seen in Figure 4 that the network throughput increases linearly to the number of source peers for a fixed network size (5040). Our simulation results show that as the number of concurrent message transmissions (number of sources) increases, the amount of bits transferred per cycle also increases resulting in increased throughput. The increase in the throughput is achieved by the available network bandwidth between pairs of peers in star overlay networks provided by the large number of nodedisjoint paths between them and load balancing of the content delivery in the network. Since we assumed unlimited buffer elements and no message drops are experienced as the throughput is increased in our experiments, we can conclude that the algorithm does not lead to bottlenecks and is scalable. We repeated the abovegiven experiment for single path routing and observed that multipath routing provides significantly better throughput regardless of the network size and the number of sources as shown in Figures 3 and 4. Figure 3 shows that as the network size increases, since the number of multipaths and the bandwidth increase, the throughput increases for the same demand. Figure 4 shows the throughput for various demand for the network size of 5040. It is easy to see that although the multipath routing yeilds significantly more throughput due to reducing congestion, as the number of sources (demand) increases, the throughput does not increase at the same rate for single path and multipath routing. 
This is attributed to reaching, for both routing schemes, the level of congestion that does not allow the network bandwidth to be further increased. This result clearly establishes the viability of star overlay networks for hybrid CDN-P2P networks, since the star overlay networks meet the significant bandwidth and throughput requirements of hybrid CDN-P2P networks.

**FIGURE 3. The throughput compared to the network size for a fixed number of source peers (5000).**

Results given in Figures 3 and 4 show that multipath routing in star overlay networks provides significant network throughput for hybrid CDN-P2P networks.

_B. BUFFER REQUIREMENTS_

In this section, we estimate the buffer requirements for routing messages using the inherently-stabilizing routing algorithm [16]. In computer networks, a buffer is a physical memory used by the network components to temporarily store an amount of data while it is being transferred from one component to another. In our simulation, we estimated the buffer requirements of network peers for varying network sizes, demand (number of source peers), and single path and multipath routing separately. We assume that each buffer element is capable of holding a single message share in the routing process.

First, the algorithm was simulated for each of the network dimensions n = {3, 4, 5, 6, 7} to show the effect of the network size on the buffer size requirement when using multipath routing. For each network size, the algorithm was simulated while increasing the buffer sizes at each run until finding the minimum buffer size at which the algorithm never experiences any message drops, and the results are shown in Figure 5. In all simulations, the number of source peers was fixed to T = 2000. Through our initial simulations, we discovered that small scale networks require roughly on the order of ten times more buffer elements, and that when the buffer size is increased by 100, we are able to find the buffer requirements in reasonably many simulation experiments. Similarly, we discovered that for large scale networks, when the buffer size is increased by 10, we are able to find the buffer size requirements in reasonably many simulation experiments. Therefore, in each run, for small scale networks with n < 6, the buffer size is increased by 100, whereas for large scale networks with n ≥ 6, the buffer size is increased by 10.

As we increase the buffer sizes, we observe the effect of the buffer sizes on network throughput. When the buffer sizes are insufficient, the expected throughput cannot be obtained due to message drops. However, when the buffers reach sufficient sizes, additional buffer size increases do not lead to throughput increases. Accordingly, at the end of each simulation run, the throughput is calculated for larger buffer sizes until the network no longer experiences any message drops. The throughput is calculated only for the message shares that successfully reached the destination. The smallest buffer size that provides the maximum throughput is considered as the suitable buffer size. For example, for a network of size 5040 (n = 7), we ran the simulation first using a buffer size of 10, then calculated the throughput at the end of the simulation using the message shares that successfully reached the destination. Then, we ran the simulation again using a buffer size of 20 and calculated the throughput.
In the next simulation, we used a buffer size of 30, and so on, until we obtained 5 simulations that have the same throughput, and we consider the minimum buffer size which no longer improves throughput as the required buffer size for the network of size 5040. It can be observed that for these simulations, where the throughput no longer increases although the buffer sizes are increasing, the network does not experience any message drops.

**FIGURE 4. The throughput versus the number of source peers for a fixed network size of 5040.**

As explained in Section IV, each simulation is fed with 21·T input messages. Therefore, a total of 21·2000 input messages are used in each simulation run. Figures 5 (a, b) present the results of our simulation, where the x-axis represents the buffer sizes and the y-axis represents the network throughput. As shown in Figure 5 (b), for the network size of 720, the throughput linearly increases from buffer size 50 to 800, and once the buffer size of 800 is reached, the throughput remains the same since no message drop takes place. Hence, the suitable buffer size for a network of size 720 with 2000 sources is found to be 800. Figure 5 also shows that when sufficient buffers are available, the system throughput cannot be increased beyond a certain point for each network size. This is due to the full utilization of all available multipaths and the unavailability of additional multipaths to increase the throughput further. It can be observed that when the network size is increased, more multipaths become available and the network throughput increases.

It can be observed from Figure 5 that the increase in buffer and network sizes increases system throughput. In addition, Figure 5 clearly shows that when the network size increases, the buffer requirements decrease for the same number of sources. This is due to the routing of a smaller number of messages per peer in the routing process. This also shows that load balancing is achieved by the algorithm. It can be seen that as we increase the buffer size, the throughput of a network increases until it becomes stable at some point.

The simulations to obtain Figure 5 were repeated using various numbers of source peers T = {2000, 3000, 4000, 5000}, and the results are shown in Figure 6. Figure 6 depicts the effect of the network size on the peers’ buffer sizes, where the x-axis represents the network size while the y-axis represents the buffer size. The buffer sizes shown are the buffer sizes that do not cause any message drop for the network size under consideration, obtained through repeated experimentation where buffer sizes are gradually increased to find the buffer size sufficient to prevent message drops. It can be concluded from the graph that for a fixed number of source peers, the buffer size linearly decreases as the network size increases. It is observed that as the network size increases, more peers are involved in the routing of input messages, and therefore a reduced buffer size is required as the number of message shares routed per peer reduces. It can be seen that for any network size greater than 720, as long as we are using a buffer of size 800, the algorithm does not experience message drops.

Second, to show the effect of the number of sources on the buffer size required for each peer, the algorithm was simulated using different numbers of source peers T = {2000, 3000, 4000, 5000} while keeping the same network size.
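The buffer-size search used in both of these experiments can be summarized in a few lines. The sketch below is hypothetical code, with run_simulation standing in for one full simulation at a given buffer size; it grows the buffer in fixed steps until five consecutive runs no longer improve throughput, mirroring the stopping rule described above:

```
def find_buffer_size(run_simulation, step, stable_runs=5):
    # Grow the buffer by `step` per run; stop after `stable_runs` consecutive
    # runs without a throughput improvement (no message drops remain).
    size, best, streak = step, float('-inf'), 0
    while streak < stable_runs:
        throughput = run_simulation(buffer_size=size)
        if throughput > best:
            best, streak = throughput, 0
        else:
            streak += 1
        size += step
    # the last size that still improved throughput is the suitable buffer size
    return size - (stable_runs + 1) * step
```

Under this stopping rule, the search for the network of size 720 with 2000 sources would settle at the buffer size of 800 reported above.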
For each number of sources, the simulation was run while increasing the buffer sizes for each run until reaching the suitable buffer size which causes no message drops. These simulation steps were repeated for different network sizes and the results are shown in Figure 7. In Figure 7, the x-axis represents the number of source peers while the y-axis represents the buffer size. It can be seen that for a fixed network size, the buffer size increases linearly as the number of source peers increases. It can clearly be seen that for a sufficiently large network size (5000 peers), very small, nearly constant, buffer sizes are sufficient even in the presence of high demand (a large number of sources in our experiment). This verifies the viability of star overlay networks for hybrid CDN-P2P networks in terms of buffer requirements under heavy demand.

**FIGURE 5. Throughput versus buffer size and number of sources=2000.**

From Figures 6 and 7, it can also be seen that for small scale networks of n = 4 or 5, as the number of source peers (the number of input messages) increases, each peer requires larger buffers in order to avoid message drops. The required buffer size for the small scale networks is 95% larger than the size required for large scale networks. In the figures, it can clearly be seen that when the network and the buffer sizes are increased, since the number of multipaths is increased and message drops decrease, system throughput is increased. It can be observed that when the network size is increased for the same demand and the same buffer size, message drops decrease, as shown in Figure 5. From this observation, it can be concluded that the algorithm achieves a high degree of load balancing, leading to reduced buffer requirements for larger network sizes for the same demand for single path routing. It is easy to see in Figure 9 that the buffer requirements are higher for multipath routing than those of single path routing for varying network sizes and demand, for the same reason as discussed earlier.

Figure 5 captures the effect of the buffer size on network throughput where, due to insufficient buffer size, message drops occur for smaller buffer sizes. As a result, maximum throughput cannot be obtained. However, when buffer sizes are increased to a level where they are sufficient, additional buffer size increases do not lead to throughput increases. It can be seen that for network size 720, the throughput linearly increases from buffer size 50 to 800, and once the buffer size of 800 is reached, the throughput remains the same since no message drop takes place. Hence, the suitable buffer size for a network of size 720 with 2000 sources is found to be 800. Figure 6 shows the buffer requirements for various network sizes and 5000 sources, for both single path routing and multipath routing. The buffer requirements for multipath routing are slightly more than those of single path routing, since the throughput for multipath routing is significantly more, and it is natural that when more messages are routed per cycle, the buffer requirements increase. Figure 7 shows the buffer requirements for various numbers of sources and various network sizes for single path and multipath routing.
This is attributed to a decrease in network congestion as the network size grows for the same network size. _C. LOAD BALANCING AND SCALABILITY_ Load balancing is desirable in CDN’s as distributing the network load among various CDN components improves the resource utilization and the response time while eliminating network bottlenecks. Load balancing also helps avoiding heavy load in some network components while others are idle or have significantly less load. Therefore, a good distribution of the network load means a faster response to the end users requests. Many modern applications such as online gaming, video streaming, and etc. often generate heavy network traffic that cannot run without proper load balancing. In this section, we experimentally show that multipath routing in star overlay networks [16] achieves load balancing for hybrid CDN-P2P networks. The simulation was carried out on a network with 5040 peers (n 7) with 2000 source peers. A total of 21*2000 = input messages were sent during the simulation. To evaluate the distribution of the load among the network peers, we count the number of times each peer is traversed by a message share. Based on the results shown in Figure 5, the buffer size used in this simulation is chosen as 90 which is the suitable buffer size for a network of size 5040 with 2000 source peers. Figures 7 and 10 show that as the network size is increased, the buffer requirements dramatically reduce for the same demand. This clearly shows that the algorithm achieves a high degree of load balancing by distributing messages among more peers as the network size grows. Figure 10 shows the load balancing of the network peers by depicting the number of times each peer is visited to complete all the message transmissions. The x-axis of the graph denotes the number of peers in the network while the y-axis shows visit frequency of each peer to show the distribution of the load. It can be concluded from Figure 10 that the visit frequency of most of the peers are close to the average visit frequencies with a small standard deviation of 20.7 for fixed demand. Figure 10 also shows that the degree of load balancing remains the same regardless of the network size for the same demand. Therefore, clearly the load in the network is fairly evenly distributed among all the network peers in a similar manner for various network sizes. Thus, the multipath routing for star overlay networks is experimentally shown to provide balanced load distribution. In addition, since the algorithm increases the degree of load balancing and decreases buffer requirements as the network size grows, it is highly scalable. It can be observed that as the demand is increased, buffer requirements increase for small networks of size of 24 to 120, and remains nearly the same for network sizes of 720 and larger. Notice that buffer size of 800 is sufficient for networks of size 720 and less, whereas, buffer size of less than 100 is sufficient for network of ----- **FIGURE 7. Buffer size versus the network size.** **FIGURE 8. Buffer size versus the network size.** size 5040. This clearly shows that the algorithm achieves high degree of load balancing and scalability. As shown in Figure 9 capturing the relationship between the buffer size and the number of source peers, as we increase the number of source peers, the buffer size only requires a slight increase. 
In addition, recall that for the network size of 5040, when handling 21·2000 messages, the required buffer size was only 800, and the buffer requirement increases only marginally when the demand (number of sources) is increased in a large network. Also, it can clearly be seen that for a large size network (with 5040 peers), very small buffers are sufficient to eliminate message drops. It is easy to observe from Figures 8 and 9 that the buffer requirements are slightly less for multipath routing compared to single path routing, although multipath routing yields significantly more throughput than single path routing. This clearly shows that multipath routing yields better load balancing. In addition, Figures 10 and 11 provide the distribution of node visit counts for various network sizes for single and multipath routing. The figures clearly show that multipath routing yields significantly better load distribution among nodes for all network sizes. Thus, we conclude that the multipath routing in star overlay networks for hybrid CDN-P2P networks is highly scalable, since peers with existing buffers continue to route messages without message drops when the network size is increased.

_D. MESSAGE PROPAGATION DELAY_

In this experiment, we evaluate the propagation delay for a message to be transmitted from a source to a destination peer. In a hybrid CDN-P2P network, the message propagation delay is a major obstacle in the development as it affects the quality of service.

**FIGURE 9. Required buffer size versus the number of source peers.**

**FIGURE 10. Load distribution of 4000 messages in a network of size up to 5040.**

The inherently-stabilizing routing algorithm proposed in [16] is theoretically proven to successfully deliver messages, after the system start, from source peers to destination peers in at most D(Sn) + 4 rounds/cycles, where D(Sn) denotes the diameter of the n-dimensional star network Sn. In order to experimentally show the correctness of the algorithm and that it requires D(Sn) + 4 rounds (cycles in the simulation) to complete message delivery, we conducted simulation experiments. In particular, we observed the effect of the network size on the number of rounds/cycles required to complete the message propagation between a source peer and a destination peer. The simulation was run for various network dimensions (n = {3, 4, 5, 6, 7}) and the total number of cycles required to complete the message transmission was measured at the end of each run. The experiments are repeated 20 times for each dimension, and the average number of cycles over the 20 experiments is taken as the number of cycles required for the dimension under consideration to be depicted.

The results of our simulation experiment are shown in Figure 12, where the x-axis denotes the network size and the y-axis denotes the number of rounds. Observe that the number of rounds/cycles required to complete the message transmission increases slightly with the network size. This stems from the fact that the round complexity has to do with the diameter of the graph, and the diameter increases slightly as the network size increases. The same experiment also verifies the correctness of the multipath routing algorithm proposed in [16]. In order to verify the correctness of the theoretically proven number of rounds/cycles of D(Sn) + 4, we compared the simulation results and the theoretically calculated values.
The theoretical values were computed using the equation D(Sn) + 4 for the network dimensions n = {3, 4, 5, 6, 7}, where D(Sn) = ⌊3(n − 1)/2⌋. For instance, for a network of dimension n = 3, D(Sn) = ⌊3(3 − 1)/2⌋ = 3. Therefore, the algorithm successfully delivers messages from source peers to destination peers in 3 + 4 = 7 rounds/cycles. The comparison between the simulated and the theoretically calculated number of rounds/cycles required for a message transmission is shown in Figure 12. The results verify the correctness of the theoretically derived round complexity (number of cycles), as the actual number of rounds/cycles obtained in the simulation is very close to the theoretically calculated rounds/cycles for varying dimensions of the star graphs. For example, for a network of dimension n = 3, the theoretical number of rounds is 7, while in the simulation it is shown to require 4 rounds/cycles. That is, based on Figure 12, we showed by simulation results that the algorithm successfully completes the message delivery in at most D(Sn) + 4 rounds.

**FIGURE 11. Single Path Routing: Load distribution of 4000 messages in a network of size up to 5040.**
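As a quick check of the arithmetic above, the following lines (ours, purely illustrative) tabulate D(Sn) and the D(Sn) + 4 delivery bound for the simulated dimensions:

```python
from math import floor

def star_diameter(n):
    """Diameter of the n-dimensional star graph Sn: D(Sn) = floor(3(n-1)/2)."""
    return floor(3 * (n - 1) / 2)

for n in (3, 4, 5, 6, 7):
    d = star_diameter(n)
    print(f"n={n}: D(Sn)={d}, worst-case delivery = D(Sn)+4 = {d + 4} cycles")
```

For n = 7, the 5040-peer network, this gives a worst-case bound of 9 + 4 = 13 cycles.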
_E. SUMMARY AND DISCUSSIONS_

In this section, we summarize the empirical evaluation of the inherently-stabilizing multipath routing protocol for non-traditional overlay networks (star networks) under a realistic model for hybrid CDN-P2P networks. In particular, we empirically demonstrate the claimed yet unproven properties, such as load balancing, buffer requirements, scalability, and throughput, of the multipath routing algorithm of Karaata and Alsulaiman [16]. Reference [16] merely provides an algorithm for routing between a single pair of peers over disjoint paths and some theoretical analysis along with the correctness proof of the algorithm. In contrast, the simulation in this paper has been carried out in the presence of multiple sources and destinations, allowing multiple concurrent message routings over all node-disjoint paths in a pipelined manner using the PeerSim simulator for varying sizes and dimensions of star overlay graphs, demand, and buffer sizes per peer. Simulation results show that growth in the size of the network slightly increases the throughput achieved by the algorithm. When the dimension of the star overlay network increases, sufficiently many multipaths become available between all pairs of endpoints, and their effective utilization by the algorithm allows network throughput to increase significantly. In addition, our simulation results show that as the demand increases, the network throughput also increases even though the network size is kept the same. The increase in throughput is facilitated by the algorithm through utilizing the available network bandwidth between pairs of peers in star overlay networks, provided by the large number of node-disjoint paths between them, and through load balancing of the content delivery in the network. Observe that for an increased demand of content routing at the same network size, throughput could not have increased linearly without load balancing of the content routing among peers. Therefore, the linear throughput increase for increasing demand at the same network size showed that, during the distribution of the message shares from multiple sources to multiple destinations, the load on the network was fairly evenly distributed among all the peers by the routing algorithm, thus achieving load balancing in the network.

It can clearly be seen that the load balancing of a large amount of content by the routing algorithm avoids the formation of bottlenecks as the network scales. Moreover, it can be observed that increases in the buffer and network sizes increase system throughput. In addition, as the network size is increased, the buffer requirements dramatically reduce for the same demand. The decrease in the buffer sizes as the network size increases clearly shows the effectiveness of the algorithm in load balancing via the use of the available additional multipaths. In a separate experiment, it is shown that the degree of load balancing remains the same regardless of the network size for the same demand. In addition, since the algorithm increases load balancing and decreases buffer requirements as the network size grows, the scalability of the multipath routing algorithm is established. The routing buffer size requirements of the algorithm for each peer are analyzed by monitoring the effect of the network size and the number of source peers on the buffer size. Simulation results also show that multipath routing improves the network throughput and the degree of load balancing, and reduces the buffer requirements compared to single path routing. The results obtained clearly establish the viability of multipath routing in star overlay networks. Observe that even better simulation results are not obtained because, when the simulation is started, most routing buffers are empty and most channels are idle, and they remain so for a while until the message shares on these paths populate the network in a pipelined manner, during which the load is not yet well distributed.

**FIGURE 12. The number of cycles to complete message delivery versus network size.**

**VI. CONCLUSION**

In this paper, we show that multipath routing in star overlay networks facilitates a high degree of load balancing, enhances throughput, reduces buffer requirements and network bottlenecks, and improves scalability. As these algorithmic properties are highly desirable for hybrid CDN-P2P networks, we establish the viability of star overlay networks as an edge network for hybrid CDN-P2P networks to meet their content delivery quality-of-service requirements. We anticipate that this work will encourage researchers to consider overlay networks with abundant multipaths as edge networks for hybrid CDN-P2P and other networks. We also expect researchers to investigate other algorithmic properties obtained through the use of multipaths that are not considered here. Although the obtained results are highly promising, better results can be obtained using higher demand or more cycles. Our work only considers the benefits of the star overlay networks; the limitations imposed by the underlying physical network that the overlay is mapped to are out of the scope of the current work. In this work, we only considered point-to-point communication. It is an open problem to apply multipath routing to other forms of communication, including one-to-many and many-to-many. In this work, we only consider star overlay networks, which provide a large number of multipaths between any two endpoints. As future work, other overlay networks such as hypercubes can be considered and compared against star overlay networks with respect to the algorithm considered. It is also an open problem to enhance existing commercial hybrid CDN-P2P network applications using multipath routing over star overlay networks.
The simulation conducted does not consider the effects of varying message sizes or the routing delays at peers. First, the routing mechanism described is based on local knowledge and is fairly simple; therefore, the negligible delay incurred at each peer to identify the peers to forward messages to is not considered. Second, as can be seen in Figure 6, a star overlay network provides significant bandwidth through the use of multipaths. When the message size is doubled, since we assume that the message size is at most the combined capacity of all multipaths between two endpoints, the additional message size is accommodated in the next cycle and therefore takes only one additional cycle, provided that another message does not arrive at the same source in the next cycle. Since the delay caused by increased message sizes is relatively simple to calculate as discussed above, additional experiments were not conducted for that purpose. As mentioned on Page 2, Paragraph 2, the role of the CDN is that when a particular content is not available in the P2P network, a CDN component acts as a source peer in the edge network and provides the content. Since a CDN component is viewed as a source peer and assumes the role of a source peer, its role is not considered separately. In our current work, we only considered an abstract simulation model in which varying capacities and delays of communication channels are not considered. In addition, we did not consider the mapping of the star overlay network to the underlying physical network and the limitations brought by the mapping. An emulation of the proposed algorithm in a real network considering all of the above is highly involved and out of the scope of the current work. We consider such an emulation as future work.

**ACKNOWLEDGMENT**

The authors would like to thank the anonymous referees for their suggestions and constructive comments on an earlier version of the paper. Their suggestions have greatly enhanced the readability of the paper. The authors would also like to thank Aisha Dabees for her assistance with the algorithm simulation and paper revision.

**REFERENCES**

[1] G. Tang, H. Wang, K. Wu, and D. Guo, ‘‘Tapping the knowledge of dynamic traffic demands for optimal CDN design,’’ IEEE/ACM Trans. Netw., vol. 27, no. 1, pp. 98–111, Feb. 2018.
[2] K. Srinivasan, A. Mubarakali, A. S. Alqahtani, and A. Dinesh Kumar, ‘‘A survey on the impact of DDoS attacks in cloud computing: Prevention, detection and mitigation techniques,’’ in Intelligent Communication Technologies and Virtual Mobile Networks, S. Balaji, Á. Rocha, and Y.-N. Chung, Eds. Cham, Switzerland: Springer, 2020, pp. 252–270.
[3] M. Day, B. Cain, G. Tomilson, and P. Rzewski, A Model for Content Internetworking (CDI), RFC Editor document RFC3466, Feb. 2003. [Online]. Available: https://tools.ietf.org/html/rfc3466
[4] H. Yin, X. Liu, G. Min, and C. Lin, ‘‘Content delivery networks: A bridge between emerging applications and future IP networks,’’ IEEE Netw., vol. 24, no. 4, pp. 52–56, Jul./Aug. 2010.
[5] Y. Kim, Y. Kim, H. Yoon, and I. Yeom, ‘‘Peer-assisted multimedia delivery using periodic multicast,’’ Inf. Sci., vol. 298, pp. 425–446, Mar. 2015. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0020025514011141
[6] H. B. Hadj Abdallah and W. Louati, ‘‘Ftree-CDN: Hybrid CDN and P2P architecture for efficient content distribution,’’ in Proc. 27th Euromicro Int. Conf. Parallel, Distrib. Netw.-Based Process. (PDP), Feb. 2019, pp. 438–445.
[7] R. Peterson and E. G. Sirer, ‘‘AntFarm: Efficient content distribution with managed swarms,’’ in Proc. NSDI, 2009, vol. 9, no. 1, pp. 107–122.
[8] R. Huang and Y. Zhang, ‘‘Survey of WebRTC based P2P streaming,’’ IETF, Fremont, CA, USA, Tech. Rep., Feb. 2014, pp. 1–11. [Online]. Available: https://www.ietf.org/archive/id/draft
[9] (2020). Discover CDN Mesh Delivery for Live Streaming and Video-on-Demand. Accessed: May 5, 2020. [Online]. Available: https://streamroot.io/cdn-mesh-delivery/
[10] Peer5. Accessed: Dec. 27, 2017. [Online]. Available: https://www.peer5.com/cdn
[11] S. Nacakli and A. M. Tekalp, ‘‘Controlling P2P-CDN live streaming services at SDN-enabled multi-access edge datacenters,’’ IEEE Trans. Multimedia, vol. 23, pp. 3805–3816, 2021.
[12] T.-W. Um, G. M. Lee, and H.-W. Lee, ‘‘Trustworthiness management in sharing CDN infrastructure,’’ in Proc. Int. Conf. Inf. Netw. (ICOIN), Jan. 2018, pp. 73–75.
[13] S. S. Krishnan and R. K. Sitaraman, ‘‘Video stream quality impacts viewer behavior: Inferring causality using quasi-experimental designs,’’ IEEE/ACM Trans. Netw., vol. 21, no. 6, pp. 2001–2014, Dec. 2013.
[14] X. Liu, F. Dobrian, H. Milner, J. Jiang, V. Sekar, I. Stoica, and H. Zhang, ‘‘A case for a coordinated internet video control plane,’’ ACM SIGCOMM Comput. Commun. Rev., vol. 42, no. 4, pp. 359–370, 2012.
[15] F. Dobrian, V. Sekar, A. Awan, I. Stoica, D. Joseph, A. Ganjam, J. Zhan, and H. Zhang, ‘‘Understanding the impact of video quality on user engagement,’’ ACM SIGCOMM Comput. Commun. Rev., vol. 41, no. 4, pp. 362–373, Aug. 2011, doi: 10.1145/2043164.2018478.
[16] M. H. Karaata and T. Alsulaiman, ‘‘An optimal inherently stabilizing routing algorithm for star P2P overlay networks,’’ Concurrency Comput., Pract. Exper., vol. 28, no. 17, pp. 4405–4428, Dec. 2016.
[17] S. Fujita, ‘‘A new network topology for P2P overlay based on a contracted star graph,’’ in Proc. 10th Int. Symp. Pervasive Syst., Algorithms, Netw., Washington, DC, USA, Dec. 2009, pp. 29–33, doi: 10.1109/I-SPAN.2009.51.
[18] C.-S. Lin and I.-T. Lee, ‘‘Applying multiple description coding to enhance the streaming scalability on CDN-P2P network,’’ Int. J. Commun. Syst., vol. 23, no. 5, pp. 553–568, 2010.
[19] Z. H. Lu, X. H. Gao, S. J. Huang, and Y. Huang, ‘‘Scalable and reliable live streaming service through coordinating CDN and P2P,’’ in Proc. IEEE 17th Int. Conf. Parallel Distrib. Syst., Dec. 2011, pp. 581–588.
[20] G. Pallis and A. Vakali, ‘‘Insight and perspectives for content delivery networks,’’ Commun. ACM, vol. 49, no. 1, pp. 101–106, 2006.
[21] S. Nakamoto. (Mar. 2009). Bitcoin: A Peer-to-Peer Electronic Cash System. [Online]. Available: https://metzdowd.com
[22] V. Buterin. (2013). Ethereum Whitepaper. [Online]. Available: https://ethereum.org/whitepaper/
[23] J. L. Ferrer and E. M. Payeras, ‘‘Cryptocurrency P2P networks: A comparison analysis,’’ Menorca, Illes Balears, Spain, Tech. Rep., 2016, pp. 423–428.
[24] Dalesa, 2010. [Online]. Available: http://www.dalesa.lk/
[25] W. Garbe. (2005). FAROO. [Online]. Available: http://www.faroo.com
[26] Y. Psaras and D. Dias, ‘‘The InterPlanetary File System and the Filecoin network,’’ in Proc. 50th Annu. IEEE-IFIP Int. Conf. Dependable Syst. Netw.-Supplemental Volume (DSN-S), 2020, pp. 80–80.
[27] A. Ali, M. Khan, M. Saddique, U. Pirzada, M. Zohaib, I. Ahmad, and N. Debnath, ‘‘Tor vs I2P: A comparative study,’’ in Proc. IEEE Int. Conf. Ind. Technol. (ICIT), Kansas City, MO, USA, 2016, pp. 1748–1751.
[28] Y. Ye, B. Lee, R. Flynn, N. Murray, and Y. Qiao, ‘‘HLAF: Heterogeneous-latency adaptive forwarding strategy for peer-assisted video streaming in NDN,’’ in Proc. IEEE Symp. Comput. Commun. (ISCC), Jul. 2017, pp. 657–662.
[29] H. Dai, J. Lu, Y. Wang, and B. Liu, ‘‘A two-layer intra-domain routing scheme for named data networking,’’ in Proc. IEEE Global Commun. Conf. (GLOBECOM), Dec. 2012, pp. 2815–2820.
[30] E. Kalogeiton, T. Kolonko, and T. Braun, ‘‘A multihop and multipath routing protocol using NDN for VANETs,’’ in Proc. 16th Annu. Medit. Ad Hoc Netw. Workshop (Med-Hoc-Net), Jun. 2017, pp. 1–8.
[31] C.-H. Lee, J.-X. Lin, J.-S. Jhou, and K.-J. Chang, ‘‘Preliminary study on the multi-path pre-caching mechanism for content delivery network,’’ in Proc. IEEE 6th Global Conf. Consum. Electron. (GCCE), Oct. 2017, pp. 1–2.
[32] V. Poliakov, L. Sassatelli, and D. Saucez, ‘‘Adaptive video streaming and multipath: Can less be more?’’ in Proc. IEEE Int. Conf. Commun. (ICC), 2018.
[33] S. Sur and P. K. Srimani, ‘‘Topological properties of star graphs,’’ Comput. Math. Appl., vol. 25, no. 12, pp. 87–98, Jun. 1993.
[34] (2005). PeerSim: A Peer-to-Peer Simulator. [Online]. Available: http://peersim.sourceforge.net/
[35] S. Naicken, B. Livingston, A. Basu, and S. Rodhetbhai, ‘‘A survey of peer-to-peer network simulators,’’ in Proc. 7th Annu. Postgraduate Symp., 2006, pp. 1–8.

MEHMET KARAATA received the B.S. degree in computer science from the University of Hacettepe, Ankara, Turkey, in 1987, and the M.S. and Ph.D. degrees in computer science from The University of Iowa, Iowa City, USA, in 1990 and 1995, respectively. He joined Bilkent University, Ankara, as an Assistant Professor, in 1995. He is currently working as a Professor with the Department of Computer Engineering, Kuwait University. His research interests include mobile computing, distributed systems, fault-tolerant computing, and self-stabilization. He earned the Distinguished Best Young Researcher Award and a Researcher Award from Kuwait University, Kuwait, in 2001 and 2009, respectively.

ANWAR AL-MUTAIRI received the B.S. and M.S. degrees from the Computer Engineering Department, Kuwait University, in 2013 and 2016, respectively. She is currently working as a Computer Engineer with the Center of Information Systems, Kuwait University. She expects to start her Ph.D. degree in the near future. Her research interest includes distributed algorithm development and verification.

SHOUQ ALSUBAIHI received the B.S. and M.Sc. degrees in computer engineering from Kuwait University, Kuwait, in 2005 and 2008, respectively, and the Ph.D. degree in computer engineering from the University of California, Irvine, USA. She is currently an Assistant Professor with the Computer Engineering Department, Kuwait University. Her research interests include distributed systems, parallel computing, design automation, evolutionary computation, and fault tolerance.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1109/ACCESS.2021.3139936?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1109/ACCESS.2021.3139936, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBYNCND", "status": "GOLD", "url": "https://ieeexplore.ieee.org/ielx7/6287639/9668973/09667496.pdf" }
2022
[ "JournalArticle" ]
true
null
[ { "paperId": "c4ea493c1c4a4523e0bcde258242881e3c28d9d4", "title": "A Survey on the Impact of DDoS Attacks in Cloud Computing: Prevention, Detection and Mitigation Techniques" }, { "paperId": "4078c52b3e4b123fc1612560ae079fc17c4bad33", "title": "Tapping the Knowledge of Dynamic Traffic Demands for Optimal CDN Design" }, { "paperId": "82b661087fa8819240fa388ed1481421adedbfb7", "title": "Ftree-CDN: Hybrid CDN and P2P Architecture for Efficient Content Distribution" }, { "paperId": "0a66999d8ef64cd89b184dbb10d0d9117b64dc58", "title": "Adaptive video streaming and multipath: can less be more?" }, { "paperId": "e0dcb1622a821bed7af3542ea113ed8ba6f86cc6", "title": "Preliminary study on the multi-path pre-caching mechanism for content delivery network" }, { "paperId": "d00a47534394537caa4677da5901c182286499b6", "title": "HLAF: Heterogeneous-Latency Adaptive Forwarding strategy for Peer-Assisted Video Streaming in NDN" }, { "paperId": "57b4bbfa8619ec24d17a4234dea09e3a1c164461", "title": "A multihop and multipath routing protocol using NDN for VANETs" }, { "paperId": "0cd8f4e6e13789728754d9f839bee3a6479809b9", "title": "An optimal inherently stabilizing routing algorithm for star P2P overlay networks" }, { "paperId": "98e44b68ea08bafc6da5e12d6773d3d141a98963", "title": "Peer-assisted multimedia delivery using periodic multicast" }, { "paperId": "ee749f664acb80ce219117ae99e3ad72c9f55e2b", "title": "Survey of WebRTC based P2P Streaming" }, { "paperId": "a58b4f1a1052130c6086fc83c1b409001373f1c7", "title": "Video stream quality impacts viewer behavior: inferring causality using quasi-experimental designs" }, { "paperId": "0860bc34aac8a304674aa4c205ff46e6dbc93295", "title": "A case for a coordinated internet video control plane" }, { "paperId": "db52d41b918d67c1e68e24cc7707396e77bd47b8", "title": "Scalable and Reliable Live Streaming Service through Coordinating CDN and P2P" }, { "paperId": "22bd3a35b9550bc5b570a0beee5648eb9033be3b", "title": "Understanding the impact of video quality on user engagement" }, { "paperId": "5bd7bdd4436088b10dc9709dd94f749500050b3a", "title": "Content delivery networks: a bridge between emerging applications and future IP networks" }, { "paperId": "c01f7731f3eeb003bafae8a6a2b0d0b4af3e53d7", "title": "Applying multiple description coding to enhance the streaming scalability on CDN‐P2P network" }, { "paperId": "19e3399c7ea91ecc7f64d73ceb26f32fc4121ca3", "title": "A New Network Topology for P2P Overlay Based on a Contracted Star Graph" }, { "paperId": "7242c5689545718382cee9429373ed72f5b64156", "title": "AntFarm: Efficient Content Distribution with Managed Swarms" }, { "paperId": "46fc28dd19ced3cbb75f93005a76aa8a5a522176", "title": "A Survey of Peer-to-Peer Network Simulators" }, { "paperId": "159587f9ae5d0b959b0d4eeeae492fe0103f060d", "title": "A Model for Content Internetworking (CDI)" }, { "paperId": "8f965534ee0d7cd9e818dd47de61b16d7fbdcbc7", "title": "Topological properties of star graphs" }, { "paperId": "ca40bb51caea640fd8ed42c4e8bb435677229f75", "title": "Controlling P2P-CDN Live Streaming Services at SDN-Enabled Multi-Access Edge Datacenters" }, { "paperId": null, "title": "Discover cdn mesh delivery for live streaming and video ondemand" }, { "paperId": "61503b593c24c2bd339d13a15dafc06e165669e0", "title": "Trustworthiness management in sharing CDN infrastructure" }, { "paperId": null, "title": "“I2p - the invisible internet project,”" }, { "paperId": null, "title": "“Filecoin: A decentralized storage network,”" }, { "paperId": null, "title": "Department, Kuwait 
University, Kuwait. She received her Ph.D" }, { "paperId": null, "title": "“Cryptocurrency p2p networks: a comparison analysis„”" }, { "paperId": "5837e36ff6db6cb767080124581f77f76456e106", "title": "A two-layer intra-domain routing scheme for named data networking" }, { "paperId": "ecdd0f2d494ea181792ed0eb40900a5d2786f9c4", "title": "Bitcoin : A Peer-to-Peer Electronic Cash System" }, { "paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": "1a580bb65e2a747b943d3b417c656c5deeb11213", "title": "Insight and perspectives for content delivery networks" } ]
17,643
en
[ { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Political Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01eeceb5429ebe1fbb3c941f8bc418d5c8ebd799
[]
0.904049
Assuring Anonymity and Privacy in Electronic Voting with Distributed Technologies Based on Blockchain
01eeceb5429ebe1fbb3c941f8bc418d5c8ebd799
Applied Sciences
[ { "authorId": "120520480", "name": "Vehbi Neziri" }, { "authorId": "2295070", "name": "Isak Shabani" }, { "authorId": "121366212", "name": "Ramadan Dervishi" }, { "authorId": "3098361", "name": "Blerim Rexha" } ]
{ "alternate_issns": null, "alternate_names": [ "Appl Sci" ], "alternate_urls": [ "http://www.mathem.pub.ro/apps/", "https://www.mdpi.com/journal/applsci", "http://nbn-resolving.de/urn/resolver.pl?urn=urn:nbn:ch:bel-217814" ], "id": "136edf8d-0f88-4c2c-830f-461c6a9b842e", "issn": "2076-3417", "name": "Applied Sciences", "type": "journal", "url": "http://www.e-helvetica.nb.admin.ch/directAccess?callnumber=bel-217814" }
Anonymity and privacy in the electoral process are mandatory features found in any democratic society, and many authors consider these fundamental civil liberties and rights. During the election process, every voter must be identified as eligible, but after casting a vote, the voter must stay anonymous, assuring voter and vote unlinkability. Voter anonymity and privacy are the most critical issues and challenges of almost all electronic voting systems. However, vote immutability must be assured as well, which is a problem in many new democracies, and Blockchain as a distributed technology meets this data immutability requirement. Our paper analyzes current solutions in Blockchain and proposes a new approach through the combination of two different Blockchains to achieve privacy and anonymity. The first Blockchain will be used for key management, while the second will store anonymous votes. The encrypted vote is salted with a nonce, hashed, and finally digitally signed with the voter’s private key, and by mixing the timestamp of votes and shuffling the order of cast votes, the chances of linking the vote to the voter will be reduced. Adopting this approach with Blockchain technology will significantly transform the current voting process by guaranteeing anonymity and privacy.
# applied sciences _Article_

## Assuring Anonymity and Privacy in Electronic Voting with Distributed Technologies Based on Blockchain

**Vehbi Neziri, Isak Shabani *, Ramadan Dervishi and Blerim Rexha**

Faculty of Electrical and Computer Engineering, University of Prishtina, 10000 Pristina, Kosovo; vehbi.neziri@uni-pr.edu (V.N.); ramadan.dervishi@uni-pr.edu (R.D.); blerim.rexha@uni-pr.edu (B.R.)
*** Correspondence: isak.shabani@uni-pr.edu**

**Abstract:** Anonymity and privacy in the electoral process are mandatory features found in any democratic society, and many authors consider these fundamental civil liberties and rights. During the election process, every voter must be identified as eligible, but after casting a vote, the voter must stay anonymous, assuring voter and vote unlinkability. Voter anonymity and privacy are the most critical issues and challenges of almost all electronic voting systems. However, vote immutability must be assured as well, which is a problem in many new democracies, and Blockchain as a distributed technology meets this data immutability requirement. Our paper analyzes current solutions in Blockchain and proposes a new approach through the combination of two different Blockchains to achieve privacy and anonymity. The first Blockchain will be used for key management, while the second will store anonymous votes. The encrypted vote is salted with a nonce, hashed, and finally digitally signed with the voter's private key, and by mixing the timestamp of votes and shuffling the order of cast votes, the chances of linking the vote to the voter will be reduced. Adopting this approach with Blockchain technology will significantly transform the current voting process by guaranteeing anonymity and privacy.

**Citation:** Neziri, V.; Shabani, I.; Dervishi, R.; Rexha, B. Assuring Anonymity and Privacy in Electronic Voting with Distributed Technologies Based on Blockchain. Appl. Sci. 2022, 12, 5477. https://doi.org/10.3390/app12115477

Academic Editors: Nadejda Komendantova and Hossein Hassani

Received: 21 April 2022; Accepted: 27 May 2022; Published: 28 May 2022

**Publisher's Note:** MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

**Copyright:** © 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).

**Keywords: privacy; anonymity; electronic voting; Blockchain; vote; distributed technologies**

**1. Introduction**

Driven by the rapid development of technology, the large movement of people, and the need for mobility, many countries, companies, and institutions have designed and developed a wide range of election systems that use the most up-to-date technologies to allow all citizens to vote quickly and accurately. The use of technology in elections began a long time ago. However, these technologies have varied, including blackballing, punch cards, lever voting machines, ballot optical scanning, electronic voting cabins, direct-recording electronic voting, and other types of technologies combined with some manual parts [1].
When technology is used to organize elections, the success of elections depends not only on the successful implementation of the technology but also on procedures related to privacy and auditing. Traditional voting or electronic voting systems are usually managed by a single authority. Therefore, election manipulation is possible because a single authority can change votes. Challenges such as manipulation, privacy, and anonymity can be solved by switching to systems based on distributed technologies that take security aspects into account. Transactions in traditional electronic voting systems are stored in a centralized ledger or centralized database. In contrast, in distributed systems there is not just one ledger (database); all nodes have the same access to a shared ledger, which allows all participants to see the system of record. Various voting techniques are used, as mentioned in [2,3], ranging from raising hands, punch cards, lever voting machines, and electronic voting machines to online voting, but the idea is for voters to make their electoral choices anonymously. Technology is developing faster than most people can understand it, and the issues of privacy and personal data protection are becoming increasingly critical.

All efforts to implement technology aim to increase security, guarantee integrity, and ensure the reliability of the process, from voting to counting and the announcement of results. Regardless of the type of technology used in the electoral process, voters pay the most attention to the direct use of voting technology, privacy, and anonymity. Electronic voting, or e-voting, is one of the many government services that the implementation of Blockchain can positively affect. E-voting, on the other hand, is a service that may be utilized by a variety of companies and institutions to save time and money, to provide remote access, and to increase inclusiveness. Despite advancements in technology and in voting processes, transparency, anonymity, and privacy remain a concern. As a result, the adoption of new technologies should promote system trust and dependability by allowing mechanisms to be audited, but it should also ensure privacy. Privacy is defined in different ways, but Alan Westin [4] defines privacy as "individuals have the right to determine how much personal information they want to disclose and to whom"; however, this does not apply to voting because the privacy of voters should not be dependent on them.

The electoral process is very complex and comprehensive; it determines who will lead public life, and it functions as a kind of competition where we hire our representatives. The origin of the electoral process began long ago and has historically developed differently from one country to another [5]. In almost all democracies, the electoral process is highly reliant on the legal aspect, which defines the mechanisms for organizing, supervising, and conducting the electoral process accurately and without deception, but this is not always achieved in practice. There are initiatives for electronic voting in various countries and institutions, and many of them are moving towards more advanced electronic voting systems. The purpose of electoral reform varies from country to country. Some countries seek to increase voter turnout, others seek to reduce electoral fraud, and others reduce bureaucratic procedures and make it easier for voters [2].
Electronic voting can meet these numerous objectives to speed up, simplify, and reduce the cost of elections, and it encourages higher voter turnout, particularly among young voters, who are the most tech-savvy. To better fulfill the legal aspect and organize the best possible elections, many countries have started or are implementing some form of electronic voting. There are many definitions of electronic voting, but according to [6], it is a way to get responses from voters at a given time and make elections more efficient. According to [7], electronic voting is a system where registration, ballot casting, or counting are conducted using information and communication technologies. Therefore, electronic voting can be any voting method in which voter preferences can be expressed or collected through electronic resources. There are various ways to organize electronic voting. Some countries use different electronic devices at polling stations, while others use the Internet [8]. Regardless of the methods used, all efforts to implement new technologies aim to ensure the credibility of the voting process and of the election results. The electronic system faces various challenges but must guarantee the anonymity and privacy of voters to be reliable. Electronic voting systems must also consider transparency, verifiability, and other aspects. Various types of technology offer different possibilities for these features, but there are also difficulties in achieving them. The use of technology in electoral processes must be safe and secure to the same extent that the equivalent manual processes are safe and reliable. Today, many countries have developed or are developing advanced voting systems using the latest technologies to enable all citizens to vote quickly and accurately regardless of their location [9,10]. However, some countries have stopped e-voting projects due to the unreliability of the technologies used [11], but distributed Blockchain technology can increase credibility and reliability. There is always controversy with any new technology, so continual research on all aspects of the process and technology is necessary. Despite the many benefits of online or electronic balloting using different methods, digital vote casting needs to be significantly researched because it can also introduce new threats [12], such as modifying the voter list or adding illegitimate voters, account theft, or account interference.

Blockchain technology offers some attractive features, such as transparency, immutability, and distributed consensus, which are difficult to achieve using other technologies. These features make Blockchain an appealing technology for elections, as distributed consensus might boost voter confidence and guarantee correct outcomes. Blockchain technology has primarily been used in banking and finance, where anonymity is not required because it is necessary to know who is making the transaction; however, in electronic voting, anonymity is a required and indisputable feature. There have been several reviews and ideas about Blockchain technology, but Blockchain-based applications and electronic voting have generally received limited attention [13]. However, there are several different schemes and protocols that other authors have proposed, but privacy and anonymity are the main challenges that have not yet been adequately addressed. Our paper analyzes how Blockchain technology might be used to alleviate these challenges.
The main focus will be on assuring privacy and anonymity through the latest Blockchain technology, which offers new possibilities that previous technologies did not. In addition to analyzing and comparing existing electronic voting solutions in Blockchain, we also propose a schema combining two different Blockchains. The concept used in this scheme enables voter privacy and voting anonymity as two basic rights in the voting process. The first Blockchain, called "Distributed Key Management", generates and manages keys and key infrastructures. The second Blockchain, called "Encrypted Votes Blockchain", is separate from the first Blockchain and is used to store votes during the voting process.

**2. Blockchain Description**

Blockchain technology is a relatively new technology that has changed governments, institutions, and industries worldwide. Understanding distributed systems is essential to understanding Blockchain technology, as Blockchain is a distributed system at its core, which can be centralized or decentralized. In other words, Blockchain is a distributed technology used to record electronic data transactions, which are linked in blocks and stored in many places simultaneously (nodes). A node can be an individual player in a distributed system. Distributed and decentralized systems can easily be confused. The difference is that there is a central authority in a decentralized system that governs the whole system. In contrast, in a distributed system, the work is done by all nodes simultaneously to achieve the result. The Blockchain era started with Bitcoin, a digital virtual currency or digital payment system without an organization to authorize transactions. Many people think that Blockchain is the same thing as Bitcoin or that Blockchain is a financial technology. Because people started to hear more about Blockchain right after the peak of Bitcoin's popularity, such an opinion may be considered valid because the essence of the Bitcoin system is Blockchain, through the computational process called mining; however, Blockchain is more than that. The rules of creating blocks and mining are explained in many types of research, including a study by Gobel [14]. Companies, organizations, and institutions are now researching Blockchain technology, and millions of dollars have been spent experimenting with it. Therefore, Blockchain is being implemented and used in many institutions [15], such as banks, finance, and governments, and in various processes of democracy, such as electoral processes. However, a large part of the global population still has no idea what Blockchain is or how it works. Blockchain applications may be categorized according to different fields, particularly the Internet of Things (IoT), so both industry and academia are paying attention to it, and many research studies are being conducted [16]. As the authors of [17,18] say, Blockchain is becoming a standard technology of the digital age. Blockchain functions as a kind of database or open and distributed register in which transactions between parties are recorded into blocks effectively, permanently, and verifiably. No one can modify the data in a Blockchain, so the Blockchain is an immutable ledger. "Block" refers to a collection of data or records, and "chain" refers to a database of these blocks, stored as a list that is public to all participants. These lists are chained cryptographically
in chronological order after meeting the preconditions for creating the block. In its most basic form, the Blockchain structure is presented in Figure 1, with each block containing a timestamp, transactions, the block hash, and the previous block hash created using cryptographic functions. The initial block, often known as the genesis block, does not contain the prior block's hash. The authors of [12] describe a similar approach to the Blockchain structure, noting that each block's hash is stored in the next block or that each block contains the previous block's hash.

**Figure 1. Blockchain data model.**

A hash is a value generated from a string using a mathematical function; it works in a one-way manner by converting entries of different lengths into an encoded output with a fixed size. Each block contains a set of transactions that are chronologically linked to previous transaction blocks and precede the transactions of future blocks.
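The block layout described above (timestamp, transactions, previous block hash, own hash, with a genesis block carrying no prior hash) can be sketched in a few lines. This is our minimal illustration of Figure 1, not production code:

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash the block contents: timestamp, transactions, and previous hash."""
    payload = json.dumps(
        {k: block[k] for k in ("timestamp", "transactions", "prev_hash")},
        sort_keys=True,
    )
    return hashlib.sha256(payload.encode()).hexdigest()

def make_block(transactions, prev_hash=None):
    """Create a block; the genesis block carries no previous hash."""
    block = {"timestamp": time.time(), "transactions": transactions,
             "prev_hash": prev_hash}
    block["hash"] = block_hash(block)
    return block

genesis = make_block(["tx0"])                         # genesis: no prior hash
second = make_block(["tx1", "tx2"], genesis["hash"])  # chained to the genesis
assert second["prev_hash"] == genesis["hash"]
```

Because each block's hash covers the previous block's hash, altering any stored transaction invalidates every later block, which is what makes the ledger immutable in practice.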
Blockchain may be the future of many businesses and governments. However, as the authors in [19] put it, a transformation of business and government is still far away, and the adoption process will be gradual. Blockchain is a technology that can lay new foundations for our government systems and beyond by providing shared, standardized, and secure data while maintaining privacy and anonymity. One such government system is electronic voting, which is a potential use for Blockchain technology. However, for a system or process to be successful, it is essential to choose a suitable Blockchain. The Blockchain system can be public, private, or mixed, but the Blockchain for government services is usually private with known identities, and only they can add transactions [20].

**3. Related Works**

The requirements of any voting system can be numerous and wide-ranging; however, in general, electronic voting systems should first meet the legal and regulatory framework of the country while also meeting the security requirements, which are mandatory and indisputable. Even new blockchain technology can have certain challenges and drawbacks [21]: unlike other distributed solutions, blockchain is challenging to scale, and node growth affects performance. Therefore, the issue of performance is resolved in private networks by implementing different mechanisms, as presented in [22,23].

The electronic voting system must meet security requirements in order to achieve security that is the same as or greater than traditional paper voting. These requirements can be grouped into four main principles: authentication, integrity, privacy, and verifiability. Authentication guarantees that each voter is uniquely and unmistakably identified, which means that only authorized voters should be able to vote. Integrity ensures that each vote is signed and cannot be changed by anyone other than the voter himself. Privacy is about the confidentiality of the vote and the anonymity of the voters, such that the ballot is secret and its content is not disclosed. Voter privacy enhances voter autonomy and aids in preventing voter pressure and vote-buying. Verifiability is a control principle that ensures accuracy. Various aspects of these principles are listed in the papers [12,24], such as accessibility, availability, transparency, fairness, voter verifiability, privacy, anonymity, auditability, and accuracy, which are very important for a reliable voting system. Every security requirement is very important, but anonymity, privacy, and transparency are the cornerstones of electronic voting [25].
The general security of the voting system, but especially privacy and anonymity, is essential in electronic voting and needs further exploration, especially in Blockchain technology. In traditional systems, privacy is maintained through various cryptographic algorithms, but in Blockchain this is a challenge, because Blockchain is a distributed technology and can even be public. Recent initiatives to study applications of Blockchain have mainly been in banking and finance, but there have been fewer efforts to study the use of Blockchain for electronic voting. A Blockchain approach to electronic voting using Multichain, which highlights Blockchain's effectiveness in terms of basic electronic voting requirements, is proposed in the paper [26]. This technique allows a solid cryptographic hash to be generated based entirely on voter-specific records, in such a way that the voters' anonymity, privacy, and integrity are protected. There have been various efforts and initiatives to implement Blockchain technology within the election process [27]. Table 1 presents the various electronic voting solutions and applications using Blockchain technology. These applications are for use in elections in corporations, communities, cities, or even nations.

**Table 1. Blockchain-based electronic voting applications.**

Voatz/United States: From 2018 to 2020, Blockchain-based elections were held in West Virginia, Utah, and Colorado. The company used a voting application using biometrics, Blockchain, and hardware-based cryptography by generating paper and chain voting, but the authors in [28] have expressed concerns about its vulnerability to third-party attacks.

Agora/Sierra Leone: In 2018, Sierra Leone deployed a Blockchain-based network for a presidential election to count votes in addition to the official count [29]. The network was an independent vote count, and as a result, privacy and anonymity were very evident because anonymous votes are placed on the Blockchain.

LVH Group/Nasdaq/Estonia: Estonia's cyber security is derived from its keyless signature (KSI) infrastructure, which verifies every electronic activity mathematically using the Blockchain. This system issues each shareholder's voting assets and symbolic voting assets [30].

I.T. Department of Moscow Government/Russia: In December 2017, the Moscow City Active Citizen Program began using a Blockchain for voting and to make voting results publicly auditable [31]. Voting using Blockchain technology was held in Moscow and other regions in 2020, but Ethereum was unable to handle the load, and there were also challenges in securing the ballot [32].

LayerX/Japan: Tsukuba City in 2018 introduced a Blockchain voting system but had problems mainly due to forgotten passwords [33].

Switzerland: In June 2018, Switzerland held elections in the city of Zug based on Blockchain, but it was an experiment and the result was not binding [33].
To gain privacy, the authors of paper [36] used a blind signature as proposed by the authors of paper [37], which mathematically prevents every other person from linking a blinded message to the only one who signed it. The proposal uses Blockchain technology and smart contracts to build a reliable and efficient scheme without using certificates. The various online platforms, their consensus, and the technology used for systems development are given in [12], but problems with scalability are highlighted. Developing a transparent online voting protocol using Ethereum through the open voting network is presented in [38], but this proposal fails to prevent system corruption. The authors of [39] suggested using a distributed, anonymous, and transparent system with minimum trust between the parties, but even their proposal fails to be secured from attacks. A Blockchain-based anti-quantum electronic voting protocol making changes to the Niederreiter cryptosystem algorithm is proposed by [40], but according to [41], security and efficiency decrease as the number of voters increases. The authors of the paper [12] compare many electronic voting proposals using Blockchains, such as a comparison of schemes, systems, and scalability analyses. These comparisons define the framework, cryptographic algorithm, consensus protocol, audit, anonymity, verifiability, mining difficulty, block, scalability, integrity, accuracy, and other aspects. According to [12] and the comparison of BSJC, Anti-Quantum, OVN, DATE, BES, and BEA, no scheme offers solutions for any security requirements, such as anonymity, security, integrity, variability by voter, scalability, privacy, and auditing. Basit Shahzad & Jon Crowcroft’s (BSJC) scheme does not meet the requirements of accuracy, scalability, and variability by voters while the counting method is from a third party [42]. The anti-quantum scheme, similar to the BSJC scheme, does not meet the requirements for accuracy, scalability, or voter variability, but the counting mechanism is self-tallying [40]. Although the open vote network (OVN) [38] does not meet the auditing, accuracy, scalability, or integrity requirements, the counting mechanism is self-tally. The other scheme, DATE, does not meet the auditing, accuracy, or integrity requirements, but it does meet voter scalability and variability [39]. BES, unlike BEA, achieves accuracy, integrity, and scalability, but not anonymity and voter variability [43], which BEA does [44]. Agora, a company based in Lausanne, Switzerland [45], has analyzed and developed a token electoral process mechanism based on Blockchain technology. They point out that current systems do not meet key voting features such as transparency, privacy, and integrity that can be achieved with new technologies. The Australian company, based in Brisbane, Horizon State [46] presents a voting application of Blockchain technology and addresses issues that need to be resolved, such as transparency, anonymity, and voter trust. The American company Voatz, in Boston, MA, USA, has created a Blockchain-based voting system that was approved in the U.S. presidential election. In their technical report [47], this company highlighted the challenges of identity, auditing, and protection against DoS attacks. Zcash is a decentralized payment scheme [25] that aims to provide anonymity, and unlike Bitcoin, proof-of-work in Zcash relies on an optimized form of zero-knowledge proofs called zk-SNARK. 
Double voting is a concern in Zcash since the same granted vote token is used to vote for several candidates [48]. A zero-knowledge proof refers to a cryptographic approach by which a party, referred to as "the prover", can prove to another party, referred to as "the verifier", that particular statements are true without giving any other information. Because a malicious user could gain unauthorized access to the Blockchain due to its open nature, the zero-knowledge proof can be used to validate whether the prover has sufficient transactions in the Blockchain environment without exposing any data [49]. One of the simplest and most often-used proofs of knowledge is the Schnorr algorithm, also known as the proof of knowledge of a discrete logarithm [50].

Different analyses and approaches have been made based on research and evaluation of related work on blockchain-based electronic voting systems, but there are still gaps in the implementation of security requirements. Security requirements for voting schemes, with an emphasis on anonymity and privacy, need to be addressed in future studies.
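As a concrete illustration of the Schnorr proof of knowledge just mentioned, the sketch below (ours, with toy parameters; real deployments use large, standardized groups) runs the interactive protocol in which the prover convinces the verifier that it knows x with y = g^x mod p without revealing x:

```python
import random

# Toy group parameters: g generates an order-q subgroup of Z_p* (q divides p-1)
p, q, g = 23, 11, 2
x = 7                    # prover's secret
y = pow(g, x, p)         # public value; prover claims to know log_g(y)

def schnorr_round():
    k = random.randrange(1, q)
    t = pow(g, k, p)                 # prover's commitment
    c = random.randrange(1, q)       # verifier's random challenge
    s = (k + c * x) % q              # prover's response (uses the secret)
    return pow(g, s, p) == (t * pow(y, c, p)) % p   # verifier's check

assert all(schnorr_round() for _ in range(10))  # accepts without revealing x
```

Each round reveals only (t, c, s), a transcript that can be simulated without knowing x, so the verifier learns nothing beyond the fact that the prover holds the secret.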
**4. Proposed Approach to Assure Anonymity and Privacy in E-Voting Using Blockchain Technology**

Privacy and anonymity are two crucial features related to voter privacy and vote anonymity, so they are closely related to each other in the voting process. Privacy in the case of voting is when no one can know for whom and how the voter is voting, although the voter's identity is potentially known. Anonymity in the case of voting is when no one knows for whom and how the voter voted, but it is potentially known what the voter is doing. No one should be able to detect, identify, or link the vote to a voter during and after the poll. However, in different electoral systems, the voter can verify that their vote is counted correctly.

Since anonymity and privacy are critical features of any electoral system, the data flow diagram, as presented in Figure 2, aims to preserve these two features through two separate Blockchains: Distributed Key Blockchain (DKB) and Encrypted Votes Blockchain (EVB).

**Figure 2. Proposed scheme.**

A Distributed Key Blockchain or Distributed Key Management is a cryptographic process in which multiple parties compute a standard set of public and private keys by applying specific protocols and consensus algorithms. This way of generating distributed keys prevents single parties from accessing a private key. The Distributed Key Blockchain can include various authorities dealing with elections, including civil society or other stakeholder institutions. The Encrypted Votes Blockchain (EVB), which is separate from the Distributed Key Blockchain, stores encrypted votes throughout the voting process. Before adding transactions (votes) to the EVB, they are validated and confirmed as legitimate transactions through various consensus algorithms and Smart Contracts. The following steps describe how the scheme works:

• Step 1. The Distributed Key Blockchain generates public keys that eligible voters will use to encrypt votes. In addition to generating and managing keys, this blockchain must verify in advance whether the voter has the right to vote and has not voted before.
• Step 2. At the voter's request, and after reaching consensus with the algorithm used for consensus, as described in [51], the DKB generates the pair of keys that the voter will use to encrypt the vote. The preliminary DKG confirms that the voter has the right to vote and has not already voted. There may be some form of interface or application in this part of the scheme that allows voters to vote.
• Step 3. As presented in Figure 3, the voter encrypts the ballot using the public key generated by DKB. The voter generates a cryptographic nonce and adds it to the vote before encrypting it with the public key.
- Step 3. As presented in Figure 3, the voter encrypts the ballot using the public key generated by the DKB. The voter generates a cryptographic nonce and adds it to the vote before encrypting it with the public key. A nonce is an abbreviation for “number used only once”; it is added to the vote and can be used by the voter to verify, after counting, that the vote has been counted accurately. Nonce generation and encryption occur during the voting process within the interface or application that the voter uses to vote. This combination of hash and encrypted vote with nonce, as presented in Figure 3, assures the voter that their vote has been counted and, furthermore, counted correctly.

**Figure 3.** Encrypted vote + nonce.

- Step 4. As presented in Figure 4, the voter generates a hash of their private key within the interface or application and ties it to the encrypted vote + nonce. Using the hash of their private key, the voter may verify that their vote is valid and has not been tampered with during the voting process.

**Figure 4.** Hash and encrypted vote + nonce.
- Step 5. The encrypted vote + nonce and hash are digitally signed with the voter’s private key, as presented in Figure 5. The voter is ready to cast their ballot, which will be sent to the EVB; however, there will be a mechanism in place to separate the voter data from the vote data. (A code sketch of this voter-side pipeline, Steps 3–5, follows the list.)

**Figure 5.** Signature of hash and encrypted vote + nonce.

- Step 6. A form of anonymizer is used in this step, mixing timestamps of votes and shuffling them in order to reduce the risk of voter or vote identification. In addition to timestamp mixing, this approach guarantees that voter data is separated from the vote. This is an analogy of envelopes, where the inner envelope carries the ballot but no information about the voter, whereas the outer envelope contains voter data but no ballot data.
- Step 7. After the operation in Step 6, the encrypted votes will be stored in the EVB. Because the so-called outer wrapper, which was the voter’s signature, is removed in this step, only the encrypted votes remain, as presented in Figure 4. According to the envelope analogy, in this case it is only the inner envelopes, which do not contain any information about the outer envelope (voter data).
- Step 8. The voter’s signature is removed from the encrypted ballot, assuring that the vote is not linked to the voter. According to the envelope analogy, in this case it is only the outer envelopes, which do not contain any information about the inner envelope (vote data). The DKB stores voter signatures as well as other voter information. Both voters and authorities can verify that a voter has voted by storing the voter’s signature and other voter data in the DKB.
- Step 9. The Encrypted Votes Blockchain stores the encrypted votes and the hash of the voter’s private key throughout the voting session.
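As promised in Step 5, here is a minimal sketch of the voter-side pipeline of Steps 3–5. The primitives `dkb_public_encrypt` and `voter_sign` are hypothetical stand-ins for the DKB-issued encryption key and the voter's signature scheme; only the data flow follows the scheme above.

```python
import hashlib
import secrets

def build_ballot(vote: str, dkb_public_encrypt, voter_sign,
                 voter_private_key: bytes):
    # Step 3: a fresh nonce ("number used only once") is appended to the
    # vote, and the pair is encrypted under the DKB-generated public key.
    nonce = secrets.token_hex(16)
    encrypted_vote = dkb_public_encrypt(f"{vote}|{nonce}".encode())
    # Step 4: a hash of the voter's private key is tied to the package.
    key_hash = hashlib.sha256(voter_private_key).hexdigest()
    # Step 5: the whole package is signed; this signature is the "outer
    # envelope" that the Step 6 anonymizer later strips off.
    signature = voter_sign(encrypted_vote + key_hash.encode())
    # The voter keeps the nonce to verify later that the vote was counted.
    return encrypted_vote, key_hash, signature, nonce
```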
Saving the votes in the EVB without the voter’s signature guarantees anonymity and privacy, whereas saving the voter’s signature at the DKB prevents double voting. With an Encrypted Votes Blockchain, the vote cannot be associated with the voter, but even the Distributed Key Blockchain can never associate the signature (voter) with the vote, meeting the two main preconditions of voting. Smart Contracts can manage voting time in both DKB and EVB. When the voting time is over, the generation of keys will not be allowed, and consequently, neither will the voting. Next, the counting begins, and if the Distributed Key Blockchain and Encrypted Votes Blockchain have agreed to this, the Encrypted Votes Blockchain signs the dataset of all encrypted votes with its private key and sends this dataset to the Distributed Key Blockchain, as presented in Figure 6. The dataset, in this sense, represents a ballot, a list of votes without voter information, thus assuring voter anonymity and privacy.

**Figure 6.** Vote transfer, decryption, and results.

The Distributed Key Blockchain validates the signing of encrypted ballot data sent by the Encrypted Votes Blockchain using the EVB’s public key; if it is valid, it decrypts the encrypted votes. The private key of the Distributed Key Blockchain is used to decrypt the votes. The Distributed Key Blockchain verifies that the number of voter signatures equals the number of votes received from the Encrypted Votes Blockchain prior to decryption, proving that there are no more votes than voters or vice versa. After decrypting the votes, the Distributed Key Blockchain calculates the votes and announces the results based on the legally defined criteria.
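This counting phase is easy to summarise in code. The helpers `evb_verify` and `dkb_decrypt` are hypothetical stand-ins for the EVB signature check and the DKB decryption; only the order of checks follows the description above.

```python
def tally(encrypted_votes, voter_signatures, evb_verify, dkb_decrypt):
    # The DKB first validates the EVB's signature over the ballot dataset.
    if not evb_verify(encrypted_votes):
        raise ValueError("invalid EVB signature over the ballot dataset")
    # Before decryption, #votes must equal #voter signatures, proving that
    # there are no more votes than voters or vice versa.
    if len(encrypted_votes) != len(voter_signatures):
        raise ValueError("vote count does not match voter-signature count")
    # Only then are the votes decrypted (with the DKB private key) and counted.
    return [dkb_decrypt(v) for v in encrypted_votes]
```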
_4.1. Evaluation of Storage and Energy Consumption_

Various data, such as voter data, electoral zone data, and other comparable data, are processed and stored during the voting process. Depending on the number of voters, the storage size may increase. Data are redundant because the Blockchain is distributed. The redundant data depend on the number of nodes used to mine in the Blockchain. The storage calculation to store the voting records is based on the Blockchain’s structure. The organization of data in the block depends on the number of transactions and the platform used. Since, in the current Blockchain, the size of the block is almost 1 MB (megabyte), calculations are based on 1024 bytes (1 kilobyte) per vote. According to IBM calculations [52], a 1 MB block must be able to store 1000 votes. Based on the assumptions above, the formula to calculate the needed storage for the voting system is:

_storage_size = (number_of_voters / 1000) * 1 MB_

In the case of 10 million voters, the minimum storage size of one node must be about 10,000 MB or approximately 10 GB (gigabytes). The redundant data are calculated by multiplying the storage_size by the number of nodes performing the mining. Energy consumption should be considered regardless of which of the two most popular platforms is used, whether the Ethereum platform as a public network or the Hyperledger platform as a limited-access or permissioned blockchain network. The amount of energy consumed by the blockchain is determined by the block’s difficulty and the number of hashes generated per second (called the hash rate) [53]. The total energy consumption is also determined by the total number of nodes, which can range from a few tens to several hundreds depending on the type of election and actors involved, such as ministries, municipalities, civil society, universities, and other important institutions. The assumption of the overall cost of all systems (energy consumption only for transaction mining) was calculated as follows:

_energy_cost_per_day = (no_of_nodes * node_power_consumption) * prices_per_kWh * 24 h_

A similar approach of calculation is given in [54], which defines the average energy for storing a data unit for one year. However, because electronic voting only takes a few days or weeks, disk size and energy usage may be less relevant.
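The two formulas above translate directly into code. The sample node power and electricity price below are illustrative assumptions, not values from the paper.

```python
def storage_size_mb(number_of_voters: int) -> float:
    # storage_size = (number_of_voters / 1000) * 1 MB, i.e. ~1 KB per vote.
    return number_of_voters / 1000

def redundant_storage_mb(number_of_voters: int, mining_nodes: int) -> float:
    # The distributed ledger replicates the full record on every mining node.
    return storage_size_mb(number_of_voters) * mining_nodes

def energy_cost_per_day(nodes: int, node_power_kw: float,
                        price_per_kwh: float) -> float:
    # energy_cost_per_day = (no_of_nodes * node_power_consumption)
    #                       * price_per_kWh * 24 h
    return nodes * node_power_kw * price_per_kwh * 24

print(storage_size_mb(10_000_000))          # 10,000 MB, i.e. ~10 GB per node
print(energy_cost_per_day(100, 0.5, 0.20))  # 100 nodes at 500 W, 0.20/kWh
```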
_4.2. Discussions_

Current schemes and protocols do not meet the reliability criteria since they do not adequately meet the security, privacy, and anonymity characteristics. The BSJC and Anti-Quantum systems, for example, fail to meet voter expectations for accuracy, correctness, scalability, and verifiability. The OVN, DATE, BES, and BEA schemes, on the other hand, do not meet the requirements for correctness, integrity, and scalability. Our scheme manages to balance the qualities of privacy and anonymity by using two Blockchains (DKB and EVB). Integrity, precision, and correctness are also obtained, in addition to anonymity and privacy. This is accomplished by using a cryptographic nonce and a hash of the voter’s private key, which allows the voter to verify their vote and ensure that their vote is correctly counted. Future researchers should consider the component of the vote separation from the voter and the part of anonymization that occurs in Step 6 of the scheme, as presented in Figure 2.

**5. Conclusions**

Electronic voting systems have recently begun to find more applications in the real world due to their numerous advantages. The application of Blockchain technology can be more reliable than traditional approaches, because traditional or electronic voting systems are usually managed by a single authority, which also carries the risk of manipulation. Because Blockchain is distributed, not managed by a single authority, and uses different consensus methods between parties, it can improve electronic voting systems. The immutability of Blockchain ensures data integrity through auditing, but privacy and anonymity are still among the main concerns. The proposed approach addresses these concerns with electronic voting, employing two independent Blockchains. The usage of the two different Blockchains recommended in our study, i.e., the Encrypted Votes Blockchain and the Distributed Key Blockchain, takes voter privacy and vote anonymity into account and provides solutions. Voter privacy and vote anonymity are achieved by storing votes and voter data in separate Blockchains and using cryptographic methods and protocols. The nonce and hash of the voter’s private key, as well as a comparison of the number of votes with the number of signatures of voters, ensure the integrity of the data. In addition, this approach makes it possible to verify if the vote has been counted correctly.
The Distributed Key Blockchain also guarantees that no fraudulent voter has voted more than once, as this is verified before the voter casts their vote.

**Author Contributions:** Methodology, V.N., B.R., R.D. and I.S.; formal analyses, V.N., B.R., R.D. and I.S.; writing—original draft preparation, V.N.; writing—review and editing, B.R., R.D. and I.S.; visualization, V.N.; supervision, B.R. and I.S.; project administration, B.R.; funding acquisition, B.R. and I.S. All authors have read and agreed to the published version of the manuscript.

**Funding:** The Ministry of Education, Science, Technology and Innovation, Government of Kosovo, with Decision no. 2-814 dt. 15.06.2021, has funded this research.

**Conflicts of Interest:** The authors declare no conflict of interest.

**References**

1. Solutions, I. Software that Powers Democracy Should Be Free. Available online: http://inno.vote/whitepaper/Inno.vote%20%E2%80%94%20Bringing%20Democracy%20to%20Elections.pdf (accessed on 28 January 2022).
2. Neziri, V. E-Voting: System Architecture—Kosovo Case. Master's Thesis, Faculty of Electrical and Computer Engineering, University of Prishtina, Prishtinë, Kosovo, September 2011.
3. Dhillon, A.; Kotsialou, G.; McBurney, P.; Riley, L. Introduction to Voting and the Blockchain: Some Open Questions for Economists. In CAGE Online Working Paper Series 416, Competitive Advantage in the Global Economy; Örebro University: Örebro, Sweden, 2019.
4. Westin, A. Privacy and Freedom. Wash. Lee Law Rev. 1968, 25, 166.
5. Webb, P.D.; Eulau, H.; Gibbins, R. Election Political Science. Available online: https://www.britannica.com/topic/election-political-science (accessed on 7 September 2021).
6. Enguehard, C. Ethics and Electronic Voting. In Proceedings of the ETHICOMP—Liberty and Security in an Age of ICTs, Paris, France, 25–27 June 2014.
7. Wolf, P.; Nackerdien, R.; Tuccinardi, D. Introducing Electronic Voting: Essential Considerations; International Institute for Democracy and Electoral Assistance (International IDEA): Stockholm, Sweden, 2011.
8. e-Estonia. i-Voting—The Future of Elections? Available online: https://e-estonia.com/i-voting-the-future-of-elections/ (accessed on 28 January 2022).
9. International Institute for Democracy and Electoral Assistance. If e-Voting Is Currently Being Used, What Type(s) of Technology Used? Available online: https://www.idea.int/data-tools/question-view/743 (accessed on 17 May 2022).
10. Microsoft Corporate Blogs. Electronic Voting: What Europe Can Learn from Estonia. Available online: https://blogs.microsoft.com/eupolicy/2019/05/10/electronic-voting-estonia/ (accessed on 17 May 2022).
11. Gibson, P.; Krimmer, R.; Teague, V.; Pomares, J. A review of E-voting: The past, present and future. Ann. Telecommun. 2016, 71, 279–286. [CrossRef](http://doi.org/10.1007/s12243-016-0525-8)
12. Jafar, U.; Ab Aziz, M.J.; Shukur, Z. Blockchain for Electronic Voting System—Review and Open Research Challenges. Sensors 2021, 21, 5874. [CrossRef](http://doi.org/10.3390/s21175874)
13. Tama, B.A.; Kweka, B.J.; Park, Y.; Rhee, K.-H. A critical review of blockchain and its current applications. In Proceedings of the International Conference on Electrical Engineering and Computer Science (ICECOS), Palembang, Indonesia, 22–23 August 2017; pp. 109–113. [CrossRef](http://doi.org/10.1109/ICECOS.2017.8167115)
14. Göbel, J.; Keeler, H.; Krzesinski, A.; Taylor, P. Bitcoin blockchain dynamics: The selfish-mine strategy in the presence of propagation delay. Perform. Eval. 2016, 104, 23–41. [CrossRef](http://doi.org/10.1016/j.peva.2016.07.001)
15. Zheng, Z.; Xie, S.; Dai, H.-N.; Chen, X.; Wang, H. Blockchain challenges and opportunities: A survey. Int. J. Web Grid Serv. 2018, 14, 352–376. [CrossRef](http://doi.org/10.1504/IJWGS.2018.095647)
16. Wu, M.; Wang, K.; Cai, X.; Guo, S.; Guo, M.; Rong, C. A Comprehensive Survey of Blockchain: From Theory to IoT Applications and Beyond. IEEE Internet Things J. 2019, 6, 8114–8154. [CrossRef](http://doi.org/10.1109/JIOT.2019.2922538)
17. Bodkhe, U.; Tanwar, S.; Parekh, K.; Khanpara, P.; Tyagi, S.; Kumar, N.; Alazab, M. Blockchain for Industry 4.0: A Comprehensive Review. IEEE Access 2020, 8, 79764–79800. [CrossRef](http://doi.org/10.1109/ACCESS.2020.2988579)
18. Akram, S.V.; Malik, P.K.; Singh, R.; Anita, G.; Tanwar, S. Adoption of blockchain technology in various realms: Opportunities and challenges. Secur. Priv. 2020, 3, e109. [CrossRef](http://doi.org/10.1002/spy2.109)
19. Iansiti, M.; Lakhani, K. The Truth about Blockchain. Harvard Business Review. Available online: https://hbr.org/2017/01/the-truth-about-blockchain (accessed on 19 December 2021).
20. Anh Dinh, T.; Wang, J.; Chen, G.; Liu, R.; Ooi, B.C.; Tan, K.-L. BLOCKBENCH: A Framework for Analyzing Private Blockchains. In Proceedings of the ACM International Conference on Management of Data, Chicago, IL, USA, 14–19 May 2017; pp. 1085–1100. [CrossRef](http://doi.org/10.48550/arXiv.1703.04057)
21. Zheng, Z.; Xie, S.; Dai, H.-N.; Chen, W.; Chen, X.; Weng, J.; Imran, M. An overview on smart contracts: Challenges, advances and platforms. Future Gener. Comput. Syst. 2019, 105, 475–491. [CrossRef](http://doi.org/10.1016/j.future.2019.12.019)
22. Oliveira, M.; Carrara, G.; Fernandes, N.; Albuquerque, C.; Carrano, R.; Medeiros, D.; Mattos, D. Towards a Performance Evaluation of Private Blockchain Frameworks using a Realistic Workload. In Proceedings of the 22nd Conference on Innovation in Clouds, Internet and Networks and Workshops (ICIN), Paris, France, 19–21 February 2019; pp. 180–187. [CrossRef](http://doi.org/10.1109/ICIN.2019.8685888)
23. Hussain, H.A.; Mansor, Z.; Shukur, Z. Comprehensive Survey and Research Directions on Blockchain IoT Access Control. Int. J. Adv. Comput. Sci. Appl. 2021, 12, 239–244. [CrossRef](http://doi.org/10.14569/IJACSA.2021.0120530)
24. Augoye, V.; Tomlinson, A. Analysis of Electronic Voting Schemes in the Real World; UK Academy for Information Systems: Oxford, UK, 2018.
25. Tarasov, P.; Tewari, H. The Future of E-Voting. IADIS Int. J. Comput. Sci. Inf. Syst. 2017, 12, 148–165.
26. Khan, K.; Arshad, J.; Khan, M. Secure Digital Voting System Based on Blockchain Technology. Int. J. Electron. Gov. Res. 2018, 14, 53–62. [CrossRef](http://doi.org/10.4018/IJEGR.2018010103)
27. Neziri, V.; Dervishi, R.; Rexha, B. Survey on Using Blockchain Technologies in Electronic Voting Systems. In Proceedings of the 25th International Conference on Circuits, Systems, Communications and Computers (CSCC), Crete Island, Greece, 19–22 July 2021; pp. 61–65. [CrossRef](http://doi.org/10.1109/CSCC53858.2021.00019)
28. Specter, M.; Koppel, J.; Weitzner, D. The ballot is busted before the blockchain: A security analysis of Voatz, the first internet voting application used in U.S. federal elections. USENIX Secur. Symp. 2020, 87, 1535–1552.
29. Zambrano, R.; Young, A.; Verhulst, S. Seeking Ways to Prevent Electoral Fraud using Blockchain in Sierra Leone. Available online: https://blockchan.ge/blockchange-election-monitoring.pdf (accessed on 7 February 2022).
30. Buldas, A.; Kroonmaa, A.; Laanoja, R. Keyless Signatures' Infrastructure: How to Build Global Distributed Hash-Trees. In Secure IT Systems; Springer: Berlin/Heidelberg, Germany, 2013; Volume 8208, pp. 313–320. [CrossRef](http://doi.org/10.1007/978-3-642-41488-6_21)
31. Kshetri, N.; Voas, J. Blockchain-Enabled E-Voting. IEEE Softw. 2018, 35, 95–99. [CrossRef](http://doi.org/10.1109/MS.2018.2801546)
32. Polyakov, K. How Moscow Organized Voting on Blockchain in 2020 (ICT Moscow). Available online: https://ict.moscow/en/news/how-moscow-organized-voting-on-blockchain-in-2020/ (accessed on 9 January 2022).
33. Huang, J.; He, D.; Obaidat, M.; Vijayakumar, P.; Luo, M.; Raymond Choo, K.-K. The Application of the Blockchain Technology in Voting Systems: A Review. Assoc. Comput. Mach. 2021, 54. [CrossRef](http://doi.org/10.1145/3439725)
34. Yu, T.; Yasuo, O. An anonymous distributed electronic voting system using Zerocoin. In Proceedings of the International Conference on Information Networking, Jeju Island, Korea, 13–16 January 2021; pp. 163–168. [CrossRef](http://doi.org/10.1109/ICOIN50884.2021.9333937)
35. Yadav, A.S.; Urade, Y.V.; Thombare, A.U.; Patil, A.A. E-Voting using Blockchain Technology. Int. J. Eng. Res. Technol. 2020, 9, 375–380.
36. Wang, W.; Xu, H.; Alazab, M.; Gadekallu, T.R.; Han, Z.; Su, C. Blockchain-Based Reliable and Efficient Certificateless Signature for IIoT Devices. IEEE Trans. Ind. Inform. 2021. [CrossRef](http://doi.org/10.1109/TII.2021.3084753)
37. Atsushi, F.; Tatsuaki, O.; Kazuo, O. A practical secret voting scheme for large scale elections. In Proceedings of the International Workshop on the Theory and Application of Cryptographic Techniques, Aarhus, Denmark, 22–26 May 2005; Volume 718.
38. McCorry, P.; Shahandashti, S.; Hao, F. A smart contract for boardroom voting with maximum voter privacy. In Proceedings of the International Conference on Financial Cryptography and Data Security, Sliema, Malta, 3–7 April 2017.
39. Wei-Jr, L.; Yung-chen, H.; Chih-Wen, H.; Ja-Ling, W. DATE: A Decentralized, Anonymous, and Transparent E-voting System. In Proceedings of the IEEE International Conference on Hot Information-Centric Networking, Shenzhen, China, 15–17 August 2018; pp. 24–29. [CrossRef](http://doi.org/10.1109/HOTICN.2018.8605994)
40. Gao, S.; Zheng, D.; Guo, R.; Jing, C.; Hu, C. An Anti-Quantum E-Voting Protocol in Blockchain with Audit Function. IEEE Access 2019, 7, 115304–115316. [CrossRef](http://doi.org/10.1109/ACCESS.2019.2935895)
41. Fernández-Caramès, T.; Fraga-Lamas, P. Towards Post-Quantum Blockchain: A Review on Blockchain Cryptography Resistant to Quantum Computing Attacks. IEEE Access 2020, 8, 21091–21116. [CrossRef](http://doi.org/10.1109/ACCESS.2020.2968985)
42. Shahzad, B.; Crowcroft, J. Trustworthy Electronic Voting Using Adjusted Blockchain Technology. IEEE Access 2019, 7, 24477–24488. [CrossRef](http://doi.org/10.1109/ACCESS.2019.2895670)
43. Yi, H. Securing e-voting based on blockchain in P2P network. EURASIP J. Wirel. Commun. Netw. 2019, 137. [CrossRef](http://doi.org/10.1186/s13638-019-1473-6)
44. Khan, K.M.; Arshad, J.; Khan, M.M. Investigating performance constraints for blockchain based secure e-voting system. Future Gener. Comput. Syst. 2020, 105, 13–26. [CrossRef](http://doi.org/10.1016/j.future.2019.11.005)
45. Agora. Bringing Our Voting Systems into the 21st Century. Available online: https://static1.squarespace.com/static/5b0be2f4e2ccd12e7e8a9be9/t/5f37eed8cedac41642edb534/1597501378925/Agora_Whitepaper.pdf (accessed on 29 January 2022).
46. Horizon State. Available online: https://cryptorating.eu/whitepapers/Horizon-State/horizon_state_white_paper.pdf (accessed on 30 January 2022).
47. Voatz Inc. Voatz Mobile Voting Platform—An Overview. Available online: https://new.voatz.com/wp-content/uploads/2020/07/voatz-security-whitepaper.pdf (accessed on 3 February 2022).
48. Tarasov, P.; Tewari, H. Internet Voting Using Zcash. IACR Cryptol. ePrint Arch. 2017, 585.
49. Xiaoqiang, S.; Richard, Y.F.; Peng, Z.; Zhiwei, S.; Weixin, X.; Xiang, P. A Survey on Zero-Knowledge Proof in Blockchain. IEEE Netw. 2021, 35, 198–205. [CrossRef](http://doi.org/10.1109/MNET.011.2000473)
50. Schnorr, C. Efficient identification and signatures for smart cards. In Advances in Cryptology—CRYPTO' 89, Proceedings of the Workshop on the Theory and Application of Cryptographic Techniques, Houthalen, Belgium, 10–13 April 1989; Springer: Berlin/Heidelberg, Germany, 1990; pp. 239–252.
51. Du, M.; Ma, X.; Zhang, Z.; Wang, X.; Chen, Q. A review on consensus algorithm of blockchain. In Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics (SMC), Banff, AB, Canada, 5–8 October 2017; pp. 2567–2572. [CrossRef](http://doi.org/10.1109/SMC.2017.8123011)
52. IBM. IBM Storage: Storage Needs for Blockchain Technology. 2018. Available online: https://www.ibm.com/downloads/cas/LA8XBQGR (accessed on 6 February 2022).
53. Saingre, D. Understanding the Energy Consumption of Blockchain Technologies: A Focus on Smart Contracts; École nationale supérieure Mines-Télécom Atlantique: Nantes, France, 2021.
54. Coroamă, V. Blockchain Energy Consumption: An Exploratory Study; Swiss Federal Office of Energy SFOE: Bern, Switzerland, 2021.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.3390/app12115477?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.3390/app12115477, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://www.mdpi.com/2076-3417/12/11/5477/pdf?version=1653725558" }
2,022
[]
true
2022-05-28T00:00:00
[ { "paperId": "80260a4af770a022ccb4955b0999075dc908643a", "title": "Blockchain for Electronic Voting System—Review and Open Research Challenges" }, { "paperId": "d9a3f9939e0b0e08f7e4778bab9a5e433bfca6d0", "title": "The Application of the Blockchain Technology in Voting Systems" }, { "paperId": "e589891c18b1b7303de69e8bbbba2e043a8c4705", "title": "Adoption of blockchain technology in various realms: Opportunities and challenges" }, { "paperId": "a740dcc3da0e3086db21aedb196e5e7ba5b094e1", "title": "Investigating performance constraints for blockchain based secure e-voting system" }, { "paperId": "e0bc89f5776804bc2be27f1945f900d1ac8f1e7f", "title": "An Overview on Smart Contracts: Challenges, Advances and Platforms" }, { "paperId": "09298c5b6258ac9f5d1aba5e3c7649737050a933", "title": "E-Voting using Block Chain Technology" }, { "paperId": "d77a14e132137ee7059d52b1cb714057cb3f5ca3", "title": "Securing e-voting based on blockchain in P2P network" }, { "paperId": "81ac8abee548ab937b68b3bd03d27e57b2705f5a", "title": "A review of E-voting: the past, present and future" }, { "paperId": "1bda4239308c6dcc7158c34204157d77f5f5b384", "title": "Bitcoin blockchain dynamics: The selfish-mine strategy in the presence of propagation delay" }, { "paperId": "8d69c06d48b618a090dd19185aea7a13def894a5", "title": "Efficient Identification and Signatures for Smart Cards (Abstract)" }, { "paperId": "d6a5afd47a5bddc669399dc299c11ab8ac3368c2", "title": "The Ballot is Busted Before the Blockchain: A Security Analysis of Voatz, the First Internet Voting Application Used in U.S. Federal Elections" }, { "paperId": "69f4373f84f2f9cbf97996a803cddffd7ef5d02e", "title": "Introduction to Voting and the Blockchain: some open questions for economists" }, { "paperId": "d4b46a1beb6d03e08066a199f524ded286e666a5", "title": "The Future of E-Voting" }, { "paperId": "6230d41272aa0fa44fa167840bded9af2014ac4f", "title": "Internet Voting Using Zcash" }, { "paperId": "1d1a045884911fe99b502960bf9e2718b32c238a", "title": "Privacy and freedom" } ]
17,090
en
[ { "category": "Physics", "source": "external" }, { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Physics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01effcf23bd61adb2b9b3bb648b73aae63c93c40
[ "Physics", "Computer Science" ]
0.807576
Analytic quantum weak coin flipping protocols with arbitrarily small bias
01effcf23bd61adb2b9b3bb648b73aae63c93c40
ACM-SIAM Symposium on Discrete Algorithms
[ { "authorId": "153150192", "name": "A. S. Arora" }, { "authorId": "1981460", "name": "J. Roland" }, { "authorId": "37633530", "name": "Chrysoula Vlachou" } ]
{ "alternate_issns": null, "alternate_names": [ "Symposium on Discrete Algorithms", "ACM-SIAM Symp Discret Algorithm", "Symp Discret Algorithm", "SODA" ], "alternate_urls": null, "id": "5545566b-c0b8-418c-83a5-a986a4657572", "issn": null, "name": "ACM-SIAM Symposium on Discrete Algorithms", "type": "conference", "url": "https://en.wikipedia.org/wiki/Symposium_on_Discrete_Algorithms" }
Weak coin flipping (WCF) is a fundamental cryptographic primitive for two-party secure computation, where two distrustful parties need to remotely establish a shared random bit whilst having opposite preferred outcomes. It is the strongest known primitive with arbitrarily close to perfect security quantumly while classically, its security is completely compromised (unless one makes further assumptions, such as computational hardness). A WCF protocol is said to have bias $\epsilon$ if neither party can force their preferred outcome with probability greater than $1/2+\epsilon$. Classical WCF protocols are shown to have bias $1/2$, i.e., a cheating party can always force their preferred outcome. On the other hand, there exist quantum WCF protocols with arbitrarily small bias, as Mochon showed in his seminal work in 2007 [arXiv:0711.4114]. In particular, he proved the existence of a family of WCF protocols approaching bias $\epsilon (k)=1/(4k+2)$ for arbitrarily large $k$ and proposed a protocol with bias $1/6$. Last year, Arora, Roland and Weis presented a protocol with bias $1/10$ and to go below this bias, they designed an algorithm that numerically constructs unitary matrices corresponding to WCF protocols with arbitrarily small bias [STOC'19, p.205-216]. In this work, we present new techniques which yield a fully analytical construction of WCF protocols with bias arbitrarily close to zero, thus achieving a solution that has been missing for more than a decade. Furthermore, our new techniques lead to a simplified proof of existence of WCF protocols by circumventing the non-constructive part of Mochon's proof. As an example, we illustrate the construction of a WCF protocol with bias $1/14$.
## Analytic quantum weak coin flipping protocols with arbitrarily small bias

#### Atul Singh Arora∗, Jérémie Roland†, and Chrysoula Vlachou‡

Université libre de Bruxelles, Belgium

#### 13 July 2020

**Abstract**

Weak coin flipping (WCF) is a fundamental cryptographic primitive for two-party secure computation, where two distrustful parties need to remotely establish a shared random bit whilst having opposite preferred outcomes. It is the strongest known primitive with arbitrarily close to perfect security quantumly while classically, its security is completely compromised (unless one makes further assumptions, such as computational hardness). A WCF protocol is said to have bias $\epsilon$ if neither party can force their preferred outcome with probability greater than $1/2 + \epsilon$. Classical WCF protocols are shown to have bias $1/2$, i.e., a cheating party can always force their preferred outcome. On the other hand, there exist quantum WCF protocols with arbitrarily small bias, as Mochon showed in his seminal work in 2007 [arXiv:0711.4114]. In particular, he proved the existence of a family of WCF protocols approaching bias $\epsilon(k) = 1/(4k + 2)$ for arbitrarily large $k$ and proposed a protocol with bias $1/6$. Last year, Arora, Roland and Weis presented a protocol with bias $1/10$ and to go below this bias, they designed an algorithm that numerically constructs unitary matrices corresponding to WCF protocols with arbitrarily small bias [STOC'19, p.205-216]. In this work, we present new techniques which yield a fully analytical construction of WCF protocols with bias arbitrarily close to zero, thus achieving a solution that has been missing for more than a decade. Furthermore, our new techniques lead to a simplified proof of existence of WCF protocols by circumventing the non-constructive part of Mochon's proof. As an example, we illustrate the construction of a WCF protocol with bias $1/14$.

∗aarora@ulb.ac.be †jroland@ulb.ac.be ‡cvlachou@ulb.ac.be

### 1 Introduction

Coin flipping (CF), introduced by Blum [6], is an important cryptographic primitive which permits two distrustful parties to remotely generate an unbiased random bit in spite of the fact that one of them might be dishonest and try to force a specific outcome. Like bit commitment (BC) and oblivious transfer (OT), it is a basic primitive for secure 2-party computation, a special case of secure multi-party computation, where the parties need to jointly compute a function on their inputs while keeping these inputs private. In the classical scenario, these primitives are shown to be computationally secure, and without extra assumptions (e.g. computational hardness) a dishonest party can always cheat perfectly [10]. Moving to the quantum scenario, BC and OT protocols have a non-zero lower bound on their bias [8, 7]; achieving perfect security is not possible, but still they perform better than their classical counterparts without computational hardness assumptions. The two distinct variants of CF, namely strong CF (SCF) and weak CF (WCF), behave differently in the quantum scenario. In SCF the desired outcome of each party is not known a priori, i.e., none of the parties know beforehand whether the other prefers outcome 0 or 1. Just like for quantum BC and OT, there is a lower bound on the bias of SCF protocols [14, 13]. The best known explicit quantum SCF protocols had bias $1/4$ [3, 18, 12].
For a quantum WCF protocol though, where the preferred outcome of each party is known, the situation is different. In his seminal work, Mochon [17] proved the existence of a family of WCF protocols achieving arbitrarily close to zero bias. This established WCF to be the strongest known secure 2-party computation primitive which has arbitrarily close to perfect security in the quantum setting while being completely insecure classically (without making further assumptions). Moreover, Kerenidis and Chailloux showed that perfect WCF can be used as a black box to obtain the optimal protocols for quantum SCF and BC [9, 8], i.e. the protocols with the lowest possible bias, $\frac{1}{\sqrt{2}} - \frac{1}{2}$; therefore Mochon's result is highly relevant for the whole area of quantum secure 2-party (and multiparty) computation. However, his proof was not constructive and the proposal of an explicit protocol with almost zero bias was left as an open problem, while only an explicit protocol with bias $\frac{1}{6}$ was presented. In fact, first, a WCF protocol with bias $\frac{1}{\sqrt{2}} - \frac{1}{2}$ was reported [19], which incidentally matched the lower bound for the bias of SCF protocols, undermining even the existence of better WCF protocols and the distinction between them. Later, Mochon's lengthy and highly technical proof was verified and simplified [2], but still a protocol with bias below $\frac{1}{6}$ was missing. Last year Arora, Roland and Weis proposed an explicit protocol with bias $\frac{1}{10}$, and designed an algorithm that can *numerically* construct unitary matrices corresponding to protocols with arbitrarily small bias [5]. In the present work, we report the analytical solution to the WCF problem, by determining the unitary matrices that constitute WCF protocols with arbitrarily small bias.

### 2 Background and overview of the result

A quantum WCF protocol can be described as follows: the two parties, say A and B, are located in different places and, besides their local register, they also have a register that they can exchange, called the message register. At each round, the party that holds the message register can apply a local unitary on it and on their local register. After a number of rounds, the parties perform a final measurement on their local registers, and the outcome determines the winner: A wins on outcome 0, while B wins on outcome 1. If both parties are honest and follow the protocol, they have equal probabilities of winning, $P_A = P_B = 1/2$. If one of the parties is cheating and tries to force the other player to output their desired outcome, then their probability of winning is, in general, greater. We denote this probability by $P_A^*$ for A being dishonest and $P_B^*$ for dishonest B. Let $\epsilon \geq 0$ be the smallest number such that both $P_A^*$ and $P_B^*$ are upper bounded by $1/2 + \epsilon$. Then we say that the protocol has *bias* $\epsilon$.¹ To calculate $P_{A/B}^*$ one can write a semi-definite program (SDP) that maximizes this cheating probability, given that the honest party follows the protocol. Using the SDP duality, this maximization problem can be written as a minimization problem over the respective dual variables $Z_{A/B}$. However, the above holds given that we already have a protocol. Therefore, a new framework is needed, permitting us to find both the protocol and its bias.

¹The case where both A and B are dishonest does not depend on the description of the protocol since neither is following it.
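For concreteness, the definition above can be restated in one line (our paraphrase, not an equation from the paper; since a dishonest party can always just play honestly, $P_{A/B}^* \geq 1/2$):

$$
\epsilon \;=\; \max\{P_A^*,\, P_B^*\} - \tfrac{1}{2},
\qquad\text{i.e. a protocol has bias } \epsilon \text{ exactly when } P_A^*,\, P_B^* \;\le\; \tfrac{1}{2} + \epsilon .
$$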
A ground-breaking idea was provided by Kitaev (as Mochon describes in [17]), who transformed these SDPs into the so-called time-dependent point games (TDPG). A TDPG is a sequence of frames that include points on the positive quadrant of the $x$-$y$ plane with a probability weight assigned to each point. The TDPGs that we consider are determined by specific initial and final configurations and there are rules on how to move from one frame to the next. The initial frame has two points with coordinates ⟦0, 1⟧ and ⟦1, 0⟧ and probability weight $1/2$ each, while the final frame we want to obtain has only one point at ⟦β, α⟧ with probability weight 1. Consider one frame, and restrict to the set of points along a horizontal line, i.e. points with the same $y$-coordinate. We denote the $x$-coordinate of the $i$-th such point by $x_{g_i}$ and the respective probability weight by $p_{g_i}$, with $i \in \{1, 2, \ldots, n_g\}$. In the subsequent frame, restrict again to a set of points with the same $y$-coordinate as before. Let the $x$-coordinate of the $i$-th such point be $x_{h_i}$ and the respective probability weight be $p_{h_i}$, with $i \in \{1, 2, \ldots, n_h\}$. The rules for transitioning between subsequent frames can be written as follows:

$$
\sum_{i=1}^{n_g} p_{g_i} = \sum_{i=1}^{n_h} p_{h_i}
\qquad\text{and}\qquad
\sum_{i=1}^{n_g} \frac{\lambda x_{g_i}}{\lambda + x_{g_i}}\, p_{g_i} \;\le\; \sum_{i=1}^{n_h} \frac{\lambda x_{h_i}}{\lambda + x_{h_i}}\, p_{h_i}, \quad \forall \lambda > 0. \tag{1}
$$

Analogous rules exist for moving points along vertical lines. Some examples of such permitted moves are the raises, where we move a point along a horizontal or vertical line by increasing its coordinate, the splits, where we split a point into several others, and the merges, where we merge several points into a single point.
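To make the transition rules concrete, here is a small numerical screen for Equation (1), a sketch of ours rather than anything from the paper. Since the inequality must hold for *all* $\lambda > 0$, sampling a grid can only falsify a candidate move, never certify it.

```python
import numpy as np

def plausible_transition(g, h, lambdas=np.logspace(-3, 3, 200)):
    """Screen a candidate line transition g -> h against Equation (1).

    g and h are lists of (coordinate, weight) pairs along one line.
    """
    xg, pg = map(np.asarray, zip(*g))
    xh, ph = map(np.asarray, zip(*h))
    if not np.isclose(pg.sum(), ph.sum()):
        return False                       # probability must be conserved
    for lam in lambdas:
        lhs = np.sum(lam * xg / (lam + xg) * pg)
        rhs = np.sum(lam * xh / (lam + xh) * ph)
        if lhs > rhs + 1e-12:
            return False                   # the lambda-inequality fails
    return True

# A simple "raise": moving weight 1/2 from x = 1 to x = 2 along a line.
print(plausible_transition([(1.0, 0.5)], [(2.0, 0.5)]))  # True
```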
It was shown that for any TDPG with transitions respecting Equation (1), there exists a WCF protocol with cheating probabilities $P_A^* = \alpha + \delta$ and $P_B^* = \beta + \delta$, where $\delta$ can be made arbitrarily small. The converse also holds. Thus, the initial task of finding a protocol and solving the associated SDPs minimising $P_{A/B}^*$ is reduced to finding a TDPG such that the point ⟦β, α⟧ of the final frame is as close to ⟦1/2, 1/2⟧ as possible, corresponding to the zero-bias case. These TDPGs are called expressible by matrices (EBM) point games, and they are defined below.

**Definition 1.** Let $Z \geq 0$ be a Hermitian matrix² and $\Pi^{[z]}$ be the projector on the eigenspace of the eigenvalue $z$ of $Z$. Let $|\psi\rangle$ be a vector (not necessarily normalised), and define the finitely supported function $\mathrm{Prob}[Z, |\psi\rangle] : [0, \infty) \to [0, \infty)$ as

$$
\mathrm{Prob}[Z, |\psi\rangle](z) =
\begin{cases}
\langle\psi| \Pi^{[z]} |\psi\rangle & \text{if } z \in \mathrm{spectrum}(Z) \\
0 & \text{otherwise.}
\end{cases}
$$

Let $g, h : [0, \infty) \to [0, \infty)$ be two finitely supported functions. The line transition $g \to h$ is called EBM if there exist two matrices $0 \leq G \leq H$ and a vector $|\psi\rangle$, such that $g = \mathrm{Prob}[G, |\psi\rangle]$ and $h = \mathrm{Prob}[H, |\psi\rangle]$.

**Definition 2.** Let $g, h : [0, \infty) \times [0, \infty) \to [0, \infty)$ be two finitely supported functions. The transition $g \to h$ is called an

- EBM horizontal transition if for all $y \in [0, \infty)$, $g(\cdot, y) \to h(\cdot, y)$ is an EBM line transition, and
- EBM vertical transition if for all $x \in [0, \infty)$, $g(x, \cdot) \to h(x, \cdot)$ is an EBM line transition.

**Definition 3.** An EBM point game is a sequence of finitely supported functions³ $\{g_0, g_1, \ldots, g_n\}$, such that

- $g_0 = \frac{1}{2}⟦0, 1⟧ + \frac{1}{2}⟦1, 0⟧$ and $g_n = 1⟦β, α⟧$ for some $\alpha, \beta \in [0, 1]$,
- for all even (odd) $i$ the transition $g_i \to g_{i+1}$ is an EBM vertical (horizontal) transition.

²This matrix inequality denotes that $Z$ is a positive semi-definite matrix.
³As explained further in Section 3, $⟦a, b⟧(x, y) := \delta_{a,x}\,\delta_{b,y}$ where $\delta_{r,s}$ is the Kronecker Delta.

In order to verify that a transition is EBM one has to check conditions involving matrices, thus the problem remains hard and yet another reduction is needed. For an EBM transition $g \to h$, one can consider the corresponding finitely supported EBM function to be $h - g$. The set of EBM functions is shown to be the same (up to the closures) as the set of the so-called valid functions. We omit both the definition of a valid function and the proof that the two sets are the same, as they have been presented in previous works [17, 2]. We only highlight that checking if a transition is EBM is equivalent to verifying the validity of a suitably constructed function, which is an easier task.

Mochon, following the above reductions, proved the existence of a WCF protocol with arbitrarily small bias, by proposing a suitable family of point games with valid transitions [17]. This family is parametrised by an arbitrary integer $k \geq 1$ that specifies the bias $\epsilon = \frac{1}{4k+2}$. More precisely, $2k$ is the number of points involved in the main move of the point game. He constructed a protocol with bias $\frac{1}{6}$, but he left as an open problem the construction of a protocol with almost zero bias. This problem has remained open since then, as translating the point game into a sequence of unitaries determining the protocol is, indeed, not easy. A step forward was recently taken in [5], where a framework, the TDPG-to-Explicit-protocol Framework (TEF), was introduced, which allows the conversion of TDPGs into WCF protocols, granted that unitaries associated with the valid functions used in the games can be found. More precisely, if a unitary matrix $O$ acting on $\mathrm{span}\{|g_1\rangle, |g_2\rangle, \ldots, |h_1\rangle, |h_2\rangle, \ldots\}$, and satisfying the constraints

$$
\sum_{i=1}^{n_h} x_{h_i} |h_i\rangle\langle h_i| \;-\; \sum_{i=1}^{n_g} x_{g_i}\, E_h O |g_i\rangle\langle g_i| O^{\dagger} E_h \;\geq\; 0
\qquad\text{and}\qquad
O|v\rangle = |w\rangle \tag{2}
$$

can be found for every transition of a TDPG, then an explicit WCF protocol with the corresponding bias can be obtained using the TEF. The vectors $\{|g_i\rangle\}_{i=1}^{n_g}$, $\{|h_i\rangle\}_{i=1}^{n_h}$ are orthonormal and $E_h$ is a projection on $\mathrm{span}\{|h_i\rangle\}$. Furthermore, $x_{g_i}$ and $x_{h_i}$ are the coordinates of the points of the initial and final frame, respectively, of the line transitions, and $p_{g_i}$ and $p_{h_i}$ their corresponding probability weights (see also Equation (1)). Note that there exist $n_g$ and $n_h$ points in the initial and final frame, respectively. Finally, $|v\rangle := \sum_i \sqrt{p_{g_i}} |g_i\rangle / \sqrt{\sum_i p_{g_i}}$ and $|w\rangle := \sum_i \sqrt{p_{h_i}} |h_i\rangle / \sqrt{\sum_i p_{h_i}}$. In fact the set of transitions which satisfy Equation (2) is the same (up to the closures) as the set of valid/EBM transitions (see Appendix A). Using a perturbative method in conjunction with the TEF, the authors in [5] analytically constructed a protocol with bias $\frac{1}{10}$, and to go below this bias they used tools from geometry, and designed the so-called *elliptic monotone align* algorithm, that numerically finds the matrices determining a protocol with arbitrarily small bias.
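For intuition, the constraints of Equation (2) can be checked numerically for a candidate orthogonal matrix $O$. The following is a sketch of ours using numpy, with the basis ordered as $(h_1, \ldots, h_{n_h}, g_1, \ldots, g_{n_g})$ and $|v\rangle$, $|w\rangle$ normalised as above.

```python
import numpy as np

def satisfies_tef(O, xg, pg, xh, ph, tol=1e-9):
    """Numerically check Equation (2) for a candidate orthogonal O."""
    ng, nh = len(xg), len(xh)
    Xh = np.diag(np.concatenate([xh, np.zeros(ng)]))  # sum_i x_hi |hi><hi|
    Xg = np.diag(np.concatenate([np.zeros(nh), xg]))  # sum_i x_gi |gi><gi|
    Eh = np.diag(np.concatenate([np.ones(nh), np.zeros(ng)]))
    w = np.concatenate([np.sqrt(ph), np.zeros(ng)]); w /= np.linalg.norm(w)
    v = np.concatenate([np.zeros(nh), np.sqrt(pg)]); v /= np.linalg.norm(v)
    # First constraint: X_h - E_h O X_g O^T E_h must be positive semi-definite.
    D = Xh - Eh @ O @ Xg @ O.T @ Eh
    psd_ok = np.linalg.eigvalsh((D + D.T) / 2).min() >= -tol
    # Second constraint: O must map |v> to |w>.
    return psd_ok and np.allclose(O @ v, w, atol=tol)
```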
In the present work, we analytically construct explicit WCF protocols with arbitrarily small bias, and to this end, we consider the class of valid functions that Mochon uses in his family of point games approaching bias $\frac{1}{4k+2}$ for arbitrary integers $k \geq 1$. We refer to these valid functions as *f-assignments*, and when $f$ is a monomial, we call them *monomial assignments*. We chose the term assignment to reflect the fact that these functions are assigning the appropriate probability weights to the points of the TDPGs. If we are able to construct unitaries satisfying Equation (2) with respect to the f-assignments of Mochon's TDPGs with bias $\epsilon \to 0$ (i.e. for $k \to \infty$), we have effectively solved our problem, since the aforementioned TEF enables the conversion of TDPGs to WCF protocols. We start by noticing that an even weaker condition is sufficient: suppose that a valid/EBM function can be written as a sum of valid/EBM functions; to obtain the protocol, it suffices to find unitaries corresponding to each valid function that appears in this sum (see Appendix A). We then solve the monomial assignments, i.e. we give formulae for the unitaries corresponding to monomial assignments, and show that they indeed satisfy Equation (2), obtaining, thus, an effective solution to the f-assignment, as summarised in our main result, Theorem 13. Our approach, in addition to yielding analytic WCF protocols with vanishing bias, has a feature that we would like to emphasize here. The reduction of the problem from EBM to valid functions is pivotal in the construction of Mochon's point game [17]. However, we can bypass this reduction and directly construct a WCF protocol once the matrices $O$, corresponding to the (effective) solutions to the transitions of the point game, which satisfy Equation (2), are known. By means of the TEF we can prove that this protocol has the same bias as the point game. Therefore, our approach is simpler than the previous ones, as it avoids the aforementioned—quite technical—reduction. Finally, in [4, 5] it was shown that functions expressible by real matrices (EBRM) are sufficient for obtaining the solution,⁴ therefore from now on we restrict to orthogonal matrices.

⁴This permitted the use of a geometric approach to achieve the numerical solution.

### 3 f-assignments and their properties

We write finitely supported functions $t$ in two ways: (1) as $t = \sum_{i=1}^{n} p_i ⟦x_i⟧$, where $|p_i| > 0$ for all $i \in \{1, 2, \ldots, n\}$, and $x_i \neq x_j$ for $i \neq j$, and (2) as $t = \sum_{i=1}^{n_h} p_{h_i} ⟦x_{h_i}⟧ - \sum_{i=1}^{n_g} p_{g_i} ⟦x_{g_i}⟧$, where $p_{h_i}, p_{g_i} > 0$ and $x_{h_i}, x_{g_i}$ are all distinct. By $⟦x_i⟧$ we represent a point with coordinate $x_i$. More concretely, we have $⟦a⟧(x) = \delta_{a,x}$, where $\delta_{a,x}$ is the Kronecker delta.

**Definition 4 (f-assignments).** Given a set of real coordinates $0 \leq x_1 < x_2 < \cdots < x_n$ and a polynomial $f$ of degree at most $n - 2$ satisfying $f(-\lambda) \geq 0$ for all $\lambda \geq 0$, an f-assignment is given by the function

$$
t = \sum_{i=1}^{n} \underbrace{\frac{-f(x_i)}{\prod_{j \neq i}(x_j - x_i)}}_{=:\,p_i} ⟦x_i⟧ = h - g,
$$

where $h$ contains the positive part of $t$ and $g$ the negative part (without any common support), viz. $h = \sum_{i : p_i > 0} p_i ⟦x_i⟧$ and $g = \sum_{i : p_i < 0} (-p_i) ⟦x_i⟧$.

- We say an assignment is *balanced* if the number of points with negative weights, $p_i < 0$, equals the number of points with positive weights, $p_i > 0$. We say an assignment is *unbalanced* if it is not balanced.
- When $f$ is a monomial, viz. has the form $f(x) = cx^q$, where $c > 0$ and $q \geq 0$, we call the assignment a *monomial assignment*. For $q = 0$, we call the assignment the *zeroth assignment*.
- We say that a monomial assignment is *aligned* if the degree of the monomial is an even number ($q = 2(b - 1)$, $b \in \mathbb{N}$). We say that a monomial assignment is *misaligned* if it is not aligned.

In the definition above the coordinates are real non-negative numbers, but in the next sections where we present the solutions, we consider the coordinates to be strictly positive. However, this is not really a restriction, because any f-assignment with a zero coordinate can be expressed as an f-assignment with strictly positive coordinates, in such a way that both have the same solution (see Lemma 15 in Appendix B).
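Definition 4 is straightforward to compute with. The following is a small sketch of ours that evaluates the weights $p_i$ and illustrates, on a monomial example, that the total weight vanishes (so probability is conserved between $g$ and $h$), which holds whenever $\deg f \le n - 2$.

```python
import numpy as np

def f_assignment_weights(xs, f):
    """Weights p_i = -f(x_i) / prod_{j != i} (x_j - x_i) of Definition 4."""
    xs = np.asarray(xs, dtype=float)
    return np.array([-f(x) / np.prod(np.delete(xs, i) - x)
                     for i, x in enumerate(xs)])

# Monomial assignment f(x) = x^2 (aligned, q = 2) on four points:
ps = f_assignment_weights([1.0, 2.0, 3.0, 4.0], lambda x: x ** 2)
print(ps)        # mixed signs: positive entries form h, negative entries form g
print(ps.sum())  # ~0, since deg f <= n - 2
```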
- When f is a monomial, viz. of the form f(x) = c x^q, where c > 0 and q ≥ 0, we call the assignment a monomial assignment. For q = 0, we call the assignment the zeroth assignment.
- We say that a monomial assignment is aligned if the degree of the monomial is an even number (q = 2(b − 1), b ∈ ℕ). We say that a monomial assignment is misaligned if it is not aligned.

In the definition above the coordinates are real non-negative numbers, but in the next sections, where we present the solutions, we consider the coordinates to be strictly positive. However, this is not really a restriction, because any f-assignment with a zero coordinate can be expressed as an f-assignment with strictly positive coordinates, in such a way that both have the same solution (see Lemma 15 in Appendix B).

**Definition 5 ((Effectively) solving an assignment).** Given a finitely supported function t = Σ_{i=1}^{n_h} p_{h_i} ⟦x_{h_i}⟧ − Σ_{i=1}^{n_g} p_{g_i} ⟦x_{g_i}⟧ and an orthonormal basis {|g_1⟩, |g_2⟩, …, |g_{n_g}⟩, |h_1⟩, |h_2⟩, …, |h_{n_h}⟩}, we say that an orthogonal matrix O solves t if O satisfies O|v⟩ = |w⟩ and X_h ≥ E_h O X_g O^T E_h, where |v⟩ = Σ_{i=1}^{n_g} √p_{g_i} |g_i⟩, |w⟩ = Σ_{i=1}^{n_h} √p_{h_i} |h_i⟩, X_h = Σ_{i=1}^{n_h} x_{h_i} |h_i⟩⟨h_i|, X_g = Σ_{i=1}^{n_g} x_{g_i} |g_i⟩⟨g_i|, and E_h = Σ_{i=1}^{n_h} |h_i⟩⟨h_i|. Moreover, we say that t has an effective solution if t = Σ_{i∈I} t′_i and t′_i has a solution for all i ∈ I, where I is a finite set.

In Section 2, we claimed that in order to construct a WCF protocol with vanishing bias it suffices to obtain effective solutions to f-assignments. In particular, it suffices to express each f-assignment as a sum of monomial assignments and find the orthogonal matrices solving each monomial assignment appearing in the sum. In Appendix A we explain why this claim holds, and in Lemma 6 below we show how an f-assignment (with real and non-negative roots) can be trivially expressed as a sum of monomial assignments.

**Lemma 6 (f-assignment as a sum of monomials).** Consider a set of real coordinates satisfying 0 ≤ x_1 < x_2 < … < x_n and let f(x) = (r_1 − x)(r_2 − x)…(r_k − x), where k ≤ n − 2 (the restriction on the number of roots is justified by the forthcoming use of the f-assignment). Let t = Σ_{i=1}^{n} p_i ⟦x_i⟧ be the corresponding f-assignment. Then

  t = Σ_{l=0}^{k} α_l [ Σ_{i=1}^{n} ( −(−x_i)^l / Π_{j≠i}(x_j − x_i) ) ⟦x_i⟧ ],

where α_l ≥ 0. More precisely, α_l is the coefficient of (−x)^l in f(x).

In the following sections we present the orthogonal matrices solving the four possible types of monomial assignments, namely balanced/unbalanced and aligned/misaligned (see Definition 4).

### 4 Solution to the zeroth assignment

In this section we present the solution for the simplest monomial assignment, which we call the zeroth assignment, since f(x) = (−x)^0. We start with the orthogonal matrices solving the balanced case, and prove their correctness. Henceforth, we use h.c. to denote the Hermitian conjugate.
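Before stating the solution, a small numerical illustration (ours, with arbitrary coordinates) of the zeroth assignment may help: the weights p_i = −1/Π_{j≠i}(x_j − x_i) make all moments ⟨x^k⟩ = Σ_i p_i x_i^k vanish for k up to two less than the number of points, with the next moment strictly positive. This is the property recorded later as Lemma 17 and used throughout the proofs.

```python
import numpy as np

def zeroth_weights(x):
    """Weights p_i = -1 / prod_{j != i} (x_j - x_i) of the zeroth assignment."""
    x = np.asarray(x, dtype=float)
    return np.array([-1.0 / np.prod(np.delete(x, i) - x[i]) for i in range(len(x))])

x = np.array([1.0, 2.0, 3.5, 5.0])   # 2n = 4 strictly positive coordinates
p = zeroth_weights(x)
for k in range(len(x)):
    print(k, np.sum(p * x**k))       # ~0 for k <= 2n-2, strictly > 0 for k = 2n-1
```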
**Proposition 7 (Solution to balanced zeroth assignments).** Let t = Σ_{i=1}^{n} p_{h_i} ⟦x_{h_i}⟧ − Σ_{i=1}^{n} p_{g_i} ⟦x_{g_i}⟧ be a zeroth assignment over 0 < x_1 < x_2 < … < x_{2n}, let {|h_1⟩, …, |h_n⟩, |g_1⟩, …, |g_n⟩} be an orthonormal basis, let E_h := Σ_{i=1}^{n} |h_i⟩⟨h_i| be a subspace projector, and finally let

  X_h := Σ_{i=1}^{n} x_{h_i} |h_i⟩⟨h_i| ≅ diag(x_{h_1}, …, x_{h_n}, 0, …, 0)  (n zeros),
  X_g := Σ_{i=1}^{n} x_{g_i} |g_i⟩⟨g_i| ≅ diag(0, …, 0, x_{g_1}, …, x_{g_n})  (n zeros),
  |w⟩ := Σ_{i=1}^{n} √p_{h_i} |h_i⟩ ≅ (√p_{h_1}, …, √p_{h_n}, 0, …, 0)^T,
  |v⟩ := Σ_{i=1}^{n} √p_{g_i} |g_i⟩ ≅ (0, …, 0, √p_{g_1}, …, √p_{g_n})^T.

Then

  O := Σ_{i=0}^{n−1} [ Π⊥_{h_{i−1}} (X_h)^i |w⟩⟨v| (X_g)^i Π⊥_{g_{i−1}} / √(c_{h_i} c_{g_i}) + h.c. ]

satisfies X_h ≥ E_h O X_g O^T E_h and O|v⟩ = |w⟩, where Π⊥_{h_{−1}} = Π⊥_{g_{−1}} = I, Π⊥_{h_i} := the projector orthogonal to span{(X_h)^i |w⟩, (X_h)^{i−1} |w⟩, …, |w⟩}, c_{h_i} := ⟨w| (X_h)^i Π⊥_{h_{i−1}} (X_h)^i |w⟩, and the forms of Π⊥_{g_i} and c_{g_i} are defined analogously.

Proof. Let t = Σ_{i=1}^{2n} p_i ⟦x_i⟧ be the zeroth assignment. Lemma 17 from Appendix B gives us the following properties of t:

  ⟨x^k⟩ = 0 for all k ∈ {0, 1, 2, …, 2n − 2},   (3)
  ⟨x^{2n−1}⟩ > 0,   (4)

where ⟨x^k⟩ := Σ_{i=1}^{2n} p_i (x_i)^k. Consider the following basis:

  |w_0⟩ := |w⟩,
  |w_1⟩ := (I − |w_0⟩⟨w_0|) X_h |w⟩ / √c_{h_1},
  ⋮
  |w_k⟩ := (I − Σ_{i=0}^{k−1} |w_i⟩⟨w_i|) (X_h)^k |w⟩ / √c_{h_k}.   (5)

We are interested in keeping track of the highest power l of ⟨x_h^l⟩. To this end, we consider the highest power of X_h that appears in |w_k⟩, i.e. X_h^k, and the highest value l′ such that a ⟨x_h^{l′}⟩ appears in |w_k⟩, i.e. l′ = 2k (as ⟨x_h^{2k}⟩ is present in √c_{h_k}). We capture this dependence by writing M(|w_k⟩) = ⟨x_h^{2k}⟩ · (X_h)^k |w⟩. Note that the projectors can be expressed in terms of these vectors more concisely, as Π_{h_i} := I − Π⊥_{h_i} = Σ_{j=0}^{i} |w_j⟩⟨w_j|. It also follows that O can be re-written as

  O = Σ_{j=0}^{n−1} ( |w_j⟩⟨v_j| + |v_j⟩⟨w_j| ),

where |v_j⟩ is analogously defined (by replacing h with g). It is evident that O|v⟩ = |w⟩. We set D := X_h − E_h O X_g O^T E_h and note that D|v_i⟩ = 0, because X_h |v_i⟩ = 0 and E_h |v_i⟩ = 0 (the conclusion holds even without the projector, as O maps span{|v_0⟩, …, |v_{n−1}⟩} to span{|w_0⟩, …, |w_{n−1}⟩}, on which X_g has no support). We assert that, in the (|w_0⟩, |w_1⟩, …, |w_{n−1}⟩) basis, D has a rank-1 form: all entries vanish except the last diagonal entry ⟨w_{n−1}| D |w_{n−1}⟩, which is positive. To see this, we simply compute

  ⟨w_i| D |w_j⟩ = ⟨w_i| X_h |w_j⟩ − ⟨w_i| O X_g O^T |w_j⟩ = ⟨w_i| X_h |w_j⟩ − ⟨v_i| X_g |v_j⟩.

For any 0 ≤ i, j ≤ n − 1, except for the case where both i = j = n − 1, the two terms are the same. This is because the term with the highest possible power l (of ⟨x^l⟩) in ⟨w_i| X_h |w_j⟩ can be deduced by observing

  M(⟨w_i|) X_h M(|w_j⟩) = ⟨x_h^{2i}⟩ · ⟨x_h^{2j}⟩ · ⟨x_h^{i+j+1}⟩.

For the analogous expression with g to be the same, we must have 2i, 2j and i + j + 1 less than or equal to 2n − 2. The first two are always satisfied (for 0 ≤ i, j ≤ n − 1). The last can only be violated when i = j = n − 1. This establishes that the matrix has the asserted form. To prove the positivity of ⟨w_{n−1}| D |w_{n−1}⟩, consider ⟨w_{n−1}| X_h |w_{n−1}⟩ and ⟨v_{n−1}| X_g |v_{n−1}⟩. When these terms are expanded in powers of ⟨x_h^k⟩ and ⟨x_g^k⟩ respectively, only terms with k > 2n − 2 remain; the others get cancelled due to Equation (3). Using Equation (5), it follows that

  ⟨w_{n−1}| D |w_{n−1}⟩ = (1/c_{h_{n−1}}) ⟨w| (X_h)^{2n−1} |w⟩ − (1/c_{g_{n−1}}) ⟨v| (X_g)^{2n−1} |v⟩,

and it is not hard to see that c_{h_{n−1}} = c_{h_{n−1}}(⟨x_h^{2n−2}⟩, ⟨x_h^{2n−3}⟩, …, ⟨x_h^1⟩) does not depend on ⟨x_h^{2n−1}⟩ (we proceed analogously for c_{g_{n−1}}). Further, c_{h_{n−1}} = c_{g_{n−1}} =: c_{n−1}. We thus have

  ⟨w_{n−1}| D |w_{n−1}⟩ = ⟨x^{2n−1}⟩ / c_{n−1} > 0

using Equation (4). Thus, X_h − E_h O X_g O^T E_h ≥ 0. Note that we assumed that span{|w⟩, X_h |w⟩, X_h^2 |w⟩, …, X_h^{n−1} |w⟩} equals span{|h_1⟩, …, |h_n⟩}, which is justified by Lemma 16, presented in Appendix B.
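As a sanity check (ours, with illustrative values), the construction of Proposition 7 can be verified numerically on a small balanced zeroth assignment. The sketch below builds the Gram–Schmidt basis of Equation (5), assembles O in the rewritten form O = Σ_j (|w_j⟩⟨v_j| + |v_j⟩⟨w_j|) used in the proof, and confirms both conditions.

```python
import numpy as np

# Balanced zeroth assignment over 2n = 4 points: weights p_i = -1/prod_{j!=i}(x_j - x_i).
x = np.array([1.0, 2.0, 3.5, 5.0])
p = np.array([-1.0 / np.prod(np.delete(x, i) - x[i]) for i in range(4)])
h_idx, g_idx = np.where(p > 0)[0], np.where(p < 0)[0]   # positive part h, negative part g

n, d = 2, 4
X_h = np.diag([x[h_idx[0]], x[h_idx[1]], 0, 0])
X_g = np.diag([0, 0, x[g_idx[0]], x[g_idx[1]]])
E_h = np.diag([1.0, 1.0, 0.0, 0.0])
w = np.array([np.sqrt(p[h_idx[0]]), np.sqrt(p[h_idx[1]]), 0, 0])
v = np.array([0, 0, np.sqrt(-p[g_idx[0]]), np.sqrt(-p[g_idx[1]])])

def gs_basis(X, u, m):
    """Orthonormalise u, X u, ..., X^{m-1} u (the basis of Equation (5))."""
    basis = []
    for k in range(m):
        vec = np.linalg.matrix_power(X, k) @ u
        for b in basis:
            vec = vec - (b @ vec) * b
        basis.append(vec / np.linalg.norm(vec))
    return basis

ws, vs = gs_basis(X_h, w, n), gs_basis(X_g, v, n)
O = sum(np.outer(wi, vi) + np.outer(vi, wi) for wi, vi in zip(ws, vs))

assert np.allclose(O @ O.T, np.eye(d))      # O is orthogonal
assert np.allclose(O @ v, w)                # O|v> = |w>  (norms match since <x^0> = 0)
D = X_h - E_h @ O @ X_g @ O.T @ E_h
print(np.linalg.eigvalsh(D))                # eigenvalues >= 0 (one strictly positive)
```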
Before proceeding to the unbalanced zeroth assignments, let us try to better understand the above result and see why it does not work unchanged in the unbalanced case. We could write D_{ij} = ⟨w_i| D |w_j⟩ and note that the maximum power l which appears as ⟨x_{g/h}^l⟩ is given by max{2i, 2j, i + j + 1}. This yields a matrix in which each entry depends on moments of order up to this bound; recording only this leading dependence, the lower-triangular part reads

  M(D) =
   ( ⟨x⟩                                   h.c.
     ⟨x²⟩  ⟨x³⟩
     ⟨x⁴⟩  ⟨x⁴⟩  ⟨x⁵⟩
     ⟨x⁶⟩  ⟨x⁶⟩  ⟨x⁶⟩  ⟨x⁷⟩
     ⟨x⁸⟩  ⟨x⁸⟩  ⟨x⁸⟩  ⟨x⁸⟩  ⟨x⁹⟩
     ⋮                               ⋱ ).

Consider the balanced m_0 case over {x_1, x_2, x_3, x_4}, where we have ⟨x⟩ = ⟨x²⟩ = 0 and ⟨x³⟩ > 0. This is a two-dimensional case, thus

  M(D) = ( 0  0 ; 0  ⟨x³⟩ ) ≥ 0.

If we now try to use the same procedure for an unbalanced zeroth assignment over {x_1, x_2, …, x_5}, we will have ⟨x⟩ = ⟨x²⟩ = ⟨x³⟩ = 0 and ⟨x⁴⟩ > 0. If we try to solve in three dimensions, we would obtain

  M(D) = ( 0  0  ⟨x⁴⟩ ; 0  0  ⟨x⁴⟩ ; ⟨x⁴⟩  ⟨x⁴⟩  ⟨x⁵⟩ ),   (6)

which does not seem to work directly. It turns out that the projector that was present in Equation (2) gets rid of the troublesome part and yields a zero matrix. We see it in this example first and then generalize it. The unbalanced assignment takes three points to two points. We define X_h := diag(x_{h_1}, x_{h_2}, 0, 0, 0) and |w⟩ = (√p_{h_1}, √p_{h_2}, 0, 0, 0), along with |w_0⟩ := |w⟩ and |w_1⟩ := (I − |w_0⟩⟨w_0|) X_h |w_0⟩ (normalised).
We can write E_h = Σ_{i=0}^{1} |w_i⟩⟨w_i| and keep the same orthogonal matrix as before, except that we leave |v_2⟩ unchanged, i.e. O = Σ_{i=0}^{1} (|w_i⟩⟨v_i| + |v_i⟩⟨w_i|) + |v_2⟩⟨v_2|. We can now show that D′ = X_h − E_h O X_g O^T E_h ≥ 0, because every vector |ψ⟩ ∈ span{|v_0⟩, |v_1⟩, |v_2⟩} satisfies D′|ψ⟩ = 0 (as X_h|ψ⟩ = 0 and E_h|ψ⟩ = 0). This means that it suffices to restrict to a 2 × 2 matrix in span{|w_0⟩, |w_1⟩}. But, from Equation (6), we already know that this block is zero, hence D′ = 0.

**Proposition 8 (Solution to unbalanced zeroth assignments).** Let t = Σ_{i=1}^{n−1} p_{h_i} ⟦x_{h_i}⟧ − Σ_{i=1}^{n} p_{g_i} ⟦x_{g_i}⟧ be a zeroth assignment over 0 < x_1 < x_2 < … < x_{2n−1}, let {|h_1⟩, …, |h_{n−1}⟩, |g_1⟩, …, |g_n⟩} be an orthonormal basis, let E_h := Σ_{i=1}^{n−1} |h_i⟩⟨h_i| be a subspace projector, and finally let

  X_h := Σ_{i=1}^{n−1} x_{h_i} |h_i⟩⟨h_i| ≅ diag(x_{h_1}, …, x_{h_{n−1}}, 0, …, 0)  (n zeros),
  X_g := Σ_{i=1}^{n} x_{g_i} |g_i⟩⟨g_i| ≅ diag(0, …, 0, x_{g_1}, …, x_{g_n})  (n − 1 zeros),
  |w⟩ := Σ_{i=1}^{n−1} √p_{h_i} |h_i⟩ ≅ (√p_{h_1}, …, √p_{h_{n−1}}, 0, …, 0)^T  (n zeros),
  |v⟩ := Σ_{i=1}^{n} √p_{g_i} |g_i⟩ ≅ (0, …, 0, √p_{g_1}, …, √p_{g_n})^T  (n − 1 zeros).

Then

  O := Σ_{i=0}^{n−2} [ Π⊥_{h_{i−1}} (X_h)^i |w⟩⟨v| (X_g)^i Π⊥_{g_{i−1}} / √(c_{h_i} c_{g_i}) + h.c. ] + Π⊥_{g_{n−2}} (X_g)^{n−1} |v⟩⟨v| (X_g)^{n−1} Π⊥_{g_{n−2}} / c_{g_{n−1}}

satisfies X_h ≥ E_h O X_g O^T E_h and E_h O |v⟩ = |w⟩, where Π⊥_{h_{−1}} = Π⊥_{g_{−1}} = I, Π⊥_{h_i} := the projector orthogonal to span{(X_h)^i |w⟩, (X_h)^{i−1} |w⟩, …, |w⟩}, c_{h_i} := ⟨w| (X_h)^i Π⊥_{h_{i−1}} (X_h)^i |w⟩, and the forms of Π⊥_{g_i} and c_{g_i} are analogous.

Proof. By using again Lemma 17 from Appendix B, we have

  ⟨x^k⟩ = 0 for k ∈ {0, 1, …, 2n − 3},   (7)

and ⟨x^{2n−2}⟩ > 0. We define the basis almost exactly as before: we set |w_0⟩ := |w⟩ and, for each integer k satisfying 0 ≤ k ≤ n − 2,

  |w_k⟩ := Π⊥_{h_{k−1}} (X_h)^k |w⟩ / √c_{h_k} = (I − Σ_{i=0}^{k−1} |w_i⟩⟨w_i|) (X_h)^k |w⟩ / √c_{h_k}.

We define |v_0⟩ := |v⟩ and, for each integer k satisfying 0 ≤ k ≤ n − 1,

  |v_k⟩ := Π⊥_{g_{k−1}} (X_g)^k |v⟩ / √c_{g_k} = (I − Σ_{i=0}^{k−1} |v_i⟩⟨v_i|) (X_g)^k |v⟩ / √c_{g_k}.

Note that this means O = Σ_{i=0}^{n−2} (|w_i⟩⟨v_i| + |v_i⟩⟨w_i|) + |v_{n−1}⟩⟨v_{n−1}|, and so E_h O |v⟩ = |w⟩ follows directly. Also, to establish D := X_h − E_h O X_g O^T E_h ≥ 0, note that it suffices to show that ⟨w_i| D |w_j⟩ ≥ 0 for integers i, j satisfying 0 ≤ i, j ≤ n − 2. This is because, as we saw in the previous case, D |v_i⟩ = 0, as X_h |v_i⟩ = 0 and E_h |v_i⟩ = 0. As before, we indicate the term with the highest power of X_h appearing in |w_k⟩, for k in {0, 1, …, n − 2}, by M(|w_k⟩) = ⟨x_h^{2k}⟩ · (X_h)^k |w⟩, and analogously the highest power of X_g appearing in |v_k⟩, for k in {0, 1, …, n − 2}, by M(|v_k⟩) = ⟨x_g^{2k}⟩ · (X_g)^k |v⟩. Again, the highest power l of ⟨x^l⟩ that appears in ⟨w_i| D |w_j⟩ is max{2j, 2i, i + j + 1}, which can be deduced by evaluating

  M(⟨w_i|) X_h M(|w_j⟩) = ⟨x_h^{2j}⟩ · ⟨x_h^{2i}⟩ · ⟨x_h^{i+j+1}⟩

and similarly

  M(⟨v_i|) X_g M(|v_j⟩) = ⟨x_g^{2j}⟩ · ⟨x_g^{2i}⟩ · ⟨x_g^{i+j+1}⟩.

The highest possible power is obtained when i = j = n − 2. This yields 2n − 3 and thus, using Equation (7), we conclude that ⟨w_i| D |w_j⟩ is zero for all 0 ≤ i, j ≤ n − 2, establishing in fact that D = 0.
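The following is a short numerical sketch (our own illustration) of Proposition 8 on the three-to-two example discussed above: five points, two final (h) and three initial (g), for which D vanishes identically.

```python
import numpy as np

# Unbalanced zeroth assignment over 2n-1 = 5 points: g gets 3 points, h gets 2.
x = np.array([1.0, 2.0, 3.0, 4.5, 6.0])
p = np.array([-1.0 / np.prod(np.delete(x, i) - x[i]) for i in range(5)])
xh, ph = x[p > 0], p[p > 0]          # 2 final points
xg, pg = x[p < 0], -p[p < 0]         # 3 initial points

X_h = np.diag(np.concatenate([xh, np.zeros(3)]))
X_g = np.diag(np.concatenate([np.zeros(2), xg]))
E_h = np.diag([1.0, 1.0, 0, 0, 0])
w = np.concatenate([np.sqrt(ph), np.zeros(3)])
v = np.concatenate([np.zeros(2), np.sqrt(pg)])

def gs(X, u, m):
    basis = []
    for k in range(m):
        vec = np.linalg.matrix_power(X, k) @ u
        for b in basis:
            vec -= (b @ vec) * b
        basis.append(vec / np.linalg.norm(vec))
    return basis

ws, vs = gs(X_h, w, 2), gs(X_g, v, 3)
O = sum(np.outer(a, b) + np.outer(b, a) for a, b in zip(ws, vs[:2]))
O += np.outer(vs[2], vs[2])          # the extra |v_{n-1}><v_{n-1}| term of Proposition 8

assert np.allclose(O @ O.T, np.eye(5))
assert np.allclose(E_h @ O @ v, w)   # E_h O |v> = |w>
D = X_h - E_h @ O @ X_g @ O.T @ E_h
print(np.abs(D).max())               # ~0: D vanishes identically, as the proof shows
```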
### 5 Solution to the monomial assignments

In this section we present the solutions to the monomial assignments of order higher than zero. There are four different cases, depending on the number of points and the degree of the monomial (balanced/unbalanced and aligned/misaligned, see Definition 4). One could find a single expression for all, but this does not seem to aid clarity, therefore we present and prove the four cases separately. Our approach is essentially the same as before. The main additional technique that we introduce here is the use of the pseudo-inverses X_h^⊣ and X_g^⊣. (For any Hermitian matrix A with spectral decomposition A = Σ_i a_i |i⟩⟨i|, including zero eigenvalues, we denote by A^⊣ its pseudo-inverse A^⊣ := Σ_{i: |a_i|>0} a_i^{−1} |i⟩⟨i|.)
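A minimal illustration (ours) of this pseudo-inverse for a diagonal matrix, as used below: the non-zero eigenvalues are inverted and the zero eigenvalues stay zero.

```python
import numpy as np

def pseudo_inverse_diag(X):
    """A-pseudo-inverse of a diagonal matrix: invert non-zero entries, keep the zeros."""
    a = np.diag(X).astype(float).copy()
    nz = np.abs(a) > 0
    a[nz] = 1.0 / a[nz]
    return np.diag(a)

X_h = np.diag([2.0, 3.0, 0.0, 0.0])
print(pseudo_inverse_diag(X_h))      # diag(1/2, 1/3, 0, 0)
```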
**Proposition 9 (Solution to balanced aligned monomial assignments).** Let m = 2b be an even non-negative integer, let t = Σ_{i=1}^{n} x_{h_i}^m p_{h_i} ⟦x_{h_i}⟧ − Σ_{i=1}^{n} x_{g_i}^m p_{g_i} ⟦x_{g_i}⟧ be a monomial assignment over 0 < x_1 < x_2 < … < x_{2n}, let {|h_1⟩, …, |h_n⟩, |g_1⟩, …, |g_n⟩} be an orthonormal basis, and finally let

  X_h := Σ_{i=1}^{n} x_{h_i} |h_i⟩⟨h_i| ≅ diag(x_{h_1}, …, x_{h_n}, 0, …, 0)  (n zeros),
  X_g := Σ_{i=1}^{n} x_{g_i} |g_i⟩⟨g_i| ≅ diag(0, …, 0, x_{g_1}, …, x_{g_n})  (n zeros),
  |w⟩ := Σ_{i=1}^{n} √p_{h_i} |h_i⟩ ≅ (√p_{h_1}, …, √p_{h_n}, 0, …, 0)^T and |w′⟩ := (X_h)^b |w⟩,
  |v⟩ := Σ_{i=1}^{n} √p_{g_i} |g_i⟩ ≅ (0, …, 0, √p_{g_1}, …, √p_{g_n})^T and |v′⟩ := (X_g)^b |v⟩.

Then

  O := Σ_{i=−b}^{n−b−1} [ Π⊥_{h_i} (X_h)^i |w′⟩⟨v′| (X_g)^i Π⊥_{g_i} / √(c_{h_i} c_{g_i}) + h.c. ]

satisfies X_h ≥ E_h O X_g O^T E_h and E_h O |v′⟩ = |w′⟩, where E_h := Σ_{i=1}^{n} |h_i⟩⟨h_i| and, for brevity, by X_h^{−k} we mean (X_h^⊣)^k for k > 0 (similarly for X_g),

  Π⊥_{h_i} := the projector orthogonal to span{(X_h)^{−|i|+1} |w′⟩, (X_h)^{−|i|+2} |w′⟩, …, |w′⟩} for i < 0,
       the projector orthogonal to span{(X_h)^{−b} |w′⟩, (X_h)^{−b+1} |w′⟩, …, (X_h)^{i−1} |w′⟩} for i > 0,
       I for i = 0,

c_{h_i} := ⟨w′| (X_h)^i Π⊥_{h_i} (X_h)^i |w′⟩, and the forms of Π⊥_{g_i} and c_{g_i} are analogous.

**Proposition 10 (Solution to balanced misaligned monomial assignments).** Let m = 2b − 1 be an odd non-negative integer, t = Σ_{i=1}^{n} x_{h_i}^m p_{h_i} ⟦x_{h_i}⟧ − Σ_{i=1}^{n} x_{g_i}^m p_{g_i} ⟦x_{g_i}⟧ a monomial assignment over 0 < x_1 < x_2 < … < x_{2n}, {|h_1⟩, …, |h_n⟩, |g_1⟩, …, |g_n⟩} an orthonormal basis, and finally let X_h and X_g be as in Proposition 9,

  |w⟩ := (√p_{h_1}, …, √p_{h_n}, 0, …, 0)  and |w′⟩ := (X_h)^{b−1/2} |w⟩,
  |v⟩ := (0, …, 0, √p_{g_1}, …, √p_{g_n})  and |v′⟩ := (X_g)^{b−1/2} |v⟩.

Then

  O := Σ_{i=−b+1}^{n−b−1} [ Π⊥_{h_i} (X_h)^i |w′⟩⟨v′| (X_g)^i Π⊥_{g_i} / √(c_{h_i} c_{g_i}) + h.c. ]
    + Π⊥_{g_{n−b}} (X_g)^{n−b} |v′⟩⟨v′| (X_g)^{n−b} Π⊥_{g_{n−b}} / c_{g_{n−b}}
    + Π⊥_{h_{n−b}} (X_h)^{n−b} |w′⟩⟨w′| (X_h)^{n−b} Π⊥_{h_{n−b}} / c_{h_{n−b}}

satisfies X_h ≥ E_h O X_g O^T E_h and E_h O |v′⟩ = |w′⟩, where E_h := Σ_{i=1}^{n} |h_i⟩⟨h_i| and, for brevity, by X_h^{−k} we mean (X_h^⊣)^k for k > 0 (similarly for X_g),

  Π⊥_{h_i} := the projector orthogonal to span{(X_h^⊣)^{|i|−1} |w′⟩, (X_h^⊣)^{|i|−2} |w′⟩, …, |w′⟩} for i < 0,
       the projector orthogonal to span{(X_h^⊣)^{b−1} |w′⟩, (X_h^⊣)^{b−2} |w′⟩, …, |w′⟩, X_h |w′⟩, …, (X_h)^{i−1} |w′⟩} for i > 0,
       I for i = 0,

c_{h_i} := ⟨w′| (X_h)^i Π⊥_{h_i} (X_h)^i |w′⟩, and the forms of Π⊥_{g_i} and c_{g_i} are analogous.
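As with the zeroth case, Proposition 9 can be checked numerically on a small instance. The sketch below (our illustration) treats 2n = 6 points with m = 2b = 2, where the index i runs over {−1, 0, 1} and the Gram–Schmidt ordering implied by the projectors is i = 0 first, then i = −1, then i = 1; the coordinate values are arbitrary.

```python
import numpy as np

# Balanced aligned monomial assignment: 2n = 6 points, m = 2b = 2.
x = np.array([1.0, 2.0, 3.0, 4.5, 6.0, 7.5])
p0 = np.array([-1.0 / np.prod(np.delete(x, i) - x[i]) for i in range(6)])  # zeroth weights
b = 1
xh, ph = x[p0 > 0], p0[p0 > 0]       # h = positive part (3 points)
xg, pg = x[p0 < 0], -p0[p0 < 0]      # g = negative part (3 points)

X_h = np.diag(np.concatenate([xh, np.zeros(3)]))
X_g = np.diag(np.concatenate([np.zeros(3), xg]))
E_h = np.diag(np.concatenate([np.ones(3), np.zeros(3)]))
w_p = np.linalg.matrix_power(X_h, b) @ np.concatenate([np.sqrt(ph), np.zeros(3)])  # |w'>
v_p = np.linalg.matrix_power(X_g, b) @ np.concatenate([np.zeros(3), np.sqrt(pg)])  # |v'>

def pinv_diag(X):
    a = np.diag(X).astype(float).copy(); nz = np.abs(a) > 0; a[nz] = 1.0 / a[nz]
    return np.diag(a)

def gs_chain(X, u, powers):
    """Orthonormalise (X)^i u in the given order (negative i via the pseudo-inverse)."""
    basis = {}
    for i in powers:
        vec = np.linalg.matrix_power(X if i >= 0 else pinv_diag(X), abs(i)) @ u
        for bv in basis.values():
            vec -= (bv @ vec) * bv
        basis[i] = vec / np.linalg.norm(vec)
    return basis

order = [0, -1, 1]            # i runs over -b, ..., n-b-1 = {-1, 0, 1}; i = 0 comes first
ws, vs = gs_chain(X_h, w_p, order), gs_chain(X_g, v_p, order)
O = sum(np.outer(ws[i], vs[i]) + np.outer(vs[i], ws[i]) for i in order)

assert np.allclose(O @ O.T, np.eye(6))                       # O is orthogonal
assert np.allclose(E_h @ O @ v_p, w_p)                       # E_h O |v'> = |w'>
print(np.linalg.eigvalsh(X_h - E_h @ O @ X_g @ O.T @ E_h))   # all eigenvalues >= 0
```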
For the proofs and concrete examples of balanced aligned and misaligned monomial assignments, see Appendix C. We similarly proceed to the unbalanced monomial assignments, aligned and misaligned. Below, we state the solution for both cases, while in Appendix D we prove their correctness and give concrete examples illustrating their construction.

**Proposition 11 (Solution to the unbalanced aligned monomial assignments).** Let m = 2b be an even non-negative integer, t = Σ_{i=1}^{n−1} x_{h_i}^m p_{h_i} ⟦x_{h_i}⟧ − Σ_{i=1}^{n} x_{g_i}^m p_{g_i} ⟦x_{g_i}⟧ a monomial assignment over 0 < x_1 < x_2 < … < x_{2n−1}, {|h_1⟩, …, |h_{n−1}⟩, |g_1⟩, …, |g_n⟩} an orthonormal basis, and finally let

  X_h := Σ_{i=1}^{n−1} x_{h_i} |h_i⟩⟨h_i| ≅ diag(x_{h_1}, …, x_{h_{n−1}}, 0, …, 0)  (n zeros),
  X_g := Σ_{i=1}^{n} x_{g_i} |g_i⟩⟨g_i| ≅ diag(0, …, 0, x_{g_1}, …, x_{g_n})  (n − 1 zeros),
  |w⟩ := (√p_{h_1}, …, √p_{h_{n−1}}, 0, …, 0) and |w′⟩ := (X_h)^b |w⟩,
  |v⟩ := (0, …, 0, √p_{g_1}, …, √p_{g_n}) and |v′⟩ := (X_g)^b |v⟩.

Then

  O := Σ_{i=−b}^{n−b−2} [ Π⊥_{h_i} (X_h)^i |w′⟩⟨v′| (X_g)^i Π⊥_{g_i} / √(c_{h_i} c_{g_i}) + h.c. ]
    + Π⊥_{g_{n−b−1}} (X_g)^{n−b−1} |v′⟩⟨v′| (X_g)^{n−b−1} Π⊥_{g_{n−b−1}} / c_{g_{n−b−1}}

satisfies X_h ≥ E_h O X_g O^T E_h and E_h O |v′⟩ = |w′⟩, where, for brevity, by X_h^{−k} we mean (X_h^⊣)^k for k > 0 (similarly for X_g), and c_{h_i}, c_{g_i}, Π⊥_{h_i}, Π⊥_{g_i} are as defined in Proposition 9.

**Proposition 12 (Solution to the unbalanced misaligned monomial assignments).** Let m = 2b − 1 be an odd non-negative integer, t = Σ_{i=1}^{n} x_{h_i}^m p_{h_i} ⟦x_{h_i}⟧ − Σ_{i=1}^{n−1} x_{g_i}^m p_{g_i} ⟦x_{g_i}⟧ a monomial assignment over 0 < x_1 < x_2 < … < x_{2n−1}, {|h_1⟩, …, |h_n⟩, |g_1⟩, …, |g_{n−1}⟩} an orthonormal basis, and finally let

  X_h := Σ_{i=1}^{n} x_{h_i} |h_i⟩⟨h_i| ≅ diag(x_{h_1}, …, x_{h_n}, 0, …, 0)  (n − 1 zeros),
  X_g := Σ_{i=1}^{n−1} x_{g_i} |g_i⟩⟨g_i| ≅ diag(0, …, 0, x_{g_1}, …, x_{g_{n−1}})  (n zeros),
  |w⟩ := (√p_{h_1}, …, √p_{h_n}, 0, …, 0) and |w′⟩ := (X_h)^{b−1/2} |w⟩,
  |v⟩ := (0, …, 0, √p_{g_1}, …, √p_{g_{n−1}}) and |v′⟩ := (X_g)^{b−1/2} |v⟩.

Then

  O := Σ_{i=−b+1}^{n−b−1} [ Π⊥_{h_i} (X_h)^i |w′⟩⟨v′| (X_g)^i Π⊥_{g_i} / √(c_{h_i} c_{g_i}) + h.c. ]
    + Π⊥_{h_{n−b}} (X_h)^{n−b} |w′⟩⟨w′| (X_h)^{n−b} Π⊥_{h_{n−b}} / c_{h_{n−b}}

satisfies X_h ≥ E_h O X_g O^T E_h and E_h O |v′⟩ = |w′⟩, where, for brevity, by X_h^{−k} we mean (X_h^⊣)^k for k > 0 (similarly for X_g), and c_{h_i}, c_{g_i}, Π⊥_{h_i}, Π⊥_{g_i} are as defined in Proposition 10.

Combining all the above, we can now state our main result:

**Theorem 13.** Let t be an f-assignment (see Definition 4) with f having real positive roots. Then, in order to obtain its effective solution (see Definition 5), it suffices to write it as t = Σ_i α_i t′_i (see Lemma 6), where the α_i are positive and the t′_i are monomial assignments. Furthermore, each monomial assignment t′_i admits an exact solution, given in Proposition 9, Proposition 10, Proposition 11, or Proposition 12.

Proof. We established that, in order to determine the effective solution to an f-assignment t, it is sufficient to express it as a sum of monomial assignments t′_i and find the solution for each one of them (see Appendix A). A monomial assignment can be balanced/unbalanced and aligned/misaligned (see Definition 4). The solution in each case is given by either Proposition 9, Proposition 10, Proposition 11, or Proposition 12.

In Appendix E, as an example, we describe how Theorem 13 can be applied to derive a WCF protocol with bias approaching 1/14.

### 6 Conclusions and future work

We presented the analytical construction of explicit WCF protocols achieving arbitrarily close to zero bias, by means of Mochon's family of TDPGs [17], described by the respective f-assignments. Using the TEF from [5], these TDPGs can be converted into WCF protocols with the corresponding bias. In order to obtain the solution for an f-assignment, we argued that it suffices to write it as a sum of monomial assignments and find the solution for each term of the sum separately. For all four different types of monomial assignments, we constructed the corresponding solutions and proved that they indeed satisfy the required conditions, as stated in Equation (2) and the analysis following it. Importantly, our approach does not use the reduction of EBM functions to valid functions and it admits, thus, a simple and clear description. We also presented an example illustrating the construction of a WCF protocol with bias 1/14.

There exist several related problems that deserve further study. First, one could try to find analytic solutions corresponding to f-assignments in fewer dimensions (assuming that they exist). This way, the only shortcoming of our approach concerning resource requirements could be improved: while expressing the f-assignment as a sum of monomial assignments we are increasing the dimensions, which in turn corresponds to an increase in the number of qubits required. One could also try to find analytic solutions for the Pelchat–Høyer point games [11], which is another family of point games giving rise to WCF protocols with arbitrarily close to zero bias. Moreover, given the recently improved bound on the number of rounds of communication needed to achieve a certain bias ϵ [15], one can investigate whether there exist protocols matching these bounds. Finally, while one expects the bias to increase in the presence of noise, a thorough study of such effects is needed in order to determine the robustness of WCF protocols against noise.

### Acknowledgements

We are thankful to Tom Van Himbeeck, Kishor Bharti, Stefano Pironio and Ognyan Oreshkov for various insightful discussions.
We acknowledge support from the Belgian Fonds de la Recherche Scientifique – FNRS under grant no R.50.05.18.F (QuantAlgo). The QuantAlgo project has received funding from the QuantERA ERA-NET Cofund in Quantum Technologies implemented within the European Union's Horizon 2020 Programme. ASA further acknowledges the FNRS for support through the FRIA grants, 3/5/5 – MCF/XH/FC – 16754 and F 3/5/5 – FRIA/FC – 6700 FC 20759.

### A Decomposing TEF functions into sums of TEF functions

In this first part of the appendix we present how one can construct a WCF protocol with bias ϵ by decomposing the TEF functions (i.e., the functions that satisfy Equation (2) for some unitary matrix O; as already mentioned, restricting to real matrices is enough (see [5]), therefore we assume that the matrices O are orthogonal without loss of generality) of a so-called time-independent point game (TIPG; TIPGs are presented and studied in numerous previous works [17, 1, 5]) with the same bias ϵ into a sum of TEF functions. This way, we establish our claim that, to convert Mochon's TIPGs (achieving vanishing bias), which rely non-trivially only on transitions defined by f-assignments, it is sufficient to find an effective solution thereof. In particular, it is sufficient to express an f-assignment as a sum of monomial assignments and find the solution to each one of them. In Lemma 14, we show that the set of TEF functions is the same as the set of valid functions, which in turn is the same as the closure of the set of EBM functions (and the same holds for the closure of the set of EBRM functions, see [4, 5]). Henceforth, for simplicity, we only use the term valid functions. Our demonstration requires techniques and results from previous works [17, 1, 4, 5], which we do not present here in detail; we only refer to them and outline how they are used in our analysis.

We recall from [17, 2] the basic idea behind the conversion of a TIPG into a TDPG (see, e.g., the proof of Theorem 5 in [2]). The primary hindrance is that, for applying a valid function in a TDPG, the places where the function is negative must already have points with at least as much weight. This corresponds to finding a time-dependent ordering of the valid functions which define a TIPG; however, in general, TIPGs do not admit such simple orderings. This difficulty is surpassed by introducing the so-called catalyst state, which is a set of points with vanishing weights. They are a scaled-down compensation for the negative weights which arise. In their presence, an accordingly scaled-down version of the valid functions can be applied, repeatedly, until their cumulative effect is essentially the same as that of having applied the valid functions unaltered. The catalyst state, after this procedure, is effectively unchanged. The weight of the catalyst state costs us an increase in the bias. However, the weight can be made arbitrarily small, at the expense of extra rounds of communication.

Our case is not very different. Suppose that the valid functions used in the TIPG are decomposed into a sum of valid functions. Let us call these valid functions (present in the decomposition) constituent functions. Then, we can convert the TIPG into a TDPG which only uses the constituent functions by essentially using the same technique. This is because the difficulty in constructing TDPGs using the constituent functions is of the same nature. In particular, it is possible that the constituent functions are negative at various locations, but there are no points present there. We can again use a catalyst state, scale the constituent functions accordingly, and proceed thereafter as in the original proof [1], to obtain the corresponding TDPG.
The TEF from [4, 5] is then applied for this TDPG, resulting in a WCF protocol approaching the same bias as the TIPG that we started with, in the limit of infinite rounds of communication.

**Lemma 14 (TEF = closure of EBM = valid).** The set of the TEF functions (as defined above), the set of valid functions (for the definition, see e.g. [17, 1]) and the closure of the set of the EBM functions (for the definition, see Section 2) are the same.

Proof outline. We start by observing that the set of EBM functions is not a closed set. From Definition 1 we can see that the matrix H may have eigenvectors which have no support on |ψ⟩. Consequently, one can consider a sequence of EBM functions t_i such that lim_{i→∞} t_i = t is well-defined, while the associated matrix lim_{i→∞} H_i has a diverging eigenvalue. Such a case arises, for instance, when we have a merge move in the point game. For concreteness, let x_{g_1}, x_{g_2} be the coordinates of two points that are going to be merged into a single point with coordinate x_h = p_{g_1} x_{g_1} + p_{g_2} x_{g_2}, and let p_{g_1}, p_{g_2} be their respective probability weights, with p_{g_1} + p_{g_2} = 1. Furthermore, let t_i = ⟦x_h + 1/i⟧ − p_{g_1} ⟦x_{g_1}⟧ − p_{g_2} ⟦x_{g_2}⟧. One can verify that for all finite values of i, t_i is EBM, but its limit t = ⟦x_h⟧ − p_{g_1} ⟦x_{g_1}⟧ − p_{g_2} ⟦x_{g_2}⟧ is not EBM (we omit the details for the sake of brevity), thus concluding that the set of EBM functions is not closed.

To show that the closure of this set is the same as the set of the TEF functions, we need to establish that the limit of any such sequence belongs to the set of TEF functions. This requires a combination of certain results from Section 5 of [4]. In particular, the relationship between the so-called canonical orthogonal form and the canonical projective form permits one to trade the divergence of such a matrix H for appropriate projectors. This is exactly the origin of the projectors E_h that appear in our analysis. The matrices H ≥ G and the vector |ψ⟩ corresponding to an EBM transition can be expressed in the canonical orthogonal form (X_h and X_g are diagonal matrices containing the eigenvalues of H and G, respectively; we suppress further details), X_h ≥ O X_g O^T. Essentially, the same orthogonal matrix O also satisfies the TEF inequality, Equation (2) (the TEF inequality is closely related to the canonical projective form). The TEF inequality may, in fact, be seen as the limit where H's eigenvalues diverge to infinity. Thus, the limit t of the sequence t_i indeed belongs to the set of TEF functions, and this argument readily extends to all relevant sequences.

Finally, in Section 3 of [1] the authors prove that the set of valid functions is the same as the closure of the set of EBM functions. In particular, they start by observing that the set of EBM functions is a convex cone K, and its dual cone K* is the set of operator monotone functions. The bi-dual K** is the set of valid functions, and the fact that K** = cl(K) completes the proof. Since we just showed that the closure of the set of EBM functions is the same as the set of TEF functions, we can also conclude that the set of valid functions is the same as the set of TEF functions.

### B Useful lemmas

**Lemma 15.** Consider a set of real coordinates 0 ≤ x_1 < x_2 < … < x_n and let f(x) = (a_1 − x)(a_2 − x)…(a_k − x), where k ≤ n − 2 and the roots {a_i}_{i=1}^{k} of f are non-negative.
Let t = Σ_{i=1}^{n} p_i ⟦x_i⟧ be the corresponding f-assignment. Consider a set of real coordinates 0 < x_1 + c < x_2 + c < … < x_n + c, where c > 0, and let f′(x) = (a_1 + c − x)(a_2 + c − x)…(a_k + c − x). Let t′ = Σ_{i=1}^{n} p′_i ⟦x′_i⟧ be the corresponding f′-assignment with x′_i := x_i + c. The solutions to t and to t′ are the same.

Proof. Note that p′_i = p_i, as the c's cancel. We write t = Σ_{i=1}^{n_h} p_{h_i} ⟦x_{h_i}⟧ − Σ_{i=1}^{n_g} p_{g_i} ⟦x_{g_i}⟧ and define X_h := Σ_{i=1}^{n_h} x_{h_i} |h_i⟩⟨h_i| and X_g := Σ_{i=1}^{n_g} x_{g_i} |g_i⟩⟨g_i|. If t is solved by O, then we must have X_h ≥ E_h O X_g O^T E_h. We show that X_h + cI_h ≥ E_h O (X_g + cI_g) O^T E_h, where I_h := Σ_{i=1}^{n_h} |h_i⟩⟨h_i| and I_g := Σ_{i=1}^{n_g} |g_i⟩⟨g_i|. Together with the observation that p′_i = p_i, this establishes that O also solves t′. Since c is an arbitrary real number, it follows that O solves t if and only if it solves t′.

We now establish X_h ≥ E_h O X_g O^T E_h ⟺ X_h + cI_h ≥ E_h O (X_g + cI_g) O^T E_h. Observe that

  X_h ≥ E_h O X_g O^T E_h
  ⟺ E_h (X_h − O X_g O^T) E_h ≥ 0  (since X_h = E_h X_h E_h)
  ⟺ E_h (X_h + cI_{hg} − O (X_g + cI_{hg}) O^T) E_h ≥ 0
  ⟺ X_h + cI_h ≥ E_h O (X_g + cI_{hg}) O^T E_h,

where I_{hg} := I. Further, X_g + cI_{hg} ≥ X_g + cI_g implies E_h O (X_g + cI_{hg}) O^T E_h ≥ E_h O (X_g + cI_g) O^T E_h, which together yield

  X_h ≥ E_h O X_g O^T E_h ⟺ X_h + cI_h ≥ E_h O (X_g + cI_g) O^T E_h.

**Lemma 16.** Consider an n-dimensional vector space. Given a diagonal matrix X = diag(x_1, x_2, …, x_n) and a vector |c⟩ = (c_1, c_2, …, c_n), where all the x_i are distinct and all the c_i are non-zero, the vectors |c⟩, X|c⟩, …, X^{n−1}|c⟩ span the vector space.

Proof. We write the vectors as |w̃_i⟩ = X^{i−1}|c⟩ = (x_1^{i−1} c_1, x_2^{i−1} c_2, …, x_n^{i−1} c_n)^T. We show that this set of vectors is linearly independent, which is equivalent to showing that the determinant of the matrix containing the vectors as rows (or, equivalently, as columns) is non-zero, i.e. that

  det( [ |w̃_1⟩, |w̃_2⟩, …, |w̃_n⟩ ] ) = c_1 · c_2 · … · c_n · det(X̃)

is non-zero, where X̃ is the matrix with entries X̃_{ij} = x_j^{i−1}. To see this, we note that X̃ is the so-called Vandermonde matrix (restricted to being a square matrix) and its determinant, known as the Vandermonde determinant, is det(X̃) = Π_{1≤i<j≤n} (x_j − x_i) ≠ 0, as the x_i are distinct. As the c_i are all non-zero, this concludes the proof.
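A quick numerical illustration (ours) of Lemma 16: the Krylov-type matrix built from |c⟩, X|c⟩, …, X^{n−1}|c⟩ has full rank precisely because its determinant factors into the product of the c_i and a Vandermonde determinant.

```python
import numpy as np

x = np.array([0.5, 1.0, 2.0, 3.0])       # distinct eigenvalues
c = np.array([0.3, -0.7, 1.1, 0.9])      # all entries non-zero
X = np.diag(x)

K = np.column_stack([np.linalg.matrix_power(X, k) @ c for k in range(4)])
vandermonde = np.prod([x[j] - x[i] for i in range(4) for j in range(i + 1, 4)])
print(np.linalg.matrix_rank(K))                    # 4: the vectors span the space
print(np.linalg.det(K), np.prod(c) * vandermonde)  # det K = (prod c_i) * Vandermonde
```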
**Lemma 17.** Let t = Σ_{i=1}^{n} p_i ⟦x_i⟧ be the zeroth assignment for a set of real numbers 0 ≤ x_1 < x_2 < … < x_n. Then, for 0 ≤ k ≤ n − 2,

  ⟨x^k⟩ = 0 and ⟨x^{n−1}⟩ > 0,

where ⟨x^k⟩ = Σ_{i=1}^{n} p_i (x_i)^k.

Proof. For the proof, see Section 4 and Appendix B of [4]; most of the work had already been done by Mochon [17].

### C Proofs and examples for balanced monomial assignments

#### Proof of Proposition 9

Proof. The orthonormal basis (over span{|h_1⟩, …, |h_n⟩}) of interest here is

  |w′_i⟩ := Π⊥_{h_i} (X_h)^i |w′⟩ / √c_{h_i},   (8)

which entails

  Π⊥_{h_i} = I_h for i = 0;  I_h − Σ_{j=i+1}^{0} |w′_j⟩⟨w′_j| for i < 0;  I_h − Σ_{j=−b}^{i−1} |w′_j⟩⟨w′_j| for i > 0,   (9)

where I_h := E_h. We define |v′_i⟩ and Π⊥_{g_i} analogously. Our strategy is to keep track of both the highest and the lowest power l, in ⟨w′| X_h^l |w′⟩ and ⟨v′| X_g^l |v′⟩, which appear in the matrix elements ⟨w′_i| D |w′_j⟩. We use ⟨x_h^l⟩′ := ⟨w′| X_h^l |w′⟩ = ⟨w| X_h^{l+2b} |w⟩ and similarly ⟨x_g^l⟩′ := ⟨v′| X_g^l |v′⟩ = ⟨v| X_g^{l+2b} |v⟩. To this end, we denote the minimum and maximum powers l by

  M(|w′_i⟩) = (⟨x_h^0⟩′ |w′⟩, ⟨x_h^0⟩′ |w′⟩) for i = 0;
  (⟨x_h^{−2|i|}⟩′ (X_h)^{−|i|} |w′⟩, ⟨x_h^0⟩′ |w′⟩) for i < 0;
  (⟨x_h^{−2b}⟩′ (X_h)^{−b} |w′⟩, ⟨x_h^{2i}⟩′ (X_h)^i |w′⟩) for i > 0.

We define D := X_h − E_h O X_g O^T E_h and consider the matrix elements ⟨w′_i| D |w′_j⟩. It suffices to restrict to the span of the {|w′_i⟩} basis, because X_h |v′_i⟩ = 0 and E_h |v′_i⟩ = 0. The lowest power l appearing in D occurs for i = j = −b (as −b ≤ i, j ≤ n − b − 1) and can be evaluated to be −2b, by observing that (multiplying component-wise)

  M(⟨w′_{−b}|) X_h M(|w′_{−b}⟩) = (⟨x_h^{−2b}⟩′ ⟨x_h^{−2b}⟩′ ⟨x_h^{−2b+1}⟩′, ⟨x_h^0⟩′ ⟨x_h^0⟩′ ⟨x_h⟩′).

To find the highest power l in the matrix D, note that for i, j > 0 we have

  M(⟨w′_i|) X_h M(|w′_j⟩) = (⟨x_h^{−2b}⟩′ ⟨x_h^{−2b}⟩′ ⟨x_h^{−2b+1}⟩′, ⟨x_h^{2i}⟩′ ⟨x_h^{2j}⟩′ ⟨x_h^{i+j+1}⟩′),

therefore l = max{2i, 2j, i + j + 1}. As argued for the zeroth assignment, l = 2n − 2b − 1 for i = j = n − b − 1, and otherwise l is strictly less than 2n − 2b − 1. Thus, only the D_{n−b−1, n−b−1} term in D depends on ⟨x_h^{2n−2b−1}⟩′. Except for this term, all other terms depend, at most, on ⟨x_h^{−2b}⟩′, ⟨x_h^{−2b+1}⟩′, …, ⟨x_h^{2n−2b−2}⟩′, i.e. ⟨x_h^0⟩, ⟨x_h^1⟩, …, ⟨x_h^{2n−2}⟩. The analogous argument for ⟨v′_i| X_g |v′_j⟩, the observation that ⟨w′_i| D |w′_j⟩ = ⟨w′_i| X_h |w′_j⟩ − ⟨v′_i| X_g |v′_j⟩, and the fact that ⟨x^0⟩ = ⟨x^1⟩ = … = ⟨x^{2n−2}⟩ = 0 entail that these terms vanish. It remains to establish that D_{n−b−1, n−b−1} ≥ 0. This is easily seen by noting that, in ⟨w′_{n−b−1}| D |w′_{n−b−1}⟩, the only term which does not get cancelled due to the aforesaid reasoning must come from the part of |w′_{n−b−1}⟩ containing X_h^{n−b−1} |w′⟩. It suffices to show that the coefficient of this term is positive, as we know that ⟨x^{2n−2b−1}⟩′ = ⟨x^{2n−1}⟩ > 0. Further, from Equation (9) and Equation (8), we know that the coefficient is 1/c_{h_{n−b−1}}. This establishes D ≥ 0.

#### Example of balanced aligned and misaligned monomial assignments

Let us consider a concrete example of a balanced aligned monomial assignment with 2n = 8 and m = 2b = 2 (see Figure 1a).
We represent the range of dependence of ⟨w′_0| X_h |w′_0⟩ on the ⟨x_h^l⟩ diagrammatically, by enclosing in a left bracket the terms ⟨x³⟩ = ⟨x⟩′ and ⟨x²⟩ = ⟨x⁰⟩′ (replacing |w⟩ with |w′_0⟩) and writing |w′_0⟩ next to it. Similarly, for |w′_{−1}⟩, |w′_1⟩ and |w′_2⟩ we enclose in a left bracket the terms

  (⟨x⁰⟩, ⟨x¹⟩, ⟨x²⟩, ⟨x³⟩) = (⟨x^{−2}⟩′, ⟨x^{−1}⟩′, ⟨x⁰⟩′, ⟨x⟩′),
  (⟨x⁰⟩, ⟨x¹⟩, …, ⟨x⁵⟩) = (⟨x^{−2}⟩′, ⟨x^{−1}⟩′, …, ⟨x³⟩′), and
  (⟨x⁰⟩, ⟨x¹⟩, …, ⟨x⁷⟩) = (⟨x^{−2}⟩′, ⟨x^{−1}⟩′, …, ⟨x⁵⟩′),

respectively. Note that the highest power l of ⟨x_h^l⟩ that appears in ⟨w′_i| X_h |w′_j⟩ is l = 7 only when i = j = 2. Thus, the matrix D restricted to the subspace spanned by the {|w′_i⟩} basis (again, we can safely ignore the subspace span{|v′_i⟩}, because D|v′_i⟩ = 0) has only one non-zero entry, which is positive, as ⟨x⁷⟩ > 0.

We now explain why a direct extension of the analysis to the balanced misaligned monomial assignment fails, and subsequently see how to remedy the situation. Consider the case with 2n = 8 and m = 2b − 1 = 3 (see Figure 1b). From hindsight, we write both the |v′_i⟩ and the |w′_i⟩. We start with |w′_0⟩ = X_h^{3/2} |w⟩ and |v′_0⟩ = X_g^{3/2} |v⟩ and, as before, enclose the terms ⟨x⁰⟩′ = ⟨x³⟩ and ⟨x¹⟩′ = ⟨x⁴⟩ in a left bracket. We continue by multiplying |w′_0⟩ with X_h^{−1} (and |v′_0⟩ with X_g^{−1}, respectively) and projecting out the components along the previous vectors. We represent these by |w′_{−1}⟩ and |v′_{−1}⟩ and, in the figure, enclose the terms ⟨x⟩ = ⟨x^{−2}⟩′, ⟨x²⟩ = ⟨x^{−1}⟩′, …, ⟨x⁴⟩ = ⟨x⟩′ in the left and right brackets. We do not continue further, because in this case a dependence on ⟨x^{−1}⟩ arises and persists for the subsequent vectors. In general, we stop after taking b − 1 (which equals 1 here) steps downwards. We can move upwards by multiplying |w′_0⟩ with X_h (and |v′_0⟩ with X_g, respectively) and projecting out the components along the previous vectors. We represent these by |w′_1⟩ and |v′_1⟩ and, in the figure, enclose the terms ⟨x⟩ = ⟨x^{−2}⟩′, ⟨x²⟩ = ⟨x^{−1}⟩′, …, ⟨x⁶⟩ = ⟨x³⟩′ in the brackets. Finally, we construct |w′_2⟩ and |v′_2⟩ by taking a step up using X_h and X_g, respectively (these are essentially fixed to be the vectors orthogonal to the previous ones, once we restrict to span{|h_1⟩, …, |h_n⟩} and span{|g_1⟩, …, |g_n⟩}). Taking a step down using X_h^{−1} and X_g^{−1} we could have constructed |w′_{−2}⟩ and |v′_{−2}⟩ respectively, but they are the same as |w′_2⟩ and |v′_2⟩. If we were to use O = Σ_{i=−1}^{2} (|w′_i⟩⟨v′_i| + h.c.), we would have obtained a dependence on ⟨x⁷⟩ in the last row (corresponding to |w′_2⟩) and a dependence on ⟨x⁸⟩ for the last term (i.e. ⟨w′_2| D |w′_2⟩). This already hints that the matrix is negative, because it has the form ((0, b), (b, c)) with b ≠ 0, which means that the determinant is −b², entailing that there is a negative eigenvalue; thus this choice cannot work. We therefore define O := Σ_{i=−1}^{1} (|w′_i⟩⟨v′_i| + h.c.) + |w′_2⟩⟨w′_2| + |v′_2⟩⟨v′_2|. Furthermore, instead of using

  X_h ≥ E_h O X_g O^T E_h   (10)

for establishing positivity, we equivalently use

  E_h ≥ (X_h^⊣)^{1/2} O X_g O^T (X_h^⊣)^{1/2}.   (11)
The reason is that, to establish positivity, we must include |w′_2⟩ in the basis (we can neglect the null vectors of E_h), and even though the RHS of Equation (10) would not contribute, the LHS would get non-trivial contributions along the rows (as was the case earlier). Using the form with the inverses lets us remove this dependence. To see this, note that span{|w′_{−1}⟩, |w′_0⟩, …, |w′_2⟩} equals the h-space, i.e. span{|h_1⟩, …, |h_n⟩}. Further, span{X_h^{1/2} |w′_i⟩}_{i=−1}^{2} also equals the h-space (but the vectors are not, in general, orthonormal any more). Finally, observe that X_h^{1/2} |w′_2⟩ is a null vector of the RHS of Equation (11). Therefore, to prove the positivity, it suffices to restrict to span{X_h^{1/2} |w′_i⟩}_{i=−1}^{1}. An arbitrary normalised vector in this space can be written as

  |ψ⟩ = (Σ_{i=−1}^{1} α_i X_h^{1/2} |w′_i⟩) / √(Σ_{i,j=−1}^{1} α_i α_j ⟨w′_i| X_h |w′_j⟩)
  ⟹ X_g^{1/2} O^T (X_h^⊣)^{1/2} |ψ⟩ = (Σ_{i=−1}^{1} α_i X_g^{1/2} |v′_i⟩) / √(Σ_{i,j=−1}^{1} α_i α_j ⟨w′_i| X_h |w′_j⟩)
  ⟹ ⟨ψ| (X_h^⊣)^{1/2} O X_g O^T (X_h^⊣)^{1/2} |ψ⟩ = Σ_{i,j=−1}^{1} α_i α_j ⟨v′_i| X_g |v′_j⟩ / Σ_{i,j=−1}^{1} α_i α_j ⟨w′_i| X_h |w′_j⟩ = 1,

where we got the last equality by noting that the ⟨v′_i| X_g |v′_j⟩ depend on (at most) ⟨x_g⟩, ⟨x_g²⟩, …, ⟨x_g⁶⟩, and analogously the ⟨w′_i| X_h |w′_j⟩ depend on (at most) ⟨x_h⟩, ⟨x_h²⟩, …, ⟨x_h⁶⟩, concluding that they are the same, as ⟨x^i⟩ = 0 for i ∈ {0, 1, …, 6}. Since we proved that the quadratic form of the RHS of Equation (11) equals one for all such normalised |ψ⟩, we infer that we have the correct orthogonal matrix.

Figure 1: Depicting balanced monomial assignments with simple examples. (a) 2n = 8, m = 2b = 2: a balanced aligned monomial assignment. (b) 2n = 8, m = 2b − 1 = 3: a balanced misaligned monomial assignment.

#### Proof of Proposition 10

Proof. The proof is very similar to that of Proposition 9. The orthonormal basis (over span{|h_1⟩, …, |h_n⟩}) of interest here is |w′_i⟩ := Π⊥_{h_i} (X_h)^i |w′⟩ / √c_{h_i}, which entails

  Π⊥_{h_i} = I_h for i = 0;  I_h − Σ_{j=i+1}^{0} |w′_j⟩⟨w′_j| for i < 0;  I_h − Σ_{j=−b+1}^{i−1} |w′_j⟩⟨w′_j| for i > 0,

where I_h := E_h. We define |v′_i⟩ and Π⊥_{g_i} analogously. Our strategy is to keep track of the highest and lowest powers l in ⟨w′| X_h^l |w′⟩ and ⟨v′| X_g^l |v′⟩ which appear in the matrix elements ⟨w′_i| X_h |w′_j⟩ and ⟨v′_i| X_g |v′_j⟩. For brevity, as before, we use ⟨x_h^l⟩′ := ⟨w′| X_h^l |w′⟩ and similarly ⟨x_g^l⟩′ := ⟨v′| X_g^l |v′⟩. To this end, we denote the minimum and maximum powers l by

  M(|w′_i⟩) = (⟨x_h^0⟩′ |w′⟩, ⟨x_h^0⟩′ |w′⟩) for i = 0;
  (⟨x_h^{−2|i|}⟩′ (X_h)^{−|i|} |w′⟩, ⟨x_h^0⟩′ |w′⟩) for i < 0;
  (⟨x_h^{−2(b−1)}⟩′ (X_h)^{−(b−1)} |w′⟩, ⟨x_h^{2i}⟩′ (X_h)^i |w′⟩) for i > 0.

Note that establishing X_h ≥ E_h O X_g O^T E_h is equivalent to establishing

  E_h ≥ X_h^{−1/2} O X_g O^T X_h^{−1/2}.   (12)
to vectors in the h-space orthogonal to _Xh[1][/][2]_ ��wn′ −b �, in order to establish positivity. It turns out to be easier to test for positivity on a possibly � larger space. It is clear that span _Xh[1][/][2]_ ��wi′�[�][n]i=[−]−[b]b+1 [=][ span][{|][h][1][⟩] [,][ |][h][2][⟩] [. . .][ |][h][n][⟩}][ (because it also equals] span{|w [′]⟩i }i[n]=[−]−[b]b+1[, due to Lemma][ 16][). As neglecting vectors with components along][ X][ 1]h[/][2] ��wn′ −b � suf fces for establishing positivity of Equation (12), we can restrict to span{Xh[1][/][2] ��wi′�}i[n]=[−]−[b]b[−]+[1]1[, which might] still contain vectors with components along Xh[1][/][2] ��wn′ −b �, as the basis vectors are not orthogonal. Let |ψ ⟩ = ��ni=−−bb−+11 _[α][i][X][ 1]h[/][2]_ ��wi′�[�] /c where c = �⟨ψ |ψ ⟩. To establish Equation (12), it is enough to show that for all choices of αi s, 1 ≥⟨ψ | Xh[−][1][/][2]OXдO[T] _Xh[−][1][/][2]_ |ψ ⟩ �ni,−j=b−−b1+1 _[α][i][α][j]_ �vi[′]�� _Xд_ ���vj′� = (13) �ni,−j=b−−b1+1 _[α][i][α][j]_ �wi[′]�� _Xh_ ���wj′� = 1, where the second step follows from the fact that Xд[1][/][2][O][T] _[X][ −]h_ [1][/][2] |ψ ⟩ = [�]i[n]=[−]−[b]b[−]+[1]1 _[α][i][X][ 1]д[/][2]_ ��vi′�, and the last step follows from a counting argument which we give below. Note that � �′ � � _x_ _[i]_ = _x_ _[i][+][2][b][−][1]_ _h_ _h_ and � � _x_ [0][�] = _x_ = = _x_ [2][n][−][2][�] = 0. (14) ⟨ ⟩ - · · � To determine the highest power l in ⟨w [′]| Xh[l] [|][w] [′][⟩] [which appears in the matrix elements] �wi[′]�� _Xh_ ���wj′ (for ----- −b + 1 ≤ _i, j ≤_ _n −_ _b −_ 1) it sufces to consider �wn[′] −b−1�� _Xh_ ��wn′ −b−1�. To this end, we evaluate M(�wn[′] −b−1��)XhM(��wn′ −b−1�) �� �′ � �′ � �′ � �′ � �′ � �′� = _x_ [−][2][(][b][−][1][)] _x_ [−][2][(][b][−][1][)] _x_ [−][2][(][b][−][1][)][+][1], _x_ [2][(][n][−][b][−][1][)] _x_ [2][(][n][−][b][−][1][)] _x_ [2][(][n][−][b][−][1][)][+][1] _h_ _h_ _h_ _h_ _h_ _h_ � � � �� �� �� = [�]⟨xh⟩⟨xh⟩ _xh[2]_, _xh[2][n][−][3]_ _xh[2][n][−][3]_ _xh[2][n][−][2]_ . The highest power is l = 2n 2. To fnd the lowest power of l in _w_ [′] _X_ _[l]_ − ⟨ | _h_ [|][w] [′][⟩] [which appears in the matrix] � elements �wi[′]�� _Xh_ ���wj′ (for −b + 1 ≤ _i, j ≤_ _n −_ _b −_ 1) it sufces to consider �w−[′] _b+1��_ _Xh_ ��w−′ _b+1�. To this end,_ we evaluate M(�w−[′] _b+1��)XhM(��w−′_ _b+1�) =_ ��xh[−][2][(][b][−][1][)]�′ �xh[−][2][(][b][−][1][)]�′ �xh[−][2][(][b][−][1][)][+][1]�′, �xh[0] �′ �xh[0] �′ ⟨xh⟩′[�] � � �� �� �� � � = ⟨xh⟩⟨xh⟩ _xh[2]_, _xh[2][b][−][1]_ _xh[2][b][−][1]_ _xh[2][b]_ . The lowest power is l = 1. We thus conclude that the numerator in Equation (13) is a function of � � � � � � � � � � ⟨xh⟩, _xh[2]_, . . . _xh[2][n][−][2]_, and analogously the denominator is a function of _xд_, _xд[2]_, . . . _xд[2][n][−][2]_ with the same form. Using Equation (14), we obtain that the numerator and the denominator are the same. ### D Proofs and examples for unbalanced monomial assignments #### Proof of Proposition 11 _Proof. Many observations from the proof of Proposition 9 carry over to this case. We import the defnitions_ of ���wi′��ni=−−bb−2 and {��vi′�}i[n]=[−]−[b]b[−][1][, together with the observations that][ M(]�w−[′] _b_ ��)XhM(��w−′ _b_ �) has no dependence on �xh[l] �′ with l smaller than −2b (which corresponds to ⟨xh⟩), and that M(�wn[′] −b−2��)XhM(��wn′ −b−2�) � �′ has no dependence on _x_ _[l]_ with l greater than 2n 2b 4 + 1 = 2n 3 2b. We can restrict to _h_ − − − − spanogous observation for� {��w−�′ _b_ �, ��w−′ _b+1��. . . 
M(��wn′�−�vb−[′]−b2���)X} to establish the positivity ofдM(��v�−′_ _b_ �) and M(�vn[′] −b−2��) DXд :M(= X��vhn′ −−b−E2h�OX), along with the fact thatдO[T] _Eh. Using the anal-_ _x_ _[l]_ [�][′] = _x_ _[l]_ [+][2][b] [�] and _x_ [0][�] = _x_ [1][�] = = _x_ [2][n][−][3][�] = 0, it follows that D is zero. - · · #### Proof of Proposition 12 _Proof. For this proof, we can use the defnitions and observations from the proof of Proposition 10. We_ import the defnitions of ���wi′�� _ni=−−bb+1_ [and] ���vi′��ni=−−bb−+11 [along with the observation that] M(�w−[′] _b+1��)XhM(��w−′_ _b+1�)_ � �′ has no dependence on _xh[l]_ with l smaller than −2b + 2 (which corresponds to ⟨xh⟩), and M(�wn[′] −b−1��)XhM(��wn′ −b−1�) � � � has no dependence on _x_ _[l]_ [�] with l greater than 2n 2b 1 (which corresponds to _x_ [2][n][−][2], as 2n 2b 1 + − − _h_ − − (2b − 1) = 2n − 2). From the previous proof, we also have that establishing Xh ≥ _EhOXдO[T]_ _Eh is equivalent_ to establishing that �ni,−j=b−−b1+1 _[α][i][α][j]_ �vi[′]�� _Xд_ ���vj′� 1, ≥ �ni,−j=b−−b1+1 _[α][i][α][j]_ �wi[′]�� _Xh_ ���wj′� ----- � � � � for all real {αi }i[n]=[−]−[b]b[−]+[1]1[. We know that][ ⟨][x][⟩] [=] _x_ [2][�] = · · · = _x_ [2][n][−][3][�] = 0. As we have dependence on _xh[2][n][−][2]_, we can’t conclude that the fraction is one. However, as we saw in the proof of Proposition 9, dependence on �xh[2][n][−][2]� in the denominator only appears in the �wn[′] −b−1�� _Xh_ ��wn′ −b−1� term with the positive coefcient� 1/chn−b−1. The analogous statement holds for the numerator. This, using _x_ [2][n][−][2][�] - 0, entails that the denominator is larger than or equal to the numerator, concluding the proof. #### Examples of unbalanced aligned and misaligned monomial assignments We illustrate how the solution is constructed by considering a concrete example of an unbalanced aligned monomial assignment. We start with 2n 1 = 7 points and m = 2b = 2 (see Figure 2a). We use the same − diagrammatic representation as before. In this case, we have 4 initial and 3 fnal points and the basis is {|д1⟩, |д2⟩, . . . |д4⟩, |h1⟩, |h2⟩, |h3⟩}. We construct the basis of interest by starting at |w [′]⟩ and using Xh[−][1] � frst until we reach _x_ [0][�], followed by using Xh until the space is spanned (analogously for |v [′]⟩). We get ���v−′ 1�, ��v0′�, ��v1′�, ��v2′�� and ���w−′ 1�, ��w0′�, ��w1′��. In the same vein as the previous solutions, we defne _O :=_ [�]i[1]=−1 ���wi′��vi[′]�� + h.c.� + ��v2′��v2[′]��. In Xh ≥ _EhOXдOT Eh, the_ ��v2′� term is removed by the projector � � _Eh :=_ [�]i[3]=1 [|][h][i] [⟩⟨][h][i][ |][. Using] _x_ [0][�] = ⟨x⟩ = · · · = _x_ [5][�] = 0 and the counting arguments from before, it follows that D = Xh − _EhOXдO[T]_ _Eh is zero._ We now move on to unbalanced misaligned monomial assignment. Consider 2n 1 = 7 points and _m =_ − 2b−1 = 1. In this case, we have 3 initial and 4 fnal points and the basis is {|д1⟩, |д2⟩, |д3⟩, |h1⟩, |h2⟩, . . . |h4⟩}. We construct the basis of interest by starting at |w [′]⟩ and using Xh until the space is spanned (analogously for _v_ [′] ). That is, we frst go downwards for b 2 steps (which is zero in this case), until _x_ is | ⟩ − ⟨ ⟩ reached in the diagram. The basis is ���v0′�, ��v1′�, ��v2′�� and ���w0′�, ��w1′�, ��w2′�, ��w3′��. As before, we defne _O :=_ [�]i[2]=0 ���wi′��vi[′]�� + h.c.� + ��w3′��w3[′]��. This time we use Eh ≥ _X −h_ 1/2OXдO[T] _Xh[−][1][/][2]_ which is equivalent to _Xh ≥_ _EhOXдO[T]_ _Eh for Eh :=_ [�]i[4]=1 [|][h][i] [⟩⟨][h][i][ |][. 
Using an argument similar to the balanced misaligned case, we] can reduce the positivity condition to 1 ≥ �i2, _j=0_ _[α][i][α][j]_ �vi[′]�� _Xд_ ���vj′� , �i2, _j=0_ _[α][i][α][j]_ �wi[′]�� _Xh_ ���wj′� � � but the counting argument doesn’t make the fraction 1. This is because we now have an _x_ [6] dependence _h_ � � in the denominator and _xд[6]_ dependence in the numerator. However, we also know that this term only � � appears in �w2[′]�� _Xh_ ��w2′� that too with a positive coefcient. Furthermore, we know �xh[6] � - _xд[6]_ and therefore we can conclude that the numerator is smaller than the denominator ensuring the inequality is always satisfed. ### E Constructing a WCF protocol approaching bias 1/14 In this last part of the appendix we show how one can construct an explicit WCF protocol, in particular a protocol approaching bias ϵ = 14[1] [, corresponding to the point game with the same bias, that is for][ k][ =][ 3 in] _ϵ(k) =_ 4k1+2 [, we obtain][ ϵ][(][3][)][ =][ 1]14 [. Several results and techniques presented in previous works, such as [][16][,] 17, 1, 5], are required for this construction. We will only refer to them when they are needed. The TDPG with bias [1] 14 [includes the basic moves we mentioned in Section][ 2][, namely the split, merge and] raise moves, as well as the main moves which are needed for the so-called ladder, as illustrated in Figure 3. We only need to determine the orthogonal matrix O for these main moves, as the matrices corresponding ----- (a) 2n − 1 = 7; m = 2b = 2. Even unbalanced monomial assignment. (b) 2n − 1 = 7; m = 2b − 1 = 1. Odd unbalanced monomial assignment. Figure 2: Depicting unbalanced monomial assignment with simple examples. to the split and the merge moves are given by the so-called blinkered unitary, as presented in Equation 3 of [5], and the raise move is trivial, as it just increases the coordinate. The weights on the points constituting the ladder are given by the f -assignment. For our example (the bias 14[1] [case), the][ f][ -assignment is on a set] of points seven points {x0[′][,] _[x][ ′]1_ [. . .][ x][ ′]6[}][, and the corresponding polynomial has degree fve which we write as] _f_ [′](x) = (r1[′] [−] _[x][)(][r][ ′]2_ [−] _[x][)(][r][ ′]3_ [−] _[x][)(][r][ ′]4_ [−] _[x][)(][r][ ′]5_ [−] _[x][)][. More explicitly, the][ f][ -assignment is given by]_ −f [′](xi[′][)] � **�xi[′]�** . _j�i_ [(][x][ ′]j [−] _[x][ ′]i_ [)] _t_ [′] = 6 � _i=0_ The placement of the roots of the polynomial with respect to the points is the following (see also Figure 3): _x0[′]_ [=][ 0][ <][ r][ ′]1 [<][ r][ ′]2 [<][ x][ ′]1 [<][ x][ ′]2 [<][ x][ ′]3 [<][ x][ ′]4 [<][ x][ ′]5 [<][ x][ ′]6 [<][ r][ ′]3 [<][ r][ ′]4 [<][ r][ ′]5[.] The assignment t [′] includes a point with zero coordinate, while the orthogonal matrices O (in Proposition 9, Proposition 10, Proposition 11, and Proposition 12) solve (monomial) assignments whose points have strictly positive coordinates. As already mentioned in Section 3, this is not really a restriction, as Lemma 15 permits us to alternatively consider an f -assignment on the points {x0, _x1 . . . x6} where xi = xi[′]_ [+][ c][ and] _f (x) = (r1 −_ _x)(r2 −_ _x) . . . (r5 −_ _x) where ri = ri[′]_ [+][ c][, for a positive number][ c][. The resulting assignment] −f (xi ) � _j�i_ [(][x]j [−] _[x]i_ [)][ ⟦][x][i] [⟧] _t =_ 6 � _i=0_ ----- has the same solution as that of t [′]. 
We decompose t into a sum of monomial assignments as _t =_ 6 � −r1r2r3r4r5 � _i=0_ _j�i_ [(][x]j [−] _[x]i_ [)][ ⟦][x][i] [⟧] �������������������������������������������������� I + :=α1 6 �������������������������������������������������������������������������������������������������� � − (r2r3r4r5 + r1r3r4r5 + r1r2r3r5 + r1r2r3r4)(−xi ) � ⟦xi ⟧ _i=0_ _j�i_ [(][x]j [−] _[x]i_ [)] ���������������������������������������������������������������������������������������������������������������������������������������������� II + + 6 � −α2(−xi )[2] � _i=0_ _j�i_ [(][x]j [−] _[x]i_ [)][ ⟦][x][i] [⟧] �������������������������������������������������� III 6 � −α4(−xi )[4] � _i=0_ _j�i_ [(][x]j [−] _[x]i_ [)][ ⟦][x][i] [⟧] �������������������������������������������������� V + + 6 � −α3(−xi )[3] � _i=0_ _j�i_ [(][x]j [−] _[x]i_ [)][ ⟦][x][i] [⟧] �������������������������������������������������� IV 6 � −α5(−xi )[5] , � _i=0_ _j�i_ [(][x]j [−] _[x]i_ [)][ ⟦][x][i] [⟧] �������������������������������������������������� VI where αl is the coefcient of (−x)[l] in f (x). Since the total number of points in each term is 7, the monomial assignments are unbalanced. Terms I, III and V each have an even powered monomial, therefore they correspond to the aligned case. Their solutions, thus, are readily obtained from Proposition 11. Analogously, the remaining terms II, IV and VI have an odd powered monomial, therefore they correspond to the misaligned case. Their solutions, thus, are readily obtained from Proposition 12. We have already done the hard work, which is to fnd the matrices which (efectively) solve the _f_ assignments for each move of the point game, and we can now describe how the pieces ft together − to give the WCF protocol. We outline the steps of the associated TDPG, since, using the TEF, they can be seen as a short-hand to denote an exchange and manipulation of quantum systems (e.g. qubits) by the two parties executing the WCF protocol, granted that the associated unitaries are known (for details, see the description of the TEF in [5]). Then, the WCF protocol consists of the same steps implemented in the reverse order. Here, we should clarify that, in fact, we convert a TIPG approaching bias [1] 14 [, into a TDPG] following the technique presented, for instance, in the proof of Theorem 5 in [1] with the minor modifcations we outlined in Appendix A. Being familiar with the relationship between TIPGs and TDPGs and the related techniques facilitates the understanding of the construction that follows. #### Steps of the point game 1. The initial frame corresponds to the function [1] 2 [(][⟦][0][,][ 1][⟧] [+][ ⟦][1][,][ 0][⟧][)][.] 2. The split move: the point 0, 1 is split into a set of points along the y–axis and analogously, the ⟦ ⟧ point 1, 0 is split into a set of points along the x–axis. The number of points resulting from the ⟦ ⟧ splits and their respective weights match the distribution of points along the axis as specifed by the TIPG we started with. 3. The catalyst state [17, 1, 5]: Deposit a small amount of weight, δcatalyst, at all the points that appear in the TIPG. This can be done, for instance, by raising (the x–coordinates) of the points which are along the y–axis, i.e. 
if the points along the axes are denoted as $\sum_i p_{\text{split},i} \llbracket 0, y_i \rrbracket$, then raise them to obtain $\sum_i (p_{\text{split},i} - \delta_{\text{split},i}) \llbracket 0, y_i \rrbracket + \sum_{i,j} \delta_{\text{catalyst}} \llbracket x_i, y_j \rrbracket$, where $\delta_{\text{catalyst}} > 0$ can be chosen to be arbitrarily small and the second sum is over the points $(x_i, y_j)$ which appear in the TIPG (excluding the points on the axes).[14]

[14] One needs to use the analogous procedure, i.e. use $\sum_i p_{\text{split},i} \llbracket x_i, 0 \rrbracket$ as well, for the one point of the TIPG which has a $y$-coordinate smaller than that of the points along the $y$-axis.

[Figure 3: The TDPG (or equivalently, the reversed protocol) approaching bias $\epsilon(k=3) = \frac{1}{14}$ may be seen as proceeding in three stages, as illustrated by the three images (left to right). First, the initial points (indicated by unfilled squares) are split along the axes (indicated by the filled squares). Second, the points on the axes (unfilled squares) are transferred, via the ladder (indicated by the circles), into two final points (filled squares). Third, the two points from the previous step (unfilled squares) and the catalyst state (indicated, after being raised into one point, by the little unfilled box) are merged into the final point (filled box). The second stage is illustrated by Mochon's TIPG (or more precisely, the ladder) approaching bias 1/14. Its typical move is highlighted. The weight of these points is given (up to a multiplicative constant) by the $f$-assignment shown above. The roots of the polynomial correspond to the locations of the vertical lines, and the location of the points in the graph is representative of the general construction.]

4. The ladder:

(a) The constituent functions, i.e., the valid functions resulting from the decomposition of the valid function of the TIPG, are globally scaled such that no negative weight appears when they are applied.
(b) All the scaled-down constituent horizontal functions are applied.
(c) All the scaled-down constituent vertical functions are applied.
(d) The above two steps are repeated until all the weight has been transferred from the axes points to the two final points of the ladder.[15]

5. The raise and merge moves: the last two points are raised and merged into the point $(1 - \delta')\, \llbracket \frac{4}{7} + \delta'', \frac{4}{7} + \delta'' \rrbracket$, where $\delta'$ is the weight introduced by the catalyst state, and $\delta''$ comes from the truncation of the ladder. The catalyst state can then be absorbed (see, e.g., the proof of Theorem 5 in [1]) to obtain a single point $\llbracket \frac{4}{7} + \delta, \frac{4}{7} + \delta \rrbracket$, where $\delta$ can be made arbitrarily small.

This final point, $\llbracket \frac{4}{7} + \delta, \frac{4}{7} + \delta \rrbracket$ with a vanishing $\delta > 0$, of the point game is, in fact, the starting point of the WCF protocol. It corresponds to the initial uncorrelated state of the two parties, A and B, and the coordinates represent the cheating probabilities of each party, $P^*_{A/B} = \frac{4}{7} + \delta = \frac{1}{2} + \frac{1}{14} + \delta$. The steps of the point game are followed in the reverse order, and the WCF protocol ends with two points of equal weights along the axes (these are exactly the points in the initial frame of the point game) corresponding to a correlated state between A and B, $\frac{|00\rangle + |11\rangle}{\sqrt{2}}$.

[15] Once the weight on the axes points diminishes sufficiently, it becomes impossible to apply the moves again.

### References
[1] Nati Aharon, André Chailloux, Iordanis Kerenidis, Serge Massar, Stefano Pironio, and Jonathan Silman. "Weak Coin Flipping in a Device-Independent Setting." In: Revised Selected Papers of the 6th Conference on Theory of Quantum Computation, Communication, and Cryptography (TQC 2011), Madrid, Spain. Springer-Verlag, 2014, pp. 1–12. ISBN: 978-3-642-54428-6. DOI: 10.1007/978-3-642-54429-3_1.

[2] Dorit Aharonov, André Chailloux, Maor Ganz, Iordanis Kerenidis, and Loïck Magnin. "A simpler proof of existence of quantum weak coin flipping with arbitrarily small bias." In: SIAM Journal on Computing 45.3 (2014), pp. 633–679. DOI: 10.1137/14096387x. arXiv: 1402.7166.

[3] Andris Ambainis. "A new protocol and lower bounds for quantum coin flipping." In: Journal of Computer and System Sciences 68.2 (2004), pp. 398–416. DOI: 10.1016/j.jcss.2003.07.010. arXiv: quant-ph/0204022.

[4] Atul Singh Arora, Jérémie Roland, and Stephan Weis. "Quantum Weak Coin Flipping." (Nov. 2018). arXiv: 1811.02984 [quant-ph].

[5] Atul Singh Arora, Jérémie Roland, and Stephan Weis. "Quantum weak coin flipping." In: Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing (STOC 2019). ACM Press, 2019. DOI: 10.1145/3313276.3316306.

[6] Manuel Blum. "Coin Flipping by Telephone: A Protocol for Solving Impossible Problems." In: SIGACT News 15.1 (Jan. 1983), pp. 23–27. ISSN: 0163-5700. DOI: 10.1145/1008908.1008911.

[7] André Chailloux, Gus Gutoski, and Jamie Sikora. "Optimal bounds for semi-honest quantum oblivious transfer." In: Chicago Journal of Theoretical Computer Science (2016). arXiv: 1310.3262 [quant-ph].

[8] André Chailloux and Iordanis Kerenidis. "Optimal Bounds for Quantum Bit Commitment." In: 52nd FOCS. 2011, pp. 354–362. DOI: 10.1109/FOCS.2011.42. arXiv: 1102.1678.

[9] André Chailloux and Iordanis Kerenidis. "Optimal Quantum Strong Coin Flipping." In: 50th FOCS. 2009, pp. 527–533. DOI: 10.1109/FOCS.2009.71. arXiv: 0904.1511.

[10] Richard Cleve. "Limits on the security of coin flips when half the processors are faulty." In: Proceedings of the 18th Annual ACM Symposium on Theory of Computing (STOC '86). ACM Press, 1986. DOI: 10.1145/12130.12168.

[11] Peter Høyer and Edouard Pelchat. "Point Games in Quantum Weak Coin Flipping Protocols." MA thesis. University of Calgary, 2013. URL: http://hdl.handle.net/11023/873.

[12] Iordanis Kerenidis and Ashwin Nayak. "Weak coin flipping with small bias." In: Information Processing Letters 89.3 (Feb. 2004), pp. 131–135. DOI: 10.1016/j.ipl.2003.07.007.

[13] Alexei Kitaev. "Quantum coin flipping." Talk at the 6th Workshop on Quantum Information Processing. 2003.

[14] Hoi-Kwong Lo and Hoi Fung Chau. "Why quantum bit commitment and ideal quantum coin tossing are impossible." In: Physica D: Nonlinear Phenomena 120.1 (1998), Proceedings of the Fourth Workshop on Physics and Computation, pp. 177–187. ISSN: 0167-2789. DOI: 10.1016/S0167-2789(98)00053-0.

[15] Carl A. Miller. "The Impossibility of Efficient Quantum Weak Coin Flipping." In: Proceedings of the 52nd Annual ACM SIGACT Symposium on Theory of Computing (STOC 2020). Chicago, IL, USA: Association for Computing Machinery, 2020, pp. 916–929. ISBN: 9781450369794. DOI: 10.1145/3357713.3384276.

[16] Carlos Mochon. "Large family of quantum weak coin-flipping protocols." In: Phys. Rev. A 72 (2005), p. 022341. DOI: 10.1103/PhysRevA.72.022341. arXiv: quant-ph/0502068.

[17] Carlos Mochon. "Quantum weak coin flipping with arbitrarily small bias." (2007). arXiv: 0711.4114.

[18] Ashwin Nayak and Peter Shor. "Bit-commitment-based quantum coin flipping." In: Phys. Rev. A 67.1 (Jan. 2003), p. 012304. DOI: 10.1103/PhysRevA.67.012304.

[19] Robert W. Spekkens and Terry Rudolph. "Quantum Protocol for Cheat-Sensitive Weak Coin Flipping." In: Physical Review Letters 89.227901 (Nov. 2002). DOI: 10.1103/physrevlett.89.227901.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1911.13283, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "http://arxiv.org/pdf/1911.13283" }
2,019
[ "JournalArticle", "Conference" ]
true
2019-11-29T00:00:00
[ { "paperId": "6250df21e2a0fb4dd7aca9b078b54db7bfe15293", "title": "The impossibility of efficient Quantum weak coin flipping" }, { "paperId": "4babd905a20e3ba362c21343ce2463bc998ed347", "title": "Quantum weak coin flipping" }, { "paperId": "c7824a6369e33dde3738f204797f965b37185751", "title": "A Simpler Proof of the Existence of Quantum Weak Coin Flipping with Arbitrarily Small Bias" }, { "paperId": "d497449ab968457e3ac4c90d65e3df4ec4136974", "title": "Optimal bounds for semi-honest quantum oblivious transfer" }, { "paperId": "0b7bc8bf26cceb96cb685deb489b36c322e8d945", "title": "Point Games in Quantum Weak Coin Flipping Protocols" }, { "paperId": "0a5e06b023037beb0edb44dca375bf696526a70b", "title": "Weak Coin Flipping in a Device-Independent Setting" }, { "paperId": "65923306c4f31fafcd47b491c7888a3baa91c8ba", "title": "Optimal Bounds for Quantum Bit Commitment" }, { "paperId": "4e1ccf37a779e9d60daf628efbaad840c2119a8d", "title": "Optimal Quantum Strong Coin Flipping" }, { "paperId": "c5d78f1656b6f42642173788fc431a65caad06d3", "title": "Quantum weak coin flipping with arbitrarily small bias" }, { "paperId": "e9d066150a4bed5f77ddd52db8a139d69529f933", "title": "Large family of quantum weak coin-flipping protocols" }, { "paperId": "b8ab2f4f042889f96b043c07797558df208eb75d", "title": "Bit-commitment-based quantum coin flipping" }, { "paperId": "37d5b525d95831d6649c8908863a4cd59975b22e", "title": "Quantum protocol for cheat-sensitive weak coin flipping." }, { "paperId": "0ad3fe8adf643eb26e6f927e610a97c1fb7c1e95", "title": "A new protocol and lower bounds for quantum coin flipping" }, { "paperId": "eb12189a10726b694be12bdfa91fd1a40ed1eaee", "title": "Why quantum bit commitment and ideal quantum coin tossing are impossible" }, { "paperId": "d987feebe58c6e315cca4249dc63c1c576b452cf", "title": "Limits on the security of coin flips when half the processors are faulty" }, { "paperId": "a513c22df84d752391f050fa8e004ba2630409d4", "title": "Coin flipping by telephone a protocol for solving impossible problems" }, { "paperId": null, "title": "51st ACM SIGACT STOC" }, { "paperId": null, "title": "Miller . “ The Impossibility of E \u001c cient Quantum Weak Coin Flipping . ” In : Proceedings of the 52 nd Annual ACM SIGACT Symposium on Theory of Computing . STOC 2020" }, { "paperId": null, "title": "2n = 8, m = 2b − 1 = 3; Balanced (aligned) monomial assignment" }, { "paperId": null, "title": "“ Quantum coin \u001e ipping . ” Talk at the 6 th workshop on Quantum Information Process" } ]
35,976
en
[ { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01f051600f4940964e4e201a7345a61fec8321a9
[]
0.85602
Securing textual information with an image in the image using a visual cryptography AES algorithm.
01f051600f4940964e4e201a7345a61fec8321a9
International Journal of Enhanced Research in Management & Computer Applications
[ { "authorId": "2222795946", "name": "Dr. Dipakkumar Dhansukhbhai Patel" }, { "authorId": "2222586520", "name": "Dr. Subhashchandra Desai" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
The use of devices such as computers and mobile phones for communication, data storage, and transmission continues to grow. As the number of users increases, so does the number of unauthorized users who try to access data by unfair means, which raises the problem of data security. To address this problem, data is stored and transmitted in encrypted form, making it unreadable to unauthorized users. Cryptography is the science of information security: it secures data while it is being transmitted and stored. There are two types of cryptographic mechanisms: symmetric key cryptography, in which the same key is used for encryption and decryption, and asymmetric key cryptography, in which two different keys are used. Symmetric key algorithms are much faster, easier to implement, and require less processing power than asymmetric key algorithms. The Advanced Encryption Standard (AES), published by the National Institute of Standards and Technology (NIST) in 2001, is a symmetric key cipher that relies on a single shared key for encryption and decryption. Finally, a cryptographic hash function uses no key at all; instead, it mixes the data.
**ISSN: 2319-7471, Vol. 12 Issue 6, June, 2023, Impact Factor: 7.751**

# "Securing textual information with an image in the image using a visual cryptography AES algorithm."

## Dr. Dipakkumar Dhansukhbhai Patel[1], Dr. Subhashchandra Desai[2]

1 Ph.D Scholar, Department of Computer Science, The Sabarmati University, Ahmedabad, India
2 Department of Computer Science, The Sabarmati University, Ahmedabad, India

**INTRODUCTION**

The use of devices such as computers and mobile phones for communication, data storage, and transmission continues to grow. As the number of users increases, so does the number of unauthorized users who try to access data by unfair means, which raises the problem of data security. To address this problem, data is stored and transmitted in encrypted form, making it unreadable to unauthorized users. Cryptography is the science of information security: it secures data while it is being transmitted and stored. There are two types of cryptographic mechanisms: symmetric key cryptography, in which the same key is used for encryption and decryption, and asymmetric key cryptography, in which two different keys are used. Symmetric key algorithms are much faster, easier to implement, and require less processing power than asymmetric key algorithms. The Advanced Encryption Standard (AES), published by the National Institute of Standards and Technology (NIST) in 2001, is a symmetric key cipher that relies on a single shared key. Finally, a cryptographic hash function uses no key at all; instead, it mixes the data.

**BACKGROUND STUDY**

In 1998, Joan Daemen and Vincent Rijmen developed the Rijndael cipher, later standardized as the Advanced Encryption Standard (AES), a symmetric key block cipher. AES operates on 128-bit data blocks with key lengths of 128, 192, or 256 bits; depending on the key length, the algorithm is referred to as AES-128, AES-192, or AES-256. During the encryption-decryption process, AES performs 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys to deliver the final ciphertext or retrieve the original plaintext. A 128-bit data block consists of 16 bytes, organized as a 4×4 matrix called the state, which can be thought of as a byte array. For both encryption and decryption, the cipher begins with an AddRoundKey stage. The output then passes through nine main rounds before reaching the final round, each of which includes four transformations:

**1- SubBytes, 2- ShiftRows (cyclic shifts of the rows), 3- MixColumns, 4- AddRoundKey.**

In the last (10th) round, there is no MixColumns transformation. Decryption uses Inverse SubBytes, Inverse ShiftRows, and Inverse MixColumns to reverse the encryption process. In the SubBytes transformation, each byte (8 bits) of a data block is replaced by another byte using an 8-bit substitution box, known as the Rijndael S-box.

A triple-layer message security scheme with very high capacity was proposed by S. Farrag and colleagues [1].
The first two layers use cryptography, and the third layer uses steganography to conceal information. In the first layer, the secret message is encrypted using AES with a key length of 128 bits. The output of the first layer is passed to the second layer, where it is further scrambled using a chaotic logistic map. The third and final layer of the scheme applies 2D image steganography, hiding the result in a zigzag pattern across the RGB channels of the cover image.

According to A. M. Abdullah [2], the Advanced Encryption Standard (AES) is one of the most widely used symmetric block cipher algorithms in the world. It is used in hardware and software to encrypt and decrypt sensitive data, and it has a distinctive structure that sets it apart from other methods. Once data has been encrypted with AES, attackers have great difficulty decrypting it, and at present there is no evidence that the algorithm can be practically broken. AES supports three key sizes, 128, 192, and 256 bits, each with a 128-bit block size. The paper gives an overview of the AES algorithm, explains some of its key components in detail, and reviews prior research comparing it with other algorithms such as DES, 3DES, and Blowfish.

To maintain the secrecy, security, privacy, and confidentiality of sensitive data, Shafana A.R.F. [3] integrates both processes: the sensitive data is first encrypted and then concealed in a carrier medium. Encryption is performed with the AES-256 algorithm, and digital images serve as the carrier. The encrypted message is embedded at random positions within a digital image using the popular and robust Least Significant Bit (LSB) technique of steganography, so that the changes appear as ordinary white noise. Combining the two approaches complicates unauthorized access: even if an image were suspected of containing hidden messages, the cipher would still have to be broken. This two-tier security scheme can therefore be a low-cost and practical option for hiding secret messages on personal computers.

According to Al-Mamun et al. [4], the secure and timely transmission of documents is critical for every organization, and data confidentiality, authenticity, and dependability are constantly improving thanks to strong encryption systems and algorithms. The Advanced Encryption Standard (AES), endorsed by the National Institute of Standards and Technology (NIST), is currently among the most secure techniques for maintaining data confidentiality. The study provides a comprehensive review of the security of the current AES algorithm, intending to increase the level of security provided by the method.
By modifying the existing AES method, XORing an additional byte with the S-box value, the authors were able to significantly improve the time security and the strict avalanche criterion, and thus the overall security.

The steganography method used by G. C. Prasetyadi et al. [5] is called the append insertion steganography method. It was chosen for its simplicity, avoiding the constraints on message format present in many common steganography methods. The AES-256 (Rijndael) encryption algorithm is used in conjunction with a secret passphrase to scramble the hidden message. A specific block of bytes is inspected and validated to retrieve the message while maintaining its integrity. The resulting application, an implementation of the proposed algorithm, was verified to be functional, though for now only for private use, since further enhancements are needed.

M. E. Saleh et al. [6] address a weakness of each technique used alone: with cryptography, the ciphertext appears useless, so an attacker may halt the transmission or subject the traffic between sender and receiver to more thorough scrutiny; with steganography, the message is exposed as soon as the presence of hidden data is detected or even inferred. The paper therefore develops a combined approach to information security that layers cryptography and steganography. First, the secret message is encrypted with the Advanced Encryption Standard (AES); second, steganography keeps the encrypted message from being discovered. The proposed hybrid approach thus establishes two levels of security.

Amal Joshy, Fasila K. A. and colleagues [7] describe a technique for converting text into an image using an RGB substitution algorithm, together with a software application that encrypts the resulting image using the AES encryption algorithm. In this method, the secret key and the ciphertext are sent in a single transmission, making for a very compact package. To transform textual material into an image, the encryption and decryption system uses a shared database on both the transmitter and receiver sides. One extra pixel is appended to the encrypted image; it holds the value of the combination number previously used to convert the text into the image. The key used with the AES technique is the same as the resulting RGB value. Both the resulting value and the generated image are then sent to the destination host, and the receiver performs decryption in the reverse order.

Ghoradkar Sneha and Shinde Aparna [8] propose image encryption and decryption using the AES (Advanced Encryption Standard) algorithm. Because images are increasingly used in a variety of industries, it is essential to safeguard confidential image data from unauthorized access. The design uses an iterative approach with a block size of 128 bits and a key size of 256 bits.
With a key size of 256 bits, fourteen rounds are required. The complexity of the cryptographic computation, driven by the secret key, increases the security of the system. In this scheme, the image is the input to AES encryption to obtain the encrypted image, and the encrypted image is the input to AES decryption to recover the original image.

According to Arun M. and Nivek T. N. [9], encryption and decryption are the most important procedures in any network security application, with the former performed at the sender side and the latter at the receiver side of the communication channel. Many encryption systems require a secret key without which it is practically impossible to recover the original data from the ciphertext. The study proposes a system that uses modulo-256 logic to convert textual content into a pixel-based image and then encrypts the resulting image with the AES algorithm. Because the key is sent along with the encrypted image, this technique also addresses the AES key exchange problem.

Jawad Ahmad and Fawad Ahmed [10] thoroughly investigated and compared two encryption schemes, the Compression Friendly Encryption Scheme (CFES) and the Advanced Encryption Standard (AES), for use on digital images, evaluating their resistance to brute-force and other attacks. The authors found that the weaknesses of CFES were associated with low entropy and flat correlation, and concluded that an algorithm with lower correlation values provides greater security.

**PROPOSED METHODOLOGY**

AES is called AES-128, AES-192, or AES-256, a classification that depends on the key size used in the cryptographic process. The different key sizes provide different security levels: as the key size increases, the security level increases, so key size is directly proportional to security level. The input to AES is a single 128-bit block. Processing is carried out over a number of rounds that depends on the key length: a 16-byte key uses 10 rounds, a 24-byte key uses 12 rounds, and a 32-byte key uses 14 rounds. Each main round of the encryption process consists of four distinct transformation functions:

- SubBytes
- ShiftRows
- MixColumns
- AddRoundKey

The final round consists of only three transformations, omitting MixColumns. Decryption is the reverse of encryption and consists of four transformations:

- Inverse SubBytes
- Inverse ShiftRows
- Inverse MixColumns
- AddRoundKey

**AES – Encryption process**

**SubBytes:** Each of the 16 bytes of the plaintext block is replaced with the corresponding value from the substitution table (S-box). This is a non-linear substitution.

**ShiftRows:** In the ShiftRows transformation, the bytes in the last three rows are shifted cyclically:

- The first row remains the same.
- The second row is shifted to the left by one position.
- The third row is shifted to the left by two positions.
- The fourth row is shifted to the left by three positions.

A sketch of these round operations on the 4×4 state is given below.
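The following minimal Python sketch illustrates the ShiftRows step just described, together with the GF(2^8) arithmetic used by the MixColumns step discussed next. The state layout and example bytes are illustrative, the function names are ours, and this is not a complete or hardened AES implementation; the MixColumns check uses a commonly cited test vector.

```python
# Minimal sketch of two AES round operations on a 4x4 state.
# The state holds 16 bytes as state[row][col]; example bytes are arbitrary.

def shift_rows(state):
    """Cyclically left-rotate row r by r positions (r = 0..3)."""
    return [state[r][r:] + state[r][:r] for r in range(4)]

def xtime(a):
    """Multiply a byte by x (i.e. by 2) in GF(2^8), modulo x^8 + x^4 + x^3 + x + 1."""
    a <<= 1
    return (a ^ 0x1B) & 0xFF if a & 0x100 else a

def mix_single_column(col):
    """MixColumns on one 4-byte column: multiply by the fixed matrix
    [[2,3,1,1],[1,2,3,1],[1,1,2,3],[3,1,1,2]] over GF(2^8)."""
    a0, a1, a2, a3 = col
    return [
        xtime(a0) ^ (xtime(a1) ^ a1) ^ a2 ^ a3,   # 2*a0 + 3*a1 + a2 + a3
        a0 ^ xtime(a1) ^ (xtime(a2) ^ a2) ^ a3,   # a0 + 2*a1 + 3*a2 + a3
        a0 ^ a1 ^ xtime(a2) ^ (xtime(a3) ^ a3),   # a0 + a1 + 2*a2 + 3*a3
        (xtime(a0) ^ a0) ^ a1 ^ a2 ^ xtime(a3),   # 3*a0 + a1 + a2 + 2*a3
    ]

# Commonly cited MixColumns test vector: [db, 13, 53, 45] -> [8e, 4d, a1, bc]
assert mix_single_column([0xDB, 0x13, 0x53, 0x45]) == [0x8E, 0x4D, 0xA1, 0xBC]

state = [[0x00, 0x01, 0x02, 0x03],
         [0x10, 0x11, 0x12, 0x13],
         [0x20, 0x21, 0x22, 0x23],
         [0x30, 0x31, 0x32, 0x33]]
print(shift_rows(state))  # row r rotated left by r positions
```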
**MixColumns:** The MixColumns transformation operates on the state column by column. It takes one 4-byte column as input and produces a completely different 4-byte column by transforming the original. The resulting matrix has the same size as the plaintext state. The MixColumns transformation is not carried out in the last round.

**AddRoundKey:** The 16 bytes (128 bits) produced by MixColumns are XORed with the 128-bit round key. The above process is repeated until the final round, producing the corresponding ciphertext.

**AES – DECRYPTION PROCESS**

**Inverse SubBytes:** The inverse of the SubBytes transformation, performed through the inverse S-box, which is obtained by inverting the substitution and computing multiplicative inverses in the Galois field GF(2^8).

**Inverse ShiftRows:** The inverse of the ShiftRows transformation. It performs the circular shifts of the last three rows in the reverse direction: the second row is circularly shifted one byte to the right, and the process continues with correspondingly larger shifts for the remaining rows.

**Inverse MixColumns:** The inverse of the MixColumns transformation. It operates on the state matrix column by column, with the columns treated as polynomials over GF(2^8).

**AES Algorithm**

**3.3.1 Encryption algorithm for text, using steganography with a cover image (producing a stego image) and visual cryptography with the secret image.**

**For Encryption:**

Step 1: Take the message (plain text) as input from the user.
Step 2: Generate a random key in range.
Step 3: Store the random key in the database.
Step 4: Convert the plain text to cipher text by applying AES.
Step 5: The AES system performs 10 rounds for 128-bit keys, 12 rounds for 192-bit keys, and 14 rounds for 256-bit keys to deliver the final ciphertext or retrieve the original plain text. AES processes a 128-bit data block that can be divided into four basic operational words; these are treated as an array of bytes organized as a 4×4 matrix called the state. For both encryption and decryption, the cipher begins with an AddRoundKey stage.
Step 6: Before reaching the final round, the output goes through nine main rounds, during each of which four transformations are performed: 1- SubBytes, 2- ShiftRows, 3- MixColumns, 4- AddRoundKey. In the final (10th) round, there is no MixColumns transformation.
Step 7: The AES permutation process has four stages:
1) SubBytes: each byte (a_{i,j}) of the matrix is replaced with a substituted byte (s_{i,j}) from the Rijndael S-box. At the decryption end, the substitution is inverted to recover the original state.
2) ShiftRows: each row is rotated by a fixed offset; the first row is left unchanged, while the second, third and fourth rows are cyclically shifted to the left by one, two and three positions, respectively.
3) MixColumns: each column is multiplied by a fixed polynomial and replaced with the new value.
4) AddRoundKey: a subkey derived from the main key is combined with the state by applying XOR.
Step 8: Read the cover image that will hide the cipher text.
Step 9: Hide the cipher text in the cover image, producing the stego image:
(I) Generate a random number between 0 and 2 as the channel indicator
(0 = Red, 1 = Green, 2 = Blue).
(II) Use the three MSBs of the selected channel to decide how to hide the cipher text, according to Table 2.
(III) Save the image as stego_image.
Step 10: Hide the stego image in the VC shares:
(I) Read the secret image (SI).
(II) Extract the RGB components (each ranging from 0 to 255) from every pixel of the SI.
(III) According to the pixel value in each channel (red, green and blue), replace each pixel with two 2×2 blocks (B1 and B2).
(IV) Replace the fourth pixel of B1 and B2 with the MSB nibble (4 bits) and the LSB nibble (4 bits) of the stego image, respectively.
(V) Create two shares for each color channel (share1, share2, share3, share4, share5 and share6).
(VI) Merge shares 1, 3 and 5 to form VC share1; similarly, merge shares 2, 4 and 6 to form VC share2.
Step 11: Save the shares (share1.png and share2.png).

This is Algorithm 1, the embedding algorithm that hides the image and encrypts the text using steganography and visual cryptography.

**Decryption algorithm for text, using steganography with the cover image (stego image) and visual cryptography with the secret image.**

**For Decryption:**

Step 1: Select both shares (VC share1 and VC share2), which yield the secret and stego images through the steps below.
Step 2: Overlap VC share1 and VC share2 to get the secret image.
Step 3: Trace, extract and combine the values of the fourth pixel of every 2×2 block of both shares to get the stego image.
Step 4: Extract the hidden cipher text from the recovered stego image.
Step 5: Recover the plain text from the cipher text by AES decryption.

This is Algorithm 2, the extraction algorithm that recovers the hidden image and decrypts the text using steganography and visual cryptography.

[Flowchart 1: Embedding flowchart to hide the image and text using encryption with steganography and visual cryptography. The original chart traces the pipeline: plain text → random key (stored in the database) → AES rounds (SubBytes, ShiftRows, MixColumns, AddRoundKey) → cipher text → cover image with a random channel indicator (0-2) and random pixel selection → stego_image → secret image expanded into 2×2 blocks (B1, B2) per RGB channel, with the fourth pixel of B1/B2 carrying the MSB/LSB nibbles of the stego image → six per-channel shares merged into VC share1 (share1.png) and VC share2 (share2.png).]
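Step 9's channel-indicator embedding can be sketched in a few lines of Python. The rule below (skip a pixel when the chosen channel's three MSBs are 000 or 111, otherwise substitute LSBs) follows the description given later in the Stego Image Creation phase; the 2-bit payload per pixel is our assumption, since the paper's Table 6 is not reproduced, and the function names and cover values are illustrative.

```python
import random

def embed_bits(pixels, message_bits, seed=42):
    """Illustrative LSB embedding over a list of RGB pixels (tuples of 0-255 ints).
    The channel is chosen pseudo-randomly (Step 9-I); a pixel is skipped when the
    chosen channel's three MSBs are 000 or 111; otherwise 2 LSBs carry message
    bits (the 2-bit payload is an assumption -- the paper's Table 6 is not shown)."""
    rng = random.Random(seed)       # receiver re-derives the same channel sequence
    bits = list(message_bits)
    out = []
    for pixel in pixels:
        channel = rng.randrange(3)  # 0 = Red, 1 = Green, 2 = Blue
        value = pixel[channel]
        if bits and (value >> 5) not in (0b000, 0b111):  # indicator check
            payload = (bits.pop(0) << 1) | (bits.pop(0) if bits else 0)
            value = (value & 0b11111100) | payload
        pixel = list(pixel)
        pixel[channel] = value
        out.append(tuple(pixel))
    if bits:
        raise ValueError("cover image too small for the message")
    return out

# Toy usage: hide the bits of one ciphertext byte in a few mid-range pixels.
cover = [(120, 200, 64), (90, 33, 180), (100, 101, 102), (60, 61, 62)]
byte = 0b01000111  # 'G', the first ciphertext byte in the worked example
bits = [(byte >> i) & 1 for i in range(7, -1, -1)]
print(embed_bits(cover, bits))
```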
**Flowchart for Extraction**

[Flowchart 2: Extraction flowchart to recover the hidden image and decrypt the text using steganography and visual cryptography.]

**RESULT ANALYSIS**

| r_id | algo_type | image_name | r_original_size | r_hidden_data | r_psnr | r_rmse | keyid |
|---|---|---|---|---|---|---|---|
| 1 | AES | in2.png | 31 | 238 | 86.65 | 0.021 | 13 |
| 2 | AES | in2.png | 31 | 238 | 88.06 | 0.017 | 14 |
| 3 | AES | images (13).jpg | 10 | 61 | 82.44 | 0.033 | 34 |
| 4 | AES | Das_ID.jpg | 16 | 57 | 83.11 | 0.031 | 35 |
| 5 | AES | Lata_ID.jpg | 10 | 55 | 80.29 | 0.043 | 36 |
| 6 | AES | Jesica_ID.jpg | 11 | 56 | 81.53 | 0.037 | 37 |
| 7 | AES | Neethu_ID.jpg | 7 | 33 | 79.49 | 0.047 | 38 |
| 8 | AES | Sanjay_ID.jpg | 10 | 53 | 78.87 | 0.05 | 39 |
| 9 | AES | Philip_ID.jpg | 10 | 61 | 85.45 | 0.024 | 40 |
| 10 | AES | DylanRose_ID.png | 113 | 342 | 89.18 | 0.015 | 42 |
| 11 | AES | Eagle1.png | 287 | 508 | 94.08 | 0.009 | 44 |
| 12 | AES | DylanRose_ID.png | 113 | 238 | 89.98 | 0.014 | 46 |
| 13 | AES | img-113kb.png | 113 | 345 | 85.19 | 0.024 | 47 |
| 14 | AES | img-113kb.png | 113 | 342 | 90.87 | 0.013 | 50 |
| 15 | AES | Panda1.png | 146 | 456 | 85.79 | 0.023 | 52 |
| 16 | AES | in2.png | 31 | 238 | 87.39 | 0.019 | 53 |
| 17 | AES | img-113kb.png | 113 | 340 | 87.49 | 0.019 | 54 |
| 18 | AES | 2cd43b_fcc6b8947ce4437da2ff2cdd600e137b_mv2.png | 16 | 89 | 80.53 | 0.042 | 55 |
| 19 | AES | in3.png | 50 | 342 | 87.95 | 0.018 | 56 |
| 20 | AES | in5.png | 80 | 299 | 89.91 | 0.014 | 60 |

**Encryption process justification with example**

| Input: plain text and passkey | Krish and 12 |
|---|---|
| Output: ciphertext | G~ed- |

Here the input is the plain text "Krish" with the passkey 12. The plain text is converted to ASCII character values, which are then converted to binary. For example, the character 'K' has ASCII code 75, whose binary equivalent is 01001011, and so on.

**Table 2 shows the conversion of plain text to ASCII code and then to binary code**

| PLAIN TEXT | ASCII CODE | BINARY EQUIVALENT |
|---|---|---|
| K | 075 | 01001011 |
| r | 114 | 01110010 |
| i | 105 | 01101001 |
| s | 115 | 01110011 |
| h | 104 | 01101000 |

The passkey is 12, so the AES operation is executed on the binary representation of the plain-text ASCII characters, producing the ciphertext G~ed-. For example, after performing the AES operations on 01001011 with the passkey logic, we obtain the new binary number 01000111, which is ASCII code 071, the character 'G'; the remaining characters are converted in the same way, yielding the ciphertext G~ed-. The conversion table is shown below.
**Table 3 shows the conversion of binary code to ASCII code and ciphertext**

| BINARY EQUIVALENT | ASCII CODE | CIPHERTEXT |
|---|---|---|
| 01000111 | 071 | G |
| 01111110 | 126 | ~ |
| 01100101 | 101 | e |
| 01100100 | 100 | d |
| 00101101 | 045 | - |

As the table shows, the binary operation converts the plain text to cipher text using the passkey and the AES operation; the plain text is thus securely encrypted into the ciphertext. The snapshot below shows the tool executing the encryption process with a passkey and the AES operation.

Illustration 1: Snapshot of the encryption process using the input plain text, passkey, and AES operation.

**Use of steganography and visual cryptography: process justification with example**

| Input: cipher text & cover image | G~ed- & Cover Image |
|---|---|
| Output | Stego Image |

**Phase 1: Stego image creation**

After the encryption process, the secret message is embedded into random pixels of Cover Image 1 in the following steps:

Step 1: Read the secret message and convert it into bytes.
Step 2: Read Cover Image 1 and split it into RGB channels.
Step 3: Select one of the color channels using a pseudo-random number generator (PRNG).
Step 4: Hide up to 4 bits of the secret message in a pixel, based on the indicator value.

The random selection of a channel and an indicator, used in Steps 3 and 4, works as follows. Before secret data is hidden in a pixel, one of the color channels of that pixel is randomly selected (illustrated in Table 5). As shown in Table 6, after the color channel has been selected at random, the three MSBs of the selected channel are used as an indicator that determines whether to hide data in the current pixel and how many bits to hide in each color channel. If the three most significant bits are all the same, i.e. 000 or 111, then no data is hidden in that pixel; otherwise, the secret message bits are substituted for one or two of the least significant bits of each component. The figure below depicts the results of a test using sample data.

[Figure 4: Stego creation - result of Phase 1.]

[Figure 5: The stego image produced by the LSB substitution method, split into the RGB color channels.]

The snapshot below shows the tool selecting the cover image and the secret image for creating the shares.

Illustration 2: Snapshots for selecting the stego image and secret image for creating the shares.

After the steganography and visual cryptography process, the data is hidden in the secret image, and we generate the histogram.
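A minimal way to produce the per-channel histograms referred to here (and analyzed later in the result analysis) is sketched below; the file names are placeholders, and Pillow/numpy are assumed to be available.

```python
import numpy as np
from PIL import Image  # Pillow

def channel_histograms(path):
    """Return a 3x256 array of pixel-value counts for the R, G, B channels."""
    rgb = np.asarray(Image.open(path).convert("RGB"))
    return np.stack([
        np.bincount(rgb[..., c].ravel(), minlength=256) for c in range(3)
    ])

# Placeholder file names: compare cover vs. stego histograms channel by channel.
before = channel_histograms("cover.png")
after = channel_histograms("stego_image.png")
changed = int(np.sum(before != after))
print(f"{changed} of {before.size} histogram bins differ after embedding")
```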
Illustration 3: The process of steganography and visual cryptography hiding the secret image.

**Phase 2: Hiding the stego image in VC shares**

| Input | Stego image and secret image |
|---|---|
| Output | Share 1 and Share 2 |

In this phase, the stego image created in Phase 1 is embedded into the VC shares of Cover Image 2 in the following steps:

Step 1: Read the stego image and read each pixel value.
Step 2: Separate the 8 bits of each color component into 2 nibbles.
Step 3: Read Cover Image 2 and create 2 shares for each color channel.
Step 4: Hide the first nibble (MSB) in share1 and the second nibble (LSB) in share2.

Steps 3 and 4, which create the VC shares and hide the stego image simultaneously, work as follows (see the sketch after this section). Cover Image 2 is split into three color channels (RGB), and two shares are created depending on the intensity of the pixel values of each color channel (whether a value is greater than or less than 128). Each pixel is expanded into two 2×2 blocks (B1 and B2), to which a color is assigned as shown in Fig. 2 for the red channel; blocks for the blue and green channels are created similarly. The fourth pixel of B1 is replaced with the first nibble, and the fourth pixel of B2 with the second nibble, of the stego image. The B1 blocks of all pixels form share1, and the B2 blocks form share2.

[Figure 9: The secret image, split into RGB color channels for the LSB substitution method.]

The snapshot below shows the histogram check for changes in image size after visual cryptography.

Illustration 4: Snapshot of the histograms of the stego image before and after the data-hiding operation.

**Decryption process justification with example**

| | Share 1 | Share 2 | Cipher Text |
|---|---|---|---|
| Input | (1024 × 768), 5.41 KB | (1024 × 768), 149 KB | G~ed- |
| Output | Krish (original plain text) | | |

The decryption procedure is straightforward. Neither the stego image nor Cover Image 2 needs to be restored separately: by overlapping the two shares, Cover Image 2 is revealed without any mathematical processing. The stego image is then reconstructed by tracing, extracting, and combining the values of the fourth pixel of every 2×2 block in the two shares, and the hidden message is extracted from the restored stego image. This multi-level stego-VC system therefore supports the secure transmission of messages and is extremely difficult to crack.

Illustration 5: Snapshot for uploading the shares.

Illustration 6: Snapshot of the decryption process, selecting the shares and entering the passkey.
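The nibble-carrying share construction of Phase 2, and its reversal in decryption Step 3, can be made concrete with the toy Python sketch below. The 2×2 block patterns encoding a light or dark cover pixel are stand-ins (the paper's Fig. 2 patterns are not reproduced here); only the fourth-pixel nibble trick follows the text, and the function names are ours.

```python
# Toy sketch of Phase 2 for a single color channel.

def make_blocks(cover_value, stego_value):
    """Expand one cover pixel into two 2x2 blocks B1, B2.
    Pixels 0-2 encode the cover intensity (>= 128 or < 128) as VC patterns
    (placeholder patterns, not the paper's Fig. 2); pixel 3 of B1 carries the
    stego MSB nibble, pixel 3 of B2 the LSB nibble."""
    light = cover_value >= 128
    pattern1 = [255, 0, 0] if light else [255, 255, 0]
    pattern2 = [0, 255, 255] if light else [0, 0, 255]
    msb, lsb = stego_value >> 4, stego_value & 0x0F
    b1 = pattern1 + [msb << 4]  # store the nibble in the 4th pixel's high bits
    b2 = pattern2 + [lsb << 4]
    return b1, b2

def recover_stego_value(b1, b2):
    """Combine the 4th pixels of the two blocks back into the stego byte
    (decryption Step 3)."""
    return (b1[3] & 0xF0) | (b2[3] >> 4)

cover_pixel, stego_pixel = 200, 0x47  # 0x47 = 'G', first ciphertext byte
b1, b2 = make_blocks(cover_pixel, stego_pixel)
assert recover_stego_value(b1, b2) == stego_pixel
print(b1, b2)
```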
**Justification for result analysis**

The suggested approach exploits the advantages of both VC and steganography for concealing multimedia data. The algorithm, implemented in the Python programming language, is tested on example data, and the results are depicted in the figures. The most important aspects of this study are discussed below.

**Imperceptibility**

The second phase of the proposed solution is tested by concealing text files of varying sizes under a cover image, and is evaluated by calculating the RMSE and PSNR values (Table 6). The PSNR value remains high, for instance 88.49 dB, implying that there is no significant visual distortion even when hiding 38 KB of message data.

**Table 6: Cover image, hidden data, PSNR and RMSE**

| Cover Image | Hidden Data (KB) | PSNR (dB) | RMSE |
|---|---|---|---|
| (512 × 384) 80.4 KB | 299 | 89.9 | 0.014 |

**Resistance to steganalysis**

Although the changes made to the cover image by data hiding are imperceptible to the human visual system (HVS), a variety of steganalysis methods can detect the presence of a hidden message in a stego medium. Detection by steganalysis is avoided here through the VC technique, which hides the stego image within the shares constructed from a secret image. As illustrated in Figure 8, the generated shares are always meaningless noise, even after the stego image has been embedded in them. This ensures that attackers cannot deduce any information about the secret image from the generated shares.

[Figure 15: Shares created (Share 1 and Share 2).]

**Multilevel security**

This system protects the communicated information with four layers of protection:

- Phase 1 encrypts the secret message.
- Phase 2 hides the secret message in an image using a dynamic, random algorithm.
- Phase 3 embeds the stego image in VC shares.
- The shares of the hidden image created in Phase 3 are meaningless on their own.

As a result, even intruders who are aware of the presence of a secret data stream cannot easily break into the system.

**Multimedia security**

This approach allows the user to hide several pieces of data in different formats, such as text and images, at the same time: two secret text files, two cover images, and a secret picture are hidden within the shares of the secret image. From the received shares, the recipient can extract the hidden image, the stego images, and the secret messages. The technique thus enables the hidden transmission of multiple types of data in large volumes.

**Message integrity**

A security technique is considered effective if the receiver can extract exactly the message that was hidden and transmitted. In the proposed approach, the secret message is hidden in the spatial domain of the image and no further alterations are made, so the message obtained in the extraction phase is identical to the hidden message (Fig. 9). The strategy therefore guarantees data integrity.
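The PSNR and RMSE figures above can be reproduced with a few lines of numpy; the formal definitions follow in the PSNR section below. The file names are placeholders, and reporting RMSE on intensities scaled to [0, 1] is our assumption, made because the paper's RMSE values are far below 1.

```python
import numpy as np
from PIL import Image  # Pillow

def quality_metrics(cover_path, stego_path, peak=255.0):
    """Return (PSNR in dB, RMSE on [0,1]-scaled intensities) between two images."""
    f = np.asarray(Image.open(cover_path).convert("RGB"), dtype=np.float64)
    g = np.asarray(Image.open(stego_path).convert("RGB"), dtype=np.float64)
    mse = np.mean((f - g) ** 2)
    psnr = 20 * np.log10(peak / np.sqrt(mse)) if mse > 0 else float("inf")
    rmse = np.sqrt(mse) / peak  # assumption: values like 0.014 suggest [0,1] scaling
    return psnr, rmse

psnr, rmse = quality_metrics("cover.png", "stego_image.png")  # placeholder paths
print(f"PSNR = {psnr:.2f} dB, RMSE = {rmse:.3f}")
```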
[Figure 16: Result of extraction.]

**PSNR**

The MSE is the average of the squares of the "errors" between the original image and the stego image, i.e. the amount by which the values of the original image differ from those of the degraded image:

$$\text{MSE} = \frac{1}{mn} \sum_{i=1}^{m} \sum_{j=1}^{n} \bigl( f(i,j) - g(i,j) \bigr)^2,$$

where $f$ represents the matrix data of the original image, $g$ represents the matrix data of the stego image, $m$ is the number of rows of pixels (indexed by $i$), and $n$ is the number of columns of pixels (indexed by $j$).

The peak signal-to-noise ratio (PSNR), in decibels, is computed between two images. This ratio is often used as a quality measure between the original and the reconstructed image; the higher the PSNR, the better the quality of the reconstructed image:

$$\text{PSNR} = 20 \log_{10} \left( \frac{\text{MAX}_f}{\sqrt{\text{MSE}}} \right),$$

where $\text{MAX}_f$ is the maximum signal value that exists in the cover image. The PSNR of the cover and stego images for different sizes of hidden data is shown in the figure below; similarly, the PSNR of the cover share and the stego share after hiding a stego image of size 512 × 384 is shown in Fig. 10. From the PSNR values, it is evident that the clarity of the stego image is almost the same as that of the original image.

**Table 7: Comparison of the cover image with the red, green and blue stego images**

| Cover Image | Red Stego Image | Green Stego Image | Blue Stego Image |
|---|---|---|---|
| (512 × 480) 80.4 KB | (512 × 480) 215 KB | (512 × 480) 220 KB | (512 × 480) 234 KB |

**Histogram**

A histogram is a graphical representation of statistical information that uses rectangles to depict the frequency of data items in successive numerical intervals of equal size. Most commonly, the independent variable is plotted along the horizontal axis and the dependent variable along the vertical axis.
So, it is difficult for intruder to differentiate the encrypted image and the original image. So, AES algorithm is most suited for image encryption in real time applications. As a future work, we are planning for a different encryption keys in each round to perform encryption.Image Encryption and Decryption using AES algorithm is implemented to secure the image data from an unauthorized access. A Successful implementation of symmetric key AES algorithm is one of the best encryption and decryption standard available in market. With the help of PYTHON coding implementation of an AES algorithm is synthesized and simulated for Image Encryption and Decryption. The original images can also be completely reconstructed without any distortion. It has shown that the algorithms have extremely large security key space and can withstand most common attacks such as the brute force attack, cipher attacks and plaintext attacks. **REFERENCES** [1]. S. Farrag, W. Alexan, and H. Hussein, “Triple-layer image security using a zigzag embedding pattern,” in 2019 International Conference on Advanced Communication Technologies and Networking (CommNet‟19), Morocco, Apr. 2019. [2]. A. M. Abdullah, Advanced Encryption Standard (AES) Algorithm to Encrypt and Decrypt Data, Cyprus UK: Research Gate Departement Of Applied Mathematics & Computer Science, 2017. ----- **ISSN: 2319-7471, Vol. 12 Issue 6, June, 2023, Impact Factor: 7.751** [3]. Shafana A.R.F. “TWO TIER SHIELD SYSTEM FOR HIDING SENSITIVE TEXTUAL DATA”, Proceedings of 7th International Symposium, SEUSL, ISBN 978-955-627-120-1, pp. 97-103., 7th & 8th December 2017. [4]. Al-Mamun, A., Rahman, S., et al.: Security analysis of AES and enhancing its security by modifying S-box with an additional byte. Int. J. Comput. Netw. Commun. (IJCNC) 9(2) (2017). [5]. G. C. Prasetyadi, A. Benny Mutiara and R. Refianti, “File encryption and hiding application based on advanced encryption standard (AES) and append insertion steganography method,”, 2017 Second International Conference on Informatics and Computing (ICIC), Jayapura, 2017, pp. 1-5. [6]. M. E Saleh, A. A. Aly, and F. A. Omara, “Data Security Using Cryptography and Steganography Techniques,” Int. J. Adv. Comput. Sci. Appl., vol. 7, no. 6, pp. 390–397, 2016. [7]. Amal Joshy, Amitha Baby K X, Padma S, Fasila K A "Text to Image Encryption Technique using RGB Substitution and AES" IEEE International Conference on Electrical, Computer and Communication Technologies, Coimbatore, pp 19-21, February 2016. [8]. Ghoradkar, Sneha and Shinde, Aparna, “Review on Image Encryption and Decryption using AES Algorithm,” International Journal of Computer Applications (0975–8887), National Conference on Emerging Trends in Advanced Communication Technologies, (NCETACT-2015). [9]. Arun, M., Azarudeen S. Mohamed and Nivek, T.N. “AES based Text to Pixel Encryption using Color Code Conversion by Modulo Arithmetic”. International Journal of Recent Research in Science, Engineering, and Technology. Vol. 1, No. 3 pp 37-42, June 2015. [10]. Jawad Ahmad and Fawad Ahmed ―Efficiency Analysis and Security Evaluation of Image Encryption Schemes‖ International Journal of Video & Image Processing and Network Security IJVIPNS-IJENS Vol: 12 No: 04, 2012 -----
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.55948/ijermca.2023.0611?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.55948/ijermca.2023.0611, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://doi.org/10.55948/ijermca.2023.0611" }
2,023
[ "JournalArticle" ]
true
null
[]
10,868
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Political Science", "source": "external" }, { "category": "Philosophy", "source": "s2-fos-model" }, { "category": "Political Science", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Sociology", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01f08210264ee1f338a86e5c95a00d7531c83fe2
[ "Computer Science", "Political Science" ]
0.930563
Governance in Blockchain Technologies & Social Contract Theories
01f08210264ee1f338a86e5c95a00d7531c83fe2
Ledger
[ { "authorId": "3373845", "name": "Wessel Reijers" }, { "authorId": "1403861641", "name": "Fiachra O’Brolcháin" }, { "authorId": "145756354", "name": "P. Haynes" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": [ "http://ledgerjournal.org/ojs/index.php/ledger" ], "id": "6dc5e327-35e7-485d-af5c-6a89140e72e9", "issn": "2379-5980", "name": "Ledger", "type": null, "url": "http://ledgerjournal.org/" }
This paper is placed in the context of a growing number of social and political critiques of blockchain technologies. We focus on the supposed potential of blockchain technologies to transform political institutions that are central to contemporary human societies, such as money, property rights regimes, and systems of democratic governance. Our aim is to examine the way blockchain technologies can bring about - and justify - new models of governance. To do so, we draw on the philosophical works of Hobbes, Rousseau, and Rawls, analyzing blockchain governance in terms of contrasting social contract theories. We begin by comparing the justifications of blockchain governance offered by members of the blockchain developers' community with the justifications of governance presented within social contract theories. We then examine the extent to which the model of governance offered by blockchain technologies reflects key governance themes and assumptions located within social contract theories, focusing on the notions of sovereignty, the initial situation, decentralization and distributive justice.
DOI 10.5915/LEDGER.2016.62

# Governance in Blockchain Technologies & Social Contract Theories

## Wessel Reijers,† Fiachra O'Brolcháin,‡ Paul Haynes§

†W. Reijers (wreijers@adaptcentre.ie) is a PhD researcher at the School of Computing, Dublin City University
‡F. O'Brolcháin (fiachra.obrolchain@dcu.ie) is a postdoctoral researcher at the Institute of Ethics, Dublin City University
§P. Haynes (paul.haynes@rhul.ac.uk) is a lecturer at the School of Management, Royal Holloway, University of London - 3HrFGw5nuBup39tzvQT5reEF5gdtx8fDGw

**Abstract.** This paper is placed in the context of a growing number of social and political critiques of blockchain technologies. We focus on the supposed potential of blockchain technologies to transform political institutions that are central to contemporary human societies, such as money, property rights regimes, and systems of democratic governance. Our aim is to examine the way blockchain technologies can bring about - and justify - new models of governance. To do so, we draw on the philosophical works of Hobbes, Rousseau, and Rawls, analyzing blockchain governance in terms of contrasting social contract theories. We begin by comparing the justifications of blockchain governance offered by members of the blockchain developers' community with the justifications of governance presented within social contract theories. We then examine the extent to which the model of governance offered by blockchain technologies reflects key governance themes and assumptions located within social contract theories, focusing on the notions of sovereignty, the initial situation, decentralization and distributive justice.

## 1. Introduction

The Blockchain, the technological innovation underpinning the familiar cryptocurrency Bitcoin, is increasingly the topic of academic and public debate. In this paper, we aim to examine the ways in which blockchain technologies can produce models of governance and how these models of governance are justified. We do so by exploring similarities between core design features of the Blockchain, the main ideas about governance that persist in the blockchain community and essential aspects of prominent social contract theories. We do not intend to construct a conclusive comparison between models of government offered by social contract theories and blockchain technologies, but rather to identify points of convergence and divergence that enable us to indicate points of departure for political critiques of the technology.

Blockchain technology, first applied in the design of Bitcoin in 2008, emerged from a movement of anarchists, computer scientists and crypto-enthusiasts who saw the potential of the technology as a breakthrough in the long-awaited realization of an old “cypherpunk” dream of money that is free from the control of the state and other third parties, such as commercial banks;[1] however, blockchains offer technological possibilities far beyond new ways of issuing money. They also offer scope for rethinking political organization, including enabling novel ways of creating, managing and maintaining systems of voting rights, property rights and other legal agreements. We refer to the process by which blockchains enable such systems as “blockchain governance,” which is constitutive of a broader political theme termed “blockchain government.”[2]

Our paper contributes to a growing body of political and sociological reflections on blockchain technologies in which the design and application of its technology is linked to ideas of political organization. Kostakis and Giotitsas (2014: 437) argue that Bitcoin “as a piece of software is imbued with ideas drawn from a certain political framework.”[3] Such a political framework, Barton (2015) argues, challenges the instrumentalist idea of technical “neutrality” of Bitcoin,[4] a claim he supports with ethnographic findings indicating biases present in the design of the technology itself. Golumbia (2015: 128) is more explicit, stating that networks built on the Blockchain represent a political framework that is “profoundly antidemocratic” and serves “a neo-liberal agenda.”[5] In addition, some scholars specifically focus on philosophical ideas of political organization that can be traced in the technological design of the Blockchain. For instance, Dupont (2014: 8) argues that cryptographic code can “stand in” for humans and that the Blockchain can be regarded as a powerful “ordering machine” in the modern “control society.”[6] Linking Bitcoin to political philosophy, Kavanagh and Miscione (2015: 8) draw the connection between the Blockchain and the Leviathan, as conceptualized in the work of Thomas Hobbes, as the enforcer of the social contract.[7] More specifically, Dupont and Maurer (2015) argue that the Blockchain conjoins “two of the central legal devices of modernity: the ledger and the contract.”[8]

Our paper aims at contributing to these philosophical debates by exploring philosophical ideas common to both the Blockchain and classical social contract theories. We base our argument on the social contract theories of Hobbes, Rousseau, and Rawls, and on central texts produced by, and widely circulated within, the blockchain developer community. Notably, we focus on writings about the Ethereum platform. Ethereum is a nonprofit organization with the key objective stipulated as: “promotion of developments of new technologies and applications, especially in the fields of new open and decentralized software architectures.”[9] Its character, as a platform for the advocacy and development of blockchain applications that tries to engage the wider community of developers, users and enthusiasts, makes it a valuable source for investigating how principles of political organization are discussed in the context of blockchain technologies. As in any community, proponents of blockchain technology express a diversity of views representing a variety of perspectives; however, the values that unite the Ethereum community can be drawn from a number of its key texts. For our case study, these include white and yellow papers (Buterin, 2013; Wood, 2014) and communications from key individuals, organizations and other members of the Ethereum community (including interviews, articles, mission statements, wiki, blog and forum postings). Our inquiry is guided by two distinct research objectives. Firstly, we investigate the extent to which justifications of blockchain governance offered by the Ethereum community reflect justifications of governance offered by social contract theories.
Secondly, we investigate the extent to which the model of governance offered by blockchain technologies reflects the models of governance offered by prominent versions of social contract theory. We start by outlining the principles of governance applied in the Blockchain, focusing on two of its key features: its nature as a public ledger, and its capacity to decentralize the enforcement of contracts. We then compare justifications offered for blockchain governance with justifications for governance offered by the social contract theories of Hobbes, Rousseau and Rawls. Finally, we trace similarities between the models of governance offered by these theories and the model of governance enabled by blockchain technologies.

## 2. How Blockchain Technologies Can Shape Governance

We start our investigation by exploring the way blockchain technologies are able to configure specific forms of political organization. In order to do so, we focus on a paradigmatic instance of a software project utilizing blockchain technology: Ethereum. Ethereum was chosen as a case study because it matches a number of relevant criteria, including its technological scope and the engagement with political ideas by its community of practitioners. It aims at implementing the paradigm of the Blockchain “coupled with cryptographically-secured transactions” in a “generalized manner.”[10] This suggests that it attempts to generate a software standard (like an e-mail protocol) for any kind of decentralized blockchain application, which could range from another cryptocurrency to applications for managing “smart contracts” like a blockchain-instigated civil marriage contract,[11] property contracts and financial instruments.[12]

The Blockchain can be described as a public record of time-stamped transactions that is reinforced by the computational efforts of the decentralized network of ‘miners’ (people controlling computational nodes that are validating transactions). This public record is commonly referred to as the “universal” or public ledger. Core features of blockchain design that are relevant for our analysis are: (i) its nature as a digital, public ledger through which people contract with one another; and (ii) its decentralized enforcement of validated transactions or contracts by means of computational scrutiny. Any blockchain consists of time-stamped “blocks,” which are collections of the validated transactions in the system within a certain timeframe (every 10 minutes in the case of Bitcoin). All transactions made within a blockchain are available to public inquiry, from the “beginning of time” (when the first block was time-stamped) until the current moment. In theory at least, this means that all the entities interacting with a certain blockchain application can own a copy of the public blockchain and control the validity of new interactions. Thus, so-called “smart contracts” in the given blockchain can be publicly validated and can be enforced by a decentralized network of nodes, which can in theory include all the users of the blockchain. The objects that are transacted through a blockchain need not be quantities of money, as is the case with Bitcoin, but can also be texts or certain rule-based agreements.
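To make the preceding description concrete, the following is a minimal Python sketch of a time-stamped, hash-linked ledger, assuming SHA-256 hashing. It illustrates only the append-only public ledger idea discussed above; real blockchains add consensus, networking, and transaction validation, and this is not Ethereum's or Bitcoin's actual code.

```python
# A minimal sketch of a time-stamped, hash-linked ledger (illustrative only;
# real blockchains add consensus, networking, and transaction validation).
import hashlib
import json
import time

def block_hash(block: dict) -> str:
    """Hash the block's contents deterministically with SHA-256."""
    payload = json.dumps(block, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def make_block(transactions: list, prev_hash: str) -> dict:
    """Bundle validated transactions into a time-stamped block."""
    return {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,  # links the block to its predecessor
    }

def chain_is_valid(chain: list) -> bool:
    """Any node holding a copy of the ledger can re-check every link."""
    for prev, curr in zip(chain, chain[1:]):
        if curr["prev_hash"] != block_hash(prev):
            return False  # a tampered block breaks every later link
    return True

genesis = make_block([], prev_hash="0" * 64)
chain = [genesis]
chain.append(make_block([{"from": "A", "to": "B", "amount": 5}],
                        prev_hash=block_hash(genesis)))
assert chain_is_valid(chain)
```

The hash links are what make the ledger publicly auditable: altering any past block changes its hash, which invalidates every subsequent block for anyone holding a copy of the chain.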
Aspects of governance such as property rights regimes, insurance contracts and even so-called “decentralized autonomous organizations” (DAOs) – organizations such as companies or government institutions that are managed by means of decentralized, blockchain-based interactions – can be (re)organized and managed through blockchain technologies.[13] Property rights can for instance be organized on a blockchain in the context of the Internet of Things (IoT). In this context, physical devices that are connected to the Internet would require identification of their owner in order to be used, with the ownership rights of each specific device stored on a blockchain (Wright and De Filippi 2015: 15). This is an important innovation because, as Dupont and Maurer (2015) argue, blockchain technologies differ from traditional social systems that validate, maintain and enforce contracts between people (e.g. accountancy and legal systems), because “cryptocontracts tend to build social and functional properties within the system.” In other words, where lawyers and judges are needed to enforce legal regulations and notaries are needed to validate certain legally binding contracts, the blockchain allows for the validation of smart contracts and their enforcement in its own right without the necessity for arbitrating third parties. Because of these features, developers of the Ethereum platform argue that the blockchain can function as a legal framework able to serve as the basis for online interactions of any kind, claiming that: “Ethereum is a new kind of law.”[14] This implies that in contrast with conventional contract laws, which are necessarily coupled with their human validators and enforcers, blockchain technologies are capable of establishing and maintaining forms of political organization that are (at least in the virtual realm) self-sustaining. As Dupont and Maurer (2015) argue, the public ledger renders social interactions that are recorded on the ledger visible to everyone in the system (both human and artificial agents in the case of the Ethereum ledger), which consequently renders them auditable. Moreover, the decentralized enforcement of smart contracts “dematerializes” or rather depersonalizes the auditing authority: it eradicates the need for human arbitrators such as notaries or accountants.

To understand how blockchain technologies enforce “smart contracts” as opposed to how traditional contracts are enforced, we need to clarify both terms. Traditional contracts can be described as textually expressed voluntary agreements between two or more contracting parties that require human arbitration to be validated, audited and enforced. A smart contract is defined by Buterin (2016) as “a mechanism involving digital assets and two or more parties, where some or all of the parties put assets in and assets are automatically redistributed among those parties according to a formula based on certain data that is not known at the time the contract is initiated.”[15] Thus, on the one hand we can say that clauses sanctioned by two parties in conventional contracts are textually defined and do not directly bind the contracting parties because a third, arbitrating human party is necessary to ensure the validity and enforcement of the contract.
On the other hand, a smart contract implies that all the contractual clauses are machine-readable and can be made binding by means of computational scrutiny, without human interference. As Dupont and Maurer (2015) put it, the smart contract “replaces the difficult social and psychological work of contracting with self-executing code.” We would slightly nuance this claim by stating that a significant part of the “work of contracting” remains embedded in social interactions, namely the act of consenting to a specific contractual reality. The aspects that are delegated to the technology are the validation, storing and enforcement of the contractual clauses.

The characteristics of blockchain technologies, as described earlier, seem to support the claim that they could, in many circumstances, mimic institutional processes that enable societal governance, such as currency systems (as Bitcoin demonstrates), property regimes and even democratic voting processes. Whether such institutional processes on the blockchain can be part of a “social contract” similar to the social contract as understood in the philosophical tradition remains, however, an open question. In the following section, we explore the extent to which the “social contract” of blockchain governance reflects aspects of the social contract that structures the basis of governance as theorized by some of the most prominent thinkers in the philosophical tradition. Before we proceed with this inquiry, we need to clarify two important issues. First of all, we need to clarify the meaning of “social contract” vis-à-vis the notions of contract and smart contract discussed earlier. In philosophical writings, the concept of the social contract is used in two distinct traditions: one identified by Skyrms (1996: ix) as focusing on “what sort of contract rational decision makers would agree to in a preexisting ‘state of nature’” and another that aims to explain how the implicit social contract that creates society has evolved and may continue to evolve in the future.[16] In this paper, we limit our focus to an understanding of the social contract as it is used within the first of these traditions, i.e. conceptualizing the social contract as a method for justifying political principles by appeal to an agreement made in an initial situation by people who are (broadly speaking) presupposed to be equal, rational, and autonomous.

This notion of the social contract is one of the most significant contributions of Western liberal political philosophy. Its lineage can be traced back to Thomas Hobbes (1651), Jean-Jacques Rousseau (1762), and John Rawls (1971).[17, 18, 19] We acknowledge that by focusing on these three thinkers our account of the social contract tradition will remain incomplete, not least because it excludes other notable contributors (e.g. Locke, Gauthier, Schmitt). Nevertheless, we argue that within the scope of this paper the three thinkers selected afford a discussion of the most significant aspects of social contract theories. Social contract thinkers were attempting to justify government – arguing that governments were legitimate if they were deemed to be the creations of autonomous individuals contracting together.
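Returning to Buterin's definition of the smart contract quoted earlier, it can be made concrete with a short sketch. The following Python class is a hypothetical, platform-independent illustration of a two-party wager: assets are put in, and a formula over data unknown at contract creation redistributes them automatically. All names and the oracle interface are illustrative assumptions; this is not Ethereum code.

```python
# A hypothetical sketch of Buterin's smart-contract definition: two parties
# deposit assets, and a formula over data unknown at contract creation
# redistributes them automatically, with no human arbitrator.
class WagerContract:
    def __init__(self, party_a: str, party_b: str, stake: int):
        self.balances = {party_a: 0, party_b: 0}
        self.stake = stake
        self.deposits = {party_a: False, party_b: False}
        self.settled = False

    def deposit(self, party: str) -> None:
        """Each party locks its stake into the contract."""
        if party not in self.deposits:
            raise ValueError("unknown party")
        self.deposits[party] = True

    def settle(self, winner: str) -> dict:
        """Once the outcome (e.g. reported by an oracle) is known, the code
        redistributes the pooled stakes; no third party can intervene."""
        if not all(self.deposits.values()) or self.settled:
            raise RuntimeError("contract not ready or already settled")
        self.balances[winner] = 2 * self.stake  # winner takes the pool
        self.settled = True
        return self.balances

contract = WagerContract("alice", "bob", stake=10)
contract.deposit("alice")
contract.deposit("bob")
print(contract.settle("alice"))  # {'alice': 20, 'bob': 0}
```

Note what the code delegates and what it does not: validation, storage and enforcement are mechanical, but the parties' initial consent to this particular contractual reality remains a social act, as argued above.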
Governments are, in this way, conceptualized as systems designed to protect certain central aspects of human existence – life for Hobbes, a substantive conception of liberty for Rousseau, and justice as fairness for Rawls. The perception that governments provide such protections is considered sufficient to legitimize the loss of certain rights and the allocation of power to specific supra-individual structures, such as constitutional monarchies or parliamentary democracies. In the sense that social contract theories do not merely explain why people agree to form a government to inaugurate certain political principles but also stipulate what these principles (ideally) are, they therefore offer certain abstract models of governance. The models of governance presented by social contract theories can be obtained by looking at how they postulate the process through which people collectively contracting are able to overcome the hypothetical initial situation.

Additionally, we need to explain why we believe a discussion of social contract theories could advance our understanding of how blockchain technologies configure forms of governance. In the context of some of the core writings on blockchain technologies, this can be explained with reference to the myriad occasions on which the social contract is mentioned (see e.g. Buterin 2014; Chuen 2015; Wood 2014). In these writings, the “social contract” is commonly conceptualized as the rule-based, distributed system containing the public ledger on which smart contracts are based. The crucial difference between smart contracts and the social contract in these writings is therefore that smart contracts are protocols enforcing specific contractual agreements that are built on top of and conditioned by the underlying system (such as Ethereum), which in its entirety can be referred to as “the social contract.” The social contract for blockchain technologies can thus be understood as the underlying model for the governance of blockchain-based interactions.

However, it is not at all self-evident to claim that the notion of a social contract as used in the context of blockchain governance can be said to reflect, or possibly even embody, aspects of the models of governance contained in philosophical social contract theories. To support this claim, we assert, as Golumbia argues, that technologies such as the Blockchain are not neutral but might be “deeply political” (2015: 118). In philosophy, scholars such as Ihde and Winner have shown that technologies can embody normative and political ideas.[20, 21] Georg Simmel offers a forceful example of an analysis based on this assumption in his work The Philosophy of Money.[22] Simmel argues that the empirical realizations of money (coins, credit) move towards a conceptual ideal of “pure money” (1900: 508), which is the expression and embodiment of his conceptual construct of exchange as a condition of economic value (1900: 79-87). Even though the conceptual ideal of pure money is unattainable in empirical reality,[23] it functions as an actual force that guides the design of our monetary system. Similarly, we could argue that even though the abstract models of governance offered by social contract theories are postulated as hypothetical ideals, they also inform real-world political constructs. As such, conventional political constructs such as constitutions in many ways reflect aspects of ideal models of governance explicated by social contract theories. Expanding on this idea suggests that technologies such as the blockchain might similarly reflect aspects of social contract theories, a view we will examine in the following sections.

## 3. The “Initial Situation” and Justification of Blockchain Governance

In this section we examine the extent to which the justification for governance enabled by blockchain technologies (blockchain governance) reflects one or more of the accounts of justification offered by social contract theories. The social contract theories of Hobbes and Rousseau aimed to justify the existence of a legitimate government by postulating a conceptual “state of nature,” or initial situation, populated by somewhat isolated individuals of roughly equal power and capacity. Rawls constructs a hypothetical “original position of equality” (1971: 11), which corresponds to the state of nature but puts the contracting individual behind a conceptual “veil of ignorance.” The initial situation serves as a rationale for such isolated individuals to agree to collectively relinquish (some of) their individual rights for the sake of forming a supra-individual structure of government. For Hobbes, a core feature of the state of nature is that it results in a high level of uncertainty for its inhabitants,[24] implying that individuals are unable to reach agreement on certain issues because they cannot trust that all parties involved will honor the agreement. This leads to the situation described by Chung as a constant potential for a “war of every man against every man” (Chung, 2015: 485), a state of affairs undesirable for the individuals living in this situation, which provides them with the justification to form a government.

Rousseau’s social contract theory is based on a notion of “initial situation” that is significantly different from that of Hobbes. Rousseau viewed the state of nature, the precivilized state of human society without government, as a peaceful, idyllic situation. It is only with the rise of institutions such as private property and money that an undesirable state of affairs arises.[25] The institutions created by people have corrupted society and have instantiated unjust forms of inequality between people. This institutional reality is what serves as Rousseau’s initial situation, which should be overcome by means of a specific social contract. In a similar vein, Rawls’s “original position” is meant to serve as a rationale for the contracting individuals to engage in a social contract able to promote justice as fairness for all its contracting parties. Behind the veil of ignorance, contracting parties are unaware of their own position (as defined by gender, race, class etc.) vis-à-vis the positions of the other contracting parties. Because an individual is placed behind the conceptual veil of ignorance, she is uncertain about her eventual position once the social contract is in place. This provides for the rationale and the justification for the individual to agree to a social contract that is as fair as possible for all contracting parties.
Before addressing the parallels, we need to acknowledge that the philosophical underpinning of blockchain governance differs from that of the social contract tradition, by being strongly aligned to anarchist and libertarian theories of social order, with many thinkers within this tradition, such as Nozick and Proudhon, arguing strongly against the notion of a social contract.[26, 27] Nevertheless, we will indicate below that some essential aspects of the justification for blockchain governance show significant similarities with justifications offered by social contract theories. It should be noted that it is impossible to refer to single scholars or single works in order to capture the established justification of blockchain governance. As such, any absolute claim of defining the “blockchain ideology” can be greeted with skepticism. However, we contend that by studying the core texts that support its most prominent instantiations, as exemplified by Ethereum in our research, we can at least construct a coherent account of the justification offered for blockchain governance.

To what extent can we say that justifications of blockchain governance reflect aspects of the types of justification for governance as offered by Hobbes, Rousseau or Rawls? The Ethereum community provides illuminating justifications of the two core features of the blockchain we discussed earlier: the public ledger and the decentralized system of enforcement of transactions. In the Ethereum white paper, it is argued that these two features solve two important political enigmas: people corrupting systems by means of fraud and counterfeiting, and the freeing of human beings from central political powers such as states and banks.[28] At face value, this outlook ties in with anarchist and libertarian critiques of authority. Such critiques claim that centralized powers like states and banks are easily corrupted and that groups of individuals are able to organize themselves in sophisticated ways in the absence of third-party institutions. As an alternative form of governance, proponents claim that through blockchain technologies autonomous individuals are capable of creating a self-governing community (or multiple communities) with enforceable rules of interaction without the requirement of any centralized (hierarchical) power structures.

In spite of these ideological tensions, some striking similarities between the justification of blockchain governance and the justification of governance offered by social contract theories can be observed. First of all, similar to the initial situation as conceptualized by Rousseau, blockchain governance is justified against the idea of an initial “pre-blockchain” society. Roio argues that events such as the blockade of payments to Wikileaks by the US government and major payment companies in 2010 have been important enablers of the theme he identifies as the “cypherpunk imagination,”[29] justifying the use of Bitcoin as an alternative payment system. As such, blockchain governance is justified by reference to an idealized initial, undesirable situation that is defined by the contemporary institutional reality of centralized institutions, which are subject to human arbitration.
Moreover, just as Rawls’s original position can be used as a justification of net neutrality, as Schejter and Yemini argue,[30] blockchain governance can be justified with reference to a notion of “neutrality.” In this regard, the technology itself functions as a “veil of ignorance” in that it is unable to discriminate between its users, in contrast to conventional institutions. However, the justification of blockchain governance differs significantly from the justifications offered by Rousseau and Rawls in two ways. Firstly, even though people interacting through blockchain applications could theoretically operate through a “veil of ignorance” – in the sense that they could enjoy a high level of pseudonymity and the technology would be structurally incapable of discriminating against them on the basis of who they are – power is still divided unequally. This is the case because, as the definition of the smart contract reveals, relations between contracting parties are defined in terms of digital assets (for instance in the form of a bet, with person A betting x amount of Bitcoins and person B y amount on the same predicted outcome of an event). Therefore, a situation of neutrality as defined by Rawls’s original position would be unattainable in the blockchain, because power-relations are always already predefined in the public ledger. Secondly, the conception of human nature guiding Rousseau’s justification for the social contract differs strongly from the conception of human nature offered for the justification of blockchain governance. Rousseau views human society as naturally peaceful and friendly, but argues that it has been corrupted by civilization. The blockchain community, in contrast, envisions human nature and especially the notion of “trust” in humans as the corrupting factors in contemporary civilizations. As O’Dwyer argues, the claim is made that trust in humans is undesirable and should be made redundant by replacing it with a different kind of trust, namely the “trust in the code.”[31]

These aspects of the justification of blockchain governance lead us to consider the justification made by Hobbes for the social contract. As Kavanagh and Miscione argue, a conceptual situation similar to the circumstances described by Hobbes is outlined by Nakamoto in his white paper on Bitcoin, framing the issue as a problem of “costs and payment uncertainties” between merchants and customers,[32] which causes distrust (understood as distrust between humans). Nakamoto’s account is similar to the one offered by Hobbes - both accounts envision the potential for corrupt behavior in a situation of uncertainty. This presupposition is consistent with the negative view of human nature expressed by Hobbes, which accepts that humans will engage in corrupt behavior if it serves their self-interest. A similar assumption seems to underlie the rationale for replacing trust in potentially corrupt humans by the incorruptible code of the blockchain. Additionally, as Rawls (1971: 238) and Chung (2015: 490) argue, the initial situation described by Hobbes in the context of his mechanical worldview can be understood as a game-theoretical problem. The equilibrium of a war of every man against every man can be expressed in game-theoretical terms, just as its solution, which is the social contract as described by Hobbes.
Similarly, both the initial situation (the pre-blockchain world) and blockchain governance are commonly grounded in a game-theoretical understanding of the world. As Buterin argues: “the same game theory that is the reason that you’re still alive is also the reason why the Bitcoin Blockchain is still alive.”[33] Eventually, the social contract as incorporated in Ethereum is seen as a game-theoretical mechanism that underlies all social interactions and only needs to be “facilitated” by blockchain technologies. This assumes that game theory can correctly predict human behavior as it “really” is and that this knowledge can be used to “engineer” social interaction in a virtual environment that functions like a game environment.

Our initial conclusions support the view that the justification offered for blockchain governance to a certain extent resembles justification accounts offered by social contract theories. It is most similar to the justification of the social contract presented by Hobbes, in that it is based on a rather negative assessment of human nature, being self-interested and potentially corrupt, and tends to reduce social interactions to game-theoretical problems. In contrast, the initial situation it presents resembles the scheme presented by Rousseau, in that the undesirable “pre-blockchain” society is defined by our institutional reality rather than by a state of nature lacking any form of government. Finally, we argue that blockchain governance seems to approximate Rawls’s original position, although it makes this position unattainable by rendering inequality between contracting parties a structural feature of the technology.

## 4. Modeling Sovereignty in Blockchain Governance

Having examined the theme of governance justification, we now examine models of governance, or more specifically identify ways in which the models of governance presented by blockchain technologies reflect aspects of the models of governance presented by social contract theories. By doing so, we do not intend to provide an account of how blockchain government actually works, for such an account would be highly speculative in the current state of affairs in which no instance of wholly functioning blockchain governance exists, but rather an account of similarities between models of governance as they are being claimed to manifest themselves through the use of blockchain technologies and those discussed by social contract theories. A central notion in social contract theories specified as a solution to the problem of the initial situation is the notion of “sovereignty.” This section will focus on this notion, examining the views of Hobbes, Rousseau and Rawls on the issue of sovereignty. In contrast to the previous section, in which our analysis relied on linking ideas from key philosophical texts with the views on justification of blockchain governance expressed by the blockchain community, we now develop our comparison with a focus on the core design features of the technology.

Hobbes views the creation of an absolute form of government, which he designates as the “Leviathan,” as the only rational way people could escape the miseries of their state of nature. By contracting together, people alienate all their rights to the Leviathan, which can be viewed as the sovereign power (such as a monarch) in abstract.
Hobbes describes the Leviathan as a “real Unitie of them all, in one and the same Person, made by Covenant of every man with every man … this is the Generation of that great Leviathan, or rather (to speake more reverently) of that Mortall God” (1651: 227). The Leviathan is where sovereignty – supreme authority – resides; and all people, having alienated their rights to the sovereign, are obligated to obey its decrees. Hobbes argues that the sovereign (be it one person or an assembly) has power over everyone else – all of whom are subjects – and “to the end he may use the strength and means of them all, as he shall think expedient, for their Peace and Common Defence” (1651: 228). The Leviathan is the sovereign, and once created it is totalitarian, despite having been created voluntarily by its subjects. Attaining sovereign power, Hobbes argues, occurs “when men agree amongst themselves, to submit to some Man, or Assembly of men, voluntarily, on confidence to be protected by him against all others. This latter, may be called a Political Common-wealth, or Commonwealth by Institution…” (1651: 228). The only rights that people have within such a commonwealth by institution are those granted to them by the sovereign, with the significant exception of the right to self-preservation. The Leviathan, as the absolute sovereign, cannot be questioned and must be obeyed; otherwise people have to face the threat of inevitable punishment.

Rousseau’s notion of the sovereign is in some ways similar to the view expressed by Hobbes. Rousseau suggests that the clauses of the social contract can be summarized as “the total alienation of each associate, together with all his rights, to the whole community; for, in the first place, as each gives himself absolutely, the conditions are the same for all; and, this being so, no one has any interest in making them burdensome to others” (1762: 191). Unlike Hobbes, however, Rousseau argues that “each man, in giving himself to all, gives himself to nobody; and as there is no associate over which he does not acquire the same right as he yields others over himself, he gains an equivalent for everything he loses, and an increase of force for the preservation of what he has” (1762: 192). In this way, if all associates agree on instituting a regime of property rights that applies the same conditions on all, no associate will defect from it. This is because anyone defecting from the agreement will, in addition, lose their property rights. Moreover, for Rousseau, the individual does not alienate her freedom when entering the social contract in the way that the individual for Hobbes does but rather voluntarily cooperates with others in order to increase her freedom while being still involved in the creation of laws and rules governing her life. For Rousseau, each individual has put “his person and all his power under the supreme direction of the general will, and, in our corporate capacity, we receive each member as an indivisible part of the whole” (1762: 192). Each person then, in uniting with others, “may still obey himself alone, and remain as free as before” (1762: 191). This freedom is due to the fact that, for Rousseau, sovereignty can never be alienated from the individuals forming the society and, as such, sovereignty resides not principally in a centralized assembly or monarch (as it does for Hobbes), but is always vested in the will of the people – in a decentralized manner. Rousseau considered that whilst assemblies or monarchs might attempt to usurp power, this is always illegitimate, for the sovereignty of the people is inalienable. Sovereignty, for Rousseau, is something that exists in and for all people who have taken part in the social contract. In other words, it does not reside in a central sovereign authority but resides, decentralized, in the agency of each member of a community. Therefore, Rousseau prefers a form of direct democracy (one man, one vote) as a model of governance and a high level of transparency of decision making for any type of representational governance, so that representatives can always be subjected to public scrutiny (Inston, 2010: 152).

The model of governance proposed by Rawls is more abstract compared to those of Hobbes and Rousseau, in that it does not propose a specific type of authoritarian or democratic rule (though Rawls is a strong supporter of democratic institutions) but rather a social contract conditioned by certain “principles of justice.” Rawls proposes two principles of justice that every contracting individual behind the “veil of ignorance” would rationally consent to (Rawls 1971: 53):

(1) “Each person is to have an equal right to the most extensive total system of equal basic liberties compatible with a similar system of liberty for all”
(2) “Social and economic inequalities are to be arranged so that they are both (a) reasonably expected to be to everyone’s advantage, and (b) attached to positions and offices open to all”

Thus, every model of governance should, according to Rawls, incorporate these principles in order to be justifiable. However, he also concedes that any sovereign should provide for a publicly maintained, effective schedule of penalties, “so men in the absence of coercive arrangements establish and stabilize their private ventures by giving one another their word” (1971: 305). Thereby, the sovereign makes sure that people reciprocally recognize promises made to one another that are based on common knowledge, i.e. the conditions of these promises should be publicly identified.

The model of governance offered by the Ethereum platform is perhaps best described by Binmore, who states that “a social contract is … an equilibrium profile of strategies, one for each citizen. When the social contract operates, each citizen will therefore be optimizing when he follows the rules of behavior prescribed by his strategy” (1998: 355).[34] A blockchain technology such as Ethereum can be said to provide its users with an “equilibrium profile of strategies” that are hard-coded in the blockchain protocol. Within this equilibrium profile, participants interact and are consenting by default with the agreed-upon rules in a particular smart contract; however, the limits of what kind of smart contracts could run on the Ethereum protocol are still unclear.
The Ethereum Wiki page claims: “ultimately, Ethereum could be used to run countries.”[35] Gavin Wood, a co-founder of Ethereum, sees the emerging and voluntary status of the social contract as important in shaping social interaction and as a significant force in human cooperation: [Ethereum’s use of blockchain technologies demonstrates that] “through the power of the default, consensus mechanisms and voluntary respect of the social contract, it is possible to use the internet to make a decentralized value-transfer system, shared across the world and virtually free to use.”[36]

To examine the extent to which conceptions of sovereignty in blockchain governance reflect the ideas of sovereignty discussed by social contract theories, we first consider the Leviathan, as presented by Hobbes, as a model of governance. Even though Hobbes and Nakamoto foresee different roles for the sovereign in their writings (understood respectively as the Leviathan and the consensus mechanism), there are striking similarities as well. Within a single blockchain, disobeying the rules is made impossible and will lead to exclusion from the system – i.e. the blockchain is totalitarian in terms of rule-enforcement, which makes it comparable to the Leviathan as described by Hobbes. Moreover, no blockchain can be altered or manipulated by the individuals who use it to contract with one another. In order to render fraud and counterfeit structurally impossible, once a person has contracted with someone else through the blockchain she has no other choice than to abide by its rules. Important to note, however, is that this structural impossibility only exists within the system that runs on the blockchain. Participants running the software can circumvent it by not using a certain blockchain technology or by switching between different blockchain technologies. As Rawls (1971: 453) concedes, the sovereign for Hobbes is a mechanism that stabilizes a system of human cooperation. Similarly, the blockchain can be understood as a mechanism for stabilizing a pre-given system of human cooperation such as a property regime or an insurance system. Any blockchain can therefore be seen as a created “institution,” a technological Leviathan (or “techno-leviathan” as expressed by Brett Scott)[37] that people voluntarily join. As a counterpoint to the totality of power assigned to the Leviathan for Hobbes, blockchain governance is not “absolute,” in the sense that no blockchain dominates the entire governance of a community, and as such it is unable to realize the ideal of the Leviathan expressed by Hobbes. In contrast to the Leviathan, the blockchain does not have the power or authority to kill those who use it to contract with one another and it cannot change its rules according to its own will. Hobbes argues that the Leviathan’s power is sustained by means of a constant threat of punishment whenever its subjects act against its decrees, raising the issue of whether blockchain governance establishes any such system of punishment.
There are some suggestions in the literature; for example, Chuen argues, in discussing the role of the social contract for blockchain technologies: “by social contract, we mean a system for which to be part of it means obeying the rules.”[38] These rules, however, are not enforced “under the threat of physical action or exclusion … but on the blockchain, the rules cannot be broken and so exclusion is implicit” (Chuen, 2015: 391). Thus, enforcement of the social contract by means of blockchain technologies differs from the Hobbesian idea of enforcement by threat of physical punishment. The majority of nodes within the system act as the sovereign by enforcing its rules on all of its participants. This design feature of the blockchain brings us to Rousseau’s version of social contract theory. Rousseau insisted that “in order that the social contract may not be an empty formula, it tacitly includes the undertaking, which alone can give force to the rest, that whoever refuses to obey the general will, shall be compelled to do so by the whole body,” or infamously “this means nothing less than that he will be forced to be free” (1762: 195). Similarly, the consensus mechanism built into blockchain technologies ensures that those interacting through a blockchain application are compelled to abide by its rules. In an illuminating presentation, Buterin explains that decentralized communities using a blockchain technology will instantiate “recursive punishment” systems.[39] This implies that, although a node controlled by a miner is free to go against the “general will” of the blockchain, it is deterred from doing so because both this node and other nodes following the same strategy will eventually be punished by being excluded from the system; or more precisely by being excluded from the main blockchain and working on another chain that represents no value.

The question of course is whether implicit exclusion from a blockchain is a sufficient deterrent to ensure that all its members always obey its rules at all times. The point can be addressed with reference to the extent to which one blockchain dominates one or more aspects of social life. A simple illustration of this is to imagine if property rights in the context of the Internet of Things (IoT) were to be organized through one dominant blockchain application. Exclusion from this blockchain would mean that the physical devices owned by an excluded individual could cease to function, and thus the punishment of exclusion would be sufficiently serious to deter people from individually contravening the rules laid down by the blockchain. In addition to the matter of rule compliance, blockchain governance also reflects Rousseau’s idea of sovereignty, at least to a greater extent than the highly centralized idea of sovereignty expressed by Hobbes. Similar to Rousseau’s ideal of radical democracy, sovereignty on the blockchain is implemented in a decentralized manner: all the nodes together enforce the validity of transactions and therefore reflect consensus with regards to the contractual agreements realized through the blockchain. In theory at least, Rousseau’s ideal of a general assembly that encompasses all the members of a community could be technically realized in blockchain governance. All members of a blockchain community could be permitted to propose their own smart contracts and vote on contracts proposed by others.
There is, however, a significant difference between Rousseau’s concept of the General Will and sovereignty in blockchain governance, which in many ways represents instead the “will of all.” The General Will, in Rousseau’s conception, is primarily concerned with the common interest, in contrast with the “will of all” as implemented in blockchain governance, which is no more than the sum of the individual wills of its members. The blockchain design lacks any conception of a common interest beyond facilitating autonomous individuals contracting between themselves. The blockchain, then, is based on a limited conception of the “common good,” one that is more consistent with the ideals of contemporary capitalism than with the Republican ideals of Rousseau. Rousseau also provides a warning regarding the distribution of power in contract-based political organization that remains pertinent to blockchain technologies. These technologies instantiate distributed networks that can theoretically comprise all those who participate in them. The power resides with those who control the nodes, ensuring that there can in theory be no central power or authority as long as a sufficient number of non-related nodes partake in the network. Arguably then, within the blockchain, sovereignty is distributed at the technological level, rather than explicitly at the political level. In principle, it is possible for the miners to unite and gain control of the blockchain, similar to the risk of elected representatives attempting to usurp sovereignty and limit it only to themselves, as foreseen by Rousseau. Such a concern is raised in current debates on the “centralization” of Bitcoin, which focus on the risks of pools of miners coordinating their mining efforts to undermine the system.[40] There seems to be no guarantee that all subjects of a hypothetical blockchain government would act under the condition that Rousseau portrayed as “freedom and equality of all” (Inston 2010: 175).

This concern can be addressed with reference to Rawls’s idea of sovereignty. Blockchain governance seems to have the capacity to support Rawls’s first principle of justice, since people contracting through the blockchain would all enjoy the same rights and liberties. The blockchain does not discriminate against its users based on who they are, and as such, in theory all users are able to contract with one another while enjoying the same, though limited, digital rights and liberties, such as the right to smart property or the right to freedom of expression on the blockchain. Rawls’s second principle of justice seems, however, to be very hard – if not impossible – to realize in blockchain governance. In accordance with the libertarian ideas that support blockchain governance, such governance seems to be designed to exclude hard-coded ideas of distributive justice. Firstly, there are no political offices “open to all” in blockchain governance able to intervene in the way rights and assets are distributed amongst its members. Nobody is able to superimpose a redistribution of rights and assets because the only distribution that is structurally enabled in blockchain governance is the one that happens to be the equilibrium resulting from the interacting nodes.
Moreover, no limitations exist on great inequalities in the distribution of rights and assets, especially because individuals or companies can own multiple nodes in the system. This last point has been made strikingly clear in the aftermath of the recent “DAO attack.” “The DAO” is a project that runs on the Ethereum protocol but is a separate initiative that can be seen as the first high-profile implementation of the idea of a Decentralized Autonomous Organization. Individuals are able to arrange smart contracts in the DAO and join them by pledging “DAO tokens” that can also be used to vote for proposals that designate how the tokens belonging to a smart contract should be spent. By exploiting a bug in the source code of the DAO, an attacker managed to obtain an equivalent of 60 million USD in the cryptocurrency Ether.[41] We will not discuss the technical details of this attack, but focus instead on the “ideological” conflict it created in the Ethereum community. Although the cryptocurrency was obtained by exploiting a weakness in the source code, the attacker obtained the Ether “legally” within the system (recall the earlier discussion that a blockchain can be considered as a “form of law”). The response of the Ethereum community was split, with some members arguing that the attacker should be allowed to keep his “reward” and that the software actually worked as it was intended to, while other members argued that the basic code of the DAO should be rewritten to prevent the attacker from claiming the Ether obtained in the attack. This division within the community illustrates a tension concerning the justifiability of existing governance models. The argument remains that sovereignty resides in the blockchain, and that the mechanisms of interaction that existed at the moment when people consented to abide by the internal rules of the DAO are the only ones that should validate transactions. This perspective is, though, in opposition to the widely held view that the distribution of Ether after the attack is unfair and that the Ether should be redistributed by means of a “hard fork” that would in effect circumvent the sovereignty of the current blockchain. A Rawlsian argument could be constructed to support this latter argument. Behind a “veil of ignorance” in which nobody knows their position (including the attacker), the preference of the least advantaged (the individual losers from the attack) would be endorsed. A particularly compelling argument can be made on the basis that the attacker is the sole beneficiary, while the losing parties are not merely those losing part of their investment, but the entire network, because the DAO as a whole lost value due to the attack. This conflict raises the issue of whether a blockchain technology such as the DAO can offer a justifiable model of governance while lacking an external governance structure to function as a check on the power of the technology.
As Yarvin argues: “one of the governance problems of blockchains, related to the fundamental error of decentralization theater, is the failure to build deliberative institutions on top of the ‘parliament of miners.’”[42] While the DAO in question was relatively small in both scale and scope, with few contracts in operation at the moment of the attack, if in the future governance of crucial parts of our social infrastructure, such as identity registers or property rights, were to be organized in the form of DAOs, these conflicts might cause great social unrest, rebellion and possible challenges concerning the sovereignty of the blockchain. This illustrates clearly that issues regarding how to model governance on the blockchain, and how to govern the blockchain itself, have yet to be resolved and might yet become relevant research topics in political philosophy and political issues in their own right.

## 5. Conclusion

In this paper, we investigated the way in which the justification and modeling of blockchain governance can be said to reflect core ideas in social contract theories. The following are our main findings:

- Accounts of justification of blockchain governance are informed by a conception of human nature that is similar to the account offered by Hobbes; however, it is similar to Rousseau’s justification of governance in that it is seen as a solution to an existing structure of corrupted institutions.
- Blockchain governance in many ways reflects Rawls’s idea of a “veil of ignorance,” being non-discriminatory, though it negates this idea because power-relations are predefined in the public ledger.
- The blockchain reflects the idea expressed by Hobbes of a totalitarian sovereign in terms of rule-enforcement, coupled with Rousseau’s idea of decentralized governance and Rawls’s idea of equal rights and liberties for all (that is, for all the nodes).
- Blockchain governance fails, however, to incorporate Rousseau’s idea of the common good, and fails to implement conditions of distributive justice that Rawls thought to be essential for overcoming the initial situation.

A first implication of our discussion has been to contest the idea that the blockchain is a “neutral,” non-political technology. Instead, being a transformative technology, its political implications are significant because the applications that the technology affords can reconfigure economic, legal, institutional, monetary and ultimately broader socio-political relationships.[43] By discussing the blockchain in light of social contract theories, we have tried to make explicit what kind of political justifications for blockchain governance are offered and what political model of governance it represents. Overall, it seems that the justification and modeling of governance presented by Hobbes, though far removed from anarchist and libertarian ideals that fuel many of the efforts for designing blockchain technologies, offers an insightful comparison with blockchain governance. The justification of blockchain governance on the basis of a negative view of human nature and game-theoretical presuppositions, and its modeling as a totalitarian process in the sense that its authority is unquestionable once voluntarily joined, brings it surprisingly close to the social contract theory expressed by Hobbes.
Although Rousseau’s model of governance offers some striking similarities with blockchain governance, based on his focus on decentralization of power and punishment through exclusion, Rousseau’s ideas of governance in support of the common good and governance based on free and equal participation of community members seem to be lacking in blockchain governance. In a more radical reading, it could be argued that Rousseau denounces any delegation of governance to a technology when he stresses: “The general will is ultimately unrepresentable because it entails a continuous act of willing which leaves its identity forever incomplete and thus available to new demands and reformulations” (Inston 2010: 130). Thus, any technology instantiating human governance along fixed lines would be essentially inadequate. Finally, Rawls’s social contract theory seems to show only limited similarities with blockchain governance. Although a blockchain might seem to offer a limited form of a “veil of ignorance” for people contracting through it, it lacks the essential elements of distributive justice that would make it a justifiable form of governance in Rawls’s terms. l e d g e r j o u r n a l . o r g ISSN 2379-5980 (online) **147** DOI 10.5915/LEDGER.2016.62 ----- LEDGER VOL 1 (2016) 134 151 While we feel these conclusions are insightful, and appropriately evidenced, a number of important limitations of our inquiry are worthy of mentioning. Firstly, our discussion of social contract theories has been necessarily incomplete, both by only addressing three of their prominent instantiations but also by discussing only a limited number of their central aspects (focusing on their notions of the initial situations and sovereignty, and thereby leaving out discussions of issues such as transparency and consent). Secondly, we have only focused on a limited number of blockchain technologies, notably on Ethereum, omitting from our analysis interesting examples such as Bitnation that might have influenced parts of the argument.[ 44] Thirdly, and perhaps most importantly, our analysis is based on a technology that is still in its development phase, which means that empirical support for much of our discussions is lacking or in its infancy. In the future blockchain technologies might be developed in ways that we have failed anticipate in this paper, which resolve the governance dilemma, such as providing mechanisms of distributive justice, for example. Therefore, our paper should be seen as an exploration of the potential implications of blockchain governance and in providing the scope for future research on this topic in the field of political philosophy. ## Acknowledgements The ADAPT Centre for Digital Content Technology is funded under the SFI Research Centres Programme (Grant 13/RC/2106) and is co-funded under the European Regional Development Fund. ## Author Contributions WR provided the core insights about the elements of social contract theories relevant to our investigation (40%), FOB interpreted these insights in relation to the core design features of blockchain technologies (30%) and PH added to the paper by incorporating the views of the key Ethereum community members (30%). All equally contributed to manuscript preparation. ## Notes and References 1 Karlstrøm, H. “Do libertarians dream of electric coins? The material embeddedness of Bitcoin.” _Distinktion: Scandinavian Journal of Social Theory_ **15.1 29 (2014)** http://doi.org/10.1080/1600910X.2013.870083 2 Swan, M. 
Blockchain: Blueprint for a New Economy. Sebastopol: O’Reilly Media Inc (2015)

3 Kostakis, V., Giotitsas, C. “The (A)Political Economy of Bitcoin.” _tripleC: Journal for a Global Sustainable Information Society_ **12.2 431-440 (2014)**

4 Barton, P. Bitcoin and the Politics of Distributed Trust (Senior Thesis). Swarthmore College (2015)

5 Golumbia, D. “Bitcoin as Politics: Distributed Right-Wing Extremism.” In _MoneyLab Reader: An Intervention in Digital Economy_. Amsterdam: Institute of Network Cultures 117–131 (2015)

6 DuPont, Q. “The Politics of Cryptography: Bitcoin and The Ordering Machines.” _Journal of Peer Production_ **1.4 1–10 (2014)**

7 Kavanagh, D., Miscione, G. “Bitcoin and the Blockchain: A coup d’état in Digital Heterotopia?” In Critical Management Studies Conference. Leicester (2015)

8 Dupont, Q., Maurer, B. “Ledgers and Law in the Blockchain.” Kings Review (23 June 2015) http://kingsreview.co.uk/magazine/blog/2015/06/23/ledgers-andlaw-in-the-blockchain/

9 Ethereum. “About the Ethereum Foundation” (accessed 25 January 2016) https://www.ethereum.org/foundation

10 Wood, G. “Ethereum: A Secure Decentralised Generalised Transaction Ledger.” Ethereum Project Yellow Paper. Gavwood (accessed 28 November 2015) http://gavwood.com/Paper.pdf

11 Alexander, R. “The First Blockchain Wedding.” Bitcoin Magazine (accessed 3 December 2015) https://bitcoinmagazine.com/articles/first-blockchain-wedding-21412544247

12 Buterin, V. “A next-generation smart contract and decentralized application platform.” Ethereum 1–36 (2014) http://buyxpr.com/build/pdfs/EthereumWhitePaper.pdf

13 Wright, A., De Filippi, P. “Decentralized Blockchain Technology and the Rise of Lex Cryptographia.” Social Science Research Network 2580664 (2015) http://papers.ssrn.com/abstract=2580664

14 Ethereum. “What is Ethereum?” Etherscripter (accessed 3 December 2015) http://etherscripter.com/what_is_ethereum.html

15 Buterin, V. “DAOs, DACs, DAs and More: An Incomplete Terminology Guide.” Ethereum (accessed 12 July 2016) https://blog.ethereum.org/2014/05/06/daos-dacsdas-and-more-an-incomplete-terminology-guide/

16 Skyrms, B. Evolution of the Social Contract. Cambridge: Cambridge University Press (1996)

17 Hobbes, T. Leviathan. London: Andrew Crooke (1651)

18 Rousseau, J.-J. The Social Contract and Discourses. London: Everyman (1762)

19 Rawls, J. A Theory of Justice. Cambridge, Massachusetts: Harvard University Press (1971)

20 Ihde, D. Postphenomenology and Technoscience. New York: SUNY Press (2009)

21 Winner, L. “Do artifacts have politics?” _Daedalus_ **109.1 121-136 (1980)**

22 Simmel, G. The Philosophy of Money (3rd ed.). New York: Routledge Classics (1900)

23 Dodd, N. “On Simmel’s Pure Concept of Money: a Response to Ingham.” _European Journal of Sociology_ **48.2 273-294 (2007)**

24 Chung, H. “Hobbes’s State of Nature: A Modern Bayesian Game-Theoretic Analysis.” _Journal of the American Philosophical Association_ **1.3 485-508 (2015)**

25 Inston, K. Rousseau and Radical Democracy. London: Continuum International Publishing Group (2010)

26 Nozick, R. Anarchy, State and Utopia. Oxford: Blackwell Publishing (1974)

27 Simon, Y., Kuic, V. “A Note on Proudhon’s Federalism.” _Publius_ **3.1 19-30 (1973)**

28 Buterin, V.
“A Next-Generation Smart Contract and Decentralized Application Platform.” Ethereum (2013) https://www.ethereum.org/pdfs/EthereumWhitePaper.pdf

29 Roio, D.J. “Bitcoin, the End of the Taboo on Money.” Dyne.org digital press (April 2013) https://files.dyne.org/readers/Bitcoin_end_of_taboo_on_money.pdf

30 Schejter, A., Yemini, M. “‘Justice, and Only Justice, You Shall Pursue’: Network Neutrality, the First Amendment and John Rawls’s Theory of Justice.” _Michigan Telecommunications and Technology Law Review_ **14.1 137-174 (2007)**

31 O’Dwyer, R. “The Revolution will (not) be Decentralized: Blockchains.” Commons Transition (2015)

32 Nakamoto, S. “Bitcoin: A Peer-to-Peer Electronic Cash System.” No publisher. 1 (2008) https://bitcoin.org/bitcoin.pdf

33 Buterin, V. “Vitalik Buterin: Cryptoeconomic Protocols In the Context of Wider Society.” YouTube (accessed 26 January 2016) https://www.youtube.com/watch?v=S47iWiKKvLA

34 Binmore, K. Game Theory and the Social Contract (3rd ed.). Cambridge, Massachusetts: Massachusetts Institute of Technology 355 (1998)

35 No Author. “Ethereum wiki.” GitHub (accessed 28 November 2015) https://github.com/ethereum/wiki/wiki

36 Wood, G. “Ethereum: A Secure Decentralised Generalised Transaction Ledger.” Gavwood (accessed 28 November 2015) http://gavwood.com/Paper.pdf

37 Scott, B. “Visions of a Techno-Leviathan: The Politics of the Bitcoin Blockchain.” E-International Relations (accessed 16 July 2016) http://www.eir.info/2014/06/01/visions-of-a-techno-leviathan-the-politics-ofthe-bitcoin-blockchain/

38 Chuen, D. L. K. Handbook of Digital Currency: Bitcoin, Innovation, Financial Instruments, and Big Data. London: Elsevier Inc. (2015) http://doi.org/10.1016/B978-0-12-802117-0.09989-6

39 Buterin, V. “Vitalik Buterin: Cryptoeconomic Protocols In the Context of Wider Society.” YouTube (accessed 26 January 2016) https://www.youtube.com/watch?v=S47iWiKKvLA

40 Fargo, S. “The Economics of Bitcoin Mining.” Insidebitcoins.com (accessed 26 January 2016) http://insidebitcoins.com/news/the-economics-of-bitcoin-miningcentralization/31833

41 Reutzel, B. “The DAO Shows Blockchain Can’t Code Away Social Problems.” CoinDesk (accessed 18 July 2016) http://www.coindesk.com/system-problems-socialissues-daos-structure/

42 Yarvin, C. “The DAO as a lesson in decentralized governance.” Urbit (accessed 18 July 2016) https://urbit.org/blog/dao/

43 Coeckelbergh, M., Reijers, W. “Cryptocurrencies as narrative technologies.” _SIGCAS Computers & Society_ **45.3 172-178 (2015)**

44 Bitnation. “Governance 2.0: borderless, decentralized, voluntary.” Bitnation (accessed 18 July 2016) https://bitnation.co/main/
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.5195/LEDGER.2016.62?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.5195/LEDGER.2016.62, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "http://ledger.pitt.edu/ojs/index.php/ledger/article/download/62/51" }
2016
[ "JournalArticle" ]
true
2016-12-21T00:00:00
[ { "paperId": "12b5670586ca91df18ec710f8c64d443b2c40642", "title": "The Social Contract and The Discourses" }, { "paperId": "ec83c5beb1f0b3154e993af03528e5537fe5d3f9", "title": "Hobbes's State of Nature: A Modern Bayesian Game-Theoretic Analysis" }, { "paperId": "52e1a5ed3289fe7ab0682685c748ad627c809179", "title": "Handbook of Digital Currency: Bitcoin, Innovation, Financial Instruments, and Big Data" }, { "paperId": "1486b782f231b5bf1e9ae366e156a8deac75f96f", "title": "Bitcoin and the Blockchain: A Coup D''tat in Digital Heterotopia?" }, { "paperId": "21ecfe7dbe0e886365bf84f085fd9e5f5b1e8aaa", "title": "Bitcoin as Politics: Distributed Right-Wing Extremism" }, { "paperId": "2b2f1f3c6b2c02234cc58023bf2fcc7f5cd506e4", "title": "Decentralized Blockchain Technology and the Rise of Lex Cryptographia" }, { "paperId": "f4135583394f6e9fbd5e7b691af6eb1b9d2d7da8", "title": "THE (A)POLITICAL ECONOMY OF BITCOIN" }, { "paperId": "97fddbbfd681bce9eeb8e0a013353b4d5b2ba0db", "title": "Blockchain: Blueprint for a New Economy" }, { "paperId": "664741c8e6ad9c1be441ec275f93673904e1b5f0", "title": "Evolution of the social contract" }, { "paperId": "554f7c3ca844ec24781ae65a61c117b6cfb54c10", "title": "Do libertarians dream of electric coins? The material embeddedness of Bitcoin" }, { "paperId": "2bb895e8481cadfd93af1f7195128456c63eec36", "title": "Postphenomenology and Technoscience" }, { "paperId": "e8f6af61c207f7b539c97df81c48bf4ada7a0268", "title": "Rousseau and Radical Democracy" }, { "paperId": "29fa605bbed535f79d5d064d1d1d7c8e009c000d", "title": "'Justice, and Only Justice, You Shall Pursue': Network Neutrality, the First Amendment and John Rawls' Theory of Justice" }, { "paperId": "644ece4b3453132b2762688a49f8c5d425a168cb", "title": "On Simmel's Pure Concept of Money: A Response to Ingham" }, { "paperId": "a91eea9e563867c034f4502bde23ddfd826e5857", "title": "The Philosophy of Money" }, { "paperId": "2ac40a8db8a94329f86173597a0996ee8e4a0b0b", "title": "ANARCHY, STATE, AND UTOPIA" }, { "paperId": "f695a7a72f8ce3fef008badd13b7abffeaf1c9da", "title": "You Shall Pursue : Network Neutrality , the First Amendment and John Rawls ' s Theory of Justice" }, { "paperId": null, "title": "The DAO Shows Blockchain Can’t Code Away Social Problems." }, { "paperId": null, "title": "DAOs, DACs, DAs and More: An Incomplete Terminology Guide." }, { "paperId": null, "title": "The economics of Bitcoin Mining." }, { "paperId": null, "title": "Visions of a Techno-Leviathan: The Politics of the Bitcoin Blockchain." }, { "paperId": null, "title": "Vitalik Buterin: Cryptoeconomic Protocols In the Context of Wider Society." }, { "paperId": "65f86dbe2c3f11e679877475ed2f06e31a199590", "title": "Bitcoin and the Politics of Distributed Trust" }, { "paperId": "0dbb8a54ca5066b82fa086bbf5db4c54b947719a", "title": "A NEXT GENERATION SMART CONTRACT & DECENTRALIZED APPLICATION PLATFORM" }, { "paperId": null, "title": "The First Blockchain Wedding." }, { "paperId": null, "title": "Crypto currencies as narrative technologies." }, { "paperId": null, "title": "The Revolution will (not) be Decentralized: Blockchains." }, { "paperId": "3c50bb6cc3f5417c3325a36ee190e24f0dc87257", "title": "ETHEREUM: A SECURE DECENTRALISED GENERALISED TRANSACTION LEDGER" }, { "paperId": null, "title": "The Politics of Cryptography: Bitcoin and The Ordering Machines." }, { "paperId": null, "title": "“ Bitcoin , the End of the Taboo on Money . ” Dyne . 
org digital press ( April" }, { "paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": "dddae1b11f0627aa3b3eece0058d8aba4d7ef9a0", "title": "Game theory and the social contract" }, { "paperId": "57417f6ea6162379424f866286d8a3e3a7acd546", "title": "A NOTE ON PROUDHON'S FEDERALISM" }, { "paperId": null, "title": "Github (accessed 28" }, { "paperId": null, "title": "“ Governance 2 . 0 : borderless , decentralized , voluntary" }, { "paperId": null, "title": "What is Ethereum?" } ]
15,102
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Medicine", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Medicine", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01f1f519cd65038179cdb269060906ef40361df2
[ "Computer Science", "Medicine" ]
0.895921
Audited credential delegation: a usable security solution for the virtual physiological human toolkit
01f1f519cd65038179cdb269060906ef40361df2
Interface Focus
[ { "authorId": "38806906", "name": "A. Haidar" }, { "authorId": "3168773", "name": "S. Zasada" }, { "authorId": "1725598", "name": "P. Coveney" }, { "authorId": "145250738", "name": "A. Abdallah" }, { "authorId": "3031017", "name": "B. Beckles" }, { "authorId": "2218396209", "name": "Mike A. S. Jones" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": "692a1437-389a-429a-b9b0-7a8182722f06", "issn": "2042-8898", "name": "Interface Focus", "type": "journal", "url": "http://rsfs.royalsocietypublishing.org/" }
null
Interface Focus (2011) 1, 462–473 doi:10.1098/rsfs.2010.0026 Published online 30 March 2011

# Audited credential delegation: a usable security solution for the virtual physiological human toolkit

## Ali N. Haidar[1], Stefan J. Zasada[1], Peter V. Coveney[1],*, Ali E. Abdallah[2], Bruce Beckles[3] and Mike A. S. Jones[4]

1 Centre for Computational Science, University College London, 20 Gordon Street, London WC1H 0AJ, UK
2 E-Security Group, London South Bank University, 103 Borough Road, London SE1 0AA, UK
3 University of Cambridge Computing Service, Pembroke Street, Cambridge CB2 3QH, UK
4 Research Computing Services, Devonshire House, Precinct Centre, The University of Manchester, Manchester M13 9PL, UK

We present applications of audited credential delegation (ACD), a usable security solution for authentication, authorization and auditing in distributed virtual physiological human (VPH) project environments that removes the use of digital certificates from end-users’ experience. Current security solutions are based on public key infrastructure (PKI). While PKI offers strong security for VPH projects, it suffers from serious usability shortcomings in terms of end-user acquisition and management of credentials which deter scientists from exploiting distributed VPH environments. By contrast, ACD supports the use of local credentials. Currently, a local ACD username–password combination can be used to access grid-based resources while Shibboleth support is underway. Moreover, ACD provides seamless and secure access to shared patient data, tools and infrastructure, thus supporting the provision of personalized medicine for patients, scientists and clinicians participating in e-health projects from a local to the widest international scale.

Keywords: grid security; e-health security; information assurance; security wrappers

1. INTRODUCTION

Within the virtual physiological human (VPH) initiative (www.vph-noe.eu), grid infrastructure provides access to a wide range of computing resources distributed across multiple administrative domains. Scientists and clinicians need to use such resources to perform patient-specific modelling and simulation that draws on the medical characteristics of an individual patient. Decision-support systems based on patient-specific computer simulation hold the potential to revolutionize the way clinicians plan courses of treatment for patients [1]. This leads immediately to the question of how to address information security within the VPH initiative. As high-profile security breaches and data loss are frequent headline news [2,3], a usable security solution is of critical importance for VPH projects. There are several pieces of legislation, such as the UK Data Protection Act, the EU Data Protection Directive and the US Health Insurance Portability and Accountability Act (HIPAA), that make it a legal requirement for VPH partners to collect, hold and process patient data in a secure way [4].

*Author for correspondence (p.v.coveney@ucl.ac.uk).
Electronic supplementary material is available at http://dx.doi.org/10.1098/rsfs.2010.0026 or via http://rsfs.royalsocietypublishing.org.
One contribution of 17 to a Theme Issue ‘The virtual physiological human’.
Security is also needed to protect VPH projects from the consequences of unauthorized disclosure of medical information, including negative publicity, legal liabilities and fines, and from unauthorized modification of patient data used in VPH project environments, which may lead to incorrect patient treatment resulting in loss of life, or to identity theft, itself currently a matter of considerable concern. Hence, authentication, authorization and auditing security mechanisms are key requirements for any VPH system using patient data to be compliant with information security standards and to avoid legal liability. Another major problem faced by end-users and administrators of grid-based VPH environments arises in connection with the usability of the security mechanisms deployed [5]. Many of the existing computational grid security infrastructures use public key infrastructure (PKI) and X.509 digital certificates as the means to achieve authentication and authorization security goals. For instance, Globus (www.globus.org), UNICORE (www.unicore.eu), the virtual organization membership service (VOMS) [6] and the community authorization service [7] are all based on PKI [8]. However, it is well documented that such security solutions lack user friendliness [5,9] for both administrators and end-users, and such friendliness is essential for the uptake of any VPH solution. The problems stem from the process of acquiring X.509 digital certificates, which can be a lengthy one, including the generation of proxy certificates to get access to remote resources as part of the authentication process (see the electronic supplementary material, §1). As a result, many users engage in practices which substantially weaken the security of the environment, such as sharing the private key of a single personal certificate, in order to get on with their tasks. End-users, such as scientists or clinicians who are not security experts, are concerned with the results of the analysis they perform on such grids rather than with acquiring and using digital certificates [5]. Administrators are concerned with setting up virtual organizations (VOs) and administering security infrastructure in an efficient way. Resource providers are concerned with securing access to their shared resources, tracing users responsible for performing tasks on their resources, and avoiding the consequences of security breaches, including negative publicity and fines. Moreover, there is a need within the VPH initiative for a security solution that can be easily integrated with the tools provided by the VPH Toolkit [10]. These software tools have been developed by various partners and third parties using different programming languages to access and process patient medical data. Without such security, each set of VPH tools would need to have a ‘hard wired’ security extension in order to be compliant with data security standards. This would also mean that VPH users would have to maintain credentials for all these VPH tools, which would be difficult to manage and would probably deter clinical uptake of VPH approaches. This paper describes the application of the audited credential delegation (ACD) [11,12] security solution to address authentication, authorization and auditing security goals within grid-related projects, including VPH and many others. We show how ACD satisfies both security and usability requirements.
We demonstrate how ACD can be used to set up multiple VOs that have specific goals within the VPH initiative, to manage dynamic groups of users wishing to access various resources, and to provide VO administrators with tighter control of users’ actions as well as identity management. ACD is more than simply a security layer. Existing solutions such as MyProxy, Shibboleth and SARoNGS only provide credential repositories to store short-lived X.509 certificates (MyProxy), web-based single sign-on (Shibboleth), and web portals to access grid resources using a combination of Shibboleth and VOMS (SARoNGS) [9,13]. None of these solutions provides a holistic VO-controlled security solution in the way ACD does. We have successfully integrated ACD with the functionality of the application hosting environment (AHE) [14], lightweight grid middleware that allows the user to run applications on the grid, to construct a VO with tight security controls on identities and actions while providing a set of services allowing users to interact with grid resources without requiring specific knowledge of the details of each resource they wish to use. In addition, we have integrated the authentication, authorization and basic auditing of ACD with the Individualized MEdiciNe Simulation Environment (IMENSE) [15], developed within the ContraCancrum Project (www.contracancrum.eu), to provide secure access to clinical data and tools. The functionality used in the environment includes imaging data annotation and analysis, and the running of simulations and composite tasks (workflows) of considerable complexity on remote grid resources using patient data. The integration of IMENSE with ACD provides assurance about the confidentiality and integrity of patient data because only authorized scientists and clinicians are able to view and modify patients’ clinical records, as well as having easy and controlled access to remote grid resources using familiar authentication mechanisms. The paper is organized as follows. Section 2 gives a brief overview of the current security challenges encountered within VPH, namely enabling scientists to access grid infrastructures and providing secure access to shared patient data. Section 3 provides a brief overview of common VPH projects’ security requirements. Section 4 presents a description of ACD. Sections 5 and 6 describe two case studies which demonstrate how ACD can be integrated with VPH environments to enable secure and usable access to patient data and grid infrastructures. Section 7 discusses related work, while §8 contains a discussion and conclusions.

2. OVERVIEW OF CURRENT SECURITY ISSUES WITHIN VIRTUAL PHYSIOLOGICAL HUMAN

This section describes two major security issues encountered in VPH environments. The first concerns the complexity of current mechanisms for accessing grid resources; the second addresses secure access to shared patient data for VPH collaborators.

2.1. Access to grid resources

To illustrate the complexity of current mechanisms for accessing grid resources, such as those provided by the UK National Grid Service (NGS) (www.ngs.ac.uk), US TeraGrid (www.teragrid.org) and EU DEISA (www.deisa.eu), we briefly describe the current steps needed by a scientist prior to running any application on a grid. For more details, the reader is referred to Haidar et al. [16] and the electronic supplementary material.
The first step is to acquire a digital certificate. There are three processes involved in this step, each of which has a mean duration of one working day. The certificate authority (CA) informs the registration authority (RA) that a user has applied for a certificate (1 day). The RA contacts the user and arranges a face-to-face visit (1 day); the CA then issues the certificate (1 day). The average scenario takes about three days, which is too long. In the second step, the user is required to get authorization to access the resources offered by the resource provider. In our experience, this step takes between three working days and two weeks, but only needs to be performed once. In the final step, end-users have to configure their chosen client applications themselves, including the Globus toolkit, the UNICORE Client and the AHE client, which are used to access the grid with a certificate. The resource providers patently cannot do this because they have no control over, or access to, the end-users’ machines. An exception would be where the user invokes a web portal. All in all, the above steps amount to a lengthy and complicated process which certainly deters many potential VPH users from exploiting the enormous power locked up within grid resources.

2.2. Secure access to shared patient data

Currently, scientists working within VPH projects collect pseudonymized or anonymized patient data from hospitals (this may include patient records, histopathological and molecular data, magnetic resonance imaging, X-ray computed tomography and positron emission tomography imaging data) and upload them to their VPH environments. These data can be stored in a centralized data warehouse or distributed across several administrative domains. When the data reside within an environment managed by a VPH research group, it is by no means clear what security measures are taken to protect them. Recent studies [5,17] have shown that many VPH and other e-Science projects do not have adequate security solutions in place to protect patient data. Although patient data are anonymized or pseudonymized by the providing hospital, the individual concerned can still conceivably be identified in various cases. For example, genetic sequence data taken from a person at an interview, whose identity is therefore known, could be compared with anonymized data stored in a database; if a match were found, that person’s medical status would then be revealed. An incident reported in 2008 [18], where a nurse’s medical status was revealed publicly in an unauthorized way by a colleague in the hospital where she worked, illustrates the impact of such breaches of confidentiality. The nurse’s medical history showed that she had been treated for HIV. The revelation resulted in her contract not being renewed by the hospital and in her colleagues at work knowing about her disease. The hospital was ordered to pay the nurse €14,000 in damages and €20,000 in costs. Therefore, there is a very obvious need for a secure solution that enables VO-controlled access to patient data within VPH projects to ensure patient confidentiality and integrity, along with secure and seamless access to remote grid resources for processing such data.

3. COMMON SECURITY REQUIREMENTS IN A VIRTUAL PHYSIOLOGICAL HUMAN ENVIRONMENT

In order to design a usable solution for accessing grid resources and patient data within VPH projects, it is fundamental to understand all the stakeholders’ requirements.
The stakeholders in VPH environments include patients, scientists, clinical researchers and clinical practitioners, system administrators, universities, and grid resource providers. Scientists and clinicians need to:

— run scientific tasks on grid resources and get the correct results of running these tasks as if they were accessing local resources;
— query patient data and access data analysis tools;
— invoke familiar and usable security mechanisms to perform their tasks; these must not be a barrier to their progress, and so must be seamlessly integrated with their desired ways of working.

System administrators require a mechanism for setting up VOs and administering the VPH security environment in a clear and easy fashion. This requires understanding of:

— how a scientist from a VPH project becomes a VPH user with access to grid resources;
— how to authenticate VPH users to resource providers, and whether VPH users can use their local credentials (preferably the same ones they use in their own organization) to access grid resources or need to acquire new ones;
— how to determine whether a person within a VPH project is authorized to perform a task on a grid resource;
— who decides what the access rights of a VPH scientist are;
— how to identify those people from VPH environments responsible for performing tasks on grid resources using patient data.

Resource providers, in particular the hospitals providing patient data together with grid resource owners, are concerned with securing access to their resources. This involves identifying who is requesting access to their resources (authentication), checking whether a user is allowed to run tasks on their resources (authorization) and tracking users responsible for running named tasks on their resources (auditing) in case of misuse (e.g. security breaches, usage of CPU allocations for billing purposes). All these measures are needed to give resource providers assurance that their assets are adequately protected and to ensure that they avoid the consequences of the misuse of their valuable resources by unauthorized users.

4. AUDITED CREDENTIAL DELEGATION

4.1. Overview

The design of ACD is based on the concept of ‘wrappers’. A wrapper is a connector between a component and the outside world. It enables controlled access to the functionalities of a component. For instance, figure 1 shows the ACD security wrapper, made of authentication, authorization and auditing components, surrounding the functionalities of an environment represented by the tasks (Task1, . . . , Taskn) that can be performed on the system. Any request by a user to perform a task is intercepted by each layer of the security wrapper to establish the identity of the requester, to check whether or not the user is allowed to perform the task, to record the results of these checks in the audit log, then to perform the task on the system and, finally, to return results to the user.

Figure 1. The ACD security wrapper comprises auditing, authentication and authorization wrappers. Any request to perform a task within a VPH environment has to pass successfully through all wrappers before it can be executed, otherwise the request fails.

This model fits well with many VPH environments that encapsulate tools from the VPH Toolkit [10], as we will show in §§5 and 6. These tools are usually specified as ‘black boxes’ so that scientists can use them to access patient data without knowing their internal details.
The interface of the tool is the only information available to the designer about how it will be connected with its environment. These tools have to be customized in some way to match the global requirements of the VPH environment described in §3, such as the need for extra security features or the blocking of unneeded functionality provided by a tool. By placing VPH tools within a security wrapper such as ACD, all the requests coming to, and/or replies from, the wrapped tools are passed through the authentication, authorization and auditing wrappers. These security wrappers hide the details of the interface of the tool from external clients and act as an interface between its caller and the wrapped tool. The interface of the wrapped tool is different from the interface of the security wrapper. The wrapper’s interface will include the names of the tasks provided by the wrapped component in addition to the tasks provided by the security wrapper itself. The security wrappers define how a call to perform a task offered by the wrapped component will be processed. In this way, ACD controls who can access the specific functionality provided by a VPH tool, determines whether the user is allowed to access that functionality and traces users who have invoked it. Without such wrappers, the interface of a tool is accessed directly without any protection. ACD provides much of the functionality required for secure cloud computing [19], a business model of grid computing that provides access to various resources such as CPU, memory and storage (known as infrastructure services), as well as applications. However, it is not designed to be a cloud computing security solution. Amazon’s Elastic Compute Cloud (EC2) and Google App Engine are examples of such clouds [20]. There are many security issues in cloud computing that are yet to be resolved, concerning data storage, compliance of the cloud system with legislation (DPA, HIPAA) and information assurance [19,20]. The main difference between clouds and the VOs used in ACD is that the VO has full control of where data are stored and of the processes that access these data within the VO, whereas in a cloud environment the service and data maintenance are provided by third-party vendors, potentially leaving the client ignorant of where the processes are running or even where the data reside. The location of data storage is very important so that the applicable laws and regulations governing the data can be identified [4]. Only recently have Amazon and Microsoft started offering data storage guaranteed to be in Europe to address this legal aspect. Users of cloud services have to trust the provider as to where and how the data are protected and as to the adequacy of the security controls in place, both critical issues for VPH projects. The design of ACD has been focused around several objectives. First and foremost is the requirement to provide secure yet facile access to grid resources and to ensure the confidentiality and integrity of patient data used in a research environment. There is a need for a solution that can be easily extended, because new tools are developed during the lifetime of VPH projects as well as acquired from third parties; these also need to be exposed to end-users in a secure way. Keeping this in mind, ACD has been designed around Web services, providing interfaces compliant with Web services standards such as the web service description language (WSDL), SOAP, WS-Policy and WS-Security [21]. This enables new VPH tools written in any programming language that has Web services libraries to be integrated with ACD.
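To illustrate what such an integration looks like from a tool author's perspective, the sketch below invokes a hypothetical ACD Web service from Python using the third-party zeep SOAP client. The WSDL URL and the operation name are invented for this example; a real deployment would substitute its own service description.

```python
# Hypothetical example of invoking an ACD-style Web service from a VPH tool.
# The WSDL URL and the PerformTask operation are invented for this sketch;
# only the zeep calling convention itself is real.
from zeep import Client  # third-party SOAP client: pip install zeep

# The security wrapper, not the tool itself, exposes the service interface.
client = Client("https://acd.example.org/acd?wsdl")

# Credentials travel with each request so that the authentication,
# authorization and auditing wrappers can process it in turn.
result = client.service.PerformTask(
    username="jsmith",
    password="s3cret!pw",
    task="submit_job",
    argument="simulation-42",
)
print(result)
```

In practice, such calls would be carried over SSL and secured with WS-Security headers, as noted above.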
This enables integration of new VPH tools written in programming languages that have Web services libraries with ACD. In addition, ACD has been developed by adopting best ----- practice software engineering principles that enable it to evolve as new functionalities are needed or changes in security policies are required, without the need to rewrite the whole solution from scratch or perform major modifications. Besides secure access to patient data, ACD enables VPH scientists to seamlessly access grid resources using various authentication mechanisms such as a local ACD username–password, or Shibboleth credentials, both of which are considered easier than acquiring and managing digital certificates, in order to run pre-installed applications on AHE, such as complex workflows and simulations that support patient-specific treatments. By providing support for Shibboleth, a large class of end-users who belong to institutions subscribed to Shibboleth services (e.g. academic institutions) will be able to invoke their local institutional credentials rather than acquiring a VO specific username–password. Within VPH, the correct execution of ACD functionalities to ensure integrity and confidentiality of patient data is extremely important. Hence, at the outset of its design, ACD was subjected to a rigorous modelling activity based on formal methods to ensure that the security requirements were fully met [12]. Another critical aspect addressed during the design of ACD is usability. ACD eliminates the steps performed by end-users listed in §2.1 which are now done only once by an expert-user (the VO administrator). It is important to emphasize that the time consuming steps described in §2a cannot be completely eliminated because of the need to interoperate with grid resource providers’ systems. What we have improved is that if there are say 10 scientists in a group, only one person (the expert user) has to go through the steps whereas the others will enjoy genuinely seamless access thereafter. Hiding complexity from end-users whenever possible is a fundamental usability principle. We do not claim that there are no usability problems with passwords but the usability issues associated with digital certificates are substantially worse. A digital certificate used to access grid resources is supposed to be protected by a passphrase (i.e. a password), so with digital certificates we still have all the usability problems associated with passwords as well. We have recently completed a comprehensive usability study [22] that involved comparing several middleware products for accessing grid environments. These include the AHE middleware, introduced in §1 and described in detail in §5.1, which comes with graphical user and command line interfaces for accessing grid resources, a combination of AHE with ACD, as well as UNICORE and Globus. There were 40 participants drawn from different departments and faculties at UCL including Physics, Chemistry, Computer Science, the Medical School, the Business School, the Cancer Institute and the Law School. Each participant was asked to run a simulation on a grid (NGS) using the different middleware to configure the security of their client tools and use the credentials given to them (username/password, X.509 certificate). The results unambiguously show that the combination of AHE and ACD scored higher than all other tools regarding the time needed to run a task, the ease of configuring the security of the tools, and the ease of running the overall task. 4.2. 
4.2. Overview of ACD architecture

ACD has four components:

— A local authentication service (LAS): one of the main objectives of ACD is to remove digital certificates from the end-users’ experience. The current implementation supports a username–password database specifically for ACD. To be authenticated, a user has to provide a username–password pair that matches an entry in the database. To avoid known vulnerabilities in usernames and passwords, we adopted OWASP best security practices [23], such as storing passwords in an encrypted form, rejecting weak passwords chosen by users, forcing the password length to a minimum of eight characters including special characters, and changing passwords on a regular basis. This way, if the database is compromised, the attacker will not get hold of any password. There is currently work in progress to support Shibboleth in ACD to give users more options to choose from. Shibboleth is currently used by many universities in the UK and EU to allow students and researchers to access online publishers’ resources by invoking their local university username–password credentials. This way, they will not need to use a specific ACD username–password for the VO. However, the support of Shibboleth will have an impact on ACD availability, since it is dependent on the availability of the external authentication services provided. Without successful authentication, it is not possible to determine the role of the user in any given VPH project and, as a result, all requests to perform tasks will be denied.

— An authorization component: this component controls all actions performed in the VO. It uses the parameterized role-based access control (PRBAC) model, in which permissions are assigned to roles [24], as shown in figure 2 (Role → [Task]). The VO policy designer associates each user in the VO with the role that best describes his/her job functions (UserID → Role). The policy is defined at the VO set-up because it depends on the VO functionalities. The tasks (permissions) assigned to roles are drawn from the VO functionality. Sections 5 and 6 show how this is done. There are administrative tasks common to all VOs, such as ‘create role’, ‘assign a VO user to one or more roles’, ‘assign tasks to roles’ and so on. This component is usually configured during the VO set-up by the VO administrator. In traditional role-based access control, two users that perform similar roles in the VO must have identical permissions. Sometimes this is not desirable. For instance, when two scientists submit two jobs to a grid resource, each scientist should be able to privately monitor, terminate or view the result of his/her own job submission. Thus the PRBAC model is flexible and permits fine-grained access control (a sketch of such a parameterized check is given at the end of this section). It is important to emphasize that the decision to permit a user to perform a task on a grid resource is determined by the resource provider, who has the final authority. The VO authorization component only manages the permissions (i.e. the allowed tasks) given by the resource owner to the VO, which controls the use of these permissions within the VO (authorization delegation).
Figure 2. The main components of ACD include a credential repository for creating VOs and translating users’ credentials to proxies to access grid resources (ProjectName refers to the VO name); an authorization component for defining VPH users’ roles within a VO and the permissions associated with those roles; an authentication service; and audit components for tracing users responsible for running a given task.

— A credential repository: this component is responsible for managing the delegation of identity from the user to ACD via a proxy certificate. It stores the certificates acquired by the VO administrator through the steps in §2.1, and their corresponding private keys, in order to communicate with the grid (Certificate → Key). The relation ProjectName → UserID enables the creation and management of VO membership. In order to allow the members of a named VO access to grid resources, the VO is assigned a digital certificate (ProjectName → Certificate), which is used behind the scenes to authenticate requests issued by the VO at the resource provider site. The component also maintains a list of issued proxy certificates (delegated identities), their corresponding private keys (Proxy → Key) and the association between users and proxies (Proxy → UserID) in order to trace which proxy was used by which user. These proxies enable users’ requests to be authenticated at remote grid resources (known as identity federation) on behalf of the users. At the grid resource owner’s end, all requests to access grid resources appear as coming from the named VO, not from individual users. Two users who submitted jobs on the same grid resource site will have different proxies issued from the same VO certificate. The resource provider will not be able to tell which individual used a given proxy to run an application on its resources, but ACD can provide this information. The grid resource owner provides the VO administrator with the proxy’s public key; from the relation Proxy → UserID, the VO administrator can tell which person used this proxy and take any appropriate action.

— An auditing component: this component records all actions within the VO, including authorized and unauthorized requests to perform tasks within the VO, the username that requested them, the number of login attempts and the login times. This allows the VO management to identify those ACD users responsible for having performed any tasks in a VPH environment.

The main features of this architecture are the identity delegation and authorization delegation, which are handled by a trusted entity, the VO, to make access to remote grid resources easier and to provide finer access control decisions within the VO. Since end-users sometimes share certificates to get access to shared resources, ACD is just an organized way of doing so, thereby mitigating and controlling the associated risks.
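As promised above, the following minimal sketch illustrates the PRBAC relations (UserID → Role, Role → [Task]) together with a job-ownership parameter. All names and data structures are invented for illustration and do not reflect ACD's actual schema.

```python
# Illustrative sketch of parameterized role-based access control (PRBAC).
# The relations mirror figure 2 (userrole: UserID -> Role,
# rolepermission: Role -> [Task]); all names and data are hypothetical.

userrole = {"jsmith": "scientist", "admin1": "vo_administrator"}

rolepermission = {
    "scientist": {"prepare_job", "submit_job", "monitor_job",
                  "download", "terminate_job"},
    "vo_administrator": {"create_role", "add_user", "assign_role",
                         "monitor_job", "download", "terminate_job"},
}

# The parameter: which user submitted which job. Two holders of the same
# "scientist" role must not be able to touch each other's jobs.
job_owner = {"job-17": "jsmith"}


def is_authorized(user, task, job_id=None):
    role = userrole.get(user)
    if role is None or task not in rolepermission.get(role, set()):
        return False
    # Parameterized check: scientists may only act on their own jobs;
    # the VO administrator role is exempt from the ownership restriction.
    if job_id is not None and role == "scientist":
        return job_owner.get(job_id) == user
    return True


assert is_authorized("jsmith", "monitor_job", "job-17")      # own job: allowed
assert not is_authorized("jsmith", "monitor_job", "job-99")  # someone else's job
assert is_authorized("admin1", "terminate_job", "job-17")    # admin may act on any job
```

The ownership parameter is what distinguishes PRBAC from plain role-based access control: the role alone no longer determines the outcome of the check.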
5. INTEGRATION WITH THE APPLICATION HOSTING ENVIRONMENT

This section describes how ACD is integrated with the AHE to enable the construction of VOs that allow scientists to run pre-configured applications on remote grid resources using ACD username–password credentials.

5.1. Overview of application hosting environment

The AHE [14,25] is a lightweight mechanism for exposing scientific applications (i.e. workflows and complex simulations) as Web services, and for allowing users to interact with those applications using simple client tools (the AHE client). AHE enables the launching of pre-existing scientific applications installed by an expert user on a variety of different computational resources, from national and international grids of supercomputers, through institutional and departmental clusters, to single-processor desktop machines [26]. The end user is presented with a choice of very lightweight clients, specifically designed to obviate the need to deal with Globus and UNICORE middleware for job management, allowing the user to submit, monitor and download application results, as well as to terminate applications as they run.

5.2. AHE with ACD: usable and secure access to grid resources

The current security model for AHE requires each individual VPH user to have a digital certificate, which carries with it the need to go through the steps described in §2.1. In order to remove the need for such a certificate, we have integrated ACD with AHE. The first step of the integration requires understanding the interface of AHE and ACD combined, in other words, the functional and administrative tasks that can be performed within the integrated system. The administrative tasks offered by ACD include create VO, assign certificate to VO, add user to VO, reset user password, create role, assign tasks to roles, and assign users to roles. The functional tasks offered by AHE include prepare job, submit job, monitor job, download and terminate job. Note that AHE’s functional tasks are the same as the tasks permitted for any authorized user on a grid resource site that uses Globus or UNICORE middleware, such as NGS, DEISA and TeraGrid. Therefore, the permission assignment to the VO is done by the grid resource owner first; the VO administrator then re-assigns these permissions to the roles in the VO according to the VO authorization requirements. In the combined ACD+AHE environment, the authorization requirements determined by the VO administrator are expressed through the introduction of two roles: VO administrator and scientist. The former is permitted to perform all the administrative operations above, in addition to terminating, monitoring and downloading any job submitted to grid resources. The latter is permitted to perform all AHE operations, in such a way that a person who submitted a specified job can only perform AHE functional operations on that job. As a result, two VPH users running applications using different patient data will not be able to view the results of each other’s digital activities. In addition, the scientist role only permits a user to change his/her own password. The construction of a VO requires that an expert user goes through the lengthy process described in §2.1. Once this is done, the VO administrator creates a VO (see supplementary document, §2) and assigns the certificate to the named VO using the AHE+ACD client. Then it becomes possible to add users instantly to the VO and to give them genuinely seamless access to grid resources.
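The set-up sequence just described can be pictured as a short, hypothetical administrative session. The AcdAdminClient API below is invented for this sketch and does not correspond to the actual AHE+ACD client interface; it merely illustrates the one-off VO construction followed by instant user addition.

```python
# Hypothetical administrative session illustrating VO construction.
# The AcdAdminClient API is invented for this sketch and is not the
# actual AHE+ACD client interface.
import secrets


class AcdAdminClient:
    def __init__(self):
        self.vos, self.users, self.roles = {}, {}, {}

    def create_vo(self, name, certificate_path):
        # One-off step: the expert user has already obtained this certificate
        # through the lengthy process of section 2.1.
        self.vos[name] = {"certificate": certificate_path, "members": set()}

    def create_role(self, role, tasks):
        self.roles[role] = set(tasks)

    def add_user(self, username, vo, role):
        # Instant step: generate a random initial password for the new member;
        # the user is forced to change it on first login.
        password = secrets.token_urlsafe(12)
        self.users[username] = {"role": role, "must_change_password": True}
        self.vos[vo]["members"].add(username)
        return password


admin = AcdAdminClient()
admin.create_vo("ngs-vph", "/etc/acd/ngs-vph.pem")
admin.create_role("scientist", ["prepare_job", "submit_job", "monitor_job", "download"])
initial_pw = admin.add_user("jsmith", vo="ngs-vph", role="scientist")
print("give jsmith this one-time password:", initial_pw)
```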
To illustrate how this system works, consider a user named ‘John Smith’ who is a member of a research group in a UK university and would like to use NGS grid resources to run scientific applications using AHE. The user contacts the local VO administrator and requests an account. The VO administrator creates a new user account, which generates a username and a random password that are given to the user. The VO administrator assigns the user to the ‘scientist’ role described above and assigns the user to a VO that has access to NGS resources (figure 3). When a user logs in for the first time to the AHE+ACD client application, he is prompted to change his password. The communications between the AHE+ACD client and the wrapped AHE server, as well as between the latter and the grid resources, are protected by the SSL security protocol. In order to submit a job to a grid resource, the user invokes a request to perform the ‘submit job’ task within the combined AHE+ACD client, as shown in figure 3 (1). This request is intercepted by the ACD authentication component, which checks whether the username and password match an entry in the database. The result of the authentication is recorded in the auditing component (2). The role of the user is picked up from the authorization component (UserID → Role), in this case ‘scientist’. The authorization component checks whether the task ‘submit job’ is permitted for the ‘scientist’ role held by the user, which is true (3). The result of the access control check is recorded in the audit log (4), and the operation ‘submit job’ is invoked from the AHE server (5). Once the request is granted, ACD picks the certificate associated with the VO the user wants to use (i.e. NGS) and checks whether the user is assigned to this VO. If the check is successful, then ACD generates a proxy certificate from the VO-assigned certificate, ProjectName → Certificate (6), uploads it to the MyProxy server (7) and records the issued proxy, Proxy → UserID (credential delegation occurs here), in the credential repository. ACD sends the randomly generated username/password pair needed to access MyProxy to the AHE server, which downloads the session proxy (8) and (9). Finally, the AHE server sends the request to the grid resource site along with the proxy. At the NGS site, the proxy is validated; since the proxy is issued from a valid trusted certification authority, certificate authentication succeeds, and the distinguished name on the proxy (VOName) is checked against the gridmap file within the NGS authorization system to determine the role of the VOName, which is Scientist. Since this role is allowed to submit a job to NGS, the task will be invoked. From NGS’s perspective, it is the VOName that submitted the task, not ‘John Smith’. In order to find out who invoked the ‘submit job’ task on NGS using a specific proxy, the NGS administrator passes the public key of the proxy to the VO administrator, who can identify the name of the user from (6), which records the issued proxy in Proxy → UserID. In this way, requests from within the combined ACD+AHE environment are audited. It is thus possible to identify legitimate users and to ensure that only such users are allowed access to grid resources, in conformance with the policies enforced by the grid infrastructure management. In addition, it is possible to detect unauthorized attempts to access resources from within the VO and to identify the persons responsible for such attempts. This form of accountability is an essential requirement for resource providers to be prepared to accept the ACD security model.
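Steps (6) to (9) can be summarized in pseudo-code form. The helper functions below are placeholders for real proxy-generation and MyProxy tooling, which in practice is provided by grid libraries rather than written by hand; the point of the sketch is the bookkeeping of the Proxy → UserID relation that makes delegated requests auditable.

```python
# Pseudo-code sketch of steps (6)-(9): proxy generation, MyProxy upload and
# the Proxy -> UserID audit mapping. The helpers marked "placeholder" stand
# in for real grid tooling; all names are hypothetical.
import secrets

proxy_to_user = {}  # credential repository relation: Proxy -> UserID


def generate_proxy(vo_certificate, vo_key):
    # Placeholder: a real implementation would create a short-lived X.509
    # proxy certificate signed with the VO credential.
    return {"proxy_id": secrets.token_hex(8), "issuer": vo_certificate}


def upload_to_myproxy(proxy, server):
    # Placeholder: deposit the proxy on the MyProxy server under randomly
    # generated credentials that only the AHE server is given.
    username, password = secrets.token_hex(4), secrets.token_urlsafe(12)
    # ... network call to `server` would go here ...
    return username, password


def delegate(user, vo):
    proxy = generate_proxy(vo["certificate"], vo["key"])                      # step (6)
    myproxy_user, myproxy_pw = upload_to_myproxy(proxy, "myproxy.ngs.ac.uk")  # step (7)
    proxy_to_user[proxy["proxy_id"]] = user  # the delegation is recorded
    # Steps (8)-(9): hand the one-time MyProxy credentials to the AHE server,
    # which downloads the session proxy and submits the job with it.
    return myproxy_user, myproxy_pw, proxy["proxy_id"]


u, p, pid = delegate("jsmith", {"certificate": "ngs-vph.pem", "key": "ngs-vph.key"})
print("resource provider sees only the VO identity; ACD can map",
      pid, "->", proxy_to_user[pid])
```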
Figure 3. The steps involved when a user performs a task within the integrated AHE+ACD environment are numbered sequentially according to their temporal order. The ACD security wrappers intercept the request, check the credentials against an authentication service, then verify whether the task is authorized for that user against an authorization service, and finally translate the credentials to a proxy so as to access grid resources. The results of these checks are audited.

To illustrate how unauthorized requests to access resources are detected, let us assume that the above user is attempting to invoke the ‘remove user from a VO’ task, which is only permitted to a user holding the role ‘administrator’. When the request reaches the authorization wrapper in (3), the current user’s role is determined to be ‘scientist’, and the requested task will not be found among the permitted tasks for this role. As a result, the authorization wrapper will return ‘access denied’ and record this result in the audit log (4). After three unauthorized access attempts, the VO administrator is notified by email via ACD that the user named ‘John Smith’ has made three unauthorized attempts to perform the ‘remove user from a VO’ task. The VO administrator can then take the appropriate action.

6. INTEGRATING AUDITED CREDENTIAL DELEGATION WITH THE INDIVIDUALIZED MEDICINE SIMULATION ENVIRONMENT

6.1. Overview of IMENSE environment

One of the main objectives of the VPH ContraCancrum (Clinically oriented translational cancer multilevel modelling) project (www.contracancrum.eu) is to provide an environment (it can also be thought of as a VO) that allows clinicians and researchers to use the tools developed as part of their clinical and research practice in order to run workflows and simulations on grid infrastructure, using a heterogeneous set of patient data provided by the University of Saarland Hospital, within an integrated IT environment known as the Individualized MEdiciNe Simulation Environment (IMENSE) [15]. These data include heterogeneous image scans (i.e. MRI, PET, CT), patient records, histopathology data and DNA profiles. The main functionalities provided by this VO include the ability to bring together and query patient data, edit them, upload and download image data, and invoke Web services that allow workflows, including simulations, to be run on grid infrastructure. For example, a workflow that checks whether a patient responds to a particular drug is a pre-configured application in AHE. For the end-user, the workflow is viewed as a ‘black box’: users can only run the workflow using a specific patient dataset and download the results (see §3 in the electronic supplementary material). ACD only controls access to the interface of the workflow. We use DEISA and TeraGrid for large-scale computationally intensive patient-specific workflows that involve moving data from within the VO via an un-trusted public network to remote grid resources.
Thus, the following security requirements need to be addressed: — restricting access to the environment to authorized users only; — enabling members of the project to run applications on grid infrastructure using username and password only; — allowing users responsible for running a given task on the environment to be traced; — ensuring the integrity of patient data by controlling the tasks that process these data in order to offer medical treatment; — protecting patient data when transferred onto public networks. Prior to the integration, access to IMENSE functionalities did not meet the above requirements. 6.2. Integration of ACD with IMENSE environment Having understood the functionalities of IMENSE introduced in the previous section, the integration with ACD can be done as follows. The administrative operations of ACD remain as described in the previous section. However, the functional activities performed within IMENSE now include uploading and downloading patient-specific images, running workflows on patient data, viewing images, searching patient data and image segmentation inter alia. The authorization requirements for this system are expressed again through the introduction of two roles: VO administrator and scientist. The first role is permitted to perform all the operations above. The ‘scientist’ role is permitted to perform all the functional operations, in addition to enabling the user holding this role to change his/her own password. The result of the integration is a controlled VO within which each request to perform a task goes through all three security wrappers previously described: authentication, authorization and auditing. We illustrate this through an example (see figure 4). A user can join the IMENSE VO in the way described in the previous section. Consider the same user ‘John Smith’ who wishes to run image segmentation on appropriate grid resources. The request to perform this task is first intercepted by the authentication wrapper which checks the user credentials against the ACD authentication service. The outcome of the authentication is recorded in the audit log. After successful authentication, the role of the user is determined from the authorization component (userID [Role]), which is ! ‘scientist’, permitted to perform the ‘image segmentation’ task. The result of the access control check is also recorded in the audit log. Once access is granted the task is performed in the VO; as a result, all the steps described in the previous section steps (1) to (11) needed to run ‘submit job’ are performed behind the scenes to run the image segmentation application preinstalled on AHE. Once segmentation finishes, the user is notified to download the result. The same level of auditing is also provided in this environment. This ensures that only authorized personnel can run tasks in the VO and that the user can only access the result of the segmentation request they submitted. The permissions in the VO are assigned to roles by the VO policy designer who understands what the users require in order to do their jobs. 7. RELATED WORK There are certainly precedents for the concept of VOs used in ACD whereby users invoke either their local credentials or a dedicated username and password, such as in the ‘community account’ system provided by TeraGrid [28] and SARoNGS [13] offered by NGS. For instance, the community account system allows scientists to access grid resources using a dedicated username and password via a Web portal. 
The SARoNGS project shares various similarities with our approach. It removes digital certificates from the end-users’ environment, enabling them to invoke their local credentials via a Shibboleth federated identity system, which is then translated into a grid identity credential to access UK NGS grid resources. It differs from ACD in that it passes the individual identity and attributes of the user to the grid layer, whereas ACD presents a single identity (that of the ACD VO name). The SARoNGS approach assumes the use of a web portal and requires an end-user (or a portal on behalf of the end user) to specify VO membership and role parameters before being able to access the grid. Like ACD, the mechanism is based on providing easy access to grid resources. The main difference is that ACD controls the authorization decision for the VO, whereas SARoNGS merely propagates authentic information about users and their roles within their specified VOs to the resources, where it is consumed and processed. Thus, a significant part of the authorization in SARoNGS takes place within the grid resource provider’s service, whereas ACD assumes the role of a delegated authorization decision maker for those resources. The SARoNGS model is essentially the VOMS model [6] with Shibboleth presented to the user and the grid X.509 certificates hidden [13]. The advantages of ACD over SARoNGS are that the VO members’ activities can be more tightly controlled (helping VO-based security) and managed (delegating responsibility for usability to the VO and the AHE). A limitation is that resource providers can only make their authorization decisions at the VO level: they are not able to identify individuals without consulting the ACD VO administrator. It is important to emphasize that what we present in this approach is a holistic VO-based authorization solution which has control of actions as well as identity. This is not the case in any other established grid environment. We have integrated our work with an environment which allows the user to actually run applications on the grid (namely the AHE); ACD is not simply a security layer, as in MyProxy, Kerberos, Active Directory, Shibboleth or Fermilab’s security mechanisms [9]. These security components only address authentication issues, whereas ACD addresses authorization and accountability as well. Some of the comparisons between the examples cited above and ACD are discussed in Beckles et al. [9].
The Member Integrated X.509 PKI Credential Services (MICS) (http://www.tagpma.org/authn_profiles) is a profile used in technologies such as MyProxy CAs. These, however, focus on providing the user with certificate-based credentials for authentication, do not deal with VO/community attributes and leave authorization to the resources alone; by contrast, ACD in combination with the AHE manages VO-specific authentication and authorization. Any solution which involves each end user having to obtain an individual certificate (even if they immediately deposit it in a credential repository and thereafter employ a username and password to access the certificate in the repository) is unsuitable because the end user will still have to go through the steps described in §2.1. CROWN [29] and gLite [30] middleware adopt the Globus security model and use X.509 certificates for authentication, one of the main problems ACD solves. gLite also uses the VOMS model for authorization. Unlike CROWN and gLite, authorization in ACD has been extended to the end users' technical environment to provide fine-grained access control. This fits naturally within the VO model because, from a remote resource provider's perspective, all VO users appear as a single user, since the VO certificate is used to generate the proxies on the users' behalf. In all the above alternative security solutions, auditing is performed at resource providers' sites. In case of a security breach, the VO management relies exclusively on the individual resource provider's audit logs. ACD provides auditing for every VO set up based on the tasks that need to be monitored. These tasks are derived from the functionality of the VO and, moreover, allow VO management to corroborate resource providers' claims in case of a security breach.

8. DISCUSSION AND CONCLUSION

The ACD security mechanism has required an evolution of grid security policies because it violates the standard one-user-one-certificate security model prevalent in current grid infrastructures.
A key requirement from resource providers in order for them to consider the ACD security model is the ability for them to audit all actions related to accessing their resources. This is addressed by the fine-grained auditing features of ACD. The combined ACD+AHE is now listed among the gateways on the TeraGrid Science gateways list (www.teragrid.org/web/science-gateways/gateway_list) that are allowed to provide a community of users access to TeraGrid resources using the ACD security model. ACD integrated with the AHE has been successfully deployed on TeraGrid, NGS and DEISA. A detailed usability study involving undergraduates, scientists and system administrators will be published in the near future [22]. A small-scale pilot usability trial of this security architecture, in which it is compared with the traditional PKI-based authentication mechanisms used in many existing computational grid environments, has already shown that users favour the familiar username and password paradigm supported by ACD. While that study only involved undergraduates at UCL with no prior experience of using computational grid environments, the findings are fully borne out by the extended study [22]. Usability issues associated with username–password combinations remain, but they are easier to deal with than those of digital certificates. ACD addresses many common security requirements such as the one described in §3. However, some projects that deal with data that can identify individual patients might require a higher level of assurance (LoA), meaning that the username–password pair on its own might not always be sufficient. ACD supports the National Institute of Standards and Technology (NIST) [31] LoA level 1 at best because there is little control of where a GSI-Proxy credential is kept, how it is protected, its cryptographic makeup, and its longevity. Certainly this could be improved, but ACD's main focus is user management and controlled access first and foremost, not upgrading the entire infrastructure to cope with multiple (higher) LoAs. The LoA required will depend on the sensitivity of the shared data. This requires a vulnerability assessment of the various types of patient data (e.g. MRI and PET scans, genetic sequences) that describes the impact of loss of data confidentiality, integrity and availability so that appropriate security mechanisms can be deployed. Once these vulnerabilities are understood, it is possible to choose the appropriate security control to mitigate the risks. For instance, there might be a need for two-level authentication that involves a PIN in addition to a username–password pair, as currently employed in online banking security systems. ACD balances different risks. On the one hand, the ACD delegated authentication model may lead to a situation wherein one misuse results in the whole VO being blocked; it is therefore essential that the VO vets and controls activities, because the scale of withdrawal of service is much more of an issue than for an individual user. On the other hand, an individual should be encouraged by the easy access to grid resources and is therefore likely to make far greater use of these resources. ACD fits well with the distributed computing requirements of the VPH initiative and translational, computationally based biomedical research more generally.
A dedicated VO for clinicians and scientists who require access to grid resources can be created, and secure access to shared medical data provided using fine-grained authorization. In addition, the accountability provided by ACD makes it possible to track local users responsible for performing tasks in distributed environments in case of misuse or violation of the security policy for the VO. Indeed, the fact that ACD is based on a formal model means that it is well documented and can be certified in the future. Finally, the design of ACD is flexible enough for it to be included within the VPH Toolkit, for which successful integration with the AHE leads the way; its integration with IMENSE will continue to be developed in a major new project called 'p-medicine' (EU-FP7-270089). Support for different types of credentials such as Kerberos and Shibboleth is planned in future work, which will give end users more options to choose from. The ACD software will be available free of charge via the VPH Toolkit (toolkit.vph-noe.eu/) and will feature in future releases of the AHE, which will also be distributed via the VPH Toolkit.

The authors would like to thank Prof. Dr Norbert Graf and Prof. Dr Rainer Bohle (University of Saarland) for helpful discussions on acquiring and transferring patient data to IMENSE. The authors also wish to thank Prof. Dr Nikolaus Forgó (Leibniz University, Hannover) for helpful discussions on patient data protection and data security law. We are also grateful to Nancy Wilkins-Diehr (TeraGrid), Gavin Pringle (DEISA) and David Wallom (UK NGS) for giving us permission to deploy ACD on their grid infrastructures. This work has been supported by EPSRC through the User-Friendly Authentication and Authorisation Security for Grid Environments [32] (EP/D051754/1) and RealityGrid Platform (EP/C536452/1) grants, as well as the EU FP7 ContraCancrum Project (EU-FP7-223979) [27] and Virtual Physiological Human Network of Excellence (FP7-2007-IST-223920) grants.

REFERENCES

1 Sadiq, S. K. et al. 2008 Patient-specific simulation as a basis for clinical decision-making. Phil. Trans. R. Soc. A 366, 3199–3219. (doi:10.1098/rsta.2008.0100)
2 Credit Reporting Agency Limited. 2010 Identity theft and data loss. See http://www.annualcreditreport.co.uk/identity-theft/data-loss.htm.
3 Infosecurity-magazine. 2010 Data loss. See http://www.infosecurity-magazine.com/category/75/data-loss/.
4 Arning, M., Forgó, N. & Krügel, T. A. 2009 Data protection in grid-based multicentric clinical trials: killjoy or confidence-building measure? Phil. Trans. R. Soc. A 367, 2729–2739. (doi:10.1098/rsta.2009.0060)
5 Martin, A. & Spence, D. 2008 Trust and security in virtual communities, report on first workshop: the application-led security agenda for e-science. Workshop report, University of Oxford. See http://wiki.esi.ac.uk/w/files/7/7b/Theme8-workshop1-Final-report.pdf.
6 Alfieri, R., Cecchini, R., Ciaschini, V., dell'Agnello, L., Frohner, Á., Gianoli, A., Lőrentey, K. & Spataro, F. 2004 VOMS, an authorization system for virtual organizations. In Grid computing, vol. 2970 (eds F. Fernández Rivera, M. Bubak, A. Gómez Tato & R. Doallo), pp. 33–40. Lecture Notes in Computer Science. Berlin, Germany: Springer.
7 Pearlman, L., Welch, V., Foster, I., Kesselman, C. & Tuecke, S. 2002 A community authorization service for group collaboration. In Proc. of the IEEE 3rd Int. Workshop on Policies for Distributed Systems and Networks, pp. 50–59. Washington, DC: IEEE Computer Society.
8 Haidar, A. N. 2003 Critical evaluation of current approaches to grid security. Master's thesis, Royal Holloway, University of London.
9 Beckles, B., Welch, V. & Basney, J. 2005 Mechanisms for increasing the usability of grid security. Int. J. Human–Computer Stud. 63, 74–101. (doi:10.1016/j.ijhcs.2005.04.017)
10 Cooper, J. et al. 2010 The Virtual Physiological Human toolkit. Phil. Trans. R. Soc. A 368, 3925–3936. (doi:10.1098/rsta.2010.0144)
11 Beckles, B., Haidar, A. N., Zasada, S. J. & Coveney, P. V. 2010 Audited credential delegation: a sensible approach to grid authentication. In 5th Int. Conf. e-Science, Washington, DC, December 2010, pp. 19–30. Silver Spring, MD: IEEE Computer Society.
12 Haidar, A. N., Coveney, P. V., Abdallah, A. E., Ryan, P. Y. A., Beckles, B., Brooke, J. M. & Jones, M. A. S. 2009 Formal modelling of a usable identity management solution for virtual organisations. Electron. Proc. Theoret. Comput. Sci. 16, 41–50. See http://arxiv.org/abs/1001.5050.
13 Wang, X. D. et al. 2010 Shibboleth access for resources on the national grid service (SARoNGS). J. Inform. Assurance Security 5, 293–300. (doi:10.1109/IAS.2009.163)
14 Zasada, S. J. & Coveney, P. V. 2009 Virtualizing access to scientific applications with the application hosting environment. Comput. Phys. Commun. 180, 2513–2525. (doi:10.1016/j.cpc.2009.06.008)
15 Zasada, S. J., Wang, T., Haidar, A. N., Liu, E., Graf, N., Clapworthy, G., Manos, S. & Coveney, P. V. 2011 IMENSE: an e-infrastructure environment for patient-specific multiscale modelling and treatment. Preprint.
16 Haidar, A. N., Zasada, S. J., Coveney, P. V., Abdallah, A. E. & Beckles, B. 2010 Audited credential delegation—a user-centric identity management solution for computational grid environments. In Sixth Int. Conf. on Information Assurance and Security, August 2010, pp. 222–227. Washington, DC: IEEE Computer Society.
17 Cox, B. M. & Hatzaras, K. S. 2010 Online project deliverables. See http://www.biomedtown.org/biomed_town/VPH/VPHnews/radical/.
18 EHealthInsider. 2008 European Court fines Finland for data breach. See http://www.e-health-insider.com/news/.
19 Jensen, M., Schwenk, J., Gruschka, N. & Iacono, L. L. 2009 On technical security issues in cloud computing. In IEEE Int. Conf. on Cloud Computing, September 2009, pp. 109–116. Washington, DC: IEEE Computer Society.
20 Tsai, C. L., Lin, U. C., Chang, A. Y. & Chen, C. J. 2010 Information security issue of enterprises adopting the application of cloud computing. In Sixth Int. Conf. on Networked Computing and Advanced Information Management, August 2010, pp. 645–649. Washington, DC: IEEE Computer Society.
21 Kuno, H., Machiraju, V., Alonso, G. & Casati, F. 2004 Web services: concepts, architectures and applications. Berlin, Germany: Springer.
22 Zasada, S. J., Haidar, A. N. & Coveney, P. V. 2011 On the usability of grid middleware and security mechanisms. UK e-Science AHM 2010 theme issue of the Philosophical Transactions of the Royal Society A, accepted for publication.
23 OWASP. 2010 The Open Web Application Security Project Top 10 vulnerabilities. See http://www.owasp.org/index.php/Top_10_2010-Main.
24 Abdallah, A. E. & Khayat, E. J. 2006 Formal Z specifications of several flat role-based access control models. In 30th Annu. IEEE/NASA Software Engineering Workshop, pp. 282–292. Washington, DC: IEEE Computer Society.
25 Coveney, P. V., Saksena, R. S., Zasada, S. J., McKeown, M. & Pickles, S. 2007 The application hosting environment: lightweight middleware for grid-based computational science. Comput. Phys. Commun. 176, 406–418.
26 Zasada, S. J. & Coveney, P. V. 2009 From campus resources to federated international grids: bridging the gap with the application hosting environment. In Proc. of the 5th Grid Computing Environments Workshop, GCE '09, pp. 10:1–10:10. New York, NY: ACM Press.
27 EU-FP7-ContraCancrum. 2010 ContraCancrum—clinically oriented translational cancer multilevel modelling. See http://www.contracancrum.eu.
28 TeraGrid. 2010 Science gateways home. See http://www.teragrid.org/web/science-gateways/home.
29 Huai, J., Hu, C., Wo, T. & Li, J. 2008 CROWN: a service-oriented grid middleware system: experience and applications. In IEEE Int. Symp. on Object-Oriented Real-Time Distributed Computing, May 2008, pp. 141–147. Washington, DC: IEEE Computer Society.
30 gLite. 2010 Lightweight middleware for grid computing. See http://glite.web.cern.ch/glite/.
31 NIST. The National Institute of Standards and Technology (NIST). See http://csrc.nist.gov/publications/nistpubs/800-63/SP800-63V1_0_2.pdf.
32 UFSSGE. 2010 User-friendly authentication and authorisation for grid environments project. See http://www.realitygrid.org/uf-security/.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'abstract', 'references'}. Paper or abstract available at https://api.unpaywall.org/v2/10.1098/rsfs.2010.0026?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1098/rsfs.2010.0026, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://europepmc.org/articles/pmc3262438?pdf=render" }
2011
[ "JournalArticle" ]
true
2011-06-06T00:00:00
[]
15,421
en
[ { "category": "Art", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01f2df8b5df8ea23871aeda5fc869f5c14271011
[]
0.921112
Approaches about NFT with Crypto Art and Its Place in the Art Market
01f2df8b5df8ea23871aeda5fc869f5c14271011
Art and Design Review
[ { "authorId": "2218284748", "name": "Mustafa Günay" } ]
{ "alternate_issns": null, "alternate_names": [ "Art Des Rev" ], "alternate_urls": null, "id": "2fb4a9fa-381a-4063-9f7e-ef0f214b5dd8", "issn": "2332-1997", "name": "Art and Design Review", "type": null, "url": "https://www.scirp.org/journal/adr/" }
null
**Art and Design Review, 2023, 11, 104-119** [https://www.scirp.org/journal/adr](https://www.scirp.org/journal/adr) ISSN Online: 2332-2004 ISSN Print: 2332-1997

# Approaches about NFT with Crypto Art and Its Place in the Art Market

### Mustafa Günay

Department of Graphic Design, Vocational School, Istanbul Gelişim University, Istanbul, Türkiye

How to cite this paper: Günay, M. (2023). Approaches about NFT with Crypto Art and Its Place in the Art Market. Art and Design Review, 11, 104-119. [https://doi.org/10.4236/adr.2023.112008](https://doi.org/10.4236/adr.2023.112008)

Received: March 22, 2023; Accepted: May 19, 2023; Published: May 22, 2023

Copyright © 2023 by author(s) and Scientific Research Publishing Inc. This work is licensed under the Creative Commons Attribution International License (CC BY 4.0). [http://creativecommons.org/licenses/by/4.0/](http://creativecommons.org/licenses/by/4.0/)

## Abstract

As a result of technological innovations, digital opportunities vary greatly with the transition to a different lifestyle, as distances lose their significance at both the temporal and international level. The treatment of a digital resource as a form of value is generally seen as an element of the perception of social values created by individuals, and of rare resources that are approved and manufactured in hyper-real lanes and at the same time cannot be altered. Using a qualitative research method, this article investigates the production system, methods, platforms and value system of the unique, non-exchangeable assets known as Non-Fungible Tokens (NFTs), as well as crypto art, the reproduction of the work of art by technical means, and the place of NFTs in the global art market.

## Keywords

Cryptocurrency, NFT, Crypto Art, Digital, Digital Opportunity

## 1. Introduction

As a result of technological innovations, digital opportunities vary greatly with the transition to a different lifestyle, as distances lose their significance at both the temporal and international level. Large-scale differences, particularly in manufacturing processes, are based on these digital opportunities. A different abstract space in autonomous structures shapes the distinguishing social lifestyle, as well as physical and sensory differences. Access at the international level is also enabled by the migration of social spaces to virtual platforms. Individuals who transition from physical environments to a structured system of virtual spaces distinguish their lifestyles by integrating with the values that differentiate these lanes. While virtual cryptocurrencies stand out with the advancement of blockchain technology, this hybrid lifestyle is also supported by distinct perceptions.

Although the concern of sending messages with global communication devices dates back to ancient times, the facilities provided by virtual digital innovations with democratic subsystems also allow personal lives to reach beyond borders and express democratic opinions. Content creation, design, and open-source code are brought to the forefront and distributed to the masses in these lanes of collective production. Progress in this area with blockchain technology is seen as a result of this open-source collective. However, the formation in question is also part of blockchain technology, which is based on the perception of crypto art (Ethereum, 2021).
The treatment of a digital resource as a form of value is generally viewed as an element of the perception of social values created by individuals, and of rare resources that are approved and manufactured in hyper-real lanes and do not have the possibility of change. The integration of the rare-resource status of non-exchangeable digital assets with crypto codes and the Ethereum blockchain phenomenon is conveyed as art discoveries of various dimensions. While a new art phenomenon emerges in the form of a representative work of art, the naming of the works of art transferred to crypto codes as graphic stains benefits from graphic design in both the mode of manufacture and the depiction of the image, and from the result of their actions from the start. Typography, illustration, character design, and actions that frequently feature three-dimensional images reflect the crypto art form itself, but it is also suggested that they play a role in the transmission of these objects.

## 2. Cryptocurrency

### 2.1. The Concept of Digital Currency

Digital money or digital currency is understood to be a fundamental concept used to reflect the virtual nature of classical money, virtual money, and cryptocurrencies, rather than a phenomenon that refers to a specific currency or type. This statement emphasizes the intensity with which value is being transferred to the digital field (FATF, 2014).

Electronic money is a concept that integrates the value of money processed in the form of a payment element in the digital geography where it is located. According to the EU Electronic Money Directive (2009/110), it covers electronically stored monetary value, issued on receipt of funds of no less value than the monetary value issued, and accepted as a means of payment by parties other than the entity providing this issuance, which stands out with demand proportionate to the formation of the issuance. Other than digital money, virtual currency stands out in terms of reflecting different digital values. Cryptocurrency is once again regarded as a volatile currency. The European Central Bank stated in its report Virtual Currency Schemes, published in 2012, that the value of digital money is issued through revealing institutions and that developers are intensively supervised, and that it is also a legally structured digital currency that is evaluated and approved within the scope of the anticipated digital community.

### 2.2. Cryptocurrency as a Digital Currency

Cryptocurrency, which includes virtual coins and is based on cryptography, lacks a clear definition that stands out within the framework of cryptocurrencies, which are predominantly in variable forms within digital currencies. The reason for this situation is that progress in the sector in question has not yielded a clear result, and the legal configurations associated with it have not produced a clear mechanism (IMF, 2016). According to some research, cryptocurrency refers to a structure that employs cryptography in the creation and transfer of money, with an emphasis on exchange in the virtual space. In other words, this unit is viewed as a virtual-currency issuance area that provides its supporters with the ability to make digital payments for products and services without the need for a specific central mechanism, and it already functions as a currency.
Crypto coins, which are based on the transfer of virtual data, enable monetary actions to function independently while also being integrated with norms via cryptographic methods (Farell, 2015).

### 2.3. Emergence and Historical Development of Cryptocurrencies

The announcement of the Bitcoin structure is the basis for the challenge posed by the currencies in question. In October 2008, Satoshi Nakamoto made statements about the structure in question on the metzdowd.com cryptography mailing list. On January 3, 2009, Nakamoto is credited with being the first person to begin Bitcoin mining by making it available to the general public via virtual platforms. He had previously supported his position through a number of movements that served as the foundation for this digital structure.

The first movement that generated cryptocurrency values is the Cypherpunk movement, formed by cyber-privacy-oriented computer scientists. Individuals who are well equipped and in favor of developing anonymous systems claim that privacy can be protected by using certain encryption methods. These proponents, including Nakamoto, believe that effective encryption will prevent government interference in a wide range of economic actions while also elevating contract performance to a new level. This viewpoint expresses the foundation for the creation of virtual currency, in which these methods are frequently used (Hughes, 1993).

Hashcash is one of the leading technological innovations that has enabled the currency in question to exist. This mechanism, designed to provide assurance against the negative effects of DoS attacks in the digital field, envisions the transfer of mathematical problem-solving in the form of proof of work (a small illustrative sketch of this idea appears below). The proof of work expected in the infrastructure system of blocks consisting of monetary actions in the perception of Bitcoin usually includes the perception of hashcash (Carl Mullan, 2016).

## 3. NFT Market and Its Development

Only the goal of carrying and transferring value was pursued during the initial process of Bitcoin's discovery. However, as a result of their frequent use of these coins over time, users have evaluated these elements for various purposes. The prominence of the blockchain perception has given rise to some financial alternatives based on the blockchain structure in which crypto actions are carried out in order to respond to this goal.

### 3.1. NFT

The acquisition of an existing product in the digital field is known as NFT, which gained traction in the year 2021. NFT, or Non-Fungible Token, refers to the sale of many products such as jpg documents, tweets, game characters, digital plots, songs, and so on as tokens on virtual platforms. NFTs are tokens that represent the purchase of a valuable asset. In a nutshell, an NFT points to a source through a barter system. Because NFTs are unique and one-of-a-kind, it is impossible to divide an equivalent price into two (Figure 1).

**3.1.1. The Development of NFTs**

The work consisting of Mike Winkelmann's designs over a 5000-day period fetched 69.3 million dollars through the Christie's auction house (Crow & Ostroff, 2021). The work in question is known as the third most valuable work ever sold by a living artist (Uçak, 2021). The work, which reached the third-highest value after Jeff Koons and David Hockney, was also extremely popular in the NFT lane.
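Returning to the hashcash mechanism of §2.3, referenced above: the proof-of-work idea it rests on can be sketched in a few lines of Python. This is an illustrative toy with an arbitrary difficulty encoding, not Back's actual hashcash format or Bitcoin's block-header scheme; the point is only that finding a valid nonce is costly while verifying one is cheap.

```python
# Minimal hashcash-style proof of work: find a nonce so that the SHA-256
# hash of (data + nonce) starts with a given number of zero hex digits.
import hashlib

def proof_of_work(data: bytes, difficulty: int = 4) -> int:
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce          # costly to find...
        nonce += 1

def verify(data: bytes, nonce: int, difficulty: int = 4) -> bool:
    digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
    return digest.startswith("0" * difficulty)   # ...but cheap to check

nonce = proof_of_work(b"block header")
print(nonce, verify(b"block header", nonce))
```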
The high prices of the works in question, expressed in the form of crypto art, have allowed the NFT lane and the collections there to gain significant traction. In today's digital world, it is possible to easily access and view the originals of the works for sale in all NFT sectors. The leading reason why these works are sold at optimum figures is that they fall within the scope of NFT.

Figure 1. NFT market.

NFTs are blockchain-derived tokens that easily integrate virtual-resource ownership rights into virtual resources (Binance Academy, 2021a). Coming across a painting that sells for optimum figures in a large-scale art museum and acquiring such a work elicit very different emotions (Figure 2). As with physical works, NFTs enable the acquisition and storage of the right to property in virtual resources. The origin of the blockchain is at the heart of the NFT structure. The smart-contract phase follows the sale or minting of NFTs. Following this contract, the block includes the NFT meta-findings and property details (Wang et al., 2021). Ownership is registered with the NFT in this direction, with a registration that cannot be exchanged or recycled. Following this stage, the transfer of an NFT occurs only through the virtual signature of the individual who has acquired the private key and owns the NFT. Although this action appears complicated, it consists of a smart contract within the framework of the ERC standards using a simple crypto wallet (Wang et al., 2021). Platforms that direct this shopping are commonly used in the process of buying and selling NFTs. The main tracks used in the purchase of NFTs are OpenSea, Rarible, Mintable, Treasureland, and Zora. From a technical standpoint, understanding NFT necessitates a thorough understanding of the Ethereum-origin blockchain. When we examine the cause of this situation, the root of NFT is the cryptocurrency within Ethereum (Ethereum, 2021). However, their characteristics distinguish NFTs from other tokens. Although a large number of accepted cryptocurrencies, such as Bitcoin, Ripple, and Ethereum, are exchangeable, NFTs stand out for not being exchangeable. On the blockchain, the entire cryptographic token is a virtual value. The administration of the tools in question is handled by smart contracts. The registered token access is used in conjunction with the private key for the tokens obtained by the user (Kshetri, 2021).

Figure 2. Example of NFT sold at optimum rates.

At this point, the tokens classified as fungible, including crypto coins, have a similar value. Within the context of Bitcoin, all 18 million Bitcoins currently in circulation have a similar value and command the same price. There is the possibility of swap, as well as the possibility of exchange. Despite this, NFTs are unique and non-interchangeable. Token actions must be integrated into a number of standards in order to put smart contracts into action and implement shopping. ERC-721 and ERC-1155 were introduced to close this gap. At the same time, this situation provides the foundation for safe trade on the part of NFTs. ERC-721 tokens do not all have the same value. In research on Ethereum development recommendations, ERC-721's Non-Fungible Token standard has come to the fore through William et al.; this method is further developed by ERC-1155.
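The contrast just drawn between interchangeable coins and unique tokens can be sketched in a few lines of Python. The class names and toy interface below are invented for illustration; they are not a real contract standard, only the idea behind ERC-721-style uniqueness and indivisibility.

```python
# Illustrative contrast between a fungible balance and non-fungible tokens.

class FungibleToken:
    """Bitcoin-like units: any amount is interchangeable with any other."""
    def __init__(self):
        self.balance = {}
    def transfer(self, sender, receiver, amount):
        assert self.balance.get(sender, 0) >= amount
        self.balance[sender] = self.balance.get(sender, 0) - amount
        self.balance[receiver] = self.balance.get(receiver, 0) + amount

class NonFungibleToken:
    """ERC-721-like: each token id is one unique, indivisible asset."""
    MAX_ID = 2**256 - 1                 # ids live in the uint256 range
    def __init__(self):
        self.owner_of = {}              # token_id -> owner; ids never repeat
    def mint(self, token_id, owner):
        assert 0 <= token_id <= self.MAX_ID
        assert token_id not in self.owner_of, "token ids must be unique"
        self.owner_of[token_id] = owner
    def transfer(self, token_id, receiver):
        # The whole token moves; there is no way to send half of it.
        self.owner_of[token_id] = receiver

nft = NonFungibleToken()
nft.mint(42, "alice")
nft.transfer(42, "bob")
print(nft.owner_of)                     # {42: 'bob'}
```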
In the ERC-721 framework, every NFT has a uint256 token variable and is uniquely qualified (Wang et al., 2021) (Figure 3). According to Google Trends, NFTs began to attract users' attention on a large scale after January 2021 (Dowling, 2021a). Etheria, the first NFT implementation within Ethereum, emerged in 2015 (Ante, 2021). CryptoPunks, which debuted in June 2017 via Larva Labs, is also regarded as an inspiration for ERC-721, which provides support for NFTs under Ethereum. CryptoPunks was one of the first NFTs on Ethereum (Wang et al., 2021). On the other hand, sales realized at the best prices in 2021 support a unique trade scope that has a real impact on the NFT sector. NFTs, which are perceived intensively as graphic design, are also evaluated as image-like virtual resources. Characters, on the other hand, are frequently drawn to the virtual level in games via collections and works of art (Dowling, 2021b). NFT is also a factor in the game market. Games such as CryptoPunks, CryptoKitties and Meebits are noteworthy (Wang et al., 2021). At the same time, the non-variable openness of NFTs offered by the blockchain system highlights their applicability in the field of logistics. Nutrients, products, and perishable goods, as well as the conditions under which they are stored, can all be openly tracked with the help of NFT (Binance Academy, 2021b). NFT virtual resources commodify the state of belonging by clarifying who held them in previous processes and the period of their emergence (Nadini et al., 2021).

Figure 3. Blockchains used to create NFTs.

Three important qualities of NFTs stand out:

— Uniqueness: meta-findings are evaluated to clarify what distinguishes one source from another. Records that cannot be changed or deleted are transferred through the NFT representative.
— Rarity: NFTs are intriguing because they express limited resources.
— Indivisibility: most NFTs cannot be divided into fractional values; all elements must be provided and processed whole (Kshetri, 2021: p. 24).

It is also possible for the NFT user to provide or verify NFT data within a non-centralized configuration framework. Someone who does not have the key in question is unable to steal the NFT (Özrili, 2021). Aside from physical works of art, NFTs do not raise security concerns due to the possibility of damage or theft, nor do they incur additional costs due to factors such as taxes and insurance (Özrili, 2021). In the NFT sector, high transaction costs are regarded as a major issue. Because of smart contracts and account-oriented, actionable storage, all NFT actions incur a higher fee than a simple transfer. On average, a fee of 60–100 dollars is incurred to complete a simple NFT purchase (Wang et al., 2021).
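The point made above, that someone without the key cannot steal the NFT, can be illustrated with a small sketch: a transfer is accepted only with a valid signature from the current owner. Real chains use public-key signatures such as ECDSA; the HMAC over a secret key below is only a dependency-free stand-in, and all names in it are hypothetical.

```python
# Conceptual sketch of key-gated transfer; HMAC stands in for ECDSA here.
import hmac, hashlib

class SignedLedger:
    def __init__(self):
        self.owner = {}                 # token_id -> owner name
        self.key_of = {}                # owner name -> secret key (stand-in)

    def mint(self, token_id, owner, key):
        self.owner[token_id] = owner
        self.key_of[owner] = key

    def transfer(self, token_id, new_owner, signature):
        current = self.owner[token_id]
        message = f"{token_id}->{new_owner}".encode()
        expected = hmac.new(self.key_of[current], message,
                            hashlib.sha256).hexdigest()
        if not hmac.compare_digest(signature, expected):
            raise PermissionError("only the current key holder may transfer")
        self.owner[token_id] = new_owner

ledger = SignedLedger()
ledger.mint(7, "alice", b"alice-secret")
sig = hmac.new(b"alice-secret", b"7->bob", hashlib.sha256).hexdigest()
ledger.transfer(7, "bob", sig)
print(ledger.owner[7])                  # bob
```

Without "alice-secret", no forged signature passes the comparison, which is the mechanical content of the claim that only the key holder can act on the token.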
**3.1.2. The Uses of NFTs**

As NFT becomes more prevalent, it can be found in a wide range of markets. NFT has a reflection in the gaming industry, such as virtualizing and selling a character or material within a game and evaluating it in different games. While product exchange within the context of Fortnite, a common game, has been discontinued, the products in question can be traded in a virtual framework via NFT. The economic structure of games has also evolved in this direction. NFTs, on the other hand, are also very convenient in terms of eliminating the copyright problem. Everyone recognizes the rights of individuals who acquire a virtualized product within the scope of NFT to the products within the framework of the blockchain network in question. NFTs can be thought of as a collection element of this time period. Unlike physically holding products and commodities, storing them in the virtual field as NFTs is regarded as a distinct type of collecting.

**3.1.3. NFT Production**

Although it is widely possible to manufacture a product that is sold in the form of an NFT, there are some requirements. Since NFT sales are mostly implemented on the Ethereum network, it is necessary to use an NFT marketplace that approves the sale of NFTs through an Ethereum wallet. After registering with the marketplace in question, the uploaded product can be sold within the framework of NFT.

**3.1.4. NFT Sales**

With the proliferation of NFTs, sales in this direction are also gaining traction. Many products are passed from hand to hand at varying prices every day. When we look at the sales realized within the scope of NFT, we can see that Everydays: The First 5000 Days, in which all of the artist's virtual works brought to life over 5000 days are integrated into a single image under the pseudonym Beeple, was sold for a record price of $69.3 million (Figure 4).
Blockchain technology is being used on digital platforms and is linked to a wide range of media. Simultaneously, this technology, also known as the virtual dimension of art, is referred to as an open and licensed sector. Graphic products ERC721, which are brought to life primarily by graphics-based artists, are be coming an important value with Ethereum technology within the framework of an autonomous structure (DAO) (Chohan, 2017), virtual sectors bring these works to life. At the same time, NFT sectors are not Crypto Arts, but rather a lane presentation. It can also take the form of physical art galleries in the real world. Walter Benjamin’s perception that the uniqueness of works produced by digi tal means has come to an end, as well as his idea that works of art created by machines have gained a mechanical dimension, are both evaluated within the NFT in his (1936) publication titled The Work of Art in the Age of Mechanical Production. This is because the various sources of the works are thought to be secured by crypto ciphers, but they contain similarities. The originality of the works emerges in line with the perception of a unique, changeless work. The fact that it gives identity to works of art with crypto codes and that it is possible with blockchain technology reflects the representative status of works of art. In this regard, the uniqueness of works of art is raised, as is the need to update within the framework of morals and rules (Maria Paula Fernandez, 2019). It is possible to realize objects that are intended to be expressed in the real world as works of art by applying virtual manufacturing techniques. Because of its position in the developed numerical field, the virtual work can be referred to as NFT. The op timum resolution of the manufactured work is also accepted from this stand point. When the works are included in the NFT and linked to codes outside the [DOI: 10.4236/adr.2023.112008](https://doi.org/10.4236/adr.2023.112008) 112 Art and Design Review ----- M. Günay representative area, they are approved as logical values within the scope of the numerical work. Visuals are used to convey the perception of an object within the NFT within the scope of the physical object. ERC 721 or similar technologies are used to convert representative images into NFT values. Visual representa tions of physical works are once again considered works of art. Formats such as JPEG, MP4 or GIF, which are linked to crypto ciphers, reflect a known and unique value verified in this field. The multiverse of NFTs highlights the singularity of two distinct exchange less tokens. Although no two NFTs are exactly alike, they do share a reference to the certificate of authenticity. The unique situation in which the contract, wallet, and virtual resource are linked, which does not have the possibility of unique exchange with crypto passwords, emphasizes the uniqueness of these works yet again. This can also be used to optimize amounts in the digital field. NFTs can be sold to optimum figures because they are available to all masses with an egalitarian platform. The unique perception and rules of the hyper-real lifestyle make abstract perceptions evident day by day. A different expression brings the process to life within the scope of its linguistic uniqueness, the production of the work and its reward. The fact that virtual artworks and NFT sources are viewed as manufactured values raises some concerns (Roose, 2021). 
The evaluation of virtual works is carried out by copying a large number of them within the framework of individual moral values and virtual network rules. Although data transfer and the foundation of exit networks and rules are the basis of artworks, evaluating works of art from this perspective can lead to copyright issues. It is displayed as one of the primary issues that stand out. Because of the high-level copyrights problem that has occurred with music sharing on numerous occasions (Yue, 2011). It appears that moral attitudes and rules in virtual spaces are also structured. Apple, on the other hand, eliminated this situation through the iTunes application by doing so within the legal framework (Yue, 2011). One of the major issues with virtual works is copyright. It is claimed that the net illegal benefits of both music and cinema place the companies in a difficult position. The transfer of the work of art, as well as the perception of the situation as normal, are accepted as a clear reflection of the situation at hand. In this regard, NFT takes a real-world stance. However, there is no valid practice that prohibits the reproduction and use of art structures. The uniqueness of works of art in terms of technology is guaranteed within the framework of NFT perception. Although this situation cannot prevent the works from being produced in a different work, it does highlight its distinct position in terms of the formation of general rules. In other words, its moral perception and rules are unique. It is suggested that the issues of creating a new reality or value be prioritized. This concept implies that reproduced works have no value if there is no concrete approval of the work’s wide range of uses. Although reproduction of the works within the scope of the technology in question is not prohibited in this case, it is stated that an effort has been made to develop the rules at the primary level and emphas [DOI: 10.4236/adr.2023.112008](https://doi.org/10.4236/adr.2023.112008) 113 Art and Design Review ----- M. Günay ize its uniqueness. It also brings moral values and rules with it. There is also a need to address the issue of creating new value in the face of a different reality. This perception also highlights the fact that if the large number of copying situations of the work cannot be determined, the reproduced products have no value and cannot be processed. Unique virtual works are integrated into the NFT, and crypto codes reflect the work’s reality. The transformation of works into virtual value by connecting them with crypto ciphers is at its core. The encrypted structure of virtual products and the certainty of the ownership status also express the non-imitation nature in the NFT field. In this context, it is possible for users to perceive the products uniquely (Roose, 2021). As a result of this perception, the crypto work of artist Mike Winkelmann, also known as Beeple, was sold in an auction for $69.3 million (Goodwin, 2021). While Beeple (Winkelmann) sells the work, which consists of products manufactured in an average of five days, for this price, it is also noted that Winkelmann is also a graphic designer. While he became famous with the sale in question, which was sold at an optimal price, he is also on record in the sense that it is the third work sold with the highest value within the scope of surviving artists (Pittwire, 2021) (Figure 6). A work of art cannot be transferred to a virtual space. 
However, in the field of music, this situation is assessed from a primary standpoint. It is well known that during the Covid-19 period, artists concentrated on various sales methods. In this regard, NFT-like methods are in high demand for transferring works in both the visual and audio fields to online areas. In terms of virtual currency, the view that NFT offers a different way of life is also accepted (Pittwire, 2021) (Figure 7). Aside from the development of a social perception, the evaluation of people’s external conditions and the true values they live by in a critical language, their experiences of the world to which they belong allow the emergence of different moral values and rules. While the social structure is expected to have some reac tion to this perception, Marx also claims that the accepted products emerge from this contrasting situation (Marx & Engels, 2013). Figure 6. Crypto art in the digital world. [DOI: 10.4236/adr.2023.112008](https://doi.org/10.4236/adr.2023.112008) 114 Art and Design Review ----- M. Günay Figure 7. Crypto art in the digital world (Beeple collage). NFT, a value revealed in the digital field, also reflects the rarity of digital cer tificates, revealing its distinct norms and ethical attitude that emerges with the common production of the social structure. The perception that what is manu factured is rare is integrated with the rare ciphers that link this technology to the crypto resource on which it is based. The artwork’s objective composition usual ly points to a safe space based on the smart contracts that are already associated with it through technology. Although this is seen as a result of social differences and perceptions, it suggests that, contrary to popular belief, the artwork is pri marily aimed at direct access to users rather than auctions or galleries. At the heart of NFT is a reliable function for protecting contract crypto passwords, wallet passwords, and virtual objects in the connection. In other words, crypto artwork grants indefinite ownership of the records to which virtual certificates are attached. In order to detect this situation, the IPFS protocol, in addition to the online sites presented in the form of HTTP applications, is essential. IPFS data that is not based on a central location attracts attention (Franceschet et al., 2019). There is a system that operates through data transfer within the framework of networks. In other words, the perception of using multiple centers is based on not preferring to take data from a specific location. In this regard, IPFS reflects a structure that stores online sites, documents, applications, and information, and transfers this data to allow access to this data. Crypto art is created by incorporating a unique blockchain structure trans ferred via the IPFS system into works of art. An NFT-oriented track assists in the creation of a crypto work. When an NFT is created and transferred to a specific platform, a unique code is generated in Ethereum technology and linked to the artist’s cryptography via a unique virtual signature. IPFS is used to transfer vir tual resources that are integrated with sites or galleries. In this regard, IPFS is regarded as a virtual wallet. Although the proportion of information connected by IPFS remains constant, it has a permanent structure. Although IPFS is the perfect and limitless transfer in this technology, it also ensures the uniqueness of [DOI: 10.4236/adr.2023.112008](https://doi.org/10.4236/adr.2023.112008) 115 Art and Design Review ----- M. 
Günay the works through the use of unique codes (Franceschet et al., 2019). The artwork created can be auctioned off or purchased for the specified price. The image of the work is transferred to the system of the individual who acquires the product with the acquisition of the work. In this case, the blockchain technology maintains the connection between the work and the artist, as well as the artist’s solids to the product. Personal competence is supported in this regard by a much more serial function than in the classical art sector. In other words, crypto artists can use NFT to showcase their gallery potential. The virtual work’s ownership right and all the rights that are securely linked are unique, and the codes that cannot be changed are also possible with the technology in question. At the same time, the blockchain system can be used to interact with payment methods on the property (McConaghy et al., 2017). From another perspective, a large-scale data requirement arises because the unique and unchangeable virtual resources created by artists or users will reflect knowledge to be transferred within the framework of online networks. NFT is also viewed as a productive technology reform based on a lifestyle in which the new reality is accepted (Hahn, 2021). The virtual remanufacturing process emphasizes a process in which the object is transferred to virtual space as well as virtual work with web systems. Although this situation is reminiscent of Benjamin’s, it commodifies products with a system that can be traced and accessed in terms of transferring a physical object to the abstract field and becoming a contemporary source (Lotti, 2019). The focus of ERC721 (Binance Academy, 2021a) is on works of art, and it emphasizes the importance of the graphic design function from the creation of virtual works with developed crypto signatures to its creation. While the direct access of the online structure is also based on this effect, it is also reported that the egalitarian structure of data transfer with graphic design resources is integrated, paving the way for the emergence of different manufacturing styles. Web design first appeared in the late 1990s, breaking through the boundaries of classical perception and influencing technological innovations and different lifestyles. The integration of classical manufacturing and opinions on various conditions necessitates graphic designers to display diverse attitudes (Long, 2021). The ability of hyper-real life to convey various and comprehensive messages in terms of graphic design at international standards is also accepted as objective digital realities in all spheres of life with socialization. The web-oriented interaction opportunities created in virtual conditions, as seen in the objects created in three-dimensional space, also bring the economic and social fields to the fore in the form of a different reality lane (Trautman, 2021). The communication-oriented view can be integrated with physical life elements such as socialization, entertainment, and education of the hyperreal lifestyle using integrated interfaces of simulated, continuous moving areas. Graphical interfaces created by online designers are viewed as the home of hyper-real fiction such as avatars and digital platform games. Virtual reality lanes similar to Secon Life bring online sensations to life. This situation is legitimized [DOI: 10.4236/adr.2023.112008](https://doi.org/10.4236/adr.2023.112008) 116 Art and Design Review ----- M. 
Günay in both education and business, and it is confirmed in all areas where the individual element is also relevant (Trautman, 2021). While individuals who are integrated with various configurations where dis tances and time disappear at the universal level spend a significant amount of time in hyper-real social lanes, integrated life forms are also a possibility. Physi cal lives that contain multiple realities with virtual bodies highlight a new di mension in terms of virtual life (Trautman, 2021). It is widely accepted that dy namic systems, typography, and illustration serve as communication devices in the entertainment, online space, and game markets, where visual interaction in the virtual lifestyle is frequently evaluated. The integration of the graphics with the visual interaction function reveals a dynamic structure in the NFT evalua tion. ## 5. Conclusion and Suggestions It can be shown that the prominence of a different development process allows the social lifestyle and opinions to be influenced since the technological structure shapes the industrial processes. Tokens that provide clarity, such as what kind of space the objects cover and the perception of the real state of the existing objects, are also important. Technically speaking, having multiple possibilities also represents the transition of manufacturing mechanisms and devices to a different dimen sion. All at the same, a non-real representation lane with an approved physical structure can be developed and experienced. On the platforms presented, the autonomous lanes and abstract concepts in which different perceptions develop are perceived as reality. With virtual reproduction online technologies, physical objects, like works of art, are transitioning to visual minimization. A blockchain structure created by networks at this level protects reality. This system is regarded as the primary foun dation for the creation of virtual resources in terms of value. The ability to track the virtual resource via NFT, which uses the Ethereum system, ensures the rights of both the work and the artist, and its uniqueness is registered. Crypto art, which is regarded as the representative state of a virtual resource, is also known to confirm that state. Only a representative system can enable a physical art object to transform into a crypto work. The work-produced object reflects that there is a distinct element of reality in this field that cannot be diffe rentiated within the framework of moral values and social rules, as well as the method of personal expression. The dynamic nature of the resources created with the blockchain system, as well as the concern for message transfer, highlights the fact that the works are integrated with graphic design elements. Apart from being a representative ref lection of a dynamic graphic stain, the perception it creates in terms of the mes sage it wishes to convey strongly indicates an important value. It is also stated in this direction that a dynamic NFT icon has a device function that transmits mes sages. The fact that dynamic representative elements can be interpreted as mul [DOI: 10.4236/adr.2023.112008](https://doi.org/10.4236/adr.2023.112008) 117 Art and Design Review ----- M. Günay tiple descriptions and reveal an illustrative perception explains why graphic de sign is preferred. ## Conflicts of Interest The author declares no conflicts of interest regarding the publication of this pa per. ## References Ante, L. (2021). 
Non-Fungible Token (NFT) Markets on the Ethereum Blockchain: Temporal Development, Cointegration and Interrelations. SSRN Electronic Journal, Article [ID: 3904683. https://doi.org/10.2139/ssrn.3904683](https://doi.org/10.2139/ssrn.3904683) [Binance Academy (2021a). ERC-721. https://academy.binance.com/tr/glossary/erc-721](https://academy.binance.com/tr/glossary/erc-721) Binance Academy (2021b). A Comprehensive Understanding of Blockchain and Cryp[tocurrencies. (In Chinese) https://academy.binance.com/](https://academy.binance.com/) Carl Mullan, P. (2016). A History of Digital Currency in the United States (pp. 19-86). Pal[grave Macmillan. https://doi.org/10.1057/978-1-137-56870-0_2](https://doi.org/10.1057/978-1-137-56870-0_2) Castellanos, S. (2017). Ethereum Network Copes with Surge of Activity as Virtual Kitten Game Goes Viral. Chohan, U. (2017). The Decentralized Autonomous Organization and Governance Issues. SSRN Electronic Journal, Article ID: 3082055. [https://doi.org/10.2139/ssrn.3082055](https://doi.org/10.2139/ssrn.3082055) Crow, K., & Ostroff, C. (2021). Beeple NFT Fetches Record-Breaking $69 Million in Christie’s Sale. The Wall Street Journal. Cryptokitties (2021). Cryptokitties.co. Dowling, M. (2021a). Fertile LAND: Pricing Non-Fungible Tokens. Finance Research Let[ters, 44, Article ID: 102096. https://doi.org/10.1016/j.frl.2021.102096](https://doi.org/10.1016/j.frl.2021.102096) Dowling, M. (2021b). Is Non-Fungible Token Pricing Driven by Cryptocurrencies? Finance Research Letters, 44, Article ID: 102097. [https://doi.org/10.1016/j.frl.2021.102097](https://doi.org/10.1016/j.frl.2021.102097) Ethereum (2021). Non-Fungible Tokens (NFT). [https://ethereum.org/en/nft/#environmental-impact-nfts](https://ethereum.org/en/nft/#environmental-impact-nfts) Farell, R. (2015). An Analysis of the Cryptocurrency Industry. Wharton Research Scholars. Financial Action Task Force (FATF) (2014). Virtual Currencies-Key Definitions and Potential AML/CFT Risks. [https://www.fatf-gafi.org/content/dam/fatf-gafi/reports/Virtual-currency-key-definitio](https://www.fatf-gafi.org/content/dam/fatf-gafi/reports/Virtual-currency-key-definitions-and-potential-aml-cft-risks.pdf.coredownload.pdf) [ns-and-potential-aml-cft-risks.pdf.coredownload.pdf](https://www.fatf-gafi.org/content/dam/fatf-gafi/reports/Virtual-currency-key-definitions-and-potential-aml-cft-risks.pdf.coredownload.pdf) Franceschet, M., Colavizza, G., Smith, T., Finucane, B., Ostachowski, M. L., Scalet, S., Perkins, J., Morgan, J., Hernández, S. (2019). Crypto Art: A Decentralized View. Leonardo, [54, 402-405. https://doi.org/10.1162/leon_a_02003](https://doi.org/10.1162/leon_a_02003) Gombrich, E. H. (1986). The Story of Art. Remzi Bookstore Publications. Goodwin, J. (2021). What Is an NFT? Non-Fungible Tokens Explained. [https://edition.cnn.com/2021/03/17/business/what-is-nft-meaning-fe-series/index.html](https://edition.cnn.com/2021/03/17/business/what-is-nft-meaning-fe-series/index.html) Hahn, J. (2021). NFTs Will Usher in a “Creative and Artistic Renaissance” Say Designers. [https://www.dezeen.com/2021/04/09/nfts-impact-design](https://www.dezeen.com/2021/04/09/nfts-impact-design) Hashmasks (2021). Nedir Bu Hashmasks Çılgınlığı? Koinbulteni.com. [https://koinbulteni.com/hashmasks-nedir-92819.html](https://koinbulteni.com/hashmasks-nedir-92819.html) [DOI: 10.4236/adr.2023.112008](https://doi.org/10.4236/adr.2023.112008) 118 Art and Design Review ----- M. Günay Hughes, E. (1993). A Cypherpunk’s Manifesto. 
https://www.activism.net/cypherpunk/manifesto.html

International Monetary Fund (IMF) (2016). Virtual Currencies and Beyond: Initial Considerations. https://www.imf.org/external/pubs/ft/sdn/2016/sdn1603.pdf

Kharif, O. (2017). CryptoKitties Mania Overwhelms Ethereum Network's Processing.

Kshetri, N. (2021). Blockchain and Supply Chain Management. Elsevier.

Long, M. (2021). What Are NFTs and Should Designers Be Thinking about Crypto Art? https://www.designweek.co.uk/issues/15-21-march-2021/

Lotti, L. (2019). The Art of Tokenization: Blockchain Affordances and the Invention of Future Milieus. Rethinking Affordance, 1, 287-320.

Maria Paula Fernandez, G. S. (2019). There Is No Such Thing as Blockchain Art: A Report on the Current Status of the Intersection of Blockchain and Art.

Marx, K., & Engels, F. (2013). German Ideology. Evrensel Press Publishing.

McConaghy, M., McMullen, G., Parry, G., McConaghy, T., & Holtzman, D. (2017). Visibility and Digital Art: Blockchain as an Ownership Layer on the Internet. Strategic Change, 26, 461-470. https://doi.org/10.1002/jsc.2146

Nadini, M., Alessandretti, L., Di Giacinto, F., Martino, M., Aiello, L. M., & Baronchelli, A. (2021). Mapping the NFT Revolution: Market Trends, Trade Networks and Visual Features. Scientific Reports, 11, Article No. 20902. https://doi.org/10.1038/s41598-021-00053-8

Özrili, Y. (2021). The Museum of Inexistance: Crypto Art. Journal of Tourism Studies, 3, 6.

Pittwire (2021). Crypto Art's Grand Entrance. https://www.pittwire.pitt.edu/news/what-nft-pitt-experts-explain-digital-tokens

Roose, K. (2021). Buy This Column on the Blockchain! Why Can't a Journalist Join the NFT Party, too? The New York Times. https://www.nytimes.com/2021/03/24/technology/nft-column-blockchain.html

Trautman, L. J. (2021). Virtual Art and Non-Fungible Tokens. SSRN Electronic Journal, Article ID: 3814087. https://doi.org/10.2139/ssrn.3814087

Uçak, O. (2021). Towards a Single Culture in Intercultural Communication: Digital Culture. Communication and Technology Congress (p. 13).

Wang, Q., Li, R., Wang, Q., & Chen, S. (2021). Non-Fungible Token (NFT): Overview, Evaluation, Opportunities and Challenges. arXiv: 2105.07447.

Yue, X. (2011). The Music Industry in the Social Networking Era (p. 9). MSc. Thesis, Michigan State University.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.4236/adr.2023.112008?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.4236/adr.2023.112008, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GOLD", "url": "http://www.scirp.org/journal/PaperDownload.aspx?paperID=125002" }
2023
[ "JournalArticle" ]
true
null
[ { "paperId": "5cbc70102a4a2ab47cdbe7c112b3a48afcc76b02", "title": "Non-fungible token (NFT) markets on the Ethereum blockchain: temporal development, cointegration and interrelations" }, { "paperId": "f8a30a36507374efc512399f027b0fc2ed799cdc", "title": "Mapping the NFT revolution: market trends, trade networks, and visual features" }, { "paperId": "2a4baffe3913c6c2eb576c78de2e89104a1e2e1c", "title": "Virtual Art and Non-fungible Tokens" }, { "paperId": "75a6555861143f2c54c81c969f285f19c3f35f89", "title": "Is Non-fungible Token Pricing Driven by Cryptocurrencies?" }, { "paperId": "bb29266763b2681e38baab0d6c9c71b21ab9fb9c", "title": "Fertile LAND: Pricing Non-Fungible Tokens" }, { "paperId": "c7a49ebe7f7863d71f6e12074e0f420ade9e4835", "title": "Crypto Art: A Decentralized View" }, { "paperId": "a2e0160e18e67ec9e1a2cd62cbf5059bfe103525", "title": "The Decentralized Autonomous Organization and Governance Issues" }, { "paperId": "5b73bef66e267729aa80e4bb574367f637a35755", "title": "Visibility and digital art: Blockchain as an ownership layer on the Internet" } ]
11591
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01f319ed161af23521310a847ede7f773379ca46
[ "Computer Science" ]
0.881774
Population protocols with unreliable communication
01f319ed161af23521310a847ede7f773379ca46
Algorithmic Aspects of Wireless Sensor Networks
[ { "authorId": "51212315", "name": "Mikhail A. Raskin" } ]
{ "alternate_issns": null, "alternate_names": [ "Algorithmic Asp Wirel Sens Netw", "ALGOSENSORS" ], "alternate_urls": null, "id": "bf10ccf4-ae66-43c0-a09c-ad469e3621fa", "issn": null, "name": "Algorithmic Aspects of Wireless Sensor Networks", "type": "conference", "url": "http://www.wikicfp.com/cfp/program?id=136" }
Population protocols are a model of distributed computation intended for the study of networks of independent computing agents with dynamic communication structure. Each agent has a finite number of states, and communication opportunities occur nondeterministically, allowing the agents involved to change their states based on each other's states. Multiple variations of that model have been studied. In most of them the situation of temporary impossibility of communication between some agents is natural. On the other hand, the models usually assume atomic interactions, i.e. either all the agents update their state or none do. In practice, ensuring that in case of a communication problem an interaction is recognised as successful either by all participants or by nobody has performance and implementation complexity costs. In the present paper we study unreliable models based on population protocols and their variations from the point of view of expressive power. We model the effects of non-atomic interaction. We show that for a general definition of unreliable protocols with constant-storage agents such protocols can only compute predicates computable by immediate observation population protocols. Immediate observation population protocols are inherently tolerant of unreliable communication and keep their expressive power under a wide range of fairness conditions. We prove it via a structural lemma that can also be applied for other settings requiring guaranteed eventual correctness. We also prove that adding unreliability reduces expressive power non-monotonically, and show that a large class of message-based models becomes strictly less expressive than immediate observation.
## Population protocols with unreliable communication[∗]

#### Mikhail Raskin
#### raskin@mccme.ru, raskin@in.tum.de
#### Department of Computer Science, TU Munich

December 28, 2021

Abstract

Population protocols are a model of distributed computation intended for the study of networks of independent computing agents with dynamic communication structure. Each agent has a finite number of states, and communication occurs nondeterministically, allowing the involved agents to change their states based on each other's states. In the present paper we study unreliable models based on population protocols and their variations from the point of view of expressive power. We model the effects of message loss. We show that for a general definition of protocols with unreliable communication between constant-storage agents, such protocols can only compute predicates computable by immediate observation (IO) population protocols (sometimes also called one-way protocols). Immediate observation population protocols are inherently tolerant to unreliable communication and keep their expressive power under a wide range of fairness conditions. We also prove that a large class of message-based models that are generally more expressive than IO becomes strictly less expressive than IO in the unreliable case.

Keywords. population protocols - message loss - expressive power

### 1 Introduction

Population protocols have been introduced in [1, 2] as a restricted yet useful subclass of general distributed protocols. In population protocols each agent has a constant amount of local storage, and during the protocol execution pairs of agents are selected and permitted to interact. The selection of pairs is assumed to be done by an adversary bound by a fairness condition. The fairness condition ensures that the adversary cannot trivially stall the protocol. A typical fairness condition requires that every configuration that stays reachable during an infinite execution is reached infinitely many times.

∗The project has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme under grant agreement No 787367 (PaVeS)

Population protocols have been studied from various points of view, such as expressive power [5], verification complexity [19], time to convergence [3, 17], privacy [13], impact of different interaction scheduling [10], etc. Multiple related models have been introduced. Some of them change or restrict the communication structure: this is the case for immediate, delayed, and queued transmission and observation [5], as well as for broadcast protocols [18]. Some explore the implications of adding limited amounts of storage (below the usual linear or polynomial storage permitted in traditional distributed protocols): this is the case for community protocols [23] (which allow an agent to recognise a constant number of other agents), PALOMA [11] (permitting a logarithmic amount of local storage), mediated population protocols [26] (giving some constant amount of common storage to every pair of agents), and others.

The original target application of population protocols and related models is modelling networks of restricted sensors, starting from the original paper [1] on population protocols. On the other hand, verifying distributed algorithms benefits from translating the algorithms in question or their parts into a restricted setting, as most problems are undecidable in the unrestricted case.
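To ground the model before the fault-tolerance discussion, here is a minimal Python sketch (ours, purely illustrative and not from the paper) of a classic population protocol: the threshold protocol for "at least C agents are present", run under a random scheduler. The constant C, the state encoding, and the use of random pair selection as a stand-in for the fair adversary are all assumptions of the sketch; for protocols of this kind, random scheduling yields fair executions with probability 1.

```python
import random

# Illustrative sketch (not the paper's code): a classic population protocol
# computing the threshold predicate "number of agents >= C".
# Each agent starts with the token value 1; an interaction moves one agent's
# tokens to the other, capped at C; an agent holding C tokens converts
# every agent it meets, so the positive answer spreads.

C = 4

def interact(a, b):
    if a == C or b == C:
        return C, C                # spread the "threshold reached" flag
    total = min(a + b, C)
    return total, a + b - total    # accumulate tokens in one agent

def run(n_agents, steps=10_000, seed=0):
    rng = random.Random(seed)
    states = [1] * n_agents
    for _ in range(steps):
        i, j = rng.sample(range(n_agents), 2)  # scheduler picks a pair
        states[i], states[j] = interact(states[i], states[j])
    return [s == C for s in states]            # each agent's individual output

print(run(3))  # all False: 3 agents < C, so the sum of tokens never reaches C
print(run(5))  # all True (with overwhelming probability): 5 agents >= C
```

A fair adversary could of course schedule the pairs differently; the point of the fairness condition discussed above is precisely to exclude schedulers that starve the interactions needed for convergence.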
Both applications motivate the study of fault tolerance. Some papers on population protocols and related models [12, 23, 4, 24] consider questions of fault tolerance, but in the context of expressive power the fault is typically expected to be either a total agent failure or a Byzantine failure. There are some exceptions, such as a study of fine-grained notions of unreliability [15, 14] in the context of step-by-step simulation of population protocols by distributed systems with binary interactions. However, these studies answer a completely different set of questions, as they are concerned with simulating a protocol as a process, as opposed to designing a protocol that achieves a given result by whatever means.

In a practical context, many distributed algorithms pay attention to a specific kind of failure: message loss. While the eventual-convergence approach typical in the study of population protocols escapes the question of availability during a temporary network partition (the problem studied, for example, in [22]), the onset of a network partition may include message loss in the middle of an interaction. In such a situation the participants do not always agree on whether the interaction has succeeded or failed. In terms of population protocols, one of the agents assumes that an interaction has happened and updates the local state, while a counterparty thinks the interaction has failed and keeps the old state.

In the present paper we study the expressive power of a very wide class of models with interacting constant-storage agents when unreliability of communication is introduced. This unreliability corresponds to the loss of atomicity of interactions due to message loss. Indeed, in distributed systems ensuring that both sides agree on whether an interaction has taken place is often the costliest part; a special case of this is "exactly-once" message arrival, known to be much more complex to ensure than "at most once". We model such loss of atomicity by allowing some agents to update their state based on an interaction, while other agents keep their original state because they assume the interaction has failed. For a bit more generality, corresponding, for example, to request-response interactions where the response is impossible if the request is lost, we allow requiring that some agents can only update their state if the others do.

We consider the expressive power in the context of computing predicates by protocols with eventual convergence of individual opinions. We show that under very general conditions the expressive power of protocols with unreliable communication coincides with the expressive power of immediate observation population protocols. Immediate observation population protocols, modelling interactions where an agent can observe the state of another one without the observee noticing, provide a model that inherently tolerates unreliability and is considered a relatively weak model in the fully reliable case. This model also has other nice properties, such as the relatively low complexity (PSPACE-complete) of verification tasks [21]. Our results hold under any definition of fairness satisfying two general assumptions (see Definition 10), including all the usually used versions of fairness.

We prove this by observing a general structural property shared by all protocols with unreliable communication.
Informally speaking, protocols with unreliable communication have some special fair executions which can be extended by adding an additional agent with the same initial and final state as a chosen existing one. This property is similar to the copycat arguments used, for example, for proving the exact expressive power of immediate observation protocols. The usual structure of a copycat argument includes a proof that we can pick an agent in an execution and add another agent (the copycat) which repeats all the state transitions of the chosen one. In the immediate observation case the corresponding property is almost self-evident once defined. A slightly stronger but still straightforward argument is needed in the case of reconfigurable broadcast networks [8]. The latter model is equivalent to unreliable broadcast networks: a sender broadcasts a message and changes its local state, and an arbitrary set of receivers react to the message (immediately). However, unlike all the previous uses of copycat-like arguments in the context of population protocols and similar models, proving the necessary copycat-like property for a general notion of protocols with unreliable communication (sufficient to handle the asymmetry of message loss, where loss for the sender requires loss for the receiver) requires careful analysis using different techniques.

Note that although the natural way to design population protocols for our setting involves the use of immediate observation population protocols, we still need to rule out additional opportunities arising from the fact that eventually a two-agent interaction with both agents correctly updated will happen. However, in contrast to self-stabilising protocols [16, 6], the protocols cannot rely on message loss being absent for an arbitrarily long time.

Surprisingly, asynchronous transmission and receipt of messages, which provides more expressive power than immediate observation population protocols in the reliable setting, turns out to have strictly less expressive power in the unreliable setting. Note that message reordering is allowed already in the reliable setting, while unreliability is essentially a generalisation of message loss. One could say that an unbounded delay in message delivery becomes a liability instead of an asset once there is message loss.

The rest of the present paper is organised as follows. First, in Section 2 we define a general protocol framework generalising many previously studied approaches. Then in Section 3 we summarise the results from the literature on the expressive power of various models covered by this framework. Afterwards, in Section 4 we formally define our general notion of a protocol with unreliable communication. Then in Section 5 we formalise the common limitation of all the protocols with unreliable communication, and provide the proof sketches of this restriction and the main result. Afterwards, in Section 6 we show that fully asynchronous (message-based) models become strictly less powerful than immediate observation in the unreliable setting. The paper ends with a brief conclusion and some possible future directions.

#### 1.1 Main results (preview)

The precise statements of our results require the detailed definitions introduced later. However, we can roughly summarise them as follows. First, we characterise the expressive power of all fixed-memory protocols given unreliable communication.

Proposition 1.
Adding unreliability of communication to population protocols restricts the predicates they can express to boolean combinations of comparisons of arguments with constants. This is the same expressive power as that of immediate observation protocols.

Next we show that unreliability changes the expressive power non-monotonically for some natural classes.

Proposition 2. Queued transmission protocols with unreliable communication are strictly less expressive than immediate observation population protocols (with or without unreliable communication).

Note that without unreliability, queued transmission protocols are strictly more expressive than immediate observation population protocols.

### 2 Basic definitions

#### 2.1 Protocols

We consider various models of distributed computation where the number of agents is constant during protocol execution, each agent has a constant amount of local storage, and agents cannot distinguish each other except via their states. We provide a general framework for describing such protocols. Note that we omit some very natural restrictions (such as decidability of correctness of a finite execution) because they are irrelevant for the problems we study. We allow agents to be distinguished and tracked individually for the purposes of analysis, even though they cannot identify each other during the execution of the protocol.

We will use the following problem to illustrate our definitions: the agents have states q0 and q1 corresponding to input symbols 0 and 1 and aim to find out whether all the agents have the same input. They have an additional state q⊥ to represent the observation that both input symbols were present. We will define four protocols for this problem using different communication primitives.

- Two agents interact and both switch to q⊥ unless they are in the same state (population protocol interaction).
- An agent observes another agent and switches to q⊥ if they are in different states (immediate observation).
- An agent can send a message with its state, q0, q1 or q⊥. An agent in a state q0 or q1 can receive a message (any of the pending messages, regardless of order); the agent switches to q⊥ if the message contains a state different from its own (queued transmission).
- An agent broadcasts its state without changing it; each other agent receives the broadcast simultaneously and switches to q⊥ if its state is different from the broadcast state (broadcast protocol interaction).

Definition 1. A protocol is specified by a tuple (Q, M, Σ, I, o, Tr, Φ), with the components being a finite nonempty set Q of (individual agent) states, a finite (possibly empty) set M of messages, a finite nonempty input alphabet Σ, an input mapping function I : Σ → Q, an individual output function o : Q → {true, false}, a transition relation Tr (which is described in more detail below), and a fairness condition Φ on executions.

The protocol defines the evolution of populations of agents (possibly with some message packets being present).

Definition 2. A population is a pair of sets: A of agents and P of packets. A configuration C is a population together with two functions: CA : A → Q provides agent states, and CP : P → M provides packet contents. Note that if M is empty, then P must also be empty.

As the set of agents is the domain of the function CA, we use the notation Dom(CA) for it. The same goes for the set of packets Dom(CP). Without loss of generality Dom(CP) is a subset of a fixed countable set of possible packets.
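The following short sketch (again ours and merely illustrative; the names Configuration and q_bot are assumptions, not the paper's notation) renders Definition 2 together with the first two example protocols as data structures and update rules; the output function o would map q_bot to false and the two input states to true.

```python
from dataclasses import dataclass, field

# Illustrative rendering (not the paper's code) of Definition 2 and of the
# first two example protocols for the "all agents have the same input"
# problem. Agent identities exist only for analysis, as noted in the text.

@dataclass
class Configuration:
    agent_states: dict                                   # C_A : A -> Q
    packet_contents: dict = field(default_factory=dict)  # C_P : P -> M

def rendezvous(cfg, a1, a2):
    """Population protocol interaction: both agents take part and both
    switch to q_bot when their states differ."""
    if cfg.agent_states[a1] != cfg.agent_states[a2]:
        cfg.agent_states[a1] = cfg.agent_states[a2] = "q_bot"

def observe(cfg, observer, observed):
    """Immediate observation: only the observer changes state, so a 'failed'
    update is indistinguishable from no interaction at all."""
    if cfg.agent_states[observer] != cfg.agent_states[observed]:
        cfg.agent_states[observer] = "q_bot"

cfg = Configuration({1: "q0", 2: "q1", 3: "q0"})
observe(cfg, 1, 2)       # agent 1 sees a differing input
print(cfg.agent_states)  # {1: 'q_bot', 2: 'q1', 3: 'q0'}
```

The comment in observe already hints at why immediate observation protocols are unaffected by the unreliability introduced in Definition 13 below.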
The message packets are only used for asynchronous communication; instant interaction between agents (such as in the classical rendezvous-based population protocols or in broadcast protocols) does not require describing the details of communication in the configurations.

Example 1. The four example protocols have the same set of states Q = {q0, q1, q⊥}. The first two protocols have the empty set of messages, and the last two have the set of messages M = {m0, m1, m⊥}. The example protocols all have the same input alphabet Σ = {0, 1}, input mapping I : i ↦ qi, and output mapping o : q0 ↦ true, q1 ↦ true, q⊥ ↦ false.

The definition of the transition relation uses the following notation.

Definition 3. For a function f and x ∉ Dom(f), let f ∪ {x ↦ y} denote the function g defined on Dom(f) ∪ {x} such that g|Dom(f) = f and g(x) = y. For u ∈ Dom(f), let f[u ↦ v] denote the function h defined on Dom(f) such that h|Dom(f)\{u} = f|Dom(f)\{u} and h(u) = v. For symmetry, if w = f(u), let f \ {u ↦ w} denote the restriction f|Dom(f)\{u}. Use of this notation implies an assertion of correctness, i.e. x ∉ Dom(f), u ∈ Dom(f), and w = f(u). We use the same notation with a configuration C instead of a function if it is clear from context whether CA or CP is modified.

Now we can describe the transition relation that tells us which configurations can be obtained from a given one via a single interaction. In order to cover broadcast protocols we define the transition relation as a relation on configurations. The restrictions on the transition relation ensure that the protocol behaves like a distributed system with an arbitrarily large number of anonymous agents.

Definition 4. The transition relation of a protocol is a set of triples (C, A⊙, C′), called transitions, where C and C′ are configurations and A⊙ ⊂ Dom(CA) is the set of active agents (of the transition); the agents in Dom(CA) \ A⊙ are called passive. We write C --A⊙--> C′ for (C, A⊙, C′) ∈ Tr, and let C → C′ denote the projection of Tr: C → C′ ⇔ ∃A⊙ : C --A⊙--> C′. The transition relation must satisfy the following conditions for every transition C --A⊙--> C′:

- Agent conservation. Dom(CA) = Dom(C′A).
- Agent and packet anonymity. If hA and hP are bijections such that DA = CA ∘ hA, D′A = C′A ∘ hA, DP = CP ∘ hP, and D′P = C′P ∘ hP, then D --hA⁻¹(A⊙)--> D′.
- Possibility to ignore extra packets. For every p ∉ Dom(CP) ∪ Dom(C′P) and m ∈ M: C ∪ {p ↦ m} --A⊙--> C′ ∪ {p ↦ m}.
- Possibility to add passive agents. For every agent a ∉ Dom(CA) and q ∈ Q there exists q′ ∈ Q such that C ∪ {a ↦ q} --A⊙--> C′ ∪ {a ↦ q′}.

Informally speaking, the active agents are the agents that transmit something during the interaction. The passive agents can still observe other agents and change their state. The choice of active agents is used for the definition of protocols with unreliable communication, as a failure to transmit precludes success of reception. The formal interpretation will be provided in Definition 13.

Many models studied in the literature have the transition relation defined using pairwise interaction. In these models the transitions always change the states of two agents based on their previous states. When discussing such protocols, we will use the notation (p, q) → (p′, q′) for a transition where agents in the states p and q switch to states p′ and q′, correspondingly.

Example 2.
The four example protocols have the following transition relations.

- In the first protocol, for a configuration C and two agents a, a′ ∈ Dom(CA) such that CA(a) ≠ CA(a′) we have C --{a, a′}--> C[a ↦ q⊥][a′ ↦ q⊥] (in other notation, (C, {a, a′}, C[a ↦ q⊥][a′ ↦ q⊥]) ∈ Tr).
- In the second protocol, for a configuration C and two agents a, a′ ∈ Dom(CA) such that CA(a) ≠ CA(a′) we have C --{a}--> C[a ↦ q⊥]. We can say that a observes a′ in a different state and switches to q⊥.
- In the third protocol there are two types of transitions. Let a configuration C be fixed. For an agent a ∈ Dom(CA), i ∈ {0, 1, ⊥} such that CA(a) = qi, and a new message identity p ∉ Dom(CP), we have C --{a}--> C ∪ {p ↦ mi} (sending a message). If CA(a) = qi for some i ∈ {0, 1}, then for each packet p ∈ Dom(CP) we also have C --{a}--> C[a ↦ q′] \ {p ↦ CP(p)}, where q′ is equal to qi if CP(p) = mi and to q⊥ otherwise (receiving a message).
If agent a has state q = CA(a), packet p contains {a} message m = CP (p) and ((q, m), q[′])) ∈ δr, agent a can receive the message: C −−→ C[a �→ q[′]] \ {p �→ m}. Delayed transmission protocol is a queued transmission protocol where every message can always be received by every agent, i.e. the projection of δr to Q × M is the entire Q × M . Delayed observation protocol is a delayed transmission protocol where sending a message doesn’t change state, i.e. (q, (q[′], m)) ∈ δs implies q = q[′]. As the last example, we consider broadcast protocols [18]. 7 ----- Definition 9. Broadcast protocol is defined by two relations: δs ⊆ Q × Q describing a sender transition, and δr ⊆ (Q × Q) × Q. To perform a transition from a configuration C, we pick an agent a ∈ Dom(CA) with state q and change its state to q[′] such that (q, q[′]) ∈ δs. At the same time, we simultaneously update the state of all other agents, in such a way that an agent in state qj can switch to any state qj[′] [such that] ((qj, q), qj[′] [)][ ∈] [δ][r][.] We consider the transmitting agent to be the only active one. Remark 1. In the literature, the relations δ, δs, δr and δs are sometimes required to be partial functions. As we use relations in the general case, we use relations here for consistency. #### 2.3 Fair executions In this section we define the notion of fairness. This notion is traditionally used to exclude the most pathological cases without a complete probabilistic analysis of the model. For the population protocols fairness has been a part of the definition since the introduction [1, 2]. However, in the general study of distributed computation there has long been some interest in comparing effects of different approaches to fairness in execution scheduling [7]. For example, the distinction between weak fairness and strong fairness and the conditions where one can be made to model the other has been studied in [25]. The difference between weak and strong scheduling is that strong fairness executes infinitely often every interaction that is enabled infinitely often, while weak fairness only guarantees anything for continuously enabled interactions. As there are multiple notion of fairness in use, we define their basic common traits. Our results hold for all notions of fairness satisfying these basic requirements, including all the notions of fairness used in the literature, as well as much stronger and much weaker fairness conditions. Definition 10. An execution is a sequence (finite or infinite) Cn of configurations such that at each moment i either nothing changes, i.e. Ci = Ci+1 or a single interaction occurs, i.e. Ci → Ci+1. A configuration C[′] is reachable from configuration C if there exists an execution C0, . . ., Cn with C0 = C and Cn = C[′] (and unreachable otherwise). A protocol defines a fairness condition Φ which is a predicate on executions. It should satisfy the following properties. - A fairness condition is eventual, i.e. every finite execution can be continued to an infinite fair execution. - A fairness condition ensures activity, i.e. if an execution contains only configuration C after some moment, only C itself is reachable from C. Definition 11. The default fairness condition accepts an execution if every configuration either becomes unreachable after some moment, or occurs infinitely many times. Example 3. The example protocols use the default fairness condition. It is clear that the default fairness condition ensures activity. 8 ----- Lemma 1 (adapted from [5]). Default fairness condition is eventual. 
Proof. Consider a configuration after a finite execution. Then there is a countable set of possible configu rations (note that the set of potential packets is at most countable). Consider an arbitrary enumeration of configurations that mentions each configuration infinitely many times. We repeat the following procedure: skip unreachable configurations in the enumeration, then perform the transitions necessary to reach the next reachable one. If we skip a configuration, it can never become reachable again. Therefore all the configurations that stay reachable infinitely long are never skipped and therefore they are reached infinitely many times. The fairness condition is sometimes said to be an approximation of probabilistic behaviour. In our general model the default fairness condition provides executions similar to random ones for protocols without messages but not always for protocols with messages. The arguments from [20] with minimal modification prove this. The core idea in the case without messages is observing we have a finite state space reachable from any given configuration; a random walk eventually gets trapped in some strongly connected component, visiting all of its states infinitely many time. If we do have messages, the message count might behave like a biased random walk; while consuming all the messages stays possible in principle, with probability one it only happens a finite number of times. #### 2.4 Functions implemented by protocols In this section we recall the standard notion of a function evaluated by a protocol. Here the standard definition generalises trivially. Definition 12. An input configuration is a configuration where there are no packets and all agents are in input states, i.e. P = ∅ and Im(CA) ⊆ Im(I) where Im denotes the image of a function. We extend I to be applicable to multisets of input symbols. For every x ∈ N[Σ], we define I(x) to be a configuration of |x| agents with [�]I(σ)=qi [x][(][σ][) agents in input state][ q][i][ (and no packets).] A configuration C is a consensus if the individual output function yields the same value for the states of all agents, i.e. ∀a, a[′] ∈ Dom(CA) : o(CA(a)) = o(CA(a[′])). This value is the output value for the configuration. C is a stable consensus if all configurations reachable from C are consensus configurations with the same value. A protocol implements a predicate ϕ : N[Σ] →{true, false} if for every x ∈ N[Σ] every fair execution starting from I(x) reaches a stable consensus with the output value ϕ(x). A protocol is well-specified if it implements some predicate. Example 4. It is easy to see that each of the four example protocols implements the predicate ϕ(x) ⇔ (x(0) = 0) ∨ (x(1) = 0) on N[2]. In other words, the protocol accepts the input configurations where one of the two input states has zero agents and rejects the configurations where both input states occur. 9 ----- This framework is general enough to define the models studied in the literature, such as population pro tocols, immediate transmission protocols, immediate observation population protocols, delayed transmission protocols, delayed observation protocols, queued transmission protocols, and broadcast protocols. ### 3 Expressive power of population protocols and related models In this section we give an overview of previously known results on expressive power of various models related to population protocols. We only consider predicates, i.e. 
functions with the output values being true and false because the statements of the theorems become more straightforward in that case. The expressive power of models related to population protocols is expressed in terms of semilinear, coreMOD, and counting predicates. Semilinear predicates on tuples of natural numbers can be expressed using the addition function, remainders modulo constants, and the order relation, such as x + x ≥ y + 3 or x mod 7 = 3. Roughly speaking, coreMOD is the class of predicates that become equivalent to modular equality for inputs with only large and zero components. An example could be (z = 1 ∧ x ≥ y) ∨ (x + y mod 2 = 0), a semilinear predicate which becomes a modular equality whenever z = 0 or z is large (i.e. z ≥ 2). Counting predicates are logical combinations of inequalities including one coordinate and one constant each, for example, x ≥ 3. Theorem 1 (see [5] for details). Population protocols and queued transmission protocols can implement precisely semilinear predicates. Immediate transmission population protocols and delayed transmission protocols can implement precisely all the semilinear predicates that are also in coreMOD. Immediate observation population protocols implement counting predicates. Delayed observation protocols implement the counting predicates where every constant is equal to 1. Theorem 2 (see [9] for details). Broadcast protocols implement precisely the predicates computable in non deterministic linear space. ### 4 Our models #### 4.1 Proposed models We propose a general notion of an unreliable communication version of a protocol. Our notion models transient failures, so the set of agents is preserved. The intuition we formalise is the idea that for every possible transition some agents may fail to update their states (and keep their corresponding old states). We also require that for some passive agent to receive a transmission, the transmission has to occur (and active agents who transmit do not update their state if they fail to transmit, although a successful transmission can still fail to be received). 10 ----- Definition 13. A protocol with unreliable communication, corresponding to a protocol P, is a protocol that A[⊙] differs from P only in the transition relation. For every allowed transition C −−→ C[′] we also allow all the A[⊙] transitions C −−→ C[′′] where C[′′] satisfies the following conditions. - Population preservation. Dom(CA[′′] [) = Dom(][C]A[′] [), Dom(][C]P[′′] [) = Dom(][C]P[′] [).] - State preservation. For every agent a ∈ Dom(CA[′′] [):][ C]A[′′] [(][a][)][ ∈{][C][A][(][a][)][, C]A[′] [(][a][)][}][.] - Message preservation. For every packet p ∈ Dom(CP[′′] [):][ C]P[′′] [(][p][) =][ C]P[′] [(][p][).] - Reliance on active agents. Either for every agent a /∈ A[⊙] we have CA[′′] [(][a][) =][ C][A][(][a][), or for every] agent a ∈ A[⊙] we have CA[′′] [(][a][) =][ C]A[′] [(][a][).] Example 5. - Population protocols with unreliable communication allow an interaction to update the state of only one of the two agents. - Immediate transmission population protocols with unreliable communication allow the sender to update the state with no receiving agents. - Immediate observation population protocols with unreliable communication do not differ from ordinary immediate observation population protocols, because each transition changes the state of only one agent. Failing to change the state means a no-change transition which is already allowed anyway. 
- Queued transmission protocols with unreliable communication allow messages to be discarded with no effect. Note that for delayed observation protocols unreliable communication doesn’t change much, as sending the messages also has no effect. - Broadcast protocols with unreliable communication allow a broadcast to be received by an arbitrary subset of agents. #### 4.2 The main result Our main result is that no class of protocols with unreliable communication can be more expressive than immediate observation protocols. Definition 14. A cube is a subset of N[k] defined by a lower and upper (possibly infinite) bound for each coordinate. A counting set is a finite union of cubes. A counting predicate is a membership predicate for some counting set. Alternatively, we can say it is a predicate that can be computed using comparisons of input values with constants and logical operations. Theorem 3. The set of predicates that can be implemented by protocols with unreliable communication is the set of counting predicates. All counting predicates can be implemented by (unreliable) immediate observation protocols. 11 ----- ### 5 Proof of the main result Our main lemma is generalises of the copycat lemma normally applied to specific models such as immediate observation protocols. The idea is that for every initial configuration there is a fair execution that can be extended to a possibly unfair execution by adding a copy of a chosen agent. In some special cases, for example, broadcast protocols with unreliable communication, a simple proof can be given by saying that if the original agent participates in an interaction, the copy should do the same just before the original without anyone ever receiving the broadcasts from the copy. The copycat arguments are usually applied to models where a similar proof suffices. The situation is more complex for models like immediate transmission protocols with unreliable communication. As a message cannot be received without being sent, the receiver cannot update its state if the sender doesn’t. We present an argument applicable in the general case. Definition 15. Let E be an arbitrary execution of protocol P with initial configuration C. Let a ∈ Dom(CA) be an agent in this execution. Let a[′] ∈/ Dom(CA) be an agent, and C[′] = C ∪{a[′] �→ CA(a)}. A set Ea of executions starting in configuration C[′] is a shadow extension of the execution E around the agent a if the following conditions hold: - removing a[′] from each configuration in any execution in Ea yields E; - for each moment during the execution, there is a corresponding execution in Ea such that a and a[′] have the same state at that moment. The added agent a[′] is a shadow agent, and elements of Ea are shadow executions. A protocol P is shadow permitting if for every configuration C there is a fair execution starting from C that has a shadow extension around each agent a ∈ Dom(CA). Note that the executions in Ea might not be fair even if E is fair. Not all population protocols are shadow-permitting. For example, consider a protocol with one input state q0, additional states q+ and q−, and one transition (q0, q0) → (q+, q−). As the number of agents in the states q+ and q− is always the same, one can’t add a single extra agent going from state q0 to state q+. Lemma 2. All protocols with unreliable communication are shadow-permitting. The intuition behind the proof is the following. We construct a fair execution together with the shadow executions and keep track what states can be reached by the shadow agents. 
The set of reachable states will not shrink, as the shadow agent can always just fail to update. If an agent a tries to move from a state q to a state q[′] not reachable by the corresponding shadow agent in any of the shadow executions, we “split” the shadow execution reaching q: one copy just stays in place, and in the other the shadow agent a[′] takes the place of a in the transition while a keeps the old state. In the main execution there is no a[′] so a participates in the interaction but fails to update. Afterwards we restart the process of building a fair execution. Proof. We construct an execution and the families Ea in parallel, then show that the resulting execution E is fair. We say that a state q is a-reachable after k transitions, if there is an execution in Ea such that a[′] has 12 ----- state q after k transitions. The goal of the construction is to ensure that the set of a-reachable states grows as k increases and contains the state of a after k transitions. Consider an initial configuration C. We build the execution E and its shadow extensions Ea for each a ∈ Dom(CA) step by step. Initially, E = (C) and Ea has exactly one execution, namely (C ∪{a[′] �→ CA(a)}). We pick an arbitrary fair continuation E[∞] starting with E. At each step we extend E = (E0 = C, E1, . . ., Ek) by one configuration and update Ea for each a ∈ Dom(CA). Consider the next configuration in E[∞], which we can denote Ek[∞]+1[. By definition there exists a] A[⊙] set of agents A[⊙] such that Ek[∞] −−→ Ek[∞]+1[. We consider the following cases.] Case 1: For each agent a the state Ek[∞]+1[(][a][) is][ a][-reachable (after][ k][ transitions).] We set Ek+1 = Ek[∞]+1[(][a][) and keep the same][ E][∞][. In other words, we just copy the next transition from] E[∞]. Then for each agent a ∈ Dom(CA) and for each Ea[′] [∈] [E][a] [we set (][E]a[′] [)][k][+1] [=][ E][k][+1] [∪{][a][′][ �→] [(][E]a[′] [)][k][(][a][′][)][}][,] i.e. say that a[′] fails to update its state. Case 2: For each active agent a[⊙] ∈ A[⊙] the state Ek[∞]+1[(][a][⊙][) is][ a][⊙][-reachable, but there is a passive agent] a /∈ A[⊙] such that the state Ek[∞]+1[(][a][) is not][ a][-reachable (after][ k][ transitions).] We construct Ek+1 such that Ek+1(a[⊙]) = Ek[∞]+1[(][a][⊙][) for each active][ a][⊙] [∈] [A][⊙][, and][ E][k][+1][(][a][) =][ E][k][(][a][)] for each passive agent a ∈ Dom(CA) \ A[⊙]. In other words, all the active agents perform the update, but all the passive agents fail to update. The message packets are still consumed or created as if we performed A[⊙] the transition Ek = Ek[∞] −−→ Ek[∞]+1[, i.e. (][E][k][+1][)][P][ = (][E]k[∞]+1[)][P][ . As][ E][∞] [is now not a continuation of][ E][, we] replace E[∞] with an arbitrary fair continuation of our new E. Then for each Ea[′] [∈] [E][a] [we set (][E]a[′] [)][k][+1] [=] Ek+1 ∪{a[′] �→ (Ea[′] [)][k][(][a][′][)][}][ like in the previous case. Also, for each passive agent][ a][ ∈] [Dom(][C][A][)][ \][ A][⊙] [we add a] trajectory Ea[′′] [to][ E][a] [obtained by modifying an existing trajectory][ E]a[′] [∈] [E][a] [such that (][E]a[′] [)][k][(][a][′][) = (][E]a[′] [)][k][(][a][).] We set (Ea[′′][)][k][+1][(][a][′][) =][ E]k[∞]+1[(][a][), and keep everything else the same as in][ E]a[′] [. In other words, we make][ a][′] perform the update that a would perform in E[∞]. Case 3: There is an active agent a ∈ A[⊙] such that the state Ek[∞]+1[(][a][) is not][ a][-reachable (after][ k] transitions). We set (Ek+1)A = (Ek)A, i.e. we say that all the agents fail to update. 
The message packets are still A[⊙] consumed or created as if we performed the transition Ek = Ek[∞] −−→ Ek[∞]+1[, i.e. (][E][k][+1][)][P][ = (][E]k[∞]+1[)][P][ . As] E[∞] is now not a continuation of E, we replace E[∞] with an arbitrary fair continuation of our new E. Then for each Ea[′] [∈] [E][a][ we set (][E]a[′] [)][k][+1][ =][ E][k][+1][ ∪{][a][′][ �→] [(][E]a[′] [)][k][(][a][′][)][}][ (like in the previous two cases). Also, for each] active agent a ∈ A[⊙] we add a trajectory Ea[′′] [to][ E][a] [obtained by modifying an existing trajectory][ E]a[′] [∈] [E][a] such that (Ea[′] [)][k][(][a][′][) = (][E]a[′] [)][k][(][a][). We set (][E]a[′′][)][k][+1][(][a][′][) =][ E]k[∞]+1[(][a][), and keep everything else the same as in][ E]a[′] [.] In other words, we allow a[′] to update its state in the way a would do in E[∞]. We now prove that the above construction is always correctly defined and yields a fair execution E together with shadow extensions around each agent. A[⊙] First we show that we always continue E in a valid way, i.e. Ek −−→ Ek+1. In the first case it is true by construction as Ek = Ek[∞] [and][ E][k][+1][ =][ E]k[∞]+1[. In the second and the third case, we modify the states of some] A[⊙] agents in the second configuration of a valid transition Ek[∞] −−→ Ek[∞]+1 [by assigning them the states from the] 13 ----- first configuration. Such changes clearly cannot violate population preservation and message preservation. State preservation is satisfied because we replace the agent’s state in the second configuration with the state from the first configuration. The case split between the cases 2 and 3 ensures reliance on active agents; we either make sure that all the active agents update their state, or none of them. Therefore, all the conditions of the Definition 13 are satisfied and the changed transition is also present in the protocol with unreliable communication. As the updated execution E is a valid finite execution, we can find a fair continuation E[∞] as the fairness condition is eventual. When we extend the executions in the shadow extensions by repeating the same state, we just use possibility to add passive agents to add a[′] to the valid transition from E, then observe that making a passive agent fail to update is always allowed in an protocol with unreliable communication. When we add new trajectories in cases 2 and 3, we use possibility to add passive agents to add a[′] to the valid transition from E, then we use agent anonymity to swap the state changes of a and a[′], then we use unreliability to make the (passive) agent a fail to update the state, as well as either all the passive or all the agents from Dom(CA). So far we know that the construction can be performed and yields a valid execution E and some valid executions in each Ea. Now we check that each Ea is a shadow extension around a, and E is fair. We observe that our construction indeed only increases the set of a-reachable states as the number of transitions grows. Furthermore, at each step either agent a moves to an a-reachable state, or a stays in an a-reachable state, thus Ea is indeed a shadow extension around the agent a. Whenever the fair continuation E[∞] is changed, for at least one agent a the set of a-reachable states strictly increases. As the set of agents is finite and cannot change by agent conservation, and the set of states is finite, all but a finite number of steps correspond to the case 1. Therefore from some point on E[∞] does not change and E coincides with it, and therefore E is fair. 
This concludes the proof of the lemma. We also use a straightforward generalisation of the truncation lemma from [5]. The lemma says that all large amounts of agents are equivalent for the notion of stable consensus. Definition 16. A protocol is truncatable if there exists a number K such that for every stable consensus adding an extra agent with a state q that is already represented by at least K other agents yields a stable consensus. Lemma 3 (adapted from [5]). All protocols (not necessarily with unreliable communication) are truncatable. Proof. Every configuration can be summarised by an element of N[Q][∪][M] (each state is mapped to the number of agents in this state, each message is mapped to the number of packets with this message). In other words, we can forget the identities and consider the multiset of states and messages. If a configuration is a consensus (correspondingly, stable consensus), all the configurations with the same multiset of states and messages are 14 ----- also consensus configurations (correspondingly, stable consensus configurations). The set ST of elements of N[Q][∪][M] not representing stable consensus configurations is upwards closed, because reaching a state with a different local output value cannot be impeded by adding agents or packets. Indeed, if we can reach a configuration CST with some state q present, we can always use addition of passive agents to each transition of the path and still have a path of valid transitions from a larger configuration to some configuration C[∗] ST with state q still present. By possibility to ignore extra packets, we can also allow additional packets in the initial configuration. By Dickson’s lemma, the set ST of non-stable-consensus state multisets has a finite set of minimal elements ST min. We can take K larger than all coordinates of all minimal elements. Then adding more agents with the state that already has at least K agents leads to increasing a component larger than K in the multiset of states. This cannot change any component-wise comparisons with multisets from ST min, and therefore belonging to ST and being or not a stable consensus. Remark 2. A specific bound on the truncation threshold K can be obtained using the Rackoff’s bound for the size of configuration necessary for covering in general Vector Addition Systems [27]. Lemma 4. If a predicate ϕ can be implemented by a shadow-permitting truncatable protocol, then ϕ is a counting predicate. Proof. Let K be the truncation constant. We claim that ϕ can be expressed as a combination of threshold predicates with thresholds no larger than |Q| × K. More specifically, we prove an equivalent statement: adding 1 to an argument already larger than |Q|× K doesn’t change the output value of ϕ. Let us call the state corresponding to this argument q. Indeed, consider any corresponding input configuration. We can build a fair execution starting in it with shadow extensions around each agent. As the predicate is correctly implemented, this fair execution has to reach a stable consensus. By assumption (and pigeonhole principle), more than K agents from the state q end up in the same state. By definition of shadow extension, there is an execution starting with one more agent in the state q, and reaching the same stable consensus but with one more agent in a state with more than K other ones (which doesn’t break the stable consensus). Continuing this finite execution to a fair execution we see that the value of ϕ must be the same. This concludes the proof. 
For the lower bound, we adapt the following lemma from [5]. Lemma 5. All counting predicates can be implemented by immediate observation protocols (possibly with unreliable communication), even if the fairness condition is replaced with an arbitrary different (activity ensuring) one. Proof. We have already observed that immediate observation population protocols do not change if we add unreliability. It was shown in [5] that immediate observation population protocols implement all counting predicates. Moreover, the protocol (k, k) �→ (k +1, k); (k, n) �→ (n, n) provided there for threshold predicates has the state of each agent increase monotonically. It is easy to see that ensuring activity is enough for this 15 ----- protocol to converge to a state where no more configuration-changing transitions can be taken. Also, the construction for boolean combination of predicates via direct product of protocols used in [5] converges as long as the protocols for the two arguments converge. Therefore it doesn’t need any extra restrictions on the fairness condition. Theorem 3 now follows from the fact that all the protocols with unreliable communication are shadow permitting (by Lemma 2) and truncatable (by Lemma 3), therefore they only implement counting predicates. By Lemma 5 all counting predicates can be implemented. ### 6 Non-monotonic impact of unreliability In this section we observe that, surprisingly, while delayed transmission protocols and queued transmission protocols are more powerful than immediate observation population protocols, their unreliable versions are strictly less expressive than immediate observation population protocols (possibly with unreliable communi cation). Definition 17. A protocol is fully asynchronous if for each allowed transition (C, A[⊙], C[′]) the following conditions hold. - There is exactly one active agent, i.e. |A[⊙]| = 1. - No passive agents change their states. - Either the packets are only sent or the packets are only consumed, i.e. Dom(CP ) ⊆ Dom(CP[′] [) or] Dom(CP ) ⊇ Dom(CP[′] [). Packet contents do not change, i.e.][ C][P][ |][Dom(][C]P [)][∩][Dom(][C]P[′] [)][=][ C]P[′] [|][Dom(][C]P [)][∩][Dom(][C]P[′] [)][.] It turns out that given unreliable communication such protocols can check presence of states but cannot count. As our old notion of ensuring activity doesn’t force any messages to be ever received, we need a slightly stronger fairness condition for any positive claims. Definition 18. A fairness condition ensures communication if the following two conditions hold in every fair run. 1. If the agent states CA do not change after some moment, from each configuration occurring after some later moment there is no possible transition changing CA. 2. If the set of messages present in CP (ignoring multiplicities) does not change after some moment, then for each configuration after some later moment there is no possible transition that creates a packet with a new message. Theorem 4. Fully asynchronous protocols with unreliable communication compute exactly the predicates that are boolean combinations of positivity of single coordinates. The upper bound holds under any eventual fairness condition, while the lower bound requires a fairness conditions that ensures communication. 16 ----- The core idea of the proof is to ensure that in a reachable situation rare messages do not exist and cannot be created. In other words, if there is a packet with some message, or if such a packet can be created, then there are many packets with the same message. 
This makes irrelevant both the production of new messages by agents, and the exact number of agents needing to follow a particular sequence of transitions. This idea has some similarity with the message saturation construction from [20], but here the production of new messages might require consuming some of the old ones. We choose the threshold for “many” packets depending on the number of messages that do not yet have “many” packets. The threshold ensures that a new message will become abundant before we exhaust the packets for any previously numerous message. Definition 19. The in-degree of a fully asynchronous protocol is the maximum number of messages con sumed in a single transition. The supply of a message m ∈ M in configuration C is the number of packets in C with the message m, i.e. |CP[−][1][(][m][)][|][.] Let F (x, y, z, n, k) = (32(xyzn + 1))[32(][xyzn][+1)][−][2][k]. An abundance set is the largest set M [∞] ⊆ M such that the supply of each message in M [∞] is at least F (|Q|, |M |, d, |CA|, |M [∞]|) where d is the in-degree. As F decreases in the last argument, the abundance set M [∞](C) is well-defined. A message m is abundant in configuration C if it is in the abundance set, i.e. m ∈ M [∞](C). A message m is expendable at some moment in execution E if it is abundant in some configuration that has occurred in E before that moment. A packet is expendable if it bears an expendable message. An execution E is careful if no transition that decreases the supply of non-expendable messages changes agent states. Remark 3. The function F is chosen to make its rate of growth obviously sufficient in the following calcula tions. A much smaller function would suffice for a more tedious analysis. Lemma 6. Every fully asynchronous protocol with unreliable communication has a careful fair execution starting from any configuration without message packets. Moreover, if the protocol is well-specified, there is a careful fair execution that runs each packet-consuming transition twice in a row, failing to update the state the first time, until stable consensus is reached. Proof. We start with an execution with only the initial configuration. In the first phase, as long as it is possible to create a packet with a non-expendable message (without making the execution careless), we do it while consuming the minimal possible number of packets with expendable messages. After creating each packet we increase the abundance set if possible. In the second phase, a long as it is possible to consume a packet with a non-expendable message, we do it (but fail to update the agent states). In the third phase we reach a stable consensus by consuming the minimal number of packets. We call the end of the third phase the target moment. Afterwards we pick an arbitrary fair continuation. We now prove that each abundance set with a new message obtained during the first phase includes all the previous abundance sets. We only use the ways to create a new non-expendable packet that do not 17 ----- require consuming any non-expendable packets. Indeed, consuming a non-expendable packet is not allowed to change the internal state by definition of carefulness, and cannot create any new messages by definition of a fully asynchronous protocol. Note that reaching the internal state that can create a new non-expendable packet can take most |Q| × n transitions as all the expendable packets are already available for consumption and thus there is no reason to repeat the same internal state of the same agent twice. 
Therefore creating an additional non-expendable packet can consume at most |Q| × n × d packets. To make the supply of some message reach F(|Q|, |M|, d, n, k + 1), we need to repeat this at most F(|Q|, |M|, d, n, k + 1) × |M| times, consuming at most F(|Q|, |M|, d, n, k + 1) × |M| × |Q| × n × d expendable packets. We might consume twice as many expendable packets if we want to fail every other packet-consumption transition. As 3 × F(|Q|, |M|, d, n, k + 1) × |M| × |Q| × n × d < F(|Q|, |M|, d, n, k), all the expendable messages together with this message form an abundance set.

In the second phase, we run consumption in at most |Q| states; reaching each of them requires at most |Q| transitions. Thus the state changes consume at most |Q|² × d expendable packets. Note that consuming a non-expendable packet requires consuming at most d expendable packets. As the supply of each non-expendable message is less than F(|Q|, |M|, d, n, |M^∞| + 1), we consume at most d × (|Q|² + |M| × F(|Q|, |M|, d, n, |M^∞| + 1)) packets. We could also have spent twice as many expendable messages if the non-expendable messages were not the limiting factor. Therefore we still have more than F(|Q|, |M|, d, n, |M^∞| + 1) > 4 × |Q| × n × d packets with each expendable message left by the time there are no non-expendable packets that can be received in a reachable state and no possibility to create a non-expendable packet.

A reachable stable consensus exists if the protocol computes some predicate. As it is impossible to produce or consume new non-expendable messages, we cannot violate the carefulness property. Moreover, we can reach it while spending at most |Q| × n × d expendable packets (or twice as many if we fail to update the state every second time). That many packets are available, so producing new expendable packets is not required.

We see that the construction indeed provides a careful fair execution. Thus the lemma is proven.

As the execution obtained via the previous lemma wastes a lot of messages, we can add one more agent to make use of those messages.

Lemma 7. Consider a fully asynchronous protocol with unreliable communication that computes some predicate. Then for any input configuration, adding one more agent in an already present input state cannot change the value of the predicate.

Proof. Consider a careful execution constructed by Lemma 6. Consider an extra agent that we want to use as a copycat of an existing one, which we call the target. If a transition performed by the target agent sends messages, so does the copycat agent. If a transition requires receiving messages and the target agent updates the state, we cancel the previous transition where the target agent failed to update the state after consuming the same messages, and let the copycat agent receive those messages and update the state. Thus the copycat agent always mimics the state of the target agent.

Additionally, we extend phase two of the execution to consume the non-expendable messages sent by the copycat agent. They are the same as the target agent has sent, and there is a reserve of expendable messages for consuming these non-expendable messages (those that can be consumed in some reachable state). As consumption of expendable messages did not allow emitting any non-expendable messages after reaching the stable consensus, the same must be true when we add the copycat agent, as the set of reachable agent states without producing or consuming non-expendable messages is the same.
But then the set of all the reachable states is the same, and we get a stable consensus with the same answer. As the protocol is well-specified, this concludes the proof.

Corollary 1. A predicate computed by a fully asynchronous protocol with unreliable communication only depends on which coordinates are positive.

Proof. Consider two configurations with the same set of represented input states. By repeated addition of copycat agents we can prove that the predicate value for either of the configurations is the same as the predicate value for their union.

It is clear that the predicates that only depend on the set of positive coordinates can be computed.

Lemma 8. For any fairness condition ensuring communication, and for any predicate only depending on positivity of arguments, there is a fully asynchronous protocol computing that predicate.

Proof. We just describe the protocol informally. The messages correspond to the input states. The states correspond to nonempty sets of input states (which are known to the agent to be initially present). An agent can send a message corresponding to an initial state in the agent's set. An agent can receive a message and add the corresponding initial state to the set. An agent has output value equal to the value of the predicate on the input where all the input states from the agent's set get the value 1, while the others get 0. Ensuring communication implies that the only stable situation is when all the initially present input states are reflected in message packets, and are also reflected in the sets of all the agents.
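As a sanity check, the protocol just described is easy to simulate. The following sketch is illustrative only (the paper gives no code): it uses a uniformly random scheduler, which ensures communication only with high probability rather than by the formal fairness condition, and all identifiers are invented.

```python
# Illustrative simulation of the presence protocol from Lemma 8 under message
# loss: each agent keeps the set of input states it knows to be present and
# outputs the predicate applied to that set. Convergence here is probabilistic.
import random

def run_presence_protocol(inputs, predicate, steps=20_000, loss=0.5, seed=1):
    rng = random.Random(seed)
    known = [frozenset([x]) for x in inputs]  # each agent knows its own input state
    packets = []                              # in-flight messages (a multiset)
    for _ in range(steps):
        i = rng.randrange(len(known))         # the single active agent
        if packets and rng.random() < 0.5:    # consume a packet ...
            m = packets.pop(rng.randrange(len(packets)))
            if rng.random() >= loss:          # ... unless the update is lost
                known[i] = known[i] | {m}
        else:                                 # ... or send one known input state
            packets.append(rng.choice(sorted(known[i])))
    return [predicate(k) for k in known]

# Predicate "input state 'b' is present": every agent converges to True (w.h.p.).
print(run_presence_protocol(["a", "b", "a"], lambda s: "b" in s))
```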
The theorem now follows from Corollary 1 and Lemma 8.

Remark 4. This result does not mean that the fundamentally asynchronous nature of communication prevents us from using any expressive models for verification of unreliable systems. It is usually possible to keep enough state to implement, for example, immediate observation via request and response.

### 7 Conclusion and future directions

We have studied unreliability based on message loss, a practically motivated approach to fault tolerance in population protocols. We have shown that inside a general framework of defining protocols with unreliable communication we can prove a specific structural property that bounds the expressive power of protocols with unreliable communication by the expressive power of immediate observation population protocols. Immediate observation population protocols permit verification of many useful properties, including well-specification, correctness and reachability between counting sets, in polynomial space. We think that the relatively low complexity of verification, together with inherent tolerance of unreliability and locally optimal expressive power under atomicity violations, motivates further study and use of such protocols.

It is also interesting to explore whether for any class of protocols adding unreliability makes some of the verification tasks easier. Both the complexity and the expressive-power implications of unreliability can be studied for models with larger per-agent memory, such as community protocols, PALOMA and mediated population protocols. We also believe that some models even more restricted than community protocols but still permitting a multi-interaction conversation are an interesting object of study, both in the reliable and the unreliable settings.

#### Acknowledgements

I thank Javier Esparza for useful discussions and the feedback on the drafts of the present article. I thank Chana Weil-Kennedy for useful discussions. This work is an extended version of [28], differing in the inclusion of full proofs as well as a more precise characterisation of the expressive power of fully asynchronous protocols. I thank the anonymous reviewers both of the previous and of the current version for their valuable feedback on the presentation.

### References

[1] Dana Angluin, James Aspnes, Zoë Diamadi, Michael J. Fischer, and René Peralta. Computation in networks of passively mobile finite-state sensors. In ACM Symposium on Principles of Distributed Computing, pages 290–299. ACM, 2004.

[2] Dana Angluin, James Aspnes, Zoë Diamadi, Michael J. Fischer, and René Peralta. Computation in networks of passively mobile finite-state sensors. Distributed Computing, 18(4):235–253, 2006.

[3] Dana Angluin, James Aspnes, and David Eisenstat. Fast computation by population protocols with a leader. In Distributed Computing: 20th International Symposium, DISC 2006, volume 4167 of Lecture Notes in Computer Science, pages 61–75. Springer, 2006.

[4] Dana Angluin, James Aspnes, and David Eisenstat. A simple population protocol for fast robust approximate majority. In Distributed Computing, 21st International Symposium, DISC 2007, pages 20–32. Springer, 2008.

[5] Dana Angluin, James Aspnes, David Eisenstat, and Eric Ruppert. The computational power of population protocols. Distributed Computing, 20(4):279–304, 2007.

[6] Dana Angluin, James Aspnes, Michael J. Fischer, and Hong Jiang. Self-stabilizing population protocols. ACM Transactions on Autonomous and Adaptive Systems, 3(4):13:1–13:28, 2008.

[7] Krzysztof R. Apt, Nissim Francez, and Shmuel Katz. Appraising fairness in languages for distributed programming. Distributed Computing, 2(4):226–241, 1988.

[8] Nathalie Bertrand, Patricia Bouyer, and Anirban Majumdar. Reconfiguration and message losses in parameterized broadcast networks. In 30th International Conference on Concurrency Theory, CONCUR 2019, August 27–30, 2019, Amsterdam, the Netherlands, volume 140 of LIPIcs, pages 32:1–32:15. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2019.

[9] Michael Blondin, Javier Esparza, and Stefan Jaax. Expressive power of oblivious consensus protocols, 2019.

[10] Ioannis Chatzigiannakis, Shlomi Dolev, Sandor P. Fekete, Othon Michail, and Paul G. Spirakis. Not all fair probabilistic schedulers are equivalent. In 13th International Conference on Principles of Distributed Systems (OPODIS), volume 5923 of Lecture Notes in Computer Science, pages 33–47. Springer, 2009.

[11] Ioannis Chatzigiannakis, Othon Michail, Stavros Nikolaou, Andreas Pavlogiannis, and Paul G. Spirakis. Passively mobile communicating logarithmic space machines. Technical report, 2010.

[12] Carole Delporte-Gallet, Hugues Fauconnier, and Rachid Guerraoui. When birds die: Making population protocols fault-tolerant. In Proc. 2nd IEEE International Conference on Distributed Computing in Sensor Systems, volume 4026 of Lecture Notes in Computer Science, pages 51–66, 2006.

[13] Carole Delporte-Gallet, Hugues Fauconnier, Rachid Guerraoui, and Eric Ruppert. Secretive birds: Privacy in population protocols. In OPODIS, volume 4878 of Lecture Notes in Computer Science, pages 329–342. Springer, 2007.

[14] Giuseppe Antonio Di Luna, Paola Flocchini, Taisuke Izumi, Tomoko Izumi, Nicola Santoro, and Giovanni Viglietta. On the power of weaker pairwise interaction: Fault-tolerant simulation of population protocols. Theoretical Computer Science, 754:35–49, 2019.
[15] Giuseppe Antonio Di Luna, Paola Flocchini, Taisuke Izumi, Tomoko Izumi, Nicola Santoro, and Giovanni Viglietta. Population protocols with faulty interactions: The impact of a leader. Theoretical Computer Science, 754:35–49, 2019.

[16] Edsger W. Dijkstra. Self-stabilizing systems in spite of distributed control. Communications of the ACM, 17(11):643–644, 1974.

[17] David Doty and David Soloveichik. Stable leader election in population protocols requires linear time. In DISC, volume 9363 of Lecture Notes in Computer Science, pages 602–616. Springer, 2015.

[18] E. Allen Emerson and Kedar S. Namjoshi. On model checking for non-deterministic infinite-state systems. In LICS, pages 70–80. IEEE Computer Society, 1998.

[19] Javier Esparza, Pierre Ganty, Rupak Majumdar, and Chana Weil-Kennedy. Verification of immediate observation population protocols. In 29th International Conference on Concurrency Theory (CONCUR 2018), volume 118 of LIPIcs, pages 31:1–31:16. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2018.

[20] Javier Esparza, Stefan Jaax, Mikhail A. Raskin, and Chana Weil-Kennedy. The complexity of verifying population protocols. Distributed Computing, 34(2):133–177, 2021.

[21] Javier Esparza, Mikhail A. Raskin, and Chana Weil-Kennedy. Parameterized analysis of immediate observation Petri nets. In Application and Theory of Petri Nets and Concurrency – 40th International Conference, PETRI NETS 2019, Aachen, Germany, June 23–28, 2019, Proceedings, volume 11522 of Lecture Notes in Computer Science, pages 365–385. Springer, 2019.

[22] Roy Friedman and Ken Birman. Trading consistency for availability in distributed systems. Technical report, 1996.

[23] Rachid Guerraoui and Eric Ruppert. Even small birds are unique: Population protocols with identifiers, 2007.

[24] Rachid Guerraoui and Eric Ruppert. Names trump malice: Tiny mobile agents can tolerate Byzantine failures. In Automata, Languages and Programming, 36th International Colloquium, ICALP 2009, Rhodes, Greece, July 5–12, 2009, Proceedings, Part II, volume 5556 of Lecture Notes in Computer Science, pages 484–495. Springer, 2009.

[25] Mehmet Hakan Karaata. Self-stabilizing strong fairness under weak fairness. IEEE Transactions on Parallel and Distributed Systems, 12(4):337–345, 2001.

[26] Othon Michail, Ioannis Chatzigiannakis, and Paul G. Spirakis. Mediated population protocols. Theoretical Computer Science, 412(22):2434–2450, 2011.

[27] Charles Rackoff. The covering and boundedness problems for vector addition systems. Theoretical Computer Science, 6:223–231, 1978.

[28] Mikhail A. Raskin. Population protocols with unreliable communication. In Leszek Gasieniec, Ralf Klasing, and Tomasz Radzik, editors, Algorithms for Sensor Systems – 17th International Symposium on Algorithms and Experiments for Wireless Sensor Networks, ALGOSENSORS 2021, Lisbon, Portugal, September 9–10, 2021, Proceedings, volume 12961 of Lecture Notes in Computer Science, pages 140–154. Springer, 2021.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/1902.10041, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "CLOSED", "url": "" }
2019
[ "JournalArticle" ]
false
2019-02-26T00:00:00
[ { "paperId": "740a4a799739a863a8d09013715b91a81d52ea3e", "title": "The complexity of verifying population protocols" }, { "paperId": "84a09e13a5cff8363bf64946a1e1a42a61539da5", "title": "Reconfiguration and Message Losses in Parameterized Broadcast Networks" }, { "paperId": "1161acc9b43873095a09f1a45eda13cb1e1e46ee", "title": "Parameterized Analysis of Immediate Observation Petri Nets" }, { "paperId": "77b4ff1dd24945ea9119aac32c4e00da87a4e9ec", "title": "Expressive Power of Oblivious Consensus Protocols" }, { "paperId": "628c4912a384d9ee3e1d2dcc8d958b1533d9b6f8", "title": "Verification of Immediate Observation Population Protocols" }, { "paperId": "c8ef3771ab25c082d430588ea354bdb0970023de", "title": "Population Protocols with Faulty Interactions: The Impact of a Leader" }, { "paperId": "312ccb9002035a46c15d7734a4c50f4bc379d307", "title": "On the Power of Weaker Pairwise Interaction: Fault-Tolerant Simulation of Population Protocols" }, { "paperId": "21555672929984014efaa20c798633e29c00a466", "title": "Stable leader election in population protocols requires linear time" }, { "paperId": "18c9136d0a784fd0138b020a7b1c6984e57f129f", "title": "Passively Mobile Communicating Logarithmic Space Machines" }, { "paperId": "ea6782521764c21ea2d4a8f556aac04a23bd0b45", "title": "Not All Fair Probabilistic Schedulers Are Equivalent" }, { "paperId": "fb45b0fd97dd7f28255460b2c6ae4c0ac0cb430b", "title": "Mediated Population Protocols" }, { "paperId": "44e1a20d80a29a9491b574ad9fac68f7fb8c745b", "title": "Names Trump Malice: Tiny Mobile Agents Can Tolerate Byzantine Failures" }, { "paperId": "37bb1068d0a7bd07b84af02bb108c76a43dc4fba", "title": "Secretive Birds: Privacy in Population Protocols" }, { "paperId": "7724872ce40ec2b4fa8f6cf5c72e74a288573ad2", "title": "A simple population protocol for fast robust approximate majority" }, { "paperId": "3ce33f72e91e03a2d36951cd3e4b811583357bc8", "title": "Fast computation by population protocols with a leader" }, { "paperId": "026a0f721c6e95ca2db9e52df215ab1078b1e7fa", "title": "The computational power of population protocols" }, { "paperId": "e0910c641d4cd9aeb58fcdcac90a0f94e6aac4b9", "title": "When Birds Die: Making Population Protocols Fault-Tolerant" }, { "paperId": "ea1f930e775a7c6b3b286c06a46560163d62c839", "title": "Self-stabilizing population protocols" }, { "paperId": "d4e5c1d99fac80887f266a64106495b887ad78e3", "title": "Computation in networks of passively mobile finite-state sensors" }, { "paperId": "1ea1eb8fc1a231fe38956dacac1735332d1f49d3", "title": "Self-Stabilizing Strong Fairness under Weak Fairness" }, { "paperId": "453100ac3707cb8c704827921abdeb8587f94216", "title": "On model checking for non-deterministic infinite-state systems" }, { "paperId": "d9f525416fd74c01179669189770a646b61bba30", "title": "Trading Consistency for Availability in Distributed Systems" }, { "paperId": "19386006b9ff68d7f574c8963a522e0baf4c1cdc", "title": "Appraising fairness in languages for distributed programming" }, { "paperId": "47795d76f935cb633611a08a93554a2378888966", "title": "Self-stabilizing systems in spite of distributed control" }, { "paperId": "c519d7e26f6f23b7b74c28e8d98e23f59cc72909", "title": "Even Small Birds are Unique: Population Protocols with Identifiers" }, { "paperId": "28ae15748155cccac78269f6a56847779bfb3b68", "title": "The Covering and Boundedness Problems for Vector Addition Systems" } ]
17312
en
[ { "category": "Economics", "source": "s2-fos-model" }, { "category": "Environmental Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01f352005196c374e2ad36f2313f1c04703d6050
[]
0.84391
Extreme Risk Dependence between Green Bonds and Financial Markets
01f352005196c374e2ad36f2313f1c04703d6050
Social Science Research Network
[ { "authorId": "1517010297", "name": "Sitara Karim" }, { "authorId": "144647699", "name": "B. Lucey" }, { "authorId": "144712462", "name": "M. Naeem" }, { "authorId": "89235719", "name": "L. Yarovaya" } ]
{ "alternate_issns": null, "alternate_names": [ "SSRN, Social Science Research Network (SSRN) home page", "SSRN Electronic Journal", "Soc Sci Res Netw", "SSRN", "SSRN Home Page", "SSRN Electron J", "Social Science Electronic Publishing presents Social Science Research Network" ], "alternate_urls": [ "www.ssrn.com/", "https://fatcat.wiki/container/tol7woxlqjeg5bmzadeg6qrg3e", "https://www.wikidata.org/wiki/Q53949192", "www.ssrn.com/en", "http://www.ssrn.com/en/", "http://umlib.nl/ssrn", "umlib.nl/ssrn" ], "id": "75d7a8c1-d871-42db-a8e4-7cf5146fdb62", "issn": "1556-5068", "name": "Social Science Research Network", "type": "journal", "url": "http://www.ssrn.com/" }
The current study investigates the extreme risk dependence between green bonds and financial markets by employing the dual approaches of time‐varying optimal copula and extreme risk spillover analysis of dynamic conditional Value‐at‐Risk. We report significant symmetric (asymmetric) tail‐dependent copulas in the upper (lower) tails characterizing independent regimes. Green bonds offer sufficient diversification, safe‐haven, and hedging opportunities during stable and distressing times to financial markets. The extreme risk spillovers revealed that COVID‐19 transformed the spillovers between green bonds and financial markets except Bitcoin. We proposed insightful implications for policymakers, governments, investors, and portfolio managers to relish the findings for their investment avenues.
DOI: 10.1111/eufm.12458

ORIGINAL ARTICLE

EUROPEAN FINANCIAL MANAGEMENT

# Extreme risk dependence between green bonds and financial markets

#### Sitara Karim [1] | Brian M. Lucey [2,3,4,5] | Muhammad A. Naeem [6,7] | Larisa Yarovaya [8]

1 Department of Economics and Finance, Sunway Business School, Sunway University, Malaysia
2 Trinity Business School, Trinity College Dublin, Ireland
3 University of Economics Ho Chi Minh City, Ho Chi Minh City, Vietnam
4 Jiangxi University of Finance and Economics, China
5 Abu Dhabi University, Abu Dhabi, United Arab Emirates
6 College of Business and Economics, United Arab Emirates University, Al-Ain, United Arab Emirates
7 Adnan Kassar School of Business, Lebanese American University, Beirut, Lebanon
8 The Centre for Digital Finance, Southampton Business School, University of Southampton, Southampton, United Kingdom

Correspondence: Brian M. Lucey, Trinity Business School, Trinity College Dublin, Ireland. Email: blucey@tcd.ie and brianmlucey@gmail.com

The authors express their humble gratitude for the constructive feedback and comments of the anonymous referee(s) and to Editor-in-Chief Prof. John Doukas for his continuous support throughout the process. For proofs and reprints please contact Muhammad Abubakr Naeem, Accounting and Finance Department, United Arab Emirates University, P.O. Box 15551, Al-Ain, United Arab Emirates. Email: m.ab.naeem@gmail.com; muhammad.naeem@uaeu.ac.ae

This is an open access article under the terms of the Creative Commons Attribution-NonCommercial-NoDerivs License, which permits use and distribution in any medium, provided the original work is properly cited, the use is non-commercial and no modifications or adaptations are made. © 2023 The Authors. European Financial Management published by John Wiley & Sons Ltd. Eur Financ Manag. 2023;1-26. wileyonlinelibrary.com/journal/eufm

Abstract: The current study investigates the extreme risk dependence between green bonds and financial markets by employing the dual approaches of time-varying optimal copula and extreme risk spillover analysis of dynamic conditional Value-at-Risk. We report significant symmetric (asymmetric) tail-dependent copulas in the upper (lower) tails characterizing independent regimes. Green bonds offer sufficient diversification, safe-haven, and hedging opportunities during stable and distressing times to financial markets. The extreme risk spillovers revealed that COVID-19 transformed the spillovers between green bonds and financial markets except Bitcoin. We proposed insightful implications for policymakers, governments, investors, and portfolio managers to relish the findings for their investment avenues.

KEYWORDS: CoVaR, COVID-19, financial markets, green bonds, TVOC

JEL CLASSIFICATION: C18, G11, G18

#### 1 | INTRODUCTION

Past renowned crises, such as the Global Financial Crisis (GFC), the Eurozone Sovereign Debt Crisis (ESDC), and the recent COVID-19 pandemic, catalyzed academicians and research scholars to examine the dependence and risk spillovers between financial markets, both to stipulate further policy implications and to grab investors' attention to overcome the surmounting challenges that appeared out of uncertain circumstances (Cesa-Bianchi et al., 2020).
Investors' growing concern toward risk-adjusted portfolios during economically fragile periods has converged them to multiple investment opportunities in versatile financial markets which offer considerable diversification potential, safe-haven features during crisis periods, and strong hedge properties during stable economic conditions (Cochrane, 2022; Karim et al., 2023a, 2023b). Since financial markets represent different markets with varied risk capacities, examining the dependence between financial markets reveals various useful avenues for policymakers, governments, and investors to formulate policies and design their portfolios optimistically. Tail dependence and identifying the extreme relationship between financial markets are crucial components for portfolio allocation, design, and strategies.

In the case of green bonds, the upsurge in regulatory convergence (Arif, Hasan, et al., 2021; Flammer, 2020; Naeem, Adekoya, & Oliyide, 2021; Naeem, Farid, et al., 2021), investors' environmental orientation (Naeem & Karim, 2021), and the search for the most suitable investment potentials have increased the integration among financial markets (Daubanes et al., 2021). In terms of regulation of green bonds, Saravade et al. (2023) imply that green bond policies implemented by Chinese financial market regulators have been effective in increasing the overall green bond issuance in China. Subsequently, the increasing worldwide focus on green and clean investments is motivated by environmental concerns and aspirations to step ahead in restructuring the current economy into a climate-resilient economy (Bolton & Kacperczyk, 2021; Naeem, Gul, et al., 2023; Naeem, Iqbal, et al., 2022c; Naeem, Nguyen, et al., 2021; Naeem, Peng, et al., 2020; Umar et al., 2022). The prevailing sustainable investment initiatives have fostered the attention of policymakers, regulators, governments, and worldwide investors to shift from the existing dirty energies to renewable and sustainable energy sources. In this stream, green finance offers sufficient opportunity to switch conventional investments into green investments. The proceeds of green investments are exclusively attributed to environment-friendly, clean energy, and renewable projects backed by these investments (Atif et al., 2021; Krueger et al., 2020).

First introduced by the European Investment Bank in 2007, green bonds provide an innovative solution for financial market participants to channel their financial resources toward sustainable programs and overcome the ongoing environmental challenges. Evidence suggests that green investments are an effective means of financing to overcome the cost of climate-oriented projects (Andersen et al., 2020) and achieve a low-carbon economy (Appiah et al., 2022; Leitao et al., 2021). Environmental and climate-friendly investments outperform traditional assets as green assets result in more green innovations (Karim & Naeem, 2022; Nguyen et al., 2020). Following this, multiple stock exchanges worldwide have introduced specialized green investments and assets that service the green concerns of both investors and issuers. Given these contextual underpinnings, the increasing activity in green finance has drawn the attention of recent scholars to investigate the underlying nature of green bonds while uncovering the potential benefits of these investments given the uncertain economic circumstances.
For example, recent studies (Kanamura, 2020; Karpf & Mandel, 2017) reported a positive yield differential of green assets, whereas Flammer (2021) and Larcker and Watts (2020) documented an essentially zero premium on green investments. Conversely, the other strand of literature (Billah et al., 2022; Naeem & Karim, 2021; Tang & Zhang, 2020; Wang et al., 2020) witnessed that both investors and issuers can benefit from green bond issuance. Scholars' pronounced interest and greater attention in understanding the nature and features of green bonds compared with other financial markets reflect growing awareness among academicians and practitioners, given the importance of this new green strand of investment. However, the literature offers limited research regarding tail dependence between green bonds and financial markets.

Correspondingly, the world has undergone serious shifts and unprecedented crises during the last two decades, which strongly affected the tail dependence between green bonds and financial markets. One of the severe shocks the world is still suffering from is the recent global pandemic of COVID-19, where financial markets experienced heightened susceptibility to the unexpected shocks propelled out of this world health emergency (Farid et al., 2022; Pham et al., 2022; Tiwari et al., 2022). These shocks have driven tail dependence and extreme risk spillovers between green bonds and financial markets, where multiple tail dependence regimes underline the dependence arrangements (Mensi et al., 2022; Naeem, Conlon, & Cotter, 2022; Naeem & Karim, 2021). One of the main reasons that COVID-19 has transformed the spillovers among financial markets is the high degree of globalization and interconnectedness among different countries' economies (Alawi et al., 2022; Iqbal et al., 2022; Naeem, Karim, & Tiwari, 2022; Naeem, Karim, Uddin, et al., 2022). The pandemic has affected not only public health but also the economies of countries worldwide. The globalized nature of financial markets has made it easy for economic shocks to spread quickly from one market to another, leading to increased volatility and uncertainty (Billah et al., 2022; Karim, Naeem, Hu, et al., 2022c; Karim, Naeem, Mirza, et al., 2022d). In addition, COVID-19 has caused disruptions to global supply chains, leading to reduced trade volumes and a slowdown in economic activity (Bown, 2022; Siddique et al., 2022, 2023). This has affected various sectors, including manufacturing, transportation, and retail. As a result, the stock prices of companies in these sectors have been negatively affected, leading to spillover effects on the broader financial markets. Subsequently, governments and central banks have responded to the economic impacts of COVID-19 by implementing unprecedented monetary and fiscal policies (Yousaf et al., 2023). For example, central banks have lowered interest rates and provided liquidity to financial markets, while governments have implemented stimulus packages to support businesses and households. Finally, COVID-19 has also led to changes in investor behavior, with many investors adopting a more risk-averse approach (Arfaoui et al., 2023), leading to increased demand for safe-haven assets such as gold, which ultimately leads to spillover effects on other asset classes such as equities and corporate bonds (Farid et al., 2023).
Traditionally, prior studies employed various connectedness methodologies to examine the relationship between green bonds and financial markets. For instance, Nguyen et al. (2020) and Reboredo et al. (2020) employed wavelet coherence analysis, Reboredo et al. (2019) utilized VAR models, Pham (2021) and Arif et al. (2021) used the cross-quantilogram technique, and Bouri et al. (2021) and Broadstock and Cheng (2019) applied GARCH models. While all these studies captured various aspects of green bonds, the sophistication of the time-varying optimal copula (TVOC) under multiple regimes and economic and financial circumstances has not been explored by the earlier studies. In this vein, policymakers and investors are keen to understand the linkages between green bonds and financial markets at assorted copulas under various adverse conditions.

In light of the above arguments, the contribution of the current study is manifold. First, we employed the TVOC approach modelled by Liu et al. (2017) to examine the tail dependence between green bonds and financial markets, which characterizes several stressful periods and symbolizes discrete copulas for the period encapsulating January 2, 2012 to September 30, 2021. We contend that financial markets are exposed to various financial and economic risks, while tail dependence offers novel intuitions to policymakers, financial market participants, and investors while weighing their portfolios amidst global crises. Second, we utilized a blend of financial markets, namely clean energy, stocks, commodities, US dollar, bonds, and Bitcoin, representing six different financial markets. Third, we measured the extreme risk spillovers between green bonds and financial markets using the Value-at-Risk (VaR) and dynamic conditional Value-at-Risk (CoVaR), arguing that spillovers at the tails provide unique insights to investors under extreme circumstances. Fourth, we add to the existing literature by devising beneficial investment potentials and useful policy implications for governments and macro-prudential authorities.

Correspondingly, in terms of the contribution of the study, we differ from the study of Pham and Nguyen (2021) in several respects. First, the aforementioned study applied the cross-quantilogram to its data set to identify the asymmetric relationship of green bonds and other asset classes, whereas we apply the TVOC approach along with the risk measures of VaR and CoVaR. Second, the data span of the current study covers the time period from January 2, 2012 to September 30, 2021, whereas the data set of the aforementioned study covers October 2014 to February 2021. Finally, the current study also differs in terms of market selection and in assessing extreme risk dependence as compared to Pham and Nguyen (2021).

We document significant tail dependencies between green bonds and financial markets, where most of the markets exhibited numerous tail-dependent copulas corresponding to their respective symmetric and asymmetric tail-dependent relationships. Along with these, time-varying properties underscore various economic and financial trends which echoed the European Sovereign Debt Crisis, the Shale oil crisis, the Brexit referendum, the US interest rate hike, and the COVID-19 pandemic. Pairwise analysis of financial markets with green bonds reveals that green bonds act as diversifiers for clean energy and stocks, while significant safe-haven features are emphasized for the US dollar and Bitcoin markets. Concurrently, green bonds also provide strong hedge and
safe-haven features to conventional bonds and commodities during normal and economically tumultuous periods, respectively. To validate our results further, the log-likelihood values also embodied justification for using the TVOC approach. Extreme risk spillover analysis substantiated spillovers during COVID-19, except for Bitcoin, where extreme risk spillovers were formed during 2015, confirming a $5 million loss by Bitstamp.

Given these results, we propose plentiful implications for policymakers, green investors, regulation authorities, macro-prudential bodies, portfolio managers, and financial market participants. Policymakers can encourage the markets to expand the growth of green bonds due to their trifold benefits, such as diversification, risk absorbance, and satisfying the eco-friendly motives of investors. Hence, policymakers can restructure and reformulate their existing policies to shelter investors from uncertain economic conditions. Investors and portfolio managers can include green bonds while synthesizing their portfolios to relish their risk mitigation attribute. When market circumstances are unfavorable, the perseverance of green bonds can shelter the investments of green and financial markets from extreme economic periods.

The rest of the paper unfolds as follows: Section 2 illustrates the empirical strategy along with the data and preliminary statistics; Section 3 gives the empirical results and discussion; Section 4 concludes the study with policy implications.

#### 2 | EMPIRICAL STRATEGY, DATA AND PRELIMINARY ANALYSIS

2.1 | Data and descriptive statistics

This study endeavors to investigate tail dependence between green bonds and financial markets, where the S&P Green Bond Index (SPGB) represents green bonds, and the financial markets included in the study are the S&P Clean Energy Index (SPCL), which indicates the clean energy market; the MSCI Global Index (MSCI), representative of the world stock market; the S&P GSCI Commodity Index (GSCI), which denotes the commodity market; the US Dollar Index (USDX), indicative of the currency market; the PIMCO Investment Grade Bond Index (BOND), which symbolizes the fixed-income bond market; and Bitcoin (BTCN), which denotes the cryptocurrency market. The data have been extracted from Datastream for the period encompassing January 2, 2012 to September 30, 2021, and the price series is converted into first log-differenced returns to obtain the empirical results.

Table 1 presents summary statistics and the correlation of green bonds with other financial markets, where BTCN reveals the highest average returns among all financial markets. SPCL and MSCI yield moderate and parallel average returns, whereas USDX and BOND generate minimum average returns. However, SPGB and GSCI yield negative average returns for the sample period. While considering the return series variability, BTCN marks the highest variability in the returns, whereas SPCL and GSCI show comparable variability in the return series. Conversely, MSCI, USDX, BOND, and SPGB manifest parallel variability in the return series. Almost all return series, except USDX, indicate negative skewness values, while the return series are leptokurtic, as evident from the kurtosis values. Multiple tests, for instance, the Jarque-Bera test of normality, exhibit that the series are not normally distributed. Further evidence for all return series reveals no serial correlation and conditional heteroskedasticity. Meanwhile, the correlation between green bonds and financial markets is mainly positive except for USDX, which is negatively correlated with SPGB. Moreover, the highest (lowest) positive correlation is documented between SPGB and BOND (BTCN).
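For readers who wish to reproduce this preprocessing step, the following is a minimal sketch of converting the price series into first log-differenced returns and computing the kind of summary statistics reported in Table 1. The file name and column labels are hypothetical, and pandas/scipy are assumed; the paper does not publish code.

```python
# Sketch of the preprocessing behind Table 1 (file and columns are hypothetical):
# log returns, moments, Jarque-Bera normality test, and correlations with SPGB.
import numpy as np
import pandas as pd
from scipy import stats

prices = pd.read_csv("prices.csv", index_col=0, parse_dates=True)  # SPGB, SPCL, ...
returns = np.log(prices).diff().dropna()           # first log-differenced returns

summary = pd.DataFrame({
    "mean": returns.mean(),
    "std": returns.std(),
    "skew": returns.skew(),
    "kurtosis": returns.kurtosis(),                # excess kurtosis; >0 = leptokurtic
    "jb_pvalue": [stats.jarque_bera(returns[c]).pvalue for c in returns.columns],
})
print(summary)
print(returns.corr()["SPGB"])                      # correlation of each market with SPGB
```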
FIGURE 1 This figure presents the time trend of green bonds and financial markets (panels (a)-(f)).

Figure 1 presents the time trend of green bonds and financial markets, where SPCL, MSCI, USDX and BTCN revealed highly volatile patterns, whereas GSCI and BOND signpost a parallel time-varying trend with SPGB.

#### 2.2 | TVOC approach

Assuming that markets undergo several price changes and their interactions depend on external shocks and asymmetric information, the dependence structure among markets is dynamic. Thus, using a single copula to explain various markets' dynamics simultaneously restricts the dependence structure, and TVOC provides precise information across multiple financial markets. The dependence structure is generally split into positive and negative dependence, where external shocks make this structure nonlinear and complex. For this purpose, Kendall's τ measures the dependence direction and intensity. The two tail dependence structures of Joe (1997) and Caillault and Guégan (2005) for the upper and lower tail are employed. Additional functions of the lower-upper tail and the upper-lower tail explain the extreme dependencies across various financial markets in the presence of external shocks. For two random constructs X and Y along with their respective distribution functions F_X and F_Y, for α = 0.05,

τ_UU(α) = Pr(X > F_X^{-1}(1 − α) | Y > F_Y^{-1}(1 − α)),  (1)

τ_LL(α) = Pr(X < F_X^{-1}(α) | Y < F_Y^{-1}(α)),  (2)

τ_LU(α) = Pr(X < F_X^{-1}(α) | Y > F_Y^{-1}(1 − α)),  (3)

τ_UL(α) = Pr(X > F_X^{-1}(1 − α) | Y < F_Y^{-1}(α)).  (4)

Here τ_UU(α) denotes upper-upper (upper) tail dependence, τ_LL(α) is indicative of lower-lower (lower) tail dependence, τ_LU(α) depicts lower-upper tail dependence, and τ_UL(α) shows upper-lower tail dependence. The additional lower-upper (τ_LU(α)) and upper-lower (τ_UL(α)) measures characterize complete dependence structures across markets, specifying extreme comovements. Therefore, τ_LU(α) and τ_UL(α) are more precise in terms of extreme dependence as compared to τ_UU(α) and τ_LL(α). Meanwhile, the asymmetric negative extreme dependence is expanded through Clayton and Gumbel copulas in the next subsection.
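A simple empirical counterpart of Equations (1)-(4) can be computed directly from two return series by counting joint exceedances of the empirical α-quantiles, used here in place of F_X^{-1} and F_Y^{-1}. The sketch below is illustrative only (numpy assumed; the simulated data are not from the paper):

```python
# Empirical versions of the four tail-dependence measures in Equations (1)-(4),
# estimated by counting joint exceedances of the empirical alpha-quantiles.
import numpy as np

def tail_dependence(x, y, alpha=0.05):
    x, y = np.asarray(x), np.asarray(y)
    x_lo, x_hi = np.quantile(x, alpha), np.quantile(x, 1 - alpha)
    y_lo, y_hi = np.quantile(y, alpha), np.quantile(y, 1 - alpha)
    def cond(event_x, event_y):                # Pr(event_x | event_y)
        return event_x[event_y].mean() if event_y.any() else np.nan
    return {
        "UU": cond(x > x_hi, y > y_hi),        # Eq. (1)
        "LL": cond(x < x_lo, y < y_lo),        # Eq. (2)
        "LU": cond(x < x_lo, y > y_hi),        # Eq. (3)
        "UL": cond(x > x_hi, y < y_lo),        # Eq. (4)
    }

# Demo on simulated correlated returns (correlation 0.6).
rng = np.random.default_rng(0)
z = rng.standard_normal((10_000, 2)) @ np.linalg.cholesky([[1, .6], [.6, 1]]).T
print(tail_dependence(z[:, 0], z[:, 1]))
```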
A copula is a multivariate probability distribution with uniform marginal distributions on the interval [0, 1]. In other words, if random constructs U and V are uniform on [0, 1], then the copula C is the joint distribution function of the vector (U, V), written (U, V) ~ C. Following Sklar (1959), the bivariate joint distribution for constructs X and Y is obtained through the joint distribution F as below:

F(x, y) = C(F_X(x), F_Y(y)).  (5)

Here the marginal distributions are denoted by F_X and F_Y, and C denotes the copula function describing the dependence structure between X and Y. We assume that all functions are differentiable; therefore, the bivariate joint density is given as:

f(x, y) = c(u, v) · f_X(x) · f_Y(y).  (6)

In Equation (6), u = F_X(x) and v = F_Y(y), and the density function of the copula is c(u, v) = ∂²C(u, v)/(∂u ∂v). The most renowned copulas are the Normal and Student t, where both copulas define symmetric and positive/negative dependence. In turn, the Gumbel, rotated Gumbel, Clayton, and rotated Clayton copulas are representative of asymmetric positive dependence. It is important to note that a Normal copula carries no tail dependence, whereas the Student t copula possesses symmetric tail dependence. Meanwhile, the Clayton and rotated Gumbel copulas symbolize lower tail dependence, and the Gumbel and rotated Clayton copulas signify upper tail dependence. The upper and lower tail dependence are manifested as:

λ_U = lim_{v→1} P[X > F^{-1}(v) | Y > F^{-1}(v)] = lim_{v→1} (1 − 2v + C(v, v)) / (1 − v),  (7)

λ_L = lim_{v→0} P[X < F^{-1}(v) | Y < F^{-1}(v)] = lim_{v→0} C(v, v) / v,  (8)

where 0 ≤ λ_U ≤ 1 and 0 ≤ λ_L ≤ 1.

For capturing extreme dependencies in counter directions, it is necessary to construct fresh copulas by rotation through 90 and 270 degrees. In this way, the upper and lower tail dependencies of the freshly created half-rotated copulas are written as:

λ′_U = lim_{v→1} P[X < F^{-1}(1 − v) | Y > F^{-1}(v)] = lim_{v→1} (1 − 2v + C_{R90/270}(v, v)) / (1 − v),  (9)

λ′_L = lim_{v→0} P[X > F^{-1}(1 − v) | Y < F^{-1}(v)] = lim_{v→0} C_{R90/270}(v, v) / v,  (10)

where 0 ≤ λ′_U ≤ 1 and 0 ≤ λ′_L ≤ 1. Given that Equations (7) and (8) present positive tail dependence in the third and first quadrants, Equations (9) and (10) reflect negative tail dependence in the fourth and second quadrants.[1]

[1] Refer to Karim, Khan, et al. (2022) for the copula specifications employed in the TVOC framework.

TVOC joins all the combinations of copulas provided in Table 2 and signposts potential dependencies in the tails in terms of switching from positive to negative dependence. Thus, there are two steps to model the TVOC approach: (1) the optimal copula (OC) and (2) time-varying modeling based on Liu et al. (2017).

#### 2.3 | Modeling OC

As mentioned in the previous subsection, various types of copulas describe positive and negative tail dependencies. Nevertheless, it is very difficult for them to fit the dependence types concurrently. Thus, the first step involves testing the direction of dependence between X and Y, where corresponding copulas are selected based on their direction. For this purpose, the distribution-free test proposed by Liu et al. (2017) is applied to identify the underlying relationships. For variables X and Y of length n, we test whether Kendall's τ, which measures the average market dependence, is significantly positive or significantly negative, where both null hypotheses set tau to zero, that is, τ = 0. Results are interpreted following the conditions:

(i) OC fitting samples are selected from the set of copulas encompassing [normal, Student t, Clayton, rotated Clayton, Gumbel, and rotated Gumbel] if the value of Kendall's τ is positively significant.
(ii) OC fitting samples are selected from the set of copulas carrying [normal, Student t, Clayton 90-degree, rotated Clayton 270-degree, Gumbel 90-degree, and rotated Gumbel 270-degree] if the value of Kendall's τ is negatively significant.

(iii) OC fitting samples are selected from all the sets of copulas mentioned in (i) and (ii) if the value of Kendall's τ is insignificant.

By employing this process of fitting OC samples, we can compare the log-likelihood values for each copula. Meanwhile, the changes in the market dependencies are tracked by repeating the two steps below for each subsample:

Step 1: We fit the subsample at time t, where t is considered the last point within the subsample, and then we compute the marginal distributions for constructs F_X and F_Y independently. Thus, we attain the uniform (0, 1) series for u and v at each window.

Step 2: We calculate Kendall's τ for the subsample at time t and perform the distribution-free tests as explained earlier. Given varying results in each copula, we select the OC from the multiple sets of OC functions.

#### 2.4 | Modeling the time-varying (TV) process

Based on Liu et al. (2017), a fixed window of 260 days and a rolling-ahead process for each day is used following the subsample characteristics mentioned above. When OC modeling is combined with the TV modeling process, the obtained copula reveals the distinct dependence structures produced by the TV process. In other words, as Patton (2006) and Creal et al. (2008) explained, the resultant copula only possesses the dynamic features which solely reflect positive or negative dependencies. In our study, the TV process is parallel to a regime-switching method, where one of the major benefits is that we do not have to compute a large number of parameters as the number of regimes increases. Apart from the Student t copula, the remaining copulas carry one parameter each.
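The rolling scheme in Steps 1-2 and Section 2.4 can be sketched as follows. This is an illustrative skeleton only: fit_copula is a placeholder to be supplied by a copula library, and scipy's standard Kendall tau significance test stands in for the distribution-free test of Liu et al. (2017).

```python
# Sketch of the 260-day rolling TVOC selection loop (Steps 1-2 and Section 2.4).
# fit_copula(name, u, v) is a placeholder that should return the log-likelihood
# of the named copula fitted on the probability-integral-transformed series.
from scipy import stats

POSITIVE = ["normal", "t", "clayton", "rot_clayton", "gumbel", "rot_gumbel"]
NEGATIVE = ["normal", "t", "clayton90", "rot_clayton270", "gumbel90", "rot_gumbel270"]

def select_optimal_copula(u, v, fit_copula, window=260, alpha=0.05):
    """For each day t, fit the candidate copulas on the trailing window of the
    PIT series (u, v) and keep the copula with the highest log-likelihood."""
    chosen = []
    for t in range(window, len(u)):
        uw, vw = u[t - window:t], v[t - window:t]
        tau, pval = stats.kendalltau(uw, vw)   # stand-in for the Liu et al. test
        if pval < alpha:
            candidates = POSITIVE if tau > 0 else NEGATIVE
        else:
            candidates = sorted(set(POSITIVE + NEGATIVE))
        logliks = {name: fit_copula(name, uw, vw) for name in candidates}
        chosen.append(max(logliks, key=logliks.get))
    return chosen
```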
*F* *R* *FM t*, and *F* *R* *GB t*, are marginal distribution functions of *R* *FM t*, and *R* *GB t*, in an orderly manner. Afterward, for computing the value of u, all values of C(u, v) = q 1 q 2 and v (v = q 1 ) are given; hence it becomes quite easy to calculate its value. Since multiple copulas are used to capture the dynamic dependence, given the specific characteristics of each copula, u are obtained. Thus, considering the marginal modeling, *F* *R* *FM t*, is achieved. #### 3 | EMPIRICAL RESULTS 3.1 | TVOC estimates Empirical results in Table 2 and Figures 2–7 illustrate that the dependence structure between ‐ green bonds and financial markets are asymmetric and positive except for SPGB UDSX, where ‐ dependence structure is mainly symmetric and negative with substantial tail dependence. We also report that TVOC demonstrates higher values compared with each copula of green bonds and financial markets. Further, Table 2 displays that t copula contains the largest proportion of the best‐fitting copulas, which determines that dependence between green bonds and financial ‐ markets is symmetric and tail dependent, necessitating the TVOC technique. Meanwhile, given the varied periods, most of the copulas show rotated Clayton and rotated Gumbel arrangements ‐ which suggest that positive tail dependence is evident in some of the pairs. In contrast, some ‐ ‐ pairs denote half rotated Clayton and half rotated Gumbel, providing evidence of negative asymmetric dependencies. Our findings are well‐aligned with Liu et al. (2017), Naeem and Karim (2021), Karim, Naeem, Mirza, et al. (2022), Karim et al., (2023a, 2023b, 2023c) for demonstrating similar dependence structures among various types of markets. ----- KARIM ET AL . EUROPEAN | 13 FINANCIAL MANAGEMENT FIGURE 2 This figure presents TVOC estimates for green bonds and clean energy. Panel (a) presents Kendal's tau derived from the tail dependence parameters; Panel (b) presents the proportion of the total number of best‐fitting copulas for every copula, where the horizontal axis represents the types of copula model under consideration (N: normal; t: Student t; C: Clayton; G: Gumbel; RC: 180° rotated Clayton; RG: 180° rotated Gumbel; R1C: 90° rotated Clayton; R1G: 90° rotated Gumbel; R2C: 270° rotated Clayton; R2G: 270° rotated – ‐ ‐ Gumbel); Panels (c f) are the time varying tail dependence parameters. TDF, tail dependence function; TVOC, time‐varying optimal copula. Further, detailed evidence of each pair of green bonds and financial markets suggests that each pair's time‐varying OC vary. For instance, Figure 2 displays the TVOC estimates between green bonds and clean energy market where best‐fitting copulas are mainly related to Student t ‐ ‐ (symmetric and tail dependent) and Normal (symmetric and no tail dependence) copulas. However, rotated Gumbel (asymmetric, positive dependence) and half‐rotated Gumbel (asymmetric, negative dependence) copulas also reflect the dependence between green bonds and clean energy. Figure 2a represents time‐varying attributes of TVOC where initially ‐ dependence between green bonds and clean energy market is symmetric and tail dependent – reflecting European Sovereign Debt Crisis (2010 2012). Soon after ESDC, the dependence shifted towards blue copula (rotated Gumbel), revealing positive dependence in the lower tails. 
#### 3 | EMPIRICAL RESULTS

3.1 | TVOC estimates

Empirical results in Table 2 and Figures 2-7 illustrate that the dependence structures between green bonds and financial markets are asymmetric and positive except for SPGB-USDX, where the dependence structure is mainly symmetric and negative with substantial tail dependence. We also report that the TVOC demonstrates higher values compared with each single copula of green bonds and financial markets. Further, Table 2 displays that the t copula contains the largest proportion of the best-fitting copulas, which determines that the dependence between green bonds and financial markets is symmetric and tail-dependent, necessitating the TVOC technique. Meanwhile, given the varied periods, most of the copulas show rotated Clayton and rotated Gumbel arrangements, which suggests that positive tail dependence is evident in some of the pairs. In contrast, some pairs denote half-rotated Clayton and half-rotated Gumbel, providing evidence of negative asymmetric dependencies. Our findings are well aligned with Liu et al. (2017), Naeem and Karim (2021), Karim, Naeem, Mirza, et al. (2022), and Karim et al. (2023a, 2023b, 2023c) in demonstrating similar dependence structures among various types of markets.

FIGURE 2 This figure presents TVOC estimates for green bonds and clean energy. Panel (a) presents Kendall's tau derived from the tail dependence parameters; Panel (b) presents the proportion of the total number of best-fitting copulas for every copula, where the horizontal axis represents the types of copula model under consideration (N: normal; t: Student t; C: Clayton; G: Gumbel; RC: 180° rotated Clayton; RG: 180° rotated Gumbel; R1C: 90° rotated Clayton; R1G: 90° rotated Gumbel; R2C: 270° rotated Clayton; R2G: 270° rotated Gumbel); Panels (c)-(f) are the time-varying tail dependence parameters. TDF, tail dependence function; TVOC, time-varying optimal copula.

Further, detailed evidence on each pair of green bonds and financial markets suggests that the time-varying OCs of each pair vary. For instance, Figure 2 displays the TVOC estimates between green bonds and the clean energy market, where the best-fitting copulas are mainly related to the Student t (symmetric and tail-dependent) and Normal (symmetric and no tail dependence) copulas. However, rotated Gumbel (asymmetric, positive dependence) and half-rotated Gumbel (asymmetric, negative dependence) copulas also reflect the dependence between green bonds and clean energy. Figure 2a represents time-varying attributes of TVOC where initially the dependence between green bonds and the clean energy market is symmetric and tail-dependent, reflecting the European Sovereign Debt Crisis (2010-2012). Soon after the ESDC, the dependence shifted towards the blue copula (rotated Gumbel), revealing positive dependence in the lower tails.

A declining trend in the comovement between green bonds and the clean energy market until 2015 is observed, where the pink copula (half-rotated Gumbel) is dominant, highlighting asymmetric negative dependence of upper-lower tails during the start of the Shale oil crisis (2015-2016). Nevertheless, the dependence turned out to be symmetric again during 2018, which reflects the US interest rate hike; the sudden increase in interest rates surmounted the tail dependence of financial markets (Kang et al., 2021; Naeem, Iqbal et al., 2022c; Naeem, Karim et al., 2022d). Concurrently, during the onset of COVID-19, the dependence structure shifted to the sea-green copula (rotated Clayton), symbolizing asymmetric positive dependence in the upper tails. However, after COVID-19, markets started to stabilize and returned to their original operating positions. The dominant dependence is embodied by the Student t copula. Figures 2c-f also illustrate the time-varying OC in the lower-upper, upper-upper, lower-lower, and upper-lower dependence structures, stressing the existence of substantial asymmetric tail dependence in both the upper-upper classes and the lower tails between green bonds and clean energy. Given the dependence structure between green bonds and clean energy, our findings corroborate Elsayed et al. (2020), who demonstrated the strong diversification potential of green bonds for several markets. Overall, it is revealed that considerable tail dependence between green bonds and clean energy exists over the sample period, and dependence structures are strengthened and predominantly tail-dependent following a stress period.

FIGURE 3 This figure presents TVOC estimates for green bonds and stocks. See notes in Figure 2. TVOC, time-varying optimal copula.

Figure 3 demonstrates the estimates between green bonds and stocks, where the dominant copulas are related to the Student t copula, which carries symmetric and tail-dependent features. Meanwhile, the rest of the copulas show negligible dependence between green bonds and stocks. The lower dependence between green bonds and stocks echoes the findings of Arif et al. (2021), who documented diversification avenues of green bonds for stocks, as the connectedness between green bonds and stocks is lower. In this way, green bonds can shelter the investments from adverse shocks and distressing periods by rescuing the investments from uncertainty and substantial losses. Figure 3a represents time-varying attributes of TVOC, where the majority of the dependence structures following the significant distressing events of the European Sovereign Debt Crisis (Blundell-Wignall, 2012), the Shale oil crisis, the US interest rate hike (Kang et al., 2021), and the COVID-19 pandemic signify the Student t copula. Correspondingly, Figures 3c-f depict the TVOC in the lower-upper, upper-upper, lower-lower, and upper-lower dependence structures, where Student t copulas are dominant in the tails between green bonds and stocks. In summary, Figure 3 indicates that the dependence between green bonds and stocks is symmetric with varying tail dependencies. Meanwhile, stress events reiterated the symmetric arrangements of copulas given the time-varying attributes.
FIGURE 4 This figure presents TVOC estimates for green bonds and commodities. See notes in Figure 2. TVOC, time-varying optimal copula.

Figure 4 presents the TVOC measures for green bonds and commodities, where the histograms show that the best-fitting copulas are related to Normal, Student t, Clayton, rotated Gumbel, rotated Clayton, and Gumbel, in an orderly manner. Since most of the dependence structures are dominated by the Normal and Student t copulas, intuitively, the dependence between green bonds and commodities is symmetric with no tail dependence (Normal) and symmetric with tail dependence (Student t). The Clayton (parrot-green fragment) and rotated Gumbel (blue fragment) copulas symbolize positive dependence in the lower-lower tails, which suggests that green bonds and commodities show direct dependence mainly in their lower tails. Concurrently, the rotated Clayton (light-green fragment) and Gumbel (dark-green fragment) copula arrays reveal positive dependence between green bonds and commodities in the upper-upper tails. Hence, the direct dependence between green bonds and commodities in their upper and lower tails intuitively explains that green bonds are directly associated with commodities, reflecting positive comovements between green bonds and commodities due to the strong positioning of commodities in the financial markets and their inherent integration.

The aggregate dependence associations are reflected in Figure 4a, where the time-varying characteristics between green bonds and commodities show varying dominance of copulas given multiple events of economic ups and downs. There is an increasing dependence during the ESDC with symmetric arrangements of the copula, initially reflecting the peach-colored fragment. As the dependence declines gradually, the comovement varies given the positive dependence in both the upper-upper and lower-lower tails. During the Shale oil revolution and the US interest rate hike, the dependence re-echoes the dominance of the Normal copula, contending prevalent direct dependence between green bonds and commodities during the oil crisis. However, the dependence structure during COVID-19 switched to the Student t copula in the downward direction, which sufficiently explains the gigantic havoc and adversity created by the pandemic (Avramov et al., 2022), which substantially shifted the positive dependence into a negative relationship between green bonds and commodities. The negative dependence during the COVID-19 pandemic reflects strong safe-haven features of green bonds for commodities, in line with Arif et al. (2021), who demonstrated strong safe-haven characteristics of green bonds, particularly during the COVID-19 epidemic. Figures 4c-f manifest the dependence structures between green bonds and commodities in the lower-upper, upper-upper, lower-lower, and upper-lower tails, where remarkable changes in the comovements suggest positive tail dependence between commodities and green bonds.

FIGURE 5 This figure presents TVOC estimates for green bonds and US dollar. See notes in Figure 2. TVOC, time-varying optimal copula.

Figure 5 demonstrates the TVOC estimates between green bonds and the US dollar index, where interesting findings are obtained with discrete dominance of the Student t copula for the whole sample period, which explicitly explains the symmetric arrangements with considerable negative tail dependence.
FIGURE 5 This figure presents TVOC estimates for green bonds and US dollar. See notes in Figure 2. TVOC, time-varying optimal copula.

Figure 5 demonstrates the TVOC estimates between green bonds and the US dollar index, where interesting findings are obtained: the Student t copula is discretely dominant for the whole sample period, which explicitly indicates symmetric arrangements with considerable negative tail-dependence. Meanwhile, minor fragments of rotated (R1G) and half-rotated (R2G) Gumbel copulas are reported, symbolizing negative dependencies in the upper–lower tails. The predominant negative dependence between green bonds and the US dollar indicates that the two financial markets counter-move over the given sample period. Meanwhile, the intuitive explanation of these negative time-varying results for the whole sample period points toward hedge and safe-haven attributes of green bonds for the US dollar during normal and distressing periods, respectively. These findings imply that the correlations between the US dollar and green bonds are negative, underlining the strong safe-haven feature of green bonds against the US dollar under tumultuous economic strains (Karim, Khan, Mirza, et al., 2022a; Karim, Lucey, Naeem, et al., 2022b).

Moreover, the strong safe-haven characteristics of green bonds for the US dollar also indicate that investors can consider green investments as prospective beneficial investment streams that ultimately shield investments from harsh economic circumstances. The cumulative time-varying features in Figure 5a narrate parallel findings, where negative tail-dependence is initially evident during the ESDC, while the rest of the plot echoes the dominance of the Student t copula for each distressed episode with increased dependence. Figure 5c–f illustrates leading dependence in the upper–lower and lower–upper tails, whereas small scattered dependence fragments are evident in the upper and lower tails. The negative tail-dependence between green bonds and the US dollar reverberates the underlying uncertainty in the US dollar as well as the strong safe-haven properties of green bonds for the US dollar. In addition, the potential safe-haven attributes intuitively justify the inclusion of green bonds in mainstream investment portfolios to avoid exponential losses due to economic and financial uncertainties.

FIGURE 6 This figure presents TVOC estimates for green bonds and conventional bonds. See notes in Figure 2. TVOC, time-varying optimal copula.

Figure 6 exhibits the TVOC estimates between green bonds and conventional bonds, where the best-fitting copulas correspond to the Student t, rotated-Gumbel, and Clayton, whereas a small contribution of the Normal, Gumbel, and half-rotated Gumbel is also reported. The dependence structure between green bonds and conventional bonds is symmetric and mainly tail-dependent, referring to the Student t copula, while the rotated-Gumbel and Clayton show asymmetric positive dependence in the lower–lower tails. The positive dependence between green bonds and conventional bonds echoes the arguments of Reboredo et al. (2020), who noted that green bonds are a subset of conventional bonds and share comparable fixed-income features. In this way, conventional bonds and green bonds comove over the whole sample period. The aggregate dependence in Figure 6a demonstrates the time-varying features between green and conventional bonds, where the initially declining dependence coincides with the ESDC and symmetric, tail-dependent characteristics are dominant. An incline in the graph is then observed, with asymmetric positive dependence in the lower tails in the aftermath of the ESDC. However, decreasing dependence is evident with varying copulas during the Shale oil crisis, the Brexit referendum, and the US interest rate hike.
Moreover, the negative dependence during the ESDC points to the safe-haven features of green bonds for conventional bonds, and the consistent positive dependence afterward reflects the hedge capacity of green bonds. Thus, green bonds tend to act as a safe haven during the ESDC and as a hedge during stable periods with continuous positive dependence. Similar findings are reflected in Figure 6c–f where, at different tails, the dependence structure is predominantly symmetric and tail-dependent, with few traces of asymmetric positive dependence in the lower tails of both markets.

FIGURE 7 This figure presents TVOC estimates for green bonds and Bitcoin. See notes in Figure 2. TVOC, time-varying optimal copula.

Figure 7 represents the tail-dependence between green bonds and Bitcoin, where the best-fitting copulas are the Normal, Student t, rotated and half-rotated Clayton (90-degree), rotated-Gumbel, half-rotated Clayton (270-degree), and Clayton. The Normal copula arrangements reveal that the dependence structure between green bonds and Bitcoin is symmetric with no tail dependence, while the Student t copula pattern suggests symmetric, tail-dependent structures. The RC and RG arrays are indicative of positive dependence in the upper–upper and lower–lower tails, respectively, which ascertains that the dependence between green bonds and Bitcoin is positive in the upper and lower tails. Correspondingly, R1C and R2C manifest negative dependence between the two financial markets in the upper–lower and lower–upper tails over the sample period, which sufficiently justifies the strong safe-haven characteristics of green bonds for Bitcoin (Liu & Tsyvinski, 2021; Naeem & Karim, 2021). Our findings indicate that the dependence arrangement between green bonds and Bitcoin is mostly tail-dependent, irrespective of positive (negative) and upper (lower) tails. The cumulative dependence in Figure 7a shows that initially, during the ESDC, the dependence corresponds to the Normal copula and is positive without significant tail-dependence, when markets were undergoing distressed episodes following the European Sovereign Debt Crisis. One plausible explanation for the symmetric dependence between green bonds and Bitcoin is the low attention of investors and governments to green initiatives during this period; therefore, there is negligible positive tail-dependence. Right after the ESDC, the dependence shifts toward negative dependence, reflecting the recovery of the financial markets with a dominant Student t copula, which symbolizes hedging properties of green bonds for Bitcoin. During the Shale oil crisis, the dependence switched between the Normal and Clayton copulas, which describes the shift from a symmetric to an asymmetric arrangement, particularly in the lower tails, signifying that the stress period characterized the dependence in the lower tails.

Further, prominent ups and downs are observed during the eras of Brexit, the cryptocurrency bubble burst (Corbet et al., 2018; Karim, Appiah et al., 2022e; Lucey et al., 2021), the US interest rate hike, and COVID-19, where sizable comovements are illustrated between green bonds and Bitcoin. The dependence structure remained positive for most of the crisis periods except after the ESDC, which substantiates the hedging features of green bonds against Bitcoin. Overall, the tail-dependence between green bonds and Bitcoin indicates that external shocks and the intensity of stress events determine the appropriate copulas for dependence, alongside the safe-haven and hedge characteristics of green bonds for Bitcoin (Liu & Tsyvinski, 2021).
Figure 7c–f also explains the subsequent dependence in the respective tails, where substantial tail-dependence is reported in the upper–lower and lower–upper tails, confirming our findings in Figure 7b. As additional evidence, Table 3 reports the log-likelihood of the TVOC alongside time-varying copula and nondynamic copula models, which exhibits that the employed methodology supersedes the benchmark techniques for all financial market pairs. Moreover, the table's values also show that the TVOC approach can best determine the dynamic dependence features between green bonds and financial markets.

TABLE 3 This table presents the log-likelihood values for the TVOC, time-varying copula, and nondynamic copula models.

             TVOC      TV-Normal   TV-t      Normal    t
SPGB-SPCL    70.548    59.505      67.606    39.177    49.860
SPGB-MSCI    124.670   106.264     123.079   68.890    99.543
SPGB-GSCI    48.581    38.958      43.681    21.963    26.179
SPGB-UDSX    861.156   818.153     859.792   828.908   864.843
SPGB-BOND    206.189   178.323     198.813   126.804   152.223
SPGB-BTCN    12.760    10.297      8.751     1.279     −0.079

#### 3.2 | VaR and CoVaR estimates

To further validate our TVOC findings, we examined the risk spillovers of green bonds and financial markets by quantifying the VaR and CoVaR measures of risk. Figure 8 presents the upside and downside values of VaRs and CoVaRs between each pair of green bonds and financial markets.

FIGURE 8 This figure presents spillovers from green bonds to financial markets. These figures show the conditional value-at-risk (CoVaR) of the green bond.

In general, parallel risk spillovers are observed for each risk pair, where the sizable influence of external shocks, particularly COVID-19, is imprinted, except for the SPGB-BTCN pair, for which the 2015 wallet hack of Bitstamp increased the risk spillovers between green bonds and Bitcoin.[2] While quantifying the risk spillovers, we report parallel trends for the SPGB-SPCL, SPGB-MSCI, SPGB-GSCI, and SPGB-BOND pairs, while the SPGB-BTCN pair revealed high risk spillovers during 2015 and moderate risk during the COVID-19 pandemic. Noticeably, the risk spillovers for the SPGB-UDSX pair displayed scattered upside and downside VaRs and CoVaRs, which reiterates our findings in Figure 5, where the tail dependence between green bonds and the US dollar manifested abnormal dependence with a predominance of the Student t copula, echoing uncertainty in the US dollar index under uncertain economic conditions (Avramov et al., 2022; Cesa-Bianchi et al., 2020; Karim et al., 2023b; Naeem, Iqbal, et al., 2022c; Naeem, Karim, et al., 2023). In this way, the extreme risk spillover analysis highlights that the uncertainty of external economic circumstances shaped the dependence of green bonds and financial markets, with significant spillovers during COVID-19 in particular.

2 See https://www.coindesk.com/markets/2015/12/31/14-headlines-that-rocked-bitcoin-and-the-blockchain-in-2015/
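Before turning to the conclusion, the two risk measures used in this section can be stated explicitly. Following Adrian and Brunnermeier (2016), with r^i the return of market i and \alpha the tail probability, the downside VaR and CoVaR are defined implicitly by

\Pr\left(r^i \le \mathrm{VaR}^{i}_{\alpha}\right) = \alpha, \qquad
\Pr\left(r^j \le \mathrm{CoVaR}^{j|i}_{\alpha} \mid r^i = \mathrm{VaR}^{i}_{\alpha}\right) = \alpha,

so that CoVaR is the VaR of market j conditional on market i sitting exactly at its own VaR. The upside measures plotted in Figure 8 are obtained analogously by replacing the \alpha-quantile with the (1-\alpha)-quantile.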
#### 4 | CONCLUSION

We examined the tail-dependence between green bonds and financial markets using data on six financial markets (the clean energy market, the stock market, commodities, the US dollar, conventional bonds, and Bitcoin) by employing the novel TVOC technique proposed by Liu et al. (2017) for the period spanning January 2012 to September 2021. In addition, we quantified the risk spillovers between green bonds and financial markets by employing VaR and CoVaR estimates. Our findings highlight significant tail-dependencies between green bonds and financial markets, where most of the markets exhibited numerous tail-dependent copulas corresponding to their respective symmetric and asymmetric tail-dependent relationships. Along with these, time-varying properties characterize various economic and financial trends, echoing the European Sovereign Debt Crisis, the Shale oil crisis, the Brexit referendum, the US interest rate hike, and the COVID-19 pandemic. An independent analysis of the financial markets reveals that green bonds act as diversifiers for clean energy and stocks, whereas significant safe-haven features are illuminated for the US dollar and Bitcoin markets. Concurrently, green bonds also provide strong hedge and safe-haven features to conventional bonds and commodities during normal and distressing periods, respectively. For further validation, the log-likelihood values also justified the use of the TVOC approach. The risk spillover analysis substantiated the role of the COVID-19 pandemic, except for Bitcoin, which manifested enhanced risk spillovers during 2015, corroborating the Bitstamp loss. By reporting these results, we devise useful implications for policymakers, governments, macroprudential authorities, investors, financial market participants, and portfolio managers.

Policymakers can draw on these findings by including green bonds in mainstream investments and assessing the tail-dependence and the diversification, safe-haven, and hedging avenues given the uncertainty of economic and financial circumstances. As the tail-dependence between green bonds and diverse financial markets depicts varying patterns, the study can be utilized as a benchmark by governments for determining the effectiveness of green bonds and their dependence structures with other financial markets in terms of their diversifier, safe-haven, and hedging roles. Investors can also use the study's findings by cautiously evaluating the available investment opportunities that serve their profit-seeking and socially responsible motives. Concurrently, financial market participants and institutional investors can employ various risk measures to keenly observe the costs and benefits of each investment pair. In addition, investors can utilize the study's findings to evaluate diversification potential, find safe-haven or hedging avenues, and select the investments with minimum losses under uneven economic circumstances. Investors and portfolio managers can design their mainstream portfolios with less risky investments and include green bonds as diversifiers to mitigate risk by adopting useful strategies under haphazard economic episodes. As reported by earlier empirical studies, green bonds act as diversifiers due to their high risk-absorbance during economically fragile periods. Thus, these findings provide support to the prior literature and insightful ramifications for practitioners. As a future research agenda, further studies can assess the hedge and safe-haven features of green bonds and other financial markets or stock markets such as global stocks, and so forth.
Moreover, future research studies can employ other tail-dependence methodologies, for instance quantile connectedness, to comprehensively assess whether the selected financial markets perform better than others under extreme settings.

ACKNOWLEDGMENTS
Open access funding provided by IReL.

DATA AVAILABILITY STATEMENT
All data are publicly available and described in full in the paper. The data that support the findings of this study are available from the corresponding author upon reasonable request.

ORCID
Brian M. Lucey http://orcid.org/0000-0002-4052-8235

REFERENCES
Adrian, T., & Brunnermeier, M. K. (2016). CoVaR. American Economic Review, 106(7), 1705–1741.
Alawi, S. M., Karim, S., Meero, A. A., Rabbani, M. R., & Naeem, M. A. (2022). Information transmission in regional energy stock markets. Environmental Science and Pollution Research, 1–13.
Andersen, T. M., Bhattacharya, J., & Liu, P. (2020). Resolving intergenerational conflict over the environment under the Pareto criterion. Journal of Environmental Economics and Management, 100, 102290.
Appiah, M., Karim, S., Naeem, M. A., & Lucey, B. M. (2022). Do institutional affiliation affect the renewable energy-growth nexus in the Sub-Saharan Africa: Evidence from a multi-quantitative approach. Renewable Energy, 191, 785–795.
Arfaoui, N., Naeem, M. A., Boubaker, S., Mirza, N., & Karim, S. (2023). Interdependence of clean energy and green markets with cryptocurrencies. Energy Economics, 120, 106584.
Arif, M., Hasan, M., Alawi, S. M., & Naeem, M. A. (2021). COVID-19 and time-frequency connectedness between green and conventional financial markets. Global Finance Journal, 49, 100650.
Arif, M., Naeem, M. A., Farid, S., Nepal, R., & Jamasb, T. (2021). Diversifier or more? Hedge and safe haven properties of green bonds during COVID-19. Energy Policy, 168, 113102. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3782126
Atif, M., Hossain, M., Alam, M. S., & Goergen, M. (2021). Does board gender diversity affect renewable energy consumption? Journal of Corporate Finance, 66, 101665.
Avramov, D., Chordia, T., Jostova, G., & Philipov, A. (2022). The distress anomaly is deeper than you think: Evidence from stocks and bonds. Review of Finance, 26(2), 355–405.
Billah, M., Karim, S., Naeem, M. A., & Vigne, S. A. (2022). Return and volatility spillovers between energy and BRIC markets: Evidence from quantile connectedness. Research in International Business and Finance, 62, 101680.
Blundell-Wignall, A. (2012). Solving the financial and sovereign debt crisis in Europe. OECD Journal: Financial Market Trends, 2011(2), 201–224.
Bolton, P., & Kacperczyk, M. (2021). Do investors care about carbon risk? Journal of Financial Economics, 142(2), 517–549.
Bouri, E., Gabauer, D., Gupta, R., & Tiwari, A. K. (2021). Volatility connectedness of major cryptocurrencies: The role of investor happiness. Journal of Behavioral and Experimental Finance, 30, 100463.
Bown, C. P. (2022). How COVID-19 medical supply shortages led to extraordinary trade and industrial policy. Asian Economic Policy Review, 17(1), 114–135.
Broadstock, D. C., & Cheng, L. T. W. (2019). Time-varying relation between black and green bond price benchmarks: Macroeconomic determinants for the first decade. Finance Research Letters, 29, 17–22.
Caillault, C., & Guégan, D. (2005).
Empirical estimation of tail dependence using copulas: Application to Asian markets. Quantitative Finance, 5(5), 489–501.
Cesa-Bianchi, A., Pesaran, M. H., & Rebucci, A. (2020). Uncertainty and economic activity: A multicountry perspective. The Review of Financial Studies, 33(8), 3393–3445.
Cochrane, J. H. (2022). Portfolios for long-term investors. Review of Finance, 26(1), 1–42.
Corbet, S., Meegan, A., Larkin, C., Lucey, B., & Yarovaya, L. (2018). Exploring the dynamic relationships between cryptocurrencies and other financial assets. Economics Letters, 165, 28–34.
Creal, D., Koopman, S. J., & Lucas, A. (2008). A general framework for observation-driven time-varying parameter models (Tinbergen Institute Discussion Paper No. 08-108/4).
Daubanes, J. X., Mitali, S. F., & Rochet, J. C. (2021). Why do firms issue green bonds? Swiss Finance Institute Research Paper, 21–97.
Elsayed, A. H., Nasreen, S., & Tiwari, A. K. (2020). Time-varying comovements between energy market and global financial markets: Implication for portfolio diversification and hedging strategies. Energy Economics, 90, 104847.
Farid, S., Karim, S., Naeem, M. A., Nepal, R., & Jamasb, T. (2023). Co-movement between dirty and clean energy: A time-frequency perspective. Energy Economics, 119, 106565.
Farid, S., Naeem, M. A., Paltrinieri, A., & Nepal, R. (2022). Impact of COVID-19 on the quantile connectedness between energy, metals and agriculture commodities. Energy Economics, 109, 105962.
Flammer, C. (2020). Green bonds: Effectiveness and implications for public policy. Environmental and Energy Policy and the Economy, 1(1), 95–128.
Flammer, C. (2021). Corporate green bonds. Journal of Financial Economics, 142, 499–516.
Iqbal, N., Naeem, M. A., & Suleman, M. T. (2022). Quantifying the asymmetric spillovers in sustainable investments. Journal of International Financial Markets, Institutions and Money, 77, 101480.
Joe, H. (1997). Multivariate models and multivariate dependence concepts. CRC Press.
Kanamura, T. (2020). Are green bonds environmentally friendly and good performing assets? Energy Economics, 88, 104767.
Kang, S., Hernandez, J. A., Sadorsky, P., & McIver, R. (2021). Frequency spillovers, connectedness, and the hedging effectiveness of oil and gold for US sector ETFs. Energy Economics, 99, 105278.
Karim, S., Appiah, M., Naeem, M. A., Lucey, B. M., & Li, M. (2022e). Modelling the role of institutional quality on carbon emissions in Sub-Saharan African countries. Renewable Energy, 198, 213–221.
Karim, S., Khan, S., Mirza, N., Alawi, S. M., & Taghizadeh-Hesary, F. (2022a). Climate finance in the wake of COVID-19: Connectedness of clean energy with conventional energy and regional stock markets. Climate Change Economics, 13(03), 2240008.
Karim, S., Lucey, B. M., Naeem, M. A., & Uddin, G. S. (2022b). Examining the interrelatedness of NFTs, DeFi tokens and cryptocurrencies. Finance Research Letters, 47, 102696.
Karim, S., Lucey, B. M., Naeem, M. A., & Vigne, S. A. (2023a). The dark side of bitcoin: Do emerging Asian Islamic markets help subdue the ethical risk? Emerging Markets Review, 54, 100921.
Karim, S., & Naeem, M. A. (2022). Do global factors drive the interconnectedness among green, Islamic and conventional financial markets? International Journal of Managerial Finance, 18(4), 639–660.
Karim, S., Naeem, M. A., Hu, M., Zhang, D., & Taghizadeh-Hesary, F. (2022c).
Determining dependence, centrality, and dynamic networks between green bonds and financial markets. Journal of Environmental Management, 318, 115618.
Karim, S., Naeem, M. A., Mirza, N., & Paule-Vianez, J. (2022d). Quantifying the hedge and safe-haven properties of bond markets for cryptocurrency indices. The Journal of Risk Finance, 23(2), 191–205.
Karim, S., Naeem, M. A., Shafiullah, M., Lucey, B. M., & Ashraf, S. (2023b). Asymmetric relationship between climate policy uncertainty and energy metals: Evidence from cross-quantilogram. Finance Research Letters, 54, 103728.
Karim, S., Naeem, M. A., Tiwari, A. K., & Ashraf, S. (2023c). Examining the avenues of sustainability in resources and digital blockchains backed currencies: Evidence from energy metals and cryptocurrencies. Annals of Operations Research, 1–18.
Karpf, A., & Mandel, A. (2017). Does it pay to be green?
Krueger, P., Sautner, Z., & Starks, L. T. (2020). The importance of climate risks for institutional investors. The Review of Financial Studies, 33(3), 1067–1111.
Larcker, D. F., & Watts, E. M. (2020). Where's the greenium? Journal of Accounting and Economics, 69(2–3), 101312.
Leitao, J., Ferreira, J., & Santibanez Gonzalez, E. (2021). Green bonds, sustainable development and environmental policy in the European Union carbon market. Business Strategy and the Environment, 30, 2077–2090.
Liu, B. Y., Ji, Q., & Fan, Y. (2017). A new time-varying optimal copula model identifying the dependence across markets. Quantitative Finance, 17(3), 437–453.
Liu, Y., & Tsyvinski, A. (2021). Risks and returns of cryptocurrency. The Review of Financial Studies, 34(6), 2689–2727.
Lucey, B. M., Vigne, S. A., Yarovaya, L., & Wang, Y. (2021). The cryptocurrency uncertainty index. Finance Research Letters, 102147.
Mensi, W., Naeem, M. A., Vo, X. V., & Kang, S. H. (2022). Dynamic and frequency spillovers between green bonds, oil and G7 stock markets: Implications for risk management. Economic Analysis and Policy, 73, 331–344.
Naeem, M. A., Adekoya, O. B., & Oliyide, J. A. (2021). Asymmetric spillovers between green bonds and commodities. Journal of Cleaner Production, 314, 128100.
Naeem, M. A., Conlon, T., & Cotter, J. (2022). Green bonds and other assets: Evidence from extreme risk transmission. Journal of Environmental Management, 305, 114358.
Naeem, M. A., Farid, S., Ferrer, R., & Shahzad, S. J. H. (2021). Comparative efficiency of green and conventional bonds pre- and during COVID-19: An asymmetric multifractal detrended fluctuation analysis. Energy Policy, 153, 112285.
Naeem, M. A., Gul, R., Farid, S., Karim, S., & Lucey, B. M. (2023a). Assessing linkages between alternative energy markets and cryptocurrencies. Journal of Economic Behavior & Organization, 211, 513–529.
Naeem, M. A., Iqbal, N., Karim, S., & Lucey, B. M. (2023b). From forests to faucets to fuel: Investigating the domino effect of extreme risk in timber, water, and energy markets. Finance Research Letters, 55, 104010.
Naeem, M. A., Iqbal, N., Lucey, B. M., & Karim, S. (2022c). Good versus bad information transmission in the cryptocurrency market: Evidence from high-frequency data. Journal of International Financial Markets, Institutions and Money, 81, 101695.
Naeem, M. A., & Karim, S. (2021). Tail dependence between bitcoin and green financial assets. Economics Letters, 208, 110068.
Naeem, M. A., Karim, S., Hasan, M., Lucey, B. M., & Kang, S. H. (2022d).
Nexus between oil shocks and agriculture commodities: Evidence from time and frequency domain. Energy Economics, 112, 106148.
Naeem, M. A., Karim, S., & Tiwari, A. K. (2022). Risk connectedness between green and conventional assets with portfolio implications. Computational Economics, 1–29.
Naeem, M. A., Karim, S., Uddin, G. S., & Junttila, J. (2022). Small fish in big ponds: Connections of green finance assets to commodity and sectoral stock markets. International Review of Financial Analysis, 83, 102283.
Naeem, M. A., Karim, S., Yarovaya, L., & Lucey, B. M. (2023c). Systemic risk contagion of green and Islamic markets with conventional markets. Annals of Operations Research, 1–23.
Naeem, M. A., Nguyen, T. T. H., Nepal, R., Ngo, Q. T., & Taghizadeh-Hesary, F. (2021). Asymmetric relationship between green bonds and commodities: Evidence from extreme quantile approach. Finance Research Letters, 43, 101983.
Naeem, M. A., Peng, Z., Suleman, M. T., Nepal, R., & Shahzad, S. J. H. (2020). Time and frequency connectedness among oil shocks, electricity and clean energy markets. Energy Economics, 91, 104914.
Nguyen, T. T. H., Naeem, M. A., Balli, F., Balli, H. O., & Vo, X. V. (2020). Time-frequency comovement among green bonds, stocks, commodities, clean energy, and conventional bonds. Finance Research Letters, 101739.
Patton, A. J. (2006). Modelling asymmetric exchange rate dependence. International Economic Review, 47(2), 527–556.
Pham, L. (2021). Frequency connectedness and cross-quantile dependence between green bond and green equity markets. Energy Economics, 98, 105257.
Pham, L., Karim, S., Naeem, M. A., & Long, C. (2022). A tale of two tails among carbon prices, green and non-green cryptocurrencies. International Review of Financial Analysis, 82, 102139.
Pham, L., & Nguyen, C. P. (2021). Asymmetric tail dependence between green bonds and other asset classes. Global Finance Journal, 50, 100669.
Reboredo, J. C., Ugolini, A., & Aiube, F. A. L. (2020). Network connectedness of green bonds and asset classes. Energy Economics, 86, 104629.
Reboredo, J. C., Ugolini, A., & Chen, Y. (2019). Interdependence between renewable-energy and low-carbon stock prices. Energies, 12(23), 4461.
Saravade, V., Chen, X., Weber, O., & Song, X. (2023). Impact of regulatory policies on green bond issuances in China: Policy lessons from a top-down approach. Climate Policy, 23(1), 96–107.
Siddique, M. A., Nobanee, H., Karim, S., & Naz, F. (2022). Investigating the role of metal and commodity classes in overcoming resource destabilization. Resources Policy, 79, 103075.
Siddique, M. A., Nobanee, H., Karim, S., & Naz, F. (2023). Do green financial markets offset the risk of cryptocurrencies and carbon markets? International Review of Economics & Finance, 86, 822–833.
Sklar, M. (1959). Fonctions de répartition à n dimensions et leurs marges. Annales de l'ISUP, 8, 229–231.
Tang, D. Y., & Zhang, Y. (2020). Do shareholders benefit from green bonds? Journal of Corporate Finance, 61, 101427.
Tiwari, A. K., Abakah, E. J. A., Bonsu, C. O., Karikari, N. K., & Hammoudeh, S. (2022). The effects of public sentiments and feelings on stock market behavior: Evidence from Australia. Journal of Economic Behavior & Organization, 193, 443–472.
Umar, M., Farid, S., & Naeem, M. A. (2022). Time-frequency connectedness among clean energy stocks and fossil fuel markets: Comparison between financial, oil and pandemic crisis. Energy, 240, 122702.
Wang, J., Chen, X., Li, X., Yu, J., & Zhong, R. (2020). The market reaction to green bond issuance: Evidence from China. Pacific-Basin Finance Journal, 60, 101294.
Yousaf, I., Jareño, F., & Tolentino, M. (2023). Connectedness between Defi assets and equity markets during COVID-19: A sector analysis. Technological Forecasting and Social Change, 187, 122174.

How to cite this article: Karim, S., Lucey, B. M., Naeem, M. A., & Yarovaya, L. (2023). Extreme risk dependence between green bonds and financial markets. European Financial Management, 1–26. https://doi.org/10.1111/eufm.12458
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.2139/ssrn.4318426?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.2139/ssrn.4318426, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GREEN", "url": "https://eprints.soton.ac.uk/485455/1/Euro_Fin_Management_2023_Karim_Extreme_risk_dependence_between_green_bonds_and_financial_markets.pdf" }
2023
[ "JournalArticle" ]
true
2023-09-16T00:00:00
[ { "paperId": "c3cbf1e8b2f7ed7022852e13535f28856080e617", "title": "Assessing Linkages between Alternative Energy Markets and Cryptocurrency" }, { "paperId": "d72a5049d2f1fc8e888b43efe819e8d56bf63830", "title": "Examining the avenues of sustainability in resources and digital blockchains backed currencies: evidence from energy metals and cryptocurrencies" }, { "paperId": "4390bc53d83ab2f923b64f1ec978fd53723f7b42", "title": "From forests to faucets to fuel: Investigating the domino effect of extreme risk in timber, water, and energy markets" }, { "paperId": "62710db225568184adc7e445408bd6ad836e0114", "title": "Systemic risk contagion of green and Islamic markets with conventional markets" }, { "paperId": "92f584139335c39fbc86139677d92f518b9d2284", "title": "Do green financial markets offset the risk of cryptocurrencies and carbon markets?" }, { "paperId": "b28f9ab97a1b72c88572c166185fe044c4b49f19", "title": "Co-movement between dirty and clean energy: A time-frequency perspective" }, { "paperId": "72ff7e44088d9915fee9b348be21524d44bcb6bc", "title": "Interdependence of clean energy and green markets with cryptocurrencies" }, { "paperId": "ae8fc496d0be6add68543da1f8aaf1270d49e83e", "title": "Asymmetric relationship between Climate Policy Uncertainty and Energy Metals: Evidence from Cross-Quantilogram" }, { "paperId": "9180b1e5806e94001e36b320ea68b27a5350e08a", "title": "Investigating the role of metal and commodity classes in overcoming resource destabilization" }, { "paperId": "fad829ee821912962f9a32058afebbbd83544d09", "title": "Good versus bad information transmission in the cryptocurrency market: Evidence from high-frequency data" }, { "paperId": "74297423cfe5e3ab95ba7f9a596625f9207d22ab", "title": "Connectedness between Defi assets and equity markets during COVID-19: A sector analysis" }, { "paperId": "9ed559bd776a90cd61f2f31433b33860111d86be", "title": "Determining dependence, centrality, and dynamic networks between green bonds and financial markets." }, { "paperId": "8259ffd8b8502f156ca047b4621db1cb7fabcf76", "title": "Risk Connectedness Between Green and Conventional Assets with Portfolio Implications" }, { "paperId": "c3654d90c410173eafc1444f93b19eae8f4f7da7", "title": "Modelling the role of institutional quality on carbon emissions in Sub-Saharan African countries" }, { "paperId": "1b4fcf9eae92a0839ebe2146710b093e4e8b4f3c", "title": "Small fish in big ponds: Connections of green finance assets to commodity and sectoral stock markets" }, { "paperId": "2a3f896428689c111a2cc980cb2b9712a3fea320", "title": "Nexus between Oil Shocks and Agriculture Commodities: Evidence from Time and Frequency Domain" }, { "paperId": "fabf334c8caef5f4115e137cd58661bc3c57424f", "title": "The dark side of Bitcoin: Do Emerging Asian Islamic markets help subdue the ethical risk?" 
}, { "paperId": "05b98d0a4dfd7360865595dddaa26ec700341be7", "title": "Return and volatility spillovers between energy and BRIC markets: Evidence from quantile connectedness" }, { "paperId": "1d48525a9b189a2e2dcdb90c1ccf9793817f086a", "title": "Impact of regulatory policies on green bond issuances in China: policy lessons from a top-down approach" }, { "paperId": "a769daac3c63f1983527aa5cec4a43abbd5c1b83", "title": "A tale of two tails among carbon prices, green and non-green cryptocurrencies" }, { "paperId": "f9a6bc7da4892e4fc3ea83a5f132f0709afc231b", "title": "Do institutional affiliation affect the renewable energy-growth nexus in the Sub-Saharan Africa: Evidence from a multi-quantitative approach" }, { "paperId": "e4dd6caf93de85e86d0a0d3026a3e82176805064", "title": "Information transmission in regional energy stock markets" }, { "paperId": "87bd1356141f59315fcf8ba46c329ed03721b6ef", "title": "Impact of COVID-19 on the quantile connectedness between energy, metals and agriculture commodities" }, { "paperId": "77daa21103b105c1bc25fbdd328f545c4b5498ee", "title": "CLIMATE FINANCE IN THE WAKE OF COVID-19: CONNECTEDNESS OF CLEAN ENERGY WITH CONVENTIONAL ENERGY AND REGIONAL STOCK MARKETS" }, { "paperId": "43ada055b32af71243fb322c98ac41a1a7fe2fb8", "title": "Do global factors drive the interconnectedness among green, Islamic and conventional financial markets?" }, { "paperId": "4a1e63644beb0f674e1e6b51bcbef300c4b910fa", "title": "Quantifying the hedge and safe-haven properties of bond markets for cryptocurrency indices" }, { "paperId": "6dbb39e280d05e9a2dc8558ae3b9e1ae5fc467f7", "title": "The effects of public sentiments and feelings on stock market behavior: Evidence from Australia" }, { "paperId": "628003a31d7abc81483477adbd14bba403030881", "title": "Green bonds and other assets: Evidence from extreme risk transmission." 
}, { "paperId": "98996734e9fc356ecb154c49eac04c45d03c6ba2", "title": "Dynamic and frequency spillovers between green bonds, oil and G7 stock markets: Implications for risk management" }, { "paperId": "dd032718889e676fb357d8395e20541583dc2048", "title": "Quantifying the asymmetric spillovers in sustainable investments" }, { "paperId": "af42225a3c5010acbeb7cfd07043b13dc94976cc", "title": "Time-frequency connectedness among clean-energy stocks and fossil fuel markets: Comparison between financial, oil and pandemic crisis" }, { "paperId": "65ba6a036f4c5be5e36c80843081095dd5bfd1cd", "title": "Asymmetric spillovers between green bonds and commodities" }, { "paperId": "7e75cbe8e3f7977f5ed3bc26bfde3979a6ebabef", "title": "Tail dependence between bitcoin and green financial assets" }, { "paperId": "5fe8efef44329302b655385f9d39d49a7065bde2", "title": "Asymmetric tail dependence between green bonds and other asset classes" }, { "paperId": "b517aa5ca4557f084ec091d05ff6c77d0e465cdb", "title": "How COVID‐19 Medical Supply Shortages Led to Extraordinary Trade and Industrial Policy" }, { "paperId": "600946925468a776fa1d71581ed79530cfe539dd", "title": "COVID-19 and time-frequency connectedness between green and conventional financial markets" }, { "paperId": "752688e18af2609c44d7b115a4be0405e2823234", "title": "Comparative efficiency of green and conventional bonds pre- and during COVID-19: An asymmetric multifractal detrended fluctuation analysis" }, { "paperId": "aed7ea822ba626841cb680cbf3995802b43190ba", "title": "Frequency spillovers, connectedness, and the hedging effectiveness of oil and gold for US sector ETFs" }, { "paperId": "c516594c70f25089d2275917b71638fa5791f37e", "title": "The Cryptocurrency Uncertainty Index" }, { "paperId": "504542b18d6e10e7d07af341df7eddf925ab7b35", "title": "Asymmetric relationship between green bonds and commodities: Evidence from extreme quantile approach" }, { "paperId": "54b9d04a6e4570d0dbd8649f43bdc57c3c27bf65", "title": "Diversifier or more? Hedge and safe haven properties of green bonds during COVID-19" }, { "paperId": "b05df5eb9e13e6f71bed0231cb9233202faba04d", "title": "Green bonds, sustainable development and environmental policy in the European Union carbon market" }, { "paperId": "3cbbf62d38e7c39aaaae86c443de7cde164860fa", "title": "The Distress Anomaly is Deeper than you Think: Evidence from Stocks and Bonds" }, { "paperId": "a2658aeb6cd648518cd5e1dda9efb0c9045724a2", "title": "Frequency Connectedness and Cross-quantile Dependence Between Green Bond and Green Equity Markets" }, { "paperId": "836a7367b378b83470de4a5ad9b5fd214a1144a7", "title": "Time-frequency comovement among green bonds, stocks, commodities, clean energy, and conventional bonds" }, { "paperId": "78f898c83b2fdd822d2c43e1d1708f165adcb84a", "title": "Time and frequency connectedness among oil shocks, electricity and clean energy markets" }, { "paperId": "423c5bf3010da591e318b5ce620d9476f28ae008", "title": "Time-varying co-movements between energy market and global financial markets: Implication for portfolio diversification and hedging strategies" }, { "paperId": "d68d4a2d3269f40a73d57cfb9979abe0078a177a", "title": "Volatility connectedness of major cryptocurrencies: The role of investor happiness" }, { "paperId": "d47cbbe1966a1a4d9c2398731b14facfc7d5a202", "title": "Are green bonds environmentally friendly and good performing assets?" }, { "paperId": "bb368dc58b6600fa6a05b42604dabf0cb6443b26", "title": "Do Investors Care About Carbon Risk?" 
}, { "paperId": "a3ec44be68fcdc07eb7e9d808a4530b277c8e3c5", "title": "The market reaction to green bond issuance: Evidence from China" }, { "paperId": "32a267534ffd57b6b5fe39d32dbe553daf987748", "title": "Does Board Gender Diversity Affect Renewable Energy Consumption?" }, { "paperId": "7e2d5f6bfd5edd77229072097e71fd0f758a36c5", "title": "Network connectedness of green bonds and asset classes" }, { "paperId": "afc50fe47c1084eda13958b32203323b1d40fc1d", "title": "Interdependence Between Renewable-Energy and Low-Carbon Stock Prices" }, { "paperId": "c48536eb997e5ab9d4b1c86c540ba8ca2fee08ec", "title": "The Importance of Climate Risks for Institutional Investors" }, { "paperId": "6613b82bea6e165f79ea779ff199f8a427d6b442", "title": "Corporate Green Bonds" }, { "paperId": "26e122872292d49a27aae0a6d793bd2e8ae77dcf", "title": "Time-varying relation between black and green bond price benchmarks: Macroeconomic determinants for the first decade" }, { "paperId": "f1946f8221aba2bda4e7e3534788ca9ae68d8e27", "title": "Green Bonds: Effectiveness and Implications for Public Policy" }, { "paperId": "791d78a3fbde2ea01ffcb52a70f21fa1633f9d00", "title": "Where's the Greenium?" }, { "paperId": "5bb37ab8def3910a5fb61c4d7367bc6df875575f", "title": "Do Shareholders Benefit from Green Bonds?" }, { "paperId": "2a70cf6787c84fbb15632bf3986d60094c71da69", "title": "Risks and Returns of Cryptocurrency" }, { "paperId": "3b539f3d38dab6a25aea93f918ea1d0d8dac6c05", "title": "Exploring the Dynamic Relationships between Cryptocurrencies and Other Financial Assets" }, { "paperId": "e8a31ebab6858a77b8943bcd2ed3878648c5d3b5", "title": "A new time-varying optimal copula model identifying the dependence across markets" }, { "paperId": "071adb7ef2575fde469233be44b13238ddfaa221", "title": "Does it Pay to Be Green?" }, { "paperId": "d82bcf13e83b184dcc0f9c556feef5538e130dc8", "title": "Resolving Intergenerational Conflict Over the Environment Under the Pareto Criterion" }, { "paperId": "5346e970f41f861b4760e2f9ba5ae7a15fb9aa56", "title": "Solving the Financial and Sovereign Debt Crisis in Europe" }, { "paperId": "133c03879d374a9b7c7e56776fdf0cc1cdaa375f", "title": "Modelling Asymmetric Exchange Rate Dependence" }, { "paperId": "7b8733f897f0213e4aabae8033dc9e3828134b28", "title": "Empirical estimation of tail dependence using copulas: application to Asian markets" }, { "paperId": "b28f5e8dc84228d606b77d059f3a9c9abbcfb6e2", "title": "Why Do Firms Issue Green Bonds?" }, { "paperId": "ff1731667f1fc5396437b4f96fabcd729bdf255f", "title": "OUP accepted manuscript" }, { "paperId": "a53bab298fa892bb9e8029447ced79467bcfec4f", "title": "Examining the Interrelatedness of NFT’s, DeFi Tokens and Cryptocurrencies" }, { "paperId": null, "title": "Uncertainty and Economic Activity: A Multicountry Perspective" }, { "paperId": null, "title": "CoVaR" }, { "paperId": "2a6db29b560703aec0be4e491938d6005ac65563", "title": "A general framework for observation-driven time-varying parameter models ∗" }, { "paperId": "43ef659f9d6f454e8188e38f09f1461b6df85017", "title": "Multivariate Models and Multivariate Dependence Concepts" }, { "paperId": "d017e0a6a9b93dcab7cdcd44b17544c04f7678d2", "title": "Fonctions de repartition a n dimensions et leurs marges" } ]
17468
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01f44a480f9eb8601ea7db5101f3f95ff1596e67
[ "Computer Science" ]
0.868352
DAOS: A Scale-Out High Performance Storage Stack for Storage Class Memory
01f44a480f9eb8601ea7db5101f3f95ff1596e67
Asian Conference on Supercomputing Frontiers
[ { "authorId": "2113515828", "name": "Zhen Liang" }, { "authorId": "34783068", "name": "J. Lombardi" }, { "authorId": "1968911", "name": "M. Chaarawi" }, { "authorId": "2358069", "name": "Michael Hennecke" } ]
{ "alternate_issns": null, "alternate_names": [ "SCFA", "Asian Conf Supercomput Front" ], "alternate_urls": null, "id": "d04ec0de-0c03-4ca6-82dc-0e0274e6486f", "issn": null, "name": "Asian Conference on Supercomputing Frontiers", "type": "conference", "url": null }
The Distributed Asynchronous Object Storage (DAOS) is an open source scale-out storage system that is designed from the ground up to support Storage Class Memory (SCM) and NVMe storage in user space. Its advanced storage API enables the native support of structured, semi-structured and unstructured data models, overcoming the limitations of traditional POSIX based parallel filesystem. For HPC workloads, DAOS provides direct MPI-IO and HDF5 support as well as POSIX access for legacy applications. In this paper we present the architecture of the DAOS storage engine and its high-level application interfaces. We also describe initial performance results of DAOS for IO500 benchmarks.
# DAOS: A Scale-Out High Performance Storage Stack for Storage Class Memory

Zhen Liang1, Johann Lombardi2, Mohamad Chaarawi3, and Michael Hennecke4

1 Intel China Ltd., GTC, No. 36 3rd Ring Road, Beijing, China, liang.zhen@intel.com
2 Intel Corporation SAS, 2 rue de Paris, 92196 Meudon Cedex, France, johann.lombardi@intel.com
3 Intel Corporation, 1300 S MoPac Expy, Austin, TX 78746, USA, mohamad.chaarawi@intel.com
4 Lenovo Global Technology Germany GmbH, Am Zehnthof 77, 45307 Essen, Germany, mhennecke@lenovo.com

Abstract. The Distributed Asynchronous Object Storage (DAOS) is an open source scale-out storage system that is designed from the ground up to support Storage Class Memory (SCM) and NVMe storage in user space. Its advanced storage API enables the native support of structured, semi-structured and unstructured data models, overcoming the limitations of traditional POSIX based parallel filesystems. For HPC workloads, DAOS provides direct MPI-IO and HDF5 support as well as POSIX access for legacy applications. In this paper we present the architecture of the DAOS storage engine and its high-level application interfaces. We also describe initial performance results of DAOS for IO500 benchmarks.

Keywords: DAOS · SCM · Persistent memory · NVMe · Distributed storage system · Parallel filesystem · SWIM · RAFT

## 1 Introduction

The emergence of data-intensive applications in business, government and academia stretches the existing I/O models beyond their limits. Modern I/O workloads feature an increasing proportion of metadata combined with misaligned and fragmented data. Conventional storage stacks deliver poor performance for these workloads by adding a lot of latency and introducing alignment constraints. The advent of affordable large-capacity persistent memory combined with a high-speed fabric offers a unique opportunity to redefine the storage paradigm and support modern I/O workloads efficiently.

This revolution requires a radical rethinking of the complete storage stack. To unleash the full potential of these new technologies, the new stack must embrace a byte-granular shared-nothing interface from the ground up. It also has to be able to
However, with the emergence of new storage technologies like 3D-XPoint that can offer several orders of magnitude lower latency comparing with traditional storage, software layers built for spinning disk become pure overhead for those new storage technologies. Moreover, most parallel filesystems can use RDMA capable network as a fast transport layer, in order to reduce data copying between layers. For example, transfer data from the page cache of a client to the buffer cache of a server, then persist it to block devices. However, because of lacking unified polling or progress mechanisms for both block I/O and network events in the traditional storage stack, I/O request handling heavily relies on interrupts and multi-threading for concurrent RPC processing. Therefore, context switches during I/O processing will significantly limit the advantage of the low latency network. With all the thick stack layers of traditional parallel filesystem, including caches and distributed locking, user can still use 3D NAND, 3D-XPoint storage and high speed fabrics to gain some better performance, but will also lose most benefits of those technologies because of overheads imposed by the software stack. ## 3 DAOS, a Storage Stack Built for SCM and NVMe Storage The Distributed Asynchronous Object Storage (DAOS) is an open source softwaredefined object store designed from the ground up for massively distributed Non Volatile Memory (NVM). It presents a key-value storage interface and provides features such as transactional non-blocking I/O, a versioned data model, and global snapshots. ----- This section introduces the architecture of DAOS, discusses a few core components of DAOS and explains why DAOS can be a storage system with both high performance and resilience. 3.1 DAOS System Architecture DAOS is a storage system that takes advantage of next generation NVM technology like Storage Class Memory (SCM) and NVM express (NVMe). It bypasses all Linux kernel I/O, it runs end-to-end in user space and does not do any system call during I/O. As shown in Fig. 1, DAOS is built over three building blocks. The first one is persistent memory and the Persistent Memory Development Toolkit (PMDK) [2]. DAOS uses it to store all internal metadata, application/middleware key index and latency sensitive small I/O. During starting of the system, DAOS uses system calls to initialize the access of persistent memory. For example, it maps the persistent memory file of DAX-enabled filesystem to virtual memory address space. When the system is up and running, DAOS can directly access persistent memory in user space by memory instructions like load and store, instead of going through a thick storage stack. Persistent memory is fast but has low capacity and low cost effectiveness, so it is effectively impossible to create a large capacity storage tier with persistent memory only. DAOS leverages the second building block, NVMe SSDs and the Storage Performance Development Kit (SPDK) [7] software, to support large I/O as well as higher latency small I/O. SPDK provides a C library that may be linked into a storage server that can provide direct, zero-copy data transfer to and from NVMe SSDs. The DAOS service can submit multiple I/O requests via SPDK queue pairs in an asynchronous manner fully from user space, and later creates indexes for data stored in SSDs in persistent memory on completion of the SPDK I/O. 
Libfabric [8] and an underlying high performance fabric such as Omni-Path Architecture or InfiniBand (or a standard TCP network), is the third build block for DAOS. Libfabric is a library that defines the user space API of OFI, and exports fabric communication services to application or storage services. The transport layer of DAOS is built on top of Mercury [9] with a libfabric/OFI plugin. It provides a callback based asynchronous API for message and data transfer, and a thread-less polling API for progressing network activities. A DAOS service thread can actively poll network events from Mercury/libfabric as notification of asynchronous network operations, instead of using interrupts that have a negative performance impact because of context switches. ----- 3D-XPoint Memory 3D-NAND/XPoint SSD Fig. 1. DAOS system architecture As a summary, DAOS is built on top of new storage and network technologies and operates fully in user space, bypassing all the Linux kernel code. Because it is architected specifically for SCM and NVMe, it cannot support disk based storage. Traditional storage system like Lustre [11], Spectrum Scale [12], or CephFS [10] can be used for disk-based storage, and it is possible to move data between DAOS and such external file systems. 3.2 DAOS I/O Service From the perspective of stack layering, DAOS is a distributed storage system with a client-server model. The DAOS client is a library that is integrated with the application, and it runs in the same address space as the application. The data model exposed by the DAOS library is directly integrated with all the traditional data formats and middleware libraries that will be introduced in Sect. 4. The DAOS I/O server is a multi-tenant daemon that runs either directly on a data storage node or in a container. It can directly access persistent memory and NVMe SSDs, as introduced in the previous section. It stores metadata and small I/O in persistent memory, and stores large I/O in NVMe SSDs. The DAOS server does not rely on spawning pthreads for concurrent handling of I/O. Instead it creates an Argobots [6] User Level Thread (ULT) for each incoming I/O request. An Argobots ULT is a lightweight execution unit associated with an execution stream (xstream), which is mapped to the pthread of the DAOS service. This means that conventional POSIX I/O function calls, pthread locks or synchronous message waiting calls from any ULT can ----- block progress of all ULTs on an execution stream. However, because all building blocks used by DAOS provide a non-blocking user space interface, a DAOS I/O ULT will never be blocked on system calls. Instead it can actively yield the execution if an I/O or network request is still inflight. The I/O ULT will eventually be rescheduled by a system ULT that is responsible for polling a completion event from the network and SPDK. ULT creation and context switching are very lightweight. Benchmarks show that one xstream can create millions of ULTs per second, and can do over ten million ULT context switches per second. It is therefore a good fit for DAOS server side I/O handling, which is supposed to support micro-second level I/O latency (Fig. 2). utl_create(rpc_handler) 5 3 ULT 2 I/O submit Bulk transfer ULT I/O progress RPC progress ULT 6 1 4 I/O XStream I/O complete ULT Reply send 9 7 ULT Index data 8 VOS PMDK Fig. 2. DAOS server side I/O processing 3.3 Data Protection and Data Recovery DAOS storage is exposed as objects that allow user access through a key-value or keyarray API. 
In order to avoid scaling problems and the overhead of maintaining perobject metadata (like object layout that describes locality of object data), a DAOS object is only identified by a 128-bit ID that has a few encoded bits to describe data distribution and the protection strategy of the object (replication or erasure code, stripe count, etc.). DAOS can use these bits as hints, and the remaining bits of the object ID as a pseudorandom seed to generate the layout of the object based on the configuration of the DAOS storage pool. This is called algorithmic object placement. It is similar to the data placement technology of Ceph, except DAOS is not using CRUSH [10] as the algorithm. This paper will only describe the data protection and recovery protocol from a high level view. Detailed placement algorithm and recovery protocol information can be found in the online DAOS design documents [5]. ----- Data Protection In order to get ultra-low latency I/O, a DAOS storage server stores application data and metadata in SCM connected to the memory bus, and on SSDs connected over PCIe. The DAOS server uses load/store instructions to access memory-mapped persistent memory, and the SPDK API to access NVMe SSDs from user space. If there is an uncorrectable error in persistent memory or an SSD media corruption, applications running over DAOS without additional protection would incur a data/metadata loss. In order to guarantee resilience and prevent data loss, DAOS provides both replication and erasure coding for data protection and recovery. When data protection is enabled, DAOS objects can be replicated, or chunked into data and parity fragments, and then stored across multiple storage nodes. If there is a storage device failure or storage node failure, DAOS objects are still accessible in degraded mode, and data redundancy is recoverable from replicas or parity data [15]. Replication and Data Recovery Replication ensures high availability of data because objects are accessible while any replica survives. Replication of DAOS is using a primary-slave protocol for write: The primary replica is responsible for forwarding requests to slave replicas, and progressing distributed transaction status. client RPC Client data data parity RDMA slave slave primary data data data parity server server server server server server server storage storage storage storage storage storage storage (a) Replicated write (b) Erasure coding write Fig. 3. Message and data flow of replication and erasure coding The primary-slave model of DAOS is slightly different from a traditional replication model, as shown in Fig. 3a. The primary replica only forwards the RPC to slave replica servers. All replicas will then initiate an RDMA request and get the data directly from the client buffer. DAOS chooses this model because in most HPC environments, the fabric bandwidth between client and server is much higher than the bandwidth between servers (and the bandwidth between servers will be used for data recovery and rebalance). If DAOS is deployed for a non-HPC use case that has higher bandwidth between servers, then the data transfer path of DAOS can be changed to the traditional model. DAOS uses a variant of two-phase commit protocol to guarantee atomicity of the replicated update: If one replica cannot apply the change, then all replicas should abandon the change as well. This protocol is quite straightforward if there is no failure. 
data data parity RDMA slave slave primary data data data parity server server server server server server server storage storage storage storage storage storage storage ----- However, if a server handling the replication write failed during the two-phase transaction, DAOS will not follow the traditional two-phase commit protocol that would wait for the recovery of the failed node. Instead it excludes the failed node from the transaction, then algorithmically selects a different node as a replacement, and moves forward the transaction status. If the failed-out node comes back at some point, it ignores its local transaction status and relies on the data recovery protocol to catch up the transaction status. When the health monitoring system of DAOS detected a failure event of a storage target, it reports the event to the highly replicated RAFT [14] based pool service, which can globally activate the rebuild service on all storage servers in the pool. The rebuild service of a DAOS server can promptly scan object IDs stored in local persistent memory, independently calculates the layout of each object, and then finds out all the impacted objects by checking if the failed target is within their layouts. The rebuild service also sends those impacted object IDs to algorithmically selected fallback storage servers. These fallback servers then reconstruct data for impacted objects by pulling data from the surviving replicas. In this process, there is no central place to perform data/metadata scans or data reconstruction: The I/O workload of the rebuild service will be fully declustered and parallelized. Erasure Coding and Data Recovery DAOS can also support erasure coding (EC) for data protection, which is much more space and bandwidth efficient than replication but requires more computation. Because the DAOS client is a lightweight library which is linked with the application on compute nodes that have way more compute resource than the DAOS servers, the data encoding is handled by the client on write. The client computes the parity, creates RDMA descriptors for both data and parity fragments, and then sends an RPC request to the leader server of the parity group to coordinate the write. The RPC and data flow of EC is the same as replication: All the participants of an EC write should directly pull data from the client buffer, instead of pulling data from the leader server cache (Fig. 3b). DAOS EC also uses the same two-phase commit protocol as replication to guarantee the atomicity of writes to different servers. If the write is not aligned with the EC stripe size, most storage systems have to go through a read/encode/write process to guarantee consistency of data and parity. This process is expensive and inefficient, because it will generate much more traffic than the actual I/O size. It also requires distributed locking to guarantee consistency between read and write. With its multi-version data model, DAOS can avoid this expensive process by replicating only the partial write data to the parity server. After a certain amount of time, if the application keeps writing and composes a full stripe eventually, the parity server can simply compute the parity based on all this replicated data. Otherwise, the parity server can coordinate other servers in the parity group to generate a merged view from the partial overwritten data and its old version, then computes parity for it and stores the merged data together with that new parity. 
When a failure occurs, a degraded-mode read of EC-protected data is more heavyweight than with replication: With replication, the DAOS client can simply switch to reading from a different replica. But with EC, the client has to fetch the full data stripe and reconstruct the missing data fragment in flight. The processing of a degraded-mode write of EC-protected data is the same as for replication: The two-phase commit transaction can continue without being blocked by the failed-out server; instead, it can immediately proceed as soon as a fallback server is selected for the transaction. The rebuild protocol of EC is also similar to replication, but it generates significantly more data movement than replication. This is a characteristic of all parity-based data protection technologies.

End-to-End Data Integrity

There are three types of typical failures in a DAOS storage system:

- Service crash. DAOS captures this by running the gossip-like protocol SWIM [13].
- NVMe SSD failure. DAOS can detect this type of failure by polling the device status via SPDK.
- Data corruption caused by storage media failure. DAOS can detect this by storing and verifying checksums.

In order to support end-to-end checksums and detect silent data corruption, the DAOS client computes checksums for the data being written before sending it to the server. When receiving the write, the DAOS server can either verify the checksums, or store the checksums and data directly without verification. Server-side verification can be enabled or disabled by the user, based on performance requirements. When an application reads back the data, if the read is aligned with the original write, then the server can just return the data and checksum. If the read is not aligned with the original write, the DAOS server verifies the checksums for all involved data extents, then computes the checksum for the part of the data being read, and returns both data and checksum to the client. The client then verifies the checksum again before returning data to the application.

If the DAOS client detects a checksum error on read, it can enable degraded mode for this particular object, and either switch to another replica for the read, or reconstruct the data in flight on the client if the object is protected by EC. The client also reports the checksum error back to the server. A DAOS server collects all checksum errors detected by local verification and scrubbing, as well as errors reported by clients. When the number of errors reaches a threshold, the server requests the pool service to exclude the bad device from the storage system, and triggers data recovery for it. Checksums of DAOS are stored in persistent memory, and are protected by the ECC of the persistent memory modules. If there is an uncorrectable error in persistent memory, the storage service will be killed by SIGBUS. In this case, the pool service will disable the entire storage node, and start data recovery on the surviving nodes.

## 4 DAOS Data Model and I/O Interface

This section describes the data model of DAOS, the native API built for this data model, and explains how a POSIX namespace is implemented over this data model.

4.1 DAOS Data Model

The DAOS data model has two different object types: Array objects that allow an application to represent a multi-dimensional array; and key/value store objects that have native support for a regular KV I/O interface and a multi-level KV interface.
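To give a feel for the native data model before the versioning discussion that follows, the following self-contained toy mimics a multi-level key/value object (a two-level key, here called dkey/akey for illustration) with versioned updates. It is a mock for exposition only, not the libdaos API:

```c
#include <stdio.h>
#include <string.h>

/* Toy multi-level KV store: records are addressed by (dkey, akey) and every
 * update gets a new version, so older versions remain readable. */
#define MAX_REC 64
struct rec { char dkey[16], akey[16], val[32]; unsigned ver; };
static struct rec store[MAX_REC];
static unsigned nrec, epoch;                  /* global version counter */

static void kv_put(const char *dk, const char *ak, const char *v)
{
    struct rec *r = &store[nrec++ % MAX_REC]; /* toy: wraps, no eviction policy */
    snprintf(r->dkey, 16, "%s", dk);
    snprintf(r->akey, 16, "%s", ak);
    snprintf(r->val, 32, "%s", v);
    r->ver = ++epoch;
}

/* Read the newest value at or below version 'ver' — reading at an older
 * version is what enables rollback after a disruptive change. */
static const char *kv_get(const char *dk, const char *ak, unsigned ver)
{
    const char *best = NULL; unsigned best_ver = 0;
    for (unsigned i = 0; i < nrec && i < MAX_REC; i++)
        if (!strcmp(store[i].dkey, dk) && !strcmp(store[i].akey, ak) &&
            store[i].ver <= ver && store[i].ver > best_ver) {
            best = store[i].val; best_ver = store[i].ver;
        }
    return best;
}

int main(void)
{
    kv_put("user42", "email", "old@example.org");
    unsigned snapshot = epoch;
    kv_put("user42", "email", "new@example.org");
    printf("latest:      %s\n", kv_get("user42", "email", epoch));    /* new */
    printf("rolled back: %s\n", kv_get("user42", "email", snapshot)); /* old */
}
```

Because updates never overwrite data in place, reading at an older version is cheap; this is the same property the storage engine exploits to avoid read-modify-write cycles for unaligned writes.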
Both KV and array objects have versioned data, which allows applications to make disruptive changes and roll back to an old version of the dataset. A DAOS object always belongs to a domain that is called a DAOS container. Each container is a private object address space which can be modified by transactions independently of the other containers stored in the same DAOS pool [1] (Fig. 4).

Fig. 4. DAOS data model. [The figure shows applications accessing key/value trees (root, keys, values) in containers backed by NVMe SSDs.]

DAOS containers will be exposed to applications through several I/O middleware libraries, providing a smooth migration path with minimal (or sometimes no) application changes. Generally, all I/O middleware today runs on top of POSIX and involves serialization of the middleware data model to the POSIX scheme of directories and files (byte arrays). DAOS provides a richer API that provides better and more efficient building blocks for middleware libraries and applications. By treating POSIX as a middleware I/O library that is implemented over DAOS, all libraries that build on top of POSIX are supported. But at the same time, middleware I/O libraries can be ported to work natively over DAOS, bypassing the POSIX serialization step, which has several disadvantages that will not be discussed in this document. I/O middleware libraries that have been implemented on top of the DAOS library include POSIX, MPI-I/O, and HDF5. More I/O middleware and frameworks will be ported in the future to directly use the native DAOS storage API.

4.2 DAOS POSIX Support

POSIX is not the foundation of the DAOS storage model. It is built as a library on top of the DAOS backend API, like any other I/O middleware. A POSIX namespace can be encapsulated in a DAOS container and can be mounted by an application into its filesystem tree.

Fig. 5. DAOS POSIX support. [The figure shows the stack within a single process address space: application/framework → dfuse with interception library → DAOS File System (libdfs) → DAOS library (libdaos) → RPC/RDMA to the DAOS storage engine on persistent memory and NVMe SSDs; end-to-end user space, no system calls.]

Figure 5 shows the software stack of DAOS for POSIX. The POSIX API is provided through a FUSE driver using the DAOS Storage Engine API (through libdaos) and the DAOS File System API (through libdfs). This inherits the general overhead of FUSE, including system calls, etc. This overhead is acceptable for most file system operations, but I/O operations like read and write can incur a significant performance penalty if all of them have to go through system calls. In order to enable OS-bypass for those performance-sensitive operations, an interception library has been added to the stack. This library works in conjunction with dfuse and allows intercepting POSIX read(2) and write(2) calls in order to issue these I/O operations directly from the application context through libdaos (without any application changes). In Fig. 5, there is a layer between the dfuse/interception library and libdaos, which is called libdfs. The libdfs layer provides a POSIX-like API directly on top of the DAOS API. It provides file and directory abstractions over the native libdaos library. In libdfs, a POSIX namespace is encapsulated in a container. Both files and directories are mapped to objects within the container. The namespace container can be mounted into the Linux filesystem tree.
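Returning to the interception library mentioned above, its mechanism can be sketched with the generic LD_PRELOAD interposition pattern. This shows only the general idea, not the actual DAOS interception library; is_dfs_fd() and dfs_read_stub() are stand-in stubs:

```c
#define _GNU_SOURCE
#include <dlfcn.h>
#include <unistd.h>

/* Stand-in stubs: a real interception library tracks which file descriptors
 * are backed by dfuse and forwards their I/O to libdfs/libdaos. */
static int is_dfs_fd(int fd) { (void)fd; return 0; }
static ssize_t dfs_read_stub(int fd, void *buf, size_t n)
{ (void)fd; (void)buf; (void)n; return -1; }

/* Interposed read(2): preloaded ahead of libc, it decides per call whether
 * to bypass the kernel entirely or to fall through to the real syscall. */
ssize_t read(int fd, void *buf, size_t count)
{
    static ssize_t (*real_read)(int, void *, size_t);
    if (!real_read)
        real_read = (ssize_t (*)(int, void *, size_t))dlsym(RTLD_NEXT, "read");

    if (is_dfs_fd(fd))
        return dfs_read_stub(fd, buf, count);  /* user-space path, no syscall */
    return real_read(fd, buf, count);          /* regular kernel path */
}
```

Because the application is preloaded with such a shim, its read(2) and write(2) calls on dfuse-backed files never enter the kernel, which is the OS-bypass the text refers to.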
Both data and metadata of the encapsulated POSIX file system will be fully distributed across all the available storage of the DAOS pool. The dfuse daemon is linked with libdfs, and all the calls from FUSE go through libdfs and then libdaos, which can access the remote object store exposed by the DAOS servers. In addition, as mentioned above, libdfs can be exposed to end users through several interfaces, including frameworks like Spark, MPI-IO, and HDF5. Users can directly link applications with libdfs when there is a shim layer for it as a plugin of the I/O middleware. This approach is transparent and requires no change to the application.

## 5 Performance

The DAOS software stack is still under heavy development. But the performance it can achieve on new storage class memory technologies has already been demonstrated at the ISC19 and SC19 conferences, and first results for the IO500 benchmark suite on DAOS version 0.6 have recently been submitted [16]. IO500 is a community activity to track storage performance and storage technologies of supercomputers, organized by the Virtual Institute for I/O (VI4IO) [17]. The IO500 benchmark suite consists of data and metadata workloads as well as a parallel namespace scanning tool, and calculates a single ranking score for comparison. The workloads include:

- IOR-Easy: Bandwidth for well-formed large sequential I/O patterns
- IOR-Hard: Bandwidth for a strided I/O workload with small unaligned I/O transfers (47001 bytes)
- MDTest-Easy: Metadata operations on 0-byte files, using separate directories for each MPI task
- MDTest-Hard: Metadata operations on small (3901 byte) files in a shared directory
- Find: Finding relevant files through directory traversals

We have adapted the I/O driver used for IOR and mdtest to work directly over the DFS API described in Sect. 4. The driver was pushed to and accepted in the upstream ior-hpc repository for reference. Developing a new I/O driver is relatively easy since, as mentioned before, the DFS API closely resembles the POSIX API. The following summarizes the steps for implementing a DFS backend for IOR and mdtest (a sketch follows below); the same scheme can also be applied to other applications using the POSIX API:

- Add an initialize callback to connect to the DAOS pool and open the DAOS container that will encapsulate the namespace. A DFS mount is then created over that container.
- Add callbacks for all the required operations, and substitute the POSIX API with the corresponding DFS API. All the POSIX operations used in IOR and mdtest have a corresponding DFS API, which makes the mapping easy. For example:
  – change mkdir() to dfs_mkdir();
  – change open64() to dfs_open();
  – change write() to dfs_write();
  – etc.
- Add a finalize callback to unmount the DFS mount and close the pool and container handles.

Two lists of IO500 results are published: The "Full List" or "Ranked List" contains performance results that are achieved on an arbitrary number of client nodes. The "10 Node Challenge" list contains results for exactly 10 client nodes, which provides a standardized basis for comparing those IO500 workloads that scale with the number of client nodes [3]. For both lists, there are no constraints regarding the size of the storage system. Optional data fields may provide information about the number and type of storage devices for data and metadata; when present in the submissions, this information can be used to judge the relative efficiency of the storage systems.
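The driver mapping summarised in the steps above can be sketched as a callback table. The dfs_* functions below are simplified stand-ins — the real libdfs calls take additional arguments such as the DFS mount and object handles:

```c
#include <stddef.h>
#include <sys/types.h>

/* Simplified stand-ins for the DFS calls named in the steps above;
 * the real libdfs signatures differ (mount handles, object handles, ...). */
static int     dfs_init_stub(void)  { return 0; } /* pool connect + container open + mount */
static int     dfs_mkdir_stub(const char *p, mode_t m) { (void)p; (void)m; return 0; }
static int     dfs_open_stub(const char *p, int f)     { (void)p; (void)f; return 0; }
static ssize_t dfs_write_stub(int h, const void *b, size_t n)
                                    { (void)h; (void)b; return (ssize_t)n; }
static int     dfs_fini_stub(void)  { return 0; } /* unmount + close handles */

/* Backend callback table in the style of benchmark I/O drivers: the DFS
 * backend simply substitutes each POSIX operation with its DFS counterpart. */
struct io_backend {
    int     (*initialize)(void);
    int     (*mkdir)(const char *path, mode_t mode);
    int     (*open)(const char *path, int flags);
    ssize_t (*write)(int handle, const void *buf, size_t n);
    int     (*finalize)(void);
};

static const struct io_backend dfs_backend = {
    .initialize = dfs_init_stub,   /* step 1: connect + open + mount */
    .mkdir      = dfs_mkdir_stub,  /* mkdir()  -> dfs_mkdir()  */
    .open       = dfs_open_stub,   /* open64() -> dfs_open()   */
    .write      = dfs_write_stub,  /* write()  -> dfs_write()  */
    .finalize   = dfs_fini_stub,   /* step 3: unmount + close  */
};
```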
For the submission to IO500 at SC19 [16], the IO500 benchmarks have been run on Intel's DAOS prototype cluster "Wolf". The eight dual-socket storage nodes of the "Wolf" cluster use Intel Xeon Platinum 8260 processors. Each storage node is equipped with 12 Intel Optane Data Center Persistent Memory Modules (DCPMMs) with a capacity of 512 GiB (3 TiB total per node, configured in app-direct/interleaved mode). The dual-socket client nodes of the "Wolf" cluster use Intel Xeon E5-2699 v4 processors. Both the DAOS storage nodes and the client nodes are equipped with two Intel Omni-Path 100 adapters per node.

Figure 6 shows the IO500 IOR bandwidth of the top four storage systems on the November 2019 edition of the IO500 "10-Node Challenge". DAOS achieved both the #1 overall rank, as well as the highest "bw" bandwidth score (the geometric mean of the four IOR workloads). Due to its multi-versioned data model, DAOS does not require read-modify-write operations for small or unaligned writes (which generate extra I/O traffic and locking contention in traditional POSIX filesystems). This property of the DAOS storage engine results in very similar DAOS bandwidth for the "hard" and "easy" IOR workloads, and provides predictable performance across many different workloads.

Fig. 6. IO500 10-node challenge – IOR bandwidth in GB/s

Figure 7 shows the mdtest metadata performance of the top four storage systems on the November 2019 edition of the IO500 "10-Node Challenge". DAOS dominates the overall "md" metadata score (the geometric mean of all mdtest workloads), with almost a 3x difference to the nearest contender. This is mainly due to the lightweight end-to-end user space storage stack, combined with an ultra-low latency network and DCPMM storage media. Like the IOR bandwidth results, the DAOS metadata performance is very homogeneous across all the tests, whereas many of the other file systems exhibit large variations between the different metadata workloads.

Fig. 7. IO500 10-node challenge – mdtest performance in kIOP/s

DAOS achieved the second rank on the November 2019 "Full List", using just 26 client nodes. Much better performance can be expected with a larger set of client nodes, especially for those metadata tests that scale with the number of client nodes. So a direct comparison with other storage systems on the "Full List" (some of which were tested with hundreds of client nodes) is not as meaningful as the "10-Node Challenge". The full list of IO500 results and a detailed description of the IO500 benchmark suite can be found at Ref. [16].

## 6 Conclusion

As storage class memory and NVMe storage become more widespread, the software stack overhead becomes an increasingly dominant factor in overall storage system performance. It is very difficult for traditional storage systems to take full advantage of these storage hardware devices. This paper presented DAOS as a newly designed software stack for these new storage technologies, described the technical characteristics of DAOS, and explained how it can achieve both high performance and high resilience. In the performance section, IO500 benchmark results demonstrated that DAOS can take advantage of the new storage devices and their user space interfaces. More important than the absolute ranking on the IO500 list is the fact that DAOS performance is very homogeneous across the IO500 workloads, whereas other file systems sometimes exhibit orders-of-magnitude performance differences between individual IO500 tests.
This paper only briefly introduced a few core technical components of DAOS and its current POSIX I/O middleware. Other supported I/O libraries like MPI-I/O and HDF5 are not covered by this paper and will be the subject of future studies. Additional I/O middleware plugins based on DAOS/libdfs are still in development. The roadmap, design documents, and development status of DAOS can be found on GitHub [5] and the Intel DAOS website [4].

## References

1. Breitenfeld, M.S., et al.: DAOS for extreme-scale systems in scientific applications (2017). https://arxiv.org/pdf/1712.00423.pdf
2. Rudoff, A.: APIs for persistent memory programming (2018). https://storageconference.us/2018/Presentations/Rudoff.pdf
3. Monnier, N., Lofstead, J., Lawson, M., Curry, M.: Profiling platform storage using IO500 and Mistral. In: 4th International Parallel Data Systems Workshop, PDSW 2019 (2019). https://conferences.computer.org/sc19w/2019/pdfs/PDSW2019-6YFSp9XMTx6Zb1FALMAAsH/5PVXONjoBjWD2nQgL1MuB3/6lk0OhJlEPG2bUdbXXPPoq.pdf
4. DAOS. https://wiki.hpdd.intel.com/display/DC/DAOS+Community+Home
5. DAOS GitHub. https://github.com/daos-stack/daos
6. Seo, S., et al.: Argobots: a lightweight low-level threading and tasking framework. IEEE Trans. Parallel Distrib. Syst. 29(3) (2018). https://doi.org/10.1109/tpds.2017.2766062
7. SPDK. https://spdk.io/
8. Libfabric. https://ofiwg.github.io/libfabric/
9. Mercury. https://mercury-hpc.github.io/documentation/
10. Weil, S.A., Brandt, S.A., Miller, E.L., Maltzahn, C.: CRUSH: controlled, scalable, decentralized placement of replicated data. In: Proceedings of the 2006 ACM/IEEE Conference on Supercomputing, SC 2006 (2006). https://doi.org/10.1109/sc.2006.19
11. Braam, P.J.: The Lustre storage architecture (2005). https://arxiv.org/ftp/arxiv/papers/1903/1903.01955.pdf
12. Schmuck, F., Haskin, R.: GPFS: a shared-disk file system for large computing clusters. In: Proceedings of the First USENIX Conference on File and Storage Technologies, Monterey, CA, 28–30 January 2002, pp. 231–244 (2002). http://www.usenix.org/publications/library/proceedings/fast02/
13. Das, A., Gupta, I., Motivala, A.: SWIM: scalable weakly-consistent infection-style process group membership protocol. In: Proceedings of the 2002 International Conference on Dependable Systems and Networks, DSN 2002, pp. 303–312 (2002)
14. Ongaro, D., Ousterhout, J.: In search of an understandable consensus algorithm (2014). https://www.usenix.org/system/files/conference/atc14/atc14-paper-ongaro.pdf
15. Barton, E.: DAOS: an architecture for extreme scale storage (2015). https://www.snia.org/sites/default/files/SDC15_presentations/dist_sys/EricBarton_DAOS_Architecture_Extreme_Scale.pdf
16. IO500 List, November 2019. https://www.vi4io.org/io500/list/19-11/start
17. Kunkel, J., et al.: Virtual Institute for I/O. https://www.vi4io.org/start

Open Access This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made. The images or other third party material in this chapter are included in the chapter's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the chapter's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-3-030-48842-0_3?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-3-030-48842-0_3, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "HYBRID", "url": "https://link.springer.com/content/pdf/10.1007%2F978-3-030-48842-0_3.pdf" }
2,020
[ "JournalArticle" ]
true
2020-02-24T00:00:00
[ { "paperId": "f3dee13cc96e07fb275d51f546e998551bc1dead", "title": "Profiling Platform Storage Using IO500 and Mistral" }, { "paperId": "23df96545fb1b27b36b36e9bbe1e5c8d8cd71a6f", "title": "The Lustre Storage Architecture" }, { "paperId": "a3b4edb2943e643dbfde4291266de05d91326676", "title": "Argobots: A Lightweight Low-Level Threading and Tasking Framework" }, { "paperId": "b130eef5f1dfc1835abe24a4c3e8e17c1fa2ea89", "title": "DAOS for Extreme-scale Systems in Scientific Applications" }, { "paperId": "9979809e4106b29d920094be265b33524cde8a40", "title": "In Search of an Understandable Consensus Algorithm" }, { "paperId": "d88ae008c11859ff72c1b8b16f2fd0f58dc47964", "title": "CRUSH: Controlled, Scalable, Decentralized Placement of Replicated Data" }, { "paperId": "068f65c0271ed16a6bf4a1c2de1a962eec08edbf", "title": "SWIM: scalable weakly-consistent infection-style process group membership protocol" }, { "paperId": "55d5b653d2d5b03166c4272e94d4c213dbdf0571", "title": "Mercury" }, { "paperId": null, "title": "APIs for persistent memory programming" }, { "paperId": null, "title": "DAOS: an architecture for extreme storage scale storage" }, { "paperId": "2d60d3596490d9999d8433bf41405060779bc11d", "title": "Proceedings of the Fast 2002 Conference on File and Storage Technologies Gpfs: a Shared-disk File System for Large Computing Clusters" }, { "paperId": null, "title": "AAsH/5PVXONjoBjWD2nQgL1MuB3/6lk0OhJlEPG2bUdbXXPPoq.pdf" }, { "paperId": null, "title": "Virtual institute for I" } ]
9,054
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Law", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01f882c68fd68f4bc3c3a86e7ec098803297b4d8
[ "Computer Science" ]
0.899388
Argumentation Schemes for Blockchain Deanonymization
01f882c68fd68f4bc3c3a86e7ec098803297b4d8
FinTech
[ { "authorId": "38935542", "name": "Dominic Deuber" }, { "authorId": "48137162", "name": "Jan Gruber" }, { "authorId": "2192706720", "name": "Merlin Humml" }, { "authorId": "51055162", "name": "Viktoria Ronge" }, { "authorId": "2105495485", "name": "Nicole Scheler" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": "c518244b-d9bf-4665-abe7-cbd5a1e510b8", "issn": "2674-1032", "name": "FinTech", "type": null, "url": null }
Cryptocurrency forensics have become standard tools for law enforcement. Their basic idea is to deanonymise cryptocurrency transactions to identify the people behind them. Cryptocurrency deanonymisation techniques are often based on premises that largely remain implicit, especially in legal practice. On the one hand, this implicitness complicates investigations. On the other hand, it can have far-reaching consequences for the rights of those affected. Argumentation schemes could remedy this untenable situation by rendering the underlying premises more transparent. Additionally, they can aid in critically evaluating the probative value of any results obtained by cryptocurrency deanonymisation techniques. In the argumentation theory and AI community, argumentation schemes are influential as they state the implicit premises for different types of arguments. Through their critical questions, they aid the argumentation participants in critically evaluating arguments. We specialise the notion of argumentation schemes to legal reasoning about cryptocurrency deanonymisation. Furthermore, we demonstrate the applicability of the resulting schemes through an exemplary real-world case. Ultimately, we envision that using our schemes in legal practice can solidify the evidential value of blockchain investigations, as well as uncover and help to address uncertainty in the underlying premises—thus contributing to protecting the rights of those affected by cryptocurrency forensics.
## Argumentation Schemes for Blockchain Deanonymization

Dominic Deuber [0000-0002-8177-0562], Jan Gruber [0000-0003-1862-2900], Merlin Humml [0000-0002-2251-8519], Viktoria Ronge, and Nicole Scheler

Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen, Germany
```
{firstname.lastname}@fau.de
```

**Abstract** Cryptocurrency forensics have become standard tools for law enforcement. Their basic idea is to deanonymise cryptocurrency transactions to identify the people behind them. Cryptocurrency deanonymisation techniques are often based on premises that largely remain implicit, especially in legal practice. On the one hand, this implicitness complicates investigations. On the other hand, it can have far-reaching consequences for the rights of those affected. Argumentation schemes could remedy this untenable situation by rendering the underlying premises transparent. Additionally, they can aid in critically evaluating the probative value of any results obtained by cryptocurrency deanonymisation techniques. In the argumentation theory and AI community, argumentation schemes are influential as they state the implicit premises for different types of arguments. Through their critical questions, they aid the argumentation participants in critically evaluating arguments. We specialise the notion of argumentation schemes to legal reasoning about cryptocurrency deanonymisation. Furthermore, we demonstrate the applicability of the resulting schemes through an exemplary real-world case. Ultimately, we envision that using our schemes in legal practice can solidify the evidential value of blockchain investigations, as well as uncover and help address uncertainty in the underlying premises – thus contributing to protecting the rights of those affected by cryptocurrency forensics.

**Keywords: Argumentation · Legal Reasoning · Blockchain Analysis.**

### 1 Introduction

"Follow the money" is arguably the central investigation strategy for any profit-driven offence [34]. Analysing flows of incriminated money is crucial to understanding the business models and inner workings of organised crime groups and the hierarchy of the involved entities, and, finally, to identifying the groups' members. However, the fight against money laundering is challenging, and criminals utilising virtual currencies as early adopters aggravate the situation even further. While law enforcement agencies need to expend many resources to follow complex transnational flows of fiat currencies, blockchain-based investigations impose even further challenges. These challenges arise from the fact that cryptocurrencies are generally pseudonymous, with some even being anonymous.

Bitcoin [17] is arguably the most famous and widespread cryptocurrency – both for lawful economic purposes and criminal activities [6]. Already in the early days of Bitcoin, it was shown that the currency is not anonymous because it is possible to link multiple pseudonyms belonging to the same person [1, 14, 21]. However, supposedly anonymous cryptocurrencies, such as Monero [15] or Zcash [35], have also been targets of deanonymisation attacks [11, 16]. What all attacks on Bitcoin, Monero, and Zcash have in common is that they are based on partly unreliable assumptions [5]. The reliability of these assumptions determines the quality of the results of an attack. In legal practice, those assumptions are critical for inferring the evidential value of the deanonymisation of a perpetrator.
However, no standard practice for deriving and discussing the reliability of those analysis results has been proposed yet. Therefore, we propose argumentation schemes for assessing the reliability of investigations on the Bitcoin blockchain – thus bridging practical cryptocurrency forensics and its scientific analysis.

**1.1** **Related Work**

Argumentation schemes [33] as a way to classify arguments by their underlying principles of convincingness have been influential in the argumentation theory and artificial intelligence communities [12]. They present the various types of arguments as informal deduction rules together with accompanying critical questions to aid a human reasoner in evaluating arguments of the respective type. Given that expert testimony, as well as the court process itself, is a form of argumentation, it is not surprising that argumentation schemes have been applied to legal processes [2]. Walton [32] gives a detailed overview of the applicability of many argumentation schemes to representing and analysing legal processes. Apart from these argumentation schemes, there are other informal argument schemes like the ones proposed by Wagemans [31]; however, they focus more on the classification of arguments than on human comprehension. There have also been more formal – and even automated – approaches to legal reasoning based on argumentation theory [20, 2]. However, our goal is not to automate parts of the legal process but to aid in evaluating statements about blockchain deanonymisation. While software automates blockchain deanonymisation (e.g. Chainalysis Reactor [10]), in the end, legal decision makers, i.e. humans, need to evaluate the reliability of the obtained findings.

Postulating application-tailored argumentation schemes to capture specialised forms of argument is common practice. Parsons et al. [18] introduce schemes to reason about trust in entities in order to specialise arguments building on statements. Another example, from the medical field, is specific argumentation schemes to reason about treatment choices in order to aid doctors in their decision making and to produce automated patient-specific recommendations [26, 27].

On the legal side, evidence must be critically evaluated, as investigative measures justified by unreliable results potentially impinge upon the fundamental rights of the suspects [22]. Fröwis et al. [7] provide key requirements that must be satisfied to safeguard the evidential value of cryptocurrency investigations, one of them being reliability. They suggest specific measures to achieve reliability, such as sharing any information necessary to assess reliability, without discussing how these measures can be implemented in practice. As a step in that direction, Deuber, Ronge and Rückert [5] provide a taxonomy for the different assumptions underlying deanonymisation attacks on cryptocurrency users – while only briefly discussing their taxonomy's applicability in legal practice.

**1.2** **Contribution**

In legal practice, the lack of a profound framework means that there is no standard way to reason about the reliability of findings from blockchain-based investigations. Less reliable findings might entail two issues: First, results with low reliability might not establish the degree of suspicion required by subsequent investigative measures and thus render them unlawful.
In the worst case, any evidence obtained from unlawful investigations might be inadmissible in court – depending on the exclusionary rules of the respective jurisdictions. Second, even if the evidence is admissible, low reliability corresponds to low evidential value, and thus the evidence might not be sufficient for a conviction. Given that any findings and the blockchain investigation itself are highly abstract for most parties involved, there needs to be a common ground between technical analysts, investigators, and other legal practitioners to assess these findings.

Our contribution is the application of tailored argumentation schemes to assess heuristics employed in investigations based on the Bitcoin blockchain to deanonymise criminal users. The schemes render the taxonomy proposed by Deuber, Ronge and Rückert [5] broadly accessible and easy to use in practice. By presenting the implicit and explicit premises of those heuristics, our argumentation schemes enable all parties involved in the legal process to assess evidential value systematically. Thus, the schemes can potentially render blockchain-based analyses of Bitcoin transactions more comprehensible and the findings more reliable and conclusive.

### 2 Preliminaries

**2.1** **Bitcoin (BTC)**

Bitcoin [17] is a cryptocurrency. At its core are transactions that, in their most basic form, are payments. In contrast to fiat currencies, Bitcoin employs a decentralised ledger of transactions. Decentralised means that there is no central authority issuing new units of the currency or settling transactions. Instead, parties maintain the ledger in a peer-to-peer network – a network where all parties are clients and servers simultaneously. The transactions are organised in blocks, which is why the ledger is also referred to as a blockchain. Using a consensus mechanism, the network agrees on which blocks, i.e. particularly which transactions, should extend the ledger. The network nodes participating in this consensus mechanism are called miners.

Figure 1: Bitcoin transaction. [The figure shows a transaction with one input, referencing (txhash, outid), and two outputs: hpk1 with v1 BTC and hpk2 with v2 BTC.]

Transactions consist of a list of inputs and outputs. An output usually states an amount of Bitcoin (v BTC) and the hash hpk of a public key pk, which is also referred to as address a. The public key is part of a digital signature scheme. Such schemes use public and secret key pairs – anyone can check the validity of a signature with respect to some public key, while only the one knowing the corresponding secret key can create a valid signature. An input is a reference to an output of another transaction, which is uniquely described by the hash txhash of that other transaction and the position outid of the output in the transaction's list of outputs. An example of a transaction with one input and two outputs is given in Fig. 1. Usually, transactions have several inputs and outputs. Spending the first output of this transaction, with an amount of v1 Bitcoin, requires providing a public key pk′ whose hash equals hpk1 and a signature that verifies under pk′. This mechanism ensures that, in general, there are no unauthorised transactions, as knowledge of the corresponding secret keys is required to issue a transaction. A property of Bitcoin is that the input amount of a transaction is always consumed entirely. Thus, the second output of the transaction might be a so-called change output.
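To make the structure described above and depicted in Fig. 1 concrete, here is a simplified sketch of a transaction as a data structure; it is for exposition only and does not match Bitcoin's actual serialisation format:

```c
#include <stdint.h>

/* Simplified model of the transaction structure of Fig. 1. */

struct tx_input {             /* reference to an output of an earlier transaction */
    uint8_t  txhash[32];      /* hash of that other transaction */
    uint32_t outid;           /* position in its output list */
    /* a spending transaction must also carry a public key matching the
     * referenced output's hash, plus a valid signature under that key */
};

struct tx_output {
    uint64_t value;           /* amount, in the smallest Bitcoin unit */
    uint8_t  hpk[20];         /* hash of the recipient's public key ("address") */
};

struct transaction {
    struct tx_input  *in;   uint32_t n_in;
    struct tx_output *out;  uint32_t n_out;
    /* input amounts are consumed entirely; any surplus over what the
     * recipients should receive is typically paid back via a change output */
};
```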
A change output pays the sender(s) back the difference between the transaction's input amounts and the amount that the recipient(s) should receive.

Wallets in Bitcoin can be seen as collections of several addresses that belong to the same entity. On a technical level, a wallet is often referred to as software that generates and stores the private keys corresponding to different addresses and allows creating new addresses and issuing transactions. By only inspecting transactions on the blockchain, it is not immediately obvious which addresses belong to the same wallet.

CoinJoin transactions are a special type of transaction that tries to add anonymity to Bitcoin. The idea is to combine inputs from multiple entities while at the same time having equally valued outputs [13]. In Bitcoin, the concept of having transactions with inputs from multiple users to hinder linking is called mixing.

**2.2** **Bitcoin Investigations**

Research has shown early on that Bitcoin is not anonymous but pseudonymous, as it is possible to cluster addresses that are likely to be controlled by the same entity – referred to as address clustering.
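To illustrate what such address clustering amounts to computationally, the following minimal union-find sketch implements the multi-input heuristic introduced next. The names and the CoinJoin flag are illustrative, and real tools combine many more heuristics:

```c
#include <stdint.h>

#define MAX_ADDR 1000000u
static uint32_t parent[MAX_ADDR];   /* parent[a] == a  =>  a is a cluster root */

static void clusters_init(uint32_t n)
{
    for (uint32_t i = 0; i < n; i++) parent[i] = i;
}

static uint32_t find_root(uint32_t a)
{
    while (parent[a] != a) { parent[a] = parent[parent[a]]; a = parent[a]; }
    return a;
}

static void merge(uint32_t a, uint32_t b) { parent[find_root(a)] = find_root(b); }

/* Multi-input heuristic: treat all input addresses of one transaction as
 * controlled by the same entity — but not for suspected CoinJoin
 * transactions, which combine inputs of multiple entities by design. */
static void cluster_tx_inputs(const uint32_t *inputs, uint32_t n, int is_coinjoin)
{
    if (is_coinjoin || n < 2)
        return;
    for (uint32_t i = 1; i < n; i++)
        merge(inputs[0], inputs[i]);
}
```

After processing all transactions, two addresses sharing the same root are claimed to belong to one entity — exactly the kind of claim the argumentation schemes below are designed to scrutinise.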
This is illustrated by using the US as an example of a common-law jurisdiction and Germany as an example of a civil-law jurisdiction; both states have ratified the convention. The starting point for our discussion is the following example case of a typical blockchain-based investigation: **Example. Investigators seized a darknet marketplace and recovered a local** Bitcoin wallet that was presumably used to pay the marketplace’s operator. The investigators then used blockchain analysis to discover the wallet which was used by the operator to receive payments. While the discovered operator wallet is a local wallet, the operator is suspected of using another wallet at a cryptocurrency exchange to convert Bitcoin into fiat currency. To prevent that the exchange wallet can be linked to the incriminated local wallet, the operator mixed the funds prior to the transfer. Through blockchain analysis, the investigators nevertheless managed to establish a link between the incriminated local wallet ----- 6 D. Deuber et al. and the exchange wallet. Next, the investigators issued a request for the disclosure of customer data to the exchange – which collected them as part of their employed Know-Your-Customer policy to comply with anti-money-laundering laws. The goal of this request was to find the natural person that controls the incriminated local wallet. After having identified this suspected operator, the investigators conducted electronic surveillance and executed a search of the suspect’s premises. In summary, the investigative measures used in the example were the blockchain analysis, a request for the disclosure of customer data, electronic surveillance, and a search of premises. In general, such investigative measures have in common that they require a specific degree of suspicion in order to protect the rights of the targeted person. Under German law, an initial suspicion is sufficient to justify a blockchain analysis (according to Sections 161, 163 German Code of Criminal Procedure (GCCP), [25, 8]) or a request for the disclosure of customer data (according to Section 100j GCCP). An initial suspicion must be based on a conclusive and established factual basis (factual quality). Due to lax requirements, these measures may be directed not only against the suspected person but also against other third parties that might be somehow connected [9, 24]. There are stronger requirements regarding electronic surveillance pursuant to Section 100a GCCP or a search of premises pursuant to Section 102 GCCP. Beyond the mere ‘possibility’ of the commission of a crime, in these cases, the suspicion of the crime must be specific and individualised (so-called qualified initial suspicion) as well as ‘probable’ [23, 19]. These measures have to be directed only against the accused person [23] and may only involve other persons who are directly connected to the accused person or involved in the crime (see Sections 100a (3) and 103 GCCP). Under US law, especially the requirements for the analysis of blockchain data and a request for the disclosure of customer data differ significantly from German law. However, this does not affect the legal issues raised by blockchain analyses, as we will point out below. Both blockchain analyses and the request for the disclosure of customer data are not subject to the probable cause requirement of the Fourth Amendment, given that the third-party doctrine applies [30]. 
However, electronic surveillance and search of premises are subject to the Fourth Amendment and therefore require probable cause as the degree of suspicion. The Fourth Amendment demands the suspicion to be particularised with respect to the person under surveillance, being searched, or specific things to be seized. The most important legal issue concerning blockchain analysis in practice is whether or not the findings of the analysis can establish the required degree of suspicion for subsequent investigative measures. Therefore, the lower requirements for blockchain analysis or a request for the disclosure of customer data under US law do not matter, as at least subsequent measures – such as searches of premises – require similar degrees of suspicion as under German law. Thus, the only difference under US law is that the legal issue arises later in the investigation. To illustrate the legal issue, we return to the example of the darknet marketplace operator. Here, a blockchain analysis was used to link an incriminated wallet to an exchange service. Next, disclosure of customer data was requested ----- Argumentation Schemes for Blockchain Deanonymization 7 from the exchange. Imagine that solely based on the linkage of the wallets, further investigative measures are conducted against the natural person identified by the customer data. If those measures are electronic surveillance or searches of premises, the required suspicion must be particularised against the person targeted by the measures, both under German and US law. If it is unreliable, blockchain analysis might fail to establish this particularised suspicion. Imagine that the analysis is based on the multi-input heuristic, but the heuristic is applied to CoinJoin transactions. In this case, the analysis would definitely yield false positives as CoinJoin transactions are issued by multiple entities by design. False positives might render the individualisation insufficient and thus the respective investigative measure unlawful. To summarise, certain invasive and targeted investigative measures require a degree of suspicion that is individualised with respect to the target of these measures. Blockchain analysis based on uncertain assumptions might lead to unreliable findings that are not sufficient to establish the individualisation and thus the required degree of suspicion for subsequent investigative measures. If investigative measures are conducted without the necessary degree of suspicion, they are unlawful and thus might render obtained evidence inadmissible – depending on the exclusionary rules of the respective jurisdiction. **2.4** **Argumentation Schemes** Argumentation schemes classify arguments by their warrant in the sense of Toulmin [28] – i.e. by their principle of convincingness. They are presented as informal presumptive deduction rules inferring plausible truth of a conclusion from truth of multiple premises [33]. For example, the Argument from Abductive _Inference is tailored towards reconstructing the cause E for a set F of observed_ findings. Premise: _F is a finding or given set of facts._ Premise: _E is a satisfactory explanation of F_ . Premise: No alternative explanation E[′] given so far is as satisfactory as E. Conclusion: Therefore, E is plausible as hypothesis. Scheme 1: Argument from Abductive Inference [33] In addition to the deduction rule representing the informal shape of the argument, an argumentation scheme specifies critical questions (CQs) as ways to attack an argument based on the scheme. 
The critical questions aid both the producer and the receiver of arguments by suggesting relevant statements to present or ask about. There are usually critical questions attacking the individual premises or the conclusion of the argument, as well as ones attacking the applicability of the scheme. Consider for example the CQs of the Argument from Abductive Inference: ----- 8 D. Deuber et al. 1. How satisfactory is E as an explanation of F, apart from the alternative explanations available so far in the dialogue? 2. How much better an explanation is E than the alternative explanations available so far in the dialogue? 3. How far has the dialogue progressed? If the dialogue is an inquiry, how thorough has the investigation of the case been? 4. Would it be better to continue the dialogue further, instead of drawing a conclusion at this point? Scheme 1: Critical questions of Argument from Abductive Inference CQs 1 and 2 are direct attacks on truth of premises of the rule. CQs 3 and 4 are specific attacks based on the idea that there could be other explanations not yet put forth due to the temporal nature of argumentative dialogues. By making premises and possible flaws of an argument explicit, argumentation schemes aid critical discussion of expert statements by legal decision-makers and other practitioners without the need for deep understanding of the underlying topic. For judging the reliability of a claim from blockchain analysis, it is particularly helpful to have transparency with regards to the underlying assumptions as they have to be judged on a case-by-case basis [5]. This added transparency can also increase the evidential value of such findings if the reliability of dependent information is sufficiently well established. ### 3 Our Argumentation Schemes In criminal investigations, blockchain analyses are typically conducted to establish a link between an entity and a criminal offence through involved cryptocurrency addresses. As stated in Section 1.1, there exists software that could establish such links in an automated manner. However, the methods used by it, as well as the employed heuristics, remain regularly opaque. Such insufficient traceability is contrary to the requirements of legal proceedings, which require a high degree of explainability and intelligibility. For this purpose, we present a custom argumentation scheme to argue the involvement of an entity in an offence from the control of an address that is connected to that offence (see Scheme 2). We do not need a custom argumentation scheme to represent linking an entity with an address by requesting data from a cryptocurrency exchange, as this is covered by Argument from Position to Know [33]. This standard scheme covers this case, as exchanges typically collect the personal information their customers’ personal information as part of Know-Your-Customer policies and are thereby in a position to know who the customer using an account is. To establish a link between addresses, there are software tools implementing various heuristics, such as the multi-input heuristic or change heuristics, which are arguably used by investigators [5]. We pose the Cluster from Software scheme to represent arguments based on such a software tool to establish the link between addresses and thereby forming clusters. ----- Argumentation Schemes for Blockchain Deanonymization 9 Premise: Address A is connected to offence O Premise: Entity E controls address A Conclusion: Entity E is connected to offence O 1. 
Which circumstantial evidence indicates that entity E controls address A? 2. Could it be that at the time of offence O someone else controlled address A instead of entity E? 3. How was address A connected to offence O that E’s involvement is indicated? 4. Are there other indicators that E is connect to offence O? Scheme 2: Suspicion through Address Control Premise: Software S establishes a link between address A1 and address A2 Premise: Software S is reliable Premise: Entity E controls address A1 Conclusion: Entity E controls address A2 1. How does software S establish the link? 2. How reliable is software S? Why is software S considered reliable? 3. Could this link be also established without the use of software S, e.g. by using a different software, human-reasoning with the multi-input heuristic, or other non-blackbox methods? 4. What evidence exists for entity E controlling A1? 5. Are there other indicators that E might control A2? Scheme 3: Cluster from Software Naturally, it is not enough for a software tool to establish a link between addresses without further explanations and evidence backing that claim. Analysts face a myriad of transactions when conducting blockchain analyses. They must assess the results presented by the software for criminalistic and legal reasons. First, analysts must understand the software’s processes to infer investigative leads, find connections, and form hypotheses – tasks that cannot be entirely automated. Second, only when understanding the software’s results can analysts apply their knowledge of criminal tactics eventually employed by perpetrators, question the results, and falsify hypotheses they previously posed. Finally, from a legal perspective, the rightfulness of the analysis is crucial, as it affects the lawfulness of further investigations in the pre-trial stages and the evidential value of obtained findings in the actual trial [5]. However, assessing the results would require that the employed deanonymization software discloses the assumptions relied on in the analysis – which is typically not done at all. Therefore, an investigator would back the findings of the software by manual analysis in case the software does not disclose the reasons for linking addresses. To represent the claims from manual analysis, we present two exemplary schemes that capture the use of the multi-input (see Scheme 4) and the change-address heuristic (see Scheme 5), respectively. ----- 10 D. Deuber et al. Premise: Transaction T has multiple input addresses Premise: Entity E controls some input addresses of T Conclusion: Entity E controls all input addresses of T 1. Could T be a CoinJoin transaction? 2. Could it be that another entity F shares secret keys with E and thereby can control other or all inputs of T ? 3. Which input addresses of transaction T does entity E control? What evidence is there for E controlling these addresses? 4. Are there other indicators that E might control other input addresses of T ? Scheme 4: Cluster from Multi-Input Premise: Transaction T has multiple output addresses Premise: Output address C is a change address of transaction T Premise: Entity E controls all input addresses of T Conclusion: Entity E also controls change address C 1. Could T just have multiple distinct benefactors? Could the change for example be donated to a supported unrelated entity? 2. What evidence is there suggesting that client software was used which generates a fresh change address for every new transaction? 3. 
Are there other indicators that E controls address C? Scheme 5: Cluster by Change-Address For brevity, the argumentation schemes presented in this section only cover the most common Bitcoin blockchain analysis heuristics used in practice and especially do not cover non-blockchain-specific reasoning. For the latter, we can use the vast array of pre-existing schemes [33]. Together, these schemes can be applied to represent reasoning about Bitcoin blockchain investigations in practice, as we will show in Section 4. ### 4 Application in the Wall Street Market Case In order to illustrate our approach and its practical implications, we present the argumentation behind the investigative results of the proceedings against one of the administrators of the infamous Wall Street Market (WSM). WSM was one of the largest darknet marketplaces on which illegal narcotics, financial data, hacking software as well as counterfeit goods were traded between approximately 2016 and its seizure in 2019 [4]. Besides technical surveillance measures, blockchain-based investigations of Bitcoin transactions conducted by the US Postal Service (USPS) were decisive in identifying the administrators operating the marketplace [29]. The publicly available criminal complaint states that the USPS employed proprietary software of an undisclosed company to conduct its blockchain analyses [29]. Furthermore, neither the exact methods employed during the analyses ----- Argumentation Schemes for Blockchain Deanonymization 11 Suspicion through Address Control dudebuy BPPC Argument from Position to Know E-Mail Game Company Cluster from Software Mixer _W 2_ _W 1_ _W 4_ Hansa Argument from Sign Defendant X. Argument from Position to Know Figure 2: Application of the proposed argumentation schemes to assess the identification of the administrator of the darknet marketplace called Wall Street Market nor the involved Bitcoin addresses were specified. Instead, the final results – meaning actual investigative findings in the form of off-chain information – were presented on their own. To prove the correctness, it is merely stated that the software was found to be reliable based on numerous unrelated investigations [29]. This might either suggest the software was utilised as a black box or that the details were (intentionally) not published and kept secret to protect the technical means for tactical reasons. This argumentation might be insufficient to convince legal decision makers of the rightfulness of the findings. Thus, we infer from the criminal complaint which analysis methods the software might have employed and then apply our argumentation schemes to argue the findings. The blockchain analyses of the USPS constituted the initial lead that enabled the involved law enforcement agencies to identify ‘TheOne’ – who acted as one of the administrators of the platform [29]. ‘TheOne’ is believed to be ‘X.’,[1] one of the three defendants, mainly based on the following two findings: First, the investigators could establish a link between the administrator ‘TheOne’ from WSM and the user ‘dudebuy’ from Hansa Market by analysing data seized from both platforms. They found that ‘TheOne’ used the same PGP public key as ‘dudebuy’ did at the previously operated and meanwhile seized darknet marketplace Hansa Market. As a PGP key pair is a highly individual piece of data used to prove one’s identity and encrypt communications, it has to be inferred that those two monikers belong to the same real-world entity. 
As ‘dudebuy’ used a wallet W 2 as his refund wallet on Hansa Market, the 1 The defendant’s name has been anonymized by the authors. ----- 12 D. Deuber et al. investigators found an entry point to perform financial investigations concerning this perpetrator seeming to operate now as ‘TheOne’. Here, the investigators could establish suspicion using the Suspicion through _Address Control scheme and infer that the owner of wallet W_ 2 seems to be the targeted administrator of the ongoing investigations regarding WSM. This conclusion could be assessed by the evaluation of the critical questions of the scheme. CQ 1 – regarding circumstantial evidence indicating address control – leads to a high degree of confidence, as the investigators resorted to seized user data, including an identical PGP public key. While CQ 2 (address control by somebody else) does not seem to be of relevance to the investigators at this point in time, CQ 3 (nature of the connection to the offence) reveals at least an indirect involvement of the address in the offence in question. Second, being confident that the owner of wallet W 2 is the target, the USPS revealed that other wallets that appeared in the investigations, namely wallets W 1 and W 4, were funded by transactions originating from wallet W 2. As this analysis step is basically a rather typical payment flow analysis, which is also employed in traditional money laundering investigations concerning fiat currencies, it is dispensable to assess it with a newly formulated argumentation scheme. For example, Argument from Sign or Argument from Abductive Inference would be a suitable fit here [33]. Those newly uncovered wallets, in turn, were identified to be the true origin of several payments to various services, which were conducted via a bitcoin payment processing company (BPPC). Prior to these payments, the corresponding funds were supposedly mixed via a commercial mixing service, whose flow of transactions could be ‘de-mixed’ by the USPS’ analysts [29]. Given the fact that no further information regarding the de-mixing is presented in the criminal complaint, we deliberately assume that some sort of software established the link so that the Cluster from Software scheme should be employed to be able to judge the evidential value of this result. The scheme revolves around the mechanism for link establishment (CQ 1), the reliability of the tool itself (CQ 2), human comprehensibility (CQ 3) and additional evidence available (CQs 4 and 5). Here, the most important critical question to pose might be CQ 3, i.e. whether the link could be established by comprehensible reasoning of a human analyst. As the following requests for the disclosure of customer data were based on this link, it must be considered crucial evidence in this early phase of the investigation. In the course of using CQ 3, a human analyst might establish that the link was a result of the multi-input heuristic. As the multi-input heuristic results in false positives when applied to CoinJoin transactions, it is crucial to challenge whether the involved transactions could be CoinJoin transactions – via CQ 1 of the Cluster from Multi-Input scheme. By this example, the practical relevance of our argumentation schemes becomes particularly apparent. Without the schemes, the argumentation would be limited to whether the analysis software was reliable in the past but not whether false positives were actually excluded in the specific case. 
By obtaining user records from the BPPC regarding the payment from wallet W1, investigators uncovered an e-mail address that could be linked to the aforementioned defendant, as it was actually used alongside his real-world identity 'X.'. In addition, they uncovered that wallet W4 served as the suspected source of payments for two accounts at a video gaming company, which were also linked to the suspect, as the records obtained by a subpoena suggest. Furthermore, a second link could be established in a similar manner from another wallet W5, which is considered to have been used to pay for a third account linked to the suspect at the gaming company. Wallet W5 was found to be funded by a different wallet that could also be associated with WSM's administrators at a later point in time. While this correlation accumulates reliability, each respective request for the disclosure of customer data might be assessed by employing the Argument from Position to Know scheme [33].

In summary, the USPS's blockchain analyses included the following broader steps: identification of wallets, detection of payments between wallets, de-mixing, and the association of wallets with off-chain information, mainly from other darknet marketplaces as well as service providers. While the investigators later found various pieces of evidence in the course of the following investigative actions, these steps were central for the case in order to find a starting point for targeted investigations. We showed that their reliability can be effectively assessed by utilising our argumentation schemes.

### 5 Conclusion

After having demonstrated the usage of several argumentation schemes for blockchain-based investigations, we conclude by presenting use cases in which the schemes will be especially beneficial and by pointing out directions for future work.

As our argumentation schemes allow reasoning about the findings of blockchain-based investigations, we see potential use cases wherever such findings have to be communicated to and assessed by persons involved in the respective criminal proceedings. By utilising the schemes, an analyst can clearly articulate the employed heuristics, their individual strengths, and potential weaknesses. This increases the comprehensibility of such analyses and court proceedings for the decision makers, and also eases the documentation for later verification by an expert witness. Given the high requirements regarding the explainability of legal proceedings, this task cannot yet be achieved by software in an automated manner. Therefore, we intend to support analysts with our argumentation schemes. Nevertheless, our considerations can prospectively be integrated into deanonymization software to increase its explainability.

Clear articulation is key to determining the quality of blockchain-based findings, especially if they are not, or only weakly, supported by other evidence. On the one hand, applying an argumentation scheme and utilising its critical questions enables law enforcement agencies and the preliminary judge to reason about the eventual perpetration of the identified person and therefore establish a certain degree of suspicion to justify further investigative measures. On the other hand, the rights of suspects can be protected by ensuring that the results obtained from blockchain investigations are of quality, can be understood, independently checked for plausibility by the parties to the proceedings, and are actually able to establish the relevant suspicion required by law.
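One lightweight way to operationalise this assessment in case-management tooling is to represent each scheme as a checklist of critical questions whose answers are recorded per finding. The sketch below is our own minimal illustration; the scheme name and questions are abbreviated from the Cluster from Software scheme discussed in Section 4, and the answer values and field names are assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class CriticalQuestion:
    text: str
    answer: str = "open"      # "open", "satisfied" or "challenged"
    note: str = ""

@dataclass
class SchemeApplication:
    scheme: str
    conclusion: str
    questions: list = field(default_factory=list)

    def unresolved(self):
        return [q for q in self.questions if q.answer != "satisfied"]

demix = SchemeApplication(
    scheme="Cluster from Software",
    conclusion="Mixer inputs and outputs belong to the same entity",
    questions=[
        CriticalQuestion("CQ1: By which mechanism was the link established?"),
        CriticalQuestion("CQ2: Is the tool reliable?", "satisfied",
                         "validated in unrelated cases"),
        CriticalQuestion("CQ3: Is the link humanly comprehensible?", "challenged",
                         "possibly the multi-input heuristic on CoinJoin txs"),
    ],
)

for q in demix.unresolved():
    print(f"[{demix.scheme}] must still address: {q.text} ({q.answer})")
```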
As a result, we consider the application of argumentation schemes in the context of blockchain-based investigations a supportive mechanism for making sense of the intangible crime scene and the highly abstract commission of cybercriminal offences. Our schemes can be a helpful tool for investigators and prosecutors who strive to identify perpetrators, as well as for legal decision makers answering the question of guilt. Finally, the schemes are a step towards harmonising the effectiveness and explainability of high-tech investigations.

This work can be extended in multiple directions. Further schemes for other blockchain analysis heuristics or other cybercriminal investigations could be created, as already indicated in Section 3. In addition, the critical questions of our schemes could be refined to comprise more specific sub-questions, as done for Argument from Expert Opinion in Walton, Reed and Macagno [33], to capture more expert knowledge.

**Acknowledgements** This work was supported by DFG (German Research Foundation) as part of the Research and Training Group 2475 "Cybercrime and Forensic Computing" (grant number 393541319/GRK2475/1-2019). Merlin Humml was also supported by DFG project RAND (grant number 377333057). The authors also wish to thank Marie-Helen Maras for fruitful discussions.

### References

1. Androulaki, E. et al.: Evaluating User Privacy in Bitcoin. In: pp. 34–51 (2013). https://doi.org/10.1007/978-3-642-39884-1_4
2. Atkinson, K., Bench-Capon, T.J.M.: Argumentation schemes in AI and Law. Argument & Computation 12(3), 417–434 (2021). https://doi.org/10.3233/AAC-200543
3. Council of Europe: Convention on Cybercrime (2001).
4. Department of Justice – Office of Public Affairs: Three Germans Who Allegedly Operated Dark Web Marketplace with Over 1 Million Users Face U.S. Narcotics and Money Laundering Charges (2019). https://www.justice.gov/opa/pr/three-germans-who-allegedly-operated-dark-web-marketplace-over-1-million-users-face-us (visited on 07/12/2020).
5. Deuber, D., Ronge, V., Rückert, C.: SoK: Assumptions underlying Cryptocurrency Deanonymizations – A Taxonomy for Scientific Experts and Legal Practitioners. Proc. Priv. Enhancing Technol. 2022(3), 64–84 (2022).
6. European Union Agency for Law Enforcement Cooperation: IOCTA 2021: Internet Organised Crime Threat Assessment 2021 (2021). https://doi.org/10.2813/113799
7. Fröwis, M. et al.: Safeguarding the Evidential Value of Forensic Cryptocurrency Investigations. arXiv e-prints (2019). arXiv:1906.12221 [cs.CY]
8. Grzywotz, J., Köhler, O.M., Rückert, C.: Cybercrime mit Bitcoins – Straftaten mit virtuellen Währungen, deren Verfolgung und Prävention. StV 11, 753–759 (2016)
9. Hauschild: Münchener Kommentar zur StPO, § 152. Hans Kudlich (2014)
10. Chainalysis Inc.: Chainalysis Reactor (2022). https://www.chainalysis.com/chainalysis-reactor/
11. Kappos, G. et al.: An Empirical Analysis of Anonymity in Zcash. In: pp. 463–477 (2018)
12. Macagno, F.: Argumentation schemes in AI: A literature review. Introduction to the special issue. Argument Comput. 12(3), 287–302 (2021). https://doi.org/10.3233/AAC-210020
13. Maxwell, G.: CoinJoin: Bitcoin privacy for the real world (2013). https://bitcointalk.org/index.php?topic=279249 (visited on 12/08/2019).
14. Meiklejohn, S. et al.: A fistful of bitcoins: characterizing payments among men with no names. In: Proceedings of the 2013 conference on Internet measurement conference, pp. 127–140 (2013)
15. Monero. https://www.getmonero.org/ (visited on 12/08/2019).
16. Möser, M. et al.: An Empirical Analysis of Traceability in the Monero Blockchain. 2018(3), 143–163 (2018). https://doi.org/10.1515/popets-2018-0025
17. Nakamoto, S.: Bitcoin: A peer-to-peer electronic cash system (2008).
18. Parsons, S. et al.: Argument schemes for reasoning about trust. Argument & Computation 5(2–3), 160–190 (2014). https://doi.org/10.1080/19462166.2014.913075
19. Peters: Münchener Kommentar zur StPO, § 102. Hans Kudlich (2016)
20. Prakken, H.: Historical Overview of Formal Argumentation. In: Handbook of Formal Argumentation Vol. 1 (2017). http://www.collegepublications.co.uk/downloads/ifcolog00017.pdf
21. Reid, F., Harrigan, M.: An analysis of anonymity in the bitcoin system. In: Security and Privacy in Social Networks, pp. 197–223. Springer (2013)
22. Rückert, C.: Cryptocurrencies and fundamental rights. J. Cybersecur. 5(1), tyz004 (2019). https://doi.org/10.1093/cybsec/tyz004
23. Rückert, C.: Münchener Kommentar zur StPO, § 100a. Hans Kudlich (2022)
24. Rückert, C.: Münchener Kommentar zur StPO, § 100j. Hans Kudlich (2022)
25. Safferling, C., Rückert, C.: Telekommunikationsüberwachung bei Bitcoins – Heimliche Datenauswertung bei virtuellen Währungen gem. § 100a StPO. MMR (2015)
26. Sanchez Graillet, O., Cimiano, P.: Argumentation Schemes for Clinical Interventions. Towards an Evidence-Aggregation System for Medical Recommendations. In: Informatics and Assistive Technologies for Health-Care, Medical Support and Wellbeing, HEALTHINFO 2019 (2019)
27. Sassoon, I. et al.: Argumentation schemes for clinical decision support. Argument Comput. 12(3), 329–355 (2021). https://doi.org/10.3233/AAC-200550
28. Toulmin, S.: The Uses of Argument. Cambridge University Press (1958)
29. United States District Court for the Central District of California: Criminal Complaint – United States of America v. Tibo Lousee, Klaus-Martin Frost, and Jonathan Kalla – Case No. 19MJ1843 (2019). https://www.justice.gov/opa/press-release/file/1159706/download (visited on 07/12/2020).
30. United States v. Gratkowski, 964 F.3d 307 (5th Cir. 2020)
31. Wagemans, J.: Constructing a Periodic Table of Arguments. SSRN Electronic Journal (2016). https://doi.org/10.2139/ssrn.2769833
32. Walton, D.: Legal Reasoning and Argumentation. In: Handbook of Legal Reasoning and Argumentation, pp. 47–75. Springer Netherlands (2018). https://doi.org/10.1007/978-90-481-9452-0_3
33. Walton, D., Reed, C., Macagno, F.: Argumentation Schemes. Cambridge University Press (2008)
34. Wechsler, W.F.: Follow the money. Foreign Affairs 80, 40 (2001)
35. Zcash. https://z.cash/ (visited on 12/08/2019).
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2305.16883, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "CLOSED", "url": "http://arxiv.org/pdf/2305.16883" }
2023
[ "JournalArticle" ]
true
2023-05-26T00:00:00
[ { "paperId": "ec9fcc9aa1e9e1bfa36ef44e072054683b87f5ad", "title": "Argumentation schemes in AI: A literature review. Introduction to the special issue" }, { "paperId": "f70e7e7bbfa4c042438444ff7fd47251add59bf8", "title": "Argumentation schemes for clinical decision support" }, { "paperId": "7172874da1e4861224247984b0a8ee6db714aed8", "title": "Argumentation schemes in AI and Law" }, { "paperId": "dede5aebf87c6e8d29c6dad724596d0c59e6eee6", "title": "Safeguarding the Evidential Value of Forensic Cryptocurrency Investigations" }, { "paperId": "346f11a4d4e352e1e474d08137e5f7841b840d33", "title": "Cryptocurrencies and Fundamental Rights" }, { "paperId": "6f0b29f460b436e22fd67e7f9c32bbaeba4b5cc2", "title": "Statutory Interpretation as Argumentation" }, { "paperId": "ab4da6d37196c4fbec6bcb3b7508e31506fe30bc", "title": "An Empirical Analysis of Anonymity in Zcash" }, { "paperId": "07c113708582b95c83951e0fd4c0a325128b109f", "title": "A Survey on Anonymity and Privacy in Bitcoin-Like Digital Cash Systems" }, { "paperId": "aea5e016836f685f65f48ec6a82d4382fa46ee96", "title": "BlockSci: Design and applications of a blockchain analysis platform" }, { "paperId": "ffa7b79338fb948c8cd0fa3621172106546023fa", "title": "An Empirical Analysis of Traceability in the Monero Blockchain" }, { "paperId": "ad4baa8a1d16330c26d658f06119ae1515a4c89d", "title": "Constructing a Periodic Table of Arguments" }, { "paperId": "e9d7f2e68f63253db4742db03082849d344c8912", "title": "Argument schemes for reasoning about trust" }, { "paperId": "19bab496d5d7f60d3e5b9217739b9cf7fedaf44b", "title": "A fistful of bitcoins: characterizing payments among men with no names" }, { "paperId": "3e987181514405756e0e4ca71e6ef0457749a840", "title": "Evaluating User Privacy in Bitcoin" }, { "paperId": "89e12ff6fe2576c6d3fcdd0ebe4a1d0ac6d49089", "title": "An Analysis of Anonymity in the Bitcoin System" }, { "paperId": "da6862a2566dc0041f8d7e10b3f4ec03895e3ece", "title": "Follow the Money" }, { "paperId": "2bdfe6f7c8af3ec8a13e249101dc5720337c2f62", "title": "Council of Europe" }, { "paperId": null, "title": "United States v. Gratkowski , 964 F.3d 307 (5th Cir. 2020)" }, { "paperId": null, "title": "Zcash Foundation" }, { "paperId": null, "title": "SoK: Assumptions underlying Cryptocurrency Deanonymizations—A Taxonomy for Scientific Experts and Legal Practitioners" }, { "paperId": null, "title": "https://www.chainalysis.com/chainalysisreactor/. Argumentation Schemes for Blockchain Deanonymization" }, { "paperId": null, "title": "European Union Agency for Law Enforcement Cooperation, IOCTA 2021: internet organised crime threat assessment 2021" }, { "paperId": "8732629f872f3ab326f325bff2867b02b1fe606e", "title": "Argumentation Schemes for Clinical Interventions Towards an Evidence-Aggregation System for Medical Recommendations" }, { "paperId": null, "title": "United States District Court for the Central District of California. Criminal Complaint-United States of America v. Tibo Lousee, Klaus-Martin Frost, and Jonathan Kalla-Case No. 19MJ1843" }, { "paperId": null, "title": "Department of Justice - Office of Public Affairs, Three Germans Who Allegedly Operated Dark Web Marketplace with Over 1 Million Users Face U.S. 
Narcotics and Money Laundering Charges" }, { "paperId": "6fbddc9502a302e648a01e1f55ce9cade476330a", "title": "Legal Reasoning and Argumentation" }, { "paperId": "4710e17cfbbb9d4637746b0dc5be71c94b8f627a", "title": "Historical overview of formal argumentation" }, { "paperId": "ab432f51a8a394775c81109dd104f17149adf5c1", "title": "Cybercrime mit Bitcoins – Straftaten mit virtuellen Währungen, deren Verfolgung und Prävention" }, { "paperId": "93877edcdf3ed6c2136ac90169ffdce510cf0d60", "title": "The Uses Of Argument" }, { "paperId": "62b12e1523e81eecd8910d2807acb9b839480719", "title": "An Introduction To Reasoning" }, { "paperId": null, "title": "Münchener Kommentar zur StPO, § 102" }, { "paperId": null, "title": "Telekommunikationsüberwachung bei Bitcoins - Heim-liche Datenauswertung bei virtuellen Währungen gem. §100a StPO" }, { "paperId": "b7f039f3e24404dfdae60547ec1a26df671675aa", "title": "A Fistful of Bitcoins Characterizing Payments Among Men with No Names" }, { "paperId": null, "title": "CoinJoin: Bitcoin Privacy for the Real World" }, { "paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": "25535d54d130acec7c665fa6fd35cf4b1215d4cb", "title": "Argumentation Schemes" }, { "paperId": null, "title": "Alltagslogik" }, { "paperId": null, "title": "How satisfactory is E as an explanation of F , apart from the alternative explanations available so far in the dialogue?" }, { "paperId": null, "title": "§ 100j StPO" }, { "paperId": null, "title": "The Bitcoin Project" }, { "paperId": null, "title": "How far has the dialogue progressed? If the dialogue is an inquiry, how thorough has the investigation of the case been?" }, { "paperId": null, "title": "Monero" }, { "paperId": null, "title": "Chainalysis" } ]
10737
en
[ { "category": "Physics", "source": "external" }, { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Physics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01fa4b33dbd61a640c18adcddb778c405cc21fcf
[ "Physics", "Computer Science" ]
0.851635
Scalable Quantum Error Correction for Surface Codes Using FPGA
01fa4b33dbd61a640c18adcddb778c405cc21fcf
International Conference on Quantum Computing and Engineering
[ { "authorId": "49906277", "name": "Namitha Liyanage" }, { "authorId": "2109036219", "name": "Yue Wu" }, { "authorId": "2202172191", "name": "Alexander Deters" }, { "authorId": "2190107546", "name": "Lin Zhong" } ]
{ "alternate_issns": null, "alternate_names": [ "QCE", "Int Conf Quantum Comput Eng" ], "alternate_urls": null, "id": "f4feaf1a-55ae-44e8-a601-dd4b8d6993ab", "issn": null, "name": "International Conference on Quantum Computing and Engineering", "type": "conference", "url": null }
A fault-tolerant quantum computer must decode and correct errors faster than they appear. The faster errors can be corrected, the more time the computer can do useful work. The Union-Find (UF) decoder is promising with an average time complexity slightly higher than $O(d^{3})$. We report a distributed version of the UF decoder that exploits parallel computing resources for further speedup. Using an FPGA-based implementation, we empirically show that this distributed UF decoder has a sublinear average time complexity with regard to $d$, given $O(d^{3})$ parallel computing resources. The decoding time per measurement round decreases as $d$ increases, a first time for a quantum error decoder. The implementation employs a scalable architecture called Helios that organizes parallel computing resources into a hybrid tree-grid structure. We are able to implement $d$ up to 21 with a Xilinx VCU129 FPGA, for which an average decoding time is 11.5 ns per measurement round under phenomenological noise of 0.1%, significantly faster than any existing decoder implementation. Since the decoding time per measurement round of Helios decreases with $d$, Helios can decode a surface code of arbitrarily large $d$ without a growing backlog.
# Scalable Quantum Error Correction for Surface Codes using FPGA

### Namitha Liyanage, Yue Wu, Alexander Deters and Lin Zhong

Department of Computer Science, Yale University, New Haven, CT
Email: {namitha.liyanage, yue.wu, alex.deters, lin.zhong}@yale.edu

**Abstract—A fault-tolerant quantum computer must decode and correct errors faster than they appear. The faster errors can be corrected, the more time the computer can do useful work. The Union-Find (UF) decoder is promising, with an average time complexity slightly higher than O(d³). We report a distributed version of the UF decoder that exploits parallel computing resources for further speedup. Using an FPGA-based implementation, we empirically show that this distributed UF decoder has a sublinear average time complexity with regard to d, given O(d³) parallel computing resources. The decoding time per measurement round decreases as d increases, a first for a quantum error decoder. The implementation employs a scalable architecture called Helios that organizes parallel computing resources into a hybrid tree-grid structure. We are able to implement d up to 21 with a Xilinx VCU129 FPGA, for which the average decoding time is 11.5 ns per measurement round under phenomenological noise of 0.1%, significantly faster than any existing decoder implementation. Since the decoding time per measurement round of Helios decreases with d, Helios can decode a surface code of arbitrarily large d without a growing backlog.**

I. INTRODUCTION

The high error rates of quantum devices pose a significant obstacle to the realization of a practical quantum computer. As a result, the development of effective quantum error correction (QEC) mechanisms is crucial for the successful implementation of a fault-tolerant quantum computer. One promising approach for QEC is surface codes [1–3], in which the information of a single qubit (called a logical qubit) is redundantly encoded across many physical data qubits, with a set of ancillary qubits interacting with the data qubits. By periodically measuring the ancillary qubits, one can detect and potentially correct errors in physical qubits. Once the presence of errors has been detected through the measurement of ancillary qubits, a classical algorithm, or decoder, guesses the underlying error pattern and corrects it accordingly. The faster errors can be corrected, the more time a quantum computer can spend on useful work. Due to the error rate of state-of-the-art qubits, very large surface codes (d > 25) are necessary to achieve fault-tolerant quantum computing [2, 4, 5]. See §II for more background.

As surveyed in §VII, previously reported decoders capable of decoding errors as fast as they are measured, i.e., backlog-free, either exploit limited parallelism [6–8] or sacrifice accuracy [9, 10]. Sparse Blossom [8] and Fusion Blossom [11] feature an important algorithmic breakthrough in realizing MWPM-based decoders. Fusion Blossom can additionally leverage measurement round-level parallelism to meet the throughput requirement of very large d. However, due to their software-based realizations, both Sparse Blossom and Fusion Blossom suffer from decoding times per round longer than that of Helios by orders of magnitude at large d and higher noise levels. When used in a quantum computer, the computer would spend most of its execution time waiting for error correction.
In this paper we report a distributed Union-Find (UF) decoder (§III) and its FPGA implementation, called Helios (§IV). Given O(d³) parallel resources, our decoder achieves sublinear average time complexity according to empirical results for d up to 21, the first to do so to the best of our knowledge. Notably, adding more parallel resources will not further reduce the time complexity of the decoder, due to the inherent nature of error patterns.

Our decoder is a distributed design of, and logically equivalent to, the UF decoder first proposed in [12]. We implement the distributed UF decoder with Helios, a scalable architecture for organizing the parallel computation units. Helios is the first architecture of its kind that can scale to arbitrarily large surface codes by exploiting parallelism at the vertex level of the model graph.

In §VI, we present experimental validations of the distributed UF decoder and Helios using a VCU129 FPGA board [13] for up to d = 21. The decoder's average decoding time per measurement round under a phenomenological noise of 0.1% is 11.5 ns for d = 21, which is significantly faster than any existing decoder implementation. Our results successfully demonstrate, for the first time, a decoder design whose average time per measurement round decreases as d increases. This is evidence that the decoder can scale to arbitrarily large surface codes without a growing backlog.

In summary, we report the following contributions in this work.

• A distributed algorithm that implements the Union-Find decoder and can exploit parallel computing units to stop the decoding time per measurement round from growing with the code distance d.
• The Helios architecture and its FPGA-based implementation that realize the distributed Union-Find decoder.
• A set of empirical data based on the FPGA implementation that demonstrates decreasing decoding time per round as d grows, with an 11.5 ns decoding time per measurement round for d = 21 under a phenomenological noise of 0.1%.

Helios is open-source and available from [14].

II. BACKGROUND

A. Error Correction and Surface Code

Quantum error correction (QEC) is more challenging than classical error correction due to the nature of quantum bits. First, qubits cannot be copied to achieve redundancy, due to the no-cloning theorem. Second, the value of qubits cannot be directly measured, as measurements perturb the state of qubits. Therefore, QEC is achieved by encoding the logical state of a qubit as a highly entangled state of many physical qubits. Such an encoded qubit is called a logical qubit.

The surface code is a widely used error correction code for quantum computing due to its high error correction capability and its ease of implementation, requiring connectivity only between adjacent qubits. A distance-d rotated surface code is a topological code made out of 2d² − 1 physical qubits arranged as shown in Figure 1. A key feature of surface codes is that a larger d can exponentially reduce the rate of logical errors, making them advantageous. For example, even if the physical error rate is 10 times below the threshold, d should be greater than 17 to achieve a logical error rate below 10⁻¹⁰ [2].

A surface code contains two types of qubits, namely data qubits and ancilla qubits. The data qubits collectively encode the logical state of the qubit. The ancilla qubits (called X-type and Z-type) entangle with the data qubits, and by periodically measuring the ancilla qubits, physical errors in all qubits can be discovered and corrected.
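As a quick sanity check on these counts, the following sketch (our own illustration, not part of the paper's artifact) computes the qubit budget of a rotated distance-d surface code: d² data qubits and d² − 1 ancillas, i.e., 2d² − 1 physical qubits in total, of which (d + 1)(d − 1)/2 ancillas are Z-type — the quantity relevant for decoding X errors below.

```python
def rotated_surface_code_counts(d: int) -> dict:
    """Qubit budget of a rotated distance-d surface code (d odd)."""
    assert d % 2 == 1 and d >= 3
    data = d * d                      # data qubits
    ancilla = d * d - 1               # X-type plus Z-type ancillas
    z_type = (d + 1) * (d - 1) // 2   # ancillas that detect X errors
    return {"data": data, "ancilla": ancilla, "z_type": z_type,
            "total": data + ancilla}  # total = 2d^2 - 1

for d in (3, 5, 17, 21):
    print(d, rotated_surface_code_counts(d))
# d = 5 yields 12 Z-type ancillas, matching the 12x5 PE array of Figure 3.
```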
An X error occurring in a data qubit will flip the measurement outcome of the Z ancilla qubits connected with that data qubit, and a Z error will likewise flip the X ancilla qubits. Such a measurement outcome is called a defect measurement. Because ancilla qubits themselves can also suffer from physical errors, multiple rounds of measurement are necessary. The outcomes from these multiple rounds of measurements of ancilla qubits constitute a syndrome. Figure 2a shows a syndrome with sample physical qubit errors and shows how they are detected by ancilla qubits. We only show X errors and measurement errors on Z-type ancillas, because Z errors and measurement errors on X-type ancillas can be independently dealt with in the same way.

A syndrome can be conveniently represented by a graph called the decoding graph, in which a vertex represents a measurement outcome of an ancilla and an edge a data qubit. Vertices corresponding to defect measurements are specially marked. The weight of an edge is determined by the probability of error in the corresponding data qubit or measurement. For a distance-d surface code, there are (d + 1)(d − 1)/2 vertices. This decoding graph can be extended to three dimensions, in which multiple identical planar layers are stacked on each other. Each layer represents a round of measurement. The minimum number of measurement rounds required to complete a fault-tolerant logical operation is d, which is also the number of rounds we consider in this paper. Corresponding vertices in adjacent layers are connected by edges representing the corresponding ancilla's measurement error probability. That is, there are (d + 1) × ((d − 1)/2) × d vertices in this three-dimensional graph. Figure 2b shows the decoding graph for a syndrome from a d = 5 surface code.

[Fig. 1: (a) Rotated CSS surface code (d = 5), a commonly used type of surface code. The white circles are data qubits and the black circles are the Z-type and X-type ancillas. (b) and (c) Measurement circuits of Z-type and X-type ancillas. Excluding the ancillas on the border, each Z-type and X-type ancilla interacts with 4 adjacent data qubits.]

[Fig. 2: (a) An example syndrome of Z stabilizers for a d = 5 surface code with 5 rounds of measurements. The syndrome contains an isolated X error (round 1), an isolated measurement error (rounds 1 and 2), a chain of two X errors (round 3), and a chain containing X errors and measurement errors spanning multiple measurement rounds (rounds 3 and 4). (b) Decoding graph, with defect vertices marked red, for the syndrome in (a).]

B. Error Decoders

Given a syndrome, an error decoder identifies the underlying error pattern, which is then used to generate a correction pattern. As multiple error patterns can generate the same syndrome, the decoder has to make a probabilistic guess of the underlying physical error. The objective is that, when the correction pattern is applied, the chance of the surface code entering a different logical state (i.e., a logical error) is minimized.

a) Metrics: The two important aspects of decoders are accuracy and speed. A decoder must correct errors faster than syndromes are produced to avoid a backlog. A faster decoder also allows more time for the quantum hardware to do actual useful work. The average decoding time per measurement round is a widely used criterion for speed.
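As a concrete companion to the decoding graph described above, the sketch below builds the three-dimensional graph for d rounds. It is our own illustration: the (round, row, col) indexing and the plain rectangular space adjacency are simplifying assumptions (the true rotated-lattice connectivity differs at the boundaries), and unit weights stand in for the probability-derived edge weights.

```python
import itertools

def build_decoding_graph(d: int):
    """3-D decoding graph: (d+1)(d-1)/2 vertices per round, d rounds."""
    rows, cols = (d - 1) // 2, d + 1
    vertices = list(itertools.product(range(d), range(rows), range(cols)))
    edges = {}
    for t, r, c in vertices:
        if c + 1 < cols:                       # space edge: data-qubit error
            edges[((t, r, c), (t, r, c + 1))] = 1
        if r + 1 < rows:                       # space edge: data-qubit error
            edges[((t, r, c), (t, r + 1, c))] = 1
        if t + 1 < d:                          # time edge: measurement error
            edges[((t, r, c), (t + 1, r, c))] = 1
    return vertices, edges

vs, es = build_decoding_graph(5)
print(len(vs))   # 60 = (5+1) * ((5-1)/2) * 5 vertices
```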
A decoder must make a careful tradeoff between speed and accuracy: a faster decoder with lower accuracy requires a larger d to achieve any given logical error rate, which may require more computation overall.

b) Union-Find (UF) Decoder: The UF decoder is a fast surface code decoder design first described by Delfosse and Nickerson [12]. According to [15], it can be viewed as an approximation to the blossom algorithm that solves minimum-weight perfect matching (MWPM) problems. It has a worst-case time complexity of O(d³α(d)), where α is the inverse of Ackermann's function, a slow-growing function that is less than three for any practical code distance. Based on our analysis, it has an average-case time complexity slightly higher than O(d³).

Algorithm 1 describes the UF decoder. It takes a decoding graph G(V, E) as input. Each edge e ∈ E has a weight and a growth, denoted by e.w and e.g, respectively. e.g is initialized to 0, and the decoder may grow e.g until it reaches e.w; when that happens, we say the edge is fully grown. The decoder maintains a set of odd clusters, denoted by L. L is initialized to include {v} for every v ∈ V that is a defect measurement (L5). Each cluster C keeps track of whether its cardinality is odd or even, as well as its root element.

The UF decoder iterates over growing and merging the odd cluster list until there are no more odd clusters (the while loop of Algorithm 1). Each iteration has two stages: Growing and Merging. In the Growing stage, each odd cluster "grows" by increasing the growth of the edges incident to its boundary. This process creates a set of fully grown edges F (L10 to L19). The Growing stage is the more time-consuming step, as it requires traversing all the edges on the boundaries of all the odd clusters and updating the global edge table. Since the number of edges is O(d³), the UF decoder is not scalable to surface codes with large d.

In the Merging stage, the decoder goes through each fully grown edge to merge the two clusters connected by the edge using the UNION(u, v) operation. UNION(u, v) merges the two clusters containing u and v by assigning a common root element to the two clusters. When two clusters merge, the new cluster may become even. When there is no more odd cluster, the decoder finds a correction within each cluster and combines them to produce the correction pattern (L25).
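The UNION(u, v) operation above is the classic disjoint-set primitive. A minimal Python sketch — our own illustration of the standard data structure with union by size, path compression, and the odd/even parity bookkeeping the decoder's while-loop tests — is shown below.

```python
class ClusterUF:
    """Disjoint sets with union-by-size, path compression and parity.

    parity[root] tracks whether the cluster contains an odd number of
    defect vertices -- the quantity the UF decoder's while-loop tests.
    """
    def __init__(self, defects, vertices):
        self.parent = {v: v for v in vertices}
        self.size = {v: 1 for v in vertices}
        self.parity = {v: (v in defects) for v in vertices}

    def find(self, v):
        root = v
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[v] != root:          # path compression
            self.parent[v], v = root, self.parent[v]
        return root

    def union(self, u, v):
        ru, rv = self.find(u), self.find(v)
        if ru == rv:
            return
        if self.size[ru] < self.size[rv]:
            ru, rv = rv, ru
        self.parent[rv] = ru                   # attach smaller under larger
        self.size[ru] += self.size[rv]
        self.parity[ru] ^= self.parity[rv]     # odd merged with odd becomes even

    def is_odd(self, v):
        return self.parity[self.find(v)]

uf = ClusterUF(defects={1, 2}, vertices=range(6))
uf.union(1, 2)
print(uf.is_odd(1))   # False: two defects merged into an even cluster
```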
Algorithm 1: Union-Find Decoder
    input:  A decoding graph G(V, E) with an X (or Z) syndrome
    output: A correction pattern
    1  % Initialization
    2  for each v ∈ V do
    3      if v is a defect measurement then
    4          Create a cluster {v}
    5      end
    6  end
    7  while there is an odd cluster do
    8      % Growing
    9      F ← ∅
    10     for each odd cluster C do
    11         for each e = ⟨u, v⟩, u ∈ C, v ∉ C do
    12             if e.growth < e.w then
    13                 e.growth ← e.growth + 1
    14                 if e.growth = e.w then
    15                     F ← F ∪ {e}
    16                 end
    17             end
    18         end
    19     end
    20     % Merging
    21     for each e = ⟨u, v⟩ ∈ F do
    22         UNION(u, v)
    23     end
    24 end
    25 Build a correction within each cluster by constructing a spanning tree

III. DISTRIBUTED UF DECODER DESIGN

Our goal in building a QEC decoder is scalability with the number of qubits. As surface codes can exponentially reduce the logical error rate with respect to d, larger surface codes with hundreds or even thousands of qubits are necessary for fault-tolerant quantum computing. Therefore, the average decoding time per measurement round should not grow with d, to avoid an exponential backlog for larger d.

We choose the UF decoder for two reasons. First, it has a much lower time complexity than the MWPM algorithm. Although in general the UF decoder achieves lower decoding accuracy than MWPM decoders, it is as accurate in many interesting surface codes and noise models [15, 16]. Second, the UF decoder maintains fewer intermediate states, which makes it easier to implement in a distributed manner.

We observe that the Growing stage, from L10 to L19 in Algorithm 1, operates on each vertex independently, without dependencies on other vertices: a vertex requires only the parity of the cluster it is part of. Moreover, during the Merging stage, a vertex only needs to interact with its immediate neighbors (L22).

A. Overview

Like the original UF decoder, our distributed UF decoder is based on the decoding graph. Logically, the distributed decoder associates a processing element (PE) with each vertex in the graph; therefore, when describing the distributed decoder, we often use PE and vertex interchangeably. All PEs run the same algorithm, specified by Algorithm 2. Like the UF decoder, a PE iterates over the Growing and Merging stages, with Merging split into two: Merging and Checking. Within each stage, PEs operate independently. A central controller coordinates their transition from one stage to the next, as specified by Algorithm 6.

A key challenge for the PE algorithm is to (i) merge clusters and (ii) compute the cluster parity, without central coordination. To achieve (i), each PE is assigned a unique identifier (a natural number) and maintains the identifier of the cluster it belongs to, cid. The cid is the lowest identifier among the cluster's PEs, and the PE with this lowest identifier is called the root of the cluster. When two PEs connected by a fully grown edge have different cids, the PE with the higher cid adopts the lower value, resulting in the merging of their clusters. To achieve (ii), each PE maintains a parent. When a PE adopts the cid of an adjacent PE, it sets the latter as its parent. The parenthood relation between PEs creates a spanning tree for each cluster that is maintained by the PEs locally and in which every PE in the cluster has a directional path to the root of the cluster. The cluster parity can then be computed using a convergecast algorithm on the spanning tree. We describe the PE algorithm in detail in §III-D.
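To see why this local rule converges, the sketch below simulates the cid-adoption flooding on a toy cluster graph, counting synchronous rounds until every vertex agrees on the root. The graph and the round-based scheduling are our own illustrative assumptions; within a stage the hardware is event-driven rather than round-synchronous.

```python
def flood_cid(neighbors):
    """Synchronous simulation of cid adoption over fully grown edges.

    neighbors: dict vertex-id -> set of vertex-ids joined by fully grown
    edges. Returns (cid per vertex, parent per vertex, rounds needed).
    """
    cid = {v: v for v in neighbors}            # every PE starts as its own root
    parent = {v: v for v in neighbors}
    rounds = 0
    while True:
        updates = {}
        for v in neighbors:
            best = min(neighbors[v] | {v}, key=lambda u: cid[u])
            if cid[best] < cid[v]:
                updates[v] = (cid[best], best)  # adopt lower cid, set parent
        if not updates:
            return cid, parent, rounds
        for v, (c, p) in updates.items():
            cid[v], parent[v] = c, p
        rounds += 1

# A chain 4-3-2-1-5: information about root 1 must travel the diameter.
chain = {1: {2, 5}, 2: {1, 3}, 3: {2, 4}, 4: {3}, 5: {1}}
cid, parent, rounds = flood_cid(chain)
print(cid, rounds)   # all cids converge to 1 in 3 rounds (the graph diameter)
```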
To implement our distributed UF algorithm, we require several PE states, some of which are located in shared memories. We limit all communication between PEs, and between PEs and the controller, to coherent shared memories to ensure fast communication and to prevent the stalling that could result from message-based communication.

B. PE States

A PE has direct read access to its local states and to some states of incident PEs. A PE can only modify its own local states. Thanks to the decoding graph, a PE has immediate access to the following objects.

• v, the vertex it is associated with.
• v.E, the set of edges incident to v.
• v.U, the set of vertices that are incident to any e ∈ v.E other than v itself. We say these vertices are adjacent to v.

The algorithm augments the data structures of each vertex and edge of the decoding graph, following the UF decoder design [12]. For each vertex v ∈ V, the following information is added.

• id: a unique identity number, ranging from 1 to n where n = |V|. id is statically assigned and never changes.
• m: a binary state indicating whether the measurement outcome is a defect measurement (true) or not (false). m is initialized according to the syndrome.
• cid: a unique integer identifier for the cluster to which v belongs, equal to the lowest id of all the vertices inside the cluster. The vertex with this lowest id is called the cluster root. cid is initialized to id; that is, each vertex starts as its own single-vertex cluster. When cid = id, the vertex is the root of a cluster.
• odd: a binary state indicating whether the cluster is odd. odd is initialized to m.
• codd: a copy of odd.
• parent: a reference to the parent. As noted before, this parenthood relationship creates a spanning tree that connects all vertices (PEs) of a cluster with directional edges.
• st_odd: a binary state representing the parity of m over v and all its descendants.
• stage: the stage the PE currently operates in.
• busy: a binary state indicating whether the PE has any pending operations. (The algorithms operating on these states are listed below as Algorithms 2–5.)
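For readers who prefer code to prose, this is the per-PE state in compact form — a Python dataclass sketch of our own, where the field names follow the paper and the defaults follow the initialization rules above.

```python
from dataclasses import dataclass

@dataclass
class PEState:
    """Per-vertex state of a processing element (initialization per the list above)."""
    id: int                    # static, unique, 1..|V|
    m: bool                    # defect measurement?
    cid: int = -1              # cluster id; starts as own id
    odd: bool = False          # cluster parity as seen by this PE
    codd: bool = False         # copy of odd, read by the controller
    parent: int = -1           # spanning-tree parent; starts as self
    st_odd: bool = False       # parity of own subtree
    stage: str = "growing"
    busy: bool = False

    def __post_init__(self):
        self.cid = self.id
        self.parent = self.id
        self.odd = self.codd = self.st_odd = self.m

pe = PEState(id=7, m=True)
print(pe.cid, pe.parent, pe.odd)   # 7 7 True
```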
Algorithm 2: Algorithm for vertex v in the distributed UF decoder
    26 v.cid ← v.id; v.odd ← v.m; v.parent ← v.id; v.st_odd ← v.m
    27 while true do
    28     if global_stage = terminate then
    29         return
    30     end
    31     Wait until global_stage = growing
    32     growing(v)
    33     Wait until global_stage = merging
    34     do
    35         merging(v)
    36         Wait until global_stage = checking
    37         checking(v)
    38         Wait until global_stage ≠ checking
    39     while global_stage = merging
    40 end

Algorithm 3: Vertex growing algorithm
    41 function growing(vertex v)
    42     v.busy ← true; v.stage ← growing
    43     if v.odd then
    44         for each e = ⟨u, v⟩ ∈ v.E atomic do
    45             if e.growth < e.w and u.cid ≠ v.cid then
    46                 e.growth ← e.growth + 1
    47             end
    48         end
    49     end
    50     v.busy ← false
    51 end

Algorithm 4: Vertex merging algorithm
    52 function merging(vertex v)
    53     v.busy ← true; v.stage ← merging
    55     for each u ∈ v.nb do
    56         if u.cid < v.cid then
    57             v.cid ← u.cid
    58             v.parent ← u.id
    59         end
    60     end
    62     v.st_odd ← XOR(u.st_odd | u ∈ v.child, m)
    64     if v.parent = v.id then v.odd ← v.st_odd
    65     else v.odd ← u.odd where v.parent = u.id
    67     v.busy ← false
    68 end

Algorithm 5: Vertex checking algorithm
    69 function checking(vertex v)
    70     v.busy ← true
    72     if ∀u ∈ v.nb, (u.cid = v.cid & v.odd = u.odd) and
               v.st_odd = XOR(w.st_odd | w ∈ v.child, m) and
               (v.parent ≠ v.id or v.odd = v.st_odd) then
    73         v.busy ← false
    74     end
    75     v.stage ← checking
    76 end

Algorithm 6: The controller coordinates all PEs along stages and detects the presence of odd clusters
    77 while true do
    78     global_stage ← growing
    79     Wait until ∀v ∈ V, v.stage = growing
    80     Wait until ∀v ∈ V, v.busy = false
    82     do
    83         global_stage ← merging
    84         Wait until ∀v ∈ V, v.stage = merging
    85         Wait until ∀v ∈ V, v.busy = false
    87         global_stage ← checking
    88         Wait until ∀v ∈ V, v.stage = checking
    89     while ∃v ∈ V, v.busy = true
    91     if ∀v ∈ V, v.codd = false then
    92         global_stage ← terminate
    93         return
    94     end
    95 end

For each edge e ∈ E, the decoder maintains e.growth, which indicates the growth of the edge, in addition to e.w, the weight. e.growth is initialized as 0. The decoder grows e.growth until it reaches e.w and e becomes fully grown. For clarity of exposition, we introduce a mathematical shorthand v.nb, the set of vertices connected with v by fully grown edges, i.e., v.nb = {u | e = ⟨v, u⟩ ∈ v.E ∧ e.growth = e.w}. We call these vertices the neighbors of v. Note that neighbors are always adjacent, but not all adjacent vertices are neighbors. We also use v.child to denote all child vertices of a vertex in the tree representation, i.e., v.child = {u | u.parent = v.id}. Since trees are built within a cluster, all child vertices are neighbors, but not all neighbors are child vertices.

C. Shared memory based communication

We use coherent shared memory for any shared state that has a single writer. For all shared memories, given the coherence, a read always returns the most recently written value. Like ordinary memory, we also assume both reads and writes are atomic. Figure 4 illustrates these memory blocks.

• Memory read/write for PE (v) and read-only for adjacent PEs, i.e., ∀u ∈ v.U. v.id, v.cid, v.odd, v.parent and v.st_odd reside in this memory (S1).
• Memory read/write for PE (v) and read-only for the controller. The PE local states v.codd, v.stage and v.busy reside in this memory (S2).
• Memory for e.growth, which can be written by its two incident PEs (S3).
• Memory read/write for the controller and read-only for all PEs. The controller state global_stage is stored in this memory (S4).

D. PE Algorithm

All PEs iterate over three stages of operation. Within each stage, they operate independently, but they transit from one stage to the next when the controller updates global_stage. When a PE enters a stage, it sets v.stage accordingly and keeps v.busy true until it finishes all work in the stage. The controller uses these two pieces of information from all PEs to determine whether a stage has started and completed, respectively (see §III-E). We next describe the three stages of the PE algorithm.

In the Growing stage, vertices at the boundary of an odd cluster increase e.growth for their boundary edges (L46). As PEs perform Growing simultaneously, two adjacent PEs may compare e.w and e.growth and update e.growth for the same e. Such compare-and-update operations must be atomic to avoid data races.

In the Merging stage, two clusters connected through a fully grown edge merge by adopting the lower cluster id (cid) of the two. To achieve this, each PE compares its cid with its neighbors' (L56). If the other incident vertex of a fully grown edge has a lower cid, the PE adopts the lower cid as its own (L57). The merging process continues until every PE in the cluster has the same cid, which is the lowest vertex identifier in the cluster. In order to compute the cluster parity, when a PE adopts the cid of an adjacent PE, it sets the latter as its parent (L58). This parenthood relation creates a spanning tree for each cluster that includes all its PEs (vertices), with directional edges. Each PE then calculates the parity of itself and all its children as st_odd (L62). Note that odd of the root PE is the same as its st_odd (L64); all other PEs copy the odd of their respective parents (L65).

Astute readers may point out that v.st_odd should be the parity of v and all its descendants, not just its children. This is achieved by two modifications compared to the UF decoder. First, a new stage, Checking, is added after Merging to determine whether the PE (vertex) needs to go back to Merging again (L72). Second, all PEs iterate through Merging and Checking until no PE has anything left to do in Merging (L34–L39). These allow the parity computation to propagate from the leaves to the roots of the spanning trees, while cid and odd propagate from the roots to the leaves.

a) Building corrections within clusters: While the original UF decoder builds a spanning tree within each even cluster at the end to generate a correction (L25), our distributed UF decoder already maintains a spanning tree through the parenthood relation and is therefore more efficient in generating corrections.

b) Alternative message-based design: Early on, we considered the use of message-based communication to update the parity of a cluster [17]. This design requires directional links between PEs, with each PE serving as a router for forwarding messages, thus increasing the complexity of the PEs. Moreover, the finite capacity of directional links could lead to congested links, causing PEs to stall, which in turn would slow down the decoding process and increase tail latency.

E. Controller Algorithm

The controller moves all PEs, and itself, along the three stages. In the Growing and Merging stages, it checks the v.busy signal of each PE and determines that a stage is complete when all PEs have v.busy = false.
In the Checking stage, the controller determines completion when all PEs have moved to the Checking stage. Upon completion, the controller updates the global_stage variable to move to the next stage, and the PEs acknowledge this update by updating their own v.stage variables.

The controller also detects the presence of odd clusters. At the end of the Merging and Checking stages, it reads the v.codd value of each vertex (L91). If any vertex has v.codd = true, the controller sets the global stage variable to Growing to continue the algorithm; otherwise, it sets it to Terminate to end the algorithm.

F. Time Complexity Analysis

We first show the PE coordination complexity and then calculate the overall time complexity based on it.

a) PE coordination complexity: The controller's time complexity is contingent upon the implementation of the shared memory for v.busy and v.codd. Since both checks involve logical OR operations over individual PE information, the most efficient implementation consists of a logical tree of OR operations, yielding a time complexity of O(log(d)).

b) Worst-case time complexity: The worst-case time complexity of our distributed UF decoder is O(d³ log(d)), which we explain as follows. Each stage of our distributed UF algorithm takes O(1) time; thus the worst case depends on the total number of stages. In the Merging stage, both propagating the cid and calculating the parity use shared-memory-based flooding and convergecast algorithms, each of which requires O(D) merging and checking stages, where D is the cluster diameter. The maximum possible diameter, O(d³), occurs when a series of single-vertex clusters merge, creating a chain of clusters with a total diameter of O(d³). As coordinating between stages has a complexity of O(log(d)), the overall time complexity is O(d³ log(d)). Nevertheless, this worst-case scenario is extremely rare, since larger clusters are exponentially less likely to occur. As shown by the empirical results reported in §VI, the average time grows sublinearly with d.

IV. HELIOS ARCHITECTURE

We next describe Helios, the architecture for the distributed UF decoder.

A. Overview

Helios organizes the PEs and the controller in a custom topology that combines a 3-D grid and a tree, as illustrated by Figure 3 and explained below.

• PEs are organized according to the positions in the model graph of the vertices they represent. We assign v.id sequentially, starting with 1 at the bottom-left corner and continuing in row-major order within each measurement round (see the sketch after this list). Shared memories S1 (v.cid, v.odd, v.parent and v.st_odd) and S2 (v.codd, v.stage and v.busy) are per PE.
• Shared memory S3 (e.growth) is placed in the incident PE with the lower id.
• A link between every two adjacent PEs allows each to read the other's S1, and the one with the higher id to read the other's S3. This results in a network of links with a 3-D grid topology. As a PE represents a vertex in the model graph, a link represents an edge. Broad pink lines in Figure 3 represent these links.
• The controller is realized as a tree of control nodes (§IV-B). The leaf nodes of the tree contain shared memory S4.
• A link between each PE and the controller allows the controller to read from S2 and the PEs to read from S4. Dashed orange lines in Figure 3 represent these links.
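The id assignment in the first bullet is a simple linearisation of the three-dimensional vertex coordinates. The sketch below, our own illustration using the same (round, row, col) indexing assumed in the decoding-graph sketch of §II, makes it explicit.

```python
def vertex_id(t: int, r: int, c: int, d: int) -> int:
    """Row-major id, starting at 1, continuing round by round."""
    rows, cols = (d - 1) // 2, d + 1       # Z-ancilla grid per round
    per_round = rows * cols
    return 1 + t * per_round + r * cols + c

# d = 5: 12 PEs per round, so round 1 starts at id 13 (cf. Figure 3).
print(vertex_id(0, 0, 0, 5), vertex_id(1, 0, 0, 5))   # 1 13
```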
B. Controller

Helios implements the controller as a tree of control nodes to avoid a scalability bottleneck. The controller requires three pieces of information from each PE: v.codd, v.stage and v.busy. Each leaf control node of the tree is directly connected to a subset of the PEs; we can consider these PEs the children of the leaf node. Each node in the tree gathers vertex information from its children and reports it to its parent. With information from all vertices, the root control node runs Algorithm 6 and decides whether to advance the stage. We leave the height, the branching factor, and the subset of PEs connected to each leaf node as implementation choices. The necessary requirement is that the controller should not slow down the overall design.

V. FPGA IMPLEMENTATION

We next describe an implementation of Helios targeting a single FPGA. We choose an FPGA for two reasons. It supports massively parallel logic, which is essential as the number of PEs grows proportionally to d³ in our distributed UF design. Moreover, it allows deterministic latency for each operation, which facilitates synchronizing all the PEs. Our implementation contains approximately 3000 lines of Verilog code, which is publicly available at [14].

A. Leveraging global synchronization in FPGA

We leverage global synchronization inside the FPGA to speed up our distributed UF algorithm. Running the FPGA design in a single clock domain allows us to keep all the PEs and control nodes tightly synchronized. Notably, we simplify our algorithm as follows. First, we run the Merging (L121) and Checking (L137) stages in parallel within each PE; the tight synchronization of all PEs guarantees that false-negative busy signals do not occur. Second, we reduce the overhead of synchronization by having the controller coordinate only the move to the Growing stage at the beginning of each iteration (L101). As each PE can perform the Growing stage deterministically in a single cycle, PEs can move to the Merging stage without central coordination (L102). Additionally, as the controller deterministically knows the exact stage each PE is in, stage is stored locally and not shared with the controller. Thus the information from the PEs to the controller is limited to two bits, v.busy and v.odd.
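The two bits per PE are combined by the control-node tree as a pure OR-reduction, which is what gives the O(log(d)) coordination cost noted in §III-F. A software model of that reduction — our own sketch; the fan-in of 8 is an arbitrary assumption — looks as follows.

```python
def or_reduce_tree(bits, fanin=8):
    """Model of the control-node tree: OR-reduce one bit per PE.

    Returns (reduced bit, tree depth); depth models latency in levels.
    """
    depth = 0
    level = list(bits)
    while len(level) > 1:
        level = [any(level[i:i + fanin]) for i in range(0, len(level), fanin)]
        depth += 1
    return level[0], depth

# d = 21 has (d+1) * ((d-1)/2) * d = 4620 PEs; one of them is still busy.
busy_bits = [False] * 4619 + [True]
print(or_reduce_tree(busy_bits))   # (True, 5)
```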
In order to ensure that the S4 memory has a single writer, we adjust the PE logic to update growth by implementing a modified compare-and-update operation (L109) as shown in Figure 5. The PE that houses the S3 memory performs this operation, increasing e.growth by two when both endpoints of the edge have v.odd set to true. Fig. 4: The bottom left corner of the PE array shown in Figure 3. Only part of the logic and memory inside PE 1 is shown: growth (S3) is per edge and is stored in the PE with lower id. grow logic (in brown) calculates the updated growth value. edge_busy (in green) is per adjacent PE and is used to calculate v.busy. **60** **49** `PE 3` odd, cid, parent, st_odd **37** **ControlNode** parent, st_odd odd, cid,growth, `S3` ``` growth ``` **ControlNodeRoot** **ControlNode** **25** `grow` ``` edge_busy ``` **Control** `S2` **Controller** **Node** **13** `codd` codd `busy` busy **3** **4** `stage` **1** **2** global_stage ``` To/from controller ``` Fig. 3: Helios architecture for d=5 surface code for 5 measurement rounds. As d=5 surface code has 12 ancilla qubits of Z-type, Helios contains a 12x5 of the logic and memory inside PE 1 is shown: ``` grow odd[0] Adder odd[1] Min w stage growing == ``` |grow ] Adder 2x1 D Q|Col2|Col3| |---|---|---| |] w|Min Mux Q growth == clk|| Fig. 5: Circuit diagram of grow sub-module and Verilog implementation. This implements the atomic compare and update operation in L45 as part of the PE module. odd[0] and odd[1] represents the odd state of the two incident PEs of the edge. _C. Resource Usage_ On the VCU129 FPGA development board [18], we are able to support the distributed UF decoder with d up to 21, due to resource limits. Table I shows the resource usage for various _d. While the numbers of vertices and edges grow by O(d[3]),_ resource usage grows faster for the following reasons. First, resource usage by a PE grows due to the increase of bit-width required for v.id, and v.cid. A PE for d = 21 with six adjacent PEs requires 200 LUTs and a similar PE for d = 5 requires only 155 LUTs. Second, PEs on the surface of the threedimensional array as shown in Figure 3 use fewer resources than those inside because the latter have more incident edges. When d increases a higher portion of PEs are inside the array. We find that LUTs are the most critical resource in the FPGA for our design. It may be possible to run a design with _d = 29 on a Xilinx VU19 FPGA [19], which currently has_ the highest number of LUTs among commercially available FPGAs at the time of this writing. Potentially larger d values can be supported by using a network of FPGAs. Existing commercial FPGAs like VCU129 often dedicate ----- **Algorithm 7: FPGA-oriented algorithm for vertex v** in the distributed UF decoder. 
**96 v.cid ←** _v.id; v.odd ←_ _v.m; v.parent ←_ _v.id;_ _v.st odd ←_ _v.m_ **97** **98 % Stage transition logic** **99 At every positive clock edge do** **100** **if global_stage =terminate then return** **101** **else if global_stage =growing then** _v.stage ←_ growing **102** **else if v.stage =growing then v.stage ←** merging **103 end** **104** **105 % Growing logic** **106 At every positive clock edge do** **107** **if v.stage =growing then** **108** **for each e = ⟨u, v⟩∈** _v.E and v.id < u.id do_ **109** **if e.growth< e.w and u.cid ̸= v.cid then** **110** **if v.odd and u.odd then** **111** _e.growth←_ MIN(e.growth+2, w) **112** **end** **113** **else if v.odd or u.odd then** **114** _e.growth←_ MIN(e.growth+1, w) **115** **end** **116** **end** **117** **end** **118** **end** **119 end** **120** **121 % Merging logic** **122 At every positive clock edge do** **123** Let u be arg minu∈(v.nb ∪{v})(u.cid) **124** **if u.cid < v.cid then** **125** _v.cid ←_ _u.cid_ **126** _v.parent ←_ _u.id_ **127** **end** **128 end** **129 At every positive clock edge do** **130** _v.st odd ←_ _subtree parity(v)_ **131 end** **132 At every positive clock edge do** **133** **if v.parent = v.id then v.odd ←** _v.st odd_ **134** **else v.odd ←** _u.odd where u.id = v.parent_ **135 end** |d|# of LUTs|# of registers| |---|---|---| |3|970|528| |5|6425|2425| |9|52111|13754| |13|165718|47211| |17|448314|122028| |21|898715|238939| **136** **137 % Checking logic** **138 At every positive clock edge do** **139** **if ∃u ∈** _v.nb, (u.cid ̸= v.cid ∥_ _v.odd ̸= u.odd) then_ **140** _v.busy ←_ true **141** **end** **142** **else if v.st odd ̸= subtree parity(v) then** **143** _v.busy ←_ true **144** **end** **145** **else if (v.parent = v.id & v.odd ̸= v.st odd) then** **146** _v.busy ←_ true **147** **end** **148** **else** **149** _v.busy ←_ false **150** **end** **151 end** **152** **153 function subtree parity(v)** **154** _parity ←_ _v.m_ **155** **for each u ∈** _v.child do_ **156** _parity ←_ XOR(parity, u.st odd) **157** **end** **158** **return parity** **159 end** a lot of silicon to digital signal processing (DSP) units and **Algorithm 8: FPGA-oriented controller logic** **161 global_stage ←** growing **162 At every positive clock edge do** **163** **if global_stage = growing then** **164** global_stage ← merging **165** %Wait until all PEs are in Merging Stage **166** Wait 2 clock cycles **167** **end** **168** **else if ∀v ∈** _V, v.busy = false then_ **169** **if ∀v ∈** _V, v.codd = false then_ **170** global_stage ← terminate **171** **end** **172** **else** **173** global_stage ← growing **174** **end** **175** **end** **176 end** TABLE I: Resource usage of Helios on VCU129 FPGA board for selected d _d_ # of LUTs # of registers 3 970 528 5 6425 2425 9 52111 13754 13 165718 47211 17 448314 122028 21 898715 238939 block RAMs (BRAMs). However, our design does not use any DSPs because it only requires comparison operators and fixed point additions. Our design does not use any BRAMs because all communication between PEs is shared memory based, which is implemented using registers. Therefore, an ideal FPGA designed to run our distributed UF decoder would be simpler than current large FPGAs, as it would only need a large number of LUTs, no DSP units, and a limited amount of BRAM. VI. EVALUATION The main objective of our evaluation is to assess the scalability of our distributed UF implementation. 
VI. EVALUATION

The main objective of our evaluation is to assess the scalability of our distributed UF implementation. To that end, we first describe our methodology and then show that the latency of our implementation grows sub-linearly with respect to the surface code size d. In addition, we also evaluate the impact of noise and of non-identically distributed errors on latency.

_A. Methodology_

For speed, we measure the number of cycles required to decode a syndrome. To evaluate correctness, we compare the results of our distributed UF decoder with the results from the original UF decoder. We compare clusters because the original UF decoder and ours only differ in the clustering process. In the rest of our evaluation, we focus only on the speed of the distributed UF decoder and not on the accuracy of its results.

a) Experimental setup: As our evaluation setup, we use the Xilinx VCU129 FPGA development board [18], which is capable of decoding surface codes with d up to 21. We use a MicroBlaze soft processor core [20] instantiated inside the FPGA to generate the syndromes and transmit them to Helios, which runs on the same FPGA. We ran 10^6 trials for each error rate and distance.

b) Noise model: We use the phenomenological noise model [1], which accounts for errors in both data and ancilla qubits. As decoding for X-errors and Z-errors is independent and identical, we focus only on decoding X-errors in the evaluation. To emulate noise, we independently flip the two adjacent stabilizer measurements for each data qubit with a probability of p (the physical error rate) in each measurement round, and we also independently flip each stabilizer measurement with a probability of p, except in the first and last measurement rounds. This is a widely used approach in prior QEC decoders [7, 9, 21]. We then generate the syndrome from the physical errors and provide it as input to our decoder. For most of our experiments we use p = 0.001 as the default, like other works [7]. This value is reasonable for surface codes, as p should be sufficiently below the threshold (at least ten times lower) to exponentially suppress errors. We note that the UF decoder has a threshold of p = 0.024, calculated by Delfosse and Nickerson [12].

_B. Decoding Time_

We experimentally show how the average time for decoding grows with the size of the surface code. Additionally, we show the effect of noise on the average time.

a) Average time: To demonstrate the scalability of our algorithm with respect to the size of the surface code, we plot the average decoding time against the size of the surface code. In Figure 6 (left) the y-axis shows the average decoding time in nanoseconds and the x-axis shows the distance (d) of the surface code. We see that for all 3 physical error rates we tested, the average decoding time grows sub-linearly with respect to the surface code size, which satisfies the scalability criterion needed to avoid an exponential backlog. This implies that the average time to decode a measurement round decreases with increasing d, as shown in Figure 6 (right).
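To restate the noise model concretely, the following small Python sampler is our sketch (boundary data qubits adjacent to a single stabilizer are elided for brevity):

```python
# Sketch (ours) of the phenomenological noise sampling described above:
# each data qubit flips its two adjacent stabilizer measurements with
# probability p per round, and each measurement itself flips with
# probability p, except in the first and last rounds. The defects fed to
# the decoder would be the XOR of consecutive rows of the returned matrix.
import random

def sample_syndrome(adjacent_pairs, n_stab, rounds, p, seed=0):
    """adjacent_pairs: list of (s1, s2) stabilizer indices per data qubit.
    Returns a rounds x n_stab matrix of measurement outcomes (0/1)."""
    rng = random.Random(seed)
    state = [0] * n_stab                      # accumulated data-qubit errors
    measurements = []
    for r in range(rounds):
        for s1, s2 in adjacent_pairs:         # data-qubit errors this round
            if rng.random() < p:
                state[s1] ^= 1
                state[s2] ^= 1
        row = state.copy()
        if 0 < r < rounds - 1:                # measurement errors
            for s in range(n_stab):
                if rng.random() < p:
                    row[s] ^= 1
        measurements.append(row)
    return measurements
```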
b) Distribution of decoding time: To understand the growth of decoding time with respect to the code distance, in Figure 7a we plot the distribution of decoding time for different code distances. The y-axis shows the decoding time and the x-axis shows the distance (d) of the surface code. The average cycle count is indicated with ×.

The key factor determining the decoding time is the number of growing and merging iterations the distributed UF decoder requires. The peaks in the probability distribution for each distance in Figure 7a correspond to the number of iterations. The variation around each peak is caused by the time required to sync cid and calculate odd. The number of iterations is related to the size of the largest cluster, which in turn correlates with the length of the longest error chain in the syndrome. As the size of the surface code increases, the probability of a longer error chain also increases, shifting the probability distribution to the right.

Furthermore, as seen in Figure 7a, the distribution for each surface code size is right-skewed. For example, for d = 13, 90% of trials required two iterations or fewer, which were completed within 250 ns. In the same test, 99.99% of trials were completed within 370 ns. Only a very small number of error patterns require long decoding times, corresponding to syndromes with long error chains. Since such syndromes occur rarely and have poor decoding accuracy even if the decoding time is bounded, the impact on accuracy is minimal.

c) Effect of physical error rate: To understand the effect of the physical error rate on decoding time, in Figure 7b we plot the distribution of latency for three different noise levels at d = 13. The y-axis shows the latency and the x-axis shows the physical error rate. As the noise level increases, the probability distribution of latency shifts to the right. This is caused by the increased probability of a longer error chain at a higher physical error rate, which in turn requires more iterations to decode. As a result, the average decoding time increases with the physical error rate.

_C. Non-identically Distributed Errors_

We next analyze the decoding process of a surface code with varying error probabilities for data and measurement qubits. While identically distributed errors are useful for evaluating the decoder's performance, practical implementations of surface codes may have different error probabilities for each qubit. To address this, each edge i in the decoding graph is assigned a weight w_i that ranges from 2 to w_max and is proportional to −log(p_i), where p_i is the error probability corresponding to edge i. Here w_max is a user-specified parameter indicating the resolution of the error probabilities.

Noise model: We assign random error probabilities from a normal distribution with a mean of 0.001 and a standard deviation of 0.0005.

Figure 7c shows that the average latency increases as w_max increases. When the errors have a higher resolution, more iterations are required for each cluster, leading to an increase in latency. For the unweighted graph with d = 13, the average decoding time per round of 15 ns increases to 38 ns when w_max increases to 16. Notably, all of these values are significantly faster than the rate of measurement. As a result, decoding non-identically distributed errors can be performed in real time using distributed UF on Helios.
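One plausible quantization for the weighting scheme above (our assumption; the exact mapping from probabilities to integer weights is not spelled out here) is:

```python
# Sketch (ours): map per-edge error probabilities to integer weights in
# [2, w_max], linear in -log(p_i) as described above. Probabilities must
# be strictly positive for the logarithm to be defined.
import math

def edge_weights(probs, w_max):
    logs = [-math.log(p) for p in probs]
    lo, hi = min(logs), max(logs)
    span = (hi - lo) or 1.0                  # avoid divide-by-zero
    return [2 + round((w_max - 2) * (x - lo) / span) for x in logs]

# e.g. edge_weights([0.001, 0.0005, 0.002], w_max=16) -> [9, 16, 2]
```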
_D. Comparison with Related Work_

Our empirical results, shown in Figure 7a, suggest that Helios has a lower asymptotic complexity than any existing MWPM or UF implementation for which asymptotic complexities are available, e.g., [12, 22]. Indeed, the empirical results suggest that our decoder has a sub-linear time complexity: the decoding time per round decreases with the number of measurement rounds, which has never been achieved before. This implies that Helios can support arbitrarily large d, as the rate of decoding will always be faster than the rate of measurement.

[Figure 6 plots average decoding time (left) and average decoding time per measurement round (right), in nanoseconds, against code distances d = 3 through 21.]

Fig. 6: Average decoding time scales sub-linearly with d. We measure the average decoding time for 3 different noise levels. (Left) The average decoding time. (Right) The average decoding time per measurement round. The continuously decreasing average time per measurement round shows that our decoder is scalable to large surface codes. We show the distributions separately in Figure 7a.

Fig. 7: Distribution of decoding time (T) with the mean marked with ×. Each distribution includes 10^6 data points. By default d = 13, p = 0.001 and the graph is unweighted. (a) T's distribution has a small mean and a long tail. (b) T grows with the physical error rate. (c) T grows with the weight of the edges.

Das et al. [7] calculate an average latency for their AFS decoder based on memory access cycles, assuming a design running at 4 GHz. As the number of memory access cycles grows quadratically with d, the average decoding time per measurement round of AFS grows as O(d^2). Similarly, Ueno et al. [10] estimate the decoding time of QECOOL from d = 5 to d = 13 based on SPICE-level simulations with a clock frequency of 5 GHz. For the given range of d, the decoding time per measurement round increases quadratically with d. In comparison, the per-measurement-round decoding time of Helios decreases. We would like to point out that AFS and QECOOL assume very high clock frequencies, which is key to their estimated low latency. For example, for d = 11, AFS and QECOOL report latencies of 42 ns and 8.32 ns per measurement round, respectively. Helios, in contrast, requires 16.2 ns per measurement round with a 100 MHz clock.

To the best of our knowledge, LILLIPUT [6] is the only hardware decoder in the literature that provides implementation-based results, for d = 5. That decoder has an average time of 21 ns per measurement round, which is slightly lower than that of Helios for d = 5, i.e., 24.5 ns. However, as analyzed in §VII, LILLIPUT is not scalable beyond d = 5. Our work, in contrast, has successfully demonstrated the implementation of a d = 21 surface code on a VCU129 FPGA with 11.5 ns per measurement round. The architecture of Helios can potentially support larger d using a larger FPGA (for example, d = 29 on a Xilinx VU19P [19]), and even larger d using a network of FPGAs.

Our decoder outperforms the two fastest software MWPM decoders, Sparse Blossom [8] and Fusion Blossom [11], by an order of magnitude. According to our evaluation, Sparse Blossom and Fusion Blossom take 160 ns and 295 ns per measurement round, respectively, for d = 13 and p = 0.1%, using a single core of an M1 Max processor. In contrast, Helios achieves an average decoding time of 15 ns per measurement round under the same conditions, which is more than 60 times faster than the current state-of-the-art measurement rate [4].
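As a quick sanity check of the no-backlog criterion (ours; the roughly 1 µs measurement round is inferred from the "more than 60 times faster" figure above rather than taken from [4] directly):

```python
# Back-of-the-envelope check (ours): a decoder avoids a growing backlog
# iff its time per measurement round is below the measurement round time.
MEAS_ROUND_NS = 1000.0      # assumed ~1 us round, inferred from the ~60x figure

def keeps_up(decode_ns_per_round: float) -> bool:
    return decode_ns_per_round < MEAS_ROUND_NS

for name, t in [("Helios d=13", 15.0), ("Helios d=21", 11.5),
                ("Sparse Blossom d=13", 160.0), ("Fusion Blossom d=13", 295.0)]:
    print(f"{name}: {t} ns/round, headroom x{MEAS_ROUND_NS / t:.0f},",
          "no backlog" if keeps_up(t) else "backlog")
```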
VII. RELATED WORK

There is a large body of literature on fast QEC decoding, e.g., [23–26]. The most closely related are solutions that leverage parallel compute resources. Fowler [22] describes a method for decoding at the rate of measurement (O(d)). The proposed design divides the decoding graph among specialized hardware units arranged in a grid. Each unit contains a subset of vertices and can independently decode error chains contained within it. The design is based on the observation that large error patterns spanning multiple units are exponentially rare, so inter-unit communication is infrequently required. It, however, paradoxically assumes that the number of vertices per unit is "sufficiently large" and that a unit can find an MWPM for its vertices within half the measurement time on average. Not surprisingly, to date, no implementation or empirical data have been reported for this work. Our approach uses vertex-level parallelism and leverages the same observation that communication between distant vertices is infrequent.

NISQ+ [9] and QECOOL [10] parallelize computation at the ancilla level, where all vertices in the decoding graph representing measurements of one ancilla are handled by a single compute unit. This results in an increase in decoding time per measurement round as d increases. In contrast, we allocate a processing element per vertex, which results in a decreasing decoding time per measurement round with d, at the expense of the number of parallel units growing as O(d^3). Furthermore, they both implement the same greedy decoding algorithm, which has much lower accuracy than the UF decoder used in this work: QECOOL has an accuracy approximately four orders of magnitude lower than that of a UF decoder [7], and NISQ+ ignores measurement errors, lowering its accuracy further still.

Skoric et al. [21], Tan et al. [27] and Wu [11] propose similar methods using measurement-round-level parallelism, in which a decoder waits for a large number of measurement rounds to complete and then decodes multiple blocks of measurement rounds in parallel. With sufficient parallel resources, these methods can achieve a rate of decoding faster than the rate of measurement. However, the latency of such approaches grows with the number of measurement rounds the decoder must batch to achieve a throughput equal to the rate of measurement. In contrast, our approach exploits vertex-level parallelism and completes the decoding of every d rounds of measurements with an average latency that grows sub-linearly with d.

Pipelining can be considered a special form of using compute resources in parallel, i.e., in different pipeline stages. AFS [7] is a UF decoder architected in three pipeline stages. The authors estimate that the decoder will have a 42 ns latency for the d = 11 surface code, which is 2.4 times higher than what we report based on implementation and measurement. The authors assume specialized hardware capable of running at 4 GHz, and as a result the decoding latency is dominated by memory access. However, no implementation or cycle-accurate simulation is known for this decoder. Importantly, pipelining is limited in how much parallelism it can leverage: the number of pipeline stages. In contrast, the parallelism of our decoder grows with d^3, which enables us to achieve a sub-linear average-case latency.

LILLIPUT [6] is a three-stage look-up-table based decoder similar to AFS. Look-up-table based decoders can achieve fast decoding but are not scalable beyond d = 5, as the size of the look-up table grows as O(2^(d^3)). For a d = 7 surface code with 7 measurement rounds, it would require a memory of 2^168 bytes, which is infeasible in any foreseeable future.
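The 2^168 figure follows directly from the syndrome size; a quick check (ours), using the (d^2 - 1)/2 Z-type ancillas per round noted in the caption of Fig. 3:

```python
# Quick check (ours) of the look-up-table bound above: a d = 7 surface
# code has (d*d - 1) // 2 = 24 Z-type ancillas per round, so 7 rounds
# yield 24 * 7 = 168 syndrome bits and a table with 2**168 entries.
d, rounds = 7, 7
ancillas = (d * d - 1) // 2         # 24; cf. the 12 ancillas for d = 5 in Fig. 3
syndrome_bits = ancillas * rounds   # 168
print(f"{syndrome_bits} syndrome bits -> 2**{syndrome_bits} table entries")
```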
Sparse Blossom [8], a C++ MWPM implementation, decodes faster than the rate of measurement for d = 17 on a single CPU core. However, its decoding time per round grows linearly with d and increases to a few microseconds as the noise level rises, making it impractical for real-time decoding at higher noise levels and large surface codes. Fusion Blossom [11] takes a similar approach to Sparse Blossom and additionally parallelizes the computation at the measurement-round level. By allocating 100 measurement rounds to each core of a 64-core processor, Fusion Blossom can decode up to d = 33 faster than the measurement rate. However, both Fusion Blossom and Sparse Blossom have a decoding time per round that is higher than that of Helios by orders of magnitude, which limits their immediate use in quantum computing.

VIII. CONCLUSION

We describe a distributed design for the Union-Find decoder for quantum error-correcting surface codes, along with Helios, a system architecture for its realization. Our FPGA-based implementation of Helios demonstrates empirically that the average decoding time grows sub-linearly with d. Using a VCU129 FPGA, Helios decodes distance-21 surface codes at an average speed of 11.5 ns per measurement round, the fastest to the best of our knowledge. Helios is faster and more scalable than any previously reported surface code decoder implementation. Our results suggest that, by leveraging parallel hardware resources, Helios can avoid a growing backlog of measurements for arbitrarily large surface codes.

ACKNOWLEDGMENTS

This work was supported in part by Yale University and NSF MRI Award #2216030.

REFERENCES

[1] E. Dennis, A. Kitaev, A. Landahl, and J. Preskill, "Topological quantum memory," Journal of Mathematical Physics, vol. 43, no. 9, pp. 4452–4505, 2002.
[2] A. G. Fowler, M. Mariantoni, J. M. Martinis, and A. N. Cleland, "Surface codes: Towards practical large-scale quantum computation," Physical Review A, vol. 86, no. 3, p. 032324, 2012.
[3] J. P. Bonilla-Ataides, D. K. Tuckett, S. D. Bartlett, S. T. Flammia, and B. J. Brown, "The XZZX surface code," arXiv e-prints, 2020.
[4] Z. Chen et al., "Exponential suppression of bit or phase errors with cyclic error correction," Nature, vol. 595, no. 7867, pp. 383–387, Jul 2021. [Online]. Available: https://doi.org/10.1038/s41586-021-03588-y
[5] C. Gidney and M. Ekerå, "How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits," Quantum, vol. 5, p. 433, Apr 2021. [Online]. Available: http://dx.doi.org/10.22331/q-2021-04-15-433
[6] P. Das, A. Locharla, and C. Jones, "LILLIPUT: A lightweight low-latency lookup-table based decoder for near-term quantum error correction," 2021. [Online]. Available: https://arxiv.org/abs/2108.06569
[7] P. Das, C. A. Pattison, S. Manne, D. M. Carmean, K. M. Svore, M. Qureshi, and N. Delfosse, "AFS: Accurate, fast, and scalable error-decoding for fault-tolerant quantum computers," in 2022 IEEE International Symposium on High-Performance Computer Architecture (HPCA), 2022, pp. 259–273.
[8] O. Higgott and C. Gidney, "Sparse Blossom: correcting a million errors per core second with minimum-weight matching," 2023.
[9] A. Holmes, M. R. Jokar, G. Pasandi, Y. Ding, M. Pedram, and F. T. Chong, "NISQ+: Boosting quantum computing power by approximating quantum error correction," 2020. [Online]. Available: https://arxiv.org/abs/2004.04794
[10] Y. Ueno, M. Kondo, M. Tanaka, Y. Suzuki, and Y. Tabuchi, "QECOOL: On-line quantum error correction with a superconducting decoder for surface code," in Proc. ACM/IEEE Design Automation Conference (DAC), 2021.
[11] "Fusion Blossom," https://github.com/yale-paragon/fusion-blossom, 2023.
[12] N. Delfosse and N. H. Nickerson, "Almost-linear time decoding algorithm for topological codes," arXiv preprint arXiv:1709.06218, 2017.
[13] Xilinx, "Zynq UltraScale+ RFSoC ZCU106 evaluation kit," https://www.xilinx.com/products/boards-and-kits/zcu106.html.
[14] "Helios scalable QEC," https://github.com/yale-paragon/Helios_scalable_QEC, 2023.
[15] Y. Wu, N. Liyanage, and L. Zhong, "An interpretation of union-find decoder on weighted graphs," 2022. [Online]. Available: https://arxiv.org/abs/2211.03288
[16] S. Huang, M. Newman, and K. R. Brown, "Fault-tolerant weighted union-find decoding on the toric code," Physical Review A, vol. 102, no. 1, Jul 2020. [Online]. Available: http://dx.doi.org/10.1103/PhysRevA.102.012419
[17] N. Liyanage, Y. Wu, A. Deters, and L. Zhong, "Scalable quantum error correction for surface codes using FPGA," 2023. [Online]. Available: https://arxiv.org/abs/2301.08419
[18] Xilinx, "Virtex UltraScale+ 56G PAM4 VCU129 FPGA evaluation kit," https://www.xilinx.com/products/boards-and-kits/vcu129.html.
[19] Xilinx, "Virtex UltraScale+ VU19P FPGA," https://www.xilinx.com/content/dam/xilinx/publications/product-briefs/virtex-ultrascale-plus-vu19p-product-brief.pdf.
[20] Xilinx, "MicroBlaze processor quick start guide," https://docs.xilinx.com/v/u/en-US/microblaze-quick-start-guide-with-vitis.
[21] L. Skoric, D. E. Browne, K. M. Barnes, N. I. Gillespie, and E. T. Campbell, "Parallel window decoding enables scalable fault tolerant quantum computation," 2022. [Online]. Available: https://arxiv.org/abs/2209.08552
[22] A. G. Fowler, "Minimum weight perfect matching of fault-tolerant topological quantum error correction in average O(1) parallel time," 2014.
[23] F. Battistel, C. Chamberland, K. Johar, R. W. J. Overwater, F. Sebastiano, L. Skoric, Y. Ueno, and M. Usman, "Real-time decoding for fault-tolerant quantum computing: Progress, challenges and outlook," 2023.
[24] B. M. Terhal, "Quantum error correction for quantum memories," Reviews of Modern Physics, vol. 87, no. 2, pp. 307–346, Apr 2015. [Online]. Available: https://doi.org/10.1103/revmodphys.87.307
[25] D. Gottesman, "An introduction to quantum error correction and fault-tolerant quantum computation," 2009. [Online]. Available: https://arxiv.org/abs/0904.2557
[26] H. Bombín, Topological codes. Cambridge University Press, 2013, pp. 455–481.
[27] X. Tan, F. Zhang, R. Chao, Y. Shi, and J. Chen, "Scalable surface code decoders with parallelization in time," 2022.
{ "disclaimer": "Notice: Paper or abstract available at https://arxiv.org/abs/2301.08419, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "GREEN", "url": "https://arxiv.org/pdf/2301.08419" }
2,023
[ "JournalArticle", "Conference" ]
true
2023-01-20T00:00:00
[ { "paperId": "76fc0b4422b873a098a109ae7675fc0de1ec9cc6", "title": "Fusion Blossom: Fast MWPM Decoders for QEC" }, { "paperId": "eb57a5cc2ea992a9f1469752d56c61de84798c6f", "title": "Sparse Blossom: correcting a million errors per core second with minimum-weight matching" }, { "paperId": "c782e2c5456a352207982e084c0986adc5e5b822", "title": "Real-time decoding for fault-tolerant quantum computing: progress, challenges and outlook" }, { "paperId": "2a01d842f38725f98c34fca5c094185af2ae4ab0", "title": "An interpretation of Union-Find Decoder on Weighted Graphs" }, { "paperId": "c1ae97e49a3e6d00320a3f338d1a4139d2ba9351", "title": "Scalable Surface-Code Decoders with Parallelization in Time" }, { "paperId": "80ec53a4d56b8fae66ef66678b8098e4583d3e72", "title": "Parallel window decoding enables scalable fault tolerant quantum computation" }, { "paperId": "1795cacfcd2259b465eb34149405f5cc3753e7a0", "title": "AFS: Accurate, Fast, and Scalable Error-Decoding for Fault-Tolerant Quantum Computers" }, { "paperId": "150d19647466359e4ba03859b57cdf79cb89561d", "title": "LILLIPUT: A Lightweight Low-Latency Lookup-Table Based Decoder for Near-term Quantum Error Correction" }, { "paperId": "e0781ec41cf088040e40c77517d3104052ede304", "title": "QECOOL: On-Line Quantum Error Correction with a Superconducting Decoder for Surface Code" }, { "paperId": "cad3c8bf766a7588ff3ff299ba1d065c8f901435", "title": "Exponential suppression of bit or phase errors with cyclic error correction" }, { "paperId": "059f1d3b4d12d486afcd70c6c9ce5a96f1727c8b", "title": "The XZZX surface code" }, { "paperId": "5325d8194a04e425622b519447b709675dc8436b", "title": "NISQ+: Boosting quantum computing power by approximating quantum error correction" }, { "paperId": "f58df7e89d5287b5086d7a3f3a6e81bbd2e6cda8", "title": "Fault-tolerant weighted union-find decoding on the toric code" }, { "paperId": "5de5fbe940f95e385b79982fdcbd74e9d3f72340", "title": "A Scalable Decoder Micro-architecture for Fault-Tolerant Quantum Computing" }, { "paperId": "ee1044cb1b4fa931e86a3a1c3cd1534c52c8cb70", "title": "How to factor 2048 bit RSA integers in 8 hours using 20 million noisy qubits" }, { "paperId": "150c8897eafe69278ab54b869b41e108eaddce76", "title": "Almost-linear time decoding algorithm for topological codes" }, { "paperId": "ddd6258a1781179fabeca3d81ad645ab883d303a", "title": "Minimum weight perfect matching of fault-tolerant topological quantum error correction in average O(1) parallel time" }, { "paperId": "3a0f4231bf931d1e3eb97b2ced99d0be0ca5c1c3", "title": "Quantum error correction for quantum memories" }, { "paperId": "f9db7ae0a333ef8a21317d1a3126d75da9d43ff4", "title": "Surface codes: Towards practical large-scale quantum computation" }, { "paperId": "0b57d234d4a4ab5c88b5d063a2d625aa06f9787e", "title": "An Introduction to Quantum Error Correction and Fault-Tolerant Quantum Computation" }, { "paperId": "8ba3a176211e3e9959c36cbb2e22dbdee84d3b00", "title": "Topological quantum memory" }, { "paperId": null, "title": "and L" }, { "paperId": "6a854ffb59dfe62332fe57230b7c290f605d7049", "title": "Distributed computing - fundamentals, simulations, and advanced topics (2. 
ed.)" }, { "paperId": "1fc9f8ceaf9fc386501d4478c34217c85631cf47", "title": "Distributed Computing" }, { "paperId": null, "title": "Virtex UltraScale+ VU19P FPGA" }, { "paperId": null, "title": "“Helios scalable QEC,”" }, { "paperId": null, "title": "Authorized licensed use limited to the terms of the applicable license agreement with IEEE" }, { "paperId": null, "title": "Vivado Design Suite User Guide: Logic Simulation" }, { "paperId": null, "title": "MicroBlaze processor quick start guide" }, { "paperId": null, "title": "Virtex UltraScale+ 56G PAM4 VCU129 FPGA evaluation kit" } ]
17,716
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01fb3d5b17e0f96e6fdd52e76f3e2f4bd827a45b
[ "Computer Science" ]
0.860205
Collaborative Complex Computing Environment (Com-Com)
01fb3d5b17e0f96e6fdd52e76f3e2f4bd827a45b
[ { "authorId": "50283674", "name": "A. Petrenko" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
null
**ISSN: 0974-7230**

### Journal of Computer Science & Systems Biology

Petrenko, J Comput Sci Syst Biol 2015, 8:5. DOI: 10.4172/jcsb.1000201. Research Article, Open Access.

## Collaborative Complex Computing Environment (Com-Com)

**Petrenko AI***

_Department of System Design, Institute of Applied System Analysis, National Technical University of Ukraine, Kyiv Polytechnic Institute, Kiev, Ukraine_

**Abstract**

Com-Com is a user-centric environment which provides researchers with tailored frameworks to support their computational needs. It addresses existing and new user communities in both research and commercial fields. Technically, Com-Com provides dynamic infrastructure, dynamic service provision and user-driven application development across domains. End users can easily create new applications for solving their computational tasks by combining ready-made interdisciplinary services available in the networked Repository and incorporating their own functionalities. Since services may be offered by different enterprises and communicate over the network, they provide an advanced distributed computing infrastructure for both intra- and cross-enterprise application integration and collaboration. The approach at hand potentially opens the door to rapidly creating applied software for exaflops HPC and exabytes of data. At present, Com-Com can support application development in life science, environment, engineering, physics, computational chemistry, medicine and data mining research by collecting already existing web services developed by different research communities (EGI, Flatworld, FI-WARE, SAP, ESRC). The goal of Com-Com is to present an open environment of applied computing services and to encourage researchers across Europe to participate in extending, interchanging and improving it. The Com-Com stack presents flexibility, enabling users to form dynamic teams, dynamic collections of cross-domain services and dynamic infrastructure to run the services on. Com-Com may enhance the capabilities of research organizations that lack resources, both in human and technical terms, by better integrating researchers across international scientific communities, with the final aim of strengthening the EU research base.

**Keywords:** SOC (service-oriented computing); SOA (service-oriented architecture); Web services; Service composition; User-driven application development; Application design platform; Distributed modeling; Service workflow management

#### Introduction

Collaborative Complex Computing arises from, and is intended to address, the specific requirements of a large number of research and industrial enterprises that are engaged in processing scientific data and performing time-consuming mathematical experiments during their scientific and applied research. Many scientific and engineering fields need powerful tools that meet the needs of a wide range of customers in mathematical modeling and collective computing research support, enabling collaboration among distributed groups of partners – the providers and consumers of computing resources and data processing solutions. Providing effective ways for distributed user groups to compose distributed workflows representing the sequence of data processing procedures needed to solve their problems – this is what Collaborative Complex Computing is about.
The aim of Com-Com is to provide an integrated environment that supports collaborative engineering research and allows its users to create and debug the structure of mathematical experiments or data processing workflows that are selected for execution on Grid resources. The Com-Com concept is an available online intelligent multidisciplinary research gateway combining:

A) An inhabited information space where both open and private user communities can easily communicate and develop their domain-specific expert knowledge on the basis of new emerging design paradigms and best practices.

B) User-driven adaptive tools and methods for distributed data processing and mathematical experiments, their modeling and optimization in a user-friendly environment using the free resources of the e-infrastructure. End users can easily create new applications for solving their tasks by combining ready-made services available in the networked Repository and incorporating their own functionalities.

C) A web-services Repository with Task Solving Supporting (Application Specific) Services, which correspond to loosely coupled stages and procedures of complex data processing and modeling tasks, and Environment Supporting (Generic) Services, which are responsible for service management and hosting (Figure 1). The list of offered Task Solving Supporting (Specific) Services covers a significant share of possible user needs in scientific and applied research, such as: experimental data search and access, collection and management, data analysis, remote modeling of processes (objects) of different physical nature, etc.

D) A semantics-aware mechanism to find the proper web services and target execution resources for the best integration solution of the specific user-defined problem with respect to quality of service.

E) A truly open environment and a set of open services that will allow researchers, service providers, small and medium engineering enterprises and other organizations to develop custom application software satisfying their needs while still being open and innovative.

*Corresponding author: Anatoly I. Petrenko, Department of System Design, Institute of Applied System Analysis, National Technical University of Ukraine, Kyiv Polytechnic Institute, Kiev, Ukraine, Tel: +38044-2364166; E-mail: tolja.petrenko@gmail.com

Received August 11, 2015; Accepted August 25, 2015; Published August 27, 2015

Citation: Petrenko AI (2015) Collaborative Complex Computing Environment (Com-Com). J Comput Sci Syst Biol 8: 278-284. doi:10.4172/jcsb.1000201

Copyright: © 2015 Petrenko AI. This is an open-access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Today there is no well-recognized user-driven applied platform with support for arbitrary mathematical experiments during scientific and applied research that can offer all of the above. Com-Com stands for a new technology and methodology for planning and modeling mathematical experiments, and it can offer the following features (Figure 1):

- Execution of composite computing tasks of arbitrary complexity to support collective research via the Internet.
- Promotion of a high scientific and technical level of research with an open knowledge base.
- Iterative optimization of results obtained during calculations.
- Reduced duration of scientific and applied research and subsequent development work, with intensive reuse of workflows, tools, data and knowledge in mind.
- Improved quality of scientific and technical documents along with productivity growth in scientific organizations and SMEs.
- System integration of the stages of scientific and applied research, development and technological preparation of production.

Com-Com enhances future competitiveness by strengthening the scientific and technological base in the area of experimenting and data processing, and makes public service infrastructures and simulation processes smarter, i.e., more intelligent, more efficient, more adaptive and sustainable. It can create extended new interdisciplinary collaborations and new research alliances with European researchers in order to combine joint knowledge and experience and exploit synergies in user-driven application development.

Com-Com utilizes services as constructs to support the rapid, low-cost and easy composition of distributed applications by end users. The computing is divided into separate, loosely coupled stages and procedures for their subsequent transfer into the form of standardized specific (application support) services (ASS) at the infrastructure and data/user federation level. The offered list of such services covers a diverse range of application domains, and the project establishes a point from which this range can be expanded.

#### Activity Overview

The Com-Com concept is based on SOC (Service-Oriented Computing): the development of distributed applications by means of the composition of services [1-11]. Service-Oriented Computing is the paradigm for distributed computing that utilizes services as fundamental elements for application development. It represents a new approach in application development, moving away from tightly coupled monolithic software towards software composed of loosely coupled, dynamically bound services. End users need support to build new systems easily by incorporating the functionalities of available systems and services. Computing procedures used in different branches of science and technology are invariant in nature; that is why they can be reused by different customers for their particular needs. Services implement functions that can range from answering simple requests to executing sophisticated research processes requiring peer-to-peer relationships between possibly multiple layers of service consumers and providers.

The delivery of software for complex collaborative computing as a set of distributed services can help to solve problems like software reuse, deployment and evolution. The "software as a service" model will open the way to the rapid creation of new value-added composite services based on existing ones. Although service-oriented computing in cloud computing environments presents a new set of research challenges, their combination provides potentially transformative new opportunities.

Pioneering work in mathematical SOC has been done as part of the ADaM (Algorithm Development and Mining) toolkit, which was originally developed by the University of Alabama in Huntsville (UAH) with the goal of mining large scientific data sets for geophysical phenomena detection and feature extraction, and has continued to be expanded and improved [12].
ADaM includes not only traditional data mining capabilities such as pattern recognition, but also image processing and optimization capabilities, and many supporting data preparation algorithms that are useful in the mining process. ADaM provides technology that allows users to locally define analysis workflows that can be executed on data residing in online repositories. The NASA project called Mining Web Services (MWS) is enabling ADaM capabilities for use in a distributed web service environment. This redesign also allows the algorithms in ADaM to be easily packaged as grid or web services, and they are being extensively used by different research groups [13].

Figure 1: Com-Com general structure.

The rest of this paper is organized as follows: Section 5 gives an overview of the main elements of the SOC-based computational platform, with web services as its building blocks; Section 6 describes computational web-service examples; Section 7 presents web-services management; Section 8 describes a prototyping example; Section 9 presents the performance of applied services in the SOC prototype and concludes this paper.

#### Main Elements of Service-Oriented Architecture of the Computational Platform

The above approach is incorporated in the service-oriented computational platform shown in Figure 2. This architecture is characterized by the following: it is web-accessible; its functionality is distributed across an ecosystem of both web services from the Com-Com Repository and grid/cloud services (from the e-infrastructure); it is compatible with adopted standards and protocols; it supports the development and execution of custom user analysis scenarios; and it hides the complexity of web-service interaction from the user with an abstract workflow concept and a graphical workflow editor.

The user interface provides the following functionality: authorization, a graphical workflow editor, project artifact browsing (input and output file management, simulation result visualizers, etc.), task execution monitoring, and others (Figure 3).

The server-side part of the architecture has several layers, reflecting the abstract workflow concept described above. The first tier is the portal, which organizes the user environment: it holds user data and preferences, controls user access, provides information support and organizes the user interface. Its modules are also responsible for generating the abstract workflow description according to user inputs, passing this task description to lower architecture layers for execution, retrieving finished task results and storing all the project artifacts in the database.

The next tier is the workflow manager running on the execution server. It is responsible for mapping (with the help of the service registry) the abstract workflow description to a concrete web-services orchestration scenario expressed in the orchestrator-specific input language (like WS-BPEL for BPEL engines). It also initiates the execution of the concrete workflow with the external orchestrator, monitors its state and fetches the results. The concrete workflow operates with SOAP/REST web services representing the basic building blocks of the system's functionality: data preparation and adaptation, simulation, optimization, results processing, etc. Compute-intensive steps are implemented as grid/cloud services interacting with grid/cloud resources to run computations as grid/cloud jobs. New functionality is introduced to the system through the registration of new web or grid/cloud services.
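The mapping step just described lends itself to a compact illustration. The following Python sketch is hypothetical (the activity names, endpoints and registry layout are our assumptions, not Com-Com's actual interfaces); it shows how a service registry could resolve an abstract activity sequence into a concrete, ordered invocation plan:

```python
# Hypothetical registry: abstract activity name -> concrete service endpoint.
REGISTRY = {
    "prepare_data": "https://services.example.org/prepare",
    "simulate":     "https://services.example.org/simulate",
    "optimize":     "https://services.example.org/optimize",
}

def map_abstract_workflow(activities):
    """Translate an abstract activity sequence into ordered invocation steps,
    failing early if the registry lacks a concrete service for any activity."""
    missing = [a for a in activities if a not in REGISTRY]
    if missing:
        raise ValueError(f"no registered service for: {missing}")
    return [{"step": i, "activity": a, "endpoint": REGISTRY[a]}
            for i, a in enumerate(activities, start=1)]

plan = map_abstract_workflow(["prepare_data", "simulate", "optimize"])
```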
The overall sequence of user scenario execution is as follows. The user passes the login procedure on the portal and accesses the workflow editor. They may choose and set up the activities available in the repository to compose (manually or automatically) the scenario workflow they want to execute (Figure 3). Then the execution phase is initiated by the user. The user task description is passed to the workflow management service on the execution server, where the abstract workflow is translated into the concrete one. The workflow manager parses the description and checks for errors, requests metadata from the service registry and performs the mapping from the activity sequence to the web-service invocation sequence, described in one of the standard orchestration languages. The mapper unit of the workflow manager should arrange web services in the correct invocation order according to the abstract workflow, organize XML messages and variable initializations and assignments between calls, and provide the means for run-time control (workflow monitoring, canceling, intermediate result retrieval, etc.). Then this concrete scenario is executed by the orchestrator.

When the orchestrator invokes a service, the latter initiates the submission of a job to a resource: it prepares a job description and communicates with the grid middleware to schedule and execute the job. The behavior is similar for HPC or cloud services (task preparation and execution via the specific API). The user is informed about the progress of the workflow execution by the monitoring unit communicating with the workflow manager. When execution is finished, the user can retrieve the results, browse and analyze them, and repeat this sequence if needed.

Figure 2: Main elements of the service-oriented architecture of the computational platform.

Figure 3: The general procedure of user-driven building of flexible computational routes.
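An illustrative sketch of one orchestrated step follows; the endpoint, resource paths and status fields are our assumptions, not a documented Com-Com API. It submits a job to a REST computational service, polls its status, and fetches the result, mirroring the execution sequence above:

```python
# Hypothetical REST job-submission step (ours): submit, monitor, retrieve.
import json
import time
import urllib.request

BASE = "https://services.example.org/simulation"   # assumed endpoint

def submit_job(model_description: dict) -> str:
    req = urllib.request.Request(
        BASE + "/jobs",
        data=json.dumps(model_description).encode("utf-8"),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:       # POST: create the job
        return json.load(resp)["id"]

def wait_for_result(job_id: str, poll_s: float = 5.0) -> dict:
    while True:                                     # monitoring loop
        with urllib.request.urlopen(f"{BASE}/jobs/{job_id}") as resp:
            job = json.load(resp)
        if job["state"] in ("finished", "failed"):
            return job                              # retrieve the results
        time.sleep(poll_s)
```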
#### Computational Web Service Examples

If a large multidisciplinary and multinational Repository of application support services is created, end users can tailor the services to their own requirements and expectations by incorporating the functionalities of available services into large-scale Internet-based distributed application software. The typical scheme of a computational modeling experiment in many fields of science and technology has an invariant character and includes the following steps (a small numerical sketch of one core step follows this list):

- Definition of the mathematical description of the experiment tasks (the mathematical model). This is often done manually by the researcher; however, it is possible to automate this process as a separate web service. Such a service forms a mathematical model, usually in the form of a system of nonlinear differential-algebraic equations, based on a block diagram (structure) of the computational experiment at hand [14,15].
- The dimension of such a model can be very large (some thousands of equations), and its structure is highly sparse. In its formation, a library of descriptions of individual blocks (procedures) is used. For example, it is possible to automatically generate a mathematical model of the selected data processing using its block diagram.
- If the investigated processes (objects) have a distributed nature and are governed by partial differential equations (PDE), it is possible to assemble their models also in the form of systems of first-order ordinary differential equations (ODE). Otherwise, the PDE can be solved numerically, applying the finite difference (FDM) or finite element (FEM) methods [16].
- The solution of the mathematical model equations for the stationary regime, when its equations are transformed into a system of nonlinear algebraic equations. It is important to ensure the convergence of the solution despite the ill-conditioned nature of the task.
- Solution of the mathematical model equations for the dynamic regime in the time domain, with regard to the possible stiffness of these equations.
- The solution of the linearized mathematical model equations in the frequency domain.
- Automatic detection of solution output parameters in the form of extreme values, consumed power, time delay and rise (fall) time of selected variables (for the time domain), or the transfer coefficients, bandwidth, resonant frequencies and quality factors (for the frequency domain).
- Determination of the sensitivity of the solution output parameters to changes in the values of internal parameters associated with the descriptions of the individual procedures (blocks), or the parameters of the environment in which objects built on the results of the experiment are planned to operate.
- Multi-criteria optimization of the task solution output parameters under functional and parametric constraints.
- Statistical analysis and histogram building for solution output parameters, taking into account the distribution laws of the internal parameter values.
- Estimates of the deviations of solution output parameters due to variations of internal parameter values: the worst case for the boundary values of the internal parameter deviations, and statistical evaluation taking into account the distribution laws of these deviations.
- The inverse problem: determination of the optimal tolerances of the internal parameter values for given deviations of the solution output parameters. The problem is solved by optimization procedures (deterministic, or statistical by maximizing the output yield).
- Determination of the spectral composition of the output variables of the experiment (the project) and assessment of their degree of distortion.
- Support for experiments that require repeated execution of the same procedures (steps) with different values of the internal parameters.
- Unified and efficient access to data stored in organizationally distributed environments.
- Visualization of calculation results in graphical form.
- Search for the required input data and descriptions of individual procedures scattered across multiple databases.
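As a concrete illustration of the stationary-regime step (solving the resulting nonlinear algebraic system), here is a minimal, self-contained Python sketch; the damped Newton iteration is our choice of method for illustration, since the text does not prescribe one:

```python
# Minimal sketch (ours) of solving f(x) = 0 by damped Newton iteration,
# the kind of solver a stationary-analysis web service could wrap.
import numpy as np

def newton(f, jac, x0, tol=1e-10, max_iter=50):
    """f maps R^n -> R^n, jac returns its Jacobian matrix at x."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r = f(x)
        if np.linalg.norm(r) < tol:
            return x
        dx = np.linalg.solve(jac(x), -r)     # Newton step
        t = 1.0
        while t > 1e-4 and np.linalg.norm(f(x + t * dx)) >= np.linalg.norm(r):
            t /= 2      # damping helps convergence on ill-conditioned tasks
        x = x + t * dx
    raise RuntimeError("Newton iteration did not converge")

# Toy 2-equation "circuit" system: x^2 + y = 3, x = y
f = lambda x: np.array([x[0] ** 2 + x[1] - 3.0, x[0] - x[1]])
jac = lambda x: np.array([[2 * x[0], 1.0], [1.0, -1.0]])
print(newton(f, jac, [1.0, 1.0]))   # approx [1.3028, 1.3028]
```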
These stages of computational modeling experiments are used in different science and technology branches where the investigated objects are composed of discrete blocks (components):

- Aeronautical (involving the study, design and manufacturing of flight-capable machines, and the techniques of operating aircraft and rockets within the atmosphere).
- Architectural (utilizing current industry technology for both the process and the product of planning, designing and constructing buildings and other physical structures).
- Chemical (studying chemical structure, bonding and reactivity in chemical systems using mathematical and computational methods, or the development of such methods).
- Civil (creating drawings for the civil engineering industry, including areas of land development, transportation, public works, environmental, landscaping, surveying, design visualization and many others).
- Electronic (designing circuits using pre-manufactured building blocks such as power supplies, semiconductors (such as transistors) and integrated circuits).
- Robotic (covering the engineering elements of robotics, automation and autonomy, incorporating robot control, which may be based on artificial intelligence).
- Industrial Manufacturing (including all intermediate processes required for the production and integration of a product's components).
- Materials (understanding, modeling and processing of metals and alloys with respect to their properties and material behaviour, and the development of novel materials).
- Mechanical (utilizing CAD software to plan and prepare documents and technical graphics appropriate to the mechanical engineering industry).
- Medical Engineering (research and development of new and existing medical imaging instruments and signals for therapeutic, monitoring and diagnostic purposes).
- Microsystems (this research area captures a broad spectrum of underpinning micro-engineering research aimed at developing a diverse range of novel miniaturized micro-structured devices).

For different branch applications, the sequences and combinations of the mentioned steps may vary, as may their algorithmic and program implementations. These alternative realizations can be organizationally presented in the form of unified web services with standardized interfaces.

There is another type of computational experiment, in which distributed web service technologies are used for science data analysis solutions. The basic procedures (stages) in these cases, for the execution of user scenarios against large data stores, are listed below (a short code sketch of the first procedure follows the list):

- Curve fitting and approximation for estimating the relationships among variables (linear regression, simple regression, ordinary least squares, polynomial regression, logistic regression, nonlinear regression, nonparametric regression, semiparametric regression, least angle regression, local and segmented regression, interpolation, Fourier approximation, etc.).
- Classification techniques for categorizing different data into various folders (Naïve Bayes classifier, Bayes network classifier, CBEA and SEA classifiers, decision tree classifier, back-propagation neural network, k-nearest neighbor classifier, multiple prototype minimum distance classifier, recursively splitting neural network, etc.).
- Clustering techniques for grouping a set of objects in such a way that objects in the same group (cluster) are more similar to each other than to those in other groups (Isodata, K-Means, Maximin), and feature selection/reduction techniques (backward elimination, forward selection, principal components, RELIEF (filter-based feature selection), attribute removal).
- Pattern recognition utilities (accuracy measures, range filter, k-fold cross validation, vector magnitude, pattern merging, normalization, sampling, subsetting, statistics, outlier cleaning, image file comparison, discretization).
These computational web-services for data proceeding are used in different science and technology branches during data collection, data cleansing, data management, data analytics and data visualisation, where there are very large datasets. The Com-Com supports an end-user in the distributed web-service environment by collection and unification of different computational web services. It also investigates ways to compose and orchestrate these services into a task solution, which end-users can create easily as new applications by combining ready-made services available on the network and incorporating their functionalities. End- users are provided with mechanisms to re-engineer the already available monolithic solutions as sets of services in the Federal or National clouds. Services may be offered by different enterprises and communicate over the Com-Com, that why they provide a distributed computing infrastructure for both intra- and cross-enterprise application integration and collaboration. Very often end-users start to solve their science or technology tasks using web-services of data processing and then transfer to web- services of computational modeling. #### Web-services Management Implementation of the SOC concept means generating end-user applications based on dynamic composition and orchestration of web services workflows. A workflow describes how tasks are orchestrated, what components performs them, what their relative order is, how they are synchronized, how information flows to support the tasks and how tasks are being tracked. Currently, the industry standard for service orchestration is the Business Process Execution Language (BPEL) and 3C-E will use it. BPEL provides a standard XML schema for workflow composition of web services that are based on SOAP. There are other workflow composition tools that create workflow descriptions for a set of web- services execution; however, the tools are not standardized yet. This standardized composition description is eventually deployed on a BPEL engines. The Active BPEL Designer requires too much in- depth knowledge of BPEL definitions to be useful for computing users. To assist users in composing the workflows, 3C-E will adapt a graphical composition tool to work in this environment. Modern offerings go beyond simple services, including full platforms, complex compositions and whole infrastructures. This leads to a significant complexity in mapping the different modules of these solutions on the large variety of available hardware options. To cope with the challenge to optimize the mapping of services to a variety of different resources, both hardware and software related (e.g., high bandwidth demands), requires topology-aware mapping. This mapping needs to consider placement of the services across geographically distributed centers and demands new intelligent and cross-domain integration of actual and historical usage data. The underpinning idea is based on the assumption that cloud applications can be described and analyzed in terms of workload behaviour, potentially split into segments representing different classes of workload and that an optimised placement of the application elements is feasible relying on rich resource descriptions providing the necessary information from server node capabilities over cluster and data centre topology up to environmental data collected by sensors from the facility management system and business data such as actual power costs. 
#### Prototyping The Institute of Applied System Analysis (IASA) of NTUU “Kiev Polytechnic Institute” (Ukraine) has developed the prototype of the Engineering Design environment based on SOC [14-16]. It is designed for modeling and optimization of Nonlinear Dynamic Systems, based on components of different physical nature and being widely spread in different scientific and engineering fields. It is the cross-disciplinary application for distributed computing in the form of service compositions functioning within or across organization borders. For example, in cases of electronics, mechanics, hydraulics, control systems, heat, energy, environment tasks selected web-services can provide the following important computational procedures: operations with large-scale mathematical models, steady state analysis, transient and frequency domain analysis, sensitivity and statistical analysis, parametric optimization and optimal tolerances assignment, solution centering, etc.), and supporting procedures (cross-domain mathematical model description translation, data formats translation etc.) based on innovative original numerical methods. Algorithms proposed for many design web-services are novel and unique (multi-criteria optimization, optimal tolerances assignment, yield maximization, stiff- and illconditional tasks solving, etc.). The proposed approach to application design is completely different from present attempts to use the whole indivisible applied software in the grid / cloud infrastructure as it is done in TINACloud, PartSim, RT-LAB, FineSimPro and CloudSME. Prototype of this Optimal Engineering Designer was used for microelectromechanical systems development [16]. #### Conclusions Nowadays SOA technology is becoming more and more widespread in many fields of IT industry due to the main advantage: capacity to offer effective approach to the solution of one of the most complicated and actual problems – problem of integration of the information resources (l mathematical procedures and their implementations in our case). Joining the advantages of SOA with the capacities of Grid/ Cloud technology allows providing integration not only of local but of geographically remote applied web- services. The implementation of any IT service-oriented software system requires performing a number of different steps in order to produce all the required artifacts (either internal or deliverable). In our case the first step is to formalize the research domain concepts and their computing similarities in order to obtain a common interdisciplinary set of applied computing services agreed by all the stakeholders involved in Collaborative Complex Computing. The next step is to select mechanisms that enable story, discovery, selection, mediation, invocation and compose of applied web-services. The development process for _Com-Com_ includes also defining web-service semantics, developing the architecture of the framework, designing the supporting software and building a working implementation of whole framework. Presentation of the SOA components as services with standard interfaces ensure their re-use for the creation of new applications and to enhance the capacity of existing ones, rather than re-programming of the same functions. Service capabilities are described using languages such as Description Language Web-services (Web Services Description Language, WSDL). 
Service-oriented architecture in the _Com-Com_ marks the emergence of a new paradigm in response to the increasing complexity of distributed computing software. In other words, it is a dynamic architecture, in which the structure and behaviour of the software change during its execution, as does the location where the software components are stored and executed. The pervasive computing environment also introduces new non-functional requirements for interoperability, heterogeneity, mobility and adaptability in collaborative research supported by complex computing. After full realization, _Com-Com_ can enhance Europe's future competitiveness by strengthening its scientific and technological base in the area of Experimenting and Data Processing, making public service infrastructures and simulation processes smarter, i.e., more intelligent, more efficient, more adaptive and sustainable.

**References**

1. Huhns MN, Singh MP (2005) Service-Oriented Computing: Key Concepts and Principles. IEEE Internet Computing 9: 75-81.
2. Yinong Chen, Wei-Tek Tsai (2008) Distributed Service-Oriented Software Development. Kendall Hunt Publishing, United States.
3. Yinong Chen, Wei-Tek Tsai (2014) Service-Oriented Computing and Web Software Integration. 4th edn, Kendall Hunt Publishing, United States.
4. Yi Wei, Blake MB (2010) Service-Oriented Computing and Cloud Computing: Challenges and Opportunities. IEEE Internet Computing 14: 72-75.
5. Papazoglou MP, Traverso P, Dustdar S, Leymann F (2008) Service-Oriented Computing: A Research Roadmap. International Journal of Cooperative Information Systems 17: 223-255.
6. Papazoglou MP, van den Heuvel WJ (2006) Service-Oriented Design and Development Methodology. Int J Web Engineering and Technology 2: 1-17.
7. Rains G (2009) Cloud Computing and SOA. MITRE, White paper.
8. Petrenko AI (2014) Service-oriented computing (SOC) in a cloud computing environment. Computer Science and Applications 1: 349-358.
9. Tsai WT, Sun X, Chen Y, Huang Q, Bitter G, et al. (2008) Teaching Service-Oriented Computing and STEM Topics via Robotic Games. Proc. of IEEE International Symposium on Object/Component/Service-Oriented Real-Time Distributed Computing 11: 131-137.
10. Blake MB (2007) Decomposing Composition: Service-Oriented Software Engineers. IEEE Software 24: 68-77.
11. Chen Y, Tsai WT (2011) Service-orientation in computing curriculum. Proc. of IEEE 6th International Symposium on Service Oriented System Engineering (SOSE), Irvine, CA, 122-132.
12. Hinke T, Rushing J, Ranganath HS, Graves SJ (2000) Techniques and Experience in Mining Remotely Sensed Satellite Data. Artificial Intelligence Review 14: 503-531.
13. Rushing J, Ramachandran R, Nair U, Graves S, Welch R, et al. (2005) ADaM: A Data Mining Toolkit for Scientists and Engineers. Computers & Geosciences 31: 607-618.
14. Zgurovsky M, Petrenko A, Ladogubets V, Finogenov O, Bulakh B (2013) WebALLTED: Interdisciplinary Simulation in Grid and Cloud. Computer Science 14: 295-306.
15. Petrenko A, Ladogubets V, Tchkalov V, Pudlowski Z (1997) ALLTED - a Computer-Aided System for Electronic Circuit Design. UICEE, Melbourne 205.
16. Petrenko A (2012) Macromodels of Micro-Electro-Mechanical Systems (MEMS). In: Nazmul Islam (Ed.), Microelectromechanical Systems and Devices. InTech, US.
{ "disclaimer": "Notice: The following paper fields have been elided by the publisher: {'references', 'abstract'}. Paper or abstract available at https://api.unpaywall.org/v2/10.4172/JCSB.1000201?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.4172/JCSB.1000201, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": null, "status": "BRONZE", "url": "https://doi.org/10.4172/jcsb.1000201" }
2,015
[]
true
2015-08-27T00:00:00
[]
9,014
en
[ { "category": "Business", "source": "external" }, { "category": "Law", "source": "s2-fos-model" }, { "category": "Business", "source": "s2-fos-model" }, { "category": "Economics", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01fdeb97f43001f282e048b1fa19641e7c90e9c6
[ "Business" ]
0.901128
The Current Status of Cryptocurrency Regulation in China and Its Effect around the World
01fdeb97f43001f282e048b1fa19641e7c90e9c6
[ { "authorId": "145411126", "name": "John Riley" } ]
{ "alternate_issns": null, "alternate_names": null, "alternate_urls": null, "id": null, "issn": null, "name": null, "type": null, "url": null }
There is no single approach in the world regarding the legal regulation of cryptocurrency. Most countries are wary of legalizing this payment instrument, fearing problems associated with tax evasion, terrorist financing, fraud and other illegal transactions. Nevertheless, the issue of legalization of cryptocurrencies has recently moved to a different level as the market capitalization of cryptocurrencies grew to over USD 237 billion in 2020, with several leading cryptocurrencies such as Bitcoin skyrocketing in value in 2021. The explosive growth has been led in no small part by China, the world's largest and most important market for cryptocurrency in terms of mining, investing and research. This article reviews the current trends in cryptocurrency regulation with a particular focus on China, including an analysis of current cryptocurrency laws in China, as well as the new Chinese Cryptography Law. Also, it explains how recent developments in Chinese regulation and policy will continue to shape the development of the global cryptocurrency markets.
##### Current Development

China & WTO Review 2021:1; 135-152. http://dx.doi.org/10.14330/cwr.2021.7.1.06. pISSN 2383-8221 • eISSN 2384-4388

# CWR China and WTO Review

### **The Current Status of Cryptocurrency Regulation in China and Its Effect around the World**

#### John Riley [∗]

*There is no single approach in the world regarding the legal regulation of cryptocurrency. Most countries are wary of legalizing this payment instrument, fearing problems associated with tax evasion, terrorist financing, fraud and other illegal transactions. Nevertheless, the issue of legalization of cryptocurrencies has recently moved to a different level as the market capitalization of cryptocurrencies grew to over USD 237 billion in 2020, with several leading cryptocurrencies such as Bitcoin skyrocketing in value in 2021. The explosive growth has been led in no small part by China, the world's largest and most important market for cryptocurrency in terms of mining, investing and research. This article reviews the current trends in cryptocurrency regulation with a particular focus on China, including an analysis of current cryptocurrency laws in China, as well as the new Chinese Cryptography Law. Also, it explains how recent developments in Chinese regulation and policy will continue to shape the development of the global cryptocurrency markets.*

**Keywords**: Cryptocurrency Regulation, Chinese Digital Currency, Digital Yuan, China's Cryptography Law, Bitcoin

∗ Professor of Law at Sogang University School of Law. J.D. (Pittsburgh). ORCID: http://orcid.org/0000-0002-7512-9090. The author may be contacted at: johnriley007@gmail.com / Address: Sogang University School of Law, 35 Baekbeom-ro (Sinsu-dong), Mapo-gu, Seoul 04107 Korea. All the websites cited in this article were last visited on February 1, 2021.

### **1. Introduction**

A 2016 European Commission report estimated the market value of cryptocurrency to be above EUR 7 billion worldwide. [1] In 2018, the cumulative market capitalization of cryptocurrencies increased to USD 128 billion, which has grown to over USD 237 billion in 2020. [2] In the third quarter of 2020, the cryptocurrency Ethereum alone saw an average of over 1,100 daily transactions. [3] This explosive growth was led in no small part by China, the world's largest and most important market for cryptocurrency in terms of mining, investing and research. [4] For example, at its peak, 90 percent of cryptocurrency exchanges originated in China and 75 percent of all crypto mining occurred in China due to local advantages in power costs, chip production and cheap labor. [5] As a response to this explosive growth, the Chinese government began to severely restrict the expansion of this emerging market. For example, in 2013, the People's Bank of China (PBOC) banned financial institutions from engaging in Bitcoin-related businesses, which led to a 50 percent decrease in the value of Bitcoin. [6] As discussed more fully below, in 2017, the Chinese government banned cryptocurrency exchanges and initial coin offerings (ICOs). [7] Despite stricter regulations, China's market remained attractive for cryptocurrency transactions. After China cracked down on Bitcoin exchanges and ICOs in September 2017, Bitcoin's price dropped, but only temporarily. Not long after, Bitcoin entered a bull market, and Chinese Bitcoin investors turned to over-the-counter (OTC) trading, i.e., trading between two parties without an exchange. [8]
According to the IPO prospectus filed last year by Canaan (one of China's largest manufacturers of blockchain servers), sales of blockchain hardware used primarily for cryptocurrency mining in China were worth RMB 8.7 billion (USD 1.30 billion) in 2017, 45 percent of global sales by value. The prospectus forecasted that sales in China would rise to RMB 35.6 billion in 2020. [9] Moreover, there is ample evidence that the Chinese government is optimistic about the potential of blockchain to serve as the fundamental infrastructure for the global economy and is eager to dominate innovation in this market. [10] One example is that the PBOC has conducted one of the largest real-world trials for cryptocurrency in the world, e.g., by issuing digital currency in various test cities, including Shenzhen, where nearly 50,000 residents were issued its new digital currency through a public lottery system and are able to use the currency in over 3,000 stores within Shenzhen. [11] Moreover, for approximately 10 days in December 2020, China gave 100,000 residents of Suzhou 200 digital yuan as part of a pilot program for citizens to spend cryptocurrency in traditional brick-and-mortar stores. [12] Due to these recent developments and China's relative importance to the future of blockchain, this article will review the current trends in cryptocurrency regulation with a particular focus on China, and how recent developments in Chinese regulation and policy will continue to shape the development of the global cryptocurrency markets. This paper is composed of six parts, including a short Introduction and Conclusion. Part two will examine legal definitions of cryptocurrency. Part three will discuss regulatory approaches to cryptocurrency. Part four will analyze cryptocurrency laws in China. Part five will introduce the new Chinese Cryptography Law.

### **2. Legal Definitions**

There are many forms of cryptocurrencies which are based on the same type of decentralized technology known as blockchain. [13] Blockchain utilizes advanced cryptography (mathematical algorithms) and distributed ledger technology that allows digital transactions to be recorded transparently and verified by anyone on a distributed network of computer servers called nodes, which are incentivized to support the network by being rewarded with new coins and/or transactional fees. [14] Prior to the development of blockchain, in particular Bitcoin, Internet commerce relied on financial institutions to serve as trusted third-party intermediaries between merchants and consumers, which resulted in "inherent weaknesses" such as the non-reversibility of transactions (because third parties cannot avoid mediating disputes), increased transactional costs (due to third-party involvement), excessive collection and storage of a customer's personal information (because payments can be reversed), and a certain level of unavoidable fraud. [15] Prior to Bitcoin's creation, meanwhile, electronic transactions remained problematic without a trusted third-party intermediary. [16] Because transactions are "publicly announced" in a P2P system in which consensus is required in determining the order and verification of payments, thereby effectively eliminating security breaches, Bitcoin's key innovation is that it allows a payment system to operate without a trusted third-party intermediary, in a decentralized manner, through the publication of all transactions on a distributed ledger. [17]
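To make the chained-ledger idea concrete for non-technical readers, the sketch below (not drawn from the article itself) shows in deliberately simplified form how cryptographic hashes link records so that any retroactive edit is detectable by anyone holding a copy of the chain. Real networks such as Bitcoin add consensus mechanisms (e.g., proof-of-work) and peer-to-peer distribution on top of this structure, which are omitted here; all names and values are illustrative.

```python
# Minimal hash-chained ledger: each block stores the hash of its
# predecessor, so altering any recorded transaction changes every later
# hash and breaks verification.
import hashlib
import json

def block_hash(block: dict) -> str:
    """Hash a block's contents deterministically."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def append_block(chain: list[dict], transactions: list[str]) -> None:
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain: list[dict]) -> bool:
    """Recompute every link; a single edited transaction breaks the chain."""
    return all(chain[i]["prev_hash"] == block_hash(chain[i - 1])
               for i in range(1, len(chain)))

chain: list[dict] = []
append_block(chain, ["Alice pays Bob 1 BTC"])
append_block(chain, ["Bob pays Carol 0.5 BTC"])
print(verify(chain))                                     # True
chain[0]["transactions"][0] = "Alice pays Bob 100 BTC"   # tamper with history
print(verify(chain))                                     # False
```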
Although commonly associated with Bitcoin and payment systems, blockchain covers a wide array of systems that range from fully open to private, and it has the power to transform record-keeping for a wide variety of applications, including smart contracts, smart property, multi-signature software and many others. [18] While the underlying technology is basically the same, the terms used to describe blockchain vary greatly from country to country, such as: digital currency (Argentina, Thailand, and Australia), virtual commodity (Canada, China, Taiwan), crypto-token (Germany), payment token (Switzerland), cyber currency (Italy and Lebanon), electronic currency (Colombia and Lebanon), and virtual asset (Honduras and Mexico). [19] Similarly, a crypto-asset, according to the European Securities and Markets Authority (ESMA), is a private asset that relies primarily on cryptography and distributed ledger technology as part of its perceived or inherent value. [20] The ESMA refers to "virtual currencies" and "digital tokens" as crypto-assets, which are traditionally not issued by a central bank. [21] Perhaps the simplest definition of cryptocurrency was issued by the Bank of England, which is helpful to recall as it is referred to throughout the paper:

The first part of the word, 'crypto', means 'hidden' or 'secret' reflecting the secure technology used to record who owns what, and for making payments between users. The second part of the word, 'currency,' tells us the reason cryptocurrencies were designed in the first place: a type of electronic cash. But cryptocurrencies aren't like the cash we carry. They exist electronically and use a peer-to-peer system. There is no central bank or government to manage the system or step in if something goes wrong. [22]

### **3. Regulatory Approaches to Cryptocurrency**

Currently there is a wide variety of legal regimes regulating cryptocurrency around the world. Beyond protection for investors, some countries have included cryptocurrency markets within newly promulgated regulations related to taxation, money laundering, counterterrorism, and organized crime, requiring financial institutions to conduct due diligence on their customers. [23] For example, Australia and Canada recently enacted laws to bring cryptocurrency transactions and institutions that facilitate them under the ambit of money laundering and counter-terrorist financing laws. [24] The US Federal government considers virtual currencies property, [25] with certain agencies proposing comprehensive regulations for digital wallets and exchanges, [26] while other agencies have maintained a softer approach to the trading of cryptocurrencies. [27] State regulation varies, with some jurisdictions such as New York taking a 'tough' approach to cryptocurrency regulation by imposing strict disclosure and consumer-protection requirements for any business that offers cryptocurrency-related services in New York. [28]
Other countries such as Algeria, Bolivia, Morocco, Nepal, Pakistan, and Vietnam have banned all cryptocurrency activities. [29] Others allow citizens to engage in cryptocurrency but only outside of their borders (Qatar and Bahrain), while some allow private transactions as long as they are not facilitated by licensed financial institutions (Bangladesh, Iran, Thailand, Lithuania, Lesotho, China, and Colombia). [30] Some economies such as China, Macau and Pakistan have completely banned initial coin offerings, which are essentially the offer of a new cryptocurrency in order to raise capital, similar to an initial public offering of stock, while others strictly regulate them, e.g.:

**●** New Zealand regulations vary depending on whether the token offered is categorized as a debt security, equity security, managed investment product, or derivative.

**●** Netherlands regulations are applicable depending on whether the token offered is considered a security or a unit in a collective investment, an assessment made on a case-by-case basis. [31]

Among countries that do not yet recognize cryptocurrencies as legal tender, many see the technology's potential and are promoting crypto-friendly legal regimes to attract tech companies developing this nascent market (Spain, Belarus, the Cayman Islands, and Luxemburg). [32] Other countries are currently developing their own systems of cryptocurrencies (the Marshall Islands, Venezuela, the Eastern Caribbean Central Bank member states, and Lithuania). Finally, among countries that have previously warned citizens of cryptocurrency investment risks, several have also determined that the cryptocurrency market is too small to warrant specific regulation or an outright ban (Belgium, South Africa, and the United Kingdom). [33]
Considering these varied and diverse approaches to the regulation of cryptocurrencies, this paper will now focus on the legal developments in China.

### **4. Cryptocurrency Laws in China**

Since 2014, the PBOC has been developing a digital fiat currency fully backed by the government, which is expected to become one of the first digital currencies launched by a central bank. [34] The PBOC began conducting studies of digital currency several years ago when it established an Institute of Digital Money within the PBOC that has employed approximately 1,000 researchers. [35] Despite its apparent interest in developing a digital currency, the government has taken a very cautious approach. In March 2018, citing prudence, the need to avoid excessive speculation, and the country's desire for the financial sector to serve the "real economy," Xiaochuan Zhou, the then head of the PBOC, cautioned that China was in no hurry to develop digital currency. [36] According to Zhou, Chinese regulators do not recognize virtual currencies such as Bitcoin as a tool for retail payments like paper bills, coins, or credit cards, and the banking system is not accepting any existing virtual currencies or providing relevant services. [37] Likewise, in 2017, several other government agencies [38] issued statements announcing the ban of initial coin offerings (ICOs) in China, warning that tokens or virtual currencies involved in ICO financing were not issued by monetary authorities and could neither be accepted as legal tender nor circulated and used as a currency in the markets. [39] Therefore, despite its interest in developing a fully-backed digital currency, cryptocurrencies are not accepted by the relevant agencies nor utilized by the banking system to provide relevant services. [40]

Moreover, the Chinese government has severely cracked down on the private trading of cryptocurrencies in the name of protecting investors and reducing financial risk. Such restrictions have included the prohibition of ICOs, restricting cryptocurrency trading platforms, and discouraging the country's massive Bitcoin mining market, which sent ripples throughout the global cryptocurrency markets. [41] For example, in response to nearly USD 400 million raised by Chinese investors, in September 2017 the PBOC declared ICOs illegal and required refunds to investors for any amounts raised through an ICO, resulting in a USD 200 drop in the value of Bitcoin. [42] Moreover, in early 2018, the government banned all offshore cryptocurrency trading platforms after it was unable to eradicate trading following the shutdown of all domestic websites. [43] This strict regulatory approach fits within the context of China's overall economic growth and financial markets over the past 20 years. In particular, China's rapid development has come at the cost of over-leverage in the financial system, which the government seeks to correct:

In the past two years, control of financial risks and stabilization of the financial system has become the top priority of PBOC. Before ICOs, internet platforms providing P2P loans and micro lending had been targeted by PBOC and other financial regulators and are still in the process of cleansing and rectification. It is no surprise that ICOs, due to the sheer increase both in numbers and in the amount of funds raised, as well as some socially chaotic events caused by ICOs, were banned by the PBOC. [44]

In response to these restrictions, market participants changed tactics away from engaging in ICOs and began focusing on the sale of mining equipment to investors, who were then rewarded with tokens for mining activities, commonly referred to as Initial Miner Offerings (IMOs). [45] In an IMO, companies sell mining equipment to generate a particular cryptocurrency or token that is then rewarded to contributors, essentially disguising an ICO as an IMO. [46] In 2018, the National Internet Finance Association of China (NIFA), the national-level self-regulatory body for China's internet finance industry, recognized this subversion and issued a warning to potential investors claiming that IMOs were just a disguised form of ICOs and were therefore prohibited. [47] Shortly after its release, the IMO market in China collapsed. [48]

Notwithstanding this tough approach, the Chinese government has supported the development of the underlying blockchain technology to help modernize China's financial system and to become a global leader in this cutting-edge technology, which it believes will have an economic and technological impact similar to that of artificial intelligence.
In 2019, President Xi Jinping stated that China needed to "seize the opportunities" presented by blockchain because it represents an "important breakthrough in independent innovation of core technologies." [49] The economic fallout from the COVID-19 pandemic further pushed the government to focus on the development of digital technologies, with China's Ministry of Industry and Information declaring that blockchain is one of the core technological developments that has "played a crucial role in both epidemic control and prevention, alongside the resumption of industrial production." [50] While these developments have led to renewed investment in blockchain technologies within China, the government continues to take a cautious approach to limit potential social problems associated with the development of blockchain:

The endorsement of blockchain technology is not without reservation. In the view of PBOC, blockchain technology and digital currency should be researched for the goal of better service to the real economy. PBOC believes that blockchain technology can be developed without the use of tokens, which are believed to have been the roots of various social problems such as illegal fundraising and fraud. [51]

Prohibitions on the issuance and sale of tokens are regulated in the Law of the People's Republic of China on the People's Bank of China (amended in 2003) and are administered under the supervision of the PBOC. [52] Article 20 states that "[n]o units or individuals may print or sell promissory notes as substitutes for Renminbi to circulate on the market." [53] Individuals and institutions that issue and sell tokens illegally will be required to cease such acts immediately and face fines of up to RMB 200,000. [54] In 2018, NIFA urged investors to use the utmost caution when reviewing ICOs that may contain misleading or fraudulent claims. Moreover, NIFA stated its intention to enhance security measures. Further, while the warning does not ban overseas cryptocurrency trading itself, policymakers may introduce stricter regulatory measures in the future. [55] More recently, China passed the country's long-awaited civil code and expanded the scope of inheritance rights to include cryptocurrency, which is now protected under the new law. [56]

While attempts to legalize cryptocurrency have been made, cryptocurrency transactions continue to be heavily restricted by the government. Most likely, China will move towards the creation of the world's first digital currency controlled and backed by a central bank, having already finished building the infrastructure for its Digital Currency Electronic Payment system and laid the groundwork for giving the digital yuan the same legal status as the physical yuan. [57] As noted above, the PBOC has recently conducted one of the largest real-world trials for cryptocurrency in the world, e.g., by issuing digital currency in various test cities, including Shenzhen, where nearly 50,000 residents were issued its new digital currency through a public lottery system and are able to use the currency in over 3,000 stores within Shenzhen. [58] Additionally, China also gave 100,000 residents of Suzhou 200 digital yuan as part of a pilot program for citizens to spend cryptocurrency in traditional brick-and-mortar stores. [59]
Therefore, despite its tight control of unregulated instruments like cryptocurrency, the government will likely continue to lead in the development of blockchain technology and, when it deems prudent, the development of digital currencies managed through centralized control. Other public and private development projects utilizing blockchain technology include:

**●** A cross-border financing platform administered by the State Administration of Foreign Exchange, which facilitates financing and information verification for cross-border transactions, used in 19 provinces throughout the country;

**●** Smart contracts that assist in the automation of contracts and adjudication of cases, introduced by the Hangzhou Internet Court;

**●** An identification system for the use of government services in Shenzhen;

**●** A logistics application introduced by Customs in Tianjin Province that facilitates transactions;

**●** A number of public projects expected to be developed in fields such as anticorruption, security, translation, and criminal investigations; and

**●** A number of private use cases including: product certification and verification, invoicing, e-billing, recording of intellectual property rights, and management of pharmaceutical supply chains. [60]

In addition to these legislative and administrative policy developments, the Chinese courts have also recently issued a series of decisions regarding cryptocurrency. In particular, Chinese courts have recognized the validity of cryptocurrency as legal property worthy of protection. For example, in July 2019, the Hangzhou Internet Court, which has subject matter jurisdiction for e-commerce cases in the city of Hangzhou, the largest e-commerce city in China and home to many such companies as Alibaba, [61] became the first court in China to uphold the legality of Bitcoin ownership, holding that it was protected under China's General Civil Law. [62] In 2013, the plaintiff Wu purchased 2.675 Bitcoins for approximately RMB 20,000 from the store FXBTC, which was hosted on Taobao, China's largest online marketplace.
[66] Other Chinese courts made similar decisions in 2020 with the Taobao case regarding the analysis and recognition of digital currency. For example, the Shanghai No. 1 Intermediate People’s Court held that Bitcoin is an asset protected by law. [67] In that case, plaintiffs sued defendants alleging the theft of 18.88 Bitcoins and 6,466 Skycoins. [68] The defendants argued that Bitcoin and Skycoin were not legal property under Chinese law, and therefore should be ordered to return the coins to the plaintiffs. [69] The chief judge, Liu Jiang, held that Bitcoins were assets deserving of protection because the government had never explicitly rejected defining Bitcoin as an asset, nor did the law prohibit Chinese citizens from owning digital currencies. [70] Likewise, the Shenzhen District People’s Court recently held that Ethereum is legally protected property with an economic value. [71] In this case, a disgruntled blockchain engineer stole his company’s private key and payment password, allegedly stealing Ethereum and other digital coins. In holding that Ethereum is lawful property, the court ordered the defendant to pay plaintiff damages, in addition to imposing a fine and a seven-month prison sentence on the defendant. [72] These decisions indicate a willingness on the part of the Chinese courts to deal ##### 144 ----- Cryptocurrency Regulation in China ## CWR ###### with and recognize ownership rights in cryptocurrencies. ### **5. China’s New Cryptography Law** ###### On January 1, 2020, the Cryptography Law of the People’s Republic of China entered into force “for the purpose of regulating the application and administration of cryptography, promoting the development of cryptography work, ensuring cyber and information security, safeguarding national security and public interests, and protecting the legitimate rights and interests of citizens, legal persons and other organizations.” [73] Chapter III of the Cryptography Law regulates Commercial Cryptography and requires the government to encourage “the research, development, academic exchange, transfer and application of commercial cryptography technology, facilitates a unified, open, competitive, and orderly commercial cryptography market environment, encourages and promotes the development of commercial cryptography industry.” [74] Articles 22-25 require the government to adopt appropriate standards in the area of commercial cryptography. [75] Cryptography administrative departments shall establish supervisory control over commercial cryptography including routine and randomized inspections; creating a unified information platform to supervise and manage commercial cryptography; coordinating the supervision mechanism and social credit system; as well as strengthening self-regulation by cryptography businesses and the public. [76] Despite the lack of clear definitions regarding cryptocurrencies, the Cryptography Law provides the foundation for the further development of this area. Even a cursory review of the new law indicates that the Chinese government intends to tightly administer and control cryptographic activities based on the text of the law although it seems obvious that the government wants to support its growth. While the issue of regulating cryptocurrency transactions remains unclear, perhaps the government will develop appropriate rules and a control mechanism for this activity. 
However, many questions remain open as to how the Chinese government will promote the development of blockchain technologies without losing its ability to control and regulate decentralized cryptocurrencies such as Bitcoin that lack central monetary authority. Regardless, due to the size ##### 145 ----- John Riley ## CWR ###### and influence China has in the cryptocurrency markets, other countries will be watching carefully in developing their own policies not to lose out on leading the development of this cutting-edge technology. ### **6. Conclusion ** ###### As shown above, there is no single approach in the world regarding the legal regulation of cryptocurrency. Most countries are wary of legalizing this payment instrument, fearing problems associated with tax evasion, terrorist financing and other illegal transactions. Nevertheless, the issue of legalization of cryptocurrencies has recently moved to a different level. Governments realize that despite the lack of legal instruments, transactions with cryptocurrencies are carried out on the black market, and the turnover from these transactions is significant. As such, attempts are being made to define the rules by which transactions with cryptocurrency can occur. China will not stand on the sidelines as other countries move forward. Due to recent developments and its massive influence in the blockchain economy, China’s regulation and policies are expected to continue to shape the development of the global cryptocurrency markets. ### **R efeRrences** 1. Commission Staff Working Document Impact Assessment Accompany the Document -Proposal for a Directive of the European Parliament and the Council Amending Direc tive (EU) 2015/849 on the Prevention of the Use of the Financial System for the Purposes of Money Laundering or Terrorist Financing and Amending Directive 2009/101/EC, SWD/2016/0223 Final, *available at* https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/ ?uri=CELEX:52016SC0223&from=EN. 2. J. Rudden, *Cryptocurrency Market Capitalization 2013-2019*, S tatiSsta, Nov. 6, 2020, *available at* https://www.statista.com/statistics/730876/cryptocurrency-maket-value. 3. J. Rudden, *Number of Daily Cryptocurrency Transactions 2020, By Type*, S tatiSsta, Nov. 5, 2020, *available at* https://www.statista.com/statistics/730876/cryptocurrency-maket value. ##### 146 ----- Cryptocurrency Regulation in China ## CWR 4. *Crypto in China: A Detailed History*, S kalex, *available at* https://www.skalex.io/crypto china. 5. *Id* . 6. Rain Xie, *Why China Had To “Ban” Cryptocurrency But the U.S. Did Not: A Comparative* *Analysis of Regulations on Crypto-Markets between the U.S. and China,* 18 W aSsh . U. G lobal S tUud . lL. R ev . 474-5 (2019), *available at* https://openscholarship.wustl.edu/cgi/ viewcontent.cgi?article=1684&context=law_globalstudies. 7. *Id* . at 475-7. 8. Zhehao Chen, *A Guide to China’s Cryptocurrency Market: Which Tokens Are Most* *Popular?*, Longlash.com (June 2020), *available at* https://www.longhash.com/en/news/3360/ A-Guide-to-China's-Cryptocurrency-Market:-Which-Tokens-Are-Most-Popular%3F. 9. B. Goh & A. John, *China Wants to Ban Bitcoin Mining*, R eUuteRSrs, Apr. 9, 2019, *available* *at* https://www.reuters.com/article/us-china-cryptocurrency/china-wants-to-ban-Bitcoin mining-idUSKCN1RL0C4. 10. *Crypto in China: A Detailed History*, S kalex, *available at* https://www.skalex.io/crypto china. 11. A. Kharpal, *China Hands Out USD1.5 Million of its Digital Currency in One of the* *Country’s Biggest Public Tests*, CNBC, Oct. 
12. K. Rapoza, Does China Have A Role In Bitcoin's Rise?, Forbes, Jan. 10, 2021, available at https://www.forbes.com/sites/kenrapoza/2021/01/10/does-china-have-a-role-in-Bitcoins--rise/?sh=7dc6b0b24965.
13. The Law Library of Congress Global Legal Research Center, Regulation of Cryptocurrency around the World, June 2018, at 1, available at https://www.loc.gov/law/help/cryptocurrency/cryptocurrency-world-survey.pdf.
14. R. Ali, Innovations in Payment Technologies and the Emergence of Digital Currencies, Bank of England (2014), available at http://www.cftc.gov/PressRoom/SpeechesTestimony/opagiancarlo-14#P47_14508. See also Satoshi Nakamoto, Bitcoin: A Peer-to-Peer Electronic Cash System, Bitcoin.org (2009), at 4, available at https://Bitcoin.org/Bitcoin.pdf.
15. Nakamoto, id.
16. Id.
17. Ali, id.
18. R. Houben & A. Snyers, Cryptocurrencies and Blockchain - Legal Context and Implications for Financial Crime, Money Laundering and Tax Evasion 15 (Paper requested by the TAX3 committee of the EU Parliament, July 2018), available at https://www.europarl.europa.eu/cmsdata/150761/TAX3%20Study%20on%20cryptocurrencies%20and%20blockchain.pdf.
19. Id.
20. Advice: Initial Coin Offerings and Crypto-Assets 7 (ESMA50-157-1391, Jan. 9, 2019), available at https://www.esma.europa.eu/sites/default/files/library/esma50-157-1391_crypto_advice.pdf.
21. Id. at 7-8.
22. What are Cryptoassets (Cryptocurrencies)?, Bank of England, available at https://www.bankofengland.co.uk/knowledgebank/what-are-cryptocurrencies.
23. The Law Library of Congress, supra note 13.
24. Id.
25. IRS Virtual Currency Guidance: Virtual Currency Is Treated as Property for U.S. Federal Tax Purposes; General Rules for Property Transactions Apply, IRS (Mar. 25, 2014), available at https://www.irs.gov/newsroom/irs-virtual-currency-guidance.
26. J. Clayton, Statement on Cryptocurrencies and Initial Coin Offerings, U.S. Securities and Exchange Commission (Dec. 11, 2017), available at https://www.sec.gov/news/public-statement/statement-clayton-2017-12-11.
27. Commodity Futures Trading Commission, Bitcoin, available at https://www.cftc.gov/Bitcoin/index.htm.
28. For example, New York regulations require a comprehensive surveillance regime and record-keeping for all virtual currency transactions, including for a period of seven years the following: (1) the identity and physical addresses of the party or parties to the transaction that are customers or accountholders of the Licensee and, to the extent practicable, any other parties to the transaction; (2) the amount or value of the transaction, including in what denomination purchased, sold, or transferred; (3) the method of payment; (4) the date or dates on which the transaction was initiated and completed; and (5) a description of the transaction. See N.Y. Comp. Codes R. & Regs. Title 23 § 200.15(e)(1) (2015).
29. For example, the Central Bank of Vietnam prohibits the issuance, supply and use of Bitcoin and other similar virtual currencies. See State Bank Declared Banning the Use of Bitcoin <available only in Vietnamese>, Tuoi Tre, Oct. 28, 2017, available at https://tuoitre.vn/ngan-hang-nha-nuoc-tuyen-bo-cam-su-dung-Bitcoin-20171028102135916.htm.
30. The Law Library of Congress, supra note 13, at 1-2.
31. Id. at 2.
32. Id.
33. Id.
34. See China Accelerates Blockchain Adoption in the New Decade, Jones Day (Commentaries) (Jan. 2020), available at https://www.jonesday.com/en/insights/2020/01/china-accelerates-blockchain-adoption.
35. See China is Ready for Central Bank Digital Currency Issuance. Here's The Plan, Ledger Insights (2019), available at https://www.ledgerinsights.com/china-ready-central-bank-digital-currency-cbdc.
36. Xiang Bo, China Not in Hurry to Develop Digital Currency: Central Bank, Xinhuanet, Mar. 9, 2018, available at http://www.xinhuanet.com/english/2018-03/09/c_137027677.htm.
37. The Law Library of Congress, supra note 13, at 106.
38. The PBOC, the Cyberspace Administration of China (CAC), the Ministry of Industry and Information Technology (MIIT), the State Administration for Industry and Commerce (SAIC), the China Banking Regulatory Commission (CBRC), the China Securities Regulatory Commission (CSRC), and the China Insurance Regulatory Commission (CIRC).
39. The Law Library of Congress, supra note 13, at 106.
40. L. Zhang, Regulation of Cryptocurrency: China, The Law Library of Congress (June 2018), available at https://www.loc.gov/law/help/cryptocurrency/china.php.
41. Id.
42. See China Bans Initial Coin Offerings Calling Them 'Illegal Fundraising,' BBC, Sept. 5, 2017, available at https://www.bbc.com/news/business-41157249.
43. Xie Yu, China to Stamp out Cryptocurrency Trading Completely with Ban on Foreign Platforms, South China Morning Post, Feb. 5, 2018, available at http://www.scmp.com/business/banking-finance/article/2132009/china-stamp-out-cryptocurrency-trading-completely-ban.
44. Wenhao Shen, Regulation of Cryptocurrency in China, JunZeJun Law Offices (June 2020), available at https://www.mondaq.com/china/fin-tech/944330/regulation-of-cryptocurrency-in-china.
45. N. De, A Self-Regulatory Organization in China is Warning about a New Kind of Mining-Focused Cryptocurrency Offering, Coindesk, Jan. 12, 2018, available at https://www.coindesk.com/chinas-internet-finance-association-warns-initial-miner-offerings.
46. Liangyu, China's Industry Organization Warns of Risks in "Initial Miner Offerings", Xinhuanet, Jan. 13, 2018, available at http://www.xinhuanet.com/english/2018-01/13/c_136892763.htm.
47. Id.
48. Shen, supra note 44.
49. A. Kharpal, With Xi's Backing, China Looks to Become a World Leader in Blockchain as US Policy is Absent, CNBC, Dec. 15, 2019, available at https://www.cnbc.com/2019/12/16/china-looks-to-become-blockchain-world-leader-with-xi-jinping-backing.html.
50. B. Savic, China's New Digital Industrial Transformation, Diplomat, June 19, 2020, available at https://thediplomat.com/2020/06/chinas-new-digital-industrial-transformation.
51. Shen, supra note 44.
52. J. Dewey, Blockchain & Cryptocurrency Regulation, Association of Corporate Counsel (2019), at 263, available at https://www.acc.com/sites/default/files/resources/vl/membersonly/Article/1489775_1.pdf.
53. P.R.C. Laws on the People's Bank of China, art. 20, available at http://www.china.org.cn/business/laws_regulations/2007-06/22/content_1214826.htm.
54. Id. art. 45.
55. Dewey, supra note 52, at 264.
56. K. Helms, China Passes Law Protecting Cryptocurrency Inheritance, Bitcoin.com, May 30, 2020, available at https://news.Bitcoin.com/china-law-cryptocurrency-inheritance.
57. Yue Hu, Liwei Wang & Meihan Luo, In Depth: China's Digital Currency Ambitions Lead the World, Nikkei Asia, Dec. 3, 2020, available at https://asia.nikkei.com/Spotlight/Caixin/In-depth-China-s-digital-currency-ambitions-lead-the-world.
58. A. Kharpal, China Hands Out USD1.5 Million of Its Digital Currency in One of the Country's Biggest Public Tests, CNBC, Oct. 12, 2020, available at https://www.cnbc.com/2020/10/12/china-digital-currency-trial-over-1-million-handed-out-in-lottery.html.
59. K. Rapoza, Does China Have A Role In Bitcoin's Rise?, Forbes, Jan. 10, 2021, available at https://www.forbes.com/sites/kenrapoza/2021/01/10/does-china-have-a-role-in-Bitcoins--rise/?sh=7dc6b0b24965.
60. Supra note 34.
61. Guodong Du & Meng Yu, A Close Look at Hangzhou Internet Court: Inside China's Internet Courts Series, China Justice Observer, Nov. 3, 2019, available at https://www.chinajusticeobserver.com/a/a-close-look-at-hangzhou-internet-court.
62. M. Moos, Chinese Court Upholds Legality of Bitcoin Ownership, BTC Protected by China's Property Laws, CryptoSlate, July 18, 2019, available at https://cryptoslate.com/chinese-court-upholds-legal-Bitcoin-ownership-btc-protected-china-property-law.
63. Shuxin Zhang, The First Court in China Determines the Legal Status of Bitcoin [首例比特币财产侵权纠纷案宣判 认定比特币虚拟财产地位], Beijing News, July 18, 2019, available at http://www.bjnews.com.cn/finance/2019/07/18/604945.html.
64. Id.
65. Id.
66. Id.
67. K. Helms, Chinese Court Rules Bitcoin Is Asset Protected by Law, Bitcoin.com, May 9, 2020, available at https://news.Bitcoin.com/chinese-court-Bitcoin-asset-protected-by-law.
68. Id.
69. Id.
70. Id.
71. K. Helms, Chinese Court Declares Ethereum Legal Property with Economic Value, Bitcoin.com, Apr. 28, 2020, available at https://news.Bitcoin.com/chinese-court-ethereum-legal.
72. Id.
73. PRC Cryptography Law art. I (adopted at the 14th Meeting of the Standing Committee of the Thirteenth National People's Congress on Oct. 26, 2019), available at http://www.npc.gov.cn/englishnpc/c23934/202009/dfb74a30d80b4a2bb5c19678b89a4a14.shtml.
74. Id. art. 21.
75. Id. arts. 22-25.
76. Id. art. 31.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.14330/CWR.2021.7.1.06?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.14330/CWR.2021.7.1.06, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBYNC", "status": "HYBRID", "url": "http://cwr.yiil.org/home/pdf/archives/2021v7n1/cwr_v7n1_06.pdf" }
2,021
[ "Review" ]
true
2021-03-30T00:00:00
[ { "paperId": "099e97d13056275cd1dd5a214e5c30c0a4a76525", "title": "The Regulation of Cryptocurrency in China" }, { "paperId": null, "title": "Does China Have A Role In Bitcoin’s Rise" }, { "paperId": null, "title": "Number of Daily Cryptocurrency Transactions 2020, By Type, StatiSta, Nov. 5, 2020, available at https://www.statista.com/statistics/730876/cryptocurrency-maketvalue" }, { "paperId": null, "title": "China’s New Digital Industrial Transformation, diPloMat" }, { "paperId": null, "title": "Cryptocurrency Market Capitalization 2013-2019" }, { "paperId": null, "title": "China Passes Law Protecting Cryptocurrency Inheritance, Bitcoin.com, May 30, 2020, available at https://news.Bitcoin.com/china-law-cryptocurrency-inheritance" }, { "paperId": null, "title": "Chinese Court Declares Ethereum Legal Property with Economic Value, Bitcoin.com" }, { "paperId": null, "title": "The First Court in China Determines the Legal Status of Bitcoin" }, { "paperId": null, "title": "Blockchain & Cryptocurrency Regulation, Association of Corporate Counsel (2019), at 263, available at https://www.acc.com/sites/default/files/resources/vl/ membersonly/Article/1489775_1.pdf" }, { "paperId": null, "title": "Why China had to “Ban” Cryptocurrency but the U.S. did not: A Comparative Analysis of Regulations on Crypto-Markets Between the U.S. and China" }, { "paperId": null, "title": "See China is Ready for Central Bank Digital Currency Issuance. Here's The Plan, ledGeR inSiGhtS" }, { "paperId": null, "title": "China's Industry Organization Warns of Risks in \"Initial Miner Offerings" }, { "paperId": null, "title": "China Not in Hurry to Develop Digital Currency: Central Bank, xinhUanet, Mar" }, { "paperId": null, "title": "A Self-Regulatory Organization in China is Warning about a New Kind of MiningFocused Cryptocurrency Offering, CoindeSk" }, { "paperId": null, "title": "Regulation of Cryptocurrency: China, The Law Library of Congress (June 2018), available at https://www.loc.gov/law/help/cryptocurrency/china.php" }, { "paperId": null, "title": "Statement on Cryptocurrencies and Initial Coin Offerings, U.S" }, { "paperId": null, "title": ") a description of the transaction" }, { "paperId": "ecdd0f2d494ea181792ed0eb40900a5d2786f9c4", "title": "Bitcoin : A Peer-to-Peer Electronic Cash System" }, { "paperId": "a2b907a0886996df2d9dbebc319e7ad86725e481", "title": "Law Library of Congress" }, { "paperId": null, "title": "China to Stamp out Cryptocurrency Trading Completely with Ban on Foreign Platforms" }, { "paperId": "d829197c1d1db56eb570371c666507f8ffa19cee", "title": "Innovations in payment technologies and the emergence of digital currencies" }, { "paperId": null, "title": "Chinese Court Upholds Legality of Bitcoin Ownership, BTC Protected by China's Property Laws, CRyPtoSlate" }, { "paperId": null, "title": "See China Bans Initial Coin Offerings Calling Them 'Illegal Fundraising,' bbC" }, { "paperId": null, "title": "Cryptocurrencies and Blockchain-Legal Context and Implications for Financial Crime, Money Laundering and Tax Evasion 15 (Paper requested by the TAX3 committee of eU Parliament" }, { "paperId": null, "title": "Laws on the People's Bank of China, art" }, { "paperId": null, "title": "IRS Virtual Currency Guidance: Virtual Currency Is Treated as Property for U.S. Federal Tax Purposes; General Rules for Property Transactions Apply" }, { "paperId": null, "title": "A Guide to China's Cryptocurrency Market: Which Tokens Are Most Popular?" 
}, { "paperId": null, "title": "Supra note 34" }, { "paperId": null, "title": "With Xi's Backing, China Looks to Become a World Leader in Blockchain as US Policy is Absent, CNBC" }, { "paperId": null, "title": "Chinese Court Rules Bitcoin Is Asset Protected by Law" }, { "paperId": null, "title": "China Wants to Ban Bitcoin Mining, ReUteRS" }, { "paperId": null, "title": "China Hands Out USD1.5 Million of its Digital Currency in One of the Country's Biggest Public Tests, CNBC" }, { "paperId": null, "title": "A Close Look at Hangzhou Internet Court: Inside China's Internet Courts Series, China JUStiCe obSeRveR" } ]
9,874
en
[ { "category": "Business", "source": "s2-fos-model" }, { "category": "Computer Science", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01fe7aaf89a6175d234e390db4a34018eb2a571c
[]
0.800043
Problems of Displaying Transactions with Digital Assets in Accounting
01fe7aaf89a6175d234e390db4a34018eb2a571c
Scientific Bulletin of Mukachevo State University Series "Economics
[]
{ "alternate_issns": null, "alternate_names": [ "Sci Bull Mukachevo State Univ Ser \"economics" ], "alternate_urls": null, "id": "f31bba6f-1cc2-43d3-84b7-dc6346450909", "issn": "2313-8114", "name": "Scientific Bulletin of Mukachevo State University Series \"Economics", "type": null, "url": "http://www.msu.edu.ua/visn/" }
At the present stage of the digital economy, approaches to the use of cash are changing. Electronic non-cash payments are increasingly used to order services and pay for goods online. Therefore, reflecting such transactions in accounting is of particular importance for the accounting system. Using e-wallets and e-business environments, displaying cryptocurrency transactions, transferring funds, mining, investing in high-risk assets – all this requires learning how to account for such transactions. The main purpose of the study is to scientifically substantiate the approaches to reflecting transactions with digital assets in accounting and to determine the ways by which cryptocurrency is received by the enterprise. In the course of the research, such methods of scientific cognition as description, analysis and synthesis were used. It is established that there is no single approach to the recognition and accounting of cryptocurrencies. It is advisable to consider cryptocurrency, which belongs to intangible assets, only in terms of long-term investments. Another vector of development is the identification of cryptocurrency as a resource or inventory and accounting for it as inventory. It is determined that, first, before using cryptocurrency, it is necessary to economically justify a certain method of cryptocurrency valuation at the legislative level. In the future, this is necessary for companies that will use cryptocurrency to be able to apply the method consistently in their accounting policies. The author analyzed the forms of electronic money and found that they can exist in the form of information within computer networks (network-based) and may have an additional connection with a payment smart card (card-based). In order to identify the subject of accounting, the author determines that cryptocurrency should be accounted for as an intangible asset, while wallets for storing cryptocurrency should be accounted for as other non-current tangible assets.
UDC 657.422.4 DOI: 10.31339/2313-8114-2020-7(2)-87-95

# Problems of Reflecting Transactions with Digital Assets in Accounting

## Andrii A. Makurin[*]

_Dnipro University of Technology_
_49005, 19 Dmytro Yavornytskyi Ave., Dnipro, Ukraine_

_(Received: 23.08.2020, Revised: 22.09.2020, Accepted: 25.10.2020)_
_*Corresponding author_

Abstract. At the present stage of functioning of the digital economy, approaches to the use of cash are changing. Electronic non-cash payments are used to order services and pay for goods on the Internet increasingly often. Therefore, reflecting such operations in accounting constitutes an essential value for the accounting system. Using e-wallets and e-business environments, mapping cryptocurrency transactions, transferring funds, mining, investing in high-risk assets – all this requires learning the accounting methods for such transactions. The main purpose of this study is to scientifically substantiate approaches to reflecting operations with digital assets in accounting and determine the ways of receiving cryptocurrency by the enterprise. This study employed such methods of scientific cognition as description, analysis, and synthesis.
It is established that there is no single approach to the recognition and accounting of cryptocurrencies. It is advisable to classify a cryptocurrency as an intangible asset only in the context of long-term investment. Another development vector is identifying cryptocurrencies as a resource or inventory and accounting for them as inventory. It is determined that, first of all, before using cryptocurrencies, it is necessary to economically justify a certain method of evaluating cryptocurrencies at the legislative level. In the future, this will be necessary for companies that use cryptocurrency to be able to apply this method consistently in their accounting policies. The author of this study analysed the forms of electronic money and found that they can exist as information within computer networks (network-based) and can have an additional connection with a smart payment card (card-based). To identify the subject of accounting, the author determines that cryptocurrency should be considered an intangible asset, while wallets for storing cryptocurrency should be considered other non-current tangible assets.

Keywords: cryptocurrency accounting, business operations, mining, blockchain, digital economy

### Introduction

Modern information technology is changing the world. Blockchain technology is gradually developing; based on it, it becomes possible to investigate issues related not only to cryptocurrency, but also to medicine, management, and the conduct of elections – that is, to use blockchain technology for the benefit of humanity. The modern transformation of the economy requires a rapid response to the processes transpiring in it. Such concepts as accounting digitalisation and multi-level digitalisation are being introduced. Therefore, the modern vision of accounting and taxation is changing and improving. The emergence of new digital assets in the form of cryptocurrencies creates a challenge to the accounting system regarding their reflection in it. Any activity must be defined and recognised, and its transactions must be subject to taxation, since taxes are the source of replenishing the budget [1].

Considering modern approaches to ordering services and paying for goods without leaving home, non-cash payments become of particular importance and necessitate the investigation of approaches to reflect such transactions in the conventional accounting system. In the context of conducting e-business, when using electronic (digital) wallets, there is a need to account for transactions with them. One needs to establish which ledgers to use to reflect a certain number of such wallets. Furthermore, one needs to understand which ledgers to use to reflect activities such as mining, and how to classify such activities (financial, investment, or operating). Making investments in highly liquid, high-risk assets – all this requires investigating approaches to reflect such operations in accounting.

If an enterprise controls resources based on previous experience and expects to receive economic benefits in the future, then such resources are assets. Disputes continue among researchers regarding the economic nature of cryptocurrencies.
Cryptocurrency should be understood as a special type of intangible asset that an enterprise can use for investment. That is, a cryptocurrency constitutes a non-monetary asset that has no material form. An asset can be considered any resource whose value can be reliably estimated and whose use is expected to bring benefits in the future. It is advisable to consider whether a cryptocurrency belongs to intangible assets only in the context of long-term investment. Given the high volatility of the cryptocurrency market and its constantly changing value, the process of evaluating such an asset becomes very complex. Determining the real value of one Bitcoin depends on the resources spent (for example, the power of equipment for generating a Bitcoin, internet speed, electricity costs, the complexity of the reward redistribution system, the main characteristics of the pool, etc.) [2].

Apart from the fact that a cryptocurrency can be an intangible asset, under certain conditions it can also be identified as inventory and reflected in other ledgers. That is, if the cryptocurrency is being mined with the subsequent purpose of selling it, then it is proposed to apply the provisions of IAS 2 "Inventories" [3]. This approach still requires discussion among researchers. Since cryptocurrency carries the high risk inherent in changes in value, it cannot be classified as a standard asset. A company's funds comprise deposit funds, cash in the cash register, funds in the current account, highly liquid investments, securities, and precious metals – that is, assets that can be quickly converted into cash, or that are cash and have high liquidity. It is very difficult to correctly assess the value of cryptocurrencies in the accounting system on a given balance sheet or transaction date. The exchange rate difference when buying or selling can be significant, since the market value of a cryptocurrency depends on the supply-demand mechanism in the closed market of this instrument [4]. For example, in August 2019, 1 Bitcoin was worth just over 10,000 US dollars, and in October of the same year its value was 8,000 US dollars. About 50-80% of all cryptocurrency mining capacity is concentrated in China, and therefore the ban on cryptocurrency mining in eastern China adversely affected the value of Bitcoin and digital assets [5].

First of all, before using cryptocurrency, it is necessary to economically justify a certain method of evaluating cryptocurrency at the legislative level. In the future, this is necessary so that companies that use cryptocurrency can fix a certain method in their accounting policies. Among accounting academics, including O. Petruck and O. Novak [6], O. Augustova [7], V. Fostolovich [8], S. Lecgenchyc and A. Semenec [9], and T. Tarasova, O. Usatenko, A. Makurin, V. Ivanenko and A. Cherchata [10], discussions continue concerning which valuation method to use, since for various purposes of using cryptocurrency its value plays an essential role. For example, the value of a cryptocurrency can be determined by the cost of resources spent or by revalued cost; by net realisable value (for mining purposes); or by fair value for traders and those who want to conduct certain operations on a crypto exchange.
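To make these valuation alternatives concrete, the following minimal Python sketch contrasts a cost-of-resources valuation of mined Bitcoin with a fair-value market quote. All figures are hypothetical assumptions chosen for illustration, not data from this study.

```python
# Illustrative only: hypothetical figures contrasting the cost-based
# valuation of mined cryptocurrency with its fair (market) value.

def mining_cost_per_btc(hardware_cost, hardware_life_days, power_kw,
                        electricity_price_kwh, btc_mined_per_day):
    """Cost of resources spent to generate one BTC."""
    daily_hardware = hardware_cost / hardware_life_days   # straight-line depreciation
    daily_power = power_kw * 24 * electricity_price_kwh   # electricity cost per day
    return (daily_hardware + daily_power) / btc_mined_per_day

cost_basis = mining_cost_per_btc(
    hardware_cost=2_500.0,        # ASIC purchase, USD (hypothetical)
    hardware_life_days=3 * 365,   # assumed useful life
    power_kw=3.25,                # rig power draw
    electricity_price_kwh=0.06,   # USD per kWh
    btc_mined_per_day=0.0005,     # expected yield
)

fair_value = 8_000.0              # market quote, USD (October 2019 level cited above)
print(f"cost basis: {cost_basis:,.2f} USD; fair value: {fair_value:,.2f} USD")
```

Depending on which figure is fixed in the accounting policy, the same coin can enter the balance sheet at materially different values, which is precisely why the choice of method needs legislative grounding.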
Consequently, the professional competencies of an accountant in the context of blockchain technology are acquiring a fundamentally new level of development.

Among Ukrainian researchers, it is worth noting O. Petruck and O. Novak, who investigated the essence of cryptocurrency and studied the features of accounting for such an asset. They distinguished the concepts of cryptocurrency and electronic money and provided examples of accounting for operations related to cryptocurrencies. They proved that cryptocurrency does not correspond to the term "money", and therefore it cannot be reflected in the balance sheet of the enterprise under the item "Cash and cash equivalents" [6]. I. Derun and I. Sklyaruk investigated the classification of cryptocurrencies and studied their features and types. The authors analysed the model of decentralised digital currency schemes and covered their main characteristics. They proposed an original approach to reflecting business operations with cryptocurrency in ledgers [11].

O. Augustova considered the economic and legal essence of cryptocurrencies and investigated the stages of development of digital currency on the world market. This author suggests defining cryptocurrency as a virtual currency and equating it with the means of payment of business entities [7]. V. Fostolovich investigated the issues of the digital information space and the necessity of solving the issue of accounting and taxation of operations with cryptocurrency [8]. The author suggests considering cryptocurrency as a financial instrument that should be evaluated at fair value. He notes that all operations related to the generation of income and the incurring of expenses should be controlled by the tax administration, which should also monitor existing wallets for the safety of any cryptocurrency.

Despite the existing developments of researchers, the increased interest in such a specific asset as cryptocurrency, and the rapid development of information technology, numerous additional studies of this asset are required. It is necessary to obtain unambiguous answers to the frequently asked questions related to recognition, calculation of value, and evaluation, as well as reflection in ledgers and understanding of the tax base, for the state to receive tax revenues. Furthermore, the National Bank of Ukraine does not recognise cryptocurrency as a means of payment, which makes its use illegal. At the same time, converting cryptocurrencies into national monetary units and vice versa does not violate the legislation of Ukraine [8].

### Materials and Methods

This study is based on a comprehensive analysis of events in the world of cryptocurrencies. The research was performed in two main areas. The first area is the analysis of previous studies related to the theoretical premise of the emergence of cryptocurrencies. The main research method is the empirical method, which was employed to observe changes in the attitude of countries towards cryptocurrencies. The measurement process also provided an insight into the scope of the Bitcoin market. As a result of reviewing the literature, it was established that cryptocurrency, as electronic money, constitutes a non-personalised payment instrument and circulates outside the banking system in electronic form.
This is precisely why the state cannot control this process, making the national banks of many countries suspicious of such money. As for the second area, the study investigated the legal status of cryptocurrencies in Ukraine and abroad. It was established that the lack of state control is conditioned upon the imperfection of the system of legal regulation of the status of cryptocurrencies in Ukraine. The following hypotheses were put forward:
− legalisation and recognition of modern funds as means of payment will make this process more controlled and regulated;
− transparency of cryptocurrency transactions on exchanges will increase confidence, as well as create opportunities for improving the tax system to tax such transactions and this type of activity.

In the course of the study, the description method was used to record certain features of specifying cryptocurrency records in accounting. This provided a greater insight into what digital assets (cryptocurrencies) should be recognised as. A considerable number of scientific papers were analysed, indicating in them the basics of solving the scientific problem of identifying the accounting object. The identification and recognition of an object in accounting as a certain type of asset influences its further accounting, the determination of the tax base, and its further specification in the financial statements.

### Results and Discussion

Thanks to gadgets and the global network, business requirements are evolving. A new approach to business management, namely e-business, is being introduced. At the same time, the approach to making payments for goods, works, and services is evolving [12]. In approximately 1998-2002, the WebMoney and Yandex.Money electronic payment systems were created. Much later, namely in 2007, the Qiwi settlement service was created. Apart from such systems, there are also EasyPay, PayPal, GlobalMoney, Maxi, and many others [13]. The year 2008 brought certain changes concerning the use of modern cash in settlement procedures: the Bitcoin white paper introduced a new digital currency, and in 2010 Bitcoin was famously first used in an exchange of 10,000 Bitcoins for two pizzas [12].

When keeping records, it is proposed to separate the concepts of cryptocurrency and electronic money, since electronic money can be immediately converted into the money of the country where it is used. As an example, consider the WebMoney electronic payment system. Until 2018, this system allowed creating a WMU-type wallet, a title unit equivalent to the Ukrainian Hryvnia, while WMZ is the equivalent of the US dollar. WMX in this system is designated as an analogue of Bitcoin, where it is possible to exchange 1 WMX for 0.001 BTC (as of October 2019, almost 8.2 US dollars).

Having analysed the forms of electronic money, it was found that they can exist in the form of information within computer networks (network-based), as well as have an additional connection with a smart payment card (card-based). Cryptocurrency is a certain amount of information in electronic form, controlled through a 256-bit cryptographic key and represented by an address of some 33 encoded characters, for example 1bq9qza7fn9snscyjqb3zcn46bibtkt4ee, where the first character (one or three) is not included in the calculation.
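As a small illustration of the address format just described, the following Python sketch performs a shallow format check for a legacy Bitcoin address (Base58 alphabet, leading 1 or 3). It checks format only; it does not validate the Base58Check checksum, so a passing string is not necessarily a real address.

```python
import re

# Base58 excludes the easily confused characters 0, O, I and l.
LEGACY_ADDRESS = re.compile(r"[13][1-9A-HJ-NP-Za-km-z]{25,34}")

def looks_like_legacy_address(addr: str) -> bool:
    """Shallow format check for a legacy (Base58) Bitcoin address."""
    return LEGACY_ADDRESS.fullmatch(addr) is not None

# The sample string from the text passes the shallow format check,
# though only a full Base58Check decode could confirm its checksum.
print(looks_like_legacy_address("1bq9qza7fn9snscyjqb3zcn46bibtkt4ee"))
```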
Thus, in this respect, cryptocurrency is similar to electronic money [14]. From the standpoint of anonymity, electronic money may come with certain requirements for user identification (personalised) or without such requirements (anonymous). Yet again, there are some similarities here, namely that cryptocurrency is completely anonymous: it is not possible to identify who transferred such funds and to whom, although users can make sure that they received funds from a particular person.

If one analyses the issuer of funds, then electronic money can be fiat, included in the state financial system as separate payment subsystems and always denominated in the national currency of a particular country. At the same time, it can be a private currency whose value is recognised in that state but which always needs to be exchanged for the state currency, for example WMR (roubles) and WMU (Hryvnia). A cryptocurrency has no issuer. The Bitcoin system functions in such a way that the number of coins grows at a diminishing rate until the figure of 21 million is reached. Therefore, to ensure a sufficient amount of funds, Bitcoin is divisible to the eighth decimal place: the smallest unit of measurement is 0.00000001 BTC and is called a satoshi. On currency exchanges, traders know that the smallest unit of measurement for changes in the exchange rate is 0.0001 and is called a pip [15].

Most foreign companies use cryptocurrency for investment purposes or accept it as a means of payment. This has led to an urgent need for managers to develop accounting standards that consider this feature of the modern economic world, and to develop mechanisms regulating how cryptocurrencies are reflected in financial statements. The lack of particular guidelines and methodology has led to various accounting methods being used in practice, creating considerable issues for preparers of financial statements. This has compelled management to decide independently where to reflect cryptocurrency, and such independent decision-making undermines such stages of accounting as continuity, measurement, and registration, contributing to a patchwork of accounting procedures on the market. Furthermore, these challenges can lead to revenue management opportunities or increased information asymmetry between stakeholders and organisations. In the future, this leaves a certain imprint on comparisons between enterprises, for example at the level of income, because some enterprises will use and account for cryptocurrencies while others will not recognise them. At the same time, any cryptocurrency can be exchanged for real money, that is, used to obtain additional income that must be taxed [15].

Currently, there is a debate among researchers about how to treat cryptocurrency assets and the cryptocurrency itself. Two approaches have been developed for the accounting and taxation of such a specific asset. The first is to recognise cryptocurrencies as commodities that can be exchanged for other commodities; such transactions should then be subject to the basic value-added tax rate of 20%. The second is to define cryptocurrency as a financial instrument and consider it a type of modern money; taxation would then use the income tax rate of 18% for enterprises or the personal income tax rate of 18% for individuals. To solve the problems associated with the accounting and taxation of cryptocurrencies, it is necessary to define the object.
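A minimal Python sketch of the two taxation approaches just described, using the Ukrainian rates cited above; the transaction figures themselves are hypothetical.

```python
# Illustrative comparison of the two taxation approaches described above.
# Transaction figures are hypothetical; rates are those cited in the text.

SALE_PRICE = 100_000.0   # UAH received for cryptocurrency sold (hypothetical)
COST_BASIS = 60_000.0    # UAH originally spent to acquire it (hypothetical)

# Approach 1: cryptocurrency as a commodity -> 20% VAT on the sale.
vat = SALE_PRICE * 0.20

# Approach 2: cryptocurrency as a financial instrument / modern money
# -> 18% income tax on the profit.
income_tax = (SALE_PRICE - COST_BASIS) * 0.18

print(f"VAT due (commodity treatment):    {vat:,.2f} UAH")
print(f"Income tax due (money treatment): {income_tax:,.2f} UAH")
```

Which regime applies turns entirely on how the accounting object is defined.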
For example, the International Financial Reporting Standards committee recognised that cryptocurrency cannot be identified with cash or financial assets, but suggested that it should be classified as an intangible asset [16]. The International Financial Reporting Interpretations Committee (IFRIC) is a fairly influential body in the international financial system. According to the IFRIC's conclusions, cryptocurrency constitutes a non-monetary asset without physical embodiment [16]. Cryptocurrency cannot be considered a security, since it does not give the owner contractual exchange rights. The author of this study emphasises that the IFRS document on the status of cryptocurrencies is of a recommendatory nature and reflects only the Committee's opinion. Back in 2014, the State of New York recognised Bitcoin as intangible property, and the State of Nevada also signed a corresponding agreement on its recognition.

Until the legal status of cryptocurrencies in Ukraine is determined, it is not possible to single out a common opinion among researchers concerning the accounting of transactions with this type of asset. In fact, any cryptocurrency constitutes source code in the form of a cryptographic key, and it is this key that is the object of ownership rights. It can be used as a means of exchange that functions in the blockchain system in accounting units. Furthermore, all assets related to cryptocurrency are stored in separate wallets. For example, Bitcoin (BTC) is stored on emcd.io, and Bitcoin Cash (BCH) is stored in pool.viabtc. In other words, it is necessary to reflect the wallet as a non-current tangible asset, and any cryptocurrency as an intangible asset.

Thus, it can be concluded that in accounting, transactions related to cryptocurrency should be reflected as ordinary intangible assets. That is, acquired (created) intangible assets must be credited to the balance sheet of the enterprise at the initial cost, comprising the acquisition cost and other expenses associated with this asset. The question of the cost at which to account for cryptocurrencies is problematic, since neither Ukraine nor the world has clear standards in the field of cryptocurrencies. Valuation of intangible assets is a very complex issue, conditioned upon the specificity of this category, the lack of valuation standards, and an underdeveloped yet ever-evolving active market in which new cryptocurrencies are constantly being designed – all this hinders an adequate assessment. Overly strict accounting legislation in developed countries such as the United States, Germany, Canada, and the United Kingdom does not contribute to the development of an objective assessment of the value of cryptocurrencies [17].

Thus, the establishment of ownership rights to cryptocurrency and its use and management by a particular enterprise or individual for accounting purposes remains understudied. It is proposed to determine the ways of receipt in order to understand the historical value of cryptocurrency, its registration, and its further application. It is established that an intangible asset should be identified by its various ways of receipt: exchange, gratuitous receipt, contribution to the authorised capital by a participant, or receipt as a result of a merger of enterprises. Table 1 demonstrates the main options for receiving cryptocurrency at the enterprise, reflecting the main ledgers to be used.
**Table 1. Main ways of receipt and options for reflecting cryptocurrencies in accounting**

| Seq. No. | Receipt path | Valuation | Ledgers |
| --- | --- | --- | --- |
| 1 | Exchange | Exchange for a similar IA*; exchange for a non-similar IA* | 127 "Other intangible assets" – cryptocurrency; 117 "Other non-current tangible assets" – cryptocurrency wallet |
| 2 | Free receipt | Initial cost = fair value at the date and time of the transaction | 424 "Non-current assets received free of charge" |
| 3 | Participant's contribution to the authorised capital | Initial cost = fair value agreed by the founders | D 127 K 46 |
| 4 | Receipt as a result of a merger of enterprises | Initial cost = fair value | – |
| 5 | Cryptocurrency generated by the enterprise's own information and technological means | Cost of mining expenses (ASIC purchases; electricity costs; internet traffic costs) | D 425 K 127 |

**Note: *IA – intangible asset**
**Source: compiled by the author based on [7; 12]**

O. Augustova [7] suggested accounting for cryptocurrency as electronic money, using accounts 315 "Special accounts in national currency" and 127 "Other intangible assets". Considering cryptocurrency a financial investment, it is necessary to use items 143 "Investments to unrelated parties" and 352 "Other current financial investments", or to account for it as part of accounts receivable per item 377 "Settlements with other debtors". In the balance sheet, the value of such assets should be reflected in item 1165 "Money and its equivalent" [12].

Control over the turnover of cryptocurrencies should be carried out by the National Bank of Ukraine, and all transactions with it must be taxed. Cryptocurrency is not fully legal tender, as most companies in the world do not yet work with it. It is also not recognised as a means of payment in many countries at the legislative level. In particular, the Central Bank of Finland stated that Bitcoin is neither a currency nor even an electronic means of payment, since such objects must have an appropriate issuer responsible for their activities. The Central Bank of China also banned any operations with virtual currency, noting that this is an unlawful means of payment that has no legal status. Notably, Bitcoin does not even have an identifiable developer (only a pseudonym is publicly known), which confirms that this object remains outside the legal framework so far [18].

The way cryptocurrency is recognised will directly affect its further accounting, its crediting to the company's balance sheet, and further operations related to taxation. If a legislative decision is made to recognise cryptocurrency as an intangible asset, then the usual operations for this procedure should be applied [19]. If it is recognised and treated as modern money, then income tax or personal income tax should be used. The solution to the tax question lies in determining and improving the legal status of cryptocurrencies, amending the Tax Code of Ukraine, and introducing a single mechanism for recognising the object of accounting and the tax base. After the changes associated with the recognition of cryptocurrency, it is necessary to determine its place in accounting. For example, if cryptocurrencies are considered money, then they should be reflected in line 1165 "Money and its equivalent" [12].
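For illustration, the postings from Table 1 could be given a machine-readable form. In the following Python sketch, the account codes are those cited in the table (Ukrainian chart of accounts), while the `JournalEntry` structure and function names are purely hypothetical.

```python
# Illustrative mapping of Table 1's receipt paths to debit/credit postings.
# Account codes follow the Ukrainian chart of accounts cited in the table;
# the representation itself is a hypothetical sketch, not a real API.

from dataclasses import dataclass

@dataclass
class JournalEntry:
    description: str
    debit: str     # account debited (D)
    credit: str    # account credited (K)
    amount: float  # valuation in UAH

RECEIPT_POSTINGS = {
    "contribution_to_capital": ("127", "46"),   # D 127 K 46, fair value agreed by founders
    "mining":                  ("425", "127"),  # D 425 K 127, as given in Table 1
}

def record_receipt(path: str, amount: float) -> JournalEntry:
    debit, credit = RECEIPT_POSTINGS[path]
    return JournalEntry(f"Cryptocurrency receipt via {path}", debit, credit, amount)

print(record_receipt("contribution_to_capital", 50_000.0))
```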
If cryptocurrency is defined as other current financial investments, then account 352 should be used, with the respective reflection in the financial statements in item 1160 "Current financial investments" [7]. For accounting purposes, it is appropriate to identify the value of the cryptocurrencies held in the wallet at the disposal of the enterprise at each reporting date [20], which directs further research towards the evaluation, re-evaluation, markdown, and revaluation of cryptocurrencies.

### Conclusions

Without understanding how to recognise cryptocurrency in accounting, it is impossible to carry out accounting itself: it is impossible to determine the fair value of such an asset or to use certain ledgers with subsequent reflection in the financial statements. Therefore, Ukraine needs to develop and implement regulations that can govern the turnover of digital assets on its territory. Furthermore, it is necessary to refine the terminology, since it has not yet been determined what cryptocurrencies, digital assets, and digital currency are and how to distinguish them. It also remains unclear what a cryptocurrency is to be considered as – an intangible asset, inventory, a financial investment, cash, etc. For the development of a transparent cryptocurrency market, it is necessary to create the appropriate legal conditions. The first step, confirming the state's readiness to work on the development of legislative and regulatory frameworks that will ensure transparency and quality of relations between investors and cryptocurrency market participants, was taken in the form of a Concept of State Regulation of Operations with Cryptocurrencies in Ukraine.

Notably, in Ukraine, the issue of developing statutory regulation of cryptocurrency operations and relations is urgent. The lack of legal regulation of operations with encoded currency does not allow the National Bank of Ukraine and other bodies to control, guarantee, and protect against abuse of such operations, although the fact of their implementation in the business sector is indisputable. This requires amendments to the Tax Code of Ukraine and the development of a tax system for this process. The legislative vacuum is a springboard for abuse of power and a hindrance to the country's development. It is crucial that the legal side keeps up with the technological side, for effective scaling of business in Ukraine and interaction of regulatory authorities with such businesses. Given the evolution of economic relations in society, the tax system should also evolve. Prospects for further research are to develop the accounting treatment of cryptocurrency depending on the method of its receipt by the owner, for example by mining or by exchange for money or another asset. Separate research is required on the taxation of operations with cryptocurrencies.

### References

[1] Kornieiev, V., & Cheberiako, O. (2018). Crypto currencies: The age and sphere of financial innovation. Bulletin of the Kiev National University. Taras Shevchenko. Series "Economics", 1(196), 40-46.
[2] Fedorova, Y. (2018). Crypto currencies and their place in the financial system. Economy and Society, 15, 771-774.
[3] International Accounting Standard 2 (IAS 2) Inventories. (2010). Retrieved from https://zakon.rada.gov.ua/laws/show/929_021#Text.
[4] Tzouvanas, P., Kizys, R., & Tsend-Ayush, B. (2019). Momentum trading in cryptocurrencies: Short-term returns and diversification benefits. Economics Letters, 191, article number 108728.
[5] Shen, D., Urquhart, A., & Wang, P. (2019). A three-factor pricing model for cryptocurrencies. Finance Research Letters, 34, 52-60.
[6] Petruck, O., & Novak, O. (2017). The essence of cryptocurrency as a methodological premise of its accounting display. Bulletin of Zhytomyr State Technological University. Series "Economics, Management and Administration", 4(82), 48-55.
[7] Augustova, O. (2018). Economic content of cryptocurrency and its accounting in Ukraine. Economy and Society, 18, 844-849.
[8] Fostolovich, V. (2018). The mechanism of cryptocurrency management in the enterprise accounting system. Effective Economics, 5. Retrieved from http://www.economy.nayka.com.ua/?op=1&z=6324.
[9] Lecgenchyc, S., & Semenec, A. (2017). Accounting for electronic money transactions: Methodological aspect. Scientific Bulletin of Kherson State University. Series "Economic Sciences", 23(3), 144-147.
[10] Tarasova, T., Usatenko, O., Makurin, A., Ivanenko, V., & Cherchata, A. (2020). Accounting and features of mathematical modeling of the system to forecast cryptocurrency exchange rate. Accounting, 6, 357-364. doi: 10.5267/j.ac.2020.1.003.
[11] Derun, I., & Sklyaruk, I. (2018). The ontological aspects of the essence of cryptocurrency and its display in accounting. Scientific Notes of Ostroh Academy National University. Series "Economics", 11(39), 163-170.
[12] Pieters, G., & Vivanco, S. (2017). Financial regulations and price inconsistencies across Bitcoin markets. Information Economics and Policy, 39, 1-14.
[13] Bouri, E., Molnár, P., Azzi, G., Roubaud, D., & Hagfors, L.I. (2017). On the hedge and safe haven properties of Bitcoin: Is it really more than a diversifier? Finance Research Letters, 20, 192-198.
[14] Feng, W., Wang, Y., & Zhang, Z. (2018). Informed trading in the Bitcoin market. Finance Research Letters, 26, 63-70.
[15] Phillip, A., Chan, J.S.K., & Peiris, S. (2018). A new look at Cryptocurrencies. Economics Letters, 163, 6-9.
[16] Strauss, W., & Howe, N. (2000). Millennials rising: The next great generation. New York: Vintage Books.
[17] Chaum, D. (1983). Blind signatures for untraceable payments. In D. Chaum, R.L. Rivest, A.T. Sherman (Eds.), Advances in cryptology (pp. 199-203). Boston: Springer.
[18] Dwyer, G. (2015). The economics of Bitcoin and similar private digital currencies. Journal of Financial Stability, 17, 81-91.
[19] Adhami, S., Giudici, G., & Martinazzi, S. (2018). Why do businesses go crypto? An empirical analysis of initial coin offerings. Journal of Economics and Business, 100(C), 64-75.
[20] Pashkevych, M., Bondarenko, L., Makurin, A., Saukh, I., & Toporkova, O. (2020). Blockchain technology as an organization of accounting and management in a modern enterprise. International Journal of Management (IJM), 11(6), 516-528.
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.52566/msu-econ.7(2).2020.87-95?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.52566/msu-econ.7(2).2020.87-95, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "https://economics-msu.com.ua/web/uploads/pdf/Scientific Bulletin of Mukachevo State University. Series Economics_2020_Vol.7_No.2-87-95.pdf" }
2020
[]
true
2020-12-28T00:00:00
[]
10,419
en
[ { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Engineering", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/01fed8807b9f01dec75e14294e41c69c71996879
[]
0.875586
Next-Generation Intrusion Detection and Prevention System Performance in Distributed Big Data Network Security Architectures
01fed8807b9f01dec75e14294e41c69c71996879
International Journal of Advanced Computer Science and Applications
[ { "authorId": "2255696388", "name": "Michael Hart" }, { "authorId": "2255707606", "name": "Rushit Dave" }, { "authorId": "2255454623", "name": "Eric Richardson" } ]
{ "alternate_issns": null, "alternate_names": [ "Int J Adv Comput Sci Appl" ], "alternate_urls": [ "http://thesai.org/Publication/Default.aspx", "https://thesai.org/Publications/IJACSA" ], "id": "20a3a2f3-532a-4f04-9f3d-1e268e100872", "issn": "2156-5570", "name": "International Journal of Advanced Computer Science and Applications", "type": "journal", "url": "http://sites.google.com/site/ijacsa2010/" }
Big data systems are expanding to support the rapidly growing needs of massive scale data analytics. To safeguard user data, the design and placement of cybersecurity systems is also evolving as organizations increase their big data portfolios. One of several challenges presented by these changes is benchmarking real-time big data systems that use different network security architectures. This work introduces an eight-step benchmark process to evaluate big data systems in varying architectural environments. The benchmark is tested on real-time big data systems running in perimeter-based and perimeter-less network environments. Findings show that marginal I/O differences exist on distributed file systems between network architectures. However, during various types of cyber incidents such as distributed denial of service (DDoS) attacks, certain security architectures like zero trust require more system resources than perimeter-based architectures. Results illustrate the need to broaden research on optimal benchmarking and security approaches for massive scale distributed computing systems.
# Next-Generation Intrusion Detection and Prevention System Performance in Distributed Big Data Network Security Architectures

## Michael Hart[1], Rushit Dave[2], Eric Richardson[3]

College of Science, Engineering, & Technology, Minnesota State University, Mankato, United States[1, 2] College of Health and Human Services, University of North Carolina Wilmington, United States[3]

**_Abstract—Big data systems are expanding to support the rapidly growing needs of massive scale data analytics. To safeguard user data, the design and placement of cybersecurity systems is also evolving as organizations increase their big data portfolios. One of several challenges presented by these changes is benchmarking real-time big data systems that use different network security architectures. This work introduces an eight-step benchmark process to evaluate big data systems in varying architectural environments. The benchmark is tested on real-time big data systems running in perimeter-based and perimeter-less network environments. Findings show that marginal I/O differences exist on distributed file systems between network architectures. However, during various types of cyber incidents such as distributed denial of service (DDoS) attacks, certain security architectures like zero trust require more system resources than perimeter-based architectures. Results illustrate the need to broaden research on optimal benchmarking and security approaches for massive scale distributed computing systems._**

**_Keywords—Big data systems; zero trust architecture; benchmarking; distributed denial of service attacks_**

I. INTRODUCTION

Big data systems are unified environments designed for massive-scale data analytics. Systems capable of handling large amounts of data are becoming more important as the volume of data created and communicated over the Internet increases [1]. Cybersecurity systems play an important role in ensuring the large quantities of data on the Internet remain safe. One dimension of several necessary to accomplish the latter is next-generation security devices. Intrusion detection and prevention systems (IDPSs) properly manage data accessibility, privacy, and safety. IDPS algorithms are able to identify cyber threats using several mechanisms, including prior information from previous attacks, anomalies in network packets [1], and machine learning [2].

As big data systems become more common, their roles will continue to expand. This includes the capability to analyze and detect information security vulnerabilities at scale. For example, several big data frameworks exist that discover distributed denial of service (DDoS) attacks [3]. This expansion of roles offers many exciting opportunities for organizations. However, as the use of big data systems grows, the capability of attackers to leverage the associated parallel computing power for nefarious reasons also increases [3]. A systematic review of 32 papers pertaining to securing big data found that a critical need in future research is building more secure big data infrastructure [4]. Contributing to the latter objective, the researchers demonstrate how varying network architectures impact the security and performance of big data systems.

The organization of the paper is as follows. Section II reviews literature on intrusion detection and prevention methods for big data systems.
Section III outlines the research design and methodologies used to test perimeter-based security and perimeter-less security applied to a big data system environment. Section IV describes the research results. Section V concludes the study by discussing the limitations and future outlook.

II. LITERATURE REVIEW

Work is necessary to optimize both the information security and the performance of distributed systems. Today, several open-source big data frameworks provide remarkable potential for solving challenging data science and related problems by leveraging powerful parallel and distributed data processing. However, securing these systems often carries performance penalties. The review of literature that follows explores research on how various IT infrastructure security strategies influence big data environments. It begins by reviewing comprehensive surveys most closely related to information security and big data systems.

_A._ _Surveys of Big Data and Intrusion Detection_

Previous systematic reviews of literature focused on information security and big data provide a vast array of objectives. A prominent theme is using deep learning [1] and machine learning [2] to assist in detecting or preventing cybersecurity attacks. This line of research often utilizes deep learning or machine learning algorithms for near real-time data protection.

A recent and well-cited comprehensive survey in [1] evaluates how deep learning is used for intrusion detection systems in the cybersecurity domain. It found notable contrasts between conventional machine learning approaches in cybersecurity and deep learning. Conventional machine learning approaches utilized in cybersecurity were classified by approaches such as artificial neural networks (ANNs), Bayesian networks, decision trees, fuzzy logic, k-means clustering, the k-nearest neighbor (kNN) algorithm, and support vector machines (SVMs). The deep-learning-focused intrusion detection methods central to the survey included autoencoders (AEs), convolutional neural networks (CNNs), deep belief networks (DBNs), generative adversarial networks (GANs), and long short-term memory (LSTM) recurrent neural networks [1].

AEs, DBNs, and GANs were highlighted in [1] for their unsupervised learning strengths. In the absence of gradient estimation, AEs can use gradient descent to train data. A strength of LSTM is its capability in analyzing time-series data. CNNs do not need as much data processing prior to evaluation as certain algorithms and are able to classify cyberattacks well using multiple characteristics. Combined, the survey of literature finds that AEs, CNNs, DBNs, GANs, and LSTM networks each have the potential to improve intrusion detection methods.

Furthermore, the survey [1] outlined the importance of dataset reliability when evaluating deep learning intrusion detection effectiveness. Variance in cybersecurity attack datasets can introduce model bias when comparing multiple deep learning methods. Thus, any biases in attack datasets or data from live systems could increase spurious results [1].
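As a minimal, hypothetical sketch of the autoencoder-based anomaly detection idea surveyed in [1], the snippet below trains a small bottlenecked network (scikit-learn's MLPRegressor used as an autoencoder) on synthetic "normal" traffic features and flags records with high reconstruction error. The features, layer sizes, and threshold are illustrative assumptions, not a method from the survey.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical flow features: [duration, bytes, packets, distinct ports].
normal = rng.normal(loc=[1.0, 500, 10, 2], scale=[0.2, 50, 2, 0.5], size=(500, 4))
attacks = rng.normal(loc=[0.1, 5000, 300, 40], scale=[0.05, 500, 30, 5], size=(20, 4))

scaler = StandardScaler().fit(normal)
X_train = scaler.transform(normal)

# An autoencoder reproduces its input through a narrow bottleneck;
# here the hidden layer sizes (8, 2, 8) form that bottleneck.
ae = MLPRegressor(hidden_layer_sizes=(8, 2, 8), max_iter=2000, random_state=0)
ae.fit(X_train, X_train)

def reconstruction_error(X):
    return np.mean((ae.predict(X) - X) ** 2, axis=1)

# Flag anything whose error exceeds the 99th percentile of training error.
threshold = np.percentile(reconstruction_error(X_train), 99)
test = scaler.transform(attacks)
print(f"flagged {np.sum(reconstruction_error(test) > threshold)} of {len(test)} attack flows")
```

The intuition matches the survey's framing: the model never sees attacks during training, so traffic it cannot reconstruct well is treated as anomalous.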
A subsequent theme in the literature concentrates on cybersecurity and privacy protection in big data applications. While this research again employs various data science methods to detect or prevent data breaches, it also illustrates how big data techniques can prevent information privacy issues. Research in [4] led to a proposed model for enhancing information privacy. The model highlights the roles of people, organizations, society, and government. It leverages IDS, IPS, and encryption as its primary techniques to prevent data breaches [4].

_B._ _Big Data Architectures and Information Security_

As big data evolves, the supporting infrastructures will require proper encryption, intrusion detection, and intrusion prevention. Changing architectures within computer networks, messaging techniques, and undefined communication methods introduce numerous challenges. In a 2014 study, Mitchell and Chen [5] recognized this paradigm. Their emphasis on cyber-physical systems (CPS) ranging from smart grids to unmanned aircraft systems led to the classification of four primary intrusion detection categories: legacy technologies, attack sophistication, closed control loops, and physical process monitoring. Each of these is a narrow concept relative to the broader field of intrusion detection, underscoring the unique customization of IDSs for cyber-physical systems [5].

Three years later, Zarpelo et al. [6] outlined a similar but distinct paradigm: intrusion detection focused on the Internet of things (IoT). The researchers stated that IoT has similar information security matters as the Internet, cloud services, and wireless sensor networks (WSNs). Despite similarities, IoT information security approaches are distinct, according to the authors, due to concepts such as data sharing between users, the volume of interconnected objects, and the amount of computational power of the associated devices. Like cyber-physical systems, IoT presents diverse challenges to the design of intrusion detection systems [6].

Designing secure cloud computing environments poses several novel problems at multiple infrastructure layers. As an example, cloud resources can be leased from numerous vendors focused on varying as-a-service models such as infrastructure as a service (IaaS), platform as a service (PaaS), and/or software as a service (SaaS). Multi-cloud applications rely upon the seamless integration of cloud resources from providers focused on one or many as-a-service types, which continue to expand. In Casola et al. [7], a model is outlined for designing, creating, and implementing multi-cloud applications. The flexible approach accounts for varying as-a-service components. Security-by-design is a primary objective of the process lifecycle, bridging the functional design of multi-cloud applications and the security design. The functional design phase defines the application logic, interconnections of services, and resource requirements. In the security design phase, each cloud element is assessed in terms of security risks and security needs. Security policies and controls are designed based on the latter requirements. Similar to CPS [5] and IoT [6], the multi-cloud application model is a subsequent example of how information security solutions play a prominent role due to the systems' distinct architectural and infrastructure layers.

Securing big data environments, or leveraging associated techniques like machine learning to enhance information security, intertwines numerous fields including but not limited to CPS, IoT, and cloud computing. Like big data systems, CPS requires cybersecurity protection [8] of private data [9]. Big data, IoT, and CPS often overlap through the ad hoc interfaces of systems such as smart vehicles, buildings, factories, transportation systems, and grids [10]. As a vulnerable attack surface, IoT advances the need for intelligent information security.
Machine learning [11], including ensemble intrusion detection [12] and IDS design [13], comprises proposed techniques to mitigate malicious cybersecurity attacks. Due in part to porous attack surfaces in cloud-centric big data, IDSs may require collaborative frameworks [14]. In [15], fuzzy c-means clustering (FCM) and support vector machines (SVMs) were proposed as a collaborative technique for improving IDS detection rates. Compared to other mechanisms, the proposed hybrid FCM-SVM showed lower false alarm ratios and higher detection accuracy [15]. Furthermore, [16] illuminates the need for scaling IDS detection algorithms using the resources of parallel computing in the cloud.

In [17], the researchers propose the BigCloud security-by-design framework. The framework draws from the need to integrate big data security into the system development lifecycle. Its primary cloud application domain is infrastructure as a service, which it notes as one of the faster-growing as-a-service options for big data. The model helps design and enforce secure authentication, authorization, data auditability, availability, confidentiality, integrity, and privacy. However, extending beyond its IaaS concentration could provide greater benefits to as-a-service components specific to host operating systems, hypervisors, networking, and hardware [17]. Similar to IaaS, the evolution of serverless platforms and Function-as-a-Service (FaaS) applications requires careful security design to overcome the security threats that new services often suffer [18].
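The cluster-then-classify idea behind the FCM-SVM hybrid in [15] can be sketched as follows. Note that k-means is used here as a crisp stand-in for fuzzy c-means (which scikit-learn does not provide), and the data is synthetic, so this is a simplified illustration rather than the method of [15].

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# Synthetic stand-in for labeled network-flow features (label 1 = attack).
X, y = make_classification(n_samples=600, n_features=6, random_state=0)

# Stage 1: partition traffic into regions. K-means acts here as a crisp
# substitute for fuzzy c-means, purely for illustration.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)

# Stage 2: one classifier per cluster, so each SVM specializes in its region.
# If a cluster happens to contain a single class, remember that class instead.
models = {}
for c in range(km.n_clusters):
    Xc, yc = X[km.labels_ == c], y[km.labels_ == c]
    models[c] = SVC(kernel="rbf").fit(Xc, yc) if np.unique(yc).size > 1 else yc[0]

def predict(x: np.ndarray) -> int:
    c = km.predict(x.reshape(1, -1))[0]
    m = models[c]
    return int(m) if isinstance(m, (int, np.integer)) else int(m.predict(x.reshape(1, -1))[0])

print("sample prediction:", predict(X[0]), "true label:", int(y[0]))
```

Routing each flow to a region-specific classifier is what the collaborative framing in [15] aims at: narrower decision boundaries per cluster, hence fewer false alarms.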
While distinct, CPS, IoT, cloud computing, and big data are merely a few examples of why designing intrusion detection and prevention systems remains highly elastic in modern computational architectures. As the information technology landscape changes, information security bends to meet the evolving needs of the complete environment. To conclude the literature review, the authors outline several relevant studies introducing potential solutions for designing stronger information security controls for big data systems.

_C._ _Encryption_

An ongoing challenge in distributed big data systems is securing communication between multiple systems operating across various computer networks. Apache Hadoop and Apache Spark are examples of big data frameworks that present several opportunities for attackers to access the data they facilitate. Central to big data frameworks is the ability to use parallel processing to analyze massive amounts of data. MapReduce is one of many programming paradigms that leverages Hadoop to extract valuable knowledge from large volumes of data. However, like most application or service modules within big data frameworks, MapReduce highlights the vast attack vectors that exist in distributed big data systems. MapReduce examples in the literature include side-channel attacks [19], job composition attacks [20], malicious worker compromises in the form of distributed denial-of-service (DDoS) or replay attacks [21], and eavesdropping and data tampering [22]. Encryption is a primary countermeasure to secure transmissions and prevent data leaks between big data servers [19].

A primary objective in addressing cybersecurity attacks on parallel processing services is identifying and preventing leaks that often occur during data transmission between distributed worker nodes, also referred to as DataNodes in Apache Hadoop. These unique yet integrated servers work in parallel to complete MapReduce jobs. Often in Hadoop, data is stored and retrieved from the Hadoop Distributed File System (HDFS). In [19], side-channel attacks are addressed that can occur between MapReduce workers that utilize HDFS for data storage. These types of cybersecurity attacks can target worker nodes to extract valuable information pertaining to MapReduce jobs, such as the amount of packet bandwidth. This further contributes to successful pattern attacks. The authors proposed a solution to this vulnerability labeled Strong Shuffle that enforces strong data hiding between workers [19]. In contrast to alternative countermeasures such as correlation hiding in [20], Strong Shuffle avoids leaking the number of records accepted by each reducer during MapReduce runtime. Securing plaintext communications is a function of semantically secure encryption in the Strong Shuffle solution [19].

In [19], data communicated between Hadoop DataNodes and stored in HDFS is encrypted with semantically secure AES-128-GCM encryption. Although the latter helps prevent cleartext leakage between MapReduce jobs in Hadoop, encryption in big data environments has limitations. For example, encrypted databases can still reveal certain information during operations that include table queries. Deterministic encryption and order-preserving encryption can leak the equality relationship and the order between records. One proposed solution is semantically secure encryption. In [23], the authors propose a semantically secure database system named Arx. As an alternative to order-preserving encryption, semantic security within Arx only allows an attacker to extract the order relationships and frequency of the database query currently in use, in contrast to the entire database. The authors note that worst-case attackers would gain as much information from a data leak as deterministic or order-preserving encryption over time [23]. While methods such as encryption and authentication help with cross-node data leaks, they do not prevent other attacks, such as DDoS and passive network eavesdropping [21]. A subsequent countermeasure is the effective design and implementation of intrusion detection and prevention systems [14].
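The AES-128-GCM construction cited above can be sketched with the Python `cryptography` package's AESGCM primitive. The payload and associated data below are hypothetical placeholders, and this is not the Strong Shuffle implementation from [19].

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Generate a 128-bit key, matching the AES-128-GCM configuration cited above.
key = AESGCM.generate_key(bit_length=128)
aesgcm = AESGCM(key)

# GCM requires a unique nonce per message under the same key; 96 bits is standard.
nonce = os.urandom(12)

plaintext = b"intermediate MapReduce record"   # hypothetical shuffle payload
associated_data = b"job-42/reducer-3"          # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)
recovered = aesgcm.decrypt(nonce, ciphertext, associated_data)
assert recovered == plaintext

# Random nonces make the scheme semantically secure: equal plaintexts yield
# different ciphertexts, so records cannot be matched by ciphertext value.
print(aesgcm.encrypt(os.urandom(12), plaintext, None) !=
      aesgcm.encrypt(os.urandom(12), plaintext, None))
```

The final comparison illustrates the "semantic security" property the literature emphasizes: without it, repeated records would be trivially linkable across DataNodes.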
_D._ _Next-Generation Security and Big Data Systems_

Next-generation security at a high level can detect and prevent malicious cybersecurity attacks. Much of the literature focuses on identifying malicious network packets in real time. The comprehensive survey in [24] reviews how modern data mining techniques are evolving to meet real-time detection needs. The review classifies intrusion detection systems by architecture, implementation, and detection methods. Detection methods are categorized as anomaly-based, signature-based, and hybrids. Signature-based (misuse) methods often rely upon a database that defines patterns or existing malicious attack signatures. Anomaly detection can detect abnormal network traffic behavior that has yet to be defined in a signature database. Data mining methods including supervised, unsupervised, and hybrid learning are being used to improve anomaly-based intrusion detection systems [24].

While supervised, unsupervised, and hybrid learning IDS research continues to progress [24], the ongoing need to improve existing big data implementations remains. In several systematic literature reviews [1, 2, 3, 24], IDSs are known to have limitations that contradict the performance benefits of parallel processing and distributed computing. For example, large signature-based systems drain CPU and memory resources [24]. While researchers continue to advance areas of intrusion detection such as packet anomalies and encryption, only a few studies are advancing security by design and its effects on varying big data architectures [1]. To address this need, the authors of this study designed a distributed big data system over a wide area network to explore the performance of distributed nodes under different network traffic loads.

III. METHODS

This research methodology follows the design science approach in [25] and [26]. Design science is based on a scientific framework for IT research. As March and Smith [25] outline, IT research should consider natural and design science as a method to build and evaluate tangible objects. Within this philosophy, objects often have outputs in the form of models or instantiations. Instantiations are associated with new artifacts in the design science methodology and with the understanding of the artifact in its environment [25]. IT artifacts can be realized in many forms, such as through the design of an object that helps solve business problems [26].
Each server used the same single Intel CPU with 16 logical cores and 32 GB of physical random-access memory. The baseline Intel CPU benchmark average results from the PassMark version 10 performance test [29] are 2,799 MOps per second for a single thread and 5,443 megabytes per second for data encryption. Cisco RV series routers with integrated firewalls sit between each Apache Spark node and the external network. Cisco firmware 1.0.3.55 is in use with the default firewall ruleset. The authors added customized rules that allow the internal LAN IP addresses to communicate on the necessary Apache HDFS and Spark ports; all other ports are blocked [28]. _D._ _Big Data Systems_ Each big data server and the streaming server used identical software and versions. Systems ran on the Ubuntu Server 20.04.3 LTS operating system. Installed software included Java 11, Python 3.8, Apache Hadoop 3.2, and Apache Spark 3.2. The big data environment comprises five servers: one primary cluster manager labeled SparkOne, three secondary worker nodes labeled SparkTwo, SparkThree, and SparkFour, and the independent Spark streaming instance denoted as the data stream. Apache Spark is tuned using recommended parameters such as those specified in [30] and [31]. HDFS disks are balanced between nodes, with an HDFS replication factor of three. SparkOne is the primary node in the testing environment used in this study. It hosts the driver program. The driver program executes the big data application’s main() function and creates the SparkContext [32]. SparkContext is capable of using various big data resource managers; tests in this study use Yet Another Resource Negotiator (YARN) as the distributed cluster manager [33]. SparkContext communicates application jobs, containing code in various forms such as Python and JAR files, to the executors on the worker or secondary nodes in the cluster. YARN has two primary high-level components, labeled the NodeManager and the ResourceManager. Each secondary node in a big data cluster managed by YARN has a NodeManager, whose function is to manage containers on that server. Containers encompass resources such as network, disk, CPU, and memory, which are allocated to facilitate task execution. The YARN ResourceManager consists of the ApplicationsManager and the Scheduler. While the Scheduler determines the necessary resources for each application, the ApplicationsManager identifies which container the application will use and subsequently monitors task execution [33]. Apache Spark and HDFS replicate across the three secondary big data servers. The secondary or worker nodes labeled SparkTwo, SparkThree, and SparkFour contain executor processes. An executor process persists throughout the runtime of the tasks that the cluster manager allocates to each worker. Every application receives its own executor process or processes as necessary. The driver program on SparkOne is configured to listen for executor process communications from the secondary nodes until the job is completed. Per the Apache Spark documentation [32], when possible, the driver program should be on the same local area network as the worker nodes because of this communication. In the experimental network design, the worker nodes are physically distanced; therefore, Spark is optimized to open local remote procedure calls on the worker LANs [32].
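To illustrate the driver and executor arrangement just described, a minimal PySpark application is sketched below. The application name and HDFS paths are invented, and the snippet assumes HADOOP_CONF_DIR points at the cluster's YARN configuration; in practice such a job would usually be launched with spark-submit --master yarn rather than from a bare interpreter.

```python
from pyspark.sql import SparkSession

# The driver program: building the SparkSession creates the SparkContext,
# which asks YARN's ResourceManager for executor containers on the
# secondary (worker) nodes.
spark = (SparkSession.builder
         .appName("DistributedWordCount")   # hypothetical application name
         .master("yarn")                    # assumes HADOOP_CONF_DIR is set
         .getOrCreate())

sc = spark.sparkContext

# Executors on the worker nodes read HDFS blocks locally where possible.
lines = sc.textFile("hdfs:///data/stream/")          # hypothetical input path
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.saveAsTextFile("hdfs:///data/out/wordcount")  # hypothetical output path

spark.stop()
```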
_E._ _Attack Systems_ Although the cybersecurity servers ran on the same hardware as the big data servers, they used different software. CyberOne, CyberTwo, CyberThree, and CyberFour each denote a server used to carry out cyber-attacks on the big data cluster. The software includes the Kali Linux operating system running the 5.14 kernel. Kali Linux is an open-source operating system based on Debian Linux. It is designed for numerous information security objectives such as reverse engineering, forensics, penetration testing, and research [34]. _F._ _Intrusion Detection and Prevention Systems_ Consistent with Fig. 1, the baseline IDS and IPS systems are located between the cyber-attack and big data systems. The authors then vary the placement of these systems across the experiments. As a simulated construct in the research methodology, the authors propose that IDS and IPS architectural placement predicts data streaming performance between worker nodes. Performance evaluation of this potential construct is an important step toward advancing a future IDPS placement framework for physically distanced big data systems. The authors implemented Snort and Suricata, two popular open-source IDS and IPS systems. Snort is developed by Cisco Systems. It serves as a leading intrusion detection engine and ruleset for Cisco next-generation firewalls and IPSs. Its mechanisms for detecting and preventing security threats continue to evolve; however, a fundamental capability at the time of writing is the formation of rules. In contrast to traditional methods such as signature-based detection, these rules focus on vulnerability detection [35]. Suricata is developed by the Open Information Security Foundation (OISF). Similar to Snort, Suricata can use rules to detect and block cyber-attacks [36]. Version 2.9.7 of Snort ran with libpcap version 1.9.1 and version 8.39 of the payload detection rules. Suricata testing used version 6.0.6 with the Emerging Threats Open ruleset. The authors customized the default Snort and Suricata rulesets to secure the distributed nodes. The rulesets are parallel in count and type (e.g., alert, drop) to control significant variations in resource contention. Suricata and Snort use the same rules in the tests, except for minor incompatibilities; where incompatible, the rules are adjusted to perform the same action in both IDSs at parallel throughput rates. Snort and Suricata run on the same server hardware and operating systems as the big data servers. A second NIC allows the servers to act as gateways between trusted and untrusted networks. The servers communicate between the local area networks using the Transport Layer Security (TLS) and Secure Shell (SSH) protocols. Ubuntu Server 20.04.3 LTS is configured with OpenSSH version 8.2 and OpenSSL version 1.1.1. _G._ _Benchmarks_ The authors developed custom benchmarks to identify how big data clusters perform under various physically distanced IDS network architectures. The benchmarks perform two significant network load functions: 1) streaming unstructured data to the Spark big data cluster, and 2) flooding the Spark nodes via DDoS attacks. Network and system benchmarking uses version 16m of the nmon source code to measure network performance. Originally developed by IBM, nmon is an open-source Linux project that monitors system resource utilization. Performance metrics include CPU, disk, memory, and networking [37].
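As a sketch of how per-test averages such as CPU% and wait% can be derived from an nmon capture: nmon writes CSV in which the first line of a section (for example CPU_ALL) names its columns, and later lines in that section carry one snapshot each, tagged T0001, T0002, and so on. The file name and exact column names below are assumptions and may vary across nmon versions.

```python
import csv
from statistics import mean

def nmon_section_average(path, section, column):
    """Average one column of an nmon section, e.g. User% of CPU_ALL."""
    idx, values = None, []
    with open(path, newline="") as fh:
        for row in csv.reader(fh):
            if not row or row[0] != section:
                continue
            if idx is None:               # first section row carries column names
                idx = row.index(column)
            elif row[1].startswith("T"):  # snapshot rows are tagged T0001, ...
                if row[idx]:
                    values.append(float(row[idx]))
    return mean(values) if values else None

# Hypothetical usage against one worker's capture file:
# print(nmon_section_average("sparktwo.nmon", "CPU_ALL", "User%"))
# print(nmon_section_average("sparktwo.nmon", "CPU_ALL", "Wait%"))
```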
The authors follow the design science methodology [25] to design and implement an IDS placement experiment for physically distanced big data systems. Next, the authors construct a series of tests to determine how IDS locations influence real-world distributed worker nodes. IV. RESULTS Each of the tests followed an eight-step process: 1) the network architecture is determined and implemented, 2) IDPS locations are identified and configured, 3) IDPS customized rulesets are implemented, 4) the big data system cluster is started and verified as operational, 5) data streams to the cluster are invoked, 6) DDoS attacks are executed, 7) the benchmarks are run, and 8) the researchers maintain and monitor the testing environment for anomalies. Each of the tests was repeated three times to ensure saturation existed in the results. _A._ _Test 1 Perimeter-Based Security Results_ Fig. 1 illustrates the IDPS placement location for the first test. The cloud represents the leased line between the geographical sites. Below the cloud icon is the selected IDPS solution, followed by the Apache Spark cluster. The network architecture in the first test follows Cisco Systems’ best practices for a collapsed data center and LAN core [38]. Within this design, a hardware-based IDPS is situated between the public untrusted and private trusted networks. Test 1 includes a traditional perimeter Cisco Systems IDPS. Individual Spark nodes are networked in a single VLAN connected through the collapsed core. In contrast to the network architecture in Fig. 1, the CyberOne through CyberFour servers are not deployed for tests 1-3. In each of these tests, typical network traffic is present, devoid of any DDoS attacks. Benchmark metrics are specific to the big data systems unless otherwise specified. During the data stream, HDFS is writing 128 MB blocks to disk on all three Spark worker nodes at a constant rate. Inconsequential wait time exists on disk reads and writes. Average CPU utilization per thread, or “CPU%”, on the big data worker nodes is 4.3% during the first test. The average time a process waits for an input-output (I/O) operation to complete, or “wait%”, is 0.3%. The average number of processor context switches per second, identified as “PWps” hereafter, is 1,728. The authors measured network performance between each of the Spark nodes using four metrics, captured on the worker node network interface cards. The first performance variable measures the average number of all network packet reads per second (APRps). The second variable captures the average number of all network packet writes per second (APWps). The measure “APIORkBs” refers to the amount of network I/O read traffic in kB per second sent between the servers. The fourth metric, “APIOWkBs,” indicates the amount of network I/O write traffic in kB per second sent between the servers. Fig. 3 illustrates the average network I/O (kB/s) on each Apache Spark node in tests 1-3, while Fig. 4 demonstrates the same for tests 4-6. In the perimeter-based network architecture, the average APRps is 637 across all Spark worker nodes. The average APWps is 620. The average APIORkBs read traffic between all Spark worker nodes is 80, while APIOWkBs is 78. The authors reconfigured the network architecture in the subsequent tests to provide further insight into IDPS placement impact on distributed big data systems.
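The packet and byte counters behind metrics such as APRps, APWps, and APIORkBs can also be sampled directly from the Linux kernel. The sketch below reads /proc/net/dev once per second and prints deltas; the interface name is an assumption, and in the tests themselves nmon gathers the equivalent data.

```python
import time

def read_counters(iface="eth0"):
    """Return (rx_bytes, rx_packets, tx_bytes, tx_packets) for one interface."""
    with open("/proc/net/dev") as fh:
        for line in fh:
            if line.strip().startswith(iface + ":"):
                f = line.split(":")[1].split()
                # receive fields 0-7 come first; transmit starts at field 8
                return int(f[0]), int(f[1]), int(f[8]), int(f[9])
    raise ValueError(f"interface {iface!r} not found")

def sample_rates(iface="eth0", seconds=20):
    """Print per-second packet read/write rates and read I/O in kB/s."""
    prev = read_counters(iface)
    for _ in range(seconds):
        time.sleep(1)
        cur = read_counters(iface)
        print(f"reads/s={cur[1] - prev[1]:>7} "
              f"writes/s={cur[3] - prev[3]:>7} "
              f"read kB/s={(cur[0] - prev[0]) / 1024:9.1f}")
        prev = cur

# sample_rates("eth0")  # hypothetical interface name
```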
Fig. 2. Perimeter-less security network architecture. _B._ _Tests 2-3 Perimeter-less Security Results_ Fig. 2 demonstrates the big data network designed for tests 2 and 3. The network architecture uses a modified perimeter-less design proposed by Kotantoulas [39]. In contrast to the traditional perimeter IDPS location in Fig. 1, every big data worker node is in a zero trust network. The authors designed an SD-WAN trust boundary to secure each big data node. The boundary consists of Snort and Suricata intrusion detection and prevention security gateways. Similar to the software-defined perimeter for the virtualized evolved packet core (vEPC) proposed by Bello et al. [40], this study’s zero trust software-based system acts as a security gateway for all distributed servers. In this model, SparkOne through SparkFour are designed to operate securely in most cloud architectures by integrating an SDN security stack on each physically distanced server. The integrated IDPS gateways control and authorize incoming and outgoing network communication. The design emulates the trust boundary surrounding the cloud edge in [39] using the SSH and TLS protocols. Gateways authenticate and connect the distributed systems using a 3072-bit key generated by the Rivest–Shamir–Adleman (RSA) algorithm. Benchmark results for test 2 with Snort SDN gateways show that wait% is 0.413% and CPU% is 12.54%. Results from this study show that CPU resource consumption is over two times greater in the zero trust architecture than in the perimeter network design. Test 3 with Suricata SDN gateways results in 11.05% CPU% and 0.342% wait%. Like the perimeter-less design in test 2, test 3 used considerably more CPU resources than test 1. Despite similar rulesets, Suricata SDN gateways used slightly less CPU than Snort. In the test 2 perimeter-less network architecture, the average APRps is 2,198 across all Spark worker nodes. The average APWps is 653. The average APIORkBs read traffic between all Spark worker nodes is 298 in test 2, while APIOWkBs is 82. Fig. 3. Tests 1-3 Spark per-node network I/O in kB/s (x-axis: seconds 1-20). The test 3 network architecture had similar results to test 2. The average APRps is 2,120 across the distributed Spark systems. The average APWps is 611. APIORkBs between the big data servers is 289 and APIOWkBs is 77. Fig. 3 illustrates the average network I/O (kB/s) on each Apache Spark node in tests 1-3. These results indicate that network traffic and network I/O are nominal when writing to HDFS in all network architectures within this study. In contrast, the number of packets the systems have to read is higher in the perimeter-less network architectures; APRps is over three times higher in tests 2 and 3 than in test 1. _C._ _Test 4 Perimeter-Based DDoS Attack Results_ Test 4 uses the same network architecture as test 1 (Fig. 1). Perimeter-based intrusion detection and prevention systems protect the internal LANs of the Spark nodes. CyberOne through CyberFour are active in test 4. The cyber servers are configured to flood the big data cluster with unlimited TCP SYN handshakes. In test 4, benchmark results for the big data servers during the DDoS attacks parallel those of test 1: the IDPSs prevented additional CPU and network load on the big data servers, and the hardware IPSs successfully blocked the DDoS attacks.
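For illustration, protections of the kind that blocked these floods can be expressed in the rule language shared by Snort and Suricata. The snippet below generates a small parallel ruleset from Python; the LAN range, ports, thresholds, and SIDs are assumptions and are not the rulesets used in the tests.

```python
# Hypothetical internal LAN and service ports (typical Hadoop/Spark defaults);
# not the study's actual ruleset.
LAN = "192.168.10.0/24"
CLUSTER_PORTS = "[8020,9866,9870,7077]"

rules = [
    # Allow intra-cluster HDFS/Spark traffic between trusted LAN hosts.
    f'pass tcp {LAN} any -> {LAN} {CLUSTER_PORTS} '
    f'(msg:"Allow cluster HDFS/Spark"; sid:1000001; rev:1;)',
    # Drop sources sending an abnormal burst of SYNs (simple flood heuristic).
    f'drop tcp any any -> {LAN} any (flags:S; msg:"Possible TCP SYN flood"; '
    f'detection_filter:track by_src, count 100, seconds 10; sid:1000002; rev:1;)',
]

# local.rules is then referenced from snort.conf or suricata.yaml.
with open("local.rules", "w") as fh:
    fh.write("\n".join(rules) + "\n")
```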
_D._ _Tests 5-6 Perimeter-less DDoS Attack Results_ Tests 5 and 6 are similar to tests 2 and 3; however, DDoS attacks are administered on the big data cluster. Tests 5 and 6 use the perimeter-less security network architecture (Fig. 2). Test 5 uses the Snort-based SDN security boundary, while test 6 uses Suricata. CyberOne through CyberFour are active in tests 5 and 6. The cyber servers execute DDoS attacks on the big data cluster by flooding the servers with unlimited TCP SYN handshakes. Snort and Suricata security gateways successfully protect the big data systems from DDoS attacks in a zero trust network in tests 5 and 6, but at the expense of increased local computational resource use. Results for test 5 with Snort SDN gateways show that wait% is 0.308% and CPU% is 13.8%. CPU resource consumption increases on average by over 1% on the big data servers during the DDoS attacks. Test 6 with Suricata SDN gateways results in 11.95% CPU% and 0.337% wait%. The DDoS attacks increased average CPU% by 0.9% across the big data systems. Suricata SDN gateways used slightly less CPU than Snort SDN gateways during the DDoS attacks. Within the test 5 perimeter-less network architecture, the average APRps is 4,762 across all distributed big data secondary nodes. The average APWps is 626. The average APIORkBs traffic between the distributed systems is 425, while APIOWkBs is 79. Fig. 4. Tests 4-6 Spark per-node network I/O in kB/s (x-axis: seconds 1-20). The Suricata gateways in test 6 have an average APRps of 4,311 across the distributed Spark systems. The average APWps is 661. APIORkBs between the big data servers is 416 and APIOWkBs is 81. Fig. 4 demonstrates the average network I/O (kB/s) on each Apache Spark node in tests 4-6. _E._ _Test 7 Unprotected Perimeter-Based DDoS Attack Results_ Test 7 shares the same network architecture as tests 1 and 4, illustrated in Fig. 1. To decipher how the DDoS attacks affect the big data servers in the perimeter-based network architecture without IDPS protection, test 7 repeats test 4 but allows all network traffic from CyberOne through CyberFour to the big data cluster. When the DDoS attacks are allowed through the perimeter IPSs in the Fig. 1 network architecture, results show an average CPU% of 17.9% across all distributed big data systems. Predictably, network packets increase in test 7 compared to tests 1 and 4: APRps is 2,895, while APIORkBs is 518. Test 7 has the highest APIORkBs of all network benchmarks performed in this study. _F._ _Discussion of the Results_ The results illustrate that network traffic and network I/O show marginal differences when writing to HDFS in the network architectures studied. CPU resources and the network traffic read by the operating systems increased in the zero trust network architectures. The most substantial differences were between tests 4 and 5. During the DDoS attacks, the big data servers required more CPU resources in the perimeter-less security network architecture; in test 5, APIORkBs is considerably higher at 425 than in test 4 at 80. This additional traffic is partly due to the SDN security boundaries necessary to protect the systems in a zero trust network environment. Shifting compute resources closer to individual devices may be necessary as network security perimeters dissipate. However, zero trust architectures in the experimental environment reduced cluster performance. Therefore, additional research is beneficial to optimize the design of perimeter-less network environments.
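The comparative statements above can be recomputed directly from the averages reported for tests 1-7; the snippet below simply tabulates the numbers already given in the text (test 4 is omitted because its results parallel test 1) and introduces no new measurements.

```python
# Averages transcribed from the reported results (CPU% and APIORkBs).
reported = {
    1: {"cpu": 4.30,  "apiorkbs": 80},   # perimeter IDPS, no DDoS
    2: {"cpu": 12.54, "apiorkbs": 298},  # Snort SDN gateways
    3: {"cpu": 11.05, "apiorkbs": 289},  # Suricata SDN gateways
    5: {"cpu": 13.80, "apiorkbs": 425},  # Snort gateways under DDoS
    6: {"cpu": 11.95, "apiorkbs": 416},  # Suricata gateways under DDoS
    7: {"cpu": 17.90, "apiorkbs": 518},  # perimeter, DDoS allowed through
}

base = reported[1]
for test, r in sorted(reported.items()):
    print(f"test {test}: CPU x{r['cpu'] / base['cpu']:.1f}, "
          f"read I/O x{r['apiorkbs'] / base['apiorkbs']:.1f} vs test 1")
# e.g. test 2 shows roughly 2.9x the CPU of test 1, matching the
# "over two times greater" observation above.
```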
_G._ _Limitations_ Several environmental factors limit the results. Site-to-site networks were on leased 200 Mbps connections; future studies might consider leased lines capable of establishing more robust data streams to the distributed nodes. A further restriction is the number of architectures and communication technologies tested. Similar to the architecture in [40], the gateways allow for the IP Security (IPsec) or Transport Layer Security (TLS) protocols. Future IDPS SDN gateways could add this layer of encryption in a software-defined security boundary between geo-distributed big data systems. The outlined limitations emphasize the need for future research to investigate more extensive network architectures and IDPS technologies for big data system security. V. CONCLUSION As the volume of data expands, organizations require big data systems to perform large-scale data analytics. One of several needs for these systems is effective intrusion detection and prevention strategies. This paper presents a review of the literature on methods used to reduce cybersecurity threats in the range of network architectures in which big data systems operate. Findings from the literature suggest intrusion detection and prevention systems can respond to certain security attacks. However, a potential disadvantage of capable security systems is the impact on big data system cluster performance. Using a design science approach, the authors develop an eight-step process to benchmark big data systems in varying network architectural environments. The new benchmark process is tested on real-time big data systems running in perimeter-based and perimeter-less network environments. During DDoS cyber-attacks, perimeter-based network architectures outperformed perimeter-less network architectures. This underlines the importance of optimizing the design of zero trust architectures for distributed big data systems. REFERENCES [1] D. Gümüşbaş, T. Yıldırım, A. Genovese, and F. Scotti, “A comprehensive survey of databases and deep learning methods for cybersecurity and intrusion detection systems,” _IEEE Systems Journal,_ vol. 15, no. 2, pp. 1717–1731, Jun. 2021, doi: [10.1109/JSYST.2020.2992966.](https://doi.org/10.1109/JSYST.2020.2992966) [2] I. D. Aiyanyo, S. Hamman, and H. Lim, “A systematic review of defensive and offensive cybersecurity with machine learning,” Applied Sciences, vol. 10, no. 17, p. 5811, 2020, doi: 10.3390/app10175811. [3] N. V. Patil, C. Rama Krishna, and K. Kumar, “Distributed frameworks for detecting distributed denial of service attacks: A comprehensive review, challenges and future directions,” _Concurrency_ _and_ _Computation: Practice and Experience, vol. 33, no. 10, pp. 1-21, May_ [2021, doi: 10.1002/cpe.6197.](https://doi.org/10.1002/cpe.6197) [4] R. Rafiq, M. J. Awan, A. Yasin, H. Nobanee, A. M. Zain, and S. A. Bahaj, “Privacy prevention of big data applications: A systematic literature review,” _Sage Open, vol. 12, no. 2, Apr. 2022, doi:_ [10.1177/21582440221096445.](https://doi.org/10.1177/21582440221096445) [5] R. Mitchell and I. R. Chen, “A survey of intrusion detection techniques for cyber-physical systems,” _ACM Comput. Surv., vol. 46, no. 4, Mar._ [2014, doi: 10.1145/2542049.](https://doi.org/10.1145/2542049) [6] B. B. Zarpelão, R. S. Miani, C. T. Kawakani, and S. C. de Alvarenga, “A survey of intrusion detection in Internet of Things,” _Journal of_ _Network and Computer Applications, vol. 84, pp. 25–37, Apr.
2017, doi:_ [10.1016/j.jnca.2017.02.009.](https://doi.org/10.1016/j.jnca.2017.02.009) [7] V. Casola, A. De Benedictis, M. Rak, and U. Villano, “Security-by design in multi-cloud applications: An optimization approach,” _Information Sciences, vol. 454–455, pp. 344–362, Jul. 2018, doi:_ [10.1016/j.ins.2018.04.081.](https://doi.org/10.1016/j.ins.2018.04.081) [8] R. Atat, L. Liu, J. Wu, G. Li, C. Ye, and Y. Yang, “Big data meet cyber physical systems: a panoramic survey,” IEEE Access, vol. 6, pp. 73603– [73636, 2018, doi: 10.1109/ACCESS.2018.2878681.](https://doi.org/10.1109/ACCESS.2018.2878681) [9] R. Gifty, R. Bharathi, and P. Krishnakumar, “Privacy and security of big data in cyber physical systems using Weibull distribution-based intrusion detection,” Neural Computing and Applications, vol. 31, no. 1, [pp. 23–34, Jan. 2019, doi: 10.1007/s00521-018-3635-6.](https://doi.org/10.1007/s00521-018-3635-6) [10] S. F. Ochoa, G. Fortino, and G. Di Fatta, “Cyber-physical systems, internet of things and big data,” _Future Generation Computer Systems,_ [vol. 75, pp. 82–84, Oct. 2017, doi: 10.1016/j.future.2017.05.040.](https://doi.org/10.1016/j.future.2017.05.040) [11] K. A. P. da Costa, J. P. Papa, C. O. Lisboa, R. Munoz, and V. H. C. de Albuquerque, “Internet of Things: A survey on machine learning-based intrusion detection approaches,” Computer Networks, vol. 151, pp. 147– [157, Mar. 2019, doi: 10.1016/j.comnet.2019.01.023.](https://doi.org/10.1016/j.comnet.2019.01.023) [12] N. Moustafa, B. Turnbull, and K. R. Choo, “An ensemble intrusion detection technique based on proposed statistical flow features for protecting network traffic of Internet of Things,” _IEEE Internet of_ _Things Journal, vol. 6, no. 3, pp. 4815–4830, Jun. 2019, doi:_ [10.1109/JIOT.2018.2871719.](https://doi.org/10.1109/JIOT.2018.2871719) [13] A. Yang, Y. Zhuansun, C. Liu, J. Li, and C. Zhang, “Design of intrusion detection system for Internet of Things based on improved BP neural network,” _IEEE Access, vol. 7, pp. 106043–106052, 2019, doi:_ [10.1109/ACCESS.2019.2929919.](https://doi.org/10.1109/ACCESS.2019.2929919) [14] Z. Tan et al., “Enhancing big data security with collaborative intrusion detection,” IEEE Cloud Computing, vol. 1, no. 3, pp. 27–33, Sep. 2014, [doi: 10.1109/MCC.2014.53.](https://doi.org/10.1109/MCC.2014.53) [15] A. N. Jaber and S. U. Rehman, “FCM–SVM based intrusion detection system for cloud computing environment,” Cluster Computing, vol. 23, [no. 4, pp. 3221–3231, Dec. 2020, doi: 10.1007/s10586-020-03082-6.](https://doi.org/10.1007/s10586-020-03082-6) [16] M. Hafsa and F. Jemili, “Comparative study between big data analysis techniques in intrusion detection,” _Big Data and Cognitive Computing,_ [vol. 3, no. 1, pp. 1-13, Dec. 2018, doi: 10.3390/bdcc3010001.](https://doi.org/10.3390/bdcc3010001) [17] F. M. Awaysheh, M. N. Aladwan, M. Alazab, S. Alawadi, J. C. Cabaleiro, and T. F. Pena, “Security by design for big data frameworks over cloud computing,” _IEEE_ _Transactions_ _on_ _Engineering_ _[Management, pp. 1–18, Feb. 2021, doi: 10.1109/TEM.2020.3045661.](https://doi.org/10.1109/TEM.2020.3045661)_ [18] A. Bocci, S. Forti, G. L. Ferrari, and A. Brogi, “Secure FaaS orchestration in the fog: How far are we?” Computing, vol. 103, no. 5, pp. 1025–1056, May 2021, doi: 10.1007/s00607-021-00924-y. [19] Y. Wang, X. Zhang, Y. Wu, and Y. Shen, “Enhancing leakage prevention for mapreduce,” _IEEE Transactions on Information_ _Forensics and Security, vol. 17, pp. 
1558–1572, 2022, doi:_ [10.1109/TIFS.2022.3166641.](https://doi.org/10.1109/TIFS.2022.3166641) [20] O. Ohrimenko, M. Costa, C. Fournet, C. Gkantsidis, M. Kohlweiss, and D. Sharma, “Observing and preventing leakage in MapReduce,” in _Proceedings of the 22nd ACM SIGSAC Conference on Computer and_ _Communications Security, New York, NY, USA, 2015, pp. 1570–1581._ [doi: 10.1145/2810103.2813695.](https://doi.org/10.1145/2810103.2813695) [21] A. M. Sauber, A. Awad, A. F. Shawish, and P. M. El-Kafrawy, “A novel hadoop security model for addressing malicious collusive workers,” _Computational Intelligence and Neuroscience, vol. 2021, pp. 1-10,_ [2021, doi: 10.1155/2021/5753948.](https://doi.org/10.1155/2021/5753948) [22] P. Derbeko, S. Dolev, E. Gudes, and S. Sharma, “Security and privacy aspects in MapReduce on clouds: A survey,” Computer Science Review, [vol. 20, pp. 1–28, May 2016, doi: 10.1016/j.cosrev.2016.05.001.](https://doi.org/10.1016/j.cosrev.2016.05.001) [23] R. Poddar, T. Boelter, and R. Popa, “Arx: An encrypted database using semantically secure encryption,” Proceedings of the VLDB Endowment, [vol. 12, pp. 1664–1678, Jul. 2019, doi: 10.14778/3342263.3342641.](https://doi.org/10.14778/3342263.3342641) [24] A. Nisioti, A. Mylonas, P. D. Yoo, and V. Katos, “From intrusion detection to attacker attribution: A comprehensive survey of unsupervised methods,” _IEEE Communications Surveys & Tutorials,_ vol. 20, no. 4, pp. 3369–3388, Fourthquarter 2018, doi: [10.1109/COMST.2018.2854724.](https://doi.org/10.1109/COMST.2018.2854724) [25] S. T. March and G. F. Smith, “Design and natural science research on information technology,” _Decision Support Systems, vol. 15, no. 4, pp._ [251–266, Dec. 1995, doi: 10.1016/0167-9236(94)00041-2.](https://doi.org/10.1016/0167-9236(94)00041-2) [26] A. R. Hevner, S. T. March, J. Park, and S. Ram, “Design science in information systems research,” _MIS Quarterly, vol. 28, no. 1, pp. 75–_ [105, 2004, doi: 10.2307/25148625.](https://doi.org/10.2307/25148625) [27] “Dell technology,” _Dell Inc, June, 2022. [Online]. Available:_ https://www.dell.com. [28] “Cisco routers and SD-WAN,” _Cisco Systems, June, 2022. [Online]._ Available: https://www.cisco.com/site/us/en/products/networking/sd-wan-routers/index.html. [29] “Benchmarking & Diagnostic Software,” _Passmark Software, June,_ 2022. [Online]. Available: https://www.passmark.com. [30] “Spark tuning guide on 3rd generation Intel® Xeon® scalable processors based platform,” _Intel Corporation, August, 2021, [Online]._ Available: https://www.intel.cn/content/www/cn/zh/developer/articles/guide/spark-tuning-guide-on-xeon-based-systems.html. [31] “Tuning Spark,” The Apache Software Foundation, July, 2022. [Online]. Available: https://spark.apache.org/docs/3.2.2/. [32] “Cluster Mode Overview,” _The Apache Software Foundation, June,_ 2022. [Online]. Available: https://spark.apache.org/docs/latest/cluster-overview.html. [33] “Apache Hadoop YARN,” _The Apache Software Foundation, June,_ 2022. [Online]. Available: https://hadoop.apache.org/docs/stable/hadoop-yarn/hadoop-yarn-site/YARN.html. [34] “Kali linux features,” _OffSec Services Limited, June, 2022. [Online]._ Available: https://www.kali.org/features. [35] “Snort FAQ/Wiki,” _Cisco Systems, July, 2022. [Online]. Available:_ [https://www.snort.org/faq.](https://www.snort.org/faq) [36] “Suricata user guide,” _Open Information Security Foundation, July,_ 2022. [Online]. Available: https://suricata.readthedocs.io/en/suricata-6.0.6. [37] “nmon for Linux,” _IBM,_ June, 2022.
[Online]. Available: http://nmon.sourceforge.net. [38] “Collapsed data center and campus core deployment guide,” _Cisco_ _Systems,_ June, 2022. [Online]. Available: https://www.cisco.com/c/dam/global/en_ca/solutions/strategy/docs/sbaGov_nexus7000Dguide_new.pdf. [39] J. Kotantoulas, “Zero trust for government networks,” _Cisco Systems,_ June, 2022. [Online]. Available: https://blogs.cisco.com/government/zero-trust-for-government-networks-6-steps-you-need-to-know. [40] Y. Bello, A. R. Hussein, M. Ulema, and J. Koilpillai, “On sustained zero trust conceptualization security for mobile core networks in 5G and beyond,” IEEE Transactions on Network and Service Management, vol. 19, no. 2, pp. 1876–1889, Jun. 2022, doi: [10.1109/TNSM.2022.3157248.](https://doi.org/10.1109/TNSM.2022.3157248)
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.14569/ijacsa.2023.01409103?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.14569/ijacsa.2023.01409103, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "GOLD", "url": "http://thesai.org/Downloads/Volume14No9/Paper_103-Next_Generation_Intrusion_Detection_and_Prevention_System.pdf" }
2,023
[ "JournalArticle" ]
true
null
[ { "paperId": "6ca87b87dc8464dfe918864e8013d2f6a9a5f2e5", "title": "On Sustained Zero Trust Conceptualization Security for Mobile Core Networks in 5G and Beyond" }, { "paperId": "7ce2bd2fd3e71fafcdabe1c2b8cdf577e3a700f8", "title": "Privacy Prevention of Big Data Applications: A Systematic Literature Review" }, { "paperId": "dd25b767a1de93d8d011955ad69dc061e9e216b5", "title": "A Novel Hadoop Security Model for Addressing Malicious Collusive Workers" }, { "paperId": "f7ed94f26ba347c77e523d637807c1dc98b74d67", "title": "Secure FaaS orchestration in the fog: how far are we?" }, { "paperId": "0763a0f18e30d6fd380e13dc4715bbc78f07b6e9", "title": "Distributed frameworks for detecting distributed denial of service attacks: A comprehensive review, challenges and future directions" }, { "paperId": "692fb7c661cb83356d803b5013b1890eb1b1c838", "title": "A Systematic Review of Defensive and Offensive Cybersecurity with Machine Learning" }, { "paperId": "40a174543353719e3c52a6f65cdb1f6760c94ecb", "title": "A Comprehensive Survey of Databases and Deep Learning Methods for Cybersecurity and Intrusion Detection Systems" }, { "paperId": "a0a2d3adcbe80cc912de7357ce08579212dc1e36", "title": "FCM–SVM based intrusion detection system for cloud computing environment" }, { "paperId": "20bc8e85b462238fc9dad348ca05c7c5f0daf5d9", "title": "Arx: An Encrypted Database using Semantically Secure Encryption" }, { "paperId": "295e9f9484ef05946b4553127360eaf3d7d1fc8f", "title": "An Ensemble Intrusion Detection Technique Based on Proposed Statistical Flow Features for Protecting Network Traffic of Internet of Things" }, { "paperId": "8ec4b08a83faf5622fea30888a7fe14b7b0baf9b", "title": "Internet of Things: A survey on machine learning-based intrusion detection approaches" }, { "paperId": "f49be6398a8e397b100bed21836511534321baf0", "title": "Comparative Study between Big Data Analysis Techniques in Intrusion Detection" }, { "paperId": "e311eab55d98e0624e39872b91aa17a065e65822", "title": "Big Data Meet Cyber-Physical Systems: A Panoramic Survey" }, { "paperId": "8e8929d718d3cf759fb2148ba23a06036b619224", "title": "Privacy and security of big data in cyber physical systems using Weibull distribution-based intrusion detection" }, { "paperId": "ea530e2a2511b04bea962a72b3e4504890a2a882", "title": "From Intrusion Detection to Attacker Attribution: A Comprehensive Survey of Unsupervised Methods" }, { "paperId": "ef89fd25b24c63feddb7bcc791f64cf5ad04d305", "title": "Security-by-design in multi-cloud applications: An optimization approach" }, { "paperId": "72e713f0bee2fbc4b824f00777d9c24deadaa4df", "title": "Cyber-physical systems, internet of things and big data" }, { "paperId": "1c65649ffb61144f867aa1dc526ddaac19b3cecd", "title": "A survey of intrusion detection in Internet of Things" }, { "paperId": "7e5dd32e29c6f58c510347f45c6131c8d82fac01", "title": "Security and privacy aspects in MapReduce on clouds: A survey" }, { "paperId": "3ca369fa2cadb403db7ac5e75deefd9acbb10723", "title": "Observing and Preventing Leakage in MapReduce" }, { "paperId": "ea548d78ecb43b750a0ca1f6d3093128057a9bf3", "title": "Enhancing Big Data Security with Collaborative Intrusion Detection" }, { "paperId": "22b56df45e2ff1aafca278253c24c6abafd0d0c7", "title": "A survey of intrusion detection techniques for cyber-physical systems" }, { "paperId": "0ee5a26a6dc64d3089c8f872bd550bf1eab7051d", "title": "Design Science in Information Systems Research" }, { "paperId": "d93ffe572b15a163e2ec1336a4e507b0b7a766f0", "title": "Design and natural science research on 
information technology" }, { "paperId": "a630fcc0716635bdece98f266b06dfc90b28abf8", "title": "Enhancing Leakage Prevention for MapReduce" }, { "paperId": "35e8c10dce9d2782cf7091d6994158d02f2c6f40", "title": "Security by Design for Big Data Frameworks Over Cloud Computing" }, { "paperId": "ce11c28f930d540c5331bc6eb6c38218bfa123aa", "title": "Design of Intrusion Detection System for Internet of Things Based on Improved BP Neural Network" }, { "paperId": null, "title": "“Tuning Spark,”" }, { "paperId": null, "title": "“Snort FAQ/Wiki,”" }, { "paperId": null, "title": "“Benchmarking & Diagnostic Software,”" }, { "paperId": null, "title": "“Dell technology,”" }, { "paperId": null, "title": "“Cluster Mode Overview,”" }, { "paperId": null, "title": "“Suricata user guide,”" }, { "paperId": null, "title": "“nmon for Linux,”" }, { "paperId": null, "title": "“Kali linux features,”" }, { "paperId": null, "title": "“Cisco routers and SD-WAN,”" }, { "paperId": null, "title": "“Collapsed data center and campus core deployment guide,”" }, { "paperId": null, "title": "“Spark tuning guide on 3rd generation Intel® Xeon® scalable processors based platform,”" }, { "paperId": null, "title": "“Zero trust for government networks,”" }, { "paperId": null, "title": "Open Information Security Foundation" }, { "paperId": null, "title": "“Apache Hadoop YARN,”" } ]
12,615
en
[ { "category": "Computer Science", "source": "external" }, { "category": "Computer Science", "source": "s2-fos-model" }, { "category": "Education", "source": "s2-fos-model" } ]
https://www.semanticscholar.org/paper/0200d453f5c995c87761e50976ed07692e257a30
[ "Computer Science" ]
0.918995
The Blockchain and Kudos: A Distributed System for Educational Record, Reputation and Reward
0200d453f5c995c87761e50976ed07692e257a30
European Conference on Technology Enhanced Learning
[ { "authorId": "2357522", "name": "M. Sharples" }, { "authorId": "145543299", "name": "J. Domingue" } ]
{ "alternate_issns": null, "alternate_names": [ "EC-TEL", "Eur Conf Technol Enhanc Learn" ], "alternate_urls": null, "id": "64efda39-fbce-4563-9435-4064cc930715", "issn": null, "name": "European Conference on Technology Enhanced Learning", "type": "conference", "url": "http://www.wikicfp.com/cfp/program?id=826" }
The ‘blockchain’ is the core mechanism for the Bitcoin digital payment system. It embraces a set of inter-related technologies: the blockchain itself as a distributed record of digital events, the distributed consensus method to agree whether a new block is legitimate, automated smart contracts, and the data structure associated with each block. We propose a permanent distributed record of intellectual effort and associated reputational reward, based on the blockchain, that instantiates and democratises educational reputation beyond the academic community. We are undertaking initial trials of a private blockchain for storing educational records, drawing also on our previous research into reputation management for educational systems.
# **The Blockchain and Kudos: A Distributed System** **for Educational Record, Reputation and Reward** Mike Sharples [1] and John Domingue [2] 1 Institute of Educational Technology, The Open University, Milton Keynes, UK mike.sharples@open.ac.uk 2 Knowledge Media Institute, The Open University, Milton Keynes, UK john.domingue@open.ac.uk **Abstract.** The ‘blockchain’ is the core mechanism for the Bitcoin digital payment system. It embraces a set of inter-related technologies: the blockchain itself as a distributed record of digital events, the distributed consensus method to agree whether a new block is legitimate, automated smart contracts, and the data structure associated with each block. We propose a permanent distributed record of intellectual effort and associated reputational reward, based on the blockchain, that instantiates and democratises educational reputation beyond the academic community. We are undertaking initial trials of a private blockchain for storing educational records, drawing also on our previous research into reputation management for educational systems. **Keywords:** Blockchain · Reputation management · Self-determined learning · e-portfolios · Records of achievement © The Author(s) 2016 K. Verbert et al. (Eds.): EC-TEL 2016, LNCS 9891, pp. 490–496, 2016. DOI: 10.1007/978-3-319-45153-4_48 ## **1 Introduction** The blockchain is being proposed as a disruptive technology that could transform the finance and commerce sectors (see e.g. [1, 2]). In this paper we explore the disruptive potential of the blockchain for education and its value in support of self-determined learning. To understand the relevance of the blockchain to education, it is important to understand its components, as any one or more may be adapted for educational use. First, there is the blockchain itself, a distributed record of digital events. The blockchain is a long chain of linked data items stored on every participating computer, where the next item can only be added by consensus of a majority of those participating. There are public blockchains that anyone can access and potentially add to, and there are private blockchains used within an organization or consortium. The best known, but not the only, blockchain is the one at the heart of the Bitcoin system of digital money [3]. Second, there is the ‘distributed consensus’ method to agree whether a new block is legitimate and should be added to the chain. This is done by requiring a participant’s computer to perform a significant amount of computational work (‘proof of work’ or ‘mining’) before it can try to add a new item to the shared blockchain. To create a false blockchain and get that accepted by consensus would be prohibitively difficult. An unfortunate consequence of the ‘proof of work’ requirement is that the computer performing the mining operation to produce a new block must spend a considerable amount of computational power and electricity, just to provide the proof of work. Alternatives are being developed for distributed validation of new blocks, including ‘proof of stake’ where, to add a new block, a participant must show a certain amount of currency or reputation, which is lost if that block is not accepted by consensus [4]. Third, each block in the blockchain can hold a small amount of data (typically up to 1 Mb), which could be any information that is required to be kept secure, yet distributed. These could be records of currency transactions (as in Bitcoin) or, for education, exam credentials or records of learning. That information is stored across all participating computers and can be viewed by anyone possessing the cryptographic ‘public key’ but cannot be modified, even by the original author. The data records are timestamped, providing a trusted and timed record of the added data. Last, there are Smart Contracts, segments of computer code which enact blockchain transactions when certain conditions have been met. These enable business and legal agreements to be stored and executed online, for example to automate invoicing. In October 2015, Visa and DocuSign demonstrated Smart Contracts for leasing cars without the need to fill in forms (https://www.docusign.com/blog/the-future-of-car-leasing-is-as-easy-as-click-sign-drive/). To explore the value of the blockchain for education, we take each of these elements separately, then examine how they fit together.
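Before turning to education, a minimal Python sketch of how the first two components, hash-linked blocks and proof-of-work mining, fit together may be useful. The difficulty value and record format are purely illustrative; real chains such as Bitcoin's use far harder puzzles and richer block structures.

```python
import hashlib
import json
import time

def block_hash(block):
    """Hash a block's canonical JSON form; changing anything breaks the link."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine_block(prev_hash, data, difficulty=4):
    """Proof of work: find a nonce whose hash has `difficulty` leading zeros."""
    ts = time.time()
    nonce = 0
    while True:
        block = {"prev": prev_hash, "time": ts, "data": data, "nonce": nonce}
        digest = block_hash(block)
        if digest.startswith("0" * difficulty):
            return block, digest
        nonce += 1

genesis, h0 = mine_block("0" * 64, {"event": "chain created"})
block1, h1 = mine_block(h0, {"event": "certificate awarded"})
# Tampering with genesis would change h0, so block1's "prev" field would no
# longer match and the forgery would be rejected by honest participants.
```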
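Mechanically, lodging such a record need only commit a fingerprint of the work. In the sketch below, only the SHA-256 digest is published, so the work itself can stay private until the author chooses to reveal it; the author identifier and record format are illustrative.

```python
import hashlib
import time

def lodge(work_bytes, author_id):
    """Build a 'big idea' record: a content digest, its author, a timestamp.

    Revealing the work later proves authorship, since recomputing its
    SHA-256 digest must reproduce the digest lodged on the chain.
    """
    return {
        "digest": hashlib.sha256(work_bytes).hexdigest(),
        "author": author_id,           # illustrative identifier
        "timestamp": int(time.time()),
    }

poem = "so much depends / upon a red wheel barrow".encode()
record = lodge(poem, "author:mike")    # record is what goes into a block
assert record["digest"] == hashlib.sha256(poem).hexdigest()
```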
Once lodged it cannot be modified, but it could be replaced by a later version. This can act as a permanent e-portfolio of intellectual achievement, for personal use as a logbook, or to present to an employer. It also serves as a crowd-sourced method of patenting. There is no need for a person to make and prove claims for invention – the record is there to see. The startup company Blockai has already implemented a blockchain system to help creative workers register their work to protect it from copyright infringement [7]. The blockchain as a record of intellectual work has resonances with the Xanadu project of Ted Nelson [8]. Conceived in the early 1960s, Nelson’s vision was for a “digital repository scheme for world-wide electronic publishing” [9, p. 3/2] with aspects that go beyond the worldwide web, including unbreakable links, attribution to authors, and micropayments for re-use of content. Each item in the Xanadu repository would be linked back to its author and the record would be stored across many locations to maintain availability in the case of disaster. Most of Nelson’s 17 rules for Xanadu could be mapped onto the blockchain as a record of learning, e.g.: every user is uniquely and securely identified; permission to link to a document is explicitly granted by the act of publication; every record is automatically stored redundantly to maintain availability even in case of disaster; the communication protocol is an openly published standard. A problem with the blockchain as a record of learning or intellectual effort is similar to that for its use as a digital store for certificates: it is proof of existence (see https://www.proofofexistence.com/), but does not guarantee that the data held in the record is valid, authentic or useful. A user’s claim to be the originator of an idea, invention or creative work could be contested, nor is there any guarantee that the item is valuable or even interesting to others. This is a serious issue, but it is addressed by the academic community through processes of peer review and reputation management. Nelson proposed a payment and royalty mechanism for Xanadu. For the blockchain as a record of learning, we indicate a mechanism for intellectual credit and reputation. ## **4 The Blockchain as Intellectual Currency** Currently, the main use of the blockchain is as a mechanism for recording transactions of the Bitcoin digital currency. This is a public ledger that records Bitcoin transactions (though it can store other types of record). Bitcoins, like traditional currencies, can be used to pay for products and services from merchants who accept them. Thus, Bitcoin micro-payments could be used as a reward for small educational services, such as a student who carries out a peer assessment task being automatically rewarded [10]. But other commodities can have tradeable value, notably reputation [11]. Reputation is a foundation of the new digital economy, with companies such as AirBnB and Uber building trust through ratings and reviews. Amongst academics, reputation is already a tradeable commodity, with promotion and recruitment being based in part on reputation measured through number of citations and the H-index metric of publication impact. Imagine that trading of scholarly reputation could be extended beyond the academic world and made the basis of an educational economy. Consider the following proposition.
A new public blockchain is initiated to manage educational records and rewards, perhaps by a consortium of educational institutions and companies. Each recognized educational institution, innovative organization, and intellectual worker is given an initial award of ‘educational reputation currency’, which we will call Kudos. The initial award might be based on some existing (albeit crude) metric: Times Higher Education World Reputation Rankings for universities, H-index for academics, Amazon author rank for published authors, etc. An institution could allocate some of its initial fund of Kudos to staff whose reputation it wishes to promote. Each person and institution stores its fund of reputation in a virtual ‘wallet’ on a universal educational blockchain. Then, any institution or individual can make a reputational transaction. For an educational institution such as a university, that might be the award of a degree or certificate, which would involve posting the certificate on the blockchain and also transferring some Kudos from the awarding institution to the awardee. For individuals, it could support an economy of online tutoring, with students paying a tutor for online teaching in financial (e.g., Bitcoin) currency, and the tutor then paying the student in reputation (Kudos) for passing a test or completing the course. The Smart Contracts mechanism could allow such peer-to-peer micropayments to be made in a variety of currencies. Any individual (not necessarily someone who already has reputational credit) can also post an item of note to the educational blockchain. It might be a creative or scholarly production, a work of art, or a great idea, which is timestamped and archived. Thus, a simple posting is a permanent record of authorship as well as an item in a personal, but shareable, e-portfolio. In addition, an individual with reputation can decide to associate Kudos with one or more postings to the blockchain, up to the amount the person holds in their wallet. The amount would not be spent, but is an indication of the value of the work or idea. Other people might then transfer some of their reputational credit to the author, to boost the reputation of that person’s artefact or idea. They might do that to promote or be associated with the idea, in a similar way to investing in a Kickstarter project, but with a currency of reputation. A consequence is that the educational blockchain would provide a single universal record of lodged creative works or ideas, each associated with reputational credit. The amount of Kudos associated with each item indicates its value to the author and thus, if needed, its real-world monetary value (e.g. for purchasing a copy of the creative work). Lastly, reputation could be ‘mined’ by institutions, which stake part of their reputation on adding valid blocks to the chain (through a proof-of-stake algorithm), for which they are rewarded with additional Kudos. There is no limit in theory to the items that could be added to an educational blockchain – assignments, blog postings, comments – but there is a computational cost in storing and maintaining a distributed educational record. That record is public, so anyone can determine how a person gained their reputation, and the rules for associating value are agreed by a consensus of the volunteers mining the blocks. Such a reputational management system for education is not fanciful.
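A minimal sketch of the wallet-and-transfer bookkeeping in this proposition is given below. In a deployed system these balances and the transaction log would live on the blockchain itself, enforced by consensus or a smart contract, rather than in a single process; all names and amounts here are invented.

```python
class KudosLedger:
    """Toy in-memory stand-in for on-chain Kudos wallets and transfers."""

    def __init__(self, initial_balances):
        self.balances = dict(initial_balances)
        self.log = []  # append-only, like the chain's transaction record

    def transfer(self, sender, receiver, amount, memo=""):
        if self.balances.get(sender, 0) < amount:
            raise ValueError(f"{sender} lacks {amount} Kudos")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount
        self.log.append({"from": sender, "to": receiver,
                         "amount": amount, "memo": memo})

# Invented initial allocation and transactions:
ledger = KudosLedger({"UniversityX": 10_000, "dr_jones": 120})
ledger.transfer("UniversityX", "student_42", 50, memo="BSc awarded")
ledger.transfer("dr_jones", "student_42", 5, memo="endorsing project idea")
```

On a real chain the transfer rule would be enforced by the consensus of participants rather than by one trusted server.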
Something similar, though without the blockchain and tradeable reputation, is in operation on The Open University iSpot citizen science site [12], where acknowledged wildlife experts are initially given a high reputational score on the platform and new users can earn visible reputation (indicated by reputation points as well as virtual badges) through making wildlife observations and validating the observations of others. This process of enhancing reputation on iSpot happens automatically, and most of the computational complexity of managing an educational blockchain and reputation system could be hidden from the user or institution. We have been experimenting with adding OpenLearn badges (http://www.open.edu/openlearn/get-started/badges-come-openlearn) to a private blockchain. OpenLearn hosts over 800 free Open University courses and attracts over 5 million visitors per year. Our Open Blockchain platform is implemented on the open source Ethereum infrastructure (https://www.ethereum.org/), which supports the creation of Distributed Applications comprising sets of Smart Contracts. Our system currently allows students to register for courses and receive badges which can be viewed in a student Learning Passport. An administration interface enables the awarding of badges to students. All transactions are timestamped and cryptographically signed. The transactions are peer-to-peer: in principle no host institution is required for the awarding of accreditation. Future work will integrate badges from other institutions, including FutureLearn (http://www.futurelearn.com), and optionally place badges onto the public Ethereum blockchain.
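For flavour, driving a badge award on an Ethereum-style chain from Python might look roughly as below, using web3.py. The node URL, contract address, and the awardBadge function with its ABI are entirely hypothetical illustrations; they are not the interface of the Open Blockchain platform described above.

```python
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("http://127.0.0.1:8545"))  # hypothetical node

# Hypothetical one-function ABI for a badge-awarding smart contract.
BADGE_ABI = [{
    "name": "awardBadge",
    "type": "function",
    "stateMutability": "nonpayable",
    "inputs": [{"name": "student", "type": "address"},
               {"name": "courseId", "type": "uint256"}],
    "outputs": [],
}]

badges = w3.eth.contract(
    address="0x0000000000000000000000000000000000000001",  # placeholder
    abi=BADGE_ABI,
)

# Assumes the node exposes at least two unlocked accounts.
admin, student = w3.eth.accounts[0], w3.eth.accounts[1]

# The award transaction is signed, timestamped in its block, and auditable.
tx_hash = badges.functions.awardBadge(student, 101).transact({"from": admin})
receipt = w3.eth.wait_for_transaction_receipt(tx_hash)
```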
Yet it could be argued that reputation as a commodity has long been a part of academia, though citation counts, impact factors, and national research assessment exercises. The block‐ chain and reputational currency might reduce education to a marketplace of knowledge, or they might extend the community of researchers and inventors to anyone with good ideas to share. **Open Access.** This chapter is distributed under the terms of the Creative Commons Attribution [4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, dupli‐](http://creativecommons.org/licenses/by/4.0/) cation, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, a link is provided to the Creative Commons license and any changes made are indicated. The images or other third party material in this chapter are included in the work's Creative Commons license, unless indicated otherwise in the credit line; if such material is not included in the work's Creative Commons license and the respective action is not permitted by statutory regulation, users will need to obtain permission from the license holder to duplicate, adapt or reproduce the material. ## **References** 1. Jones, H.: Broker ICAP says first to use blockchain for trading data. Reuters, London, 15 [March 2016. http://uk.reuters.com/article/us-icap-markets-blockchain-idUKKCN0WH2J7](http://uk.reuters.com/article/us-icap-markets-blockchain-idUKKCN0WH2J7) 2. Valenzuela, J.: Arcade City: Ethereum’s Big Test Drive to Kill Uber. The Cointelegraph, 15 [March, 2016. http://cointelegraph.com/news/arcade-city-ethereums-big-test-drive-to-kill-uber](http://cointelegraph.com/news/arcade-city-ethereums-big-test-drive-to-kill-uber) 3. Nakamoto, S.: Bitcoin: A Peer-to-Peer Electronic Cash System, October 2008. [http://](http://www.cryptovest.co.uk/resources/Bitcoin%2520paper%2520Original.pdf) [www.cryptovest.co.uk/resources/Bitcoin%20paper%20Original.pdf](http://www.cryptovest.co.uk/resources/Bitcoin%2520paper%2520Original.pdf) 4. Buterin, V.: Understanding Serenity, Part 2: Casper, 28 December 2015. [https://](https://blog.ethereum.org/2015/12/28/understanding-serenity-part-2-casper/) [blog.ethereum.org/2015/12/28/understanding-serenity-part-2-casper/](https://blog.ethereum.org/2015/12/28/understanding-serenity-part-2-casper/) 5. University of Nicosia. Academic Certificates on the Blockchain. [http://digital](http://digitalcurrency.unic.ac.cy/free-introductory-mooc/academic-certificates-on-the-blockchain/) [currency.unic.ac.cy/free-introductory-mooc/academic-certificates-on-the-blockchain/](http://digitalcurrency.unic.ac.cy/free-introductory-mooc/academic-certificates-on-the-blockchain/) 6. Sony Global Education. Sony Global Education Develops Technology Using Blockchain for Open Sharing of Academic Proficiency and Progress Records, 22 February 2016. [http://](http://www.sony.net/SonyInfo/News/Press/201602/16-0222E/index.html) [www.sony.net/SonyInfo/News/Press/201602/16-0222E/index.html](http://www.sony.net/SonyInfo/News/Press/201602/16-0222E/index.html) 7. Ha, A.: Blockai uses the blockchain to help artists protect their intellectual property, [TechCrunch, 15 March 2016. http://techcrunch.com/2016/03/14/blockai-launch/](http://techcrunch.com/2016/03/14/blockai-launch/) 8. Struppa, D.C., Douglas R. D.: Intertwingled: The Work and Influence of Ted Nelson. SpringerOpen (2015) 9. Nelson, T.H.: Literary machines. Mindful Press, Sausalito (1993) ----- 496 M. 
10. Devine, P.: Blockchain learning: can crypto-currency methods be appropriated to enhance online learning? In: ALT Online Winter Conference, 7th–10th December (2015) 11. Schlegel, H.: Reputation Currencies. Institute of Customer Experience. http://ice.humanfactors.com/money.html 12. Clow, D., Makriyannis, E.: iSpot Analysed: Participatory Learning and Reputation. In: Proceedings of the 1st International Conference on Learning Analytics and Knowledge, 28 February – 01 March 2011, Banff, Alberta, pp. 34–43 (2011)
{ "disclaimer": "Notice: Paper or abstract available at https://api.unpaywall.org/v2/10.1007/978-3-319-45153-4_48?email=<INSERT_YOUR_EMAIL> or https://doi.org/10.1007/978-3-319-45153-4_48, which is subject to the license by the author or copyright owner provided with this content. Please go to the source to verify the license and copyright information for your use.", "license": "CCBY", "status": "HYBRID", "url": "https://link.springer.com/content/pdf/10.1007/978-3-319-45153-4_48.pdf" }
2,016
[ "JournalArticle" ]
true
2016-09-13T00:00:00
[ { "paperId": "a1d6492f298dc695be1eac29484dea58358c86a7", "title": "Blockchain learning: can crypto-currency methods be appropriated to enhance online learning?" }, { "paperId": "7a97831eff7a1443040b0bb62840396a192fb075", "title": "Intertwingled: The Work and Influence of Ted Nelson" }, { "paperId": "7b8694409fa9cf3af75632bca3fe604a3b4bf117", "title": "iSpot analysed: participatory learning and reputation" }, { "paperId": null, "title": "Arcade City: Ethereum’s Big Test Drive to Kill Uber" }, { "paperId": null, "title": "Sony Global Education Develops Technology Using Blockchain for Open Sharing of Academic Proficiency and Progress Records" }, { "paperId": null, "title": "Blockai uses the blockchain to help artists protect their intellectual property, TechCrunch, 15 March 2016" }, { "paperId": null, "title": "Broker ICAP says first to use blockchain for trading data" }, { "paperId": null, "title": "Understanding Serenity, Part 2: Casper" }, { "paperId": "4e9ec92a90c5d571d2f1d496f8df01f0a8f38596", "title": "Bitcoin: A Peer-to-Peer Electronic Cash System" }, { "paperId": null, "title": "Literary machines" }, { "paperId": null, "title": "Academic Certificates on the Blockchain" }, { "paperId": null, "title": "Broker ICAP says first to use blockchain for trading data. Reuters, London, 15 March 2016. http://uk.reuters.com/article/us-icap-markets-blockchain-idUKKCN0WH2J7" }, { "paperId": null, "title": "Reputation Currencies" } ]
5,097