A conference room pilot (CRP) is a type of software procurement and software acceptance testing. A CRP may be used during the selection and implementation of a software application in an organization or company.
The purpose of the conference room pilot is to validate a software application against the business processes of end-users of the software, by allowing end-users to carry out typical or key business processes using the new software. A commercial advantage of a conference room pilot is that it may allow the customer to prove that the new software will do the job (meets business requirements and expectations) before committing to buying it, thus avoiding the purchase of an inappropriate application. The term is most commonly used in the context of 'out of the box' (OOTB) or 'commercial off-the-shelf' (COTS) software.
Although a conference room pilot shares some features of user acceptance testing (UAT), it should not be considered a testing process – it validates that a design or solution is fit for purpose at a higher level than functional testing.
Shared features of CRP and UAT include:
Differences between a conference room pilot and a formal UAT:
|
https://en.wikipedia.org/wiki/Conference_room_pilot
|
The software release life cycle is the process of developing, testing, and distributing a software product (e.g., an operating system). It typically consists of several stages, such as pre-alpha, alpha, beta, and release candidate, before the final version, or "gold", is released to the public.
Pre-alpha refers to the early stages of development, when the software is still being designed and built. Alpha testing is the first phase of formal testing, during which the software is tested internally using white-box techniques. Beta testing is the next phase, in which the software is tested by a larger group of users, typically outside of the organization that developed it. The beta phase is focused on reducing impacts on users and may include usability testing.
After beta testing, the software may go through one or more release candidate phases, in which it is refined and tested further, before the final version is released.
Some software, particularly in the internet and technology industries, is released in a perpetual beta state, meaning that it is continuously being updated and improved, and is never considered to be a fully completed product. This approach allows for a more agile development process and enables the software to be released and used by users earlier in the development cycle.
Pre-alpha refers to all activities performed during the software project before formal testing. These activities can include requirements analysis, software design, software development, and unit testing. In typical open source development, there are several types of pre-alpha versions. Milestone versions include specific sets of functions and are released as soon as the feature is complete.[citation needed]
The alpha phase of the release life cycle is the first phase of software testing (alpha is the first letter of the Greek alphabet, used as the number 1). In this phase, developers generally test the software using white-box techniques. Additional validation is then performed using black-box or gray-box techniques, by another testing team. Moving to black-box testing inside the organization is known as alpha release.[1][2]
Alpha software is not thoroughly tested by the developer before it is released to customers. Alpha software may contain serious errors, and any resulting instability could cause crashes or data loss.[3] Alpha software may not contain all of the features that are planned for the final version.[4] In general, external availability of alpha software is uncommon for proprietary software, while open source software often has publicly available alpha versions. The alpha phase usually ends with a feature freeze, indicating that no more features will be added to the software. At this time, the software is said to be feature-complete. A beta test is carried out following acceptance testing at the supplier's site (the alpha test) and immediately before the general release of the software as a product.[5]
A feature-complete (FC) version of a piece of software has all of its planned or primary features implemented but is not yet final due to bugs, performance or stability issues.[6] This occurs at the end of alpha testing in development.
Usually, feature-complete software still has to undergo beta testing and bug fixing, as well as performance or stability enhancement, before it can go to release candidate, and finally gold status.
Beta, named after the second letter of the Greek alphabet, is the software development phase following alpha. A beta phase generally begins when the software is feature-complete but likely to contain several known or unknown bugs.[7] Software in the beta phase will generally have many more bugs in it than completed software and speed or performance issues, and may still cause crashes or data loss. The focus of beta testing is reducing impacts on users, often incorporating usability testing. The process of delivering a beta version to the users is called beta release and is typically the first time that the software is available outside of the organization that developed it. Software beta releases can be either open or closed, depending on whether they are openly available or only available to a limited audience. Beta version software is often useful for demonstrations and previews within an organization and to prospective customers. Some developers refer to this stage as a preview, preview release, prototype, technical preview or technology preview (TP),[8] or early access.
Beta testers are people who actively report issues with beta software. They are usually customers or representatives of prospective customers of the organization that develops the software. Beta testers tend to volunteer their services free of charge but often receive versions of the product they test, discounts on the release version, or other incentives.[9][10]
Some software is kept in so-called perpetual beta, where new features are continually added to the software without establishing a final "stable" release. As the Internet has facilitated the rapid and inexpensive distribution of software, companies have begun to take a looser approach to the use of the word beta.[11]
Developers may release either a closed beta or an open beta; closed beta versions are released to a restricted group of individuals for a user test by invitation, while open beta testers are from a larger group, or anyone interested. A private beta may be suitable for software that is capable of delivering value but is not ready to be used by everyone, whether because of scaling issues, lack of documentation, or still-missing vital features. The testers report any bugs that they find, and sometimes suggest additional features they think should be available in the final version.
Open betas serve the dual purpose of demonstrating a product to potential consumers, and testing among a wide user base is likely to bring to light obscure errors that a much smaller testing team might not find.[citation needed]
A release candidate (RC), also known as gamma testing or "going silver", is a beta version with the potential to be a stable product, which is ready to release unless significant bugs emerge. In this stage of product stabilization, all product features have been designed, coded, and tested through one or more beta cycles with no known showstopper-class bugs. A release is called code complete when the development team agrees that no entirely new source code will be added to this release. There could still be source code changes to fix defects, changes to documentation and data files, and peripheral code for test cases or utilities.[citation needed]
Also called production release, the stable release is the last release candidate (RC) which has passed all stages of verification and tests. Any known remaining bugs are considered acceptable. This release goes to production.
Some software products (e.g. Linux distributions like Debian) also have long-term support (LTS) releases which are based on full releases that have already been tried and tested and receive only security updates.[citation needed]
Once released, the software is generally known as a "stable release". The formal term often depends on the method of release: physical media, online release, or a web application.[12]
The term "release to manufacturing" (RTM), also known as "going gold", is a term used when a software product is ready to be delivered. This build may be digitally signed, allowing the end user to verify the integrity and authenticity of the software purchase. The RTM build is known as the "gold master" or GM[13]is sent for mass duplication or disc replication if applicable. The terminology is taken from the audio record-making industry, specifically the process ofmastering. RTM precedes general availability (GA) when the product is released to the public. A golden master build (GM) is typically the final build of a piece of software in the beta stages for developers. Typically, foriOS, it is the final build before a major release, however, there have been a few exceptions.
RTM is typically used in certain retail mass-production software contexts—as opposed to a specialized software production or project in a commercial or government production and distribution—where the software is sold as part of a bundle in a related computer hardware sale and typically where the software and related hardware is ultimately to be available and sold on mass/public basis at retail stores to indicate that the software has met a defined quality level and is ready for mass retail distribution. RTM could also mean in other contexts that the software has been delivered or released to a client or customer for installation or distribution to the related hardware end user computers or machines. The term doesnotdefine the delivery mechanism or volume; it only states that the quality is sufficient for mass distribution. The deliverable from the engineering organization is frequently in the form of a golden master media used for duplication or to produce the image for the web.
General availability(GA) is the marketing stage at which all necessarycommercializationactivities have been completed and a software product is available for purchase, depending, however, on language, region, and electronic vs. media availability.[14]Commercialization activities could include security and compliance tests, as well as localization and worldwide availability. The time between RTM and GA can take from days to months before a generally available release can be declared, due to the time needed to complete all commercialization activities required by GA. At this stage, the software has "gone live".
Release to the Web (RTW) or Web release is a means of software delivery that uses the Internet for distribution. No physical media are produced in this type of release mechanism by the manufacturer. Web releases have become more common as Internet usage has grown.[citation needed]
During its supported lifetime, the software is sometimes subjected to service releases, patches or service packs, sometimes also called "interim releases" or "maintenance releases" (MR). For example, Microsoft released three major service packs for the 32-bit editions of Windows XP and two service packs for the 64-bit editions.[15] Such service releases contain a collection of updates, fixes, and enhancements, delivered in the form of a single installable package. They may also implement new features. Some software is released with the expectation of regular support. Classes of software that generally involve protracted support as the norm include anti-virus suites and massively multiplayer online games. Continuing with the Windows XP example, Microsoft offered paid updates for five more years after the end of extended support, which means that support ended on April 8, 2019.[16]
When software is no longer sold or supported, the product is said to have reached end-of-life, to be discontinued, retired, deprecated, abandoned, or obsolete, but user loyalty may continue its existence for some time, even long after its platform is obsolete—e.g., the Common Desktop Environment[17] and the Sinclair ZX Spectrum.[18]
After the end-of-life date, the developer will usually not implement any new features, fix existing defects, bugs, or vulnerabilities (whether known before that date or not), or provide any support for the product. If the developer wishes, they may release the source code, so that the platform may be maintained by volunteers.
Usage of the "alpha/beta" test terminology originated atIBM.[citation needed]Similar terminologies for IBM's software development were used by people involved with IBM from at least the 1950s (and probably earlier). "A" test was theverificationof a new product before the public announcement. The "B" test was the verification before releasing the product to be manufactured. The "C" test was the final test before the general availability of the product. As software became a significant part of IBM's offerings, the alpha test terminology was used to denote the pre-announcement test and the beta test was used to show product readiness for general availability. Martin Belsky, a manager on some of IBM's earlier software projects claimed to have invented the terminology. IBM dropped the alpha/beta terminology during the 1960s, but by then it had received fairly wide notice. The usage of "beta test" to refer to testing done by customers was not done in IBM. Rather, IBM used the term "field test".
Major public betas developed afterward, with early customers having purchased a "pioneer edition" of the WordVision word processor for theIBM PCfor $49.95. In 1984,Stephen Maneswrote that "in a brilliant marketing coup, Bruce and James Program Publishers managed to get people topayfor the privilege of testing the product."[19]In September 2000, aboxed versionofApple'sMac OS X Public Betaoperating system was released.[20]Between September 2005 and May 2006, Microsoft releasedcommunity technology previews (CTPs) forWindows Vista.[21]From 2009 to 2011,Minecraftwas in public beta.
In February 2005,ZDNetpublished an article about the phenomenon of a beta version often staying for years and being used as if it were at the production level.[22]It noted thatGmailandGoogle News, for example, had been in beta for a long time although widely used; Google News left beta in January 2006, followed by Google Apps (now namedGoogle Workspace), including Gmail, in July 2009.[12]Since the introduction ofWindows 8,Microsofthas called pre-release software apreviewrather thanbeta. All pre-release builds released through theWindows Insider Programlaunched in 2014 are termed "Insider Preview builds". "Beta" may also indicate something more like arelease candidate, or as a form of time-limited demo, or marketing technique.[23]
|
https://en.wikipedia.org/wiki/Development_stage
|
In software development, dynamic testing (or dynamic analysis) is examining the runtime response from a software system to particular input (a test case).
Tests can be run manually or via automation.
Unit testing, integration testing, system testing and acceptance testing are forms of dynamic testing.
In contrast to static testing, the software must be runnable.
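As a minimal illustration (the `add` function here is a made-up example, tested with Python's standard `unittest` module), a dynamic test executes the code under test and compares its runtime output with an expected value:

```python
import unittest

def add(a, b):
    # Hypothetical unit under test: for dynamic testing, the code must be runnable.
    return a + b

class AddDynamicTest(unittest.TestCase):
    def test_add_returns_sum(self):
        # Supply a particular input and examine the runtime response.
        self.assertEqual(add(2, 3), 5)

if __name__ == "__main__":
    unittest.main()
```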
Advocates for dynamic testing cite that it can help identify weak areas in a runtime environment, that it supports application analysis even when the tester cannot access the source code, that it can identify vulnerabilities that are difficult to find via static testing, and that it can verify the correctness of static testing results.
However, critics of dynamic testing note that automated testing tools may give a false sense of security, that they can generate false positives and negatives, and that dynamic testing makes bugs more expensive to fix, as tracking them down can be more difficult and take longer than needed.
|
https://en.wikipedia.org/wiki/Dynamic_testing
|
An engineering verification test (EVT) is performed on first engineering prototypes, to ensure that the basic unit performs to design goals and specifications.[1] Verification ensures that a design meets requirements and specifications, while validation ensures that the created entity meets user needs and objectives.[2]
Tests may include:
Identifying design problems and solving them as early in the design cycle as possible is a key to keeping projects on time and within budget. Too often, product design and performance problems are not detected until late in the product development cycle, when the product is ready to be shipped.[3]
In the prototyping stage, engineers create actual working samples of the product they plan to produce. Engineering verification testing (EVT) is used on prototypes to verify that the design meets pre-determined specifications and design goals. This valuable information is used to validate the design as is, or identify areas that need to be modified.
Design Verification Test (DVT) is an intensive testing program which is performed to deliver objective, comprehensive testing verifying all product specifications, interface standards, Original Equipment Manufacturer (OEM) requirements, and diagnostic commands. It consists of the following areas of testing:
After prototyping, the product is moved to the next phase of the design cycle: design refinement. Engineers revise and improve the design to meet performance and design requirements and specifications.
|
https://en.wikipedia.org/wiki/Engineering_validation_test
|
Gray-box testing (International English spelling: grey-box testing) is a combination of white-box testing and black-box testing. The aim of this testing is to search for the defects, if any, due to improper structure or improper usage of applications.[1][2]
A black-box tester is unaware of the internal structure of the application to be tested, while a white-box tester has access to the internal structure of the application. A gray-box tester partially knows the internal structure, which includes access to the documentation of internal data structures as well as the algorithms used.[3]
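As a sketch of the idea (the `BoundedCache` class below is hypothetical, not from the sources), a gray-box test drives the public interface in black-box style while its assertions are informed by documented knowledge of the internal data structure—here, that entries are kept in insertion order and the oldest is evicted when the cache is full:

```python
import unittest
from collections import OrderedDict

class BoundedCache:
    """Hypothetical class under test: holds at most max_size entries."""
    def __init__(self, max_size):
        self.max_size = max_size
        self._entries = OrderedDict()  # internal structure known from documentation

    def put(self, key, value):
        self._entries[key] = value
        if len(self._entries) > self.max_size:
            self._entries.popitem(last=False)  # evict the oldest entry

    def get(self, key):
        return self._entries.get(key)

class GrayBoxCacheTest(unittest.TestCase):
    def test_oldest_entry_is_evicted(self):
        cache = BoundedCache(max_size=2)
        cache.put("a", 1)
        cache.put("b", 2)
        cache.put("c", 3)  # black-box: only the public API is exercised
        # gray-box: the expected result is derived from the documented internals
        self.assertEqual(list(cache._entries), ["b", "c"])
        self.assertIsNone(cache.get("a"))

if __name__ == "__main__":
    unittest.main()
```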
Gray-box testers require both high-level and detailed documents describing the application, which they collect in order to define test cases.[4]
Gray-box testing is beneficial because it takes the straightforward technique of black-box testing and combines it with the code-targeted techniques of white-box testing.
Gray-box testing is based on requirement test case generation because it presents all the conditions before the program is tested, by using the assertion method. A requirement specification language is used to make it easy to understand the requirements and verify their correctness.[5]
Object-oriented software consists primarily of objects, where objects are single indivisible units having executable code and/or data. Some assumptions needed for the application of gray-box testing are stated below.
Cem Kaner defines "gray-box testing as involving inputs and outputs, but test design is educated by information about the code or the program operation of a kind that would normally be out of view of the tester".[9] Gray-box testing techniques are:
The distributed nature of Web services allows gray-box testing to detect defects within a service-oriented architecture (SOA). White-box testing is not well suited to Web services because it deals directly with internal structures. White-box testing can be used for state-of-the-art methods, for example message mutation, which generates automatic tests for large arrays to help cover exception-handling states and flows without source code or binaries. Such a strategy is useful to push gray-box testing nearer to the outcomes of white-box testing.
|
https://en.wikipedia.org/wiki/Grey_box_testing
|
Test-driven development (TDD) is a way of writing code that involves writing an automated unit-level test case that fails, then writing just enough code to make the test pass, then refactoring both the test code and the production code, then repeating with another new test case.
Alternative approaches to writing automated tests are to write all of the production code before starting on the test code, or to write all of the test code before starting on the production code. With TDD, both are written together, thereby shortening debugging time.[1]
TDD is related to the test-first programming concepts of extreme programming, begun in 1999,[2] but more recently has created more general interest in its own right.[3]
Programmers also apply the concept to improving and debugging legacy code developed with older techniques.[4]
Software engineer Kent Beck, who is credited with having developed or "rediscovered"[5] the technique, stated in 2003 that TDD encourages simple designs and inspires confidence.[6]
The original description of TDD was in an ancient book about programming. It said you take the input tape, manually type in the output tape you expect, then program until the actual output tape matches the expected output. After I'd written the first xUnit framework in Smalltalk I remembered reading this and tried it out. That was the origin of TDD for me. When describing TDD to older programmers, I often hear, "Of course. How else could you program?" Therefore I refer to my role as "rediscovering" TDD.
The TDD steps vary somewhat by author in count and description, but are generally as follows. These are based on the book Test-Driven Development by Example,[6] and Kent Beck's Canon TDD article.[8]
Each test should be small, and commits should be made often. If new code fails some tests, the programmer can undo or revert rather than debug excessively.
When using external libraries, it is important not to write tests that are so small as to effectively test merely the library itself,[3] unless there is some reason to believe that the library is buggy or not feature-rich enough to serve all the needs of the software under development.
TDD has been adopted outside of software development, in both product and service teams, as test-driven work.[9] For testing to be successful, it needs to be practiced at the micro and macro levels. Every method in a class, every input data value, log message, and error code, amongst other data points, need to be tested.[10] Similar to TDD, non-software teams develop quality control (QC) checks (usually manual tests rather than automated tests) for each aspect of the work prior to commencing. These QC checks are then used to inform the design and validate the associated outcomes. The six steps of the TDD sequence are applied with minor semantic changes:
There are various aspects to using test-driven development, for example the principles of "keep it simple, stupid" (KISS) and "You aren't gonna need it" (YAGNI). By focusing on writing only the code necessary to pass tests, designs can often be cleaner and clearer than is achieved by other methods.[6] In Test-Driven Development by Example, Kent Beck also suggests the principle "Fake it till you make it".
To achieve some advanced design concept such as a design pattern, tests are written that generate that design. The code may remain simpler than the target pattern, but still pass all required tests. This can be unsettling at first but it allows the developer to focus only on what is important.
Writing the tests first: The tests should be written before the functionality that is to be tested. This has been claimed to have many benefits. It helps ensure that the application is written for testability, as the developers must consider how to test the application from the outset rather than adding it later. It also ensures that tests for every feature get written. Additionally, writing the tests first leads to a deeper and earlier understanding of the product requirements, ensures the effectiveness of the test code, and maintains a continual focus on software quality.[11] When writing feature-first code, there is a tendency by developers and organizations to push the developer on to the next feature, even neglecting testing entirely. The first TDD test might not even compile at first, because the classes and methods it requires may not yet exist. Nevertheless, that first test functions as the beginning of an executable specification.[12]
Each test case fails initially: This ensures that the test really works and can catch an error. Once this is shown, the underlying functionality can be implemented. This has led to the "test-driven development mantra", which is "red/green/refactor", where red means fail and green means pass. Test-driven development constantly repeats the steps of adding test cases that fail, passing them, and refactoring. Receiving the expected test results at each stage reinforces the developer's mental model of the code, boosts confidence and increases productivity.
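A minimal sketch of one such cycle (the `fizzbuzz` function and its tests are invented for illustration, using Python's `unittest`): the failing test is written first, and only then is just enough production code added to make it pass, after which both test and production code may be refactored:

```python
import unittest

def fizzbuzz(n):
    # Green step: just enough code to satisfy the tests written so far.
    if n % 3 == 0:
        return "Fizz"
    return str(n)

class FizzbuzzTest(unittest.TestCase):
    def test_returns_number_as_string(self):
        self.assertEqual(fizzbuzz(1), "1")

    def test_multiple_of_three_returns_fizz(self):
        # Red step: this case was added first and failed until the
        # "n % 3" branch above was implemented.
        self.assertEqual(fizzbuzz(3), "Fizz")

if __name__ == "__main__":
    unittest.main()
```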
Test code needs access to the code it is testing, but testing should not compromise normal design goals such as information hiding, encapsulation and the separation of concerns. Therefore, unit test code is usually located in the same project or module as the code being tested.
In object-oriented design this still does not provide access to private data and methods. Therefore, extra work may be necessary for unit tests. In Java and other languages, a developer can use reflection to access private fields and methods.[13] Alternatively, an inner class can be used to hold the unit tests so they have visibility of the enclosing class's members and attributes. In the .NET Framework and some other programming languages, partial classes may be used to expose private methods and data for the tests to access.
It is important that such testing hacks do not remain in the production code. In C and other languages, compiler directives such as #if DEBUG ... #endif can be placed around such additional classes and indeed all other test-related code to prevent them being compiled into the released code. This means the released code is not exactly the same as what was unit tested. The regular running of fewer but more comprehensive, end-to-end, integration tests on the final release build can ensure (among other things) that no production code exists that subtly relies on aspects of the test harness.
There is some debate among practitioners of TDD, documented in their blogs and other writings, as to whether it is wise to test private methods and data anyway. Some argue that private members are a mere implementation detail that may change, and should be allowed to do so without breaking numbers of tests. Thus it should be sufficient to test any class through its public interface or through its subclass interface, which some languages call the "protected" interface.[14] Others say that crucial aspects of functionality may be implemented in private methods and testing them directly offers the advantage of smaller and more direct unit tests.[15][16]
Unit tests are so named because they each test one unit of code. A complex module may have a thousand unit tests and a simple module may have only ten. The unit tests used for TDD should never cross process boundaries in a program, let alone network connections. Doing so introduces delays that make tests run slowly and discourage developers from running the whole suite. Introducing dependencies on external modules or data also turns unit tests into integration tests. If one module misbehaves in a chain of interrelated modules, it is not so immediately clear where to look for the cause of the failure.
When code under development relies on a database, a web service, or any other external process or service, enforcing a unit-testable separation is also an opportunity and a driving force to design more modular, more testable and more reusable code.[17]Two steps are necessary:
Fake and mock object methods that return data, ostensibly from a data store or user, can help the test process by always returning the same, realistic data that tests can rely upon. They can also be set into predefined fault modes so that error-handling routines can be developed and reliably tested. In a fault mode, a method may return an invalid, incomplete or null response, or may throw an exception. Fake services other than data stores may also be useful in TDD: a fake encryption service may not, in fact, encrypt the data passed; a fake random number service may always return 1. Fake or mock implementations are examples of dependency injection.
A test double is a test-specific capability that substitutes for a system capability, typically a class or function, that the unit under test (UUT) depends on. There are two times at which test doubles can be introduced into a system: link and execution. Link-time substitution is when the test double is compiled into the load module, which is executed to validate testing. This approach is typically used when running in an environment other than the target environment that requires doubles for the hardware-level code for compilation. The alternative to linker substitution is run-time substitution, in which the real functionality is replaced during the execution of a test case. This substitution is typically done through the reassignment of known function pointers or object replacement.
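A short sketch of run-time substitution with test doubles (the `report` function and its weather-client dependency are invented for illustration, using Python's `unittest.mock`): a stub returns the same canned reading every time, and the same kind of double is put into a predefined fault mode so the error-handling path can be exercised deterministically:

```python
import unittest
from unittest import mock

def report(client):
    # Code under test: depends on an external service through the injected client.
    try:
        return f"{client.current_temperature()} degrees"
    except ConnectionError:
        return "temperature unavailable"

class ReportTest(unittest.TestCase):
    def test_uses_canned_reading(self):
        fake_client = mock.Mock()
        fake_client.current_temperature.return_value = 21  # always the same, realistic data
        self.assertEqual(report(fake_client), "21 degrees")

    def test_fault_mode_exercises_error_handling(self):
        failing_client = mock.Mock()
        failing_client.current_temperature.side_effect = ConnectionError  # predefined fault mode
        self.assertEqual(report(failing_client), "temperature unavailable")

if __name__ == "__main__":
    unittest.main()
```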
Test doubles are of a number of different types and varying complexities:
A corollary of such dependency injection is that the actual database or other external-access code is never tested by the TDD process itself. To avoid errors that may arise from this, other tests are needed that instantiate the test-driven code with the "real" implementations of the interfaces discussed above. These are integration tests and are quite separate from the TDD unit tests. There are fewer of them, and they must be run less often than the unit tests. They can nonetheless be implemented using the same testing framework.
Integration tests that alter any persistent store or database should always be designed carefully with consideration of the initial and final state of the files or database, even if any test fails. This is often achieved using some combination of the following techniques:
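A minimal sketch of one such arrangement (using Python's built-in `sqlite3` module with an in-memory database; the table and test are invented for illustration): a known initial state is created in `setUp` and the store is discarded in `tearDown`, so it is left clean even when a test fails:

```python
import sqlite3
import unittest

class OrdersIntegrationTest(unittest.TestCase):
    def setUp(self):
        # Known initial state: a fresh in-memory database for every test.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")

    def tearDown(self):
        # Final state: the connection (and the in-memory store) is discarded
        # whether or not the test passed.
        self.conn.close()

    def test_insert_and_read_back(self):
        self.conn.execute("INSERT INTO orders (total) VALUES (?)", (9.99,))
        total = self.conn.execute("SELECT total FROM orders").fetchone()[0]
        self.assertEqual(total, 9.99)

if __name__ == "__main__":
    unittest.main()
```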
For TDD, a unit is most commonly defined as a class, or a group of related functions often called a module. Keeping units relatively small is claimed to provide critical benefits, including:
Advanced practices of test-driven development can lead to acceptance test–driven development (ATDD) and specification by example, where the criteria specified by the customer are automated into acceptance tests, which then drive the traditional unit test-driven development (UTDD) process.[18] This process ensures the customer has an automated mechanism to decide whether the software meets their requirements. With ATDD, the development team now has a specific target to satisfy – the acceptance tests – which keeps them continuously focused on what the customer really wants from each user story.
Effective layout of a test case ensures all required actions are completed, improves the readability of the test case, and smooths the flow of execution. Consistent structure helps in building a self-documenting test case. A commonly applied structure for test cases has (1) setup, (2) execution, (3) validation, and (4) cleanup.
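Those four phases map naturally onto a single test method; a minimal sketch (the temporary-file scenario is invented for illustration, using Python's `unittest`):

```python
import os
import tempfile
import unittest

class FourPhaseLayoutTest(unittest.TestCase):
    def test_write_then_read(self):
        # (1) setup: create the fixture the test needs
        handle, path = tempfile.mkstemp()
        os.close(handle)
        try:
            # (2) execution: exercise the behaviour under test
            with open(path, "w") as f:
                f.write("hello")
            # (3) validation: check the observable result
            with open(path) as f:
                self.assertEqual(f.read(), "hello")
        finally:
            # (4) cleanup: leave no residue behind, even on failure
            os.remove(path)

if __name__ == "__main__":
    unittest.main()
```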
Some best practices that an individual could follow would be to separate common set-up and tear-down logic into test support services utilized by the appropriate test cases, to keep each test oracle focused on only the results necessary to validate its test, and to design time-related tests to allow tolerance for execution in non-real time operating systems. The common practice of allowing a 5-10 percent margin for late execution reduces the potential number of false negatives in test execution. It is also suggested to treat test code with the same respect as production code. Test code must work correctly for both positive and negative cases, last a long time, and be readable and maintainable. Teams can get together and review tests and test practices to share effective techniques and catch bad habits.[19]
Test-driven development is related to, but different from, acceptance test–driven development (ATDD).[20] TDD is primarily a developer's tool to help create a well-written unit of code (function, class, or module) that correctly performs a set of operations. ATDD is a communication tool between the customer, developer, and tester to ensure that the requirements are well-defined. TDD requires test automation. ATDD does not, although automation helps with regression testing. Tests used in TDD can often be derived from ATDD tests, since the code units implement some portion of a requirement. ATDD tests should be readable by the customer. TDD tests do not need to be.
BDD (behavior-driven development) combines practices from TDD and from ATDD.[21] It includes the practice of writing tests first, but focuses on tests which describe behavior, rather than tests which test a unit of implementation. Tools such as JBehave, Cucumber, Mspec and Specflow provide syntaxes which allow product owners, developers and test engineers to define together the behaviors which can then be translated into automated tests.
There are many testing frameworks and tools that are useful in TDD.
Developers may use computer-assisted testing frameworks, commonly collectively named xUnit (which are derived from SUnit, created in 1998), to create and automatically run the test cases. xUnit frameworks provide assertion-style test validation capabilities and result reporting. These capabilities are critical for automation as they move the burden of execution validation from an independent post-processing activity to one that is included in the test execution. The execution framework provided by these test frameworks allows for the automatic execution of all system test cases or various subsets along with other features.[22]
Testing frameworks may accept unit test output in the language-agnostic Test Anything Protocol, created in 1987.
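For illustration, Test Anything Protocol output is plain text: a plan line followed by one ok/not ok line per test (the test descriptions below are made up):

```
1..3
ok 1 - parses an empty document
ok 2 - parses a single record
not ok 3 - rejects a malformed record
```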
Exercising TDD on large, challenging systems requires a modular architecture, well-defined components with published interfaces, and disciplined system layering with maximization of platform independence. These proven practices yield increased testability and facilitate the application of build and test automation.[11]
Complex systems require an architecture that meets a range of requirements. A key subset of these requirements includes support for the complete and effective testing of the system. Effective modular design yields components that share traits essential for effective TDD.
A key technique for building effective modular architecture is Scenario Modeling where a set of sequence charts is constructed, each one focusing on a single system-level execution scenario. The Scenario Model provides an excellent vehicle for creating the strategy of interactions between components in response to a specific stimulus. Each of these Scenario Models serves as a rich set of requirements for the services or functions that a component must provide, and it also dictates the order in which these components and services interact together. Scenario modeling can greatly facilitate the construction of TDD tests for a complex system.[11]
In a larger system, the impact of poor component quality is magnified by the complexity of interactions. This magnification makes the benefits of TDD accrue even faster in the context of larger projects. However, the complexity of the total population of tests can become a problem in itself, eroding potential gains. It sounds simple, but a key initial step is to recognize that test code is also important software and should be produced and maintained with the same rigor as the production code.
Creating and managing the architecture of test software within a complex system is just as important as the core product architecture. Test drivers interact with the UUT, test doubles and the unit test framework.[11]
Test Driven Development (TDD) is a software development approach where tests are written before the actual code. It offers several advantages:
However, TDD is not without its drawbacks:
A 2005 study found that using TDD meant writing more tests and, in turn, programmers who wrote more tests tended to be more productive.[25] Hypotheses relating to code quality and a more direct correlation between TDD and productivity were inconclusive.[26]
Programmers using pure TDD on new ("greenfield") projects reported they only rarely felt the need to invoke a debugger. Used in conjunction with a version control system, when tests fail unexpectedly, reverting the code to the last version that passed all tests may often be more productive than debugging.[27]
Test-driven development offers more than just simple validation of correctness, but can also drive the design of a program.[28] By focusing on the test cases first, one must imagine how the functionality is used by clients (in the first case, the test cases). So, the programmer is concerned with the interface before the implementation. This benefit is complementary to design by contract as it approaches code through test cases rather than through mathematical assertions or preconceptions.
Test-driven development offers the ability to take small steps when required. It allows a programmer to focus on the task at hand as the first goal is to make the test pass. Exceptional cases and error handling are not considered initially, and tests to create these extraneous circumstances are implemented separately. Test-driven development ensures in this way that all written code is covered by at least one test. This gives the programming team, and subsequent users, a greater level of confidence in the code.
While it is true that more code is required with TDD than without TDD because of the unit test code, the total code implementation time could be shorter based on a model by Müller and Padberg.[29] Large numbers of tests help to limit the number of defects in the code. The early and frequent nature of the testing helps to catch defects early in the development cycle, preventing them from becoming endemic and expensive problems. Eliminating defects early in the process usually avoids lengthy and tedious debugging later in the project.
TDD can lead to more modularized, flexible, and extensible code. This effect often comes about because the methodology requires that the developers think of the software in terms of small units that can be written and tested independently and integrated together later. This leads to smaller, more focused classes, looser coupling, and cleaner interfaces. The use of the mock object design pattern also contributes to the overall modularization of the code because this pattern requires that the code be written so that modules can be switched easily between mock versions for unit testing and "real" versions for deployment.
Because no more code is written than necessary to pass a failing test case, automated tests tend to cover every code path. For example, for a TDD developer to add an else branch to an existing if statement, the developer would first have to write a failing test case that motivates the branch. As a result, the automated tests resulting from TDD tend to be very thorough: they detect any unexpected changes in the code's behaviour. This detects problems that can arise where a change later in the development cycle unexpectedly alters other functionality.
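A sketch of that workflow (the `classify` function is invented for illustration): the second test is written first and fails, and only then is the else branch added to make it pass:

```python
import unittest

def classify(n):
    if n >= 0:
        return "non-negative"
    else:
        # This branch exists only because the failing test below motivated it.
        return "negative"

class ClassifyTest(unittest.TestCase):
    def test_non_negative(self):
        self.assertEqual(classify(5), "non-negative")

    def test_negative(self):
        # Written first as a failing test; it drives the else branch into existence.
        self.assertEqual(classify(-5), "negative")

if __name__ == "__main__":
    unittest.main()
```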
Madeyski[30] provided empirical evidence (via a series of laboratory experiments with over 200 developers) regarding the superiority of the TDD practice over the traditional test-last or testing-for-correctness approach, with respect to lower coupling between objects (CBO). The mean effect size represents a medium (but close to large) effect on the basis of a meta-analysis of the performed experiments, which is a substantial finding. It suggests better modularization (i.e., a more modular design) and easier reuse and testing of the developed software products due to the TDD programming practice.[30] Madeyski also measured the effect of the TDD practice on unit tests using branch coverage (BC) and mutation score indicator (MSI),[31][32][33] which are indicators of the thoroughness and the fault detection effectiveness of unit tests, respectively. The effect size of TDD on branch coverage was medium in size and is therefore considered a substantive effect.[30] These findings have been subsequently confirmed by further, smaller experimental evaluations of TDD.[34][35][36][37]
Test-driven development does not perform sufficient testing in situations where full functional tests are required to determine success or failure, due to extensive use of unit tests.[38] Examples of these are user interfaces, programs that work with databases, and some that depend on specific network configurations. TDD encourages developers to put the minimum amount of code into such modules and to maximize the logic that is in testable library code, using fakes and mocks to represent the outside world.[39]
Management support is essential. Without the entire organization believing that test-driven development is going to improve the product, management may feel that time spent writing tests is wasted.[40]
Unit tests created in a test-driven development environment are typically created by the developer who is writing the code being tested. Therefore, the tests may share blind spots with the code: if, for example, a developer does not realize that certain input parameters must be checked, most likely neither the test nor the code will verify those parameters. Another example: if the developer misinterprets the requirements for the module they are developing, the code and the unit tests they write will both be wrong in the same way. Therefore, the tests will pass, giving a false sense of correctness.
A high number of passing unit tests may bring a false sense of security, resulting in fewer additional software testing activities, such as integration testing and compliance testing.
Tests become part of the maintenance overhead of a project. Badly written tests, for example ones that include hard-coded error strings, are themselves prone to failure, and they are expensive to maintain. This is especially the case with fragile tests.[41] There is a risk that tests that regularly generate false failures will be ignored, so that when a real failure occurs, it may not be detected. It is possible to write tests for low and easy maintenance, for example by the reuse of error strings, and this should be a goal during the code refactoring phase described above.
Writing and maintaining an excessive number of tests costs time. Also, more-flexible modules (with limited tests) might accept new requirements without the need for changing the tests. For those reasons, testing for only extreme conditions, or a small sample of data, can be easier to adjust than a set of highly detailed tests.
The level of coverage and testing detail achieved during repeated TDD cycles cannot easily be re-created at a later date. Therefore, these original, or early, tests become increasingly precious as time goes by. The tactic is to fix it early. Also, if a poor architecture, a poor design, or a poor testing strategy leads to a late change that makes dozens of existing tests fail, then it is important that they are individually fixed. Merely deleting, disabling or rashly altering them can lead to undetectable holes in the test coverage.
The first TDD Conference was held in July 2021.[42] Conferences were recorded on YouTube.[43]
|
https://en.wikipedia.org/wiki/Test-driven_development
|
White-box testing (also known as clear box testing, glass box testing, transparent box testing, and structural testing) is a method of software testing that tests internal structures or workings of an application, as opposed to its functionality (i.e. black-box testing). In white-box testing, an internal perspective of the system is used to design test cases. The tester chooses inputs to exercise paths through the code and determine the expected outputs. This is analogous to testing nodes in a circuit, e.g. in-circuit testing (ICT).
White-box testing can be applied at the unit, integration and system levels of the software testing process. Although traditional testers tended to think of white-box testing as being done at the unit level, it is used for integration and system testing more frequently today. It can test paths within a unit, paths between units during integration, and between subsystems during a system-level test. Though this method of test design can uncover many errors or problems, it has the potential to miss unimplemented parts of the specification or missing requirements. Where white-box testing is design-driven,[1] that is, driven exclusively by agreed specifications of how each component of software is required to behave (as in DO-178C and ISO 26262 processes), white-box test techniques can accomplish assessment for unimplemented or missing requirements.
White-box test design techniques include the following code coverage criteria:
White-box testing is a method of testing the application at the level of the source code. These test cases are derived through the use of the design techniques mentioned above: control flow testing, data flow testing, branch testing, path testing, statement coverage and decision coverage as well as modified condition/decision coverage. White-box testing is the use of these techniques as guidelines to create an error-free environment by examining all code. These white-box testing techniques are the building blocks of white-box testing, whose essence is the careful testing of the application at the source code level to reduce hidden errors later on.[2] These different techniques exercise every visible path of the source code to minimize errors and create an error-free environment. The whole point of white-box testing is the ability to know which line of the code is being executed and being able to identify what the correct output should be.[2]
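As a small illustration of choosing test cases from the source (the `grade` function below is invented for the example), the tester can see both branches in the code and writes one case for each, plus a boundary value, so that every statement and branch is executed:

```python
import unittest

def grade(score):
    # Two branches are visible in the source; white-box tests must exercise both.
    if score >= 50:
        return "pass"
    return "fail"

class GradeWhiteBoxTest(unittest.TestCase):
    def test_pass_branch(self):
        self.assertEqual(grade(75), "pass")

    def test_boundary_value(self):
        self.assertEqual(grade(50), "pass")  # derived from the ">=" in the source

    def test_fail_branch(self):
        self.assertEqual(grade(25), "fail")

if __name__ == "__main__":
    unittest.main()
```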
White-box testing's basic procedures require the tester to have an in-depth knowledge of the source code being tested. The programmer must have a deep understanding of the application to know what kinds of test cases to create so that every visible path is exercised for testing. Once the source code is understood then it can be analyzed for test cases to be created. The following are the three basic steps that white-box testing takes in order to create test cases:
A more modern view is that the dichotomy between white-box testing and black-box testing has blurred and is becoming less relevant. Whereas "white-box" originally meant using the source code, and black-box meant using requirements, tests are now derived from many documents at various levels of abstraction. The real point is that tests are usually designed from an abstract structure such as the input space, a graph, or logical predicates, and the question is what level of abstraction we derive that abstract structure from.[4] That can be the source code, requirements, input space descriptions, or one of dozens of types of design models. Therefore, the "white-box / black-box" distinction is less important and the terms are less relevant.[citation needed]
In penetration testing, white-box testing refers to a method where a white hat hacker has full knowledge of the system being attacked.[6] The goal of a white-box penetration test is to simulate a malicious insider who has knowledge of and possibly basic credentials for the target system. For such a penetration test, administrative credentials are typically provided in order to analyse how or which attacks can impact high-privileged accounts.[7] Source code can be made available to be used as a reference for the tester. When the code is a target of its own, this is not (only) a penetration test but a source code security audit (or security review).[8]
|
https://en.wikipedia.org/wiki/White_box_testing
|
In manufacturing, functional testing (FCT) is performed during the last phase of the production line.[1] It is often referred to as a final quality control test, done to ensure that the product meets its specifications.
An FCT entails the emulation or simulation of the environment in which a product is expected to operate, in order to check and correct any issues with its functionality. The environment involved in an FCT consists of any device that communicates with the device under test (DUT), the power supply of the DUT, and any loads needed to make the DUT function correctly.
Functional tests are performed automatically by production-line operators using test software. To accomplish this, the software communicates with external programmable instruments such as I/O boards, digital multimeters, and communication ports. In conjunction with the test fixture, the software that interfaces with the DUT is what makes it possible for an FCT to be performed.
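A rough sketch of such an automated test sequence (every interface here—`power_supply`, `dut_serial`, `multimeter`, and the `SELFTEST` command—is hypothetical, standing in for whatever drivers and commands a real fixture provides):

```python
def run_functional_test(power_supply, dut_serial, multimeter, limits):
    """Hypothetical FCT sequence: power the DUT, exercise it, and check measurements."""
    results = {}

    power_supply.set_voltage(5.0)           # energise the device under test
    power_supply.output_on()

    dut_serial.write(b"SELFTEST\n")         # ask the DUT to run its built-in routine
    results["selftest"] = dut_serial.readline().strip() == b"OK"

    current = multimeter.measure_current()  # check supply current against limits
    results["supply_current_ok"] = limits["min_current"] <= current <= limits["max_current"]

    power_supply.output_off()
    results["pass"] = all(results.values()) # overall verdict for this DUT
    return results
```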
|
https://en.wikipedia.org/wiki/Functional_testing_(manufacturing)
|
Forensic engineering has been defined as "the investigation of failures—ranging from serviceability to catastrophic—which may lead to legal activity, including both civil and criminal".[1] The forensic engineering field is very broad in terms of the many disciplines that it covers; investigations that use forensic engineering include cases of environmental damage to structures, system failures of machines, explosions, electrical failures, fire points of origin, vehicle failures and many more.[2][1]
It includes the investigation of materials, products, structures or components that fail or do not operate or function as intended, causing personal injury, damage to property or economic loss. The consequences of failure may give rise to action under either criminal or civil law, including but not limited to health and safety legislation, the laws of contract and/or product liability and the laws of tort. The field also deals with retracing processes and procedures leading to accidents in operation of vehicles or machinery. Generally, the purpose of a forensic engineering investigation is to locate the cause or causes of failure with a view to improving the performance or life of a component, or to assist a court in determining the facts of an accident. It can also involve investigation of intellectual property claims, especially patents. In the US, forensic engineers require a professional engineering license from each state.
As the field of engineering has evolved over time, so has the field of forensic engineering. Early examples include investigation of bridge failures such as the Tay rail bridge disaster of 1879 and the Dee bridge disaster of 1847. Many early rail accidents prompted the invention of tensile testing of samples and fractography of failed components.[3]
Vital to the field of forensic engineering is the process of investigating and collecting data related to the materials, products, structures or components that failed.[2] This involves inspections, collecting evidence, measurements, developing models, obtaining exemplar products, and performing experiments. Often, testing and measurements are conducted in an independent testing laboratory or other reputable unbiased laboratory.
When investigating a case, a forensic engineer follows a series of standard investigative steps. The first, on arriving at the scene, is to establish safety: making sure that all hazards have been dealt with and are safe to handle and analyze.[4] The next step is an initial incident appraisal, done before any analysis, in which the engineer takes a quick view of the situation at hand.[4] The third step is to plan how the investigation will proceed and what resources will be needed to perform the analysis accurately.[4] Next comes establishing the terms of reference, when the forensic engineer consults with the client about what they want done in the investigation.[4] The next step is to assemble the investigative team: once there is a plan for the investigation, a team of experts in the relevant fields is formed to conduct the analysis.[4] Lastly, the investigation begins and the analysis is carried out.
There are two main types of analysis done in forensic engineering: root cause analysis and failure analysis. Root cause analysis looks at the system as a whole and at what led to the system failing, and is applied to large-scale objects, for example a building collapse.[2] Failure analysis is the analysis of one part in the system that failed to operate; an example would be a car failure causing an accident.[2] These two types of analysis are the initial assessments made when forensic engineering investigators start their investigation.[2]
Failure mode and effects analysis (FMEA) and fault tree analysis methods also examine product or process failure in a structured and systematic way, in the general context of safety engineering. However, all such techniques rely on accurate reporting of failure rates and precise identification of the failure modes involved.
There is some common ground between forensic science and forensic engineering, such as scene of crime and scene of accident analysis, integrity of the evidence and court appearances. Both disciplines make extensive use of optical and scanning electron microscopes, for example. They also share common use of spectroscopy (infrared, ultraviolet, and nuclear magnetic resonance) to examine critical evidence. Radiography using X-rays (such as X-ray computed tomography) or neutrons is also very useful in examining thick products for their internal defects before destructive examination is attempted. Often, however, a simple hand lens may reveal the cause of a particular problem.
Trace evidence is sometimes an important factor in reconstructing the sequence of events in an accident. For example, tire burn marks on a road surface can enable vehicle speeds to be estimated, when the brakes were applied and so on. Ladder feet often leave a trace of movement of the ladder during a slip and may show how the accident occurred. When a product fails for no obvious reason, SEM and energy-dispersive X-ray spectroscopy (EDX) performed in the microscope can reveal the presence of aggressive chemicals that have left traces on the fracture or adjacent surfaces. Thus an acetal resin water pipe joint suddenly failed and caused substantial damage to a building in which it was situated. Analysis of the joint showed traces of chlorine, indicating a stress corrosion cracking failure mode. The failed fuel pipe junction mentioned below showed traces of sulfur on the fracture surface from the sulfuric acid, which had initiated the crack.
Extracting physical evidence from digital photography is a major technique used in forensic accident reconstruction. Camera matching, photogrammetry, and photo rectification techniques are used to create three-dimensional and top-down views from the two-dimensional photos typically taken at an accident scene. Overlooked or undocumented evidence for accident reconstruction can be retrieved and quantified as long as photographs of such evidence are available. By using photographs of the accident scene including the vehicle, "lost" evidence can be recovered and accurately determined.[5]
Forensic materials engineering involves methods applied to specific materials, such as metals, glasses, ceramics, composites and polymers.
The National Academy of Forensic Engineers (NAFE) was founded in 1982 by Marvin M. Specter, P.E., L.S.; Paul E. Pritzker, P.E.; and William A. Cox Jr., P.E. to identify and bring together professional engineers having qualifications and expertise as practicing forensic engineers, to further their continuing education and to promote high standards of professional ethics and excellence of practice. It seeks to improve the practice, elevate the standards, and advance the cause of forensic engineering. Full membership in the academy is limited to Registered Professional Engineers who are also members of the National Society of Professional Engineers (NSPE). They must also be members in an acceptable grade of a recognized major technical engineering society. NAFE also offers Affiliate grades of membership to those who do not yet qualify for Member grade.[6] Full members are board-certified through the Council of Engineering and Scientific Specialty Boards[7] and earn the title "Diplomate of Forensic Engineering", or "DFE". This is typically used after their designation as Professional Engineer.
The broken fuel pipe shown at left caused a serious accident when diesel fuel poured out from a van onto the road. A following car skidded and the driver was seriously injured when she collided with an oncoming lorry. Scanning electron microscopy (SEM) showed that the nylon connector had fractured by stress corrosion cracking (SCC) due to a small leak of battery acid. Nylon is susceptible to hydrolysis when in contact with sulfuric acid, and only a small leak of acid would have sufficed to start a brittle crack in the injection moulded nylon 6,6 connector by SCC. The crack took about 7 days to grow across the diameter of the tube. The fracture surface showed a mainly brittle surface with striations indicating progressive growth of the crack across the diameter of the pipe. Once the crack had penetrated the inner bore, fuel started leaking onto the road.
The nylon 6,6 had been attacked by acid-catalyzed hydrolysis of its amide linkages, which schematically proceeds as

⋯CO–NH⋯ + H₂O → ⋯COOH + H₂N⋯

with each scission cutting the polymer chain and embrittling the material.
Diesel fuelis especially hazardous on road surfaces because it forms a thin, oily film that cannot be easily seen by drivers. It is much likeblack icein its slipperiness, so skids are common when diesel leaks occur. The insurers of the van driver admitted liability and the injured driver was compensated.
Most manufacturing models will have a forensic component that monitors early failures to improve quality or efficiencies. Insurance companies use forensic engineers to prove liability or nonliability. Most engineering disasters (structural failuressuch as bridge and building collapses) are subject to forensic investigation by engineers experienced in forensic methods of investigation.Rail crashes,aviation accidents, and someautomobile accidentsare investigated by forensic engineers in particular where component failure is suspected. Furthermore, appliances, consumer products, medical devices, structures, industrial machinery, and even simple hand tools such as hammers or chisels can warrant investigations upon incidents causing injury or property damages. The failure ofmedical devicesis oftensafety-criticalto the user, so reporting failures and analysing them is particularly important. The environment of the body is complex, andimplantsmust both survive this environment, and not leach potentially toxic impurities. Problems have been reported withbreast implants,heart valves, andcatheters, for example.
Failures that occur early in the life of a new product are vital information for the manufacturer to improve the product.New product developmentaims to eliminate defects by testing in the factory before launch, but some may occur during its early life. Testing products to simulate their behavior in the external environment is a difficult skill, and may involveaccelerated life testingfor example. The worst kind of defect to occur after launch is asafety-criticaldefect, a defect that can endanger life or limb. Their discovery usually leads to aproduct recallor even complete withdrawal of the product from the market. Product defects often follow thebathtub curve, with high initial failures, a lower rate during regular life, followed by another rise due to wear-out. National standards, such as those ofASTMand theBritish Standards Institute, andInternational Standardscan help the designer in increasing product integrity.
There are many examples of forensic methods used to investigate accidents and disasters, one of the earliest in the modern period being the fall of theDee bridgeatChester,England. It was built usingcast irongirders, each of which was made of three very large castings dovetailed together. Each girder was strengthened bywrought ironbars along the length. It was finished in September 1846, and opened for local traffic after approval by the first Railway Inspector, General Charles Pasley. However, on 24 May 1847, a local train toRuabonfell through the bridge. The accident resulted in five deaths (three passengers, the train guard, and the locomotive fireman) and nine serious injuries. The bridge had been designed byRobert Stephenson, and he was accused of negligence by a localinquest.
Although strong in compression, cast iron was known to be brittle in tension or bending. On the day of the accident, the bridge deck was covered with track ballast to prevent the oak beams supporting the track from catching fire, imposing a heavy extra load on the girders supporting the bridge and probably exacerbating the accident. Stephenson took this precaution because of a recent fire on the Great Western Railway at Uxbridge, London, where Isambard Kingdom Brunel's bridge caught fire and collapsed.
One of the first major inquiries conducted by the newly formed Railway Inspectorate was led by Captain Simmons of the Royal Engineers, and his report suggested that repeated flexing of the girder had weakened it substantially. He examined the broken parts of the main girder and confirmed that it had broken in two places, with the first break occurring at the center. He tested the remaining girders by driving a locomotive across them and found that they deflected by several inches under the moving load. He concluded that the design was flawed and that the wrought iron trusses fixed to the girders did not reinforce them at all, a conclusion also reached by the jury at the inquest. Stephenson's design had depended on the wrought iron trusses to strengthen the final structures, but they were anchored to the cast iron girders themselves and so deformed with any load on the bridge. Others (especially Stephenson) argued that the train had derailed and hit the girder, the impact force causing it to fracture. However, eyewitnesses maintained that the girder broke first, and the fact that the locomotive remained on the track indicated that no derailment had occurred.
Product failures are not widely published in the academic or trade literature, partly because companies do not want to advertise their problems. However, this secrecy denies others the opportunity to learn from failures and improve product designs so as to prevent further accidents.[citation needed]
The journalEngineering Failure Analysis,[8][non-primary source needed]published in affiliation with theEuropean Structural Integrity Society, publishes case studies of a wide range of different products, failing under different circumstances.
A publication dealing with failures of buildings, bridges, and other structures, is theJournal of Performance of Constructed Facilities,[9][non-primary source needed]which is published by theAmerican Society of Civil Engineers, under the umbrella of its Technical Council on Forensic Engineering.[10][non-primary source needed]
TheJournal of the National Academy of Forensic Engineersis a peer-reviewedopen accessjournal that provides a multi-disciplinary examination of the forensic engineering field. Submission is open to NAFE members and the journal's peer review process includes in-person presentation for live feedback prior to a single-blind technical peer review.[11][non-primary source needed]
|
https://en.wikipedia.org/wiki/Forensic_engineering
|
In the context of software or information modeling, ahappy path(sometimes calledhappy flow) is a defaultscenariofeaturing noexceptional or error conditions.[1][2]For example, the happy path for a function validating credit card numbers would be where none of thevalidation rulesraise an error, thus letting execution continue successfully to the end, generating a positive response.
Process steps for a happy path are also used in the context of ause case. In contrast to the happy path, process steps for alternate flow and exception flow may also be documented.[3]
Happy path test is a well-definedtest caseusing known input, which executes without exception and produces an expected output.[4]Happy path testing can show that a system meets itsfunctional requirementsbut it doesn't guarantee a graceful handling of error conditions or aid in finding hiddenbugs.[5][4]
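To make the distinction concrete, the following sketch (in Python, with a hypothetical validator and invented rules, not taken from the cited sources) shows a happy path test alongside a separate test for one exception path:

```python
# Hypothetical 16-digit card validator, used only to illustrate the idea;
# real validation rules are more involved.
def validate_card_number(number: str) -> bool:
    """Return True if the number is 16 digits and passes the Luhn checksum."""
    if len(number) != 16 or not number.isdigit():
        raise ValueError("card number must be exactly 16 digits")
    digits = [int(d) for d in number]
    # Luhn: double every second digit from the right, subtracting 9 if needed.
    for i in range(len(digits) - 2, -1, -2):
        doubled = digits[i] * 2
        digits[i] = doubled - 9 if doubled > 9 else doubled
    return sum(digits) % 10 == 0


def test_happy_path():
    # Known-good input, no exceptional conditions: execution runs to the end
    # and produces the expected positive response.
    assert validate_card_number("4111111111111111") is True


def test_exception_path():
    # The unhappy paths (malformed input) are covered by separate tests.
    try:
        validate_card_number("not-a-card-number")
    except ValueError:
        pass
    else:
        raise AssertionError("expected a ValueError for malformed input")
```

The happy path test exercises only well-formed input; error handling is deliberately left to its own test cases.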
Happy day (or sunny day) scenario and golden path are slang synonyms for happy path.[6]
In use case analysis, there is only one happy path, but there may be any number of additional alternate path scenarios which are all valid optional outcomes. If valid alternatives exist, the happy path is then identified as the default or most likely positive alternative. The analysis may also show one or more exception paths. An exception path is taken as the result of a fault condition. Use cases and the resulting interactions are commonly modeled in graphical languages such as theUnified Modeling Language(UML) orSysML.[7]
There is no agreed name for the opposite of a happy path: such paths may be known as sad paths, bad paths, or exception paths. The term 'unhappy path' is gaining popularity, as it suggests a complete opposite to 'happy path' and retains the same context. However, whereas the happy path runs to the very end of a scenario, an unhappy path is typically shorter and ends prematurely (for example, before the last page of a wizard), and because there are many different ways in which things can go wrong, there is no single 'unhappy path'.[8]
|
https://en.wikipedia.org/wiki/Happy_path
|
Intheoretical computer science, acertifying algorithmis an algorithm that outputs, together with a solution to the problem it solves, a proof that the solution is correct. A certifying algorithm is said to beefficientif the combined runtime of the algorithm and aproof checkeris slower by at most a constant factor than the best known non-certifying algorithm for the same problem.[1]
The proof produced by a certifying algorithm should be in some sense simpler than the algorithm itself, for otherwise any algorithm could be considered certifying (with its output verified by running the same algorithm again). Sometimes this is formalized by requiring that a verification of the proof take less time than the original algorithm, while for other problems (in particular those for which the solution can be found inlinear time) simplicity of the output proof is considered in a less formal sense.[1]For instance, the validity of the output proof may be more apparent to human users than the correctness of the algorithm, or a checker for the proof may be more amenable toformal verification.[1][2]
Implementations of certifying algorithms that also include a checker for the proof generated by the algorithm may be considered to be more reliable than non-certifying algorithms. Whenever the algorithm is run, one of three things happens: it produces a correct output (the desired case), it detects a bug in the algorithm or its implementation (undesired, but generally preferable to continuing without detecting the bug), or both the algorithm and the checker are faulty in a way that masks the bug and prevents it from being detected (undesired, but unlikely, as it depends on the existence of two independent bugs).[1]
Many examples of problems with checkable algorithms come fromgraph theory.
For instance, a classical algorithm for testing whether a graph isbipartitewould simply output a Boolean value: true if the graph is bipartite, false otherwise. In contrast, a certifying algorithm might output a2-coloringof the graph in the case that it is bipartite, or acycleof odd length if it is not. Any graph is bipartite if and only if it can be 2-colored, and non-bipartite if and only if it contains an odd cycle. Both checking whether a 2-coloring is valid and checking whether a given odd-length sequence of vertices is a cycle may be performed more simply than testing bipartiteness.[1]
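A minimal sketch of such a certifying test, assuming a simple undirected graph given as an edge list (the function names and structure are illustrative, not taken from the cited reference): breadth-first search either 2-colors the graph or, on finding a conflicting edge, returns the odd cycle formed by the two tree paths to a common ancestor plus that edge. The accompanying checkers are much simpler than the search itself.

```python
from collections import deque

def certify_bipartite(n, edges):
    """Return ("2-coloring", colors) or ("odd cycle", cycle) for a simple
    undirected graph on vertices 0..n-1. Illustrative sketch, not optimized."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)

    def path_to_root(v, parent):
        path = [v]
        while parent[path[-1]] is not None:
            path.append(parent[path[-1]])
        return path

    color, parent = [None] * n, [None] * n
    for s in range(n):
        if color[s] is not None:
            continue
        color[s] = 0
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if color[v] is None:
                    color[v], parent[v] = 1 - color[u], u
                    queue.append(v)
                elif color[v] == color[u]:
                    # Conflict: the BFS-tree paths from u and v to their lowest
                    # common ancestor, plus the edge (u, v), form an odd cycle.
                    pu, pv = path_to_root(u, parent), path_to_root(v, parent)
                    lca = next(x for x in pv if x in set(pu))
                    cycle = pu[:pu.index(lca) + 1] + pv[:pv.index(lca)][::-1]
                    return "odd cycle", cycle
    return "2-coloring", color

def check_coloring(edges, colors):
    # A valid 2-coloring gives every edge endpoints of different colors.
    return all(colors[u] != colors[v] for u, v in edges)

def check_odd_cycle(edge_set, cycle):
    # An odd cycle has odd length and consecutive (wrapping) vertices adjacent.
    closed = all((cycle[i], cycle[(i + 1) % len(cycle)]) in edge_set
                 for i in range(len(cycle)))
    return len(cycle) % 2 == 1 and closed

kind, cert = certify_bipartite(3, [(0, 1), (1, 2), (2, 0)])   # a triangle
edge_set = {(0, 1), (1, 0), (1, 2), (2, 1), (2, 0), (0, 2)}
assert kind == "odd cycle" and check_odd_cycle(edge_set, cert)
```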
Analogously, it is possible to test whether a givendirected graphisacyclicby a certifying algorithm that outputs either atopological orderor a directed cycle. It is possible to test whether an undirected graph is achordal graphby a certifying algorithm that outputs either an elimination ordering (an ordering of all vertices such that, for every vertex, the neighbors that are later in the ordering form aclique) or a chordless cycle. And it is possible to test whether a graph isplanarby a certifying algorithm that outputs either a planar embedding or aKuratowski subgraph.[1]
The extended Euclidean algorithm for the greatest common divisor of two integers x and y is certifying: it outputs three integers g (the greatest common divisor), a, and b such that ax + by = g. This equation can only hold when g is a multiple of the greatest common divisor, so testing that g is the greatest common divisor may be performed by checking that g divides both x and y and that the equation is correct.[1]
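For example, a compact sketch (illustrative, not drawn from the cited reference) of the certifying extended Euclidean algorithm together with its proof checker:

```python
def extended_gcd(x, y):
    """Return (g, a, b) with a*x + b*y == g and g == gcd(x, y)."""
    if y == 0:
        return x, 1, 0
    g, a, b = extended_gcd(y, x % y)
    # gcd(x, y) == gcd(y, x % y); back-substitute the Bezout coefficients.
    return g, b, a - (x // y) * b

def check_certificate(x, y, g, a, b):
    # g divides both inputs and is an integer combination of them,
    # hence g equals gcd(x, y).
    return x % g == 0 and y % g == 0 and a * x + b * y == g

g, a, b = extended_gcd(252, 105)
assert check_certificate(252, 105, g, a, b)   # g == 21
```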
|
https://en.wikipedia.org/wiki/Certifying_algorithm
|
AFermi problem(orFermi question,Fermi quiz), also known as anorder-of-magnitude problem, is anestimationproblem inphysicsorengineeringeducation, designed to teachdimensional analysisorapproximationof extreme scientific calculations. Fermi problems are usuallyback-of-the-envelope calculations. Fermi problems typically involve making justified guesses about quantities and theirvarianceor lower and upper bounds. In some cases, order-of-magnitude estimates can also be derived usingdimensional analysis. AFermi estimate(ororder-of-magnitude estimate,order estimation) is an estimate of an extreme scientific calculation.
The estimation technique is named after physicistEnrico Fermias he was known for his ability to make good approximate calculations with little or no actual data.
An example isEnrico Fermi's estimate of the strength of theatomic bombthat detonated at theTrinity test, based on the distance traveled by pieces of paper he dropped from his hand during the blast. Fermi's estimate of 10kilotons of TNTwas well within an order of magnitude of the now-accepted value of 21 kilotons.[1][2][3]
Fermi estimates generally work because the estimations of the individual terms are often close to correct, and overestimates and underestimates help cancel each other out. That is, if there is no consistent bias, a Fermi calculation that involves the multiplication of several estimated factors (such as the number of piano tuners in Chicago) will probably be more accurate than might be first supposed.
In detail, multiplying estimates corresponds to adding their logarithms; thus one obtains a sort of Wiener process or random walk on the logarithmic scale, which diffuses as √n (in the number of terms n). In discrete terms, the number of overestimates minus underestimates will have a binomial distribution. In continuous terms, if one makes a Fermi estimate of n steps, with standard deviation σ units on the log scale from the actual value, then the overall estimate will have standard deviation σ√n, since the standard deviation of a sum scales as √n in the number of summands.
For instance, if one makes a 9-step Fermi estimate, at each step overestimating or underestimating the correct number by a factor of 2 (i.e., with a standard deviation of a factor of 2), then after 9 steps the standard error will have grown by a factor of √9 = 3 on the log scale, giving a typical error factor of 2³ = 8. Thus one will expect to be within 1⁄8 to 8 times the correct value – within an order of magnitude, and much less than the worst case of erring by a factor of 2⁹ = 512 (about 2.7 orders of magnitude). If one has a shorter chain or estimates more accurately, the overall estimate will be correspondingly better.
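A short Monte Carlo sketch (with arbitrarily chosen parameters; nothing here comes from the cited sources) illustrates this scaling: each step contributes a log-normally distributed error factor with a log-scale standard deviation of ln 2, and the spread of the combined estimate grows as √n.

```python
import math
import random

def log_spread(n_steps, n_trials=50_000, sigma=math.log(2)):
    """Standard deviation, on the log scale, of a product of n_steps unbiased
    factor estimates, each with a log-scale standard deviation of sigma."""
    totals = [sum(random.gauss(0.0, sigma) for _ in range(n_steps))
              for _ in range(n_trials)]
    mean = sum(totals) / n_trials
    return math.sqrt(sum((t - mean) ** 2 for t in totals) / n_trials)

for n in (1, 4, 9):
    # Expect roughly sigma * sqrt(n): about 0.69, 1.39 and 2.08 (= ln 8) here.
    print(n, round(log_spread(n), 2))
```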
Fermi questions are often extreme in nature, and cannot usually be solved using common mathematical or scientific information.
Example questions given by the official Fermi Competition:[clarification needed]
"If the mass of one teaspoon of water could be converted entirely into energy in the form of heat, what volume of water, initially at room temperature, could it bring to a boil? (litres)."
"How much does the Thames River heat up in going over theFanshawe Dam? (Celsius degrees)."
"What is the mass of all the automobiles scrapped in North America this month? (kilograms)."[4][5]
Possibly the most famous order-of-magnitude problem is theFermi paradox, which considers the odds of a significant number of intelligent civilizations existing in the galaxy, and ponders the apparent contradiction of human civilization never having encountered any. A well-known attempt to ponder this paradox through the lens of a Fermi estimate is theDrake equation, which seeks to estimate the number of such civilizations present in the galaxy.[6]
Scientists often look for Fermi estimates of the answer to a problem before turning to more sophisticated methods to calculate a precise answer. This provides a useful check on the results: while the estimate is almost certainly not exact, it is also a simple calculation that allows for easy error checking and for finding faulty assumptions if the figure produced is far beyond what might reasonably be expected. By contrast, precise calculations can be extremely complex, but with the expectation that the answer they produce is correct. The far larger number of factors and operations involved can obscure a very significant error, either in the mathematical process or in the assumptions the equation is based on, yet the result may still be assumed to be right because it has been derived from a precise formula that is expected to yield good results. Without a reasonable frame of reference to work from, it is seldom clear whether a result is acceptably precise or is many orders of magnitude (tens or hundreds of times) too big or too small. The Fermi estimate gives a quick, simple way to obtain this frame of reference for what might reasonably be expected to be the answer.
As long as the initial assumptions in the estimate are reasonable quantities, the result obtained will be within the same scale as the correct result; if it is not, it gives a basis for understanding why this is the case. For example, suppose a person were asked to determine the number of piano tuners in Chicago.
If their initial estimate told them there should be a hundred or so, but the precise answer tells them there are many thousands, then they know they need to find out why there is this divergence from the expected result. First looking for errors, then for factors the estimation did not take account of – does Chicago have a number of music schools or other places with a disproportionately high ratio of pianos to people? Whether close or very far from the observed results, the context the estimation provides gives useful information both about the process of calculation and the assumptions that have been used to look at problems.
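As an illustration of the method, the classic Chicago piano-tuner estimate can be written as an explicit chain of factors; every number below is an assumption chosen only to show the structure of the calculation, not a measured figure.

```python
# All inputs are rough assumptions, not data.
chicago_population         = 9_000_000   # metropolitan area
people_per_household       = 2
households_with_a_piano    = 1 / 20
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day  = 4
working_days_per_year      = 250

pianos = chicago_population / people_per_household * households_with_a_piano
tunings_needed_per_year = pianos * tunings_per_piano_per_year
tunings_per_tuner_per_year = tunings_per_tuner_per_day * working_days_per_year

tuners = tunings_needed_per_year / tunings_per_tuner_per_year
print(round(tuners))   # on the order of a couple of hundred
```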
Fermi estimates are also useful in approaching problems where the optimal choice of calculation method depends on the expected size of the answer. For instance, a Fermi estimate might indicate whether the internal stresses of a structure are low enough that it can be accurately described bylinear elasticity; or if the estimate already bears significant relationship inscalerelative to some other value, for example, if a structure will be over-engineered to withstand loads several times greater than the estimate.[citation needed]
Although Fermi calculations are often not accurate, as there may be many problems with their assumptions, this sort of analysis does inform one what to look for to get a better answer. For the above example, one might try to find a better estimate of the number of pianos tuned by a piano tuner in a typical day, or look up an accurate number for the population of Chicago. It also gives a rough estimate that may be good enough for some purposes: if a person wants to start a store in Chicago that sells piano tuning equipment, and calculates that they need 10,000 potential customers to stay in business, they can reasonably assume that the above estimate is far enough below 10,000 that they should consider a different business plan (and, with a little more work, they could compute a rough upper bound on the number of piano tuners by considering the most extremereasonablevalues that could appear in each of their assumptions).
The following books contain many examples of Fermi problems with solutions:
There are or have been a number of university-level courses devoted to estimation and the solution of Fermi problems. The materials for these courses are a good source for additional Fermi problem examples and material about solution strategies:
|
https://en.wikipedia.org/wiki/Fermi_problem
|
Aproof of concept(POCorPoC), also known asproof of principle, is aninchoaterealization of a certain idea or method in order to demonstrate itsfeasibility[1]or viability.[2]A proof of concept is usually small and may or may not be complete, but aims to demonstrate in principle that the concept has practical potential without needing to fully develop it.
Aproof of value(PoV) is sometimes used along proof of concept, and differs by focusing more on demonstrating the potential customeruse caseand value, and is usually less in-depth than a proof of concept.[3]
The term has been in use since 1967.[4][5]In a 1969 hearing of the Committee on Science and Astronautics, Subcommittee on Advanced Research and Technology,proof of conceptwas defined as following:
The Board defined proof of concept as a phase in development in which experimental hardware is constructed and tested to explore and demonstrate the feasibility of a new concept.[6]
One definition of the term "proof of concept" was by Bruce Carsten in the context of a "proof-of-concept prototype" in his magazine column "Carsten's Corner" (1989):
Proof-of-Concept Prototype is a term that (I believe) I coined in 1984. It was used to designate a circuit constructed along lines similar to an engineering prototype, but one in which the intent was only to demonstrate the feasibility of a new circuit and/or a fabrication technique, and was not intended to be an early version of a production design.[7]
The column also provided definitions for the related but distinct terms 'breadboard' (a term used since 1940[8]), 'prototype', 'engineering prototype', and 'brassboard'.
Sky Captain and the World of Tomorrow,300, andSin Citywere all shot in front of agreenscreenwith almost all backgrounds andpropscomputer-generated. All three used proof-of-concept short films. In the case ofSin City, the short film became the prologue of the final film.[citation needed]
Pixarsometimes creates short animated films that use a difficult or untested technique. Their short filmGeri's Gameused techniques for animation of cloth and of human facial expressions later used inToy Story 2. Similarly,Pixarcreated several short films as proofs of concept for new techniques for water motion, sea anemone tentacles, and a slowly appearing whale in preparation for the production ofFinding Nemo.[citation needed]
In engineering and technology, a rough prototype of a new idea is often constructed as a "proof of concept". For example, a working concept of an electrical device may be constructed using abreadboard. Apatentapplication often requires a demonstration of functionality prior to being filed. Some universities have proof of concept centers to "fill the 'funding gap'" for "seed-stage investing" and "accelerate the commercialization of university innovations". Proof of concept centers provide "seed funding to novel, early stage research that most often would not be funded by any other conventional source".[9]
In the field ofbusiness developmentandsales, a vendor may allow a prospect customer to try a product. This use of proof of concept helps establish viability, isolate technical issues, and suggest an overall direction, as well as provide feedback forbudgetingand other forms of internal decision-making processes.[citation needed]
In these cases, the proof of concept may mean the use of specializedsales engineersto ensure that the vendor makes a best-possible effort.
In both computer security and encryption,proof of conceptrefers to a demonstration that in principle shows how a system may be protected or compromised, without the necessity of building a complete working vehicle for that purpose.Winzapperwas a proof of concept which possessed the bare minimum of capabilities needed to selectively remove an item from theWindows Security Log, but it was not optimized in any way.[citation needed]
Insoftware development, the term 'proof of concept' often characterizes several distinct processes with different objectives and participant roles: vendor business roles may utilize a proof of concept to establish whether a system satisfies some aspect of the purpose it was designed for. Once a vendor is satisfied, a prototype is developed which is then used to seek funding or to demonstrate to prospective customers.[citation needed]
The US General Services Administration has a checklist for defining an Agile software proof of concept, which includes clear definitions of the problem, pre-POC input required, and output criteria (including success criteria).[10]
The key benefits of the proof of concept in software development are:[11]
A 'steel thread' is a technical proof of concept that touches all of the technologies in a solution. By contrast, a 'proof of technology' aims to determine the solution to some technical problem (such as how two systems might integrate) or to demonstrate that a given configuration can achieve a certain throughput. No business users need be involved in a proof of technology.
Apilotproject refers to an initial roll-out of a system into production, targeting a limited scope of the intended final solution. The scope may be limited by the number of users who can access the system, the business processes affected, the business partners involved, or other restrictions as appropriate to the domain. The purpose of a pilot project is to test, often in a production environment.
Tech demosare designed as proof of concept for the development ofvideo games.[12]They can demonstrate graphical or gameplay capabilities crucial for particular games.
Although not suggested by natural language, and in contrast to usage in other areas,proof of principleandproof of conceptare not synonymous indrug development. A third term,proof of mechanism, is closely related and is also described here. All of these terms lack rigorous definitions and exact usage varies between authors, between institutions and over time. The descriptions given below are intended to be informative and practically useful.[citation needed]
The underlying principle is related to the use of biomarkers as surrogate endpoints in early clinical trials.[13]In early development it is not practical to directly measure that a drug is effective in treating the desired disease, and a surrogate endpoint is used to guide whether or not it is appropriate to proceed with further testing. For example, although it cannot be determined early that a new antibiotic cures patients with pneumonia, early indicators would include that the drug is effective in killing bacteria in laboratory tests, or that it reduces temperature in infected patients—such a drug would merit further testing to determine the appropriate dose and duration of treatment. A new anti-hypertension drug could be shown to reduce blood pressure, indicating that it would be useful to conduct more extensive testing of long-term treatment in the expectation of showing reductions in stroke (cerebrovascular accident) or heart attack (myocardial infarction). Surrogate endpoints are often based on laboratory blood tests or imaging investigations like X-ray or CT scan.[citation needed]
Phase I is typically conducted with a small number of healthy volunteers who are given single doses or short courses of treatment (e.g., up to 2 weeks). Studies in this phase aim to show that the new drug has some of the desired clinical activity (e.g., that an experimental anti-hypertensive drug actually has some effect on reducing blood pressure), that it can be tolerated when given to humans, and to give guidance as to dose levels that are worthy of further study. Other Phase I studies aim to investigate how the new drug is absorbed, distributed, metabolised and excreted (ADME studies).
Phase IIA is typically conducted in up to 100 patients with the disease of interest. Studies in this Phase aim to show that the new drug has a useful amount of the desired clinical activity (e.g., that an experimental anti-hypertensive drug reduces blood pressure by a useful amount), that it can be tolerated when given to humans in the longer term, and to investigate which dose levels might be most suitable for eventual marketing.
A decision is made at this point as to whether to progress the drug into later development, or if it should be dropped. If the drug continues, it will progress into later stage clinical studies, termed Phase IIB and Phase III.
Phase III studies involve larger numbers of patients—commonlymulticenter trials—treated at doses and durations representative of marketed use, andin randomised comparisonto placebo and/or existing active drugs. They aim to show convincing, statistically significant evidence of efficacy and to give a better assessment of safety than is possible in smaller, short-term studies.
A decision is made at this point as to whether the drug is effective and safe, and if so an application is made to regulatory authorities (such as the US Food and Drug AdministrationFDAand theEuropean Medicines Agency) for the drug to receive permission to be marketed for use outside of clinical trials.
Clinical trials can continue after marketing authorization has been received, for example, to better delineate safety, to determine appropriate use alongside other drugs or to investigate additional uses.
|
https://en.wikipedia.org/wiki/Proof_of_concept
|
Ahighly accelerated life test(HALT) is astress testingmethodologyfor enhancing productreliabilityin which prototypes are stressed to a much higher degree than expected from actual use in order to identify weaknesses in the design or manufacture of the product.[1]Manufacturingandresearch and developmentorganizations in the electronics, computer, medical, and military industries use HALT to improve product reliability.
HALT can be effectively used multiple times over a product's life time. During product development, it can find design weakness earlier in theproduct lifecyclewhen changes are much less costly to make. By finding weaknesses and making changes early, HALT can lower product development costs and compresstime to market. When HALT is used at the time a product is being introduced into the market, it can expose problems caused by new manufacturing processes. When used after a product has been introduced into the market, HALT can be used to audit product reliability caused by changes in components, manufacturing processes, suppliers, etc.
Highly accelerated life testing (HALT) techniques are important in uncovering many of the weak links of a new product. These discovery tests rapidly find weaknesses using accelerated stress conditions. The goal of HALT is to proactively find weaknesses and fix them, thereby increasing product reliability. Because of its accelerated nature, HALT is typically faster and less expensive than traditional testing techniques.
HALT is a test technique calledtest-to-fail, where a product is tested until failure. HALT does not help to determine or demonstrate the reliability value or failure probability in field. Many accelerated life tests aretest-to-pass, meaning they are used to demonstrate the product life or reliability.
It is highly recommended to perform HALT in the initial phases of product development to uncover weak links in a product, so that there is a better chance, and more time, to modify and improve the product.
HALT uses several stress factors (decided by a Reliability Test Engineer) and/or the combination of various factors. Commonly used stress factors are temperature, vibration, and humidity for electronics and mechanical products. Other factors can include voltage, current, power cycling and combinations of them.
Environmental stresses are applied in a HALT procedure,[2]eventually reaching a level significantly beyond that expected during use. The stresses used in HALT are typically hot and cold temperatures, temperature cycles, random vibration, power margining, and power cycling. Theproduct under testis in operation during HALT and is continuously monitored for failures. As stress-induced failures occur, the cause should be determined, and if possible, the problem should be repaired so that the test can continue to find other weaknesses.
Output of the HALT gives you:
A specialized environmental chamber is required for HALT. A suitable chamber also has to be capable of applying pseudo-random vibration with a suitable profile in relation to frequency. The HALT chamber should be capable of applying random vibration energy from 2 to 10,000 Hz in 6degrees of freedomand temperatures from -100 to +200°C.[3]Sometimes HALT chambers are called repetitive shock chambers because pneumatic air hammers are used to produce vibration. The chamber should also be capable of rapid changes in temperature, 50°C per minute should be considered a minimum rate of change. Usually high power resistive heating elements are used for heating andliquid nitrogen(LN2) is used for cooling.
Test fixturesmust transmit vibration to the item under test. They must also be open in design or use air circulation to produce rapid temperature change to internal components. Test fixtures can use simple channels to attach the product to the chamber table or more complicated fixtures sometimes are fabricated.
The equipment under test must be monitored so that if it fails under test, the failure is detected. Monitoring is typically performed with thermocouple sensors, vibration accelerometers, multimeters and data loggers. Common causes of failures during HALT are poor product design, poor workmanship, and poor manufacturing processes. Failures of individual components such as resistors, capacitors, diodes and printed circuit boards occur because of these issues. Failure types found during HALT testing are associated with the infant mortality region of the bathtub curve.[1]
HALT is conducted before qualification testing. By catching failures early, flaws are found earlier in the acceptance process, eliminating repetitive later-stage reviews.
|
https://en.wikipedia.org/wiki/Highly_accelerated_life_test
|
Inmaterials science,fatigueis the initiation and propagation of cracks in a material due to cyclic loading. Once a fatigue crack has initiated, it grows a small amount with each loading cycle, typically producingstriationson some parts of the fracture surface. The crack will continue to grow until it reaches a critical size, which occurs when thestress intensity factorof the crack exceeds thefracture toughnessof the material, producing rapid propagation and typically complete fracture of the structure.
Fatigue has traditionally been associated with the failure of metal components which led to the termmetal fatigue. In the nineteenth century, the sudden failing of metal railway axles was thought to be caused by the metal crystallising because of the brittle appearance of the fracture surface, but this has since been disproved.[1]Most materials, such as composites, plastics and ceramics, seem to experience some sort of fatigue-related failure.[2]
To aid in predicting the fatigue life of a component,fatigue testsare carried out using coupons to measure the rate of crack growth by applying constant amplitude cyclic loading and averaging the measured growth of a crack over thousands of cycles. There are also special cases that need to be considered where the rate of crack growth is significantly different compared to that obtained from constant amplitude testing, such as the reduced rate of growth that occurs for small loads near the threshold or after the application of an overload, and the increased rate of crack growth associated with short cracks or after the application of an underload.[2]
If the loads are above a certain threshold, microscopic cracks will begin to initiate atstress concentrationssuch as holes,persistent slip bands(PSBs),compositeinterfaces orgrain boundariesin metals.[3]Thestressvalues that cause fatigue damage are typically much less than theyield strengthof the material.
Historically, fatigue has been separated into regions of high cycle fatigue, which requires more than 10⁴ cycles to failure and in which stress is low and deformation primarily elastic, and low cycle fatigue, in which there is significant plasticity. Experiments have shown that low cycle fatigue is also crack growth.[4]
Fatigue failures, both for high and low cycles, all follow the same basic steps: crack initiation, crack growth stages I and II, and finally ultimate failure. To begin the process, cracks must nucleate within a material. This process can occur either atstress risersin metallic samples or at areas with a high void density in polymer samples. These cracks propagate slowly at first during stage I crack growth along crystallographic planes, whereshear stressesare highest. Once the cracks reach a critical size they propagate quickly during stage II crack growth in a direction perpendicular to the applied force. These cracks can eventually lead to the ultimate failure of the material, often in a brittle catastrophic fashion.
The formation of initial cracks preceding fatigue failure is a separate process consisting of four discrete steps in metallic samples. The material will develop cell structures and harden in response to the applied load. This causes the amplitude of the applied stress to increase given the new restraints on strain. These newly formed cell structures will eventually break down with the formation of persistent slip bands (PSBs). Slip in the material is localized at these PSBs, and the exaggerated slip can now serve as a stress concentrator for a crack to form. Nucleation and growth of a crack to a detectable size accounts for most of the cracking process. It is for this reason that cyclic fatigue failures seem to occur so suddenly where the bulk of the changes in the material are not visible without destructive testing. Even in normally ductile materials, fatigue failures will resemble sudden brittle failures.
PSB-induced slip planes result in intrusions and extrusions along the surface of a material, often occurring in pairs.[5]This slip is not amicrostructuralchange within the material, but rather a propagation ofdislocationswithin the material. Instead of a smooth interface, the intrusions and extrusions will cause the surface of the material to resemble the edge of a deck of cards, where not all cards are perfectly aligned. Slip-induced intrusions and extrusions create extremely fine surface structures on the material. With surface structure size inversely related to stress concentration factors, PSB-induced surface slip can cause fractures to initiate.
These steps can also be bypassed entirely if the cracks form at a pre-existing stress concentrator such as from an inclusion in the material or from a geometric stress concentrator caused by a sharp internal corner or fillet.
Most of the fatigue life is generally consumed in the crack growth phase. The rate of growth is primarily driven by the range of cyclic loading although additional factors such as mean stress, environment, overloads and underloads can also affect the rate of growth. Crack growth may stop if the loads are small enough to fall below a critical threshold.
Fatigue cracks can grow from material or manufacturing defects as small as 10 μm.
When the rate of growth becomes large enough, fatigue striations can be seen on the fracture surface. Striations mark the position of the crack tip and the width of each striation represents the growth from one loading cycle. Striations are a result of plasticity at the crack tip.
When the stress intensity exceeds a critical value known as the fracture toughness, unsustainable fast fracture will occur, usually by a process ofmicrovoid coalescence. Prior to final fracture, the fracture surface may contain a mixture of areas of fatigue and fast fracture.
The following effects change the rate of growth:[2]
TheAmerican Society for Testing and Materialsdefines fatigue life, Nf, as the number of stress cycles of a specified character that a specimen sustains beforefailureof a specified nature occurs.[24]For some materials, likesteelandtitanium, there is a theoretical value for stress amplitude below which the material will not fail for any number of cycles, called afatigue limit or endurance limit.[25]In practice, several bodies of work done at greater numbers of cycles suggest that fatigue limits do not exist for any metals.[26][27][28]
Engineers have used a number of methods to determine the fatigue life of a material:[29]
Whether using stress/strain-life approach or using crack growth approach, complex or variable amplitude loading is reduced to a series of fatigue equivalent simple cyclic loadings using a technique such as therainflow-counting algorithm.
A mechanical part is often exposed to a complex, oftenrandom, sequence of loads, large and small. In order to assess the safe life of such a part using the fatigue damage or stress/strain-life methods the following series of steps is usually performed:
Since S-N curves are typically generated for uniaxial loading, some equivalence rule is needed whenever the loading is multiaxial. For simple, proportional loading histories (lateral load in a constant ratio with the axial),Sines rulemay be applied. For more complex situations, such as non-proportional loading,critical plane analysismust be applied.
In 1945, Milton A. Miner popularised a rule that had first been proposed by Arvid Palmgren in 1924.[16] The rule, variously called Miner's rule or the Palmgren–Miner linear damage hypothesis, states that where there are k different stress magnitudes in a spectrum, Si (1 ≤ i ≤ k), each contributing ni(Si) cycles, then if Ni(Si) is the number of cycles to failure at a constant stress reversal Si (determined by uni-axial fatigue tests), failure occurs when

n1/N1 + n2/N2 + ⋯ + nk/Nk = C
Usually, for design purposes, C is assumed to be 1. This can be thought of as assessing what proportion of life is consumed by a linear combination of stress reversals at varying magnitudes.
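A minimal sketch of the damage sum follows; the loading spectrum is invented purely for illustration.

```python
def miner_damage(spectrum):
    """Palmgren-Miner damage sum: spectrum is a list of (n_i, N_i) pairs,
    i.e. applied cycles n_i at a stress level whose life is N_i cycles.
    Failure is predicted when the sum reaches C (commonly taken as 1)."""
    return sum(n_i / N_i for n_i, N_i in spectrum)

# Hypothetical spectrum: (cycles applied, cycles to failure at that level).
spectrum = [(10_000, 100_000), (2_000, 20_000), (500, 5_000)]
damage = miner_damage(spectrum)
print(damage)      # 0.3 -> about 30% of the allowable damage consumed
print(1 / damage)  # the spectrum could be repeated roughly 3.3 times
```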
Although Miner's rule may be a useful approximation in many circumstances, it has several major limitations:
Materials fatigue performance is commonly characterized by anS-N curve, also known as aWöhlercurve. This is often plotted with the cyclic stress (S) against the cycles to failure (N) on alogarithmic scale.[31]S-N curves are derived from tests on samples of the material to be characterized (often called coupons or specimens) where a regularsinusoidalstress is applied by a testing machine which also counts the number of cycles to failure. This process is sometimes known as coupon testing. For greater accuracy but lower generality component testing is used.[32]Each coupon or component test generates a point on the plot though in some cases there is a runout where the time to failure exceeds that available for the test (seecensoring). Analysis of fatigue data requires techniques fromstatistics, especially survival analysis andlinear regression.
The progression of theS-N curvecan be influenced by many factors such as stress ratio (mean stress),[33]loading frequency,temperature,corrosion, residual stresses, and the presence of notches. A constant fatigue life (CFL) diagram[34]is useful for the study of stress ratio effect. TheGoodman lineis a method used to estimate the influence of the mean stress on thefatigue strength.
In the presence of a steady stress superimposed on the cyclic loading, the Goodman relation can be used to estimate a failure condition. It plots stress amplitude against mean stress, with the fatigue limit and the ultimate tensile strength of the material as the two extremes. Alternative failure criteria include Soderberg and Gerber.[36]
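One common way to apply the Goodman relation in calculations is to convert a cycle with mean stress and amplitude into an equivalent fully reversed amplitude that can be looked up on a standard S-N curve; a sketch, with assumed stress values:

```python
def goodman_equivalent_amplitude(sigma_a, sigma_m, s_ut):
    """Equivalent fully reversed stress amplitude from the Goodman relation,
    sigma_a / sigma_ar + sigma_m / s_ut = 1, solved for sigma_ar."""
    return sigma_a / (1.0 - sigma_m / s_ut)

# Hypothetical values in MPa: amplitude 150, mean 100, ultimate strength 500.
print(goodman_equivalent_amplitude(150.0, 100.0, 500.0))   # 187.5 MPa
```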
As coupons sampled from a homogeneous frame will display a variation in their number of cycles to failure, the S-N curve should more properly be a Stress-Cycle-Probability (S-N-P) curve to capture the probability of failure after a given number of cycles of a certain stress.
With body-centered cubic (bcc) materials, the Wöhler curve often becomes a horizontal line with decreasing stress amplitude, i.e. there is a fatigue limit that can be assigned to these materials. With face-centered cubic (fcc) metals, the Wöhler curve generally drops continuously, so that only a fatigue strength, defined at a specified number of cycles, can be assigned to these materials.[37]
When strains are no longer elastic, such as in the presence of stress concentrations, the total strain can be used instead of stress as a similitude parameter. This is known as the strain-life method. The total strain amplitude Δε/2 is the sum of the elastic strain amplitude Δεe/2 and the plastic strain amplitude Δεp/2 and is given by[2][38]

Δε/2 = Δεe/2 + Δεp/2
Dividing the stress amplitude σa by Young's modulus E gives the elastic strain amplitude, and Basquin's relation for high cycle fatigue can then be expressed in terms of the elastic strain amplitude as

Δεe/2 = σa/E = (σf′/E)(2Nf)^b

where σf′ is a parameter that scales with tensile strength, obtained by fitting experimental data, Nf is the number of cycles to failure, and b is the slope of the log-log curve, again determined by curve fitting.
In 1954, Coffin and Manson proposed that the fatigue life of a component was related to the plastic strain amplitude by

Δεp/2 = εf′(2Nf)^c

Combining the elastic and plastic portions gives the total strain amplitude, accounting for both low and high cycle fatigue:

Δε/2 = (σf′/E)(2Nf)^b + εf′(2Nf)^c

where σf′ is the fatigue strength coefficient, b is the fatigue strength exponent, εf′ is the fatigue ductility coefficient, c is the fatigue ductility exponent, and Nf is the number of cycles to failure (2Nf being the number of reversals to failure).
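Because the combined relation cannot generally be inverted in closed form, the number of cycles to failure for a given strain amplitude is usually found numerically. The sketch below bisects on 2Nf using illustrative coefficients loosely typical of a steel; all values are assumptions, not measured data.

```python
def strain_amplitude(two_nf, sigma_f, E, b, eps_f, c):
    """Total strain amplitude from the combined Basquin / Coffin-Manson relation."""
    return (sigma_f / E) * two_nf ** b + eps_f * two_nf ** c

def cycles_to_failure(target, sigma_f, E, b, eps_f, c):
    """Solve for N_f by bisection on 2*N_f; amplitude decreases with life."""
    lo, hi = 1.0, 1e12
    for _ in range(100):
        mid = (lo * hi) ** 0.5          # bisect on a log scale
        if strain_amplitude(mid, sigma_f, E, b, eps_f, c) > target:
            lo = mid
        else:
            hi = mid
    return lo / 2.0

# Assumed coefficients: sigma_f' = 1000 MPa, E = 200 000 MPa, b = -0.09,
# eps_f' = 0.6, c = -0.6; strain amplitude 0.004 gives roughly 7e3 cycles.
print(cycles_to_failure(0.004, 1000.0, 200_000.0, -0.09, 0.6, -0.6))
```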
An estimate of the fatigue life of a component can be made using acrack growth equationby summing up the width of each increment of crack growth for each loading cycle. Safety or scatter factors are applied to the calculated life to account for any uncertainty and variability associated with fatigue. The rate of growth used in crack growth predictions is typically measured by applying thousands of constant amplitude cycles to a coupon and measuring the rate of growth from the change in compliance of the coupon or by measuring the growth of the crack on the surface of the coupon. Standard methods for measuring the rate of growth have been developed by ASTM International.[9]
Crack growth equations such as the Paris–Erdoğan equation are used to predict the life of a component. They can be used to predict the growth of a crack from 10 μm to failure. For normal manufacturing finishes this may cover most of the fatigue life of a component, where growth can start from the first cycle.[4] The conditions at the crack tip of a component are usually related to the conditions of a test coupon using a characterising parameter such as the stress intensity, J-integral or crack tip opening displacement. All these techniques aim to match the crack tip conditions on the component to those of the test coupons, which give the rate of crack growth.
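As a sketch of such a life prediction (all values below are assumed for illustration; real assessments also need geometry factors, retardation models and scatter factors), the Paris–Erdoğan law da/dN = C(ΔK)^m can be integrated numerically between an initial and a critical crack size:

```python
import math

def paris_life(a_init, a_crit, delta_sigma, C, m, Y=1.0, steps=100_000):
    """Estimate cycles to grow a crack from a_init to a_crit (metres) using
    da/dN = C * (dK)^m with dK = Y * delta_sigma * sqrt(pi * a).
    Simple numerical integration of dN = da / (C * dK^m); a sketch only."""
    cycles = 0.0
    da = (a_crit - a_init) / steps
    a = a_init
    for _ in range(steps):
        dK = Y * delta_sigma * math.sqrt(math.pi * a)   # MPa * sqrt(m)
        cycles += da / (C * dK ** m)
        a += da
    return cycles

# Illustrative, assumed values: a 1 mm flaw growing to 25 mm under a 100 MPa
# stress range, with C = 1e-11 and m = 3 (units consistent with MPa and m);
# this gives roughly 9e5 cycles.
print(paris_life(0.001, 0.025, 100.0, 1e-11, 3.0))
```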
Additional models may be necessary to include retardation and acceleration effects associated with overloads or underloads in the loading sequence. In addition, small crack growth data may be needed to match the increased rate of growth seen with small cracks.[39]
Typically, a cycle counting technique such as rainflow-cycle counting is used to extract the cycles from a complex sequence. This technique, along with others, has been shown to work with crack growth methods.[40]
Crack growth methods have the advantage that they can predict the intermediate size of cracks. This information can be used to schedule inspections on a structure to ensure safety whereas strain/life methods only give a life until failure.
Dependable design against fatigue-failure requires thorough education and supervised experience instructural engineering,mechanical engineering, ormaterials science. There are at least five principal approaches to life assurance for mechanical parts that display increasing degrees of sophistication:[41]
Fatigue testingcan be used for components such as a coupon or a full-scale test article to determine:
These tests may form part of the certification process such as forairworthiness certification.
Composite materialscan offer excellent resistance to fatigue loading. In general, composites exhibit goodfracture toughnessand, unlike metals, increase fracture toughness with increasing strength. The critical damage size in composites is also greater than that for metals.[53]
The primary mode of damage in a metal structure is cracking. For metal, cracks propagate in a relatively well-defined manner with respect to the applied stress, and the critical crack size and rate of crack propagation can be related to specimen data through analytical fracture mechanics. However, with composite structures, there is no single damage mode which dominates. Matrix cracking, delamination, debonding, voids, fiber fracture, and composite cracking can all occur separately and in combination, and the predominance of one or more is highly dependent on thelaminateorientations and loading conditions.[54]In addition, the unique joints and attachments used for composite structures often introduce modes offailuredifferent from those typified by the laminate itself.[55]
Composite damage propagates in a less regular manner, and damage modes can change. Experience with composites indicates that the rate of damage propagation does not exhibit the two distinct regions of initiation and propagation seen in metals, and the quantitative difference in growth rate between the two stages appears to be less apparent in composites.[54] Fatigue cracks in composites may form in the matrix and propagate slowly, since the matrix carries only a small fraction of the applied stress, while the fibers in the wake of the crack experience fatigue damage. In many cases, the damage rate is accelerated by deleterious interactions with the environment, such as oxidation or corrosion of fibers.[56]
Following King Louis-Philippe I's celebrations at the Palace of Versailles, a train returning to Paris crashed at Meudon in May 1842 after the leading locomotive broke an axle. The carriages behind piled into the wrecked engines and caught fire. At least 55 passengers were killed, trapped in the locked carriages, including the explorer Jules Dumont d'Urville. This accident is known in France as the "Catastrophe ferroviaire de Meudon". The accident was witnessed by the British locomotive engineer Joseph Locke and widely reported in Britain. It was discussed extensively by engineers, who sought an explanation.
The derailment had been the result of a brokenlocomotiveaxle.Rankine'sinvestigation of broken axles in Britain highlighted the importance of stress concentration, and the mechanism of crack growth with repeated loading. His and other papers suggesting a crack growth mechanism through repeated stressing, however, were ignored, and fatigue failures occurred at an ever-increasing rate on the expanding railway system. Other spurious theories seemed to be more acceptable, such as the idea that the metal had somehow "crystallized". The notion was based on the crystalline appearance of the fast fracture region of the crack surface, but ignored the fact that the metal was already highly crystalline.
Twode Havilland Cometpassenger jets broke up in mid-air and crashed within a few months of each other in 1954. As a result, systematic tests were conducted on afuselageimmersed and pressurised in a water tank. After the equivalent of 3,000 flights, investigators at theRoyal Aircraft Establishment(RAE) were able to conclude that the crash had been due to failure of the pressure cabin at the forwardAutomatic Direction Finderwindow in the roof. This 'window' was in fact one of two apertures for theaerialsof an electronic navigation system in which opaquefibreglasspanels took the place of the window 'glass'. The failure was a result of metal fatigue caused by the repeated pressurisation and de-pressurisation of the aircraft cabin. Also, the supports around the windows were riveted, not bonded, as the original specifications for the aircraft had called for. The problem was exacerbated by thepunch rivetconstruction technique employed. Unlike drill riveting, the imperfect nature of the hole created by punch riveting caused manufacturing defect cracks which may have caused the start of fatigue cracks around the rivet.
The Comet's pressure cabin had been designed to asafety factorcomfortably in excess of that required by British Civil Airworthiness Requirements (2.5 times the cabinproof testpressure as opposed to the requirement of 1.33 times and an ultimate load of 2.0 times the cabin pressure) and the accident caused a revision in the estimates of the safe loading strength requirements of airliner pressure cabins.
In addition, it was discovered that thestressesaround pressure cabin apertures were considerably higher than had been anticipated, especially around sharp-cornered cut-outs, such as windows. As a result, all futurejet airlinerswould feature windows with rounded corners, greatly reducing the stress concentration. This was a noticeable distinguishing feature of all later models of the Comet. Investigators from the RAE told a public inquiry that the sharp corners near the Comets' window openings acted as initiation sites for cracks. The skin of the aircraft was also too thin, and cracks from manufacturing stresses were present at the corners.
Alexander L. Kiellandwas a Norwegiansemi-submersibledrilling rigthatcapsizedwhilst working in theEkofisk oil fieldin March 1980, killing 123 people. The capsizing was the worst disaster in Norwegian waters since World War II. The rig, approximately 320 km east ofDundee, Scotland, was owned by the Stavanger Drilling Company of Norway and was on hire to the United States companyPhillips Petroleumat the time of the disaster. In driving rain and mist, early in the evening of 27 March 1980 more than 200 men were off duty in the accommodation onAlexander L. Kielland. The wind was gusting to 40 knots with waves up to 12 m high. The rig had just been winched away from theEddaproduction platform. Minutes before 18:30 those on board felt a 'sharp crack' followed by 'some kind of trembling'. Suddenly the rig heeled over 30° and then stabilised. Five of the six anchor cables had broken, with one remaining cable preventing the rig from capsizing. Thelistcontinued to increase and at 18:53 the remaining anchor cable snapped and the rig turned upside down.
In March 1981, the investigative report[58]concluded that the rig collapsed owing to a fatigue crack in one of its six bracings (bracing D-6), which connected the collapsed D-leg to the rest of the rig. This was traced to a small 6 mm fillet weld which joined a non-load-bearing flange plate to this D-6 bracing. This flange plate held a sonar device used during drilling operations. The poor profile of the fillet weld contributed to a reduction in its fatigue strength. Further, the investigation found considerable amounts oflamellar tearingin the flange plate and cold cracks in the butt weld. Cold cracks in the welds, increased stress concentrations due to the weakened flange plate, the poor weld profile, and cyclical stresses (which would be common in theNorth Sea), seemed to collectively play a role in the rig's collapse.
|
https://en.wikipedia.org/wiki/Fatigue_(material)
|
Incontinuum mechanics,stressis aphysical quantitythat describesforcespresent duringdeformation. For example, an object being pulled apart, such as a stretched elastic band, is subject totensilestress and may undergoelongation. An object being pushed together, such as a crumpled sponge, is subject tocompressivestress and may undergo shortening.[1][2]The greater the force and the smaller the cross-sectional area of the body on which it acts, the greater the stress. Stress hasdimensionof force per area, withSI unitsof newtons per square meter (N/m2) orpascal(Pa).[1]
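As a quick worked example with arbitrarily chosen numbers: an axial force of 10 kN carried by a rod of 1 cm² (10⁻⁴ m²) cross-section corresponds to a stress of σ = F/A = 10,000 N / 10⁻⁴ m² = 10⁸ Pa = 100 MPa.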
Stress expresses the internal forces that neighbouringparticlesof a continuous material exert on each other, whilestrainis the measure of the relativedeformationof the material.[3]For example, when asolidvertical bar is supporting an overheadweight, each particle in the bar pushes on the particles immediately below it. When aliquidis in a closed container underpressure, each particle gets pushed against by all the surrounding particles. The container walls and thepressure-inducing surface (such as a piston) push against them in (Newtonian)reaction. These macroscopic forces are actually the net result of a very large number ofintermolecular forcesandcollisionsbetween the particles in thosemolecules. Stress is frequently represented by a lowercase Greek letter sigma (σ).[3]
Strain inside a material may arise by various mechanisms, such asstressas applied by external forces to the bulk material (likegravity) or to its surface (likecontact forces, external pressure, orfriction). Anystrain (deformation)of a solid material generates an internalelastic stress, analogous to the reaction force of aspring, that tends to restore the material to its original non-deformed state. In liquids andgases, only deformations that change the volume generate persistent elastic stress. If the deformation changes gradually with time, even in fluids there will usually be someviscous stress, opposing that change. Elastic and viscous stresses are usually combined under the namemechanical stress.
Significant stress may exist even when deformation is negligible or non-existent (a common assumption when modeling the flow of water). Stress may exist in the absence of external forces; suchbuilt-in stressis important, for example, inprestressed concreteandtempered glass. Stress may also be imposed on a material without the application ofnet forces, for example bychanges in temperatureorchemicalcomposition, or by externalelectromagnetic fields(as inpiezoelectricandmagnetostrictivematerials).
The relation between mechanical stress, strain, and thestrain ratecan be quite complicated, although alinear approximationmay be adequate in practice if the quantities are sufficiently small. Stress that exceeds certainstrength limitsof the material will result in permanent deformation (such asplastic flow,fracture,cavitation) or even change itscrystal structureandchemical composition.
Humans have known about stress inside materials since ancient times. Until the 17th century, this understanding was largely intuitive and empirical, though this did not prevent the development of relatively advanced technologies like thecomposite bowandglass blowing.[4]
Over several millennia, architects and builders in particular, learned how to put together carefully shaped wood beams and stone blocks to withstand, transmit, and distribute stress in the most effective manner, with ingenious devices such as thecapitals,arches,cupolas,trussesand theflying buttressesofGothic cathedrals.
Ancient and medieval architects did develop some geometrical methods and simple formulas to compute the proper sizes of pillars and beams, but the scientific understanding of stress became possible only after the necessary tools were invented in the 17th and 18th centuries:Galileo Galilei's rigorousexperimental method,René Descartes'scoordinatesandanalytic geometry, andNewton'slaws of motion and equilibriumandcalculus of infinitesimals.[5]With those tools,Augustin-Louis Cauchywas able to give the first rigorous and general mathematical model of a deformed elastic body by introducing the notions of stress and strain.[6]Cauchy observed that the force across an imaginary surface was a linear function of its normal vector; and, moreover, that it must be a symmetric function (with zero total momentum).
The understanding of stress in liquids started with Newton, who provided a differential formula for friction forces (shear stress) in parallellaminar flow.
Stress is defined as the force across a small boundary per unit area of that boundary, for all orientations of the boundary.[7]Derived from a physical quantity (force) and a purely geometrical quantity (area), stress is also a physical quantity, like velocity,torqueorenergy, that can be quantified and analyzed without explicit consideration of the nature of the material or of its physical causes.
Following the basic premises of continuum mechanics, stress is amacroscopicconcept. Namely, the particles considered in its definition and analysis should be just small enough to be treated as homogeneous in composition and state, but still large enough to ignorequantumeffects and the detailed motions of molecules. Thus, the force between two particles is actually the average of a very large number of atomic forces between their molecules; and physical quantities like mass, velocity, and forces that act through the bulk of three-dimensional bodies, like gravity, are assumed to be smoothly distributed over them.[8]: 90–106Depending on the context, one may also assume that the particles are large enough to allow the averaging out of other microscopic features, like the grains of ametalrod or thefibersof a piece ofwood.
Quantitatively, the stress is expressed by theCauchy traction vectorTdefined as thetraction forceFbetween adjacent parts of the material across an imaginary separating surfaceS, divided by the area ofS.[9]: 41–50In afluidat rest the force is perpendicular to the surface, and is the familiarpressure. In asolid, or in aflowof viscousliquid, the forceFmay not be perpendicular toS; hence the stress across a surface must be regarded a vector quantity, not a scalar. Moreover, the direction and magnitude generally depend on the orientation ofS. Thus the stress state of the material must be described by atensor, called the(Cauchy) stress tensor; which is alinear functionthat relates thenormal vectornof a surfaceSto the traction vectorTacrossS. With respect to any chosencoordinate system, the Cauchy stress tensor can be represented as asymmetric matrixof 3×3 real numbers. Even within ahomogeneousbody, the stress tensor may vary from place to place, and may change over time; therefore, the stress within a material is, in general, a time-varyingtensor field.
In general, the stressTthat a particlePapplies on another particleQacross a surfaceScan have any direction relative toS. The vectorTmay be regarded as the sum of two components: thenormalstress(compressionortension) perpendicular to the surface, and theshear stressthat is parallel to the surface.
If the normal unit vectornof the surface (pointing fromQtowardsP) is assumed fixed, the normal component can be expressed by a single number, thedot productT·n. This number will be positive ifPis "pulling" onQ(tensile stress), and negative ifPis "pushing" againstQ(compressive stress). The shear component is then the vectorT− (T·n)n.
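As an illustration of this decomposition, the following sketch splits a traction vector into its normal and shear parts; the numerical values of T and n are hypothetical and chosen only for demonstration (NumPy assumed available).

```python
import numpy as np

# Hypothetical traction vector T (e.g. in MPa) acting across a surface
# whose unit normal n points from particle Q towards particle P.
T = np.array([3.0, 1.0, 2.0])
n = np.array([0.0, 0.0, 1.0])

normal_part = np.dot(T, n)          # scalar: > 0 tensile, < 0 compressive
shear_part = T - normal_part * n    # vector component parallel to the surface

print("normal stress component:", normal_part)               # 2.0 (tensile)
print("shear stress component :", shear_part)                # [3. 1. 0.]
print("shear magnitude        :", np.linalg.norm(shear_part))
```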
The dimension of stress is that of pressure, and therefore its coordinates are measured in the same units as pressure: namely, pascals (Pa, that is, newtons per square metre) in the International System, or pounds per square inch (psi) in the Imperial system. Because mechanical stresses easily exceed a million pascals, the megapascal (MPa) is a common unit of stress.
Stress in a material body may be due to multiple physical causes, including external influences and internal physical processes. Some of these agents (like gravity, changes in temperature and phase, and electromagnetic fields) act on the bulk of the material, varying continuously with position and time. Other agents (like external loads and friction, ambient pressure, and contact forces) may create stresses and forces that are concentrated on certain surfaces, lines, or points, and possibly also on very short time intervals (as in the impulses due to collisions). In active matter, self-propulsion of microscopic particles generates macroscopic stress profiles.[11] In general, the stress distribution in a body is expressed as a piecewise continuous function of space and time.
Conversely, stress is usually correlated with various effects on the material, possibly including changes in physical properties likebirefringence,polarization, andpermeability. The imposition of stress by an external agent usually creates somestrain (deformation)in the material, even if it is too small to be detected. In a solid material, such strain will in turn generate an internal elastic stress, analogous to the reaction force of a stretchedspring, tending to restore the material to its original undeformed state. Fluid materials (liquids,gasesandplasmas) by definition can only oppose deformations that would change their volume. If the deformation changes with time, even in fluids there will usually be some viscous stress, opposing that change. Such stresses can be either shear or normal in nature. Molecular origin of shear stresses in fluids is given in the article onviscosity. The same for normal viscous stresses can be found in Sharma (2019).[12]
The relation between stress and its effects and causes, including deformation and rate of change of deformation, can be quite complicated (although alinear approximationmay be adequate in practice if the quantities are small enough). Stress that exceeds certainstrength limitsof the material will result in permanent deformation (such asplastic flow,fracture,cavitation) or even change itscrystal structureandchemical composition.
In some situations, the stress within a body may adequately be described by a single number, or by a single vector (a number and a direction). Three suchsimple stresssituations, that are often encountered in engineering design, are theuniaxial normal stress, thesimple shear stress, and theisotropic normal stress.[13]
A common situation with a simple stress pattern occurs when a straight rod, with uniform material and cross-section, is subjected to tension by opposite forces of magnitude $F$ along its axis. If the system is in equilibrium and not changing with time, and the weight of the bar can be neglected, then through each transversal section of the bar the top part must pull on the bottom part with the same force $F$, acting continuously over the full cross-sectional area $A$. Therefore, the stress throughout the bar, across any horizontal surface, can be expressed by the single number $\sigma$, calculated from the magnitude of those forces and the cross-sectional area: $\sigma = \frac{F}{A}$. On the other hand, if one imagines the bar being cut along its length, parallel to the axis, there will be no force (hence no stress) between the two halves across the cut.
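A minimal numerical sketch of the uniaxial relation σ = F/A follows; the force and rod diameter are invented values used only to show the arithmetic.

```python
import math

# Hypothetical values: a 10 kN tensile force on a 20 mm diameter rod.
F = 10_000.0                # axial force in newtons
d = 0.020                   # rod diameter in metres
A = math.pi * d**2 / 4      # cross-sectional area in square metres

sigma = F / A               # uniaxial normal stress in pascals
print(f"sigma = {sigma / 1e6:.1f} MPa")   # about 31.8 MPa
```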
This type of stress may be called (simple) normal stress or uniaxial stress; specifically, (uniaxial, simple, etc.) tensile stress.[13] If the load is compression on the bar, rather than stretching it, the analysis is the same except that the force $F$ and the stress $\sigma$ change sign, and the stress is called compressive stress.
This analysis assumes the stress is evenly distributed over the entire cross-section. In practice, depending on how the bar is attached at the ends and how it was manufactured, this assumption may not be valid. In that case, the value $\sigma = F/A$ will be only the average stress, called engineering stress or nominal stress. If the bar's length $L$ is many times its diameter $D$, and it has no gross defects or built-in stress, then the stress can be assumed to be uniformly distributed over any cross-section that is more than a few times $D$ from both ends. (This observation is known as Saint-Venant's principle.)
Normal stress occurs in many other situations besides axial tension and compression. If an elastic bar with uniform and symmetric cross-section is bent in one of its planes of symmetry, the resultingbending stresswill still be normal (perpendicular to the cross-section), but will vary over the cross section: the outer part will be under tensile stress, while the inner part will be compressed. Another variant of normal stress is thehoop stressthat occurs on the walls of a cylindricalpipeorvesselfilled with pressurized fluid.
Another simple type of stress occurs when a uniformly thick layer of elastic material like glue or rubber is firmly attached to two stiff bodies that are pulled in opposite directions by forces parallel to the layer; or a section of a soft metal bar that is being cut by the jaws of a scissors-like tool. Let $F$ be the magnitude of those forces, and $M$ be the midplane of that layer. Just as in the normal stress case, the part of the layer on one side of $M$ must pull the other part with the same force $F$. Assuming that the direction of the forces is known, the stress across $M$ can be expressed by the single number $\tau = \frac{F}{A}$, calculated from the magnitude of those forces $F$ and the cross-sectional area $A$. Unlike normal stress, this simple shear stress is directed parallel to the cross-section considered, rather than perpendicular to it.[13] For any plane $S$ that is perpendicular to the layer, the net internal force across $S$, and hence the stress, will be zero.
As in the case of an axially loaded bar, in practice the shear stress may not be uniformly distributed over the layer; so, as before, the ratioF/Awill only be an average ("nominal", "engineering") stress. That average is often sufficient for practical purposes.[14]: 292Shear stress is observed also when a cylindrical bar such as ashaftis subjected to opposite torques at its ends. In that case, the shear stress on each cross-section is parallel to the cross-section, but oriented tangentially relative to the axis, and increases with distance from the axis. Significant shear stress occurs in the middle plate (the "web") ofI-beamsunder bending loads, due to the web constraining the end plates ("flanges").
Another simple type of stress occurs when the material body is under equal compression or tension in all directions. This is the case, for example, in a portion of liquid or gas at rest, whether enclosed in some container or as part of a larger mass of fluid; or inside a cube of elastic material that is being pressed or pulled on all six faces by equal perpendicular forces — provided, in both cases, that the material is homogeneous, without built-in stress, and that the effect of gravity and other external forces can be neglected.
In these situations, the stress across any imaginary internal surface turns out to be equal in magnitude and always directed perpendicularly to the surface, independently of the surface's orientation. This type of stress may be called isotropic normal or just isotropic; if it is compressive, it is called hydrostatic pressure or just pressure. Gases by definition cannot withstand tensile stresses, but some liquids may withstand very large amounts of isotropic tensile stress under some circumstances; see Z-tube.
Parts withrotational symmetry, such as wheels, axles, pipes, and pillars, are very common in engineering. Often the stress patterns that occur in such parts have rotational or evencylindrical symmetry. The analysis of suchcylinder stressescan take advantage of the symmetry to reduce the dimension of the domain and/or of the stress tensor.
Often, mechanical bodies experience more than one type of stress at the same time; this is called combined stress. In normal and shear stress, the magnitude of the stress is maximum for surfaces that are perpendicular to a certain direction $d$, and zero across any surfaces that are parallel to $d$. When the shear stress is zero only across surfaces that are perpendicular to one particular direction, the stress is called biaxial, and can be viewed as the sum of two normal or shear stresses. In the most general case, called triaxial stress, the stress is nonzero across every surface element.
Combined stresses cannot be described by a single vector. Even if the material is stressed in the same way throughout the volume of the body, the stress across any imaginary surface will depend on the orientation of that surface, in a non-trivial way.
Cauchy observed that the stress vector $T$ across a surface will always be a linear function of the surface's normal vector $n$, the unit-length vector that is perpendicular to it. That is, $T = \boldsymbol{\sigma}(n)$, where the function $\boldsymbol{\sigma}$ satisfies $\boldsymbol{\sigma}(\alpha u + \beta v) = \alpha\,\boldsymbol{\sigma}(u) + \beta\,\boldsymbol{\sigma}(v)$ for any vectors $u, v$ and any real numbers $\alpha, \beta$.
The function $\boldsymbol{\sigma}$, now called the (Cauchy) stress tensor, completely describes the stress state of a uniformly stressed body. (Today, any linear connection between two physical vector quantities is called a tensor, reflecting Cauchy's original use to describe the "tensions" (stresses) in a material.) In tensor calculus, $\boldsymbol{\sigma}$ is classified as a second-order tensor of type (0,2) or (1,1), depending on convention.
Like any linear map between vectors, the stress tensor can be represented in any chosen Cartesian coordinate system by a 3×3 matrix of real numbers. Depending on whether the coordinates are numbered $x_1, x_2, x_3$ or named $x, y, z$, the matrix may be written as
$$\begin{bmatrix}\sigma_{11}&\sigma_{12}&\sigma_{13}\\\sigma_{21}&\sigma_{22}&\sigma_{23}\\\sigma_{31}&\sigma_{32}&\sigma_{33}\end{bmatrix}\quad\text{or}\quad\begin{bmatrix}\sigma_{xx}&\sigma_{xy}&\sigma_{xz}\\\sigma_{yx}&\sigma_{yy}&\sigma_{yz}\\\sigma_{zx}&\sigma_{zy}&\sigma_{zz}\end{bmatrix}.$$
The stress vector $T=\boldsymbol{\sigma}(n)$ across a surface with normal (row) vector $n$, with coordinates $n_1, n_2, n_3$, is then the matrix product $T = n\cdot\boldsymbol{\sigma}$ (see Cauchy stress tensor), that is
$$\begin{bmatrix}T_1&T_2&T_3\end{bmatrix}=\begin{bmatrix}n_1&n_2&n_3\end{bmatrix}\cdot\begin{bmatrix}\sigma_{11}&\sigma_{21}&\sigma_{31}\\\sigma_{12}&\sigma_{22}&\sigma_{32}\\\sigma_{13}&\sigma_{23}&\sigma_{33}\end{bmatrix}.$$
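The matrix form above can be evaluated directly. The sketch below computes the traction vector T = n·σ for a hypothetical symmetric stress tensor; all component values are assumptions for illustration.

```python
import numpy as np

# Hypothetical Cauchy stress tensor (symmetric), components in MPa.
sigma = np.array([[50.0, 10.0,  0.0],
                  [10.0, 20.0,  5.0],
                  [ 0.0,  5.0, -30.0]])

# Unit normal of the imaginary cutting surface.
n = np.array([1.0, 0.0, 0.0])

# Traction across that surface: T = n . sigma
# (for a symmetric tensor this equals sigma . n as well).
T = n @ sigma
print(T)    # [50. 10.  0.] MPa
```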
The linear relation between $T$ and $n$ follows from the fundamental laws of conservation of linear momentum and static equilibrium of forces, and is therefore mathematically exact, for any material and any stress situation. The components of the Cauchy stress tensor at every point in a material satisfy the equilibrium equations (Cauchy's equations of motion for zero acceleration). Moreover, the principle of conservation of angular momentum implies that the stress tensor is symmetric, that is $\sigma_{12}=\sigma_{21}$, $\sigma_{13}=\sigma_{31}$, and $\sigma_{23}=\sigma_{32}$. Therefore, the stress state of the medium at any point and instant can be specified by only six independent parameters, rather than nine. These may be written
$$\begin{bmatrix}\sigma_x&\tau_{xy}&\tau_{xz}\\\tau_{xy}&\sigma_y&\tau_{yz}\\\tau_{xz}&\tau_{yz}&\sigma_z\end{bmatrix}$$
where the elements $\sigma_x,\sigma_y,\sigma_z$ are called the orthogonal normal stresses (relative to the chosen coordinate system), and $\tau_{xy},\tau_{xz},\tau_{yz}$ the orthogonal shear stresses.[citation needed]
The Cauchy stress tensor obeys the tensor transformation law under a change in the system of coordinates. A graphical representation of this transformation law is theMohr's circleof stress distribution.
As a symmetric 3×3 real matrix, the stress tensor $\boldsymbol{\sigma}$ has three mutually orthogonal unit-length eigenvectors $e_1, e_2, e_3$ and three real eigenvalues $\lambda_1, \lambda_2, \lambda_3$, such that $\boldsymbol{\sigma}e_i=\lambda_i e_i$. Therefore, in a coordinate system with axes $e_1, e_2, e_3$, the stress tensor is a diagonal matrix, and has only the three normal components $\lambda_1, \lambda_2, \lambda_3$, the principal stresses. If the three eigenvalues are equal, the stress is an isotropic compression or tension, always perpendicular to any surface; there is no shear stress, and the tensor is a diagonal matrix in any coordinate frame.
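Because the stress tensor is symmetric, its principal stresses and principal directions can be obtained with a symmetric eigendecomposition, as in this sketch (the tensor values are the same hypothetical ones used above).

```python
import numpy as np

# Hypothetical symmetric Cauchy stress tensor (MPa).
sigma = np.array([[50.0, 10.0,  0.0],
                  [10.0, 20.0,  5.0],
                  [ 0.0,  5.0, -30.0]])

# eigh is appropriate for symmetric matrices; eigenvalues are returned
# in ascending order and eigenvectors as the columns of the second result.
principal_stresses, principal_directions = np.linalg.eigh(sigma)

print("principal stresses :", principal_stresses)
print("principal directions (columns):")
print(principal_directions)
```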
In general, stress is not uniformly distributed over a material body, and may vary with time. Therefore, the stress tensor must be defined for each point and each moment, by considering aninfinitesimalparticle of the medium surrounding that point, and taking the average stresses in that particle as being the stresses at the point.
Human-made objects are often made from stock plates of various materials by operations that do not change their essentially two-dimensional character, like cutting, drilling, gentle bending and welding along the edges. The description of stress in such bodies can be simplified by modeling those parts as two-dimensional surfaces rather than three-dimensional bodies.
In that view, one redefines a "particle" as being an infinitesimal patch of the plate's surface, so that the boundary between adjacent particles becomes an infinitesimal line element; both are implicitly extended in the third dimension, normal to (straight through) the plate. "Stress" is then redefined as being a measure of the internal forces between two adjacent "particles" across their common line element, divided by the length of that line. Some components of the stress tensor can be ignored, but since particles are not infinitesimal in the third dimension one can no longer ignore the torque that a particle applies on its neighbors. That torque is modeled as abending stressthat tends to change thecurvatureof the plate. These simplifications may not hold at welds, at sharp bends and creases (where theradius of curvatureis comparable to the thickness of the plate).
The analysis of stress can be considerably simplified also for thin bars, beams or wires of uniform (or smoothly varying) composition and cross-section that are subjected to moderate bending and twisting. For those bodies, one may consider only cross-sections that are perpendicular to the bar's axis, and redefine a "particle" as being a piece of wire with infinitesimal length between two such cross sections. The ordinary stress is then reduced to a scalar (tension or compression of the bar), but one must take into account also abending stress(that tries to change the bar's curvature, in some direction perpendicular to the axis) and atorsional stress(that tries to twist or un-twist it about its axis).
Stress analysisis a branch ofapplied physicsthat covers the determination of the internal distribution of internal forces in solid objects. It is an essential tool in engineering for the study and design of structures such as tunnels, dams, mechanical parts, and structural frames, under prescribed or expected loads. It is also important in many other disciplines; for example, in geology, to study phenomena likeplate tectonics, vulcanism andavalanches; and in biology, to understand the anatomy of living beings.
Stress analysis is generally concerned with objects and structures that can be assumed to be in macroscopicstatic equilibrium. ByNewton's laws of motion, any external forces being applied to such a system must be balanced by internal reaction forces,[15]: 97which are almost always surface contact forces between adjacent particles — that is, as stress.[9]Since every particle needs to be in equilibrium, this reaction stress will generally propagate from particle to particle, creating a stress distribution throughout the body.
The typical problem in stress analysis is to determine these internal stresses, given the external forces that are acting on the system. The latter may bebody forces(such as gravity or magnetic attraction), that act throughout the volume of a material;[16]: 42–81or concentrated loads (such as friction between an axle and abearing, or the weight of a train wheel on a rail), that are imagined to act over a two-dimensional area, or along a line, or at single point.
In stress analysis one normally disregards the physical causes of the forces or the precise nature of the materials. Instead, one assumes that the stresses are related to deformation (and, in non-static problems, to the rate of deformation) of the material by knownconstitutive equations.[17]
Stress analysis may be carried out experimentally, by applying loads to the actual artifact or to scale model, and measuring the resulting stresses, by any of several available methods. This approach is often used for safety certification and monitoring. Most stress is analysed by mathematical methods, especially during design.
The basic stress analysis problem can be formulated byEuler's equations of motionfor continuous bodies (which are consequences ofNewton's lawsfor conservation oflinear momentumandangular momentum) and theEuler-Cauchy stress principle, together with the appropriate constitutive equations. Thus one obtains a system ofpartial differential equationsinvolving the stress tensor field and thestrain tensorfield, as unknown functions to be determined. The external body forces appear as the independent ("right-hand side") term in the differential equations, while the concentrated forces appear as boundary conditions. The basic stress analysis problem is therefore aboundary-value problem.
Stress analysis forelasticstructures is based on thetheory of elasticityandinfinitesimal strain theory. When the applied loads cause permanent deformation, one must use more complicated constitutive equations, that can account for the physical processes involved (plastic flow,fracture,phase change, etc.). Engineered structures are usually designed so the maximum expected stresses are well within the range oflinear elasticity(the generalization ofHooke's lawfor continuous media); that is, the deformations caused by internal stresses are linearly related to them. In this case the differential equations that define the stress tensor are linear, and the problem becomes much easier. For one thing, the stress at any point will be a linear function of the loads, too. For small enough stresses, even non-linear systems can usually be assumed to be linear.
Stress analysis is simplified when the physical dimensions and the distribution of loads allow the structure to be treated as one- or two-dimensional. In the analysis of trusses, for example, the stress field may be assumed to be uniform and uniaxial over each member. Then the differential equations reduce to a finite set of equations (usually linear) with finitely many unknowns. In other contexts one may be able to reduce the three-dimensional problem to a two-dimensional one, and/or replace the general stress and strain tensors by simpler models like uniaxial tension/compression, simple shear, etc.
Still, for two- or three-dimensional cases one must solve a partial differential equation problem.
Analytical or closed-form solutions to the differential equations can be obtained when the geometry, constitutive relations, and boundary conditions are simple enough. Otherwise one must generally resort to numerical approximations such as thefinite element method, thefinite difference method, and theboundary element method.
Other useful stress measures include the first and secondPiola–Kirchhoff stress tensors, theBiot stress tensor, and theKirchhoff stress tensor.
|
https://en.wikipedia.org/wiki/Stress_(mechanics)
|
Incontinuum mechanics, the most commonly used measure ofstressis theCauchy stress tensor, often called simplythestress tensor or "true stress". However, several alternative measures of stress can be defined:[1][2][3]
Consider the situation shown in the following figure. The following definitions use the notations shown in the figure.
In the reference configuration $\Omega_0$, the outward normal to a surface element $d\Gamma_0$ is $\mathbf{N}\equiv\mathbf{n}_0$ and the traction acting on that surface (assuming it deforms like a generic vector belonging to the deformation) is $\mathbf{t}_0$, leading to a force vector $d\mathbf{f}_0$. In the deformed configuration $\Omega$, the surface element changes to $d\Gamma$ with outward normal $\mathbf{n}$ and traction vector $\mathbf{t}$, leading to a force $d\mathbf{f}$. Note that this surface can either be a hypothetical cut inside the body or an actual surface. The quantity $\boldsymbol{F}$ is the deformation gradient tensor, and $J$ is its determinant.
The Cauchy stress (or true stress) is a measure of the force acting on an element of area in the deformed configuration. This tensor is symmetric and is defined via
or
wheret{\displaystyle \mathbf {t} }is the traction andn{\displaystyle \mathbf {n} }is the normal to the surface on which the traction acts.
The quantity,
is called the Kirchhoff stress tensor, with $J$ the determinant of $\boldsymbol{F}$. It is used widely in numerical algorithms in metal plasticity (where there is no change in volume during plastic deformation). It can also be called the weighted Cauchy stress tensor.
The nominal stress $\boldsymbol{N}=\boldsymbol{P}^T$ is the transpose of the first Piola–Kirchhoff stress (PK1 stress, also called engineering stress) $\boldsymbol{P}$ and is defined via
or
This stress is unsymmetric and is a two-point tensor like the deformation gradient. The asymmetry derives from the fact that, as a tensor, it has one index attached to the reference configuration and one to the deformed configuration.[4]
If we pull back $d\mathbf{f}$ to the reference configuration, we obtain the traction acting on that surface before the deformation, $d\mathbf{f}_0$, assuming it behaves like a generic vector belonging to the deformation. In particular we have
or,
The PK2 stress ($\boldsymbol{S}$) is symmetric and is defined via the relation
Therefore,
The Biot stress is useful because it is energy conjugate to the right stretch tensor $\boldsymbol{U}$. The Biot stress is defined as the symmetric part of the tensor $\boldsymbol{P}^T\cdot\boldsymbol{R}$, where $\boldsymbol{R}$ is the rotation tensor obtained from a polar decomposition of the deformation gradient. Therefore, the Biot stress tensor is defined as
The Biot stress is also called the Jaumann stress.
The quantity $\boldsymbol{T}$ does not have any physical interpretation. However, the unsymmetrized Biot stress has the interpretation
From Nanson's formula relating areas in the reference and deformed configurations:
Now,
Hence,
or,
or,
In index notation,
Therefore,
Note that $\boldsymbol{N}$ and $\boldsymbol{P}$ are (generally) not symmetric because $\boldsymbol{F}$ is (generally) not symmetric.
Recall that
and
Therefore,
or (using the symmetry of $\boldsymbol{S}$),
In index notation,
Alternatively, we can write
Recall that
In terms of the 2nd PK stress, we have
Therefore,
In index notation,
Since the Cauchy stress (and hence the Kirchhoff stress) is symmetric, the 2nd PK stress is also symmetric.
Alternatively, we can write
or,
Clearly, from definition of thepush-forwardandpull-backoperations, we have
and
Therefore, $\boldsymbol{S}$ is the pull-back of $\boldsymbol{\tau}$ by $\boldsymbol{F}$, and $\boldsymbol{\tau}$ is the push-forward of $\boldsymbol{S}$.
Key: $J=\det(\boldsymbol{F}),\quad \boldsymbol{C}=\boldsymbol{F}^T\boldsymbol{F}=\boldsymbol{U}^2,\quad \boldsymbol{F}=\boldsymbol{R}\boldsymbol{U},\quad \boldsymbol{R}^T=\boldsymbol{R}^{-1}$, and $\boldsymbol{P}=J\boldsymbol{\sigma}\boldsymbol{F}^{-T},\quad \boldsymbol{\tau}=J\boldsymbol{\sigma},\quad \boldsymbol{S}=J\boldsymbol{F}^{-1}\boldsymbol{\sigma}\boldsymbol{F}^{-T},\quad \boldsymbol{T}=\boldsymbol{R}^T\boldsymbol{P},\quad \boldsymbol{M}=\boldsymbol{C}\boldsymbol{S}$.
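The relations in the key above can be evaluated numerically. The sketch below converts a hypothetical Cauchy stress and deformation gradient into the other stress measures; it assumes NumPy and SciPy (scipy.linalg.polar for the polar decomposition), and the input values are purely illustrative.

```python
import numpy as np
from scipy.linalg import polar

# Hypothetical inputs: symmetric Cauchy stress (MPa) and deformation gradient.
sigma = np.array([[100.0, 10.0,  0.0],
                  [ 10.0, 50.0,  0.0],
                  [  0.0,  0.0, 20.0]])
F = np.array([[1.10, 0.05, 0.00],
              [0.00, 0.95, 0.00],
              [0.00, 0.00, 1.00]])

J = np.linalg.det(F)                 # volume ratio
F_inv = np.linalg.inv(F)
C = F.T @ F                          # right Cauchy-Green tensor, C = U^2
R, U = polar(F)                      # polar decomposition, F = R U

tau = J * sigma                      # Kirchhoff stress
P = J * sigma @ F_inv.T              # 1st Piola-Kirchhoff stress
N = P.T                              # nominal stress
S = J * F_inv @ sigma @ F_inv.T      # 2nd Piola-Kirchhoff stress
T = R.T @ P                          # unsymmetrized Biot stress, T = R^T P
T_biot = 0.5 * (T + T.T)             # Biot stress: symmetric part of T
M = C @ S                            # M = C S

print("J =", J)
print("2nd Piola-Kirchhoff stress:\n", S)
```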
|
https://en.wikipedia.org/wiki/Stress_measures
|
Structural testing is the evaluation of an object (which might be an assembly of objects) to ascertain its characteristics of physical strength. Testing includes evaluating compressive strength, shear strength, and tensile strength, all of which may be conducted to failure or to some satisfactory margin of safety. Evaluations may also be indirect, using techniques such as x-ray, ultrasound, and ground-penetrating radar, among others, to assess the quality of the object.[1][2]
Structural engineersconduct structural testing to evaluate material suitability for a particular application and to evaluate the capacity of existing structures to withstand foreseeable loads.
Items may include buildings (or components), bridges, airplane wings[3]or other types of structures.
|
https://en.wikipedia.org/wiki/Structural_testing
|
Abank stress testis an analysis of a bank's ability to endure a hypothetical adverse economic scenario.
Stress tests became widely used after the2008 financial crisis.[1]
For example, in the U.S. in 2012, an adverse scenario used in stress testing was all of the following:[2]
|
https://en.wikipedia.org/wiki/List_of_bank_stress_tests
|
Incomputer programming, acharacterization test(also known asGolden Master Testing[1]) is a means to describe (characterize) theactualbehavior of an existing piece of software, and therefore protect existing behavior oflegacy codeagainst unintended changes viaautomated testing. This term was coined by Michael Feathers.[2]
The goal of characterization tests is to help developers verify that the modifications made to a reference version of a software system did not modify its behavior in unwanted or undesirable ways. They enable, and provide a safety net for, extending andrefactoringcode that does not have adequateunit tests.
In James Bach's and Michael Bolton's classification of test oracles,[3] this kind of testing corresponds to the historical oracle. In contrast to the usual approach of assertion-based software testing, the outcome of the test is not determined by individual values or properties (which are checked with assertions), but by comparing a complex result of the tested software process as a whole with the result of the same process in a previous version of the software. In a sense, characterization testing inverts traditional testing: traditional tests check that individual properties have certain values (whitelisting them), whereas characterization testing checks that no properties have changed (blacklisting any change).
When creating a characterization test, one must observe what outputs occur for a given set of inputs. Given an observation that the legacy code gives a certain output based on given inputs, then a test can be written that asserts that the output of the legacy code matches the observed result for the given inputs. For example, if one observes that f(3.14) == 42, then this could be created as a characterization test. Then, after modifications to the system, the test can determine if the modifications caused changes in the results when given the same inputs.
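A minimal sketch of such a test follows. The function legacy_discount stands in for an existing, untested piece of legacy code, and the expected values in the test were (hypothetically) obtained simply by running that code and recording what it returned.

```python
def legacy_discount(order_total):
    # Stand-in for convoluted legacy code whose behaviour must be preserved.
    if order_total > 100:
        return round(order_total * 0.9, 2)
    return order_total

def test_characterize_legacy_discount():
    # These expectations describe *observed* behaviour, not a specification.
    assert legacy_discount(50) == 50
    assert legacy_discount(150) == 135.0
    assert legacy_discount(100) == 100     # edge case: boundary input
```

If a later change makes any of these assertions fail, the test has detected a behavioural change; whether that change is desirable is then a human decision.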
Unfortunately, as with any testing, it is generally not possible to create a characterization test for every possible input and output. As such, many people opt for either statement or branch coverage. However, even this can be difficult. Test writers must use their judgment to decide how much testing is appropriate. It is often sufficient to write characterization tests that only cover the specific inputs and outputs that are known to occur, paying special attention to edge cases.
Unlikeregression tests, to which they are very similar, characterization tests do not verify thecorrectbehavior of the code, which can be impossible to determine. Instead they verify the behavior that was observed when they were written. Often no specification or test suite is available, leaving only characterization tests as an option, since the conservative path is to assume that the old behavior is the required behavior. Characterization tests are, essentially, change detectors. It is up to the person analyzing the results to determine if the detected change was expected and/or desirable, or unexpected and/or undesirable.
One of the interesting aspects of characterization tests is that, since they are based on existing code, it's possible to generate some characterization tests automatically. An automated characterization test tool will exercise existing code with a wide range of relevant and/or random input values, record the output values (or state changes) and generate a set of characterization tests. When the generated tests are executed against a new version of the code, they will produce one or more failures/warnings if that version of the code has been modified in a way that changes a previously established behavior.
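One way such a tool might work is sketched below: a golden master of input/output pairs is recorded once from the existing code and replayed against later versions. The file name, the random-input strategy, and the helper functions are all assumptions, not a description of any particular tool.

```python
import json
import random
from pathlib import Path

GOLDEN_FILE = Path("golden_master.json")    # hypothetical location

def record_golden_master(func, n_cases=100, seed=42):
    """Exercise `func` over reproducible random inputs and store its outputs."""
    rng = random.Random(seed)
    cases = {}
    for _ in range(n_cases):
        x = rng.randint(-1000, 1000)
        cases[str(x)] = func(x)        # assumes func returns JSON-serializable values
    GOLDEN_FILE.write_text(json.dumps(cases, indent=2))

def check_against_golden_master(func):
    """Re-run `func` on the recorded inputs and report behavioural changes."""
    cases = json.loads(GOLDEN_FILE.read_text())
    return {x: (expected, func(int(x)))
            for x, expected in cases.items()
            if func(int(x)) != expected}   # empty dict: no change detected
```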
When testing on theGUIlevel, characterization testing can be combined withintelligent monkey testingto create complex test cases that capture use cases and special cases thereof.
Golden Master testing has the following advantages over the traditional assertions-based software testing:
Golden Master testing has the following disadvantages over traditional assertions-based software testing:
|
https://en.wikipedia.org/wiki/Characterization_test
|
Component-based usability testing(CBUT) is a testing approach which aims at empirically testing theusabilityof an interaction component. The latter is defined as an elementary unit of an interactive system, on which behavior-based evaluation is possible. For this, a component needs to have an independent, and by the user perceivable and controllable state, such as a radio button, a slider or a whole word processor application. The CBUT approach can be regarded as part ofcomponent-based software engineeringbranch ofsoftware engineering.
CBUT is based on both software architectural views such asmodel–view–controller(MVC),presentation–abstraction–control(PAC), ICON and CNUCE agent models that split up the software in parts, andcognitive psychologyviews where a person's mental process is split up in smaller mental processes. Both software architecture and cognitive architecture use the principle of hierarchical layering, in which low level processes are more elementary and for humans often more physical in nature, such as the coordination movement of muscle groups. Processes that operate on higher level layers are more abstract and focus on a person's main goal, such as writing an application letter to get a job.
The layered protocol theory (LPT),[1] which is a special version of perceptual control theory (PCT), brings these views together by suggesting that users interact with a system across several layers by sending messages. Users interact with components on high layers by sending messages, such as pressing keys, to components operating on lower layers, which in turn relay a series of these messages as a single high-level message, such as DELETE, to a component on a higher layer. Components operating on higher layers communicate back to the user by sending messages to components operating on lower-level layers. Whereas this layered-interaction model explains how the interaction is established, control loops explain the purpose of the interaction. LPT sees the purpose of the users' behavior as the users' attempt to control their perception, in this case the state of the component they perceive. This means that users will only act if they perceive the component to be in an undesirable state. For example, if a person has an empty glass but wants a full glass of water, he or she will act (e.g. walk to the tap, turning the tap on to fill the glass). The action of filling the glass will continue until the person perceives the glass as full. As interaction with components takes place on several layers, interacting with a single device can include several control loops. The amount of effort put into operating a control loop is seen as an indicator of the usability of an interaction component.
CBUT can be categorized according to two testing paradigms, the single-version testing paradigm (SVTP) and the multiple-versions testing paradigm (MVTP). In SVTP only one version of each interaction component in a system is tested. The focus is to identify interaction components that might reduce the overall usability of the system. SVTP is therefore suitable as part of a software-integration test. In MVTP on the other hand, multiple versions of a single component are tested while the remaining components in the system remain unchanged. The focus is on identifying the version with the highest usability of specific interaction component. MVTP therefore is suitable for component development and selection. Different CBUT methods have been proposed for SVTP and MVTP, which include measures based on recorded user interaction and questionnaires. Whereas in MVTP the recorded data can directly be interpreted by making a comparison between two versions of the interaction component, in SVTP log file analysis is more extensive as interaction with both higher and lower components must be considered.[2]Meta-analysis on the data from several lab experiments that used CBUT measures suggests that these measures can be statistically more powerful than overall (holistic) usability measures.[3]
Whileholisticoriented usability questionnaires such as thesystem usability scale(SUS) examine the usability of a system on several dimensions such as defined inISO 9241Part 11 standard effectiveness, efficiency and satisfaction, a component-based usability questionnaire (CBUQ)[4]is a questionnaire which can be used to evaluate the usability of individual interaction components, such as the volume control or the play control of a MP3 player. To evaluate an interaction component, the six perceived ease-of-use (PEOU) statements from thetechnology acceptance modelare taken with a reference to the interaction component, instead of to the entire system.
Users are asked to rate these statements on a seven-point Likert scale. The average rating on these six statements is regarded as the user's usability rating of the interaction component. Based on lab studies with difficult-to-use interaction components and easy-to-use interaction components, a break-even point of 5.29 on the seven-point Likert scale has been determined.[4] Using a one-sample Student's t-test, it is possible to examine whether users' ratings of an interaction component deviate from this break-even point. Interaction components that receive ratings below this break-even point can be regarded as more comparable to the set of difficult-to-use interaction components, whereas ratings above this break-even point are more comparable to the set of easy-to-use interaction components.
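As a sketch of that comparison, the snippet below runs a one-sample t-test of hypothetical CBUQ ratings against the 5.29 break-even point using SciPy; the rating values are invented for illustration.

```python
from scipy import stats

# Hypothetical per-user CBUQ ratings for one interaction component
# (each value is the mean of the six PEOU statements on a 7-point scale).
ratings = [5.8, 6.2, 5.5, 6.0, 5.9, 6.3, 5.4, 6.1]

BREAK_EVEN = 5.29   # break-even point reported for the CBUQ

result = stats.ttest_1samp(ratings, popmean=BREAK_EVEN)
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
# A significant result with a sample mean above 5.29 places the component
# closer to the "easy to use" reference set; below 5.29, closer to the
# "difficult to use" set.
```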
If engineers wish to evaluate multiple interaction components simultaneously, the CBUQ consists of separate sections, one for each interaction component, each with its own six PEOU statements.
|
https://en.wikipedia.org/wiki/Component-based_usability_testing
|
Design predicatesare a method invented by Thomas McCabe,[1]to quantify the complexity of the integration of two units of software. Each of the four types of design predicates have an associated integration complexity rating. For pieces of code that apply more than one design predicate, integration complexity ratings can be combined.
The sum of the integration complexity for a unit of code, plus one, is the maximum number of test cases necessary to exercise the integration fully, though a test engineer can typically reduce this by covering as many previously uncovered design predicates as possible with each new test. Also, some combinations of design predicates might be logically impossible.
Unit A always calls unit B. This has an integration complexity of 0. For example:
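The original example is not reproduced in this extract, so the following Python sketch is a stand-in; unit_b is a hypothetical callee.

```python
def unit_a():
    # Unconditional call: unit A always calls unit B (integration complexity 0).
    unit_b()
```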
Unit A may or may not call unit B. This integration has a complexity of 1, and needs two tests: one that calls B, and one that doesn't.
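A hypothetical sketch of a conditional call (again with an assumed unit_b) might look like this; the two required tests exercise the call being made and being skipped.

```python
def unit_a(condition):
    # Conditional call: unit B may or may not be called (integration complexity 1).
    if condition:
        unit_b()
```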
This is like a programming language's switch statement. Unit A calls exactly one of several possible units. Integration complexity is n − 1, where n is the number of possible units to call.
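A sketch of a mutually exclusive conditional call with three possible callees follows (all callees hypothetical); with n = 3 targets the integration complexity is 2, so three tests suffice.

```python
def unit_a(selector):
    # Mutually exclusive conditional call: exactly one of the callees runs.
    if selector == "b":
        unit_b()
    elif selector == "c":
        unit_c()
    else:
        unit_d()
```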
In an iterative call, unit A calls unit B at least once, but maybe more. This integration has a complexity of 1. It also requires two tests: one that calls unit B once, and one test that calls it more than once.
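An iterative call could be sketched as below (unit_b hypothetical); the two tests are one pass through the loop versus several.

```python
def unit_a(items):
    # Iterative call: unit B is called at least once, possibly more
    # (integration complexity 1). `items` is assumed to be non-empty,
    # so unit_b runs at least once.
    for item in items:
        unit_b(item)
```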
Any particular integration can combine several types of calls. For example, unit A may or may not call unit B; and if it does, it can call it one or more times. This integration combines a conditional call, with its integration complexity of 1, and an iterative call, with its integration complexity of 1. The combined integration complexity totals 2.
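The combined case, referred to in the following paragraph via the variable someNumber, can be sketched as follows (unit_b is hypothetical): unit B may not be called at all, may be called once, or may be called several times.

```python
def unit_a(someNumber):
    # Conditional + iterative call, combined integration complexity 1 + 1 = 2.
    if someNumber > 0:
        for _ in range(someNumber):
            unit_b()
```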
Since the number of necessary tests is the total integration complexity plus one, this integration would require 3 tests. In one, where someNumber isn't greater than 0, unit B isn't called. In another, where someNumber is 1, unit B is called once. And in the final, someNumber is greater than 1, unit B is called more than once.
|
https://en.wikipedia.org/wiki/Design_predicates
|
Design by contract(DbC), also known ascontract programming,programming by contractanddesign-by-contract programming, is an approach fordesigning software.
It prescribes that software designers should defineformal, precise and verifiable interface specifications forsoftware components, which extend the ordinary definition ofabstract data typeswithpreconditions,postconditionsandinvariants. These specifications are referred to as "contracts", in accordance with aconceptual metaphorwith the conditions and obligations of business contracts.
The DbC approachassumesallclient componentsthat invoke an operation on aserver componentwill meet the preconditions specified as required for that operation.
Where this assumption is considered too risky (as in multi-channel ordistributed computing), theinverse approachis taken, meaning that theserver componenttests that all relevant preconditions hold true (before, or while, processing theclient component's request) and replies with a suitable error message if not.
The term was coined byBertrand Meyerin connection with his design of theEiffel programming languageand first described in various articles starting in 1986[1][2][3]and the two successive editions (1988, 1997) of his bookObject-Oriented Software Construction. Eiffel Software applied for trademark registration forDesign by Contractin December 2003, and it was granted in December 2004.[4][5]The current owner of this trademark is Eiffel Software.[6][7]
Design by contract has its roots in work onformal verification,formal specificationandHoare logic. The original contributions include:
The central idea of DbC is a metaphor on how elements of a software system collaborate with each other on the basis of mutualobligationsandbenefits. The metaphor comes from business life, where a "client" and a "supplier" agree on a "contract" that defines, for example, that:
Similarly, if themethodof aclassinobject-oriented programmingprovides a certain functionality, it may:
The contract is semantically equivalent to aHoare triplewhich formalises the obligations. This can be summarised by the "three questions" that the designer must repeatedly answer in the contract:
Manyprogramming languageshave facilities to makeassertionslike these. However, DbC considers these contracts to be so crucial tosoftware correctnessthat they should be part of the design process. In effect, DbC advocateswriting the assertions first.[citation needed]Contracts can be written bycode comments, enforced by atest suite, or both, even if there is no special language support for contracts.
The notion of a contract extends down to the method/procedure level; the contract for each method will normally contain the following pieces of information:[citation needed]
Subclasses in aninheritance hierarchyare allowed to weaken preconditions (but not strengthen them) and strengthen postconditions and invariants (but not weaken them). These rules approximatebehavioural subtyping.
All class relationships are between client classes and supplier classes. A client class is obliged to make calls to supplier features where the resulting state of the supplier is not violated by the client call. Subsequently, the supplier is obliged to provide a return state and data that does not violate the state requirements of the client.
For instance, a supplier data buffer may require that data is present in the buffer when a delete feature is called. Subsequently, the supplier guarantees to the client that when the delete feature finishes its work, the data item will, indeed, be deleted from the buffer. Another design contract is the class invariant. The class invariant guarantees (for the local class) that the state of the class will be maintained within specified tolerances at the end of each feature execution.
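A minimal sketch of this buffer contract, expressed with Python assertions rather than native Eiffel-style contract syntax, might look like the following; the class and method names are assumptions for illustration.

```python
class DataBuffer:
    """Illustrative supplier whose delete feature carries a contract."""

    def __init__(self, capacity=10):
        self._items = []
        self._capacity = capacity
        assert self._invariant()

    def _invariant(self):
        # Class invariant: the buffer never holds more than `capacity` items.
        return 0 <= len(self._items) <= self._capacity

    def delete(self, item):
        # Precondition (client's obligation): the item must be present.
        assert item in self._items, "precondition violated: item not in buffer"
        before = self._items.count(item)
        self._items.remove(item)
        # Postcondition (supplier's guarantee): one occurrence was removed.
        assert self._items.count(item) == before - 1, "postcondition violated"
        # The class invariant must still hold when the feature finishes.
        assert self._invariant()
```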
When using contracts, a supplier should not try to verify that the contract conditions are satisfied—a practice known asoffensive programming—the general idea being that code should "fail hard", with contract verification being the safety net.
DbC's "fail hard" property simplifies the debugging of contract behavior, as the intended behaviour of each method is clearly specified.
This approach differs substantially from that ofdefensive programming, where the supplier is responsible for figuring out what to do when a precondition is broken. More often than not, the supplier throws an exception to inform the client that the precondition has been broken, and in both cases—DbC and defensive programming alike—the client must figure out how to respond to that. In such cases, DbC makes the supplier's job easier.
Design by contract also defines criteria for correctness for a software module:
Design by contract can also facilitate code reuse, since the contract for each piece of code is fully documented. The contracts for a module can be regarded as a form ofsoftware documentationfor the behavior of that module.
Contract conditions should never be violated during execution of a bug-free program. Contracts are therefore typically only checked in debug mode during software development. Later at release, the contract checks are disabled to maximize performance.
In many programming languages, contracts are implemented withassert. Asserts are by default compiled away in release mode in C/C++, and similarly deactivated in C#[8]and Java.
Launching the Python interpreter with "-O" (for "optimize") as an argument will likewise cause the Python code generator to not emit any bytecode for asserts.[9]
This effectively eliminates the run-time costs of asserts in production code—irrespective of the number and computational expense of asserts used in development—as no such instructions will be included in production by the compiler.
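A tiny sketch of this effect in Python: the assert below acts as a contract check during development but is stripped out entirely when the interpreter is started with -O, so it costs nothing in production (the function and values are illustrative).

```python
# contracts_demo.py
def withdraw(balance, amount):
    # Precondition written as an assert; removed when run with `python -O`.
    assert 0 < amount <= balance, "precondition violated"
    return balance - amount

if __name__ == "__main__":
    # Under a normal run this raises AssertionError; under `python -O`
    # the assert is compiled away and -50 is printed instead.
    print(withdraw(100, 150))
```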
Design by contract does not replace regular testing strategies, such asunit testing,integration testingandsystem testing. Rather, it complements external testing with internal self-tests that can be activated both for isolated tests and in production code during a test-phase.
The advantage of internal self-tests is that they can detect errors before they manifest themselves as invalid results observed by the client. This leads to earlier and more specific error detection.
The use of assertions can be considered to be a form oftest oracle, a way of testing the design by contract implementation.
Languages that implement most DbC features natively include:
Additionally, the standard method combination in theCommon Lisp Object Systemhas the method qualifiers:before,:afterand:aroundthat allow writing contracts as auxiliary methods, among other uses.
|
https://en.wikipedia.org/wiki/Design_by_contract
|
Extreme programming(XP) is asoftware development methodologyintended to improvesoftware qualityand responsiveness to changing customer requirements. As a type ofagile software development,[1][2][3]it advocates frequentreleasesin short development cycles, intended to improve productivity and introduce checkpoints at which new customer requirements can be adopted.
Other elements of extreme programming include programmingin pairsor doing extensivecode review,unit testingof all code,not programming features until they are actually needed, a flat management structure, code simplicity and clarity, expecting changes in the customer's requirements as time passes and the problem is better understood, and frequent communication with the customer and among programmers.[2][3][4]The methodology takes its name from the idea that the beneficial elements of traditional software engineering practices are taken to "extreme" levels. As an example,code reviewsare considered a beneficial practice; taken to the extreme, code can be reviewedcontinuously(i.e. the practice ofpair programming).
Kent Beckdeveloped extreme programming during his work on theChrysler Comprehensive Compensation System(C3) payrollproject.[5]Beck became the C3project leaderin March 1996. He began to refine the development methodology used in the project and wrote a book on the methodology (Extreme Programming Explained, published in October 1999).[5]Chryslercancelled the C3 project in February 2000, after seven years, whenDaimler-Benzacquired the company.[6]Ward Cunninghamwas another major influence on XP.
Many extreme-programming practices have been around for some time; the methodology takes "best practices" to extreme levels. For example, the "practice of test-first development, planning and writing tests before each micro-increment" was used as early as NASA'sProject Mercury, in the early 1960s.[7]To shorten the total development time, some formal test documents (such as foracceptance testing) have been developed in parallel with (or shortly before) the software being ready for testing. A NASA independent test group can write the test procedures, based on formal requirements and logical limits, before programmers write the software and integrate it with the hardware. XP takes this concept to the extreme level, writing automated tests (sometimes inside software modules) which validate the operation of even small sections of software coding, rather than only testing the larger features.
Two major influences shaped software development in the 1990s:
Rapidly changing requirements demanded shorterproduct life-cycles, and often clashed with traditional methods of software development.
The Chrysler Comprehensive Compensation System (C3) started in order to determine the best way to use object technologies, using the payroll systems at Chrysler as the object of research, withSmalltalkas the language andGemStoneas thedata access layer. Chrysler brought inKent Beck,[5]a prominent Smalltalk practitioner, to doperformance tuningon the system, but his role expanded as he noted several problems with the development process. He took this opportunity to propose and implement some changes in development practices - based on his work with his frequent collaborator,Ward Cunningham. Beck describes the early conception of the methods:[8]
The first time I was asked to lead a team, I asked them to do a little bit of the things I thought were sensible, like testing and reviews. The second time there was a lot more on the line. I thought, "Damn the torpedoes, at least this will make a good article," [and] asked the team to crank up all the knobs to 10 on the things I thought were essential and leave out everything else.
Beck invitedRon Jeffriesto the project to help develop and refine these methods. Jeffries thereafter acted as a coach to instill the practices as habits in the C3 team.
Information about the principles and practices behind XP disseminated to the wider world through discussions on the original wiki, Cunningham's WikiWikiWeb. Various contributors discussed and expanded upon the ideas, and some spin-off methodologies resulted (see agile software development). Also, XP concepts have been explained, for several years, using a hypertext system map on the XP website at http://www.extremeprogramming.org, c. 1999.
Beck edited a series of books on XP, beginning with his ownExtreme Programming Explained(1999,ISBN0-201-61641-6), spreading his ideas to a much larger audience. Authors in the series went through various aspects attending XP and its practices. The series included a book critical of the practices.
XP generated significant interest among software communities in the late 1990s and early 2000s, seeing adoption in a number of environments radically different from its origins.
The high discipline required by the original practices often went by the wayside, causing some of these practices, such as those thought too rigid, to be deprecated or reduced, or even left unfinished, on individual sites. For example, the practice of end-of-dayintegration testsfor a particular project could be changed to an end-of-week schedule, or simply reduced to testing on mutually agreed dates. Such a more relaxed schedule could avoid people feeling rushed to generate artificial stubs just to pass the end-of-day testing. A less-rigid schedule allows, instead, the development of complex features over a period of several days.
Meanwhile, other agile-development practices have not stood still, and as of 2019[update]XP continues to evolve, assimilating more lessons from experiences in the field, to use other practices. In the second edition ofExtreme Programming Explained(November 2004), five years after the first edition, Beck added more values and practices and differentiated between primary and corollary practices.
Extreme Programming Explaineddescribes extreme programming as a software-development discipline that organizes people to produce higher-quality software more productively.
XP attempts to reduce the cost of changes in requirements by having multiple short development cycles, rather than a long one.
In this doctrine, changes are a natural, inescapable and desirable aspect of software-development projects, and should be planned for, instead of attempting to define a stable set of requirements.
Extreme programming also introduces a number of basic values, principles and practices on top of the agile methodology.
XP describes four basic activities that are performed within the software development process: coding, testing, listening, and designing. Each of those activities is described below.
The advocates of XP argue that the only truly important product of the system development process is code – software instructions that a computer can interpret. Without code, there is no working product.
Coding can be used to figure out the most suitable solution. Coding can also help to communicate thoughts about programming problems. A programmer dealing with a complex programming problem, or finding it hard to explain the solution to fellow programmers, might code it in a simplified manner and use the code to demonstrate what they mean. Code, say the proponents of this position, is always clear and concise and cannot be interpreted in more than one way. Other programmers can give feedback on this code by also coding their thoughts.
Testing is central to extreme programming.[9]Extreme programming's approach is that if a little testing can eliminate a few flaws, a lot of testing can eliminate many more flaws.
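For illustration, a test-first micro-increment might start from a small automated check written before the code it exercises; the sketch below uses Python's built-in unittest module, and the shipping_cost function and its pricing rules are purely hypothetical:

    import unittest

    def shipping_cost(weight_kg):
        # Minimal implementation, written only after the tests below were drafted.
        if weight_kg <= 0:
            raise ValueError("weight must be positive")
        return 5.00 if weight_kg <= 1 else 5.00 + 2.50 * (weight_kg - 1)

    class ShippingCostTest(unittest.TestCase):
        def test_minimum_charge_up_to_one_kilogram(self):
            self.assertEqual(shipping_cost(0.5), 5.00)

        def test_heavier_parcels_pay_per_extra_kilogram(self):
            self.assertEqual(shipping_cost(3), 10.00)

        def test_non_positive_weight_is_rejected(self):
            with self.assertRaises(ValueError):
                shipping_cost(0)

    if __name__ == "__main__":
        unittest.main()

Each such test targets a small section of behaviour, so a failure points directly at the micro-increment that broke it.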
System-wideintegration testingwas encouraged, initially, as a daily end-of-day activity, for early detection of incompatible interfaces, to reconnect before the separate sections diverged widely from coherent functionality. However, system-wide integration testing has been reduced, to weekly, or less often, depending on the stability of the overall interfaces in the system.[citation needed]
Programmers must listen to what the customers need the system to do, what "business logic" is needed. They must understand these needs well enough to give the customer feedback about the technical aspects of how the problem might be solved, or cannot be solved. Communication between the customer and programmer is further addressed in theplanning game.
From the point of view of simplicity, of course one could say that system development doesn't need more than coding, testing and listening. If those activities are performed well, the result should always be a system that works. In practice, this will not work. One can come a long way without designing but at a given time one will get stuck. The system becomes too complex and the dependencies within the system cease to be clear. One can avoid this by creating a design structure that organizes the logic in the system. Good design will avoid many dependencies within a system; this means that changing one part of the system will not affect other parts of the system.[citation needed]
Extreme programming initially recognized four values in 1999: communication, simplicity, feedback, and courage. A new value, respect, was added in the second edition ofExtreme Programming Explained. Those five values are described below.
Building software systems requires communicating system requirements to the developers of the system. In formal software development methodologies, this task is accomplished through documentation. Extreme programming techniques can be viewed as methods for rapidly building and disseminating institutional knowledge among members of a development team. The goal is to give all developers a shared view of the system which matches the view held by the users of the system. To this end, extreme programming favors simple designs, common metaphors, collaboration of users and programmers, frequent verbal communication, and feedback.
Extreme programming encourages starting with the simplest solution. Extra functionality can then be added later. The difference between this approach and more conventional system development methods is the focus on designing and coding for the needs of today instead of those of tomorrow, next week, or next month. This is sometimes summed up as the "You aren't gonna need it" (YAGNI) approach.[10]Proponents of XP acknowledge the disadvantage that this can sometimes entail more effort tomorrow to change the system; their claim is that this is more than compensated for by the advantage of not investing in possible future requirements that might change before they become relevant. Coding and designing for uncertain future requirements implies the risk of spending resources on something that might not be needed, while perhaps delaying crucial features. Related to the "communication" value, simplicity in design and coding should improve the quality of communication. A simple design with very simple code could be easily understood by most programmers in the team.
Within extreme programming, feedback relates to different dimensions of the system development:
Feedback is closely related to communication and simplicity. Flaws in the system are easily communicated by writing a unit test that proves a certain piece of code will break. The direct feedback from the system tells programmers to recode this part. A customer is able to test the system periodically according to the functional requirements, known asuser stories.[5]To quoteKent Beck, "Optimism is an occupational hazard of programming. Feedback is the treatment."[11]
Several practices embody courage. One is the commandment to always design and code for today and not for tomorrow. This is an effort to avoid getting bogged down in design and requiring a lot of effort to implement anything else. Courage enables developers to feel comfortable withrefactoringtheir code when necessary.[5]This means reviewing the existing system and modifying it so that future changes can be implemented more easily. Another example of courage is knowing when to throw code away: courage to remove source code that is obsolete, no matter how much effort was used to create that source code. Also, courage means persistence: a programmer might be stuck on a complex problem for an entire day, then solve the problem quickly the next day, but only if they are persistent.
The respect value includes respect for others as well as self-respect. Programmers should never commit changes that break compilation, that make existing unit-tests fail, or that otherwise delay the work of their peers. Members respect their own work by always striving for high quality and seeking for the best design for the solution at hand through refactoring.
Adopting the four earlier values leads to respect gained from others in the team. Nobody on the team should feel unappreciated or ignored. This ensures a high level of motivation and encourages loyalty toward the team and toward the goal of the project. This value is dependent upon the other values, and is oriented toward teamwork.
The first version of rules for XP was published in 1999 by Don Wells[12]at the XP website. 29 rules are given in the categories of planning, managing, designing, coding, and testing. Planning, managing and designing are called out explicitly to counter claims that XP doesn't support those activities.
Another version of XP rules was proposed by Ken Auer[13]in XP/Agile Universe 2003. He felt XP was defined by its rules, not its practices (which are subject to more variation and ambiguity). He defined two categories: "Rules of Engagement" which dictate the environment in which software development can take place effectively, and "Rules of Play" which define the minute-by-minute activities and rules within the framework of the Rules of Engagement.
Here are some of the rules (incomplete):
Coding
Testing
The principles that form the basis of XP are based on the values just described and are intended to foster decisions in a system development project. The principles are intended to be more concrete than the values and more easily translated to guidance in a practical situation.
Extreme programming sees feedback as most useful if it is done frequently and promptly. It stresses that minimal delay between an action and its feedback is critical to learning and making changes. Unlike traditional system development methods, contact with the customer occurs in more frequent iterations. The customer has clear insight into the system that is being developed, and can give feedback and steer the development as needed. With frequent feedback from the customer, a mistaken design decision made by the developer will be noticed and corrected quickly, before the developer spends much time implementing it.
Unit tests contribute to the rapid feedback principle. When writing code, running the unit test provides direct feedback as to how the system reacts to the changes made. This includes running not only the unit tests that test the developer's code, but also all unit tests against all the software, using an automated process that can be initiated by a single command. That way, if the developer's changes cause a failure in some other portion of the system that the developer knows little or nothing about, the automated all-unit-test suite will reveal the failure immediately, alerting the developer that their change is incompatible with other parts of the system and must be removed or modified. Under traditional development practices, the absence of an automated, comprehensive unit-test suite meant that such a code change, assumed harmless by the developer, would have been left in place, appearing only during integration testing or, worse, only in production. Determining which code change caused the problem, among all the changes made by all the developers during the weeks or even months before integration testing, was a formidable task.
This is about treating every problem as if its solution were "extremely simple". Traditional system development methods say to plan for the future and to code for reusability. Extreme programming rejects these ideas.
The advocates of extreme programming say that making big changes all at once does not work. Extreme programming applies incremental changes: for example, a system might have small releases every three weeks. When many little steps are made, the customer has more control over the development process and the system that is being developed.
The principle of embracing change is about not working against changes but embracing them. For instance, if at one of the iterative meetings it appears that the customer's requirements have changed dramatically, programmers are to embrace this and plan the new requirements for the next iteration.
Extreme programming has been described as having 12 practices, grouped into four areas: fine-scale feedback, continuous process, shared understanding, and programmer welfare.
The practices in XP have been heavily debated.[5] Proponents of extreme programming claim that by having the on-site customer[5] request changes informally, the process becomes flexible, and saves the cost of formal overhead. Critics of XP claim this can lead to costly rework and project scope creep beyond what was previously agreed or funded.[citation needed]
Change-control boards are a sign that there are potential conflicts in project objectives and constraints between multiple users. XP's expedited methods are somewhat dependent on programmers being able to assume a unified client viewpoint so the programmer can concentrate on coding, rather than documentation of compromise objectives and constraints.[14]This also applies when multiple programming organizations are involved, particularly organizations which compete for shares of projects.[citation needed]
Other potentially controversial aspects of extreme programming include:
Critics have noted several potential drawbacks,[5]including problems with unstable requirements, no documented compromises of user conflicts, and a lack of an overall design specification or document.
Thoughtworkshas claimed reasonable success on distributed XP projects with up to sixty people.[citation needed]
In 2004, industrial extreme programming (IXP)[15]was introduced as an evolution of XP. It is intended to bring the ability to work in large and distributed teams. It now has 23 practices and flexible values.
In 2003,Matt Stephensand Doug Rosenberg publishedExtreme Programming Refactored: The Case Against XP, which questioned the value of the XP process and suggested ways in which it could be improved.[6]This triggered a lengthy debate in articles, Internet newsgroups, and web-site chat areas. The core argument of the book is that XP's practices are interdependent but that few practical organizations are willing/able to adopt all the practices; therefore the entire process fails. The book also makes other criticisms, and it draws a likeness of XP's "collective ownership" model to socialism in a negative manner.
Certain aspects of XP have changed since the publication ofExtreme Programming Refactored; in particular, XP now accommodates modifications to the practices as long as the required objectives are still met. XP also uses increasingly generic terms for processes. Some argue that these changes invalidate previous criticisms; others claim that this is simply watering the process down.
Other authors have tried to reconcile XP with the older methodologies in order to form a unified methodology. Some of these methodologies XP had sought to replace, such as the waterfall methodology; examples include project lifecycles such as Waterfall and Rapid Application Development (RAD). JPMorgan Chase & Co. tried combining XP with the computer programming methods of capability maturity model integration (CMMI) and Six Sigma. They found that the three systems reinforced each other well, leading to better development, and did not mutually contradict.[16]
Extreme programming's initial buzz and controversial tenets, such aspair programmingandcontinuous design, have attracted particular criticisms, such as the ones coming from McBreen,[17]Boehm and Turner,[18]Matt Stephens and Doug Rosenberg.[19]Many of the criticisms, however, are believed by Agile practitioners to be misunderstandings of agile development.[20]
In particular, extreme programming has been reviewed and critiqued in Matt Stephens and Doug Rosenberg's Extreme Programming Refactored.[6]
|
https://en.wikipedia.org/wiki/Extreme_programming
|
Insoftware development,functional testingis a form ofsoftwaresystem testingthat verifies whether a system meets itsfunctional requirements.[1][2]
Generally, functional testing isblack-box, meaning the internal program structure is ignored (unlike forwhite-box testing).[3]
Sometimes, functional testing is aquality assurance(QA) process.[4]
Functional testing differs fromacceptance testing. Functional testingverifiesa program by checking it against design documentation or specification[citation needed], while acceptance testingvalidatesa program by checking it against the published user or system requirements.[3]
As a form ofsystem testing, functional testing tests slices of functionality of the whole system.
Despite similar naming, functional testing is not testing the code of a singlefunction.
The concept of incorporating testing earlier in the delivery cycle is not restricted to functional testing.[5]
In fixture testing, whileICT fixturestest each individual component on a PCB, functional test fixtures assess the entire board's functionality by applying power and verifying that the system operates correctly.[6]
Functional testing includes but is not limited to:[3]
Functional testing typically involves six steps[citation needed]
|
https://en.wikipedia.org/wiki/Functional_testing
|
Integration testing, also calledintegration and testing(I&T), is a form ofsoftware testingin which multiple parts of asoftware systemare tested as a group.
Integration testing describes tests that are run at the integration-level to contrast testing at theunitorsystemlevel.
Often, integration testing is conducted to evaluate thecomplianceof a component withfunctional requirements.[1]
In a structured development process, integration testing takes as its inputmodulesthat have been unit tested, groups them in larger aggregates, applies tests defined in an integrationtest plan, and delivers as output test results as a step leading to system testing.[2]
Some different types of integration testing are big-bang, mixed (sandwich), risky-hardest,top-down, and bottom-up. Other Integration Patterns[3]are: collaboration integration, backbone integration, layer integration, client-server integration, distributed services integration and high-frequency integration.
In big-bang testing, most of the developed modules are coupled together to form a complete software system or major part of the system and then used for integration testing. This method is very effective for saving time in the integration testing process[citation needed]. However, if the test cases and their results are not recorded properly, the entire integration process will be more complicated and may prevent the testing team from achieving the goal of integration testing.
In bottom-up testing, the lowest level components are tested first, and are then used to facilitate the testing of higher level components. The process is repeated until the component at the top of the hierarchy is tested. All the bottom or low-level modules, procedures or functions are integrated and then tested. After the integration testing of lower level integrated modules, the next level of modules will be formed and can be used for integration testing. This approach is helpful only when all or most of the modules of the same development level are ready. This method also helps to determine the levels of software developed and makes it easier to report testing progress in the form of a percentage.
In top-down testing, the top integrated modules are tested first and the branch of the module is tested step by step until the end of the related module.
Sandwich testing combines top-down testing with bottom up testing. One limitation to this sort of testing is that any conditions not stated in specified integration tests, outside of the confirmation of the execution of design items, will generally not be tested.
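As a rough illustration of the top-down approach, a higher-level module can be integrated and tested while a lower-level component it depends on is replaced by a stub; the OrderService and PaymentGateway names below are hypothetical, and the stub is built with Python's unittest.mock:

    import unittest
    from unittest import mock

    class OrderService:
        """Higher-level module under integration test."""
        def __init__(self, payment_gateway):
            self.payment_gateway = payment_gateway

        def place_order(self, amount):
            receipt = self.payment_gateway.charge(amount)   # call into the lower-level module
            return {"status": "placed", "receipt": receipt}

    class TopDownIntegrationTest(unittest.TestCase):
        def test_order_placement_uses_payment_gateway(self):
            # The real payment gateway is not integrated yet, so a stub stands in for it.
            gateway_stub = mock.Mock()
            gateway_stub.charge.return_value = "receipt-001"
            service = OrderService(gateway_stub)
            result = service.place_order(42.0)
            gateway_stub.charge.assert_called_once_with(42.0)
            self.assertEqual(result["status"], "placed")

    if __name__ == "__main__":
        unittest.main()

In bottom-up testing the roles are reversed: the lower-level module is real and a driver exercises it in place of the still-missing higher levels.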
|
https://en.wikipedia.org/wiki/Integration_testing
|
This is a list of notable test automation frameworks commonly used for unit testing. Such frameworks are not limited to unit-level testing; they can also be used for integration and system level testing.
Frameworks are grouped below. For unit testing, a framework must be the same language as thesource codeunder test, and therefore, grouping frameworks by language is valuable. But some groupings transcend language. For example, .NET groups frameworks that work for any language supported for .NET, and HTTP groups frameworks that test an HTTP server regardless of the implementation language on the server.
The columns in the tables below are described here.
Some columns do not apply to some groupings and are therefore omitted from that grouping's table.
ForApache Anttasks.
ForAppleScript.
For unit testing frameworks for VB.NET, see.NETlanguages.
Simple, portable C unit testing framework, single header file
Generators available across another component named TBExtreme
See.NETlanguages below.
MPIcolumn: Whether supports message passing via MPI - commonly used for high-performance scientific computing
All entries underJavamay also be used in Groovy.
Back-to-back tests can be executed automatically between MiL and SiL.
|
https://en.wikipedia.org/wiki/List_of_unit_testing_frameworks
|
xUnitis a label used for anautomated testingsoftware frameworkthat shares significant structure and functionality that is traceable to a common progenitorSUnit.
The SUnit framework was ported to Java by Kent Beck and Erich Gamma as JUnit, which gained wide popularity. Adaptations to other languages were also popular, which led some to claim that the structured, object-oriented style works well with popular languages including Java and C#.
The name of an adaptation is often a variation of "SUnit" with the "S" replaced with an abbreviation of the target language name. For example, JUnit for Java and RUnit forR. The term "xUnit" refers to any such adaptation where "x" is a placeholder for the language-specific prefix.
The xUnit frameworks are often used forunit testing– testing an isolated unit of code – but can be used for any level ofsoftware testingincludingintegrationandsystem.
An xUnit framework has the following generalarchitecture.[1]
Atest caseis the smallest part of a test that generally encodes a simple path through the software under test. The test case code prepares input data and environmental state, invokes the software under test and verifies expected results.
A programmer writes the code for each test case.
A test case is implemented with one or moreassertionsthat validate expected results.
Generally, the framework provides assertion functionality. A framework may provide a way to use custom assertions.
Atest suiteis a collection of related test cases. They share a framework which allows for reuse of environment setup and cleanup code.
Generally, a test runner may run the cases of a suite in any order so the programmer should not depend on top-to-bottom execution order.
Atest fixture(also known as a test context) provides the environment for each test case of a suite. Generally, a fixture is configured to setup a known, good,runtimeenvironment before tests run, and to cleanup the environment after.
The fixture is configured with one or more functions that setup and cleanup state. The test runner runs each setup function before each case and runs each cleanup function after.
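Python's unittest module follows this architecture; in the sketch below the fixture's setUp and tearDown run around every test case, and the assertions are supplied by the framework (the temporary-file scenario is only illustrative):

    import os
    import tempfile
    import unittest

    class TempFileTest(unittest.TestCase):
        def setUp(self):
            # Fixture setup: establish a known, good environment before each test case.
            fd, self.path = tempfile.mkstemp()
            os.close(fd)
            with open(self.path, "w") as f:
                f.write("hello")

        def tearDown(self):
            # Fixture cleanup: restore the environment after each test case.
            os.remove(self.path)

        def test_file_contains_greeting(self):
            with open(self.path) as f:
                self.assertEqual(f.read(), "hello")   # assertion provided by the framework

        def test_file_exists(self):
            self.assertTrue(os.path.exists(self.path))

    if __name__ == "__main__":
        unittest.main()   # hands the suite to a test runner that reports results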
A test runner is a program that runs tests and reports results.[2]The program is often part of a framework.
A test runner may produce results in various formats. Often, a common and default format is human-readable, plain text. Additionally, the runner may produce structured output. Some xUnit adaptations (e.g., JUnit) can output XML that can be used by a continuous integration system such as Jenkins or Atlassian Bamboo.
|
https://en.wikipedia.org/wiki/XUnit
|
Interactive application security testing(abbreviated asIAST)[1]is asecurity testingmethod that detects software vulnerabilities by interaction with the program coupled with observation and sensors.[2][3]The tool was launched by several application security companies.[4]It is distinct fromstatic application security testing, which does not interact with the program, anddynamic application security testing, which considers the program as ablack box. It may be considered a mix of both.[5]
|
https://en.wikipedia.org/wiki/Interactive_application_security_testing
|
Black-box testing,sometimes referred to asspecification-based testing,[1]is a method ofsoftware testingthat examines the functionality of an application without peering into its internal structures or workings. This method of test can be applied virtually to every level of software testing:unit,integration,systemandacceptance. Black-box testing is also used as a method inpenetration testing, where anethical hackersimulates an external hacking or cyber warfare attack with no knowledge of the system being attacked.
Specification-based testing aims to test the functionality of software according to the applicable requirements.[2]This level of testing usually requires thoroughtest casesto be provided to the tester, who then can simply verify that for a given input, the output value (or behavior), either "is" or "is not" the same as the expected value specified in the test case.
Specific knowledge of the application's code, internal structure and programming knowledge in general is not required.[3]The tester is aware ofwhatthe software is supposed to do but is not aware ofhowit does it. For instance, the tester is aware that a particular input returns a certain, invariable output but is not aware ofhowthe software produces the output in the first place.[4]
Test casesare built around specifications andrequirements, i.e., what the application is supposed to do. Test cases are generally derived from external descriptions of the software, including specifications, requirements and design parameters. Although the tests used are primarilyfunctionalin nature,non-functionaltests may also be used. The test designer selects both valid and invalid inputs and determines the correct output, often with the help of atest oracleor a previous result that is known to be good, without any knowledge of the test object's internal structure.
Typical black-box test design techniques includedecision tabletesting,all-pairs testing,equivalence partitioning,boundary value analysis,cause–effect graph,error guessing,state transitiontesting,use casetesting,user storytesting,domain analysis, and syntax testing.[5][6]
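For example, equivalence partitioning and boundary value analysis derive cases purely from the specification. Assuming a hypothetical function is_valid_percentage(n) that is specified to accept integers from 0 to 100 inclusive, a black-box test might cover one representative per partition plus the boundaries, without ever reading the implementation:

    import unittest

    def is_valid_percentage(n):
        # The implementation is irrelevant to the black-box tester;
        # only the specification (accept integers 0..100) matters.
        return isinstance(n, int) and 0 <= n <= 100

    class PercentageSpecificationTest(unittest.TestCase):
        def test_valid_partition_and_boundaries(self):
            for value in (0, 1, 50, 99, 100):      # boundaries plus a mid-range representative
                self.assertTrue(is_valid_percentage(value))

        def test_invalid_partitions(self):
            for value in (-1, 101, 1000):          # just outside each boundary, and far outside
                self.assertFalse(is_valid_percentage(value))

    if __name__ == "__main__":
        unittest.main()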
Test coverage refers to the percentage of software requirements that are tested by black-box testing for a system or application.[7] This is in contrast with code coverage, which examines the inner workings of a program and measures the degree to which the source code of a program is executed when a test suite is run.[8] Measuring test coverage makes it possible to quickly detect and eliminate defects, to create a more comprehensive test suite, and to remove tests that are not relevant for the given requirements.[8][9]
Black-box testing may be necessary to assure correct functionality, but it is insufficient to guard against complex or high-risk situations.[10]An advantage of the black box technique is that no programming knowledge is required. Whatever biases the programmers may have had, the tester likely has a different set and may emphasize different areas of functionality. On the other hand, black-box testing has been said to be "like a walk in a dark labyrinth without a flashlight."[11]Because they do not examine the source code, there are situations when a tester writes many test cases to check something that could have been tested by only one test case or leaves some parts of the program untested.
|
https://en.wikipedia.org/wiki/Black-box_testing
|
Gray-box testing(International English spelling:grey-box testing) is a combination ofwhite-box testingandblack-box testing. The aim of this testing is to search for the defects, if any, due to improper structure or improper usage of applications.[1][2]
A black-box tester is unaware of the internal structure of the application to be tested, while a white-box tester has access to the internal structure of the application. A gray-box tester partially knows the internal structure, which includes access to the documentation of internal data structures as well as the algorithms used.[3]
Gray-box testers require both high-level and detailed documents describing the application, which they collect in order to define test cases.[4]
Gray-box testing is beneficial because it takes the straightforward technique of black-box testing and combines it with the code-targeted systems in white-box testing.
Gray-box testing is based on requirement test case generation because it presents all the conditions before the program is tested by using the assertion method. A requirementspecification languageis used to make it easy to understand the requirements and verify its correctness.[5]
Object-oriented software consists primarily of objects, where objects are single indivisible units having executable code and/or data. Some assumptions, stated below, are needed for the application of gray-box testing.
Cem Kanerdefines "gray-box testing as involving inputs and outputs, but test design is educated by information about the code or the program operation of a kind that would normally be out of view of the tester".[9]Gray-box testing techniques are:
The distributed nature of Web services allows gray-box testing to detect defects within a service-oriented architecture (SOA). White-box testing is not suitable for Web services, as it deals directly with the internal structures. White-box testing can be used for state-of-the-art methods; for example, message mutation, which generates automatic tests for large arrays to exercise exception-handling states and flow without source code or binaries. Such a strategy is useful to push gray-box testing nearer to the outcomes of white-box testing.
|
https://en.wikipedia.org/wiki/Gray-box_testing
|
Incryptography, thewhite-boxmodel refers to an extreme attack scenario, in which an adversary has full unrestricted access to a cryptographic implementation, most commonly of ablock ciphersuch as theAdvanced Encryption Standard(AES). A variety of security goals may be posed (see the section below), the most fundamental being "unbreakability", requiring that any (bounded) attacker should not be able to extract the secret key hardcoded in the implementation, while at the same time the implementation must be fully functional. In contrast, the black-box model only provides an oracle access to the analyzed cryptographic primitive (in the form of encryption and/or decryption queries). There is also a model in-between, the so-called gray-box model, which corresponds to additional information leakage from the implementation, more commonly referred to asside-channelleakage.
White-box cryptographyis a practice and study of techniques for designing and attacking white-box implementations. It has many applications, includingdigital rights management(DRM),pay television, protection of cryptographic keys in the presence ofmalware,[1]mobile payments andcryptocurrencywallets. Examples of DRM systems employing white-box implementations includeCSS,Widevine.
White-box cryptography is closely related to the more general notions ofobfuscation, in particular, toBlack-box obfuscation, proven to be impossible, and toIndistinguishability obfuscation, constructed recently under well-founded assumptions but so far being infeasible to implement in practice.[2]
As of January 2023, there are no publicly known unbroken white-box designs of standard symmetric encryption schemes. On the other hand, there exist many unbroken white-box implementations ofdedicatedblock ciphers designed specifically to achieve incompressibility (see§ Security goals).
Depending on the application, different security goals may be required from a white-box implementation. Specifically, forsymmetric-key algorithmsthe following are distinguished:[3]
The white-box model, together with initial attempts at white-box DES and AES implementations, was first proposed by Chow, Eisen, Johnson and van Oorschot in 2003.[1][8] The designs were based on representing the cipher as a network of lookup tables and obfuscating the tables by composing them with small (4- or 8-bit) random encodings. Such protection satisfied a property that each single obfuscated table individually does not contain any information about the secret key. Therefore, a potential attacker has to combine several tables in their analysis.
The first two schemes were broken in 2004 by Billet, Gilbert, and Ech-Chatbi using structural cryptanalysis.[9]The attack was subsequently called "the BGE attack".
The numerous consequent design attempts (2005-2022)[10]were quickly broken by practical dedicated attacks.[11]
In 2016, Bos, Hubain, Michiels and Teuwen showed that an adaptation of standard side-channelpower analysisattacks can be used to efficiently and fully automatically break most existing white-box designs.[12]This result created a new research direction about generic attacks (correlation-based, algebraic,fault injection) and protections against them.[13]
Four editions of theWhibOx contestwere held in 2017, 2019, 2021 and 2024 respectively. These competitions invited white-box designers both from academia and industry to submit their implementation in the form of (possibly obfuscated)C code. At the same time, everyone could attempt to attack these programs and recover the embedded secret key. Each of these competitions lasted for about 4-5 months.
|
https://en.wikipedia.org/wiki/White-box_cryptography
|
Forecastingis the process of making predictions based on past and present data. Later these can be compared with what actually happens. For example, a company mightestimatetheir revenue in the next year, then compare it against the actual results creating a variance actual analysis.Predictionis a similar but more general term. Forecasting might refer to specific formal statistical methods employingtime series,cross-sectionalorlongitudinaldata, or alternatively to less formal judgmental methods or the process of prediction and assessment of its accuracy. Usage can vary between areas of application: for example, inhydrologythe terms "forecast" and "forecasting" are sometimes reserved for estimates of values at certain specificfuturetimes, while the term "prediction" is used for more general estimates, such as the number of times floods will occur over a long period.
Riskanduncertaintyare central to forecasting and prediction; it is generally considered a good practice to indicate the degree of uncertainty attaching to forecasts. In any case, the data must be up to date in order for the forecast to be as accurate as possible. In some cases the data used to predict the variable of interest is itself forecast.[1]A forecast is not to be confused with a Budget; budgets are more specific, fixed-term financial plans used for resource allocation and control, while forecasts provide estimates of future financial performance, allowing for flexibility and adaptability to changing circumstances. Both tools are valuable in financial planning and decision-making, but they serve different functions.
Forecasting has applications in a wide range of fields where estimates of future conditions are useful. Depending on the field, accuracy varies significantly. If the factors that relate to what is being forecast are known and well understood and there is a significant amount of data that can be used, it is likely the final value will be close to the forecast. If this is not the case or if the actual outcome is affected by the forecasts, the reliability of the forecasts can be significantly lower.[2]
Climate change and increasing energy prices have led to the use ofEgain Forecastingfor buildings. This attempts to reduce the energy needed to heat the building, thus reducing the emission ofgreenhouse gases. Forecasting is used incustomer demand planningin everyday business for manufacturing and distribution companies.
While the veracity of predictions for actual stock returns are disputed through reference to theefficient-market hypothesis,forecasting of broad economic trendsis common. Such analysis is provided by both non-profit groups as well as by for-profit private institutions.[citation needed]
Forecastingforeign exchangemovements is typically achieved through a combination of historical and current data (summarized in charts) andfundamental analysis. An essential difference between chart analysis and fundamental economic analysis is that chartists study only the price action of a market, whereas fundamentalists attempt to look to the reasons behind the action.[3]Financial institutions assimilate the evidence provided by their fundamental and chartist researchers into one note to provide a final projection on the currency in question.[4]
Forecasting has also been used to predict the development of conflict situations.[5]Forecasters perform research that uses empirical results to gauge the effectiveness of certain forecasting models.[6]However research has shown that there is little difference between the accuracy of the forecasts of experts knowledgeable in the conflict situation and those by individuals who knew much less.[7]Similarly, experts in some studies argue that role thinking — standing in other people's shoes to forecast their decisions — does not contribute to the accuracy of the forecast.[8]
An important, albeit often ignored aspect of forecasting, is the relationship it holds withplanning. Forecasting can be described as predicting what the futurewilllook like, whereas planning predicts what the futureshouldlook like.[6]There is no single right forecasting method to use. Selection of a method should be based on your objectives and your conditions (data etc.).[9]A good way to find a method is by visiting a selection tree. An example of a selection tree can be found here.[10]
Forecasting has application in many situations:
In several cases, the forecast is either more or less than a prediction of the future.
InPhilip E. Tetlock'sSuperforecasting: The Art and Science of Prediction, he discusses forecasting as a method of improving the ability to make decisions. A person can become better calibrated[citation needed]—i.e.having things they give 10% credence to happening 10% of the time. Or they can forecast things more confidently[citation needed]— coming to the same conclusion but earlier. Some have claimed that forecasting is a transferable skill with benefits to other areas of discussion and decision making.[citation needed]
Bettingon sports or politics is another form of forecasting. Rather than being used as advice, bettors are paid based on if they predicted correctly. While decisions might be made based on these bets (forecasts), the main motivation is generally financial.
Finally,futarchyis a form of government where forecasts of the impact of government action are used to decide which actions are taken. Rather than advice, in futarchy's strongest form, the action with the best forecasted result is automatically taken.[citation needed]
Forecast improvement projects have been operated in a number of sectors: theNational Hurricane Center's Hurricane Forecast Improvement Project (HFIP) and the Wind Forecast Improvement Project sponsored by theUS Department of Energyare examples.[12]In relation to supply chain management, theDu Pont modelhas been used to show that an increase in forecast accuracy can generate increases in sales and reductions in inventory, operating expenses and commitment of working capital.[13]TheGroceries Code Adjudicatorin the United Kingdom, which regulates supply chain management practices in the groceries retail industry, has observed that all the retailers who fall within the scope of his regulation "are striving for continuous improvement in forecasting practice and activity in relation to promotions".[14]
Qualitative forecasting techniques are subjective, based on the opinion and judgment of consumers and experts; they are appropriate when past data are not available.
They are usually applied to intermediate- or long-range decisions. Examples of qualitative forecasting methods are[citation needed]informed opinion and judgment, theDelphi method,market research, and historical life-cycle analogy.
Quantitative forecastingmodelsare used to forecast future data as a function of past data. They are appropriate to use when past numerical data is available and when it is reasonable to assume that some of the patterns in the data are expected to continue into the future.
These methods are usually applied to short- or intermediate-range decisions. Examples of quantitative forecasting methods are[citation needed]last period demand, simple and weighted N-Periodmoving averages, simpleexponential smoothing, Poisson process model based forecasting[15]and multiplicative seasonal indexes. Previous research shows that different methods may lead to different level of forecasting accuracy. For example,GMDHneural network was found to have better forecasting performance than the classical forecasting algorithms such as Single Exponential Smooth, Double Exponential Smooth, ARIMA and back-propagation neural network.[16]
In this approach, the predictions of all future values are equal to the mean of the past data. This approach can be used with any sort of data where past data is available. In time series notation, $\hat{y}_{T+h|T} = \bar{y} = (y_1 + \dots + y_T)/T$, where $y_1, \dots, y_T$ are the past data.
Although the time series notation has been used here, the average approach can also be used for cross-sectional data (when we are predicting unobserved values; values that are not included in the data set). Then, the prediction for unobserved values is the average of the observed values.
Naïve forecasts are the most cost-effective forecasting model, and provide a benchmark against which more sophisticated models can be compared. This forecasting method is only suitable for time series data.[17] Using the naïve approach, forecasts are produced that are equal to the last observed value. This method works quite well for economic and financial time series, which often have patterns that are difficult to reliably and accurately predict.[17] If the time series is believed to have seasonality, the seasonal naïve approach may be more appropriate, where the forecasts are equal to the value from last season. In time series notation, $\hat{y}_{T+h|T} = y_T$.
A variation on the naïve method is to allow the forecasts to increase or decrease over time, where the amount of change over time (called the drift) is set to be the average change seen in the historical data. So the forecast for time $T+h$ is given by $\hat{y}_{T+h|T} = y_T + \frac{h}{T-1}\sum_{t=2}^{T}(y_t - y_{t-1}) = y_T + h\,\frac{y_T - y_1}{T-1}$. This is equivalent to drawing a line between the first and last observation, and extrapolating it into the future.
The seasonal naïve method accounts for seasonality by setting each prediction to be equal to the last observed value of the same season. For example, the prediction value for all subsequent months of April will be equal to the previous value observed for April. The forecast for time $T+h$ is[17] $\hat{y}_{T+h|T} = y_{T+h-mk}$, where $m$ is the seasonal period and $k$ is the smallest integer greater than $(h-1)/m$.
The seasonal naïve method is particularly useful for data that has a very high level of seasonality.
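A minimal sketch of these simple methods, using Python with an illustrative monthly series, might look as follows; h is the forecast horizon and m the seasonal period:

    import numpy as np

    def average_forecast(y, h):
        return np.full(h, np.mean(y))                  # mean of the past data

    def naive_forecast(y, h):
        return np.full(h, y[-1])                       # last observed value

    def drift_forecast(y, h):
        drift = (y[-1] - y[0]) / (len(y) - 1)          # average change in the historical data
        return y[-1] + drift * np.arange(1, h + 1)

    def seasonal_naive_forecast(y, h, m):
        # Last observed value from the same season.
        return np.array([y[-m + ((i - 1) % m)] for i in range(1, h + 1)])

    y = np.array([112, 118, 132, 129, 121, 135, 148, 148, 136, 119, 104, 118], float)
    print(average_forecast(y, 3))
    print(naive_forecast(y, 3))
    print(drift_forecast(y, 3))
    print(seasonal_naive_forecast(y, 3, m=12))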
A deterministic approach is used when no stochastic variable is involved and the forecasts depend only on the selected functions and parameters.[18][19] For example, the forecast may be built from a given function with a short-term behaviour component $x_t$ and a medium- to long-term trend component $y_t$, where $\alpha, \gamma, \beta, \mu, \eta$ are some parameters.
This approach has been proposed to simulate bursts of seemingly stochastic activity, interrupted by quieter periods. The assumption is that the presence of a strong deterministic ingredient is hidden by noise. The deterministic approach is noteworthy as it can reveal the underlying dynamical systems structure, which can be exploited for steering the dynamics into a desired regime.[18]
Time seriesmethods use historical data as the basis of estimating future outcomes. They are based on the assumption that past demand history is a good indicator of future demand.
Some forecasting methods try to identify the underlying factors that might influence the variable that is being forecast. For example, including information about climate patterns might improve the ability of a model to predict umbrella sales. Forecasting models often take account of regular seasonal variations. In addition to climate, such variations can also be due to holidays and customs: for example, one might predict that sales of college football apparel will be higher during the football season than during the off season.[20]
Several informal methods used in causal forecasting do not rely solely on the output of mathematicalalgorithms, but instead use the judgment of the forecaster. Some forecasts take account of past relationships between variables: if one variable has, for example, been approximately linearly related to another for a long period of time, it may be appropriate to extrapolate such a relationship into the future, without necessarily understanding the reasons for the relationship.
Causal methods include:
Quantitative forecasting models are often judged against each other by comparing their in-sample or out-of-samplemean square error, although some researchers have advised against this.[22]Different forecasting approaches have different levels of accuracy. For example, it was found in one context thatGMDHhas higher forecasting accuracy than traditional ARIMA.[23]
Judgmental forecasting methods incorporate intuitive judgement, opinions and subjectiveprobabilityestimates. Judgmental forecasting is used in cases where there is a lack of historical data or during completely new and unique market conditions.[24]
Judgmental methods include:
Often these are done today by specialized programs loosely labeled
Geometric extrapolation with error prediction can be created with three points of a sequence and the "moment" or "index"; this type of extrapolation has 100% accuracy in predictions for a large percentage of the known series database (OEIS).[25]
The forecast error (also known as a residual) is the difference between the actual value and the forecast value for the corresponding period: $E_t = Y_t - F_t$, where $E_t$ is the forecast error at period $t$, $Y_t$ is the actual value at period $t$, and $F_t$ is the forecast for period $t$.
A good forecasting method will yield residuals that areuncorrelated. If there arecorrelationsbetween residual values, then there is information left in the residuals which should be used in computing forecasts. This can be accomplished by computing the expected value of a residual as a function of the known past residuals, and adjusting the forecast by the amount by which this expected value differs from zero.
A good forecasting method will also havezero mean. If the residuals have a mean other than zero, then the forecasts are biased and can be improved by adjusting the forecasting technique by an additive constant that equals the mean of the unadjusted residuals.
Measures of aggregate error:
The forecast error, $E_t$, is on the same scale as the data; as such, these accuracy measures are scale-dependent and cannot be used to make comparisons between series on different scales.
Mean absolute error (MAE) or mean absolute deviation (MAD): $MAE = MAD = \frac{\sum_{t=1}^{N} |E_t|}{N}$
Mean squared error (MSE) or mean squared prediction error (MSPE): $MSE = MSPE = \frac{\sum_{t=1}^{N} E_t^2}{N}$
Root mean squared error (RMSE): $RMSE = \sqrt{\frac{\sum_{t=1}^{N} E_t^2}{N}}$
Average of errors: $\bar{E} = \frac{\sum_{i=1}^{N} E_i}{N}$
These are more frequently used to compare forecast performance between different data sets because they are scale-independent. However, they have the disadvantage of being extremely large or undefined if Y is close to or equal to zero.
Mean absolute percentage error (MAPE): $MAPE = 100 \cdot \frac{\sum_{t=1}^{N} \left| \frac{E_t}{Y_t} \right|}{N}$
Mean absolute percentage deviation (MAPD): $MAPD = \frac{\sum_{t=1}^{N} |E_t|}{\sum_{t=1}^{N} |Y_t|}$
Hyndman and Koehler (2006) proposed using scaled errors as an alternative to percentage errors.
Mean absolute scaled error (MASE): $MASE = \frac{\sum_{t=1}^{N} \left| \frac{E_t}{\frac{1}{N-m} \sum_{t=m+1}^{N} |Y_t - Y_{t-m}|} \right|}{N}$
where $m$ is the seasonal period, or 1 if non-seasonal.
Forecast skill (SS): $SS = 1 - \frac{MSE_{forecast}}{MSE_{ref}}$
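As a small sketch, the scale-dependent and percentage measures above can be computed directly from arrays of actual values and forecasts (Python; the numbers are illustrative, and MAPE is undefined when any actual value is zero):

    import numpy as np

    def forecast_accuracy(y, f):
        e = y - f                                      # forecast errors E_t = Y_t - F_t
        return {
            "MAE":  np.mean(np.abs(e)),
            "MSE":  np.mean(e ** 2),
            "RMSE": np.sqrt(np.mean(e ** 2)),
            "MAPE": 100 * np.mean(np.abs(e / y)),
            "MAPD": np.sum(np.abs(e)) / np.sum(np.abs(y)),
        }

    y = np.array([100.0, 110.0, 120.0, 130.0])         # actual values
    f = np.array([ 98.0, 112.0, 119.0, 135.0])         # forecasts
    print(forecast_accuracy(y, f))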
Business forecasters and practitioners sometimes use different terminology. They refer to the PMAD as the MAPE, although they compute this as a volume weighted MAPE. For more information, seeCalculating demand forecast accuracy.
When comparing the accuracy of different forecasting methods on a specific data set, the measures of aggregate error are compared with each other and the method that yields the lowest error is preferred.
When evaluating the quality of forecasts, it is invalid to look at how well a model fits the historical data; the accuracy of forecasts can only be determined by considering how well a model performs on new data that were not used when fitting the model. When choosing models, it is common to use a portion of the available data for fitting, and use the rest of the data for testing the model, as was done in the above examples.[26]
Cross-validationis a more sophisticated version of training a test set.
Forcross-sectional data, one approach to cross-validation works as follows:
This makes efficient use of the available data, as only one observation is omitted at each step.
For time series data, the training set can only include observations prior to the test set. Therefore, no future observations can be used in constructing the forecast. Suppose $k$ observations are needed to produce a reliable forecast; then the process works as follows:
This procedure is sometimes known as a "rolling forecasting origin" because the "origin" ($k+i-1$) at which the forecast is based rolls forward in time.[26] Further, two-step-ahead or in general $p$-step-ahead forecasts can be computed by first forecasting the value immediately after the training set, then using this value with the training set values to forecast two periods ahead, and so on.
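A rolling forecasting origin can be sketched as follows (Python; forecast_fn stands for any one-step-ahead method, with the naïve method used here for illustration):

    import numpy as np

    def rolling_origin_errors(y, k, forecast_fn):
        # One-step-ahead errors; the training set always ends just before the test point,
        # so no future observations are used in constructing each forecast.
        errors = []
        for i in range(k, len(y)):
            train = y[:i]                    # observations up to the current origin
            errors.append(y[i] - forecast_fn(train))
        return np.array(errors)

    naive = lambda train: train[-1]          # forecast the next value as the last observed one
    y = np.array([3.0, 4.0, 4.5, 5.0, 5.5, 6.5, 7.0])
    e = rolling_origin_errors(y, k=3, forecast_fn=naive)
    print(np.mean(np.abs(e)))                # MAE across the rolling evaluations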
Seasonality is a characteristic of a time series in which the data experiences regular and predictable changes which recur every calendar year. Any predictable change or pattern in a time series that recurs or repeats over a one-year period can be said to be seasonal. It is common in many situations – such as grocery store[27]or even in a Medical Examiner's office[28]—that the demand depends on the day of the week. In such situations, the forecasting procedure calculates the seasonal index of the "season" – seven seasons, one for each day – which is the ratio of the average demand of that season (which is calculated by Moving Average or Exponential Smoothing using historical data corresponding only to that season) to the average demand across all seasons. An index higher than 1 indicates that demand is higher than average; an index less than 1 indicates that the demand is less than the average.
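A day-of-week seasonal index of this kind can be sketched as the ratio of each day's average demand to the overall average; the demand figures below are made up, and a plain mean is used in place of the moving average or exponential smoothing mentioned above:

    import numpy as np

    # Two weeks of daily demand, Monday through Sunday (illustrative numbers).
    demand = np.array([120, 130, 125, 140, 170, 210, 90,
                       118, 128, 130, 138, 175, 205, 95], float)

    by_day = demand.reshape(-1, 7)           # one row per week, one column per weekday
    season_average = by_day.mean(axis=0)     # average demand for each "season" (weekday)
    seasonal_index = season_average / demand.mean()

    for day, idx in zip(["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"], seasonal_index):
        print(day, round(idx, 2))            # above 1: higher than average; below 1: lower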
The cyclic behaviour of data takes place when there are regular fluctuations in the data which usually last for an interval of at least two years, and when the length of the current cycle cannot be predetermined. Cyclic behavior is not to be confused with seasonal behavior. Seasonal fluctuations follow a consistent pattern each year so the period is always known. As an example, during the Christmas period, inventories of stores tend to increase in order to prepare for Christmas shoppers. As an example of cyclic behaviour, the population of a particular natural ecosystem will exhibit cyclic behaviour when the population decreases as its natural food source decreases, and once the population is low, the food source will recover and the population will start to increase again. Cyclic data cannot be accounted for using ordinary seasonal adjustment since it is not of fixed period.
Limitations pose barriers beyond which forecasting methods cannot reliably predict. There are many events and values that cannot be forecast reliably. Events such as the roll of a die or the results of the lottery cannot be forecast because they are random events and there is no significant relationship in the data. When the factors that lead to what is being forecast are not known or well understood such as instockandforeign exchange marketsforecasts are often inaccurate or wrong as there is not enough data about everything that affects these markets for the forecasts to be reliable, in addition the outcomes of the forecasts of these markets change the behavior of those involved in the market further reducing forecast accuracy.[2]
The concept of "self-destructing predictions" concerns the way in which some predictions can undermine themselves by influencing social behavior.[29]This is because "predictors are part of the social context about which they are trying to make a prediction and may influence that context in the process".[29]For example, a forecast that a large percentage of a population will become HIV infected based on existing trends may cause more people to avoid risky behavior and thus reduce the HIV infection rate, invalidating the forecast (which might have remained correct if it had not been publicly known). Or, a prediction that cybersecurity will become a major issue may cause organizations to implement more security cybersecurity measures, thus limiting the issue.
As proposed by Edward Lorenz in 1963, long-range weather forecasts, those made at a range of two weeks or more, cannot definitively predict the state of the atmosphere, owing to the chaotic nature of the fluid dynamics equations involved. Extremely small errors in the initial input, such as temperatures and winds, within numerical models double every five days.[30]
|
https://en.wikipedia.org/wiki/Forecasting
|
Inmathematics,minimum polynomial extrapolationis asequence transformationused forconvergence accelerationof vector sequences, due to Cabay and Jackson.[1]
While Aitken's method is the most famous, it often fails for vector sequences. An effective method for vector sequences is the minimum polynomial extrapolation. It is usually phrased in terms of the fixed-point iteration $x_{k+1} = f(x_k)$:
Given iterates $x_1, x_2, \dots, x_k$ in $\mathbb{R}^n$, one constructs the $n \times (k-1)$ matrix $U = (x_2 - x_1, x_3 - x_2, \dots, x_k - x_{k-1})$ whose columns are the $k-1$ differences. Then, one computes the vector $c = -U^{+}(x_{k+1} - x_k)$, where $U^{+}$ denotes the Moore–Penrose pseudoinverse of $U$. The number 1 is then appended to the end of $c$, and the extrapolated limit is $s = \frac{Xc}{\sum_{i=1}^{k} c_i}$, where $X = (x_2, x_3, \dots, x_{k+1})$ is the matrix whose columns are the $k$ iterates starting at $x_2$.
The following 4 line MATLAB code segment implements the MPE algorithm:
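In place of that MATLAB listing, a NumPy sketch of the same computation (the function name and structure are illustrative) is:

    import numpy as np

    def mpe(iterates):
        # Minimum polynomial extrapolation from a list of k+1 iterates x_1, ..., x_{k+1}.
        X_all = np.column_stack(iterates)        # n x (k+1) matrix of iterates
        U = np.diff(X_all[:, :-1], axis=1)       # columns x_2 - x_1, ..., x_k - x_{k-1}
        rhs = X_all[:, -1] - X_all[:, -2]        # x_{k+1} - x_k
        c = -np.linalg.pinv(U) @ rhs             # c = -U^+ (x_{k+1} - x_k)
        c = np.append(c, 1.0)                    # append the number 1
        X = X_all[:, 1:]                         # columns x_2, ..., x_{k+1}
        return (X @ c) / np.sum(c)               # extrapolated limit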
Thismathematical analysis–related article is astub. You can help Wikipedia byexpanding it.
|
https://en.wikipedia.org/wiki/Minimum_polynomial_extrapolation
|
Innumerical analysis, amultigrid method(MG method) is analgorithmfor solvingdifferential equationsusing ahierarchyofdiscretizations. They are an example of a class of techniques calledmultiresolution methods, very useful in problems exhibitingmultiple scalesof behavior. For example, many basicrelaxation methodsexhibit different rates of convergence for short- and long-wavelength components, suggesting these different scales be treated differently, as in aFourier analysisapproach to multigrid.[1]MG methods can be used as solvers as well aspreconditioners.
The main idea of multigrid is to accelerate the convergence of a basic iterative method (known as relaxation, which generally reduces short-wavelength error) by aglobalcorrection of the fine grid solution approximation from time to time, accomplished by solving acoarse problem. The coarse problem, while cheaper to solve, is similar to the fine grid problem in that it also has short- and long-wavelength errors. It can also be solved by a combination of relaxation and appeal to still coarser grids. This recursive process is repeated until a grid is reached where the cost of direct solution there is negligible compared to the cost of one relaxation sweep on the fine grid. This multigrid cycle typically reduces all error components by a fixed amount bounded well below one, independent of the fine grid mesh size. The typical application for multigrid is in the numerical solution ofelliptic partial differential equationsin two or more dimensions.[2]
Multigrid methods can be applied in combination with any of the common discretization techniques. For example, thefinite element methodmay be recast as a multigrid method.[3]In these cases, multigrid methods are among the fastest solution techniques known today. In contrast to other methods, multigrid methods are general in that they can treat arbitrary regions andboundary conditions. They do not depend on theseparability of the equationsor other special properties of the equation. They have also been widely used for more-complicated non-symmetric and nonlinear systems of equations, like theLamé equationsofelasticityor theNavier-Stokes equations.[4]
There are many variations of multigrid algorithms, but the common features are that a hierarchy ofdiscretizations(grids) is considered. The important steps are:[5][6]
There are many choices of multigrid methods with varying trade-offs between speed of solving a single iteration and the rate of convergence with said iteration. The 3 main types are V-Cycle, F-Cycle, and W-Cycle. These differ in which and how many coarse-grain cycles are performed per fine iteration. The V-Cycle algorithm executes one coarse-grain V-Cycle. F-Cycle does a coarse-grain V-Cycle followed by a coarse-grain F-Cycle, while each W-Cycle performs two coarse-grain W-Cycles per iteration. For adiscrete 2D problem, F-Cycle takes 83% more time to compute than a V-Cycle iteration while a W-Cycle iteration takes 125% more. If the problem is set up in a 3D domain, then a F-Cycle iteration and a W-Cycle iteration take about 64% and 75% more time respectively than a V-Cycle iteration ignoringoverheads. Typically, W-Cycle produces similar convergence to F-Cycle. However, in cases ofconvection-diffusionproblems with highPéclet numbers, W-Cycle can show superiority in its rate of convergence per iteration over F-Cycle. The choice of smoothing operators are extremely diverse as they includeKrylov subspacemethods and can bepreconditioned.
Any geometric multigrid cycle iteration is performed on a hierarchy of grids and hence it can be coded using recursion. Since the function calls itself with smaller sized (coarser) parameters, the coarsest grid is where the recursion stops. In cases where the system has a highcondition number, the correction procedure is modified such that only a fraction of the prolongated coarser grid solution is added onto the finer grid.
These steps can be used as shown in the MATLAB style pseudo code for 1 iteration ofV-Cycle Multigrid:
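The MATLAB-style listing itself is not reproduced above. The following is a minimal Python sketch of one V-cycle iteration, shown only to illustrate the steps (pre-smoothing, restriction of the residual, recursive coarse-grid correction, prolongation, post-smoothing); the model problem (-u'' = f on [0, 1] with zero boundary values), the weighted-Jacobi smoother, and all function names are assumptions of this sketch, not the article's original listing.

import numpy as np

def smooth(u, f, h, sweeps, omega=2.0 / 3.0):
    # Weighted-Jacobi relaxation: damps the short-wavelength error components.
    for _ in range(sweeps):
        u[1:-1] = (1 - omega) * u[1:-1] + omega * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def restrict(r):
    # Full-weighting restriction to the grid with half the interior points.
    rc = np.zeros((r.size + 1) // 2)
    rc[1:-1] = 0.25 * r[1:-3:2] + 0.5 * r[2:-2:2] + 0.25 * r[3:-1:2]
    return rc

def prolong(ec, n_fine):
    # Linear-interpolation prolongation back to the fine grid.
    e = np.zeros(n_fine)
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return e

def v_cycle(u, f, h, nu1=2, nu2=2):
    if u.size == 3:                              # coarsest grid: solve the single unknown exactly
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
        return u
    u = smooth(u, f, h, nu1)                     # pre-smoothing
    rc = restrict(residual(u, f, h))             # restrict the residual to the coarse grid
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)   # recursive coarse-grid correction
    u += prolong(ec, u.size)                     # correct the fine-grid approximation
    return smooth(u, f, h, nu2)                  # post-smoothing

if __name__ == "__main__":
    n = 2 ** 7 + 1
    x = np.linspace(0.0, 1.0, n)
    h = x[1] - x[0]
    f = np.pi ** 2 * np.sin(np.pi * x)           # exact solution is sin(pi * x)
    u = np.zeros(n)
    for k in range(10):
        u = v_cycle(u, f, h)
        print(k, np.linalg.norm(residual(u, f, h)) * np.sqrt(h))

Applied repeatedly, the residual norm printed by the driver should drop by a roughly constant factor per cycle, largely independent of the mesh size, which is the behavior described above.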
The following representsF-cycle multigrid. This multigrid cycle is slower than V-Cycle per iteration but does result in faster convergence.
Similarly, the procedures can be modified as shown in the MATLAB-style pseudocode for one iteration of W-cycle multigrid, which gives an even better rate of convergence in certain cases:
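As with the V-cycle listing, the original W-cycle pseudocode is not reproduced here. A common way to express the V- and W-cycles together is through a cycle index gamma that controls how many times the coarse level is visited; the sketch below reuses the smooth, residual, restrict, and prolong helpers from the V-cycle sketch above (and therefore shares their assumptions), giving the V-cycle for gamma = 1 and the W-cycle for gamma = 2. An F-cycle can be viewed as intermediate between the two, performing an extra coarse-grid correction at each level on the way back up.

def mu_cycle(u, f, h, gamma=2, nu1=2, nu2=2):
    # gamma = 1 reproduces the V-cycle; gamma = 2 gives the W-cycle.
    if u.size == 3:                                  # coarsest grid: solve exactly
        u[1] = 0.5 * (u[0] + u[2] + h * h * f[1])
        return u
    u = smooth(u, f, h, nu1)                         # pre-smoothing
    rc = restrict(residual(u, f, h))
    ec = np.zeros_like(rc)
    for _ in range(gamma):                           # visit the coarse level gamma times
        ec = mu_cycle(ec, rc, 2 * h, gamma, nu1, nu2)
    u += prolong(ec, u.size)                         # coarse-grid correction
    return smooth(u, f, h, nu2)                      # post-smoothing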
This approach has the advantage over other methods that it often scales linearly with the number of discrete nodes used. In other words, it can solve these problems to a given accuracy in a number of operations that is proportional to the number of unknowns.
Assume that one has a differential equation which can be solved approximately (with a given accuracy) on a grid i with a given grid point density N_i. Assume furthermore that a solution on any grid N_i may be obtained with a given effort W_i = ρKN_i from a solution on a coarser grid i+1. Here, ρ = N_{i+1}/N_i < 1 is the ratio of grid points on "neighboring" grids and is assumed to be constant throughout the grid hierarchy, and K is some constant modeling the effort of computing the result for one grid point.
The following recurrence relation is then obtained for the effort of obtaining the solution on gridk{\displaystyle k}:
And in particular, we find for the finest gridN1{\displaystyle N_{1}}that
Combining these two expressions (and usingNk=ρk−1N1{\displaystyle N_{k}=\rho ^{k-1}N_{1}}) gives
Using thegeometric series, we then find (for finiten{\displaystyle n})
that is, a solution may be obtained in O(N) time. One exception is W-cycle multigrid used on a 1D problem, which results in O(N log N) complexity.[citation needed]
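The display equations of the preceding argument are not reproduced above; the following is a hedged reconstruction, using only the stated assumptions (per-level effort W_i = ρKN_i and constant grid ratio ρ = N_{i+1}/N_i < 1), written in LaTeX:

\begin{aligned}
W_{k} &= W_{k+1} + \rho K N_{k}, \qquad W_{1} = W_{2} + \rho K N_{1},\\
W_{1} &\approx \rho K N_{1}\sum_{k=1}^{n}\rho^{\,k-1} \;<\; \frac{\rho K N_{1}}{1-\rho} \;=\; O(N_{1}),
\end{aligned}

where the cost of the (negligible) coarsest-grid solve has been omitted.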
A multigrid method with an intentionally reduced tolerance can be used as an efficient preconditioner for an external iterative solver.[7] The solution may still be obtained in O(N) time, as in the case where the multigrid method is used as a solver. Multigrid preconditioning is used in practice even for linear systems, typically with one cycle per iteration, e.g., in Hypre. Its main advantage versus a purely multigrid solver is particularly clear for nonlinear problems, e.g., eigenvalue problems.
If the matrix of the original equation or an eigenvalue problem is symmetric positive definite (SPD), the preconditioner is commonly constructed to be SPD as well, so that the standardconjugate gradient(CG)iterative methodscan still be used. Such imposed SPD constraints may complicate the construction of the preconditioner, e.g., requiring coordinated pre- and post-smoothing. However,preconditionedsteepest descentandflexible CG methodsfor SPD linear systems andLOBPCGfor symmetric eigenvalue problems are all shown[8]to be robust if the preconditioner is not SPD.
Originally described in Xu's Ph.D. thesis[9] and later published in Bramble–Pasciak–Xu,[10] the BPX preconditioner is one of the two major multigrid approaches (the other is the classic multigrid algorithm such as the V-cycle) for solving large-scale algebraic systems that arise from the discretization of models in science and engineering described by partial differential equations. In view of the subspace correction framework,[11] the BPX preconditioner is a parallel subspace correction method, whereas the classic V-cycle is a successive subspace correction method. The BPX preconditioner is known to be naturally more parallel and, in some applications, more robust than the classic V-cycle multigrid method. The method has been widely used by researchers and practitioners since 1990.
Multigrid methods can be generalized in many different ways. They can be applied naturally in a time-stepping solution ofparabolic partial differential equations, or they can be applied directly to time-dependentpartial differential equations.[12]Research on multilevel techniques forhyperbolic partial differential equationsis underway.[13]Multigrid methods can also be applied tointegral equations, or for problems instatistical physics.[14]
Another set of multiresolution methods is based uponwavelets. These wavelet methods can be combined with multigrid methods.[15][16]For example, one use of wavelets is to reformulate the finite element approach in terms of a multilevel method.[17]
Adaptive multigridexhibitsadaptive mesh refinement, that is, it adjusts the grid as the computation proceeds, in a manner dependent upon the computation itself.[18]The idea is to increase resolution of the grid only in regions of the solution where it is needed.
Practically important extensions of multigrid methods include techniques where no partial differential equation nor geometrical problem background is used to construct the multilevel hierarchy.[19]Suchalgebraic multigrid methods(AMG) construct their hierarchy of operators directly from the system matrix. In classical AMG, the levels of the hierarchy are simply subsets of unknowns without any geometric interpretation. (More generally, coarse grid unknowns can be particular linear combinations of fine grid unknowns.) Thus, AMG methods become black-box solvers for certain classes ofsparse matrices. AMG is regarded as advantageous mainly where geometric multigrid is too difficult to apply,[20]but is often used simply because it avoids the coding necessary for a true multigrid implementation. While classical AMG was developed first, a related algebraic method is known as smoothed aggregation (SA).
In an overview paper[21]by Jinchao Xu and Ludmil Zikatanov, the "algebraic multigrid" methods are understood from an abstract point of view. They developed a unified framework and existing algebraic multigrid methods can be derived coherently. Abstract theory about how to construct optimal coarse space as well as quasi-optimal spaces was derived. Also, they proved that, under appropriate assumptions, the abstract two-level AMG method converges uniformly with respect to the size of the linear system, the coefficient variation, and the anisotropy. Their abstract framework covers most existing AMG methods, such as classical AMG, energy-minimization AMG, unsmoothed and smoothed aggregation AMG, and spectral AMGe.
Multigrid methods have also been adopted for the solution ofinitial value problems.[22]Of particular interest here are parallel-in-time multigrid methods:[23]in contrast to classicalRunge–Kuttaorlinear multistepmethods, they can offerconcurrencyin temporal direction.
The well knownPararealparallel-in-time integration method can also be reformulated as a two-level multigrid in time.
Nearly singular problems arise in a number of important physical and engineering applications. A simple but important example of nearly singular problems can be found in the displacement formulation of linear elasticity for nearly incompressible materials. Typically, the major difficulty in solving such nearly singular systems boils down to treating the nearly singular operator given by A + εM robustly with respect to the positive, but small, parameter ε. Here A is a symmetric semidefinite operator with a large null space, while M is a symmetric positive definite operator. There have been many attempts to design a robust and fast multigrid method for such nearly singular problems. A general design principle has been provided for achieving a convergence rate of the multigrid method that is independent of the parameters (e.g., the mesh size and physical parameters such as Poisson's ratio) that appear in the nearly singular operator:[24] on each grid, the space decomposition on which the smoothing is based has to be constructed so that the null space of the singular part of the nearly singular operator is included in the sum of the local null spaces, i.e., the intersections of that null space with the local spaces of the decomposition.
|
https://en.wikipedia.org/wiki/Multigrid_method
|
Innumerical analysis,Richardson extrapolationis asequence accelerationmethod used to improve therate of convergenceof asequenceof estimates of some valueA∗=limh→0A(h){\displaystyle A^{\ast }=\lim _{h\to 0}A(h)}. In essence, given the value ofA(h){\displaystyle A(h)}for several values ofh{\displaystyle h}, we can estimateA∗{\displaystyle A^{\ast }}by extrapolating the estimates toh=0{\displaystyle h=0}. It is named afterLewis Fry Richardson, who introduced the technique in the early 20th century,[1][2]though the idea was already known toChristiaan Huygensinhis calculationofπ{\displaystyle \pi }.[3]In the words ofBirkhoffandRota, "its usefulness for practical computations can hardly be overestimated."[4]
Practical applications of Richardson extrapolation includeRomberg integration, which applies Richardson extrapolation to thetrapezoid rule, and theBulirsch–Stoer algorithmfor solving ordinary differential equations.
LetA0(h){\displaystyle A_{0}(h)}be an approximation ofA∗{\displaystyle A^{*}}(exact value) that depends on a step sizeh(where0<h<1{\textstyle 0<h<1}) with anerrorformula of the formA∗=A0(h)+a0hk0+a1hk1+a2hk2+⋯{\displaystyle A^{*}=A_{0}(h)+a_{0}h^{k_{0}}+a_{1}h^{k_{1}}+a_{2}h^{k_{2}}+\cdots }where theai{\displaystyle a_{i}}are unknown constants and theki{\displaystyle k_{i}}are known constants such thathki>hki+1{\displaystyle h^{k_{i}}>h^{k_{i+1}}}. Furthermore,O(hki){\displaystyle O(h^{k_{i}})}represents thetruncation errorof theAi(h){\displaystyle A_{i}(h)}approximation such thatA∗=Ai(h)+O(hki).{\displaystyle A^{*}=A_{i}(h)+O(h^{k_{i}}).}Similarly, inA∗=Ai(h)+O(hki),{\displaystyle A^{*}=A_{i}(h)+O(h^{k_{i}}),}the approximationAi(h){\displaystyle A_{i}(h)}is said to be anO(hki){\displaystyle O(h^{k_{i}})}approximation.
Note that by simplifying withBig O notation, the following formulae are equivalent:A∗=A0(h)+a0hk0+a1hk1+a2hk2+⋯A∗=A0(h)+a0hk0+O(hk1)A∗=A0(h)+O(hk0){\displaystyle {\begin{aligned}A^{*}&=A_{0}(h)+a_{0}h^{k_{0}}+a_{1}h^{k_{1}}+a_{2}h^{k_{2}}+\cdots \\A^{*}&=A_{0}(h)+a_{0}h^{k_{0}}+O(h^{k_{1}})\\A^{*}&=A_{0}(h)+O(h^{k_{0}})\end{aligned}}}
Richardson extrapolation is a process that finds a better approximation ofA∗{\displaystyle A^{*}}by changing the error formula fromA∗=A0(h)+O(hk0){\displaystyle A^{*}=A_{0}(h)+O(h^{k_{0}})}toA∗=A1(h)+O(hk1).{\displaystyle A^{*}=A_{1}(h)+O(h^{k_{1}}).}Therefore, by replacingA0(h){\displaystyle A_{0}(h)}withA1(h){\displaystyle A_{1}(h)}thetruncation errorhas reduced fromO(hk0){\displaystyle O(h^{k_{0}})}toO(hk1){\displaystyle O(h^{k_{1}})}for the same step sizeh{\displaystyle h}. The general pattern occurs in whichAi(h){\displaystyle A_{i}(h)}is a more accurate estimate thanAj(h){\displaystyle A_{j}(h)}wheni>j{\displaystyle i>j}. By this process, we have achieved a better approximation ofA∗{\displaystyle A^{*}}by subtracting the largest term in the error which wasO(hk0){\displaystyle O(h^{k_{0}})}. This process can be repeated to remove more error terms to get even better approximations.
Using the step sizes h and h/t for some constant t, the two formulas for A* are:

A* = A_0(h) + a_0 h^{k_0} + a_1 h^{k_1} + a_2 h^{k_2} + ⋯   (1)

A* = A_0(h/t) + a_0 (h/t)^{k_0} + a_1 (h/t)^{k_1} + a_2 (h/t)^{k_2} + ⋯   (2)
To improve our approximation fromO(hk0){\displaystyle O(h^{k_{0}})}toO(hk1){\displaystyle O(h^{k_{1}})}by removing the first error term, we multiplyequation 2bytk0{\displaystyle t^{k_{0}}}and subtractequation 1to give us(tk0−1)A∗=[tk0A0(ht)−A0(h)]+(tk0a1(ht)k1−a1hk1)+(tk0a2(ht)k2−a2hk2)+O(hk3).{\displaystyle (t^{k_{0}}-1)A^{*}={\bigg [}t^{k_{0}}A_{0}\left({\frac {h}{t}}\right)-A_{0}(h){\bigg ]}+{\bigg (}t^{k_{0}}a_{1}{\bigg (}{\frac {h}{t}}{\bigg )}^{k_{1}}-a_{1}h^{k_{1}}{\bigg )}+{\bigg (}t^{k_{0}}a_{2}{\bigg (}{\frac {h}{t}}{\bigg )}^{k_{2}}-a_{2}h^{k_{2}}{\bigg )}+O(h^{k_{3}}).}This multiplication and subtraction was performed because[tk0A0(ht)−A0(h)]{\textstyle {\big [}t^{k_{0}}A_{0}\left({\frac {h}{t}}\right)-A_{0}(h){\big ]}}is anO(hk1){\displaystyle O(h^{k_{1}})}approximation of(tk0−1)A∗{\displaystyle (t^{k_{0}}-1)A^{*}}. We can solve our current formula forA∗{\displaystyle A^{*}}to giveA∗=[tk0A0(ht)−A0(h)]tk0−1+(tk0a1(ht)k1−a1hk1)tk0−1+(tk0a2(ht)k2−a2hk2)tk0−1+O(hk3){\displaystyle A^{*}={\frac {{\bigg [}t^{k_{0}}A_{0}\left({\frac {h}{t}}\right)-A_{0}(h){\bigg ]}}{t^{k_{0}}-1}}+{\frac {{\bigg (}t^{k_{0}}a_{1}{\bigg (}{\frac {h}{t}}{\bigg )}^{k_{1}}-a_{1}h^{k_{1}}{\bigg )}}{t^{k_{0}}-1}}+{\frac {{\bigg (}t^{k_{0}}a_{2}{\bigg (}{\frac {h}{t}}{\bigg )}^{k_{2}}-a_{2}h^{k_{2}}{\bigg )}}{t^{k_{0}}-1}}+O(h^{k_{3}})}which can be written asA∗=A1(h)+O(hk1){\displaystyle A^{*}=A_{1}(h)+O(h^{k_{1}})}by settingA1(h)=tk0A0(ht)−A0(h)tk0−1.{\displaystyle A_{1}(h)={\frac {t^{k_{0}}A_{0}\left({\frac {h}{t}}\right)-A_{0}(h)}{t^{k_{0}}-1}}.}
A generalrecurrence relationcan be defined for the approximations byAi+1(h)=tkiAi(ht)−Ai(h)tki−1{\displaystyle A_{i+1}(h)={\frac {t^{k_{i}}A_{i}\left({\frac {h}{t}}\right)-A_{i}(h)}{t^{k_{i}}-1}}}whereki+1{\displaystyle k_{i+1}}satisfiesA∗=Ai+1(h)+O(hki+1).{\displaystyle A^{*}=A_{i+1}(h)+O(h^{k_{i+1}}).}
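A short Python sketch of this recurrence follows, building the triangular array of extrapolates A_i(h), A_i(h/t), ...; the function name richardson_table, the assumption that the error exponents k_i are known in advance, and the central-difference example are illustrative choices, not part of the article.

import math

def richardson_table(A, h0, t=2.0, k=(2, 4, 6, 8), levels=5):
    # Build the triangular array from the recurrence
    #   A_{i+1}(h) = (t**k_i * A_i(h/t) - A_i(h)) / (t**k_i - 1).
    # A is a callable returning the base approximation A_0(h); k lists the known
    # error exponents k_0, k_1, ... of A_0(h); h0 is the largest step size.
    rows = [[A(h0 / t ** j) for j in range(levels)]]          # A_0 at h0, h0/t, h0/t^2, ...
    for i in range(levels - 1):
        prev, fac = rows[-1], t ** k[i]
        rows.append([(fac * prev[j + 1] - prev[j]) / (fac - 1.0)
                     for j in range(len(prev) - 1)])
    return rows

# Example: central differences for d/dx sin(x) at x = 1; the error expansion has
# only even powers of h, so k = (2, 4, 6, ...). The exact answer is cos(1).
deriv = lambda h: (math.sin(1.0 + h) - math.sin(1.0 - h)) / (2.0 * h)
table = richardson_table(deriv, h0=0.5)
print(table[0][0], table[-1][0], math.cos(1.0))               # raw vs. extrapolated vs. exact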
The Richardson extrapolation can be considered as a linearsequence transformation.
Additionally, the general formula can be used to estimatek0{\displaystyle k_{0}}(leading order step size behavior ofTruncation error) when neither its value norA∗{\displaystyle A^{*}}is knowna priori. Such a technique can be useful for quantifying an unknownrate of convergence. Given approximations ofA∗{\displaystyle A^{*}}from three distinct step sizesh{\displaystyle h},h/t{\displaystyle h/t}, andh/s{\displaystyle h/s}, the exact relationshipA∗=tk0Ai(ht)−Ai(h)tk0−1+O(hk1)=sk0Ai(hs)−Ai(h)sk0−1+O(hk1){\displaystyle A^{*}={\frac {t^{k_{0}}A_{i}\left({\frac {h}{t}}\right)-A_{i}(h)}{t^{k_{0}}-1}}+O(h^{k_{1}})={\frac {s^{k_{0}}A_{i}\left({\frac {h}{s}}\right)-A_{i}(h)}{s^{k_{0}}-1}}+O(h^{k_{1}})}yields an approximate relationship (please note that the notation here may cause a bit of confusion, the two O appearing in the equation above only indicates the leading order step size behavior but their explicit forms are different and hence cancelling out of the twoOterms is only approximately valid)Ai(ht)+Ai(ht)−Ai(h)tk0−1≈Ai(hs)+Ai(hs)−Ai(h)sk0−1{\displaystyle A_{i}\left({\frac {h}{t}}\right)+{\frac {A_{i}\left({\frac {h}{t}}\right)-A_{i}(h)}{t^{k_{0}}-1}}\approx A_{i}\left({\frac {h}{s}}\right)+{\frac {A_{i}\left({\frac {h}{s}}\right)-A_{i}(h)}{s^{k_{0}}-1}}}which can be solved numerically to estimatek0{\displaystyle k_{0}}for some arbitrary valid choices ofh{\displaystyle h},s{\displaystyle s}, andt{\displaystyle t}.
Ast≠1{\displaystyle t\neq 1}, ift>0{\displaystyle t>0}ands{\displaystyle s}is chosen so thats=t2{\displaystyle s=t^{2}}, this approximate relation reduces to a quadratic equation intk0{\displaystyle t^{k_{0}}}, which is readily solved fork0{\displaystyle k_{0}}in terms ofh{\displaystyle h}andt{\displaystyle t}.
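For the common special case s = t², the leading-order error expansion also yields the closed-form estimate t^{k_0} ≈ (A(h) − A(h/t)) / (A(h/t) − A(h/t²)), which the short sketch below uses. This closed form is a standard consequence of the expansion, shown here as an illustration rather than as the article's own derivation; the example function is an arbitrary choice.

import math

def estimate_order(A_h, A_ht, A_ht2, t):
    # Estimate k0 from approximations at step sizes h, h/t and h/t^2 (i.e. s = t^2),
    # using the leading-order relation t**k0 ~ (A(h) - A(h/t)) / (A(h/t) - A(h/t^2)).
    return math.log((A_h - A_ht) / (A_ht - A_ht2)) / math.log(t)

# Example: the one-sided difference (sin(1 + h) - sin(1)) / h is first-order accurate.
f = lambda h: (math.sin(1.0 + h) - math.sin(1.0)) / h
print(estimate_order(f(0.1), f(0.05), f(0.025), t=2.0))       # prints a value close to 1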
Suppose that we wish to approximateA∗{\displaystyle A^{*}}, and we have a methodA(h){\displaystyle A(h)}that depends on a small parameterh{\displaystyle h}in such a way thatA(h)=A∗+Chn+O(hn+1).{\displaystyle A(h)=A^{\ast }+Ch^{n}+O(h^{n+1}).}
Let us define a new functionR(h,t):=tnA(h/t)−A(h)tn−1{\displaystyle R(h,t):={\frac {t^{n}A(h/t)-A(h)}{t^{n}-1}}}whereh{\displaystyle h}andht{\displaystyle {\frac {h}{t}}}are two distinct step sizes.
ThenR(h,t)=tn(A∗+C(ht)n+O(hn+1))−(A∗+Chn+O(hn+1))tn−1=A∗+O(hn+1).{\displaystyle R(h,t)={\frac {t^{n}(A^{*}+C\left({\frac {h}{t}}\right)^{n}+O(h^{n+1}))-(A^{*}+Ch^{n}+O(h^{n+1}))}{t^{n}-1}}=A^{*}+O(h^{n+1}).}R(h,t){\displaystyle R(h,t)}is called the RichardsonextrapolationofA(h), and has a higher-order error estimateO(hn+1){\displaystyle O(h^{n+1})}compared toA(h){\displaystyle A(h)}.
Very often, it is much easier to obtain a given precision by using R(h) rather than A(h′) with a much smaller h′, where A(h′) can cause problems due to limited precision (rounding errors) and/or the increasing number of calculations needed (see examples below).
The following pseudocode in MATLAB style demonstrates Richardson extrapolation to help solve the ODEy′(t)=−y2{\displaystyle y'(t)=-y^{2}},y(0)=1{\displaystyle y(0)=1}with theTrapezoidal method. In this example we halve the step sizeh{\displaystyle h}each iteration and so in the discussion above we'd have thatt=2{\displaystyle t=2}. The error of the Trapezoidal method can be expressed in terms of odd powers so that the error over multiple steps can be expressed in even powers; this leads us to raiset{\displaystyle t}to the second power and to take powers of4=22=t2{\displaystyle 4=2^{2}=t^{2}}in the pseudocode. We want to find the value ofy(5){\displaystyle y(5)}, which has the exact solution of15+1=16=0.1666...{\displaystyle {\frac {1}{5+1}}={\frac {1}{6}}=0.1666...}since the exact solution of the ODE isy(t)=11+t{\displaystyle y(t)={\frac {1}{1+t}}}. This pseudocode assumes that a function calledTrapezoidal(f, tStart, tEnd, h, y0)exists which attempts to computey(tEnd)by performing the trapezoidal method on the functionf, with starting pointy0andtStartand step sizeh.
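The MATLAB-style listing itself is not reproduced above. The sketch below is an illustrative Python reconstruction of the same idea; the helper trapezoidal plays the role of the Trapezoidal(f, tStart, tEnd, h, y0) function assumed in the text (its implicit step is solved with a few Newton iterations), and the initial step size, the number of table rows, and all names are choices of this sketch rather than of the original listing.

import numpy as np

def trapezoidal(f, dfdy, t_start, t_end, h, y0):
    # Implicit trapezoidal rule for y' = f(t, y); each step's implicit equation is
    # solved with a few Newton iterations.
    n = int(round((t_end - t_start) / h))
    t, y = t_start, y0
    for _ in range(n):
        y_new = y
        for _ in range(8):
            g = y_new - y - 0.5 * h * (f(t, y) + f(t + h, y_new))
            y_new -= g / (1.0 - 0.5 * h * dfdy(t + h, y_new))
        t, y = t + h, y_new
    return y

f = lambda t, y: -y ** 2             # y' = -y^2, y(0) = 1, exact solution y(t) = 1/(1 + t)
dfdy = lambda t, y: -2.0 * y
t_end, exact = 5.0, 1.0 / 6.0

max_rows = 6
A = np.zeros((max_rows, max_rows))
h0 = 0.5                             # illustrative initial step size, halved on each row
for i in range(max_rows):
    A[i, 0] = trapezoidal(f, dfdy, 0.0, t_end, h0 / 2 ** i, 1.0)
    for j in range(1, i + 1):
        # t = 2 and an error series in even powers of h give extrapolation factors 4**j.
        A[i, j] = (4 ** j * A[i, j - 1] - A[i - 1, j - 1]) / (4 ** j - 1)
    print(i, A[i, i], abs(A[i, i] - exact))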
Note that starting with too small an initial step size can potentially introduce error into the final solution. Although there are methods designed to help pick the best initial step size, one option is to start with a large step size and then to allow the Richardson extrapolation to reduce the step size each iteration until the error reaches the desired tolerance.
|
https://en.wikipedia.org/wiki/Richardson_extrapolation
|
Linear trend estimation is a statistical technique used to analyze data patterns. Data patterns, or trends, occur when the information gathered tends to increase or decrease over time or is influenced by changes in an external factor. Linear trend estimation essentially creates a straight line on a graph of data that models the general direction that the data is heading.
Given a set ofdata, there are a variety offunctionsthat can be chosen to fit the data. The simplest function is astraight linewith the dependent variable (typically the measured data) on the vertical axis and the independent variable (often time) on the horizontal axis.
Theleast-squaresfit is a common method to fit a straight line through the data. This methodminimizesthe sum of the squared errors in the data seriesy{\displaystyle y}. Given a set of points in timet{\displaystyle t}and data valuesyt{\displaystyle y_{t}}observed for those points in time, values ofa^{\displaystyle {\hat {a}}}andb^{\displaystyle {\hat {b}}}are chosen to minimize the sum of squared errors
This formula first calculates the difference between the observed data y_t and the estimate (ât + b̂); the difference at each data point is squared and then summed, giving the "sum of squares" measurement of error. The values of â and b̂ derived from the data parameterize the simple linear estimator ŷ = ât + b̂. The term "trend" refers to the slope â in the least-squares estimator.
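A minimal Python sketch of this least-squares fit follows, using synthetic data; the generated series, its true slope of 0.3 per unit time, and the use of numpy.polyfit are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)
t = np.arange(50, dtype=float)                      # observation times
y = 0.3 * t + 5.0 + rng.normal(0.0, 2.0, t.size)    # trend of 0.3 per unit time plus noise

a_hat, b_hat = np.polyfit(t, y, deg=1)              # least-squares slope (the "trend") and intercept
residuals = y - (a_hat * t + b_hat)                 # detrended data
print(a_hat, b_hat, residuals.var(ddof=2))          # estimated trend, intercept, residual variance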
To analyze a (time) series of data, it can be assumed that it may be represented as trend plus noise:
wherea{\displaystyle a}andb{\displaystyle b}are unknown constants and thee{\displaystyle e}'s are randomly distributederrors. If one can reject the null hypothesis that the errors arenon-stationary, then the non-stationary series{yt}{\displaystyle \{y_{t}\}}is calledtrend-stationary. The least-squares method assumes the errors are independently distributed with a normal distribution. If this is not the case, hypothesis tests about the unknown parametersa{\displaystyle a}andb{\displaystyle b}may be inaccurate. It is simplest if thee{\displaystyle e}'s all have the same distribution, but if not (if some havehigher variance, meaning that those data points are effectively less certain), then this can be taken into account during the least-squares fitting by weighting each point by the inverse of the variance of that point.
Commonly, where only a single time series exists to be analyzed, the variance of thee{\displaystyle e}'s is estimated by fitting a trend to obtain the estimated parameter valuesa^{\displaystyle {\hat {a}}}andb^,{\displaystyle {\hat {b}},}thus allowing the predicted values
to be subtracted from the datayt{\displaystyle y_{t}}(thusdetrendingthe data), leaving theresidualse^t{\displaystyle {\hat {e}}_{t}}as thedetrended data, and estimating the variance of theet{\displaystyle e_{t}}'s from the residuals — this is often the only way of estimating the variance of theet{\displaystyle e_{t}}'s.
Once the "noise" of the series is known, the significance of the trend can be assessed by making thenull hypothesisthat the trend,a{\displaystyle a}, is not different from 0. From the above discussion of trends in random data with knownvariance, the distribution of calculated trends is to be expected from random (trendless) data. If the estimated trend,a^{\displaystyle {\hat {a}}}, is larger than the critical value for a certainsignificance level, then the estimated trend is deemed significantly different from zero at that significance level, and the null hypothesis of a zero underlying trend is rejected.
The use of a linear trend line has been the subject of criticism, leading to a search for alternative approaches to avoid its use in model estimation. One of the alternative approaches involvesunit roottests and thecointegrationtechnique in econometric studies.
The estimated coefficient associated with a linear trend variable such as time is interpreted as a measure of the impact of a number of unknown or known but immeasurable factors on the dependent variable over one unit of time. Strictly speaking, this interpretation is applicable for the estimation time frame only. Outside of this time frame, it cannot be determined how these immeasurable factors behave both qualitatively and quantitatively.
Research results by mathematicians, statisticians, econometricians, and economists have been published in response to those questions. For example, detailed notes on the meaning of linear time trends in the regression model are given in Cameron (2005);[1]Granger, Engle, and many other econometricians have written on stationarity, unit root testing, co-integration, and related issues (a summary of some of the works in this area can be found in an information paper[2]by the Royal Swedish Academy of Sciences (2003)); and Ho-Trieu & Tucker (1990) have written on logarithmic time trends with results indicating linear time trends are special cases ofcycles.
It is harder to see a trend in a noisy time series. For example, if the true series is 0, 1, 2, 3, all plus some independent normally distributed "noise"eofstandard deviationE, and a sample series of length 50 is given, then ifE=0.1, the trend will be obvious; ifE=100, the trend will probably be visible; but ifE=10000, the trend will be buried in the noise.
Consider a concrete example, such as theglobal surface temperaturerecord of the past 140 years as presented by theIPCC.[3]The interannual variation is about 0.2°C, and the trend is about 0.6°C over 140 years, with 95% confidence limits of 0.2°C (by coincidence, about the same value as the interannual variation). Hence, the trend is statistically different from 0. However, as noted elsewhere,[4]this time series doesn't conform to the assumptions necessary for least-squares to be valid.
The least-squares fitting process produces a value,r-squared(r2), which is 1 minus the ratio of the variance of theresidualsto the variance of the dependent variable. It says what fraction of the variance of the data is explained by the fitted trend line. It doesnotrelate to thestatistical significanceof the trend line (see graph); the statistical significance of the trend is determined by itst-statistic. Often, filtering a series increasesr2while making little difference to the fitted trend.
Thus far, the data have been assumed to consist of the trend plus noise, with the noise at each data point beingindependent and identically distributed random variableswith a normal distribution. Real data (for example, climate data) may not fulfill these criteria. This is important, as it makes an enormous difference to the ease with which the statistics can be analyzed so as to extract maximum information from the data series. If there are other non-linear effects that have acorrelationto the independent variable (such as cyclic influences), the use of least-squares estimation of the trend is not valid. Also, where the variations are significantly larger than the resulting straight line trend, the choice of start and end points can significantly change the result. That is, the model is mathematicallymisspecified. Statistical inferences (tests for the presence of a trend, confidence intervals for the trend, etc.) are invalid unless departures from the standard assumptions are properly accounted for, for example, as follows:
InR, the linear trend in data can be estimated by using the 'tslm' function of the 'forecast' package.
Medical andbiomedicalstudies often seek to determine a link between sets of data, such as of a clinical or scientific metric in three different diseases. But data may also be linked in time (such as change in the effect of a drug from baseline, to month 1, to month 2), or by an external factor that may or may not be determined by the researcher and/or their subject (such as no pain, mild pain, moderate pain, or severe pain). In these cases, one would expect the effect test statistic (e.g., influence of astatinon levels ofcholesterol, ananalgesicon the degree of pain, or increasing doses of different strengths of a drug on a measurable index, i.e. a dose - response effect) to change in direct order as the effect develops. Suppose the mean level of cholesterol before and after the prescription of a statin falls from 5.6mmol/Lat baseline to 3.4 mmol/L at one month and to 3.7 mmol/L at two months. Given sufficient power, anANOVA (analysis of variance)would most likely find a significant fall at one and two months, but the fall is not linear. Furthermore, a post-hoc test may be required. An alternative test may be a repeated measures (two way) ANOVA orFriedman test, depending on the nature of the data. Nevertheless, because the groups are ordered, a standard ANOVA is inappropriate. Should the cholesterol fall from 5.4 to 4.1 to 3.7, there is a clear linear trend. The same principle may be applied to the effects of allele/genotype frequency, where it could be argued that asingle-nucleotide polymorphismin nucleotides XX, XY, YY are in fact a trend of no Y's, one Y, and then two Y's.[3]
The mathematics of linear trend estimation is a variant of the standard ANOVA, giving different information, and would be the most appropriate test if the researchers hypothesize a trend effect in their test statistic. One example is levels of serumtrypsinin six groups of subjects ordered by age decade (10–19 years up to 60–69 years). Levels of trypsin (ng/mL) rise in a direct linear trend of 128, 152, 194, 207, 215, 218 (data from Altman). Unsurprisingly, a 'standard' ANOVA givesp< 0.0001, whereas linear trend estimation givesp= 0.00006. Incidentally, it could be reasonably argued that as age is a natural continuously variable index, it should not be categorized into decades, and an effect of age and serum trypsin is sought by correlation (assuming the raw data is available). A further example is of a substance measured at four time points in different groups:
This is a clear trend. ANOVA givesp= 0.091, because the overall variance exceeds the means, whereas linear trend estimation givesp= 0.012. However, should the data have been collected at four time points in the same individuals, linear trend estimation would be inappropriate, and a two-way (repeated measures) ANOVA would have been applied.
|
https://en.wikipedia.org/wiki/Trend_estimation
|
Extrapolation domain analysis (EDA) is a methodology for identifying geographical areas that seem suitable for adoption of innovative ecosystem management practices on the basis of sites exhibiting similarity in conditions such as climatic, land use and socioeconomic indicators. Whilst it has been applied to water research projects in nine pilot basins, the concept is generic and can be applied to any project where accelerating change is considered a central development objective.
The outputs of the method thus far have been used to quantify the global economicimpactof implementing particular innovations together with its effect onwater resources.[1]The research has stimulated members of several of the Challenge Program for Water and Food projects to explore potential areas for scaling out. Such is the case of the Quesungualagroforestrysystem inHonduras,[2][3]which is moving towards new areas in parallel with areas identified by the EDA method.
EDA is a combined approach that incorporates a number ofspatial analysistechniques. It was first investigated in 2006, when it was applied to assess how similarity analysis can be used to scale out research findings within seven Andes pilot systems of basins.[4]The method developed further the research around Jones' 'Homologue' analysis[5][6]by incorporating socio-economic variables in the search for similar sites around the Tropics. It has since been used to evaluate ‘Impact pathways’ and Global Impact Analysis.[1]'Homologue' was developed to determine the similarity of climatic conditions across a geographical area to those exhibited by the pilot site; the pixel resolution at which this is processed is 2.43 arc minutes, or 4.5 km at the equator.
To derive the extrapolation domains, Bayesian and frequentist statistical modelling techniques are used. The weights-of-evidence (WofE) methodology is applied; this is based largely on the concepts ofBayesian probabilistic reasoning.[7][8]In essence, statistical inference is based on determining the probability of target sites adopting the change demonstrated in pilot areas. The assumption is that a collection of training points will, in aggregate, have common characteristics that will allow their presence in other similar sites to be predicted. It is based on the collection of factors (used to create evidential theme data layers) that prove to be consistent with successful implementation at pilot sites and assumes that if target sites exhibit similar socio-economic, together with climatic and landscapes attributes to pilot sites, then there is strong evidence to suggest that out-scaling[clarification needed]to these sites will succeed.
|
https://en.wikipedia.org/wiki/Extrapolation_domain_analysis
|
In navigation, dead reckoning is the process of calculating the current position of a moving object by using a previously determined position, or fix, and incorporating estimates of speed, heading (or direction or course), and elapsed time. The corresponding term in biology, to describe the processes by which animals update their estimates of position or heading, is path integration.
Advances innavigational aidsthat give accurate information on position, in particularsatellite navigationusing theGlobal Positioning System, have made simple dead reckoning by humans obsolete for most purposes. However,inertial navigation systems, which provide very accurate directional information, use dead reckoning and are very widely applied.
Contrary to myth, the term "dead reckoning" was not originally used to abbreviate "deduced reckoning", nor is it a misspelling of the term "ded reckoning". The use of "ded" or "deduced reckoning" is not known to have appeared earlier than 1931, much later in history than "dead reckoning", which appeared as early as 1613 according to the Oxford English Dictionary. The original intention of "dead" in the term is generally assumed to mean using a stationary object that is "dead in the water" as a basis for calculations. Additionally, at the time of the first appearance of "dead reckoning", "ded" was considered a common spelling of "dead". This potentially led to later confusion about the origin of the term.[1]
By analogy with their navigational use, the wordsdead reckoningare also used to mean the process of estimating the value of any variable quantity by using an earlier value and adding whatever changes have occurred in the meantime. Often, this usage implies that the changes are not known accurately. The earlier value and the changes may be measured or calculated quantities.[citation needed]
While dead reckoning can give the best available information on the present position with little math or analysis, it is subject to significant errors of approximation. For precise positional information, both speed and direction must be accurately known at all times during travel. Most notably, dead reckoning does not account for directional drift during travel through a fluid medium. These errors tend to compound themselves over greater distances, making dead reckoning a difficult method of navigation for longer journeys.
For example, if displacement is measured by the number of rotations of a wheel, any discrepancy between the actual and assumed traveled distance per rotation, due perhaps to slippage or surface irregularities, will be a source of error. As each estimate of position is relative to the previous one, errors arecumulative, or compounding, over time.
The accuracy of dead reckoning can be increased significantly by using other, more reliable methods to get a new fix part way through the journey. For example, if one was navigating on land in poor visibility, then dead reckoning could be used to get close enough to the known position of a landmark to be able to see it, before walking to the landmark itself—giving a precisely known starting point—and then setting off again.
Localizing a static sensor node is not a difficult task, because attaching a Global Positioning System (GPS) device suffices for localization. But a mobile sensor node, which continuously changes its geographical location with time, is difficult to localize. Mobile sensor nodes are mostly used within some particular domain for data collection, e.g., a sensor node attached to an animal within a grazing field or attached to a soldier on a battlefield. In these scenarios, a GPS device for each sensor node cannot be afforded; some of the reasons for this include the cost, size and battery drain of constrained sensor nodes.
To overcome this problem, a limited number of reference nodes (with GPS) within a field is employed. These nodes continuously broadcast their locations, and other nodes in proximity receive these locations and calculate their positions using a mathematical technique like trilateration. For localization, at least three known reference locations are necessary. Several localization algorithms based on the Sequential Monte Carlo (SMC) method have been proposed in the literature.[2][3] Sometimes a node at some places receives only two known locations, and hence it becomes impossible to localize. To overcome this problem, the dead reckoning technique is used. With this technique a sensor node uses its previously calculated location for localization at later time intervals.[4] For example, at time instant 1, if node A calculates its position as loca_1 with the help of three known reference locations, then at time instant 2 it uses loca_1 along with two other reference locations received from two other reference nodes. This not only localizes a node in less time but also localizes it in positions where it is difficult to get three reference locations.[5]
In studies of animal navigation, dead reckoning is more commonly (though not exclusively) known aspath integration. Animals use it to estimate their current location based on their movements from their last known location. Animals such as ants, rodents, and geese have been shown to track their locations continuously relative to a starting point and to return to it, an important skill for foragers with a fixed home.[6][7]
In marine navigation a "dead" reckoning plot generally does not take into account the effect ofcurrentsorwind. Aboard ship a dead reckoning plot is considered important in evaluating position information and planning the movement of the vessel.[8]
Dead reckoning begins with a known position, orfix, which is then advanced, mathematically or directly on the chart, by means of recorded heading, speed, and time. Speed can be determined by many methods. Before modern instrumentation, it was determined aboard ship using achip log. More modern methods includepit logreferencing engine speed (e.g. inrpm) against a table of total displacement (for ships) or referencing one's indicated airspeed fed by the pressure from apitot tube. This measurement is converted to anequivalent airspeedbased upon known atmospheric conditions and measured errors in the indicated airspeed system. A naval vessel uses a device called apit sword(rodmeter), which uses two sensors on a metal rod to measure the electromagnetic variance caused by the ship moving through water. This change is then converted to ship's speed. Distance is determined by multiplying the speed and the time. This initial position can then be adjusted resulting in an estimated position by taking into account the current (known asset and driftin marine navigation). If there is no positional information available, a new dead reckoning plot may start from an estimated position. In this case subsequent dead reckoning positions will have taken into account estimated set and drift.
Dead reckoning positions are calculated at predetermined intervals, and are maintained between fixes. The duration of the interval varies. Factors including one's speed made good and the nature of heading and other course changes, and the navigator's judgment determine when dead reckoning positions are calculated.
Before the 18th-century development of themarine chronometerbyJohn Harrisonand thelunar distance method, dead reckoning was the primary method of determininglongitudeavailable to mariners such asChristopher ColumbusandJohn Caboton their trans-Atlantic voyages. Tools such as thetraverse boardwere developed to enable even illiterate crew members to collect the data needed for dead reckoning.Polynesian navigation, however, uses differentwayfindingtechniques.
On 14 June 1919, John Alcock and Arthur Brown took off from Lester's Field in St. John's, Newfoundland in a Vickers Vimy. They navigated across the Atlantic Ocean by dead reckoning and landed in County Galway, Ireland at 8:40 a.m. on 15 June, completing the first non-stop transatlantic flight.
On 21 May 1927Charles Lindberghlanded inParis, Franceafter a successful non-stop flight from the United States in the single-enginedSpirit of St. Louis. As the aircraft was equipped with very basic instruments, Lindbergh used dead reckoning to navigate.
Dead reckoning in the air is similar to dead reckoning on the sea, but slightly more complicated. The density of the air the aircraft moves through affects its performance as well as winds, weight, and power settings.
The basic formula for DR is Distance = Speed x Time. An aircraft flying at 250 knots airspeed for 2 hours has flown 500 nautical miles through the air. Thewind triangleis used to calculate the effects of wind on heading and airspeed to obtain a magnetic heading to steer and the speed over the ground (groundspeed). Printed tables, formulae, or anE6Bflight computer are used to calculate the effects of air density on aircraft rate of climb, rate of fuel burn, and airspeed.[9]
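A small Python sketch of the Distance = Speed × Time bookkeeping combined with the wind-triangle vector addition follows; the function name, the sign conventions, and the example numbers are assumptions of this sketch (in practice an E6B or printed tables would be used).

import math

def ground_vector(tas, heading_deg, wind_speed, wind_from_deg):
    # Add the aircraft-through-air vector to the wind vector (speeds in knots, angles in
    # degrees measured clockwise from north).  The wind blows *from* wind_from_deg, so
    # the moving air mass travels toward the opposite direction.
    hdg = math.radians(heading_deg)
    wnd = math.radians(wind_from_deg + 180.0)
    vn = tas * math.cos(hdg) + wind_speed * math.cos(wnd)     # north component
    ve = tas * math.sin(hdg) + wind_speed * math.sin(wnd)     # east component
    ground_speed = math.hypot(vn, ve)
    track_deg = math.degrees(math.atan2(ve, vn)) % 360.0
    return ground_speed, track_deg

gs, track = ground_vector(tas=250.0, heading_deg=90.0, wind_speed=30.0, wind_from_deg=0.0)
print(gs, track, gs * 2.0)   # groundspeed (kn), actual track (deg), distance covered in 2 hours (NM)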
A course line is drawn on the aeronautical chart along with estimated positions at fixed intervals (say every half hour). Visual observations of ground features are used to obtain fixes. By comparing the fix and the estimated position corrections are made to the aircraft's heading and groundspeed.
Dead reckoning is on the curriculum for VFR (visual flight rules – or basic level) pilots worldwide.[10]It is taught regardless of whether the aircraft has navigation aids such as GPS,ADFandVORand is anICAORequirement. Many flying training schools will prevent a student from using electronic aids until they have mastered dead reckoning.
Inertial navigation systems(INSes), which are nearly universal on more advanced aircraft, use dead reckoning internally. The INS provides reliable navigation capability under virtually any conditions, without the need for external navigation references, although it is still prone to slight errors.
Dead reckoning is today implemented in some[weasel words]high-end[specify]automotive navigation systemsin order to overcome the limitations of GPS/GNSStechnology alone. Satellite microwave signals are unavailable inparking garagesand tunnels, and often severely degraded inurban canyonsand near trees due to blocked lines of sight to the satellites ormultipath propagation. In a dead-reckoning navigation system, the car is equipped with sensors that know the wheel circumference and record wheel rotations and steering direction. These sensors are often already present in cars for other purposes (anti-lock braking system,electronic stability control) and can be read by the navigation system from thecontroller-area networkbus. The navigation system then uses aKalman filterto integrate the always-available sensor data with the accurate but occasionally unavailable position information from the satellite data into a combined position fix.
Dead reckoning is utilized in some robotic applications.[11]It is usually used to reduce the need for sensing technology, such asultrasonic sensors, GPS, or placement of somelinearandrotary encoders, in anautonomous robot, thus greatly reducing cost and complexity at the expense of performance and repeatability. The proper utilization of dead reckoning in this sense would be to supply a known percentage of electrical power orhydraulicpressure to the robot's drive motors over a given amount of time from a general starting point. Dead reckoning is not totally accurate, which can lead to errors in distance estimates ranging from a few millimeters (inCNC machining) to kilometers (inUAVs), based upon the duration of the run, the speed of the robot, the length of the run, and several other factors.[citation needed]
With the increased sensor offering insmartphones, built-in accelerometers can be used as apedometerand built-inmagnetometeras a compass heading provider.Pedestrian dead reckoning(PDR) can be used to supplement other navigation methods in a similar way to automotive navigation, or to extend navigation into areas where other navigation systems are unavailable.[12]
In a simple implementation, the user holds their phone in front of them and each step causes position to move forward a fixed distance in the direction measured by the compass. Accuracy is limited by the sensor precision, magnetic disturbances inside structures, and unknown variables such as carrying position and stride length. Another challenge is differentiating walking from running, and recognizing movements like bicycling, climbing stairs, or riding an elevator.
Before phone-based systems existed, many custom PDR systems existed. While apedometercan only be used to measure linear distance traveled, PDR systems have an embedded magnetometer for heading measurement. Custom PDR systems can take many forms including special boots, belts, and watches, where the variability of carrying position has been minimized to better utilize magnetometer heading. True dead reckoning is fairly complicated, as it is not only important to minimize basic drift, but also to handle different carrying scenarios and movements, as well as hardware differences across phone models.[13]
The south-pointing chariot was an ancient Chinese device consisting of a two-wheeledhorse-drawn vehiclewhich carried a pointer that was intended always to aim to the south, no matter how the chariot turned. The chariot pre-dated the navigational use of themagnetic compass, and could notdetectthe direction that was south. Instead it used a kind ofdirectional dead reckoning: at the start of a journey, the pointer was aimed southward by hand, using local knowledge or astronomical observations e.g. of thePole Star. Then, as it traveled, a mechanism possibly containingdifferentialgears used the different rotational speeds of the two wheels to turn the pointer relative to the body of the chariot by the angle of turns made (subject to available mechanical accuracy), keeping the pointer aiming in its original direction, to the south. Errors, as always with dead reckoning, would accumulate as distance traveled increased.
Networked games and simulation tools routinely use dead reckoning to predict where an actor should be right now, using its last known kinematic state (position, velocity, acceleration, orientation, and angular velocity).[14]This is primarily needed because it is impractical to send network updates at the rate that most games run, 60 Hz. The basic solution starts by projecting into the future using linear physics:[15]
Pt=P0+V0T+12A0T2{\displaystyle P_{t}=P_{0}+V_{0}T+{\frac {1}{2}}A_{0}T^{2}}
This formula is used to move the object until a new update is received over the network. At that point, the problem is that there are now two kinematic states: the currently estimated position and the just received, actual position. Resolving these two states in a believable way can be quite complex. One approach is to create a curve (e.g. cubicBézier splines,centripetal Catmull–Rom splines, andHermite curves)[16]between the two states while still projecting into the future. Another technique is to use projective velocity blending, which is the blending of two projections (last known and current) where the current projection uses a blending between the last known and current velocity over a set time.[14]
The first equation calculates a blended velocityVb{\displaystyle V_{b}}given the client-side velocity at the time of the last server updateV0{\displaystyle V_{0}}and the last known server-side velocityV´0{\displaystyle {\acute {V}}_{0}}. This essentially blends from the client-side velocity towards the server-side velocity for a smooth transition. Note thatT^{\displaystyle {\hat {T}}}should go from zero (at the time of the server update) to one (at the time at which the next update should be arriving). A late server update is unproblematic as long asT^{\displaystyle {\hat {T}}}remains at one.
Next, two positions are calculated: firstly, the blended velocityVb{\displaystyle V_{b}}and the last known server-side accelerationA´0{\displaystyle {\acute {A}}_{0}}are used to calculatePt{\displaystyle P_{t}}. This is a position which is projected from the client-side start positionP0{\displaystyle P_{0}}based onTt{\displaystyle T_{t}}, the time which has passed since the last server update. Secondly, the same equation is used with the last known server-side parameters to calculate the position projected from the last known server-side positionP´0{\displaystyle {\acute {P}}_{0}}and velocityV´0{\displaystyle {\acute {V}}_{0}}, resulting inP´t{\displaystyle {\acute {P}}_{t}}.
Finally, the new position to display on the clientPos{\displaystyle Pos}is the result of interpolating from the projected position based on client informationPt{\displaystyle P_{t}}towards the projected position based on the last known server informationP´t{\displaystyle {\acute {P}}_{t}}. The resulting movement smoothly resolves the discrepancy between client-side and server-side information, even if this server-side information arrives infrequently or inconsistently. It is also free of oscillations which spline-based interpolation may suffer from.
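A compact Python sketch of projective velocity blending as just described follows; the function signature, the use of NumPy vectors, and the example update timing are illustrative assumptions rather than a reference implementation.

import numpy as np

def projective_velocity_blend(P0, V0, Ps, Vs, As, t_since_update, t_between_updates):
    # P0, V0     : client-side position/velocity at the time of the last server update
    # Ps, Vs, As : last known server-side position, velocity and acceleration
    T_hat = min(t_since_update / t_between_updates, 1.0)   # 0 at the update, 1 at the next expected one
    Vb = V0 + (Vs - V0) * T_hat                            # blended velocity
    Tt = t_since_update
    Pt = P0 + Vb * Tt + 0.5 * As * Tt ** 2                 # projection from the client-side state
    Pt_srv = Ps + Vs * Tt + 0.5 * As * Tt ** 2             # projection from the server-side state
    return Pt + (Pt_srv - Pt) * T_hat                      # position to display on the client

# Example with a 2D state, 50 ms after an update that is expected every 100 ms:
pos = projective_velocity_blend(np.array([0.0, 0.0]), np.array([1.0, 0.0]),
                                np.array([0.05, 0.0]), np.array([1.2, 0.1]),
                                np.array([0.0, 0.0]),
                                t_since_update=0.05, t_between_updates=0.10)
print(pos)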
In computer science, dead-reckoning refers to navigating anarray data structureusing indexes. Since every array element has the same size, it is possible todirectly accessone array element by knowing any position in the array.[17]
Given the following array:
knowing the memory address where the array starts, it is easy to compute the memory address of D:
addressD=addressstart of array+(sizearray element∗arrayIndexD){\displaystyle {\text{address}}_{\text{D}}={\text{address}}_{\text{start of array}}+({\text{size}}_{\text{array element}}*{\text{arrayIndex}}_{\text{D}})}
Likewise, knowing D's memory address, it is easy to compute the memory address of B:
addressB=addressD−(sizearray element∗(arrayIndexD−arrayIndexB)){\displaystyle {\text{address}}_{\text{B}}={\text{address}}_{\text{D}}-({\text{size}}_{\text{array element}}*({\text{arrayIndex}}_{\text{D}}-{\text{arrayIndex}}_{\text{B}}))}
This property is particularly important forperformancewhen used in conjunction with arrays ofstructuresbecause data can be directly accessed, without going through apointer dereference.
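A short NumPy-based illustration of this address arithmetic follows (in C the compiler performs the same calculation for ordinary indexing); the array contents are arbitrary, and the comparison against slice views is only a way to check the computed addresses.

import numpy as np

arr = np.arange(8, dtype=np.int64)        # contiguous array of fixed-size elements
base = arr.ctypes.data                    # address where the array starts
size = arr.itemsize                       # size of one array element in bytes

addr_d = base + size * 3                  # address of the element at index 3 ("D")
addr_b = addr_d - size * (3 - 1)          # address of index 1 ("B"), reckoned from addr_d
print(addr_d == arr[3:].ctypes.data, addr_b == arr[1:].ctypes.data)   # True True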
|
https://en.wikipedia.org/wiki/Dead_reckoning
|
In iterative reconstruction in digital imaging, interior reconstruction (also known as limited field of view (LFV) reconstruction) is a technique to correct truncation artifacts caused by limiting image data to a small field of view. The reconstruction focuses on an area known as the region of interest (ROI). Although interior reconstruction can be applied to dental or cardiac CT images, the concept is not limited to CT. It is applied with one of several methods.
The purpose of each method is to solve for vectorx{\displaystyle x}in the following problem:
LetX{\displaystyle X}be the region of interest (ROI) andY{\displaystyle Y}be the region outside ofX{\displaystyle X}.
AssumeA{\displaystyle A},B{\displaystyle B},C{\displaystyle C},D{\displaystyle D}are known matrices;x{\displaystyle x}andy{\displaystyle y}are unknown vectors of the original image, whilef{\displaystyle f}andg{\displaystyle g}are vector measurements of the responses (f{\displaystyle f}is known andg{\displaystyle g}is unknown).x{\displaystyle x}is inside regionX{\displaystyle X}, (x∈X{\displaystyle x\in X}) andy{\displaystyle y}, in the regionY{\displaystyle Y}, (y∈Y{\displaystyle y\in Y}), is outside regionX{\displaystyle X}.f{\displaystyle f}is inside a region in the measurement corresponding toX{\displaystyle X}. This region is denoted asF{\displaystyle F}, (f∈F{\displaystyle f\in F}), whileg{\displaystyle g}is outside of the regionF{\displaystyle F}. It corresponds toY{\displaystyle Y}and is denoted asG{\displaystyle G}, (g∈G{\displaystyle g\in G}).
For CT image-reconstruction purposes,C=0{\displaystyle C=0}.
To simplify the concept of interior reconstruction, the matricesA{\displaystyle A},B{\displaystyle B},C{\displaystyle C},D{\displaystyle D}are applied to image reconstruction instead of complexoperators.
The first interior-reconstruction method listed below isextrapolation. It is a local tomography method which eliminates truncation artifacts but introduces another type of artifact: a bowl effect. An improvement is known as the adaptive extrapolation method, although the iterative extrapolation method below also improves reconstruction results. In some cases, the exact reconstruction can be found for the interior reconstruction. The local inverse method below modifies the local tomography method, and may improve the reconstruction result of the local tomography; the iterative reconstruction method can be applied to interior reconstruction. Among the above methods, extrapolation is often applied.
A{\displaystyle A},B{\displaystyle B},C{\displaystyle C},D{\displaystyle D}are known matrices;x{\displaystyle x}andy{\displaystyle y}are unknown vectors;f{\displaystyle f}is a known vector, andg{\displaystyle g}is an unknown vector. We need to know the vectorx{\displaystyle x}.x{\displaystyle x}andy{\displaystyle y}are the original image, whilef{\displaystyle f}andg{\displaystyle g}are measurements of responses. Vectorx{\displaystyle x}is inside the region of interestX{\displaystyle X}, (x∈X{\displaystyle x\in X}). Vectory{\displaystyle y}is outside the regionX{\displaystyle X}. The outside region is calledY{\displaystyle Y}, (y∈Y{\displaystyle y\in Y}) andf{\displaystyle f}is inside a region in the measurement corresponding toX{\displaystyle X}. This region is denotedF{\displaystyle F}, (f∈F{\displaystyle f\in F}). The region of vectorg{\displaystyle g}(outside the regionF{\displaystyle F}) also corresponds toY{\displaystyle Y}and is denoted asG{\displaystyle G}, (g∈G{\displaystyle g\in G}).
In CT image reconstruction, it has
To simplify the concept of interior reconstruction, the matricesA{\displaystyle A},B{\displaystyle B},C{\displaystyle C},D{\displaystyle D}are applied to image reconstruction instead of a complex operator.
The response in the outside region G can be a guess; for example, assume it is g_ex:
A solution ofx{\displaystyle x}is written asx0{\displaystyle x_{0}}, and is known as the extrapolation method. The result depends on how good the extrapolation functiongex{\displaystyle g_{ex}}is. A frequent choice is
at the boundary of the two regions.[1][2][3][4]The extrapolation method is often combined witha prioriknowledge,[5][6]and an extrapolation method which reduces calculation time is shown below.
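A toy numerical sketch of the extrapolation idea under the block model introduced above: the operators are random matrices, C = 0 as in the CT case, the unknown outside response g is replaced by a zero guess g_ex, and x_0 is recovered by least squares. All dimensions, the random operators, and the zero guess are assumptions of this sketch; a real implementation would use the actual projection operators and a boundary-matched extrapolation function.

import numpy as np

rng = np.random.default_rng(1)
nx, ny, nf, ng = 20, 30, 25, 35                 # illustrative dimensions
A = rng.normal(size=(nf, nx))
B = rng.normal(size=(nf, ny))
C = np.zeros((ng, nx))                          # C = 0, as in the CT case
D = rng.normal(size=(ng, ny))

x_true, y_true = rng.normal(size=nx), rng.normal(size=ny)
f = A @ x_true + B @ y_true                     # measured (known) response inside F
g_ex = np.zeros(ng)                             # guess for the unknown outside response g

# Solve [A B; C D][x; y] = [f; g_ex] in the least-squares sense and keep the ROI part x0.
M = np.block([[A, B], [C, D]])
sol, *_ = np.linalg.lstsq(M, np.concatenate([f, g_ex]), rcond=None)
x0 = sol[:nx]
print(np.linalg.norm(x0 - x_true) / np.linalg.norm(x_true))   # error reflects the quality of g_ex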
Assume a rough solution,x0{\displaystyle x_{0}}andy0{\displaystyle y_{0}}, is obtained from the extrapolation method described above. The response in the outside regiong1{\displaystyle g_{1}}can be calculated as follows:
The reconstructed image can be calculated as follows:
It is assumed that
at the boundary of the interior region;x1{\displaystyle x_{1}}solves the problem, and is known as the adaptive extrapolation method.g1ex{\displaystyle g_{1ex}}is the adaptive extrapolation function.[7][8][9][10][5]
It is assumed that a rough solution,x0{\displaystyle x_{0}}andy0{\displaystyle y_{0}}, is obtained from the extrapolation method described below:
or
The reconstruction can be obtained as
Hereg1ex{\displaystyle g_{1ex}}is an extrapolation function, and it is assumed that
x1{\displaystyle x_{1}}is one solution of this problem.[11]
Local tomography, with a very short filter, is also known as lambda tomography.[12][13]
The local inverse method extends the concept of local tomography. The response in the outside region can be calculated as follows:
Consider the generalized inverseB+{\displaystyle B^{+}}satisfying
Define
so that
Hence,
The above equation can be solved as
considering that
Q{\displaystyle Q}is the generalized inverse ofQ{\displaystyle Q}, i.e.
The solution can be simplified as
The matrixA+Q=A+[I−BB+]{\displaystyle A^{+}Q=A^{+}[I-BB^{+}]}is known as the local inverse ofmatrix[ABCD]{\displaystyle {\begin{bmatrix}A&B\\C&D\\\end{bmatrix}}}, corresponding toA{\displaystyle A}. This is known as the local inverse method.[11]
Here a goal function is defined, and this method iteratively achieves the goal. If the goal function can be some kind of normal, this is known as the minimal norm method.
subject to
andf{\displaystyle f}is known,
whereR{\displaystyle R},S{\displaystyle S}andT{\displaystyle T}are weighting constants of the minimization and‖⋅‖{\displaystyle \|\cdot \|}is some kind ofnorm. Often-used norms areL0{\displaystyle L_{0}},L1{\displaystyle L_{1}},L2{\displaystyle L_{2}},L+∞{\displaystyle L_{+\infty }}total variation(TV) norm or a combination of the above norms. An example of this method is the projection onto convex sets (POCS) method.[14][15]
In special situations, the interior reconstruction can be obtained as an analytical solution; the solution ofx{\displaystyle x}is exact in such cases.[16][17][18]
Extrapolated data are often convolved with a kernel function. After the data are extrapolated, their size is increased N times, where N = 2–3. If the data need to be convolved with a known kernel function, the numerical calculations will increase log(N)·N times, even with the fast Fourier transform (FFT). An algorithm exists that analytically calculates the contribution from part of the extrapolated data. The additional calculation time can be neglected compared to the original convolution calculation; with this algorithm, the cost of a convolution using the extrapolated data is not noticeably increased. This is known as fast extrapolation.[19]
The extrapolation method is suitable in a situation where
The adaptive extrapolation method is suitable for a situation where
The iterative extrapolation method is suitable for a situation in which
Local tomography is suitable for a situation in which
The local inverse method, identical to local tomography, is suitable in a situation in which
The iterative reconstruction method obtains a good result at the cost of a large amount of calculation. Although the analytic method achieves an exact result, it is only applicable in some situations. The fast extrapolation method can get the same results as the other extrapolation methods and can be applied to the above interior reconstruction methods to reduce the calculation time.
|
https://en.wikipedia.org/wiki/Interior_reconstruction
|
Extreme value theory or extreme value analysis (EVA) is the study of extremes in statistical distributions.
It is widely used in many disciplines, such as structural engineering, finance, economics, earth sciences, traffic prediction, and geological engineering. For example, EVA might be used in the field of hydrology to estimate the probability of an unusually large flooding event, such as the 100-year flood. Similarly, for the design of a breakwater, a coastal engineer would seek to estimate the 50-year wave and design the structure accordingly.
Two main approaches exist for practical extreme value analysis.
The first method relies on deriving block maxima (minima) series as a preliminary step. In many situations it is customary and convenient to extract the annual maxima (minima), generating an annual maxima series (AMS).
The second method relies on extracting, from a continuous record, the peak values reached during any period in which values exceed a certain threshold (or fall below a certain threshold). This method is generally referred to as the peak over threshold method (POT).[1]
For AMS data, the analysis may partly rely on the results of the Fisher–Tippett–Gnedenko theorem, leading to the generalized extreme value distribution being selected for fitting.[2][3] However, in practice, various procedures are applied to select between a wider range of distributions. The theorem here relates to the limiting distributions for the minimum or the maximum of a very large collection of independent random variables from the same distribution. Given that the number of relevant random events within a year may be rather limited, it is unsurprising that analyses of observed AMS data often lead to distributions other than the generalized extreme value distribution (GEVD) being selected.[4]
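A minimal sketch of the block-maxima approach is shown below, fitting a generalized extreme value distribution to an annual maxima series with SciPy; the synthetic daily data and the 50-year record length are assumptions made purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
daily = rng.gumbel(loc=20.0, scale=5.0, size=(50, 365))  # 50 years of hypothetical daily values
annual_maxima = daily.max(axis=1)                        # block maxima (AMS)

shape, loc, scale = stats.genextreme.fit(annual_maxima)  # fit the GEV distribution

# The level exceeded on average once in 100 years (the "100-year" return level):
return_level_100 = stats.genextreme.ppf(1 - 1 / 100, shape, loc=loc, scale=scale)
```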
For POT data, the analysis may involve fitting two distributions: one for the number of events in the time period considered and a second for the size of the exceedances.
A common assumption for the first is the Poisson distribution, with the generalized Pareto distribution being used for the exceedances.
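A corresponding sketch of the peaks-over-threshold approach is given below, with a Poisson rate for the number of exceedances and a generalized Pareto distribution fitted to their sizes; the threshold choice and the synthetic record are assumptions for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
daily = rng.exponential(scale=5.0, size=20 * 365)   # 20 years of hypothetical daily values
threshold = np.quantile(daily, 0.98)                # assumed (non-random) threshold

exceedances = daily[daily > threshold] - threshold
rate_per_year = len(exceedances) / 20.0             # Poisson rate of threshold exceedances

# Fit the GPD to the exceedance sizes; the location is fixed at 0 by construction.
shape, loc, scale = stats.genpareto.fit(exceedances, floc=0)
```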
A tail-fitting can be based on the Pickands–Balkema–de Haan theorem.[5][6]
Novak (2011) reserves the term "POT method" to the case where the threshold is non-random, and distinguishes it from the case where one deals with exceedances of a random threshold.[7]
Applications of extreme value theory include predicting the probability distribution of:
The field of extreme value theory was pioneered by L. Tippett (1902–1985). Tippett was employed by the British Cotton Industry Research Association, where he worked to make cotton thread stronger. In his studies, he realized that the strength of a thread was controlled by the strength of its weakest fibres. With the help of R. A. Fisher, Tippett obtained three asymptotic limits describing the distributions of extremes, assuming independent variables. E. J. Gumbel (1958)[25] codified this theory. These results can be extended to allow for slight correlations between variables, but the classical theory does not extend to strong correlations of the order of the variance. One universality class of particular interest is that of log-correlated fields, where the correlations decay logarithmically with the distance.
The theory for extreme values of a single variable is governed by the extreme value theorem, also called the Fisher–Tippett–Gnedenko theorem, which describes which of the three possible distributions for extreme values applies for a particular statistical variable X.
Extreme value theory in more than one variable introduces additional issues that have to be addressed. One problem that arises is that one must specify what constitutes an extreme event.[26] Although this is straightforward in the univariate case, there is no unambiguous way to do this in the multivariate case. The fundamental problem is that although it is possible to order a set of real-valued numbers, there is no natural way to order a set of vectors.
As an example, in the univariate case, given a set of observations x_i it is straightforward to find the most extreme event simply by taking the maximum (or minimum) of the observations. However, in the bivariate case, given a set of observations (x_i, y_i), it is not immediately clear how to find the most extreme event. Suppose that one has measured the values (3, 4) at a specific time and the values (5, 2) at a later time. Which of these events would be considered more extreme? There is no universal answer to this question.
Another issue in the multivariate case is that the limiting model is not as fully prescribed as in the univariate case. In the univariate case, the model (GEV distribution) contains three parameters whose values are not predicted by the theory and must be obtained by fitting the distribution to the data. In the multivariate case, the model not only contains unknown parameters, but also a function whose exact form is not prescribed by the theory. However, this function must obey certain constraints.[27][28] It is not straightforward to devise estimators that obey such constraints, though some have recently been constructed.[29][30][31]
As an example of an application, bivariate extreme value theory has been applied to ocean research.[26][32]
Statistical modeling for nonstationary time series was developed in the 1990s.[33]Methods for nonstationary multivariate extremes have been introduced more recently.[34]The latter can be used for tracking how the dependence between extreme values changes over time, or over another covariate.[35][36][37]
|
https://en.wikipedia.org/wiki/Extreme_value_theory
|
In the mathematical field of numerical analysis, interpolation is a type of estimation, a method of constructing (finding) new data points based on the range of a discrete set of known data points.[1][2]
In engineering and science, one often has a number of data points, obtained by sampling or experimentation, which represent the values of a function for a limited number of values of the independent variable. It is often required to interpolate; that is, estimate the value of that function for an intermediate value of the independent variable.
A closely related problem is the approximation of a complicated function by a simple function. Suppose the formula for some given function is known, but too complicated to evaluate efficiently. A few data points from the original function can be interpolated to produce a simpler function which is still fairly close to the original. The resulting gain in simplicity may outweigh the loss from interpolation error and give better performance in the calculation process.
This table gives some values of an unknown function f(x):

x: 0, 1, 2, 3, 4, 5, 6
f(x): 0, 0.8415, 0.9093, 0.1411, −0.7568, −0.9589, −0.2794
Interpolation provides a means of estimating the function at intermediate points, such as x = 2.5.
We describe some methods of interpolation, differing in such properties as accuracy, cost, number of data points needed, and smoothness of the resulting interpolant function.
The simplest interpolation method is to locate the nearest data value and assign the same value. In simple problems, this method is unlikely to be used, as linear interpolation (see below) is almost as easy, but in higher-dimensional multivariate interpolation, this could be a favourable choice for its speed and simplicity.
One of the simplest methods is linear interpolation (sometimes known as lerp). Consider the above example of estimating f(2.5). Since 2.5 is midway between 2 and 3, it is reasonable to take f(2.5) midway between f(2) = 0.9093 and f(3) = 0.1411, which yields 0.5252.
Generally, linear interpolation takes two data points, say (x_a, y_a) and (x_b, y_b), and the interpolant is given by:
This previous equation states that the slope of the new line between (x_a, y_a) and (x, y) is the same as the slope of the line between (x_a, y_a) and (x_b, y_b).
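A minimal sketch of this formula in code, reproducing the worked example above (f(2) = 0.9093 and f(3) = 0.1411 give f(2.5) ≈ 0.5252); the function name is purely illustrative.

```python
def lerp(xa, ya, xb, yb, x):
    """Linear interpolant at x for the segment (xa, ya)-(xb, yb)."""
    return ya + (yb - ya) * (x - xa) / (xb - xa)

print(lerp(2.0, 0.9093, 3.0, 0.1411, 2.5))  # 0.5252
```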
Linear interpolation is quick and easy, but it is not very precise. Another disadvantage is that the interpolant is not differentiable at the point x_k.
The following error estimate shows that linear interpolation is not very precise. Denote the function which we want to interpolate by g, and suppose that x lies between x_a and x_b and that g is twice continuously differentiable. Then the linear interpolation error is
In words, the error is proportional to the square of the distance between the data points. The error in some other methods, including polynomial interpolation and spline interpolation (described below), is proportional to higher powers of the distance between the data points. These methods also produce smoother interpolants.
Polynomial interpolation is a generalization of linear interpolation. Note that the linear interpolant is a linear function. We now replace this interpolant with a polynomial of higher degree.
Consider again the problem given above. The following sixth-degree polynomial goes through all seven points:
Substituting x = 2.5, we find that f(2.5) ≈ 0.59678.
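A minimal sketch of this calculation with NumPy, assuming the seven tabulated points are x = 0, 1, …, 6 with the f(x) values listed in the table above.

```python
import numpy as np

x = np.arange(7)
y = np.array([0.0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794])

coeffs = np.polyfit(x, y, deg=6)   # the unique degree-6 polynomial through 7 points
print(np.polyval(coeffs, 2.5))     # ≈ 0.597, matching the value quoted above
```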
Generally, if we have n data points, there is exactly one polynomial of degree at most n−1 going through all the data points. The interpolation error is proportional to the distance between the data points to the power n. Furthermore, the interpolant is a polynomial and thus infinitely differentiable. So, we see that polynomial interpolation overcomes most of the problems of linear interpolation.
However, polynomial interpolation also has some disadvantages. Calculating the interpolating polynomial is computationally expensive (see computational complexity) compared to linear interpolation. Furthermore, polynomial interpolation may exhibit oscillatory artifacts, especially at the end points (see Runge's phenomenon).
Polynomial interpolation can estimate local maxima and minima that are outside the range of the samples, unlike linear interpolation. For example, the interpolant above has a local maximum at x ≈ 1.566, f(x) ≈ 1.003 and a local minimum at x ≈ 4.708, f(x) ≈ −1.003. However, these maxima and minima may exceed the theoretical range of the function; for example, a function that is always positive may have an interpolant with negative values, and whose inverse therefore contains false vertical asymptotes.
More generally, the shape of the resulting curve, especially for very high or low values of the independent variable, may be contrary to common sense; that is, to what is known about the experimental system which has generated the data points. These disadvantages can be reduced by using spline interpolation or restricting attention to Chebyshev polynomials.
Linear interpolation uses a linear function for each of the intervals [x_k, x_k+1]. Spline interpolation uses low-degree polynomials in each of the intervals, and chooses the polynomial pieces such that they fit smoothly together. The resulting function is called a spline.
For instance, the natural cubic spline is piecewise cubic and twice continuously differentiable. Furthermore, its second derivative is zero at the end points. The natural cubic spline interpolating the points in the table above is given by
In this case we get f(2.5) = 0.5972.
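A minimal sketch with SciPy, again assuming the tabulated points are x = 0, 1, …, 6 with the f(x) values given above.

```python
import numpy as np
from scipy.interpolate import CubicSpline

x = np.arange(7)
y = np.array([0.0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794])

spline = CubicSpline(x, y, bc_type="natural")  # second derivative is zero at the end points
print(spline(2.5))                             # ≈ 0.597, consistent with the value quoted above
```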
Like polynomial interpolation, spline interpolation incurs a smaller error than linear interpolation, while the interpolant is smoother and easier to evaluate than the high-degree polynomials used in polynomial interpolation. However, the global nature of the basis functions leads to ill-conditioning. This is completely mitigated by using splines of compact support, such as are implemented in Boost.Math and discussed in Kress.[3]
Depending on the underlying discretisation of fields, different interpolants may be required. In contrast to other interpolation methods, which estimate functions on target points, mimetic interpolation evaluates the integral of fields on target lines, areas or volumes, depending on the type of field (scalar, vector, pseudo-vector or pseudo-scalar).
A key feature of mimetic interpolation is that vector calculus identities are satisfied, including Stokes' theorem and the divergence theorem. As a result, mimetic interpolation conserves line, area and volume integrals.[4] Conservation of line integrals might be desirable when interpolating the electric field, for instance, since the line integral gives the electric potential difference at the endpoints of the integration path.[5] Mimetic interpolation ensures that the error of estimating the line integral of an electric field is the same as the error obtained by interpolating the potential at the end points of the integration path, regardless of the length of the integration path.
Linear, bilinear and trilinear interpolation are also considered mimetic, even if it is the field values that are conserved (not the integral of the field). Apart from linear interpolation, area weighted interpolation can be considered one of the first mimetic interpolation methods to have been developed.[6]
The Theory of Functional Connections (TFC) is a mathematical framework specifically developed for functional interpolation. Given any interpolant that satisfies a set of constraints, TFC derives a functional that represents the entire family of interpolants satisfying those constraints, including those that are discontinuous or partially defined. These functionals identify the subspace of functions where the solution to a constrained optimization problem resides. Consequently, TFC transforms constrained optimization problems into equivalent unconstrained formulations. This transformation has proven highly effective in the solution of differential equations. TFC achieves this by constructing a constrained functional (a function of a free function) that inherently satisfies given constraints regardless of the expression of the free function. This simplifies solving various types of equations and significantly improves the efficiency and accuracy of methods like Physics-Informed Neural Networks (PINNs). TFC offers advantages over traditional methods like Lagrange multipliers and spectral methods by directly addressing constraints analytically and avoiding iterative procedures, although it cannot currently handle inequality constraints.
Interpolation is a common way to approximate functions. Given a function f : [a, b] → ℝ with a set of points x_1, x_2, …, x_n ∈ [a, b], one can form a function s : [a, b] → ℝ such that f(x_i) = s(x_i) for i = 1, 2, …, n (that is, s interpolates f at these points). In general, an interpolant need not be a good approximation, but there are well-known and often reasonable conditions where it will be. For example, if f ∈ C^4([a, b]) (four times continuously differentiable), then cubic spline interpolation has an error bound given by ‖f − s‖_∞ ≤ C ‖f^(4)‖_∞ h^4, where h = max_{i=1,…,n−1} |x_{i+1} − x_i| and C is a constant.[7]
Gaussian process is a powerful non-linear interpolation tool. Many popular interpolation tools are actually equivalent to particular Gaussian processes. Gaussian processes can be used not only for fitting an interpolant that passes exactly through the given data points but also for regression; that is, for fitting a curve through noisy data. In the geostatistics community Gaussian process regression is also known as Kriging.
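A minimal sketch of Gaussian process interpolation with scikit-learn, using the same assumed sample points as above; the RBF kernel and the near-zero noise level are illustrative choices, not the only possible ones.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

X = np.arange(7, dtype=float).reshape(-1, 1)
y = np.array([0.0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-10)  # alpha ~ 0: exact interpolation
gp.fit(X, y)
mean, std = gp.predict(np.array([[2.5]]), return_std=True)  # prediction and its uncertainty
```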
Inverse Distance Weighting (IDW) is a spatial interpolation method that estimates values based on nearby data points, with closer points having more influence.[8] It uses an inverse power law for weighting, where higher power values emphasize local effects, while lower values create a smoother surface. IDW is widely used in GIS, meteorology, and environmental modeling for its simplicity but may produce artifacts in clustered or uneven data.[9]
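A minimal one-dimensional sketch of IDW; the power parameter p = 2 is a common but assumed default, and the sample points are the same illustrative values as above.

```python
import numpy as np

def idw(x_known, y_known, x_query, p=2.0):
    """Inverse distance weighted estimate at x_query from known samples."""
    d = np.abs(x_known - x_query)
    if np.any(d == 0):                 # the query coincides with a known point
        return y_known[np.argmin(d)]
    w = 1.0 / d**p                     # closer points receive larger weights
    return np.sum(w * y_known) / np.sum(w)

x = np.arange(7, dtype=float)
y = np.array([0.0, 0.8415, 0.9093, 0.1411, -0.7568, -0.9589, -0.2794])
print(idw(x, y, 2.5))
```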
Other forms of interpolation can be constructed by picking a different class of interpolants. For instance, rational interpolation is interpolation by rational functions using the Padé approximant, and trigonometric interpolation is interpolation by trigonometric polynomials using Fourier series. Another possibility is to use wavelets.
The Whittaker–Shannon interpolation formula can be used if the number of data points is infinite or if the function to be interpolated has compact support.
Sometimes, we know not only the value of the function that we want to interpolate at some points, but also its derivative. This leads to Hermite interpolation problems.
When each data point is itself a function, it can be useful to see the interpolation problem as a partial advection problem between each data point. This idea leads to the displacement interpolation problem used in transportation theory.
Multivariate interpolation is the interpolation of functions of more than one variable.
Methods include nearest-neighbor interpolation, bilinear interpolation and bicubic interpolation in two dimensions, and trilinear interpolation in three dimensions.
They can be applied to gridded or scattered data. Mimetic interpolation generalizes to n-dimensional spaces where n > 3.[10][11]
In the domain of digital signal processing, the term interpolation refers to the process of converting a sampled digital signal (such as a sampled audio signal) to a higher sampling rate (upsampling) using various digital filtering techniques (for example, convolution with a frequency-limited impulse signal). In this application there is a specific requirement that the harmonic content of the original signal be preserved without creating aliased harmonic content of the original signal above the original Nyquist limit of the signal (that is, above fs/2 of the original signal sample rate). An early and fairly elementary discussion on this subject can be found in Rabiner and Crochiere's book Multirate Digital Signal Processing.[12]
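A minimal sketch of such band-limited upsampling using SciPy's polyphase resampler; the sample rates and the test tone are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import resample_poly

fs = 8_000                                 # assumed original sample rate (Hz)
t = np.arange(0, 0.01, 1 / fs)
x = np.sin(2 * np.pi * 1_000 * t)          # a 1 kHz tone sampled at 8 kHz

x_up = resample_poly(x, up=2, down=1)      # interpolate to 16 kHz without adding content above fs/2
```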
The term extrapolation is used to find data points outside the range of known data points.
In curve fitting problems, the constraint that the interpolant has to go exactly through the data points is relaxed. It is only required to approach the data points as closely as possible (within some other constraints). This requires parameterizing the potential interpolants and having some way of measuring the error. In the simplest case this leads to least squares approximation.
Approximation theory studies how to find the best approximation to a given function by another function from some predetermined class, and how good this approximation is. This clearly yields a bound on how well the interpolant can approximate the unknown function.
If we consider x as a variable in a topological space, and the function f(x) mapping to a Banach space, then the problem is treated as "interpolation of operators".[13] The classical results about interpolation of operators are the Riesz–Thorin theorem and the Marcinkiewicz theorem. There are also many other subsequent results.
|
https://en.wikipedia.org/wiki/Interpolation
|
In economics, deadweight loss is the loss of societal economic welfare due to production/consumption of a good at a quantity where marginal benefit (to society) does not equal marginal cost (to society). In other words, there are either goods being produced despite the cost of doing so being larger than the benefit, or additional goods are not being produced despite the fact that the benefits of their production would be larger than the costs. The deadweight loss is the net benefit that is missed out on. While losses to one entity often lead to gains for another, deadweight loss represents the loss that is not regained by anyone else. This loss is therefore[1] attributed to both producers and consumers.
Deadweight loss can also be a measure of lost economic efficiency when the socially optimal quantity of a good or a service is not produced. Non-optimal production can be caused by monopoly pricing in the case of artificial scarcity, a positive or negative externality, a tax or subsidy, or a binding price ceiling or price floor such as a minimum wage.
Assume a market for nails where the cost of each nail is $0.10. Demand decreases linearly; there is a high demand for free nails and zero demand for nails at a price per nail of $1.10 or higher. The price of $0.10 per nail represents the point of economic equilibrium in a competitive market.
If market conditions are perfect competition, producers would charge a price of $0.10, and every customer whose marginal benefit exceeds $0.10 would buy a nail. A monopoly producer of this product would typically charge whatever price will yield the greatest profit for themselves, regardless of lost efficiency for the economy as a whole. In this example, the monopoly producer charges $0.60 per nail, thus excluding every customer from the market with a marginal benefit less than $0.60. The deadweight loss due to monopoly pricing would then be the economic benefit foregone by customers with a marginal benefit of between $0.10 and $0.60 per nail. The monopolist has "priced them out of the market", even though their benefit exceeds the true cost per nail.
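The sketch below puts numbers on this triangle; the demand curve's scale (1,000 nails demanded at the competitive price of $0.10) is an assumption made purely for illustration, since the article does not specify quantities.

```python
marginal_cost = 0.10    # cost of producing one nail ($)
monopoly_price = 0.60   # the monopoly price in the example ($)
choke_price = 1.10      # price at which demand falls to zero ($)

def quantity_demanded(price, scale=1000.0):
    """Assumed linear demand: 1000 nails at $0.10, zero nails at $1.10."""
    return scale * (choke_price - price) / (choke_price - marginal_cost)

q_competitive = quantity_demanded(marginal_cost)   # 1000 nails
q_monopoly = quantity_demanded(monopoly_price)     # 500 nails

# Deadweight loss = area of the triangle between demand and marginal cost
# over the units that are no longer traded.
dwl = 0.5 * (monopoly_price - marginal_cost) * (q_competitive - q_monopoly)
print(dwl)   # $125 under these assumed numbers
```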
Conversely, deadweight loss can also arise from consumers buying more of a product than they otherwise would based on their marginal benefit and the cost of production. For example, if in the same nail market the government provided a $0.03 subsidy for every nail produced, the subsidy would reduce the market price of each nail to $0.07, even though production actually still costs $0.10 per nail. Consumers with a marginal benefit of between $0.07 and $0.10 per nail would then buy nails, even though their benefit is less than the real production cost of $0.10. The difference between the cost of production and the purchase price then creates the "deadweight loss" to society.
A tax has the opposite effect of a subsidy. Whereas a subsidy entices consumers to buy a product that would otherwise be too expensive for them in light of their marginal benefit (price is lowered to artificially increase demand), a tax dissuades consumers from a purchase (price is increased to artificially lower demand). This excess burden of taxation represents the lost utility for the consumer. A common example of this is the so-called sin tax, a tax levied against goods deemed harmful to society and individuals. For example, "sin taxes" levied against alcohol and tobacco are intended to artificially lower demand for these goods; some would-be users are priced out of the market, i.e. total smoking and drinking are reduced. Products such as alcohol and tobacco have historically been highly taxed and incur excise duties, which are one of the categories of indirect tax.
An indirect tax (such as VAT) weighs on the consumer: it is not a cause of lost surplus for the producer, but it affects consumer utility and leads to deadweight loss for consumers. Indirect taxes are usually paid by large entities such as corporations or manufacturers but are partially shifted towards the consumer. Furthermore, indirect taxes can be charged based on the unit price of a commodity or can be calculated based on a percentage of the final retail price. Additionally, indirect taxes can either be collected at one stage of the production and retail process or alternatively can be charged and collected at multiple stages of the overall production process of a commodity.
Harberger's triangle, generally attributed to Arnold Harberger, shows the deadweight loss (as measured on a supply and demand graph) associated with government intervention in a perfect market. Mechanisms for this intervention include price floors, caps, taxes, tariffs, or quotas. It also refers to the deadweight loss created by a government's failure to intervene in a market with externalities.[2]
In the case of a government tax, the amount of the tax drives a wedge between what consumers pay and what producers receive, and the area of this wedge shape is equivalent to the deadweight loss caused by the tax.[3]
The area represented by the triangle results from the fact that the intersection of the supply and demand curves is cut short. The consumer surplus and the producer surplus are also cut short. The loss of such surplus is never recouped and represents the deadweight loss.
Some economists like Martin Feldstein maintain that these triangles can seriously affect long-term economic trends by pivoting the trend downwards and causing a magnification of losses in the long run, but others like James Tobin have argued that they do not have a huge impact on the economy.
The Hicksian (per John Hicks) and the Marshallian (per Alfred Marshall) demand functions differ about deadweight loss. After the consumer surplus is considered, it can be shown that the Marshallian deadweight loss is zero if demand is perfectly elastic or supply is perfectly inelastic. However, Hicks analyzed the situation through indifference curves and noted that when the Marshallian demand curve is perfectly inelastic, the policy or economic situation that caused a distortion in relative prices has a substitution effect, i.e. is a deadweight loss.
In modern economic literature, the most common measure of a taxpayer's loss from a distortionary tax, such as a tax on bicycles, is the equivalent variation, the maximum amount that a taxpayer would be willing to forgo in a lump sum to avoid the tax. The deadweight loss can then be interpreted as the difference between the equivalent variation and the revenue raised by the tax. The difference is attributable to the behavioral changes induced by a distortionary tax that are measured by the substitution effect. However, that is not the only interpretation, and Pigou did not use a lump sum tax as the point of reference to discuss deadweight loss (excess burden).[4]
When a tax is levied on buyers, the demand curve shifts downward in accordance with the size of the tax. Similarly, when a tax is levied on sellers, the supply curve shifts upward by the size of the tax. When the tax is imposed, the price paid by buyers increases, and the price received by sellers decreases. Therefore, buyers and sellers share the burden of the tax, regardless of how it is imposed. Since a tax places a "wedge" between the price buyers pay and the price sellers get, the quantity sold is reduced below the level that it would be without the tax. To put it another way, a tax on a good causes the size of the market for that good to decrease.
For example, suppose that Will is a cleaner who is working in a cleaning service company and Amie hired Will to clean her room every week for $100. The opportunity cost of Will's time is $80, while the value of a clean house to Amie is $120. Hence, each of them gets the same amount of benefit from their deal. Amie and Will each receive a benefit of $20, making the total surplus from trade $40.
However, if the government were to decide to impose a $50 tax upon the providers of cleaning services, their trade would no longer benefit them. Amie would not be willing to pay any price above $120, and Will would no longer receive a payment that exceeds his opportunity cost. As a result, not only do Amie and Will both give up the deal, but Amie has to live in a dirtier house, and Will does not receive his desired income. They have thus lost the surplus that they would have received from their deal and are, between them, worse off by $40 in value.
Government revenue is also affected by this tax: since Amie and Will have abandoned the deal, the government also loses any tax revenue that would have resulted from wages. This $40 is referred to as the deadweight loss. It causes losses for both buyers and sellers in a market, as well as decreasing government revenues. Taxes cause deadweight losses because they prevent buyers and sellers from realizing some of the gains from trade.[5]
In the graph, the deadweight loss can be seen as the shaded area between the supply and demand curves. While the demand curve shows the value of goods to the consumers, the supply curve reflects the cost for producers. As the example above explains, when the government imposes a tax upon taxpayers, the tax increases the price paid by buyers to P_c and decreases the price received by sellers to P_p. Buyers and sellers (Amie and Will) give up the deal between them and exit the market. Thus, the quantity sold is reduced from Q_e to Q_t. The deadweight loss occurs because the tax deters these kinds of beneficial trades in the market.[5]
Price elasticities of supply and demand determine whether the deadweight loss from a tax is large or small. This measures to what extent quantity supplied and quantity demanded respond to changes in price. For instance, when the supply curve is relatively inelastic, quantity supplied responds only minimally to changes in the price. However, when the supply curve is more elastic, quantity supplied responds significantly to changes in price. In other words, when the supply curve is more elastic, the area between the supply and demand curves is larger. Similarly, when the demand curve is relatively inelastic, the deadweight loss from the tax is smaller, compared to a more elastic demand curve.
A tax results in deadweight loss as it causes buyers and sellers to change their behaviour. Buyers tend to consume less when the tax raises the price. When the tax lowers the price received by sellers, they in turn produce less. As a result, the overall size of the market decreases below the optimum equilibrium. The elasticities of supply and demand determine to what extent the tax distorts the market outcome. As the elasticities of supply and demand increase, so does the deadweight loss resulting from a tax.[5]
Taxes may be changed by the government or policymakers at different levels. For instance, when a low tax is levied, the deadweight loss is also small (compared to a medium or high tax). An important consideration is that the deadweight loss resulting from a tax increases more quickly than the tax itself, because the area of the triangle representing the deadweight loss depends on the square of its dimensions. Where a tax increases linearly, the deadweight loss increases as the square of the tax increase. This means that when the size of a tax doubles, the base and height of the triangle double. Thus, doubling the tax increases the deadweight loss by a factor of 4.
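The sketch below illustrates this squared relationship for one assumed pair of linear curves (demand P = 10 − Q and supply P = Q); the specific curves are illustrative only.

```python
def deadweight_loss(tax):
    """DWL triangle for assumed linear curves: demand P = 10 - Q, supply P = Q."""
    q_no_tax = 5.0                    # equilibrium quantity without the tax
    q_with_tax = (10.0 - tax) / 2.0   # quantity at which the price wedge equals the tax
    return 0.5 * tax * (q_no_tax - q_with_tax)

print(deadweight_loss(1.0))   # 0.25
print(deadweight_loss(2.0))   # 1.00 -> doubling the tax quadruples the loss
```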
The varying deadweight loss from a tax also affects the government's total tax revenue. Tax revenue is represented by the area of the rectangle between the supply and demand curves. When a low tax is levied, tax revenue is relatively small. As the size of the tax increases, tax revenue expands. However, when a much higher tax is levied, tax revenue eventually decreases. The higher tax reduces the total size of the market; although taxes take a larger slice of the "pie", the total size of the pie is reduced. Just as in the nail example above, beyond a certain point, the market for a good will eventually decrease to zero.[5]
A deadweight loss occurs with monopolies in the same way that a tax causes deadweight loss. When a monopoly, acting as a "tax collector", charges a price in order to consolidate its power above marginal cost, it drives a "wedge" between the costs borne by the consumer and supplier. Imposing this effective tax distorts the market outcome, and the wedge causes a decrease in the quantity sold, below the social optimum. It is important to remember the difference between the two cases: whereas the government receives the revenue from a genuine tax, monopoly profits are collected by a private firm.[5]
|
https://en.wikipedia.org/wiki/Deadweight_loss
|
In American political theory, fiscal conservatism or economic conservatism[1] is a political and economic philosophy regarding fiscal policy and fiscal responsibility with an ideological basis in capitalism, individualism, limited government, and laissez-faire economics.[2][3] Fiscal conservatives advocate tax cuts, reduced government spending, free markets, deregulation, privatization, free trade, and minimal government debt.[4] Fiscal conservatism follows the same philosophical outlook as classical liberalism. This concept is derived from economic liberalism.[5]
The term has its origins in the era of the American New Deal during the 1930s as a result of the policies initiated by modern liberals, when many classical liberals started calling themselves conservatives as they did not wish to be identified with what was passing for liberalism in the United States.[6] In the United States, the term liberalism has become associated with the welfare state and expanded regulatory policies created as a result of the New Deal and its offshoots from the 1930s onwards.[7]
Fiscal conservatives formed one of the three legs of the traditional American conservative movement that emerged during the 1950s, together with social conservatism and national defense conservatism.[8][9] Many Americans who are classical liberals also tend to identify as libertarian,[10] holding more cultural liberal views and advocating a non-interventionist foreign policy while supporting lower taxes and less government spending.[8] As of 2020, 39% of Americans polled considered themselves "economically conservative".[11]
Because of its close proximity to the United States, the term has entered the lexicon in Canada.[12] In many other countries, economic liberalism or simply liberalism is used to describe what Americans call fiscal conservatism.[13][14]
Fiscal conservatism is the economic philosophy of prudence in government spending and debt. The principles of capitalism, limited government, and laissez-faire economics form its ideological foundation.[2][3] Fiscal conservatives advocate the avoidance of deficit spending, the lowering of taxes, and the reduction of overall government spending and national debt whilst ensuring balanced budgets. In other words, fiscal conservatives are against the government expanding beyond its means through debt, but they will usually choose debt over tax increases.[15] They strongly believe in libertarian principles such as individualism and free enterprise, and advocate deregulation, privatization, and free trade.[4]
In his Reflections on the Revolution in France, Edmund Burke argued that a government does not have the right to run up large debts and then throw the burden on the taxpayer, writing "it is to the property of the citizen, and not to the demands of the creditor of the state, that the first and original faith of civil society is pledged. The claim of the citizen is prior in time, paramount in title, superior in equity. The fortunes of individuals, whether possessed by acquisition or by descent or in virtue of a participation in the goods of some community, were no part of the creditor's security, expressed or implied. ... [T]he public, whether represented by a monarch or by a senate, can pledge nothing but the public estate; and it can have no public estate except in what it derives from a just and proportioned imposition upon the citizens at large".[16]
Although all fiscal conservatives agree generally on a smaller and less expensive government, there are disagreements over priorities.[8] There are three main factions or subgroups, each advocating for a particular emphasis. Deficit hawks emphasize balancing government budgets and reducing the size of government debt, viewing government debt as economically damaging and morally dubious since it passes obligations on to future generations who have played no part in present-day tax and spending decisions.[8] Deficit hawks are willing to consider tax increases if the additional revenue is used to reduce debt rather than increase spending.[8]
A second group puts its main emphasis on tax cuts rather than spending cuts or debt reduction. Many embrace supply-side economics, arguing that as high taxes discourage economic activity and investment, tax cuts would result in economic growth, leading in turn to higher government revenues.[8] According to them, these additional government revenues would reduce the debt in the long term. They also argue for reducing taxes even if it were to lead to short-term increases in the deficit.[8] Some supply-siders have advocated that the increases in revenue through tax cuts make drastic cuts in spending unnecessary.[8] However, the Congressional Budget Office has consistently reported that income tax cuts increase deficits and debt and do not pay for themselves. For example, the CBO estimated that the Bush tax cuts added about $1.5 trillion to deficits and debt from 2002 to 2011[17] and would have added nearly $3 trillion to deficits and debt over the 2010–2019 decade if fully extended at all income levels.[18]
A third group makes little distinction between debt and taxes. This group emphasizes reduction in spending rather than tax policy or debt reduction.[19] They argue that the true cost of government is the level of spending, not how that spending is financed.[8] Every dollar that the government spends is a dollar taken from workers, regardless of whether it is from debt or taxes. Taxes simply redistribute purchasing power; they do so in a particularly inefficient manner, reducing the incentives to produce or hire, and borrowing simply forces businesses and investors to anticipate higher taxes later on.[19]
Classical liberalism in the United States forms the historical foundation for modern fiscal conservatism. Kathleen G. Donohue argues that classical liberalism in the 19th-century United States was distinct from its counterpart in Britain:
[A]t the center of classical liberal theory [in Europe] was the idea of laissez-faire. To the vast majority of American classical liberals, however, laissez-faire did not mean no government intervention at all. On the contrary, they were more than willing to see government provide tariffs, railroad subsidies, and internal improvements, all of which benefited producers. What they condemned was intervention in behalf of consumers.[20]
Economic liberalism owes its ideological creation to the classical liberalism tradition in the vein of Adam Smith, Friedrich Hayek, Milton Friedman, Ayn Rand, and Ludwig von Mises.[20] They provided moral justifications for free markets. Liberals of the time, in contrast to modern ones, disliked government authority and preferred individualism. They saw free market capitalism as the preferable means of achieving economic ends.[2][3]
In the early 20th century, fiscal conservatives were often at odds with progressives who desired economic reform. During the 1920s, Republican President Calvin Coolidge's pro-business economic policies were credited for the successful period of economic growth known as the Roaring Twenties. However, his actions may have been due more to a sense of federalism than fiscal conservatism, as Robert Sobel notes: "As Governor of Massachusetts, Coolidge supported wages and hours legislation, opposed child labor, imposed economic controls during World War I, favored safety measures in factories, and even worker representation on corporate boards".[21]
Contrary to popular opinion, then-Republican President Herbert Hoover was not a fiscal conservative. He promoted government intervention during the early Great Depression, a policy that his successor, Democratic President Franklin D. Roosevelt, continued and increased[22] despite campaigning to the contrary.[23] Coolidge's economic policies are often popularly contrasted with the New Deal deficit spending of Roosevelt, and Republican Party opposition to Roosevelt's government spending was a unifying cause for a significant caucus of Republicans through even the presidencies of Harry S. Truman and Dwight D. Eisenhower. Barry Goldwater was a famous champion of both the socially and fiscally conservative Republicans.[24]
In 1977, Democratic President Jimmy Carter appointed Alfred E. Kahn, a professor of economics at Cornell University, to be chair of the Civil Aeronautics Board (CAB). He was part of a push for deregulation of the industry, supported by leading economists, leading think tanks in Washington, a civil society coalition advocating the reform (patterned on a coalition earlier developed for the truck-and-rail-reform efforts), the head of the regulatory agency, Senate leadership, the Carter administration, and even some in the airline industry. This coalition swiftly gained legislative results in 1978.[25]
The Airline Deregulation Act (Pub. L. 95–504) was signed into law by President Carter on October 24, 1978. The main purpose of the act was to remove government control over fares, routes and market entry of new airlines from commercial aviation. The CAB's powers of regulation were to be phased out, eventually allowing market forces to determine routes and fares. The Act did not remove or diminish the Federal Aviation Administration's regulatory powers over all aspects of airline safety.[26]
In 1979, Carter deregulated the American beer industry by making it legal to sell malt, hops and yeast to American home brewers for the first time since the effective 1920 beginning of Prohibition in the United States. This Carter deregulation led to an increase in home brewing over the 1980s and 1990s that by the 2000s had developed into a strong craft microbrew culture in the United States, with 3,418 micro breweries, brewpubs and regional craft breweries in the United States by the end of 2014.[27]
Public debt as a percentage of GDP fell rapidly in the post-World War II period and reached a low in 1974 under Richard Nixon. Debt as a share of GDP has consistently increased since then, except under Carter and Bill Clinton. The United States national debt rose during the 1980s as Ronald Reagan cut tax rates and increased military spending. The numbers of public debt as a percentage of GDP are indicative of the process:[28][29]
Fiscal conservatism was rhetorically promoted during the presidency of Republican Ronald Reagan (1981–1989). During Reagan's tenure, the top personal income tax bracket dropped from 70% to 28%,[31] while payroll taxes and the effective tax rates on the lower two income quintiles increased.[32][33] Reagan cut the maximum capital gains tax from 28% to 20%, though in his second term he raised it back up to 28%. He successfully increased defense spending, but conversely liberal Democrats blocked his efforts to cut domestic spending.[34] Real GDP growth recovered strongly after the 1982 recession, growing at an annual rate of 3.4% for the rest of his time in office.[35] Unemployment dropped after peaking at over 10.7% in 1982, and inflation decreased significantly. Federal tax receipts nearly doubled from $517 billion in 1980 to $1,032 billion in 1990. Employment grew at about the same rate as the population.[36]
According to a nonpartisan economic study by the United States Department of the Treasury, the major tax bills enacted under Reagan caused federal revenue to fall by an amount equal to roughly 1% of GDP.[37] Although Reagan did not offset the increase in federal government spending or reduce the deficit, his accomplishments are more notable when expressed as a percentage of the gross domestic product. Federal spending fell from 22.2% of GDP to 21.2%.[38] By the end of Reagan's second term, the national debt held by the public had increased by almost 60% and the total debt equalled $2.6 trillion. In fewer than eight years, the United States went from being the world's largest creditor nation to the world's largest debtor nation.[39]
In the 1992 presidential election, Ross Perot, a successful American businessman, ran as a third-party candidate. Despite significant campaign stumbles and the uphill struggles involved in mounting a third-party candidacy, Perot received 18.9% of the popular vote (the largest percentage of any third-party candidate in modern history), largely on the basis of his central platform plank of limited-government, balanced-budget fiscal conservatism.[40]
While the mantle of fiscal conservatism is most commonly claimed by Republicans and libertarians, it is also claimed in some ways by many centrist or moderate Democrats who often refer to themselves as New Democrats. Although not supportive of the wide range of tax cut policies that were often enacted during the Reagan and Bush administrations,[41][42] the New Democrat coalition's primary economic agenda differed from the traditional philosophy held by liberal Democrats and sided with the fiscal conservative belief that a balanced federal budget should take precedence over some spending programs.[42]
Former President Bill Clinton, who was a New Democrat and part of the somewhat fiscally conservative, Third Way-advocating Democratic Leadership Council, is a prime example of this, as his administration, along with the Democratic-majority Congress of 1993, passed on a party-line vote the Omnibus Budget Reconciliation Act of 1993, which cut government spending, created a 36% individual income tax bracket, raised the top tax bracket (which encompassed the top 1.2% earning taxpayers) from 31% to 39.6%, and created a 35% income tax rate for corporations.[43] The 1993 Budget Act also cut taxes for fifteen million low-income families and 90% of small businesses. Additionally, during the Clinton years the PAYGO (pay-as-you-go) system originally introduced with the passing of the Budget Enforcement Act of 1990 (which required that all increases in direct spending or revenue decreases be offset by other spending decreases or revenue increases and was very popular with deficit hawks) had gone into effect and was used regularly until the system's expiration in 2002.[44]
In the 1994 midterm elections, Republicans ran on a platform that included fiscal responsibility drafted by then-Congressman Newt Gingrich called the Contract with America, which advocated such things as balancing the budget, providing the President with a line-item veto and welfare reform. After the elections gave the Republicans a majority in the House of Representatives, newly minted Speaker of the House Gingrich pushed aggressively for reduced government spending, which created a confrontation with the White House that climaxed in the 1995–1996 government shutdown. After Clinton's re-election in 1996, they were able to cooperate and pass the Taxpayer Relief Act of 1997, which lowered the top capital gains tax rate from 28% to 20% and the 15% rate to 10%.[44]
After this combination of tax hikes and spending reductions, the United States was able to create budget surpluses from fiscal years 1998–2001 (the first time since 1969) and saw the longest period of sustained economic growth in United States history.[45][46][47]
American businessman, politician and former Mayor of New York City Michael Bloomberg considers himself a fiscal conservative and expressed his definition of the term at the 2007 British Conservative Party Conference, stating:
To me, fiscal conservatism means balancing budgets – not running deficits that the next generation can't afford. It means improving the efficiency of delivering services by finding innovative ways to do more with less. It means cutting taxes when possible and prudent to do so, raising them overall only when necessary to balance the budget, and only in combination with spending cuts. It means when you run a surplus, you save it; you don't squander it. And most importantly, being a fiscal conservative means preparing for the inevitable economic downturns – and by all indications, we've got one coming.[48]
While the term "fiscal conservatism" would imply budget deficits would be lower under conservatives (i.e., Republicans), this has not historically been the case. EconomistsAlan Blinderand Mark Watsonreportedin 2016 that budget deficits since WW2 tended to be smaller under Democratic Party presidents, at 2.1% potential GDP versus 2.8% potential GDP for Republican presidents, a difference of about 0.7% GDP. They wrote that higher budget deficits should theoretically have boosted the economy more for Republicans, and therefore cannot explain the greater GDP growth under Democrats.[49]
As a result of the expansion of the welfare state and increased regulatory policies by the Roosevelt administration beginning in the 1930s, in the United States the term liberalism has today become associated with modern rather than classical liberalism.[7]In Western Europe, the expanded welfare states created after World War II were created bysocialistorsocial-democraticparties such as the BritishLabour Partyrather than liberal parties.[7]Many liberal parties in Western Europe tend to adhere toclassical liberalism, with theFree Democratic Partyin Germany being one example.[7]TheLiberal Democratsin the United Kingdom have aclassical liberaland asocial liberalwing of the party. In many countries,liberalismoreconomic liberalismis used to describe what Americans call fiscal conservatism.[13][7][50]
Fiscal conservatism in the United Kingdom was arguably most popular during the premiership ofConservativeMargaret Thatcher. After a number of years of deficit spending under the previous Labour government, Thatcher advocated spending cuts and selective tax increases to balance the budget. As a result of the deterioration in the United Kingdom's public finances—according to fiscal conservatives caused by another spate of deficit spending under the previous Labour government, thelate-2000s recessionand by theEuropean sovereign debt crisis—theCameron–Clegg coalition(Conservative–Liberal Democrats) embarked on anausterity programmefeaturing a combination of spending cuts and tax rises in an attempt to halve thedeficitand eliminate thestructural deficitover the five-year parliament.[51]
In Canada, the rise of the socialistCo-operative Commonwealth Federationpushed theLiberal Partyto create and expand the welfare state before and after World War II.[7]Fiscal conservatism in Canada is generally referred to asblue Toryismwhen it is present within theConservative Party of Canada.[52]In Alberta, fiscal conservatism is represented by theUnited Conservative Party.[53]In Ontario, fiscal conservatism is represented by theProgressive Conservative Party of Ontario.[52]
The term is sometimes used in South Korea, where left-liberalDemocratic Party of Korea(DPK) and conservativePeople Power Party(PPP) are the two main parties.[54]Fiscal conservatism is mainly represented by PPP.[55]South Korea's current president,Yoon Suk-yeol, is known as a "fiscal conservative".[56]
|
https://en.wikipedia.org/wiki/Fiscal_conservatism
|
The following outline is provided as an overview of and topical guide to economics. Economics is a branch of science that analyzes the production, distribution, and consumption of goods and services. It aims to explain how economies work and how agents (people) respond to incentives.
Economics is a behavioral science (a scientific discipline that focuses on the study of human behavior) as well as a social science (a scientific discipline that explores aspects of human society).
Economy – system of human activities related to the production, distribution, exchange, and consumption of goods and services of a country or other area.
Economic policy – all strategic interventions by public administrations – including the state, central bank, and local authorities – across economic activity, aimed at achieving objectives like growth, full employment, and social justice, thereby correcting existing imbalances.
Infrastructure
Market
Market form
Money – a medium of exchange, a unit of account, a store of value and, sometimes, a standard of deferred payment.
Resource
Resource management
Factors of production
Land
Capital – durable produced goods that are in turn used as productive inputs for further production of goods and services
History of economic thought
Economic history
|
https://en.wikipedia.org/wiki/List_of_economics_topics
|
The Rahn curve is a graph used to illustrate an economic theory, proposed in 1996 by American economist Richard W. Rahn, which suggests that there is a level of government spending that maximizes economic growth. The theory is used by classical liberals to argue for a decrease in overall government spending and taxation. The inverted-U-shaped curve suggests that the optimal level of government spending is 15–25% of GDP.[1][2]
|
https://en.wikipedia.org/wiki/Rahn_curve
|
Trickle-down economics, also known as the horse-and-sparrow theory,[1][2] is a pejorative term for government economic policies that disproportionately favor the upper tier of the economic spectrum (wealthy individuals and large corporations). The term has been used broadly by critics of supply-side economics to refer to taxing and spending policies by governments that, intentionally or not, result in widening income inequality; it has also been used in critical references to neoliberalism.[3] These critics reject the notion that spending by this elite group would "trickle down" to those who are less fortunate and lead to economic growth that will eventually benefit the economy as a whole.[4]
It has been criticized by economists on the grounds that no mainstream economist or major political party advocates theories or policies using the term trickle-down economics.[5] While criticisms have existed since at least the 19th century, the term "trickle-down economics" was popularized in the US in reference to supply-side economics and the economic policies of Ronald Reagan.[6]
Major examples of what critics have called "trickle-down economics" in the US include the Reagan tax cuts,[7] the Bush tax cuts,[8] and the Trump tax cuts.[9] Major UK examples include Margaret Thatcher's economic policies in the 1980s and Liz Truss's mini-budget tax cuts of 2022,[10] which was an attempt to revive such Thatcherite policies.[11] While economists who favor supply-side economics generally avoid applying the "trickle down" analogy to it and dispute the focus on tax cuts to the rich, the phrase "trickle down" has also been used by proponents of such policies.[4][12]
The concept that economic prosperity in the upper classes flows down into the lower classes is at least 100 years old. The Merriam-Webster Dictionary notes that the first known use of "trickle-down" as an adjective meaning "relating to or working on the principle of trickle-down theory" was in 1944,[13] while the first known use of "trickle-down theory" was in 1954.[14]
In 1896, United States Democratic presidential candidate William Jennings Bryan described the concept using the metaphor of a "leak" in his Cross of Gold speech.[15][16] William Safire traced the origin of the term to this speech.[17] William J. Bennett credits humorist and social commentator Will Rogers for coining the term and observed in 2007 its persistent use throughout the decades since.[18] In a 1932 column criticizing Herbert Hoover's policies and approach to the Great Depression, Rogers wrote:
This election was lost four and six years ago, not this year. They [Republicans] didn't start thinking of the old common fellow till just as they started out on the election tour. The money was all appropriated for the top in the hopes that it would trickle down to the needy. Mr. Hoover was an engineer. He knew that water trickles down. Put it uphill and let it go and it will reach the driest little spot. But he didn't know that money trickled up. Give it to the people at the bottom and the people at the top will have it before night, anyhow. But it will at least have passed through the poor fellow's hands. They saved the big banks, but the little ones went up the flue.[19]
In 1933, Indian nationalist and statesman Jawaharlal Nehru wrote positively of the term (in the sense that wealth entered the upper classes and then "trickled down") in critical reference to the colonial seizing of wealth in India and other territories being a cause of increased wealth in England:
The exploitation of India and other countries brought so much wealth to England that some of it trickled down to the working class and their standard of living rose.[20]
After leaving thepresidency, DemocratLyndon B. Johnsonalleged "Republicans... simply don't know how to manage the economy. They're so busy operating the trickle-down theory, giving the richest corporations the biggest break, that the whole thing goesto hell in a handbasket."[21]Presidential speechwriterSamuel Rosenmanwrote that "trickle down policies" had been prevalent in American government since 1921.[22]
Ronald Reagan launched his 1980 campaign for the presidency on a platform advocating supply-side economics. During the 1980 Republican Party presidential primaries, George H. W. Bush had derided Reagan's economic approach as "voodoo economics".[23][24] Following Reagan's election, the term "trickle-down" reached wide circulation with the publication of "The Education of David Stockman", a December 1981 interview of Reagan's incoming Office of Management and Budget director David Stockman in the magazine Atlantic Monthly. In the interview, Stockman expressed doubts about supply-side economics, telling journalist William Greider that the Kemp–Roth Tax Cut was a way to rebrand a tax cut for the top income bracket to make it easier to pass into law.[25] Stockman said that "It's kind of hard to sell 'trickle down,' so the supply-side formula was the only way to get a tax policy that was really 'trickle down.' Supply-side is 'trickle-down' theory."[25][26][27] Reagan administration officials including Michael Deaver wanted Stockman to be fired in response to his comments, but he was ultimately kept on in exchange for a private apology.[28]
Political opponents of theReagan administrationsoon seized on this language in an effort to brand the administration as caring only about the wealthy.[29]In 1982,John Kenneth Galbraithwrote the "trickle-down economics" thatDavid Stockmanwas referring to was previously known under the name "horse-and-sparrow theory", the idea that feeding a horse a huge amount of oats results in some of the feed passing through for lucky sparrows to eat.[30]
While the term "trickle-down" is commonly used to refer to income benefits, it is sometimes used to refer to the idea ofpositive externalitiesarising from technological innovation or increased trade.Arthur Okun,[31]and separatelyWilliam Baumol,[32]for example, have used the term to refer to the flow of the benefits of innovation, which do not accrue entirely to the "great entrepreneurs and inventors", but trickle down to the masses. And Nobel laureate economistPaul Romerused the term in reference to the impact on wealth from tariff changes.[33]TheLaffer curveis often cited by proponents of trickle-down policy.[34][10]
In the US, Republican tax plans and policies are often labeled "trickle-down economics", including the Reagan tax cuts, the Bush tax cuts and the Tax Cuts and Jobs Act of 2017.[35] In each of these tax reforms, taxes were cut across all income brackets, but the biggest reductions went to the highest income earners,[36] although the Reagan-era tax reforms also introduced the earned income tax credit, which has received bipartisan praise for poverty reduction and is a large part of why the bottom half of workers pay no federal income tax.[37] The Tax Cuts and Jobs Act of 2017 likewise cut taxes across all income brackets, but its benefits skewed heavily toward the wealthy, by roughly a factor of 67 relative to the middle class.[38][39][40]
In the 1992 presidential election, independent candidate Ross Perot also referred to trickle-down economics as "political voodoo".[41] In the same election, during a presidential town hall debate, Bill Clinton blamed trickle-down economics for the declining economic conditions in America, saying that "...we've had 12 years of trickle-down economics. We've gone from first to twelfth in the world in wages. We've had four years where we've produced no private-sector jobs. Most people are working harder for less money than they were making 10 years ago."[42]
The political campaign group Tax Justice Network has used the term, referring broadly to wealth inequality, in its criticisms of tax havens.[43] In 2013, Pope Francis condemned "trickle-down theories" in his apostolic exhortation Evangelii Gaudium, saying that "Some people continue to defend trickle-down theories which assume that economic growth, encouraged by a free market, will inevitably succeed in bringing about greater justice and inclusiveness in the world. This opinion, which has never been confirmed by the facts, expresses a crude and naïve trust in the goodness of those wielding economic power and in the sacralized workings of the prevailing economic system."[44]
In New Zealand,Damien O'Connor, anMPfrom theLabour Party, called trickle-down economics "the rich pissing on the poor" in the Labour Party campaign launch video for the2011 general election.[45]In a2016 US presidential electioncandidates debate,Hillary ClintonaccusedDonald Trumpof supporting the "most extreme" version of trickle-down economics with his tax plan, calling it "trumped-up trickle-down" as a pun on his name.[46]In hisspeech to a joint session of Congresson April 28, 2021, US PresidentJoe Bidenstated that "trickle-down economics has never worked".[47]Biden has continued to be critical of trickle-down.[48][49]
A Columbia journal article comparing a failed UKEnterprise Zoneproposal to later US proposals references them as a form of trickle-down policy where lower regulatory and tax burdens were aimed at wealthier developers with the hope they would benefit residents.[50]Nobel laureatePaul Krugmanstates that despite the narrative of trickle-down style tax cuts, the effective tax rate of the top 1% of earners has failed to change very much.[51]Political commentatorRobert Reichhas implicated institutions such asThe Heritage Foundation,Cato Institute, andClub for Growthfor promoting what he considers to be a discredited idea.[52]Kansas governor and politicianSam Brownback's 2018 tax cut package was widely labelled as an attempt at trickle-down economics.[53]Friedrich Hayek's economic theories have also been described as trickle-down.[54][55]
Speaking on theUS Senatefloor in 1992,Hank Brown(Republican senator for Colorado) said: "Mr. President, the trickle-down theory attributed to the Republican Party has never been articulated by President Reagan and has never been articulated byPresident Bushand has never been advocated by either one of them. One might argue whether trickle-down makes any sense or not. To attribute to people who have advocated the opposite in policies is not only inaccurate but poisons the debate on public issues."[56]
Thomas Sowell, a proponent ofsupply-side economics, says that trickle-down economics have never been advocated by any economist, writing in his 2012 book"Trickle Down" Theory and "Tax Cuts for the Rich"that "[t]he 'trickle-down' theory cannot be found in even the most voluminous scholarly studies of economic theories."[57]Sowell disagrees with the characterization of supply-side economics as trickle-down, saying that the economic theory of reducing marginal tax rates works in precisely the opposite direction: "Workers are always paid first and then profits flow upward later – if at all."[58][59]
In 2022, theLiz Trussadministration objected to characterizing its policies as "trickle-down economics".[60]
Nobel laureate Joseph Stiglitz wrote in 2015 that the post–World War II evidence does not support trickle-down economics, but rather "trickle-up economics", whereby more money in the pockets of the poor or the middle class benefits everyone.[61] In a 2020 research paper, economists David Hope and Julian Limberg analyzed data spanning 50 years from 18 countries and found that tax cuts for the rich increased inequality in the short and medium term, and had no significant effect on real GDP per capita or employment in the short and medium term. According to the study, this shows that the tax cuts for the upper class did not trickle down to the broader economy. From 1980 to 2016, a divergence in the distribution of wealth was noted: the top 0.01% of earners saw roughly a 600% increase in real income versus essentially no change for the bottom 99%, and the share of total wealth held by the top 1% rose by about 15 percentage points, from roughly 15% to 30%.[62][63][64][65]
A 2015 IMF staff discussion note by Era Dabla-Norris, Kalpana Kochhar, Nujin Suphaphiphat, Frantisek Ricka and Evridiki Tsounta suggests that lowering taxes on the top 20% could actually reduce growth.[66][67] Political scientists Brainard Guy Peters and Maximilian Lennart Nagel in 2020 described the "trickle-down" idea that tax cuts for the wealthy and corporations stimulate economic growth that helps the less affluent as a "zombie idea", and stated that it has been the most enduring failed policy idea in American politics.[68] Some studies suggest a link between trickle-down economics and reduced growth, and some newspapers concluded that trickle-down economics does not promote jobs or growth, and that "policy makers shouldn't worry that raising taxes on the rich ... will harm their economies".[69]
To see whether an economic theory works, it must be tested and, where reasonably possible, implemented. Because supply-side economics (the trickle-down theory) had already been implemented, Bruce Bartlett and Timothy P. Roth analyzed the results of its implementation in their 1983 book "The Supply-Side Solution".[70] According to these authors, the trickle-down theory has temporary effects on investment and productivity growth: growth in capital stocks was observed to accelerate mildly and then fall back.[70] For example, investment in capital stocks rose 10.1% from 1968 to 1969 but slowed to 3.7% growth from 1969 to 1970, and the net dollar stock of private nonresidential fixed capital rose at a 4.8% rate from 1962 to 1974 but at a 3.0% rate from 1974 to 1980.[70] To quote the book, "Investments will accelerate to reach optimal proportions."[70] Bartlett and Roth thus expected these accelerations in growth to continue and then taper off until the "optimal proportions" are met.
|
https://en.wikipedia.org/wiki/Trickle-down_economics
|
In computing, a fork bomb (also called a rabbit virus) is a denial-of-service (DoS) attack wherein a process continually replicates itself to deplete available system resources, slowing down or crashing the system due to resource starvation.
Around 1978, an early variant of a fork bomb called wabbit was reported to run on a System/360. It may have descended from a similar attack called RABBITS reported from 1969 on a Burroughs 5500 at the University of Washington.[1]
Fork bombs operate both by consuming CPU time in the process of forking, and by saturating the operating system's process table.[2][3] A basic implementation of a fork bomb is an infinite loop that repeatedly launches new copies of itself.
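As a concrete illustration, a minimal fork bomb in C-style code looks like the sketch below (run it, if at all, only inside a disposable, resource-limited environment such as a throwaway virtual machine):

    #include <unistd.h>

    int main(void)
    {
        for (;;)
            fork();   /* every child created here runs the same loop and forks again */
        return 0;
    }

Each call to fork() duplicates the calling process, and every duplicate immediately resumes the loop, so the number of processes grows until the process table or the scheduler is exhausted.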
In Unix-like operating systems, fork bombs are generally written to use the fork system call.[3] As forked processes are also copies of the first program, once they resume execution from the next address at the frame pointer, they continue forking endlessly within their own copy of the same infinite loop. This has the effect of causing an exponential growth in processes. As modern Unix systems generally use a copy-on-write resource management technique when forking new processes,[4] a fork bomb generally will not saturate such a system's memory.
Microsoft Windows operating systems do not have equivalent functionality to the Unix fork system call;[5] a fork bomb on such an operating system must therefore create a new process instead of forking from an existing one, for example with the batch command echo %0^|%0 > $_.cmd & $_. In this batch script, %0|%0 is written to $_.cmd, which is then executed by & $_.[6]
A classic example of a fork bomb is one written in Unix shell, :(){ :|:& };:, possibly dating back to 1999,[7] which can be more easily understood as fork() { fork | fork & }; fork
In it, a function is defined (fork()) as calling itself (fork), then piping (|) its result into itself, all in a background job (&); the trailing call (the final : in the original) then invokes the function once to set everything off.
The code using a colon (:) as the function name is not valid in a shell as defined by POSIX, which only permits alphanumeric characters and underscores in function names.[8] However, its usage is allowed in GNU Bash as an extension.[9]
As a fork bomb's mode of operation is entirely encapsulated by creating new processes, one way of preventing a fork bomb from severely affecting the entire system is to limit the maximum number of processes that a single user may own. On Linux, this can be achieved by using the ulimit utility; for example, the command ulimit -u 30 would limit the affected user to a maximum of thirty owned processes.[10] On PAM-enabled systems, this limit can also be set in /etc/security/limits.conf,[11] and on *BSD, the system administrator can put limits in /etc/login.conf.[12] Modern Linux systems also allow finer-grained fork bomb prevention through cgroups and process number (PID) controllers.[13]
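The same kind of cap can also be applied programmatically. A minimal sketch using the setrlimit() interface (RLIMIT_NPROC is a Linux/BSD extension rather than part of base POSIX; the value 30 mirrors the ulimit example above):

    #include <stdio.h>
    #include <sys/resource.h>

    int main(void)
    {
        struct rlimit rl;
        rl.rlim_cur = 30;   /* soft limit: at most 30 processes for this user */
        rl.rlim_max = 30;   /* hard limit */
        if (setrlimit(RLIMIT_NPROC, &rl) != 0)
            perror("setrlimit");
        /* Once the limit is reached, further fork() calls by this user fail
           with EAGAIN instead of exhausting the process table. */
        return 0;
    }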
|
https://en.wikipedia.org/wiki/Fork_bomb
|
XML External Entity attack, or simply XXE attack, is a type of attack against an application that parses XML input. This attack occurs when XML input containing a reference to an external entity is processed by a weakly configured XML parser. It may lead to the disclosure of confidential data, DoS attacks, server-side request forgery, port scanning from the perspective of the machine where the parser is located, and other system impacts.[1]
The XML 1.0 standard defines the structure of an XML document. The standard defines a concept called an entity, which is a term that refers to multiple types of data unit. One of those types is an external general/parameter parsed entity, often shortened to external entity, which can access local or remote content via a declared system identifier. The system identifier is assumed to be a URI that can be accessed by the XML processor when processing the entity. The XML processor then replaces occurrences of the named external entity with the content referenced by the system identifier. If the system identifier contains tainted data and the XML processor dereferences this tainted data, the XML processor may disclose confidential information normally not accessible by the application. Similar attack vectors apply to the usage of external DTDs, external style sheets, external schemas, etc., which, when included, allow similar external resource inclusion style attacks.
Attacks can include disclosing local files, which may contain sensitive data such as passwords or private user data, using file:// schemes or relative paths in the system identifier. Since the attack occurs relative to the application processing the XML document, an attacker may use this trusted application to pivot to other internal systems, possibly disclosing other internal content via HTTP requests or launching an SSRF attack against any unprotected internal services. In some situations, an XML processor library that is vulnerable to client-side memory corruption issues may be exploited by dereferencing a malicious URI, possibly allowing arbitrary code execution under the application account. Other attacks can access local resources that may not stop returning data, possibly impacting application availability if too many threads or processes are not released.
The application does not need to explicitly return the response to the attacker for it to be vulnerable to information disclosures. An attacker can leverageDNSinformation to exfiltrate data through subdomain names to a DNS server under their control.[2]
The examples below are fromOWASP'sTesting for XML Injection (WSTG-INPV-07).[3]
When thePHP"expect" module is loaded,remote code executionmay be possible with a modified payload.
Since the entire XML document is communicated from an untrusted client, it is not usually possible to selectivelyvalidateor escape tainted data within the system identifier in the DTD. The XML processor could be configured to use a local static DTD and disallow any declared DTD included in the XML document.
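One common hardening approach is simply to parse untrusted input without entity substitution, external DTD loading, or network access. An illustrative sketch using the widely deployed libxml2 C library (the flag and function names are libxml2-specific and should be checked against the version in use):

    #include <libxml/parser.h>

    /* Parse an untrusted buffer without substituting entities, without
       fetching external DTDs, and without any network access. */
    xmlDocPtr parse_untrusted(const char *buf, int len)
    {
        /* XML_PARSE_NOENT (entity substitution) and XML_PARSE_DTDLOAD
           (external DTD loading) are deliberately left out of the option
           mask; XML_PARSE_NONET forbids network fetches during the parse. */
        return xmlReadMemory(buf, len, "untrusted.xml", NULL, XML_PARSE_NONET);
    }

The caller remains responsible for freeing the returned document with xmlFreeDoc().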
|
https://en.wikipedia.org/wiki/XML_external_entity_attack
|
A document type definition (DTD) is a specification file that contains a set of markup declarations that define a document type for an SGML-family markup language (GML, SGML, XML, HTML). The DTD specification file can be used to validate documents.
A DTD defines the valid building blocks of an XML document. It defines the document structure with a list of validated elements and attributes. A DTD can be declared inline inside an XML document, or as an external reference.[1]
A namespace-aware version of DTDs is being developed as Part 9 of ISO DSDL. DTDs persist in applications that need special publishing characters, such as the XML and HTML Character Entity References, which derive from larger sets defined as part of the ISO SGML standard effort. XML uses a subset of SGML DTD.
As of 2009, newer XML namespace-aware schema languages (such as W3C XML Schema and ISO RELAX NG) have largely superseded DTDs as a better way to validate XML structure.
A DTD is associated with an XML or SGML document by means of adocument type declaration(DOCTYPE). The DOCTYPE appears in the syntactic fragmentdoctypedeclnear the start of an XML document.[2]The declaration establishes that the document is an instance of the type defined by the referenced DTD.
DOCTYPEs make two sorts of declarations:
The declarations in the internal subset form part of the DOCTYPE in the document itself. The declarations in the external subset are located in a separate text file. The external subset may be referenced via apublic identifierand/or asystem identifier. Programs for reading documents may not be required to read the external subset.
Any valid SGML or XML document that references anexternal subsetin its DTD, or whose body contains references toparsed external entitiesdeclared in its DTD (including those declared within itsinternal subset), may only be partially parsed but cannot be fully validated byvalidatingSGML or XML parsers in theirstandalonemode (this means that these validating parsers do not attempt to retrieve these external entities, and their replacement text is not accessible).
However, such documents are still fully parsable in the non-standalone mode of validating parsers, which signals an error if it cannot locate these external entities with their specified public identifier (FPI) or system identifier (a URI), or if they are inaccessible. (Notations declared in the DTD also reference external entities, but these unparsed entities are not needed for the validation of documents in the standalone mode of these parsers: the validation of all external entities referenced by notations is left to the application using the SGML or XML parser.) Non-validating parsers may eventually attempt to locate these external entities in the non-standalone mode (by partially interpreting the DTD only to resolve their declared parsable entities), but they do not validate the content model of these documents.
The following example of a DOCTYPE contains both public and system identifiers:
All HTML 4.01 documents conform to one of three SGML DTDs. The public identifiers of these DTDs are constant and are as follows:
The system identifiers of these DTDs, if present in the DOCTYPE, areURI references. A system identifier usually points to a specific set of declarations in a resolvable location. SGML allows mapping public identifiers to system identifiers incatalogsthat are optionally available to the URI resolvers used by documentparsingsoftware.
This DOCTYPE can only appearafterthe optionalXML declaration, and before the document body, if the document syntax conforms to XML. This includesXHTMLdocuments:
An additional internal subset can also be provided after the external subset:
Alternatively, only the internal subset may be provided:
Finally, the document type definition may include no subset at all; in that case, it just specifies that the document has a single top-level element (this is an implicit requirement for all valid XML and HTML documents, but not for document fragments or for all SGML documents, whose top-level elements may be different from the implied root element), and it indicates the type name of the root element:
DTDs describe the structure of a class of documents via element and attribute-list declarations. Element declarations name the allowable set of elements within the document, and specify whether and how declared elements and runs of character data may be contained within each element. Attribute-list declarations name the allowable set of attributes for each declared element, including thetypeof each attribute value, if not an explicit set of valid values.
DTD markup declarations declare whichelement types,attribute lists,entities, andnotationsare allowed in the structure of the corresponding class of XML documents.[3]
An element type declaration defines an element and its possible content. A valid XML document contains only elements that are defined in the DTD.
Various keywords and characters specify an element's content:
For example:
Element type declarations are ignored bynon-validatingSGML and XML parsers (in which cases, any elements are accepted in any order, and in any number of occurrences in the parsed document), but these declarations are still checked for form and validity.
An attribute list specifies, for a given element type, the list of all possible attributes associated with that type. For each possible attribute, it contains:
For example:
Here are some attribute types supported by both SGML and XML:
A default value can define whether an attribute must occur (#REQUIRED) or not (#IMPLIED), or whether it has a fixed value (#FIXED), or which value should be used as a default value ("…") in case the given attribute is left out in an XML tag.
Attribute list declarations are ignored bynon-validatingSGML and XML parsers (in which cases any attribute is accepted within all elements of the parsed document), but these declarations are still checked for well-formedness and validity.
An entity is similar to amacro. The entity declaration assigns it a value that is retained throughout the document. A common use is to have a name more recognizable than a numeric character reference for an unfamiliar character.[5]Entities help to improve legibility of an XML text. In general, there are two types: internal and external.
An example of internal entity declarations (here in an internal DTD subset of an SGML document) is:
Internal entities may be defined in any order, as long as they are not referenced and parsed in the DTD or in the body of the document, in their order of parsing: it is valid to include a reference to a still undefined entity within the content of a parsed entity, but it is invalid to include anywhere else any named entity reference before this entity has been fully defined, including all other internal entities referenced in its defined content (this also prevents circular or recursive definitions of internal entities). This document is parsed as if it was:
Reference to the "author" internal entity is not substituted in the replacement text of the "signature" internal entity. Instead, it is replaced only when the "signature" entity reference is parsed within the content of the "sgml" element, but only by validating parsers (non-validating parsers do not substitute entity references occurring within contents of element or within attribute values, in the body of the document.
This is possible because the replacement text specified in the internal entity definitions permits a distinction betweenparameterentity references (that are introduced by the "%" character and whose replacement applies to the parsed DTD contents) andgeneralentity references (that are introduced by the "&" character and whose replacement is delayed until they are effectively parsed and validated). The "%" character for introducing parameter entity references in the DTD loses its special role outside the DTD and it becomes a literal character.
However, the references to predefined character entities are substituted wherever they occur, without needing a validating parser (they are only introduced by the "&" character).
Notations are used in SGML or XML. They provide a complete reference to unparsed external entities whose interpretation is left to the application (which interprets them directly or retrieves the external entity themselves), by assigning them a simple name, which is usable in the body of the document. For example, notations may be used to reference non-XML data in an XML 1.1 document. For example, to annotate SVG images to associate them with a specific renderer:
This declares theTEXTof external images with this type, and associates it with a notation name "type-image-svg". However, notation names usually follow a naming convention that is specific to the application generating or using the notation: notations are interpreted as additional meta-data whose effective content is an external entity and either a PUBLIC FPI, registered in the catalogs used by XML or SGML parsers, or a SYSTEM URI, whose interpretation is application dependent (here a MIME type, interpreted as a relative URI, but it could be an absolute URI to a specific renderer, or a URN indicating an OS-specific object identifier such as a UUID).
The declared notation name must be unique within all the document type declaration, i.e. in the external subset as well as the internal subset, at least for conformance with XML.[6][7]
Notations can be associated to unparsed external entities included in the body of the SGML or XML document. ThePUBLICorSYSTEMparameter of these external entities specifies the FPI and/or the URI where the unparsed data of the external entity is located, and the additionalNDATAparameter of these defined entities specifies the additional notation (i.e., effectively the MIME type here). For example:
Within the body of the SGML document, these referenced external entities (whose name is specified between "&" and ";") arenotreplaced like usual named entities (defined with a CDATA value), but are left as distinct unparsed tokens that may be used either as the value of an element attribute (like above) or within the element contents, provided that either the DTD allows such external entities in the declared content type of elements or in the declared type of attributes (here theENTITYtype for thedataattribute), or the SGML parser is not validating the content.
Notations may also be associated directly to elements as additional meta-data, without associating them to another external entity, by giving their names as possible values of some additional attributes (also declared in the DTD within the<!ATTLIST...>declaration of the element). For example:
The example above shows a notation named "type-image-svg" that references the standard public FPI and the system identifier (the standard URI) of an SVG 1.1 document, instead of specifying just a system identifier as in the first example (which was a relative URI interpreted locally as a MIME type). This annotation is referenced directly within the unparsed "type" attribute of the "img" element, but its content is not retrieved. It also declares another notation for a vendor-specific application, to annotate the "sgml" root element in the document. In both cases, the declared notation named is used directly in a declared "type" attribute, whose content is specified in the DTD with the "NOTATION" attribute type (this "type" attribute is declared for the "sgml" element, as well as for the "img" element).
However, the "title" attribute of the "img" element specifies the internal entity "example1SVGTitle" whose declaration that does not define an annotation, so it is parsed by validating parsers and the entity replacement text is "Title of example1.svg".
The content of the "img" element references another external entity "example1SVG" whose declaration also does not define a notation, so it is also parsed by validating parsers and the entity replacement text is located by its defined SYSTEM identifier "example1.svg" (also interpreted as a relative URI). The effective content for the "img" element be the content of this second external resource. The difference with the GIF image, is that the SVG image is parsed within the SGML document, according to the declarations in the DTD, where the GIF image is just referenced as an opaque external object (which is not parsable with SGML) via its "data" attribute (whose value type is an opaque ENTITY).
Only one notation name may be specified in the value of ENTITY attributes (there is no support in SGML, XML 1.0 or XML 1.1 for multiple notation names in the same declared external ENTITY, so separate attributes are needed). However, multiple external entities may be referenced (in a space-separated list of names) in attributes declared with type ENTITIES, where each named external entity is also declared with its own notation.
Notations are also completely opaque for XML and SGML parsers, so they are not differentiated by the type of the external entity that they may reference (for these parsers they just have a unique name associated to a public identifier (an FPI) and/or a system identifier (a URI)).
Some applications (but not XML or SGML parsers themselves) also allow referencing notations indirectly by naming them in the "URN:name" value of a standard CDATA attribute, everywhere a URI can be specified. However, this behaviour is application-specific and requires that the application maintain a catalog of known URNs to resolve them into the notations that have been parsed in a standard SGML or XML parser. This use allows notations to be defined only in a DTD stored as an external entity and referenced only as the external subset of documents, and allows these documents to remain compatible with validating XML or SGML parsers that have no direct support for notations.
Notations are not used in HTML, or in basic profiles for XHTML and SVG, because:
Even in validating SGML or XML 1.0 or XML 1.1 parsers, the external entities referenced by an FPI and/or URI in declared notations are not retrieved automatically by the parsers themselves. Instead, these parsers just provide to the application the parsed FPI and/or URI associated to the notations found in the parsed SGML or XML document, and with a facility for a dictionary containing all notation names declared in the DTD; these validating parsers also check the uniqueness of notation name declarations, and report a validation error if some notation names are used anywhere in the DTD or in the document body but not declared:
The XML DTD syntax is one of severalXML schemalanguages. However, many of the schema languages do not fully replace the XML DTD. Notably, the XML DTD allows defining entities and notations that have no direct equivalents in DTD-less XML (because internal entities and parsable external entities are not part of XML schema languages, and because other unparsed external entities and notations have no simple equivalent mappings in most XML schema languages).
Most XML schema languages are only replacements for element declarations and attribute list declarations, in such a way that it becomes possible to parse XML documents with non-validating XML parsers (if the only purpose of the external DTD subset was to define the schema). In addition, documents for these XML schema languages must be parsed separately, so validating the schema of XML documents in pure standalone mode is not really possible with these languages: the document type declaration remains necessary for at least identifying (with an XML Catalog) the schema used in the parsed XML document and validated in another language.
A common misconception holds that anon-validatingXML parser does not have to read document type declarations, when in fact, the document type declarations must still be scanned for correct syntax as well as validity of declarations, and the parser must still parse all entity declarations in theinternal subset, and substitute the replacement texts of internal entities occurring anywhere in the document type declaration or in the document body.
Anon-validatingparser may, however, elect not to read parsableexternal entities(including theexternal subset), and does not have to honor the content model restrictions defined in element declarations and in attribute list declarations.
If the XML document depends on parsable external entities (including the specifiedexternal subset, or parsable external entities declared in theinternal subset), it should assertstandalone="no"in itsXML declaration. The validating DTD may be identified by usingXML Catalogsto retrieve its specifiedexternal subset.
In the example below, the XML document is declared withstandalone="no"because it has an external subset in its document type declaration:
If the XML document type declaration includes any SYSTEM identifier for the external subset, it can not be safely processed as standalone: the URI should be retrieved, otherwise there may be unknown named character entities whose definition may be needed to correctly parse the effective XML syntax in the internal subset or in the document body (the XML syntax parsing is normally performedafterthe substitution of all named entities, excluding the five entities that are predefined in XML and that are implicitly substitutedafterparsing the XML document into lexical tokens). If it just includes any PUBLIC identifier, itmaybe processed as standalone, if the XML processor knows this PUBLIC identifier in its local catalog from where it can retrieve an associated DTD entity.
An example of a very simple external XML DTD to describe the schema of a list of persons might consist of:
Taking this line by line:
An example of an XML file that uses and conforms to this DTD follows. The DTD is referenced here as an external subset, via the SYSTEM specifier and a URI. It assumes that we can identify the DTD with the relative URI reference "example.dtd"; the "people_list" after "!DOCTYPE" tells us that the root tags, or the first element defined in the DTD, is called "people_list":
One can render this in an XML-enabled browser (such as Internet Explorer or Mozilla Firefox) by pasting and saving the DTD component above to a text file named example.dtd and the XML file to a differently-named text file, and opening the XML file with the browser. The files should both be saved in the same directory. However, many browsers do not check that an XML document conforms to the rules in the DTD; they are only required to check that the DTD is syntactically correct. For security reasons, they may also choose not to read the external DTD.
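Outside the browser, DTD validation can be requested explicitly. A minimal sketch using the libxml2 C library (the function and flag names are libxml2-specific, and the file name is a placeholder for the XML file described above):

    #include <stdio.h>
    #include <libxml/parser.h>

    int main(void)
    {
        /* Load the document, pull in the external subset named in its
           DOCTYPE (here example.dtd) and validate the instance against it;
           validity problems are reported through libxml2's error channel. */
        xmlDocPtr doc = xmlReadFile("people.xml", NULL,
                                    XML_PARSE_DTDLOAD | XML_PARSE_DTDVALID);
        if (doc == NULL) {
            fprintf(stderr, "document could not be parsed\n");
            return 1;
        }
        xmlFreeDoc(doc);
        return 0;
    }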
The same DTD can also be embedded directly in the XML document itself as an internal subset, by encasing it within [square brackets] in the document type declaration, in which case the document no longer depends on external entities and can be processed in standalone mode:
Alternatives to DTDs (for specifying schemas) are available:
An XML DTD can be used to create a denial of service (DoS) attack by defining nested entities that expand exponentially, or by sending the XML parser to an external resource that never returns.[10]
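The arithmetic behind the entity-expansion attack (often called the "billion laughs" attack) is simple: a chain of ten entities is declared in the DTD, the first being a short literal string and each of the nine entities above it expanding to ten references to the one below. Expanding the top-level entity then yields 10^9 (one billion) copies of the innermost string, so a document of roughly a kilobyte can force an entity-substituting parser to materialize several gigabytes of text.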
For this reason, .NET Framework provides a property that allows prohibiting or skipping DTD parsing,[10]and recent versions of Microsoft Office applications (Microsoft Office 2010 and higher) refuse to open XML files that contain DTD declarations.
|
https://en.wikipedia.org/wiki/Document_type_definition
|
This article compares the syntax of many notable programming languages.
Programming languageexpressionscan be broadly classified into four syntax structures:
A language that supports thestatementconstruct typically has rules for one or more of the following aspects:
Some languages define a special character as a terminator while some, called line-oriented, rely on the newline. Typically, a line-oriented language includes a line continuation feature whereas other languages have no need for line continuation since newline is treated like other whitespace. Some line-oriented languages provide a separator for use between statements on one line.
Listed below are notable line-oriented languages that provide for line continuation. Unless otherwise noted the continuation marker must be the last text of the line.
The C compiler concatenates adjacentstring literalseven if on separate lines, but this is not line continuation syntax as it works the same regardless of the kind of whitespace between the literals.
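For example, the two declarations below produce identical strings; the first relies on literal concatenation, the second on backslash line continuation (which splices the physical lines together before tokenization, so any leading whitespace on the continued line would become part of the string):

    const char *a = "Hello, "
                    "world";   /* adjacent literals merge into "Hello, world" */
    const char *b = "Hello, \
world";                        /* the continued line must start at column 0,
                                  or the indentation would end up in the string */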
Languages support a variety of ways to reference and consume other software in the syntax of the language. In some cases this is importing the exported functionality of alibrary, package or modulebut some mechanisms are simpler text file include operations.
Import can be classified by level (module, package, class, procedure,...) and by syntax (directive name, attributes,...).
The above statements can also be classified by whether they are a syntactic convenience (allowing things to be referred to by a shorter name, but they can still be referred to by some fully qualified name without import), or whether they are actually required to access the code (without which it is impossible to access the code, even with fully qualified names).
Ablockis a grouping of code that is treated collectively. Many block syntaxes can consist of any number of items (statements, expressions or other units of code) – including one or zero. Languages delimit a block in a variety of ways – some via marking text and others by relative formatting such as levels of indentation.
With respect to a language definition, the syntax ofCommentscan be classified many ways, including:
Other ways to categorize comments that are outside a language definition:
In these examples,~represents the comment content, and the text around it are the delimiters.Whitespace(includingnewline) is not considered delimiters.
Indenting lines in Fortran 66/77 is significant. The actual statement is in columns 7 through 72 of a line. Any non-space character in column 6 indicates that this line is a continuation of the prior line. A 'C' in column 1 indicates that this entire line is a comment. Columns 1 through 5 may contain a number which serves as a label. Columns 73 through 80 are ignored and may be used for comments; in the days of punched cards, these columns often contained a sequence number so that the deck of cards could be sorted into the correct order if someone accidentally dropped the cards. Fortran 90 removed the need for the indentation rule and added line comments, using the ! character as the comment delimiter.
In fixed format code, line indentation is significant. Columns 1–6 and columns from 73 onwards are ignored. If a*or/is in column 7, then that line is a comment. Until COBOL 2002, if aDordwas in column 7, it would define a "debugging line" which would be ignored unless the compiler was instructed to compile it.
Cobra supports block comments with "/#...#/" which is like the "/*...*/" often found in C-based languages, but with two differences. The#character is reused from the single-line comment form "#...", and the block comments can be nested which is convenient for commenting out large blocks of code.
Curl supports block comments with user-defined tags as in|foo# ... #foo|.
Like raw strings, there can be any number of equals signs between the square brackets, provided both the opening and closing tags have a matching number of equals signs; this allows nesting as long as nested block comments/raw strings use a different number of equals signs than their enclosing comment:--[[comment --[=[ nested comment ]=] ]]. Lua discards the first newline (if present) that directly follows the opening tag.
Block comments in Perl are considered part of the documentation, and are given the namePlain Old Documentation(POD). Technically, Perl does not have a convention for including block comments in source code, but POD is routinely used as a workaround.
PHP supports standard C/C++ style comments, but supports Perl style as well.
The use of triple-quotes to comment out lines of source does not actually form a comment.[19] The enclosed text becomes a string literal, which Python usually ignores (except when it is the first statement in the body of a module, class or function; see docstring).
The above trick used in Python also works in Elixir, but the compiler will throw a warning if it spots this. To suppress the warning, one would need to prepend the sigil~S(which prevents string interpolation) to the triple-quoted string, leading to the final construct~S""" ... """. In addition, Elixir supports a limited form of block comments as an official language feature, but as in Perl, this construct is entirely intended to write documentation. Unlike in Perl, it cannot be used as a workaround, being limited to certain parts of the code and throwing errors or even suppressing functions if used elsewhere.[20]
Rakuuses#`(...)to denote block comments.[21]Raku actually allows the use of any "right" and "left" paired brackets after#`(i.e.#`(...),#`[...],#`{...},#`<...>, and even the more complicated#`{{...}}are all valid block comments). Brackets are also allowed to be nested inside comments (i.e.#`{ a { b } c }goes to the last closing brace).
Block comment in Ruby opens at=beginline and closes at=endline.
The region of lines enclosed by the#<tag>and#</tag>delimiters are ignored by the interpreter. The tag name can be any sequence of alphanumeric characters that may be used to indicate how the enclosed block is to be deciphered. For example,#<latex>could indicate the start of a block of LaTeX formatted documentation.
The next complete syntactic component (s-expression) can be commented out with#;.
ABAP supports two different kinds of comments. If the first character of a line, including indentation, is an asterisk (*) the whole line is considered as a comment, while a single double quote (") begins an in-line comment which acts until the end of the line. ABAP comments are not possible between the statements EXEC SQL and ENDEXEC because Native SQL has other usages for these characters. In most SQL dialects, the double dash (--) can be used instead.
Manyesoteric programming languagesfollow the convention that any text not executed by theinstruction pointer(e.g.,Befunge) or otherwise assigned a meaning (e.g.,Brainfuck), is considered a "comment".
There is a wide variety of syntax styles for declaring comments in source code.BlockCommentin italics is used here to indicate block comment style.LineCommentin italics is used here to indicate line comment style.
commentBlockCommentcommentcoBlockCommentco#BlockComment#£BlockComment£
*LineComment(not all dialects)!LineComment(not all dialects)REMLineComment
|foo#BlockComment#|
/+BlockComment+/(nestable)/++ DocumentationBlockComment+/(nestable, ddoc comments)
(before--after)stack comment convention
/**BlockComment*/(Javadocdocumentation comment)
__END__Comments after end of code
(Documentation stringwhen first line of module, class, method, or function)
=commentThis comment paragraph goes until the next POD directiveor the first blank line.[23][24]
///LineComment("Outer" rustdoc comment)//!LineComment("Inner" rustdoc comment)
/**BlockComment*/("Outer" rustdoc comment)/*!BlockComment*/("Inner" rustdoc comment)
@commentLineComment
'''LineComment(XML documentation comment)RemLineComment
|
https://en.wikipedia.org/wiki/Comparison_of_programming_languages_(syntax)
|
Hungarian notation is an identifier naming convention in computer programming in which the name of a variable or function indicates its intention or kind, or in some dialects, its type. The original Hungarian notation uses only intention or kind in its naming convention and is sometimes called Apps Hungarian as it became popular in the Microsoft Apps division in the development of Microsoft Office applications. When the Microsoft Windows division adopted the naming convention, they based it on the actual data type, and this convention became widely spread through the Windows API; this is sometimes called Systems Hungarian notation.
Simonyi: ...BCPL [had] a single type which was a 16-bit word... not that it matters.
Booch: Unless you continue the Hungarian notation.
Simonyi: Absolutely... we went over to the typed languages too later ... But ... we would look at one name and I would tell you exactly a lot about that...[1]
Hungarian notation was designed to be language-independent, and found its first major use with the BCPL programming language. Because BCPL has no data types other than the machine word, nothing in the language itself helps a programmer remember variables' types. Hungarian notation aims to remedy this by providing the programmer with explicit knowledge of each variable's data type.
In Hungarian notation, a variable name starts with a group of lower-case letters which aremnemonicsfor the type or purpose of that variable, followed by whatever name the programmer has chosen; this last part is sometimes distinguished as thegiven name. The first character of the given name can be capitalized to separate it from the type indicators (see alsoCamelCase). Otherwise the case of this character denotes scope.
The original Hungarian notation was invented by Charles Simonyi, a programmer who worked at Xerox PARC circa 1972–1981, and who later became Chief Architect at Microsoft. The name of the notation is a reference to Simonyi's nation of origin, and also, according to Andy Hertzfeld, because it made programs "look like they were written in some inscrutable foreign language".[2] Hungarian people's names are "reversed" compared to most other European names; the family name precedes the given name. For example, the anglicized name "Charles Simonyi" in Hungarian was originally "Simonyi Károly". In the same way, the type name precedes the "given name" in Hungarian notation. The similar Smalltalk "type last" naming style (e.g. aPoint and lastPoint) was common at Xerox PARC during Simonyi's tenure there.[citation needed]
Simonyi's paper on the notation referred to prefixes used to indicate the "type" of information being stored.[3][4]His proposal was largely concerned with decorating identifier names based upon the semantic information of what they store (in other words, the variable'spurpose). Simonyi's notation came to be called Apps Hungarian, since the convention was used in theapplicationsdivision of Microsoft. Systems Hungarian developed later in theMicrosoft Windowsdevelopment team. Apps Hungarian is not entirely distinct from what became known as Systems Hungarian, as some of Simonyi's suggested prefixes contain little or no semantic information (see below for examples).[4]
Where Systems notation and Apps notation differ is in the purpose of the prefixes.
In Systems Hungarian notation, the prefix encodes the actual data type of the variable. For example:
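A few declarations using commonly cited Systems Hungarian prefixes (the exact mnemonics vary between code bases; these are illustrative rather than an authoritative list):

    bool          bBusy;       // b  = boolean
    char          chInitial;   // ch = single character
    unsigned int  dwSize;      // dw = double word (32-bit unsigned quantity)
    int           nCount;      // n  = integer
    char         *szName;      // sz = zero-terminated string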
Apps Hungarian notation strives to encode the logical data type rather than the physical data type; in this way, it gives a hint as to what the variable's purpose is, or what it represents.
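An illustrative sketch in the Apps Hungarian spirit, where the prefix records what a value is for rather than how it is stored (the "us"/"s" pair for unsafe versus safe strings echoes the example quoted at the end of this article; all names here are invented):

    char *usComment;   // us = "unsafe" string, exactly as typed by the user
    char *sComment;    // s  = "safe" string, already escaped for output
    int   rwFirst;     // rw = row index
    int   colLast;     // col = column index
    int   dRows;       // d  = difference between two row values

Note that usComment and sComment have the same machine type; only the prefix records the semantic difference between them.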
Most, but not all, of the prefixes Simonyi suggested are semantic in nature. To modern eyes, some prefixes seem to represent physical data types, such asszfor strings. However, such prefixes were still semantic, as Simonyi intended Hungarian notation for languages whose type systems could not distinguish some data types that modern languages take for granted.
The following are examples from the original paper:[3]
While the notation always uses initial lower-case letters as mnemonics, it does not prescribe the mnemonics themselves. There are several widely used conventions (see examples below), but any set of letters can be used, as long as they are consistent within a given body of code.
It is possible for code using Apps Hungarian notation to sometimes contain Systems Hungarian when describing variables that are defined solely in terms of their type.
In some programming languages, a similar notation now called sigils is built into the language and enforced by the compiler. For example, in some forms of BASIC, name$ names a string and count% names an integer. The major difference between Hungarian notation and sigils is that sigils declare the type of the variable in the language, whereas Hungarian notation is purely a naming scheme with no effect on the machine interpretation of the program text.
The mnemonics for pointers andarrays, which are not actual data types, are usually followed by the type of the data element itself:
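For instance (illustrative names):

    char   *pszOwner;         // psz = pointer to a zero-terminated string
    long   *plCount;          // pl  = pointer to a long
    double  rgdBalance[16];   // rg  = array ("range") of, here, doubles (d)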
While Hungarian notation can be applied to any programming language and environment, it was widely adopted by Microsoft for use with the C language, in particular for Microsoft Windows, and its use remains largely confined to that area. In particular, use of Hungarian notation was widely evangelized by Charles Petzold's "Programming Windows", the original (and for many readers, the definitive) book on Windows API programming. Thus, many commonly seen constructs of Hungarian notation are specific to Windows:
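A few typical Windows-flavoured declarations (these assume the usual <windows.h> typedefs such as HWND, LPCSTR, DWORD and WORD, and are illustrative rather than drawn from any particular API):

    #include <windows.h>

    HWND   hwndOwner;     // hwnd = handle to a window
    LPCSTR lpszTitle;     // lpsz = long pointer to a zero-terminated string
    DWORD  dwStyle;       // dw   = 32-bit unsigned "double word"
    WORD   wLanguageId;   // w    = 16-bit unsigned word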
The notation is sometimes extended inC++to include thescopeof a variable, optionally separated by an underscore.[5][6]This extension is often also used without the Hungarian type-specification:
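For example (conventions differ between projects; these names are invented):

    int g_nWheels;               // g_ : global variable
    static int s_nInstances;     // s_ : static (translation-unit) scope

    class Car {
        int m_nDoors;            // m_ : member variable, with a type prefix
        int m_year;              // m_ : scope prefix used without a type prefix
    };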
(Some of these apply to Systems Hungarian only.)
Supporters argue that the benefits of Hungarian Notation include:[3]
Most arguments against Hungarian notation are againstSystemsHungarian notation, notAppsHungarian notation[citation needed]. Some potential issues are:
... nowadays HN and other forms of type encoding are simply impediments. They make it harder to change the name or type of a variable, function, member or class. They make it harder to read the code. And they create the possibility that the encoding system will mislead the reader.[8]
Encoding the type of a function into the name (so-called Hungarian notation) is brain damaged—the compiler knows the types anyway and can check those, and it only confuses the programmer.[9]
Although the Hungarian naming convention is no longer in widespread use, the basic idea of standardizing on terse, precise abbreviations continues to have value. Standardized prefixes allow you to check types accurately when you're using abstract data types that your compiler can't necessarily check.[10]
No I don't recommend 'Hungarian'. I regard 'Hungarian' (embedding an abbreviated version of a type in a variable name) as a technique that can be useful in untyped languages, but is completely unsuitable for a language that supports generic programming and object-oriented programming — both of which emphasize selection of operations based on the type and arguments (known to the language or to the run-time support). In this case, 'building the type of an object into names' simply complicates and minimizes abstraction.[11]
If you read Simonyi's paper closely, what he was getting at was the same kind of naming convention as I used in my example above where we decided thatusmeant unsafe string andsmeant safe string. They're both of typestring. The compiler won't help you if you assign one to the other and Intellisense [anintelligent code completionsystem] won't tell youbupkis. But they are semantically different. They need to be interpreted differently and treated differently and some kind of conversion function will need to be called if you assign one to the other or you will have a runtime bug. If you're lucky. There's still a tremendous amount of value to Apps Hungarian, in that it increases collocation in code, which makes the code easier to read, write, debug and maintain, and, most importantly, it makes wrong code look wrong.... (Systems Hungarian) was a subtle but complete misunderstanding of Simonyi’s intention and practice.[4]
|
https://en.wikipedia.org/wiki/Hungarian_Notation
|
Incomputer programming,indentation styleis aconventionorstyle, governing theindentationof lines ofsource code. An indentation style generally specifies a consistent number ofwhitespace charactersbefore each line of a block, so that the lines of code appear to be related, and dictates whether to usespacesortabsas the indentation character.
This article primarily addresses styles forfree-formprogramming languages. As the name implies, such language code need not follow an indentation style. Indentation is asecondary notationthat is often intended to lowercognitive loadfor a programmer to understand the structure of the code.
Indentation can clarify the separation between the code executed based oncontrol flow.
Structured languages, such asPythonandoccam, use indentation to determine the structure instead of using braces or keywords; this is termed theoff-side rule. In such languages, indentation is meaningful to the language processor (such ascompilerorinterpreter). A programmer must conform to the language's indentation rules although may be free to choose indentation size.
This article focuses oncurly-bracket languages(that delimit blocks withcurly brackets, a.k.a. curly braces, a.k.a. braces) and in particularC-family languages, but a convention used for one language can be adapted to another language. For example, a language that usesBEGINandENDkeywords instead of braces can be adapted by treatingBEGINthe same as the open brace and so on.
Indentation style only applies to text-based languages.Visual programming languageshave no indentation.
Despite the ubiquitous use of indentation styles, little research has been conducted on their value. The first experiments, conducted by Weissman in 1974, did not show any effect.[1] In 2023, an experiment by Morzeck et al.[2] showed a significant positive effect for nested if statements, where non-indented code required on average 179% more time to read than indented code. A follow-up experiment by Hanenberg et al.[3] confirmed a large effect (although in that experiment non-indented code took only 113% more time to read) and revealed that the differences in reading times can be explained by the code that can be skipped (for indented code). In another experiment on JSON objects,[4] non-indented code took as much as 544% more time to read.
The table below includes code examples of various indentation styles.
For consistency, indentation size for example code is 4 spaces even though this varies by coding convention.
Attributes ofC,C++and othercurly-brace programming languagecoding style include but are not limited to:
The Kernighan & Ritchie (K&R) style is commonly used for C and C++ code and is the basis for many derivative styles. It is used in the original Unix kernel,KernighanandRitchie's bookThe C Programming Language, as well as Kernighan andPlauger's bookThe Elements of Programming Style.
AlthoughThe C Programming Languagedoes not explicitly define this style, it follows it consistently. From the book:
The position of braces is less important, although people hold passionate beliefs. We have chosen one of several popular styles. Pick a style that suits you, then use it consistently.
In this style, a function has its opening and closing braces on their own lines and with the same indentation as the declaration, while the statements in the body of the function are indented an additional level. A multi-statement block inside a function, however, has its opening brace on the same line as its control clause while the closing brace remains on its own line unless followed by a keyword such aselseorwhile.
Example code:
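(The function below is an invented placeholder that counts space-separated words; it simply illustrates the layout: the function's braces sit on their own lines, the while and if braces share a line with their control clauses, and the closing brace is "cuddled" against the else.)

    int count_words(const char *s)
    {
        int count = 0, in_word = 0;

        while (*s != '\0') {
            if (*s == ' ') {
                in_word = 0;
            } else if (!in_word) {   /* the closing brace shares its line with "else" */
                in_word = 1;
                count++;
            }
            s++;
        }
        return count;
    }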
The non-aligned braces of the multi-line blocks are nicknamed "Egyptian braces" (or "Egyptian brackets") for their resemblance to arms in some fanciful poses of ancient Egyptians.[5][6][7]
A single-statement block does not have braces, which is a cause of easy-to-miss bugs such as thegoto fail bug.
TheOne True Brace Style[8](abbreviated 1TBS or OTBS[9]) is like the K&R style, but functions are formatted like multi-statement blocks with the opening brace on the same line as the declaration, and braces arenotomitted for a single-statement block.[10]
Although not required by languages such as C/C++, using braces for single-statement blocks ensures that inserting a statement does not result in control flow that disagrees with indenting, as seen for example in Apple's infamousgoto fail bug.
Cited advantages include shorter code (than K&R) since the starting brace needs no extra line, that the ending brace lines up with the statement it conceptually belongs to, and the perceived stylistic consistency of using the same brace style in both function bodies and multi-line statement blocks.[11]
Sources disagree as to the meaning of One True Brace Style. Some say that it is the variation specified here,[10]while others say it is "hacker jargon" for K&R.[12]
TheLinux kernelsource tree is styled in a variant of K&R.[13]Linus Torvaldsadvises contributors to follow it. Attributes include:
A significant body ofJavacode uses a variant of the K&R style in which the opening brace is on the same line not only for the blocks inside a function, but also for class or method declarations.
This style is widespread largely becauseSun Microsystems's original style guides[15][16][17]used this K&R variant, and as a result, most of the standard source code for theJava APIis written in this style. It is also a popular indentation style forActionScriptandJavaScript, along with theAllman style.
Bjarne Stroustrupadapted the K&R style for C++ in his books, such asProgramming: Principles and Practice using C++andThe C++ Programming Language.[18]
Unlike the variants above, Stroustrup does not use a "cuddled else". Thus, Stroustrup would write[18]
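That is, the else begins on its own line beneath the closing brace. A hypothetical C sketch:

```c
#include <stdio.h>

/* Stroustrup variant: K&R-like, except that "else" is not cuddled and
 * starts on its own line below the closing brace. */
void report(int x)
{
    if (x < 0) {
        printf("negative\n");
    }
    else {
        printf("non-negative\n");
    }
}
```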
Stroustrup extends K&R style for classes, writing them as follows:
Stroustrup does not indent the labelspublic:andprivate:. Also, in this style, while the opening brace of a function starts on a new line, the opening brace of a class is on the same line as the class name.
Stroustrup allows writing short functions all on one line. Stroustrup style is a named indentation style available in the editorEmacs. Stroustrup encourages a K&R-derived style layout with C++ as stated in his modernC++ Core Guidelines.[19]
The Berkeley Software Distribution (BSD) operating systems use a style that is sometimes termed kernel normal form (KNF). Although mostly intended for kernel code, it is also widely used in userland code. It is essentially a thoroughly documented variant of K&R style as used in the Bell Labs Version 6 and 7 Unix source code.[20]
The SunOS kernel and userland use a similar indentation style.[20] Like KNF, this was also based on AT&T style documents and is sometimes termed Bill Joy Normal Form.[21] The SunOS guideline was published in 1996; ANSI C is discussed briefly. The correctness of the indentation of a list of source files can be verified by the cstyle program written by Bill Shannon.[20][21][22]
In this style, the hard tabulator (ts invi) is kept at eight columns, while a soft tabulator is often defined as a helper also (sw in vi), and set at four. The hard tabulators are used to indent code blocks, while a soft tabulator (four spaces) of additional indentation is used for all continuing lines that must be split over multiple lines.
Moreover, function calls do not use a space before the parenthesis, although C-language keywords such as if, while, do, switch and return do (in the case where return is used with parentheses). Functions that declare no local variables in their top-level block should also leave an empty line after their opening block brace.
Examples:
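A hypothetical C sketch in the spirit of these conventions (8-column tabs for block indentation, a 4-space continuation indent, a space after keywords but not after function names, and return written with parentheses):

```c
#include <stdio.h>

/*
 * KNF-flavoured sketch: the return type sits on its own line above the
 * function name, blocks are indented by 8-column hard tabs, a wrapped
 * expression gets an extra 4-space continuation indent, and "return"
 * takes parentheses.
 */
static int
sum_small(const int *values, int count)
{
        int i, total = 0;

        for (i = 0; i < count; i++) {
                if (values[i] > 0 &&
                    values[i] < 1000)
                        total += values[i];
        }
        return (total);
}
```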
The Allman style is named afterEric Allman. It is also sometimes termedBSD stylesince Allman wrote many of the utilities forBSDUnix (although this should not be confused with the different "BSD KNF style"; see above).
This style puts the brace associated with a control statement on the next line, indented to the same level as the control statement. Statements within the braces are indented to the next level.[12]
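A small hypothetical C example:

```c
#include <stdio.h>

/* Allman style: every opening brace is placed on its own line, aligned
 * with its control statement; the block contents are indented one
 * further level. */
void report(int x)
{
    if (x < 0)
    {
        printf("negative\n");
    }
    else
    {
        printf("non-negative\n");
    }
}
```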
This style is similar to the standard indentation used by thePascallanguages andTransact-SQL, where the braces are equivalent to the keywordsbeginandend.
Consequences of this style are that the indented code is clearly set apart from the containing statement by lines that are almost allwhitespaceand the closing brace lines up in the same column as the opening brace. Some people feel this makes it easy to find matching braces. The blocking style also delineates the block of code from the associated control statement. Commenting out or removing a control statement or block of code, orcode refactoring, are all less likely to introduce syntax errors via dangling or missing braces. Also, it is consistent with brace placement for the outer-function block.
For example, the following is still correct syntactically:
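A hypothetical sketch in which the control clause has been commented out, yet the braces still form a legal bare block:

```c
#include <stdio.h>

void example(int x, int y)
{
    // while (x == y)
    {
        printf("%d\n", x);
        printf("%d\n", y);
    }
}
```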
As is this:
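A sketch in which the control statement has been removed entirely, leaving an anonymous block:

```c
#include <stdio.h>

void example(int x, int y)
{
    {
        printf("%d\n", x);
        printf("%d\n", y);
    }
}
```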
Even like this, with conditional compilation:
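A sketch in which the control clause is guarded by conditional compilation; the code compiles whether or not the (hypothetical) CHECK_EQUALITY macro is defined:

```c
#include <stdio.h>

void example(int x, int y)
{
#ifdef CHECK_EQUALITY
    if (x == y)
#endif
    {
        printf("%d\n", x);
        printf("%d\n", y);
    }
}
```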
Allman-8 uses the 8-space indentation tabs and 80-column limit of the Linux Kernel variant of K&R. The style purportedly helps improve readability on projectors. Also, the indentation size and column restriction help create a visual cue for identifying excessive nesting of code blocks. These advantages combine to help provide newer developers and learners implicit guidance to manage code complexity.[citation needed]
The Whitesmiths style, also sometimes termed Wishart style, was originally used in the documentation for the first commercial C compiler, theWhitesmithsCompiler. It was also popular in the early days of Windows, since it was used in three influential Windows programming books,Programmer's Guide to WindowsbyDurant,Carlson&Yao,Programming WindowsbyPetzold, andWindows 3.0 Power Programming TechniquesbyNorton& Yao.
Whitesmiths, along withAllman, were claimed to have been the most common bracing styles in 1991 by theJargon File, with roughly equal popularity at the time.[12][23]
This style puts the brace associated with a control statement on the next line, indented. Statements within the braces are indented to the same level as the braces.
Like Ratliff style, the closing brace is indented the same as statements within the braces.[24]
The advantages of this style are similar to those of theAllman style. Blocks are clearly set apart from control statements. The alignment of the braces with the block emphasizes that the full block is conceptually, and programmatically, one compound statement. Indenting the braces emphasizes that they are subordinate to the control statement. The ending brace no longer lines up with the statement, but instead with the opening brace.
An example:
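A hypothetical C sketch:

```c
#include <stdio.h>

/* Whitesmiths style: the opening brace goes on its own line and is
 * indented; the statements of the block and the closing brace share
 * that same indentation level. */
void report(int x)
    {
    if (x < 0)
        {
        printf("negative\n");
        }
    else
        {
        printf("non-negative\n");
        }
    }
```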
An else if is treated as a single statement, much like the #elif preprocessor directive.
Like theAllmanandWhitesmithsstyles,GNUstyle puts braces on a line by themselves, indented by two spaces, except when opening a function definition, where they are not indented.[25]In either case, the contained code is indented by two spaces from the braces.
Popularised byRichard Stallman, the layout may be influenced by his background of writingLispcode.[26]In Lisp, the equivalent to a block (a progn) is a first-class data entity, and giving it its own indentation level helps to emphasize that, whereas in C, a block is only syntax. This style can also be found in someALGOLandXPLprogramming language textbooks from the 1960s and 1970s.[27][28][discuss]
Although not indentation per se, GNU coding style also includes a space after a function name – before the left parenthesis of an argument list.[25]
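A hypothetical C sketch:

```c
#include <stdio.h>

/* GNU style: inner braces are indented by two spaces and their
 * contents by two more; the braces of a function definition are not
 * indented, and a space separates a function name from its argument
 * list. */
int
classify (int x)
{
  if (x < 0)
    {
      printf ("negative\n");
      return -1;
    }
  else
    {
      printf ("non-negative\n");
      return 1;
    }
}
```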
This style combines the advantages ofAllmanandWhitesmiths, thereby removing the possible Whitesmiths disadvantage of braces not standing out from the block. One disadvantage is that the ending brace no longer lines up with the statement it conceptually belongs to. Another possible disadvantage is that it might waste space by using two visual levels of indents for one conceptual level, but in reality this is unlikely because, in systems with single-level indentation, each level is usually at least 4 spaces, same as 2 * 2 spaces in GNU style.
TheGNU Coding Standardsrecommend this style, and nearly all maintainers ofGNU projectsoftware use it.[citation needed]
TheGNU Emacstext editor and the GNU systems'indentcommand will reformat code according to this style by default.[29]Those who do not use GNU Emacs, or similarly extensible/customisable editors, may find that the automatic indentation settings of their editor are unhelpful for this style. However, many editors defaulting to KNF style cope well with the GNU style when the tab width is set to two spaces; likewise, GNU Emacs adapts well to KNF style by simply setting the tab width to eight spaces. In both cases, automatic reformatting destroys the original spacing, but automatic line indenting will work properly.
Steve McConnell, in his bookCode Complete, advises against using this style: he marks a code sample which uses it with a "Coding Horror" icon, symbolizing especially dangerous code, and states that it impedes readability.[24]TheLinux kernelcoding style documentation also recommends against this style, urging readers to burn a copy of the GNU coding standards as a "great symbolic gesture".[11]
The 1997 edition ofComputing Concepts with C++ Essentialsby Cay S. Horstmann adaptsAllmanby placing the first statement of a block on the same line as the opening brace. This style is also used in examples in Jensen and Wirth'sPascal User Manual and Report.[30]
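A hypothetical C sketch of the 1997 variant:

```c
#include <stdio.h>

/* Horstmann style (1997 edition): Allman-style brace placement, except
 * that the first statement of a block shares the line with the opening
 * brace. */
void report(int x)
{   if (x < 0)
    {   printf("negative: %d\n", x);
    }
    else
    {   printf("non-negative: %d\n", x);
    }
}
```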
This style combines the advantages ofAllmanby keeping the vertical alignment of the braces for readability, and identifying blocks easily, with the saving of a line of the K&R style. However, the 2003 edition now uses Allman style throughout.[31]
This is the style used most commonly in the languagePicoby its designers. Pico lacks return statements, and uses semicolons as statement separators instead of terminators. It yields this syntax:[32]
The advantages and disadvantages are similar to those of saving screen real estate with K&R style. An added advantage is that the starting and closing braces are consistent in application (both share space with a line of code), relative to K&R style, where one brace shares space with a line of code and one brace has a line alone.
In the book Programmers at Work,[33] C. Wayne Ratliff, the original programmer behind the popular dBase-II and -III fourth-generation programming languages, discussed a style that is like 1TBS but where the closing brace lines up with the indentation of the nested block.
He indicated that the style was originally documented in material from Digital Research Inc. This style has sometimes been termed banner style,[34] possibly for the resemblance to a banner hanging from a pole. In this style, which is to Whitesmiths as K&R is to Allman, the closing control is indented the same as the last item in the list (and thus properly loses salience).[24] The style can make visual scanning easier for some, since the headers of any block are the only thing exdented at that level (the theory being that the closing control of the prior block interferes with the visual flow of the next block header in the K&R and Allman styles). Kernighan and Plauger use this style in the Ratfor code in Software Tools.[35]
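A hypothetical C sketch of this layout:

```c
#include <stdio.h>

/* Ratliff ("banner") style: opening braces are cuddled as in 1TBS, but
 * each closing brace is indented to the level of the statements inside
 * the block it closes. */
void report(int x) {
    if (x < 0) {
        printf("negative\n");
        }
    else {
        printf("non-negative\n");
        }
    }
```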
The following styles are common in various languages derived from C, which are both significantly similar to and different from C. They can be adapted to C as well: they might be applied to C code written as part of a project mostly written in one of these other languages, where maintaining a consistent look and feel with the project's core code overrides considerations of more conventional C style.
WhileGNU styleis sometimes characterized as C code indented by a Lisp programmer, one might even go so far as to insert closing braces together in the last line of a block. This style makes indentation the only way to distinguish blocks of code, but has the advantage of containing no uninformative lines. This could easily be called the Lisp style because this style is very common in Lisp code.
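A hypothetical C sketch of the idea, with closing braces gathered onto the last line of their block:

```c
#include <stdio.h>

/* Lisp-influenced bracing applied to C: closing braces are collected
 * at the end of the last statement of their block, so indentation
 * alone conveys the structure. */
void report(int x) {
    if (x < 0) {
        printf("negative\n"); }
    else {
        printf("non-negative\n"); } }
```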
In Lisp, the grouping of identical braces at the end of expression trees is meant to signify that it is not the user's job to visually track nesting levels, only to understand the structure of the tree.
The traditional Lisp variant of this style prefers extremely narrow levels of indentation (typically two spaces) because Lisp code usually nests very deeply since Lisp features onlyexpressions, with no distinct class ofstatements; function arguments are mostly indented to the same level to illustrate their shared status within the enclosing expression. This is also because, braces aside, Lisp is conventionally a very terse language, omitting even common forms of simple boilerplate code as uninformative, such as theelsekeyword in anif : then | elseblock, instead rendering it uniformly as(if expr1 expr2 expr3).
Note:prognis a procedure for evaluating multiple sub-expressions sequentially foreffects, while discarding all but the final (nth) return value. If all return values are desired, thevaluesprocedure would be used.
Haskelllayout can make the placement of braces optional, although braces and semicolons are allowed in the language.[36]The two segments below are equally acceptable to the compiler:
In Haskell, layout can replace braces.
Usually the braces and semicolons are omitted forproceduraldosections and the program text in general, but the style is commonly used for lists, records and other syntactic elements made up of some pair of parentheses or braces, which are separated with commas or semicolons.[37]If code following the keywordswhere,let, orofomits braces and semicolons, then indentation is significant.[38]
For an example of how terse APL typically is, here is the implementation of the step function for the Game of Life:
APL-style C resembles the terse style of APL code and is commonly used in implementations of APL.[39] This style was pioneered by Arthur Whitney, and is heavily used in the implementation of K, Whitney's own project. The J programming language is implemented in this style as well. Notably, not all implementations of APL use this style of C; GNU APL and Dyalog APL do not.
In addition to the APL-style indentation, names are typically shortened to one or two characters, which reduces both the amount of indentation and the number of expressions spanning multiple lines.[40]
Typically, programmers use the same width of whitespace to indent each block of code with commonly used widths varying from 1 to 4 spaces.
An experiment performed on Pascal code in 1983 found that indentation size significantly affected comprehensibility; sizes between 2 and 4 characters proved optimal.[41]
Although they both affect the general layout of code, indentationsizeis independent of the indentationstylediscussed here.
Typically, a programmer uses a text editor that provides tab stops at fixed intervals (a number of spaces) to assist in maintaining whitespace according to a style; the interval is called the tab width. The programmer may store the code with tab characters – one for each press of the tab key – or with a sequence of spaces equal in number to the tab width.
Storingtab charactersin code can cause visual misalignment when viewed in different contexts, which counters the value of the indentation style.
Programmers lack consensus on storing tab characters.
Proponents of storing tab characters cite ease of typing and smaller text files since a single tab character serves the purpose of multiple spaces. Opponents, such asJamie Zawinski, state that using spaces instead increasescross-platformportability.[42]Others, such as the writers of theWordPresscoding standards, state the opposite: that hard tabs increase portability.[43]A survey of the top 400,000 repositories onGitHubfound that spaces are more common.[44]
Many text editors, includingNotepad++,TextEdit,Emacs,vi, andnano, can be configured to either store tab characters when entered via the tab key or to convert them to spaces (based on the configured tab width) so that tab characters are not added to the file when the tab key is pressed. Some editors can convert tab to space characters and vice versa.
Sometext file pagers, such asless, can be configured for a tab width. Some tools such asexpand/unexpandcan convert on the fly via filters.
A tool can automate formatting code per an indentation style, for example theUnixindentcommand.
Emacs provides commands to modify indentation, such as pressing Tab to re-indent the current line; the command M-x indent-region re-indents a selected region.
Elastic tabstopsis a tabulation style which requires support from the text editor, where entire blocks of text are kept automatically aligned when the length of one line in the block changes.
In more complicated code, the programmer may lose track of block boundaries while reading the code.
This is often experienced in large sections of code containing many compound statements nested to many levels of indentation.
As the programmer scrolls to the bottom of a huge set of nested statements, they may lose track of context – such as the control structure at the top of the block.
Long compound statements can be acode smellofover complexitywhich can be solved byrefactoring.
Programmers who rely on counting the opening braces may have difficulty with indentation styles such as K&R, where the starting brace is not visually separated from itscontrol statement. Programmers who rely more on indentations will gain more from styles that are vertically compact, such as K&R, because the blocks are shorter.
To avoid losing track of control statements such as for, a large indentation can be used, such as an 8-unit-wide hard tab, along with breaking large functions into smaller and more readable functions. The Linux kernel is written this way, together with the K&R style.
Some text editors allow the programmer to jump between the two corresponding braces of a block.
For example,vijumps to the brace enclosing the same block as the one under the cursor when pressing the%key.
Since the text cursor'snextkey (viz., thenkey) retained directional positioning information (whether theupordownkey was formerly pressed), thedot macro(the.key) could then be used to place the text cursor on the next brace,[45]given a suitable coding style. Instead, inspecting the block boundaries using the%key can be used to enforce a coding standard.
Another way to maintain block awareness is to use comments after the closing brace. For example:
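A hypothetical C sketch in which each closing brace carries a comment repeating the construct it closes:

```c
#include <stdio.h>

void list_even(void)
{
    for (int i = 0; i < 10; i++) {
        if (i % 2 == 0) {
            printf("%d is even\n", i);
        } /* end if (i even) */
    } /* end for (i) */
} /* end list_even() */
```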
A disadvantage is maintaining the same code in multiple locations – above and below the block.
Some editors provide support for maintaining block awareness. Afolding editorcan hide (fold) and reveal (unfold) blocks by indentation level. Some editors highlight matching braces when thecursoris positioned next to one.
|
https://en.wikipedia.org/wiki/Indent_style
|
TheMotor Industry Software Reliability Association(MISRA) is an organization that produces guidelines for the software developed for electronic components used in theautomotive industry.[1]It is a collaboration between numerous vehicle manufacturers, component suppliers and engineering consultancies.
The aim of this organization is to provide advice on questions of quality assurance, mainly to the automotive industry, for the creation and application of safe, reliable software within vehicles.[2] The mission statement of MISRA is "To provide assistance to the automotive industry in the application and creation within vehicle systems of safe and reliable software".[3] The safety requirements of software used in the control units of automobiles differ from those of software in other industries and devices.
MISRA creates, reviews and publishes (sells) standards, such as theMISRA CCoding Standard for the C programming language, first published in 1998.
MISRA was formed in the 1990s by a consortium of organizations established in response to the UK Safety Critical Systems Research Programme, which was supported by the Department of Trade and Industry and the Engineering and Physical Sciences Research Council. Another related programme was "SafeIT".
Subsequently MISRA published its first guide,"Development guidelines for vehicle based software", which is considered a foundational element offunctional safetyby the engineering community. This was roughly ten years before the creation of theISO 26262standard.
Since 2021, MISRA is managed by the MISRA Consortium Limited, an independent not-for-profit entity.[4][5]
The Steering Committee is as follows (2024).
Former members are:Protean ElectricLtd[6]
According to MISRA, the following activities are pursued:
MISRA guidelines are a set of development guidelines to ensure safe and reliable development of control software forelectronic control units(ECUs). The primary focus of the MISRA guidelines is error prevention, notprogramming style. Among other things, the guidelines are intended to guide and support the following objectives
As with many standards (for example,ISO,BSI,RTCA), the MISRA guideline documents are not free to users or implementers.[8]
MISRA guidelines are primarily focused and derived for theCandC++programming languages. The main standard is known as "MISRA C" and has been updated several times.
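As a purely illustrative sketch (not a quotation of any specific MISRA rule), compliant code tends to favour defensive constructs such as explicit braces, an explicit default clause in every switch, and fixed-width integer types:

```c
#include <stdint.h>

/* Illustrative only: the kind of defensive constructs coding
 * guidelines such as MISRA C encourage. Hypothetical function. */
static uint8_t classify(uint8_t code)
{
    uint8_t result;

    switch (code) {
    case 0u:
        result = 1u;
        break;
    case 1u:
        result = 2u;
        break;
    default:          /* every switch ends with a default clause */
        result = 0u;
        break;
    }
    return result;
}
```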
|
https://en.wikipedia.org/wiki/Motor_Industry_Software_Reliability_Association
|
Programming style, also known ascoding style, refers to the conventions and patterns used in writingsource code, resulting in a consistent and readablecodebase. These conventions often encompass aspects such asindentation,naming conventions,capitalization, andcomments. Consistent programming style is generally considered beneficial forcode readabilityandmaintainability, particularly in collaborative environments.
Maintaining a consistent style across a codebase can improve readability and ease of software maintenance. It allows developers to quickly understand code written by others and reduces the likelihood of errors during modifications. Adhering to standardized coding guidelines ensures that teams follow a uniform approach, making the codebase easier to manage and scale. Many organizations andopen-sourceprojects adopt specific coding standards to facilitate collaboration and reduce cognitive load.
Style guidelines can be formalized in documents known ascoding conventions, which dictate specific formatting and naming rules. These conventions may be prescribed by official standards for a programming language or developed internally within a team or project. For example,Python'sPEP 8is a widely recognized style guide that outlines best practices for writing Python code. In contrast, languages likeCorJavamay have industry standards that are either formally documented or adhered to by convention.
Adherence to coding style can be enforced through automated tools, which format code according to predefined guidelines. These tools reduce the manual effort required to maintain style consistency, allowing programmers to focus on logic and functionality. For instance, tools such asBlackfor Python andclang-formatfor C++ automatically reformat code to comply with specified coding standards.
Common elements of coding style include:
Indentation style can assist a reader in various ways, including identifying control flow and blocks of code. In some programming languages, indentation is used to delimit blocks of code, and is therefore not a matter of style. In languages that ignore whitespace, indentation can still affect readability.
For example, formatted in a commonly-used style:
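A hypothetical C fragment:

```c
int count_matches(const int *values, int n, int target)
{
    int matches = 0;
    for (int i = 0; i < n; i++) {
        if (values[i] == target) {
            matches++;
        }
    }
    return matches;
}
```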
Arguably, poorly formatted:
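The same hypothetical fragment with arbitrary line breaks and indentation; it still compiles, but is harder to read:

```c
int count_matches(const int *values,
  int n, int target) {int matches = 0;
      for (int i = 0; i < n;
  i++) { if (values[i] == target)
{ matches++; } }
   return matches; }
```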
The ModuLiq Zero Indentation Style groups by empty line rather than indenting.
Example:
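A rough, hypothetical sketch of the idea in C, the defining trait being that nothing is indented and blank lines separate the logical groups:

```c
int clamp(int x, int lo, int hi)
{
if (x < lo)
return lo;

if (x > hi)
return hi;

return x;
}
```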
Luadoes not use the traditionalcurly bracesorparentheses; rather, the expression in a conditional statement must be followed bythen, and the block must be closed withend.
Indenting is optional in Lua. The keywords and, or, and not function as logical operators.
Pythonrelies on theoff-side rule, using indenting to indicate and implement control structure, thus eliminating the need for bracketing (i.e.,{and}). However, copying and pasting indented code can cause problems, because the indent level of the pasted code may not be the same as the indent level of the target line. Such reformatting by hand is tedious and error prone, but sometext editorsandintegrated development environments(IDEs) have features to do it automatically. There are also problems when indented code is rendered unusable when posted on a forum or web page that removes whitespace, though this problem can be avoided where it is possible to enclose code in whitespace-preserving tags such as "<pre> ... </pre>" (forHTML), "[code]" ... "[/code]" (forbbcode), etc.
Python starts a block with a colon (:).
Python programmers tend to follow a commonly agreed style guide known as PEP8.[1]There are tools designed to automate PEP8 compliance.
Haskell, like Python, has theoff-side rule. It has a two-dimension syntax where indenting is meaningful to define blocks (although, an alternate syntax uses curly braces and semicolons).
Haskell is a declarative language: rather than statements, a Haskell script consists of declarations.
Example:
may be written in one line as:
Haskell encourages the use ofliterate programming, where extended text explains the genesis of the code. In literate Haskell scripts (named with thelhsextension), everything is a comment except blocks marked as code. The program can be written inLaTeX, in such case thecodeenvironment marks what is code. Also, each active code paragraph can be marked by preceding and ending it with an empty line, and starting each line of code with a greater than sign and a space. Here an example using LaTeX markup:
And an example using plain text:
Some programmers consider it valuable to align similar elements vertically (as tabular, in columns), citing that it can make typo-generated bugs more obvious.
For example, unaligned:
aligned:
Unlike the unaligned code, the aligned code implies that the search and replace values are related since they have corresponding elements. As there is one more value for search than replacement, if this is a bug, it is more likely to be spotted via visual inspection.
Cited disadvantages of vertical alignment include:
Maintaining alignment can be alleviated by a tool that provides support (i.e. forelastic tabstops), although that creates a reliance on such tools.
As an example, simple refactoring operations to rename "$replacement" to "$r" and "$anothervalue" to "$a" results in:
With unaligned formatting, these changes do not have such a dramatic, inconsistent or undesirable effect:
Afree-format languageignoreswhitespace characters: spaces, tabs and new lines so the programmer is free to style the code in different ways without affecting the meaning of the code. Generally, the programmer uses style that is considered to enhancereadability.
The two code snippets below are the same logically, but differ in whitespace.
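As a hypothetical C illustration, the first snippet squeezes a function onto a single line:

```c
int add(int a, int b) { return a + b; }
```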
versus
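the same function with its whitespace spread across many lines; both forms mean exactly the same thing to the compiler:

```c
int add(
    int a,
    int b
)
{
    return
        a + b;
}
```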
The use oftabsfor whitespace is debatable. Alignment issues arise due to differing tab stops in different environments and mixed use of tabs and spaces.
As an example, one programmer preferstab stopsof four and has their toolset configured this way, and uses these to format their code.
Another programmer prefers tab stops of eight, and their toolset is configured this way. When someone else examines the original person's code, they may well find it difficult to read.
One widely used solution to this issue is to forbid the use of tabs for alignment, or to adopt rules on how tab stops must be set. Note that tabs work fine provided they are used consistently, restricted to logical indentation, and not used for alignment.
|
https://en.wikipedia.org/wiki/Programming_style
|
Insoftware engineeringanddevelopment, asoftware metricis a standard of measure of a degree to which asoftware systemor process possesses some property.[1][2]Even if a metric is not a measurement (metrics are functions, while measurements are the numbers obtained by the application of metrics), often the two terms are used as synonyms. Sincequantitative measurementsare essential in all sciences, there is a continuous effort bycomputer sciencepractitioners and theoreticians to bring similar approaches to software development. The goal is obtaining objective, reproducible and quantifiable measurements, which may have numerous valuable applications in schedule and budget planning, cost estimation, quality assurance, testing, softwaredebugging, softwareperformance optimization, and optimal personnel task assignments.
Common software measurements include:
As software development is a complex process, with high variance on both methodologies and objectives, it is difficult to define or measure software qualities and quantities and to determine a valid and concurrent measurement metric, especially when making such a prediction prior to the detail design. Another source of difficulty and debate is in determining which metrics matter, and what they mean.[8][9]The practical utility of software measurements has therefore been limited to the following domains:
A specific measurement may target one or more of the above aspects, or the balance between them, for example as an indicator of team motivation or project performance.[10]Additionally metrics vary between static and dynamic program code, as well as for object oriented software (systems).[11][12]
Some software development practitioners point out that simplistic measurements can cause more harm than good.[13] Others have noted that metrics have become an integral part of the software development process.[8] The impact of measurement on programmer psychology has raised concerns about harmful effects on performance due to stress, performance anxiety, and attempts to cheat the metrics, while others find that it has a positive impact on the value developers place on their own work and prevents them from being undervalued. Some argue that the definitions of many measurement methodologies are imprecise, and consequently it is often unclear how tools for computing them arrive at a particular result,[14] while others argue that imperfect quantification is better than none ("You can't control what you can't measure.").[15] Evidence shows that software metrics are being widely used by government agencies, the US military, NASA,[16] IT consultants, academic institutions,[17] and commercial and academic development estimation software.
|
https://en.wikipedia.org/wiki/Software_metrics
|
Intelecommunications, anEnd-of-Transmission character(EOT) is atransmissioncontrol character. Its intended use is to indicate the conclusion of a transmission that may have included one or more texts and any associatedmessageheadings.[1]
An EOT is often used to initiate other functions, such as releasing circuits, disconnecting terminals, or placing receive terminals in astandbycondition.[1]Its most common use today is to cause a Unixterminaldriver to signalend of fileand thus exit programs that are awaiting input.
In ASCII and Unicode, the character is encoded at U+0004 <control-0004>. It can be entered as Ctrl+D, written ^D in caret notation. Unicode provides the character U+2404 ␄ SYMBOL FOR END OF TRANSMISSION for when EOT needs to be displayed graphically.[2] In addition, U+2301 ⌁ ELECTRIC ARROW can also be used as a graphic representation of EOT; it is defined in Unicode as "symbol for End of Transmission".[3]
The EOT character in Unix is different from theControl-Zin DOS. The DOS Control-Z byte is actually sent and/or placed in files to indicate where the text ends. In contrast, the Control-D causes the Unix terminal driver to signal theEOFcondition, which is not a character, while the byte has no special meaning if actually read or written from a file or terminal.
In Unix, the end-of-file character (by default EOT) causes the terminal driver to make available all characters in its input buffer immediately; normally the driver would collect characters until it sees an end-of-line character. If the input buffer is empty (because no characters have been typed since the last end-of-line or end-of-file), a program reading from the terminal reads a count of zero bytes. In Unix, such a condition is understood as having reached the end of the file.
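A minimal C sketch of how a program observes this on a POSIX system (essentially a stripped-down cat): read() returns 0 when Ctrl+D is typed at the start of a line, and the program treats that as the end of its input.

```c
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    char buf[256];
    ssize_t n;

    /* read() returns however many bytes the terminal driver hands
     * over; it returns 0 (end of file) when Ctrl+D is typed at the
     * start of a line, because the driver delivers an empty buffer. */
    while ((n = read(STDIN_FILENO, buf, sizeof buf)) > 0)
        fwrite(buf, 1, (size_t)n, stdout);

    return 0; /* n == 0: end of input; n < 0 would indicate an error */
}
```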
This can be demonstrated with thecatprogram onUnix-like operating systems such asLinux: Run thecatcommand with no arguments, so it accepts its input from the keyboard and prints output to the screen. Type a few characters without pressing↵ Enter, then typeCtrl+D. The characters typed to that point are sent to cat, which then writes them to the screen. IfCtrl+Dis typed without typing any characters first, the input stream is terminated and the program ends. An actual EOT is obtained by typingCtrl+VthenCtrl+D.
If the terminal driver is in "raw" mode, it no longer interprets control characters, and the EOT character is sent unchanged to the program, which is free to interpret it any way it likes. A program may then decide to handle the EOT byte as an indication that it should end the text; this would then be similar to howCtrl+Zis handled by DOS programs.
The EOT character is used in legacy communications protocols bymainframe computermanufacturers such asIBM,Burroughs Corporation, and theBUNCH. Terminal transmission control protocols such asIBM 3270Poll/Select, or Burroughs TD830 Contention Mode protocol use the EOT character to terminate a communications sequence between two cooperating stations (such as a host multiplexer or Input/Output terminal).
A single Poll (ask the station for data) or Select (send data to the station) operation will include two round-trip send-reply operations between the polling station and the station being polled, the final operation being transmission of a single EOT character to the initiating station.
|
https://en.wikipedia.org/wiki/End-of-Transmission_character
|
In computer data, asubstitute character(␚) is acontrol characterthat is used to pad transmitted data in order to send it in blocks of fixed size, or to stand in place of a character that is recognized to be invalid, erroneous or unrepresentable on a given device. It is also used as an escape sequence in someprogramming languages.
In theASCII character set, this character is encoded by the number 26 (1Ahex). Standardkeyboardstransmit this code when theCtrlandZkeys are pressed simultaneously (Ctrl+Z, often documented by convention as^Z).[1]Unicodeinherits this character from ASCII, but recommends that thereplacement character(�, U+FFFD) be used instead to represent un-decodable inputs, when the output encoding is compatible with it.
Historically, underPDP-6monitor,[2]RT-11,VMS, andTOPS-10,[3]and in early PCCP/M1 and 2operating systems(and derivatives likeMP/M) it was necessary to explicitly mark theend of a file(EOF) because the nativefilesystemcould not record the exact file size by itself; files were allocated in extents (records) of a fixed size, typically leaving some allocated but unused space at the end of each file.[4][5][6][7]This extra space was filled with1A16(hex) characters under CP/M. The extended CP/M filesystems used by CP/M 3 and higher (and derivatives likeConcurrent CP/M,Concurrent DOS, andDOS Plus) did support byte-granular files,[8][9]so this was no longer a requirement, but it remained as a convention (especially fortext files) in order to ensure backward compatibility.
In CP/M, 86-DOS, MS-DOS, PC DOS, DR-DOS, and their various derivatives, the SUB character was also used to indicate the end of a character stream,[citation needed] and thereby used to terminate user input in an interactive command line window (and as such, often used to finish console input redirection, e.g. as instigated by the command COPY CON: TYPEDTXT.TXT).
While no longer technically required to indicate the end of a file, as of 2017, many text editors[which?]and program languages still support this convention, or can be configured to insert this character at the end of a file when editing, or at least properly cope with them in text files.[citation needed]In such cases, it is often termed a "soft" EOF, as it does not necessarily represent the physical end of the file, but is more a marker indicating that "there is no useful data beyond this point". In reality, more data may exist beyond this character up to the actual end of the data in the file system, thus it can be used to hide file content when the file is entered at the console or opened in editors. Many file format standards (e.g.PNGorGIF) include the SUB character in their headers to perform precisely this function. Some modern text file formats (e.g.CSV-1203[10]) still recommend a trailing EOF character to be appended as the last character in the file. However, typingControl+Zdoes not embed an EOF character into a file in eitherDOSorWindows, nor do theAPIsof those systems use the character to denote the actual end of a file.
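As an illustration of the convention (a hypothetical C sketch, since nothing in the operating system enforces it), a reader can simply stop at the first SUB byte:

```c
#include <stdio.h>

/* Hypothetical sketch: read a text file but stop at the first SUB
 * (0x1A) byte, treating it as a "soft" end-of-file marker even if more
 * bytes follow on disk. */
void print_until_sub(const char *path)
{
    FILE *fp = fopen(path, "rb");
    int c;

    if (fp == NULL)
        return;
    while ((c = fgetc(fp)) != EOF && c != 0x1A)
        putchar(c);
    fclose(fp);
}
```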
Some programming languages (e.g.Visual Basic) will not read past a "soft" EOF when using the built-in text file reading primitives (INPUT, LINE INPUT etc.),[citation needed]and alternate methods must be adopted, e.g. opening the file in binary mode or using the File System Object to progress beyond it.
Character 26 was used to mark "End of file" even though ASCII calls this character Substitute and has other characters to indicate "End of file". Number 28, which is called "File Separator", has also been used for similar purposes.
InUnix-like operating systems, this character is typically used inshellsas a way for the user tosuspendthe currently executing interactive process.[11]The suspended process can then be resumed inforeground(interactive) mode, or be made to resume execution inbackgroundmode, or beterminated. When entered by a user at theircomputer terminal, the currently running foreground process is sent a "terminal stop" (SIGTSTP) signal, which generally causes the process to suspend its execution. The user can later continue the process execution by using the "foreground" command (fg) or the "background" command (bg).
The Unicode Security Considerations report[12]recommends this character as a safe replacement for unmappable characters during character set conversion.
In many GUIs and applications,Control+Z(⌘ Command+ZonmacOS) can be used toundothe last action. In many applications, earlier actions than the last one can also be undone by pressingControl+Zmultiple times.Control+Zwas one of a handful ofkeyboardsequences chosen by the program designers atXerox PARCto controltext editing.
ASCIIandUnicoderepresentation of "substitute":
|
https://en.wikipedia.org/wiki/Substitute_character
|
End of messageorEOM(as in "(EOM)" or "<EOM>") signifies the end of a message, often ane-mailmessage.[1]
Thesubject of an e-mail message may contain such an abbreviationto signify that all content is in the subject line so that the message itself does not need to be opened (e.g., "No classes Monday (EOM)" or "Midterm delayed <EOM>"). This practice can save the time of the receiver and has been recommended to increase productivity.[1][2]
EOM can also be used in conjunction withno reply necessary, or NRN, to signify that the sender does not require (or would prefer not to receive) a response (e.g., "Campaign has launched (EOM/NRN)") orreply requestedor RR to signify that the sender wishes a response (e.g., "Got a minute? (EOM/RR)"). These are examples ofInternet slang.
EOM is often used this way, as a synonym to NRN, inblogsand forums online. It is often a snide way for commenters to imply that their message is so perfect that there can be no logical response to it. Or it can be used as a way of telling another specific poster to stop writing back.[citation needed]
EOM can also be defined as the final 3 buzzes of an alert of theEmergency Alert Systemto know when the alert is finished.
In earlier communications methods, anend of message("EOM") sequence of characters indicated to a receiving device or operator that the current message has ended. Inteleprintersystems, the sequence "NNNN", on a line by itself, is an end of message indicator. In severalMorse codeconventions, includingamateur radio, theprosignAR(dit dah dit dah dit) means end of message.
In the originalASCIIcode, "EOM" corresponded to code 03hex, which has since been renamed to "ETX" ("end of text").[3]
|
https://en.wikipedia.org/wiki/End_of_message
|
Incomputing, ahere document(here-document,here-text,heredoc,hereis,here-stringorhere-script) is a fileliteralorinput streamliteral: it is a section of asource codefile that is treated as if it were a separatefile. The term is also used for a form of multilinestring literalsthat use similar syntax, preserving line breaks and other whitespace (including indentation) in the text.
Here documents originate in theUnix shell,[1]and are found in theBourne shellsince 1979, and most subsequent shells. Here document-style string literals are found in varioushigh-level languages, notably thePerl programming language(syntax inspired by Unix shell) and languages influenced by Perl, such asPHPandRuby.JavaScriptalso supports this functionality viatemplate literals, a feature added in its 6th revision (ES6). Other high-level languages such asPython,JuliaandTclhave other facilities for multiline strings.
Here documents can be treated either as files or strings. Some shells treat them as aformat stringliteral, allowingvariable substitutionandcommand substitutioninside the literal.
The most common syntax for here documents, originating in Unix shells, is<<followed by adelimitingidentifier(often the wordEOForEND[2]), followed, starting on the next line, by the text to be quoted, and then closed by the same delimiting identifier on its own line. This syntax is because here documents are formally stream literals, and the content of the here document is often redirected tostdin(standard input) of the preceding command or current shell script/executable.
The here document syntax is analogous to the shell syntax for inputredirection, which is<followed by the name of the file to be used as input.
Other languages often use substantially similar syntax, but details of syntax and actual functionality can vary significantly. When used simply for string literals, the<<does not indicate indirection, but is simply a starting delimiter convention. In some languages, such as Ruby,<<is also used for input redirection, thus resulting in<<being used twice if one wishes to redirect from a here document string literal.
Narrowly speaking, here documents are file literals or stream literals. These originate in the Unix shell, though similar facilities are available in some other languages.
Here documents are available in many Unix shells.[1]In the following example, text is passed to thetrcommand (transliterating lower to upper-case) using a here document. This could be in a shell file, or entered interactively at a prompt.
In this caseENDwas used as the delimiting identifier. It specified the start and end of the here document. The redirect and the delimiting identifier do not need to be separated by a space:<<ENDor<< ENDboth work equally well.
By default, behavior is largely identical to the contents of double quotes: variable names are replaced by their values, commands within backticks are evaluated, etc.[a]
This can be disabled by quoting any part of the label, which is then ended by the unquoted value;[b]the behavior is essentially identical to that if the contents were enclosed in single quotes. Thus for example by setting it in single quotes:
Double quotes may also be used, but this is subject to confusion, because expansiondoesoccur in a double-quoted string, but doesnotoccur in a here document with double-quoted delimiter.[4]Single- and double-quoted delimiters are distinguished in some other languages, notablyPerl(see below), where behavior parallels the corresponding string quoting.
InPOSIX shellbut not csh/tcsh, appending a minus sign to the<<(i.e.<<-) has the effect that leading tabs are ignored.[5]This allows indenting here documents in shell scripts (primarily for alignment with existing indentation) without changing their value:[c]
A script containing:
produces:
Another use is to output to a file:
Ahere string(available inbash,ksh, orzsh) is syntactically similar, consisting of<<<, and effects input redirection from aword(a sequence treated as a unit by the shell, in this context generally a string literal). In this case the usual shell syntax is used for the word (“here string syntax”), with the only syntax being the redirection: a here string is an ordinary string used for input redirection, not a special kind of string.
A single word need not be quoted:
In case of a string with spaces, it must be quoted:
This could also be written as:
Multiline strings are acceptable, yielding:
Note that leading and trailing newlines, if present, are included:
The key difference from here documents is that, in here documents, the delimiters are on separate lines; the leading and trailing newlines are stripped. Unlike here documents, here strings do not use delimiters.
Here strings are particularly useful for commands that often take short input, such as the calculatorbc:
Note that here string behavior can also be accomplished (reversing the order) via piping and theechocommand, as in:
however here strings are particularly useful when the last command needs to run in the current process, as is the case with thereadbuiltin:
yields nothing, while
This happens because in the previous example piping causesreadto run in a subprocess, and as such cannot affect the environment of theparent process.
InMicrosoftNMAKE, here documents are referred to asinline files. Inline files are referenced as<<or<<pathname: the first notation creates a temporary file, the second notation creates (or overwrites) the file with the specified pathname.
An inline file is terminated with<<on a line by itself, optionally followed by the (case-insensitive) keywordKEEPorNOKEEPto indicate whether the created file should be kept.
Rdoes not have file literals, but provides equivalent functionality by combining string literals with a string-to-file function. R allows arbitrary whitespace, including newlines, in strings. A string then can be turned into afile descriptorusing thetextConnection()function. For example, the following turns a data table embedded in the source code into a data-frame variable:
Perl[6]and Ruby[7]have a form of file literal, which can be considered a form ofdata segment. In these languages, including the line__DATA__(Perl) or__END__(Ruby, old Perl) marks the end of thecode segmentand the start of the data segment. Only the contents prior to this line are executed, and the contents of the source file after this line are available as a file object:PACKAGE::DATAin Perl (e.g.,main::DATA) andDATAin Ruby. As an inline file, these are semantically similar to here documents, though there can be only one per script. However, in these languages the term "here document" instead refers to multiline string literals, as discussed below.
As further explained inData URI scheme, all major web browsers understand URIs that start withdata:as here document.
The term "here document" or "here string" is also used for multilinestring literalsin various programming languages, notably Perl (syntax influenced by Unix shell), and languages influenced by Perl, notably PHP and Ruby. The shell-style<<syntax is often retained, despite not being used for input redirection.
In Perl there are several different ways to invoke here docs.[8]The delimiters around the tag have the same effect within the here doc as they would in a regular string literal: For example, using double quotes around the tag allowsvariables to be interpolated, but using single quotes doesn't, and using the tag without either behaves like double quotes. Using backticks as the delimiters around the tag runs the contents of the heredoc as a shell script. It is necessary to make sure that the end tag is at the beginning of the line or the tag will not be recognized by the interpreter.
Note that the here doc does not start at the tag—but rather starts on the next line. So the statement containing the tag continues on after the tag.
Here is an example with double quotes:
Output:
Here is an example with single quotes:
Output:
And an example with backticks (may not be portable):
It is possible to start multiple heredocs on the same line:
The tag itself may contain whitespace, which may allow heredocs to be used without breakingindentation.
Although since Perl version 5.26,[9]heredocs can include indention:
In addition to these strings, Perl also features file literals, namely the contents of the file following__DATA__(formerly__END__) on a line by itself. This is accessible as the file objectPACKAGE::DATAsuch asmain::DATA, and can be viewed as a form ofdata segment.
In PHP, here documents are referred to as heredocs. In PHP, heredocs are not string literals; heredoc text behaves just like a double-quoted string, but without the double quotes. This means, for example, that `$` will be parsed as the start of a variable, and `${` or `{$` as the start of a complex variable expression.
Outputs
In PHP versions prior to 7.3, the line containing the closing identifier must not contain any other characters, except an optional ending semicolon. Otherwise, it will not be considered to be a closing identifier, and PHP will continue looking for one. If a proper closing identifier is not found, a parse error will result at the last line of the script. However, from version 7.3, it is no longer required that the closing identifier be followed by a semicolon or newline. Additionally the closing identifier may be indented, in which case the indentation will be stripped from all lines in the doc string.[10]
In PHP 5.3 and later, like Perl, it is possible to not interpolate variables by surrounding the tag with single quotes; this is called anowdoc:[11]
In PHP 5.3+ it is also possible to surround the tag with double quotes, which like Perl has the same effect as not surrounding the tag with anything at all.
The following Ruby code displays a grocery list by using a here document.
The result:
The<<in a here document does not indicate input redirection, but Ruby also uses<<for input redirection, so redirecting to a file from a here document involves using<<twice, in different senses:
As with Unix shells, Ruby also allows for the delimiting identifier not to start on the first column of a line, if the start of the here document is marked with the slightly different starter<<-.
Besides, Ruby treats here documents as a double-quoted string, and as such, it is possible to use the#{}construct to interpolate code.
The following example illustrates both of these features:
Ruby expands on this by providing the "<<~" syntax for omitting indentation on the here document:
The common indentation of two spaces is omitted from all lines:
Like Perl, Ruby allows for starting multiple here documents in one line:
As with Perl, Ruby features file literals, namely the contents of the file following__END__on a line by itself. This is accessible as the file objectDATAand can be viewed as a form ofdata segment.
Python supports multi-line strings as "verbatim" strings. They may be enclosed in three single (') or double (") quotation marks; the latter is shown in the examples below.
From Python 3.6 onwards, verbatim f-strings support variable and expression interpolation.
Text blocks are supported starting withJava 15viaJEP378:[12][13]
SinceC++11, C++ supports string literals with custom delimiter ("my_delimiter" in this example):
will print out
Since version 2.0,Dhas support for here document-style strings using the 'q' prefix character. These strings begin withq"IDENTfollowed immediately by a newline (for an arbitrary identifier IDENT), and end withIDENT"at the start of a line.
D also supports a few quoting delimiters, with similar syntax, with such strings starting withq"[and ending with]"or similarly for other delimiter character (any of () <> {} or []).
On IBM's Job Control Language (JCL), used on its earlier MVS and current z/OS operating systems, data which is inline to a job stream can be identified by an * on a DD statement, such as //SYSIN DD * or //SYSIN DD *,DLM=text. In the first case, the lines of text follow and are combined into a pseudo file with the DD name SYSIN. All records following the command are combined until either another OS/JCL command occurs (any line beginning with //), the default EOF sequence (/*) is found, or the physical end of data occurs. In the second case, the conditions are the same, except that the DLM= operand is used to specify the text string signalling end of data, which can be used if a data stream contains JCL (again, any line beginning with //) or the /* sequence (such as comments in C or C++ source code). The following compiles and executes an assembly language program, supplied as in-line data to the assembler.
The //SYSIN DD * statement is the functional equivalent of <</*, indicating that a stream of data follows, terminated by /*.
Racket'shere stringsstart with#<<followed by characters that define a terminator for the string.[14]The content of the string includes all characters between the#<<line and a line whose only content is the specified terminator. More precisely, the content of the string starts after a newline following#<<, and it ends before a newline that is followed by the terminator.
Outputs:
No escape sequences are recognized between the starting and terminating lines; all characters are included in the string (and terminator) literally.
Outputs:
Here strings can be used normally in contexts where normal strings would:
Outputs:
An interesting alternative is to use the language extensionat-expto write @-expressions.[15]They look like this:
Outputs:
An @-expression is not specific nor restricted to strings, it is a syntax form that can be composed with the rest of the language.
InPowerShell, here documents are referred to ashere-strings. A here-string is a string which starts with an open delimiter (@"or@') and ends with a close delimiter ("@or'@) on a line by itself, which terminates the string. All characters between the open and close delimiter are considered the string literal.
Using a here-string with double quotes allows variables to be interpolated; using single quotes does not. Variable interpolation occurs with simple variables (e.g. $x but NOT $x.y or $x[0]).
You can execute a set of statements by putting them in$()(e.g.$($x.y)or$(Get-Process | Out-String)).
In the following PowerShell code, text is passed to a function using a here-string.
The functionConvertTo-UpperCaseis defined as follows:
Here is an example that demonstrates variable interpolation and statement execution using a here-string with double quotes:
Using a here-string with single quotes instead, the output would look like this:
InDCL scripts, any input line which does not begin with a $ symbol is implicitly treated as input to the preceding command - all lines which do not begin with $ are here-documents. The input is either passed to the program, or can be explicitly referenced by the logical name SYS$INPUT (analogous to the Unix concept ofstdin).
For instance, explicitly referencing the input as SYS$INPUT:
produces:
Additionally, the DECK command, initially intended for punched card support (hence its name: it signified the beginning of adata deck) can be used to supply input to the preceding command.[16]The input deck is ended either by the command $ EOD, or the character pattern specified by the /DOLLARS parameter to DECK.
Example of a program totalling up monetary values:
Would produce the following output (presuming ADD_SUMS was written to read the values and add them):
Example of using DECK /DOLLARS to create one command file from another:
YAML primarily relies on whitespace indentation for structure, making it resistant to delimiter collision and capable of representing multi-line strings with folded string literals.
|
https://en.wikipedia.org/wiki/Here_document
|
-30-has been traditionally used by journalists in North America to indicate the end of a story or article that is submitted foreditingandtypesetting. It is commonly employed when writing ondeadlineand sending bits of the story at a time, via telegraphy, teletype, electronic transmission, or paper copy, as a necessary way to indicate the end of the article.[1]It is also found at the end ofpress releases.
The origin of the term is unknown.[1][2]One theory is that the journalistic employment of -30- originated from the number's use during theAmerican Civil Warera in the92 Codeoftelegraphicshorthand, where it signified the end of a transmission[3]and that it found further favor when it was included in thePhillips Codeof abbreviations and short markings for common use that was developed by theAssociated Presswire service. Telegraph operators familiar with numericwire signalssuch as the 92 Code used theserailroadcodes to providelogisticsinstructions andtrain orders, and they adapted them to notate an article's priority or confirm its transmission and receipt. Thismetadatawould occasionally appear in print whentypesettersincluded the codes in newspapers,[1]especially the code for "No more – the end", which was presented as"-30-"on atypewriter.
|
https://en.wikipedia.org/wiki/-30-
|
AnINVITE of Death[1]is a type of attack on aVoIP-system that involves sending a malformed or otherwise maliciousSIPINVITE request to atelephony server, resulting in a crash of that server. Because telephony is usually a critical application, this damage causes significant disruption to the users and poses tremendous acceptance problems with VoIP. These kinds of attacks do not necessarily affect only SIP-based systems; all implementations with vulnerabilities in the VoIP area are affected. TheDoS attackcan also be transported in other messages than INVITE. For example, in December 2007 there was a report about a vulnerability in the BYE message ("BYE BYE") by using an obsolete header with the name "Also".[2]However, sending INVITE packets is the most popular way of attacking telephony systems.[3]The name is a reference to theping of deathattack that caused serious trouble in 1995–1997.
The INVITE of Death vulnerability was found[4] on February 16, 2009.[5] The vulnerability allows the attacker to crash the server, causing remote denial of service (DoS) by sending a single malformed packet. Using a malformed packet, an attacker can overflow specific string buffers, add a large number of token characters, and modify fields in an illegal fashion. As a result, the server is tricked into an undefined state, which can lead to call processing delays, unauthorized access, and a complete denial of service. The problem specifically exists in OpenSBC version 1.1.5-25 in the handling of the “Via” field of a maliciously crafted SIP packet.[6] The INVITE of Death packet was also used to find a new vulnerability in the patched OpenSBC server through network dialog minimization.[7][8]
For the popular open-source Asterisk PBX, there are security advisories that cover not only signaling-related problems, but also problems with other protocols and their resolution.[9] Problems may include malformed SDP attachments in which codec numbers are out of the valid range, or obsolete headers such as “Also”.
The INVITE of Death is specifically a problem for operators that run their servers on the public internet. Because SIP allows the usage of UDP packets, it is easy for an attacker to spoof any source address in the internet and send the INVITE of death from untraceable locations. By sending these kinds of requests periodically, attackers can completely interrupt the telephony service. The only choice for the service provider is to upgrade their systems until the attack does not crash the system anymore.
A large number of VoIP vulnerabilities exist for IP phones. DoS attacks on VoIP phones are less critical than attacks on central devices like IP-PBXs, as, usually, only the endpoint is affected.[citation needed]
|
https://en.wikipedia.org/wiki/INVITE_of_Death
|
A ping flood is a simple denial-of-service attack in which the attacker overwhelms the victim with ICMP "echo request" (ping) packets.[1] It is most effective when using the flood option of ping, which sends ICMP packets as fast as possible without waiting for replies. Most implementations of ping require the user to be privileged in order to specify the flood option. The attack is most successful if the attacker has more bandwidth than the victim (for instance an attacker with a DSL line and the victim on a dial-up modem). The attacker hopes that the victim will respond with ICMP "echo reply" packets, thus consuming both outgoing and incoming bandwidth. If the target system is slow enough, it is possible to consume enough of its CPU cycles for a user to notice a significant slowdown.
A ping flood can also be used as a diagnostic for network packet loss and throughput issues.[2]
|
https://en.wikipedia.org/wiki/Ping_flood
|
A Smurf attack is a distributed denial-of-service attack in which large numbers of Internet Control Message Protocol (ICMP) packets with the intended victim's spoofed source IP are broadcast to a computer network using an IP broadcast address.[1] Most devices on a network will, by default, respond to this by sending a reply to the source IP address. If the number of machines on the network that receive and respond to these packets is very large, the victim's computer will be flooded with traffic. This can slow down the victim's computer to the point where it becomes impossible to work on.
The original tool for creating a Smurf attack was written by Dan Moschuk (alias TFreak) in 1997.[2][3]
In the late 1990s, many IP networks would participate in Smurf attacks if prompted (that is, they would respond to ICMP requests sent to broadcast addresses). The name comes from the idea of very small, but numerous attackers overwhelming a much larger opponent (see Smurfs). Today, administrators can make a network immune to such abuse; therefore, very few networks remain vulnerable to Smurf attacks.[4]
A Smurf amplifier is a computer network that lends itself to being used in a Smurf attack. Smurf amplifiers act to worsen the severity of a Smurf attack because they are configured in such a way that they generate a large number of ICMP replies to the victim at the spoofed source IP address.
In DDoS, amplification is the degree of bandwidth enhancement that the original attack traffic undergoes (with the help of Smurf amplifiers) during its transmission towards the victim computer. An amplification factor of 100, for example, means that an attacker could manage to create 100 Mb/s of traffic using just 1 Mb/s of its own bandwidth.[5]
Under the assumption that no countermeasures are taken to dampen the effect of a Smurf attack, this is what happens in the target network with n active hosts (that will respond to ICMP echo requests).
The ICMP echo request packets have a spoofed source address (the Smurfs' target) and a destination address (the patsy; the apparent source of the attack). Both addresses can take two forms: unicast and broadcast.
The dual unicast form is comparable with a regular ping: an ICMP echo request is sent to the patsy (a single host), which sends a single ICMP echo reply (a Smurf) back to the target (the single host in the source address). This type of attack has an amplification factor of 1, which means: just a single Smurf per ping.
When the target is a unicast address and the destination is the broadcast address of the target's network, then all hosts in the network will receive an echo request. In return they will each reply to the target, so the target is swamped with n Smurfs. Amplification factor = n. If n is small, a host may be hindered but not crippled. If n is large, a host may come to a halt.
If the target is the broadcast address and the patsy a unicast address, each host in the network will receive a single Smurf per ping, so an amplification factor of 1 per host, but a factor of n for the network. Generally, a network would be able to cope with this form of the attack, if n is not too great.
When both the source and destination address in the original packet are set to the broadcast address of the target network, things start to get out of hand quickly. All hosts receive an echo request, but all replies to that are broadcast again to all hosts. Each host will receive an initial ping, broadcast the reply and get a reply from all n−1 hosts. An amplification factor of n for a single host, but an amplification factor of n² for the network.
ICMP echo requests are typically sent once a second. The reply should contain the contents of the request; a few bytes, normally. A single (double broadcast) ping to a network with 100 hosts causes the network to process 10,000 packets. If the payload of the ping is increased to 15,000 bytes (or 10 full packets in Ethernet), then that ping will cause the network to have to process 100,000 large packets per second. Send more packets per second, and any network would collapse under the load. This will render any host in the network unreachable for as long as the attack lasts.
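A back-of-the-envelope sketch of the packet counts described above (Python; the function name and the assumption that the frame count simply scales with payload size are illustrative):

```python
def packets_per_double_broadcast_ping(hosts, frames_per_reply=1):
    # Every host receives the broadcast request and broadcasts its reply,
    # so on the order of hosts * hosts reply frames traverse the network
    # for a single ping.
    return hosts * hosts * frames_per_reply

print(packets_per_double_broadcast_ping(100))      # 10000 packets for 100 hosts
print(packets_per_double_broadcast_ping(100, 10))  # 100000 frames if each reply spans ~10 Ethernet frames
```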
A Smurf attack can overwhelm servers and networks. The bandwidth of the communication network can be exhausted resulting in the communication network becoming paralyzed.[6]
The fix is two-fold: configure individual hosts and routers not to respond to ICMP requests sent to broadcast addresses, and configure routers not to forward packets directed to broadcast addresses.
It is also important for ISPs to implement ingress filtering, which rejects the attacking packets on the basis of the forged source address.[8]
An example of configuring a router so it will not forward packets to broadcast addresses, for a Cisco router, is:
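A minimal sketch (the exact prompt and syntax vary by IOS version) is the interface-level command:

```
Router(config-if)# no ip directed-broadcast
```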
(This example does not protect a network from becoming the target of a Smurf attack; it merely prevents the network from participating in a Smurf attack.)
A Fraggle attack (named for the creatures in the puppet TV series Fraggle Rock) is a variation of a Smurf attack where an attacker sends a large amount of UDP traffic to ports 7 (Echo) and 19 (CHARGEN). It works similarly to the Smurf attack in that many computers on the network will respond to this traffic by sending traffic back to the spoofed source IP of the victim, flooding it with traffic.[10]
Fraggle.c, the source code of the attack, was also released by TFreak.[11]
|
https://en.wikipedia.org/wiki/Smurf_attack
|
Biomedical engineering (BME) or medical engineering is the application of engineering principles and design concepts to medicine and biology for healthcare applications (e.g., diagnostic or therapeutic purposes). BME seeks to close the gap between engineering and medicine, combining engineering design and problem-solving skills with medical and biological sciences to advance health care treatment, including diagnosis, monitoring, and therapy.[1][2] Also included under the scope of a biomedical engineer is the management of current medical equipment in hospitals while adhering to relevant industry standards. This involves procurement, routine testing, preventive maintenance, and making equipment recommendations, a role also known as a biomedical equipment technician (BMET) or as a clinical engineer.
Biomedical engineering has recently emerged as its own field of study, as compared to many other engineering fields.[3] Such an evolution is common as a new field transitions from being an interdisciplinary specialization among already-established fields to being considered a field in itself. Much of the work in biomedical engineering consists of research and development, spanning a broad array of subfields (see below). Prominent biomedical engineering applications include the development of biocompatible prostheses, various diagnostic and therapeutic medical devices ranging from clinical equipment to micro-implants, imaging technologies such as MRI and EKG/ECG, regenerative tissue growth, and the development of pharmaceutical drugs including biopharmaceuticals.
Bioinformaticsis an interdisciplinary field that develops methods and software tools for understanding biological data. As an interdisciplinary field of science, bioinformatics combines computer science, statistics, mathematics, and engineering to analyze and interpret biological data.
Bioinformatics is considered both an umbrella term for the body of biological studies that use computer programming as part of their methodology, as well as a reference to specific analysis "pipelines" that are repeatedly used, particularly in the field of genomics. Common uses of bioinformatics include the identification of candidate genes and single nucleotide polymorphisms (SNPs). Often, such identification is made with the aim of better understanding the genetic basis of disease, unique adaptations, desirable properties (esp. in agricultural species), or differences between populations. In a less formal way, bioinformatics also tries to understand the organizational principles within nucleic acid and protein sequences.
Biomechanics is the study of the structure and function of the mechanical aspects of biological systems, at any level from whole organisms to organs, cells and cell organelles,[4] using the methods of mechanics.[5]
A biomaterial is any matter, surface, or construct that interacts with living systems. As a science, biomaterials is about fifty years old. The study of biomaterials is called biomaterials science or biomaterials engineering. It has experienced steady and strong growth over its history, with many companies investing large amounts of money into the development of new products. Biomaterials science encompasses elements of medicine, biology, chemistry, tissue engineering and materials science.
Biomedical optics combines the principles of physics, engineering, and biology to study the interaction of biological tissue and light, and how this can be exploited for sensing, imaging, and treatment.[6] It has a wide range of applications, including optical imaging, microscopy, ophthalmoscopy, spectroscopy, and therapy. Examples of biomedical optics techniques and technologies include optical coherence tomography (OCT), fluorescence microscopy, confocal microscopy, and photodynamic therapy (PDT). OCT, for example, uses light to create high-resolution, three-dimensional images of internal structures, such as the retina in the eye or the coronary arteries in the heart. Fluorescence microscopy involves labeling specific molecules with fluorescent dyes and visualizing them using light, providing insights into biological processes and disease mechanisms. More recently, adaptive optics is helping imaging by correcting aberrations in biological tissue, enabling higher resolution imaging and improved accuracy in procedures such as laser surgery and retinal imaging.
Tissue engineering, like genetic engineering (see below), is a major segment of biotechnology – which overlaps significantly with BME.
One of the goals of tissue engineering is to create artificial organs (via biological material), such as kidneys and livers, for patients who need organ transplants. Biomedical engineers are currently researching methods of creating such organs. Researchers have grown solid jawbones[7] and tracheas[8] from human stem cells towards this end. Several artificial urinary bladders have been grown in laboratories and transplanted successfully into human patients.[9] Bioartificial organs, which use both synthetic and biological components, are also a focus area in research, such as with hepatic assist devices that use liver cells within an artificial bioreactor construct.[10]
Genetic engineering, recombinant DNA technology, genetic modification/manipulation (GM) and gene splicing are terms that apply to the direct manipulation of an organism's genes. Unlike traditional breeding, an indirect method of genetic manipulation, genetic engineering utilizes modern tools such as molecular cloning and transformation to directly alter the structure and characteristics of target genes. Genetic engineering techniques have found success in numerous applications. Some examples include the improvement of crop technology (not a medical application, but seebiological systems engineering), the manufacture of synthetic human insulin through the use of modified bacteria, the manufacture of erythropoietin in hamster ovary cells, and the production of new types of experimental mice such as the oncomouse (cancer mouse) for research.[citation needed]
Neural engineering (also known as neuroengineering) is a discipline that uses engineering techniques to understand, repair, replace, or enhance neural systems. Neural engineers are uniquely qualified to solve design problems at the interface of living neural tissue and non-living constructs. Neural engineering can assist with numerous things, including the future development of prosthetics. For example, cognitive neural prosthetics (CNP) are being heavily researched and would allow for a chip implant to assist people who have prosthetics by providing signals to operate assistive devices.[11]
Pharmaceutical engineering is an interdisciplinary science that includes drug engineering, novel drug delivery and targeting, pharmaceutical technology, unit operations of chemical engineering, and pharmaceutical analysis. It may be deemed as a part of pharmacy due to its focus on the use of technology on chemical agents in providing better medicinal treatment.
This is an extremely broad category—essentially covering all health care products that do not achieve their intended results through predominantly chemical (e.g., pharmaceuticals) or biological (e.g., vaccines) means, and do not involve metabolism.
A medical device is intended for use in the diagnosis of disease or other conditions, or in the cure, mitigation, treatment, or prevention of disease.
Some examples include pacemakers, infusion pumps, the heart-lung machine, dialysis machines, artificial organs, implants, artificial limbs, corrective lenses, cochlear implants, ocular prosthetics, facial prosthetics, somato prosthetics, and dental implants.
Stereolithography is a practical example of medical modeling being used to create physical objects. Beyond modeling organs and the human body, emerging engineering techniques are also currently used in the research and development of new devices for innovative therapies,[12] treatments,[13] and patient monitoring[14] of complex diseases.
Medical devices are regulated and classified (in the US) as follows (see also Regulation): Class I devices, subject to general controls; Class II devices, subject to special controls in addition to general controls; and Class III devices, which require premarket approval.
Medical/biomedical imaging is a major segment ofmedical devices. This area deals with enabling clinicians to directly or indirectly "view" things not visible in plain sight (such as due to their size, and/or location). This can involve utilizing ultrasound, magnetism, UV, radiology, and other means.
Alternatively, navigation-guided equipment utilizes electromagnetic tracking technology, such as catheter placement into the brain or feeding tube placement systems. For example, ENvizion Medical's ENvue, an electromagnetic navigation system for enteral feeding tube placement. The system uses an external field generator and several EM passive sensors enabling scaling of the display to the patient's body contour, and a real-time view of the feeding tube tip location and direction, which helps the medical staff ensure the correct placement in the GI tract.[15]
Imaging technologies are often essential to medical diagnosis, and are typically the most complex equipment found in a hospital, including: fluoroscopy, magnetic resonance imaging (MRI), nuclear medicine, positron emission tomography (PET), PET-CT scans, projection radiography such as X-rays and CT scans, tomography, ultrasound, optical microscopy, and electron microscopy.
An implant is a kind of medical device made to replace and act as a missing biological structure (as compared with a transplant, which indicates transplanted biomedical tissue). The surface of implants that contact the body might be made of a biomedical material such as titanium, silicone or apatite depending on what is the most functional. In some cases, implants contain electronics, e.g. artificial pacemakers and cochlear implants. Some implants are bioactive, such as subcutaneous drug delivery devices in the form of implantable pills ordrug-eluting stents.
Artificial body part replacements are one of the many applications of bionics. Concerned with the intricate and thorough study of the properties and function of human body systems, bionics may be applied to solve some engineering problems. Careful study of the different functions and processes of the eyes, ears, and other organs paved the way for improved cameras, television, radio transmitters and receivers, and many other tools.
In recent years, biomedical sensors based on microwave technology have gained more attention. Different sensors can be manufactured for specific uses in both diagnosing and monitoring disease conditions; for example, microwave sensors can be used as a complementary technique to X-ray to monitor lower extremity trauma.[16] The sensors monitor the dielectric properties and can thus notice changes in tissue (bone, muscle, fat, etc.) under the skin, so when measuring at different times during the healing process, the response from the sensor will change as the trauma heals.
Clinical engineering is the branch of biomedical engineering dealing with the actual implementation of medical equipment and technologies in hospitals or other clinical settings. Major roles of clinical engineers include training and supervising biomedical equipment technicians (BMETs), selecting technological products/services and logistically managing their implementation, working with governmental regulators on inspections/audits, and serving as technological consultants for other hospital staff (e.g. physicians, administrators, I.T., etc.). Clinical engineers also advise and collaborate with medical device producers regarding prospective design improvements based on clinical experiences, as well as monitor the progression of the state of the art so as to redirect procurement patterns accordingly.
Their inherent focus on practical implementation of technology has tended to keep them oriented more towards incremental-level redesigns and reconfigurations, as opposed to revolutionary research & development or ideas that would be many years from clinical adoption; however, there is a growing effort to expand this time-horizon over which clinical engineers can influence the trajectory of biomedical innovation. In their various roles, they form a "bridge" between the primary designers and the end-users, by combining the perspectives of being both close to the point-of-use, while also trained in product and process engineering. Clinical engineering departments will sometimes hire not just biomedical engineers, but also industrial/systems engineers to help address operations research/optimization, human factors, cost analysis, etc. Also, see safety engineering for a discussion of the procedures used to design safe systems. A clinical engineering department is typically structured with a manager, supervisor, engineers, and technicians, with a common ratio of one engineer per eighty hospital beds. Clinical engineers are also authorized to audit pharmaceutical and associated stores to monitor FDA recalls of invasive items.
Rehabilitation engineering is the systematic application of engineering sciences to design, develop, adapt, test, evaluate, apply, and distribute technological solutions to problems confronted by individuals with disabilities. Functional areas addressed through rehabilitation engineering may include mobility, communications, hearing, vision, and cognition, and activities associated with employment, independent living, education, and integration into the community.[1]
While some rehabilitation engineers have master's degrees in rehabilitation engineering, usually a subspecialty of biomedical engineering, most rehabilitation engineers have undergraduate or graduate degrees in biomedical engineering, mechanical engineering, or electrical engineering. A Portuguese university provides an undergraduate degree and a master's degree in Rehabilitation Engineering and Accessibility.[7][9] Qualification to become a rehabilitation engineer in the UK is possible via a university BSc Honours degree course such as Health Design & Technology Institute, Coventry University.[10]
The rehabilitation process for people with disabilities often entails the design of assistive devices, such as walking aids, intended to promote the inclusion of their users into the mainstream of society, commerce, and recreation.
Regulatory requirements have steadily increased in recent decades in response to the many incidents caused by devices to patients. For example, from 2008 to 2011, in the US, there were 119 FDA recalls of medical devices classified as Class I. According to the U.S. Food and Drug Administration (FDA), a Class I recall is associated with "a situation in which there is a reasonable probability that the use of, or exposure to, a product will cause serious adverse health consequences or death".[17]
Regardless of the country-specific legislation, the main regulatory objectives coincide worldwide.[18] For example, in the medical device regulations, a product must be (1) safe and (2) effective, and (3) these requirements must hold for all of the manufactured devices.
A product is safe if patients, users, and third parties do not run unacceptable risks of physical hazards, such as injury or death, in its intended use. Protective measures must be introduced on devices that are hazardous in order to reduce residual risks to a level that is acceptable when compared with the benefit derived from the use of the device.
A product is effective if it performs as specified by the manufacturer in the intended use. Proof of effectiveness is achieved through clinical evaluation, compliance with performance standards, or demonstrations of substantial equivalence with an already marketed device.
The previous features have to be ensured for all the manufactured items of the medical device. This requires that a quality system shall be in place for all the relevant entities and processes that may impact safety and effectiveness over the whole medical device lifecycle.
The medical device engineering area is among the most heavily regulated fields of engineering, and practicing biomedical engineers must routinely consult and cooperate with regulatory law attorneys and other experts. The Food and Drug Administration (FDA) is the principal healthcare regulatory authority in the United States, having jurisdiction over medical devices, drugs, biologics, and combination products. The paramount objectives driving policy decisions by the FDA are safety and effectiveness of healthcare products, which have to be assured through a quality system in place as specified under the 21 CFR 820 regulation. In addition, because biomedical engineers often develop devices and technologies for "consumer" use, such as physical therapy devices (which are also "medical" devices), these may also be governed in some respects by the Consumer Product Safety Commission. The greatest hurdles tend to be 510(k) "clearance" (typically for Class II devices) or pre-market "approval" (typically for drugs and Class III devices).
In the European context, safety, effectiveness, and quality are ensured through the "Conformity Assessment", which is defined as "the method by which a manufacturer demonstrates that its device complies with the requirements of the European Medical Device Directive". The directive specifies different procedures according to the class of the device, ranging from the simple Declaration of Conformity (Annex VII) for Class I devices to EC verification (Annex IV), Production quality assurance (Annex V), Product quality assurance (Annex VI), and Full quality assurance (Annex II). The Medical Device Directive specifies detailed procedures for certification. In general terms, these procedures include tests and verifications that are to be contained in specific deliverables such as the risk management file, the technical file, and the quality system deliverables. The risk management file is the first deliverable that conditions the following design and manufacturing steps. The risk management stage shall drive the product so that product risks are reduced to an acceptable level with respect to the benefits expected for the patients from the use of the device. The technical file contains all the documentation data and records supporting medical device certification. The FDA technical file has similar content, although organized in a different structure. The quality system deliverables usually include procedures that ensure quality throughout all product life cycles. The same standard (ISO EN 13485) is usually applied for quality management systems in the US and worldwide.
In the European Union, there are certifying entities named "Notified Bodies", accredited by the European Member States. The Notified Bodies must ensure the effectiveness of the certification process for all medical devices apart from the class I devices where a declaration of conformity produced by the manufacturer is sufficient for marketing. Once a product has passed all the steps required by the Medical Device Directive, the device is entitled to bear aCE marking, indicating that the device is believed to be safe and effective when used as intended, and, therefore, it can be marketed within the European Union area.
The different regulatory arrangements sometimes result in particular technologies being developed first for either the U.S. or in Europe depending on the more favorable form of regulation. While nations often strive for substantive harmony to facilitate cross-national distribution, philosophical differences about theoptimal extentof regulation can be a hindrance; more restrictive regulations seem appealing on an intuitive level, but critics decry the tradeoff cost in terms of slowing access to life-saving developments.
Directive 2011/65/EU, better known as RoHS 2, is a recast of legislation originally introduced in 2002. The original EU legislation "Restrictions of Certain Hazardous Substances in Electrical and Electronics Devices" (RoHS Directive 2002/95/EC) was replaced and superseded by 2011/65/EU, published in July 2011 and commonly known as RoHS 2. RoHS seeks to limit the dangerous substances in circulation in electronics products, in particular toxins and heavy metals, which are subsequently released into the environment when such devices are recycled.
The scope of RoHS 2 is widened to include products previously excluded, such as medical devices and industrial equipment. In addition, manufacturers are now obliged to provide conformity risk assessments and test reports – or explain why they are lacking. For the first time, not only manufacturers but also importers and distributors share a responsibility to ensure Electrical and Electronic Equipment within the scope of RoHS complies with the hazardous substances limits and have a CE mark on their products.
The international standard IEC 60601 for home healthcare electro-medical devices defines the requirements for devices used in the home healthcare environment. IEC 60601-1-11 (2010) must now be incorporated into the design and verification of a wide range of home use and point-of-care medical devices, along with other applicable standards in the IEC 60601 3rd edition series.
The mandatory date for implementation of the EN European version of the standard is June 1, 2013. The US FDA requires the use of the standard from June 30, 2013, while Health Canada recently extended the required date from June 2012 to April 2013. The North American agencies will only require these standards for new device submissions, while the EU will take the more severe approach of requiring all applicable devices being placed on the market to consider the home healthcare standard.
AS/NZS 3551:2012 is the Australian and New Zealand standard for the management of medical devices. The standard specifies the procedures required to maintain a wide range of medical assets in a clinical setting (e.g. a hospital).[19] The standard is based on the IEC 60601 standards.
The standard covers a wide range of medical equipment management elements, including procurement, acceptance testing, maintenance (electrical safety and preventive maintenance testing), and decommissioning.
Biomedical engineers require considerable knowledge of both engineering and biology, and typically have a Bachelor's (B.Sc., B.S., B.Eng. or B.S.E.) or Master's (M.S., M.Sc., M.S.E., or M.Eng.) or a doctoral (Ph.D., or MD-PhD[20][21][22]) degree in BME (Biomedical Engineering) or another branch of engineering with considerable potential for BME overlap. As interest in BME increases, many engineering colleges now have a Biomedical Engineering Department or Program, with offerings ranging from the undergraduate (B.Sc., B.S., B.Eng. or B.S.E.) to doctoral levels. Biomedical engineering has only recently been emerging as its own discipline rather than a cross-disciplinary hybrid specialization of other disciplines; and BME programs at all levels are becoming more widespread, including the Bachelor of Science in Biomedical Engineering, which includes enough biological science content that many students use it as a "pre-med" major in preparation for medical school. The number of biomedical engineers is expected to rise as both a cause and effect of improvements in medical technology.[23]
In the U.S., an increasing number of undergraduate programs are also becoming recognized by ABET as accredited bioengineering/biomedical engineering programs. As of 2023, 155 programs are currently accredited by ABET.[24]
In Canada and Australia, accredited graduate programs in biomedical engineering are common.[25] For example, McMaster University offers an M.A.Sc, an MD/PhD, and a PhD in Biomedical engineering.[26] The first Canadian undergraduate BME program was offered at the University of Guelph as a four-year B.Eng. program.[27] The Polytechnique in Montreal also offers a bachelor's degree in biomedical engineering,[28] as does Flinders University.[29]
As with many degrees, the reputation and ranking of a program may factor into the desirability of a degree holder for either employment or graduate admission. The reputation of many undergraduate degrees is also linked to the institution's graduate or research programs, which have some tangible factors for rating, such as research funding and volume, publications and citations. With BME specifically, the ranking of a university's hospital and medical school can also be a significant factor in the perceived prestige of its BME department/program.
Graduate education is a particularly important aspect in BME. While many engineering fields (such as mechanical or electrical engineering) do not need graduate-level training to obtain an entry-level job in their field, the majority of BME positions do prefer or even require them.[30] Since most BME-related professions involve scientific research, such as in pharmaceutical and medical device development, graduate education is almost a requirement (as undergraduate degrees typically do not involve sufficient research training and experience). This can be either a Masters or Doctoral level degree; while in certain specialties a Ph.D. is notably more common than in others, it is hardly ever the majority (except in academia). In fact, the perceived need for some kind of graduate credential is so strong that some undergraduate BME programs will actively discourage students from majoring in BME without an expressed intention to also obtain a master's degree or apply to medical school afterwards.
Graduate programs in BME, like in other scientific fields, are highly varied, and particular programs may emphasize certain aspects within the field. They may also feature extensive collaborative efforts with programs in other fields (such as the university's Medical School or other engineering divisions), owing again to the interdisciplinary nature of BME. M.S. and Ph.D. programs will typically require applicants to have an undergraduate degree in BME, oranother engineeringdiscipline (plus certain life science coursework), orlife science(plus certain engineering coursework).
Education in BME also varies greatly around the world. By virtue of its extensive biotechnology sector, its numerous major universities, and relatively few internal barriers, the U.S. has progressed a great deal in its development of BME education and training opportunities. Europe, which also has a large biotechnology sector and an impressive education system, has encountered trouble in creating uniform standards as the European community attempts to supplant some of the national jurisdictional barriers that still exist. Recently, initiatives such as BIOMEDEA have sprung up to develop BME-related education and professional standards.[31]Other countries, such as Australia, are recognizing and moving to correct deficiencies in their BME education.[32]Also, as high technology endeavors are usually marks of developed nations, some areas of the world are prone to slower development in education, including in BME.
As with other learned professions, each state has certain (fairly similar) requirements for becoming licensed as a registered Professional Engineer (PE), but, in the US, such a license is not required in industry to be employed as an engineer in the majority of situations (due to an exception known as the industrial exemption, which effectively applies to the vast majority of American engineers). The US model has generally been only to require the practicing engineers offering engineering services that impact the public welfare, safety, safeguarding of life, health, or property to be licensed, while engineers working in private industry without a direct offering of engineering services to the public or other businesses, education, and government need not be licensed. This is notably not the case in many other countries, where a license is as legally necessary to practice engineering as it is for law or medicine.
Biomedical engineering is regulated in some countries, such as Australia, but registration is typically only recommended and not required.[33]
In the UK, mechanical engineers working in the areas of medical engineering, bioengineering, or biomedical engineering can gain Chartered Engineer status through the Institution of Mechanical Engineers. The Institution also runs the Engineering in Medicine and Health Division.[34] The Institute of Physics and Engineering in Medicine (IPEM) has a panel for the accreditation of MSc courses in Biomedical Engineering, and Chartered Engineering status can also be sought through IPEM.
The Fundamentals of Engineering exam – the first (and more general) of two licensure examinations for most U.S. jurisdictions – does now cover biology (although technically not BME). For the second exam, called the Principles and Practices, Part 2, or the Professional Engineering exam, candidates may select a particular engineering discipline's content to be tested on; there is currently no option for BME with this, meaning that any biomedical engineers seeking a license must prepare to take this examination in another category (which does not affect the actual license, since most jurisdictions do not recognize discipline specialties anyway). However, the Biomedical Engineering Society (BMES) is, as of 2009, exploring the possibility of seeking to implement a BME-specific version of this exam to facilitate biomedical engineers pursuing licensure.
Beyond governmental registration, certain private-sector professional/industrial organizations also offer certifications with varying degrees of prominence. One such example is the Certified Clinical Engineer (CCE) certification for Clinical engineers.
In 2012 there were about 19,400 biomedical engineers employed in the US, and the field was predicted to grow by 5% (faster than average) from 2012 to 2022.[35] Biomedical engineering has the highest percentage of female engineers compared to other common engineering professions. As of 2023, there are about 19,700 jobs in the field, with average pay of around $100,730 per year, or about $48.43 an hour, and employment is expected to grow by 7% from 2023 to 2033 (even faster than the earlier projection).
|
https://en.wikipedia.org/wiki/Biomedical_engineering
|
In engineering, afactor of safety(FoS) orsafety factor(SF) expresses how much stronger a system is than it needs to be for its specified maximum load. Safety factors are often calculated using detailed analysis because comprehensive testing is impractical on many projects, such as bridges and buildings, but the structure's ability to carry a load must be determined to a reasonable accuracy.
Many systems are intentionally built much stronger than needed for normal usage to allow for emergency situations, unexpected loads, misuse, or degradation (reliability).
Margin of safety (MoS or MS) is a related measure, expressed as a relative change.
There are two definitions for the factor of safety (FoS): the ratio of a structure's absolute strength (structural capability) to the actual applied load, which is a measure of the reliability of a particular design and is sometimes called the realized factor of safety; and a constant required value, imposed by law, standard, specification, contract, or custom, to which a structure must conform or exceed, which can be referred to as a design factor, design factor of safety, or required factor of safety.
The realized factor of safety must be greater than the required design factor of safety. However, between various industries and engineering groups usage is inconsistent and confusing; there are several definitions used. The cause of much confusion is that various reference books and standards agencies use the factor of safety definitions and terms differently. Building codes, structural and mechanical engineering textbooks often refer to the "factor of safety" as the fraction of total structural capability over what is needed. Those are realized factors of safety[1][2][3] (first use). Many undergraduate strength of materials books use "Factor of Safety" as a constant value intended as a minimum target for design[4][5][6] (second use).
There are several ways to compare the factor of safety for structures. All the different calculations fundamentally measure the same thing: how much extra load beyond what is intended a structure will actually take (or be required to withstand). The difference between the methods is the way in which the values are calculated and compared. Safety factor values can be thought of as a standardized way for comparing strength and reliability between systems.
The use of a factor of safety does not imply that an item, structure, or design is "safe". Many quality assurance, engineering design, manufacturing, installation, and end-use factors may influence whether or not something is safe in any particular situation.
The difference between the safety factor and the design factor (design safety factor) is as follows: the safety factor is how much the designed part actually will be able to withstand (first usage from above); the design factor is what the item is required to be able to withstand (second usage). The design factor is defined for an application (generally provided in advance and often set by regulatory building codes or policy) and is not an actual calculation; the safety factor is a ratio of maximum strength to intended load for the actual item that was designed.
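Expressed as a formula, in the first ("realized") sense above, the factor of safety is commonly written as the ratio of what the part can withstand to what it must carry (a sketch in generic load terms):

```latex
\mathrm{FoS} = \frac{\text{failure (ultimate) load}}{\text{design (working) load}}
```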
By this definition, a structure with an FoS of exactly 1 will support only the design load and no more. Any additional load will cause the structure to fail. A structure with an FoS of 2 will fail at twice the design load.
Many government agencies and industries (such as aerospace) require the use of a margin of safety (MoS or MS) to describe the ratio of the strength of the structure to the requirements. There are two separate definitions for the margin of safety, so care is needed to determine which is being used for a given application. One usage of MS is as a measure of capability like FoS. The other usage of MS is as a measure of satisfying design requirements (requirement verification). Margin of safety can be conceptualized (along with the reserve factor explained below) to represent how much of the structure's total capability is held "in reserve" during loading.
MS as a measure of structural capability: This definition of margin of safety commonly seen in textbooks[7][8]describes what additional load beyond the design load a part can withstand before failing. In effect, this is a measure of excess capability. If the margin is 0, the part will not take any additional load before it fails, if it is negative the part will fail before reaching its design load in service. If the margin is 1, it can withstand one additional load of equal force to the maximum load it was designed to support (i.e. twice the design load).
MS as a measure of requirement verification: Many agencies and organizations such as NASA[9] and AIAA[10] define the margin of safety including the design factor; in other words, the margin of safety is calculated after applying the design factor. In the case of a margin of 0, the part is at exactly the required strength (the safety factor would equal the design factor). If there is a part with a required design factor of 3 and a margin of 1, the part would have a safety factor of 6 (capable of supporting two loads equal to its design factor of 3, supporting six times the design load before failure). A margin of 0 would mean the part would pass with a safety factor of 3. If the margin is less than 0 in this definition, although the part will not necessarily fail, the design requirement has not been met. A convenience of this usage is that for all applications, a margin of 0 or higher is passing; one does not need to know application details or compare against requirements, just glancing at the margin calculation tells whether the design passes or not. This is helpful for oversight and reviewing on projects with various integrated components, as different components may have various design factors involved and the margin calculation helps prevent confusion.
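In symbols, the two usages are commonly written as follows (a sketch consistent with the numerical examples above):

```latex
\text{MS}_{\text{capability}} = \mathrm{FoS} - 1
\qquad
\text{MS}_{\text{requirement}} = \frac{\mathrm{FoS}}{\text{design factor}} - 1
```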
For a successful design, the realized safety factor must always equal or exceed the design safety factor so that the margin of safety is greater than or equal to zero. The margin of safety is sometimes, but infrequently, used as a percentage, i.e., a 0.50 MS is equivalent to a 50% MS. When a design satisfies this test it is said to have a "positive margin", and, conversely, a "negative margin" when it does not.
In the field of nuclear safety (as implemented at US government-owned facilities) the margin of safety has been defined as a quantity that may not be reduced without review by the controlling government office. The US Department of Energy publishes DOE G 424.1-1, "Implementation Guide for Use in Addressing Unreviewed Safety Question Requirements" as a guide for determining how to identify and determine whether a margin of safety will be reduced by a proposed change. The guide develops and applies the concept of a qualitative margin of safety that may not be explicit or quantifiable, yet can be evaluated conceptually to determine whether an increase or decrease will occur with a proposed change. This approach becomes important when examining designs with large or undefined (historical) margins and those that depend on "soft" controls such as programmatic limits or requirements. The commercial US nuclear industry utilized a similar concept in evaluating planned changes until 2001, when 10 CFR 50.59 was revised to capture and apply the information available in facility-specific risk analyses and other quantitative risk management tools.
A measure of strength frequently used in Europe is the reserve factor (RF). With the strength and applied loads expressed in the same units, the reserve factor is defined in one of two ways, depending on the industry:
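The two common forms are, roughly (terminology varies between industries):

```latex
\mathrm{RF} = \frac{\text{proof strength}}{\text{proof load}}
\qquad\text{or}\qquad
\mathrm{RF} = \frac{\text{ultimate strength}}{\text{ultimate load}}
```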
The applied loads have many factors, including factors of safety applied.
For ductile materials (e.g. most metals), it is often required that the factor of safety be checked against both yield and ultimate strengths. The yield calculation will determine the safety factor until the part starts to deform plastically. The ultimate calculation will determine the safety factor until failure. In brittle materials the yield and ultimate strengths are often so close as to be indistinguishable, so it is usually acceptable to only calculate the ultimate safety factor.
Appropriate design factors are based on several considerations, such as the accuracy of predictions on the imposed loads, strength, wear estimates, and the environmental effects to which the product will be exposed in service; the consequences of engineering failure; and the cost of over-engineering the component to achieve that factor of safety[citation needed]. For example, components whose failure could result in substantial financial loss, serious injury, or death may use a safety factor of four or higher (often ten). Non-critical components generally might have a design factor of two. Risk analysis, failure mode and effects analysis, and other tools are commonly used. Design factors for specific applications are often mandated by law, policy, or industry standards.
Buildings commonly use a factor of safety of 2.0 for each structural member. The value for buildings is relatively low because the loads are well understood and most structures are redundant. Pressure vessels use 3.5 to 4.0, automobiles use 3.0, and aircraft and spacecraft use 1.2 to 4.0 depending on the application and materials. Ductile, metallic materials tend to use the lower value while brittle materials use the higher values. The field of aerospace engineering uses generally lower design factors because the costs associated with structural weight are high (i.e. an aircraft with an overall safety factor of 5 would probably be too heavy to get off the ground). This low design factor is why aerospace parts and materials are subject to very stringent quality control and strict preventative maintenance schedules to help ensure reliability. A usually applied safety factor is 1.5, but for pressurized fuselages it is 2.0, and for main landing gear structures it is often 1.25.[11]
In some cases it is impractical or impossible for a part to meet the "standard" design factor. The penalties (mass or otherwise) for meeting the requirement would prevent the system from being viable (such as in the case of aircraft or spacecraft). In these cases, it is sometimes determined to allow a component to meet a lower than normal safety factor, often referred to as "waiving" the requirement. Doing this often brings with it extra detailed analysis or quality control verifications to assure the part will perform as desired, as it will be loaded closer to its limits.
For loading that is cyclical, repetitive, or fluctuating, it is important to consider the possibility of metal fatigue when choosing the factor of safety. A cyclic load well below a material's yield strength can cause failure if it is repeated through enough cycles.
According to Elishakoff,[12][13] the notion of factor of safety in an engineering context was apparently first introduced in 1729 by Bernard Forest de Bélidor (1698–1761),[14] a French engineer working in hydraulics, mathematics, civil, and military engineering. The philosophical aspects of factors of safety were pursued by Doorn and Hansson.[15]
|
https://en.wikipedia.org/wiki/Factor_of_safety
|
In computer science, formal methods are mathematically rigorous techniques for the specification, development, analysis, and verification of software and hardware systems.[1] The use of formal methods for software and hardware design is motivated by the expectation that, as in other engineering disciplines, performing appropriate mathematical analysis can contribute to the reliability and robustness of a design.[2]
Formal methods employ a variety of theoretical computer science fundamentals, including logic calculi, formal languages, automata theory, control theory, program semantics, type systems, and type theory.[3]
Formal methods can be applied at various points through the development process.
Formal methods may be used to give a formal description of the system to be developed, at whatever level of detail desired. Further formal methods may depend on this specification to synthesize a program or to verify the correctness of a system.
Alternatively, specification may be the only stage in which formal methods are used. By writing a specification, ambiguities in the informal requirements can be discovered and resolved. Additionally, engineers can use a formal specification as a reference to guide their development processes.[4]
The need for formal specification systems has been noted for years. In the ALGOL 58 report,[5] John Backus presented a formal notation for describing programming language syntax, later named Backus normal form, then renamed Backus–Naur form (BNF).[6] Backus also wrote that a formal description of the meaning of syntactically valid ALGOL programs was not completed in time for inclusion in the report, stating that it "will be included in a subsequent paper." However, no paper describing the formal semantics was ever released.[7]
Program synthesis is the process of automatically creating a program that conforms to a specification. Deductive synthesis approaches rely on a complete formal specification of the program, whereas inductive approaches infer the specification from examples. Synthesizers perform a search over the space of possible programs to find a program consistent with the specification. Because of the size of this search space, developing efficient search algorithms is one of the major challenges in program synthesis.[8]
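A minimal sketch of the inductive, enumerative flavour of this search in Python (the grammar, names, and examples are illustrative and not drawn from any particular synthesis tool):

```python
def synthesize(examples, max_depth=3):
    """Return the text of the first enumerated expression consistent with
    all (input, output) examples for a unary integer function, or None."""
    def expressions(depth):
        # Leaves: the input variable and a few small constants.
        yield ("x", lambda x: x)
        for c in range(4):
            yield (str(c), lambda x, c=c: c)
        # Compound expressions built from smaller ones.
        if depth > 1:
            for lt, lf in expressions(depth - 1):
                for rt, rf in expressions(depth - 1):
                    yield (f"({lt} + {rt})", lambda x, lf=lf, rf=rf: lf(x) + rf(x))
                    yield (f"({lt} * {rt})", lambda x, lf=lf, rf=rf: lf(x) * rf(x))

    for text, fn in expressions(max_depth):
        if all(fn(i) == o for i, o in examples):
            return text
    return None

# "Specification" given purely by examples: f(1)=3, f(2)=5, f(3)=7.
print(synthesize([(1, 3), (2, 5), (3, 7)]))  # an expression equal to 2*x + 1, e.g. (x + (x + 1))
```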
Formal verification is the use of software tools to prove properties of a formal specification, or to prove that a formal model of a systemimplementationsatisfies its specification.
Once a formal specification has been developed, the specification may be used as the basis for proving properties of the specification, and by inference, properties of the system implementation.
Sign-off verification is the use of a formal verification tool that is highly trusted. Such a tool can replace traditional verification methods (the tool may even be certified).[citation needed]
Sometimes, the motivation for proving the correctness of a system is not the obvious need for reassurance of the correctness of the system, but a desire to understand the system better. Consequently, some proofs of correctness are produced in the style of mathematical proof: handwritten (or typeset) using natural language, using a level of informality common to such proofs. A "good" proof is one that is readable and understandable by other human readers.
Critics of such approaches point out that the ambiguity inherent in natural language allows errors to be undetected in such proofs; often, subtle errors can be present in the low-level details typically overlooked by such proofs. Additionally, the work involved in producing such a good proof requires a high level of mathematical sophistication and expertise.
In contrast, there is increasing interest in producing proofs of correctness of such systems by automated means. Automated techniques fall into three general categories:
Some automated theorem provers require guidance as to which properties are "interesting" enough to pursue, while others work without human intervention. Model checkers can quickly get bogged down in checking millions of uninteresting states if not given a sufficiently abstract model.
Proponents of such systems argue that the results have greater mathematical certainty than human-produced proofs, since all the tedious details have been algorithmically verified. The training required to use such systems is also less than that required to produce good mathematical proofs by hand, making the techniques accessible to a wider variety of practitioners.
Critics note that some of those systems are like oracles: they make a pronouncement of truth, yet give no explanation of that truth. There is also the problem of "verifying the verifier"; if the program that aids in the verification is itself unproven, there may be reason to doubt the soundness of the produced results. Some modern model checking tools produce a "proof log" detailing each step in their proof, making it possible to perform, given suitable tools, independent verification.
The main feature of the abstract interpretation approach is that it provides a sound analysis, i.e. no false negatives are returned. Moreover, it is efficiently scalable, by tuning the abstract domain representing the property to be analyzed, and by applying widening operators[9]to get fast convergence.
Formal methods include a number of different techniques.
The design of a computing system can be expressed using a specification language, which is a formal language that includes a proof system. Using this proof system, formal verification tools can reason about the specification and establish that a system adheres to the specification.[10]
A binary decision diagram is a data structure that represents a Boolean function.[11] If a Boolean formula P expresses that an execution of a program conforms to the specification, a binary decision diagram can be used to determine if P is a tautology; that is, it always evaluates to TRUE. If this is the case, then the program always conforms to the specification.[12]
A SAT solver is a program that can solve the Boolean satisfiability problem, the problem of finding an assignment of variables that makes a given propositional formula evaluate to true. If a Boolean formula P expresses that a specific execution of a program conforms to the specification, then determining that ¬P is unsatisfiable is equivalent to determining that all executions conform to the specification. SAT solvers are often used in bounded model checking, but can also be used in unbounded model checking.[13]
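A toy illustration of this connection in Python: showing that a propositional formula P holds for every assignment is the same as showing that ¬P is unsatisfiable. Real SAT solvers and BDD packages do this far more cleverly than the brute-force enumeration sketched here:

```python
from itertools import product

def satisfiable(formula, variables):
    """Return a satisfying assignment of the Boolean formula, or None."""
    for values in product([False, True], repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if formula(assignment):
            return assignment
    return None

def tautology(formula, variables):
    """P is a tautology exactly when not-P is unsatisfiable."""
    return satisfiable(lambda a: not formula(a), variables) is None

# P: "x or not x" is always true; Q: "x and y" is not.
print(tautology(lambda a: a["x"] or not a["x"], ["x"]))          # True
print(tautology(lambda a: a["x"] and a["y"], ["x", "y"]))        # False
print(satisfiable(lambda a: a["x"] and not a["y"], ["x", "y"]))  # {'x': True, 'y': False}
```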
Formal methods are applied in different areas of hardware and software, including routers, Ethernet switches, routing protocols, security applications, and operating system microkernels such as seL4. There are several examples in which they have been used to verify the functionality of the hardware and software used in data centres. IBM used ACL2, a theorem prover, in the AMD x86 processor development process.[citation needed] Intel uses such methods to verify its hardware and firmware (permanent software programmed into a read-only memory)[citation needed]. Dansk Datamatik Center used formal methods in the 1980s to develop a compiler system for the Ada programming language that went on to become a long-lived commercial product.[14][15]
There are several other projects of NASA in which formal methods are applied, such as the Next Generation Air Transportation System[citation needed], Unmanned Aircraft System integration in the National Airspace System,[16] and Airborne Coordinated Conflict Resolution and Detection (ACCoRD).[17] The B-Method with Atelier B[18] is used to develop safety automatisms for the various subways installed throughout the world by Alstom and Siemens, and also for Common Criteria certification and the development of system models by ATMEL and STMicroelectronics.
Formal verification has been frequently used in hardware by most of the well-known hardware vendors, such as IBM, Intel, and AMD. There are many areas of hardware where Intel has used formal methods to verify the working of its products, such as parameterized verification of a cache-coherent protocol,[19] Intel Core i7 processor execution engine validation[20] (using theorem proving, BDDs, and symbolic evaluation), optimization for the Intel IA-64 architecture using the HOL Light theorem prover,[21] and verification of a high-performance dual-port gigabit Ethernet controller with support for the PCI Express protocol and Intel advanced management technology using Cadence.[22] Similarly, IBM has used formal methods in the verification of power gates,[23] registers,[24] and functional verification of the IBM Power7 microprocessor.[25]
In software development, formal methods are mathematical approaches to solving software (and hardware) problems at the requirements, specification, and design levels. Formal methods are most likely to be applied to safety-critical or security-critical software and systems, such as avionics software. Software safety assurance standards such as DO-178C allow the use of formal methods through supplementation, and Common Criteria mandates formal methods at the highest levels of categorization.
For sequential software, examples of formal methods include theB-Method, the specification languages used inautomated theorem proving,RAISE, and theZ notation.
Infunctional programming,property-based testinghas allowed the mathematical specification and testing (if not exhaustive testing) of the expected behaviour of individual functions.
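For illustration, a property-based test written with the Hypothesis library for Python might look as follows (the sorting property is an illustrative example, and the `hypothesis` package must be installed):

```python
# Illustrative property-based test using the Hypothesis library.
from collections import Counter
from hypothesis import given, strategies as st

@given(st.lists(st.integers()))
def test_sort_properties(xs):
    ys = sorted(xs)
    # specification of sorting, stated as properties over all generated inputs:
    assert all(a <= b for a, b in zip(ys, ys[1:]))   # output is ordered
    assert Counter(ys) == Counter(xs)                # output is a permutation of the input

test_sort_properties()   # Hypothesis generates, runs, and shrinks many random inputs
```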
TheObject Constraint Language(and specializations such asJava Modeling Language) has allowed object-oriented systems to be formally specified, if not necessarily formally verified.
For concurrent software and systems,Petri nets,process algebra, andfinite-state machines(which are based onautomata theory; see alsovirtual finite state machineorevent driven finite state machine) allow executable software specification and can be used to build up and validate application behaviour.
Another approach to formal methods in software development is to write a specification in some form of logic—usually a variation offirst-order logic—and then to directly execute the logic as though it were a program. TheOWLlanguage, based ondescription logic, is an example. There is also work on mapping some version of English (or another natural language) automatically to and from logic, as well as executing the logic directly. Examples areAttempto Controlled English, and Internet Business Logic, which do not seek to control the vocabulary or syntax. A feature of systems that support bidirectional English–logic mapping and direct execution of the logic is that they can be made to explain their results, in English, at the business or scientific level.[citation needed]
Semi-formal methods are formalisms and languages that are not considered fully "formal". They defer the task of completing the semantics to a later stage, which is then done either by human interpretation or by interpretation through software such as code or test case generators.[26]
Some practitioners believe that the formal methods community has overemphasized full formalization of a specification or design.[27][28]They contend that theexpressivenessof the languages involved, as well as the complexity of the systems being modelled, make full formalization a difficult and expensive task. As an alternative, variouslightweightformal methods, which emphasize partial specification and focused application, have been proposed. Examples of this lightweight approach to formal methods include theAlloyobject modelling notation,[29]Denney's synthesis of some aspects of theZ notationwithuse casedriven development,[30]and the CSKVDMTools.[31]
There are a variety of formal methods and notations available.
Many problems in formal methods areNP-hard, but can be solved in cases arising in practice. For example, the Boolean satisfiability problem isNP-completeby theCook–Levin theorem, butSAT solverscan solve a variety of large instances. There are "solvers" for a variety of problems that arise in formal methods, and there are many periodic competitions to evaluate the state-of-the-art in solving such problems.[33]
|
https://en.wikipedia.org/wiki/Formal_methods
|
High-integrity softwareissoftwarewhose failure may cause serious damage with possible "life-threatening consequences".[1]"Integrity is important as it demonstrates the safety, security, and maintainability of ... code."[1]Examples of high-integrity software arenuclear reactorcontrol,avionicssoftware, automotive safety-critical software andprocess controlsoftware.[2][3]
[H]igh integrity means that the code:
A number of standards are applicable to high-integrity software, including:
|
https://en.wikipedia.org/wiki/High_integrity_software
|
Amission critical(alsomission essential) factor of asystemis any factor (component, equipment, personnel,process, procedure, software, etc.) that isessentialto business, organizational, or governmental operations. Failure or disruption of mission critical factors would have a serious impact on business, organization, or government operations, and can even cause social turmoil and catastrophes.[1]
A mission critical system is a system that is essential to the survival of a business or organization. When a mission critical system fails or is interrupted, business operations are significantly impacted. Mission essential equipment and mission critical applications are also known as mission critical systems.[2] Examples of mission critical systems are online banking systems, railway/aircraft operating and control systems, electric power systems, and many other computer systems that adversely affect business and society when they fail.
A good example of a mission critical system is a navigational system for a spacecraft. The difference between mission critical and business critical lies in the major adverse impact and the very real possibilities of loss of life, serious injury and/or financial loss.[3][4]
There are four different types of critical systems: mission critical, business critical, safety critical, and security critical. The key difference between a safety critical system and a mission critical system is that a failure of a safety critical system may result in serious environmental damage, injury, or loss of life, while a failure of a mission critical system results in the failure of goal-directed activity.[5] An example of a safety critical system is a chemical manufacturing plant control system. Mission critical system and business critical system are similar terms, but a fault in a business critical system influences only a single company or organization and can partially stop its activity for hours or days. A failure of a security critical system may lead to the loss of sensitive data through theft or accident. All four are generalized as critical systems.[6][5]
As a rule incrisis management, if atriage-type decision is made in which certain components must be eliminated or delayed, e.g. because of resource or personnel constraints, mission critical ones must not be among them.
Every functioning business or organization has mission critical systems.[7] A downed filtration system will cause a water filtration company to stop operating; in this case, the water filtration system is a mission critical system. If a gas supply system goes down, many restaurants and bakeries must shut down until it is restored; in this case, the gas system is a mission critical system. There are many other mission critical systems whose malfunction would have serious impacts on other industries or organizations.
Aircraft are highly dependent on their navigation systems. Air navigation is accomplished by several methods. Dead reckoning uses visual checkpoints along with distance and time calculations; the flight computer helps the pilots calculate the time and distance to the checkpoints they set. Radio navigation aids (NAVAIDs) let pilots navigate more accurately than dead reckoning alone and are particularly valuable in conditions of low visibility. GPS is also used by pilots; it relies on 24 U.S. Department of Defense satellites to provide precise positional data, including speed, position, and track.[8]
If two-way radio communication malfunctions, the pilots have to follow the procedures in Title 14 of the Code of Federal Regulations (14 CFR) part 91. Pneumatic system failure, the associated loss of altitude, and various unfamiliar situations may cause stress and loss of situational awareness; in such cases the pilot should use instruments such as the navigation equipment to obtain more situational data. A malfunction of the navigation system is therefore mission critical and can have serious consequences.[9][10][11]
A nuclear reactor is a system that controls and contains a sustained nuclear chain reaction. It is usually used to generate electricity, but can also be used for research and for producing medical isotopes.[12] Nuclear reactors are among the systems of greatest public-safety concern worldwide, because a malfunctioning reactor can cause a serious disaster.[1] The reactor is controlled by stopping, decreasing, or increasing the chain reaction inside it. Varying the water level in the vertical cylinder and moving adjuster rods are methods of controlling the chain reaction while the reactor is operating. Temperatures, reactor power levels, and pressure are constantly monitored by sensitive detectors.[13]
A mission critical system is the quintessence of a business; its failure causes serious financial and reputational damage. As companies have developed and the world has become a more web-based community, the range of what is mission critical has extended, but mission critical computing has been evolving since the pre-Web era (before 1995). In the entirely text-based pre-Web internet, gopher was one of the ASCII-based end-user programs. Mission critical systems were at that time used mainly in transactional applications: business process management software, ERP, and airline reservation systems were usually mission critical. These applications ran on dedicated systems in the data center, with a limited number of end users who usually accessed them via terminals and personal computers.[14]
After the pre-Web era came the Web era (1995–2010). The range of mission critical grew to include electronic devices and web applications. More users had access to the internet and to electronic devices, so a larger number of end users could reach a growing set of mission critical applications, and customers came to expect constant availability and stronger security from the devices they used. Businesses also became more web-based, which correspondingly increased crime associated with money and fraud. This expansion of mission critical strengthened security requirements and grew the security industry. Between 1995 and 2010 the number of web users globally increased from 16 million to 1.7 billion, showing the growth in global reliance on web systems.[15]
After the Web era came the consumerization era (2010 and beyond). The range of mission critical has grown further with the increase in social, mobile, and customer-facing applications. As the consumerization of IT advanced, organizations grew and web and IT availability to the public increased. Social business, customer service, and customer support applications expanded greatly, so the scope of mission critical widened again. According to Gartner, mobile development projects will outnumber native PC projects by a ratio of 4:1. Today's mission critical therefore encompasses everything crucial to customer-facing service, business operations, employee productivity, and finance. Customer expectations have risen, and even a small disruption can cause tremendous business losses: it was estimated that Amazon could have lost as much as $1,100 per second in net sales when it suffered an outage, and a five-minute outage cost Google more than $545,000[citation needed]. Failure of a mission critical system, even a short outage, can carry a high cost of downtime due to reputational damage; longer downtime can cause even more serious problems for industries or organizations.[16]
Mission critical systems must be kept highly secure in every industry or organization that uses them, so industries employ various safety and security systems to avoid mission critical failures. Companies built around mainframes or workstations depend on databases and process control, so the database and process control are mission critical for them. Hospital patient records, call centers, stock exchanges, data storage centers, flight control towers, and many other operations that depend on communication systems and computers must be protected from system shutdown and are considered mission critical. No company or industry can avoid the unexpected or extraordinary problems that can shut down a mission critical system, so the use of safety systems is considered a very important part of the business.[1]
Transport Layer Security (TLS; formerly Secure Sockets Layer, SSL) is the standard networking security protocol that manages client and server authentication and encrypts communication. It is typically used by online transaction websites such as PayPal[17] and Bank of America,[18] whose systems, if brought down or hacked, would cause serious problems for society and for the companies themselves. In TLS, public-key and symmetric-key encryption are used together to secure the connection between two machines, typically mail servers or client machines communicating over the internet. To use the technology, the web server requires a digital certificate, obtained by answering several questions about the identity of the website and generating a public/private cryptographic key pair. Companies using this technology may also be required to pay an annual fee.
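For illustration, a minimal TLS client handshake using Python's standard `ssl` module might look as follows (the hostname is an arbitrary example; real deployments also involve the server-side certificate management described above):

```python
# Minimal sketch of a TLS client connection using Python's standard library.
import socket, ssl

context = ssl.create_default_context()        # loads the system's trusted CA certificates
with socket.create_connection(("www.example.com", 443)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname="www.example.com") as tls_sock:
        print(tls_sock.version())             # negotiated protocol, e.g. 'TLSv1.3'
        print(tls_sock.getpeercert()["subject"])   # identity asserted by the server certificate
```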
Nuclear power plants need safety systems to avoid mission critical failures; the worst possible consequence is leakage of radioactive materials (U-235 or Pu-239). One such safety system is the shutdown system, which has two forms: rod control and safety injection. When a problem occurs in the plant, the rod control shutdown system drops the rods automatically and stops the chain reaction; the safety injection system immediately injects liquid into the reactor and stops the chain reaction. Both systems are usually operated automatically but can also be activated manually.[13][19]
Real time and mission critical are often confused, but they are not the same concept.

Real time refers to the responsiveness of a computer that must continually update in response to external processes and must complete its processing within a specified time, or serious consequences may result.[20] Video games are an example of real-time computing: frames are rendered so rapidly that it is hard for the user to notice any delay, and each frame must be rendered within a short time to maintain the experience of interactivity.[citation needed] The speed of rendering graphics may vary according to the computer system.[21]

A real-time system is one that fails if a specified deadline is not met, whereas a mission critical system is one whose failure results in catastrophic consequences. The two often go hand in hand, since a real-time system can be mission critical, but they remain distinct, though associated, concepts.[22][23]
From the perspective of social function (i.e. preserving society's life-support structures and overall structure intact), the mission critical aspects of social function are those that provide the basic needs of society. Such basic needs are often said to include food (including food production and distribution), water, clothing (not an immediate need in an emergency), sanitation (sewage is an immediate need, but physical waste/garbage disposal is not in an emergency), housing/shelter, energy (not immediate), and health needs (not immediate in a healthy population); the list is not exhaustive, and longer-term needs might include communications and transport in a developed population. This list of needs maps clearly onto mission critical personnel: food production requires farmers, food distribution requires transportation personnel, water requires water-infrastructure maintenance personnel (a long-term requirement if the existing infrastructure has been maintained to a high standard), clothing requires people to maintain clothes-production infrastructure, and similarly for sanitation. In emergencies, housing/shelter requires someone to build the shelter and to maintain it over the long term if necessary, and health needs are met by doctors, nurses, and surgeons. Implicit use of infrastructure requires personnel to maintain that infrastructure as well: food transportation requires not only drivers for food trucks but also, over the long term, highways maintenance personnel who can maintain the roads, traffic infrastructure, and signs, which in turn requires power-supply personnel to keep traffic lights working, and so on. Seen in this light, mission critical systems have a complex dependency network, which enables analysis of the interconnected dependencies between different aspects of a mission critical system; this can be useful in planning, or simply for gaining an accurate picture of how mission criticality is organised in complex systems, and it enables the determination of choke points, the points at which a mission critical system (or set of systems) is vulnerable in one sense or another. Ideas relating to human resources and human resources planning (making use of Gantt charts for project management, and so on) are also relevant.
Mission criticality depends upon the timescale associated with basic needs or other factors deemed mission critical. Over the medium term (often taken to be 10 years) and the long term (which can extend to 50–60 year timescales), planning for mission critical systems will clearly differ from short-term mission critical systems planning. Mission critical personnel can be considered part of the mission critical systems planning paradigm, but they require a different approach from that used for the technological or mechanical aspects of mission critical systems (i.e. they require human resources planning).
Psychometrics enables the determination and characterisation of various psychological attributes of mission critical personnel (for example, the IQ required for highly skilled work such as nuclear physics). Some roles require physical standards (for instance, in the army) or physical dexterity (e.g. surgeons). Methods exist for characterising the skills, qualities, and other attributes that particular mission critical job roles require; these can be used as benchmarks for determining whether individuals are well suited to a given mission critical role, or what assistance a less qualified (or less capable) individual would need to perform a mission critical role that might otherwise be beyond their abilities (measures that might have to be taken in emergency situations).
|
https://en.wikipedia.org/wiki/Mission_critical
|
Anuclear reactoris a device used to initiate and control afissionnuclear chain reaction. They are used forcommercial electricity,marine propulsion,weapons productionandresearch.Fissile nuclei(primarilyuranium-235orplutonium-239) absorb singleneutronsand split, releasing energy and multiple neutrons, which can induce further fission. Reactors stabilize this, regulatingneutron absorbersandmoderatorsin the core. Fuel efficiency is exceptionally high;low-enriched uraniumis 120,000 times more energy dense than coal.[1][2]
Heat from nuclear fission is passed to aworking fluidcoolant. In commercial reactors, this drivesturbinesandelectrical generatorshafts. Some reactors are used fordistrict heating, andisotopeproduction formedicalandindustrialuse.
Following the 1938discovery of fission, many countries initiatedmilitary nuclear research programs. Earlysubcriticalexperiments probedneutronics. In 1942, the first artificial[note 1]critical nuclear reactor,Chicago Pile-1, was built by theMetallurgical Laboratory.[4]From 1944, forweapons production, thefirst large-scale reactorswere operated at theHanford Site. Thepressurized water reactordesign, used in ~70% ofcommercial reactors, was developed forUS Navysubmarine propulsion, beginning withS1Win 1953.[5]In 1954, nuclear electricity production began with the SovietObninsk plant.[6]
Spent fuelcan bereprocessed, reducingnuclear wasteand recoveringreactor-usable fuel.[7]This also poses aproliferationrisk via production ofplutoniumandtritiumfornuclear weapons.
Reactor accidents have been caused by combinations of design and operator failure. The 1979Three Mile Island accident, atINES Level 5, and the 1986Chernobyl disasterand 2011Fukushima disaster, both at Level 7, all had major effects on the nuclear industry andanti-nuclear movement.
As of 2025[update], there are 417 commercial reactors, 226research reactors, and over 200marine propulsionreactors in operation globally.[8][9][10][11]Commercial reactors provide 9% of the global electricity supply,[12]compared to 30% fromrenewables,[13]together comprisinglow-carbon electricity. Almost 90% of this comes frompressurizedandboiling water reactors.[5]Other designs includegas-cooled,fast-spectrum,breeder,heavy-water,molten-salt, andsmall modular; each optimizes safety, efficiency, cost,fuel type,enrichment, andburnup.
During early 1940s nuclear research, the phrase "atomic pile" was used for any assembly involving uranium and attempts at neutron multiplication, including the majority which were subcritical. AfterChicago Pile-1demonstrated a self-sustaining chain reaction, the "reactor" terminology became more common. The phrases "nuclear pile" and "atomic reactor" were also common.
Critical massexperiments, while being far simpler, are sometimes referred to as research reactors, such as theGodiva device.
"Nuclear reactor" is predominantly used to refer to the nuclear fission reactor. It can also refer to anuclear fusion reactor, of which only net negative power systems have been constructed.Radioisotope thermoelectric generatorsandradioisotope heater units, while deriving power from nuclear decay reactions, are not referred to as nuclear reactors as they do notinducereactions.
Just as conventionalthermal power stationsgenerate electricity by harnessing thethermal energyreleased from burningfossil fuels, nuclear reactors convert the energy released by controllednuclear fissioninto thermal energy for further conversion to mechanical or electrical forms.
When a largefissileatomic nucleussuch asuranium-235,uranium-233, orplutonium-239absorbs a neutron, it may undergo nuclear fission. The heavy nucleus splits into two or more lighter nuclei, (thefission products), releasingkinetic energy,gamma radiation, andfree neutrons. A portion of these neutrons may be absorbed by other fissile atoms and trigger further fission events, which release more neutrons, and so on. This is known as anuclear chain reaction.
To control such a nuclear chain reaction,control rodscontainingneutron poisonsandneutron moderatorsare able to change the portion of neutrons that will go on to cause more fission.[14]Nuclear reactors generally have automatic and manual systems to shut the fission reaction down if monitoring or instrumentation detects unsafe conditions.[15]
The reactor core generates heat in a number of ways:
A kilogram ofuranium-235(U-235) converted via nuclear processes releases approximately three million times more energy than a kilogram of coal burned conventionally (7.2 × 1013joulesper kilogram of uranium-235 versus 2.4 × 107joules per kilogram of coal).[16][17][original research?]
The fission of one kilogram ofuranium-235releases about 19 billionkilocalories, so the energy released by 1 kg of uranium-235 corresponds to that released by burning 2.7 million kg of coal.
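A back-of-the-envelope check of these figures, using the values quoted above:

```python
# Quick arithmetic check of the energy-density comparison in the text.
energy_u235 = 7.2e13   # joules released per kilogram of uranium-235
energy_coal = 2.4e7    # joules released per kilogram of coal burned

print(energy_u235 / energy_coal)   # 3,000,000 -> one kg of U-235 ~ a few million kg of coal
print(energy_u235 / 4184)          # ~1.7e10 kcal, the same order as the ~19 billion kcal quoted above
```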
Anuclear reactor coolant– usually water but sometimes a gas or a liquid metal (like liquid sodium or lead) ormolten salt– is circulated past the reactor core to absorb the heat that it generates. The heat is carried away from the reactor and is then used to generate steam. Most reactor systems employ a cooling system that is physically separated from the water that will be boiled to produce pressurized steam for theturbines, like thepressurized water reactor. However, in some reactors the water for the steam turbines is boiled directly by thereactor core; for example theboiling water reactor.[18]
The rate of fission reactions within a reactor core can be adjusted by controlling the quantity of neutrons that are able to induce further fission events. Nuclear reactors typically employ several methods of neutron control to adjust the reactor's power output. Some of these methods arise naturally from the physics of radioactive decay and are simply accounted for during the reactor's operation, while others are mechanisms engineered into the reactor design for a distinct purpose.
The fastest method for adjusting levels of fission-inducing neutrons in a reactor is via movement of thecontrol rods. Control rods are made of so-calledneutron poisonsand therefore absorb neutrons. When a control rod is inserted deeper into the reactor, it absorbs more neutrons than the material it displaces – often the moderator. This action results in fewer neutrons available to cause fission and reduces the reactor's power output. Conversely, extracting the control rod will result in an increase in the rate of fission events and an increase in power.
The physics of radioactive decay also affects neutron populations in a reactor. One such process isdelayed neutronemission by a number of neutron-rich fission isotopes. These delayed neutrons account for about 0.65% of the total neutrons produced in fission, with the remainder (termed "prompt neutrons") released immediately upon fission. The fission products which produce delayed neutrons havehalf-livesfor theirdecaybyneutron emissionthat range from milliseconds to as long as several minutes, and so considerable time is required to determine exactly when a reactor reaches thecriticalpoint. Keeping the reactor in the zone of chain reactivity where delayed neutrons arenecessaryto achieve acritical massstate allows mechanical devices or human operators to control a chain reaction in "real time"; otherwise the time between achievement of criticality andnuclear meltdownas a result of an exponential power surge from the normal nuclear chain reaction, would be too short to allow for intervention. This last stage, where delayed neutrons are no longer required to maintain criticality, is known as theprompt criticalpoint. There is a scale for describing criticality in numerical form, in which bare criticality is known aszerodollarsand the prompt critical point isone dollar, and other points in the process interpolated in cents.
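As an illustrative calculation (the delayed-neutron fraction below is a typical textbook value for U-235-fuelled thermal reactors, not taken from the cited sources), reactivity can be converted to the dollar scale as follows:

```python
# Illustrative conversion of reactivity to the dollar scale described above.
beta = 0.0065                 # delayed neutron fraction (~0.65% of fission neutrons), typical for U-235

def reactivity_in_dollars(k_eff):
    rho = (k_eff - 1.0) / k_eff            # reactivity
    return rho / beta                      # 0 $ = exactly critical, 1 $ = prompt critical

print(round(reactivity_in_dollars(1.0), 3))       # 0.0   -> exactly critical
print(round(reactivity_in_dollars(1.0065), 3))    # ~0.99 -> just below one dollar (prompt critical)
```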
In some reactors, thecoolantalso acts as aneutron moderator. A moderator increases the power of the reactor by causing the fast neutrons that are released from fission to lose energy and become thermal neutrons.Thermal neutronsare more likely thanfast neutronsto cause fission. If the coolant is a moderator, then temperature changes can affect the density of the coolant/moderator and therefore change power output. A higher temperature coolant would be less dense, and therefore a less effective moderator.
In other reactors, the coolant acts as a poison by absorbing neutrons in the same way that the control rods do. In these reactors, power output can be increased by heating the coolant, which makes it a less dense poison. Nuclear reactors generally have automatic and manual systems toscramthe reactor in an emergency shut down. These systems insert large amounts of poison (oftenboronin the form ofboric acid) into the reactor to shut the fission reaction down if unsafe conditions are detected or anticipated.[19]
Most types of reactors are sensitive to a process variously known as xenon poisoning, or the iodine pit. The common fission product xenon-135 produced in the fission process acts as a neutron poison that absorbs neutrons and therefore tends to shut the reactor down. Xenon-135 accumulation can be controlled by keeping power levels high enough to destroy it by neutron absorption as fast as it is produced. Fission also produces iodine-135, which in turn decays (with a half-life of 6.57 hours) to new xenon-135. When the reactor is shut down, iodine-135 continues to decay to xenon-135, making restarting the reactor more difficult for a day or two until the xenon-135 (half-life 9.2 hours) decays into cesium-135, which is not nearly as poisonous as xenon-135. This temporary state is the "iodine pit." If the reactor has sufficient extra reactivity capacity, it can nevertheless be restarted: as the extra xenon-135 is transmuted to xenon-136, a much weaker neutron poison, the reactor experiences a "xenon burnoff (power) transient" within a few hours, and control rods must be further inserted to replace the neutron absorption of the lost xenon-135. Failure to properly follow such a procedure was a key step in the Chernobyl disaster.[20]
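A rough numerical sketch of the post-shutdown xenon transient, using the half-lives quoted above; the starting inventories are illustrative only (during operation xenon-135 is kept low by neutron absorption, so iodine dominates at the moment of shutdown):

```python
# Decay-only model of the iodine pit after shutdown (neutron flux ~ zero):
# I-135 (6.57 h) -> Xe-135 (9.2 h) -> Cs-135, integrated with a simple Euler step.
import math

lam_i  = math.log(2) / 6.57    # decay constant of I-135, per hour
lam_xe = math.log(2) / 9.2     # decay constant of Xe-135, per hour

iodine, xenon = 100.0, 10.0    # relative inventories at shutdown (illustrative values)
dt = 0.01                      # time step, hours
peak, peak_t = xenon, 0.0
for step in range(4800):       # follow 48 hours after shutdown
    iodine, xenon = (iodine - lam_i * iodine * dt,
                     xenon + (lam_i * iodine - lam_xe * xenon) * dt)
    if xenon > peak:
        peak, peak_t = xenon, (step + 1) * dt

print(f"Xe-135 peaks at ~{peak:.0f} (relative units) about {peak_t:.0f} h after shutdown")
```

With these illustrative numbers the xenon inventory roughly quadruples and peaks around ten hours after shutdown before decaying away, which is the behaviour described above.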
Reactors used innuclear marine propulsion(especiallynuclear submarines) often cannot be run at continuous power around the clock in the same way that land-based power reactors are normally run, and in addition often need to have a very long core life withoutrefueling. For this reason many designs use highly enriched uranium but incorporate burnable neutron poison in the fuel rods.[21]This allows the reactor to be constructed with an excess of fissionable material, which is nevertheless made relatively safe early in the reactor's fuel burn cycle by the presence of the neutron-absorbing material which is later replaced by normally produced long-lived neutron poisons (far longer-lived than xenon-135) which gradually accumulate over the fuel load's operating life.
The energy released in the fission process generates heat, some of which can be converted into usable energy. A common method of harnessing thisthermal energyis to use it to boil water to produce pressurized steam which will then drive asteam turbinethat turns analternatorand generates electricity.[19]
Modern nuclear power plants are typically designed for a lifetime of 60 years, while older reactors were built with a planned typical lifetime of 30–40 years, though many of those have received renovations and life extensions of 15–20 years.[22]Some believe nuclear power plants can operate for as long as 80 years or longer with proper maintenance and management. While most components of a nuclear power plant, such as steam generators, are replaced when they reach the end of their useful lifetime, the overall lifetime of the power plant is limited by the life of components that cannot be replaced when aged by wear andneutron embrittlement, such as the reactor pressure vessel.[23]At the end of their planned life span, plants may get an extension of the operating license for some 20 years and in the US even a "subsequent license renewal" (SLR) for an additional 20 years.[24][25]
Even when a license is extended, it does not guarantee the reactor will continue to operate, particularly in the face of safety concerns or incidents.[26] Many reactors are closed long before their license or design life expires and are decommissioned. The costs of the replacements or improvements required for continued safe operation may be so high that operation is no longer cost-effective, or reactors may be shut down due to technical failure.[27] Others have been shut down because the surrounding area was contaminated, as at Fukushima, Three Mile Island, Sellafield, and Chernobyl.[28] The British branch of the French concern EDF Energy, for example, extended the operating lives of its Advanced Gas-cooled Reactors (AGR) by only 3 to 10 years.[29] All seven AGR plants were expected to be shut down in 2022 and to be in decommissioning by 2028.[30] Hinkley Point B was extended from 40 to 46 years and then closed; the same happened with Hunterston B, also after 46 years.
An increasing number of reactors are reaching or exceeding their design lifetimes of 30 or 40 years. In 2014, Greenpeace warned that the lifetime extension of ageing nuclear power plants amounts to entering a new era of risk. It estimated that current European nuclear liability coverage is on average too low by a factor of between 100 and 1,000 to cover the likely costs, while at the same time the likelihood of a serious accident in Europe continues to increase as the reactor fleet grows older.[31]
Theneutronwas discovered in 1932 by British physicistJames Chadwick. The concept of a nuclear chain reaction brought about bynuclear reactionsmediated by neutrons was first realized shortly thereafter, byHungarianscientistLeó Szilárd, in 1933. He filed a patent for his idea of a simple reactor the following year while working at theAdmiraltyin London, England.[32]However, Szilárd's idea did not incorporate the idea of nuclear fission as a neutron source, since that process was not yet discovered. Szilárd's ideas for nuclear reactors using neutron-mediated nuclear chain reactions in light elements proved unworkable.
Inspiration for a new type of reactor using uranium came from the discovery byOtto Hahn,Lise Meitner, andFritz Strassmannin 1938 that bombardment of uranium with neutrons (provided by an alpha-on-beryllium fusion reaction, a "neutron howitzer") produced abariumresidue, which they reasoned was created by fission of the uranium nuclei. In their second publication on nuclear fission in February 1939, Hahn and Strassmann predicted the existence and liberation of additional neutrons during the fission process, opening the possibility of anuclear chain reaction. Subsequent studies in early 1939 (one of them by Szilárd and Fermi), revealed that several neutrons were indeed released during fission, making available the opportunity for the nuclear chain reaction that Szilárd had envisioned six years previously.
On 2 August 1939,Albert Einsteinsigned a letter to PresidentFranklin D. Roosevelt(written by Szilárd) suggesting that the discovery of uranium's fission could lead to the development of "extremely powerful bombs of a new type", giving impetus to the study of reactors and fission. Szilárd and Einstein knew each other well and had worked together years previously, but Einstein had never thought about this possibility for nuclear energy until Szilard reported it to him, at the beginning of his quest to produce theEinstein-Szilárd letterto alert the U.S. government.
Shortly after,Nazi Germanyinvaded Poland in 1939, startingWorld War IIin Europe. The U.S. was not yet officially at war, but in October, when the Einstein-Szilárd letter was delivered to him, Roosevelt commented that the purpose of doing the research was to make sure "the Nazis don't blow us up." The U.S. nuclear project followed, although with some delay as there remained skepticism (some of it fromEnrico Fermi) and also little action from the small number of officials in the government who were initially charged with moving the project forward.
The following year, the U.S. Government received theFrisch–Peierls memorandumfrom the UK, which stated that the amount ofuraniumneeded for achain reactionwas far lower than had previously been thought. The memorandum was a product of theMAUD Committee, which was working on the UK atomic bomb project, known asTube Alloys, laterto be subsumedwithin theManhattan Project.
Eventually, the first artificial nuclear reactor,Chicago Pile-1, was constructed at theUniversity of Chicago, by a team led byItalianphysicist Enrico Fermi, in late 1942. By this time, the program had been pressured for a year by U.S. entry into the war. The Chicago Pile achievedcriticalityon 2 December 1942[4]at 3:25 PM. The reactor support structure was made of wood, which supported a pile (hence the name) of graphite blocks, embedded in which was natural uranium oxide 'pseudospheres' or 'briquettes'.
Soon after the Chicago Pile, theMetallurgical Laboratorydeveloped a number of nuclear reactors for theManhattan Projectstarting in 1943. The primary purpose for the largest reactors (located at theHanford SiteinWashington), was the mass production ofplutoniumfor nuclear weapons. Fermi and Szilard applied for a patent on reactors on 19 December 1944. Its issuance was delayed for 10 years because of wartime secrecy.[33]
"World's first nuclear power plant" is the claim made by signs at the site of theEBR-I, which is now a museum nearArco, Idaho. Originally called "Chicago Pile-4", it was carried out under the direction ofWalter ZinnforArgonne National Laboratory.[34]This experimentalLMFBRoperated by theU.S. Atomic Energy Commissionproduced 0.8 kW in a test on 20 December 1951[35]and 100 kW (electrical) the following day,[36]having a design output of 200 kW (electrical).
Besides the military uses of nuclear reactors, there were political reasons to pursue civilian use of atomic energy. U.S. PresidentDwight Eisenhowermade his famousAtoms for Peacespeech to theUN General Assemblyon 8 December 1953. This diplomacy led to the dissemination of reactor technology to U.S. institutions and worldwide.[37]
The first nuclear power plant built for civil purposes was the AM-1Obninsk Nuclear Power Plant, launched on 27 June 1954 in theSoviet Union. It produced around 5 MW (electrical). It was built after theF-1 (nuclear reactor)which was the first reactor to go critical in Europe, and was also built by the Soviet Union.
After World War II, the U.S. military sought other uses for nuclear reactor technology. Research by the Army, under the Army Nuclear Power Program, led to the power stations for Camp Century, Greenland, and McMurdo Station, Antarctica. The Air Force nuclear bomber project resulted in the Molten-Salt Reactor Experiment. The U.S. Navy succeeded when it steamed the USS Nautilus (SSN-571) on nuclear power on 17 January 1955.
The first commercial nuclear power station,Calder HallinSellafield, England was opened in 1956 with an initial capacity of 50 MW (later 200 MW).[38][39]
The first portable nuclear reactor "Alco PM-2A" was used to generate electrical power (2 MW) forCamp Centuryfrom 1960 to 1963.[40]
All commercial power reactors are based onnuclear fission. They generally useuraniumand its productplutoniumasnuclear fuel, though athorium fuel cycleis also possible. Fission reactors can be divided roughly into two classes, depending on the energy of the neutrons that sustain the fissionchain reaction:
In principle,fusion powercould be produced bynuclear fusionof elements such as thedeuteriumisotope ofhydrogen. While an ongoing rich research topic since at least the 1940s, no self-sustaining fusion reactor for any purpose has ever been built.
Used by thermal reactors:
In 2003, the FrenchCommissariat à l'Énergie Atomique(CEA) was the first to refer to "Gen II" types inNucleonics Week.[95]
The first mention of "Gen III" was in 2000, in conjunction with the launch of theGeneration IV International Forum(GIF) plans.
"Gen IV" was named in 2000, by theUnited States Department of Energy(DOE), for developing new plant types.[96]
More than a dozen advanced reactor designs are in various stages of development.[101]Some are evolutionary from thePWR,BWRandPHWRdesigns above, and some are more radical departures. The former include theadvanced boiling water reactor(ABWR), two of which are now operating with others under construction, and the plannedpassively safeEconomic Simplified Boiling Water Reactor(ESBWR) andAP1000units (seeNuclear Power 2010 Program).
Rolls-Royce aims to sell nuclear reactors for the production ofsynfuelfor aircraft.[105]
Generation IV reactorsare a set of theoretical nuclear reactor designs. These are generally not expected to be available for commercial use before 2040–2050,[106]although the World Nuclear Association suggested that some might enter commercial operation before 2030.[94]Current reactors in operation around the world are generally considered second- or third-generation systems, with the first-generation systems having been retired some time ago. Research into these reactor types was officially started by the Generation IV International Forum (GIF) based on eight technology goals. The primary goals being to improve nuclear safety, improve proliferation resistance, minimize waste and natural resource utilization, and to decrease the cost to build and run such plants.[107]
Generation V reactors are designs which are theoretically possible, but which are not being actively considered or researched at present. Though some generation V reactors could potentially be built with current or near term technology, they trigger little interest for reasons of economics, practicality, or safety.
Controllednuclear fusioncould in principle be used infusion powerplants to produce power without the complexities of handlingactinides, but significant scientific and technical obstacles remain. Despite research having started in the 1950s, no commercial fusion reactor is expected before 2050. TheITERproject is currently leading the effort to harness fusion power.
Thermal reactors generally depend on refined andenriched uranium. Some nuclear reactors can operate with a mixture of plutonium and uranium (seeMOX). The process by which uranium ore is mined, processed, enriched, used, possiblyreprocessedand disposed of is known as thenuclear fuel cycle.
Under 1% of the uranium found in nature is the easily fissionable U-235isotopeand as a result most reactor designs require enriched fuel.
Enrichment involves increasing the percentage of U-235 and is usually done by means ofgaseous diffusionorgas centrifuge. The enriched result is then converted intouranium dioxidepowder, which is pressed and fired into pellet form. These pellets are stacked into tubes which are then sealed and calledfuel rods. Many of these fuel rods are used in each nuclear reactor.
Most BWR and PWR commercial reactors use uranium enriched to about 4% U-235, and some commercial reactors with a highneutron economydo not require the fuel to be enriched at all (that is, they can use natural uranium). According to theInternational Atomic Energy Agencythere are at least 100research reactorsin the world fueled by highly enriched (weapons-grade/90% enrichment) uranium. Theft risk of this fuel (potentially used in the production of a nuclear weapon) has led to campaigns advocating conversion of this type of reactor to low-enrichment uranium (which poses less threat of proliferation).[110]
Fissile U-235 and non-fissile but fissionable and fertile U-238 are both used in the fission process. U-235 is fissionable by thermal (i.e. slow-moving) neutrons. A thermal neutron is one moving at about the same speed as the atoms around it; since all atoms vibrate proportionally to their absolute temperature, a thermal neutron has the best opportunity to fission U-235 when it is moving at this same vibrational speed. On the other hand, U-238 is more likely to capture a neutron when the neutron is moving very fast; the capture converts it to U-239, which soon decays into plutonium-239, another fuel. Pu-239 is a viable fuel and must be accounted for even when a highly enriched uranium fuel is used. Plutonium fissions will dominate the U-235 fissions in some reactors, especially after the initial loading of U-235 is spent. Plutonium is fissionable with both fast and thermal neutrons, which makes it suitable for either nuclear reactors or nuclear bombs.
Most reactor designs in existence are thermal reactors and typically use water as a neutron moderator (moderator means that it slows down the neutron to a thermal speed) and as a coolant. But in afast breeder reactor, some other kind of coolant is used which will not moderate or slow the neutrons down much. This enables fast neutrons to dominate, which can effectively be used to constantly replenish the fuel supply. By merely placing cheap unenriched uranium into such a core, the non-fissionable U-238 will be turned into Pu-239, "breeding" fuel.
Inthorium fuel cyclethorium-232absorbs aneutronin either a fast or thermal reactor. The thorium-233beta decaystoprotactinium-233 and then touranium-233, which in turn is used as fuel. Hence, likeuranium-238, thorium-232 is afertile material.
The amount of energy in the reservoir ofnuclear fuelis frequently expressed in terms of "full-power days," which is the number of 24-hour periods (days) a reactor is scheduled for operation at full power output for the generation of heat energy. The number of full-power days in a reactor's operating cycle (between refueling outage times) is related to the amount offissileuranium-235(U-235) contained in the fuel assemblies at the beginning of the cycle. A higher percentage of U-235 in the core at the beginning of a cycle will permit the reactor to be run for a greater number of full-power days.
At the end of the operating cycle, the fuel in some of the assemblies is "spent", having spent four to six years in the reactor producing power. This spent fuel is discharged and replaced with new (fresh) fuel assemblies.[citation needed] Though considered "spent," these fuel assemblies contain a large quantity of fuel.[citation needed] In practice it is economics that determines the lifetime of nuclear fuel in a reactor. Long before all possible fission has taken place, the reactor becomes unable to maintain full output power, and the utility's income falls as plant output falls. Most nuclear plants operate at a very low profit margin due to operating overhead, mainly regulatory costs, so operating below 100% power is not economically viable for very long.[citation needed] The fraction of the reactor's fuel core replaced during refueling is typically one-third, but depends on how long the plant operates between refuelings. Plants typically operate on 18-month or 24-month refueling cycles, which means that a single refueling, replacing only one-third of the fuel, can keep a nuclear reactor at full power for nearly two years.[citation needed]
The disposition and storage of this spent fuel is one of the most challenging aspects of the operation of a commercial nuclear power plant. This nuclear waste is highly radioactive and its toxicity presents a danger for thousands of years.[90] After being discharged from the reactor, spent nuclear fuel is transferred to the on-site spent fuel pool. The spent fuel pool is a large pool of water that provides cooling and shielding of the spent nuclear fuel as well as limiting radiation exposure to on-site personnel. Once the energy has decayed somewhat (after approximately five years), the fuel can be transferred from the fuel pool to dry shielded casks that can be safely stored for thousands of years. After loading into dry shielded casks, the casks are stored on-site in a specially guarded facility in impervious concrete bunkers. On-site fuel storage facilities are designed to withstand the impact of commercial airliners with little to no damage to the spent fuel. An average on-site fuel storage facility can hold 30 years of spent fuel in a space smaller than a football field.[citation needed]
Not all reactors need to be shut down for refueling; for example,pebble bed reactors,RBMK reactors,molten-salt reactors,Magnox,AGRandCANDUreactors allow fuel to be shifted through the reactor while it is running. In a CANDU reactor, this also allows individual fuel elements to be situated within the reactor core that are best suited to the amount of U-235 in the fuel element.
The amount of energy extracted from nuclear fuel is called itsburnup, which is expressed in terms of the heat energy produced per initial unit of fuel weight. Burnup is commonly expressed as megawatt days thermal per metric ton of initial heavy metal.
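An illustrative burnup calculation in these units, with made-up but typical figures for a large light-water reactor:

```python
# Burnup in megawatt-days thermal per metric ton of initial heavy metal (MWd/tHM).
# The numbers below are illustrative, not from the cited sources.
thermal_power_mw = 3000          # reactor thermal power, MWt
days_at_power = 4.5 * 365        # roughly 4.5 years of full-power days over the fuel's life
initial_heavy_metal_t = 90       # tonnes of uranium initially loaded in the core

burnup = thermal_power_mw * days_at_power / initial_heavy_metal_t
print(f"{burnup:,.0f} MWd/tHM")  # ~55,000 MWd/tHM, typical of modern LWR fuel
```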
Nuclear safety covers the actions taken to preventnuclear and radiation accidents and incidentsor to limit their consequences. The nuclear power industry has improved the safety and performance of reactors, and has proposed new, safer (but generally untested) reactor designs but there is no guarantee that the reactors will be designed, built and operated correctly.[111]Mistakes do occur and the designers of reactors atFukushimain Japan did not anticipate that a tsunami generated by an earthquake would disable the backup systems that were supposed to stabilize the reactor after the earthquake,[112]despite multiple warnings by the NRG and the Japanese nuclear safety administration.[citation needed]According toUBSAG, theFukushima I nuclear accidentshave cast doubt on whether even an advanced economy like Japan can master nuclear safety.[113]Catastrophic scenarios involving terrorist attacks are also conceivable.[111]An interdisciplinary team fromMIThas estimated that given the expected growth of nuclear power from 2005 to 2055, at least four serious nuclear accidents would be expected in that period.[114]
Serious, though rare,nuclear and radiation accidentshave occurred. These include theWindscale fire(October 1957), theSL-1accident (1961), theThree Mile Island accident(1979),Chernobyl disaster(April 1986), and theFukushima Daiichi nuclear disaster(March 2011).[116]Nuclear-powered submarinemishaps include theK-19reactor accident (1961),[117]theK-27reactor accident (1968),[118]and theK-431reactor accident (1985).[116]
Nuclear reactors have been launched into Earth orbit at least 34 times. A number of incidents have been connected with the uncrewed, nuclear-reactor-powered Soviet RORSAT radar satellites, notably Kosmos 954, whose reentry resulted in nuclear fuel falling from orbit into the Earth's atmosphere and being dispersed over northern Canada (January 1978).
Almost two billion years ago a series of self-sustaining nuclear fission "reactors" self-assembled in the area now known as Oklo in Gabon, West Africa. The conditions at that place and time allowed natural nuclear fission to occur under circumstances similar to those in a constructed nuclear reactor.[119] Fifteen fossil natural fission reactors have so far been found in three separate ore deposits at the Oklo uranium mine in Gabon. First discovered in 1972 by French physicist Francis Perrin, they are collectively known as the Oklo Fossil Reactors. Self-sustaining nuclear fission reactions took place in these reactors approximately 1.5 billion years ago, and ran for a few hundred thousand years, averaging 100 kW of power output during that time.[120] The concept of a natural nuclear reactor was theorized as early as 1956 by Paul Kuroda at the University of Arkansas.[121][122]
Such reactors can no longer form on Earth in its present geologic period. Radioactive decay of formerly more abundant uranium-235 over the time span of hundreds of millions of years has reduced the proportion of this naturally occurring fissile isotope to below the amount required to sustain a chain reaction with only plain water as a moderator.
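A back-of-the-envelope check of this claim, reconstructing the natural U-235 fraction two billion years ago from the two isotopes' well-known half-lives (standard values, not from the cited sources):

```python
# Natural uranium composition two billion years ago, from the half-lives of
# U-235 (~704 million years) and U-238 (~4.47 billion years).
u235_now, u238_now = 0.0072, 0.9928        # present-day atom fractions of natural uranium
t = 2.0e9                                  # years before present

u235_then = u235_now * 2 ** (t / 7.04e8)
u238_then = u238_now * 2 ** (t / 4.47e9)
print(round(100 * u235_then / (u235_then + u238_then), 1))
# ~3.7 % U-235: roughly light-water-reactor-grade, enough for a water-moderated chain reaction
```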
The natural nuclear reactors formed when a uranium-rich mineral deposit became inundated with groundwater that acted as a neutron moderator, and a strong chain reaction took place. The water moderator would boil away as the reaction increased, slowing it back down again and preventing a meltdown. The fission reaction was sustained for hundreds of thousands of years, cycling on the order of hours to a few days.
These natural reactors are extensively studied by scientists interested in geologicradioactive wastedisposal. They offer a case study of how radioactive isotopes migrate through the Earth's crust. This is a significant area of controversy as opponents of geologic waste disposal fear that isotopes from stored waste could end up in water supplies or be carried into the environment.
Nuclear reactors producetritiumas part of normal operations, which is eventually released into the environment in trace quantities.
As an isotope of hydrogen, tritium (T) frequently binds to oxygen and forms T2O. This molecule is chemically identical to H2O and so is both colorless and odorless; however, the additional neutrons in the hydrogen nuclei cause the tritium to undergo beta decay with a half-life of 12.3 years. Despite being measurable, the tritium released by nuclear power plants is minimal. The United States NRC estimates that a person drinking water for one year out of a well contaminated by what it would consider to be a significant tritiated water spill would receive a radiation dose of 0.3 millirem.[123] For comparison, this is an order of magnitude less than the 4 millirem a person receives on a round-trip flight from Washington, D.C. to Los Angeles, a consequence of less atmospheric protection against highly energetic cosmic rays at high altitudes.[123]
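For illustration, the fraction of a tritium inventory remaining after a given time follows directly from the quoted half-life:

```python
# Remaining fraction of a tritium inventory, using the 12.3-year half-life quoted above.
def fraction_remaining(years, half_life=12.3):
    return 0.5 ** (years / half_life)

print(round(fraction_remaining(12.3), 2))   # 0.5  -> one half-life
print(round(fraction_remaining(50.0), 2))   # 0.06 -> about 6% remains after 50 years
```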
The amount of strontium-90 released from nuclear power plants under normal operations is so low as to be undetectable above natural background radiation. Detectable strontium-90 in ground water and the general environment can be traced to the weapons testing that occurred during the mid-20th century (accounting for 99% of the strontium-90 in the environment) and to the Chernobyl accident (accounting for the remaining 1%).[124]
|
https://en.wikipedia.org/wiki/Nuclear_reactor
|
Inengineeringandsystems theory,redundancyis the intentional duplication of critical components or functions of a system with the goal of increasing reliability of thesystem, usually in the form of a backup orfail-safe, or to improve actual system performance, such as in the case ofGNSSreceivers, ormulti-threadedcomputer processing.
In manysafety-critical systems, such asfly-by-wireandhydraulicsystems inaircraft, some parts of the control system may be triplicated,[1]which is formally termedtriple modular redundancy(TMR). An error in one component may then be out-voted by the other two. In a triply redundant system, the system has three sub components, all three of which must fail before the system fails. Since each one rarely fails, and the sub components are designed to preclude common failure modes (which can then be modelled as independent failure), the probability of all three failing is calculated to be extraordinarily small; it is often outweighed by other risk factors, such ashuman error.Electrical surgesarising fromlightningstrikes are an example of a failure mode which is difficult to fully isolate, unless the components are powered from independent power busses and have no direct electrical pathway in their interconnect (communication by some means is required for voting). Redundancy may also be known by the terms "majority voting systems"[2]or "voting logic".[3]
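As an illustrative calculation under the independence assumption discussed above (common-mode failures would invalidate it):

```python
# Failure probabilities for three independent channels with per-channel failure probability p.
def all_three_fail(p):
    return p ** 3                              # 1-out-of-3 (parallel) arrangement

def majority_voter_fails(p):
    return 3 * p**2 * (1 - p) + p**3           # a 2-out-of-3 voter is defeated once two channels fail

p = 1e-3
print(all_three_fail(p))        # ~1e-09
print(majority_voter_fails(p))  # ~3e-06
```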
Redundancy sometimes produces less, rather than greater, reliability: it creates a more complex system which is prone to various issues, it may lead to human neglect of duty, and it may lead to higher production demands which, by overstressing the system, may make it less safe.[4]
Redundancy is one form ofrobustnessas practiced incomputer science.
Geographic redundancyhas become important in thedata centerindustry, to safeguard data againstnatural disastersandpolitical instability(see below).
In computer science, there are four major forms of redundancy:[5]
A modified form of software redundancy, applied to hardware, may be:
Structuresare usually designed with redundant parts as well, ensuring that if one part fails, the entire structure will not collapse. A structure without redundancy is calledfracture-critical, meaning that a single broken component can cause the collapse of the entire structure. Bridges that failed due to lack of redundancy include theSilver Bridgeand theInterstate 5 bridge over the Skagit River.
Parallel and combined systems demonstrate different levels of redundancy. The models are the subject of study in reliability and safety engineering.[6]
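For illustration, the standard reliability models for series and parallel structures, assuming independent components, can be written as:

```python
# Reliability of series and parallel structures with independent components of reliability r_i.
from math import prod

def series_reliability(rs):
    return prod(rs)                                # works only if every component works

def parallel_reliability(rs):
    return 1 - prod(1 - r for r in rs)             # fails only if every component fails

components = [0.95, 0.95, 0.95]
print(round(series_reliability(components), 4))    # 0.8574
print(round(parallel_reliability(components), 6))  # 0.999875
```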
Unlike traditional redundancy, which uses more than one of the same thing, dissimilar redundancy uses different things. The idea is that the different things are unlikely to contain identical flaws. The voting method may involve additional complexity if the two things take different amounts of time. Dissimilar redundancy is often used with software, because identical software contains identical flaws.
The chance of failure is reduced by using at least two different types of each of the following
Geographic redundancy addresses the vulnerability of co-located redundant devices by geographically separating backup devices. It reduces the likelihood that events such as power outages, floods, HVAC failures, lightning strikes, tornadoes, building fires, wildfires, and mass shootings will disable most of the system, if not all of it.
Geographic redundancy locations can be
The following methods can reduce the risk of damage from a fire or conflagration:
Geographic redundancy is used by Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Netflix, Dropbox, Salesforce, LinkedIn, PayPal, Twitter, Facebook, Apple iCloud, Cisco Meraki, and many others to provide high availability and fault tolerance and to ensure the availability and reliability of their cloud services.[15]
As another example, to minimize risk of damage from severe windstorms or water damage, buildings can be located at least 2 miles (3.2 km) away from the shore, with an elevation of at least 5 feet (1.5 m) above sea level. For additional protection, they can be located at least 100 feet (30 m) away from flood plain areas.[16][17]
The two functions of redundancy are passive redundancy and active redundancy. Both use extra capacity to prevent performance decline from exceeding specification limits without human intervention.
Passive redundancy uses excess capacity to reduce the impact of component failures. One common form of passive redundancy is the extra strength of cabling and struts used in bridges. This extra strength allows some structural components to fail without bridge collapse. The extra strength used in the design is called the margin of safety.
Eyes and ears provide working examples of passive redundancy. Vision loss in one eye does not cause blindness but depth perception is impaired. Hearing loss in one ear does not cause deafness but directionality is lost. Performance decline is commonly associated with passive redundancy when a limited number of failures occur.
Active redundancy eliminates performance declines by monitoring the performance of individual devices, and this monitoring is used in voting logic. The voting logic is linked to switching that automatically reconfigures the components. Error detection and correction and the Global Positioning System (GPS) are two examples of active redundancy.
Electrical power distribution provides an example of active redundancy. Several power lines connect each generation facility with customers. Each power line includes monitors that detect overload. Each power line also includes circuit breakers. The combination of power lines provides excess capacity. Circuit breakers disconnect a power line when monitors detect an overload. Power is redistributed across the remaining lines.[citation needed] At the Toronto Airport, there are 4 redundant electrical lines. Each of the 4 lines supplies enough power for the entire airport. A spot network substation uses reverse current relays to open breakers to lines that fail, but lets power continue to flow to the airport.
Electrical power systems use power scheduling to reconfigure active redundancy. Computing systems adjust the production output of each generating facility when other generating facilities are suddenly lost. This prevents blackout conditions during major events such as an earthquake.
Charles Perrow, author of Normal Accidents, has said that sometimes redundancies backfire and produce less, not more reliability. This may happen in three ways: First, redundant safety devices result in a more complex system, more prone to errors and accidents. Second, redundancy may lead to shirking of responsibility among workers. Third, redundancy may lead to increased production pressures, resulting in a system that operates at higher speeds, but less safely.[4]
Voting logic uses performance monitoring to determine how to reconfigure individual components so that operation continues without violating specification limitations of the overall system. Voting logic often involves computers, but systems composed of items other than computers may be reconfigured using voting logic. Circuit breakers are an example of a form of non-computer voting logic.
The simplest voting logic in computing systems involves two components: primary and alternate. They both run similar software, but the output from the alternate remains inactive during normal operation. The primary monitors itself and periodically sends an activity message to the alternate as long as everything is OK. All outputs from the primary stop, including the activity message, when the primary detects a fault. The alternate activates its output and takes over from the primary after a brief delay when the activity message ceases. Errors in voting logic can cause both outputs to be active or inactive at the same time, or cause outputs to flutter on and off.
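As a rough sketch of the primary/alternate scheme just described (the class names and timing value here are hypothetical, not taken from any particular system), the alternate watches for the primary's activity message and activates its own output only after that message has been silent for longer than a takeover delay:

    import time

    TAKEOVER_DELAY = 2.0   # assumed value: how long the alternate tolerates a silent primary

    class Primary:
        def __init__(self):
            self.healthy = True
            self.last_heartbeat = time.monotonic()

        def run_cycle(self):
            """Self-check; refresh the activity message only while healthy."""
            if self.healthy:
                self.last_heartbeat = time.monotonic()
                return "primary output"
            return None   # on a detected fault, all outputs (including the heartbeat) stop

    class Alternate:
        def __init__(self, primary):
            self.primary = primary
            self.active = False

        def run_cycle(self):
            """Activate own output once the activity message has ceased long enough."""
            silence = time.monotonic() - self.primary.last_heartbeat
            if silence > TAKEOVER_DELAY:
                self.active = True
            return "alternate output" if self.active else None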
A more reliable form of voting logic involves an odd number of three devices or more. All perform identical functions and the outputs are compared by the voting logic. The voting logic establishes a majority when there is a disagreement, and the majority will act to deactivate the output from other device(s) that disagree. A single fault will not interrupt normal operation. This technique is used with avionics systems, such as those responsible for operation of the Space Shuttle.
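A minimal sketch of the majority-voting idea for three or more redundant channels, assuming the channels produce directly comparable outputs (a simplification; real systems must also handle timing and tolerance bands):

    from collections import Counter

    def majority_vote(outputs):
        """Return the value reported by a strict majority of redundant channels."""
        value, count = Counter(outputs).most_common(1)[0]
        if count <= len(outputs) // 2:
            raise RuntimeError("no majority among redundant channels")
        return value

    # One channel has failed, but its output is out-voted by the other two.
    print(majority_vote([42, 42, 17]))   # -> 42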
Each duplicate component added to the system decreases the probability of system failure according to the formula:
Probability of system failure = p1 × p2 × ... × pn
where:
n is the number of components (the original plus its duplicates) and pi is the probability of component i failing.
This formula assumes independence of failure events. That means that the probability of a component B failing given that a component A has already failed is the same as that of B failing when A has not failed. There are situations where this is unreasonable, such as using two power supplies connected to the same socket in such a way that if one power supply failed, the other would too.
It also assumes that only one component is needed to keep the system running.
You can achieve higher availability through redundancy. Let's say you have three redundant components: A, B and C. You can use the following formula to calculate the availability of the overall system:
Availability of redundant components = 1 - (1 - availability of component A) × (1 - availability of component B) × (1 - availability of component C)[18][19]
As a corollary, if you have N parallel components each having X availability, then:
Availability of parallel components = 1 - (1 - X)^N
Using redundant components can exponentially increase the availability of the overall system.[19] For example, if each of your hosts has only 50% availability, by using 10 hosts in parallel you can achieve 99.9023% availability.
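The 99.9023% figure can be checked with a short sketch; the function below simply restates the formula above, and the 50%/10-host numbers come from the example in the text:

    def parallel_availability(component_availability, n):
        """Availability of n redundant components, assuming independent failures."""
        return 1 - (1 - component_availability) ** n

    # 10 hosts, each with only 50% availability:
    print(round(parallel_availability(0.5, 10) * 100, 4))   # 99.9023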
Note that redundancy doesn't always lead to higher availability. In fact, redundancy increases complexity, which in turn can reduce availability. According to Marc Brooker, to take advantage of redundancy, ensure that:[20]
|
https://en.wikipedia.org/wiki/Redundancy_(engineering)
|
The Safety-Critical Systems Club (SCSC)[1] is a professional association in the United Kingdom.[2][3] It aims to share knowledge about safety-critical systems, including current and emerging practices in safety engineering, software engineering, and product and process safety standards.[4]
Since it started in 1991, the Club has met its objectives by holding regular one- and two-day seminars, publishing a newsletter three times per year, and running an annual conference – the Safety-critical Systems Symposium (SSS), for which it publishes proceedings.[5] In performing these functions, and in adding tutorials to its programme, the Club has been instrumental in helping to define the requirements for education and training in the safety-critical systems domain.
The SCSC also implements initiatives to improve professionalism in the field of safety-critical systems engineering, and organises various working groups to develop and maintain industry-standard guidance. Notable outputs of these groups include the Data Safety Guidance, Service Assurance Guidance and Safety Assurance Objectives for Autonomous Systems, which have been adopted by UK government organisations such as the NHS,[6] Dstl[7][8] and the Ministry of Defence;[9] and the Goal Structuring Notation (GSN) community standard, which has influenced the development of the OMG's Structured Assurance Case Metamodel standard.[10]
The Safety-Critical Systems Club formally commenced operation on 1 May 1991 as the result of a contract placed by the UK Department of Trade and Industry (DTI) and the Science and Engineering Research Council (SERC).[11][12] A report to the UK Parliamentary and Scientific Committee on the science of safety-critical systems led to the 'SafeIT' programme, which recommended formation of the Club.[13] As part of their safety-critical systems research programme,[14] the DTI and SERC awarded a three-year contract for organising and running the Safety-Critical Systems Club to the Institution of Electrical Engineers,[15] the British Computer Society,[16] and the University of Newcastle upon Tyne, the last of these to implement the organisation.[12] The SCSC became self-sufficient in 1994, based at Newcastle University through the Centre for Software Reliability.[17] Activities included detailed technical work, such as planning and organising events and editing the SCSC newsletter and other publications. From the start, the UK Health and Safety Executive was an active supporter of the Club, and, along with all the other organisations already mentioned, remains so.
It was intended that the Club should include in its ambit both technical and managerial personnel, and that it should facilitate communication among all sections of the safety-critical systems community.
The inaugural seminar, intended to introduce the Club to the safety-critical systems community, took place at UMIST, Manchester, on 11 July 1991 and attracted 256 delegates. The need for such an organisation was perceived by many in the software-engineering and safety-critical systems communities.[18]
Management of the SCSC moved to the University of York in 2016.[18] In 2020 it became an independent community interest company.[4][19]
|
https://en.wikipedia.org/wiki/Safety-Critical_Systems_Club
|
SAPHIRE is a probabilistic risk and reliability assessment software tool. SAPHIRE stands for Systems Analysis Programs for Hands-on Integrated Reliability Evaluations. The system was developed for the U.S. Nuclear Regulatory Commission (NRC) by the Idaho National Laboratory.
Development began in the mid-1980s when the NRC began exploring two notions: 1) that Probabilistic Risk Assessment (PRA) information could be displayed and manipulated using the emerging microcomputer technology of the day and 2) the rapid advancement of PRA technology required a relatively inexpensive and readily available platform for teaching PRA concepts to students.
1987 Version 1 of the code called IRRAS (now known as SAPHIRE) introduced an innovative way to draw, edit, and analyze graphical fault trees.
1989 Version 2 is released incorporating the ability to draw, edit, and analyze graphical event trees.
1990 Analysis improvements to IRRAS led to the release of Version 4 and the formation of the IRRAS Users Group.
1992 Creation of 32-bit IRRAS, Version 5, resulted in an order-of-magnitude decrease in analysis time. New features included: end state analysis; fire, flood, and seismic modules; rule-base cut set processing; and rule-based fault tree to event tree linking.
1997 SAPHIRE for Windows, version 6.x, is released. Use of a Windows user-interface makes SAPHIRE easy to learn. The new "plug-in" feature allows analysts to expand on the built-in probability calculations.
1999 SAPHIRE for Windows, version 7.x, is released. Enhancements are made to the event tree "linking rules" and to the use of dual language capability inside the SAPHIRE database.
2005 SAPHIRE for Windows, version 8.x, undergoes development.
2008 SAPHIRE for Windows, version 8.x, is released as a beta version.
2010 SAPHIRE for Windows, version 8.x, is released for U.S. Government and industry use.
The evolution of software and related analysis methods has led to the current generation of the SAPHIRE tool. The current SAPHIRE software code-base started in the mid-1980s as part of the NRC's general risk activities. In 1986, work commenced on the precursor to the SAPHIRE software – this software package was named the Integrated Reliability and Risk Analysis System, or IRRAS. IRRAS was the first IBM compatible PC-based risk analysis tool developed at the Idaho National Laboratory, thereby allowing users to work in a graphical interface rather than with mainframe punch cards. While limited to the analysis of only fault trees of medium size, version 1 of IRRAS was the initial step in the progress that today has led to the SAPHIRE software, software that is capable of running on multiple processors simultaneously and is able to handle extremely large analyses.
Historically, NASA relied on worst-case failure mode and effects analysis for safety assessment. However, this approach has problems, such as being qualitative and not aggregating risk at a system or mission level. On October 29, 1986, the investigation of the Challenger accident criticized NASA for not “estimating the probability of failure of the various [Shuttle] elements.” Further, in January 1988, the Post-Challenger investigation recommended that “probabilistic risk assessment approaches be applied to the Shuttle risk management program."
Consequently, probabilistic methods are now being used at NASA. Specifically, the following projects have all used the SAPHIRE software as the primary analysis tool for risk:
SAPHIRE contains an advanced minimal cut set solving engine. This solver, which has been fine-tuned and optimized over time, has a variety of techniques for analysis, including:
Use of these and other optimization methods has resulted in SAPHIRE having one of the most powerful analysis engines in use for probabilistic risk assessment today.
General basic event probability capabilities for SAPHIRE include:
SAPHIRE has been designed to handle large fault trees, where a tree may have up to 64,000 basic events and gates. To handle the fault trees, two mechanisms for developing and modifying the fault tree are available – a graphical editor and a hierarchical logic editor. Analysts may use either editor; if the logic is modified SAPHIRE can redraw the fault tree graphic. Conversely, if the user modifies the fault tree graphic, SAPHIRE automatically updates the associated logic. Applicable objects available in the fault tree editors include basic events and several gate types, including: OR, AND, NOR, NAND, and N-of-M. In addition to these objects, SAPHIRE has a unique feature known as “table events” that allows the user to group up to eight basic events together on the fault tree graphic, thereby compacting the size of the fault tree on the printed page or computer screen. All of these objects though represent traditional static-type Boolean logic models. Models explicitly capturing dynamic or time-dependent situations are not available in current versions of SAPHIRE.
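To make the gate types concrete, here is a small, hypothetical sketch of static fault-tree evaluation over Boolean basic-event states; it is an illustration of the logic such editors manipulate, not SAPHIRE code:

    def gate(kind, inputs, n=None):
        """Evaluate one static fault-tree gate over Boolean input states.

        kind is "OR", "AND", "NOR", "NAND", or "N-of-M" (with n given);
        inputs is a list of bools, True meaning the input event or gate has occurred.
        """
        if kind == "OR":
            return any(inputs)
        if kind == "AND":
            return all(inputs)
        if kind == "NOR":
            return not any(inputs)
        if kind == "NAND":
            return not all(inputs)
        if kind == "N-of-M":
            return sum(inputs) >= n
        raise ValueError("unknown gate type: " + kind)

    # Top event: both pumps fail, OR at least 2 of 3 sensors fail.
    pumps = [True, True]
    sensors = [True, False, True]
    print(gate("OR", [gate("AND", pumps), gate("N-of-M", sensors, n=2)]))   # True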
|
https://en.wikipedia.org/wiki/SAPHIRE
|
The Therac-25 is a computer-controlled radiation therapy machine produced by Atomic Energy of Canada Limited (AECL) in 1982 after the Therac-6 and Therac-20 units (the earlier units had been produced in partnership with Compagnie générale de radiologie (CGR) of France).[1]
The Therac-25 was involved in at least six accidents between 1985 and 1987, in which some patients were given massive overdoses of radiation.[2]: 425 Because of concurrent programming errors (also known as race conditions), it sometimes gave its patients radiation doses that were hundreds of times greater than normal, resulting in death or serious injury.[3] These accidents highlighted the dangers of software control of safety-critical systems.
The Therac-25 has become a standard case study in health informatics, software engineering, and computer ethics. It highlights the dangers of engineer overconfidence[2]: 428 after the engineers dismissed end-user reports, leading to severe consequences.
The French company CGR manufactured the Neptune and Sagittaire linear accelerators.
In the early 1970s, CGR and the Canadian public company Atomic Energy of Canada Limited (AECL) collaborated on the construction of linear accelerators controlled by a DEC PDP-11 minicomputer: the Therac-6, which produced X-rays of up to 6 MeV, and the Therac-20, which could produce X-rays or electrons of up to 20 MeV. The computer mainly added convenience, since the accelerators could also be operated without it. CGR developed the software for the Therac-6 and reused some subroutines for the Therac-20.[4]
In 1981, the two companies ended their collaboration agreement. AECL developed a new double pass concept for electron acceleration in a more confined space, changing its energy source from klystron to magnetron. In certain techniques, the electrons produced are used directly, while in others they are made to collide against a tungsten anode to produce X-ray beams. This dual accelerator concept was applied to the Therac-20 and Therac-25, with the latter being much more compact, versatile, and easy to use. It was also more economical for a hospital to have a dual machine that could apply treatments of electrons and X-rays, instead of two machines.
The Therac-25 was designed as a machine controlled by a computer, with some safety mechanisms switched from hardware to software as a result. AECL decided not to duplicate some safety mechanisms, and reused modules and code routines from the Therac-20 for the Therac-25.
The first prototype of the Therac-25 was built in 1976 and was put on the market in late 1982.
The software for the Therac-25 was developed by one person over several years using PDP-11 assembly language. It was an evolution of the Therac-6 software. In 1986, the programmer left AECL. In a subsequent lawsuit, lawyers were unable to identify the programmer or learn about his qualifications and experience.
Six machines were installed in Canada and five in the United States.[4]
After the accidents, in 1988 AECL dissolved the AECL Medical section and the company Theratronics International Ltd took over the maintenance of the installed Therac-25 machines.[5]
The machine had three modes of operation, with a turntable moving some apparatus into position for each of those modes: either a light, some scan magnets, or a tungsten target and flattener.[6]
The patient is placed on a fixed stretcher. Above them is a turntable to which the components that modify the electron beam are fixed. The turntable has a position for the X-ray mode (photons), another position for the electron mode and a third position for making adjustments using visible light. In this position an electron beam is not expected, and a light that is reflected in a stainless steel mirror simulates the beam. In this position there is also no ion chamber acting as a radiation dosimeter, because no radiation beam is expected.
The turntable has some microswitches that indicate the position to the computer. When the plate is in one of the three allowed fixed positions a plunger locks it by interlocking. In this type of machine, electromechanical locks were traditionally used to ensure that the turntable was in the correct position before starting treatment. In the Therac-25, these were replaced by software checks.[6]
The six documented accidents occurred when the high-current electron beam generated in X-ray mode was delivered directly to patients. Two software faults were to blame.[6] One was when the operator incorrectly selected X-ray mode before quickly changing to electron mode, which allowed the electron beam to be set for X-ray mode without the X-ray target being in place. A second fault allowed the electron beam to activate during field-light mode, during which no beam scanner was active or target was in place.
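The following toy sketch is not the Therac-25 software; it only illustrates the general class of race condition described above, in which a fast operator edit overtakes a slower setup task, leaving the beam parameters and the turntable in an inconsistent, unsafe combination:

    import threading, time

    mode = "XRAY"        # operator's selected mode (shared state)
    beam_current = None  # set by the slow setup task
    turntable = None     # set by the faster mode-change handling

    def setup_beam():
        global beam_current
        selected = mode              # reads the mode once...
        time.sleep(0.5)              # ...then takes a while to finish
        beam_current = "HIGH" if selected == "XRAY" else "LOW"

    def operator_edit():
        global mode, turntable
        time.sleep(0.1)
        mode = "ELECTRON"            # quick edit before setup_beam finishes
        turntable = "ELECTRON"       # no X-ray target left in the beam path

    setup = threading.Thread(target=setup_beam)
    edit = threading.Thread(target=operator_edit)
    setup.start(); edit.start()
    setup.join(); edit.join()
    print(beam_current, turntable)   # HIGH ELECTRON -- high current with no target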
Previous models had hardware interlocks to prevent such faults, but the Therac-25 had removed them, depending instead on software checks for safety.
The high-current electron beam struck the patients with approximately 100 times the intended dose of radiation, and over a narrower area, delivering a potentially lethal dose of beta radiation. The feeling was described by patient Ray Cox as "an intense electric shock", causing him to scream and run out of the treatment room.[7] Several days later, radiation burns appeared, and the patients showed the symptoms of radiation poisoning; in three cases, the injured patients later died as a result of the overdose.[8]
A Therac-25 had been in operation for six months in Marietta, Georgia at the Kennestone Regional Oncology Center when, on June 3, 1985, radiation therapy following a lumpectomy was being applied to 61-year-old Katie Yarbrough. She was set to receive a 10-MeV dose of electron therapy to her clavicle. When therapy began, she stated she experienced a "tremendous force of heat...this red-hot sensation." The technician entered the room, to whom Katie stated, "you burned me." The technician assured her this was not possible. She returned home where, in the following days, she experienced reddening of the treatment area. Shortly after, her shoulder became locked in place and she experienced spasms. Within two weeks, the aforementioned redness spread from her chest to her back, indicating that the source of the burn had passed through her, which is the case with radiation burns. The staff at the treatment center did not believe it was possible for the Therac-25 to cause such an injury, and it was treated as a symptom of her cancer. Later, the hospital physicist consulted the AECL about the incident. He calculated that the applied dose was between 15,000 and 20,000 rad when she should have been dosed with 200 rad. A dose of 1000 rad can be fatal. In October 1985, Katie sued the hospital and the manufacturer of the machine. In November 1985, the AECL was notified of the lawsuit. It was not until March 1986, after another incident involving the Therac-25, that the AECL informed the FDA that it had received a complaint from the patient.
Due to the radiation overdose, her breast had to be surgically removed, an arm and shoulder were immobilized, and she was in constant pain. The treatment printout function was not activated at the time of treatment and there was no record of the applied radiation data. An out-of-court settlement was reached to resolve the lawsuit.[3]
The Therac-25 had been in operation in the clinic for six months when, on July 26, 1985, a 40-year-old patient was receiving her 24th treatment for cervical cancer. The operator activated the treatment, but after five seconds the machine stopped with the error message "H-tilt", the treatment pause indication and the dosimeter indicating that no radiation had been applied. The operator pressed the P key (Proceed: continue). The machine stopped again. The operator repeated the process five times until the machine stopped the treatment. A technician was called and found no problem. The machine was used to treat six other patients on the same day.
The patient complained of burning and swelling in the area and was hospitalized on July 30. She was suspected of a radiation overdose and the machine was taken out of service. On November 3, 1985, the patient died of cancer, although the autopsy mentioned that if she had not died then, she would have had to undergo a hip replacement due to damage from the radiation overdose. A technician estimated that she received between 13,000 and 17,000 rad.
The incident was reported to the FDA and the Canadian Radiation Protection Bureau.
The AECL suspected that there might be an error with three microswitches that reported the position of the turntable. The AECL was unable to replicate a failure of the microswitches and microswitch testing was inconclusive. They then changed the method to be tolerant of one failure and modified the software to check if the turntable was moving or in the treatment position.
Afterward, the AECL claimed that the modifications represented a five-order-of-magnitude increase in safety.[3]
In December 1985 a woman developed an erythema with a parallel band pattern after receiving treatment from a Therac-25 unit. Hospital staff sent a letter on January 31, 1986, to the AECL about the incident. The AECL responded in two pages detailing the reasons why radiation overdose was impossible on the Therac-25, stating both machine failure and operator error were not possible.
Six months later, the patient developed chronic ulcers under the skin due to tissue necrosis. She had surgery and skin grafts were placed. The patient continued to live with minor sequelae.[3]
Over two years, this hospital treated more than 500 patients with the Therac-25 with no incident. On March 21, 1986, a patient presented for his ninth treatment session for a tumor on his back. The treatment was set to be 22-MeV of electrons with a dose of 180 rad in an area of 10x17 cm, with an accumulated radiation in 6 weeks of 6000 rad.
The experienced operator entered the session data and realized that she had written an “X” for ‘X-ray’ instead of an “E” for ‘electron beam’ as the type of treatment. With the cursor she went up and changed the “X” to an “E” and since the rest of the parameters were correct she pressed ↵ Enter until she got down to the command box. All parameters were marked "Verified" and the message "Rays ready" was displayed. She hit the B key ("Beam on"). The machine stopped and displayed the message "Malfunction 54" (error 54). It also showed 'Treatment pause'. The manual said that the "Malfunction 54" message was a "dose input 2" error. A technician later testified that "dose input 2" meant that the radiation delivered was either too high or too low.
The radiation monitor (dosimeter) marked 6 units supplied when it had demanded 202 units. The operator pressed P (Proceed: continue). The machine stopped again with the message "Malfunction 54" (error 54) and the dosimeter indicated that it had delivered fewer units than required. The surveillance camera in the radiation room was offline and the intercom had been broken that day.
With the first dose the patient felt an electric shock and heard a crackle from the machine. Since it was his ninth session, he realized that it was not normal. He started to get up from the table to ask for help. At that moment the operator pressed P to continue the treatment. The patient felt a shock of electricity through his arm, as if his hand was torn off. He reached the door and began to bang on it until the operator opened it. A physician was immediately called to the scene, where they observed intense erythema in the area, suspecting that it had been a simple electric shock. He sent the patient home. The hospital physicist checked the machine and, because it was calibrated to the correct specification, it continued to treat patients throughout the day. The technicians were unaware that the patient had received a massive dose of radiation, between 16,500 and 25,000 rad, in less than a second over an area of one cm².
Over the following weeks the patient experienced paralysis of the left arm, nausea, vomiting, and ended up being hospitalized for radiation-induced myelitis of the spinal cord. His legs, mid-diaphragm and vocal cords ended up paralyzed. He also had recurrent herpes simplex skin infections. He died five months after the overdose.
From the day after the accident, AECL technicians checked the machine and were unable to replicate error 54. They checked the grounding of the machine to rule out electric shock as the cause. The machine was back in operation on April 7, 1986.[3]
On April 11, 1986, a patient was to receive electron treatment for skin cancer on the face. The prescription was 10 MeV for an area of 7x10 cm. The operator was the same as the one in the March incident, three weeks earlier. After filling in all the treatment data she realized that she had to change the mode from X to E. She did so and pressed ↵ Enter to go down to the command box. As "Beam ready" was displayed, she pressed P (Proceed: continue). The machine produced a loud noise, which was heard through the intercom. Error 54 was displayed. The operator entered the room and the patient described a burning sensation on his face. The patient died on May 1, 1986, just shy of 3 weeks later. The autopsy showed severe radiation damage to the right temporal lobe and brain stem.
The hospital physicist stopped the machine treatments and notified the AECL. After strenuous work, the physicist and operator were able to reproduce the error 54 message. They determined that speed in editing the data entry was a key factor in producing error 54. After much practice, he was able to reproduce the error 54 at will. The AECL stated they could not reproduce the error and they only got it after following the instructions of the physicist so that the data entry was very rapid.[3]
On January 17, 1987, a patient was to receive a treatment with two film-verification exposures of 4 and 3 rad, plus a 79-rad photon treatment for a total exposure of 86 rad. Film was placed under the patient and 4 rad were administered through a 22 cm × 18 cm opening. The machine was stopped, the aperture was opened to 35 cm × 35 cm and a dose of 3 rad was administered. The machine stopped. The operator entered the room to remove the film plates and adjust the patient's position. He used the hand control inside the room to adjust the turntable. He left the room, forgetting the film plates. In the control room, after seeing the "Beam ready" message, he pressed the B key to fire the beams. After 5 seconds the machine stopped and displayed a message that quickly disappeared. Since the machine was paused, the operator pressed P (Proceed: continue). The machine stopped, showing "Flatness" as the reason. The operator heard the patient on the intercom, but could not understand him, and entered the room. The patient had felt a severe burning sensation in his chest. The screen showed that he had only been given 7 rad. A few hours later, the patient showed burns on the skin in the area. Four days later the reddening of the area had a banded pattern similar to that produced in the incident the previous year, and for which they had not found the cause. The AECL began an investigation, but was unable to reproduce the event.
The hospital physicist conducted tests with film plates to see if he could recreate the incident, which involved two X-ray parameters with the turntable in field-light position. The film appeared to match the film that was left by mistake under the patient during the accident. It was found the patient was exposed to between 8,000 and 10,000 rad instead of the prescribed 86 rad. The patient died in April 1987 from complications due to radiation overdose. The relatives filed a lawsuit that ended with an out-of-court settlement.[3]
A commission attributed the primary cause to generally poor software design and development practices, rather than singling out specific coding errors. In particular, the software was designed so that it was realistically impossible to test it in a rigorous, automated way.[48][additional citation(s) needed]
Researchers who investigated the accidents found several contributing causes. These included the following institutional causes:
The researchers also found several engineering issues:
Leveson notes that a lesson to be drawn from the incident is to not assume that reused software is safe:[9] "A naive assumption is often made that reusing software or using commercial off-the-shelf software will increase safety because the software will have been exercised extensively. Reusing software modules does not guarantee safety in the new system to which they are transferred ..."[6] In response to incidents like those associated with Therac-25, the IEC 62304 standard was created, which introduces development life cycle standards for medical device software and specific guidance on using software of unknown pedigree.[10]
|
https://en.wikipedia.org/wiki/Therac-25
|
Zonal Safety Analysis (ZSA) is one of three analytical methods which, taken together, form a Common Cause Analysis (CCA) in aircraft safety engineering under SAE ARP4761.[1] The other two methods are Particular Risks Analysis (PRA) and Common Mode Analysis (CMA). Aircraft system safety requires the independence of failure conditions for multiple systems. Independent failures, represented by an AND gate in a fault tree analysis, have a low probability of occurring in the same flight. Common causes result in the loss of independence, which dramatically increases the probability of failure. CCA and ZSA are used to find and eliminate or mitigate common causes for multiple failures.
ZSA is a method of ensuring that the equipment installations within each zone of an aircraft meet adequate safety standards with respect to design and installation standards, interference between systems, and maintenance errors. In those areas of the aeroplane where multiple systems and components are installed in close proximity, it should be ensured that the zonal analysis would identify any failure or malfunction which by itself is considered sustainable but which could have more serious effects when adversely affecting other adjacent systems or components.[1]
Aircraft manufacturers divide the airframe into zones to support airworthiness regulations, the design process, and to plan and facilitate maintenance. The commonly used aviation standard ATA iSpec 2200, which replaced ATA Spec 100, contains guidelines for determining airplane zones and their numbering. Some manufacturers use ASD S1000D for the same purpose. The zones and subzones generally relate to physical barriers in the aircraft. A typical zone map for a small transport aircraft is shown.[2]
Aircraft zones differ in usage, pressurization, temperature range, exposure to severe weather and lightning strikes, and the hazards contained such as ignition sources, flammable fluids, flammable vapors, or rotating machines. Accordingly, installation rules differ by zone. For example, installation requirements for wiring depend on whether it is installed in a fire zone, rotor burst zone, or cargo area.
ZSA includes verification that a system's equipment and interconnecting wires, cables, and hydraulic and pneumatic lines are installed in accordance with defined installation rules and segregation requirements. ZSA evaluates the potential for equipment interference. It also considers failure modes and maintenance errors that could have a cascading effect on systems,[3] such as:
Potential problems are identified and tracked for resolution. For example, if redundant channels of a data bus were routed through an area where rotor burst fragments could result in loss of all channels, at least one channel should be rerouted.
On July 19, 1989, United Airlines Flight 232, a McDonnell Douglas DC-10-10, experienced an uncontained failure of its No. 2 engine stage 1 fan rotor disk assembly. The engine fragments severed the No. 1 and No. 3 hydraulic system lines. Forces from the engine failure fractured the No. 2 hydraulic system line. With the loss of all three hydraulic-powered flight control systems, safe landing was impossible. The lack of independence of the three hydraulic systems, although physically isolated, left them vulnerable to a single failure event due to their close proximity to one another. This was a zonal hazard. The aircraft crashed after diversion to Sioux Gateway Airport in Sioux City, Iowa, with 111 fatalities, 47 serious injuries and 125 minor injuries.[4][5][6]
On August 12, 1985, Japan Air Lines Flight 123, a Boeing 747-SR100, experienced cabin decompression 12 minutes after takeoff from Haneda Airport in Tokyo, Japan, at 24,000 feet. The decompression was caused by failure of a previously repaired aft pressure bulkhead. Cabin air rushed into the unpressurized fuselage cavity, overpressurizing the area and causing failure of the auxiliary power unit (APU) firewall and the supporting structure for the vertical fin. The vertical fin separated from the airplane. Hydraulic components located in the aft body were also severed, leading to a rapid depletion of all four hydraulic systems. The loss of the vertical fin, coupled with the loss of all four hydraulic systems, left the airplane extremely difficult, if not impossible, to control in all three axes. Lack of independence of the four hydraulic systems from a single failure event was a zonal hazard. The aircraft struck a mountain forty-six minutes after takeoff with 520 fatalities and 4 survivors.[7]
|
https://en.wikipedia.org/wiki/Zonal_safety_analysis
|
In computer programming, machine code is computer code consisting of machine language instructions, which are used to control a computer's central processing unit (CPU). For conventional binary computers, machine code is the binary[nb 1] representation of a computer program that is actually read and interpreted by the computer. A program in machine code consists of a sequence of machine instructions (possibly interspersed with data).[1]
Each machine code instruction causes the CPU to perform a specific task. Examples of such tasks include:
In general, each architecture family (e.g., x86, ARM) has its own instruction set architecture (ISA), and hence its own specific machine code language. There are exceptions, such as the VAX architecture, which includes optional support of the PDP-11 instruction set; the IA-64 architecture, which includes optional support of the IA-32 instruction set; and the PowerPC 615 microprocessor, which can natively process both PowerPC and x86 instruction sets.
Machine code is a strictly numerical language, and it is the lowest-level interface to the CPU intended for a programmer. Assembly language provides a direct map between the numerical machine code and a human-readable mnemonic. In assembly, numerical opcodes and operands are replaced with mnemonics and labels. For example, the x86 architecture has available the 0x90 opcode; it is represented as NOP in the assembly source code. While it is possible to write programs directly in machine code, managing individual bits and calculating numerical addresses is tedious and error-prone. Therefore, programs are rarely written directly in machine code. However, an existing machine code program may be edited if the assembly source code is not available.
The majority of programs today are written in a high-level language. A high-level program may be translated into machine code by a compiler.
Every processor or processor family has its own instruction set. Machine instructions are patterns of bits[nb 2] that specify some particular action.[2] An instruction set is described by its instruction format. Some ways in which instruction formats may differ:[2]
A processor's instruction set needs to execute the circuits of a computer's digital logic level. At the digital level, the program needs to control the computer's registers, bus, memory, ALU, and other hardware components.[3] To control a computer's architectural features, machine instructions are created. Examples of features that are controlled using machine instructions:
The criteria for instruction formats include:
Determining the size of the address field is a choice between space and speed.[7] On some computers, the number of bits in the address field may be too small to access all of the physical memory. Also, virtual address space needs to be considered. Another constraint may be a limitation on the size of registers used to construct the address. Whereas a shorter address field allows the instructions to execute more quickly, other physical properties need to be considered when designing the instruction format.
Instructions can be separated into two types: general-purpose and special-purpose. Special-purpose instructions exploit architectural features that are unique to a computer. General-purpose instructions control architectural features common to all computers.[8]
General-purpose instructions control:
A much more human-friendly rendition of machine language, named assembly language, uses mnemonic codes to refer to machine code instructions, rather than using the instructions' numeric values directly, and uses symbolic names to refer to storage locations and sometimes registers.[9] For example, on the Zilog Z80 processor, the machine code 00000101, which causes the CPU to decrement the B general-purpose register, would be represented in assembly language as DEC B.[10]
The IBM 704, 709, 704x and 709x store one instruction in each instruction word; IBM numbers the bit from the left as S, 1, ..., 35. Most instructions have one of two formats:
For all but the IBM 7094 and 7094 II, there are three index registers designated A, B and C; indexing with multiple 1 bits in the tag subtracts the logical or of the selected index registers and loading with multiple 1 bits in the tag loads all of the selected index registers. The 7094 and 7094 II have seven index registers, but when they are powered on they are in multiple tag mode, in which they use only three of the index registers in a fashion compatible with earlier machines, and require a Leave Multiple Tag Mode (LMTM) instruction in order to access the other four index registers.
The effective address is normally Y-C(T), where C(T) is either 0 for a tag of 0, the logical or of the selected index registers in multiple tag mode or the selected index register if not in multiple tag mode. However, the effective address for index register control instructions is just Y.
A flag with both bits 1 selects indirect addressing; the indirect address word has both a tag and a Y field.
In addition to transfer (branch) instructions, these machines have skip instructions that conditionally skip one or two words, e.g., Compare Accumulator with Storage (CAS) does a three-way compare and conditionally skips to NSI, NSI+1 or NSI+2, depending on the result.
The MIPS architecture provides a specific example for a machine code whose instructions are always 32 bits long.[11]: 299 The general type of instruction is given by the op (operation) field, the highest 6 bits. J-type (jump) and I-type (immediate) instructions are fully specified by op. R-type (register) instructions include an additional field funct to determine the exact operation. The fields used in these types are:
rs, rt, and rd indicate register operands; shamt gives a shift amount; and the address or immediate fields contain an operand directly.[11]: 299–301
For example, adding the registers 1 and 2 and placing the result in register 6 is encoded:[11]: 554
Load a value into register 8, taken from the memory cell 68 cells after the location listed in register 3:[11]: 552
Jumping to the address 1024:[11]: 552
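The three examples above can be reproduced with a short sketch that packs the fields into 32-bit words; it assumes the standard MIPS32 layout (op 6 bits, rs/rt/rd 5 bits each, shamt 5 bits, funct 6 bits; a 16-bit immediate for I-type; a 26-bit word address for J-type), and the hexadecimal values shown are computed by the sketch itself rather than quoted from the cited reference:

    def r_type(op, rs, rt, rd, shamt, funct):
        return (op << 26) | (rs << 21) | (rt << 16) | (rd << 11) | (shamt << 6) | funct

    def i_type(op, rs, rt, imm):
        return (op << 26) | (rs << 21) | (rt << 16) | (imm & 0xFFFF)

    def j_type(op, target):
        return (op << 26) | ((target >> 2) & 0x3FFFFFF)   # field holds the word address

    print(hex(r_type(0x00, 1, 2, 6, 0, 0x20)))   # add $6, $1, $2 -> 0x223020
    print(hex(i_type(0x23, 3, 8, 68)))           # lw $8, 68($3)  -> 0x8c680044
    print(hex(j_type(0x02, 1024)))               # j 1024         -> 0x8000100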
On processor architectures with variable-length instruction sets[12] (such as Intel's x86 processor family) it is, within the limits of the control-flow resynchronizing phenomenon known as the Kruskal count,[13][12][14][15][16] sometimes possible through opcode-level programming to deliberately arrange the resulting code so that two code paths share a common fragment of opcode sequences.[nb 3] These are called overlapping instructions, overlapping opcodes, overlapping code, overlapped code, instruction scission, or jump into the middle of an instruction.[17][18][19]
In the 1970s and 1980s, overlapping instructions were sometimes used to preserve memory space. One example was the implementation of error tables in Microsoft's Altair BASIC, where interleaved instructions mutually shared their instruction bytes.[20][12][17] The technique is rarely used today, but it might still be necessary in areas where extreme optimization for size is required at the byte level, such as in the implementation of boot loaders which have to fit into boot sectors.[nb 4]
It is also sometimes used as a code obfuscation technique as a measure against disassembly and tampering.[12][15]
The principle is also used in shared code sequences of fat binaries which must run on multiple instruction-set-incompatible processor platforms.[nb 3]
This property is also used to find unintended instructions called gadgets in existing code repositories and is used in return-oriented programming as an alternative to code injection for exploits such as return-to-libc attacks.[21][12]
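As a toy illustration of instruction scission (not drawn from any of the systems cited above), the sketch below walks the same x86 byte string from two different starting offsets using a tiny length decoder that knows only three opcodes; starting one byte later yields a completely different instruction stream:

    # Minimal length/mnemonic tables for just three 32-bit-mode x86 opcodes.
    LENGTHS = {0x05: 5,    # add eax, imm32 (opcode byte + 4 immediate bytes)
               0x90: 1,    # nop
               0xC3: 1}    # ret
    NAMES = {0x05: "add eax, imm32", 0x90: "nop", 0xC3: "ret"}

    def decode(code, offset):
        """List the instructions found by decoding `code` starting at `offset`."""
        out = []
        while offset < len(code):
            op = code[offset]
            out.append(NAMES[op])
            offset += LENGTHS[op]
            if op == 0xC3:           # stop at the return
                break
        return out

    code = bytes([0x05, 0x90, 0x90, 0x90, 0x90, 0xC3])
    print(decode(code, 0))   # ['add eax, imm32', 'ret']
    print(decode(code, 1))   # ['nop', 'nop', 'nop', 'nop', 'ret']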
In some computers, the machine code of the architecture is implemented by an even more fundamental underlying layer called microcode, providing a common machine language interface across a line or family of different models of computer with widely different underlying dataflows. This is done to facilitate porting of machine language programs between different models. An example of this use is the IBM System/360 family of computers and their successors.
Machine code is generally different from bytecode (also known as p-code), which is either executed by an interpreter or itself compiled into machine code for faster (direct) execution. An exception is when a processor is designed to use a particular bytecode directly as its machine code, such as is the case with Java processors.
Machine code and assembly code are sometimes called native code when referring to platform-dependent parts of language features or libraries.[22]
From the point of view of the CPU, machine code is stored in RAM, but is typically also kept in a set of caches for performance reasons. There may be different caches for instructions and data, depending on the architecture.
The CPU knows what machine code to execute, based on its internal program counter. The program counter points to a memory address and is changed based on special instructions which may cause programmatic branches. The program counter is typically set to a hard coded value when the CPU is first powered on, and will hence execute whatever machine code happens to be at this address.
Similarly, the program counter can be set to execute whatever machine code is at some arbitrary address, even if this is not valid machine code. This will typically trigger an architecture specific protection fault.
The CPU is oftentimes told, by page permissions in a paging-based system, if the current page actually holds machine code by an execute bit — pages have multiple such permission bits (readable, writable, etc.) for various housekeeping functionality. E.g. on Unix-like systems memory pages can be toggled to be executable with the mprotect() system call, and on Windows, VirtualProtect() can be used to achieve a similar result. If an attempt is made to execute machine code on a non-executable page, an architecture-specific fault will typically occur. Treating data as machine code, or finding new ways to use existing machine code, by various techniques, is the basis of some security vulnerabilities.
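On a Linux x86-64 system the interplay between page permissions and machine code can be demonstrated from a high-level language; the sketch below is a hedged illustration (details such as W^X policies and calling conventions vary by platform): the bytes encode mov eax, 42 followed by ret, and the call only works because the anonymous mapping is created with PROT_EXEC.

    import ctypes, mmap

    # x86-64 machine code for a function that returns 42: mov eax, 42 ; ret
    code = b"\xb8\x2a\x00\x00\x00\xc3"

    buf = mmap.mmap(-1, len(code),
                    prot=mmap.PROT_READ | mmap.PROT_WRITE | mmap.PROT_EXEC)
    buf.write(code)

    addr = ctypes.addressof(ctypes.c_char.from_buffer(buf))
    func = ctypes.CFUNCTYPE(ctypes.c_int)(addr)   # treat the mapped page as code
    print(func())   # 42; without PROT_EXEC this call would fault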
Similarly, in a segment-based system, segment descriptors can indicate whether a segment can contain executable code and in what rings that code can run.
From the point of view of a process, the code space is the part of its address space where the code in execution is stored. In multitasking systems this comprises the program's code segment and usually shared libraries. In a multi-threading environment, different threads of one process share code space along with data space, which reduces the overhead of context switching considerably as compared to process switching.
Machine code can be seen as a set of electrical pulses that make the instructions readable to the computer; it is not readable by humans,[23] with Douglas Hofstadter comparing it to examining the atoms of a DNA molecule.[24] However, various tools and methods exist to decode machine code to human-readable source code. One such method is disassembly, which easily decodes it back to its corresponding assembly language source code because assembly language forms a one-to-one mapping to machine code.[25]
Machine code may also be decoded to a high-level language under two conditions. The first condition is to accept an obfuscated reading of the source code. An obfuscated version of source code is displayed if the machine code is sent to a decompiler of the source language. The second condition requires the machine code to have information about the source code encoded within. The information includes a symbol table that contains debug symbols. The symbol table may be stored within the executable, or it may exist in separate files. A debugger can then read the symbol table to help the programmer interactively debug the machine code in execution.
|
https://en.wikipedia.org/wiki/Overlapping_code
|
A polymorphic engine (sometimes called a mutation engine or mutating engine) is a software component that uses polymorphic code to alter the payload while preserving the same functionality.
Polymorphic engines are used almost exclusively in malware, with the purpose of being harder for antivirus software to detect. They do so either by encrypting or obfuscating the malware payload.
One common deployment is a file binder that weaves malware into normal files, such as office documents. Since this type of malware is usually polymorphic, it is also known as a polymorphic packer.
The engine of the Virut botnet is an example of a polymorphic engine.[1]
|
https://en.wikipedia.org/wiki/Polymorphic_engine
|
In computing, a persistent data structure or not ephemeral data structure is a data structure that always preserves the previous version of itself when it is modified. Such data structures are effectively immutable, as their operations do not (visibly) update the structure in-place, but instead always yield a new updated structure. The term was introduced in Driscoll, Sarnak, Sleator, and Tarjan's 1986 article.[1]
A data structure is partially persistent if all versions can be accessed but only the newest version can be modified. The data structure is fully persistent if every version can be both accessed and modified. If there is also a meld or merge operation that can create a new version from two previous versions, the data structure is called confluently persistent. Structures that are not persistent are called ephemeral.[2]
These types of data structures are particularly common in logical and functional programming,[2] as languages in those paradigms discourage (or fully forbid) the use of mutable data.
In the partial persistence model, a programmer may query any previous version of a data structure, but may only update the latest version. This implies a linear ordering among each version of the data structure.[3] In the fully persistent model, both updates and queries are allowed on any version of the data structure. In some cases the performance characteristics of querying or updating older versions of a data structure may be allowed to degrade, as is true with the rope data structure.[4] In addition, a data structure can be referred to as confluently persistent if, in addition to being fully persistent, two versions of the same data structure can be combined to form a new version which is still fully persistent.[5]
A type of data structure where the user may query any version of the structure but may only update the latest version.
An ephemeral data structure can be converted to a partially persistent data structure using a few techniques.
One technique is to use a randomized version of a Van Emde Boas tree, created using dynamic perfect hashing. This data structure is created as follows:
The size of this data structure is bounded by the number of elements stored in the structure, that is, O(m). The insertion of a new maximal element is done in constant O(1) expected and amortized time. Finally, a query to find an element can be done in this structure in O(log(log n)) worst-case time.[6]
One method for creating a persistent data structure is to use a platform-provided ephemeral data structure such as an array to store the data in the data structure and copy the entirety of that data structure using copy-on-write semantics for any updates to the data structure. This is an inefficient technique because the entire backing data structure must be copied for each write, leading to worst-case O(n·m) performance characteristics for m modifications of an array of size n.[citation needed]
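A minimal sketch of this naive copy-on-write approach: every "update" returns a brand-new list, so each earlier version stays readable, at the cost of copying all n elements per write:

    def cow_set(version, index, value):
        """Return a new version of the array with one element changed.

        The old version is left untouched and remains readable; the price is a
        full O(n) copy of the backing array on every modification.
        """
        new_version = list(version)
        new_version[index] = value
        return new_version

    v0 = [1, 2, 3]
    v1 = cow_set(v0, 1, 99)
    print(v0, v1)   # [1, 2, 3] [1, 99, 3] -- both versions persist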
The fat node method is to record all changes made to node fields in the nodes themselves, without erasing old values of the fields. This requires that nodes be allowed to become arbitrarily “fat”. In other words, each fat node contains the same information and pointer fields as an ephemeral node, along with space for an arbitrary number of extra field values. Each extra field value has an associated field name and a version stamp which indicates the version in which the named field was changed to have the specified value. Besides, each fat node has its own version stamp, indicating the version in which the node was created. The only purpose of nodes having version stamps is to make sure that each node only contains one value per field name per version. In order to navigate through the structure, each original field value in a node has a version stamp of zero.
With the fat node method, each modification requires O(1) space: just store the new data. Each modification takes O(1) additional time to store the modification at the end of the modification history. This is an amortized time bound, assuming the modification history is stored in a growable array. At access time, the right version at each node must be found as the structure is traversed. If m modifications were to be made, then each access operation would have O(log m) slowdown resulting from the cost of finding the nearest modification in the array.
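A sketch of the fat-node idea for a single field (the class and method names are made up for illustration): the full history of (version stamp, value) pairs is kept inside the node, writes append in O(1), and reads binary-search for the newest entry not newer than the requested version:

    import bisect

    class FatField:
        """One node field whose entire modification history lives in the node."""

        def __init__(self, created_version, value):
            # Parallel lists: sorted version stamps and the values set at them.
            self.versions = [created_version]
            self.values = [value]

        def set(self, version, value):
            """O(1): append the new value together with its version stamp."""
            self.versions.append(version)
            self.values.append(value)

        def get(self, version):
            """O(log m): latest value whose stamp is <= the requested version."""
            i = bisect.bisect_right(self.versions, version) - 1
            return self.values[i]

    field = FatField(0, "a")
    field.set(3, "b")
    print(field.get(2), field.get(5))   # a b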
With the path copying method a copy of all nodes is made on the path to any node which is about to be modified. These changes must then be cascaded back through the data structure: all nodes that pointed to the old node must be modified to point to the new node instead. These modifications cause more cascading changes, and so on, until the root node is reached.
With m modifications, this costs O(log m) additive lookup time. Modification time and space are bounded by the size of the longest path in the data structure and the cost of the update in the ephemeral data structure. In a balanced binary search tree without parent pointers the worst-case modification time complexity is O(log n + update cost). However, in a linked list the worst-case modification time complexity is O(n + update cost).
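A minimal path-copying sketch for a persistent binary search tree (illustrative only): insertion copies just the nodes on the root-to-leaf path and shares every untouched subtree with the previous version, so older roots remain valid:

    class Node:
        def __init__(self, key, left=None, right=None):
            self.key, self.left, self.right = key, left, right

    def insert(root, key):
        """Return the root of a new version; the old version stays intact."""
        if root is None:
            return Node(key)
        if key < root.key:
            # Copy this node and share the untouched right subtree.
            return Node(root.key, insert(root.left, key), root.right)
        # Copy this node and share the untouched left subtree.
        return Node(root.key, root.left, insert(root.right, key))

    v0 = None
    for k in [5, 3, 8]:
        v0 = insert(v0, k)
    v1 = insert(v0, 4)                        # a new version of the tree
    print(v0.left.right, v1.left.right.key)   # None 4 -- v0 is unchanged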
Driscoll, Sarnak, Sleator, and Tarjan came up[1] with a way to combine the techniques of fat nodes and path copying, achieving O(1) access slowdown and O(1) modification space and time complexity.
In each node, one modification box is stored. This box can hold one modification to the node—either a modification to one of the pointers, or to the node's key, or to some other piece of node-specific data—and a timestamp for when that modification was applied. Initially, every node's modification box is empty.
Whenever a node is accessed, the modification box is checked, and its timestamp is compared against the access time. (The access time specifies the version of the data structure being considered.) If the modification box is empty, or the access time is before the modification time, then the modification box is ignored and only the normal part of the node is considered. On the other hand, if the access time is after the modification time, then the value in the modification box is used, overriding that value in the node.
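A sketch of just this read path, with one modification box per node (the field names are hypothetical): the boxed value overrides the node's normal field whenever the access time is at or after the box's timestamp:

    class BoxedNode:
        """Node with ordinary fields plus a single (field, value, time) box."""

        def __init__(self, fields):
            self.fields = dict(fields)   # the "normal part" of the node
            self.box = None              # None, or (field_name, value, mod_time)

        def read(self, field_name, access_time):
            """Return the field as seen by the version at `access_time`."""
            if self.box is not None:
                boxed_field, boxed_value, mod_time = self.box
                if boxed_field == field_name and access_time >= mod_time:
                    return boxed_value   # the boxed value overrides the normal one
            return self.fields[field_name]

    n = BoxedNode({"left": None, "key": 7})
    n.box = ("key", 9, 5)                      # at time 5 the key was changed to 9
    print(n.read("key", 3), n.read("key", 6))  # 7 9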
Modifying a node works like this. (It is assumed that each modification touches one pointer or similar field.) If the node's modification box is empty, then it is filled with the modification. Otherwise, the modification box is full. A copy of the node is made, but using only the latest values. The modification is performed directly on the new node, without using the modification box. (One of the new node's fields is overwritten and its modification box stays empty.) Finally, this change is cascaded to the node's parent, just like path copying. (This may involve filling the parent's modification box, or making a copy of the parent recursively. If the node has no parent—it's the root—the new root is added to a sorted array of roots.)
With this algorithm, given any time t, at most one modification box exists in the data structure with time t. Thus, a modification at time t splits the tree into three parts: one part contains the data from before time t, one part contains the data from after time t, and one part was unaffected by the modification.
Time and space for modifications require amortized analysis. A modification takes O(1) amortized space, and O(1) amortized time. To see why, use a potential function ϕ, where ϕ(T) is the number of full live nodes in T. The live nodes of T are just the nodes that are reachable from the current root at the current time (that is, after the last modification). The full live nodes are the live nodes whose modification boxes are full.
Each modification involves some number of copies, say k, followed by 1 change to a modification box. Consider each of the k copies. Each costs O(1) space and time, but decreases the potential function by one. (First, the node to be copied must be full and live, so it contributes to the potential function. The potential function will only drop, however, if the old node isn't reachable in the new tree. But it is known that it isn't reachable in the new tree—the next step in the algorithm will be to modify the node's parent to point at the copy. Finally, it is known that the copy's modification box is empty. Thus, a full live node has been replaced with an empty live node, and ϕ goes down by one.) The final step fills a modification box, which costs O(1) time and increases ϕ by one.
Putting it all together, the change in ϕ is Δϕ = 1 − k. Thus, the algorithm takes O(k + Δϕ) = O(1) amortized space and O(k + Δϕ + 1) = O(1) amortized time.
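As a rough illustration, here is a minimal sketch in Python of a node carrying one modification box. It is not the structure from the paper: it models only a single value field and a single next pointer, and it leaves the cascade to the parent and the sorted array of roots to the caller.

    class Node:
        def __init__(self, value, nxt=None):
            self.value = value
            self.next = nxt
            self.box = None                      # (version, field, new_value) or None

        def get(self, field, version):
            """Read a field as it appeared at the given version."""
            if self.box is not None:
                box_version, box_field, box_value = self.box
                if box_field == field and version >= box_version:
                    return box_value             # the boxed change applies
            return getattr(self, field)          # otherwise use the plain field

        def set(self, field, value, version):
            """Record a change made at `version`; return the node that now
            represents this position (self, or a fresh copy if the box was taken)."""
            if self.box is None:
                self.box = (version, field, value)
                return self
            # Box already full: copy the node with its latest values, apply the
            # change directly, and leave the copy's modification box empty.
            copy = Node(self.get("value", version), self.get("next", version))
            setattr(copy, field, value)
            return copy                          # caller must make the parent point here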
Path copying is one of the simple methods to achieve persistence in a certain data structure such as binary search trees. It is nice to have a general strategy for implementing persistence that works with any given data structure. In order to achieve that, we consider a directed graph G. We assume that each vertex v in G has a constant number c of outgoing edges that are represented by pointers. Each vertex has a label representing the data. We consider that a vertex has a bounded number d of edges leading into it, which we define as inedges(v). We allow the following different operations on G.
Any of the above operations is performed at a specific time, and the purpose of the persistent graph representation is to be able to access any version of G at any given time. For this purpose we define a table for each vertex v in G. The table contains c columns and d + 1 rows. Each row contains, in addition to the pointers for the outgoing edges, a label which represents the data at the vertex and a time t at which the operation was performed. In addition there is an array inedges(v) that keeps track of all the incoming edges to v. When a table is full, a new table with d + 1 rows can be created. The old table becomes inactive and the new table becomes the active table.
A call to CREATE-NODE creates a new table and sets all the references to null.
If we assume that CHANGE-EDGE(v,i,u) is called, then there are two cases to consider.
CHANGE-LABEL works exactly the same as CHANGE-EDGE, except that instead of changing the ith edge of the vertex, we change the label.
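A much-simplified sketch of this vertex representation follows (my own illustration: the table-overflow handling, the inedge updates on overflow, and the credit accounting discussed below are all omitted). Each vertex stores one timestamped row per change, so any earlier version can be read back.

    import bisect

    class Vertex:
        def __init__(self, label, c, time=0):
            # each row: (time, label, tuple of c outgoing pointers)
            self.rows = [(time, label, (None,) * c)]
            self.inedges = []            # vertices with an edge into this one

        def _version_at(self, time):
            times = [t for t, _, _ in self.rows]
            i = bisect.bisect_right(times, time) - 1   # the logarithmic lookup noted below
            if i < 0:
                raise ValueError("vertex did not exist at this time")
            return self.rows[i]

        def label_at(self, time):
            return self._version_at(time)[1]

        def edge_at(self, i, time):
            return self._version_at(time)[2][i]

        def change_label(self, new_label, time):
            _, _, edges = self.rows[-1]
            self.rows.append((time, new_label, edges))

        def change_edge(self, i, target, time):
            _, label, edges = self.rows[-1]
            new_edges = edges[:i] + (target,) + edges[i + 1:]
            if target is not None:
                target.inedges.append(self)   # duplicates are not pruned in this sketch
            self.rows.append((time, label, new_edges))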
In order to analyze the efficiency of the scheme proposed above, we use an argument based on a credit scheme. A credit represents a currency; for example, a credit can be used to pay for a table. The argument states the following:
The credit scheme should always satisfy the following invariant: each row of each active table stores one credit, and the table has the same number of credits as the number of rows. Let us confirm that the invariant applies to all three operations CREATE-NODE, CHANGE-EDGE and CHANGE-LABEL.
In summary, we conclude that n1 calls to CREATE-NODE and n2 calls to CHANGE-EDGE will result in the creation of 2·n1 + n2 tables. Since each table has size O(d) without taking into account the recursive calls, filling in a table requires O(d^2), where the additional factor of d comes from updating the inedges at other nodes. Therefore, the amount of work required to complete a sequence of operations is bounded by the number of tables created multiplied by O(d^2). Each access operation can be done in O(log d), and there are m edge and label operations, so they require m·O(log d). We conclude that there exists a data structure that can complete any sequence of n CREATE-NODE, CHANGE-EDGE and CHANGE-LABEL operations in O(n·d^2) + m·O(log d).
One of the useful applications that can be solved efficiently using persistence is the next element search. Assume that there are n non-intersecting line segments parallel to the x-axis. We want to build a data structure that can be queried with a point p and return the segment above p (if any). We will start by solving the next element search using the naïve method, then show how to solve it using the persistent data structure method.
We start with a vertical line segment that starts off at infinity and sweep the line segments from left to right, pausing every time we encounter an endpoint of a segment. The vertical lines split the plane into vertical strips. If there are n line segments, we get at most 2·n + 1 vertical strips, since each segment has 2 endpoints. No segment begins and ends inside a strip: every segment either does not touch the strip or completely crosses it. We can think of the segments as objects in some sorted order from top to bottom; what we care about is where the query point fits in this order. We sort the endpoints of the segments by their x coordinate. For each strip s_i, we store the subset of segments that cross s_i in a dictionary. When the vertical line sweeps the line segments, whenever it passes over the left endpoint of a segment we add it to the dictionary, and when it passes over the right endpoint we remove it. At every endpoint we save a copy of the dictionary, and we store all the copies sorted by the x coordinates. Thus we have a data structure that can answer any query. In order to find the segment above a point p, we look at the x coordinate of p to know which copy or strip it belongs to, and then at the y coordinate to find the segment above it. Thus we need two binary searches, one on the x coordinate to find the strip or copy, and another on the y coordinate to find the segment above it, so the query time is O(log n). The problem with this data structure is space: if we assume that the segments are structured in a way such that every segment starts before the end of any other segment, the space required for the structure built using the naïve method is O(n^2). Let us see how we can build another persistent data structure with the same query time but with better space.
Notice that what really takes time in the naïve data structure is that whenever we move from one strip to the next, we take a snapshot of whatever data structure we use to keep things in sorted order. Once we have the segments that intersect s_i, when we move to s_{i+1} either one segment leaves or one segment enters. If the difference between what is in s_i and what is in s_{i+1} is only one insertion or deletion, it is not a good idea to copy everything from s_i to s_{i+1}. The trick is that, since each copy differs from the previous one by only one insertion or deletion, we need to copy only the parts that change. Assume that we have a tree rooted at T. When we insert a key k into the tree, we create a new leaf containing k. Performing rotations to rebalance the tree will only modify the nodes on the path from k to T. Before inserting the key k into the tree, we copy all the nodes on that path. Now we have 2 versions of the tree: the original one, which does not contain k, and the new tree, which contains k and whose root is a copy of the root of T. Since copying the path from k to T does not increase the insertion time by more than a constant factor, insertion in the persistent data structure takes O(log n) time. For deletion, we need to find which nodes will be affected by the deletion. For each node v affected by the deletion, we copy the path from the root to v. This provides a new tree whose root is a copy of the root of the original tree. Then we perform the deletion on the new tree. We end up with 2 versions of the tree: the original one, which contains k, and the new one, which does not contain k. Since any deletion only modifies the path from the root to v and any appropriate deletion algorithm runs in O(log n), deletion in the persistent data structure takes O(log n). Every sequence of insertions and deletions causes the creation of a sequence of dictionaries (versions, or trees) S_1, S_2, ..., S_i, where each S_i is the result of applying the first i operations. If each S_i contains m elements, then a search in S_i takes O(log m). Using this persistent data structure we can solve the next element search problem in O(log n) query time and O(n·log n) space instead of O(n^2). A sketch of source code for the next element search problem is given below.
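The following is a compact, illustrative sketch of this approach in Python (my own code, not a canonical listing): it uses an unbalanced path-copied binary search tree keyed by the segment's y coordinate, and the tuple layout and names are assumptions made for the example. A balanced tree would give the stated bounds.

    from bisect import bisect_right

    class Node:
        __slots__ = ("y", "seg", "left", "right")
        def __init__(self, y, seg, left=None, right=None):
            self.y, self.seg, self.left, self.right = y, seg, left, right

    def insert(root, y, seg):
        # Path copying: only the nodes on the root-to-leaf path are copied.
        if root is None:
            return Node(y, seg)
        if y < root.y:
            return Node(root.y, root.seg, insert(root.left, y, seg), root.right)
        return Node(root.y, root.seg, root.left, insert(root.right, y, seg))

    def delete(root, y):
        if root is None:
            return None
        if y < root.y:
            return Node(root.y, root.seg, delete(root.left, y), root.right)
        if y > root.y:
            return Node(root.y, root.seg, root.left, delete(root.right, y))
        if root.left is None:
            return root.right
        if root.right is None:
            return root.left
        succ = root.right                       # in-order successor
        while succ.left is not None:
            succ = succ.left
        return Node(succ.y, succ.seg, root.left, delete(root.right, succ.y))

    def segment_above(root, y):
        """Segment with the smallest key strictly greater than y, or None."""
        best = None
        while root is not None:
            if root.y > y:
                best, root = root.seg, root.left
            else:
                root = root.right
        return best

    def build(segments):
        """segments: iterable of (x1, x2, y). Returns event xs and one tree version per event."""
        events = []
        for seg in segments:
            x1, x2, y = seg
            events.append((x1, 1, y, seg))      # 1 = left endpoint: insert
            events.append((x2, 0, y, seg))      # 0 = right endpoint: delete (removals sort first on ties)
        events.sort()
        xs, versions, root = [], [], None
        for x, is_insert, y, seg in events:
            root = insert(root, y, seg) if is_insert else delete(root, y)
            xs.append(x)
            versions.append(root)
        return xs, versions

    def query(xs, versions, px, py):
        i = bisect_right(xs, px) - 1            # last version at or before px
        return segment_above(versions[i], py) if i >= 0 else None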
Perhaps the simplest persistent data structure is the singly linked list or cons-based list, a simple list of objects formed by each carrying a reference to the next in the list. This is persistent because the tail of the list can be taken, meaning the last k items for some k, and new nodes can be added in front of it. The tail will not be duplicated, instead becoming shared between both the old list and the new list. So long as the contents of the tail are immutable, this sharing will be invisible to the program.
Many common reference-based data structures, such as red–black trees,[7] stacks,[8] and treaps,[9] can easily be adapted to create a persistent version. Some others need slightly more effort, for example: queues, dequeues, and extensions including min-deques (which have an additional O(1) operation min returning the minimal element) and random access deques (which have an additional operation of random access with sub-linear, most often logarithmic, complexity).
There also exist persistent data structures which use destructive operations, making them impossible to implement efficiently in purely functional languages (like Haskell outside specialized monads like state or IO), but possible in languages like C or Java. These types of data structures can often be avoided with a different design. One primary advantage to using purely persistent data structures is that they often behave better in multi-threaded environments.
Singly linked lists are the bread-and-butter data structure in functional languages.[10] Some ML-derived languages, like Haskell, are purely functional because once a node in the list has been allocated, it cannot be modified, only copied, referenced or destroyed by the garbage collector when nothing refers to it. (Note that ML itself is not purely functional, but supports a non-destructive subset of list operations; the same is true in the Lisp (LISt Processing) functional language dialects like Scheme and Racket.)
Consider the two lists:
These would be represented in memory by:
where a circle indicates a node in the list (the arrow out representing the second element of the node which is a pointer to another node).
Now concatenating the two lists:
results in the following memory structure:
Notice that the nodes in list xs have been copied, but the nodes in ys are shared. As a result, the original lists (xs and ys) persist and have not been modified.
The reason for the copy is that the last node in xs (the node containing the original value 2) cannot be modified to point to the start of ys, because that would change the value of xs.
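A small sketch of this concatenation in Python (the list contents and helper names are chosen for illustration): append copies the nodes of xs but reuses the nodes of ys unchanged.

    class Cons:
        __slots__ = ("head", "tail")
        def __init__(self, head, tail=None):
            self.head, self.tail = head, tail

    def from_list(items):
        node = None
        for item in reversed(items):
            node = Cons(item, node)
        return node

    def append(xs, ys):
        if xs is None:
            return ys                     # ys is shared, never copied
        return Cons(xs.head, append(xs.tail, ys))

    xs = from_list([0, 1, 2])
    ys = from_list([3, 4, 5])
    zs = append(xs, ys)

    # xs and ys are unchanged; zs's last three nodes are literally ys's nodes.
    assert zs.tail.tail.tail is ys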
Consider a binary search tree,[10] where every node in the tree has the recursive invariant that all subnodes contained in the left subtree have a value that is less than or equal to the value stored in the node, and subnodes contained in the right subtree have a value that is greater than the value stored in the node.
For instance, the set of data
might be represented by the following binary search tree:
A function which inserts data into the binary tree and maintains the invariant is:
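A hedged sketch of such a function in Python (illustrative code, not a canonical listing): only the nodes on the path from the root down to the new leaf are recreated, so the old and new versions share every other node.

    class Tree:
        def __init__(self, value, left=None, right=None):
            self.value, self.left, self.right = value, left, right

    def insert(node, value):
        # Path copying: rebuild only the nodes visited on the way down.
        if node is None:
            return Tree(value)
        if value <= node.value:                    # invariant: <= goes left
            return Tree(node.value, insert(node.left, value), node.right)
        return Tree(node.value, node.left, insert(node.right, value))

    # e.g. ys = insert(xs, some_value) leaves xs untouched and shares
    # all unmodified subtrees between xs and ys.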
After executing an insertion, the following configuration is produced:
Notice two points: first, the original tree (xs) persists. Second, many common nodes are shared between the old tree and the new tree. Such persistence and sharing is difficult to manage without some form of garbage collection (GC) to automatically free up nodes which have no live references, and this is why GC is a feature commonly found in functional programming languages.
A persistent hash array mapped trie is a specialized variant of a hash array mapped trie that will preserve previous versions of itself on any updates. It is often used to implement a general purpose persistent map data structure.[12]
Hash array mapped tries were originally described in a 2001 paper by Phil Bagwell entitled "Ideal Hash Trees". This paper presented a mutable hash table where "Insert, search and delete times are small and constant, independent of key set size, operations are O(1). Small worst-case times for insert, search and removal operations can be guaranteed and misses cost less than successful searches".[13] This data structure was then modified by Rich Hickey to be fully persistent for use in the Clojure programming language.[14]
Conceptually, hash array mapped tries work similarly to any generic tree in that they store nodes hierarchically and retrieve them by following a path down to a particular element. The key difference is that hash array mapped tries first use a hash function to transform their lookup key into a (usually 32- or 64-bit) integer. The path down the tree is then determined by using slices of the binary representation of that integer to index into a sparse array at each level of the tree. The leaf nodes of the tree behave similarly to the buckets used to construct hash tables and may or may not contain multiple candidates depending on hash collisions.[12]
Most implementations of persistent hash array mapped tries use a branching factor of 32 in their implementation. This means that in practice, while insertions, deletions, and lookups into a persistent hash array mapped trie have a computational complexity of O(log n), for most applications they are effectively constant time, as it would require an extremely large number of entries to make any operation take more than a dozen steps.[15]
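As a toy illustration (my own sketch, not Bagwell's or Clojure's code) of how 5-bit slices of a key's hash select the branch taken at each level of a 32-way trie:

    def path_for(key, levels=7):
        """Return the child index (0-31) chosen at each level for this key.
        With a 32-bit hash the last slice only has 2 meaningful bits."""
        h = hash(key) & 0xFFFFFFFF
        return [(h >> (5 * level)) & 0b11111 for level in range(levels)]

    # In CPython, hash(12345) == 12345, so this prints [25, 1, 12, 0, 0, 0, 0].
    # An update only copies the handful of nodes along one such path; all other
    # subtrees are shared with the previous version.
    print(path_for(12345))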
Haskell is a purely functional language and therefore does not allow for mutation. Therefore, all data structures in the language are persistent, as it is impossible not to preserve the previous state of a data structure with functional semantics.[16] This is because any change to a data structure that would render previous versions of a data structure invalid would violate referential transparency.
In its standard library Haskell has efficient persistent implementations for linked lists,[17]Maps (implemented as size balanced trees),[18]and Sets[19]among others.[20]
Like many programming languages in theLispfamily, Clojure contains an implementation of a linked list, but unlike other dialects its implementation of a linked list has enforced persistence instead of being persistent by convention.[21]Clojure also has efficient implementations of persistent vectors, maps, and sets based on persistent hash array mapped tries. These data structures implement the mandatory read-only parts of theJava collections framework.[22]
The designers of the Clojure language advocate the use of persistent data structures over mutable data structures because they havevalue semanticswhich gives the benefit of making them freely shareable between threads with cheap aliases, easy to fabricate, and language independent.[23]
These data structures form the basis of Clojure's support forparallel computingsince they allow for easy retries of operations to sidestepdata racesand atomiccompare and swapsemantics.[24]
TheElm programming languageis purely functional like Haskell, which makes all of its data structures persistent by necessity. It contains persistent implementations of linked lists as well as persistent arrays, dictionaries, and sets.[25]
Elm uses a custom virtual DOM implementation that takes advantage of the persistent nature of Elm data. As of 2016 it was reported by the developers of Elm that this virtual DOM allows the Elm language to render HTML faster than the popular JavaScript frameworks React, Ember, and Angular.[26]
The Java programming language is not particularly functional. Despite this, the core JDK package java.util.concurrent includes CopyOnWriteArrayList and CopyOnWriteArraySet, which are persistent structures implemented using copy-on-write techniques. The usual concurrent map implementation in Java, ConcurrentHashMap, is not persistent, however. Fully persistent collections are available in third-party libraries,[27] or other JVM languages.
The popular JavaScript frontend frameworkReactis frequently used along with a state management system that implements theFlux architecture,[28][29]a popular implementation of which is the JavaScript libraryRedux. The Redux library is inspired by the state management pattern used in the Elm programming language, meaning that it mandates that users treat all data as persistent.[30]As a result, the Redux project recommends that in certain cases users make use of libraries for enforced and efficient persistent data structures. This reportedly allows for greater performance than when comparing or making copies of regular JavaScript objects.[31]
One such library of persistent data structures, Immutable.js, is based on the data structures made available and popularized by Clojure and Scala.[32] It is mentioned by the documentation of Redux as being one of the possible libraries that can provide enforced immutability.[31] Mori.js brings data structures similar to those in Clojure to JavaScript.[33] Immer.js brings an interesting approach where one "creates the next immutable state by mutating the current one".[34] Immer.js uses native JavaScript objects rather than efficient persistent data structures, which might cause performance issues when the data size is big.
Prolog terms are naturally immutable and therefore data structures are typically persistent data structures. Their performance depends on sharing and garbage collection offered by the Prolog system.[35]Extensions to non-ground Prolog terms are not always feasible because of search space explosion. Delayed goals might mitigate the problem.
Some Prolog systems nevertheless do provide destructive operations like setarg/3, which might come in different flavors, with/without copying and with/without backtracking of the state change. There are cases where setarg/3 is used to the good of providing a new declarative layer, like a constraint solver.[36]
The Scala programming language promotes the use of persistent data structures for implementing programs using "Object-Functional Style".[37]Scala contains implementations of many persistent data structures including linked lists,red–black trees, as well as persistent hash array mapped tries as introduced in Clojure.[38]
Because persistent data structures are often implemented in such a way that successive versions of a data structure share underlying memory,[39] ergonomic use of such data structures generally requires some form of automatic garbage collection, such as reference counting or mark and sweep.[40] On some platforms where persistent data structures are used, it is an option to forgo garbage collection; while doing so can lead to memory leaks, it can in some cases have a positive impact on the overall performance of an application.[41]
|
https://en.wikipedia.org/wiki/Persistent_data_structure
|
TheAARD codewas a segment of code in abeta releaseofMicrosoftWindows 3.1that would issue a cryptic error message when run on theDR DOSoperating system rather than the Microsoft-affiliatedMS-DOSorPC DOS. Microsoft inserted the code in an attempt to manipulate people into not using competing operating systems; it is an example of the company'sfear-uncertainty-doubttactics.
This XOR-encrypted, self-modifying, and deliberately obfuscated x86 assembly code used a variety of undocumented MS-DOS structures and functions to detect if a machine was running DR DOS. The code was present in the installer, in the WIN.COM file used to load Windows, and in several other EXE and COM files within Windows 3.1.[1]
The AARD code was discovered by Geoff Chappell on 17 April 1992 and further analyzed and documented in a joint research effort with Andrew Schulman.[2][3][4][5][6]The name "AARD code" came from the letters "AARD" that were found in ahex dumpof the Windows 3.1 installer; this turned out to be the signature of Microsoft programmer Aaron R. Reynolds (1955–2008).[7][8][9]
Microsoft disabled the AARD code for the final release of Windows 3.1, but did not remove it so it could be later reactivated by the change of a single byte.[5]
DR DOS publisherDigital Researchreleased apatchnamed "business update" in 1992 to bypass the AARD code.[10][11][12]
The rationale for the AARD code came to light when internal memos were released during theUnited States v. Microsoft Corp.antitrust case in 1999. Internal memos released by Microsoft revealed that the specific focus of these tests wasDR DOS.[1][13][14]At one point, Microsoft CEOBill Gatessent a memo to a number of employees that said, "You never sent me a response on the question of what things an app would do that would make it run with MSDOS and not run with DR-DOS. Is there [sic] feature they have that might get in our way?"[12][15]Microsoft Senior Vice PresidentBrad Silverberglater sent another memo, saying, "What the [user] is supposed to do is feel uncomfortable, and when he has bugs, suspect that the problem is dr-dos and then go out to buy ms-dos."[12][15]
AfterNovellbought DR DOS and renamed it "Novell DOS", Microsoft Co-PresidentJim Allchinwrote in a memo, "If you're going to kill someone there isn't much reason to get all worked up about it and angry. Any discussions beforehand are a waste of time. We need to smile at Novell while we pull the trigger."[16][12][15]
Novell DOS changed hands again. The new owner,Caldera, Inc., began a lawsuit against Microsoft over the AARD code,Caldera v. Microsoft,[12][17][18][19]which was later settled.[15][20][21][22]It was originally believed that the settlement was around $150 million,[a][23]but in November 2009, the settlement agreement was released, and the total was revealed to be $280 million.[b][24][21][22][25]
|
https://en.wikipedia.org/wiki/AARD_code
|
In computer science, the expression code as data refers to the idea that source code written in a programming language can be manipulated as data, such as a sequence of characters or an abstract syntax tree (AST), and it has an execution semantics only in the context of a given compiler or interpreter.[1] The notion is often used in the context of Lisp-like languages that use S-expressions as their main syntax, as writing programs using nested lists of symbols makes the interpretation of the program as an AST quite transparent (a property known as homoiconicity).[2][3]
These ideas are generally used in the context of what is called metaprogramming, writing programs that treat other programs as their data.[4][5] For example, code-as-data allows the serialization of first-class functions in a portable manner.[6] Another use case is storing a program in a string, which is then processed by a compiler to produce an executable.[4] More often there is a reflection API that exposes the structure of a program as an object within the language, reducing the possibility of creating a malformed program.[7]
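A small illustration in Python (illustrative only): the ast module exposes a program's structure as ordinary objects that can be inspected or transformed like any other data, and then compiled back into runnable code.

    import ast

    source = "1 + 2 * 3"
    tree = ast.parse(source, mode="eval")            # the program, now an ordinary data structure
    print(ast.dump(tree))                            # walk or rewrite it like any other object
    print(eval(compile(tree, "<string>", "eval")))   # turn it back into code and run it: 7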
In computational theory, Kleene's second recursion theorem provides a form of code-is-data, by proving that a program can have access to its own source code.[8]
Code-as-data is also a principle of the Von Neumann architecture, since stored programs and data are both represented as bits in the same memory device.[4] This architecture offers the ability to write self-modifying code.[citation needed] It also opens the security risk of disguising a malicious program as user data and then using an exploit to direct execution to the malicious program.[9]
In declarative programming, the Data as Code (DaC) principle refers to the idea that an arbitrary data structure can be exposed using a specialized language semantics or API. For example, a list of integers or a string is data, but in languages such as Lisp and Perl, they can be directly entered and evaluated as code.[1] Configuration scripts, domain-specific languages and markup languages are cases where program execution is controlled by data elements that are not clearly sequences of commands.[10][11]
|
https://en.wikipedia.org/wiki/Data_as_code
|
In some programming languages, eval, short for evaluate, is a function which evaluates a string as though it were an expression in the language, and returns a result; in others, it executes multiple lines of code as though they had been included instead of the line including the eval. The input to eval is not necessarily a string; it may be a structured representation of code, such as an abstract syntax tree (like Lisp forms), or of a special type such as code (as in Python). The analog for a statement is exec, which executes a string (or code in another format) as if it were a statement; some languages, such as Python, have both, while other languages have only one of eval or exec.
Usingevalwith data from an untrusted source may introduce security vulnerabilities. For instance, assuming that theget_data()function gets data from the Internet, thisPythoncode is insecure:
An attacker could supply the program with the string "session.update(authenticated=True)" as data, which would update the session dictionary to set an authenticated key to be True. To remedy this, all data which will be used with eval must be escaped, or it must be run without access to potentially harmful functions.
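In Python, one common mitigation when only literal data is expected (an illustrative sketch, not the article's original snippet) is to avoid eval entirely and parse literals with ast.literal_eval, which accepts only literal constants and so will never call functions such as session.update. Note that merely stripping built-ins from eval's environment is not a reliable security boundary.

    import ast

    untrusted = '{"authenticated": False}'   # e.g. text received from the network
    value = ast.literal_eval(untrusted)      # literals only; anything else raises an error
    print(value)                             # -> {'authenticated': False}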
Ininterpreted languages,evalis almost always implemented with the same interpreter as normal code. Incompiled languages, the same compiler used to compile programs may be embedded in programs using theevalfunction; separate interpreters are sometimes used, though this results incode duplication.
InJavaScript,evalis something of a hybrid between an expression evaluator and a statement executor. It returns the result of the last expression evaluated.
Example as an expression evaluator:
Example as a statement executor:
One use of JavaScript'sevalis to parseJSONtext, perhaps as part of anAjaxframework. However, modern browsers provideJSON.parseas a more secure alternative for this task.
InActionScript(Flash's programming language),evalcannot be used to evaluate arbitrary expressions. According to the Flash 8 documentation, its usage is limited to expressions which represent "the name of a variable, property, object, or movie clip to retrieve. This parameter can be either a String or a direct reference to the object instance."[1]
ActionScript 3 does not support eval.
The ActionScript 3 Eval Library[2]and the D.eval API[3]were development projects to create equivalents toevalin ActionScript 3. Both have ended, asAdobe Flash Playerhas reached itsend-of-life.
Lispwas the original language to make use of anevalfunction in 1958. In fact, definition of theevalfunction led to the first implementation of the language interpreter.[4]Before theevalfunction was defined, Lisp functions were manually compiled toassembly languagestatements. However, once theevalfunction had been manually compiled it was then used as part of a simpleread-eval-print loopwhich formed the basis of the first Lisp interpreter.
Later versions of the Lispevalfunction have also been implemented as compilers.
Theevalfunction in Lisp expects a form to be evaluated as its argument. The resulting value of the given form will be the returned value of the call toeval.
This is an example Lisp code:
Lisp is well known to be very flexible and so is theevalfunction. For example, to evaluate the content of a string, the string would first have to be converted into a Lisp form using theread-from-stringfunction and then the resulting form would have to be passed toeval:
One major point of confusion is the question of which context the symbols in the form will be evaluated in. In the above example, form1 contains the symbol +. Evaluation of this symbol must yield the function for addition to make the example work as intended. Thus some dialects of Lisp allow an additional parameter for eval to specify the context of evaluation (similar to the optional arguments to Python's eval function - see below). An example in the Scheme dialect of Lisp (R5RS and later):
InPerl, theevalfunction is something of a hybrid between an expression evaluator and a statement executor. It returns the result of the last expression evaluated (all statements are expressions in Perl programming), and allows the final semicolon to be left off.
Example as an expression evaluator:
Example as a statement executor:
Perl also has eval blocks, which serve as its exception handling mechanism (see Exception handling syntax#Perl). This differs from the above use of eval with strings in that code inside eval blocks is interpreted at compile-time instead of run-time, so it is not the meaning of eval used in this article.
InPHP,evalexecutes code in a string almost exactly as if it had been put in the file instead of the call toeval(). The only exception is that errors are reported as coming from a call toeval(), and return statements become the result of the function.
Unlike some languages, the argument toevalmust be a string of one or more complete statements, not just expressions; however, one can get the "expression" form ofevalby putting the expression in a return statement, which causesevalto return the result of that expression.
Unlike some languages, PHP'sevalis a "language construct" rather than a function,[5]and so cannot be used in some contexts where functions can be, like higher-order functions.
Example using echo:
Example returning a value:
InLua5.1,loadstringcompiles Lua code into an anonymous function.
Example as an expression evaluator:
Example to do the evaluation in two steps:
Lua 5.2 deprecatesloadstringin favor of the existingloadfunction, which has been augmented to accept strings. In addition, it allows providing the function's environment directly, as environments are nowupvalues.
PostScript'sexecoperator takes an operand — if it is a simple literal it pushes it back on the stack. If one takes a string containing a PostScript expression however, one can convert the string to an executable which then can be executed by the interpreter, for example:
converts the PostScript expression
which pops the string "Hello World" off the stack and displays it on the screen, to have an executable type, then is executed.
PostScript's run operator is similar in functionality, but instead the interpreter interprets PostScript expressions stored in a file.
InPython, theevalfunction in its simplest form evaluates a single expression.
evalexample (interactive shell):
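One possible session (an illustrative example):

    >>> x = 1
    >>> eval("x + 1")
    2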
The eval function takes two optional arguments, globals and locals, which allow the programmer to set up a restricted environment for the evaluation of the expression.
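For instance (an illustrative example; note that removing built-ins in this way restricts name lookup but is not a real security boundary):

    >>> eval("x + y", {"x": 2}, {"y": 5})      # globals supply x, locals supply y
    7
    >>> eval("abs(-1)", {"__builtins__": {}})  # built-ins removed: raises NameError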
Theexecstatement (or theexecfunction in Python 3.x) executes statements:
execexample (interactive shell):
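One possible session (an illustrative example):

    >>> program = "for i in range(3): print(i)"
    >>> exec(program)
    0
    1
    2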
The most general form for evaluating statements/expressions is using code objects. Those can be created by invoking the compile() function and telling it what kind of input it has to compile: "exec" mode for a sequence of statements, "eval" mode for a single expression, or "single" mode for a single interactive statement:
compileexample (interactive shell):
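One possible session (an illustrative example):

    >>> expr = compile("a + b", "<string>", "eval")           # "eval" mode: a single expression
    >>> a, b = 3, 4
    >>> eval(expr)
    7
    >>> stmts = compile("print(a * b)", "<string>", "exec")   # "exec" mode: statements
    >>> exec(stmts)
    12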
Dis a statically compiled language and therefore does not include an "eval" statement in the traditional sense, but does include the related "mixin" statement. The difference is that, where "eval" interprets a string as code at runtime, with a "mixin" the string is statically compiled like ordinary code and must be known at compile time. For example:
The above example will compile to exactly the same assembly language instructions as if "num++;" had been written directly instead of mixed in. The argument to mixin doesn't need to be a string literal; it can be any expression resulting in a string value, including function calls, as long as it can be evaluated at compile time.
ColdFusion'sevaluatefunction lets users evaluate a string expression at runtime.
It is particularly useful when users need to programmatically choose the variable they want to read from.
TheRuby programming languageinterpreter offers anevalfunction similar to Python or Perl, and also allows ascope, orbinding, to be specified.
Aside from specifying a function's binding,evalmay also be used to evaluate an expression within a specific class definition binding or object instance binding, allowing classes to be extended with new methods specified in strings.
Most standard implementations ofForthhave two variants ofeval:EVALUATEandINTERPRET.
Win32FORTH code example:
InREALbasic, there is a class calledRBScriptwhich can execute REALbasic code at runtime. RBScript is very sandboxed—only the most core language features are there, and users have to allow it access to things they would like it to have. They can optionally assign an object to the context property. This allows for the code in RBScript to call functions and use properties of the context object. However, it is still limited to only understanding the most basic types, so if they have a function that returns a Dictionary or MySpiffyObject, RBScript will be unable to use it. Users can also communicate with their RBScript through the Print and Input events.
Microsoft's VBScript, which is an interpreted language, has two constructs.Evalis a function evaluator that can include calls to user-defined functions. (These functions may have side-effects such as changing the values of global variables.)Executeexecutes one or more colon-separated statements, which can change global state.
Both VBScript and JScriptevalare available to developers of compiled Windows applications (written in languages which do not support Eval) through an ActiveX control called the Microsoft Script Control, whose Eval method can be called by application code. To support calling of user-defined functions, one must first initialize the control with the AddCode method, which loads a string (or a string resource) containing a library of user-defined functions defined in the language of one's choice, prior to calling Eval.
Visual Basic for Applications(VBA), the programming language of Microsoft Office, is a virtual machine language where the runtime environment compiles and runsp-code. Its flavor of Eval supports only expression evaluation, where the expression may include user-defined functions and objects (but not user-defined variable names). Of note, the evaluator is different from VBS, and invocation of certain user-defined functions may work differently in VBA than the identical code in VBScript.
AsSmalltalk's compiler classes are part of the standard class library and usually present at run time, these can be used to evaluate a code string.
Because class and method definitions are also implemented by message-sends (to class objects), even code changes are possible:
TheTclprogramming language has a command calledeval, which executes the source code provided as an argument. Tcl represents all source code as strings, with curly braces acting as quotation marks, so that the argument toevalcan have the same formatting as any other source code.
bshas anevalfunction that takes one string argument. The function is both an expression evaluator and a statement executor. In the latter role, it can also be used for error handling. The following examples and text are from thebsman pageas appears in theUNIX System VRelease 3.2 Programmer's Manual.[6]
The string argument is evaluated as a bs expression. The function is handy for converting numeric strings to numeric internal form. The eval can also be used as a crude form of indirection, as in the following (note that, in bs, _ (underscore) is the concatenation operator):
which increments the variablexyz.
In addition,evalpreceded by the interrogation operator,?, permits the user to controlbserror conditions. For example:
returns the value zero if there is no file named "XXX" (instead of halting the user's program).
The following executes agototo the labelL(if it exists):
The eval command is present in all Unix shells, including the original "sh" (Bourne shell). It concatenates all the arguments with spaces, then re-parses and executes the result as a command (sh(1) – FreeBSD General Commands Manual).
InPowerShell, theInvoke-ExpressionCmdlet serves the same purpose as the eval function in programming languages like JavaScript, PHP and Python.
The Cmdlet runs any PowerShell expression that is provided as a command parameter in the form of a string and outputs the result of the specified expression.
Usually, the output of the Cmdlet is of the same type as the result of executing the expression. However, if the result is an empty array, it outputs$null. In case the result is a single-element array, it outputs that single element. Similar to JavaScript, PowerShell allows the final semicolon to be left off.
Example as an expression evaluator:
Example as a statement executor:
In 1966, IBM's Conversational Programming System (CPS) introduced a microprogrammed function EVAL to perform "interpretive evaluation of expressions which are written in a modified Polish-string notation" on an IBM System/360 Model 50.[7] Microcoding this function was "substantially more" than five times faster compared to a program that interpreted an assignment statement.[8]
In theoretical computer science, a careful distinction is commonly made between eval and apply. Eval is understood to be the step of converting a quoted string into a callable function and its arguments, whereas apply is the actual call of the function with a given set of arguments. The distinction is particularly noticeable in functional languages, and languages based on lambda calculus, such as LISP and Scheme. Thus, for example, in Scheme, the distinction is between
where the form (f x) is to be evaluated, and
where the functionfis to be called with argumentx.
Evalandapplyare the two interdependent components of theeval-apply cycle, which is the essence of evaluating Lisp, described inSICP.[9]
Incategory theory, theevalmorphismis used to define theclosed monoidal category. Thus, for example, thecategory of sets, with functions taken as morphisms, and thecartesian producttaken as theproduct, forms aCartesian closed category. Here,eval(or, properly speaking,apply) together with itsright adjoint,currying, form thesimply typed lambda calculus, which can be interpreted to be the morphisms of Cartesian closed categories.
|
https://en.wikipedia.org/wiki/Eval
|
TheIBM 1130Computing System, introduced in 1965,[3]wasIBM's least expensivecomputerat that time. Abinary16-bit machine, it was marketed to price-sensitive, computing-intensive technical markets, like education and engineering, succeeding thedecimalIBM 1620in that market segment. Typical installations included a 1 megabyte disk drive that stored the operating system, compilers and object programs, with program source generated and maintained onpunched cards.Fortranwas the most commonprogramming languageused, but several others, includingAPL, were available.
The 1130 was also used as an intelligent front-end for attaching anIBM 2250Graphics Display Unit, or asremote job entry(RJE) workstation, connected to aSystem/360mainframe.
The total production run of the 1130 has been estimated at 10,000.[4] The 1130 holds a place in computing history because it (and its non-IBM clones) gave many people their first direct interaction with a computer. Its price-performance ratio was good and it notably included inexpensive, removable disk storage, with reliable, easy-to-use software available in several high-level languages. The low price (from around $32,000 or $41,000 with disk drive)[3] and a well-balanced feature set enabled interactive "open shop" program development.
The IBM 1130 uses the same electronics packaging, calledSolid Logic Technology(SLT), that was used inSystem/360. It has a16-bitbinary architecture, as do laterminicomputerslike thePDP-11andData General Nova.
Theaddress spaceis 15 bits, limiting the 1130 to32,768 16-bitwords(65,536 bytes) of memory. The 1130 usesmagnetic-core memory, which the processor addresses on word boundaries, using direct, indirect, and indexed addressing modes.
IBM implemented five models of the 1131 Central Processing Unit, the primary processing component of the IBM 1130. The Model 1 through Model 5 describe the core memory cycle time, as well as the model's ability to have disk storage. A letter A through D appended to the model number indicates the amount of core memory installed.
IBM 1131 Central Processing Unit weighs about 760/1050 lb (345/477 kg).[5]
The Model 4 was a lower-priced product with a 5.9 μs cycle time. Some purchasers of performance upgrades observed that the field adjustment to achieve the improvement was surprisingly trivial.
The IBM 1132 printer relies on the 1130 processor rather than internal logic to determine when to fire the print wheels as they rotate. Printers for the Model 4 run more slowly, but the slower processor still cannot keep up. The hardware manual discloses that when the Model 4 was servicing the two highest-level interrupts (the level 0 card-reader column interrupt or the level 1 printer interrupt), it ran at the faster 3.6 μs cycle time. Some users of the Model 4 would write a phony printer driver that did not dismiss the printer interrupt, in order to benefit from the higher processor speed. However, lower-level interrupts are disabled during this interval, even the end-of-card interrupt (level 4) from the 1442 card reader.
TheIBM 1800, announced November 1964,[6]is a variant of the IBM 1130 forprocess controlapplications. It uses hardware rather than core memory for the three index registers and features two extrainstructions(CMP and DCM) plus extra interrupt andI/Ocapabilities. It is a successor to theIBM 1710, as the IBM 1130 is a successor to theIBM 1620.
TheIBM 1500is a multi-user educational system based around either an IBM 1130 or an IBM 1800. It can connect to up to 32 student work stations, each with a variety ofaudio-visualcapabilities.
Other than these, IBM produced no compatible successor systems to the 1130. TheIBM System/7is a process control and real-time system, and theIBM Series/1is a general-purpose 16-bit minicomputer, both having differentarchitecturesfrom the 1130, and from each other.
To maximize speed and conserve space, the operating system and compilers are written entirely inassembly languageand employ techniques that are rare today, including intermixing code and data as well asself-modifying code.
Much user programming is done inFortran. The 1130 Fortrancompilercan run on a machine with only 4,096 words of core—though the compiled program might not fit on such a machine. In thismulti-pass compiler, each "phase" processes the entire source program and takes it another step toward machine code. For example, the first phase reads the source statements into memory, discards comment lines, removes spaces except in text literals, concatenates continuation lines and identifies labels. The compiler is available in a disk-resident version as well as on eight-channelpunched paper tapeor punched cards.
The most widely usedoperating systemfor the 1130 is theDisk Monitor System Version 2(DM2) introduced in 1967. DM2 is a single-taskbatch-orientedsystem. It requires a system with at least 4 KB of core memory and one integrated 2310 disk drive for system residence. The Supervisor is tiny by modern standards, containing assorted system details such as first-level interrupt routines, calledInterrupt Level Subroutines, plus the disk driver and routines to load the interpreter ofjob controlcommands and the card reader driver. Device drivers for other I/O devices required by a job are incorporated as part of the loading of that job, which might also include the replacement of the basic disk driver by a more advanced driver. During the execution of a job, only aresident monitor, called theSkeleton Supervisor, resides in memory. This Supervisor requires just 1020 bytes, so a task's first available memory starts with address/01FE(hexadecimal) or word 510. When the job ends or is aborted, the supervisor loads theMonitor Control Record Analyzer(MCRA) to read the job control for the next. While the job is running, the Supervisor is inactive. Aside from device drivers and interrupt processing all CPU time is entirely devoted to the job's activities. Other programs distributed as part of the operating system are acore dumputility,DUMP, and theDisk Utility Program,DUP.
A Card/Paper Tape Programming System was available to support systems without disk.
There is a hierarchy of device drivers: those with names ending in Z are for Fortran, such as DISKZ, while assembler programmers might use DISK0, and DISK1 was even faster at reading multiple disk sectors. But DISKZ starts its sector addressing with the first available unused sector, while the others start with sector zero of the disk, making it possible for a programmer unfamiliar with disk organization to inadvertently overwrite the bootstrap loader.
Other programming languages available on the 1130 include
There is even a French-language ALGOL compiler, in which, for example, "Debut ... Fin;" takes the place of "Begin ... End;". All its messages are in French, so "Bonne compilation" is the goal.
Eastern Michigan University developed a Fortran IV compiler for the 1130, known as Fortran-EMU, as an alternative to the Fortran IV (subset) compiler provided by IBM. It adds many Fortran IV features not supported by the IBM compiler, including the LOGICAL data type, six-letter variable names, and enhanced diagnostics. The Fortran-EMU compiler was distributed as a deck of punched cards in a disk image file format with all the remaining system area deleted, to prevent copying other modules that would normally reside on the same disk, such as the assembler or compilers.
Oklahoma State Universitydeveloped anALGOL 68compiler, written in ANSI Fortran 1966.[13][14][15]
AFOCALinterpreter was developed at the University of Michigan.
IBM also distributed a large library of programs, both IBM-supported (Type I and II) and unsupported (Type III and IV).
Since the 1130 was aimed primarily at the scientific market,
scientific and engineering programs predominated:
The 1130 also occupied a niche as adata processingmachine for smaller organizations:
There is also special-purpose software:
Batch operation of the 1130 is directed by control records in the primary input stream (card or paper tape reader). There are two types of control records, monitor control records and supervisor control records.[19]
Monitor control records are identified by//␢followed by a "pseudo-operation code" in columns 4–7. "␢" represents a single blank.
TheJOBrecord can have a "T" in column 8 to indicate that any files added to the User Area by this job should be deleted at the end. Columns 11 thru 15 can contain a cartridge label; the system verifies that the specified cartridge is mounted before proceeding.
The XEQ record may contain the name of the program to be run in columns 8 thru 12. If this is omitted, the program currently in Working Storage will be executed. If column 14 contains "L", and the program is in Disk System Format (not core-image), a core map will be printed by the Core Load Builder. If this statement is followed by LOCAL, NOCAL, or FILES Supervisor Control Records, columns 16 and 17 contain the count of these records. Column 19 optionally indicates which disk driver routine is to be linked: "0", "1", or "N" request DISK1, DISK2, or DISKN; any other character, including blank, requests DISKZ, the FORTRAN disk routine.
Supervisor control records begin with an "*" in column 1, immediately followed by the command pseudo-operation in column 2. They areLOCAL,NOCAL, andFILESfor the Core Load Builder. DUP control records have a similar format. These records control program linking, either for the// XEQstatement or the DUP*STORECIcommand.
The enduring memories of the IBM 1130 may have resulted from its need for continual human intervention. It was usually occupied running "jobs" specified by a deck ofpunched cards. The human operator would load jobs into the card reader and separate them back into jobs for return, perhaps along with printed output, to the submitter. The operator would also have to watch the 1130 for evidence of a malfunctioning or stalled job and intervene by pressing theINT REQkey on the keyboard to skip ahead to the start of the next job.[20]
Marking the start of a job was a punched card that started with// JOB. Any card that started with//was a command to the Supervisor and could not be used as user program or data. Other commands included// DUPto execute the Disk Utility Program (to delete files or add the file in the temporary area to the file collection) and// XEQto execute a named program from disk. If a user program tried to read a command card, the standard card reader routine would signal end-of-input to the program and save that card's content for the Supervisor.
Unlike the IBM 360, where abootingdevice can be selected from the system console, an IBM 1130 can only be "booted" (IPL'd: Initial Program Load) from an external device: a card reader or a paper tape reader.[21][22]
The bootstrap procedure reads one card from the card reader. The boot card contains binary code[23]to read the contents of sector zero of the disk drive, which in turn handles the "operation complete" interrupt from the disk drive and performs additional disk reads to prepare the 1130 for the first punched-card job. The whole process takes about a second to complete.
When the IBM 1130 is started, the Supervisor is still in memory and probably intact, as core memory retains its state without power. If the operator concludes that a user program has stalled, the Supervisor can sense a key press to abort the program and skip ahead to the next // card. The Supervisor is not protected against modification by a badly written job, a case that might require that the operator reboot the 1130. Nor was there protection against writing to disk. If the copy of the system software on disk is modified, it can be restored by reloading it from about 4000 binary-coded punched cards (approximately two boxes).
The IBM 2310 disk drive storessectorsof 320 words (640 bytes) plus a one-word sector address. Acylinderconsists of twotracksstacked on the top and bottom surfaces of the 2315, or stacked on the top and bottom surfaces of each of the 6 platters (outermost surfaces excepted) of the 1316 disk pack used in the 2311. Each disk cylinder contains eight sectors per surface with each surface having a dedicated read-write head. Each sector is logically divided by the monitor into sixteendisk blocksof 20 words (40 bytes) each. The disk block is the unit of allocation for files.
The system distinguishes betweensystem cartridges, which contain the monitor and utilities along with user data, andnonsystem cartridges, which contain user data only. All cartridges contain information on cylinder 0, including the defective cylinder table, cartridge id, and a bootstrap program (bootstrap code). On nonsystem cartridges, the bootstrap simply prints an error message and waits if an attempt is made to boot from this cartridge. On a system cartridge this is thecold-start program, followed by acommunications areaand the resident monitor in sectors one and two. Sectors three through five contain theSystem Location Equivalence Table(SLET)—a directory of all phases of all monitor programs. Other control information fills out the first track.
The system area is present on system cartridges. It contains the Disk Monitor program, and optionally the FORTRAN compiler, the Assembler, and a core image buffer used for linking relocatable programs. It also contains the user file directories: the Fixed Location Equivalence Table (FLET) and the Location Equivalence Table (LET).
Following the system area, the cartridge contains up to three logical subdivisions: thefixed area, theuser area, andworking storage. Both the fixed area and user area store non-temporary programs and data. The fixed area size is defined by DUP, and stores data, and programs in core image format only. It is not repacked when files are deleted. The user area stores data and programs in any format. The boundary between the user area and working storage "floats"— the user area expands as files are added and contracts as it is repacked to reclaim space from deleted files. If a file needs to be modified, the usual process is to use// DUPcommands to delete it, which moves any subsequent files back to close the gap, and then give that name to the temporary file as the new version of the file. Rarely modified files thus migrate towards the start of the disk as new files or new versions are appended, and frequently modified files are stored towards the end of the disk.
Working storage starts after the last file in the user area and occupies all the remaining space on the cartridge. It may contain one temporary file created by the system or the user, such as the output of a compiler or an application program. This file is subject to possible deletion at the end of the current job, unless saved to the fixed area or the user area.
All disk files are contiguous disk blocks, thus there is nofragmentation. A program can use and modify named files, but can not expand them beyond their created size. A program which creates more than one file must have all but one pre-allocated by a DUP.
With limited disk space, program source files are normally kept as decks of cards. Users having larger requirements may have a disk of their own containing the operating system, but only their files, and would have to replace the "pool" system disk with theirs and restart the system when their programs are to be run. A system with a second disk drive that can be devoted entirely to one user's code and data provides some relief.
A disk pack or cartridge is initialized for use on the 1130 by theDisk Pack Initialization Routine(DIPR). This routine scans the disk, and writes sector addresses on all cylinders, flags defective sectors, and writes a cartridge id on cylinder zero. DIPR is aStandalone program, which is loaded from cards or paper tape, and accepts the cartridge id from the system console.[19]
TheDisk Utility Program (DUP)provides commands for transferring programs, subroutines and data. It is invoked by the job control// DUPcard, followed by one or more control cards:[24]
Other commands, mainly for use by the system administrator, define or expand the Fixed Area, delete the FORTRAN compiler and/or Assembler from the system, and restore correct sector addresses to Working Storage if they have been modified.
The operands have to be placed into fixed columns. The source device code goes in columns 13 and 14, the destination device in columns 17 and 18. These device codes are:
Optionally, a program name may be coded in columns 21 thru 25, and a count field in 27 thru 30. The interpretation of these fields depends on the DUP function requested.
Programs can be converted to a faster-loading format with theSTORECIcommand, which invokes Core Image Builder (DM2's counterpart to the 360's Linkage Editor). Alternatively, a program can go through this process each time it is to be run, and for infrequently used programs this is preferred in order to conserve disk space.
The following control card instructs DUP to take the current contents of working storage and move it to the user area naming it PROGM. DUP knows the size of the file in working storage. The size of the user area will be increased by the size of the file, and the size of working storage will be decreased correspondingly.
Disk memory is used to store the operating system, object code, and data, but source code is kept on punched cards.
The basic 1130 came with an IBM 2310 voice-coil actuated disk drive, called "Ramkit", from IBM's General Products Division in San Jose.[7]: 497 Its pizza-box-sized IBM 2315 single-platter cartridges hold 512,000 words or 1,024,000 bytes (less than a 3.5" HD floppy's 1.44 MB or even the 5.25" HD floppy's 1.2 MB). Transfer rate is 35,000 words per second (70 KB/sec) using cycle stealing.[25]
The IBM 1053 console typewriter uses an IBMSelectricmechanism, which means one could change the typeface or character set by replacing a hollow, golf-ball sized type element. There is a special type element available forAPL, a powerful array-orientedprogramming languageusing a special symbolic notation. A row of 16 toggle switches on the console typewriter can be individually tested from within programs, using the special Fortran statementIF (SENSE SWITCHi), for example.
Other available peripherals included:
To simplify the design of peripheral devices, these rely on the processor. The card reader has no memory buffers, but instead gives the CPU a level-zero (highest priority) interrupt after each individual column of the card has been read. If the CPU does not respond and store the twelve bits of data before another such interrupt indicates that the next column has been read, data will be lost. Similarly, the 1132 printer relies on software in the 1130. When a letter such as A comes into position, the CPU has to analyze a buffered line of text and assemble an array of bits that will indicate to the 1132 which print positions should be printed with A. If the CPU cannot respond before the A rotates out of position, print speed could be severely degraded.
Other peripherals accept text in device-specific codes convenient for their hardware. The CPU has to translate it to or from the EBCDIC code in which the CPU processes the text.
Instructions have short (one-word) and long (two-word) formats. Most computational, load, and store instructions reference one register (usually ACC) and a memory location. The memory location is identified, in the short format, by an 8-bit signed displacement from either the current address or one of the index registers; or in the long format, by a full 15-bit address, which can be indexed and specify indirection. Memory is addressed in units of words.
The 1130 supports only single-precision and double-precision binary data natively (16 and 32 bits) stored in big-endian format. Standard- and extended-precision floating-point (32 and 48 bits) and decimal data are supported through the use of subroutines.
Conditional transfers are based on (a) the current contents of the accumulator, or (b) the carry and overflow indicators set by a preceding operation. Transfers can be by skip (which assumes that the next instruction is short) or by branch. A skip occurs if any of the specified tests are true. A branch occurs if none of the specified tests are true.
The lowest addresses of core memory have uses dictated either by the hardware or by convention:
The 1130 has no hardware support for a stack. Most subprograms are called with the instruction BSI (Branch and Store IAR). This deposits the value of IAR (the return address) at the destination address and transfers control to destination+1. Subprograms return to wherever they were called on that occasion using an indirect branch through that first word of the subprogram. Placing the return address in-line was a common technique of computers at that time, such as the Hewlett-Packard HP 2100,[30] the DEC PDP-8,[31] and the Scientific Data Systems SDS 920.[32]
So a subprogram named SIMPL might be organized as follows (comments follow the instruction operand):
The subprogram would be called as follows:
The pseudo-opcode CALL would typically be used.
As shown, a subprogram's entry point is DC *-*, an assembler pseudo operation that is used to Define a Constant (occupying one word of storage) with the value specified by the expression. The * stands for the current address of the assembly and so *-* results in zero. Writing this rather than 0 provides a visually distinctive note that a meaningful value (the return address) will be placed there at run time. The entry point need not be the first word of the subprogram. Indeed, the preceding word can be the start of a two-word direct branch instruction whose address field is at SIMPL. Then, returns can be effected by one-word branches there: B SIMPL-1
When SIMPL is called, the BSI instruction replaces *-* with the current value of IAR, which is the address just past the BSI instruction. After SIMPL does whatever it is written to do, B I SIMPL branches not to SIMPL, but indirect through it, thus continuing execution with the instruction following the BSI instruction that called SIMPL.
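A minimal Python sketch of this calling convention, using a toy word-addressed memory; it models only the bookkeeping described above, not real 1130 instructions:

memory = {}                       # toy word-addressed core memory
SIMPL = 500                       # hypothetical address of SIMPL's entry word (the DC *-*)

def bsi(return_address, destination):
    """BSI: deposit the return address at the destination, continue at destination+1."""
    memory[destination] = return_address
    return destination + 1

def branch_indirect(address):
    """B I SIMPL: branch to whatever address the entry word now holds."""
    return memory[address]

ip = bsi(102, SIMPL)              # a two-word BSI L SIMPL at address 100 returns to 102
# ... the body of SIMPL runs here, starting at SIMPL + 1 ...
ip = branch_indirect(SIMPL)       # back to 102, the instruction after the call
print(ip)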
Without extra arrangements to protect the return address, recursion is impossible: if SIMPL calls itself, or calls a subprogram that calls it, its original return address is overwritten. Re-entrancy is problematic for the same reason: an interrupt service routine must refrain from calling any subprogram that might have been the code that was interrupted.
The caller of SIMPL might pass it parameters, which might be values or addresses of values. Parameters might be coded in-line (immediately following the BSI instruction) or might be placed in index registers XR1 and XR2. If parameters are placed in-line, SIMPL modifies its own return address so its final indirect branch returns beyond the parameters.
Integer functions of a single integer expect the parameter in the accumulator and return their result there. Floating-point functions employ the floating-point accumulator (a two word area set aside by the floating-point library, three words for extended precision), and so on.
The convention of coding 0 as the initial value at the entry point means that if a programming error leads to SIMPL returning before the first time it was ever called, execution would jump to memory location 0. As mentioned above, it is customary to have location 0 contain a branch to location 0. The 1130 would be stuck at location 0, and the IAR lights on the console would be entirely dark, making it clear the program had failed.
For subprograms that would be called many times (for example, subprograms for floating-point arithmetic), it is important to reduce the size of each call to one word. Such "library routines" use the LIBF protocol. It is more complex than the CALL protocol described in the previous section, but LIBF hides the complexity from the writer of the assembly-language program.
Library routines are addressed through index register XR3. (Fortran subprograms use index register XR1 for the addresses of parameters and the return address, but register XR2 is unused.) XR3 points to a sequence of three-word transfer vectors such that the first entry is -128 words from XR3's value. The programmer calls the library routine using the LIBF pseudo-operation, which assembles not a direct BSI to the routine but a one-word indexed branch instruction (BSI 3 disp) whose displacement (-128, -125, and so on) identifies the start of the routine's transfer vector.
The transfer vector is prepared by the linkage loader when it puts together the program. A transfer vector entry to a library function named SIMPL takes this form:
SIMPL finds its return address because, when SIMPL is declared a LIBF routine, the linkage loader modifies the code of SIMPL, placing the address of SIMPL's transfer vector entry at SIMPL+2. LIBF routines, unlike CALL subprograms, do not start with a DC directive to hold the return address (it is in the transfer vector) but with actual code, as follows:
Placing the address of SIMPL's transfer vector at SIMPL+2 leaves room for a one-word instruction to save the chosen index register, here XR1. Then the indirect LDX instruction points XR1 not at the transfer vector, but through it to the return address, or to any parameters stored in-line after the BSI. SIMPL then does whatever it was written to do, gaining access to any in-line parameters through XR1 (in which case it must increment XR1 for the return address), and returns as follows:
Suppose a LIBF-style call to SIMPL were at address 100. Then the return address would be 101, because BSI 3 disp is a one-word instruction. XR3 points into the group of transfer vectors. If the transfer vector for SIMPL started at address 2000, then the BSI would be assembled with a disp so that XR3+disp = 2000. Executing the BSI stores 101 at location 2000 and jumps to location 2001. At 2001 is a two-word long jump to the entry point of SIMPL, which the linkage loader might have placed at address 300.
The long jump transfers control to SIMPL. After the instruction at 300 stores XR1, the instruction at 301 is LDX I1 2000, the linkage loader having placed 2000 at location 302. This does not load 2000 into XR1; it is an indirect instruction, and loads the contents of 2000, which is 101, the return address for that call to SIMPL.
In the return sequence shown above, by the time control reaches RETN, the instruction there is B L 101, which returns to the caller. (If there are one or more in-line parameters at 101, SIMPL would increment XR1 to point to 102 or beyond, and this would be the destination of the B instruction.)
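A toy Python walk-through of the same sequence of addresses (100, 2000, 300, 101); the value of XR3 and the shape of the long jump are simplifications chosen only to reproduce the numbers in the text:

memory = {}
XR3 = 2128                        # hypothetical: chosen so the first transfer vector entry is at 2128 - 128 = 2000

def bsi_indexed(call_address, disp):
    """BSI 3 disp: store the return address in the transfer vector word, continue just past it."""
    return_address = call_address + 1           # one-word instruction
    vector = XR3 + disp
    memory[vector] = return_address             # 101 stored at 2000
    return vector + 1                           # execution continues at 2001

memory[2001] = 300                # the two-word long jump at 2001-2002, modelled as a stored target
memory[302] = 2000                # the vector address the linkage loader planted inside SIMPL

ip = bsi_indexed(100, -128)       # the LIBF call at address 100
ip = memory[ip]                   # take the long jump: control reaches SIMPL at 300
xr1 = memory[memory[302]]         # the indirect LDX at 301: XR1 <- contents of 2000, i.e. 101
print(ip, xr1)                    # 300 101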
If SIMPL took parameters coded in-line following the BSI instruction, SIMPL gains access to them with indexed addressing off XR1. The first could be obtained by LD 1 0, the second by LD 1 1, and so on. If the second parameter is the address of the actual parameter, then LD I1 1 obtains its value. Before returning, SIMPL increments XR1 past the n parameters with an instruction such as MDX 1 n so as to place the right value at RETN+1.
A LIBF routine that declined to restore the original value of XR1 could omit the above steps and return with a simple B 1 n to skip n in-line parameters. However, such a routine can not be called by other LIBF routines because it disrupts the caller's use of XR1 for access to its own parameters and return address.
The complexity of LIBF saves memory for subprograms that are frequently called.[33]: p.24 The LIBF linkage requires one word per invocation, plus three words for the transfer vector entry and extra code in the routine itself, whereas the CALL linkage requires two words per invocation because most CALLs will be to an address beyond the -128 to +127 word reach of the one-word opcode.
The register XR3 must point to the transfer vector entries for the library routines rather than a dispatch table of only their addresses, because the latter would require that LIBF routines be called with an indirect BSI instruction. These instructions are two words long, so such a design would negate the code size savings of LIBF. The eight-bit limit for the disp field of the one-word instruction code limits usage of LIBF routines to no more than 85 distinct entries (the 256 possible displacement values divided by three words per transfer vector entry).
The previous sections show that code and data are intermingled. It is common in 1130 programming to modify the address fields of instructions and, in fact, to modify entire instructions.
The Fortran compiler produces self-modifying code when generating code for any subprograms (subroutines or functions) that have parameters. The compiler builds a table of every location where the subprogram references one of its parameters, and compiles as the first instruction in the body of the subprogram a call to a subprogram called SUBIN that uses the table to modify the address field of every reference to a parameter to be the actual address of the parameter during the current invocation. SUBIN makes these patches every time the subprogram is called.
When a Fortran program calls a subprogram, the addresses of any parameters appear in-line following the call. For example, the Fortran statement CALL SIMPL(X) might compile into:
Within the subprogram, parameters could be accessed by indirect indexed addressing as shown above in Variations, so, given that XR1 has been suitably prepared, an integer parameter could be loaded into the accumulator with an instruction like this:
The compiler instead uses direct addressing. When SUBIN runs, it obtains the address of X and patches the instruction's address field to become:
The advantages of SUBIN are as follows:
The disadvantages of SUBIN are the time it requires to run and the memory required for the table of references. The size of this table is the sum of 5, the number of parameters, and the number of references; if this sum exceeds 511, compilation will fail. For subprograms with many references to a parameter, the author of the subprogram might copy the parameter into a local variable.
Modifying entire instructions was a common technique at the time. For example, although the 1130 has an OR instruction, the syntax of Fortran provides no way to write it. An integer function IOR can be defined, enabling logical OR to be part of a Fortran expression such as:
The Fortran compiler places the addresses of I and J in-line and expects the result in the accumulator. Using IOR(I,J) in a Fortran expression compiles the following four words:
In fact, the assembler IOR function does not compute I or J at all. Instead, it replaces the above four words with the following:
After performing that transformation, it does not return past the end of the four-word block (which it had just modified). Instead, it branches to the exact address from which it had been called originally. The BSI instruction is no longer there; what is now there is the two instructions it has just written. They combine the two integers with the machine-language OR instruction and leave the result in the accumulator, as required.
The call to IOR and the transformation of the four-word block happens at most once per program run. If the Fortran line illustrated above is executed again, it runs faster than it did the first time. Similar functions could be devised for other useful operations.
A function that self-modifies, as IOR does, can not be used in a Fortran subprogram on any of the parameters to that subprogram (though it could be used to combine local variables) because it is incompatible with the SUBIN subprogram discussed above. IOR's transformation of its four-word calling sequence, shown above, moves the location of the address of variable I. On subsequent calls to the Fortran subprogram, the table of references to parameters would be in error and SUBIN would patch the wrong word, in this case placing the new address of I over the OR operation code.
1130 FORTRAN offers two floating point formats: a 32-bit "standard precision" format and a 40-bit "extended precision" format.
Standard precision format contains a 24-bit two's complement significand while extended precision utilizes a 32-bit two's complement significand. This format makes full use of the CPU's 32-bit integer operations. The extended format occupies three 16-bit words, with the high-order eight bits of the first word unused. The characteristic in both formats is an 8-bit field containing the power of two biased by 128. Floating-point arithmetic operations are performed by software.[34]
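A small Python sketch of decoding the standard-precision format, assuming the 24-bit two's-complement significand occupies the high-order bits and is read as a binary fraction in [-1, 1); this reading is inferred from the description above, not taken from the 1130 manuals:

def decode_standard_precision(bits32):
    """bits32: the 32-bit pattern, significand in the high 24 bits, characteristic in the low 8."""
    significand = bits32 >> 8
    characteristic = bits32 & 0xFF
    if significand & 0x800000:               # negative two's-complement value
        significand -= 1 << 24
    fraction = significand / float(1 << 23)  # scale to a fraction in [-1, 1)
    return fraction * 2.0 ** (characteristic - 128)

# 0.5 encoded as fraction 0.5 (0x400000) with an unbiased exponent of 0 (characteristic 128)
print(decode_standard_precision((0x400000 << 8) | 128))   # 0.5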
The *EXTENDED PRECISION compiler option card tells the FORTRAN compiler to use 40 bits instead of 32 bits for all floating point data; there is no provision for mixing formats.
Data to be manipulated and the instructions that manipulate them have to reside together in core memory. The amount of installed memory (from 4,096 to 32,768 words) is a key limitation. Fortran provides several techniques to write large programs despite this limitation.
Fortran lets any subprogram be designated as "LOCAL" (Load-on-Call). Each LOCAL subprogram is an overlay; it is part of the disk-resident executable program but is only loaded into core memory (if not already there) during the time it is called. So, for example, six LOCAL subprograms would require only as much core memory as the largest, rather than the total amount for all six. However, none of the six can invoke another, either directly or through intermediary subprograms.
An entire Fortran program can pass control to a subsequent phase, exiting to the Supervisor with an instruction to load the follow-on phase into core memory. A large program might be split into three parts, separately compiled, called PART1, PART2, and PART3. Execution is started by // XEQ PART1 and at a suitable point, PART1 would execute the Fortran statement CALL LINK(PART2) and so forth. The name of the successor program in the CALL can not be variable, but program logic can govern whether control is transferred to another phase, and which CALL LINK statement is executed. As mentioned above, the Fortran compiler itself was written this way, with each phase of compilation achieved by a separate program.
Programs, such as Fortran programs, reside at low core memory addresses (just above the Supervisor). Fortran allocates space at the highest addresses for any variables and arrays declared COMMON. If a follow-on phase of the program contains a corresponding COMMON declaration, then information in this common area can be shared among phases. Phases could omit the COMMON declaration without problem, provided those phases were not so large as to have their program code invade the common area. COMMON storage not only shares data between phases; lower-memory COMMON variables can be used to pass data among a main program and subprograms within a single phase, though the data could be lost on moving to the next phase.
The examples can be executed on the IBM 1130 emulator available at IBM1130.org.
The following listing shows a card deck that compiles and runs an assembler program that lists a deck of cards to the line printer.
In this job, the assembler leaves the result of its assembly in the temporary area of the system disk, and the XEQ command executes the content of the temporary area. The odd-looking END START has two meanings: end of assembler source, and the name of the entry point of the routine, which has the label START.
Assembler source starts with column 21 of the card, not column one. In systems without a disk drive, the assembler would punch code into the start of the card just read (the card reader was actually a reader-punch, with the punch station after the read station) and then read the next card. To handle forward branches and the like, the assembler's second pass literally involved a second pass of the cards through the reader/punch. If source changes were needed the programmer would duplicate the cards to obtain a deck with columns 1-20 blank ready for the next run through the assembler.
By convention, buffers are preceded by a word count. The DC (Define Constant) assembles a count word and the following BSS (Block Started by Symbol) reserves the required number of words for the buffer. The card buffer requires 80 words, one for each card column. Driver CARD0 reads each card column literally, using 12 of the 16 bits in the buffer word, with a bit set to on for each hole punched in the corresponding row for that column. The pattern of punches typically describes a text character using the Hollerith code. The console keyboard also gives input to the program in the Hollerith code, the only case of two devices using the same character encoding.
The printer routine, however, works with text in 8-bit EBCDIC with two characters per word, requiring a 40-word buffer. The program uses library routine ZIPCO to perform the conversion. Despite appearances, the statement CALL HLEBC is not executed because HLEBC is not a subroutine but an IBM-supplied Hollerith-to-EBCDIC conversion table. The CALL statement provides the address of the table to ZIPCO and ensures that the linking loader includes the table in the program; thus it is the fifth parameter to ZIPCO, though one occupying two words of storage: the BSI operation code word for the CALL is unused and thus usually wasted, but the second word of the expansion of CALL HLEBC is the address of the HLEBC table needed by ZIPCO. After the conversion, the program sends the converted output, now in buffer PBUFF, to the printer through driver PRNT1. Again, the program loops until the printer driver reports completion, then the program reads the next card.
This example contains no code to decide when to stop. A more complete program would check for cards that begin with //, which denotes the start of the next job. To stop the card reader as soon as possible, a program could check for the Hollerith code of / before even converting the card to EBCDIC.
The call to CARD0 to read a card initiates that operation and immediately returns to the caller, which could proceed with other activity. However, the example program makes no attempt to overlap input and output using buffers even though it has two separate work areas; it simply loops back to CIMP to test afresh. After CARD0 has sensed the card reader's operation-complete interrupt, it returns one word further on, thus skipping the jump back to CIMP and leaving the loop.
The example routines do not run the I/O devices at top speed. Notably, the card reader, only a few milliseconds after reporting completion on reading a card, will commence its stop sequence, after which a new read command will have to wait to initiate another read cycle. The IBM 1442 reader could read 400 cards/minute at full speed, but just a little hesitancy in the read commands would halve its throughput or worse. A Fortran program could not complete even the simplest input processing in time, and so could not read cards at full speed. One common Fortran DO loop to read cards made the motor stop and start so frequently as to accelerate wear. With buffering, the card reader control could be overlapped with processing, and the reader could be run at full speed through large data decks, but memory for the more complex program and for buffers was often at a premium.
Even with assembler and double buffering, a program to list a deck of cards from the IBM 2501 reader (1,000 cards/minute) on the line printer could not keep up, as the translation from card hole patterns to EBCDIC for the printer as done by EBPRT was too slow; the more complex ZIPCO and HLEBC were needed instead, as in the example.
The following image shows a simple APL\1130 session, performed via the 1130 simulator available from IBM1130.org. The session shows a sign-on, addition of the integers 1 to 100, generation of an addition table for the integers 1..5, and a sign-off.
In the same year as the 1130's introduction, Digital Equipment Corporation introduced the smaller, cheaper, and better-selling 12-bit PDP-8, recognized as the first successful minicomputer.
... I pounded the doors at the local IBM sales office until a salesman took pity on me. After we chatted for a while, he handed me a Fortran [manual]. I'm sure he gave it to me thinking, "I'll never hear from this kid again." I returned the following week saying, "This is really cool. I've read the whole thing and have written a small program. Where can I find a computer?" The fellow, to my delight, found me programming time on an IBM 1130 on weekends and late-evening hours. That was my first programming experience, and I must thank that anonymous IBM salesman for launching my career. Thank you, IBM.
The system was an IBM 1130 computer, a machine the size of a desk with 8 KB of main memory, a 512 KB disk drive, a Teletype CX paper tape reader and BRPE paper tape punch, and a Photon 713 photomechanical typesetter. The assignment was my first experience with managing a machine-readable document database: I learned to roll the punched paper tape carefully so that it could be stored neatly in cylindrical waste paper baskets. In the meantime, though I didn't know about it, the roots of generalized markup were being planted. Historically, electronic manuscripts contained control codes or macros that caused the document to be formatted in a particular way ("specific coding"). In contrast, generic coding, which began in the late 1960s, uses descriptive tags (for example, "heading", rather than "format-17").
Out of an estimated 10,000 systems produced, the following are known to exist as of 2025:
Speculation on why the product was given the number 1130 centered on the following possibilities:
Others have speculated that the existence of the IBM 1130 explains why no computer designated "11/30" ever appeared in the PDP-11 family of machines.[54]
|
https://en.wikipedia.org/wiki/IBM_1130#Code_modification
|
In computer programming, homoiconicity (from the Greek words homo- meaning "the same" and icon meaning "representation") is an informal property of some programming languages. A language is homoiconic if a program written in it can be manipulated as data using the language.[1] The program's internal representation can thus be inferred just by reading the program itself. This property is often summarized by saying that the language treats code as data. The informality of the property arises from the fact that, strictly, this applies to almost all programming languages. No consensus exists on a precise definition of the property.[2][3]
In a homoiconic language, the primary representation of programs is also a data structure in a primitive type of the language itself.[1] This makes metaprogramming easier than in a language without this property: reflection in the language (examining the program's entities at runtime) depends on a single, homogeneous structure, and it does not have to handle several different structures that would appear in a complex syntax. Homoiconic languages typically include full support of syntactic macros, allowing the programmer to express transformations of programs in a concise way.
A commonly cited example is Lisp, which was created to allow for easy list manipulations and where the structure is given by S-expressions that take the form of nested lists, and can be manipulated by other Lisp code.[4] Other examples are the programming languages Clojure (a contemporary dialect of Lisp), Rebol (also its successor Red), Refal, Prolog, and possibly Julia (see the section “Implementation methods” for more details).
The term first appeared in connection with the TRAC programming language, developed by Calvin Mooers:[5]
One of the main design goals was that the input script of TRAC (what is typed in by the user) should be identical to the text which guides the internal action of the TRAC processor. In other words, TRAC procedures should be stored in memory as a string of characters exactly as the user typed them at the keyboard. If the TRAC procedures themselves evolve new procedures, these new procedures should also be stated in the same script. The TRAC processor in its action interprets this script as its program. In other words, the TRAC translator program (the processor) effectively converts the computer into a new computer with a new program language -- the TRAC language. At any time, it should be possible to display program or procedural information in the same form as the TRAC processor will act upon it during its execution. It is desirable that the internal character code representation be identical to, or very similar to, the external code representation. In the present TRAC implementation, the internal character representation is based upon ASCII. Because TRAC procedures and text have the same representation inside and outside the processor, the term homoiconic is applicable, from homo meaning the same, and icon meaning representation.
The last sentence above is annotated with footnote 4, which gives credit for the origin of the term:[a]
Following suggestion of McCullough W. S., based upon terminology due to Peirce, C. S.
The researchers implicated in this quote might be neurophysiologist and cybernetician Warren Sturgis McCulloch (note the difference in the surname from the note) and philosopher, logician and mathematician Charles Sanders Peirce.[2] Peirce indeed used the term "icon" in his semiotic theory. According to Peirce, there are three kinds of sign in communication: the icon, the index and the symbol. The icon is the simplest representation: an icon physically resembles that which it denotes.
Alan Kay used and possibly popularized the term "homoiconic" through his use of the term in his 1969 PhD thesis:[7]
A notable group of exceptions to all the previous systems are Interactive LISP [...] and TRAC. Both are functionally oriented (one list, the other string), both talk to the user with one language, and both are "homoiconic" in that their internal and external representations are essentially the same. They both have the ability to dynamically create new functions which may then be elaborated at the users's pleasure.
Their only great drawback is that programs written in them look like King Burniburiach's letter to the Sumerians done in Babylonian cuniform! [...]
One advantage of homoiconicity is that extending the language with new concepts typically becomes simpler, as data representing code can be passed between the meta and base layer of the program. The abstract syntax tree of a function may be composed and manipulated as a data structure in the meta layer, and then evaluated. It can be much easier to understand how to manipulate the code since it can be more easily understood as simple data (since the format of the language itself is as a data format).
A typical demonstration of homoiconicity is the meta-circular evaluator.
All Von Neumann architecture systems, which include the vast majority of general-purpose computers today, can implicitly be described as homoiconic due to the way that raw machine code executes in memory, the data type being bytes in memory. However, this feature can also be abstracted to the programming language level.
Languages such as Lisp and its dialects,[8] such as Scheme,[9] Clojure, and Racket employ S-expressions to achieve homoiconicity, and are considered the "purest" forms of homoiconicity, as these languages use the same representation for both data and code.
Other languages provide data structures for easily and efficiently manipulating code. Notable examples of this weaker form of homoiconicity include Julia, Nim, and Elixir.
Languages often considered to be homoiconic include:
Lisp uses S-expressions as an external representation for data and code. S-expressions can be read with the primitive Lisp function READ. READ returns Lisp data: lists, symbols, numbers, strings. The primitive Lisp function EVAL uses Lisp code represented as Lisp data, computes side-effects and returns a result. The result will be printed by the primitive function PRINT, which creates an external S-expression from Lisp data.
Lisp data, a list using different data types: (sub)lists, symbols, strings and integer numbers.
Lisp code. The example uses lists, symbols and numbers.
Create above expression with the primitive Lisp function LIST and set the variable EXPRESSION to the result
Change the COS term to SIN
Evaluate the expression
Print the expression to a string
Read the expression from a string
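The Lisp listings for these steps were not carried over here. The following Python sketch is only an analogue of the session just described, using nested lists in place of S-expressions, a tiny evaluator in place of EVAL, and repr/ast.literal_eval in place of PRINT and READ; Python itself is not usually described as homoiconic.

import ast, math

expression = ["*", ["sin", 1.1], ["cos", 2.03]]   # hypothetical expression built as plain data

expression[2][0] = "sin"                          # change the COS term to SIN by editing the data

def evaluate(expr):                               # a minimal stand-in for EVAL
    if isinstance(expr, list):
        op, *args = expr
        funcs = {"*": lambda x, y: x * y, "sin": math.sin, "cos": math.cos}
        return funcs[op](*[evaluate(a) for a in args])
    return expr

print(evaluate(expression))                       # evaluate the expression

text = repr(expression)                           # print the expression to a string
print(text)

print(ast.literal_eval(text) == expression)       # read the expression back from the string: True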
On line 4 we create a new clause. The operator :- separates the head and the body of a clause. With assert/1* we add it to the existing clauses (add it to the "database"), so we can call it later. In other languages we would call it "creating a function during runtime". We can also remove clauses from the database with abolish/1, or retract/1.
* The number after the clause's name is the number of arguments it can take. It is also called arity.
We can also query the database to get the body of a clause:
call is analogous to Lisp's eval function.
The concept of treating code as data and the manipulation and evaluation thereof can be demonstrated very neatly in Rebol. (Rebol, unlike Lisp, does not require parentheses to separate expressions).
The following is an example of code in Rebol (Note that >> represents the interpreter prompt; spaces between some elements have been added for readability):
(repeat is in fact a built-in function in Rebol and is not a language construct or keyword).
By enclosing the code in square brackets, the interpreter does not evaluate it, but merely treats it as a block containing words:
This block has the type block! and can furthermore be assigned as the value of a word by using what appears to be a syntax for assignment, but is actually understood by the interpreter as a special type (set-word!) and takes the form of a word followed by a colon:
The block can still be interpreted by using the do function provided in Rebol (similar to eval in Lisp).
It is possible to interrogate the elements of the block and change their values, thus altering the behavior of the code if it were to be evaluated:
|
https://en.wikipedia.org/wiki/Homoiconicity
|
The PCASTL (an acronym for Parent and Childset Accessible Syntax Tree Language) is an interpreted high-level programming language. It was created in 2008 by Philippe Choquette.[1] The PCASTL is designed to ease the writing of self-modifying code. The language has reserved words parent and childset to access the nodes of the syntax tree of the currently written code.[2]
The "Hello world program" is quite simple:
or
will do the same.
The syntax of PCASTL is derived from programming languages C and R. The source of R version 2.5.1 has been studied to write the grammar and the lexer used in the PCASTL interpreter.
Like in R, statements can, but do not have to, be separated by semicolons.[3] Like in R, a variable can change type in a session. Like in C and R, PCASTL uses balanced brackets ({ and }) to make blocks.
Operators found in PCASTL have the same precedence and associativity as their counterparts in C.[2][4] for loops are defined like in C. ++ and -- operators are used like in C to increment or decrement a variable before or after it is used in its expression.
An example of PCASTL using the for reserved word and the ++ operator:
Functions and comments in PCASTL are defined like in R:
Those reserved words can only be written lowercase and will not be recognized otherwise. The parent reserved word gives a reference to the parent node in the syntax tree of the code where the word is placed. In the following code, the parent node is the operator =.
The variable "a" will hold a reference to the = node. The following code shows how to get references to the two child nodes of the operator = with the childset reserved word.
To display the value of "a", some ways are given in this example:
In the following code: we assign a code segment to the right child of the = node, we execute the = node a second time and we call the newly defined function.
|
https://en.wikipedia.org/wiki/PCASTL
|
A quine is a computer program that takes no input and produces a copy of its own source code as its only output. The standard terms for these programs in the computability theory and computer science literature are "self-replicating programs", "self-reproducing programs", and "self-copying programs".
A quine is a fixed point of an execution environment, when that environment is viewed as a function transforming programs into their outputs. Quines are possible in any Turing-complete programming language, as a direct consequence of Kleene's recursion theorem. For amusement, programmers sometimes attempt to develop the shortest possible quine in any given programming language.
The name "quine" was coined by Douglas Hofstadter, in his popular 1979 science book Gödel, Escher, Bach, in honor of philosopher Willard Van Orman Quine (1908–2000), who made an extensive study of indirect self-reference, and in particular for the following paradox-producing expression, known as Quine's paradox:
"Yields falsehood when preceded by its quotation" yields falsehood when preceded by its quotation.
John von Neumann theorized about self-reproducing automata in the 1940s. Later, Paul Bratley and Jean Millo's article "Computer Recreations: Self-Reproducing Automata" discussed them in 1972.[1] Bratley first became interested in self-reproducing programs after seeing the first known such program written in Atlas Autocode at Edinburgh in the 1960s by the University of Edinburgh lecturer and researcher Hamish Dewar.
The "download source" requirement of the GNU Affero General Public License is based on the idea of a quine.[2]
In general, the method used to create a quine in any programming language is to have, within the program, two pieces: (a) code used to do the actual printing and (b) data that represents the textual form of the code. The code functions by using the data to print the code (which makes sense since the data represents the textual form of the code), but it also uses the data, processed in a simple way, to print the textual representation of the data itself.
Here are three small examples in Python3:
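The original listings are not reproduced here; one possible Python 3 example of this code-plus-data pattern, using the %r/%% formatting trick, is:

s = 's = %r\nprint(s %% s)'
print(s % s)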
The following Java code demonstrates the basic structure of a quine.
The source code contains a string array of itself, which is output twice, once inside quotation marks.
This code was adapted from an original post from c2.com, where the author, Jason Wilson, posted it as a minimalistic version of a Quine, without Java comments.[3]
Thanks to the new text blocks feature in Java 15 (or newer), a more readable and simpler version is possible:[4]
The same idea is used in the following SQL quine:
Some programming languages have the ability to evaluate a string as a program. Quines can take advantage of this feature. For example, this Ruby quine:
Lua can do:
In Python 3.8:
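The original listing is not reproduced here; one possible example that evaluates a string as a program, using exec together with the assignment expression introduced in Python 3.8, is:

exec(s:='print("exec(s:=%r)"%s)')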
In many functional languages, including Scheme and other Lisps, and interactive languages such as APL, numbers are self-evaluating. In TI-BASIC, if the last line of a program returns a value, the returned value is displayed on the screen. Therefore, in such languages a program consisting of only a single digit results in a 1-byte quine. Since such code does not construct itself, this is often considered cheating.
In some languages, particularly scripting languages but also C, an empty source file is a fixed point of the language, being a valid program that produces no output.[a] Such an empty program, submitted as "the world's smallest self reproducing program", once won the "worst abuse of the rules" prize in the International Obfuscated C Code Contest.[5] The program was not actually compiled, but used cp to copy the file into another file, which could be executed to print nothing.[6]
Quines, per definition, cannot receive any form of input, including reading a file, which means a quine is considered to be "cheating" if it looks at its own source code. The following shell script is not a quine:
A shorter variant, exploiting the behaviour of shebang directives:
Other questionable techniques include making use of compiler messages; for example, in the GW-BASIC environment, entering "Syntax Error" will cause the interpreter to respond with "Syntax Error".
Quine code can also be output visually; for example, it is used to visualize the neutral zone in Yars' Revenge, along with syntactic saccharin, to obfuscate the source code.
The quine concept can be extended to multiple levels of recursion, giving rise to "ouroboros programs", or quine-relays. This should not be confused with multiquines.
This Java program outputs the source for a C++ program that outputs the original Java code.
Such programs have been produced with various cycle lengths:
David Madore, creator of Unlambda, describes multiquines as follows:[16]
"A multiquine is a set of r different programs (in r different languages – without this condition we could take them all equal to a single quine), each of which is able to print any of the r programs (including itself) according to the command line argument it is passed. (Cheating is not allowed: the command line arguments must not be too long – passing the full text of a program is considered cheating)."
A multiquine consisting of 2 languages (or biquine) would be a program which:
A biquine could then be seen as a set of two programs, both of which are able to print either of the two, depending on the command line argument supplied.
Theoretically, there is no limit on the number of languages in a multiquine.
A 5-part multiquine (or pentaquine) has been produced with Python, Perl, C, NewLISP, and F#[17] and there is also a 25-language multiquine.[18]
Similar to, but unlike a multiquine, a polyglot program is a computer program or script written in a valid form of multiple programming languages or file formats by combining their syntax. A polyglot program is not required to have a self-reproducing quality, although a polyglot program can also be a quine in one or more of its possible ways to execute.
Unlike quines and multiquines, polyglot programs are not guaranteed to exist between arbitrary sets of languages as a result of Kleene's recursion theorem, because they rely on the interplay between the syntaxes, and not a provable property that one can always be embedded within another.
A radiation-hardened quine is a quine that can have any single character removed and still produces the original program with no missing character. Of necessity, such quines are much more convoluted than ordinary quines, as is seen by the following example in Ruby:[19]
Using relational programming techniques, it is possible to generate quines automatically by transforming the interpreter (or equivalently, the compiler and runtime) of a language into a relational program, and then solving for a fixed point.[20]
|
https://en.wikipedia.org/wiki/Quine_(computing)
|
Self-replication is any behavior of a dynamical system that yields construction of an identical or similar copy of itself. Biological cells, given suitable environments, reproduce by cell division. During cell division, DNA is replicated and can be transmitted to offspring during reproduction. Biological viruses can replicate, but only by commandeering the reproductive machinery of cells through a process of infection. Harmful prion proteins can replicate by converting normal proteins into rogue forms.[1] Computer viruses reproduce using the hardware and software already present on computers. Self-replication in robotics has been an area of research and a subject of interest in science fiction. Any self-replicating mechanism which does not make a perfect copy (mutation) will experience genetic variation and will create variants of itself. These variants will be subject to natural selection, since some will be better at surviving in their current environment than others and will out-breed them.
Early research by John von Neumann[2] established that replicators have several parts:
Exceptions to this pattern may be possible, although almost all known examples adhere to it. Scientists have come close to constructing RNA that can be copied in an "environment" that is a solution of RNA monomers and transcriptase, but such systems are more accurately characterized as "assisted replication" than "self-replication". In 2021 researchers succeeded in constructing a system with sixteen specially designed DNA sequences. Four of these can be linked together (through base pairing) in a certain order following a template of four already-linked sequences, by changing the temperature up and down. The number of template copies is thus increased in each cycle. No external agent such as an enzyme is needed, but the system must be supplied with a reservoir of the sixteen DNA sequences.[3]
The simplest possible case is that only a genome exists. Without some specification of the self-reproducing steps, a genome-only system is probably better characterized as something like a crystal.
Self-replication is a fundamental feature of life. It was proposed that self-replication emerged in the evolution of life when a molecule similar to a double-stranded polynucleotide (possibly like RNA) dissociated into single-stranded polynucleotides and each of these acted as a template for synthesis of a complementary strand producing two double-stranded copies.[4] In a system such as this, individual duplex replicators with different nucleotide sequences could compete with each other for available mononucleotide resources, thus initiating natural selection for the most "fit" sequences.[4] Replication of these early forms of life was likely highly inaccurate, producing mutations that influenced the folding state of the polynucleotides, thus affecting the propensities for strand association (promoting stability) and disassociation (allowing genome replication). The evolution of order in living systems has been proposed to be an example of a fundamental order generating principle that also applies to physical systems.[5]
Recent research[6] has begun to categorize replicators, often based on the amount of support they require.
The design space for machine replicators is very broad. A comprehensive study[7] to date by Robert Freitas and Ralph Merkle has identified 137 design dimensions grouped into a dozen separate categories, including: (1) Replication Control, (2) Replication Information, (3) Replication Substrate, (4) Replicator Structure, (5) Passive Parts, (6) Active Subunits, (7) Replicator Energetics, (8) Replicator Kinematics, (9) Replication Process, (10) Replicator Performance, (11) Product Structure, and (12) Evolvability.
In computer science a quine is a self-reproducing computer program that, when executed, outputs its own code. For example, a quine in the Python programming language is:
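The listing itself is not reproduced here; one possible Python quine, built on the usual code-plus-data idea, is:

a='a=%r;print(a%%a)';print(a%a)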
A more trivial approach is to write a program that will make a copy of any stream of data that it is directed to, and then direct it at itself. In this case the program is treated as both executable code, and as data to be manipulated. This approach is common in most self-replicating systems, including biological life, and is simpler as it does not require the program to contain a complete description of itself.
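A minimal Python sketch of this approach, assuming the script is saved as copy.py and is then given its own source as input:

# copy.py -- a generic stream copier; it reproduces itself only when invoked as:
#   python copy.py < copy.py
import sys
sys.stdout.write(sys.stdin.read())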
In many programming languages an empty program is legal, and executes without producing errors or other output. The output is thus the same as the source code, so the program is trivially self-reproducing.
In geometry a self-replicating tiling is a tiling pattern in which several congruent tiles may be joined together to form a larger tile that is similar to the original. This is an aspect of the field of study known as tessellation. The "sphinx" hexiamond is the only known self-replicating pentagon.[8] For example, four such concave pentagons can be joined together to make one with twice the dimensions.[9] Solomon W. Golomb coined the term rep-tiles for self-replicating tilings.
In 2012, Lee Sallows identified rep-tiles as a special instance of a self-tiling tile set or setiset. A setiset of order n is a set of n shapes that can be assembled in n different ways so as to form larger replicas of themselves. Setisets in which every shape is distinct are called 'perfect'. A rep-n rep-tile is just a setiset composed of n identical pieces.
One form of natural self-replication that is not based on DNA or RNA occurs in clay crystals.[10] Clay consists of a large number of small crystals, and clay is an environment that promotes crystal growth. Crystals consist of a regular lattice of atoms and are able to grow if e.g. placed in a water solution containing the crystal components; automatically arranging atoms at the crystal boundary into the crystalline form. Crystals may have irregularities where the regular atomic structure is broken, and when crystals grow, these irregularities may propagate, creating a form of self-replication of crystal irregularities. Because these irregularities may affect the probability of a crystal breaking apart to form new crystals, crystals with such irregularities could even be considered to undergo evolutionary development.
It is a long-term goal of some engineering sciences to achieve a clanking replicator, a material device that can self-replicate. The usual reason is to achieve a low cost per item while retaining the utility of a manufactured good. Many authorities say that in the limit, the cost of self-replicating items should approach the cost-per-weight of wood or other biological substances, because self-replication avoids the costs of labor, capital and distribution in conventional manufactured goods.
A fully novel artificial replicator is a reasonable near-term goal.
A NASA study recently placed the complexity of a clanking replicator at approximately that of Intel's Pentium 4 CPU.[11] That is, the technology is achievable with a relatively small engineering group in a reasonable commercial time-scale at a reasonable cost.
Given the currently keen interest in biotechnology and the high levels of funding in that field, attempts to exploit the replicative ability of existing cells are timely, and may easily lead to significant insights and advances.
A variation of self replication is of practical relevance in compiler construction, where a similar bootstrapping problem occurs as in natural self replication. A compiler (phenotype) can be applied on the compiler's own source code (genotype) producing the compiler itself. During compiler development, a modified (mutated) source is used to create the next generation of the compiler. This process differs from natural self-replication in that the process is directed by an engineer, not by the subject itself.
An activity in the field of robots is the self-replication of machines. Since all robots (at least in modern times) have a fair number of the same features, a self-replicating robot (or possibly a hive of robots) would need to do the following:
On a nanoscale, assemblers might also be designed to self-replicate under their own power. This, in turn, has given rise to the "grey goo" version of Armageddon, as featured in the science fiction novels Bloom and Prey.
The Foresight Institute has published guidelines for researchers in mechanical self-replication.[12] The guidelines recommend that researchers use several specific techniques for preventing mechanical replicators from getting out of control, such as using a broadcast architecture.
For a detailed article on mechanical reproduction as it relates to the industrial age, see mass production.
Research has occurred in the following areas:
The goal of self-replication in space systems is to exploit large amounts of matter with a low launch mass. For example, an autotrophic self-replicating machine could cover a moon or planet with solar cells, and beam the power to the Earth using microwaves. Once in place, the same machinery that built itself could also produce raw materials or manufactured objects, including transportation systems to ship the products. Another model of self-replicating machine would copy itself through the galaxy and universe, sending information back.
In general, since these systems are autotrophic, they are the most difficult and complex known replicators. They are also thought to be the most hazardous, because they do not require any inputs from human beings in order to reproduce.
A classic theoretical study of replicators in space is the 1980 NASA study of autotrophic clanking replicators, edited by Robert Freitas.[15]
Much of the design study was concerned with a simple, flexible chemical system for processing lunar regolith, and the differences between the ratio of elements needed by the replicator, and the ratios available in regolith. The limiting element was chlorine, an essential element to process regolith for aluminium. Chlorine is very rare in lunar regolith, and a substantially faster rate of reproduction could be assured by importing modest amounts.
The reference design specified small computer-controlled electric carts running on rails. Each cart could have a simple hand or a small bull-dozer shovel, forming a basic robot.
Power would be provided by a "canopy" of solar cells supported on pillars. The other machinery could run under the canopy.
A "casting robot" would use a robotic arm with a few sculpting tools to make plaster molds. Plaster molds are easy to make, and make precise parts with good surface finishes. The robot would then cast most of the parts either from non-conductive molten rock (basalt) or purified metals. An electric oven melted the materials.
A speculative, more complex "chip factory" was specified to produce the computer and electronic systems, but the designers also said that it might prove practical to ship the chips from Earth as if they were "vitamins".
Nanotechnologists in particular believe that their work will likely fail to reach a state of maturity until human beings design a self-replicating assembler of nanometer dimensions.[1]
These systems are substantially simpler than autotrophic systems, because they are provided with purified feedstocks and energy. They do not have to reproduce them. This distinction is at the root of some of the controversy about whether molecular manufacturing is possible or not. Many authorities who find it impossible are clearly citing sources for complex autotrophic self-replicating systems. Many of the authorities who find it possible are clearly citing sources for much simpler self-assembling systems, which have been demonstrated. In the meantime, a Lego-built autonomous robot able to follow a pre-set track and assemble an exact copy of itself, starting from four externally provided components, was demonstrated experimentally in 2003.[2]
Merely exploiting the replicative abilities of existing cells is insufficient, because of limitations in the process of protein biosynthesis (see also the listing for RNA).
What is required is the rational design of an entirely novel replicator with a much wider range of synthesis capabilities.
In 2011, New York University scientists developed artificial structures that can self-replicate, a process that has the potential to yield new types of materials. They demonstrated that it is possible to replicate not just molecules like cellular DNA or RNA, but discrete structures that could in principle assume many different shapes, have many different functional features, and be associated with many different types of chemical species.[16][17]
For a discussion of other chemical bases for hypothetical self-replicating systems, see alternative biochemistry.
|
https://en.wikipedia.org/wiki/Self-replication
|
In computer science, reflective programming or reflection is the ability of a process to examine, introspect, and modify its own structure and behavior.[1]
The earliest computers were programmed in their native assembly languages, which were inherently reflective, as these original architectures could be programmed by defining instructions as data and using self-modifying code. As the bulk of programming moved to higher-level compiled languages such as ALGOL, COBOL, Fortran, Pascal, and C, this reflective ability largely disappeared until new programming languages with reflection built into their type systems appeared.[citation needed]
Brian Cantwell Smith's 1982 doctoral dissertation introduced the notion of computational reflection in procedural programming languages and the notion of the meta-circular interpreter as a component of 3-Lisp.[2][3]
Reflection helps programmers make generic software libraries to display data, process different formats of data, perform serialization and deserialization of data for communication, or do bundling and unbundling of data for containers or bursts of communication.
Effective use of reflection almost always requires a plan: A design framework, encoding description, object library, a map of a database or entity relations.
Reflection makes a language more suited to network-oriented code. For example, it assists languages such as Java to operate well in networks by enabling libraries for serialization, bundling and varying data formats. Languages without reflection such as C are required to use auxiliary compilers for tasks like Abstract Syntax Notation to produce code for serialization and bundling.
Reflection can be used for observing and modifying program execution at runtime. A reflection-oriented program component can monitor the execution of an enclosure of code and can modify itself according to a desired goal of that enclosure. This is typically accomplished by dynamically assigning program code at runtime.
In object-oriented programming languages such as Java, reflection allows inspection of classes, interfaces, fields and methods at runtime without knowing the names of the interfaces, fields, methods at compile time. It also allows instantiation of new objects and invocation of methods.
Reflection is often used as part of software testing, such as for the runtime creation/instantiation of mock objects.
Reflection is also a key strategy for metaprogramming.
In some object-oriented programming languages such as C# and Java, reflection can be used to bypass member accessibility rules. For C#-properties this can be achieved by writing directly onto the (usually invisible) backing field of a non-public property. It is also possible to find non-public methods of classes and types and manually invoke them. This works for project-internal files as well as external libraries such as .NET's assemblies and Java's archives.
A language that supports reflection provides a number of features available at runtime that would otherwise be difficult to accomplish in a lower-level language. Some of these features are the abilities to:
These features can be implemented in different ways. InMOO, reflection forms a natural part of everyday programming idiom. When verbs (methods) are called, various variables such asverb(the name of the verb being called) andthis(the object on which the verb is called) are populated to give the context of the call. Security is typically managed by accessing the caller stack programmatically: Sincecallers() is a list of the methods by which the current verb was eventually called, performing tests oncallers()[0] (the command invoked by the original user) allows the verb to protect itself against unauthorised use.
Compiled languages rely on their runtime system to provide information about the source code. A compiledObjective-Cexecutable, for example, records the names of all methods in a block of the executable, providing a table to correspond these with the underlying methods (or selectors for these methods) compiled into the program. In a compiled language that supports runtime creation of functions, such asCommon Lisp, the runtime environment must include a compiler or an interpreter.
Reflection can be implemented for languages without built-in reflection by using a program transformation system to define automated source-code changes.
Reflection may allow a user to create unexpected control flow paths through an application, potentially bypassing security measures. This may be exploited by attackers.[4] Historical vulnerabilities in Java caused by unsafe reflection allowed code retrieved from potentially untrusted remote machines to break out of the Java sandbox security mechanism. A large-scale study of 120 Java vulnerabilities in 2013 concluded that unsafe reflection is the most common vulnerability in Java, though not the most exploited.[5]
The following code snippets create an instance foo of class Foo and invoke its method PrintHello. For each programming language, normal and reflection-based call sequences are shown.
The following is an example in Common Lisp using the Common Lisp Object System:
The following is an example in C#:
This Delphi and Object Pascal example assumes that a TFoo class has been declared in a unit called Unit1:
The following is an example in eC:
The following is an example in Go:
The following is an example in Java:
The following is an example in JavaScript:
The following is an example in Julia:
The following is an example in Objective-C, implying either the OpenStep or Foundation Kit framework is used:
The following is an example in Perl:
The following is an example in PHP:[6]
The following is an example in Python:
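The original listing is not reproduced here; a minimal sketch consistent with the description above (a class Foo whose method PrintHello is called first normally and then via reflection) could look like this:

```python
# Sketch: instantiate Foo and invoke PrintHello, with and without reflection.
class Foo:
    def PrintHello(self):
        print("Hello")

# Without reflection
obj = Foo()
obj.PrintHello()

# With reflection: resolve the class and the method by their string names
cls = globals()["Foo"]
foo = cls()
getattr(foo, "PrintHello")()
```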
The following is an example in R:
The following is an example in Ruby:
The following is an example using Xojo:
|
https://en.wikipedia.org/wiki/Reflective_programming
|
In computer programming, monkey patching is a technique used to dynamically update the behavior of a piece of code at run-time. It is used to extend or modify the runtime code of dynamic languages such as Smalltalk, JavaScript, Objective-C, Ruby, Perl, Python, Groovy, Lisp, and Lua without altering the original source code.
The term monkey patch seems to have come from an earlier term, guerrilla patch, which referred to changing code sneakily – and possibly incompatibly with other such patches – at runtime. The word guerrilla, nearly homophonous with gorilla, became monkey, possibly to make the patch sound less intimidating.[1]
An alternative etymology is that it refers to “monkeying about” with the code (messing with it).[citation needed]
Despite the name's suggestion, the "monkey patch" is sometimes the official method of extending a program. For example, web browsers such as Firefox and Internet Explorer used to encourage this, although modern browsers (including Firefox) now have an official extensions system.[2]
The definition of the term varies depending upon the community using it. In Ruby,[3] Python,[4] and many other dynamic programming languages, the term monkey patch only refers to dynamic modifications of a class or module at runtime, motivated by the intent to patch existing third-party code as a workaround to a bug or feature which does not act as desired. Other forms of modifying classes at runtime have different names, based on their different intents. For example, in Zope and Plone, security patches are often delivered using dynamic class modification, but they are called hot fixes.[citation needed]
Monkey patching is used to:
Malicious, incompetently written, and/or poorly documented monkey patches can lead to problems:
The following Python example monkey-patches the value of Pi from the standard Python math library to make it compliant with the Indiana Pi Bill.
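The original snippet is not reproduced here; a minimal sketch of the patch it describes, assuming the value of 3.2 commonly attributed to the Indiana Pi Bill, would be:

```python
import math

print(math.pi)  # 3.141592653589793

# Monkey patch: rebind the module attribute at runtime without touching the
# library's source; every subsequent reader of math.pi sees the new value.
math.pi = 3.2   # value commonly attributed to the Indiana Pi Bill

print(math.pi)  # 3.2
```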
|
https://en.wikipedia.org/wiki/Monkey_patch
|
A computer virus[1] is a type of malware that, when executed, replicates itself by modifying other computer programs and inserting its own code into those programs.[2][3] If this replication succeeds, the affected areas are then said to be "infected" with a computer virus, a metaphor derived from biological viruses.[4]
Computer viruses generally require a host program.[5] The virus writes its own code into the host program. When the program runs, the written virus program is executed first, causing infection and damage. By contrast, a computer worm does not need a host program, as it is an independent program or code chunk. Therefore, it is not restricted by the host program, but can run independently and actively carry out attacks.[6][7]
Virus writers use social engineering deceptions and exploit detailed knowledge of security vulnerabilities to initially infect systems and to spread the virus. Viruses use complex anti-detection/stealth strategies to evade antivirus software.[8] Motives for creating viruses can include seeking profit (e.g., with ransomware), desire to send a political message, personal amusement, to demonstrate that a vulnerability exists in software, for sabotage and denial of service, or simply because they wish to explore cybersecurity issues, artificial life and evolutionary algorithms.[9]
As of 2013, computer viruses caused billions of dollars' worth of economic damage each year.[10] In response, an industry of antivirus software has cropped up, selling or freely distributing virus protection to users of various operating systems.[11]
The first academic work on the theory of self-replicating computer programs was done in 1949 byJohn von Neumannwho gave lectures at theUniversity of Illinoisabout the "Theory and Organization of ComplicatedAutomata". The work of von Neumann was later published as the "Theory of self-reproducing automata". In his essay von Neumann described how a computer program could be designed to reproduce itself.[12]Von Neumann's design for a self-reproducing computer program is considered the world's first computer virus, and he is considered to be the theoretical "father" of computer virology.[13]
In 1972, Veith Risak, building directly on von Neumann's work on self-replication, published his article "Selbstreproduzierende Automaten mit minimaler Informationsübertragung" (Self-reproducing automata with minimal information exchange).[14] The article describes a fully functional virus written in assembler for a SIEMENS 4004/35 computer system. In 1980, Jürgen Kraus wrote his Diplom thesis "Selbstreproduktion bei Programmen" (Self-reproduction of programs) at the University of Dortmund.[15] In his work Kraus postulated that computer programs can behave in a way similar to biological viruses.
TheCreeper viruswas first detected onARPANET, the forerunner of theInternet, in the early 1970s.[16]Creeper was an experimental self-replicating program written by Bob Thomas atBBN Technologiesin 1971.[17]Creeper used the ARPANET to infectDECPDP-10computers running theTENEXoperating system.[18]Creeper gained access via the ARPANET and copied itself to the remote system where the message, "I'M THE CREEPER. CATCH ME IF YOU CAN!" was displayed.[19]TheReaperprogram was created to delete Creeper.[20]
In 1982, a program called "Elk Cloner" was the first personal computer virus to appear "in the wild"—that is, outside the single computer or computer lab where it was created.[21]Written in 1981 byRichard Skrenta, a ninth grader atMount Lebanon High SchoolnearPittsburgh, it attached itself to theApple DOS3.3 operating system and spread viafloppy disk.[21]On its 50th use theElk Clonervirus would be activated, infecting the personal computer and displaying a short poem beginning "Elk Cloner: The program with a personality."
In 1984,Fred Cohenfrom theUniversity of Southern Californiawrote his paper "Computer Viruses – Theory and Experiments".[22]It was the first paper to explicitly call a self-reproducing program a "virus", a term introduced by Cohen's mentorLeonard Adleman.[23]In 1987, Cohen published a demonstration that there is noalgorithmthat can perfectly detect all possible viruses.[24]Cohen's theoreticalcompression virus[25]was an example of a virus which was not malicious software (malware), but was putatively benevolent (well-intentioned). However,antivirusprofessionals do not accept the concept of "benevolent viruses", as any desired function can be implemented without involving a virus (automatic compression, for instance, is available underWindowsat the choice of the user). Any virus will by definition make unauthorised changes to a computer, which is undesirable even if no damage is done or intended. The first page ofDr Solomon's Virus Encyclopaediaexplains the undesirability of viruses, even those that do nothing but reproduce.[26][27]
An article describing "useful virus functionalities" was published by J. B. Gunn under the title "Use of virus functions to provide a virtual APL interpreter under user control" in 1984.[28] The first IBM PC compatible virus in the "wild" was a boot sector virus dubbed (c)Brain,[29] created in 1986 and released in 1987 by Amjad Farooq Alvi and Basit Farooq Alvi in Lahore, Pakistan, reportedly to deter unauthorized copying of the software they had written.[30]
The first virus to specifically targetMicrosoft Windows,WinVirwas discovered in April 1992, two years after the release ofWindows 3.0.[31]The virus did not contain anyWindows APIcalls, instead relying onDOS interrupts. A few years later, in February 1996, Australian hackers from the virus-writing crew VLAD created theBizatchvirus (also known as "Boza" virus), which was the first known virus to specifically targetWindows 95.[32]This virus attacked the new portable executable (PE) files introduced in Windows 95.[33]In late 1997 the encrypted, memory-resident stealth virusWin32.Cabanaswas released—the first known virus that targetedWindows NT(it was also able to infect Windows 3.0 and Windows 9x hosts).[34]
Evenhome computerswere affected by viruses. The first one to appear on theAmigawas a boot sector virus calledSCA virus, which was detected in November 1987.[35]By 1988, onesysopreportedly found that viruses infected 15% of the software available for download on his BBS.[36]
A computer virus generally contains three parts: the infection mechanism, which finds and infects new files, the payload, which is the malicious code to execute, and the trigger, which determines when to activate the payload.[37]
Virus phases are the stages in the life cycle of a computer virus, described by analogy with biology. This life cycle can be divided into four phases:
Computer viruses infect a variety of different subsystems on their host computers and software.[45]One manner of classifying viruses is to analyze whether they reside inbinary executables(such as.EXEor.COM files), data files (such asMicrosoft Worddocuments orPDF files), or in theboot sectorof the host'shard drive(or some combination of all of these).[46][47]
Amemory-resident virus(or simply "resident virus") installs itself as part of the operating system when executed, after which it remains inRAMfrom the time the computer is booted up to when it is shut down. Resident viruses overwriteinterrupt handlingcode or otherfunctions, and when the operating system attempts to access the target file or disk sector, the virus code intercepts the request and redirects thecontrol flowto the replication module, infecting the target. In contrast, anon-memory-resident virus(or "non-resident virus"), when executed, scans the disk for targets, infects them, and then exits (i.e. it does not remain in memory after it is done executing).[48]
Many common applications, such asMicrosoft OutlookandMicrosoft Word, allowmacroprograms to be embedded in documents or emails, so that the programs may be run automatically when the document is opened. Amacro virus(or "document virus") is a virus that is written in amacro languageand embedded into these documents so that when users open the file, the virus code is executed, and can infect the user's computer. This is one of the reasons that it is dangerous to open unexpected or suspiciousattachmentsine-mails.[49][50]While not opening attachments in e-mails from unknown persons or organizations can help to reduce the likelihood of contracting a virus, in some cases, the virus is designed so that the e-mail appears to be from a reputable organization (e.g., a major bank or credit card company).
Boot sector virusesspecifically target theboot sectorand/or theMaster Boot Record[51](MBR) of the host'shard disk drive,solid-state drive, or removable storage media (flash drives,floppy disks, etc.).[52]
Boot sector viruses are most commonly transmitted via physical media. An infected floppy disk or USB flash drive connected to the computer transfers data when the drive's VBR is read, and then modifies or replaces the existing boot code. The next time a user tries to start the desktop, the virus will immediately load and run as part of the master boot record.[53]
Email viruses are viruses that intentionally, rather than accidentally, use the email system to spread. While virus infected files may be accidentally sent asemail attachments, email viruses are aware of email system functions. They generally target a specific type of email system (Microsoft Outlookis the most commonly used), harvest email addresses from various sources, and may append copies of themselves to all email sent, or may generate email messages containing copies of themselves as attachments.[54]
To avoid detection by users, some viruses employ different kinds ofdeception. Some old viruses, especially on theDOSplatform, make sure that the "last modified" date of a host file stays the same when the file is infected by the virus. This approach does not fool antivirussoftware, however, especially those which maintain and datecyclic redundancy checkson file changes.[55]Some viruses can infect files without increasing their sizes or damaging the files. They accomplish this by overwriting unused areas of executable files. These are calledcavity viruses. For example, theCIH virus, or Chernobyl Virus, infectsPortable Executablefiles. Because those files have many empty gaps, the virus, which was 1KBin length, did not add to the size of the file.[56]Some viruses try to avoid detection by killing the tasks associated with antivirus software before it can detect them (for example,Conficker). A Virus may also hide its presence using arootkitby not showing itself on the list of systemprocessesor by disguising itself within a trusted process.[57]In the 2010s, as computers and operating systems grow larger and more complex, old hiding techniques need to be updated or replaced. Defending a computer against viruses may demand that a file system migrate towards detailed and explicit permission for every kind of file access.[citation needed]In addition, only a small fraction of known viruses actually cause real incidents, primarily because many viruses remain below the theoretical epidemic threshold.[58]
While some kinds of antivirus software employ various techniques to counter stealth mechanisms, once the infection occurs any recourse to "clean" the system is unreliable. In Microsoft Windows operating systems, theNTFS file systemis proprietary. This leaves antivirus software little alternative but to send a "read" request to Windows files that handle such requests. Some viruses trick antivirus software by intercepting its requests to the operating system. A virus can hide by intercepting the request to read the infected file, handling the request itself, and returning an uninfected version of the file to the antivirus software. The interception can occur bycode injectionof the actual operating system files that would handle the read request. Thus, an antivirus software attempting to detect the virus will either not be permitted to read the infected file, or, the "read" request will be served with the uninfected version of the same file.[59]
The only reliable method to avoid "stealth" viruses is to boot from a medium that is known to be "clear". Security software can then be used to check the dormant operating system files. Most security software relies on virus signatures, or they employheuristics.[60][61]Security software may also use a database of file "hashes" for Windows OS files, so the security software can identify altered files, and request Windows installation media to replace them with authentic versions. In older versions of Windows, filecryptographic hash functionsof Windows OS files stored in Windows—to allow file integrity/authenticity to be checked—could be overwritten so that theSystem File Checkerwould report that altered system files are authentic, so using file hashes to scan for altered files would not always guarantee finding an infection.[62]
Most modern antivirus programs try to find virus-patterns inside ordinary programs by scanning them for so-calledvirus signatures.[63]Different antivirus programs will employ different search methods when identifying viruses. If a virus scanner finds such a pattern in a file, it will perform other checks to make sure that it has found the virus, and not merely a coincidental sequence in an innocent file, before it notifies the user that the file is infected. The user can then delete, or (in some cases) "clean" or "heal" the infected file. Some viruses employ techniques that make detection by means of signatures difficult but probably not impossible. These viruses modify their code on each infection. That is, each infected file contains a different variant of the virus.[citation needed]
One method of evading signature detection is to use simple encryption to encipher (encode) the body of the virus, leaving only the encryption module and a static cryptographic key in cleartext which does not change from one infection to the next.[64] In this case, the virus consists of a small decrypting module and an encrypted copy of the virus code. If the virus is encrypted with a different key for each infected file, the only part of the virus that remains constant is the decrypting module, which would (for example) be appended to the end. In this case, a virus scanner cannot directly detect the virus using signatures, but it can still detect the decrypting module, which still makes indirect detection of the virus possible. Since these would be symmetric keys, stored on the infected host, it is entirely possible to decrypt the final virus, but this is probably not required, since self-modifying code is such a rarity that finding some may be reason enough for virus scanners to at least "flag" the file as suspicious.[citation needed] An old but compact approach is the use of an arithmetic operation such as addition or subtraction, or a logical operation such as XOR,[65] in which each byte of the virus is combined with a constant, so that the same operation only has to be repeated for decryption. It is suspicious for code to modify itself, so the code that performs the encryption/decryption may itself be part of the signature in many virus definitions.[citation needed] A simpler, older approach did not use a key: the encryption consisted only of operations with no parameters, like incrementing and decrementing, bitwise rotation, arithmetic negation, and logical NOT.[65] Some viruses, called polymorphic viruses, will employ a means of encryption inside an executable in which the virus is encrypted under certain events, such as the virus scanner being disabled for updates or the computer being rebooted.[66] This is called cryptovirology.
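As a harmless illustration of why such a scheme needs only one tiny routine, the following Python sketch (not taken from any real virus) shows that XOR with a constant key is its own inverse, so the same code serves as both encoder and decoder:

```python
# XOR with a constant byte is its own inverse: applying the same routine
# twice restores the original data, which is why a single small module can
# both "encrypt" and "decrypt" a buffer.
KEY = 0x5A

def xor_transform(data: bytes, key: int = KEY) -> bytes:
    return bytes(b ^ key for b in data)

plain = b"example payload"
encoded = xor_transform(plain)    # encode
decoded = xor_transform(encoded)  # the same call decodes
assert decoded == plain
```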
Polymorphic codewas the first technique that posed a seriousthreatto virus scanners. Just like regular encrypted viruses, a polymorphic virus infects files with an encrypted copy of itself, which is decoded by adecryptionmodule. In the case of polymorphic viruses, however, this decryption module is also modified on each infection. A well-written polymorphic virus therefore has no parts which remain identical between infections, making it very difficult to detect directly using "signatures".[67][68]Antivirus software can detect it by decrypting the viruses using anemulator, or bystatistical pattern analysisof the encrypted virus body. To enable polymorphic code, the virus has to have apolymorphic engine(also called "mutating engine" or "mutationengine") somewhere in its encrypted body. Seepolymorphic codefor technical detail on how such engines operate.[69]
Some viruses employ polymorphic code in a way that constrains the mutation rate of the virus significantly. For example, a virus can be programmed to mutate only slightly over time, or it can be programmed to refrain from mutating when it infects a file on a computer that already contains copies of the virus. The advantage of using such slow polymorphic code is that it makes it more difficult for antivirus professionals and investigators to obtain representative samples of the virus, because "bait" files that are infected in one run will typically contain identical or similar samples of the virus. This will make it more likely that the detection by the virus scanner will be unreliable, and that some instances of the virus may be able to avoid detection.
To avoid being detected by emulation, some viruses rewrite themselves completely each time they are to infect new executables. Viruses that utilize this technique are said to be inmetamorphic code. To enable metamorphism, a "metamorphic engine" is needed. A metamorphic virus is usually very large and complex. For example,W32/Simileconsisted of over 14,000 lines ofassembly languagecode, 90% of which is part of the metamorphic engine.[70][71]
Viruses cause damage by causing system failure, corrupting data, wasting computer resources, increasing maintenance costs, or stealing personal information.[10] Even though no antivirus software can uncover all computer viruses (especially new ones), computer security researchers are actively searching for new ways to enable antivirus solutions to more effectively detect emerging viruses, before they become widely distributed.[72]
Apower virusis a computer program that executes specific machine code to reach the maximumCPU power dissipation(thermal energyoutput for thecentral processing units).[73]Computer cooling apparatus are designed to dissipate power up to thethermal design power, rather than maximum power, and a power virus could cause the system to overheat if it does not have logic to stop the processor. This may cause permanent physical damage. Power viruses can be malicious, but are often suites of test software used forintegration testingand thermal testing of computer components during the design phase of a product, or for productbenchmarking.[74]
Stability test applications are similar programs which have the same effect as power viruses (high CPU usage) but stay under the user's control. They are used for testing CPUs, for example, when overclocking. A spinlock in a poorly written program may cause similar symptoms, if it lasts sufficiently long.
Different micro-architectures typically require different machine code to hit their maximum power. Examples of such machine code do not appear to be distributed in CPU reference materials.[75]
As software is often designed with security features to prevent unauthorized use of system resources, many viruses must exploit and manipulate security bugs, which are security defects in a system or application software, to spread themselves and infect other computers. Software development strategies that produce large numbers of "bugs" will generally also produce potentially exploitable "holes" or "entrances" for the virus.
To replicate itself, a virus must be permitted to execute code and write to memory. For this reason, many viruses attach themselves toexecutable filesthat may be part of legitimate programs (seecode injection). If a user attempts to launch an infected program, the virus' code may be executed simultaneously.[76]In operating systems that usefile extensionsto determine program associations (such as Microsoft Windows), the extensions may be hidden from the user by default. This makes it possible to create a file that is of a different type than it appears to the user. For example, an executable may be created and named "picture.png.exe", in which the user sees only "picture.png" and therefore assumes that this file is adigital imageand most likely is safe, yet when opened, it runs the executable on the client machine.[77]Viruses may be installed on removable media, such asflash drives. The drives may be left in a parking lot of a government building or other target, with the hopes that curious users will insert the drive into a computer. In a 2015 experiment, researchers at the University of Michigan found that 45–98 percent of users would plug in a flash drive of unknown origin.[78]
The vast majority of viruses target systems runningMicrosoft Windows. This is due to Microsoft's large market share ofdesktop computerusers.[79]The diversity of software systems on a network limits the destructive potential of viruses and malware.[a]Open-sourceoperating systems such asLinuxallow users to choose from a variety ofdesktop environments, packaging tools, etc., which means that malicious code targeting any of these systems will only affect a subset of all users. Many Windows users are running the same set of applications, enabling viruses to rapidly spread among Microsoft Windows systems by targeting the same exploits on large numbers of hosts.[80][81][82][83]
While Linux and Unix in general have always natively prevented normal users from making changes to theoperating systemenvironment without permission, Windows users are generally not prevented from making these changes, meaning that viruses can easily gain control of the entire system on Windows hosts. This difference has continued partly due to the widespread use ofadministratoraccounts in contemporary versions likeWindows XP. In 1997, researchers created and released a virus for Linux—known as "Bliss".[84]Bliss, however, requires that the user run it explicitly, and it can only infect programs that the user has the access to modify. Unlike Windows users, most Unix users do notlog inas an administrator, or"root user", except to install or configure software; as a result, even if a user ran the virus, it could not harm their operating system. The Bliss virus never became widespread, and remains chiefly a research curiosity. Its creator later posted the source code toUsenet, allowing researchers to see how it worked.[85]
Before computer networks became widespread, most viruses spread onremovable media, particularlyfloppy disks. In the early days of thepersonal computer, many users regularly exchanged information and programs on floppies. Some viruses spread by infecting programs stored on these disks, while others installed themselves into the diskboot sector, ensuring that they would be run when the user booted the computer from the disk, usually inadvertently. Personal computers of the era would attempt to boot first from a floppy if one had been left in the drive. Until floppy disks fell out of use, this was the most successful infection strategy and boot sector viruses were the most common in the "wild" for many years. Traditional computer viruses emerged in the 1980s, driven by the spread of personal computers and the resultant increase inbulletin board system(BBS),modemuse, and software sharing.Bulletin board–driven software sharing contributed directly to the spread ofTrojan horseprograms, and viruses were written to infect popularly traded software.Sharewareandbootlegsoftware were equally commonvectorsfor viruses on BBSs.[86][87]Viruses can increase their chances of spreading to other computers by infecting files on anetwork file systemor a file system that is accessed by other computers.[88]
Macro viruses have become common since the mid-1990s. Most of these viruses are written in the scripting languages for Microsoft programs such as Microsoft Word and Microsoft Excel and spread throughout Microsoft Office by infecting documents and spreadsheets. Since Word and Excel were also available for Mac OS, most could also spread to Macintosh computers. Although most of these viruses did not have the ability to send infected email messages, those that did took advantage of the Microsoft Outlook Component Object Model (COM) interface.[89][90] Some old versions of Microsoft Word allow macros to replicate themselves with additional blank lines. If two macro viruses simultaneously infect a document, the combination of the two, if also self-replicating, can appear as a "mating" of the two and would likely be detected as a virus unique from the "parents".[91]
A virus may also send aweb address linkas aninstant messageto all the contacts (e.g., friends and colleagues' e-mail addresses) stored on an infected machine. If the recipient, thinking the link is from a friend (a trusted source) follows the link to the website, the virus hosted at the site may be able to infect this new computer and continue propagating.[92]Viruses that spread usingcross-site scriptingwere first reported in 2002,[93]and were academically demonstrated in 2005.[94]There have been multiple instances of the cross-site scripting viruses in the "wild", exploiting websites such asMySpace(with the Samy worm) andYahoo!.
In 1989, the ADAPSO Software Industry Division published Dealing With Electronic Vandalism,[95] in which they followed the risk of data loss with "the added risk of losing customer confidence."[96][97][98]
Many users installantivirus softwarethat can detect and eliminate known viruses when the computer attempts todownloador run the executable file (which may be distributed as an email attachment, or onUSB flash drives, for example). Some antivirus software blocks known malicious websites that attempt to install malware. Antivirus software does not change the underlying capability of hosts to transmit viruses. Users must update their software regularly topatchsecurity vulnerabilities("holes"). Antivirus software also needs to be regularly updated to recognize the latestthreats. This is because malicioushackersand other individuals are always creating new viruses. The GermanAV-TESTInstitute publishes evaluations of antivirus software for Windows[99]and Android.[100]
Examples of Microsoft Windows antivirus and anti-malware software include the optional Microsoft Security Essentials[101] (for Windows XP, Vista and Windows 7) for real-time protection, the Windows Malicious Software Removal Tool[102] (now included with Windows (Security) Updates on "Patch Tuesday", the second Tuesday of each month), and Windows Defender (an optional download in the case of Windows XP).[103] Additionally, several capable antivirus software programs are available for free download from the Internet (usually restricted to non-commercial use).[104] Some such free programs are almost as good as commercial competitors.[105] Common security vulnerabilities are assigned CVE IDs and listed in the US National Vulnerability Database. Secunia PSI[106] is an example of software, free for personal use, that will check a PC for vulnerable out-of-date software, and attempt to update it. Ransomware and phishing scam alerts appear as press releases on the Internet Crime Complaint Center noticeboard. Ransomware is a virus that posts a message on the user's screen saying that the screen or system will remain locked or unusable until a ransom payment is made. Phishing is a deception in which the malicious individual pretends to be a friend, computer security expert, or other benevolent individual, with the goal of convincing the targeted individual to reveal passwords or other personal information.
Other commonly used preventive measures include timely operating system updates, software updates, careful Internet browsing (avoiding shady websites), and installation of only trusted software.[107]Certain browsers flag sites that have been reported to Google and that have been confirmed as hosting malware by Google.[108][109]
There are two common methods that an antivirus software application uses to detect viruses, as described in the antivirus software article. The first, and by far the most common, method of virus detection is using a list of virus signature definitions. This works by examining the content of the computer's memory (its Random Access Memory (RAM) and boot sectors) and the files stored on fixed or removable drives (hard drives, floppy drives, or USB flash drives), and comparing those files against a database of known virus "signatures". Virus signatures are just strings of code that are used to identify individual viruses; for each virus, the antivirus designer tries to choose a unique signature string that will not be found in a legitimate program. Different antivirus programs use different "signatures" to identify viruses. The disadvantage of this detection method is that users are only protected from viruses that are detected by signatures in their most recent virus definition update, and not protected from new viruses (see "zero-day attack").[110]
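A minimal sketch of the signature-matching idea follows; it is illustrative only (real engines use far richer signature formats, wildcards and heuristics), and it uses a fragment of the standard, benign EICAR test string as its single "signature":

```python
# Illustrative signature scan: report which known byte-string signatures
# occur in a file. Real antivirus engines are far more sophisticated.
SIGNATURES = {
    "EICAR-Test-File": b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",
}

def scan_file(path: str) -> list[str]:
    with open(path, "rb") as f:
        data = f.read()
    return [name for name, sig in SIGNATURES.items() if sig in data]

# Hypothetical usage: scan_file("suspect.bin") -> ["EICAR-Test-File"] on a match
```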
A second method to find viruses is to use a heuristic algorithm based on common virus behaviors. This method can detect new viruses for which antivirus security firms have yet to define a "signature", but it also gives rise to more false positives than using signatures. False positives can be disruptive, especially in a commercial environment, because a false positive may lead to a company instructing staff not to use the company computer system until IT services have checked the system for viruses. This can slow down productivity for regular workers.
One may reduce the damage done by viruses by making regularbackupsof data (and the operating systems) on different media, that are either kept unconnected to the system (most of the time, as in a hard drive),read-onlyor not accessible for other reasons, such as using differentfile systems. This way, if data is lost through a virus, one can start again using the backup (which will hopefully be recent).[111]If a backup session onoptical medialikeCDandDVDis closed, it becomes read-only and can no longer be affected by a virus (so long as a virus or infected file was not copied onto theCD/DVD). Likewise, an operating system on abootableCD can be used to start the computer if the installed operating systems become unusable. Backups on removable media must be carefully inspected before restoration. The Gammima virus, for example, propagates via removableflash drives.[112][113]
Many websites run by antivirus software companies provide free online virus scanning, with limited "cleaning" facilities (after all, the purpose of the websites is to sell antivirus products and services). Some websites—likeGooglesubsidiaryVirusTotal.com—allow users to upload one or more suspicious files to be scanned and checked by one or more antivirus programs in one operation.[114][115]Additionally, several capable antivirus software programs are available for free download from the Internet (usually restricted to non-commercial use).[116]Microsoft offers an optional free antivirus utility calledMicrosoft Security Essentials, aWindows Malicious Software Removal Toolthat is updated as part of the regular Windows update regime, and an older optional anti-malware (malware removal) toolWindows Defenderthat has been upgraded to an antivirus product in Windows 8.
Some viruses disableSystem Restoreand other important Windows tools such asTask ManagerandCMD. An example of a virus that does this is CiaDoor. Many such viruses can be removed byrebootingthe computer, entering Windows "safe mode" with networking, and then using system tools orMicrosoft Safety Scanner.[117]System RestoreonWindows Me,Windows XP,Windows VistaandWindows 7can restore theregistryand critical system files to a previous checkpoint. Often a virus will cause a system to "hang" or "freeze", and a subsequent hard reboot will render a system restore point from the same day corrupted. Restore points from previous days should work, provided the virus is not designed to corrupt the restore files and does not exist in previous restore points.[118][119]
Microsoft'sSystem File Checker(improved in Windows 7 and later) can be used to check for, and repair, corrupted system files.[120]Restoring an earlier "clean" (virus-free) copy of the entire partition from acloned disk, adisk image, or abackupcopy is one solution—restoring an earlier backup disk "image" is relatively simple to do, usually removes any malware, and may be faster than "disinfecting" the computer—or reinstalling and reconfiguring the operating system and programs from scratch, as described below, then restoring user preferences.[111]Reinstalling the operating system is another approach to virus removal. It may be possible to recover copies of essential user data by booting from alive CD, or connecting the hard drive to another computer and booting from the second computer's operating system, taking great care not to infect that computer by executing any infected programs on the original drive. The original hard drive can then be reformatted and the OS and all programs installed from original media. Once the system has been restored, precautions must be taken to avoid reinfection from any restoredexecutable files.[121]
The first known description of a self-reproducing program in fiction is in the 1970 short storyThe Scarred ManbyGregory Benfordwhich describes a computer program called VIRUS which, when installed on a computer withtelephone modemdialing capability, randomly dials phone numbers until it hits a modem that is answered by another computer, and then attempts to program the answering computer with its own program, so that the second computer will also begin dialing random numbers, in search of yet another computer to program. The program rapidly spreads exponentially through susceptible computers and can only be countered by a second program called VACCINE.[122]His story was based on an actual computer virus written inFORTRANthat Benford had created and run on thelabcomputer in the 1960s, as a proof-of-concept, and whichhe told John Brunner aboutin 1970.[123]
The idea was explored further in two 1972 novels,When HARLIE Was OnebyDavid GerroldandThe Terminal ManbyMichael Crichton, and became a major theme of the 1975 novelThe Shockwave RiderbyJohn Brunner.[124]
The 1973 Michael Crichton sci-fi film Westworld made an early mention of the concept of a computer virus, being a central plot theme that causes androids to run amok.[125][better source needed] Alan Oppenheimer's character summarizes the problem by stating that "...there's a clear pattern here which suggests an analogy to an infectious disease process, spreading from one...area to the next." The replies to this are: "Perhaps there are superficial similarities to disease" and, "I must confess I find it difficult to believe in a disease of machinery."[126]
In 2016,Jussi Parikkaannounced the creation of The Malware Museum of Art: a collection of malware programs, usually viruses, distributed in the 1980s and 1990s on home computers. Malware Museum of Art is hosted atThe Internet Archiveand is curated byMikko HyppönenfromHelsinki,Finland.[127]The collection allows anyone with a computer to experience virus infection of decades ago with safety.[128]
The term "virus" is also misused by extension to refer to other types ofmalware. "Malware" encompasses computer viruses along with many other forms of malicious software, such ascomputer "worms",ransomware,spyware,adware,trojan horses,keyloggers,rootkits,bootkits, maliciousBrowser Helper Object(BHOs), and other malicious software. The majority of active malware threats are trojan horse programs or computer worms rather than computer viruses. The term computer virus, coined byFred Cohenin 1985, is a misnomer.[129]Viruses often perform some type of harmful activity on infected host computers, such as acquisition ofhard diskspace orcentral processing unit(CPU) time, accessing and stealing private information (e.g.,credit cardnumbers,debit cardnumbers, phone numbers, names, email addresses, passwords, bank information, house addresses, etc.), corrupting data, displaying political, humorous or threatening messages on the user's screen,spammingtheir e-mail contacts,logging their keystrokes, or even rendering the computer useless. However, not all viruses carry a destructive "payload" and attempt to hide themselves—the defining characteristic of viruses is that they are self-replicating computer programs that modify other software without user consent by injecting themselves into the said programs, similar to a biological virus which replicates within living cells.
|
https://en.wikipedia.org/wiki/Self-modifying_computer_virus
|
In computer programming, self-hosting is the use of a program as part of the toolchain or operating system that produces new versions of that same program—for example, a compiler that can compile its own source code. Self-hosting software is commonplace on personal computers and larger systems. Other programs that are typically self-hosting include kernels, assemblers, command-line interpreters and revision control software.
An operating system is self-hosted when the toolchain to build the operating system runs on that same operating system. For example, Windows can be built on a computer running Windows.
Before a system can become self-hosted, another system is needed to develop it until it reaches a stage where self-hosting is possible. When developing for a new computer or operating system, a system to run the development software is needed, but development software used to write and build the operating system is also necessary. This is called a bootstrapping problem or, more generically, a chicken or the egg dilemma.
A solution to this problem is thecross compiler(or cross assembler when working with assembly language). A cross compiler allowssource codeon one platform to be compiled for a different machine or operating system, making it possible to create an operating system for a machine for which a self-hosting compiler does not yet exist. Once written, software can be deployed to the target system using means such as anEPROM,floppy diskette,flash memory(such as a USB thumb drive), orJTAGdevice. This is similar to the method used to write software for gaming consoles or for handheld devices like cellular phones or tablets, which do not host their own development tools.
Once the system is mature enough to compile its own code, the cross-development dependency ends. At this point, an operating system is said to be self-hosted.
Software development using compilers or interpreters can also be self-hosted when the compiler is capable of compiling itself.[1]
Since self-hosted compilers suffer from the same bootstrap problems as operating systems, a compiler for a new programming language needs to be written in an existing language. So the developer may use something like assembly language, C/C++, or even a scripting language like Python or Lua to build the first version of the compiler. Once the language is mature enough, development of the compiler can shift to the compiler's native language, allowing the compiler to build itself.
The first self-hosting compiler (excluding assemblers) was written forLispby Hart and Levin at MIT in 1962. They wrote a Lisp compiler in Lisp, testing it inside an existing LispInterpreter. Once they had improved the compiler to the point where it could compile its own source code, it was self-hosting.[2]
The compiler as it exists on the standard compiler tape is a machine language program that was obtained by having theS-expressiondefinition of the compiler work on itself through the interpreter.
This technique is usually only practicable when an interpreter already exists for the very same language that is to be compiled; though possible, it is extremely uncommon to compile a compiler by hand-executing it on itself.[3] The concept borrows directly from and is an example of the broader notion of running a program on itself as input, used also in various proofs in theoretical computer science, such as the proof that the halting problem is undecidable.
Ken Thompsonstarted development onUnixin 1968 by writing and compiling programs on theGE-635and carrying them over to thePDP-7for testing. After the initial Unix kernel, acommand interpreter, an editor, an assembler, and a few utilities were completed, the Unix operating system was self-hosting – programs could be written and tested on the PDP-7 itself.[4]
Douglas McIlroywroteTMG(acompiler-compiler) in TMG on a piece of paper and "decided to give his piece of paper to his piece of paper", doing the computation himself, thus compiling a TMG compiler intoassembly, which he typed up and assembled on Ken Thompson's PDP-7.[3]
Development of the GNU system relies largely on GCC (the GNU Compiler Collection) and GNU Emacs (a popular editor), making possible the self-contained, maintained and sustained development of free software for the GNU Project.
Many programming languages have self-hosted implementations: compilers that are both written in and able to compile the same language.
One approach is bootstrapping, where a core version of the language is initially implemented using another high-level language, assembler, or even machine language; the resulting compiler is then used to start building successive expanded versions of itself.
The following programming languages have self-hosting compilers:[citation needed]
|
https://en.wikipedia.org/wiki/Self-hosting_(compilers)
|
In computer science, bootstrapping is the technique for producing a self-compiling compiler – that is, a compiler (or assembler) written in the source programming language that it intends to compile. An initial core version of the compiler (the bootstrap compiler) is generated in a different language (which could be assembly language); successive expanded versions of the compiler are developed using this minimal subset of the language. The problem of compiling a self-compiling compiler has been called the chicken-or-egg problem in compiler design, and bootstrapping is a solution to this problem.[1][2]
Bootstrapping is a fairly common practice when creating a programming language. Many compilers for many programming languages are bootstrapped, including compilers for ALGOL, BASIC, C, Common Lisp, D, Eiffel, Elixir, Go, Haskell, Java, Modula-2, Nim, Oberon, OCaml, Pascal, PL/I, Python, Rust, Scala, Scheme, TypeScript, Vala, Zig and more.
A typical bootstrap process works in three or four stages:[3][4][5]
The full compiler is built twice in order to compare the outputs of the two stages. If they are different, either the bootstrap or the full compiler contains a bug.[3]
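A hypothetical sketch of that build-twice-and-compare step follows; the compiler names, source file name and command-line flags are invented for illustration and do not correspond to any particular toolchain.

```python
# Hypothetical three-stage bootstrap check: build the full compiler with the
# bootstrap compiler, let it rebuild itself twice, then compare the last two
# binaries. A difference indicates a bug in one of the compilers.
import filecmp
import subprocess

def build(compiler: str, output: str) -> str:
    # Compile the compiler's own source with the given compiler binary.
    subprocess.run([compiler, "compiler.x", "-o", output], check=True)
    return "./" + output

stage1 = build("./cc_bootstrap", "cc_stage1")  # built by the bootstrap compiler
stage2 = build(stage1, "cc_stage2")            # the full compiler compiles itself
stage3 = build(stage2, "cc_stage3")            # and once more

assert filecmp.cmp("cc_stage2", "cc_stage3", shallow=False)
```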
Bootstrapping a compiler has the following advantages:[6]
Note that some of these points assume that the language runtime is also written in the same language.
If one needs to compile a compiler for language X written in language X, there is the issue of how the first compiler can be compiled. The different methods that are used in practice include:
Methods for distributing compilers in source code include providing a portable bytecode version of the compiler, so as to bootstrap the process of compiling the compiler with itself. The T-diagram is a notation used to explain these compiler bootstrap techniques.[6] In some cases, the most convenient way to get a complicated compiler running on a system that has little or no software on it involves a series of ever more sophisticated assemblers and compilers.[8]
Assemblers were the first language tools to bootstrap themselves.
The first high-level language to provide such a bootstrap was NELIAC in 1958. The first widely used languages to do so were Burroughs B5000 Algol in 1961 and LISP in 1962.
Hart and Levin wrote a LISP compiler in LISP at MIT in 1962, testing it inside an existing LISP interpreter. Once they had improved the compiler to the point where it could compile its own source code, it was self-hosting.[9]
The compiler as it exists on the standard compiler tape is a machine language program that was obtained by having theS-expressiondefinition of the compiler work on itself through the interpreter.
This technique is only possible when an interpreter already exists for the very same language that is to be compiled. It borrows directly from the notion of running a program on itself as input, which is also used in various proofs intheoretical computer science, such as the variation of the proof that thehalting problemis undecidable that usesRice's Theorem.
Due to security concerns regarding theTrusting Trust Attack(which involves a compiler being maliciously modified to introduce covert backdoors in programs it compiles or even further replicate the malicious modification in future versions of the compiler itself, creating a perpetual cycle of distrust) and various attacks against binary trustworthiness, multiple projects are working to reduce the effort for not only bootstrapping from source but also allowing everyone to verify that source and executable correspond. These include the Bootstrappable builds project[10]and the Reproducible builds project.[11]
|
https://en.wikipedia.org/wiki/Compiler_bootstrapping
|
A control store is the part of a CPU's control unit that stores the CPU's microprogram. It is usually accessed by a microsequencer. A control store implementation whose contents are unalterable is known as a Read Only Memory (ROM) or Read Only Storage (ROS); one whose contents are alterable is known as a Writable Control Store (WCS).
Early control stores were implemented as a diode array accessed via address decoders, a form of read-only memory. This tradition dates back to the program timing matrix on the MIT Whirlwind, first described in 1947. Modern VLSI processors instead use matrices of field-effect transistors to build the ROM and/or PLA structures used to control the processor as well as its internal sequencer in a microcoded implementation. IBM System/360 used a variety of techniques: CCROS (Card Capacitor Read-Only Storage) on the Model 30, TROS (Transformer Read-Only Storage) on the Model 40, and BCROS (Balanced Capacitor Read-Only Storage) on Models 50, 65 and 67.
Some computers are built using "writable microcode" — rather than storing the microcode in ROM or hard-wired logic, the microcode is stored in a RAM called a writable control store or WCS. Such a computer is sometimes called a Writable Instruction Set Computer or WISC.[1] Many of these machines were experimental laboratory prototypes, such as the WISC CPU/16[2] and the RTX 32P.[3]
The originalSystem/360models have read-only control store, but later System/360,System/370and successor models load part or all of their microprograms from floppy disks or otherDASDinto a writable control store consisting of ultra-high speedrandom-accessread–write memory. The System/370 architecture includes a facility calledInitial-Microprogram Load(IMLorIMPL)[4]that can be invoked from the console, as part ofPower On Reset(POR) or from another processor in atightly coupledmultiprocessorcomplex. This permitted IBM to easily repair microprogramming defects in the field. Even when the majority of the control store is stored in ROM, computer vendors would often sell writable control store as an option, allowing the customers to customize the machine's microprogram. Other vendors, e.g., IBM, use the WCS to run microcode for emulator features[5][6]and hardware diagnostics.[7]
Other commercial machines that use writable microcode include theBurroughs Small Systems(1970s and 1980s), the Xerox processors in theirLisp machinesandXerox Starworkstations, theDECVAX8800 ("Nautilus") family, and theSymbolicsL- and G-machines (1980s). Some DECPDP-10machines store their microcode in SRAM chips (about 80 bits wide x 2 Kwords), which is typically loaded on power-on through some other front-end CPU.[8]Many more machines offer user-programmable writable control stores as an option (including theHP 2100, DECPDP-11/60andVarian Data MachinesV-70 seriesminicomputers).
The Mentec M11 and Mentec M1 store their microcode in SRAM chips, loaded on power-on through another CPU.
The Data General Eclipse MV/8000 ("Eagle") has an SRAM writable control store, loaded on power-on through another CPU.[9]
WCS offers several advantages, including the ease of patching the microprogram and, for certain hardware generations, faster access than ROMs could provide. A user-programmable WCS allows the user to optimize the machine for specific purposes. However, it also has the disadvantage of making it harder to debug programs, and of making it possible for malicious users to negatively affect the system and data.[10]
Some CPU designs compile the instruction set to a writable RAM or FLASH inside the CPU (such as the Rekursiv processor and the Imsys Cjip),[11] or an FPGA (reconfigurable computing).
Several Intel CPUs in the x86 architecture family have writable microcode,[12] starting with the Pentium Pro in 1995.[13][14] This has allowed bugs in the Intel Core 2 microcode and Intel Xeon microcode to be fixed in software, rather than requiring the entire chip to be replaced.
Such fixes can be installed by Linux,[15]FreeBSD,[16]Microsoft Windows,[17]or the motherboard BIOS.[18]
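On x86 Linux systems the currently loaded microcode revision is exposed in /proc/cpuinfo; a small Python sketch that reads it (assuming that field is present on the machine) might look like this:

```python
# Sketch: collect the microcode revision(s) the kernel reports for each CPU.
# Assumes an x86 Linux system where /proc/cpuinfo contains "microcode" lines.
def microcode_revisions(path: str = "/proc/cpuinfo") -> set[str]:
    revisions = set()
    with open(path) as f:
        for line in f:
            if line.startswith("microcode"):
                revisions.add(line.split(":", 1)[1].strip())
    return revisions

print(microcode_revisions())  # e.g. {'0xf4'}
```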
The control store usually has a register on its outputs. The outputs that go back into the sequencer to determine the next address have to go through some sort of register to prevent the creation of arace condition.[19]In most designs all of the other bits also go through a register. This is because the machine will work faster if the execution of the next microinstruction is delayed by one cycle. This register is known as a pipeline register. Very often the execution of the next microinstruction is dependent on the result of the current microinstruction, which will not be stable until the end of the current microcycle. It can be seen that either way, all of the outputs of the control store go into one big register. Historically it used to be possible to buy EPROMs with these register bits on the same chip.
The clock signal that determines the clock rate, which is the cycle time of the system, primarily clocks this register.
|
https://en.wikipedia.org/wiki/Patchable_microcode
|
Accessibility is the design of products, devices, services, vehicles, or environments so as to be usable by disabled people.[1] The concept of accessible design and practice of accessible developments ensures both "direct access" (i.e. unassisted) and "indirect access" meaning compatibility with a person's assistive technology (for example, computer screen readers).[2]
Accessibility can be viewed as the "ability to access" and benefit from some system or entity. The concept focuses on enabling access for people with disabilities, or enabling access through the use of assistive technology; however, research and development in accessibility brings benefits to everyone.[3][4][5][6][7] Therefore, an accessible society should eliminate the digital divide or knowledge divide.
Accessibility is not to be confused withusability, which is the extent to which a product (such as a device, service, or environment) can be used by specified users to achieve specified goals with effectiveness, efficiency, and satisfaction in a specified context of use.[8]
Accessibility is also strongly related touniversal design, the process of creating products that are usable by the widest possible range of people, operating within the widest possible range of situations.[9]Universal design typically provides a single general solution that can accommodate people with disabilities as well as the rest of the population. By contrast, accessible design is focused on ensuring that there are no barriers to accessibility for all people, including those with disabilities.
Thedisability rights movementadvocates equal access to social, political, and economic life which includes not only physical access but access to the same tools, services, organizations and facilities as non-disabled people (e.g., museums[10][11]). Article 9 of the United NationsConvention on the Rights of Persons with Disabilitiescommits signatories to provide for full accessibility in their countries.[12]
While it is often used to describe facilities or amenities to assist people with impaired mobility, through the provision of facilities likewheelchair ramps, the term can include other types of disability. Accessible facilities therefore extend to areas such asBraillesignage,elevators, audio signals atpedestrian crossings, walkway contours,website accessibilityandaccessible publishing.[13]
In the United States, government mandates including Section 508, WCAG,[14] and the DDA are all enforcing practices to standardize accessibility testing in product development.
Accessibility modifications may be required to enable persons with disabilities to gain access to education, employment, transportation, housing, recreation, or even simply to exercise their right to vote.
Various countries have enacted legislation requiring physical accessibility, including (in order of enactment):
Legislation may also be enacted at the state, provincial or local level. In Ontario, Canada, the Ontarians with Disabilities Act of 2001 is meant to "improve the identification, removal and prevention of barriers faced by persons with disabilities".[25]
The European Union (EU), which has signed the United Nations' Convention on the Rights of Persons with Disabilities, has also adopted a European Disability Strategy for 2010–20. The Strategy includes the following goals, among others:[26]
A European Accessibility Act was proposed in late 2012.[27]This Act would establish standards within member countries for accessible products, services, and public buildings. The harmonization of accessibility standards within the EU "would facilitate the social integration of persons with disabilities and the elderly and their mobility across member states, thereby also fostering the free movement principle".[28]
Enforcement of the European Accessibility Act (EAA) begins in June 2025.
Assistive technology is the creation of a new device that assists a person in completing a task that would otherwise be impossible. Some examples include new computer software programs like screen readers, and inventions such as assistive listening devices, including hearing aids, and traffic lights with a standard color code that enables colorblind individuals to understand the correct signal.
Adaptive technology is the modification, or adaptation, of existing devices or methods, or the creation of new uses for existing devices, to enable a person to complete a task.[29]Examples include the use of remote controls, and the autocomplete (word completion)[30]feature in computer word processing programs, both of which help individuals with mobility impairments to complete tasks. Adaptations to wheelchair tires are another example; widening the tires enables wheelchair users to move over soft surfaces, such as deep snow on ski hills and sandy beaches.
Assistive technology and adaptive technology have a key role in developing the means for people with disabilities to live more independently and to participate more fully in mainstream society. To make assistive or adaptive technology available, however, it has been necessary to educate the public and even to legislate requirements to incorporate this technology.
The UN CRPD, and courts in the United States, Japan, the UK, and elsewhere, have decided that when it is needed to ensure a secret ballot, authorities should provide voters with assistive technology.
The European Court of Human Rights, by contrast, ruled in Toplak v. Slovenia that, due to the high costs involved, the abandonment of assistive voting equipment in elections did not violate human rights.
Accessibility of employment covers a wide range of issues, from skills training to occupational therapy,[31]finding employment, and retaining employment.
Employment rates for workers with disabilities are lower than for the general workforce. Workers in Western countries fare relatively well, having access to more services and training as well as legal protections against employment discrimination. Despite this, in the United States the 2012 unemployment rate for workers with disabilities was 12.9%, while it was 7.3% for workers without disabilities.[32]More than half of workers with disabilities (52%) earned less than $25,000 in the previous year, compared with just 38% of workers with no disabilities. This translates into an earnings gap in which individuals with disabilities earn about 25 percent less than workers without disabilities. Among occupations with 100,000 or more people, dishwashers had the highest disability rate (14.3%), followed by refuse and recyclable material collectors (12.7%), personal care aides (11.9%), and janitors and building cleaners (11.8%). The rates for refuse and recyclable material collectors, personal care aides, and janitors and building cleaners were not statistically different from one another.[33]
Surveys of non-Western countries are limited, but the available statistics also indicate fewer jobs being filled by workers with disabilities. In India, a large 1999 survey found that "of the 'top 100 multinational companies' in the country [...] the employment rate of persons with disabilities in the private sector was a mere 0.28%, 0.05% in multinational companies and only 0.58% in the top 100 IT companies in the country".[34]India, like much of the world, has large sections of the economy that are without strong regulation or social protections, such as the informal economy.[35]Other factors have been cited as contributing to the high unemployment rate, such as public service regulations. Although employment for workers with disabilities is higher in the public sector due to hiring programs targeting persons with disabilities, regulations currently restrict the types of work available to persons with disabilities:
"Disability-specific employment reservations are limited to the public sector and a large number of the reserved positions continue to be vacant despite nearly two decades of enactment of the PWD Act".[34]
Expenses related to adaptive or assistive technology required to participate in the workforce may be tax deductible expenses for individuals with a medical practitioner's prescription in some jurisdictions.
Disability management (DM) is a specialized area of human resources that supports efforts of employers to better integrate and retain workers with disabilities. Some workplaces have policies in place to provide "reasonable accommodation" for employees with disabilities, but many do not. In some jurisdictions, employers may have legal requirements to end discrimination against persons with disabilities.
It has been noted by researchers that where accommodations are in place for employees with disabilities, these frequently apply to individuals with "pre-determined or apparent disabilities as determined by national social protection or Equality Authorities",[36]which include persons with pre-existing conditions who receive an official disability designation. One of the biggest challenges for employers is developing policies and practices to manage employees who develop disabilities during the course of employment. Even where these exist, they tend to focus on workplace injuries, overlooking the job retention challenges faced by employees who acquire a non-occupational injury or illness. Protecting employability is a factor that can help close the unemployment gap for persons with disabilities.[36]
Providing mobility for people with disabilities includes changes to public facilities, such as gently sloping paths of travel for people who use wheelchairs or have difficulty walking up stairs, and audio announcements for the blind (either live or automated); dedicated services like paratransit; and adaptations to personal vehicles.
Automobile accessibility also refers to ease of use by disabled people. Automobiles, whether a car or a van, can be adapted for a range of physical disabilities. Foot pedals can be raised, or replaced with hand-controlled devices. Wheelchair hoists, lifts or ramps may be customized according to the needs of the driver. Ergonomic adaptations, such as a lumbar support cushion, may also be needed.[37]
Generally, the more limiting the disability, the more expensive the adaptation needed for the vehicle. Financial assistance is available through some organizations, such asMotabilityin the United Kingdom, which requires a contribution by the prospective vehicle owner. Motability makes vehicles available for purchase or lease.[38]
When an employee with a disability requires an adapted car for work use, the employee does not have to pay for a "reasonable adjustment" in the United Kingdom; if the employer is unable to pay the cost, assistance is offered by government programs.[39]
A significant development in transportation, and in public transport in particular, is the move to "low-floor" vehicles to improve accessibility. In a low-floor vehicle, access to part or all of the passenger cabin from one or more entrances is unobstructed by steps, enabling easier access for the infirm and for people with push chairs. A further aspect may be that the entrance and corridors are wide enough to accommodate a wheelchair. Low-floor vehicles have been developed for buses, trolleybuses, trams and trains.
A low floor in the vehicular sense is normally combined conceptually with normal pedestrian access from a standard kerb (curb) height. However, the accessibility of a low-floor vehicle can also be exploited by slightly raising portions of the kerb at bus stops, or through the use of level-boarding bus rapid transit stations or tram stops.[40]Combining low floors with access from a kerb was the technological development of the 1990s; step-free interior layouts for buses had existed in some cases for decades, with entrance steps being introduced as chassis designs and overall height regulations changed.
Low-floor buses may also be designed with special height adjustment controls that permit a stationary bus to temporarily lower itself to ground level, permitting wheelchair access. This is referred to as a kneeling bus.
At rapid transit systems, vehicles generally have floors at the same height as the platforms, but the stations are often underground or elevated, so accessibility there is not a question of providing low-floor vehicles but of providing step-free access from street level to the platforms (generally by elevators, which may be restricted to disabled passengers only, so that the step-free access is not obstructed by non-disabled people taking advantage of it).[citation needed]
In the United Kingdom, local transport authorities are responsible for checking that all people who live within their area can access essential opportunities and services, and where gaps in provision are identified the local authorities are responsible for organizing changes to make new connections. These requirements are defined in the UK Community Planning Acts legislation[41]and more detailed guidance has been issued by the Department for Transport for each local authority. This includes the requirement to produce an Accessibility Plan under Community Planning legislation and to incorporate it within their Local Transport Plan.[42]An Accessibility Plan sets out how each local authority plans to improve access to employment, learning, health care, food shops and other services of local importance, particularly for disadvantaged groups and areas. Accessibility targets are defined in the accessibility plans; these are often the distance or time to access services by different modes of transport, including walking, cycling and public transport.
Accessibility Planning was introduced as a result of the report "Making the Connections: Final Report on Transport and Social Exclusion".[43]This report was the result of research carried out by theSocial ExclusionUnit. The United Kingdom also has a "code of practice" for making train and stations accessible: "Accessible Train and Station Design for Disabled People: A Code of Practice".[44]This code of practice was first published in 2002 with the objective of compliance to Section 71B of the Railways Act 1993,[45]and revised after a public consultation period in 2008.
Some transport companies have since improved the accessibility of their services, such as incorporatinglow-floor busesinto their stock as standard.[citation needed]In August 2021,South Western Railwayannounced the streamlining of their accessibility services, allowing passengers requiring assistance to inform the company with as little as 10 minutes' notice at all 189 stations on its network, replacing an older scheme wherein assisted journeys had to be booked six hours to a day in advance. The system will utilise clear signage at stations andQR codes, allowing customers to send details of the assistance they require and their planned journey to staff remotely.[46]
Making public services fully accessible to the public has led to some technological innovations.Public announcementsystems usingaudio induction looptechnology can broadcast announcements directly into the hearing aid of anyone with a hearing impairment, making them useful in such public places as auditoriums and train stations.
The UN Convention on the Rights of Persons with Disabilities (2006) requires "appropriate measures" to ensure that people with disabilities are able to "access, on an equal basis with others", "the physical environment", "transportation" and "other facilities and services open or provided to the public". This requirement also applies to "roads" and "transportation" as well as "buildings, and other indoor and outdoor facilities".[47]
At the same time, promotion ofactive travel, or 'shared space' initiatives to pedestrianise city centres can introduce unintended barriers, especially for pedestrians who are visually impaired and who can find these environments confusing or even dangerous.[48]It is important to have effective mechanisms to ensure that urban spaces are designed to be inclusive of pedestrians with disabilities. These can include early consultation with disabled persons or their representative organisations, and appropriate regulation of city planning.[48]
Most existing and new housing, even in the wealthiest nations, lacks basic accessibility features unless the designated, immediate occupant of a home currently has a disability. However, there are some initiatives to change typical residential practices so that new homes incorporate basic access features such as zero-step entries and door widths adequate for wheelchairs to pass through. Occupational therapists are a professional group skilled in assessing homes and making recommendations to improve access.[49]They are involved both in the adaptation of existing housing to improve accessibility[50]and in the design of future housing.[51]
The broad concept of universal design is relevant to housing, as it is to all aspects of the built environment. Furthermore, a visitability movement[52]begun by grassroots disability advocates in the 1980s focuses specifically on changing construction practices in new housing. This movement, a network of interested people working in their locales, works on educating, passing laws, and spurring voluntary home access initiatives, with the intention that basic access become a routine part of new home construction.
Accessibility in the design of housing and household devices has become more prominent in recent decades due to a rapidly ageing population in developed countries.[53]Ageing seniors may wish to continue living independently, but the ageing process naturally increases the disabilities that a senior citizen will experience. A growing trend is the desire for many senior citizens to 'age in place', living as independently as possible for as long as possible. Accessibility modifications that allow ageing in place are becoming more common. Housing may even be designed to incorporate accessibility modifications that can be made throughout the life cycle of the residents.
The English Housing Survey for 2018/19 found only 9% of homes in England have key features, such as a toilet at entrance level and sufficiently wide doorways, to deem them accessible. This was an improvement from 5% in 2005. More than 400,000 wheelchair users in England were living in homes which are neither adapted nor accessible.[54]
Under the Convention on the Rights of Persons with Disabilities, states parties are bound to assure accessible elections, voting, and voting procedures. In 2018, the United Nations Committee on the Rights of Persons with Disabilities issued an opinion that all polling stations should be fully accessible. At the European Court of Human Rights, there are currently two ongoing cases about the accessibility of polling places and voting procedures. They were brought against Slovenia by two voters and the Slovenian Disability Rights Association.[55]As of January 2020, the case, called Toplak and Mrak v. Slovenia, was ongoing.[56]The aim of the court procedure is to make all polling places in Europe accessible.[57]
Advances in information technology and telecommunications have represented a leap forward for accessibility. Access to the technology is restricted to those who can afford it, but it has become more widespread in Western countries in recent years. For those who use it, it provides the ability to access information and services by minimizing the barriers of distance and cost as well as the accessibility and usability of the interface. In many countries this has led to initiatives, laws and/or regulations that aim toward providing universal access to the internet and to phone systems at reasonable cost to citizens.[58]
A major advantage of advanced technology is its flexibility. Some technologies can be used at home, in the workplace, and in school, expanding the ability of the user to participate in various spheres of daily life. Augmentative and alternative communication technology is one such area of IT progress. It includes inventions such as speech-generating devices, teletypewriter devices, adaptive pointing devices to replace computer mouse devices, and many others. Mobile telecommunications devices and computer applications are also equipped with accessibility features.[59][60][61]They can be adapted to create accessibility to a range of tasks, and may be suitable for different kinds of disability.
The following impairments are some of the disabilities that affect communications and technology access, as well as many other life activities:
Each kind of disability requires a different kind of accommodation, and this may require analysis by a medical specialist, an educational specialist or a job analysis when the impairment requires accommodation.
One of the first areas where information technology improved the quality of life for disabled individuals is the voice operated wheelchair. Quadriplegics have the most profound disability, and the voice operated wheelchair technology was first developed in 1977 to provide increased mobility. The original version replaced the joystick system with a module that recognized 8 commands. Many other technology accommodation improvements have evolved from this initial development.[66]
Missing arms or fingers may make the use of a keyboard and mouse difficult or impossible. Technological improvements such as speech recognition devices and software can improve access.
A communication disorder interferes with the ability to produce clearly understandable speech. There can be many different causes, such as nerve degeneration, muscle degeneration, stroke, and vocal cord injury. The modern method of dealing with speaking disabilities is to provide a text interface to a speech synthesizer for complete vocal disability. This can be a great improvement for people who, since the 1960s, had been limited to using a throat vibrator to produce speech.
An individual satisfies the definition of hearing disabled when hearing loss is about 30 dB for a single frequency, but this is not always perceptible as a disability.[67]For example, loss of sensitivity in one ear interferes with sound localization (directional hearing), which can interfere with communication in a crowd. This is often recognized when certain words are confused during normal conversation. This can interfere with voice-only interfaces, like automated customer service telephone systems, because it is sometimes difficult to increase the volume and repeat the message.
Mild to moderate hearing loss may be accommodated with a hearing aid that amplifies ambient sounds. Portable devices with speech recognition that can produce text can reduce problems associated with understanding conversation. This kind of hearing loss is relatively common, and it often grows worse with age.
The modern method of dealing with profound hearing disability is the Internet, using email or word processing applications. The telecommunications device for the deaf (TDD) became available in the form of the teletype (TTY) during the 1960s. These devices consist of a keyboard, display and modem that connect two or more of them over a dedicated wire or plain old telephone service.
Modern computer animation allows sign language avatars to be integrated into public areas. This technology could potentially make train station announcements, news broadcasts, etc. accessible when a human interpreter is not available.[68][69]Sign language can also be incorporated into film; for example, all movies shown in Brazilian movie theaters must have a Brazilian Sign Language video track available to play alongside the film via a second screen.[70][71]
A wide array of technology products is available to assist with visual impairment. These include screen magnification for monitors, screen-reading software for computers and mobile devices, mouse-over speech synthesis for browsing, braille displays, braille printers, braille cameras, and voice-activated phones and tablets.
One emerging product that will make ordinary computer displays available for the blind is the refreshable tactile display, which is very different from a conventional braille display. This provides a raised surface corresponding to the bright and dim spots on a conventional display. An example is the Touch Sight Camera for the Blind.
Speech Synthesis Markup Language[72]and the Speech Recognition Grammar Specification[73]are relatively recent technologies intended to standardize communication interfaces, using Augmented BNF Form and XML Form. These technologies assist people with visual and physical impairments by providing interactive access to web content without the need to visually observe the content. While these technologies provide access for visually impaired individuals, the primary beneficiary has been automated systems that replace live human customer service representatives who handle telephone calls.
There have been a few major movements to coordinate a set of guidelines for accessibility for the web. The first and most well known is the Web Accessibility Initiative (WAI), which is part of the World Wide Web Consortium (W3C). This organization developed the Web Content Accessibility Guidelines (WCAG) 1.0 and 2.0, which explain how to make Web content accessible to everyone, including people with disabilities. Web "content" generally refers to the information in a Web page or Web application, including text, images, forms, and sounds. (More specific definitions are available in the WCAG documents.)[74]
The WCAG is separated into three levels of compliance, A, AA and AAA. Each level requires a stricter set of conformance guidelines, such as different versions of HTML (Transitional vs Strict) and other techniques that need to be incorporated into coding before accomplishing validation. Online tools allow users to submit their website and automatically run it through the WCAG guidelines and produce a report stating whether or not it conforms to each level of compliance. Adobe Dreamweaver also offers plugins which allow web developers to test these guidelines on their work from within the program.
The ISO/IEC JTC1 SC36 WG7 24751 Individualized Adaptability and Accessibility in e-learning, education and training series is freely available and consists of three parts: Individualized Adaptability and Accessibility in e-learning, education and training; Standards inventory; and Guidance on user needs mapping.
Another source of web accessibility guidance comes from the US government. In response to Section 508 of the US Rehabilitation Act, the Access Board developed standards with which U.S. federal agencies must comply in order to make their sites accessible. The U.S. General Services Administration has developed a website where one can take free online training courses to learn about these rules.[75]
Examples of accessibility features include:
While WCAG provides much technical information for use by web designers, coders and editors,BS 8878:2010 Web accessibility – Code of Practice[76]has been introduced, initially in the UK, to help site owners and product managers to understand the importance of accessibility. It includes advice on the business case behind accessibility, and how organisations might usefully update their policies and production processes to embed accessibility in their business-as-usual. On 28 May 2019, BS 8878 was superseded byISO 30071-1,[77]the international Standard that built on BS 8878 and expanded it for international use.
Another useful idea is for websites to include a web accessibility statement on the site. Initially introduced in PAS 78,[78]the best practice for web accessibility statements has been updated in BS 8878[79]to emphasise the inclusion of: information on how disabled and elderly people could get a better experience of using the website by using assistive technologies or accessibility settings of browsers and operating systems (linking to "BBC My Web My Way"[80]can be useful here); information on what accessibility features the site's creators have included, and if there are any user needs which the site does not currently support (for example, descriptive video to allow blind people to access the information in videos more easily); and contact details for disabled people to be able to use to let the site creators know if they have any problems in using the site. While validations against WCAG, and other accessibility badges can also be included, they should be put lower down the statement, as most disabled people still do not understand these technical terms.[81]
Equal access to education for students with disabilities is supported in some countries by legislation. It is still challenging for some students with disabilities to fully participate in mainstream education settings, but many adaptive technologies and assistive programs are making improvements. In India, the Medical Council of India has now directed all medical institutions to make themselves accessible to persons with disabilities. This happened due to a petition by Satendra Singh, founder of Infinite Ability.[82]
Students with a physical or mental impairment or learning disability may require note-taking assistance, which may be provided by a business offering such services, as with tutoring services. Talking books in the form of talking textbooks are available in Canadian secondary and post-secondary schools. Also, students may require adaptive technology to access computers and the Internet. These may be tax-exempt expenses in some jurisdictions with a medical prescription.
It is important to ensure that the accessibility in education includes assessments.[83]Accessibility in testing or assessments entails the extent to which a test and its constituent item set eliminates barriers and permits the test-taker to demonstrate their knowledge of the tested content.[84]
With the passage of theNo Child Left Behind Actof 2001 in the United States,[85]student accountability in essential content areas such as reading, mathematics, and science has become a major area of focus in educational reform.[86]As a result, test developers have needed to create tests to ensure all students, including those with special needs (e.g., students identified with disabilities), are given the opportunity to demonstrate the extent to which they have mastered the content measured on state assessments. Currently, states are permitted to develop two different types of tests in addition to the standard grade-level assessments to target students with special needs. First, the alternate assessment may be used to report proficiency for up to 1% of students in a state. Second, new regulations permit the use of alternate assessments based on modified academic achievement standards to report proficiency for up to 2% of students in a state.
To ensure that these new tests generate results that allow valid inferences to be made about student performance, they must be accessible to as many people as possible. The Test Accessibility and Modification Inventory (TAMI)[87]and its companion evaluation tool, the Accessibility Rating Matrix (ARM), were designed to facilitate the evaluation of tests and test items with a focus on enhancing their accessibility. Both instruments incorporate the principles of accessibility theory and were guided by research on universal design, assessment accessibility, cognitive load theory, and research on item writing and test development. The TAMI is a non-commercial instrument that has been made available to all state assessment directors and testing companies. Assessment researchers have used the ARM to conduct accessibility reviews of state assessment items for several state departments of education.
|
https://en.wikipedia.org/wiki/Accessibility
|
In reliability engineering, the term availability has the following meanings:
Normally, high availability systems might be specified as 99.98%, 99.999% or 99.9996%. The converse, unavailability, is 1 minus the availability.
The simplest representation of availability (A) is a ratio of the expected value of the uptime of a system to the aggregate of the expected values of up and down time (which together make up the "total amount of time" C of the observation window):
A = E[uptime] / (E[uptime] + E[downtime]) = E[uptime] / C
Another equation for availability (A) is a ratio of the Mean Time To Failure (MTTF) and Mean Time Between Failures (MTBF), or
A = MTTF / MTBF = MTTF / (MTTF + MTTR)
where, for a repairable system, MTBF is the sum of the mean time to failure (MTTF) and the mean time to repair (MTTR).
If we define the status function X(t) as
X(t) = 1 if the system functions at time t, and X(t) = 0 otherwise,
then the availability A(t) at time t > 0 is represented by
A(t) = Pr[X(t) = 1] = E[X(t)]
Average availability must be defined on an interval of the real line. If we consider an arbitrary constant c > 0, then the average availability over [0, c] is represented as
A_c = (1/c) ∫_0^c A(t) dt
Limiting (or steady-state) availability is represented by[1]
A = lim (t → ∞) A(t)
Limiting average availability is also defined on an interval [0, c] as
A_∞ = lim (c → ∞) (1/c) ∫_0^c A(t) dt
Availability is the probability that an item will be in an operable and committable state at the start of a mission when the mission is called for at a random time, and is generally defined as uptime divided by total time (uptime plus downtime).
Suppose a system is composed of components A, B and C in series. Then the following formula applies:
Availability of series component = (availability of component A) x (availability of component B) x (availability of component C)[2][3]
Therefore, combined availability of multiple components in a series is always lower than the availability of individual components.
On the other hand, the following formula applies to parallel components:
Availability of parallel components = 1 - (1 - availability of component A) X (1 - availability of component B) X (1 - availability of component C)[2][3]
As a corollary, if you have N parallel components each having availability X, then:
Availability of parallel components = 1 - (1 - X)^ N[3]
Using parallel components can exponentially increase the availability of the overall system.[2]For example, if each of your hosts has only 50% availability, by using 10 hosts in parallel you can achieve 99.9023% availability.[3]
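As a rough illustration of the series and parallel formulas above, the following C sketch (the component counts and figures are chosen only for this example) reproduces the 99.9023% figure for ten 50%-available hosts in parallel, alongside a three-component series case.

#include <stdio.h>
#include <math.h>

/* Availability of n identical components in series: the product of their availabilities. */
static double series_availability(double a, int n)
{
    return pow(a, n);
}

/* Availability of n identical components in parallel: 1 - (1 - a)^n. */
static double parallel_availability(double a, int n)
{
    return 1.0 - pow(1.0 - a, n);
}

int main(void)
{
    /* Three 99% components in series: combined availability is lower than any single one. */
    printf("series   (3 x 99%%): %.6f\n", series_availability(0.99, 3));     /* ~0.970299 */

    /* Ten 50% hosts in parallel: combined availability rises to ~0.999023 (99.9023%). */
    printf("parallel (10 x 50%%): %.6f\n", parallel_availability(0.50, 10)); /* 0.999023... */
    return 0;
}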
Note that redundancy does not always lead to higher availability: redundancy also increases complexity, which in turn can reduce availability. According to Marc Brooker, to take advantage of redundancy, ensure that:[4]
Reliability block diagrams or fault tree analysis are developed to calculate the availability of a system or of a functional failure condition within a system, taking into account many factors such as:
Furthermore, these methods can identify the most critical items and the failure modes or events that impact availability.
Availability, inherent (Ai)[5]: The probability that an item will operate satisfactorily at a given point in time when used under stated conditions in an ideal support environment. It excludes logistics time, waiting or administrative downtime, and preventive maintenance downtime. It includes corrective maintenance downtime.
Inherent availability is generally derived from analysis of an engineering design, commonly expressed as Ai = MTBF / (MTBF + MTTR), i.e. the mean time between failures divided by the mean time between failures plus the mean time to repair.
It is based on quantities under control of the designer.
Availability, achieved (Aa)[6]: The probability that an item will operate satisfactorily at a given point in time when used under stated conditions in an ideal support environment (i.e., that personnel, tools, spares, etc. are instantaneously available). It excludes logistics time and waiting or administrative downtime. It includes active preventive and corrective maintenance downtime.
Availability, operational (Ao)[7]: The probability that an item will operate satisfactorily at a given point in time when used in an actual or realistic operating and support environment. It includes logistics time, ready time, and waiting or administrative downtime, and both preventive and corrective maintenance downtime. This value is equal to the mean time between failures (MTBF) divided by the mean time between failures plus the mean downtime (MDT). This measure extends the definition of availability to elements controlled by the logisticians and mission planners, such as the quantity and proximity of spares, tools and manpower to the hardware item.
Refer to Systems engineering for more details.
If we are using equipment which has a mean time to failure (MTTF) of 81.5 years and a mean time to repair (MTTR) of 1 hour:
Outage due to equipment in hours per year = 1/rate = 1/MTTF = 0.01235 hours per year.
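As a rough cross-check of these figures, here is a small C sketch, assuming 8,760 hours per year; the exact outage value depends on the year-length convention and rounding used, but it lands in the same range as the figure above.

#include <stdio.h>

int main(void)
{
    const double hours_per_year = 8760.0;       /* assumption: 365-day year */
    const double mttf = 81.5 * hours_per_year;  /* mean time to failure, in hours */
    const double mttr = 1.0;                    /* mean time to repair, in hours */

    /* Steady-state availability: A = MTTF / (MTTF + MTTR). */
    double availability = mttf / (mttf + mttr);

    /* Expected outage per year: failures per year multiplied by repair time. */
    double outage_hours_per_year = (hours_per_year / mttf) * mttr;

    printf("availability         : %.7f\n", availability);           /* ~0.9999986 */
    printf("outage in hours/year : %.5f\n", outage_hours_per_year);  /* ~0.01227  */
    return 0;
}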
Availabilityis well established in the literature ofstochastic modelingandoptimal maintenance. Barlow and Proschan [1975] define availability of a repairable system as "the probability that the system is operating at a specified time t." Blanchard [1998] gives a qualitative definition of availability as "a measure of the degree of a system which is in the operable and committable state at the start of mission when the mission is called for at an unknown random point in time." This definition comes from the MIL-STD-721. Lie, Hwang, and Tillman [1977] developed a complete survey along with a systematic classification of availability.
Availability measures are classified by either the time interval of interest or the mechanisms for the systemdowntime. If the time interval of interest is the primary concern, we consider instantaneous, limiting, average, and limiting average availability. The aforementioned definitions are developed in Barlow and Proschan [1975], Lie, Hwang, and Tillman [1977], and Nachlas [1998]. The second primary classification for availability is contingent on the various mechanisms for downtime such as the inherent availability, achieved availability, and operational availability. (Blanchard [1998], Lie, Hwang, and Tillman [1977]). Mi [1998] gives some comparison results of availability considering inherent availability.
Availability considered in maintenance modeling can be found in Barlow and Proschan [1975] for replacement models, Fawzi and Hawkes [1991] for an R-out-of-N system withsparesand repairs, Fawzi and Hawkes [1990] for a series system with replacement and repair, Iyer [1992] for imperfect repair models, Murdock [1995] for age replacement preventive maintenance models, Nachlas [1998, 1989] for preventive maintenance models, and Wang and Pham [1996] for imperfect maintenance models. A very comprehensive recent book is by Trivedi and Bobbio [2017].
Availability factor is used extensively in power plant engineering. For example, the North American Electric Reliability Corporation implemented the Generating Availability Data System in 1982.[8]
|
https://en.wikipedia.org/wiki/Availability
|
Coding best practices or programming best practices are a set of informal, sometimes personal, rules (best practices) that many software developers in computer programming follow to improve software quality.[1]Many computer programs are required to remain robust and reliable for long periods of time,[2]so any rules need to facilitate both initial development and subsequent maintenance of source code by people other than the original authors.
In the ninety–ninety rule, Tom Cargill explains why programming projects often run late: "The first 90% of the code takes the first 90% of the development time. The last 10% takes another 90% of the time."[3]Any guidance which can redress this lack of foresight is worth considering.
The size of a project or program has a significant effect on error rates, programmer productivity, and the amount of management needed.[4]
As listed below, there are many attributes associated with good software. Some of these can be mutually contradictory (e.g. being very fast versus performing extensive error checking), and different customers and participants may have different priorities. Weinberg provides an example of how different goals can have a dramatic effect on both the effort required and efficiency.[5]Furthermore, he notes that programmers will generally aim to achieve any explicit goals which may be set, probably at the expense of any other quality attributes.
Sommerville has identified four generalized attributes which are concerned not with what a program does, but with how well the program does it: maintainability, dependability, efficiency and usability.[6]
Weinberg has identified four targets which a good program should meet:[7]
Hoare has identified seventeen objectives related to software quality, including:[8]
Before coding starts, it is important to ensure that all necessary prerequisites have been completed (or have at least progressed far enough to provide a solid foundation for coding). If the various prerequisites are not satisfied, then the software is likely to be unsatisfactory, even if it is completed.
From Meek & Heath: "What happens before one gets to the coding stage is often of crucial importance to the success of the project."[9]
The prerequisites outlined below cover such matters as:
For small simple projects it may be feasible to combine architecture with design and adopt a very simple life cycle.
A software development methodology is a framework that is used to structure, plan, and control the life cycle of a software product. Common methodologies include waterfall, prototyping, iterative and incremental development, spiral development, agile software development, rapid application development, and extreme programming.
The waterfall model is a sequential development approach; in particular, it assumes that the requirements can be completely defined at the start of a project. However, McConnell quotes three studies that indicate that, on average, requirements change by around 25% during a project.[10]The other methodologies mentioned above all attempt to reduce the impact of such requirement changes, often by some form of step-wise, incremental, or iterative approach. Different methodologies may be appropriate for different development environments.
Since its introduction in 2001, agile software development has grown in popularity, fueled by software developers seeking a more iterative, collaborative approach to software development.[11]
McConnell states: "The first prerequisite you need to fulfill before beginning construction is a clear statement of the problem the system is supposed to solve."[12]
Meek and Heath emphasise that a clear, complete, precise, and unambiguous written specification is the target to aim for.[13]Note that it may not be possible to achieve this target, and the target is likely to change anyway (as mentioned in the previous section).
Sommerville distinguishes between less detailed user requirements and more detailed system requirements.[14]He also distinguishes between functional requirements (e.g. update a record) and non-functional requirements (e.g. response time must be less than 1 second).
Hoare points out: "there are two ways of constructing a software design: one way is to make it so simple that there are obviously no deficiencies; the other way is to make it so complicated that there are no obvious deficiencies. The first method is far more difficult."[15](Emphasis as in the original.)
Software architecture is concerned with deciding what has to be done and which program component is going to do it (how something is done is left to the detailed design phase below). This is particularly important when a software system contains more than one program since it effectively defines the interface between these various programs. It should include some consideration of any user interfaces as well, without going into excessive detail.
Any non-functional system requirements (response time, reliability, maintainability, etc.) need to be considered at this stage.[16]
The software architecture is also of interest to various stakeholders (sponsors, end-users, etc.) since it gives them a chance to check that their requirements can be met.
The primary purpose of design is to fill in the details which have been glossed over in the architectural design. The intention is that the design should be detailed enough to provide a good guide for actual coding, including details of any particular algorithms to be used. For example, at the architectural level, it may have been noted that some data has to be sorted, while at the design level, it is necessary to decide which sorting algorithm is to be used. As a further example, if an object-oriented approach is being used, then the details of the objects must be determined (attributes and methods).
Mayer states: "No programming language is perfect. There is not even a single best language; there are only languages well suited or perhaps poorly suited for particular purposes. Understanding the problem and associated programming requirements is necessary for choosing the language best suited for the solution."[17]
From Meek & Heath: "The essence of the art of choosing a language is to start with the problem, decide what its requirements are, and their relative importance since it will probably be impossible to satisfy them all equally well. The available languages should then be measured against the list of requirements, and the most suitable (or least unsatisfactory) chosen."[18]
It is possible that different programming languages may be appropriate for different aspects of the problem. If the languages or their compilers permit, it may be feasible to mix routines written in different languages within the same program.
Even if there is no choice as to which programming language is to be used, McConnell provides some advice: "Every programming language has strengths and weaknesses. Be aware of the specific strengths and weaknesses of the language you're using."[19]
This section is also really a prerequisite to coding, as McConnell points out: "Establish programming conventions before you begin programming. It's nearly impossible to change code to match them later."[19]
As listed near the end of coding conventions, there are different conventions for different programming languages, so it may be counterproductive to apply the same conventions across languages. It is important to note that there is no single coding convention for any programming language; every organization has a custom coding standard for each type of software project. It is therefore imperative that the programmer choose or devise a particular set of coding guidelines before the software project commences. Some coding conventions are generic and may not apply to every software project written in a particular programming language.
The use of coding conventions is particularly important when a project involves more than one programmer (there have been projects with thousands of programmers). It is much easier for a programmer to read code written by someone else if all code follows the same conventions.
For some examples of bad coding conventions, Roedy Green provides a lengthy (tongue-in-cheek) article on how to produce unmaintainable code.[20]
Due to time restrictions or enthusiastic programmers who want immediate results for their code, commenting of code often takes a back seat. Programmers working as a team have found it better to leave comments behind since coding usually follows cycles, or more than one person may work on a particular module. However, some commenting can decrease the cost of knowledge transfer between developers working on the same module.
In the early days of computing, one commenting practice was to leave a brief description of the following:
The "description of the module" should be as brief as possible but without sacrificing clarity and comprehensiveness.
However, the last two items have largely been rendered obsolete by the advent of revision control systems. Modifications and their authorship can be reliably tracked by using such tools rather than by using comments.
Also, if complicated logic is being used, it is a good practice to leave a comment "block" near that part so that another programmer can understand what exactly is happening.
Unit testing can be another way to show how code is intended to be used.
Use of proper naming conventions is considered good practice. Sometimes programmers tend to use X1, Y1, etc. as variables and forget to replace them with meaningful ones, causing confusion.
It is usually considered good practice to use descriptive names.
Example: A variable for taking in weight as a parameter for a truck can be named TrkWeight, TruckWeightKilograms or Truck_Weight_Kilograms, with TruckWeightKilograms (see Pascal case naming of variables) often being preferable since it is instantly recognizable; however, naming conventions are not always consistent between projects and/or companies.
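A minimal C sketch of the difference (the identifiers and figures are invented for this illustration):

#include <stdio.h>

int main(void)
{
    /* Unclear: what do x1 and y1 represent, and in what units? */
    double x1 = 12000.0;
    double y1 = x1 * 0.001;

    /* Clearer: descriptive names carry both the meaning and the units. */
    double TruckWeightKilograms = 12000.0;
    double TruckWeightTonnes    = TruckWeightKilograms * 0.001;

    printf("%f %f %f %f\n", x1, y1, TruckWeightKilograms, TruckWeightTonnes);
    return 0;
}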
The code that a programmer writes should be simple. Complicated logic for achieving a simple thing should be kept to a minimum since the code might be modified by another programmer in the future. The logic one programmer implemented may not make perfect sense to another. So, always keep the code as simple as possible.[21]
Program code should not contain "hard-coded" (literal) values referring to environmental parameters, such as absolute file paths, file names, user names, host names, IP addresses, URLs, and UDP/TCP ports. Otherwise, the application will not run on a host that has a different configuration than anticipated. A careful programmer can parametrize such variables and configure them for the hosting environment outside of the application proper (for example, in property files, on an application server, or even in a database). Compare the mantra of a "single point of definition" (SPOD).[22]
As an extension, resources such as XML files should also contain variables rather than literal values, otherwise, the application will not be portable to another environment without editing the XML files. For example, with J2EE applications running in an application server, such environmental parameters can be defined in the scope of the JVM, and the application should get the values from there.
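One common way to keep such values out of the code, sketched here in C, is to read them from the hosting environment at startup; the setting names and defaults below are purely illustrative assumptions, not a prescribed scheme.

#include <stdio.h>
#include <stdlib.h>

/* Read a setting from the environment, falling back to a default.
 * The setting names used here are invented for this example. */
static const char *get_setting(const char *name, const char *fallback)
{
    const char *value = getenv(name);
    return value != NULL ? value : fallback;
}

int main(void)
{
    /* Instead of hard-coding a host name or an absolute path,
     * take them from the deployment environment. */
    const char *db_host  = get_setting("APP_DB_HOST", "localhost");
    const char *data_dir = get_setting("APP_DATA_DIR", ".");

    printf("database host : %s\n", db_host);
    printf("data directory: %s\n", data_dir);
    return 0;
}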
Design code with scalability as a design goal, because very often in software projects new features keep being added as a project grows. The ability to add new features to a software code base is therefore invaluable when writing software.
Re-use is a very important design goal in software development. Re-use cuts development costs and also reduces development time if the components or modules that are reused have already been tested. Very often, software projects start with an existing baseline that contains the project in its prior version, and depending on the project, many existing software modules and components are reused, which reduces development and testing time and therefore increases the probability of delivering a software project on schedule.
A general overview of all of the above:
A best practice for building code involves daily builds and testing, or better still continuous integration, or even continuous delivery.
Testing is an integral part of software development that needs to be planned. It is also important that testing is done proactively, meaning that test cases are planned before coding starts and are developed while the application is being designed and coded.
Programmers tend to write the complete code and then begin debugging and checking for errors. Though this approach can save time in smaller projects, bigger and more complex ones tend to have too many variables and functions that need attention. Therefore, it is better to debug each module as soon as it is done rather than the entire program at the end. This saves time in the long run, so that one does not end up wasting a lot of time figuring out what is wrong. Unit tests for individual modules and/or functional tests for web services and web applications can help with this.
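As a minimal sketch of what a unit test for a single module-level function can look like in C (the function under test and its cases are invented for this example):

#include <assert.h>
#include <stdio.h>

/* The unit under test: a tiny, self-contained function. */
static int clamp(int value, int low, int high)
{
    if (value < low)  return low;
    if (value > high) return high;
    return value;
}

/* A unit test exercising the function's boundary cases. */
static void test_clamp(void)
{
    assert(clamp(5, 0, 10) == 5);    /* inside the range */
    assert(clamp(-3, 0, 10) == 0);   /* below the range */
    assert(clamp(42, 0, 10) == 10);  /* above the range */
}

int main(void)
{
    test_clamp();
    puts("all clamp tests passed");
    return 0;
}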
Deployment is the final stage of releasing an application for users. Some best practices are:[23][24]
|
https://en.wikipedia.org/wiki/Best_coding_practices
|
A software bug is a design defect (bug) in computer software. A computer program with many or serious bugs may be described as buggy.
The effects of a software bug range from minor (such as a misspelled word in the user interface) to severe (such as frequent crashing).
In 2002, a study commissioned by the US Department of Commerce's National Institute of Standards and Technology concluded that "software bugs, or errors, are so prevalent and so detrimental that they cost the US economy an estimated $59 billion annually, or about 0.6 percent of the gross domestic product".[1]
Since the 1950s, some computer systems have been designed to detect or auto-correct various software errors during operations.
Mistake metamorphism (from Greek meta = "change", morph = "form") refers to the evolution of a defect in the final stage of software deployment. The transformation of a mistake committed by an analyst in the early stages of the software development lifecycle into a defect in the final stage of the cycle has been called mistake metamorphism.[2]
Different stages of a mistake in the development cycle may be described as mistake,[3]: 31 anomaly,[3]: 10 fault,[3]: 31 failure,[3]: 31 error,[3]: 31 exception,[3]: 31 crash,[3]: 22 glitch, bug,[3]: 14 defect, incident,[3]: 39 or side effect.
Software bugs have been linked to disasters.
Sometimes the use of bug to describe the behavior of software is contentious due to perception. Some suggest that the term should be abandoned, contending that bug implies that the defect arose on its own, and push to use defect instead, since it more clearly indicates that defects are caused by a human.[8]
Some contend that bug may be used to cover up an intentional design decision. In 2011, after receiving scrutiny from US Senator Al Franken for recording and storing users' locations in unencrypted files,[9]Apple called the behavior a bug. However, Justin Brookman of the Center for Democracy and Technology directly challenged that portrayal, stating "I'm glad that they are fixing what they call bugs, but I take exception with their strong denial that they track users."[10]
Preventing bugs as early as possible in thesoftware development processis a target of investment and innovation.[11][12]
Newer programming languages tend to be designed to prevent common bugs based on vulnerabilities of existing languages. Lessons learned from older languages such as BASIC and C are used to inform the design of later languages such as C# and Rust.
A compiled language allows for detecting some typos (such as a misspelled identifier) before runtime, which is earlier in the software development process than for an interpreted language.
Languages may include features such as a static type system, restricted namespaces and modular programming. For example, in a typed, compiled language (like C), consider an assignment along the lines of the following sketch.
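A minimal illustrative snippet (the variable name and literal are assumptions, since the original example is not preserved here):

float value = "hello";  /* rejected by the compiler: a string (char *) cannot be assigned to a float */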
Such an assignment is syntactically correct, but fails type checking, since the right-hand side, a string, cannot be assigned to a float variable. Compilation fails, forcing this defect to be fixed before development progress can resume. With an interpreted language, a failure would not occur until later, at runtime.
Some languages exclude features that easily lead to bugs, at the expense of slower performance; the principle is that it is usually better to write simpler, slower correct code than complicated, buggy code. For example, the Java language does not support pointer arithmetic, which is generally fast but is considered dangerous and relatively likely to cause a major bug.
Some languages include features that add runtime overhead in order to prevent some bugs. For example, many languages include runtimebounds checkingand a way to handle out-of-bounds conditions instead of crashing.
Programming techniques such as programming style and defensive programming are intended to prevent typos.
For example, a bug may be caused by a relatively minor typographical error (typo) in the code. Consider code intended to execute the function foo only if condition is true.
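A sketch of what such code might look like in C, reconstructed here for illustration:

if (condition)
    foo();   /* foo() runs only when condition is true */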
But a version containing a minor typo may always execute foo.
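Again a reconstruction for illustration; note the stray semicolon:

if (condition);  /* stray semicolon: the if statement controls only this empty statement */
    foo();       /* so foo() is always executed, regardless of condition */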
A convention that tends to prevent this particular issue is to require braces for a block even if it has just one line.
Enforcement of conventions may be manual (i.e. via code review) or via automated tools.
Some[who?]contend that writing aprogram specification, which states the intended behavior of a program, can prevent bugs. Others[who?], however, contend that formal specifications are impractical for anything but the shortest programs, because of problems ofcombinatorial explosionandindeterminacy.
One goal ofsoftware testingis to find bugs.
Measurements during testing can provide an estimate of the number of likely bugs remaining. This becomes more reliable the longer a product is tested and developed.[citation needed]
Agile software development may involve frequent software releases with relatively small changes. Defects are revealed by user feedback.
With test-driven development (TDD), unit tests are written while writing the production code, and the production code is not considered complete until all tests complete successfully.
Tools for static code analysis help developers by inspecting the program text beyond the compiler's capabilities to spot potential problems. Although in general the problem of finding all programming errors given a specification is not solvable (see halting problem), these tools exploit the fact that human programmers tend to make certain kinds of simple mistakes often when writing software.
Tools to monitor the performance of the software as it is running, either specifically to find problems such as bottlenecks or to give assurance as to correct working, may be embedded in the code explicitly (perhaps as simple as a statement saying PRINT "I AM HERE"), or provided as tools. It is often a surprise to find where most of the time is taken by a piece of code, and this removal of assumptions might cause the code to be rewritten.
Open source development allows anyone to examine source code. A school of thought popularized by Eric S. Raymond as Linus's law says that popular open-source software has more chance of having few or no bugs than other software, because "given enough eyeballs, all bugs are shallow".[13]This assertion has been disputed, however: computer security specialist Elias Levy wrote that "it is easy to hide vulnerabilities in complex, little understood and undocumented source code," because "even if people are reviewing the code, that doesn't mean they're qualified to do so."[14]An example of an open-source software bug was the 2008 OpenSSL vulnerability in Debian.
Debugging can be a significant part of the software development lifecycle. Maurice Wilkes, an early computing pioneer, described his realization in the late 1940s that
“a good part of the remainder of my life was going to be spent in finding errors in my own programs”.[15]
A program known as a debugger can help a programmer find faulty code by examining the inner workings of a program, such as executing code line-by-line and viewing variable values.
As an alternative to using a debugger, code may be instrumented with logic to output debug information to trace program execution and view values. Output typically goes to a console, window, log file or hardware output (e.g. an LED).
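A tiny C sketch of this kind of instrumentation, using a trace macro invented for this example; real projects typically route such output to a log file and compile it out of release builds.

#include <stdio.h>

/* Illustrative trace macro: prints the source file, line number and a message to stderr. */
#define TRACE(msg) fprintf(stderr, "%s:%d: %s\n", __FILE__, __LINE__, (msg))

int main(void)
{
    TRACE("I AM HERE");        /* marks that execution reached this point */

    int total = 0;
    for (int i = 1; i <= 3; i++) {
        total += i;
        TRACE("inside loop");  /* traces each iteration */
    }

    printf("total = %d\n", total);
    return 0;
}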
Some contend that locating a bug is something of an art.
It is not uncommon for a bug in one section of a program to cause failures in a different, apparently unrelated part of the system,[citation needed]which makes it difficult to track down; for example, an error in a graphics rendering routine may cause a file I/O routine to fail.
Sometimes, the most difficult part of debugging is finding the cause of the bug. Once found, correcting the problem is sometimes easy if not trivial.
Sometimes, a bug is not an isolated flaw, but represents an error of thinking or planning on the part of the programmers. Often, such a logic error requires a section of the program to be overhauled or rewritten.
Some contend that as a part of code review, stepping through the code and imagining or transcribing the execution process may often find errors without ever reproducing the bug as such.
Typically, the first step in locating a bug is to reproduce it reliably. If unable to reproduce the issue, a programmer cannot find the cause of the bug and therefore cannot fix it.
Some bugs are revealed by inputs that may be difficult for the programmer to re-create. One cause of theTherac-25radiation machine deaths was a bug (specifically, arace condition) that occurred only when the machine operator very rapidly entered a treatment plan; it took days of practice to become able to do this, so the bug did not manifest in testing or when the manufacturer attempted to duplicate it. Other bugs may stop occurring whenever the setup is augmented to help find the bug, such as running the program with a debugger; these are calledheisenbugs(humorously named after theHeisenberg uncertainty principle).
Since the 1990s, particularly following the Ariane 5 Flight 501 disaster, interest in automated aids to debugging rose, such as static code analysis by abstract interpretation.[16]
Bugs often come about during coding, but faulty design documentation may also cause them.
In some cases, changes to the code may eliminate the problem even though the code then no longer matches the documentation.
In an embedded system, the software is often modified to work around a hardware bug, since it is cheaper than modifying the hardware.
Bugs are managed via activities like documenting, categorizing, assigning, reproducing, correcting and releasing the corrected code.
Tools are often used to track bugs and other issues with software. Typically, different tools are used by the software development team to track their workload than by customer service to track user feedback.[17]
A tracked item is often called a bug, defect, ticket, issue, feature, or, for agile software development, a story or epic. Items are often categorized by aspects such as severity, priority and version number.
In a process sometimes called triage, choices are made for each bug about whether and when to fix it, based on information such as the bug's severity and priority and external factors such as development schedules. Triage generally does not include investigation into cause. It may occur regularly, and generally consists of reviewing new bugs reported since the previous triage and perhaps all open bugs. Attendees may include the project manager, development manager, test manager, build manager, and technical experts.[18][19]
Severity is a measure of the impact the bug has.[20]This impact may be data loss, financial damage, loss of goodwill, or wasted effort. Severity levels are not standardized, but differ by context such as industry and tracking tool. For example, a crash in a video game has a different impact than a crash in a bank server. Severity levels might be crash or hang, no workaround (user cannot accomplish a task), has workaround (user can still accomplish the task), visual defect (a misspelling, for example), or documentation error. Another example set of severities: critical, high, low, blocker, trivial.[21]The severity of a bug may be a separate category from its priority for fixing, or the two may be quantified and managed separately.
A bug severe enough to delay the release of the product is called a show stopper.[22][23]
Priority describes the importance of resolving the bug in relation to other bugs. Priorities might be numerical, such as 1 through 5, or named, such as critical, high, low, and deferred. The values might be similar or identical to severity ratings, even though priority is a different aspect.
Priority may be a combination of the bug's severity with the level of effort to fix. A bug with low severity but easy to fix may get a higher priority than a bug with moderate severity that requires significantly more effort to fix.
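As a rough illustration of that idea, the following sketch combines a severity score with an estimated fix effort into a single priority number; the scale and weighting are invented for the example and are not a standard.

```python
# Hedged sketch of combining a bug's severity with the estimated effort to
# fix it into a fix priority; higher severity raises priority, higher effort
# lowers it.
SEVERITY_SCORE = {"critical": 4, "high": 3, "moderate": 2, "low": 1}

def fix_priority(severity: str, effort_days: float) -> float:
    return SEVERITY_SCORE[severity] / max(effort_days, 0.5)

bugs = [("crash on save", "moderate", 5.0), ("typo in dialog", "low", 0.5)]
for title, sev, effort in sorted(bugs, key=lambda b: -fix_priority(b[1], b[2])):
    print(f"{fix_priority(sev, effort):4.1f}  {title}")
```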
Bugs of sufficiently high priority may warrant a special release, which is sometimes called a patch.
A software release that emphasizes bug fixes may be called a maintenance release – to differentiate it from a release that emphasizes new features or other changes.
It is common practice to release software with known, low-priority bugs or other issues. Possible reasons include but are not limited to:
The amount and type of damage a software bug may cause affects decision-making, processes and policy regarding software quality. In applications such as human spaceflight, aviation, nuclear power, health care, public transport or automotive safety, since software flaws have the potential to cause human injury or even death, such software will have far more scrutiny and quality control than, for example, an online shopping website. In applications such as banking, where software flaws have the potential to cause serious financial damage to a bank or its customers, quality control is also more important than, say, in a photo editing application.
Other than the damage caused by bugs, some of their cost is due to the effort invested in fixing them. In 1978, Lientz et al. showed that the median project invested 17 percent of its development effort in bug fixing.[26]In 2020, research on GitHub repositories showed the median is 20%.[27]
In 1994, NASA's Goddard Space Flight Center managed to reduce its average number of errors from 4.5 per 1,000 lines of code (SLOC) down to 1 per 1,000 SLOC.[28]
Another study in 1990 reported that exceptionally good software development processes can achieve deployment failure rates as low as 0.1 per 1,000 SLOC.[29]This figure is iterated in literature such as Code Complete by Steve McConnell,[30]and the NASA study on Flight Software Complexity.[31]Some projects even attained zero defects: the firmware in the IBM Wheelwriter typewriter, which consists of 63,000 SLOC, and the Space Shuttle software, with 500,000 SLOC.[29]
To facilitate reproducible research on testing and debugging, researchers use curated benchmarks of bugs:
Some notable types of bugs:
A bug can be caused by insufficient or incorrect design based on the specification. For example, given that the specification is to alphabetize a list of words, a design bug might occur if the design does not account for symbols, resulting in incorrect alphabetization of words containing symbols.
Numerical operations can result in unexpected output, slow processing, or crashing.[34]Such a bug can stem from a lack of awareness of the qualities of the data storage, such as a loss of precision due to rounding, numerically unstable algorithms, arithmetic overflow and underflow, or from a lack of awareness of how calculations are handled by different programming languages, such as division by zero, which in some languages may throw an exception and in others may return a special value such as NaN or infinity.
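A few of these pitfalls can be demonstrated directly; the snippet below shows Python's behavior, which, as noted above, differs from that of other languages.

```python
# Common numerical pitfalls demonstrated in Python.
print(0.1 + 0.2 == 0.3)            # False -- loss of precision due to rounding

try:
    print(1 / 0)                   # division by zero raises an exception here
except ZeroDivisionError as e:
    print("raised:", e)

print(1e308 * 10)                  # float overflow produces inf, not an error
print(float("inf") - float("inf")) # undefined operations produce nan
```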
A control flow bug, a.k.a. logic error, is characterized by code that does not fail with an error, but does not have the expected behavior, such as infinite looping, infinite recursion, incorrect comparison in a conditional such as using the wrong comparison operator, and the off-by-one error.
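The off-by-one error is easy to demonstrate; in the illustrative sketch below the loop bound is wrong by one, so the code runs without any error message but silently ignores the last element.

```python
# A classic off-by-one control-flow bug: no error is raised, the result is
# simply wrong because the last element is never visited.
def sum_all_buggy(values):
    total = 0
    for i in range(len(values) - 1):   # bug: should be range(len(values))
        total += values[i]
    return total

print(sum_all_buggy([1, 2, 3]))        # prints 3, expected 6
```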
The Open Technology Institute, run by the group New America,[39]released a report "Bugs in the System" in August 2016 stating that U.S. policymakers should make reforms to help researchers identify and address software bugs. The report "highlights the need for reform in the field of software vulnerability discovery and disclosure."[40]One of the report's authors said that Congress has not done enough to address cyber software vulnerability, even though Congress has passed a number of bills to combat the larger issue of cyber security.[40]
Government researchers, companies, and cyber security experts are the people who typically discover software flaws. The report calls for reforming computer crime and copyright laws.[40]
The Computer Fraud and Abuse Act, the Digital Millennium Copyright Act and the Electronic Communications Privacy Act criminalize and create civil penalties for actions that security researchers routinely engage in while conducting legitimate security research, the report said.[40]
|
https://en.wikipedia.org/wiki/Computer_bug
|
In the context of software quality, defect criticality is a measure of the impact of a software defect. It is defined as the product of severity, likelihood, and class.
Defects are different from user stories, and therefore the priority (severity) should be calculated as follows.
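The article's exact calculation is not reproduced here; as a minimal sketch based only on the definition above (criticality as the product of severity, likelihood, and class), with illustrative numeric scales:

```python
# Minimal sketch of defect criticality as the product of severity, likelihood
# and class; the numeric scales are illustrative only.
def defect_criticality(severity: int, likelihood: int, defect_class: int) -> int:
    """Each factor on a small ordinal scale, e.g. 1 (minor) to 5 (severe)."""
    return severity * likelihood * defect_class

print(defect_criticality(severity=4, likelihood=3, defect_class=2))  # 24
```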
|
https://en.wikipedia.org/wiki/Defect_criticality
|
In systems engineering, dependability is a measure of a system's availability, reliability, maintainability, and in some cases, other characteristics such as durability, safety and security.[1]In real-time computing, dependability is the ability to provide services that can be trusted within a time-period.[2]The service guarantees must hold even when the system is subject to attacks or natural failures.
The International Electrotechnical Commission (IEC), via its Technical Committee TC 56, develops and maintains international standards that provide systematic methods and tools for dependability assessment and management of equipment, services, and systems throughout their life cycles. The IFIP Working Group 10.4[3]on "Dependable Computing and Fault Tolerance" plays a role in synthesizing the technical community's progress in the field and organizes two workshops each year to disseminate the results.
Dependability can be broken down into three elements: attributes, threats, and means.
Some sources hold that the word was coined in the 1910s in Dodge Brothers automobile print advertising. But the word predates that period, with the Oxford English Dictionary finding its first use in 1901.
As interest in fault tolerance and system reliability increased in the 1960s and 1970s, dependability came to be a measure of [x] as measures of reliability came to encompass additional measures like safety and integrity.[4]In the early 1980s, Jean-Claude Laprie thus chose dependability as the term to encompass studies of fault tolerance and system reliability without the extension of meaning inherent in reliability.[5]
The field of dependability has evolved from these beginnings to be an internationally active field of research fostered by a number of prominent international conferences, notably the International Conference on Dependable Systems and Networks, the International Symposium on Reliable Distributed Systems and the International Symposium on Software Reliability Engineering.
Traditionally, dependability for a system incorporates availability, reliability and maintainability, but since the 1980s, safety and security have been added to measures of dependability.[6]
Attributes are qualities of a system. These can be assessed to determine its overall dependability using qualitative or quantitative measures. Avizienis et al.[1]define the following dependability attributes: availability, reliability, safety, integrity, and maintainability.
As these definitions suggest, only availability and reliability are quantifiable by direct measurement, whilst the others are more subjective. For instance, safety cannot be measured directly via metrics; it is a subjective assessment that requires judgmental information to be applied to give a level of confidence, whilst reliability can be measured as failures over time.
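As general background rather than a definition from this article, availability is commonly quantified from mean time between failures (MTBF) and mean time to repair (MTTR) using the steady-state formula:

```latex
A = \frac{\mathrm{MTBF}}{\mathrm{MTBF} + \mathrm{MTTR}}
```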
Confidentiality, i.e. the absence of unauthorized disclosure of information, is also used when addressing security. Security is a composite of Confidentiality, Integrity, and Availability. Security is sometimes classed as an attribute[7]but the current view is to aggregate it together with dependability and treat Dependability as a composite term called Dependability and Security.[2]
Practically, applying security measures to the appliances of a system generally improves the dependability by limiting the number of externally originated errors.
Threats are things that can affect a system and cause a drop in Dependability. There are three main terms that must be clearly understood: faults, errors, and failures.
It is important to note that Failures are recorded at the system boundary. They are basically Errors that have propagated to the system boundary and have become observable.
Faults, Errors and Failures operate according to a mechanism. This mechanism is sometimes known as a Fault-Error-Failure chain.[8]As a general rule a fault, when activated, can lead to an error (which is an invalid state) and the invalid state generated by an error may lead to another error or a failure (which is an observable deviation from the specified behavior at the system boundary).[9]
Once a fault is activated an error is created. An error may act in the same way as a fault in that it can create further error conditions, therefore an error may propagate multiple times within a system boundary without causing an observable failure. If an error propagates outside the system boundary a failure is said to occur. A failure is basically the point at which it can be said that a service is failing to meet its specification. Since the output data from one service may be fed into another, a failure in one service may propagate into another service as a fault so a chain can be formed of the form: Fault leading to Error leading to Failure leading to Error, etc.
Since the mechanism of the Fault-Error-Failure chain is understood, it is possible to construct means to break these chains and thereby increase the dependability of a system.
Four means have been identified so far: fault prevention, fault removal, fault forecasting, and fault tolerance.
Fault Prevention deals with preventing faults being introduced into a system. This can be accomplished by use of development methodologies and good implementation techniques.
Fault Removal can be sub-divided into two sub-categories: Removal During Development and Removal During Use. Removal during development requires verification so that faults can be detected and removed before a system is put into production. Once systems have been put into production, a system is needed to record failures and remove them via a maintenance cycle.
Fault Forecasting predicts likely faults so that they can be removed or their effects can be circumvented.[10][11]
Fault Tolerancedeals with putting mechanisms in place that will allow a system to still deliver the required service in the presence of faults, although that service may be at a degraded level.
Dependability means are intended to reduce the number of failures made visible to the end users of a system.
Based on how faults appear or persist, they are classified as transient, intermittent, or permanent.
Some works on dependability[12]use structured information systems, e.g. with SOA, to introduce the attribute survivability, thus taking into account the degraded services that an information system sustains or resumes after a non-maskable failure.
The flexibility of current frameworks encourages system architects to enable reconfiguration mechanisms that refocus the available, safe resources to support the most critical services rather than over-provisioning to build a failure-proof system.
With the generalisation of networked information systems, accessibility was introduced to give greater importance to users' experience.
To take into account the level of performance, the measurement of performability is defined as "quantifying how well the object system performs in the presence of faults over a specified period of time".[13]
More regionally focused conferences:
|
https://en.wikipedia.org/wiki/Dependability
|
GQM, the initialism for goal, question, metric, is an established goal-oriented approach to software metrics to improve and measure software quality.[1]
GQM has been promoted by Victor Basili of the University of Maryland, College Park and the Software Engineering Laboratory at the NASA Goddard Space Flight Center[2]after supervising a Ph.D. thesis by Dr. David M. Weiss.[3]Dr. Weiss' work was inspired by the work of Albert Endres at IBM Germany.[4][5][6]
GQM defines a measurement model on three levels: the conceptual level (goal), the operational level (question), and the quantitative level (metric).[7]
Another interpretation of the procedure is:[9]
Sub-steps are needed for each phase. To complete the definition phase, an eleven-step procedure is proposed:[9]
The GQM+Strategies approach was developed by Victor Basili and a group of researchers from the Fraunhofer Society.[10]It is based on the Goal Question Metric paradigm and adds the capability to create measurement programs that ensure alignment between business goals and strategies, software-specific goals, and measurement goals.
Novel applications of GQM to business data have been described.[11]GQM is also used specifically in the software engineering areas of quality assurance and testing.[12]
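A minimal, hypothetical GQM model expressed as plain data may help make the three levels concrete; the goal, questions, and metrics below are invented examples, not taken from the sources cited here.

```python
# One goal is refined into questions, and each question is answered by metrics.
gqm_model = {
    "goal": "Improve the reliability of the release process",
    "questions": [
        {
            "question": "How many defects escape to production per release?",
            "metrics": ["post-release defect count", "defect density (defects/KLOC)"],
        },
        {
            "question": "How quickly are escaped defects fixed?",
            "metrics": ["mean time to repair", "percentage fixed within one week"],
        },
    ],
}

for q in gqm_model["questions"]:
    print(q["question"], "->", ", ".join(q["metrics"]))
```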
|
https://en.wikipedia.org/wiki/GQM
|
ISO/IEC 15504 Information technology – Process assessment, also termed Software Process Improvement and Capability dEtermination (SPICE), is a set of technical standards documents for the computer software development process and related business management functions. It is one of the joint International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC) standards, which was developed by the ISO and IEC joint subcommittee, ISO/IEC JTC 1/SC 7.[1]
ISO/IEC 15504 was initially derived from the process lifecycle standard ISO/IEC 12207 and from maturity models like Bootstrap, Trillium and the Capability Maturity Model (CMM).
ISO/IEC 15504 has been superseded by ISO/IEC 33001:2015 Information technology – Process assessment – Concepts and terminology as of March 2015.[2]
ISO/IEC 15504 is the reference model for the maturity models (consisting of capability levels which in turn consist of the process attributes and further consist of generic practices) against which the assessors can place the evidence that they collect during their assessment, so that the assessors can give an overall determination of the organization's capabilities for delivering products (software, systems, and IT services).[3]
A working group was formed in 1993 to draft the international standard and used the acronym SPICE.[4][5]SPICE initially stood for Software Process Improvement and Capability Evaluation, but in consideration of French concerns over the meaning of evaluation, SPICE has now been renamed Software Process Improvement and Capability Determination.[citation needed]SPICE is still used for the user group of the standard, and as the title of the annual conference. The first SPICE conference was held in Limerick, Ireland in 2000; SPICE 2003 was hosted by ESA in the Netherlands; SPICE 2004 was hosted in Portugal; SPICE 2005 in Austria; SPICE 2006 in Luxembourg; SPICE 2007 in South Korea; SPICE 2008 in Nuremberg, Germany; and SPICE 2009 in Helsinki, Finland.
The first versions of the standard focused exclusively on software development processes. This was expanded to cover all related processes in a software business, for example project management, configuration management, quality assurance, and so on. The list of processes covered grew to cover six areas: organizational, management, engineering, acquisition and supply, support, and operations.
In a major revision to the draft standard in 2004, the process reference model was removed, and the standard is now related to ISO/IEC 12207 (Software Lifecycle Processes). The issued standard now specifies the measurement framework and can use different process reference models. There are five general and industry models in use.
Part 5 specifies software process assessment and part 6 specifies system process assessment.
The latest work in the ISO standards working group includes creation of a maturity model, which is planned to become ISO/IEC 15504 part 7.
The Technical Report (TR) document for ISO/IEC TR 15504 was divided into 9 parts. The initial International Standard was recreated in 5 parts, as proposed by Japan when the TRs were published in 1997.
The International Standard (IS) version of ISO/IEC 15504 now comprises 6 parts. The 7th part is currently in an advanced Final Draft Standard form[6]and work has started on part 8.
Part 1 of ISO/IEC TR 15504 explains the concepts and gives an overview of the framework.
ISO/IEC 15504 contains a reference model. The reference model defines a process dimension and a capability dimension.
The process dimension in the reference model is not the subject of part 2 of ISO/IEC 15504, but part 2 refers to external process lifecycle standards including ISO/IEC 12207 and ISO/IEC 15288.[7]The standard defines means to verify conformity of reference models.[8]
The process dimension defines processes divided into the five process categories of:
With new parts being published, the process categories will expand, particularly for IT service process categories and enterprise process categories.
For each process, ISO/IEC 15504 defines a capability level on the following scale: 0 – incomplete, 1 – performed, 2 – managed, 3 – established, 4 – predictable, and 5 – optimizing.[3]
The capability of processes is measured using process attributes. The international standard defines nine process attributes:
Each process attribute consists of one or more generic practices, which are further elaborated into practice indicators to aid assessment performance.
Each process attribute is assessed on a four-point (N-P-L-F) rating scale: Not achieved, Partially achieved, Largely achieved, and Fully achieved.
The rating is based upon evidence collected against the practice indicators, which demonstrate fulfillment of the process attribute.[9]
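The following simplified sketch shows how N-P-L-F attribute ratings might be combined into a capability level. It assumes the commonly described rule that a level is achieved when its attributes are at least Largely achieved and all lower-level attributes are Fully achieved; this rule and the example ratings are stated here as assumptions rather than quoted from the standard.

```python
# Simplified, hedged sketch of deriving a capability level from N-P-L-F
# process attribute ratings, under the assumed rule described above.
RATING_OK = {"L", "F"}        # Largely or Fully achieved

def capability_level(ratings: dict[int, list[str]]) -> int:
    """ratings maps level -> list of N/P/L/F ratings for that level's attributes."""
    achieved = 0
    for level in sorted(ratings):
        lower_fully = all(r == "F" for lv in ratings if lv < level for r in ratings[lv])
        this_ok = all(r in RATING_OK for r in ratings[level])
        if lower_fully and this_ok:
            achieved = level
        else:
            break
    return achieved

print(capability_level({1: ["F"], 2: ["F", "L"], 3: ["P", "L"]}))  # -> 2
```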
ISO/IEC 15504 provides a guide for performing an assessment.[10]
This includes:
Performing assessments is the subject of parts 2 and 3 of ISO/IEC 15504.[11]Part 2 is the normative part and part 3 gives guidance to fulfill the requirements in part 2.
One of the requirements is to use a conformant assessment method for the assessment process. The actual method is not specified in the standard although the standard places requirements on the method, method developers and assessors using the method.[12]The standard provides general guidance to assessors and this must be supplemented by undergoing formal training and detailed guidance during initial assessments.
The assessment process can be generalized as the following steps:
An assessor can collect data on a process by various means, including interviews with persons performing the process, collecting documents and quality records, and collecting statistical process data. The assessor validates this data to ensure it is accurate and completely covers the assessment scope. The assessor assesses this data (using his expert judgment) against a process's base practices and the capability dimension's generic practices in the process rating step. Process rating requires some exercising of expert judgment on the part of the assessor and this is the reason that there are requirements on assessor qualifications and competency. The process rating is then presented as a preliminary finding to the sponsor (and preferably also to the persons assessed) to ensure that they agree that the assessment is accurate. In a few cases, there may be feedback requiring further assessment before a final process rating is made.[13]
The process assessment model (PAM) is the detailed model used for an actual assessment. This is an elaboration of the process reference model (PRM) provided by the process lifecycle standards.[14]
The process assessment model (PAM) in part 5 is based on the process reference model (PRM) for software: ISO/IEC 12207.[15]
The process assessment model in part 6 is based on the process reference model for systems: ISO/IEC 15288.[16]
The standard allows other models to be used instead, if they meet ISO/IEC 15504's criteria, which include a defined community of interest and meeting the requirements for content (i.e. process purpose, process outcomes and assessment indicators).
There exist several assessment tools. The simplest comprise paper-based tools. In general, they are laid out to incorporate the assessment model indicators, including the base practice indicators and generic practice indicators. Assessors write down the assessment results and notes supporting the assessment judgment.
There are a limited number of computer based tools that present the indicators and allow users to enter the assessment judgment and notes in formatted screens, as well as automate the collated assessment result (i.e. the process attribute ratings) and creating reports.
For a successful assessment, the assessor must have a suitable level of the relevant skills and experience.
These skills include:
The competency of assessors is the subject of part 3 of ISO/IEC 15504.
In summary, the ISO/IEC 15504 specific training and experience for assessors comprise:
ISO/IEC 15504 can be used in two contexts: process improvement and capability determination.
ISO/IEC 15504 can be used to perform process improvement within a technology organization.[17]Process improvement is always difficult, and initiatives often fail, so it is important to understand the initial baseline level (process capability level), and to assess the situation after an improvement project. ISO 15504 provides a standard for assessing the organization's capacity to deliver at each of these stages.
In particular, the reference framework of ISO/IEC 15504 provides a structure for defining objectives, which facilitates specific programs to achieve these objectives.
Process improvement is the subject ofpart 4of ISO/IEC 15504. It specifies requirements for improvement programmes and provides guidance on planning and executing improvements, including a description of an eight step improvement programme. Following this improvement programme is not mandatory and several alternative improvement programmes exist.[13]
An organization considering outsourcing software development needs to have a good understanding of the capability of potential suppliers to deliver.
ISO/IEC 15504 (Part 4) can also be used to inform supplier selection decisions. The ISO/IEC 15504 framework provides a structure for assessing proposed suppliers, as assessed either by the organization itself or by an independent assessor.[18]
The organization can determine a target capability for suppliers, based on the organization's needs, and then assess suppliers against a set of target process profiles that specify this target capability. Part 4 of ISO/IEC 15504 specifies the high level requirements and an initiative has been started to create an extended part of the standard covering target process profiles. Target process profiles are particularly important in contexts where the organization (for example, a government department) is required to accept the cheapest qualifying vendor. This also enables suppliers to identify gaps between their current capability and the level required by a potential customer, and to undertake improvement to achieve the contract requirements (i.e. become qualified). Work on extending the value of capability determination includes a method called Practical Process Profiles – which uses risk as the determining factor in setting target process profiles.[13]Combining risk and processes promotes improvement with active risk reduction, hence reducing the likelihood of problems occurring.
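As an illustrative sketch of such a gap analysis, the following compares a supplier's assessed capability profile against a target process profile; the process names and levels are invented examples.

```python
# Comparing an assessed supplier capability profile against a target process
# profile and reporting the gaps a supplier would need to close to qualify.
target = {"project management": 3, "configuration management": 2, "testing": 3}
assessed = {"project management": 3, "configuration management": 1, "testing": 2}

gaps = {p: (assessed.get(p, 0), lvl) for p, lvl in target.items()
        if assessed.get(p, 0) < lvl}
for process, (have, want) in gaps.items():
    print(f"gap: {process} is at level {have}, target is level {want}")
```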
ISO/IEC 15504 has been successful as:
On the other hand, ISO/IEC 15504 may not be as popular as CMMI for the following reasons:
Like the CMM, ISO/IEC 15504 was created in a development context, making it difficult to apply in a service management context. But work has started to develop an ISO/IEC 20000-based process reference model (ISO/IEC 20000-4) that can serve as a basis for a process assessment model. This is planned to become part 8 to the standard (ISO/IEC 15504-8). In addition, there are methods available that adapt its use to various contexts.
|
https://en.wikipedia.org/wiki/ISO/IEC_15504
|